Search (61 results, page 1 of 4)

  • × theme_ss:"Automatisches Klassifizieren"
  • × type_ss:"a"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
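     The decimal value at the end of each entry is the hit's Lucene relevance
     score, produced by ClassicSimilarity (TF-IDF) ranking. The following is a
     hedged reconstruction from Lucene's documented formula, where N is the
     collection size, df_t the document frequency of term t, and f_(t,d) its
     frequency in document d:

       \[
       \mathrm{score}(q,d) = \mathrm{coord}(q,d)\sum_{t\in q}
         \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}(q)}_{\text{queryWeight}}\cdot
         \underbrace{\sqrt{f_{t,d}}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}},
       \qquad
       \mathrm{idf}(t) = 1 + \ln\frac{N}{\mathrm{df}_t+1}
       \]

     As a check: a term appearing in 3,622 of the 44,218 indexed documents gets
     idf = 1 + ln(44218/3623) ≈ 3.50, the kind of per-term weight the engine
     combines into the scores shown here.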
  2. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.05
    
    Abstract
     Classifying objects (e.g., fauna, flora, texts) is a process based on human intelligence. Computer science - in particular the field of artificial intelligence (AI) - investigates, among other things, to what extent processes that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialized problems such as building a chess computer: "Rybka", for example, has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods remains, for the general case, an open question. Natural language processing - understanding, for example - plays an essential role in solving everyday problems. Realizing "common sense" in a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984; regarding "Cyc", AI's showcase project, there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g., work titles, abstracts, prefaces, tables of contents) is also required when bibliographic title records or online publications are classified intellectually, so that these text objects can be classified correctly. Since 2007, the Deutsche Nationalbibliothek has classified nearly all publications intellectually with the Dewey Decimal Classification (DDC).
     At least since the emergence of the World Wide Web, the volume of publications to be classified has been growing faster than it can be subject-indexed intellectually. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR for short) have existed since 1968, and methods for automatic text classification (ATC: automated text categorization) since 1992. As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased sharply since about 1998. Since 1996 this has included work on the automatic DDC or RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems, not systems in continuous production use. The VZG project Colibri/DDC has likewise been working on automatic DDC classification, among other things, since 2006. The related studies and developments serve to answer the research question: "Is it possible to automatically achieve a substantively coherent DDC classification of all GVK-PLUS title records?"
    Date
    22. 1.2010 14:41:24
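     As a minimal sketch of the automatic DDC classification the research
     question above asks about, framed as plain automated text categorization:
     train a bag-of-words model on labeled titles and let it assign DDC main
     classes. The titles, labels, and the use of scikit-learn are illustrative
     assumptions, not the Colibri/DDC method.

       # Hypothetical sketch: DDC main-class assignment as automated text
       # categorization (ATC); training titles and labels are invented.
       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.linear_model import LogisticRegression
       from sklearn.pipeline import make_pipeline

       titles = [
           "Einführung in die Datenverarbeitung",   # -> DDC 000
           "Grundriss der deutschen Grammatik",     # -> DDC 400
           "Lehrbuch der organischen Chemie",       # -> DDC 500
       ]
       ddc_main_classes = ["000", "400", "500"]

       model = make_pipeline(TfidfVectorizer(),
                             LogisticRegression(max_iter=1000))
       model.fit(titles, ddc_main_classes)

       # On this toy data a chemistry title should land in DDC 500.
       print(model.predict(["Handbuch der anorganischen Chemie"]))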
  3. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.03
    
    Source
     Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Eds.: T. Mandl and C. Womser-Hacker
  4. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.03
    
    Date
    1. 2.2016 18:25:22
    Source
     Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al.
  5. Han, K.; Rezapour, R.; Nakamura, K.; Devkota, D.; Miller, D.C.; Diesner, J.: An expert-in-the-loop method for domain-specific document categorization based on small training data (2023) 0.02
    
    Abstract
    Automated text categorization methods are of broad relevance for domain experts since they free researchers and practitioners from manual labeling, save their resources (e.g., time, labor), and enrich the data with information helpful to study substantive questions. Despite a variety of newly developed categorization methods that require substantial amounts of annotated data, little is known about how to build models when (a) labeling texts with categories requires substantial domain expertise and/or in-depth reading, (b) only a few annotated documents are available for model training, and (c) no relevant computational resources, such as pretrained models, are available. In a collaboration with environmental scientists who study the socio-ecological impact of funded biodiversity conservation projects, we develop a method that integrates deep domain expertise with computational models to automatically categorize project reports based on a small sample of 93 annotated documents. Our results suggest that domain expertise can improve automated categorization and that the magnitude of these improvements is influenced by the experts' understanding of categories and their confidence in their annotation, as well as data sparsity and additional category characteristics such as the portion of exclusive keywords that can identify a category.
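     The "exclusive keywords" observation above suggests a simple hybrid as a
     sketch: let expert-supplied keywords decide whenever they fire, and fall
     back to a model trained on the few labeled documents otherwise. The
     keyword lists, categories, and classifier below are illustrative
     assumptions, not the authors' actual method.

       # Hypothetical sketch of an expert-in-the-loop categorizer: expert
       # keyword rules take precedence; a small supervised model is the fallback.
       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.svm import LinearSVC
       from sklearn.pipeline import make_pipeline

       # Expert-supplied "exclusive" keywords per category (invented here).
       exclusive_keywords = {
           "species_protection": ["poaching", "habitat corridor"],
           "community_outreach": ["stakeholder workshop", "local livelihoods"],
       }

       def categorize(text, model):
           lowered = text.lower()
           for category, keywords in exclusive_keywords.items():
               if any(k in lowered for k in keywords):
                   return category          # an exclusive keyword fires: trust it
           return model.predict([text])[0]  # otherwise defer to the model

       # A tiny training set standing in for the 93 annotated reports.
       docs = ["Anti-poaching patrols were expanded in the reserve.",
               "A stakeholder workshop gathered input on local livelihoods."]
       labels = ["species_protection", "community_outreach"]
       model = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(docs, labels)
       print(categorize("Rangers report reduced poaching this season.", model))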
  6. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.02
    
    Source
     Classification and authority control: expanding resource discovery: proceedings of the International UDC Seminar 2015, 29-30 October 2015, Lisbon, Portugal. Eds.: Slavic, A. and M.I. Cordeiro
  7. Ru, C.; Tang, J.; Li, S.; Xie, S.; Wang, T.: Using semantic similarity to reduce wrong labels in distant supervision for relation extraction (2018) 0.02
    
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  8. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    
    Date
    4. 8.2015 19:22:04
  9. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.02
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
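     A hedged sketch of the passage-extraction idea described above (not the
     published PETC algorithm): score overlapping fixed-length windows of a
     text against per-category cue terms and pass only the best-scoring window
     to the downstream classifier. Window size, stride, and cue terms are
     invented for illustration.

       # Hypothetical sketch of passage extraction for text classification:
       # keep only the window densest in category cue terms, so the downstream
       # classifier (e.g., an SVM) sees less off-topic noise.
       CUE_TERMS = {
           "treatment": {"therapy", "dose", "drug", "surgery"},
           "diagnosis": {"test", "imaging", "biopsy", "screening"},
       }

       def best_passage(text: str, window: int = 30, stride: int = 15) -> str:
           words = text.split()
           if len(words) <= window:
               return text
           cues = set().union(*CUE_TERMS.values())
           spans = [words[i:i + window]
                    for i in range(0, len(words) - window + 1, stride)]

           def hits(span):
               return sum(w.lower().strip(".,;:") in cues for w in span)

           return " ".join(max(spans, key=hits))

       # The extracted passage, not the full noisy text, is what the
       # underlying classifier would be trained and run on.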
  10. Panyr, J.: STEINADLER: ein Verfahren zur automatischen Deskribierung und zur automatischen thematischen Klassifikation (1978) 0.01
    
  11. Kleinoeder, H.H.; Puzicha, J.: Automatische Katalogisierung am Beispiel einer Pilotanwendung (2002) 0.01
    
  12. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.01
    
  13. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: ¬An evaluation of classification models for question topic categorization (2012) 0.01
    
    Abstract
     We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions and these questions are organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification as well as short texts. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-word features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of the state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show what aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
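     A hedged sketch of the comparison described above, with scikit-learn
     stand-ins: multinomial naive Bayes, logistic regression as the
     maximum-entropy model, and a linear SVM, each over bag-of-words and word
     n-gram features. The toy questions and topics are assumptions; the study
     itself used 3.9 million Yahoo! Answers questions.

       # Hypothetical sketch: compare NB / MaxEnt / SVM under bag-of-words
       # vs. 1-2-gram features for question topic classification.
       from sklearn.feature_extraction.text import CountVectorizer
       from sklearn.linear_model import LogisticRegression
       from sklearn.naive_bayes import MultinomialNB
       from sklearn.pipeline import make_pipeline
       from sklearn.svm import LinearSVC

       questions = ["How do I file my taxes online?",
                    "What vaccinations does a puppy need?",
                    "Is my tax refund late this year?",
                    "Why does my cat sneeze so much?"]
       topics = ["finance", "pets", "finance", "pets"]

       features = {"bag-of-words": CountVectorizer(),
                   "1-2-grams": CountVectorizer(ngram_range=(1, 2))}
       classifiers = {"naive Bayes": MultinomialNB(),
                      "maximum entropy": LogisticRegression(max_iter=1000),
                      "linear SVM": LinearSVC()}

       for fname, vec in features.items():
           for cname, clf in classifiers.items():
               model = make_pipeline(vec, clf).fit(questions, topics)
               acc = model.score(questions, topics)  # training accuracy only
               print(f"{fname} + {cname}: {acc:.2f}")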
  14. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    
  15. Díaz, I.; Ranilla, J.; Montañes, E.; Fernández, J.; Combarro, E.F.: Improving performance of text categorization by combining filtering and support vector machines (2004) 0.01
    
  16. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.01
    
  17. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.01
    
  18. Panyr, J.: Automatische Indexierung und Klassifikation (1983) 0.01
    
  19. Krauth, J.: Evaluation von Verfahren der automatischen Klassifikation (1983) 0.01
    
  20. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.01
    

Languages

  • e (English) 46
  • d (German) 15