Search (74 results, page 1 of 4)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8.1.2013 10:22:32
  2. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.03
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  3. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.03
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  4. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    Date
    22.8.2009 12:54:24
  5. Mu, T.; Goulermas, J.Y.; Korkontzelos, I.; Ananiadou, S.: Descriptive document clustering via discriminant learning in a co-embedded space of multilevel similarities (2016) 0.02
  6. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.02
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
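    A rough sketch of the passage-based idea from the abstract above (a minimal Python illustration, assuming scikit-learn; the fixed-size word windows and the pipeline are assumptions chosen for illustration, not PETC's actual extraction technique):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline

      def passages(text, size=30, stride=15):
          # Split a document into overlapping word windows -- a stand-in for
          # PETC's passage extraction, whose criteria the abstract does not detail.
          words = text.split()
          if len(words) <= size:
              return [text]
          return [" ".join(words[i:i + size])
                  for i in range(0, len(words) - size + 1, stride)]

      # Toy training data: passages labeled with disease aspects (illustrative only).
      train_texts = ["the disease is caused by a virus",
                     "treatment includes antiviral drugs"]
      train_labels = ["etiology", "treatment"]
      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(train_texts, train_labels)

      # Classify each extracted passage of a new text instead of the whole document.
      doc = ("early symptoms are often mild but the infection is caused "
             "by a virus spread by droplets")
      print([clf.predict([p])[0] for p in passages(doc, size=8, stride=4)])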
  7. Ru, C.; Tang, J.; Li, S.; Xie, S.; Wang, T.: Using semantic similarity to reduce wrong labels in distant supervision for relation extraction (2018) 0.02
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  8. Schek, M.: Automatische Klassifizierung in Erschließung und Recherche eines Pressearchivs (2006) 0.01
    Object
    I-Views
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  9. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.01
    Abstract
    Classifying objects (e.g. fauna, flora, texts) is an activity that rests on human intelligence. Computer science, in particular the field of artificial intelligence (AI), investigates, among other things, to what extent activities that require human intelligence can be automated. It has turned out that solving everyday problems poses a greater challenge than solving specialist problems such as building a chess computer; "Rybka" has been the reigning computer chess world champion since June 2007. To what extent everyday problems can be solved with AI methods is, in the general case, still an open question. In solving everyday problems, the processing of natural language, e.g. understanding it, plays an essential role. Realizing "common sense" in a machine (in the Cyc knowledge base, in the form of facts and rules) has been Lenat's goal since 1984. Regarding Cyc, AI's showcase project, there are Cyc optimists and Cyc pessimists. Understanding natural language (e.g. work titles, abstracts, prefaces, tables of contents) is also necessary when classifying bibliographic title records or online publications intellectually, in order to classify these text objects correctly. Since 2007, the German National Library has classified nearly all publications intellectually with the Dewey Decimal Classification (DDC).
    At least since the existence of the World Wide Web, the number of publications to be classified has been growing faster than it can be subject-indexed intellectually. Methods are therefore being sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification have existed since 1968 (in information retrieval, IR) and methods for automatic text classification since 1992 (ATC: Automated Text Categorization). As ever more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. Since 1996 this has included work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous productive operation. The VZG project Colibri/DDC has also been concerned with automatic DDC classification, among other things, since 2006. The related investigations and developments serve to answer the research question: "Is it possible to achieve a substantively sound DDC classification of all GVK-PLUS title records automatically?"
    Date
    22.1.2010 14:41:24
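    As a minimal illustration of the task behind the research question above (automatically assigning DDC notations to title records), a naive supervised baseline in Python might look like the following sketch; the classifier choice and the toy records are assumptions, not the method of the Colibri/DDC project:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Toy title records with DDC notations (illustrative data only).
      titles = ["Einführung in die Informatik", "Grundlagen der Chemie",
                "Datenbanken und Informationssysteme", "Organische Chemie"]
      ddc = ["004", "540", "004", "547"]

      model = make_pipeline(CountVectorizer(), MultinomialNB())
      model.fit(titles, ddc)
      print(model.predict(["Chemie der Kunststoffe"]))  # -> a guessed DDC notation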
  10. Ardö, A.; Koch, T.: Automatic classification applied to full-text Internet documents in a robot-generated subject index (1999) 0.01
  11. Schek, M.: Automatische Klassifizierung und Visualisierung im Archiv der Süddeutschen Zeitung (2005) 0.01
    Object
    i-views
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  12. Shafer, K.E.: Automatic Subject Assignment via the Scorpion System (2001) 0.01
    Footnote
    Part of a special issue: OCLC and the Internet: An Historical Overview of Research Activities, 1990-1999 - Part I
  13. Braun, T.: Dokumentklassifikation durch Clustering (o.J.) 0.01
  14. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.01
    Abstract
    Automatic text classification (TC) is essential for the management of information. To properly classify a document d, it is essential to identify the semantics of each term t in d, while the semantics heavily depend on context (neighboring terms) of t in d. Therefore, we present a technique CTFA (Context-based Term Frequency Assessment) that improves text classifiers by considering term contexts in test documents. The results of the term context recognition are used to assess term frequencies of terms, and hence CTFA may easily work with various kinds of text classifiers that base their TC decisions on term frequencies, without needing to modify the classifiers. Moreover, CTFA is efficient, and neither huge memory nor domain-specific knowledge is required. Empirical results show that CTFA successfully enhances performance of several kinds of text classifiers on different experimental data.
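    The context-based reweighting described in this abstract can be illustrated with a small Python sketch; the window size and the bonus scheme below are assumptions chosen for illustration, not the CTFA weighting itself:

      from collections import Counter

      def context_weighted_tf(tokens, category_terms, window=2, bonus=0.5):
          # Toy context-based term frequency assessment: an occurrence of a term
          # counts for more when its neighbors also look category-relevant
          # (an assumed scheme, not the weighting from the paper).
          tf = Counter()
          for i, t in enumerate(tokens):
              neighbors = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
              support = sum(1 for n in neighbors if n in category_terms)
              tf[t] += 1.0 + bonus * support
          return tf

      # The adjusted frequencies can feed any frequency-based classifier unchanged.
      tokens = "automatic text classification assigns category labels to text".split()
      print(context_weighted_tf(tokens, {"classification", "category"}))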
  15. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    Footnote
    Also a doctoral dissertation, University of Saarbrücken 1985
  16. Reiner, U.: Automatic analysis of DDC notations (2007) 0.01
  17. Koch, T.: Nutzung von Klassifikationssystemen zur verbesserten Beschreibung, Organisation und Suche von Internetressourcen (1998) 0.01
  18. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.01
  19. Brückner, T.; Dambeck, H.: Sortierautomaten : Grundlagen der Textklassifizierung (2003) 0.01
  20. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.01

Languages

  • e 50
  • d 24

Types

  • a 56
  • el 14
  • m 3
  • r 3
  • x 3
  • d 1
  • s 1