Search (63 results, page 1 of 4)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.112228476 = sum of:
      0.07104059 = product of:
        0.21312177 = sum of:
          0.21312177 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21312177 = score(doc=562,freq=2.0), product of:
              0.37920806 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04472842 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.02300763 = product of:
        0.04601526 = sum of:
          0.04601526 = weight(_text_:t in 562) [ClassicSimilarity], result of:
            0.04601526 = score(doc=562,freq=2.0), product of:
              0.17620352 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.04472842 = queryNorm
              0.26114836 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
      0.01818025 = product of:
        0.0363605 = sum of:
          0.0363605 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.0363605 = score(doc=562,freq=2.0), product of:
              0.1566313 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04472842 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
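    The breakdown above is Lucene's ClassicSimilarity explain output: each matching clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and the clause total is then scaled by the coord factor. A minimal Python sketch, using only the numbers shown in the explanation, reproduces the first summand (0.07104059):

        import math

        # Values copied from the "weight(_text_:3a in 562)" clause above.
        freq       = 2.0                               # termFreq of "3a" in doc 562
        idf        = 1.0 + math.log(44218 / (24 + 1))  # 8.478011 = idf(docFreq=24, maxDocs=44218)
        query_norm = 0.04472842                        # queryNorm shared by all query clauses
        field_norm = 0.046875                          # fieldNorm(doc=562), length norm times boost
        coord      = 1.0 / 3.0                         # coord(1/3): one of three optional clauses matched

        tf           = math.sqrt(freq)                 # 1.4142135
        query_weight = idf * query_norm                # 0.37920806
        field_weight = tf * idf * field_norm           # 0.56201804
        clause       = query_weight * field_weight     # 0.21312177
        print(clause * coord)                          # ~0.07104059, the first summand above

    The "t" and "22" clauses follow the same pattern with their own idf, fieldNorm and coord values, and the three contributions add up to the document score of 0.112228476.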
  2. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.03
    0.026670558 = product of:
      0.08001167 = sum of:
        0.08001167 = sum of:
          0.049711253 = weight(_text_:i in 1107) [ClassicSimilarity], result of:
            0.049711253 = score(doc=1107,freq=4.0), product of:
              0.16870351 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04472842 = queryNorm
              0.29466638 = fieldWeight in 1107, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
          0.030300418 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
            0.030300418 = score(doc=1107,freq=2.0), product of:
              0.1566313 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04472842 = queryNorm
              0.19345059 = fieldWeight in 1107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1107)
      0.33333334 = coord(1/3)
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
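    The abstract above states only that PETC extracts passages and hands them to an SVM; the extraction rules themselves are not described, so the following Python sketch is an assumption rather than the paper's method: overlapping word windows stand in for passages, and the document receives the label of the passage a scikit-learn linear SVM is most confident about. Training snippets, aspect labels and window sizes are hypothetical.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        # Hypothetical training data: one snippet per disease aspect (category).
        train_texts  = ["caused by influenza A virus transmission in winter",
                        "treated with oseltamivir and supportive care",
                        "prevention by annual vaccination and hand hygiene"]
        train_labels = ["etiology", "treatment", "prevention"]

        vectorizer = TfidfVectorizer()
        svm = LinearSVC().fit(vectorizer.fit_transform(train_texts), train_labels)

        def passages(text, size=50, step=25):
            # Overlapping word windows stand in for the passages PETC would extract.
            words = text.split()
            for start in range(0, max(len(words) - size, 0) + 1, step):
                yield " ".join(words[start:start + size])

        def classify_by_best_passage(text):
            # Score every passage and classify by the one the SVM is most confident
            # about, instead of classifying the noisy full text.
            cands  = list(passages(text))
            scores = svm.decision_function(vectorizer.transform(cands))
            best   = scores.max(axis=1).argmax()
            return svm.predict(vectorizer.transform([cands[best]]))[0]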
  3. Mu, T.; Goulermas, J.Y.; Korkontzelos, I.; Ananiadou, S.: Descriptive document clustering via discriminant learning in a co-embedded space of multilevel similarities (2016) 0.02
    0.024499072 = product of:
      0.036748607 = sum of:
        0.019173026 = product of:
          0.038346052 = sum of:
            0.038346052 = weight(_text_:t in 2496) [ClassicSimilarity], result of:
              0.038346052 = score(doc=2496,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.21762364 = fieldWeight in 2496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2496)
          0.5 = coord(1/2)
        0.01757558 = product of:
          0.03515116 = sum of:
            0.03515116 = weight(_text_:i in 2496) [ClassicSimilarity], result of:
              0.03515116 = score(doc=2496,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.20836058 = fieldWeight in 2496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2496)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  4. Ardö, A.; Koch, T.: Automatic classification applied to full-text Internet documents in a robot-generated subject index (1999) 0.02
    0.01533842 = product of:
      0.04601526 = sum of:
        0.04601526 = product of:
          0.09203052 = sum of:
            0.09203052 = weight(_text_:t in 382) [ClassicSimilarity], result of:
              0.09203052 = score(doc=382,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.5222967 = fieldWeight in 382, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.09375 = fieldNorm(doc=382)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  5. Shafer, K.E.: Automatic Subject Assignment via the Scorpion System (2001) 0.01
    0.014060467 = product of:
      0.0421814 = sum of:
        0.0421814 = product of:
          0.0843628 = sum of:
            0.0843628 = weight(_text_:i in 1043) [ClassicSimilarity], result of:
              0.0843628 = score(doc=1043,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.50006545 = fieldWeight in 1043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1043)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Part of a special issue: OCLC and the Internet: An Historical Overview of Research Activities, 1990-1999 - Part I
  6. Braun, T.: Dokumentklassifikation durch Clustering (o.J.) 0.01
    0.012782018 = product of:
      0.038346052 = sum of:
        0.038346052 = product of:
          0.076692104 = sum of:
            0.076692104 = weight(_text_:t in 1671) [ClassicSimilarity], result of:
              0.076692104 = score(doc=1671,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.43524727 = fieldWeight in 1671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1671)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.012120167 = product of:
      0.0363605 = sum of:
        0.0363605 = product of:
          0.072721 = sum of:
            0.072721 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.072721 = score(doc=1046,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 14:17:22
  8. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.01
    0.010845901 = product of:
      0.032537702 = sum of:
        0.032537702 = product of:
          0.065075405 = sum of:
            0.065075405 = weight(_text_:t in 3331) [ClassicSimilarity], result of:
              0.065075405 = score(doc=3331,freq=4.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.36931956 = fieldWeight in 3331, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3331)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Automatic text classification (TC) is essential for the management of information. To properly classify a document d, it is essential to identify the semantics of each term t in d, while the semantics heavily depend on context (neighboring terms) of t in d. Therefore, we present a technique CTFA (Context-based Term Frequency Assessment) that improves text classifiers by considering term contexts in test documents. The results of the term context recognition are used to assess term frequencies of terms, and hence CTFA may easily work with various kinds of text classifiers that base their TC decisions on term frequencies, without needing to modify the classifiers. Moreover, CTFA is efficient, and neither huge memory nor domain-specific knowledge is required. Empirical results show that CTFA successfully enhances performance of several kinds of text classifiers on different experimental data.
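    The abstract gives the general idea (re-assess each term's frequency from its surrounding context before handing the counts to any tf-based classifier) but not the concrete assessment function, so the sketch below is an assumption: an occurrence counts a little more when its neighbouring tokens look category-indicative. The vocabulary and window size are made up for illustration.

        from collections import defaultdict

        def context_weighted_tf(tokens, category_terms, window=3):
            # Each occurrence contributes a base count of 1 plus a bonus for the share of
            # neighbouring tokens (within +/- window) found in the category vocabulary.
            tf = defaultdict(float)
            for i, term in enumerate(tokens):
                left, right = max(0, i - window), min(len(tokens), i + window + 1)
                neighbours  = tokens[left:i] + tokens[i + 1:right]
                support     = sum(1 for n in neighbours if n in category_terms)
                tf[term]   += 1.0 + support / max(len(neighbours), 1)
            return dict(tf)

        # The adjusted counts can then replace raw term frequencies in any classifier
        # that bases its decision on tf values, as the abstract suggests.
        doc = "patient reports fever and persistent cough since monday".split()
        print(context_weighted_tf(doc, category_terms={"fever", "cough", "rash"}))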
  9. Koch, T.: Nutzung von Klassifikationssystemen zur verbesserten Beschreibung, Organisation und Suche von Internetressourcen (1998) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 1030) [ClassicSimilarity], result of:
              0.06135368 = score(doc=1030,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 1030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1030)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  10. Koch, T.; Ardö, A.: Automatic classification of full-text HTML-documents from one specific subject area : DESIRE II D3.6a, Working Paper 2 (2000) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 1667) [ClassicSimilarity], result of:
              0.06135368 = score(doc=1667,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 1667, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1667)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  11. Brückner, T.; Dambeck, H.: Sortierautomaten : Grundlagen der Textklassifizierung (2003) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 2398) [ClassicSimilarity], result of:
              0.06135368 = score(doc=2398,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 2398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2398)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  12. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 4088) [ClassicSimilarity], result of:
              0.06135368 = score(doc=4088,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 4088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4088)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Pfister, J.: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe (2006) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 5976) [ClassicSimilarity], result of:
              0.06135368 = score(doc=5976,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 5976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5976)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  14. Jersek, T.: Automatische DDC-Klassifizierung mit Lingo : Vorgehensweise und Ergebnisse (2012) 0.01
    0.010225614 = product of:
      0.03067684 = sum of:
        0.03067684 = product of:
          0.06135368 = sum of:
            0.06135368 = weight(_text_:t in 122) [ClassicSimilarity], result of:
              0.06135368 = score(doc=122,freq=2.0), product of:
                0.17620352 = queryWeight, product of:
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.04472842 = queryNorm
                0.34819782 = fieldWeight in 122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9394085 = idf(docFreq=2338, maxDocs=44218)
                  0.0625 = fieldNorm(doc=122)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  15. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.010100139 = product of:
      0.030300418 = sum of:
        0.030300418 = product of:
          0.060600836 = sum of:
            0.060600836 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.060600836 = score(doc=611,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 12:54:24
  16. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.010100139 = product of:
      0.030300418 = sum of:
        0.030300418 = product of:
          0.060600836 = sum of:
            0.060600836 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.060600836 = score(doc=2748,freq=2.0), product of:
                0.1566313 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04472842 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 2.2016 18:25:22
  17. Fangmeyer, H.; Gloden, R.: Bewertung und Vergleich von Klassifikationsergebnissen bei automatischen Verfahren (1978) 0.01
    0.009373644 = product of:
      0.028120931 = sum of:
        0.028120931 = product of:
          0.056241862 = sum of:
            0.056241862 = weight(_text_:i in 81) [ClassicSimilarity], result of:
              0.056241862 = score(doc=81,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.33337694 = fieldWeight in 81, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=81)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  18. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.01
    0.009373644 = product of:
      0.028120931 = sum of:
        0.028120931 = product of:
          0.056241862 = sum of:
            0.056241862 = weight(_text_:i in 82) [ClassicSimilarity], result of:
              0.056241862 = score(doc=82,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.33337694 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  19. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    0.009373644 = product of:
      0.028120931 = sum of:
        0.028120931 = product of:
          0.056241862 = sum of:
            0.056241862 = weight(_text_:i in 83) [ClassicSimilarity], result of:
              0.056241862 = score(doc=83,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.33337694 = fieldWeight in 83, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=83)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Kooperation in der Klassifikation I. Proc. der Sekt.1-3 der 2. Fachtagung der Gesellschaft für Klassifikation, Frankfurt-Hoechst, 6.-7.4.1978. Bearb.: W. Dahlberg
  20. Cheng, P.T.K.; Wu, A.K.W.: ACS: an automatic classification system (1995) 0.01
    0.009373644 = product of:
      0.028120931 = sum of:
        0.028120931 = product of:
          0.056241862 = sum of:
            0.056241862 = weight(_text_:i in 2188) [ClassicSimilarity], result of:
              0.056241862 = score(doc=2188,freq=2.0), product of:
                0.16870351 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.04472842 = queryNorm
                0.33337694 = fieldWeight in 2188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2188)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper, we introduce ACS, an automatic classification system for school libraries. First, various approaches towards automatic classification, namely (i) rule-based, (ii) browse and search, and (iii) partial match, are critically reviewed. The central issues of scheme selection, text analysis and similarity measures are discussed. A novel approach towards detecting book-class similarity with Modified Overlap Coefficient (MOC) is also proposed. Finally, the design and implementation of ACS is presented. The test result of over 80% correctness in automatic classification and a cost reduction of 75% compared to manual classification suggest that ACS is highly adoptable
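    The abstract names a Modified Overlap Coefficient (MOC) for book-class similarity but does not define the modification, so the Python sketch below shows only the standard overlap coefficient it presumably builds on; the book and class term sets are made up for illustration.

        def overlap_coefficient(book_terms, class_terms):
            # |A intersect B| / min(|A|, |B|): reaches 1.0 when the smaller set is fully contained.
            if not book_terms or not class_terms:
                return 0.0
            return len(book_terms & class_terms) / min(len(book_terms), len(class_terms))

        # Hypothetical usage: assign the class whose term list overlaps most with the book's terms.
        book    = {"photosynthesis", "chlorophyll", "plants"}
        classes = {"580 Botany":    {"plants", "photosynthesis", "botany"},
                   "540 Chemistry": {"reaction", "molecule", "chlorophyll"}}
        print(max(classes, key=lambda c: overlap_coefficient(book, classes[c])))   # 580 Botany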

Languages

  • e 44
  • d 19

Types

  • a 52
  • el 11
  • x 2
  • r 1