Search (203 results, page 1 of 11)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.13
    Score 0.13493797 = Lucene ClassicSimilarity relevance value: coord(4/6) × the sum of per-term weights for the matched query terms "3a", "a", "2f", and "22", where each term weight is tf(t,d) · idf(t)² · queryNorm · fieldNorm. [Full per-term score breakdowns omitted here and for the entries below.]

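For readers curious how these relevance figures arise, here is a minimal Python sketch, not part of the original record, that reproduces one per-term weight using the constants shown in the original score dump (the odd tokens "3a" and "2f" evidently come from the percent-encoded URL, %3A and %2F, in this record's Content field):

```python
from math import sqrt

def classic_term_weight(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene's ClassicSimilarity:
    queryWeight (idf * queryNorm) times fieldWeight (tf * idf * fieldNorm),
    with tf = sqrt(freq)."""
    query_weight = idf * query_norm
    field_weight = sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Constants taken verbatim from the original breakdown for this entry.
w = classic_term_weight(freq=2.0, idf=8.478011,
                        query_norm=0.029764405, field_norm=0.046875)
print(w)  # ~0.14182128, the weight of the term "3a" in the dump
```
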
    Abstract
    Document representations for text classification are typically based on the classical bag-of-words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results. (A generic boosting sketch follows this entry.)
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Type
    a
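The paper's own pipeline (concept extraction from background knowledge, then boosting) is not reproduced here; the following is a hedged, generic scikit-learn sketch of the boosting half only, with decision stumps as the weak learners and plain term features standing in for the term-plus-concept representation. All data below is invented:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy corpus; the paper's concept features extracted from background
# knowledge would be appended to these term features and are omitted here.
docs = ["boosting combines many weak learners",
        "ontology concepts enrich term features",
        "support vector machines for text",
        "weak stumps boosted over tfidf terms"]
labels = [1, 0, 0, 1]

model = make_pipeline(
    TfidfVectorizer(),
    # Depth-1 trees (stumps) are the weak learners being boosted.
    # 'estimator' in scikit-learn >= 1.2 (formerly 'base_estimator').
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                       n_estimators=50),
)
model.fit(docs, labels)
print(model.predict(["boosted weak learners on terms"]))
```
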
  2. Greiner, G.: Intellektuelles und automatisches Klassifizieren (1981) 0.01
    
    Type
    a
  3. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    
    Date
    5. 5.2003 14:17:22
    Type
    a
  4. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    
    Abstract
    In text categorization tasks, classification over a class hierarchy often yields better results than classification without one. Because a large number of documents is typically divided into several subgroups within a hierarchy, a hierarchical classification method can be applied appropriately. However, there has been no systematic method for building a hierarchical classification system that performs well with large collections of practical data. In this article, we introduce a new evaluation scheme for internal-node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for classifiers applied to hierarchy trees with many levels. (A per-node routing sketch follows this entry.)
    Date
    22. 7.2006 16:24:52
    Type
    a
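The article's evaluation scheme itself is not shown in the record; as a hedged illustration of the kind of system it targets, here is a toy sketch of hierarchical classification with one classifier per internal node, each routing a document to one of its children. All names and data below are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hierarchy and training documents per leaf category.
tree = {"root": ["science", "sports"], "science": ["physics", "biology"]}
docs_by_label = {
    "physics": ["quantum fields and particles", "gravity bends light"],
    "biology": ["cells divide and mutate", "proteins fold in the cell"],
    "sports":  ["the team won the cup", "a fast serve wins points"],
}

def docs_under(label):
    # A node's training set is everything below it in the hierarchy.
    out = list(docs_by_label.get(label, []))
    for child in tree.get(label, []):
        out += docs_under(child)
    return out

# One internal-node classifier per node, choosing among its children.
node_clf = {}
for node, children in tree.items():
    X = [d for c in children for d in docs_under(c)]
    y = [c for c in children for _ in docs_under(c)]
    node_clf[node] = make_pipeline(TfidfVectorizer(),
                                   LogisticRegression(max_iter=1000)).fit(X, y)

def classify(doc, node="root"):
    while node in node_clf:  # descend until a leaf category is reached
        node = node_clf[node].predict([doc])[0]
    return node

print(classify("the proteins in a cell"))  # 'biology' on this toy data
```
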
  5. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    
    Date
    1. 2.2016 18:25:22
    Type
    a
  6. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.01
    
    Type
    a
  7. Leroy, G.; Miller, T.; Rosemblat, G.; Browne, A.: A balanced approach to health information evaluation : a vocabulary-based naïve Bayes classifier and readability formulas (2008) 0.01
    
    Abstract
    Since millions seek health information online, it is vital for this information to be comprehensible. Most studies use readability formulas, which ignore vocabulary, and conclude that online health information is too difficult. We developed a vocabulary-based, naïve Bayes classifier to distinguish between three difficulty levels in text. It proved 98% accurate in a 250-document evaluation. We compared our classifier with readability formulas for 90 new documents with different origins and asked representative human evaluators, an expert and a consumer, to judge each document. Average readability grade levels for educational and commercial pages were 10th grade or higher, too difficult according to the current literature. In contrast, the classifier showed that 70-90% of these pages were written at an intermediate, appropriate level, indicating that vocabulary usage is frequently appropriate in text considered too difficult by readability formula evaluations. The expert considered the pages more difficult for a consumer than the consumer did. (A naïve Bayes sketch follows this entry.)
    Type
    a
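As a hedged illustration of the classifier type the abstract names, here is a vocabulary-based naïve Bayes sketch over invented toy texts, using two difficulty levels for brevity (the study itself used three levels and a large health vocabulary):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy texts; the study trained on labelled health documents.
texts = ["the heart pumps blood through the body",
         "myocardial perfusion was assessed post-infarction",
         "eat well, sleep well, and exercise",
         "idiopathic etiology with bilateral presentation"]
levels = ["easy", "hard", "easy", "hard"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, levels)
print(clf.predict(["perfusion after myocardial infarction"]))  # likely ['hard']
```
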
  8. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories and to apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in two key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to use a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project, we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
    Type
    a
  9. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia; see also: http://www7.scu.edu.au/programme/posters/1846/com1846.htm.
    Type
    a
  10. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    
    Abstract
    The proliferation of digital resources and their integration into the traditional library setting have created a pressing need for automated tools that organize textual information based on library classification schemes. Automated text classification is a research field concerned with developing tools, methods, and models to automate this process. This article describes the currently popular approach to text classification and the major text-classification projects and applications that are based on library classification schemes. Related issues and challenges are discussed, and a number of considerations for addressing the challenges are examined.
    Date
    22. 9.2008 18:31:54
    Type
    a
  11. Möller, G.: Automatic classification of the World Wide Web using Universal Decimal Classification (1999) 0.01
    
    Type
    a
  12. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate the contextual background of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies. (A toy illustration of the COD gating idea follows this entry.)
    Date
    22. 3.2009 19:11:54
    Type
    a
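CRHTC's actual COD recognition is not spelled out in the record; the toy sketch below only illustrates the gating idea, accepting a category for a document only when the document also matches the content of the category's ancestors. All strings and the threshold are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented strings standing in for a category and its ancestors' contents.
doc = "deep learning for protein folding in molecular biology"
category = "protein structure and folding"
context = "molecular biology and the life sciences"  # ancestor contents

vec = TfidfVectorizer().fit([doc, category, context])

def sim(a, b):
    return cosine_similarity(vec.transform([a]), vec.transform([b]))[0, 0]

# Gate: the document must match the category AND its context of discussion.
accept = sim(doc, category) > 0.1 and sim(doc, context) > 0.1
print(accept)
```
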
  13. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.00
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
    Type
    a
  14. Dubin, D.: Dimensions and discriminability (1998) 0.00
    
    Abstract
    Visualization interfaces can improve subject access by highlighting the inclusion of document representation components in similarity and discrimination relationships. Within a set of retrieved documents, what kinds of groupings can index terms and subject headings make explicit? The role of controlled vocabulary in classifying search output is examined
    Date
    22. 9.1997 19:16:05
    Type
    a
  15. Hu, G.; Zhou, S.; Guan, J.; Hu, X.: Towards effective document clustering : a constrained K-means based approach (2008) 0.00
    
    Abstract
    Document clustering is an important tool for document collection organization and browsing. In real applications, some limited knowledge about the cluster membership of a small number of documents is often available, such as pairs of documents known to belong to the same cluster. This kind of prior knowledge can serve as constraints for the clustering process. We integrate the constraints into the trace formulation of the sum-of-squares Euclidean distance function of K-means. The combined criterion function is then transformed into trace maximization, which is further optimized by eigen-decomposition. Our experimental evaluation shows that the proposed semi-supervised clustering method can achieve better performance than three existing methods. (A simplified must-link sketch follows this entry.)
    Type
    a
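The paper optimizes a constrained trace criterion by eigen-decomposition, which is not reproduced here; the sketch below is a much simpler stand-in that honours must-link pairs by clustering the size-weighted means of must-linked groups with plain K-means:

```python
import numpy as np

def must_link_kmeans(X, k, must_link, iters=100, seed=0):
    # Union-find: points joined by must-link pairs form groups.
    parent = list(range(len(X)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in must_link:
        parent[find(a)] = find(b)
    comp = np.array([find(i) for i in range(len(X))])
    ids = np.unique(comp)

    # Each group is represented by its mean, weighted by its size,
    # so must-linked points always move (and end up) together.
    means = np.array([X[comp == c].mean(axis=0) for c in ids])
    sizes = np.array([(comp == c).sum() for c in ids], dtype=float)

    rng = np.random.default_rng(seed)
    centers = means[rng.choice(len(ids), size=k, replace=False)]
    for _ in range(iters):
        dist = ((means[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = dist.argmin(axis=1)
        new = np.array([np.average(means[assign == j], axis=0,
                                   weights=sizes[assign == j])
                        if (assign == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    label_of = dict(zip(ids.tolist(), assign.tolist()))
    return np.array([label_of[c] for c in comp])

# Toy usage: four 2-D points, where points 0 and 3 must share a cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(must_link_kmeans(X, k=2, must_link=[(0, 3)]))
```
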
  16. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.00
    
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents; that is, to hide information, hidden text is injected into a document's passages. Rather than matching query terms against passages to determine their relevance, the passages are classified using text-mining techniques. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages; that is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly outperforms (99% confidence) the other document-splitting approaches by 12% to 18% in the passage-detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and, finally, training-data category distribution on passage-detection accuracy. (A fixed-window illustration of the pipeline follows this entry.)
    Date
    22. 3.2009 19:14:43
    Type
    a
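KDP's dynamic, keyword-based splitting is not reproduced here; as a hedged illustration of the general passage-detection pipeline only, the sketch below uses fixed-length overlapping windows classified by an ordinary text classifier. All training strings are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def passages(text, size=8, overlap=4):
    # Fixed-length, overlapping word windows: one simple splitting scheme
    # (the paper's KDP instead splits dynamically around keywords).
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

# Invented toy training passages with one "watched" category.
train = ["quarterly earnings rose sharply this year",
         "whisk the eggs and fold in the flour",
         "the attached launch codes are classified",
         "simmer gently until the sauce thickens"]
labels = ["finance", "cooking", "sensitive", "cooking"]
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(train, labels)

def detect(document, watched="sensitive"):
    # Flag the document if any passage falls into the watched category.
    return watched in clf.predict(passages(document))

doc = ("whisk the eggs and fold in the flour gently "
       "the attached launch codes are classified "
       "simmer until the sauce thickens")
print(detect(doc))  # likely True on this toy data
```
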
  17. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction and finds a low-rank approximation of scalable and sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes causes low efficiency and effectiveness of clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures. (A plain-LSI baseline sketch follows this entry.)
    Date
    23. 3.2013 13:22:36
    Type
    a
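LSISSM's signature representation and seeding step are not reproduced here; for orientation, here is a sketch of the plain LSI-plus-K-means baseline the abstract compares against, assuming scikit-learn is available and using invented toy documents:

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

docs = ["singular value decomposition factorizes the term-document matrix",
        "latent semantic indexing projects terms into concept space",
        "k-means assigns documents to the nearest centroid",
        "self-organizing maps arrange documents on a grid"]

# LSI: TF-IDF vectors reduced by truncated SVD, then length-normalized.
lsi = make_pipeline(TfidfVectorizer(),
                    TruncatedSVD(n_components=2, random_state=0),
                    Normalizer(copy=False))
Z = lsi.fit_transform(docs)
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z))
```
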
  18. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    
    Pages
    S.1-22
    Type
    a
  19. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.00
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources, and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted under laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches, such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance. (A gold-standard scoring sketch follows this entry.)
    Type
    a
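One strand of the proposed framework, direct comparison of assigned subject terms with a gold standard, reduces to simple set-overlap metrics; here is a minimal sketch with invented terms:

```python
def gold_standard_scores(assigned, gold):
    """Precision, recall, and F1 of automatically assigned subject terms
    against a gold-standard set (one document's worth)."""
    assigned, gold = set(assigned), set(gold)
    tp = len(assigned & gold)
    precision = tp / len(assigned) if assigned else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Invented example terms.
print(gold_standard_scores(assigned=["classification", "indexing", "Java"],
                           gold=["classification", "automatic indexing"]))
```
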
  20. Kanaan, G.; Al-Shalabi, R.; Ghwanmeh, S.; Al-Ma'adeed, H.: A comparison of text-classification techniques applied to Arabic text (2009) 0.00
    
    Abstract
    Many algorithms have been implemented for the problem of text classification. Most of the work in this area was carried out for English text, and very little research has been carried out on Arabic text. The nature of Arabic text is different from that of English text, and preprocessing Arabic text is more challenging. This paper presents an implementation of three automatic text-classification techniques for Arabic text. A corpus of 1,445 Arabic text documents belonging to nine categories has been automatically classified using the kNN, Rocchio, and naïve Bayes algorithms. The results reveal that naïve Bayes was the best performer, followed by kNN and Rocchio. (A stand-in comparison sketch follows this entry.)
    Type
    a
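The Arabic corpus used in the paper is not included in the record, so the sketch below substitutes invented English stand-ins; scikit-learn's NearestCentroid plays the role of the Rocchio classifier:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the 1,445-document, nine-category Arabic corpus.
docs = ["match ended with a late goal", "striker scored twice tonight",
        "parliament passed the new budget", "minister announced tax reform",
        "bank raised interest rates again", "markets fell on inflation news"]
labels = ["sport", "sport", "politics", "politics", "economy", "economy"]

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=1)),
                  ("Rocchio", NearestCentroid()),  # centroid-based, Rocchio-style
                  ("naive Bayes", MultinomialNB())]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    print(name, cross_val_score(pipe, docs, labels, cv=2).mean())
```
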

Languages

  • e 168
  • d 32
  • a 1
  • chi 1

Types

  • a 178
  • el 27
  • r 3
  • m 2
  • s 2
  • x 2