Search (20 results, page 1 of 1)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10146004 = sum of:
      0.08078585 = product of:
        0.24235754 = sum of:
          0.24235754 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24235754 = score(doc=562,freq=2.0), product of:
              0.43122733 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05086421 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020674193 = product of:
        0.041348387 = sum of:
          0.041348387 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041348387 = score(doc=562,freq=2.0), product of:
              0.1781178 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05086421 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
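    The breakdown above is Lucene's "explain" output for ClassicSimilarity, i.e. classic TF-IDF: each term clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and tf = sqrt(freq); coord(m/n) then scales a sum by the fraction of query clauses that matched. A minimal sketch reproducing the total for result 1 (the Python rendering is our own illustration, not part of the search engine):

      from math import sqrt

      def clause_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity scores one term clause as queryWeight * fieldWeight
          query_weight = idf * query_norm               # 8.478011 * 0.05086421 = 0.43122733
          field_weight = sqrt(freq) * idf * field_norm  # tf(freq=2.0) = 1.4142135
          return query_weight * field_weight

      # Result 1: the "_text_:3a" clause is scaled by coord(1/3),
      # the "_text_:22" clause by coord(1/2)
      s_3a = clause_score(2.0, 8.478011, 0.05086421, 0.046875) * (1 / 3)
      s_22 = clause_score(2.0, 3.5018296, 0.05086421, 0.046875) * (1 / 2)
      print(s_3a + s_22)  # ~0.10146004, the sum shown above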
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  2. Fagni, T.; Sebastiani, F.: Selecting negative examples for hierarchical text classification: An experimental comparison (2010) 0.03
    0.034979578 = product of:
      0.069959156 = sum of:
        0.069959156 = product of:
          0.13991831 = sum of:
            0.13991831 = weight(_text_:policy in 4101) [ClassicSimilarity], result of:
              0.13991831 = score(doc=4101,freq=6.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.5130373 = fieldWeight in 4101, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4101)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Hierarchical text classification (HTC) approaches have recently attracted a lot of interest on the part of researchers in human language technology and machine learning, since they have been shown to bring about equal, if not better, classification accuracy with respect to their "flat" counterparts while allowing exponential time savings at both learning and classification time. A typical component of HTC methods is a "local" policy for selecting negative examples: Given a category c, its negative training examples are by default identified with the training examples that are negative for c and positive for the categories which are siblings of c in the hierarchy. However, this policy has always been taken for granted and never been subjected to careful scrutiny since first proposed 15 years ago. This article proposes a thorough experimental comparison between this policy and three other policies for the selection of negative examples in HTC contexts, one of which (BEST LOCAL (k)) is being proposed for the first time in this article. We compare these policies on the hierarchical versions of three supervised learning algorithms (boosting, support vector machines, and naïve Bayes) by performing experiments on two standard TC datasets, REUTERS-21578 and RCV1-V2.
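    The default "local" policy described in the abstract has a direct operational reading: the negatives for a category c are the training examples that are not labeled with c but are labeled with at least one sibling of c. A hedged sketch under that reading (the data layout and names are illustrative assumptions, not taken from the paper):

      def sibling_negatives(c, parent, children, labels, examples):
          """labels[e] is the set of categories example e is positive for."""
          siblings = set(children[parent[c]]) - {c}
          return [e for e in examples
                  if c not in labels[e] and labels[e] & siblings]

      # Toy hierarchy: root -> {sports, politics}; selecting negatives for
      # "sports" keeps only the examples positive for a sibling category.
      parent = {"sports": "root", "politics": "root"}
      children = {"root": ["sports", "politics"]}
      labels = {"d1": {"sports"}, "d2": {"politics"}, "d3": set()}
      print(sibling_negatives("sports", parent, children, labels, ["d1", "d2", "d3"]))
      # -> ['d2']  (d3 is positive for no sibling, so the local policy excludes it)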
  3. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.020674193 = product of:
      0.041348387 = sum of:
        0.041348387 = product of:
          0.08269677 = sum of:
            0.08269677 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.08269677 = score(doc=1046,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  4. Pech, G.; Delgado, C.; Sorella, S.P.: Classifying papers into subfields using Abstracts, Titles, Keywords and KeyWords Plus through pattern detection and optimization procedures : an application in Physics (2022) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 744) [ClassicSimilarity], result of:
              0.08078188 = score(doc=744,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Classifying papers according to fields of knowledge is critical for a clear understanding of the dynamics of scientific (sub)fields, their leading questions, and trends. Most studies rely on journal categories defined by popular databases such as WoS or Scopus, but some experts find that those categories may neither correctly map the existing subfields nor identify the subfield of a specific article. This study addresses the classification problem using data from each paper (Abstract, Title, Keywords, and KeyWords Plus) and the help of experts to identify the existing subfields and the journals exclusive to each subfield. These "exclusive journals" are critical for obtaining, through a pattern-detection procedure that uses machine learning techniques (from the software NVivo), a list of the frequent terms that are specific to each subfield. With that list of terms and with the help of optimization procedures, we can identify the subfield to which each paper most likely belongs. This study can contribute to supporting scientific policy-makers, funding bodies, and research institutions (via more accurate academic performance evaluations), to supporting editors in their task of redefining the scopes of journals, and to supporting popular databases in their processes of refining categories.
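    The assignment step the abstract outlines (match each paper's text against the per-subfield term lists, then pick the best-scoring subfield) can be sketched as below; the names, the toy term lists, and the simple count-and-argmax scoring are illustrative assumptions, and the paper's actual optimization procedures are richer than this:

      import re

      def assign_subfield(text, subfield_terms):
          # Count occurrences of each subfield's characteristic terms and
          # assign the paper to the best-scoring subfield.
          tokens = re.findall(r"[a-z]+", text.lower())
          scores = {sf: sum(tokens.count(t) for t in terms)
                    for sf, terms in subfield_terms.items()}
          return max(scores, key=scores.get)

      subfield_terms = {  # toy stand-ins for the NVivo-derived term lists
          "particle physics": ["quark", "collider", "boson"],
          "condensed matter": ["lattice", "superconductor", "phonon"],
      }
      print(assign_subfield("Evidence for a new boson at the collider", subfield_terms))
      # -> 'particle physics'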
  5. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017228495 = product of:
      0.03445699 = sum of:
        0.03445699 = product of:
          0.06891398 = sum of:
            0.06891398 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06891398 = score(doc=611,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  6. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017228495 = product of:
      0.03445699 = sum of:
        0.03445699 = product of:
          0.06891398 = sum of:
            0.06891398 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06891398 = score(doc=2748,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  7. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.048239786 = score(doc=141,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    pp.1-22
  8. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.048239786 = score(doc=2338,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  9. Automatic classification research at OCLC (2002) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.048239786 = score(doc=1563,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  10. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.048239786 = score(doc=1673,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  11. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.048239786 = score(doc=5273,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  12. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.012059947 = product of:
      0.024119893 = sum of:
        0.024119893 = product of:
          0.048239786 = sum of:
            0.048239786 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.048239786 = score(doc=2560,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  13. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.010337097 = product of:
      0.020674193 = sum of:
        0.020674193 = product of:
          0.041348387 = sum of:
            0.041348387 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.041348387 = score(doc=2760,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
  14. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.010337097 = product of:
      0.020674193 = sum of:
        0.020674193 = product of:
          0.041348387 = sum of:
            0.041348387 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.041348387 = score(doc=3051,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28
  15. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    0.010337097 = product of:
      0.020674193 = sum of:
        0.020674193 = product of:
          0.041348387 = sum of:
            0.041348387 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.041348387 = score(doc=690,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23. 3.2013 13:22:36
  16. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    0.010337097 = product of:
      0.020674193 = sum of:
        0.020674193 = product of:
          0.041348387 = sum of:
            0.041348387 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.041348387 = score(doc=2158,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 8.2015 19:22:04
  17. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    0.008614248 = product of:
      0.017228495 = sum of:
        0.017228495 = product of:
          0.03445699 = sum of:
            0.03445699 = weight(_text_:22 in 2765) [ClassicSimilarity], result of:
              0.03445699 = score(doc=2765,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.19345059 = fieldWeight in 2765, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2765)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:14:43
  18. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.01
    0.008614248 = product of:
      0.017228495 = sum of:
        0.017228495 = product of:
          0.03445699 = sum of:
            0.03445699 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.03445699 = score(doc=1107,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28.10.2013 19:22:57
  19. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.01
    0.006891398 = product of:
      0.013782796 = sum of:
        0.013782796 = product of:
          0.027565593 = sum of:
            0.027565593 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
              0.027565593 = score(doc=2741,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.15476047 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2741)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    12. 9.2004 9:56:22
  20. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.01
    0.006891398 = product of:
      0.013782796 = sum of:
        0.013782796 = product of:
          0.027565593 = sum of:
            0.027565593 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.027565593 = score(doc=3284,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2010 14:41:24