Search (178 results, page 2 of 9)

  • × theme_ss:"Automatisches Klassifizieren"
  • × type_ss:"a"
  1. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.01
    Relevance score: 0.013986527 = [0.038397755 (weight of "k" in doc 1566) + 0.003561823 (weight of "s" in doc 1566)] × 0.6666667 (coord 2/3) × 0.5 (coord 1/2). Under Lucene ClassicSimilarity each term weight = tf(freq) · idf² · queryNorm · fieldNorm, with tf(2.0) = 1.4142135, idf("k") = 3.569778, idf("s") = 1.0872376, queryNorm = 0.04545348, fieldNorm = 0.046875.
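
    As a minimal sketch, the displayed score can be reproduced from the constants in the breakdown above; the formulas are Lucene ClassicSimilarity's, the function name is ours:

      from math import sqrt

      # Lucene ClassicSimilarity: weight(t) = queryWeight * fieldWeight
      #                                     = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
      def term_weight(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm               # e.g. 3.569778 * 0.04545348 = 0.16225883
          field_weight = sqrt(freq) * idf * field_norm  # tf(freq=2.0) = 1.4142135
          return query_weight * field_weight

      w_k = term_weight(2.0, 3.569778, 0.04545348, 0.046875)   # ~0.038397755
      w_s = term_weight(2.0, 1.0872376, 0.04545348, 0.046875)  # ~0.003561823
      print((w_k + w_s) * (2 / 3) * (1 / 2))  # coord(2/3) * coord(1/2) -> ~0.013986527
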
    Abstract
    This study developed a specialized directory system using an automatic classification technique. Economics was selected as the subject field for the classification experiments with Web documents. The classification scheme of the directory follows the DDC, and subject terms representing each class number or subject category were selected from the DDC table to construct a representative term dictionary. In collecting and classifying the Web documents, various strategies were tested in order to find the optimal thresholds. In the classification experiments, Web documents in economics were classified into a total of 757 hierarchical subject categories built from the DDC scheme. The first and second experiments using the representative term dictionary resulted in relatively high precision ratios of 77% and 60%, respectively. The third experiment, employing a machine learning-based k-nearest neighbours (kNN) classifier in a closed experimental setting, achieved a precision ratio of 96%. This implies that it is possible to enhance classification performance by applying a hybrid method combining a dictionary-based technique and a kNN classifier.
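
    A minimal sketch of the kNN step described above (not the authors' code; representing documents as tf-idf vectors compared by cosine similarity is our assumption, since the abstract does not specify the representation):

      from collections import Counter
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def knn_classify(train_texts, train_labels, doc, k=5):
          # Assign doc the majority category among its k most similar training texts
          vec = TfidfVectorizer()
          X = vec.fit_transform(train_texts)
          sims = cosine_similarity(vec.transform([doc]), X)[0]
          top = sims.argsort()[::-1][:k]
          return Counter(train_labels[i] for i in top).most_common(1)[0][0]

      # Toy usage; the DDC-style class numbers are hypothetical
      texts = ["interest rates and inflation", "stock market returns",
               "monetary policy and inflation", "labour market policy"]
      print(knn_classify(texts, ["332", "332.6", "332", "331"],
                         "inflation and interest rates", k=3))
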
    Source
    Journal of information science. 29(2003) no.2, S.117-126
  2. Sun, A.; Lim, E.-P.; Ng, W.-K.: Performance measurement framework for hierarchical text classification (2003) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.11, S.1014-1028
  3. Golub, K.: Automated subject classification of textual Web pages, based on a controlled vocabulary : challenges and recommendations (2006) 0.01
    Source
    New review of hypermedia and multimedia. 12(2006) no.1, S.11-27
  4. Golub, K.; Hamon, T.; Ardö, A.: Automated classification of textual documents based on a controlled vocabulary in engineering (2007) 0.01
    Source
    Knowledge organization. 34(2007) no.4, S.247-263
  5. Reiner, U.: DDC-based search in the data of the German National Bibliography (2008) 0.01
    Pages
    S.121-129
    Source
    New perspectives on subject indexing and classification: essays in honour of Magda Heiner-Freiling. Red.: K. Knull-Schlomann, u.a.
  6. Golub, K.: Automated subject classification of textual documents in the context of Web-based hierarchical browsing (2011) 0.01
    Source
    Knowledge organization. 38(2011) no.3, S.230-244
  7. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
  8. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    Date
    22. 8.2009 19:51:28
    Pages
    S.245-253
  9. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  10. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.01
    Location
    S
    Pages
    S.163-175
  11. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
  12. Golub, K.: Automated subject classification of textual web documents (2006) 0.01
    Source
    Journal of documentation. 62(2006) no.3, S.350-371
  13. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.01
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n²/p) time on p processors rather than the worst-case O(n³/p) time. Furthermore, the O(n²/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
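
    For orientation, a minimal serial sketch of group-average hierarchical agglomerative clustering, the algorithm the paper parallelizes (SciPy's "average" linkage implements the group-average criterion; the distributed-memory parallelization itself is not shown, and the document matrix is stand-in data):

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(0)
      docs = rng.random((100, 50))                                 # stand-in for a tf-idf document matrix
      Z = linkage(pdist(docs, metric="cosine"), method="average")  # group-average HAC
      labels = fcluster(Z, t=10, criterion="maxclust")             # cut the dendrogram into 10 clusters
      print(np.bincount(labels)[1:])                               # cluster sizes
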
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.8, S.1207-1221
  14. Kishida, K.: High-speed rough clustering for very large document collections (2010) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1092-1104
  15. Golub, K.; Lykke, M.: Automated classification of web pages in hierarchical browsing (2009) 0.01
    Source
    Journal of documentation. 65(2009) no.6, S.901-925
  16. Fagni, T.; Sebastiani, F.: Selecting negative examples for hierarchical text classification : an experimental comparison (2010) 0.01
    Abstract
    Hierarchical text classification (HTC) approaches have recently attracted a lot of interest on the part of researchers in human language technology and machine learning, since they have been shown to bring about equal, if not better, classification accuracy with respect to their "flat" counterparts while allowing exponential time savings at both learning and classification time. A typical component of HTC methods is a "local" policy for selecting negative examples: Given a category c, its negative training examples are by default identified with the training examples that are negative for c and positive for the categories which are siblings of c in the hierarchy. However, this policy has always been taken for granted and never been subjected to careful scrutiny since first proposed 15 years ago. This article proposes a thorough experimental comparison between this policy and three other policies for the selection of negative examples in HTC contexts, one of which (BEST LOCAL (k)) is being proposed for the first time in this article. We compare these policies on the hierarchical versions of three supervised learning algorithms (boosting, support vector machines, and naïve Bayes) by performing experiments on two standard TC datasets, REUTERS-21578 and RCV1-V2.
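
    A minimal sketch of the default "local" negative-selection policy described above; the data-structure names (parent_of, children_of, docs_by_cat) are hypothetical:

      def siblings(c, parent_of, children_of):
          # Categories that share c's parent in the hierarchy, excluding c itself
          return [s for s in children_of[parent_of[c]] if s != c]

      def local_negatives(c, docs_by_cat, parent_of, children_of):
          # Negatives for c: documents positive for a sibling of c but not for c
          negatives = set()
          for s in siblings(c, parent_of, children_of):
              negatives |= docs_by_cat[s]
          return negatives - docs_by_cat[c]

      # Toy hierarchy: root -> {science, arts}, science -> {physics, biology}
      children_of = {"root": ["science", "arts"], "science": ["physics", "biology"]}
      parent_of = {"science": "root", "arts": "root",
                   "physics": "science", "biology": "science"}
      docs_by_cat = {"physics": {1, 2}, "biology": {2, 3}}
      print(local_negatives("physics", docs_by_cat, parent_of, children_of))  # {3}
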
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2256-2265
  17. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.01
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.
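
    A minimal sketch of the classification setup described above; the feature matrix is stand-in random data, since reproducing the paper's 7 content and 11 contextual features would require the Twitter corpus:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import f1_score

      rng = np.random.default_rng(0)
      X = rng.random((1000, 18))            # 7 content + 11 contextual features (stand-in values)
      y = rng.integers(0, 2, 1000)          # 1 = hashtag became popular
      clf = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
      print(f1_score(y[800:], clf.predict(X[800:]), average="micro"))
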
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, S.1399-1410
  18. Yang, P.; Gao, W.; Tan, Q.; Wong, K.-F.: A link-bridged topic model for cross-domain document classification (2013) 0.01
    Source
    Information processing and management. 49(2013) no.6, S.1181-1193
  19. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.01
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.3-16
  20. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.01
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277

Languages

  • e (English) 150
  • d (German) 26
  • chi (Chinese) 1