Search (43 results, page 1 of 3)

  • theme_ss:"Automatisches Klassifizieren"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.03
    0.025939818 = product of:
      0.051879637 = sum of:
        0.051879637 = sum of:
          0.010980166 = weight(_text_:d in 2158) [ClassicSimilarity], result of:
            0.010980166 = score(doc=2158,freq=2.0), product of:
              0.0871823 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.045888513 = queryNorm
              0.1259449 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
          0.0035959128 = weight(_text_:s in 2158) [ClassicSimilarity], result of:
            0.0035959128 = score(doc=2158,freq=2.0), product of:
              0.049891718 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.045888513 = queryNorm
              0.072074346 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
          0.03730356 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
            0.03730356 = score(doc=2158,freq=2.0), product of:
              0.16069375 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045888513 = queryNorm
              0.23214069 = fieldWeight in 2158, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2158)
      0.5 = coord(1/2)
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
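    The score breakdown shown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq); the per-term contributions are summed and scaled by the coordination factor coord(1/2). The following minimal Python sketch re-derives result 1's score from the numbers printed above (values copied from the explain output; a verification aid, not part of the catalogue record):

      import math

      # Values copied from the explain output of result 1 (doc 2158).
      QUERY_NORM = 0.045888513
      FIELD_NORM = 0.046875   # length normalisation stored for this field
      COORD = 0.5             # coord(1/2): only one of the two query clauses matched

      # (term, idf, frequency of the term in the matched field)
      TERMS = [
          ("d",  1.899872,  2.0),
          ("s",  1.0872376, 2.0),
          ("22", 3.5018296, 2.0),
      ]

      def term_score(idf, freq):
          """Per-term contribution in ClassicSimilarity: queryWeight * fieldWeight."""
          query_weight = idf * QUERY_NORM                    # e.g. 0.0871823 for "d"
          field_weight = math.sqrt(freq) * idf * FIELD_NORM  # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      total = COORD * sum(term_score(idf, freq) for _, idf, freq in TERMS)
      print(round(total, 7))  # ~0.0259398; the explain output shows 0.025939818 (32-bit floats)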
  2. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.02272193 = product of:
      0.04544386 = sum of:
        0.04544386 = product of:
          0.06816579 = sum of:
            0.0059931884 = weight(_text_:s in 2748) [ClassicSimilarity], result of:
              0.0059931884 = score(doc=2748,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.120123915 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
            0.0621726 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.0621726 = score(doc=2748,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
    Pages
    S.64-75
  3. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    0.013633157 = product of:
      0.027266314 = sum of:
        0.027266314 = product of:
          0.04089947 = sum of:
            0.0035959128 = weight(_text_:s in 690) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=690,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
            0.03730356 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.03730356 = score(doc=690,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  4. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.01
    0.011360965 = product of:
      0.02272193 = sum of:
        0.02272193 = product of:
          0.034082893 = sum of:
            0.0029965942 = weight(_text_:s in 1107) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=1107,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
            0.0310863 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.0310863 = score(doc=1107,freq=2.0), product of:
                0.16069375 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045888513 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  5. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.01
    0.00753804 = product of:
      0.01507608 = sum of:
        0.01507608 = product of:
          0.02261412 = sum of:
            0.019018207 = weight(_text_:d in 3331) [ClassicSimilarity], result of:
              0.019018207 = score(doc=3331,freq=6.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.21814299 = fieldWeight in 3331, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3331)
            0.0035959128 = weight(_text_:s in 3331) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=3331,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 3331, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3331)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Automatic text classification (TC) is essential for the management of information. To properly classify a document d, one must identify the semantics of each term t in d, and those semantics depend heavily on the context (neighboring terms) of t in d. Therefore, we present CTFA (Context-based Term Frequency Assessment), a technique that improves text classifiers by considering term contexts in test documents. The results of context recognition are used to assess term frequencies, so CTFA can easily work with various kinds of text classifiers that base their TC decisions on term frequencies, without modifying the classifiers. Moreover, CTFA is efficient and requires neither huge memory nor domain-specific knowledge. Empirical results show that CTFA enhances the performance of several kinds of text classifiers on different experimental data. A simplified sketch of this context-weighting idea follows this entry.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.300-309
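    The CTFA abstract above describes weighting each term occurrence by how well its local context supports the term and then handing the adjusted frequencies to any classifier that operates on term frequencies. Below is a deliberately simplified, hypothetical sketch of that general idea; the window size, support function and co-occurrence table are illustrative assumptions, not the CTFA algorithm from the paper.

      from collections import Counter

      def context_weighted_tf(tokens, context_support, window=2):
          """Hypothetical context-weighted term frequencies (illustration only).

          Each occurrence of a term contributes a weight in [0, 1] derived from
          its neighbouring terms instead of a flat count of 1; `context_support`
          is a pluggable hook standing in for real context recognition.
          """
          weighted = Counter()
          for i, term in enumerate(tokens):
              neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
              weighted[term] += context_support(term, neighbours)
          return weighted

      # Toy support function: full weight if a neighbour is a known co-occurrent
      # of the term, reduced weight otherwise (the table is made up).
      CO_OCCURRENTS = {("bank", "river"), ("bank", "loan")}

      def toy_support(term, neighbours):
          return 1.0 if any((term, n) in CO_OCCURRENTS for n in neighbours) else 0.5

      doc = "the river bank flooded again".split()
      print(context_weighted_tf(doc, toy_support))
      # The adjusted counts can replace raw term frequencies in any
      # frequency-based text classifier without modifying the classifier.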
  6. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.01
    0.0062817 = product of:
      0.0125634 = sum of:
        0.0125634 = product of:
          0.0188451 = sum of:
            0.015848506 = weight(_text_:d in 3311) [ClassicSimilarity], result of:
              0.015848506 = score(doc=3311,freq=6.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.18178582 = fieldWeight in 3311, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3311)
            0.0029965942 = weight(_text_:s in 3311) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=3311,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 3311, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3311)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.3-16
  7. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.01
    0.0057260245 = product of:
      0.011452049 = sum of:
        0.011452049 = product of:
          0.017178074 = sum of:
            0.01294025 = weight(_text_:d in 2300) [ClassicSimilarity], result of:
              0.01294025 = score(doc=2300,freq=4.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.1484275 = fieldWeight in 2300, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2300)
            0.004237824 = weight(_text_:s in 2300) [ClassicSimilarity], result of:
              0.004237824 = score(doc=2300,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08494043 = fieldWeight in 2300, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2300)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Location
    S
    Pages
    S.163-175
  8. Kasprzik, A.: Automatisierte und semiautomatisierte Klassifizierung : eine Analyse aktueller Projekte (2014) 0.00
    0.004858693 = product of:
      0.009717386 = sum of:
        0.009717386 = product of:
          0.014576078 = sum of:
            0.010980166 = weight(_text_:d in 2470) [ClassicSimilarity], result of:
              0.010980166 = score(doc=2470,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.1259449 = fieldWeight in 2470, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2470)
            0.0035959128 = weight(_text_:s in 2470) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=2470,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 2470, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2470)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Language
    d
    Source
    Perspektive Bibliothek. 3(2014) H.1, S.85-110
  9. Groß, T.; Faden, M.: Automatische Indexierung elektronischer Dokumente an der Deutschen Zentralbibliothek für Wirtschaftswissenschaften : Bericht über die Jahrestagung der Internationalen Buchwissenschaftlichen Gesellschaft (2010) 0.00
    0.004249825 = product of:
      0.00849965 = sum of:
        0.00849965 = product of:
          0.012749475 = sum of:
            0.0103522 = weight(_text_:d in 4051) [ClassicSimilarity], result of:
              0.0103522 = score(doc=4051,freq=4.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.118742 = fieldWeight in 4051, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4051)
            0.0023972753 = weight(_text_:s in 4051) [ClassicSimilarity], result of:
              0.0023972753 = score(doc=4051,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.048049565 = fieldWeight in 4051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4051)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Language
    d
    Location
    D
    Source
    Bibliotheksdienst. 44(2010) H.12, S.1120-1135
  10. HaCohen-Kerner, Y.; Beck, H.; Yehudai, E.; Rosenstein, M.; Mughaz, D.: Cuisine : classification using stylistic feature sets and/or name-based feature sets (2010) 0.00
    0.004048911 = product of:
      0.008097822 = sum of:
        0.008097822 = product of:
          0.012146732 = sum of:
            0.009150138 = weight(_text_:d in 3706) [ClassicSimilarity], result of:
              0.009150138 = score(doc=3706,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.104954086 = fieldWeight in 3706, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3706)
            0.0029965942 = weight(_text_:s in 3706) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=3706,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 3706, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3706)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1644-1657
  11. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.00
    0.004048911 = product of:
      0.008097822 = sum of:
        0.008097822 = product of:
          0.012146732 = sum of:
            0.009150138 = weight(_text_:d in 237) [ClassicSimilarity], result of:
              0.009150138 = score(doc=237,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.104954086 = fieldWeight in 237, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=237)
            0.0029965942 = weight(_text_:s in 237) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=237,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 237, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=237)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification and on short texts more generally. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-words features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show which aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems. A generic sketch of this kind of feature and classifier comparison follows this entry.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.889-903
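    The abstract above compares bag-of-words versus n-gram features across standard classifiers (naive Bayes, maximum entropy, support vector machines). The following is a generic scikit-learn sketch of that kind of feature/classifier comparison; the toy questions, labels and two-fold split are placeholders, not the Yahoo! Answers setup used in the paper.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression  # maximum-entropy classifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Placeholder data: questions with their topic categories.
      questions = [
          "how do I fix squeaky bike brakes", "why is my car engine overheating",
          "best sauce for fresh pasta", "how long should bread dough rise",
          "sort a python list in place", "read a text file line by line in java",
      ]
      labels = ["vehicles", "vehicles", "food", "food", "programming", "programming"]

      feature_sets = {
          "bag-of-words": CountVectorizer(ngram_range=(1, 1)),
          "uni+bigrams": CountVectorizer(ngram_range=(1, 2)),
      }
      classifiers = {
          "naive Bayes": MultinomialNB(),
          "max entropy": LogisticRegression(max_iter=1000),
          "linear SVM": LinearSVC(),
      }

      for feat_name, vectorizer in feature_sets.items():
          for clf_name, clf in classifiers.items():
              pipeline = make_pipeline(vectorizer, clf)
              accuracy = cross_val_score(pipeline, questions, labels, cv=2).mean()
              print(f"{feat_name:12s} + {clf_name:11s}: accuracy {accuracy:.2f}")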
  12. Alberts, I.; Forest, D.: Email pragmatics and automatic classification : a study in the organizational context (2012) 0.00
    0.004048911 = product of:
      0.008097822 = sum of:
        0.008097822 = product of:
          0.012146732 = sum of:
            0.009150138 = weight(_text_:d in 238) [ClassicSimilarity], result of:
              0.009150138 = score(doc=238,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.104954086 = fieldWeight in 238, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=238)
            0.0029965942 = weight(_text_:s in 238) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=238,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 238, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=238)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.904-922
  13. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.00
    0.004048911 = product of:
      0.008097822 = sum of:
        0.008097822 = product of:
          0.012146732 = sum of:
            0.009150138 = weight(_text_:d in 2161) [ClassicSimilarity], result of:
              0.009150138 = score(doc=2161,freq=2.0), product of:
                0.0871823 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045888513 = queryNorm
                0.104954086 = fieldWeight in 2161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2161)
            0.0029965942 = weight(_text_:s in 2161) [ClassicSimilarity], result of:
              0.0029965942 = score(doc=2161,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.060061958 = fieldWeight in 2161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2161)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1799-1816
  14. Ru, C.; Tang, J.; Li, S.; Xie, S.; Wang, T.: Using semantic similarity to reduce wrong labels in distant supervision for relation extraction (2018) 0.00
    8.650423E-4 = product of:
      0.0017300847 = sum of:
        0.0017300847 = product of:
          0.0051902537 = sum of:
            0.0051902537 = weight(_text_:s in 5055) [ClassicSimilarity], result of:
              0.0051902537 = score(doc=5055,freq=6.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.10403037 = fieldWeight in 5055, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5055)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 54(2018) no.4, S.593-608
  15. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.00
    8.475649E-4 = product of:
      0.0016951298 = sum of:
        0.0016951298 = product of:
          0.005085389 = sum of:
            0.005085389 = weight(_text_:s in 3464) [ClassicSimilarity], result of:
              0.005085389 = score(doc=3464,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.101928525 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
  16. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.00
    8.475649E-4 = product of:
      0.0016951298 = sum of:
        0.0016951298 = product of:
          0.005085389 = sum of:
            0.005085389 = weight(_text_:s in 3015) [ClassicSimilarity], result of:
              0.005085389 = score(doc=3015,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.101928525 = fieldWeight in 3015, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3015)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1668-1678
  17. Barthel, S.; Tönnies, S.; Balke, W.-T.: Large-scale experiments for mathematical document classification (2013) 0.00
    7.0630404E-4 = product of:
      0.0014126081 = sum of:
        0.0014126081 = product of:
          0.004237824 = sum of:
            0.004237824 = weight(_text_:s in 1056) [ClassicSimilarity], result of:
              0.004237824 = score(doc=1056,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08494043 = fieldWeight in 1056, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1056)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  18. Mu, T.; Goulermas, J.Y.; Korkontzelos, I.; Ananiadou, S.: Descriptive document clustering via discriminant learning in a co-embedded space of multilevel similarities (2016) 0.00
    7.0630404E-4 = product of:
      0.0014126081 = sum of:
        0.0014126081 = product of:
          0.004237824 = sum of:
            0.004237824 = weight(_text_:s in 2496) [ClassicSimilarity], result of:
              0.004237824 = score(doc=2496,freq=4.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.08494043 = fieldWeight in 2496, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2496)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.106-133
  19. Golub, K.: Automated subject classification of textual documents in the context of Web-based hierarchical browsing (2011) 0.00
    5.993188E-4 = product of:
      0.0011986376 = sum of:
        0.0011986376 = product of:
          0.0035959128 = sum of:
            0.0035959128 = weight(_text_:s in 4558) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=4558,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 4558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4558)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Knowledge organization. 38(2011) no.3, S.230-244
  20. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.00
    5.993188E-4 = product of:
      0.0011986376 = sum of:
        0.0011986376 = product of:
          0.0035959128 = sum of:
            0.0035959128 = weight(_text_:s in 4758) [ClassicSimilarity], result of:
              0.0035959128 = score(doc=4758,freq=2.0), product of:
                0.049891718 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.045888513 = queryNorm
                0.072074346 = fieldWeight in 4758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4758)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1839-1848

Languages

  • e 41
  • d 2