Search (35 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Automatisches Klassifizieren"
  • × year_i:[2010 TO 2020}
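  Note: the filters above are Solr field queries; year_i:[2010 TO 2020} uses Solr's half-open range syntax, matching 2010 <= year < 2020 (the closing brace excludes the upper bound). A minimal sketch of an equivalent request, assuming a standard Solr select handler (host, core, and row count are placeholders):

      import requests

      params = {
          "q": "*:*",
          "fq": [  # each list entry is sent as its own fq parameter
              'language_ss:"e"',
              'theme_ss:"Automatisches Klassifizieren"',
              "year_i:[2010 TO 2020}",  # 2010 inclusive, 2020 exclusive
          ],
          "rows": 20,
      }
      resp = requests.get("http://localhost:8983/solr/mycore/select", params=params)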
  1. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.03
    Score 0.0251 = coord(1/4) × (0.0362 ["science"] + 0.0640 ["22"])
    
    Date
    1.2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
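  Each result's score line breaks down as follows: the bracketed entries are the query terms matched in the record, shown with their Lucene ClassicSimilarity (TF-IDF) weights, and coord(m/n) scales their sum by the fraction of query clauses matched. A minimal Python sketch, using the constants the engine reports for result 1 (doc 2748; idf, queryNorm, and fieldNorm are taken from its explain output), reproduces the score:

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: queryWeight = idf * queryNorm;
          # fieldWeight = sqrt(tf) * idf * fieldNorm; the score is their product
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      query_norm, field_norm = 0.047248192, 0.078125
      w_science = term_weight(2.0, 2.6341193, query_norm, field_norm)  # ~0.0362
      w_22 = term_weight(2.0, 3.5018296, query_norm, field_norm)       # ~0.0640
      print(0.25 * (w_science + w_22))  # coord(1/4) -> ~0.0251, the score above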
  2. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.02
    Score 0.0232 = coord(2/4) × (0.0356 ["management"] + 0.0109 ["science"])
    
    Abstract
    Automatic text classification (TC) is essential for the management of information. To classify a document d properly, one must identify the semantics of each term t in d, and these semantics depend heavily on the context (neighboring terms) of t in d. We therefore present CTFA (Context-based Term Frequency Assessment), a technique that improves text classifiers by considering term contexts in test documents. The results of context recognition are used to assess term frequencies, so CTFA can work with any kind of text classifier that bases its TC decisions on term frequencies, without the classifier itself needing modification. Moreover, CTFA is efficient and requires neither large amounts of memory nor domain-specific knowledge. Empirical results show that CTFA enhances the performance of several kinds of text classifiers on different experimental data.
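    As a loose illustration of the idea, not the paper's actual procedure, one might discount occurrences of a term whose neighborhood shows none of a category's known context terms; the window size, context lexicon, and discount factor below are all assumptions:

        def context_adjusted_tf(tokens, term, context_terms, window=2, discount=0.5):
            # Occurrences of `term` whose neighborhood contains a known context
            # term count fully; the rest are discounted (factor chosen arbitrarily).
            positions = [i for i, tok in enumerate(tokens) if tok == term]
            in_context = sum(
                1 for i in positions
                if context_terms & set(tokens[max(0, i - window):i + window + 1])
            )
            return in_context + discount * (len(positions) - in_context)

        # e.g. context_adjusted_tf("the cell membrane divides the cell".split(),
        #                          "cell", {"membrane", "divides"})  # -> 2.0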
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.300-309
  3. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.02
    Score 0.0150 = coord(1/4) × (0.0217 ["science"] + 0.0384 ["22"])
    
    Date
    23.3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  4. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.02
    Score 0.0150 = coord(1/4) × (0.0217 ["science"] + 0.0384 ["22"])
    
    Date
    4.8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  5. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.01
    Score 0.0125 = coord(1/4) × (0.0181 ["science"] + 0.0320 ["22"])
    
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  6. Yang, P.; Gao, W.; Tan, Q.; Wong, K.-F.: ¬A link-bridged topic model for cross-domain document classification (2013) 0.01
    Score 0.0074 = coord(1/4) × 0.0297 ["management"]
    
    Source
    Information processing and management. 49(2013) no.6, S.1181-1193
  7. Borodin, Y.; Polishchuk, V.; Mahmud, J.; Ramakrishnan, I.V.; Stent, A.: Live and learn from mistakes : a lightweight system for document classification (2013) 0.01
    Score 0.0074 = coord(1/4) × 0.0297 ["management"]
    
    Source
    Information processing and management. 49(2013) no.1, S.83-98
  8. Wang, H.; Hong, M.: Supervised Hebb rule based feature selection for text classification (2019) 0.01
    Score 0.0074 = coord(1/4) × 0.0297 ["management"]
    
    Source
    Information processing and management. 56(2019) no.1, S.167-191
  9. Yilmaz, T.; Ozcan, R.; Altingovde, I.S.; Ulusoy, Ö.: Improving educational web search for question-like queries through subject classification (2019) 0.01
    Score 0.0074 = coord(1/4) × 0.0297 ["management"]
    
    Source
    Information processing and management. 56(2019) no.1, S.228-246
  10. Ru, C.; Tang, J.; Li, S.; Xie, S.; Wang, T.: Using semantic similarity to reduce wrong labels in distant supervision for relation extraction (2018) 0.01
    Score 0.0074 = coord(1/4) × 0.0297 ["management"]
    
    Source
    Information processing and management. 54(2018) no.4, S.593-608
  11. Altinel, B.; Ganiz, M.C.: Semantic text classification : a survey of past and recent advances (2018) 0.01
    Score 0.0059 = coord(1/4) × 0.0237 ["management"]
    
    Source
    Information processing and management. 54(2018) no.6, S.1129-1153
  12. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories (2015) 0.00
    Score 0.0045 = coord(1/4) × coord(1/2) × 0.0362 ["science", freq=8]
    
    Abstract
    In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary sciences journals may be represented by only a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals, based on their references, into the subject categories defined by Web of Science. For an article in a multidisciplinary sciences journal, the method counts the subject categories of all of the article's references indexed by Web of Science and uses the most numerous subject categories among the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method, comparing the obtained results with the categories assigned by PNAS and with the articles' content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations by recognizing articles in multidisciplinary sciences journals whose work relates to that category.
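    A minimal sketch of the counting step described above, assuming the subject categories of each cited reference have already been retrieved from Web of Science (the data layout is illustrative):

        from collections import Counter

        def classify_by_references(reference_categories):
            # reference_categories: one list of WoS subject categories per reference
            counts = Counter(cat for cats in reference_categories for cat in cats)
            if not counts:
                return []
            top = max(counts.values())
            return [cat for cat, n in counts.items() if n == top]

        # e.g. classify_by_references([["Biochemistry"], ["Genetics", "Biochemistry"]])
        # -> ["Biochemistry"]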
    Object
    Web of Science
  13. Suominen, A.; Toivanen, H.: Map of science with topic modeling : comparison of unsupervised learning and human-assigned subject classification (2016) 0.00
    Score 0.0045 = coord(1/4) × coord(1/2) × 0.0362 ["science", freq=8]
    
    Abstract
    The delineation of coordinates is fundamental for the cartography of science, and accurate and credible classification of scientific knowledge presents a persistent challenge in this regard. We present a map of Finnish science based on unsupervised-learning classification, and discuss the advantages and disadvantages of this approach vis-à-vis classifications generated by human reasoning. We conclude that, from both theoretical and practical perspectives, human reasoning-based classification frameworks of scientific knowledge face several challenges: they typically try to fit new-to-the-world knowledge into historical models of scientific knowledge, and they cannot easily be deployed for new large-scale data sets. Automated classification schemes, in contrast, generate classification models only from the available text corpus, thereby identifying credibly novel bodies of knowledge. They also lend themselves to versatile large-scale data analysis and enable a range of Big Data possibilities. However, we also argue that it is neither possible nor fruitful to declare one method the more realistic way to classify scientific knowledge; the merits of each approach depend on the practical objectives of the analysis.
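    By way of illustration, the unsupervised route contrasted here can be sketched with an off-the-shelf topic model; the corpus, vocabulary size, and topic count below are placeholders, not the study's configuration:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = ["first abstract ...", "second abstract ..."]  # placeholder corpus
        X = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(X)
        doc_topics = lda.transform(X)  # each row is a topic mixture, i.e. a
                                       # classification learned from the corpus alone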
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2464-2476
  14. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.00
    Score 0.0038 = coord(1/4) × coord(1/2) × 0.0307 ["science", freq=4]
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative approaches. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
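    A rough sketch of the kernel-fusion half of such a framework, with one text-based and one citation-based similarity kernel; the fixed weights and the spectral-clustering step below stand in for the paper's information-based weighting and are assumptions:

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def fuse_kernels(kernels, weights):
            # weighted sum of (n, n) similarity matrices; weights should sum to 1
            return sum(w * K for w, K in zip(weights, kernels))

        rng = np.random.default_rng(0)
        n = 100  # placeholder kernels; in practice, text and citation similarities
        K_text = rng.random((n, n)); K_text = (K_text + K_text.T) / 2
        K_cite = rng.random((n, n)); K_cite = (K_cite + K_cite.T) / 2
        K = fuse_kernels([K_text, K_cite], [0.6, 0.4])
        labels = SpectralClustering(n_clusters=5, affinity="precomputed",
                                    random_state=0).fit_predict(K)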
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
  15. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: ¬The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.00
    Score 0.0038 = coord(1/4) × coord(1/2) × 0.0307 ["science", freq=4]
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use, both individually and collectively, over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
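    The methodological combination described above, corpus-derived surface features feeding an automatic classifier, can be sketched as follows; the feature set and classifier are placeholders rather than the study's actual configuration:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        texts = ["placeholder bioinformatics paper ...",
                 "placeholder microelectronics paper ..."]
        labels = ["bioinformatics", "microelectronics"]
        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),  # n-gram features
            LogisticRegression(max_iter=1000),
        ).fit(texts, labels)
        # clf.predict(["another text ..."]) then yields a discipline label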
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1668-1678
  16. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: ¬A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.00
    Score 0.0032 = coord(1/4) × coord(1/2) × 0.0256 ["science", freq=4]
    
    Series
    Advances in information science
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.3-16
  17. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.00
    Score 0.0032 = coord(1/4) × coord(1/2) × 0.0256 ["science", freq=4]
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain-analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. The third case applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases, but we have found no coherence, no common activity, and no social semantics. We have not found a research front or a common teleology within the KO domain. We have also found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
  18. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.00
    Score 0.0027 = coord(1/4) × coord(1/2) × 0.0217 ["science"]
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1839-1848
  19. Malo, P.; Sinha, A.; Wallenius, J.; Korhonen, P.: Concept-based document classification using Wikipedia and value function (2011) 0.00
    Score 0.0027 = coord(1/4) × coord(1/2) × 0.0217 ["science"]
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2496-2511
  20. Schaalje, G.B.; Blades, N.J.; Funai, T.: ¬An open-set size-adjusted Bayesian classifier for authorship attribution (2013) 0.00
    Score 0.0027 = coord(1/4) × coord(1/2) × 0.0217 ["science"]
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.9, S.1815-1825