Search (29 results, page 1 of 2)

  • × theme_ss:"Automatisches Klassifizieren"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
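
The three filters above are Solr field queries; `[2010 TO 2020}` is a half-open range (2010 inclusive, 2020 exclusive). A minimal sketch of an equivalent request follows; the endpoint URL and core name are assumptions, not taken from this page:

```python
# Hedged sketch: the facet filters above expressed as Solr filter queries.
# Endpoint and core name are assumed for illustration.
import requests

params = {
    "q": "*:*",
    "fq": ['theme_ss:"Automatisches Klassifizieren"',
           'type_ss:"a"',
           'year_i:[2010 TO 2020}'],  # [ = inclusive bound, } = exclusive bound
    "rows": 20,                       # page 1 of 2 for 29 hits
    "debugQuery": "true",             # yields score explanations like those below
}
r = requests.get("http://localhost:8983/solr/literature/select", params=params)
print(r.json()["response"]["numFound"])  # 29 in the listing above
```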
  1. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.05
    0.049139693 = product of:
      0.07370954 = sum of:
        0.039130855 = weight(_text_:science in 2748) [ClassicSimilarity], result of:
          0.039130855 = score(doc=2748,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.2910318 = fieldWeight in 2748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.078125 = fieldNorm(doc=2748)
        0.034578685 = product of:
          0.06915737 = sum of:
            0.06915737 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06915737 = score(doc=2748,freq=2.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
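
The indented score breakdowns in this listing are Lucene ClassicSimilarity (tf-idf) explain trees; the same structure recurs in every entry below. A minimal sketch reproducing the computation for result 1 from the constants in its tree (function and variable names are ours, not Lucene's API):

```python
from math import log, sqrt

# Sketch of Lucene ClassicSimilarity, reproducing the explain tree for
# result 1 above. All constants are read off that tree.
def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = sqrt(freq)                               # 1.4142135 for freq=2.0
    idf = 1.0 + log(max_docs / (doc_freq + 1.0))  # 2.6341193 for docFreq=8627
    query_weight = idf * query_norm               # 0.13445559
    field_weight = tf * idf * field_norm          # 0.2910318
    return query_weight * field_weight            # 0.039130855

score_science = classic_similarity(2.0, 8627, 44218, 0.05104385, 0.078125)
score_22      = classic_similarity(2.0, 3622, 44218, 0.05104385, 0.078125)

# coord(2/3): two of three query clauses matched; the "22" clause is itself
# wrapped in coord(1/2), hence the extra factor of 0.5.
total = (score_science + score_22 * 0.5) * (2.0 / 3.0)
print(total)  # ~0.049139693, the score shown for result 1 (up to float32 rounding)
```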
  2. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.03
    0.029483816 = product of:
      0.044225723 = sum of:
        0.023478512 = weight(_text_:science in 690) [ClassicSimilarity], result of:
          0.023478512 = score(doc=690,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=690)
        0.02074721 = product of:
          0.04149442 = sum of:
            0.04149442 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.04149442 = score(doc=690,freq=2.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
  3. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.03
    0.029483816 = product of:
      0.044225723 = sum of:
        0.023478512 = weight(_text_:science in 2158) [ClassicSimilarity], result of:
          0.023478512 = score(doc=2158,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 2158, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=2158)
        0.02074721 = product of:
          0.04149442 = sum of:
            0.04149442 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.04149442 = score(doc=2158,freq=2.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  4. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.02
    0.024569847 = product of:
      0.03685477 = sum of:
        0.019565428 = weight(_text_:science in 1107) [ClassicSimilarity], result of:
          0.019565428 = score(doc=1107,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.1455159 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.017289342 = product of:
          0.034578685 = sum of:
            0.034578685 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.034578685 = score(doc=1107,freq=2.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  5. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories (2015) 0.01
    0.013043619 = product of:
      0.039130855 = sum of:
        0.039130855 = weight(_text_:science in 2194) [ClassicSimilarity], result of:
          0.039130855 = score(doc=2194,freq=8.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.2910318 = fieldWeight in 2194, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2194)
      0.33333334 = coord(1/3)
    
    Abstract
    In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary Sciences journals may only be represented by a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals into subject categories as defined by Web of Science based on their references. For an article in a multidisciplinary sciences journal, the method counts the subject categories in all of the article's references indexed by Web of Science, and uses the most numerous subject categories of the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method by comparing the obtained results with the categories of the articles as defined by PNAS and their content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations through recognition of articles in multidisciplinary sciences journals whose work relates to a particular subject category.
    Object
    Web of Science
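
Fang's method assigns an article the subject categories that occur most often among its Web of Science-indexed references. A hedged sketch of that counting step (the data layout and tie handling are assumptions, not the paper's exact procedure):

```python
from collections import Counter

def classify_by_references(reference_categories: list[list[str]]) -> list[str]:
    """reference_categories: one list of WoS subject categories per reference.
    Returns the most frequent category (or categories, on a tie)."""
    counts = Counter(cat for ref in reference_categories for cat in ref)
    if not counts:
        return []
    top = max(counts.values())
    return [cat for cat, n in counts.items() if n == top]

# Invented toy references for illustration.
refs = [["Biochemistry & Molecular Biology"],
        ["Biochemistry & Molecular Biology", "Cell Biology"],
        ["Neurosciences"]]
print(classify_by_references(refs))  # ['Biochemistry & Molecular Biology']
```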
  6. Suominen, A.; Toivanen, H.: Map of science with topic modeling : comparison of unsupervised learning and human-assigned subject classification (2016) 0.01
    0.013043619 = product of:
      0.039130855 = sum of:
        0.039130855 = weight(_text_:science in 3121) [ClassicSimilarity], result of:
          0.039130855 = score(doc=3121,freq=8.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.2910318 = fieldWeight in 3121, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3121)
      0.33333334 = coord(1/3)
    
    Abstract
    The delineation of coordinates is fundamental for the cartography of science, and accurate and credible classification of scientific knowledge presents a persistent challenge in this regard. We present a map of Finnish science based on unsupervised-learning classification, and discuss the advantages and disadvantages of this approach vis-à-vis those generated by human reasoning. We conclude that from theoretical and practical perspectives there exist several challenges for human reasoning-based classification frameworks of scientific knowledge, as they typically try to fit new-to-the-world knowledge into historical models of scientific knowledge, and cannot easily be deployed for new large-scale data sets. Automated classification schemes, in contrast, generate classification models only from the available text corpus, thereby identifying credibly novel bodies of knowledge. They also lend themselves to versatile large-scale data analysis, and enable a range of Big Data possibilities. However, we also argue that it is neither possible nor fruitful to declare one or another method a superior approach in terms of realism to classify scientific knowledge, and we believe that the merits of each approach are dependent on the practical objectives of analysis.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2464-2476
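
As a rough illustration of unsupervised topic-model classification in the spirit of this entry, a minimal scikit-learn LDA example on an invented toy corpus; this is not the authors' actual pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression protein sequence analysis",
    "protein folding molecular dynamics simulation",
    "galaxy cluster dark matter survey",
    "stellar spectra dark energy survey",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each document is assigned its dominant topic; the topic model itself is the
# classification scheme, learned only from the corpus, not from a human taxonomy.
print(lda.transform(X).argmax(axis=1))
```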
  7. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B. de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.01
    0.0110678775 = product of:
      0.03320363 = sum of:
        0.03320363 = weight(_text_:science in 3464) [ClassicSimilarity], result of:
          0.03320363 = score(doc=3464,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.24694869 = fieldWeight in 3464, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
      0.33333334 = coord(1/3)
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and they are employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods, and they were cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
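
The core of the hybrid framework is a weighted combination of similarity information from different data sources before clustering. A hedged sketch of that idea (matrices and weights are invented; the paper's information-based weighting scheme and kernel-fusion algorithms are not reproduced here):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 8  # toy journal set

# Invented text-based and citation-based journal similarity matrices,
# symmetrized so they are valid affinities.
text_sim = rng.random((n, n)); text_sim = (text_sim + text_sim.T) / 2
cite_sim = rng.random((n, n)); cite_sim = (cite_sim + cite_sim.T) / 2

w_text, w_cite = 0.6, 0.4  # assumed weights, one per data source
hybrid = w_text * text_sim + w_cite * cite_sim
np.fill_diagonal(hybrid, 1.0)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(hybrid)
print(labels)  # cluster assignment per journal
```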
  8. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.01
    0.0110678775 = product of:
      0.03320363 = sum of:
        0.03320363 = weight(_text_:science in 3015) [ClassicSimilarity], result of:
          0.03320363 = score(doc=3015,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.24694869 = fieldWeight in 3015, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3015)
      0.33333334 = coord(1/3)
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use-both individually and collectively-over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1668-1678
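
A minimal sketch of the n-gram-features-plus-classifier setup the abstract names, using scikit-learn on an invented two-discipline toy corpus; this is an illustration, not the authors' scitex pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the parser assigns part-of-speech tags to each token",
         "the algorithm aligns genome sequences efficiently",
         "dependency grammar constrains the tag sequence",
         "sequence alignment scores use substitution matrices"]
labels = ["comp_linguistics", "bioinformatics",
          "comp_linguistics", "bioinformatics"]

# Word unigram/bigram features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["tagging tokens with a grammar"]))
```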
  9. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.01
    0.009223231 = product of:
      0.027669692 = sum of:
        0.027669692 = weight(_text_:science in 3311) [ClassicSimilarity], result of:
          0.027669692 = score(doc=3311,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20579056 = fieldWeight in 3311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3311)
      0.33333334 = coord(1/3)
    
    Series
    Advances in information science
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.3-16
  10. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.01
    0.009223231 = product of:
      0.027669692 = sum of:
        0.027669692 = weight(_text_:science in 3627) [ClassicSimilarity], result of:
          0.027669692 = score(doc=3627,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20579056 = fieldWeight in 3627, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3627)
      0.33333334 = coord(1/3)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
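
Co-word analysis, as applied in case three above, counts pairwise co-occurrences of keywords across papers. A hedged sketch with invented keyword lists:

```python
from collections import Counter
from itertools import combinations

# One keyword list per paper (invented for illustration).
papers = [["clustering", "machine learning", "automatic indexing"],
          ["automatic classification", "clustering"],
          ["machine learning", "automatic classification", "clustering"]]

cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1  # each pair co-occurring in a paper counts once

for pair, n in cooc.most_common(3):
    print(pair, n)
```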
  11. Liu, R.-L.: Context-based term frequency assessment for text classification (2010) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 3331) [ClassicSimilarity], result of:
          0.023478512 = score(doc=3331,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 3331, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3331)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.2, S.300-309
  12. Cortez, E.; Herrera, M.R.; Silva, A.S. da; Moura, E.S. de; Neubert, M.: Lightweight methods for large-scale product categorization (2011) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 4758) [ClassicSimilarity], result of:
          0.023478512 = score(doc=4758,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 4758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=4758)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1839-1848
  13. Malo, P.; Sinha, A.; Wallenius, J.; Korhonen, P.: Concept-based document classification using Wikipedia and value function (2011) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 4948) [ClassicSimilarity], result of:
          0.023478512 = score(doc=4948,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 4948, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=4948)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.12, S.2496-2511
  14. Schaalje, G.B.; Blades, N.J.; Funai, T.: An open-set size-adjusted Bayesian classifier for authorship attribution (2013) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 1041) [ClassicSimilarity], result of:
          0.023478512 = score(doc=1041,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 1041, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1041)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.9, S.1815-1825
  15. Desale, S.K.; Kumbhar, R.: Research on automatic classification of documents in library environment : a literature review (2013) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 1071) [ClassicSimilarity], result of:
          0.023478512 = score(doc=1071,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 1071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1071)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper aims to provide an overview of automatic classification research, which focuses on issues related to the automatic classification of documents in a library environment. The review covers literature published in mainstream library and information science studies. The review was done on literature published in both academic and professional LIS journals and other documents. This review reveals that basically three types of research are being done on automatic classification: 1) hierarchical classification using different library classification schemes, 2) text categorization and document categorization using different type of classifiers with or without using training documents, and 3) automatic bibliographic classification. Predominantly this research is directed towards solving problems of organization of digital documents in an online environment. However, very little research is devoted towards solving the problems of arrangement of physical documents.
  16. Aphinyanaphongs, Y.; Fu, L.D.; Li, Z.; Peskin, E.R.; Efstathiadis, E.; Aliferis, C.F.; Statnikov, A.: ¬A comprehensive empirical comparison of modern supervised classification and feature selection methods for text categorization (2014) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 1496) [ClassicSimilarity], result of:
          0.023478512 = score(doc=1496,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 1496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1496)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.1964-1987
  17. Barbu, E.: What kind of knowledge is in Wikipedia? : unsupervised extraction of properties for similar concepts (2014) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 1547) [ClassicSimilarity], result of:
          0.023478512 = score(doc=1547,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 1547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1547)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.12, S.2489-2497
  18. Ko, Y.: A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 2339) [ClassicSimilarity], result of:
          0.023478512 = score(doc=2339,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2553-2565
  19. Kishida, K.: High-speed rough clustering for very large document collections (2010) 0.01
    0.0065218094 = product of:
      0.019565428 = sum of:
        0.019565428 = weight(_text_:science in 3463) [ClassicSimilarity], result of:
          0.019565428 = score(doc=3463,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.1455159 = fieldWeight in 3463, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3463)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1092-1104
  20. HaCohen-Kerner, Y.; Beck, H.; Yehudai, E.; Rosenstein, M.; Mughaz, D.: Cuisine : classification using stylistic feature sets and/or name-based feature sets (2010) 0.01
    0.0065218094 = product of:
      0.019565428 = sum of:
        0.019565428 = weight(_text_:science in 3706) [ClassicSimilarity], result of:
          0.019565428 = score(doc=3706,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.1455159 = fieldWeight in 3706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3706)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, S.1644-1657