Search (24 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.062937215 = product of:
      0.094405815 = sum of:
        0.07516904 = product of:
          0.22550711 = sum of:
            0.22550711 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.22550711 = score(doc=562,freq=2.0), product of:
                0.40124533 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047327764 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.019236775 = product of:
          0.03847355 = sum of:
            0.03847355 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03847355 = score(doc=562,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
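    The relevance figures listed under each hit are Lucene ClassicSimilarity explanations: every weight(_text_:term) leaf combines term frequency, inverse document frequency and field-length normalisation, and the surrounding product/sum/coord lines combine the leaves into the total. A minimal sketch that reproduces the first leaf above from the printed constants (the function name is illustrative; the numbers are copied from the breakdown, not from any index configuration):

      import math

      def classic_similarity_leaf(freq, doc_freq, max_docs, query_norm, field_norm):
          """Recompute one weight(_text_:term) leaf of a ClassicSimilarity explanation."""
          tf = math.sqrt(freq)                           # 1.4142135 for freq=2.0
          idf = math.log(max_docs / (doc_freq + 1)) + 1  # 8.478011 for docFreq=24, maxDocs=44218
          query_weight = idf * query_norm                # 0.40124533
          field_weight = tf * idf * field_norm           # 0.56201804 (fieldWeight)
          return query_weight * field_weight             # 0.22550711 (the leaf score)

      # Constants copied from the breakdown of _text_:3a in doc 562:
      leaf = classic_similarity_leaf(freq=2.0, doc_freq=24, max_docs=44218,
                                     query_norm=0.047327764, field_norm=0.046875)
      print(round(leaf, 8))  # ~0.22550711; the outer coord(1/3), coord(1/2) and coord(2/3)
                             # factors then scale the leaves into the 0.062937215 total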
  2. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.02
    0.023487274 = product of:
      0.07046182 = sum of:
        0.07046182 = sum of:
          0.04481278 = weight(_text_:group in 2741) [ClassicSimilarity], result of:
            0.04481278 = score(doc=2741,freq=2.0), product of:
              0.21906674 = queryWeight, product of:
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.047327764 = queryNorm
              0.20456223 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.628715 = idf(docFreq=1173, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
          0.025649035 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
            0.025649035 = score(doc=2741,freq=2.0), product of:
              0.16573377 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047327764 = queryNorm
              0.15476047 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
      0.33333334 = coord(1/3)
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories related to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users. 1. Introduction: The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
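    The 0.27 figure is an average agreement between subjects' independent sortings of the same 20 pages. The excerpt does not spell out the similarity measure; one common way to compare two partitions is pairwise co-assignment agreement (Jaccard over co-clustered page pairs), sketched below purely as an illustration with invented sortings:

      from itertools import combinations

      def co_clustered_pairs(clustering):
          """All unordered page pairs that one subject placed in the same cluster."""
          pairs = set()
          for cluster in clustering:
              pairs.update(combinations(sorted(cluster), 2))
          return pairs

      def pairwise_jaccard(clustering_a, clustering_b):
          a, b = co_clustered_pairs(clustering_a), co_clustered_pairs(clustering_b)
          return len(a & b) / len(a | b) if (a | b) else 1.0

      # Hypothetical sortings of six pages by two subjects:
      subject_1 = [{"p1", "p2", "p3"}, {"p4", "p5"}, {"p6"}]
      subject_2 = [{"p1", "p2"}, {"p3", "p4", "p5"}, {"p6"}]
      print(pairwise_jaccard(subject_1, subject_2))  # 0.333..., i.e. low agreement, as in the study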
    Date
    12. 9.2004 9:56:22
  3. Reiner, U.: Automatic analysis of DDC notations (2007) 0.02
    0.02240639 = product of:
      0.06721917 = sum of:
        0.06721917 = product of:
          0.13443834 = sum of:
            0.13443834 = weight(_text_:group in 118) [ClassicSimilarity], result of:
              0.13443834 = score(doc=118,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.6136867 = fieldWeight in 118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.09375 = fieldNorm(doc=118)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Presentation given at the EDUG conference of the European DDC users' group on 11.06.2007 in Bern.
  4. May, A.D.: Automatic classification of e-mail messages by message type (1997) 0.01
    0.013070395 = product of:
      0.039211184 = sum of:
        0.039211184 = product of:
          0.07842237 = sum of:
            0.07842237 = weight(_text_:group in 6493) [ClassicSimilarity], result of:
              0.07842237 = score(doc=6493,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.35798392 = fieldWeight in 6493, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6493)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes a system that automatically classifies e-mail messages in the HUMANIST electronic discussion group into one of 4 classes: questions, responses, announcements, or administrative messages. A total of 1,372 messages were analyzed. The automatic classification of a message was based on string matching between a message text and predefined string sets for each of the message types. The system's automated ability to accurately classify a message was compared against manually assigned codes. A Cohen's kappa of .55 suggested that there was statistical agreement between the automatic and manually assigned codes.
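    Cohen's kappa corrects the raw agreement between the automatic and manual codings for the agreement expected by chance; .55 is conventionally read as moderate agreement. A minimal sketch of the computation (the label lists are invented, not taken from the study):

      from collections import Counter

      def cohens_kappa(labels_a, labels_b):
          """Chance-corrected agreement between two codings of the same messages."""
          n = len(labels_a)
          observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
          freq_a, freq_b = Counter(labels_a), Counter(labels_b)
          expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
          return (observed - expected) / (1 - expected)

      auto   = ["question", "response", "response", "announcement", "administrative", "question"]
      manual = ["question", "response", "announcement", "announcement", "administrative", "response"]
      print(round(cohens_kappa(auto, manual), 2))  # 0.56 for this toy sample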
  5. Orwig, R.E.; Chen, H.; Nunamaker, J.F.: A graphical, self-organizing approach to classifying electronic meeting output (1997) 0.01
    0.013070395 = product of:
      0.039211184 = sum of:
        0.039211184 = product of:
          0.07842237 = sum of:
            0.07842237 = weight(_text_:group in 6928) [ClassicSimilarity], result of:
              0.07842237 = score(doc=6928,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.35798392 = fieldWeight in 6928, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6928)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes research in the application of a Kohonen Self-Organizing Map (SOM) to the problem of classification of electronic brainstorming output and an evaluation of the results. Describes an electronic meeting system and describes the classification problem that exists in the group problem solving process. Surveys the literature concerning classification. Describes the application of the Kohonen SOM to the meeting output classification problem. Describes an experiment that evaluated the classification performed by the Kohonen SOM by comparing it with those of a human expert and a Hopfield neural network. Discusses conclusions and directions for future research
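    A Kohonen SOM maps each document vector onto a small grid of nodes; the winning node and its neighbours are pulled toward each input, so nearby nodes end up representing similar documents, which is what produces the graphical classification described. A toy training loop, assuming tiny term-frequency vectors and a fixed learning rate and neighbourhood radius (a real implementation would decay both):

      import numpy as np

      def train_som(docs, grid=(4, 4), epochs=50, lr=0.3, radius=1.0, seed=0):
          """Toy Kohonen SOM; docs is an (n_docs, n_terms) array of document vectors."""
          rng = np.random.default_rng(seed)
          n_terms = docs.shape[1]
          weights = rng.random((grid[0], grid[1], n_terms))
          coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])])
          for _ in range(epochs):
              for doc in docs:
                  dists = np.linalg.norm(weights.reshape(-1, n_terms) - doc, axis=1)
                  winner = coords[np.argmin(dists)]                      # best-matching node
                  influence = np.exp(-np.linalg.norm(coords - winner, axis=1) ** 2
                                     / (2 * radius ** 2))                # neighbourhood kernel
                  weights += (lr * influence).reshape(grid[0], grid[1], 1) * (doc - weights)
          return weights

      # Hypothetical term-frequency vectors for four brainstorming comments:
      comments = np.array([[1, 0, 0, 2], [1, 1, 0, 1], [0, 2, 1, 0], [0, 1, 2, 0]], dtype=float)
      print(train_som(comments).shape)  # (4, 4, 4): one weight vector per grid node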
  6. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.012824517 = product of:
      0.03847355 = sum of:
        0.03847355 = product of:
          0.0769471 = sum of:
            0.0769471 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.0769471 = score(doc=1046,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 14:17:22
  7. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.01
    0.011203195 = product of:
      0.033609584 = sum of:
        0.033609584 = product of:
          0.06721917 = sum of:
            0.06721917 = weight(_text_:group in 2563) [ClassicSimilarity], result of:
              0.06721917 = score(doc=2563,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.30684334 = fieldWeight in 2563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. As well, it can be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns with topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
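    The "iterative principal eigenvector computation" is the power-iteration step familiar from the HITS-style link analysis the abstract cites: repeatedly multiplying a non-negative link matrix by a score vector and renormalising converges on its dominant eigenvector, whose largest entries mark the representative pages of a topic. A minimal sketch over an invented link matrix (this illustrates the eigenvector step only, not the full ATD pipeline):

      import numpy as np

      def principal_eigenvector(matrix, iterations=100, tol=1e-10):
          """Power iteration: dominant eigenvector of a non-negative square matrix."""
          vec = np.ones(matrix.shape[0]) / matrix.shape[0]
          for _ in range(iterations):
              nxt = matrix @ vec
              nxt /= np.linalg.norm(nxt)
              if np.linalg.norm(nxt - vec) < tol:
                  break
              vec = nxt
          return vec

      # Hypothetical hyperlinks among five pages (row i links to column j):
      links = np.array([[0, 1, 1, 0, 0],
                        [1, 0, 1, 0, 0],
                        [1, 1, 0, 0, 0],
                        [0, 0, 0, 0, 1],
                        [0, 0, 0, 1, 0]], dtype=float)
      authority = principal_eigenvector(links.T @ links)  # HITS-style authority scores
      print(np.round(authority, 3))  # mass concentrates on the densely interlinked pages 0-2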
  8. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.010687098 = product of:
      0.032061294 = sum of:
        0.032061294 = product of:
          0.06412259 = sum of:
            0.06412259 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06412259 = score(doc=611,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 12:54:24
  9. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.010687098 = product of:
      0.032061294 = sum of:
        0.032061294 = product of:
          0.06412259 = sum of:
            0.06412259 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06412259 = score(doc=2748,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 2.2016 18:25:22
  10. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.01
    0.009335997 = product of:
      0.028007988 = sum of:
        0.028007988 = product of:
          0.056015976 = sum of:
            0.056015976 = weight(_text_:group in 448) [ClassicSimilarity], result of:
              0.056015976 = score(doc=448,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2557028 = fieldWeight in 448, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=448)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n²/p) time on p processors rather than the worst-case O(n³/p) time. Furthermore, the O(n²/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
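    Group-average (UPGMA) agglomerative clustering repeatedly merges the two clusters whose members have the smallest average pairwise distance; the O(n²) distance matrix is exactly what the parallel version distributes across the p processors. A serial baseline sketch using SciPy with invented document vectors (not the parallel algorithm from the paper):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import pdist

      # Hypothetical TF-IDF-like vectors for six documents, two per topic:
      docs = np.array([[1.0, 0.0, 0.2], [0.9, 0.1, 0.1], [0.0, 1.0, 0.3],
                       [0.1, 0.9, 0.2], [0.2, 0.1, 1.0], [0.1, 0.2, 0.9]])

      distances = pdist(docs, metric="cosine")            # the O(n^2) pairwise distances
      dendrogram = linkage(distances, method="average")   # group-average agglomeration
      print(fcluster(dendrogram, t=3, criterion="maxclust"))  # three clusters, one per topic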
  11. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.01
    0.009335997 = product of:
      0.028007988 = sum of:
        0.028007988 = product of:
          0.056015976 = sum of:
            0.056015976 = weight(_text_:group in 3627) [ClassicSimilarity], result of:
              0.056015976 = score(doc=3627,freq=2.0), product of:
                0.21906674 = queryWeight, product of:
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2557028 = fieldWeight in 3627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.628715 = idf(docFreq=1173, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3627)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
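    Co-word analysis, applied in the third case, simply counts how often pairs of descriptor terms occur in the same paper; the resulting co-occurrence counts are what the thematic groupings are read from. A minimal counting sketch with invented keyword sets (not the data from the special issue):

      from collections import Counter
      from itertools import combinations

      # Hypothetical keyword sets for four papers:
      papers = [
          {"clustering", "automatic classification", "information retrieval"},
          {"machine learning", "automatic indexing"},
          {"clustering", "machine learning", "information retrieval"},
          {"automatic classification", "automatic indexing"},
      ]

      cooccurrence = Counter()
      for keywords in papers:
          cooccurrence.update(combinations(sorted(keywords), 2))

      for (a, b), count in cooccurrence.most_common(3):
          print(f"{a} / {b}: {count}")  # the strongest term pairings in the toy corpus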
  12. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.04488581 = score(doc=141,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.1-22
  13. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.04488581 = score(doc=2338,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.1997 19:16:05
  14. Automatic classification research at OCLC (2002) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04488581 = score(doc=1563,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    5. 5.2003 9:22:09
  15. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04488581 = score(doc=1673,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:08:06
  16. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04488581 = score(doc=5273,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 16:24:52
  17. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.0074809687 = product of:
      0.022442905 = sum of:
        0.022442905 = product of:
          0.04488581 = sum of:
            0.04488581 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04488581 = score(doc=2560,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 9.2008 18:31:54
  18. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.0064122584 = product of:
      0.019236775 = sum of:
        0.019236775 = product of:
          0.03847355 = sum of:
            0.03847355 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.03847355 = score(doc=2760,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2009 19:11:54
  19. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.0064122584 = product of:
      0.019236775 = sum of:
        0.019236775 = product of:
          0.03847355 = sum of:
            0.03847355 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.03847355 = score(doc=3051,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 8.2009 19:51:28
  20. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    0.0064122584 = product of:
      0.019236775 = sum of:
        0.019236775 = product of:
          0.03847355 = sum of:
            0.03847355 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.03847355 = score(doc=690,freq=2.0), product of:
                0.16573377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047327764 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    23. 3.2013 13:22:36