Search (31 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10390238 = sum of:
      0.08273052 = product of:
        0.24819154 = sum of:
          0.24819154 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24819154 = score(doc=562,freq=2.0), product of:
              0.44160777 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052088603 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021171859 = product of:
        0.042343717 = sum of:
          0.042343717 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042343717 = score(doc=562,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
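  The nested breakdowns shown with each result are Lucene ClassicSimilarity explain trees: a term's contribution is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with coord() applied on top. A minimal sketch that reproduces the `_text_:22` term score for doc 562 from the tree above (names follow Lucene's explain output; this is an illustration of the formula, not Lucene itself):

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one term's ClassicSimilarity contribution (before coord())."""
    tf = math.sqrt(freq)                            # tf(freq) = sqrt(termFreq)
    idf = math.log(max_docs / (doc_freq + 1)) + 1   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                 # queryWeight
    field_weight = tf * idf * field_norm            # fieldWeight
    return query_weight * field_weight              # weight(term in doc)

# The `_text_:22` term in doc 562 (result 1 above):
score = classic_similarity(freq=2.0, doc_freq=3622, max_docs=44218,
                           query_norm=0.052088603, field_norm=0.046875)
# matches the 0.042343717 in the explain tree; coord(1/2) = 0.5 then halves it
```

  The same function reproduces the `libraries` weights in the other trees by swapping in docFreq=4499 and the entry's fieldNorm.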
  2. Automatic classification research at OCLC (2002) 0.05
    0.04643757 = product of:
      0.09287514 = sum of:
        0.09287514 = sum of:
          0.043474134 = weight(_text_:libraries in 1563) [ClassicSimilarity], result of:
            0.043474134 = score(doc=1563,freq=2.0), product of:
              0.1711139 = queryWeight, product of:
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.052088603 = queryNorm
              0.25406548 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2850544 = idf(docFreq=4499, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
          0.049401004 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
            0.049401004 = score(doc=1563,freq=2.0), product of:
              0.18240541 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052088603 = queryNorm
              0.2708308 = fieldWeight in 1563, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1563)
      0.5 = coord(1/2)
    
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcomes have been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change.
    Date
    5. 5.2003 9:22:09
  3. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.021171859 = product of:
      0.042343717 = sum of:
        0.042343717 = product of:
          0.084687434 = sum of:
            0.084687434 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.084687434 = score(doc=1046,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  4. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017643217 = product of:
      0.035286434 = sum of:
        0.035286434 = product of:
          0.07057287 = sum of:
            0.07057287 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.07057287 = score(doc=611,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  5. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017643217 = product of:
      0.035286434 = sum of:
        0.035286434 = product of:
          0.07057287 = sum of:
            0.07057287 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.07057287 = score(doc=2748,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  6. Barthel, S.; Tönnies, S.; Balke, W.-T.: Large-scale experiments for mathematical document classification (2013) 0.01
    0.013446322 = product of:
      0.026892643 = sum of:
        0.026892643 = product of:
          0.053785287 = sum of:
            0.053785287 = weight(_text_:libraries in 1056) [ClassicSimilarity], result of:
              0.053785287 = score(doc=1056,freq=6.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.3143245 = fieldWeight in 1056, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1056)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The ever-increasing amount of digitally available information is a curse and a blessing at the same time. On the one hand, users have increasingly large amounts of information at their fingertips. On the other hand, the assessment and refinement of web search results becomes more and more tiresome and difficult for non-experts in a domain. Therefore, established digital libraries offer specialized collections with a certain degree of quality. This quality can largely be attributed to the great effort invested into semantic enrichment of the provided documents, e.g. by annotating them with respect to a domain-specific taxonomy. This process is still done manually in many domains, e.g. chemistry (CAS), medicine (MeSH), or mathematics (MSC). But due to the growing amount of data, this manual task becomes ever more time-consuming and expensive. The only solution to this problem seems to be to employ automated classification algorithms, but evaluations in previous research make it difficult to draw conclusions about real-world scenarios. We therefore conducted a large-scale feasibility study on a real-world data set from one of the biggest mathematical digital libraries, i.e. Zentralblatt MATH, with special focus on its practical applicability.
    Source
    15th International Conference on Asia-Pacific Digital Libraries ICADL 2013. Bangalore, India. [to appear, 2013]
  7. Golub, K.; Hansson, J.; Soergel, D.; Tudhope, D.: Managing classification in libraries : a methodological outline for evaluating automatic subject indexing and classification in Swedish library catalogues (2015) 0.01
    0.013446322 = product of:
      0.026892643 = sum of:
        0.026892643 = product of:
          0.053785287 = sum of:
            0.053785287 = weight(_text_:libraries in 2300) [ClassicSimilarity], result of:
              0.053785287 = score(doc=2300,freq=6.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.3143245 = fieldWeight in 2300, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject terms play a crucial role in resource discovery but require substantial effort to produce. Automatic subject classification and indexing address problems of scale and sustainability and can be used to enrich existing bibliographic records, establish more connections across and between resources, and enhance the consistency of bibliographic data. The paper aims to put forward a complex methodological framework for evaluating automatic classification tools for Swedish textual documents based on the Dewey Decimal Classification (DDC), recently introduced to Swedish libraries. Three major complementary approaches are suggested: a quality-built gold standard, retrieval effects, and domain analysis. The gold standard is built based on input from at least two catalogue librarians, end users expert in the subject, end users inexperienced in the subject, and automated tools. Retrieval effects are studied through a combination of assigned and free tasks, including factual and comprehensive types. The study also takes into consideration the different role and character of subject terms in various knowledge domains, such as scientific disciplines. As a theoretical framework, domain analysis is used and applied in relation to the implementation of DDC in Swedish libraries and to chosen domains of knowledge within the DDC itself.
  8. Cheng, P.T.K.; Wu, A.K.W.: ACS: an automatic classification system (1995) 0.01
    0.0124211805 = product of:
      0.024842361 = sum of:
        0.024842361 = product of:
          0.049684722 = sum of:
            0.049684722 = weight(_text_:libraries in 2188) [ClassicSimilarity], result of:
              0.049684722 = score(doc=2188,freq=2.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.29036054 = fieldWeight in 2188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2188)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this paper, we introduce ACS, an automatic classification system for school libraries. First, various approaches towards automatic classification, namely (i) rule-based, (ii) browse and search, and (iii) partial match, are critically reviewed. The central issues of scheme selection, text analysis and similarity measures are discussed. A novel approach towards detecting book-class similarity with a Modified Overlap Coefficient (MOC) is also proposed. Finally, the design and implementation of ACS is presented. The test result of over 80% correctness in automatic classification and a cost reduction of 75% compared to manual classification suggests that ACS is highly adoptable.
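  The abstract names a Modified Overlap Coefficient without giving its formula. As a point of reference, the standard overlap coefficient it builds on measures book-class similarity as the shared-term count normalized by the smaller term set; the paper's specific modification is not detailed here:

```python
def overlap_coefficient(book_terms: set, class_terms: set) -> float:
    """Standard overlap coefficient |A ∩ B| / min(|A|, |B|) -- the base
    measure that MOC modifies (the modification itself is not given in
    the abstract above)."""
    if not book_terms or not class_terms:
        return 0.0
    return len(book_terms & class_terms) / min(len(book_terms), len(class_terms))

# Hypothetical term sets for a book and a candidate class:
sim = overlap_coefficient({"automatic", "classification", "school", "library"},
                          {"library", "classification", "cataloguing"})
# 2 shared terms / min(4, 3) = 2/3
```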
  9. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.01
    0.0124211805 = product of:
      0.024842361 = sum of:
        0.024842361 = product of:
          0.049684722 = sum of:
            0.049684722 = weight(_text_:libraries in 7695) [ClassicSimilarity], result of:
              0.049684722 = score(doc=7695,freq=2.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.29036054 = fieldWeight in 7695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7695)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future developments of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and the structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar.
  10. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.012350251 = product of:
      0.024700502 = sum of:
        0.024700502 = product of:
          0.049401004 = sum of:
            0.049401004 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.049401004 = score(doc=141,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  11. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.012350251 = product of:
      0.024700502 = sum of:
        0.024700502 = product of:
          0.049401004 = sum of:
            0.049401004 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.049401004 = score(doc=2338,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  12. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012350251 = product of:
      0.024700502 = sum of:
        0.024700502 = product of:
          0.049401004 = sum of:
            0.049401004 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.049401004 = score(doc=1673,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  13. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.012350251 = product of:
      0.024700502 = sum of:
        0.024700502 = product of:
          0.049401004 = sum of:
            0.049401004 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.049401004 = score(doc=5273,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  14. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.012350251 = product of:
      0.024700502 = sum of:
        0.024700502 = product of:
          0.049401004 = sum of:
            0.049401004 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.049401004 = score(doc=2560,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  15. Pong, J.Y.-H.; Kwok, R.C.-W.; Lau, R.Y.-K.; Hao, J.-X.; Wong, P.C.-C.: ¬A comparative study of two automatic document classification methods in a library setting (2008) 0.01
    0.010978876 = product of:
      0.021957751 = sum of:
        0.021957751 = product of:
          0.043915503 = sum of:
            0.043915503 = weight(_text_:libraries in 2532) [ClassicSimilarity], result of:
              0.043915503 = score(doc=2532,freq=4.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.25664487 = fieldWeight in 2532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2532)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In current library practice, trained human experts usually carry out document cataloguing and indexing based on a manual approach. With the explosive growth in the number of electronic documents available on the Internet and in digital libraries, it is increasingly difficult for library practitioners to categorize both electronic documents and traditional library materials using just a manual approach. To improve the effectiveness and efficiency of document categorization in the library setting, more in-depth studies of using automatic document classification methods to categorize library items are required. Machine learning research has advanced rapidly in recent years. However, applying machine learning techniques to improve library practice is still a relatively unexplored area. This paper illustrates the design and development of a machine learning based automatic document classification system to alleviate the manual categorization problem encountered in the library setting. Two supervised machine learning algorithms have been tested. Our empirical tests show that supervised machine learning algorithms in general, and the k-nearest neighbours (KNN) algorithm in particular, can be used to develop an effective document classification system to enhance current library practice. Moreover, some concrete recommendations regarding how to practically apply the KNN algorithm to develop automatic document classification in a library setting are made. To the best of our knowledge, this is the first in-depth study of applying the KNN algorithm to automatic document classification based on the widely used LCC classification scheme adopted by many large libraries.
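  The kNN approach described in this abstract can be sketched in a few lines: represent each catalogued item as a term-frequency vector, find the k training items most similar to a new document, and assign the majority class. This is an illustrative sketch, not the authors' implementation; the toy labels "Z" and "Q" stand in for LCC-style classes and are hypothetical:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def knn_classify(doc: str, training, k: int = 3) -> str:
    """Assign the majority class among the k most similar training docs."""
    vec = Counter(doc.lower().split())
    ranked = sorted(training,
                    key=lambda item: cosine(vec, Counter(item[0].lower().split())),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical catalogued training items with made-up class labels:
training = [
    ("library catalog subject indexing", "Z"),
    ("cataloguing classification schemes library", "Z"),
    ("machine learning neural networks", "Q"),
    ("supervised learning algorithms", "Q"),
]
label = knn_classify("classification of library catalog records", training, k=3)
# the two library-related neighbours outvote the one ML neighbour -> "Z"
```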
  16. Cui, H.; Heidorn, P.B.; Zhang, H.: ¬An approach to automatic classification of text for information retrieval (2002) 0.01
    0.0108685335 = product of:
      0.021737067 = sum of:
        0.021737067 = product of:
          0.043474134 = sum of:
            0.043474134 = weight(_text_:libraries in 174) [ClassicSimilarity], result of:
              0.043474134 = score(doc=174,freq=2.0), product of:
                0.1711139 = queryWeight, product of:
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.052088603 = queryNorm
                0.25406548 = fieldWeight in 174, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2850544 = idf(docFreq=4499, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=174)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the Second ACM/IEEE-CS Joint Conference on Digital Libraries : JCDL 2002 ; July 14 - 18, 2002, Portland, Oregon, USA. Ed. by Gary Marchionini
  17. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.010585929 = product of:
      0.021171859 = sum of:
        0.021171859 = product of:
          0.042343717 = sum of:
            0.042343717 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.042343717 = score(doc=2760,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54
  18. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    0.010585929 = product of:
      0.021171859 = sum of:
        0.021171859 = product of:
          0.042343717 = sum of:
            0.042343717 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.042343717 = score(doc=3051,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 19:51:28
  19. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    0.010585929 = product of:
      0.021171859 = sum of:
        0.021171859 = product of:
          0.042343717 = sum of:
            0.042343717 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.042343717 = score(doc=690,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23. 3.2013 13:22:36
  20. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    0.010585929 = product of:
      0.021171859 = sum of:
        0.021171859 = product of:
          0.042343717 = sum of:
            0.042343717 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.042343717 = score(doc=2158,freq=2.0), product of:
                0.18240541 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052088603 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    4. 8.2015 19:22:04