Search (24 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Abstracting"
  1. Goh, A.; Hui, S.C.: TES: a text extraction system (1996) 0.08
    0.07970848 = product of:
      0.15941696 = sum of:
        0.05488808 = weight(_text_:26 in 6599) [ClassicSimilarity], result of:
          0.05488808 = score(doc=6599,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.31214553 = fieldWeight in 6599, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0625 = fieldNorm(doc=6599)
        0.104528874 = sum of:
          0.050559945 = weight(_text_:access in 6599) [ClassicSimilarity], result of:
            0.050559945 = score(doc=6599,freq=2.0), product of:
              0.16876608 = queryWeight, product of:
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.04979191 = queryNorm
              0.29958594 = fieldWeight in 6599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.389428 = idf(docFreq=4053, maxDocs=44218)
                0.0625 = fieldNorm(doc=6599)
          0.05396893 = weight(_text_:22 in 6599) [ClassicSimilarity], result of:
            0.05396893 = score(doc=6599,freq=2.0), product of:
              0.17436278 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04979191 = queryNorm
              0.30952093 = fieldWeight in 6599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6599)
      0.5 = coord(2/4)
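    These explain trees are Lucene ClassicSimilarity (TF-IDF) breakdowns: each leaf score is queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm, and coord(m/n) scales the summed term scores by the fraction of query clauses matched. A minimal Python sketch reproducing the "_text_:26" leaf above from the values the tree itself reports (function and argument names are mine):

    ```python
    import math

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm                     # 0.17584132
        field_weight = math.sqrt(freq) * idf * field_norm   # 0.31214553
        return query_weight * field_weight

    s = term_score(freq=2.0, doc_freq=3516, max_docs=44218,
                   query_norm=0.04979191, field_norm=0.0625)
    print(f"{s:.8f}")  # ~0.05488808, matching the leaf up to float precision
    ```

    The two term scores are then summed and scaled by coord(2/4) = 0.5: 0.15941696 * 0.5 = 0.07970848, the document score shown at the top of the tree.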
    
    Abstract
    With the onset of the information explosion arising from digital libraries and access to a wealth of information through the Internet, the need to efficiently determine the relevance of a document becomes even more urgent. Describes a text extraction system (TES), which retrieves a set of sentences from a document to form an indicative abstract. Such an automated process enables information to be filtered more quickly. Discusses the combination of various text extraction techniques. Compares results with manually produced abstracts.
    Date
    26. 2.1997 10:22:43
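    The TES entry above describes the generic recipe of extractive abstracting: score sentences, keep the best ones, present them in document order. A minimal sketch of that idea using plain term-frequency scoring - an illustration only, not the combination of techniques the paper actually compares:

    ```python
    import re
    from collections import Counter

    def indicative_abstract(text, n_sentences=3):
        # Score each sentence by the mean document-level frequency of its
        # words; return the top scorers in their original order.
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        tf = Counter(re.findall(r'[a-z]+', text.lower()))
        def score(s):
            words = re.findall(r'[a-z]+', s.lower())
            return sum(tf[w] for w in words) / len(words) if words else 0.0
        top = sorted(range(len(sentences)),
                     key=lambda i: -score(sentences[i]))[:n_sentences]
        return ' '.join(sentences[i] for i in sorted(top))
    ```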
  2. Uyttendaele, C.; Moens, M.-F.; Dumortier, J.: SALOMON: automatic abstracting of legal cases for effective access to court decisions (1998) 0.04
    0.03965472 = product of:
      0.07930944 = sum of:
        0.048027072 = weight(_text_:26 in 495) [ClassicSimilarity], result of:
          0.048027072 = score(doc=495,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.27312735 = fieldWeight in 495, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=495)
        0.03128237 = product of:
          0.06256474 = sum of:
            0.06256474 = weight(_text_:access in 495) [ClassicSimilarity], result of:
              0.06256474 = score(doc=495,freq=4.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.3707187 = fieldWeight in 495, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=495)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The SALOMON project summarises Belgian criminal cases in order to improve access to the large number of existing and future cases. A double methodology was used when developing SALOMON: the cases are processed by employing additional knowledge to interpret structural patterns and features on the one hand and by way of occurrence statistics of index terms on the other. SALOMON performs an initial categorisation and structuring of the cases and subsequently extracts the most relevant text units of the alleged offences and of the opinion of the court. The SALOMON techniques do not themselves solve any legal questions, but they do guide the user effectively towards relevant texts.
    Date
    26. 4.2000 18:41:43
  3. McKeown, K.; Robin, J.; Kukich, K.: Generating concise natural language summaries (1995) 0.03
    0.029729806 = product of:
      0.11891922 = sum of:
        0.11891922 = weight(_text_:description in 2932) [ClassicSimilarity], result of:
          0.11891922 = score(doc=2932,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.5136877 = fieldWeight in 2932, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.078125 = fieldNorm(doc=2932)
      0.25 = coord(1/4)
    
    Abstract
    Description of the problems for summary generation, the applications developed (STREAK, for basketball games, and PLANDOC, for telephone network planning activity), the linguistic constructions that the systems use to convey information concisely, and the textual constraints that determine what information gets included.
  4. Endres-Niggemeyer, B.: SimSum : an empirically founded simulation of summarizing (2000) 0.02
    0.024013536 = product of:
      0.096054144 = sum of:
        0.096054144 = weight(_text_:26 in 3343) [ClassicSimilarity], result of:
          0.096054144 = score(doc=3343,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.5462547 = fieldWeight in 3343, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.109375 = fieldNorm(doc=3343)
      0.25 = coord(1/4)
    
    Date
    15. 8.2002 18:26:20
  5. Brandow, R.; Mitze, K.; Rau, L.F.: Automatic condensation of electronic publications by sentence selection (1995) 0.02
    0.023783846 = product of:
      0.09513538 = sum of:
        0.09513538 = weight(_text_:description in 2929) [ClassicSimilarity], result of:
          0.09513538 = score(doc=2929,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.41095015 = fieldWeight in 2929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.0625 = fieldNorm(doc=2929)
      0.25 = coord(1/4)
    
    Abstract
    Description of a system that performs domain-independent automatic condensation of news from a large commercial news service encompassing 41 different publications. This system was evaluated against a system that condensed the same articles using only the first portions of the texts (the lead), up to the target length of the summaries. 3 lengths of articles were evaluated for 250 documents by both systems, totalling 1,500 suitability judgements in all. The lead-based summaries outperformed the 'intelligent' summaries significantly, achieving acceptability ratings of over 90%, compared to 74.7%.
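    The winning "lead" baseline is trivial to implement, which is part of the paper's point; a sketch (the word-based length cutoff is an assumption):

    ```python
    def lead_summary(sentences, target_words):
        # Take sentences from the top of the article until the target
        # summary length is reached.
        out, used = [], 0
        for s in sentences:
            n = len(s.split())
            if out and used + n > target_words:
                break
            out.append(s)
            used += n
        return ' '.join(out)
    ```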
  6. Xu, D.; Cheng, G.; Qu, Y.: Preferences in Wikipedia abstracts : empirical findings and implications for automatic entity summarization (2014) 0.02
    0.017837884 = product of:
      0.071351536 = sum of:
        0.071351536 = weight(_text_:description in 2700) [ClassicSimilarity], result of:
          0.071351536 = score(doc=2700,freq=2.0), product of:
            0.23150103 = queryWeight, product of:
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.04979191 = queryNorm
            0.3082126 = fieldWeight in 2700, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.64937 = idf(docFreq=1149, maxDocs=44218)
              0.046875 = fieldNorm(doc=2700)
      0.25 = coord(1/4)
    
    Abstract
    The volume of entity-centric structured data grows rapidly on the Web. The description of an entity, composed of property-value pairs (a.k.a. features), has become very large in many applications. To avoid information overload, efforts have been made to automatically select a limited number of features to be shown to the user based on certain criteria, which is called automatic entity summarization. However, to the best of our knowledge, there is a lack of extensive studies on how humans rank and select features in practice, which can provide empirical support and inspire future research. In this article, we present a large-scale statistical analysis of the descriptions of entities provided by DBpedia and the abstracts of their corresponding Wikipedia articles, to empirically study, along several different dimensions, which kinds of features are preferable when humans summarize. Implications for automatic entity summarization are drawn from the findings.
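    Entity summarization as defined here reduces to ranking an entity's property-value pairs and keeping the top k. A deliberately naive sketch in which the ranking criterion - a per-property preference weight - is a placeholder for the human preferences the article measures:

    ```python
    def summarize_entity(features, weights, k=5):
        # features: (property, value) pairs describing one entity.
        # weights:  property -> preference weight, e.g. estimated from how
        #           often humans keep that property in abstracts.
        return sorted(features, key=lambda pv: -weights.get(pv[0], 0.0))[:k]

    facts = [("birthPlace", "Ulm"), ("knownFor", "relativity"),
             ("award", "Nobel Prize in Physics")]
    print(summarize_entity(facts, {"knownFor": 0.9, "award": 0.7}, k=2))
    ```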
  7. Ercan, G.; Cicekli, I.: Using lexical chains for keyword extraction (2007) 0.01
    0.012006768 = product of:
      0.048027072 = sum of:
        0.048027072 = weight(_text_:26 in 951) [ClassicSimilarity], result of:
          0.048027072 = score(doc=951,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.27312735 = fieldWeight in 951, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=951)
      0.25 = coord(1/4)
    
    Date
    26.12.2007 16:26:11
  8. Endres-Niggemeyer, B.: Summarizing information (1998) 0.01
    0.010291515 = product of:
      0.04116606 = sum of:
        0.04116606 = weight(_text_:26 in 688) [ClassicSimilarity], result of:
          0.04116606 = score(doc=688,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.23410915 = fieldWeight in 688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.046875 = fieldNorm(doc=688)
      0.25 = coord(1/4)
    
    Date
    26. 5.1996 11:11:10
  9. Chen, H.-H.; Kuo, J.-J.; Huang, S.-J.; Lin, C.-J.; Wung, H.-C.: A summarization system for Chinese news from multiple sources (2003) 0.01
    0.010291515 = product of:
      0.04116606 = sum of:
        0.04116606 = weight(_text_:26 in 2115) [ClassicSimilarity], result of:
          0.04116606 = score(doc=2115,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.23410915 = fieldWeight in 2115, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.046875 = fieldNorm(doc=2115)
      0.25 = coord(1/4)
    
    Date
    24. 1.2004 18:26:52
  10. Jones, S.; Paynter, G.W.: Automatic extraction of document keyphrases for use in digital libraries : evaluations and applications (2002) 0.01
    0.008576263 = product of:
      0.03430505 = sum of:
        0.03430505 = weight(_text_:26 in 601) [ClassicSimilarity], result of:
          0.03430505 = score(doc=601,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.19509095 = fieldWeight in 601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=601)
      0.25 = coord(1/4)
    
    Date
    26. 5.2002 15:32:08
  11. Wei, F.; Li, W.; Lu, Q.; He, Y.: Applying two-level reinforcement ranking in query-oriented multidocument summarization (2009) 0.01
    0.008576263 = product of:
      0.03430505 = sum of:
        0.03430505 = weight(_text_:26 in 3120) [ClassicSimilarity], result of:
          0.03430505 = score(doc=3120,freq=2.0), product of:
            0.17584132 = queryWeight, product of:
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.04979191 = queryNorm
            0.19509095 = fieldWeight in 3120, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5315237 = idf(docFreq=3516, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3120)
      0.25 = coord(1/4)
    
    Date
    26. 9.2009 11:16:24
  12. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.01
    0.006746116 = product of:
      0.026984464 = sum of:
        0.026984464 = product of:
          0.05396893 = sum of:
            0.05396893 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
              0.05396893 = score(doc=6751,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.30952093 = fieldWeight in 6751, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6751)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 3.1997 16:22:15
  13. Jones, P.A.; Bradbeer, P.V.G.: Discovery of optimal weights in a concept selection system (1996) 0.01
    0.006746116 = product of:
      0.026984464 = sum of:
        0.026984464 = product of:
          0.05396893 = sum of:
            0.05396893 = weight(_text_:22 in 6974) [ClassicSimilarity], result of:
              0.05396893 = score(doc=6974,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.30952093 = fieldWeight in 6974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6974)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  14. Plaza, L.; Stevenson, M.; Díaz, A.: Resolving ambiguity in biomedical text to improve summarization (2012) 0.01
    0.005529994 = product of:
      0.022119977 = sum of:
        0.022119977 = product of:
          0.044239953 = sum of:
            0.044239953 = weight(_text_:access in 2734) [ClassicSimilarity], result of:
              0.044239953 = score(doc=2734,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.2621377 = fieldWeight in 2734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2734)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Access to the vast body of research literature that is now available on biomedicine and related fields can be improved with automatic summarization. This paper describes a summarization system for the biomedical domain that represents documents as graphs formed from concepts and relations in the UMLS Metathesaurus. This system has to deal with the ambiguities that occur in biomedical documents. We describe a variety of strategies that make use of MetaMap and Word Sense Disambiguation (WSD) to accurately map biomedical documents onto UMLS Metathesaurus concepts. Evaluation is carried out using a collection of 150 biomedical scientific articles from the BioMed Central corpus. We find that using WSD improves the quality of the summaries generated.
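    Representing documents as concept graphs turns sentence selection into node ranking. A minimal sketch using degree centrality over concept co-occurrence edges - the actual system builds its graph from UMLS concepts and relations and is considerably richer:

    ```python
    from collections import defaultdict

    def rank_concepts(edges):
        # edges: (concept, concept) pairs, e.g. concepts co-occurring in a
        # sentence. Returns concepts ordered by degree centrality.
        degree = defaultdict(int)
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1
        return sorted(degree, key=degree.get, reverse=True)

    # Sentences covering the top-ranked concepts would be kept for the summary.
    ```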
  15. Vanderwende, L.; Suzuki, H.; Brockett, J.M.; Nenkova, A.: Beyond SumBasic : task-focused summarization with sentence simplification and lexical expansion (2007) 0.01
    0.005059587 = product of:
      0.020238347 = sum of:
        0.020238347 = product of:
          0.040476695 = sum of:
            0.040476695 = weight(_text_:22 in 948) [ClassicSimilarity], result of:
              0.040476695 = score(doc=948,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.23214069 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=948)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems.
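    The generic extractive component builds on SumBasic, whose core loop is short enough to sketch. This follows the published SumBasic procedure (word-probability scoring, squaring the probabilities of covered words), not the task-focused extensions this paper adds; tokenization is deliberately crude:

    ```python
    from collections import Counter

    def sumbasic(sentences, max_words=100):
        tokenized = [s.lower().split() for s in sentences]
        counts = Counter(w for toks in tokenized for w in toks)
        total = sum(counts.values())
        p = {w: c / total for w, c in counts.items()}

        def avg_prob(i):
            toks = tokenized[i]
            return sum(p[w] for w in toks) / len(toks) if toks else 0.0

        summary, used = [], 0
        remaining = set(range(len(sentences)))
        while remaining and used < max_words:
            best_word = max(p, key=p.get)
            candidates = [i for i in remaining
                          if best_word in tokenized[i]] or list(remaining)
            pick = max(candidates, key=avg_prob)
            summary.append(pick)
            used += len(tokenized[pick])
            remaining.remove(pick)
            for w in set(tokenized[pick]):
                p[w] **= 2  # discount words the summary already covers
        return ' '.join(sentences[i] for i in sorted(summary))
    ```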
  16. Moens, M.-F.; Uyttendaele, C.: Automatic text structuring and categorization as a first step in summarizing legal cases (1997) 0.00
    0.0047399946 = product of:
      0.018959979 = sum of:
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 2256) [ClassicSimilarity], result of:
              0.037919957 = score(doc=2256,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 2256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2256)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The SALOMON system automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts relevant text units from the case text to form a case summary. Such a case profile facilitates the rapid determination of the relevance of the case or may be employed in text search. In a first important abstracting step SALOMON performs an initial categorization of legal criminal cases and structures the case text into separate legally relevant and irrelevant components. A text grammar represented as a semantic network is used to automatically determine the category of the case and its components. It extracts general data from the case and identifies text portions relevant for further abstracting. Prior knowledge of the text structure and its indicative cues may support automatic abstracting. A text grammar is a promising form for representing the knowledge involved.
  17. Moens, M.-F.; Uyttendaele, C.; Dumotier, J.: Abstracting of legal cases : the potential of clustering based on the selection of representative objects (1999) 0.00
    0.0047399946 = product of:
      0.018959979 = sum of:
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 2944) [ClassicSimilarity], result of:
              0.037919957 = score(doc=2944,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 2944, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2944)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The SALOMON project automatically summarizes Belgian criminal cases in order to improve access to the large number of existing and future court decisions. SALOMON extracts text units from the case text to form a case summary. Such a case summary facilitates the rapid determination of the relevance of the case or may be employed in text search. An important part of the research concerns the development of techniques for automatic recognition of representative text paragraphs (or sentences) in texts of unrestricted domains. These techniques are employed to eliminate redundant material in the case texts, and to identify informative text paragraphs which are relevant to include in the case summary. An evaluation of a test set of 700 criminal cases demonstrates that the algorithms have an application potential for automatic indexing, abstracting, and text linkage.
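    "Clustering based on the selection of representative objects" points at medoid methods in the PAM family. A greedy sketch of the selection step over an arbitrary distance function - an illustration of the technique family, not SALOMON's implementation:

    ```python
    def select_representatives(items, k, dist):
        # Greedy medoid selection (cf. PAM's BUILD phase): repeatedly add
        # the item that most reduces the total distance from every item to
        # its nearest chosen representative.
        chosen = []
        for _ in range(k):
            def cost(c):
                return sum(min(dist(x, m) for m in chosen + [c]) for x in items)
            best = min((c for c in items if c not in chosen), key=cost)
            chosen.append(best)
        return chosen

    # e.g. items = paragraph term-vectors, dist = cosine distance; chosen
    # paragraphs stand in for clusters of redundant material.
    ```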
  18. Wang, W.; Hwang, D.: Abstraction Assistant : an automatic text abstraction system (2010) 0.00
    0.0047399946 = product of:
      0.018959979 = sum of:
        0.018959979 = product of:
          0.037919957 = sum of:
            0.037919957 = weight(_text_:access in 3981) [ClassicSimilarity], result of:
              0.037919957 = score(doc=3981,freq=2.0), product of:
                0.16876608 = queryWeight, product of:
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.04979191 = queryNorm
                0.22468945 = fieldWeight in 3981, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.389428 = idf(docFreq=4053, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3981)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In the interest of standardization and quality assurance, it is desirable for authors and staff of access services to follow the American National Standards Institute (ANSI) guidelines in preparing abstracts. Using the statistical approach, an extraction system (the Abstraction Assistant) was developed to generate informative abstracts to meet the ANSI guidelines for structural content elements. The system performance is evaluated by comparing the system-generated abstracts with the author's original abstracts and the manually enhanced system abstracts on three criteria: balance (satisfaction of the ANSI standards), fluency (text coherence), and understandability (clarity). The results suggest that it is possible to use the system output directly without manual modification, but there are issues that need to be addressed in further studies to make the system a better tool.
  19. Wu, Y.-f.B.; Li, Q.; Bot, R.S.; Chen, X.: Finding nuggets in documents : a machine learning approach (2006) 0.00
    0.0042163227 = product of:
      0.01686529 = sum of:
        0.01686529 = product of:
          0.03373058 = sum of:
            0.03373058 = weight(_text_:22 in 5290) [ClassicSimilarity], result of:
              0.03373058 = score(doc=5290,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.19345059 = fieldWeight in 5290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5290)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 17:25:48
  20. Kim, H.H.; Kim, Y.H.: Generic speech summarization of transcribed lecture videos : using tags and their semantic relations (2016) 0.00
    0.0042163227 = product of:
      0.01686529 = sum of:
        0.01686529 = product of:
          0.03373058 = sum of:
            0.03373058 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.03373058 = score(doc=2640,freq=2.0), product of:
                0.17436278 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04979191 = queryNorm
                0.19345059 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2016 12:29:41