Search (36 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.22
    0.22437611 = product of:
      0.29916814 = sum of:
        0.07029469 = product of:
          0.21088406 = sum of:
            0.21088406 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.21088406 = score(doc=562,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.21088406 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.21088406 = score(doc=562,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.017989364 = product of:
          0.035978727 = sum of:
            0.035978727 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.035978727 = score(doc=562,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
     See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
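
  The indented score breakdowns throughout this list are Lucene "explain" output from ClassicSimilarity, i.e. TF-IDF with query and field normalization. As a check on the arithmetic, here is a minimal Python sketch that reproduces the per-term numbers of the entry above; the constants are copied from that output, and everything else about the field and query is simplified away.

      import math

      # Reproduce the Lucene ClassicSimilarity arithmetic from entry 1.

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf = ln(maxDocs / (docFreq + 1)) + 1
          return math.log(max_docs / (doc_freq + 1)) + 1

      def tf(freq):
          # ClassicSimilarity: tf = sqrt(termFreq)
          return math.sqrt(freq)

      query_norm = 0.04425879   # queryNorm, copied from the output above
      field_norm = 0.046875     # fieldNorm(doc=562), encodes field length

      i = idf(24, 44218)                        # -> 8.478011
      query_weight = i * query_norm             # -> 0.3752265
      field_weight = tf(2.0) * i * field_norm   # -> 0.56201804
      term_score = query_weight * field_weight  # -> 0.21088406

      # A document's score sums such per-term scores and multiplies by
      # coord(matching clauses / total clauses), e.g. 0.75 = coord(3/4).
      print(term_score)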
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.14
    0.14058939 = product of:
      0.28117877 = sum of:
        0.07029469 = product of:
          0.21088406 = sum of:
            0.21088406 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21088406 = score(doc=862,freq=2.0), product of:
                0.3752265 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04425879 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.21088406 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.21088406 = score(doc=862,freq=2.0), product of:
            0.3752265 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04425879 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Source
     https://arxiv.org/abs/2212.06721
  3. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.09
    0.08859447 = product of:
      0.17718893 = sum of:
        0.15620133 = weight(_text_:assisted in 1361) [ClassicSimilarity], result of:
          0.15620133 = score(doc=1361,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.52244925 = fieldWeight in 1361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1361)
        0.020987593 = product of:
          0.041975185 = sum of:
            0.041975185 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
              0.041975185 = score(doc=1361,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.2708308 = fieldWeight in 1361, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1361)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     THESYS is based on the natural language processing of free-text databases. It yields statistically evaluated correlations between words of the database. These correlations correspond to traditional thesaurus relations. The person who has to build a thesaurus is thus assisted by the proposals made by THESYS. THESYS is being tested on commercial databases under real-world conditions. It is part of a text processing project at Siemens called TINA (Text-Inhalts-Analyse, "text content analysis"). Software from TINA is currently being applied and evaluated by the US Department of Commerce for patent search and indexing (REALIST: REtrieval Aids by Linguistics and STatistics).
    Date
    6. 1.1999 10:22:07
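
  The abstract above describes deriving thesaurus candidates from statistically evaluated word correlations. A minimal sketch of that general idea (not the THESYS algorithm): score sentence-level co-occurrences by pointwise mutual information and propose the top pairs to the thesaurus builder.

      import math
      from collections import Counter
      from itertools import combinations

      # Toy corpus standing in for a free-text database.
      sentences = [
          "thesaurus construction from free text".split(),
          "automatic thesaurus construction aids indexing".split(),
          "free text retrieval and indexing".split(),
      ]

      n = len(sentences)
      word_freq = Counter(w for s in sentences for w in set(s))
      pair_freq = Counter(p for s in sentences
                          for p in combinations(sorted(set(s)), 2))

      def pmi(a, b):
          # Pointwise mutual information over sentence co-occurrence.
          return math.log((pair_freq[(a, b)] / n) /
                          ((word_freq[a] / n) * (word_freq[b] / n)))

      # Highest-PMI pairs are proposed as candidate thesaurus relations.
      candidates = sorted(pair_freq, key=lambda p: -pmi(*p))
      print(candidates[:5])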
  4. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.06
    0.055225514 = product of:
      0.22090206 = sum of:
        0.22090206 = weight(_text_:assisted in 1139) [ClassicSimilarity], result of:
          0.22090206 = score(doc=1139,freq=4.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.7388549 = fieldWeight in 1139, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
      0.25 = coord(1/4)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing in the Project Gutenberg collection, through suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
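
  As an illustration of the machine-assisted indexing pipeline this abstract describes, here is a hedged sketch that uses a BERT-style sentence encoder to rank candidate subject headings against a document. The model name, the candidate headings, and the threshold are assumptions made for illustration; they are not the setup used by Chou and Chu.

      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

      # Hypothetical LCSH subset, pre-filtered by LCC subclass labels.
      lcsh_candidates = [
          "Natural language processing (Computer science)",
          "Subject headings",
          "Digital libraries",
      ]
      book_text = "An introduction to the parsing and semantics of English ..."

      doc_emb = model.encode(book_text, convert_to_tensor=True)
      head_embs = model.encode(lcsh_candidates, convert_to_tensor=True)
      scores = util.cos_sim(doc_emb, head_embs)[0]

      # Suggest headings above a similarity threshold for human review.
      for heading, score in zip(lcsh_candidates, scores):
          if float(score) > 0.3:  # threshold is an assumption
              print(f"{float(score):.2f}  {heading}")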
  5. Oard, D.W.; He, D.; Wang, J.: User-assisted query translation for interactive cross-language information retrieval (2008) 0.05
    0.047336154 = product of:
      0.18934461 = sum of:
        0.18934461 = weight(_text_:assisted in 2030) [ClassicSimilarity], result of:
          0.18934461 = score(doc=2030,freq=4.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.6333042 = fieldWeight in 2030, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2030)
      0.25 = coord(1/4)
    
    Abstract
    Interactive Cross-Language Information Retrieval (CLIR), a process in which searcher and system collaborate to find documents that satisfy an information need regardless of the language in which those documents are written, calls for designs in which synergies between searcher and system can be leveraged so that the strengths of one can cover weaknesses of the other. This paper describes an approach that employs user-assisted query translation to help searchers better understand the system's operation. Supporting interaction and interface designs are introduced, and results from three user studies are presented. The results indicate that experienced searchers presented with this new system evolve new search strategies that make effective use of the new capabilities, that they achieve retrieval effectiveness comparable to results obtained using fully automatic techniques, and that reported satisfaction with support for cross-language searching increased. The paper concludes with a description of a freely available interactive CLIR system that incorporates lessons learned from this research.
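
  The interaction pattern described above fits in a few lines: the system exposes candidate translations together with cues (such as back-translations), and the searcher keeps or drops each one. A minimal sketch; the bilingual lexicon is invented for illustration, and this is not the authors' system.

      # Tiny invented bilingual lexicon: term -> [(translation, cue), ...]
      lexicon = {
          "bank": [("banco", "financial bank"), ("orilla", "river bank")],
      }

      def translate_query(term):
          """Let the searcher keep or drop each candidate translation."""
          kept = []
          for translation, cue in lexicon.get(term, []):
              answer = input(f"Keep '{translation}' ({cue})? [y/n] ")
              if answer.strip().lower().startswith("y"):
                  kept.append(translation)
          return kept

      # e.g. translate_query("bank") -> the searcher drops "orilla"
      # for a finance query, so only relevant senses reach retrieval.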
  6. Gillaspie, L.: ¬The role of linguistic phenomena in retrieval performance (1995) 0.04
    0.04462895 = product of:
      0.1785158 = sum of:
        0.1785158 = weight(_text_:assisted in 3861) [ClassicSimilarity], result of:
          0.1785158 = score(doc=3861,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.5970849 = fieldWeight in 3861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0625 = fieldNorm(doc=3861)
      0.25 = coord(1/4)
    
    Abstract
     This progress report presents findings from a failure analysis of two commercial full-text computer-assisted legal research (CALR) systems. Linguistic analyses of unretrieved documents and false drops reveal a number of potential causes for performance problems in these databases, ranging from synonymy and homography to discourse-level cohesive relations. Examines and discusses examples of natural language phenomena that affect Boolean retrieval system performance.
  7. Armstrong, G.: Computer-assisted literary analysis using TACT, a text-retrieval program (1996) 0.04
    0.04462895 = product of:
      0.1785158 = sum of:
        0.1785158 = weight(_text_:assisted in 5690) [ClassicSimilarity], result of:
          0.1785158 = score(doc=5690,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.5970849 = fieldWeight in 5690, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.0625 = fieldNorm(doc=5690)
      0.25 = coord(1/4)
    
  8. Jaaranen, K.; Lehtola, A.; Tenni, J.; Bounsaythip, C.: Webtran tools for in-company language support (2000) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 5553) [ClassicSimilarity], result of:
          0.13388686 = score(doc=5553,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 5553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=5553)
      0.25 = coord(1/4)
    
    Abstract
     Webtran tools for authoring and translating domain-specific texts can make multilingual text production in a company more efficient and less expensive. The tools have been in production use since spring 2000 for checking and translating product article texts of a specific domain, namely an in-company language in sales catalogues of a mail-order company. Webtran tools have been developed by VTT Information Technology. Use experiences have shown that an automatic translation process is faster than phrase-lexicon-assisted manual translation, if an in-company language model is created to control and support the language used within the company.
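
  A minimal sketch of the controlled-language idea (not the Webtran implementation): longest-match lookup of domain phrases in an in-company lexicon, with unknown tokens passed through for manual attention. The entries are invented for illustration.

      # Invented in-company phrase lexicon for catalogue text.
      phrase_lexicon = {
          ("machine", "washable"): "maschinenwaschbar",
          ("100%", "cotton"): "100% Baumwolle",
          ("cotton",): "Baumwolle",
      }

      def translate(tokens):
          out, i = [], 0
          while i < len(tokens):
              # Try the longest phrase first, then shorter ones.
              for length in range(min(3, len(tokens) - i), 0, -1):
                  key = tuple(tokens[i:i + length])
                  if key in phrase_lexicon:
                      out.append(phrase_lexicon[key])
                      i += length
                      break
              else:
                  out.append(tokens[i])  # unknown token: pass through
                  i += 1
          return " ".join(out)

      print(translate("machine washable 100% cotton".split()))
      # -> "maschinenwaschbar 100% Baumwolle"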
  9. Anguiano Peña, G.; Naumis Peña, C.: Method for selecting specialized terms from a general language corpus (2015) 0.03
    0.033471715 = product of:
      0.13388686 = sum of:
        0.13388686 = weight(_text_:assisted in 2196) [ClassicSimilarity], result of:
          0.13388686 = score(doc=2196,freq=2.0), product of:
            0.29897895 = queryWeight, product of:
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.04425879 = queryNorm
            0.44781366 = fieldWeight in 2196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7552447 = idf(docFreq=139, maxDocs=44218)
              0.046875 = fieldNorm(doc=2196)
      0.25 = coord(1/4)
    
    Abstract
     Among the many aspects studied by library and information science are linguistic phenomena associated with document content analysis, for purposes of both information organization and retrieval. To this end, terms used in scientific and technical language must be recovered and their domain of use and behavior studied. Through language, society controls the knowledge available to people. Document content analysis, in this case of scientific texts, facilitates gathering knowledge of lexical units and their major applications and separating such specialized terms from the general language, to create indexing languages. The model presented here, or other lexicographic resources with similar characteristics, may be useful in the near future in computer-assisted indexing or as corpus monitors for new text analyses or specialized corpora. Thus, using the techniques proposed herein for document content analysis of a lexicographically labeled general language corpus, components that enable the extraction of lexical units from specialized language may be obtained and characterized.
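
  One standard way to separate specialized terms from general language, in the spirit of the abstract above (the paper's own method is tied to its lexicographically labeled corpus), is to rank words by their relative frequency in a specialized corpus against a general reference corpus, sometimes called the "weirdness" ratio. A minimal sketch with toy counts:

      from collections import Counter

      # Toy corpora standing in for a specialized and a general corpus.
      special = Counter("parsing corpus lemma corpus parsing treebank".split())
      general = Counter("the of corpus and to in parsing the".split())

      n_special = sum(special.values())
      n_general = sum(general.values())

      def weirdness(word):
          # Relative frequency in the specialized corpus divided by the
          # (smoothed) relative frequency in the general corpus.
          rel_s = special[word] / n_special
          rel_g = (general[word] + 1) / (n_general + 1)
          return rel_s / rel_g

      # Candidate specialized terms, most domain-specific first.
      print(sorted(special, key=weirdness, reverse=True))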
  10. Kreymer, O.: ¬An evaluation of help mechanisms in natural language information retrieval systems (2002) 0.01
    0.012917642 = product of:
      0.051670566 = sum of:
        0.051670566 = product of:
          0.10334113 = sum of:
            0.10334113 = weight(_text_:instruction in 2557) [ClassicSimilarity], result of:
              0.10334113 = score(doc=2557,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.39342776 = fieldWeight in 2557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2557)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     The field of natural language processing (NLP) demonstrates rapid changes in the design of information retrieval systems and human-computer interaction. While natural language is looked upon as the most effective tool for information retrieval in a contemporary information environment, the systems using it are only beginning to emerge. This study attempts to evaluate the current state of NLP information retrieval systems from the user's point of view: what techniques are used by these systems to guide their users through the search process? The analysis focused on the structure and components of the systems' help mechanisms. Results of the study demonstrated that systems which claimed to be using natural language searching in fact used a wide range of information retrieval techniques, from real natural language processing to Boolean searching. As a result, the user assistance mechanisms of these systems also varied. While pseudo-NLP systems would suit a more traditional method of instruction, real NLP systems primarily utilised the methods of explanation and user-system dialogue.
  11. Warner, A.J.: Natural language processing (1987) 0.01
    0.01199291 = product of:
      0.04797164 = sum of:
        0.04797164 = product of:
          0.09594328 = sum of:
            0.09594328 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.09594328 = score(doc=337,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  12. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.010493796 = product of:
      0.041975185 = sum of:
        0.041975185 = product of:
          0.08395037 = sum of:
            0.08395037 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.08395037 = score(doc=3164,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  13. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.010493796 = product of:
      0.041975185 = sum of:
        0.041975185 = product of:
          0.08395037 = sum of:
            0.08395037 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.08395037 = score(doc=4506,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    8.10.2000 11:52:22
  14. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.010493796 = product of:
      0.041975185 = sum of:
        0.041975185 = product of:
          0.08395037 = sum of:
            0.08395037 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.08395037 = score(doc=6672,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.01
    0.010493796 = product of:
      0.041975185 = sum of:
        0.041975185 = product of:
          0.08395037 = sum of:
            0.08395037 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.08395037 = score(doc=3117,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    28. 2.1999 10:48:22
  16. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    0.008994682 = product of:
      0.035978727 = sum of:
        0.035978727 = product of:
          0.071957454 = sum of:
            0.071957454 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.071957454 = score(doc=4483,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    15. 3.2000 10:22:37
  17. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.01
    0.007495569 = product of:
      0.029982276 = sum of:
        0.029982276 = product of:
          0.059964553 = sum of:
            0.059964553 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.059964553 = score(doc=1463,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  18. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    0.005996455 = product of:
      0.02398582 = sum of:
        0.02398582 = product of:
          0.04797164 = sum of:
            0.04797164 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.04797164 = score(doc=8521,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  19. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.01
    0.005996455 = product of:
      0.02398582 = sum of:
        0.02398582 = product of:
          0.04797164 = sum of:
            0.04797164 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.04797164 = score(doc=6752,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 3.1997 16:22:15
  20. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.01
    0.005996455 = product of:
      0.02398582 = sum of:
        0.02398582 = product of:
          0.04797164 = sum of:
            0.04797164 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
              0.04797164 = score(doc=6753,freq=2.0), product of:
                0.15498674 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04425879 = queryNorm
                0.30952093 = fieldWeight in 6753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6753)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    6. 3.1997 16:22:15