Search (31 results, page 1 of 2)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Ellis, D.; Vasconcelos, A.: The relevance of facet analysis for World Wide Web subject organization and searching (2000) 0.03
    0.03461188 = product of:
      0.06922376 = sum of:
        0.06922376 = product of:
          0.13844752 = sum of:
            0.13844752 = weight(_text_:word in 2477) [ClassicSimilarity], result of:
              0.13844752 = score(doc=2477,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.49155584 = fieldWeight in 2477, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2477)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is a revised version of the earlier article by Ellis and Vasconcelos (1999) (see Not Relevant, below), though that is not indicated, and much of it is identical, word for word. There is a new section covering the work of Elizabeth Duncan, which is useful and informative, but the reader is better advised to go to the originals if available.
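The indented blocks throughout this listing are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking that produced each score. As a rough sanity check, the sketch below recomputes the score shown for result 1 from the constants in its trace; the tf and idf formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))) are the standard ClassicSimilarity definitions and are assumed here rather than read from the listing.

```python
import math

# Recompute the ClassicSimilarity explain tree for result 1 ("word" in doc 2477).
# All constants below are taken from the trace printed above.
freq       = 4.0         # termFreq of "word" in the matched field
doc_freq   = 634         # documents containing "word"
max_docs   = 44218       # documents in the index
query_norm = 0.05371688  # queryNorm (given)
field_norm = 0.046875    # fieldNorm stored for this field (given)

tf  = math.sqrt(freq)                            # 2.0
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~5.2432623

query_weight = idf * query_norm                  # ~0.28165168
field_weight = tf * idf * field_norm             # ~0.49155584
term_score   = query_weight * field_weight       # ~0.13844752

# Only one of the two query clauses matched, so coord(1/2) = 0.5
# is applied at both enclosing levels of the explain tree.
coord = 0.5
final_score = term_score * coord * coord         # ~0.03461188

print(round(final_score, 8))
```

The same arithmetic, with freq = 2.0 and the idf shown for the term "22", reproduces the scores of the other entries below.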
  2. Hill, J.S.: Online classification number access : some practical considerations (1984) 0.03
    0.029111583 = product of:
      0.058223166 = sum of:
        0.058223166 = product of:
          0.11644633 = sum of:
            0.11644633 = weight(_text_:22 in 7684) [ClassicSimilarity], result of:
              0.11644633 = score(doc=7684,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.61904186 = fieldWeight in 7684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=7684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of academic librarianship. 10(1984), S.17-22
  3. Poynder, R.: Web research engines? (1996) 0.02
    0.024474295 = product of:
      0.04894859 = sum of:
        0.04894859 = product of:
          0.09789718 = sum of:
            0.09789718 = weight(_text_:word in 5698) [ClassicSimilarity], result of:
              0.09789718 = score(doc=5698,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.34758246 = fieldWeight in 5698, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5698)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching, but few allow truncation, wildcards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that at best Web search engines can only offer free-text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may become redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases; relevance ranking and automatic query expansion still use the same simple inverted indexes, and most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
  4. Ellis, D.; Vasconcelos, A.: Ranganathan and the Net : using facet analysis to search and organise the World Wide Web (1999) 0.02
    0.024474295 = product of:
      0.04894859 = sum of:
        0.04894859 = product of:
          0.09789718 = sum of:
            0.09789718 = weight(_text_:word in 726) [ClassicSimilarity], result of:
              0.09789718 = score(doc=726,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.34758246 = fieldWeight in 726, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=726)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper documents the continuing relevance of facet analysis as a technique for searching and organising WWW-based materials. The two approaches underlying WWW searching and indexing - word-based and concept-based indexing - are outlined. It is argued that facet analysis, as an a posteriori approach to classification that uses words from the subject field as the concept terms in the derived classification, represents an excellent approach to searching and organising the results of WWW searches using either search engines or search directories. Finally, it is argued that the underlying philosophy of facet analysis is better suited to the disparate nature of WWW resources and searchers than the assumptions of contemporary IR research.
  5. Yu, N.: Readings & Web resources for faceted classification 0.02
    0.024474295 = product of:
      0.04894859 = sum of:
        0.04894859 = product of:
          0.09789718 = sum of:
            0.09789718 = weight(_text_:word in 4394) [ClassicSimilarity], result of:
              0.09789718 = score(doc=4394,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.34758246 = fieldWeight in 4394, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4394)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The term "facet" is used in many places, though in most cases it is just a buzzword for what is really an "aspect" or "category". The references below either define and explain the original concept of a facet or provide guidelines for building 'real' faceted search/browse. I was interested in faceted classification because it seems to be a natural and efficient way of organizing and browsing Web collections. However, automatically generating facets and their isolates is extremely difficult, since it involves concept extraction and concept grouping, both hard problems in themselves, and it is almost impossible to achieve mutually exclusive and jointly exhaustive 'true' facets without human judgment. Nowadays faceted search/browse exists widely, implicitly or explicitly, on a majority of retail websites owing to the multi-aspect nature of the data. However, it is still rarely seen on digital library sites. (I could be wrong, since I haven't kept up with this field for a while.)
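The faceted browse that this abstract notes is ubiquitous on retail sites reduces, at its core, to counting facet values over a result set. A minimal sketch of that operation follows, using invented records and facet names (none of them from any system cited here).

```python
from collections import Counter

# Hypothetical product records; the facet names and values are invented
# purely to illustrate facet-value counting, the core of faceted browse.
records = [
    {"brand": "Acme", "colour": "red",  "type": "kettle"},
    {"brand": "Acme", "colour": "blue", "type": "toaster"},
    {"brand": "Bolt", "colour": "red",  "type": "kettle"},
    {"brand": "Bolt", "colour": "red",  "type": "kettle"},
]

def facet_counts(items, facet_fields):
    """For each facet field, count how many items carry each value."""
    return {f: Counter(item[f] for item in items) for f in facet_fields}

print(facet_counts(records, ["brand", "colour", "type"]))
# {'brand': Counter({'Acme': 2, 'Bolt': 2}),
#  'colour': Counter({'red': 3, 'blue': 1}),
#  'type': Counter({'kettle': 3, 'toaster': 1})}
```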
  6. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.021833686 = product of:
      0.043667372 = sum of:
        0.043667372 = product of:
          0.087334745 = sum of:
            0.087334745 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.087334745 = score(doc=6040,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:42:47
  7. Satyapal, B.G.; Satyapal, N.S.: SATSAN AUTOMATRIX Version 1 : a computer programme for synthesis of Colon class number according to the postulational approach (2006) 0.02
    0.020395245 = product of:
      0.04079049 = sum of:
        0.04079049 = product of:
          0.08158098 = sum of:
            0.08158098 = weight(_text_:word in 1492) [ClassicSimilarity], result of:
              0.08158098 = score(doc=1492,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.28965205 = fieldWeight in 1492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1492)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the features and capabilities of the software SATSAN AUTOMATRIX version 1 for the semi-automatic synthesis of a Colon Class Number (CCN) for a given subject according to the Postulational Approach formulated by S.R. Ranganathan. The present Auto-Matrix version 1 gives the user more facilities to carry out facet analysis of a subject (simple, compound, or complex) preparatory to synthesizing the corresponding CCN. The software also enables searching for and using previously constructed class numbers automatically, as well as the maintenance and use of databases of the CC Index, facet formulae and CC schedules for subjects going with different Basic Subjects. The paper begins with a brief account of the authors' consultations with, and directions received from, Prof. A. Neelameghan in the course of developing the software. Oracle 8 and VB6 have been used in writing the programmes, but for operating SATSAN it is not necessary for users to be proficient in VB6 and Oracle 8; any computer-literate user with a basic knowledge of Microsoft Word will be able to use this application software.
  8. Comaromi, C.L.: Summation of classification as an enhancement of intellectual access to information in an online environment (1990) 0.02
    0.018194739 = product of:
      0.036389478 = sum of:
        0.036389478 = product of:
          0.072778955 = sum of:
            0.072778955 = weight(_text_:22 in 3576) [ClassicSimilarity], result of:
              0.072778955 = score(doc=3576,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.38690117 = fieldWeight in 3576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 1.2007 12:22:40
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.018194739 = product of:
      0.036389478 = sum of:
        0.036389478 = product of:
          0.072778955 = sum of:
            0.072778955 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.072778955 = score(doc=611,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  10. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.02
    0.01730594 = product of:
      0.03461188 = sum of:
        0.03461188 = product of:
          0.06922376 = sum of:
            0.06922376 = weight(_text_:word in 1202) [ClassicSimilarity], result of:
              0.06922376 = score(doc=1202,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.24577792 = fieldWeight in 1202, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as "Urban groups. The City. Urban sociology". Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
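A toy illustration of the retrieval gap Beagle describes: ten hypothetical titles standing in for his 100-title HT 101-395 sample, in roughly the proportions he reports (half use "urban", about a third use "city", none use both), searched with a naive title-keyword match.

```python
# Hypothetical miniature of Beagle's HT 101-395 shelf sample.
titles = [
    "Urban sociology today", "Urbanization and society", "The urban poor",
    "Urban planning handbook", "Readings in urban studies",
    "The city in history", "City life and culture", "Governing the city",
    "Town and community", "Skyscrapers and the modern metropolis",
]

def title_keyword_search(query: str) -> list[str]:
    """Naive title-keyword match, as an OPAC title search would behave."""
    return [t for t in titles if query.lower() in t.lower()]

urban_hits = title_keyword_search("urban")
city_hits  = title_keyword_search("city")

print(len(urban_hits), "of", len(titles), "titles retrieved by 'urban'")  # 5 of 10
print(len(city_hits), "of", len(titles), "titles retrieved by 'city'")    # 3 of 10
print("overlap:", set(urban_hits) & set(city_hits))                       # empty set
```

Neither search alone covers the subclass, and two titles ("Town and community", "Skyscrapers and the modern metropolis") are retrieved by neither query, which is exactly the non-retrieval problem the abstract raises.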
  11. Lösse, M.; Svensson, L.: "Classification at a Crossroad" : Internationales UDC-Seminar 2009 in Den Haag, Niederlande (2010) 0.02
    0.0154387485 = product of:
      0.030877497 = sum of:
        0.030877497 = product of:
          0.061754994 = sum of:
            0.061754994 = weight(_text_:22 in 4379) [ClassicSimilarity], result of:
              0.061754994 = score(doc=4379,freq=4.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.32829654 = fieldWeight in 4379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4379)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    On 29 and 30 October 2009 the second international UDC seminar, on the theme "Classification at a Crossroad", took place at the Royal Library (Koninklijke Bibliotheek) in The Hague. Like the first conference of this kind in 2007, it was organised by the UDC Consortium (UDCC). This year's event focused on the subject indexing of the World Wide Web through better use of classifications (in particular, of course, the UDC), including user-friendly representations of information and knowledge. Standards, new technologies and services, semantic search and multilingual access also played a role. 135 participants from 35 countries came to The Hague for the event. With 22 papers from 14 different countries, the programme covered a broad range, the United Kingdom being most strongly represented with five contributions. On both conference days the thematic focus was set by the opening lectures, which were then explored in greater depth in a total of six thematic sessions.
    Date
    22. 1.2010 15:06:54
  12. Doyle, B.: The classification and evaluation of Content Management Systems (2003) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
              0.058223166 = score(doc=2871,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 2871, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2871)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 7.2004 12:22:52
  13. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.058223166 = score(doc=4869,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:39:23
  14. Van Dijck, P.: Introduction to XFML (2003) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 2474) [ClassicSimilarity], result of:
              0.058223166 = score(doc=2474,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 2474, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2474)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    http://www.xml.com/lpt/a/2003/01/22/xfml.html
  15. Dack, D.: Australian attends conference on Dewey (1989) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
              0.050945267 = score(doc=2509,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 2509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.11.1995 11:52:22
  16. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 2342) [ClassicSimilarity], result of:
              0.050945267 = score(doc=2342,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 2342, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2342)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  17. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.050945267 = score(doc=57,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30.12.2001 16:22:41
  18. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.050945267 = score(doc=1673,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  19. Alex, H.; Heiner-Freiling, M.: Melvil (2005) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 4321) [ClassicSimilarity], result of:
              0.050945267 = score(doc=4321,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 4321, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4321)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Starting in January 2006, Die Deutsche Bibliothek will launch a new web offering called Melvil, a result of its commitment to the DDC and the DDC Deutsch project. The web service is based on the translation of the 22nd edition of the DDC, which appears in print from K. G. Saur Verlag in October 2005. It offers additional features that support classifiers in their work and, for the first time, enable verbal searching by end users across titles indexed with the DDC. The Melvil web service comprises three applications: MelvilClass, MelvilSearch and MelvilSoap.
  20. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.050945267 = score(doc=88,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22