Search (43 results, page 2 of 3)

  • × theme_ss:"Retrievalstudien"
  • × year_i:[1990 TO 2000}
  1. Voorbij, H.: ¬Een goede titel behoeft geen trefwoord, of toch wel? : een vergelijkend onderzoek titelwoorden - trefwoorden (1997) 0.01
    0.009384007 = product of:
      0.02815202 = sum of:
        0.02815202 = product of:
          0.05630404 = sum of:
            0.05630404 = weight(_text_:indexing in 1446) [ClassicSimilarity], result of:
              0.05630404 = score(doc=1446,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29604656 = fieldWeight in 1446, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1446)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A recent survey at the Royal Library in the Netherlands showed that subject headings are more efficient than title keywords for retrieval purposes. 475 Dutch publications were selected at random and assigned subject headings. The study showed that subject headings provided additional useful information in 56% of titles. Subsequent searching of the library's online catalogue showed that 88% of titles were retrieved via subject headings against 57% through title keywords. Further precision may be achieved with the help of indexing staff, but at considerable cost
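    The score tree above is Lucene ClassicSimilarity "explain" output. As a cross-check, its arithmetic can be reproduced in a few lines (a sketch: the constants are read off the listing for the term "indexing" in doc 1446, and the idf formula is Lucene's classic `1 + ln(maxDocs/(docFreq+1))`):

```python
import math

# Reproducing the ClassicSimilarity explain tree for the term "indexing"
# in doc 1446. Constants (queryNorm, fieldNorm, coord factors) are taken
# from the listing; tf and idf follow Lucene's classic tf-idf formulas.
doc_freq, max_docs = 2614, 44218
idf = 1 + math.log(max_docs / (doc_freq + 1))   # ≈ 3.8278677
query_norm = 0.049684696                        # normalises across the query
query_weight = idf * query_norm                 # ≈ 0.19018644

tf = math.sqrt(2.0)                             # termFreq = 2.0 → 1.4142135
field_norm = 0.0546875                          # stored per-field length norm
field_weight = tf * idf * field_norm            # ≈ 0.29604656

term_score = query_weight * field_weight        # ≈ 0.05630404
score = term_score * 0.5 * (1 / 3)              # coord(1/2) * coord(1/3)
# score ≈ 0.009384007, matching the top of the explain tree
```

    The same computation, with the same queryNorm and idf, accounts for every "indexing" entry below; only fieldNorm (and hence fieldWeight) varies per document.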
  2. Munoz, A.M.; Munoz, F.A.: Nuevas areas de conocimiento y la problematica documental : la prospectiva de la paz en la Universidad de Granada (1997) 0.01
    0.009384007 = product of:
      0.02815202 = sum of:
        0.02815202 = product of:
          0.05630404 = sum of:
            0.05630404 = weight(_text_:indexing in 340) [ClassicSimilarity], result of:
              0.05630404 = score(doc=340,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.29604656 = fieldWeight in 340, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=340)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Report of a study, from the user's point of view, investigating the facility with which bibliographical material can be identified in a multidisciplinary field, peace prospective studies, from the University's resources. Searches (uniterm and relational) were effected using all available tools - OPACs, CD-ROM collections, online databases, manual catalogues, the Internet - both on the University's system and at national research institutions. Overall results returned a low rate of pertinence (1.86%). This is due not to a lack of user search expertise but to the lack of subject-specific indexing, coupled with the use of the MARC format
  3. Sanderson, M.: ¬The Reuters test collection (1996) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.053852726 = score(doc=6971,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.053852726 = score(doc=744,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:01:00
  5. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.053852726 = score(doc=3087,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  6. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.053852726 = score(doc=3572,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.3, S.24-26,28
  7. Ellis, D.: Progress and problems in information retrieval (1996) 0.01
    0.008975455 = product of:
      0.026926363 = sum of:
        0.026926363 = product of:
          0.053852726 = sum of:
            0.053852726 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
              0.053852726 = score(doc=789,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.30952093 = fieldWeight in 789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=789)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    26. 7.2002 20:22:46
  8. Tibbo, H.R.: ¬The epic struggle : subject retrieval from large bibliographic databases (1994) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 2179) [ClassicSimilarity], result of:
              0.048260607 = score(doc=2179,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 2179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2179)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses a retrieval study that focused on collection-level archival records in the OCLC OLUC, made accessible through the EPIC online search system. Data were also collected from the local OPAC at the University of North Carolina at Chapel Hill (UNC-CH), in which UNC-CH produced OCLC records are loaded. The chief objective was to explore the retrieval environments in which a random sample of USMARC AMC records produced at UNC-CH were found: specifically, to obtain a picture of the density of these databases in regard to each subject heading applied and, more generally, for each record. Key questions were: how many records would be retrieved for each subject heading attached to each of the records; and what was the nature of these subject headings vis-à-vis the number of hits associated with them. Results show that large retrieval sets are a potential problem with national bibliographic utilities and that the local and national retrieval environments can vary greatly. The need for specificity in indexing is emphasized
  9. Hersh, W.; Pentecost, J.; Hickam, D.: ¬A task-oriented approach to information retrieval evaluation : overview and design for empirical testing (1996) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 3001) [ClassicSimilarity], result of:
              0.048260607 = score(doc=3001,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 3001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3001)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    As retrieval systems become more oriented towards end-users, there is an increasing need for improved methods to evaluate their effectiveness. We performed a task-oriented assessment of 2 MEDLINE searching systems, one which promotes traditional Boolean searching on human-indexed thesaurus terms and the other natural language searching on words in the title, abstract, and indexing terms. Medical students were randomized to one of the 2 systems and given clinical questions to answer. The students were able to use each system successfully, with no significant differences in questions correctly answered, time taken, relevant articles retrieved, or user satisfaction between the systems. This approach to evaluation was successful in measuring effectiveness of system use and demonstrates that both types of systems can be used equally well with minimal training
  10. Hersh, W.R.; Pentecost, J.; Hickam, D.H.: ¬A task-oriented approach to retrieval system evaluation (1995) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 3867) [ClassicSimilarity], result of:
              0.048260607 = score(doc=3867,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 3867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3867)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    There is a need for improved methods to evaluate the effectiveness of end user information retrieval systems. Performs a task oriented assessment of 2 MEDLINE searching systems, one which promotes Boolean searching on human indexed thesaurus terms and the other natural language searching on words in the title, abstract, and indexing terms. Each was used by medical students to answer clinical questions. Students were able to use each system successfully, with no significant differences in questions correctly answered, time taken, relevant articles retrieved, or user satisfaction between the systems. This approach to evaluation was successful in measuring effectiveness of system use and demonstrates that both types of systems can be used equally well with minimal training
  11. Cavanagh, A.K.: ¬A comparison of the retrieval performance of multi-disciplinary table-of-contents databases with conventional specialised databases (1997) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 770) [ClassicSimilarity], result of:
              0.048260607 = score(doc=770,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 770, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=770)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In an endeavour to compare retrieval performance and periodical overlap in a biological field, the same topic was searched on 5 table-of-contents (ToC) databases and 3 specialised biological databases. Performance was assessed in terms of precision and recall. The ToC databases in general had higher precision, in that most material found was relevant. They were less satisfactory in recall, where some located fewer than 50% of identified high-relevance articles. Subject-specific databases had overall better recall but lower precision, with many more false drops and items of low relevance occurring. These differences were associated with variations in indexing practice and policy and in the searching capabilities of the various databases. In a further comparison, it was found that the electronic databases, as a group, identified only 75% of the articles known from an independent source to have been published in the field
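    Several of the studies above (Cavanagh; Voorbij) report results in terms of precision and recall. The two measures reduce to simple set arithmetic (a generic sketch; the item sets are hypothetical, not data from any of the studies listed):

```python
# Precision and recall over one search, as used to assess the databases
# above. "retrieved" and "relevant" are hypothetical illustration sets.
retrieved = {"a1", "a2", "a3", "a4"}         # items the search returned
relevant = {"a1", "a2", "a5", "a6", "a7"}    # items judged relevant

hits = retrieved & relevant                  # relevant items actually found
precision = len(hits) / len(retrieved)       # fraction of output that is relevant
recall = len(hits) / len(relevant)           # fraction of relevant items found
# here: precision = 0.5, recall = 0.4
```

    A high-precision ToC database maximises the first ratio; a high-recall specialised database maximises the second, typically at the cost of more "false drops" in the output.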
  12. Sparck Jones, K.: Reflections on TREC (1997) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 580) [ClassicSimilarity], result of:
              0.048260607 = score(doc=580,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise; characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research
  13. Bodoff, D.; Kambil, A.: Partial coordination : II. A preliminary evaluation and failure analysis (1998) 0.01
    0.0080434345 = product of:
      0.024130303 = sum of:
        0.024130303 = product of:
          0.048260607 = sum of:
            0.048260607 = weight(_text_:indexing in 2323) [ClassicSimilarity], result of:
              0.048260607 = score(doc=2323,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2537542 = fieldWeight in 2323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2323)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Partial coordination is a new method for cataloging documents for subject access. It is especially designed to enhance the precision of document searches in online environments. This article reports a preliminary evaluation of partial coordination that shows promising results compared with full-text retrieval. We also report the difficulties in empirically evaluating the effectiveness of automatic full-text retrieval in contrast to mixed methods such as partial coordination, which combine human cataloging with computerized retrieval. Based on our study, we propose that research in this area will substantially benefit from a common framework for failure analysis and a common data set. This will allow information retrieval researchers adapting 'library style' cataloging to large electronic document collections, as well as those developing automated or mixed methods, to directly compare their proposals for indexing and retrieval. This article concludes by suggesting guidelines for constructing such a testbed
  14. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.01
    0.007853523 = product of:
      0.023560567 = sum of:
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.047121134 = score(doc=7302,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  15. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.01
    0.007853523 = product of:
      0.023560567 = sum of:
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 3002) [ClassicSimilarity], result of:
              0.047121134 = score(doc=3002,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 3002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3002)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  16. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.01
    0.007853523 = product of:
      0.023560567 = sum of:
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.047121134 = score(doc=3368,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 2.1996 13:14:10
  17. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.01
    0.007853523 = product of:
      0.023560567 = sum of:
        0.023560567 = product of:
          0.047121134 = sum of:
            0.047121134 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.047121134 = score(doc=5598,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    2.11.1996 13:08:22
  18. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.04038954 = score(doc=1757,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study where 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
  19. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.04038954 = score(doc=4341,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  20. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.01
    0.0067315903 = product of:
      0.02019477 = sum of:
        0.02019477 = product of:
          0.04038954 = sum of:
            0.04038954 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
              0.04038954 = score(doc=6967,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.23214069 = fieldWeight in 6967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6967)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon

Languages

  • e 35
  • d 4
  • f 2
  • nl 1
  • sp 1

Types

  • a 38
  • m 2
  • s 2
  • el 1
  • r 1