Search (47 results, page 1 of 3)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
  1. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.03
    0.03406783 = product of:
      0.06813566 = sum of:
        0.042235587 = weight(_text_:data in 3107) [ClassicSimilarity], result of:
          0.042235587 = score(doc=3107,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.02590007 = product of:
          0.05180014 = sum of:
            0.05180014 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.05180014 = score(doc=3107,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 2.1999 20:59:22
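    The indented blocks beneath each hit are Lucene "explain" trees for ClassicSimilarity (TF-IDF) scoring. As an illustration only (the helper name below is ours, not part of the record), the score of hit 1 can be reproduced in Python from the values shown above: each term clause is queryWeight (idf * queryNorm) times fieldWeight (sqrt(tf) * idf * fieldNorm), and the clause sums are scaled by the coord factors.

      import math

      def classic_term_score(freq, idf, query_norm, field_norm):
          # One clause of Lucene ClassicSimilarity: queryWeight * fieldWeight
          query_weight = idf * query_norm                      # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      # Values copied from the explain tree for doc 3107 above.
      w_data = classic_term_score(2.0, 3.1620505, 0.03823278, 0.078125)
      w_22   = classic_term_score(2.0, 3.5018296, 0.03823278, 0.078125)

      # The "22" clause carries coord(1/2); the overall sum carries coord(2/4).
      score = (w_data + 0.5 * w_22) * 0.5
      print(f"{w_data:.6f} {w_22:.6f} {score:.6f}")
      # -> 0.042236 0.051800 0.034068, matching the explain values up to rounding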
  2. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.03
    0.029970573 = product of:
      0.059941147 = sum of:
        0.041811097 = weight(_text_:data in 7302) [ClassicSimilarity], result of:
          0.041811097 = score(doc=7302,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34584928 = fieldWeight in 7302, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7302)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.0362601 = score(doc=7302,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  3. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.017033914 = product of:
      0.03406783 = sum of:
        0.021117793 = weight(_text_:data in 2339) [ClassicSimilarity], result of:
          0.021117793 = score(doc=2339,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.17468026 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.012950035 = product of:
          0.02590007 = sum of:
            0.02590007 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.02590007 = score(doc=2339,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  4. Van der Walt, H.E.A.; Brakel, P.A. van: Method for the evaluation of the retrieval effectiveness of a CD-ROM bibliographic database (1991) 0.02
    0.016527288 = product of:
      0.06610915 = sum of:
        0.06610915 = weight(_text_:data in 3114) [ClassicSimilarity], result of:
          0.06610915 = score(doc=3114,freq=10.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.5468357 = fieldWeight in 3114, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3114)
      0.25 = coord(1/4)
    
    Abstract
    Addresses the problem of how potential users of CD-ROM data bases can objectively establish which version of the same data base is best suited for a specific situation. The problem was solved by applying the retrieval effectiveness of current on-line data base search systems as a standard measurement. 5 search queries from the medical sciences were presented by experienced users of MEDLINE. Search strategies were written for both DIALOG and DATA-STAR. Search results were compared to create a recall base from documents present in both on-line searches. This recall base was then used to establish the recall and precision of 4 CD-ROM data bases: MEDLINE, Compact Cambridge MEDLINE, DIALOG OnDisc, Comprehensive MEDLINE/EBSCO
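    The recall-base procedure described above is a pooling method: documents retrieved by both online searches form the reference set against which each CD-ROM search is measured. As a worked sketch only (names and toy values are ours, not the authors' data), recall and precision against such a pooled recall base reduce to simple set operations:

      def recall_precision(retrieved, recall_base):
          # recall_base: relevant documents pooled from the online searches
          # retrieved:   documents returned by one CD-ROM search for the same query
          hits = set(retrieved) & set(recall_base)
          recall = len(hits) / len(recall_base) if recall_base else 0.0
          precision = len(hits) / len(retrieved) if retrieved else 0.0
          return recall, precision

      # Toy example for a single query:
      recall_base = {"pmid1", "pmid2", "pmid3", "pmid4"}
      cdrom_hits = ["pmid2", "pmid3", "pmid9"]
      print(recall_precision(cdrom_hits, recall_base))   # -> (0.5, 0.666...)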
  5. Wildemuth, B.M.: Measures of success in searching a full-text fact base (1990) 0.01
    0.0128019815 = product of:
      0.051207926 = sum of:
        0.051207926 = weight(_text_:data in 2050) [ClassicSimilarity], result of:
          0.051207926 = score(doc=2050,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.42357713 = fieldWeight in 2050, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2050)
      0.25 = coord(1/4)
    
    Abstract
    The traditional measures of online searching proficiency (recall and precision) are less appropriate when applied to the searching of full text databases. The pilot study investigated and evaluated 5 measures of overall success in searching a full text data bank. Data was drawn from INQUIRER searches conducted by medical students at North Carolina Univ. at Chapel Hill. INQUIRER is an online database of facts and concepts in microbiology. The 5 measures were: success/failure; precision; search term overlap; number of search cycles; and time per search. Concludes that the last 4 measures look promising for the evaluation of fact data bases such as INQUIRER
  6. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.01
    0.0128019815 = product of:
      0.051207926 = sum of:
        0.051207926 = weight(_text_:data in 5804) [ClassicSimilarity], result of:
          0.051207926 = score(doc=5804,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.42357713 = fieldWeight in 5804, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.25 = coord(1/4)
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality or minimise the degradation of response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data where features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness
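    The abstract describes thresholding only in general terms. As a toy sketch of the underlying idea (ours, not Kelledy and Smeaton's implementation), cut-off points over weight-sorted postings lists let the scorer skip the low-weight tail of each list and so process less data per query:

      from collections import defaultdict

      def thresholded_retrieval(postings, query_terms, cutoff):
          # postings maps term -> list of (doc_id, weight), sorted by weight descending
          scores = defaultdict(float)
          for term in query_terms:
              for doc_id, weight in postings.get(term, []):
                  if weight < cutoff:   # cut-off point: ignore the low-weight tail
                      break             # safe because each list is weight-sorted
                  scores[doc_id] += weight
          return sorted(scores.items(), key=lambda item: item[1], reverse=True)

      # Toy postings lists, each sorted by descending term weight.
      postings = {
          "trec": [("d1", 0.875), ("d3", 0.5), ("d2", 0.0625)],
          "data": [("d2", 0.75), ("d1", 0.25), ("d3", 0.0625)],
      }
      print(thresholded_retrieval(postings, ["trec", "data"], cutoff=0.1))
      # -> [('d1', 1.125), ('d2', 0.75), ('d3', 0.5)]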
  7. Savoy, J.; Calvé, A. le; Vrajitoru, D.: Report on the TREC5 experiment : data fusion and collection fusion (1997) 0.01
    0.012670675 = product of:
      0.0506827 = sum of:
        0.0506827 = weight(_text_:data in 3108) [ClassicSimilarity], result of:
          0.0506827 = score(doc=3108,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 3108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=3108)
      0.25 = coord(1/4)
    
  8. Taghva, K.: The effects of noisy data on text retrieval (1994) 0.01
    0.011946027 = product of:
      0.04778411 = sum of:
        0.04778411 = weight(_text_:data in 7227) [ClassicSimilarity], result of:
          0.04778411 = score(doc=7227,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3952563 = fieldWeight in 7227, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=7227)
      0.25 = coord(1/4)
    
    Abstract
    Reports the results of experiments on query evaluation in the presence of noisy data: an OCR-generated database and its corresponding 99.8% correct version are used to process a set of queries to determine the effect the degraded version will have on retrieval. With the set of scientific documents used in the testing, the effect is insignificant. Improves the result by applying an automatic postprocessing system designed to correct the kinds of errors generated by recognition devices
  9. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.011946027 = product of:
      0.04778411 = sum of:
        0.04778411 = weight(_text_:data in 5689) [ClassicSimilarity], result of:
          0.04778411 = score(doc=5689,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3952563 = fieldWeight in 5689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.25 = coord(1/4)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a data base of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches
  10. Su, L.T.: Value of search results as a whole as a measure of information retrieval performance (1996) 0.01
    0.010973128 = product of:
      0.04389251 = sum of:
        0.04389251 = weight(_text_:data in 7439) [ClassicSimilarity], result of:
          0.04389251 = score(doc=7439,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3630661 = fieldWeight in 7439, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7439)
      0.25 = coord(1/4)
    
    Abstract
    Examines the conceptual categories or dimensions of the users' reasons for assigning particular ratings on the value of search results, and the relationships between these dimensions of value and the dimensions of success identified in an earlier study. 40 end users with individual information problems from an academic environment were observed, interacting with 6 professional intermediaries searching on their behalf in large operational systems at the users' own costs. A search was conducted for each individual problem in the users' presence and with user participation. Quantitative data consisting of scores for all measures studied and verbal data containing reasons for assigning certain ratings to selected measures were collected. The portion of the verbal data including users' reasons for assigning particular value ratings from the previous study will be transcribed and content analyzed for the current study
  11. Wildemuth, B.M.; Jacob, E.K.; Fullington, A.; Bliek, R. de; Friedman, C.P.: A detailed analysis of end-user search behaviours (1991) 0.01
    0.009144273 = product of:
      0.03657709 = sum of:
        0.03657709 = weight(_text_:data in 2423) [ClassicSimilarity], result of:
          0.03657709 = score(doc=2423,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.30255508 = fieldWeight in 2423, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2423)
      0.25 = coord(1/4)
    
    Abstract
    Each search statement in this revision process can be viewed as a 'move' in the overall search strategy. Very little is known about how end users develop and revise their search strategies. A study was conducted to analyse the moves made in 244 data base searches conducted by 26 medical students at the University of North Carolina at Chapel Hill. Students searched INQUIRER, a data base of facts and concepts in microbiology. The searches were conducted during a 3-week period in spring 1990 and were recorded by the INQUIRER system. Each search statement was categorised, using Fidel's online searching moves (s. Online review 9(1985) S.61-74) and Bates' search tactics (s. JASIS 30(1979) S.205-214). Further analyses indicated that the most common moves were Browse/Specify, Select Exhaust, Intersect, and Vary, and that selection of moves varied by student and by problem. Analysis of search tactics (combinations of moves) identified 5 common search approaches. The results of this study have implications for future research on search behaviours, for the design of system interfaces and data base structures, and for the training of end users
  12. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.0725202 = score(doc=6418,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Online. 22(1998) no.6, S.57-58
  13. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.0725202 = score(doc=5089,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:43:54
  14. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.01
    0.008959521 = product of:
      0.035838082 = sum of:
        0.035838082 = weight(_text_:data in 6624) [ClassicSimilarity], result of:
          0.035838082 = score(doc=6624,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.29644224 = fieldWeight in 6624, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=6624)
      0.25 = coord(1/4)
    
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally requires captions to explain its features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns), which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical forms for each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds as compared to 8 minutes for the existing system
  15. Sparck Jones, K.: Reflections on TREC : TREC-2 (1995) 0.01
    0.008447117 = product of:
      0.03378847 = sum of:
        0.03378847 = weight(_text_:data in 1916) [ClassicSimilarity], result of:
          0.03378847 = score(doc=1916,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 1916, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1916)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the TREC programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise, characterises the methods of indexing and retrieval being tested within it in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research
  16. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1996) 0.01
    0.008447117 = product of:
      0.03378847 = sum of:
        0.03378847 = weight(_text_:data in 6607) [ClassicSimilarity], result of:
          0.03378847 = score(doc=6607,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 6607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=6607)
      0.25 = coord(1/4)
    
    Abstract
    Test sets for the document retrieval task composed of human relevance judgments have been constructed that allow one to compare human performance directly with that of automatic methods and that place absolute limits on performance by any method. Current retrieval systems are found to generate only about half of the information allowed by these absolute limits. The data suggests that most of the improvement consistent with these limits can only be achieved by incorporating specific subject information into retrieval systems
  17. Harman, D.K.: The first text retrieval conference : TREC-1, 1992 (1993) 0.01
    0.008447117 = product of:
      0.03378847 = sum of:
        0.03378847 = weight(_text_:data in 1317) [ClassicSimilarity], result of:
          0.03378847 = score(doc=1317,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 1317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1317)
      0.25 = coord(1/4)
    
    Abstract
    Reports on the 1st Text Retrieval Conference (TREC-1) held in Rockville, MD, 4-6 Nov. 1992. The TREC experiment is being run by the National Institute of Standards and Technology to allow information retrieval researchers to scale up from small collections of data to larger-sized experiments. Groups of researchers have been provided with text documents compressed on CD-ROM. They used experimental retrieval systems to search the text and evaluate the results
  18. Shenouda, W.: Online bibliographic searching : how end-users modify their search strategies (1990) 0.01
    0.0073912274 = product of:
      0.02956491 = sum of:
        0.02956491 = weight(_text_:data in 4895) [ClassicSimilarity], result of:
          0.02956491 = score(doc=4895,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 4895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4895)
      0.25 = coord(1/4)
    
    Abstract
    The study attempted to identify how end-users modify their initial search strategies in the light of new information presented during their interaction with an online bibliographic information retrieval system in a real environment. This exploratory study was also conducted to determine the effectiveness of the changes, made by users during the online process, in retrieving relevant documents. Analysis of this data shows that all end-users modify their searches during the online process. Results indicate that certain changes were made more frequently than others. Changes affecting relevance and characteristics of end-users' online search behaviour were also identified
  19. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996) 0.01
    0.0073912274 = product of:
      0.02956491 = sum of:
        0.02956491 = weight(_text_:data in 6768) [ClassicSimilarity], result of:
          0.02956491 = score(doc=6768,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 6768, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6768)
      0.25 = coord(1/4)
    
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRRD-funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out database evaluations of: INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for evaluation of bibliographic databases
  20. Robertson, S.E.; Walker, S.; Hancock-Beaulieu, M.M.: Large test collection experiments of an operational, interactive system : OKAPI at TREC (1995) 0.01
    0.0073912274 = product of:
      0.02956491 = sum of:
        0.02956491 = weight(_text_:data in 6964) [ClassicSimilarity], result of:
          0.02956491 = score(doc=6964,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 6964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6964)
      0.25 = coord(1/4)
    
    Abstract
    The Okapi system has been used in a series of experiments on the TREC collections, investigating probabilistic methods, relevance feedback and query expansion, and interaction issues. Some new probabilistic models have been developed, resulting in simple weighting functions that take account of document length and within-document and within-query term frequency. All have been shown to be beneficial when based on large quantities of relevance data as in the routing task. Interaction issues are much more difficult to evaluate in the TREC framework, and no benefits have yet been demonstrated from feedback based on small numbers of 'relevant' items identified by intermediary searchers
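    The 'simple weighting functions' mentioned here belong to the line of Okapi work that later became widely known as BM25. The abstract does not give a formula, so the following is only an illustrative sketch of a BM25-style weight of the general shape described (sensitive to document length and to within-document and within-query term frequency); parameter values are conventional defaults, not taken from the paper:

      import math

      def okapi_style_weight(tf, qtf, df, num_docs, doc_len, avg_doc_len,
                             k1=1.2, b=0.75, k3=8.0):
          # idf-like component (a common non-negative variant)
          idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
          # within-document term frequency, damped by relative document length
          tf_part = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
          # within-query term frequency
          qtf_part = qtf * (k3 + 1) / (qtf + k3)
          return idf * tf_part * qtf_part

      # e.g. a term occurring 3 times in an average-length document and once in
      # the query, found in 5,088 of 44,218 documents (the collection statistics
      # visible in the explain trees above):
      print(round(okapi_style_weight(3, 1, 5088, 44218, 300.0, 300.0), 3))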

Types

  • a 41
  • m 3
  • s 2
  • el 1
  • r 1