Search (37 results, page 1 of 2)

  • Filter: theme_ss:"Retrievalstudien"
  • Filter: year_i:[1990 TO 2000}
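
  The figure after each title is the relevance score assigned by the database's Lucene backend. As a minimal sketch of how such a score arises, assuming Lucene's ClassicSimilarity (TF-IDF) formula and reusing the idf, queryNorm and fieldNorm values the engine reported for entry 1 (where two of the five query clauses, the terms 'problems' and '22', matched), the arithmetic can be reproduced as follows; the function name is illustrative:

      import math

      def classic_term_score(term_freq, idf, query_norm, field_norm):
          # One matching term's TF-IDF contribution, ClassicSimilarity style:
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm.
          return (idf * query_norm) * (math.sqrt(term_freq) * idf * field_norm)

      QUERY_NORM = 0.036484417  # reported by the engine for this query
      FIELD_NORM = 0.0625       # length norm of the matched field in entry 1

      w_problems = classic_term_score(2.0, 4.1274753, QUERY_NORM, FIELD_NORM)  # ~0.0549
      w_22       = classic_term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # ~0.0395

      # Each term sat in a 2-clause subquery with 1 matching clause -> coord(1/2);
      # 2 of the 5 top-level clauses matched -> coord(2/5).
      score = (0.5 * w_problems + 0.5 * w_22) * (2 / 5)
      print(round(score, 9))  # ~0.018896578, displayed as 0.02 for entry 1
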
  1. Sanderson, M.: ¬The Reuters test collection (1996) 0.02
    Abstract
    Describes the Reuters test collection, which at 22,173 references is significantly larger than most traditional test collections. In addition, Reuters has none of the recall-calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis for performing retrieval experiments on the Reuters collection, and illustrates its use with some simple retrieval experiments that compare the performance of stemming algorithms.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  2. Ellis, D.: Progress and problems in information retrieval (1996) 0.02
    Date
    26. 7.2002 20:22:46
  3. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.02
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance, and can thus determine which of 2 similarly performing systems is superior. For both single-term and multiple-term query retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance-feedback-based retrieval and filtering. Simulations illustrate how the single-term model performs, and sample performance predictions are given for single-term and multiple-term problems.
    Date
    22. 2.1996 13:14:10
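
  Losee's 'average search length' above is, in its simplest form, the expected number of documents a user must scan down a ranked list to reach the relevant ones; his paper predicts it analytically from database parameters. A minimal sketch of the measure itself, assuming known relevance judgements (not Losee's predictive model):

      def average_search_length(ranking, relevant):
          # Mean 1-based rank position of the relevant documents in a ranking.
          positions = [i + 1 for i, doc in enumerate(ranking) if doc in relevant]
          return sum(positions) / len(positions)

      # Hypothetical ranking: relevant documents d3 and d9 sit at ranks 1 and 4.
      print(average_search_length(["d3", "d7", "d1", "d9"], {"d3", "d9"}))  # 2.5
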
  4. Smeaton, A.F.; Harman, D.: ¬The TREC experiments and their impact on Europe (1997) 0.02
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used in the research were massive and the groups participating in the collaborative evaluation were among the main organizations in the field. Reviews the findings of TREC, the way in which it operates and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises.
  5. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.01
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised so as to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data in which features are introduced into the retrieval process to improve response time while maintaining the same level of retrieval effectiveness.
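
  As a rough sketch of the thresholding idea, under assumptions of my own rather than Kelledy and Smeaton's exact scheme: keep each term's postings sorted by descending within-document weight and read only the top of each list, trading a small loss of effectiveness for response time:

      from collections import defaultdict

      def ranked_retrieval(query_terms, postings, cutoff=None):
          # Accumulate per-document scores; with a cutoff, only the first
          # `cutoff` postings of each (pre-sorted) list are processed.
          scores = defaultdict(float)
          for term in query_terms:
              plist = postings.get(term, [])
              for doc, weight in (plist[:cutoff] if cutoff else plist):
                  scores[doc] += weight
          return sorted(scores.items(), key=lambda item: -item[1])

      # Hypothetical postings, highest-weighted documents first.
      postings = {
          "trec": [("d1", 2.1), ("d4", 1.3), ("d2", 0.2)],
          "data": [("d4", 1.8), ("d3", 0.9), ("d1", 0.1)],
      }
      print(ranked_retrieval(["trec", "data"], postings))            # full evaluation
      print(ranked_retrieval(["trec", "data"], postings, cutoff=2))  # thresholded
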
  6. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    Source
    Online. 22(1998) no.6, S.57-58
  7. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    Date
    22. 7.2006 18:43:54
  8. Tague-Sutcliffe, J.M.: Some perspectives on the evaluation of information retrieval systems (1996) 0.01
    Abstract
    As an introduction to the papers in this special issue, some of the major problems facing investigators who evaluate information retrieval systems are presented. These problems include: the question of the necessity of using real users, as opposed to subject experts, in making relevance judgements; the possibility of evaluating individual components of the retrieval process, rather than the process as a whole; the kinds of aggregation that are appropriate for the measures used in evaluating systems; the value of an analytic or simulatory, as opposed to an experimental, approach in evaluating retrieval systems; the difficulties in evaluating interactive systems; and the kinds of generalization that are possible from information retrieval tests.
  9. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.01
    Abstract
    Describes and evaluates the proximity search facilities in external online systems and in-house retrieval software. Discusses and illustrates capabilities, syntax and circumstances of use. Presents measurements of the overheads required by proximity for storage, record input time and search time. The narrowing effect of proximity on search strategy is illustrated by recall and precision test results. Usage and problems lead to a number of design ideas for better implementation: some based on existing Boolean strategies, one on the use of weighted proximity to produce ranked output automatically. A comparison of Boolean, quorum and proximate term-pair distance strategies is included.
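
  The core operation behind such proximity facilities can be sketched generically, assuming positional postings (sorted word offsets per document); this is a plain within-k-words test, not Keen's specific syntax:

      def within_k_words(positions_a, positions_b, k):
          # True if any occurrence of term A lies within k words of term B;
          # both inputs are sorted word offsets within the same document.
          i = j = 0
          while i < len(positions_a) and j < len(positions_b):
              if abs(positions_a[i] - positions_b[j]) <= k:
                  return True
              if positions_a[i] < positions_b[j]:
                  i += 1
              else:
                  j += 1
          return False

      # "retrieval" at offsets 3 and 40, "systems" at offset 41 -> adjacent.
      print(within_k_words([3, 40], [41], k=1))  # True
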
  10. Frei, H.P.; Meienberg, S.; Schäuble, P.: ¬The perils of interpreting recall and precision values (1991) 0.01
    Abstract
    The traditional recall and precision measures are inappropriate for evaluating retrieval algorithms that retrieve information from Wide Area Networks (WANs). The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than that of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'.
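
  For reference, the classical set-based measures whose interpretation the authors caution against; the WAN difficulty is precisely that the full relevant set below is unknowable:

      def precision_recall(retrieved, relevant):
          # precision = |retrieved & relevant| / |retrieved|
          # recall    = |retrieved & relevant| / |relevant|
          hits = len(set(retrieved) & set(relevant))
          return hits / len(retrieved), hits / len(relevant)

      p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d2", "d4", "d9"])
      print(p, r)  # 0.5 0.666...
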
  11. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    Date
    27. 2.1999 20:55:22
  12. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.00
    Date
    27. 2.1999 20:59:22
  13. Tillotson, J.: Is keyword searching the answer? (1995) 0.00
    Abstract
    Examines 3 aspects of keyword searching to see if defaulting to keyword searches might serve as a solution to the problems users encounter when performing subject searches in OPACs. Investigates whether keyword searching produces useful results; whether people who use keyword searches to find information on a subject report that they are satisfied with the results; and how keyword searching and controlled vocabulary searching are offered and explained in currently available OPAC interfaces. Concludes that both keyword and controlled vocabulary searching ought to be easily available in an OPAC, and that improvements need to be made in the explanation and help offered to subject searchers.
  14. Hersh, W.R.; Hickam, D.H.: ¬An evaluation of interactive Boolean and natural language searching with an online medical textbook (1995) 0.00
    Abstract
    Few studies have compared the interactive use of Boolean and natural language search systems. Studies the use of 3 retrieval systems by senior medical students searching on queries generated by actual physicians in a clinical setting. The searchers were randomized to search on 2 or 3 different retrieval systems: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall or precision among the 3 systems, and no user preference for any system over the others. The study revealed problems with traditional measures of retrieval evaluation when applied to the interactive search setting.
  15. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996) 0.00
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRRD-funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out database evaluations of INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for the evaluation of bibliographic databases.
  16. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.00
    Abstract
    State-of-the-art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance-based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts. Discusses the evaluation of Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (the Internet): because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems.
  17. Park, T.K.: ¬The nature of relevance in information retrieval : an empirical study (1993) 0.00
    Abstract
    Experimental research in information retrieval (IR) depends on the idea of relevance. Because of its key role in IR, recent questions about relevance have raised issues of methodological concern and have shaken the philosophical foundations of IR theory development. Despite an existing set of theoretical definitions of this concept, our understanding of relevance from users' perspectives is still limited. Using naturalistic inquiry methodology, this article reports an empirical study of user-based relevance interpretations. A model is presented that reflects the nature of the thought process of users who are evaluating bibliographic citations produced by a document retrieval system. Three major categories of variables affecting relevance assessments - internal context, external context, and problem context - are identified and described. Users' relevance assessments involve multiple layers of interpretation that are derived from individuals' experiences, perceptions, and private knowledge related to the particular information problems at hand.
  18. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.00
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally requires captions to explain its features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns), which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical forms for each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds, as compared to 8 minutes for the existing system.
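
  The two-stage design described above (a cheap coarse-grain filter over index files, then an expensive fine-grain match) can be sketched generically; the fine stage below ranks by word overlap as a stand-in for Guglielmo and Rowe's logical-form matching, and all names and data are illustrative:

      def coarse_filter(query_words, index):
          # Stage 1: candidate captions sharing at least one indexed word.
          return {doc for w in query_words for doc in index.get(w, ())}

      def fine_match(query_words, captions, candidates):
          # Stage 2 stand-in: rank candidates by word overlap (the paper
          # instead compares logical forms produced by parsing).
          def overlap(doc):
              return len(set(query_words) & set(captions[doc].split()))
          return sorted(candidates, key=overlap, reverse=True)

      captions = {"img1": "aircraft on runway", "img2": "runway lights at night"}
      index = {"aircraft": ["img1"], "runway": ["img1", "img2"], "lights": ["img2"]}
      q = ["aircraft", "runway"]
      print(fine_match(q, captions, coarse_filter(q, index)))  # ['img1', 'img2']
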
  19. Su, L.T.: Value of search results as a whole as a measure of information retrieval performance (1996) 0.00
    Abstract
    Examines: the conceptual categories or dimensions of the users' reasons for assigning particular ratings on the value of search results, and the relationships between these dimensions of value and the dimensions of success identified in an earlier study. 40 end users with individual information problems from an academic environment were observed interacting with 6 professional intermediaries searching on their behalf in large operational systems at the users' own cost. A search was conducted for each individual problem in the users' presence and with user participation. Quantitative data consisting of scores for all measures studied, and verbal data containing reasons for assigning certain ratings to selected measures, were collected. The portion of the verbal data covering users' reasons for assigning particular value ratings from the previous study will be transcribed and content-analyzed for the current study.
  20. Frei, H.P.; Schäuble, P.: Determining the effectiveness of retrieval algorithms (1991) 0.00
    Abstract
    A new effectiveness measure (the 'usefulness measure') is proposed to circumvent the problems associated with the classical recall and precision measures. It is difficult to evaluate systems that filter extremely dynamic information; the determination of all relevant documents in a real-life collection is hardly affordable, and the specification of binary relevance assessments is often problematic. The new measure relies on a statistical approach with which two retrieval algorithms are compared. In contrast to the classical recall and precision measures, the new measure requires only relative judgments, and the reply of the retrieval system is compared directly with the information need of the user rather than with the query. The new measure has the added ability to determine an error probability that indicates how stable the usefulness measure is. Using a test collection of abstracts from CACM, it is shown that the new measure is also capable of disclosing the effect of manually assigned descriptors, and yields results similar to those of the traditional recall and precision measures.
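
  A rough sketch of the flavor of such a relative comparison, under my own assumptions rather than the paper's exact statistic: suppose each query yields a relative judgement of which algorithm's reply was more useful (ties dropped); a two-sided binomial sign test then supplies an error probability for declaring one algorithm better:

      from math import comb

      def sign_test_p(wins_a, wins_b):
          # Two-sided binomial sign test on paired preferences (ties excluded):
          # probability of a split at least this lopsided if both algorithms
          # were in truth equally useful.
          n, k = wins_a + wins_b, max(wins_a, wins_b)
          tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
          return min(1.0, 2 * tail)

      # Hypothetical outcome: algorithm A preferred on 14 queries, B on 4.
      print(sign_test_p(14, 4))  # ~0.031: A looks better, with small error probability
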