Search (193 results, page 2 of 10)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
  1. Burgin, R.: ¬The Monte Carlo method and the evaluation of retrieval system performance (1999) 0.01
    0.010017214 = product of:
      0.0701205 = sum of:
        0.011415146 = weight(_text_:information in 2946) [ClassicSimilarity], result of:
          0.011415146 = score(doc=2946,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 2946, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2946)
        0.05870535 = weight(_text_:retrieval in 2946) [ClassicSimilarity], result of:
          0.05870535 = score(doc=2946,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.6549133 = fieldWeight in 2946, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2946)
      0.14285715 = coord(2/14)
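The explain tree above follows Lucene's ClassicSimilarity formula: per matching term, score = queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf(freq) × idf × fieldNorm, with the sum scaled by coord. A minimal Python sketch reproducing the arithmetic of this first tree, with queryNorm, frequencies, and fieldNorm copied from the tree itself:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(freq)
    return math.sqrt(freq)

query_norm = 0.029633347  # copied from the explain output above

def term_score(freq, doc_freq, max_docs, field_norm):
    """One term's contribution: queryWeight * fieldWeight."""
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm
    field_weight = tf(freq) * term_idf * field_norm
    return query_weight * field_weight

# Record 1 (doc 2946): two matching terms out of 14 query clauses
s_info = term_score(freq=4.0, doc_freq=20772, max_docs=44218, field_norm=0.0625)
s_retr = term_score(freq=12.0, doc_freq=5836, max_docs=44218, field_norm=0.0625)
total = (s_info + s_retr) * (2 / 14)  # coord(2/14)
print(total)  # agrees with the tree's 0.010017214 to within float rounding
```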
    
    Abstract
    The ability to distinguish between acceptable and unacceptable levels of retrieval performance and the ability to distinguish between significant and non-significant differences between retrieval results are important to traditional information retrieval experiments. The Monte Carlo method is shown to represent an attractive alternative to the hypergeometric model for identifying the levels at which random retrieval performance is exceeded in retrieval test collections and for overcoming some of the limitations of the hypergeometric model
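The random-retrieval baseline the abstract refers to can be sketched as follows: for a collection of N documents with R relevant, the number of relevant documents in a random retrieval of k follows a hypergeometric distribution, and a Monte Carlo simulation approximates the same tail probability. The sizes N, R, k below are invented for illustration, not taken from Burgin's paper.

```python
import math
import random

# Hypothetical collection: N documents, R relevant, k retrieved at random
N, R, k = 1000, 50, 20

def hypergeom_tail(x):
    """Exact P(at least x relevant among k random retrievals)."""
    total = math.comb(N, k)
    return sum(math.comb(R, i) * math.comb(N - R, k - i)
               for i in range(x, min(R, k) + 1)) / total

def monte_carlo_tail(x, trials=20000, seed=42):
    """Monte Carlo estimate of the same tail probability."""
    rng = random.Random(seed)
    relevant = set(range(R))  # label the first R documents as relevant
    hits = 0
    for _ in range(trials):
        sample = rng.sample(range(N), k)
        if sum(1 for d in sample if d in relevant) >= x:
            hits += 1
    return hits / trials

# A run retrieving 3+ relevant documents exceeds random performance
# at roughly this significance level
print(hypergeom_tail(3), monte_carlo_tail(3))
```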
    Source
    Journal of the American Society for Information Science. 50(1999) no.2, S.181-191
  2. Shafique, M.; Chaudhry, A.S.: Intelligent agent-based online information retrieval (1995) 0.01
    0.00982186 = product of:
      0.06875302 = sum of:
        0.01482871 = weight(_text_:information in 3851) [ClassicSimilarity], result of:
          0.01482871 = score(doc=3851,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2850541 = fieldWeight in 3851, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3851)
        0.053924307 = weight(_text_:retrieval in 3851) [ClassicSimilarity], result of:
          0.053924307 = score(doc=3851,freq=18.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.60157627 = fieldWeight in 3851, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3851)
      0.14285715 = coord(2/14)
    
    Abstract
    Describes an intelligent agent based information retrieval model. The relevance matrix used by the intelligent agent consists of rows and columns; rows represent the documents and columns are used for keywords. Entries represent predetermined weights of keywords in documents. The search/query vector is constructed by the intelligent agent through explicit interaction with the user, using an interactive query refinement technique. By manipulating the relevance matrix against the search vector, the agent filters the document representations and retrieves the most relevant documents, consequently improving the retrieval performance. Work is in progress on an experiment to compare the retrieval results from a conventional retrieval model and an intelligent agent based retrieval model. A test document collection on artificial intelligence has been selected as a sample. Retrieval tests are being carried out on a selected group of researchers using the 2 retrieval systems. Results will be compared to assess the retrieval performance using precision and recall metrics
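A toy sketch of the relevance-matrix ranking the abstract describes: rows are documents, columns are keywords, entries are predetermined keyword weights, and documents are ranked by the dot product of their row with the query vector. The documents, keywords, and weights here are invented.

```python
# Invented relevance matrix: one row per document, one column per keyword
keywords = ["agent", "retrieval", "indexing"]
relevance_matrix = {
    "doc1": [0.9, 0.7, 0.0],
    "doc2": [0.1, 0.8, 0.6],
    "doc3": [0.0, 0.2, 0.9],
}

def rank(query_vector):
    """Score each document by the dot product of its row with the query."""
    scores = {
        doc: sum(w * q for w, q in zip(row, query_vector))
        for doc, row in relevance_matrix.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A query refined toward "agent" and "retrieval"
print(rank([1.0, 1.0, 0.0]))  # doc1 first: 0.9 + 0.7 = 1.6
```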
    Imprint
    Oxford : Learned Information
    Source
    Online information 95: Proceedings of the 19th International online information meeting, London, 5-7 December 1995. Ed.: D.I. Raitt u. B. Jeapes
  3. Sheridan, P.; Ballerini, J.P.; Schäuble, P.: Building a large multilingual test collection from comparable news documents (1998) 0.01
    0.009709007 = product of:
      0.06796305 = sum of:
        0.01712272 = weight(_text_:information in 6298) [ClassicSimilarity], result of:
          0.01712272 = score(doc=6298,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3291521 = fieldWeight in 6298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=6298)
        0.050840326 = weight(_text_:retrieval in 6298) [ClassicSimilarity], result of:
          0.050840326 = score(doc=6298,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5671716 = fieldWeight in 6298, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=6298)
      0.14285715 = coord(2/14)
    
    Series
    The Kluwer International series on information retrieval
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  4. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.01
    0.009653008 = product of:
      0.06757105 = sum of:
        0.013980643 = weight(_text_:information in 3845) [ClassicSimilarity], result of:
          0.013980643 = score(doc=3845,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2687516 = fieldWeight in 3845, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
        0.05359041 = weight(_text_:retrieval in 3845) [ClassicSimilarity], result of:
          0.05359041 = score(doc=3845,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.59785134 = fieldWeight in 3845, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3845)
      0.14285715 = coord(2/14)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, discusses the broader implications of the research for the design of information retrieval systems
    Source
    Information processing and management. 27(1991) no.5, S.405-432
  5. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.01
    0.009356101 = product of:
      0.065492705 = sum of:
        0.014125523 = weight(_text_:information in 5804) [ClassicSimilarity], result of:
          0.014125523 = score(doc=5804,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 5804, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
        0.05136718 = weight(_text_:retrieval in 5804) [ClassicSimilarity], result of:
          0.05136718 = score(doc=5804,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5730491 = fieldWeight in 5804, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.14285715 = coord(2/14)
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality or minimise the degradation of response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data where features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness
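A simplified sketch of the thresholding idea: if each term's postings list is kept sorted by descending weight, scoring can stop at a cut-off point instead of processing the whole list, trading a little effectiveness for response time. The postings data and cut-off value are hypothetical, not taken from the paper.

```python
# Invented weight-sorted postings lists: term -> [(doc_id, weight), ...]
postings = {
    "retrieval": [(1, 0.9), (4, 0.8), (2, 0.3), (7, 0.1)],
    "trec":      [(4, 0.7), (9, 0.6), (1, 0.2)],
}

def score(query_terms, threshold=0.0):
    """Accumulate per-document scores, cutting each list off at the threshold."""
    scores = {}
    for term in query_terms:
        for doc_id, weight in postings.get(term, []):
            if weight < threshold:
                break  # postings are weight-sorted, so the rest are below too
            scores[doc_id] = scores.get(doc_id, 0.0) + weight
    return scores

full = score(["retrieval", "trec"])                 # no cut-off
cut = score(["retrieval", "trec"], threshold=0.25)  # thresholded
print(full, cut)  # the top-ranked document is the same in both runs
```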
  6. Harter, S.P.; Hert, C.A.: Evaluation of information retrieval systems : approaches, issues, and methods (1997) 0.01
    0.009356101 = product of:
      0.065492705 = sum of:
        0.014125523 = weight(_text_:information in 2264) [ClassicSimilarity], result of:
          0.014125523 = score(doc=2264,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 2264, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2264)
        0.05136718 = weight(_text_:retrieval in 2264) [ClassicSimilarity], result of:
          0.05136718 = score(doc=2264,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5730491 = fieldWeight in 2264, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2264)
      0.14285715 = coord(2/14)
    
    Abstract
    State of the art review of information retrieval systems, defined as systems retrieving documents as opposed to numerical data. Explains the classic Cranfield studies that have served as a standard for retrieval testing since the 1960s and discusses the Cranfield model and its relevance based measures of retrieval effectiveness. Details some of the problems with the Cranfield instruments and issues of validity and reliability, generalizability, usefulness and basic concepts. Discusses the evaluation of the Internet search engines in light of the Cranfield model, noting the very real differences between batch systems (Cranfield) and interactive systems (Internet). Because the Internet collection is not fixed, it is impossible to determine recall as a measure of retrieval effectiveness. Considers future directions in evaluating information retrieval systems
    Source
    Annual review of information science and technology. 32(1997), S.3-94
  7. Strzalkowski, T.; Perez-Carballo, J.: Natural language information retrieval : TREC-4 report (1996) 0.01
    0.008992559 = product of:
      0.062947914 = sum of:
        0.012107591 = weight(_text_:information in 3211) [ClassicSimilarity], result of:
          0.012107591 = score(doc=3211,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 3211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3211)
        0.050840326 = weight(_text_:retrieval in 3211) [ClassicSimilarity], result of:
          0.050840326 = score(doc=3211,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5671716 = fieldWeight in 3211, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=3211)
      0.14285715 = coord(2/14)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  8. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1996) 0.01
    0.008844766 = product of:
      0.06191336 = sum of:
        0.013980643 = weight(_text_:information in 6607) [ClassicSimilarity], result of:
          0.013980643 = score(doc=6607,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2687516 = fieldWeight in 6607, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6607)
        0.047932718 = weight(_text_:retrieval in 6607) [ClassicSimilarity], result of:
          0.047932718 = score(doc=6607,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 6607, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6607)
      0.14285715 = coord(2/14)
    
    Abstract
    Test sets for the document retrieval task composed of human relevance judgments have been constructed that allow one to compare human performance directly with that of automatic methods and that place absolute limits on performance by any method. Current retrieval systems are found to generate only about half of the information allowed by these absolute limits. The data suggest that most of the improvement possible within these limits can only be achieved by incorporating specific subject information into retrieval systems
    Source
    Information processing and management. 32(1996) no.5, S.515-527
  9. Wolfram, D.; Volz, A.; Dimitroff, A.: ¬The effect of linkage structure on retrieval performance in a hypertext-based bibliographic retrieval system (1996) 0.01
    0.008765062 = product of:
      0.06135543 = sum of:
        0.009988253 = weight(_text_:information in 6622) [ClassicSimilarity], result of:
          0.009988253 = score(doc=6622,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1920054 = fieldWeight in 6622, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6622)
        0.05136718 = weight(_text_:retrieval in 6622) [ClassicSimilarity], result of:
          0.05136718 = score(doc=6622,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5730491 = fieldWeight in 6622, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6622)
      0.14285715 = coord(2/14)
    
    Abstract
    Investigates how linkage environments in a hypertext based bibliographic retrieval system affect retrieval performance for novice and experienced searchers. 2 systems were tested: 1 with inter record linkages to authors and descriptors, and 1 that also included title and abstract keywords. No significant differences in retrieval performance and system usage were found for most search tests. The enhanced system did provide better performance where title and abstract keywords provided the most direct access to relevant records. The findings have implications for the design of bibliographic information retrieval systems using hypertext linkages
    Source
    Information processing and management. 32(1996) no.5, S.529-541
  10. ¬The Second Text Retrieval Conference : TREC-2 (1995) 0.01
    0.00876083 = product of:
      0.061325807 = sum of:
        0.0104854815 = weight(_text_:information in 1320) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=1320,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 1320, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1320)
        0.050840326 = weight(_text_:retrieval in 1320) [ClassicSimilarity], result of:
          0.050840326 = score(doc=1320,freq=16.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5671716 = fieldWeight in 1320, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1320)
      0.14285715 = coord(2/14)
    
    Abstract
    A special issue devoted to papers from the 2nd Text Retrieval Conference (TREC-2) held in Aug 93
    Content
    Enthält die Beiträge: HARMAN, D.: Overview of the Second Text Retrieval Conference (TREC-2); SPARCK JONES, K.: Reflections on TREC; BUCKLEY, C., J. ALLAN u. G. SALTON: Automatic routing and retrieval using SMART: TREC-2; CALLAN, J.P., W.B. CROFT u. J. BROGLIO: TREC and TIPSTER experiments with INQUERY; ROBERTSON, S.R., S. WALKER u. M.M. HANCOCK-BEAULIEU: Large test collection experiments on an operational, interactive system: OKAPI at TREC; ZOBEL, J., A. MOFFAT, R. WILKINSON u. R. SACKS-DAVIS: Efficient retrieval of partial documents; METTLER, M. u. F. NORDBY: TREC routing experiments with the TRW/Paracel Fast Data Finder; EVANS, D.A. u. R.G. LEFFERTS: CLARIT-TREC experiments; STRZALKOWSKI, T.: Natural language information retrieval; CAID, W.R., S.T. DUMAIS u. S.I. GALLANT: Learned vector-space models for document retrieval; BELKIN, N.J., P. KANTOR, E.A. FOX u. J.A. SHAW: Combining the evidence of multiple query representations for information retrieval
    Source
    Information processing and management. 31(1995) no.3, S.269-448
  11. Tague-Sutcliffe, J.M.: Some perspectives on the evaluation of information retrieval systems (1996) 0.01
    0.008716733 = product of:
      0.06101713 = sum of:
        0.014125523 = weight(_text_:information in 4163) [ClassicSimilarity], result of:
          0.014125523 = score(doc=4163,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 4163, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4163)
        0.046891607 = weight(_text_:retrieval in 4163) [ClassicSimilarity], result of:
          0.046891607 = score(doc=4163,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5231199 = fieldWeight in 4163, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4163)
      0.14285715 = coord(2/14)
    
    Abstract
    As an introduction to the papers in this special issue, some of the major problems facing investigators evaluating information retrieval systems are presented. These problems include the question of the necessity of using real users, as opposed to subject experts, in making relevance judgements; the possibility of evaluating individual components of the retrieval process, rather than the process as a whole; the kinds of aggregation that are appropriate for the measures used in evaluating systems; the value of an analytic or simulatory, as opposed to an experimental, approach in evaluating retrieval systems; the difficulties in evaluating interactive systems; and the kinds of generalization which are possible from information retrieval tests.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.1-3
  12. Wan, T.-L.; Evens, M.; Wan, Y.-W.; Pao, Y.-Y.: Experiments with automatic indexing and a relational thesaurus in a Chinese information retrieval system (1997) 0.01
    0.008716733 = product of:
      0.06101713 = sum of:
        0.014125523 = weight(_text_:information in 956) [ClassicSimilarity], result of:
          0.014125523 = score(doc=956,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 956, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=956)
        0.046891607 = weight(_text_:retrieval in 956) [ClassicSimilarity], result of:
          0.046891607 = score(doc=956,freq=10.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5231199 = fieldWeight in 956, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=956)
      0.14285715 = coord(2/14)
    
    Abstract
    This article describes a series of experiments with an interactive Chinese information retrieval system named CIRS and an interactive relational thesaurus. 2 important issues have been explored: whether thesauri enhance the retrieval effectiveness of Chinese documents, and whether automatic indexing can compete with manual indexing in a Chinese information retrieval system. Recall and precision are used to measure and evaluate the effectiveness of the system. Statistical analysis of the recall and precision measures suggests that the use of the relational thesaurus does improve the retrieval effectiveness both in the automatic indexing environment and in the manual indexing environment and that automatic indexing is at least as good as manual indexing
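The recall and precision measures used in evaluations like the one above can be sketched with invented document sets: precision is the fraction of retrieved documents that are relevant, recall the fraction of relevant documents that are retrieved.

```python
# Invented document sets for illustration
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d3", "d5"}

hits = retrieved & relevant                 # relevant documents retrieved
precision = len(hits) / len(retrieved)      # 2/4 = 0.5
recall = len(hits) / len(relevant)          # 2/3
print(precision, recall)
```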
    Source
    Journal of the American Society for Information Science. 48(1997) no.12, S.1086-1096
  13. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.008668196 = product of:
      0.060677372 = sum of:
        0.04194113 = weight(_text_:retrieval in 6418) [ClassicSimilarity], result of:
          0.04194113 = score(doc=6418,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 6418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=6418)
        0.018736245 = product of:
          0.056208733 = sum of:
            0.056208733 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.056208733 = score(doc=6418,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Source
    Online. 22(1998) no.6, S.57-58
  14. Harman, D.K.: ¬The first text retrieval conference : TREC-1, 1992 (1993) 0.01
    0.008478267 = product of:
      0.059347864 = sum of:
        0.011415146 = weight(_text_:information in 1317) [ClassicSimilarity], result of:
          0.011415146 = score(doc=1317,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 1317, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1317)
        0.047932718 = weight(_text_:retrieval in 1317) [ClassicSimilarity], result of:
          0.047932718 = score(doc=1317,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 1317, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=1317)
      0.14285715 = coord(2/14)
    
    Abstract
    Reports on the 1st Text Retrieval Conference (TREC-1) held in Rockville, MD, 4-6 Nov. 1992. The TREC experiment is being run by the National Institute of Standards and Technology to allow information retrieval researchers to scale up from small collections of data to larger sized experiments. Groups of researchers have been provided with text documents compressed on CD-ROM. They used experimental retrieval systems to search the text and evaluate the results
    Source
    Information processing and management. 29(1993) no.4, S.411-414
  15. Dunlop, M.D.; Johnson, C.W.; Reid, J.: Exploring the layers of information retrieval evaluation (1998) 0.01
    0.008247707 = product of:
      0.057733946 = sum of:
        0.015792815 = weight(_text_:information in 3762) [ClassicSimilarity], result of:
          0.015792815 = score(doc=3762,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3035872 = fieldWeight in 3762, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
        0.04194113 = weight(_text_:retrieval in 3762) [ClassicSimilarity], result of:
          0.04194113 = score(doc=3762,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 3762, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
      0.14285715 = coord(2/14)
    
    Abstract
    Presents current work on modelling interactive information retrieval systems and users' interactions with them. Analyzes the papers in this special issue in the context of evaluation in information retrieval (IR) by examining the different layers at which IR use could be evaluated. IR poses the double evaluation problem of evaluating both the underlying system effectiveness and the overall ability of the system to aid users. The papers look at different issues in combining human-computer interaction (HCI) research with IR research and provide insights into the problem of evaluating the information seeking process
    Footnote
    Contribution to a special section of articles related to human-computer interaction and information retrieval
  16. Harman, D.K.: ¬The TREC conferences (1995) 0.01
    0.008090839 = product of:
      0.05663587 = sum of:
        0.014268933 = weight(_text_:information in 1932) [ClassicSimilarity], result of:
          0.014268933 = score(doc=1932,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27429342 = fieldWeight in 1932, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1932)
        0.042366937 = weight(_text_:retrieval in 1932) [ClassicSimilarity], result of:
          0.042366937 = score(doc=1932,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.47264296 = fieldWeight in 1932, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1932)
      0.14285715 = coord(2/14)
    
    Footnote
    Wiederabgedruckt in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufman 1997. S.247-256.
    Source
    Hypertext - Information Retrieval - Multimedia: HIM '95. Synergieeffekte elektronischer Informationssysteme. Hrsg.: R. Kuhlen u. M. Rittberger
  17. Harter, S.P.: Search term combinations and retrieval overlap : a proposed methodology and case study (1990) 0.01
    0.008009522 = product of:
      0.05606665 = sum of:
        0.014125523 = weight(_text_:information in 339) [ClassicSimilarity], result of:
          0.014125523 = score(doc=339,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=339)
        0.04194113 = weight(_text_:retrieval in 339) [ClassicSimilarity], result of:
          0.04194113 = score(doc=339,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=339)
      0.14285715 = coord(2/14)
    
    Source
    Journal of the American Society for Information Science. 41(1990) no.2, S.132-146
  18. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1999) 0.01
    0.008009522 = product of:
      0.05606665 = sum of:
        0.014125523 = weight(_text_:information in 4539) [ClassicSimilarity], result of:
          0.014125523 = score(doc=4539,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 4539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4539)
        0.04194113 = weight(_text_:retrieval in 4539) [ClassicSimilarity], result of:
          0.04194113 = score(doc=4539,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 4539, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=4539)
      0.14285715 = coord(2/14)
    
    Source
    Encyclopedia of library and information science. Vol.64, [=Suppl.27]
  19. Gillman, P.: Text retrieval (1998) 0.01
    0.008000636 = product of:
      0.056004446 = sum of:
        0.008071727 = weight(_text_:information in 1502) [ClassicSimilarity], result of:
          0.008071727 = score(doc=1502,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
        0.047932718 = weight(_text_:retrieval in 1502) [ClassicSimilarity], result of:
          0.047932718 = score(doc=1502,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 1502, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.14285715 = coord(2/14)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW ignore all the serious work done on information retrieval over the past four decades
  20. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.008000636 = product of:
      0.056004446 = sum of:
        0.008071727 = weight(_text_:information in 5689) [ClassicSimilarity], result of:
          0.008071727 = score(doc=5689,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 5689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
        0.047932718 = weight(_text_:retrieval in 5689) [ClassicSimilarity], result of:
          0.047932718 = score(doc=5689,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.5347345 = fieldWeight in 5689, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.14285715 = coord(2/14)
    
    Abstract
    Reports an evaluation of three methods for the expansion of natural language queries in ranked-output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches
    Source
    Journal of information science. 18(1992) no.2, S.139-147
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
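Of the three expansion methods named in entry 20's abstract, the Soundex variant is the easiest to make concrete: terms are grouped by a four-character phonetic code and a query term is expanded with every indexed term sharing its code. A minimal sketch of the classic Soundex coding (an illustration of the general technique, not the authors' implementation):

```python
def soundex(word: str) -> str:
    """Classic Soundex: first letter plus up to three digits for later consonants."""
    digits = {c: d for group, d in
              [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
               ("l", "4"), ("mn", "5"), ("r", "6")]
              for c in group}
    word = word.lower()
    code = word[0].upper()
    prev = digits.get(word[0], "")
    for ch in word[1:]:
        d = digits.get(ch, "")
        if d and d != prev:          # skip repeats of the same code
            code += d
        if ch not in "hw":           # 'h'/'w' do not separate equal codes
            prev = d
    return (code + "000")[:4]        # pad or truncate to four characters

# Terms sharing a code would be merged at expansion time, e.g.
# soundex("Robert") == soundex("Rupert") == "R163"
```

Under this scheme phonetically similar spellings collapse to one code; the abstract's finding is that this expansion, like the co-occurrence and string-similarity variants, did not significantly outperform unexpanded searches.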

Types

  • a 183
  • s 6
  • m 3
  • el 1
  • r 1