Search (4 results, page 1 of 1)

  • author_ss:"Beaulieu, M."
  • theme_ss:"Retrievalstudien"
  1. Beaulieu, M.: Approaches to user-based studies in information seeking and retrieval : a Sheffield perspective (2003) 0.01
    0.0058892816 = product of:
      0.023557127 = sum of:
        0.023557127 = weight(_text_:information in 4692) [ClassicSimilarity], result of:
          0.023557127 = score(doc=4692,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3840108 = fieldWeight in 4692, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4692)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 29(2003) no.4, pp.239-248
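    The relevance value after each title comes from a Lucene ClassicSimilarity (TF-IDF) explain trace. As an illustration only, and not the catalogue's own code, the minimal Python sketch below reassembles the score of the first hit from the values printed above; the variable names are ours, and freq is the number of occurrences of the query term "information" in the indexed field.

        import math

        # Values copied from the explain trace of hit 1 (doc 4692)
        idf = 1.7554779          # ClassicSimilarity idf = 1 + ln(maxDocs / (docFreq + 1))
        query_norm = 0.034944877 # queryNorm
        freq = 4.0               # term frequency of "information" in the field
        field_norm = 0.109375    # fieldNorm(doc=4692), field length normalisation
        coord = 0.25             # coord(1/4): one of four query clauses matched

        query_weight = idf * query_norm           # ~0.06134496
        tf = math.sqrt(freq)                      # 2.0 = tf(freq=4.0)
        field_weight = tf * idf * field_norm      # ~0.3840108
        term_score = query_weight * field_weight  # ~0.023557127
        print(coord * term_score)                 # ~0.0058892816, displayed rounded as 0.01

    The same arithmetic, with different freq and fieldNorm values, accounts for the scores of the remaining hits.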
  2. Robertson, S.E.; Walker, S.; Beaulieu, M.: Experimentation as a way of life : Okapi at TREC (2000) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 6030) [ClassicSimilarity], result of:
          0.014277775 = score(doc=6030,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 6030, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=6030)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 36(2000) no.1, pp.95-108
  3. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 2216) [ClassicSimilarity], result of:
          0.011778563 = score(doc=2216,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 2216, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2216)
      0.25 = coord(1/4)
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general-purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference), which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval still has a long way to go.
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  4. Beaulieu, M.; Robertson, S.; Rasmussen, E.: Evaluating interactive systems in TREC (1996) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 2998) [ClassicSimilarity], result of:
          0.011778563 = score(doc=2998,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 2998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2998)
      0.25 = coord(1/4)
    
    Abstract
    The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular, there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, pp.85-94