Search (1 result, page 1 of 1)

  • author_ss:"Beaulieu, M."
  • author_ss:"Robertson, S."
  • theme_ss:"Retrievalstudien"
  1. Beaulieu, M.; Robertson, S.; Rasmussen, E.: Evaluating interactive systems in TREC (1996) 0.00
    0.00424829 = product of:
      0.01699316 = sum of:
        0.01699316 = weight(_text_:information in 2998) [ClassicSimilarity], result of:
          0.01699316 = score(doc=2998,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1920054 = fieldWeight in 2998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2998)
      0.25 = coord(1/4)
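The explain tree above is Lucene ClassicSimilarity (TF-IDF) output. As a sanity check, the arithmetic can be reproduced directly from the values shown: the field weight is tf × idf × fieldNorm, the query weight is idf × queryNorm, their product is the per-term score, and coord(1/4) scales it because only one of the four query terms matched. A minimal sketch, using only the numbers from the explain output:

```python
import math

# Values copied from the ClassicSimilarity explain output above.
freq = 4.0                # termFreq of "information" in doc 2998
idf = 1.7554779           # idf(docFreq=20772, maxDocs=44218)
query_norm = 0.050415643  # queryNorm
field_norm = 0.0546875    # fieldNorm(doc=2998)
coord = 0.25              # coord(1/4): 1 of 4 query terms matched

tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(freq) = 2.0
query_weight = idf * query_norm       # ~0.08850355
field_weight = tf * idf * field_norm  # ~0.1920054
score = query_weight * field_weight   # ~0.01699316
final_score = score * coord           # ~0.00424829
```

The idf value itself follows ClassicSimilarity's formula 1 + ln(maxDocs / (docFreq + 1)), i.e. 1 + ln(44218 / 20773) ≈ 1.75548, matching the explain output.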
    
    Abstract
    The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular, there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, pp.85-94