Search (4 results, page 1 of 1)

  Active filters:
  • author_ss:"Robertson, S."
  • theme_ss:"Retrievalstudien"
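
  The facet filters above act as Solr filter queries on the multivalued string fields author_ss and theme_ss. A minimal sketch of the kind of request behind a page like this, assuming a stock /select handler (the endpoint URL and the original q are assumptions; debugQuery=true is the standard Solr parameter that produces the per-result score breakdowns shown below):

      import requests

      # Hypothetical endpoint; the fq field names are taken from the filters above.
      params = {
          "q": "...",  # original query not shown on this page
          "fq": ['author_ss:"Robertson, S."', 'theme_ss:"Retrievalstudien"'],
          "debugQuery": "true",  # include Lucene score explanations in the response
      }
      resp = requests.get("https://example.org/solr/collection1/select", params=params)
      print(resp.json()["response"]["numFound"])  # 4 for this filtered result set
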
  1. Robertson, S.; Callan, J.: Routing and filtering (2005) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 4688) [ClassicSimilarity], result of:
          0.016657405 = score(doc=4688,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 4688, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4688)
      0.25 = coord(1/4)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  2. Robertson, S.: How Okapi came to TREC (2005) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 5087) [ClassicSimilarity], result of:
          0.014277775 = score(doc=5087,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 5087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=5087)
      0.25 = coord(1/4)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  3. Robertson, S.: On the history of evaluation in IR (2009) 0.00
    0.0033653039 = product of:
      0.013461215 = sum of:
        0.013461215 = weight(_text_:information in 3653) [ClassicSimilarity], result of:
          0.013461215 = score(doc=3653,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21943474 = fieldWeight in 3653, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3653)
      0.25 = coord(1/4)
    
    Abstract
    This paper is a personal take on the history of evaluation experiments in information retrieval. It describes some of the early experiments that were formative in our understanding, and goes on to discuss the current dominance of TREC (the Text REtrieval Conference) and to assess its impact.
    Source
    Information science in transition. Ed.: A. Gilchrist
  4. Beaulieu, M.; Robertson, S.; Rasmussen, E.: Evaluating interactive systems in TREC (1996) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 2998) [ClassicSimilarity], result of:
          0.011778563 = score(doc=2998,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 2998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2998)
      0.25 = coord(1/4)
    
    Abstract
    The TREC experiments were designed to allow large-scale laboratory testing of information retrieval techniques. As the experiments have progressed, groups within TREC have become increasingly interested in finding ways to allow user interaction without invalidating the experimental design. The development of an 'interactive track' within TREC to accommodate user interaction has required some modifications in the way the retrieval task is designed. In particular there is a need to simulate a realistic interactive searching task within a laboratory environment. Through successive interactive studies in TREC, the Okapi team at City University London has identified methodological issues relevant to this process. A diagnostic experiment was conducted as a follow-up to TREC searches which attempted to isolate the human and automatic contributions to query formulation and retrieval performance.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, pp.85-94
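
  The score breakdowns shown with each result are Lucene explain trees for ClassicSimilarity (TF-IDF) scoring: a matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(termFreq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the sum of term weights is then scaled by coord, here 1/4 because only one of four query clauses matched. A minimal sketch that recomputes result 1's score from the numbers in its tree (the helper name is illustrative; queryNorm is taken as given, since it depends on the full query):

      import math

      def explain_score(freq, idf, query_norm, field_norm, coord):
          # Lucene ClassicSimilarity: score = queryWeight * fieldWeight * coord
          tf = math.sqrt(freq)                   # tf(freq=2.0) = 1.4142135
          query_weight = idf * query_norm        # 1.7554779 * 0.034944877 ≈ 0.06134496
          field_weight = tf * idf * field_norm   # ≈ 0.27153665 (fieldWeight in doc 4688)
          return query_weight * field_weight * coord

      # Values copied verbatim from result 1's explain tree (doc 4688).
      score = explain_score(freq=2.0, idf=1.7554779, query_norm=0.034944877,
                            field_norm=0.109375, coord=0.25)
      print(f"{score:.9f}")  # ≈ 0.004164351, as displayed above

      # The idf in the tree is itself 1 + ln(maxDocs / (docFreq + 1)):
      print(1 + math.log(44218 / (20772 + 1)))   # ≈ 1.7554779

  The last digits can differ slightly from the explain output because Lucene computes scores in 32-bit floats while Python uses doubles.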