Search (3 results, page 1 of 1)

  • author_ss:"Sparck Jones, K."
  • theme_ss:"Retrievalstudien"
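
The ClassicSimilarity explain trees under each hit suggest a Lucene/Solr-backed catalogue. As a rough illustration only, the request behind this page might look like the sketch below; the host, core name, and handler path are placeholders, while the query term "light" and the two filters are taken from the explain output and the active facets above.

    # Hypothetical reconstruction of the search request, assuming a Solr backend.
    # Host, core name ("catalogue") and handler ("select") are placeholders.
    import requests

    params = {
        "q": "light",                           # scored term, per weight(_text_:light ...) below
        "fq": [
            'author_ss:"Sparck Jones, K."',     # active facet filters listed above
            'theme_ss:"Retrievalstudien"',
        ],
        "debugQuery": "true",                   # asks Solr to attach per-hit score explanations
    }
    r = requests.get("http://localhost:8983/solr/catalogue/select", params=params)
    print(r.json()["response"]["numFound"])     # would report 3 for the page shown here
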
  1. Sparck Jones, K.: Reflections on TREC : TREC-2 (1995) 0.04
    0.03726713 = product of:
      0.07453426 = sum of:
        0.07453426 = product of:
          0.14906852 = sum of:
            0.14906852 = weight(_text_:light in 1916) [ClassicSimilarity], result of:
              0.14906852 = score(doc=1916,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.51047 = fieldWeight in 1916, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1916)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Discusses the TREC programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise; characterises the methods of indexing and retrieval being tested within it, in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research.
  2. Sparck Jones, K.; Rijsbergen, C.J. van: Progress in documentation : Information retrieval test collection (1976) 0.03
    0.03260874 = product of:
      0.06521748 = sum of:
        0.06521748 = product of:
          0.13043496 = sum of:
            0.13043496 = weight(_text_:light in 4161) [ClassicSimilarity], result of:
              0.13043496 = score(doc=4161,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.44666123 = fieldWeight in 4161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many retrieval experiments have been based on inadequate test collections, and current research is hampered by the lack of proper collections. This short review does not attempt a fully documented survey of all the collections used in the past decade: hopefully representative examples have been studied to throw light on the requirements test collections should meet, to show how past collections have been defective, and to suggest guidelines for a future "ideal" test collection. The specifications for this collection can be taken as an indirect comment on our present state of knowledge of major retrieval system variables, and on experience in conducting experiments.
  3. Sparck Jones, K.: Reflections on TREC (1997) 0.03
    0.027950348 = product of:
      0.055900697 = sum of:
        0.055900697 = product of:
          0.11180139 = sum of:
            0.11180139 = weight(_text_:light in 580) [ClassicSimilarity], result of:
              0.11180139 = score(doc=580,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.3828525 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.046875 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses the Text REtrieval Conferences (TREC) programme as a major enterprise in information retrieval research. It reviews its structure as an evaluation exercise; characterises the methods of indexing and retrieval being tested within it, in terms of the approaches to system performance factors these represent; analyses the test results for solid, overall conclusions that can be drawn from them; and, in the light of the particular features of the test data, assesses TREC both for generally applicable findings that emerge from it and for directions it offers for future research.
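
All three explain trees above follow the same ClassicSimilarity (classic Lucene TF-IDF) arithmetic, and they share every factor except fieldNorm, so the ranking is driven by field-length normalisation alone. A minimal Python sketch, reusing only the figures reported in the explain output (an illustrative re-computation, not the search engine's own code):

    import math

    # Constants common to all three hits, taken from the explain trees above.
    freq       = 2.0          # termFreq of "light" in the matched field
    doc_freq   = 372          # docFreq from the idf(...) lines
    max_docs   = 44218        # maxDocs from the idf(...) lines
    query_norm = 0.050563898  # queryNorm
    coord      = 0.5 * 0.5    # the two nested coord(1/2) factors

    tf  = math.sqrt(freq)                              # 1.4142135 = tf(freq=2.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 5.7753086 = idf(docFreq=372, maxDocs=44218)

    # Only fieldNorm differs between the three documents.
    for doc, field_norm in [(1916, 0.0625), (4161, 0.0546875), (580, 0.046875)]:
        query_weight = idf * query_norm            # 0.2920221 = queryWeight
        field_weight = tf * idf * field_norm       # 0.51047 / 0.44666 / 0.38285 = fieldWeight
        score = coord * query_weight * field_weight
        print(doc, f"{score:.6f}")                 # ~0.037267 / 0.032609 / 0.027950

In ClassicSimilarity the fieldNorm is 1/sqrt(field length), lossily encoded at index time, so among these three records the hit with the shortest matching field simply ranks first.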