Search (5 results, page 1 of 1)

  • × theme_ss:"Retrievalstudien"
  • × author_ss:"Smeaton, A.F."
  1. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 5804) [ClassicSimilarity], result of:
          0.016657405 = score(doc=5804,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 5804, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.25 = coord(1/4)
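
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: the clause score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)); the summed clause scores are then multiplied by the coordination factor coord(1/4). A minimal Python sketch (constants copied from the explain output above; variable names are illustrative) reproduces the figures for this first result:

      import math

      # Constants taken from the explain output for doc 5804.
      max_docs = 44218
      doc_freq = 20772
      query_norm = 0.034944877
      field_norm = 0.0546875
      freq = 8.0

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~1.7554779
      tf = math.sqrt(freq)                             # ~2.828427
      query_weight = idf * query_norm                  # ~0.06134496
      field_weight = tf * idf * field_norm             # ~0.27153665
      clause_score = query_weight * field_weight       # ~0.016657405
      score = clause_score * 0.25                      # coord(1/4) -> ~0.004164351
      print(score)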
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data in which features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness.
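
    The abstract describes the mechanism only in outline; as a rough, hypothetical illustration of postings-list thresholding (the impact-ordered layout, the per-term cut-off, and all names below are assumptions, not the paper's actual scheme), term-at-a-time scoring might be capped like this:

      from collections import defaultdict

      def ranked_retrieval(query_terms, postings, threshold=1000):
          """Score documents term-at-a-time, reading at most `threshold`
          entries from each term's postings list. `postings` maps a term to
          a list of (doc_id, weight) pairs assumed to be sorted by descending
          weight, so truncation discards the least useful entries first."""
          scores = defaultdict(float)
          for term in query_terms:
              for doc_id, weight in postings.get(term, [])[:threshold]:
                  scores[doc_id] += weight
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    Lowering `threshold` reduces the amount of postings data processed per query, which is the response-time versus effectiveness trade-off the paper measures on TREC data.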
  2. Smeaton, A.F.: TREC-6: personal highlights (2000) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 6439) [ClassicSimilarity], result of:
          0.016657405 = score(doc=6439,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 6439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6439)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 36(2000) no.1, S.87-94
  3. Smeaton, A.F.; Harman, D.: The TREC experiments and their impact on Europe (1997) 0.00
    0.004121639 = product of:
      0.016486555 = sum of:
        0.016486555 = weight(_text_:information in 7702) [ClassicSimilarity], result of:
          0.016486555 = score(doc=7702,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2687516 = fieldWeight in 7702, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7702)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used in the research were massive, and the groups participating in the collaborative evaluation were among the main organizations in the field. Reviews the findings of TREC, the way in which it operates and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises.
    Source
    Journal of information science. 23(1997) no.2, S.169-174
  4. Keenan, S.; Smeaton, A.F.; Keogh, G.: The effect of pool depth on system evaluation in TREC (2001) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 5908) [ClassicSimilarity], result of:
          0.008413259 = score(doc=5908,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 5908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5908)
      0.25 = coord(1/4)
    
    Abstract
    The TREC benchmarking exercise for information retrieval (IR) experiments has provided a forum and an opportunity for IR researchers to evaluate the performance of their approaches to the IR task and has resulted in improvements in IR effectiveness. Typically, retrieval performance has been measured in terms of precision and recall, and comparisons between different IR approaches have been based on these measures. These measures are in turn dependent on the so-called "pool depth" used to discover relevant documents. Whereas there is evidence to suggest that the pool depth size used for TREC evaluations adequately identifies the relevant documents in the entire test data collection, we consider how it affects the evaluations of individual systems. The data used comes from the Sixth TREC conference, TREC-6. By fitting appropriate regression models we explore whether different pool depths confer advantages or disadvantages on different retrieval systems when they are compared. As a consequence of this model fitting, a pair of measures for each retrieval run, which are related to precision and recall, emerges. For each system, these give an extrapolation for the number of relevant documents the system would have been deemed to have retrieved if an indefinitely large pool size had been used, and also a measure of the sensitivity of each system to pool size. We concur that even on the basis of analyses of individual systems, the pool depth of 100 used by TREC is adequate.
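
    A minimal sketch of depth-k pooling as described above (the data layout and names are hypothetical; the paper's regression modelling is not reproduced):

      def build_pool(runs, depth=100):
          """Union of the top-`depth` documents from each system's ranked run."""
          pool = set()
          for ranking in runs.values():
              pool.update(ranking[:depth])
          return pool

      def relevant_retrieved(ranking, judged_relevant, cutoff=1000):
          """Judged-relevant documents a system returns within its top `cutoff`."""
          return sum(1 for doc in ranking[:cutoff] if doc in judged_relevant)

    Varying `depth` changes which documents are ever judged, and hence each system's measured precision and recall, which is the sensitivity to pool size the paper models.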
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.7, S.570-574
  5. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: The scholarly impact of TRECVid (2003-2009) (2011) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 4363) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=4363,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 4363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.613-627