Search (5 results, page 1 of 1)

  • author_ss:"Smeaton, A.F."
  • theme_ss:"Retrievalstudien"
  1. Smeaton, A.F.; Harman, D.: The TREC experiments and their impact on Europe (1997) 0.00
    0.003148173 = product of:
      0.018889038 = sum of:
        0.018889038 = weight(_text_:in in 7702) [ClassicSimilarity], result of:
          0.018889038 = score(doc=7702,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.31810042 = fieldWeight in 7702, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=7702)
      0.16666667 = coord(1/6)
    
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used were massive and the groups participating in the collaborative evaluation were among the main organizations in the field. Reviews the findings of TREC, the way in which it operates and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises.
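  The relevance breakdown shown for result 1 is Lucene explain output for the ClassicSimilarity (TF-IDF) model, and its total can be recomputed from the values it lists. A minimal sketch in Python, assuming only that tf is the square root of the term frequency (which matches the 3.7416575 shown for freq = 14):

    import math

    # Values copied from the explain tree for doc 7702 above.
    freq = 14.0                # termFreq of "in" in the document
    idf = 1.3602545            # idf(docFreq=30841, maxDocs=44218)
    query_norm = 0.043654136   # queryNorm
    field_norm = 0.0625        # fieldNorm(doc=7702)
    coord = 1.0 / 6.0          # coord(1/6): one of six query clauses matched

    tf = math.sqrt(freq)                    # 3.7416575
    query_weight = idf * query_norm         # 0.059380736 (queryWeight)
    field_weight = tf * idf * field_norm    # 0.31810042 (fieldWeight)
    score = coord * query_weight * field_weight
    print(round(score, 6))                  # 0.003148, the total reported for this entry

  The same calculation, using each entry's own freq and fieldNorm values, reproduces the scores of the other four results as well.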
  2. Keenan, S.; Smeaton, A.F.; Keogh, G.: The effect of pool depth on system evaluation in TREC (2001) 0.00
    0.001821651 = product of:
      0.010929906 = sum of:
        0.010929906 = weight(_text_:in in 5908) [ClassicSimilarity], result of:
          0.010929906 = score(doc=5908,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 5908, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5908)
      0.16666667 = coord(1/6)
    
    Abstract
    The TREC benchmarking exercise for information retrieval (IR) experiments has provided a forum and an opportunity for IR researchers to evaluate the performance of their approaches to the IR task and has resulted in improvements in IR effectiveness. Typically, retrieval performance has been measured in terms of precision and recall, and comparisons between different IR approaches have been based on these measures. These measures are in turn dependent on the so-called "pool depth" used to discover relevant documents. Whereas there is evidence to suggest that the pool depth used for TREC evaluations adequately identifies the relevant documents in the entire test data collection, we consider how it affects the evaluations of individual systems. The data used come from the Sixth TREC conference, TREC-6. By fitting appropriate regression models we explore whether different pool depths confer advantages or disadvantages on different retrieval systems when they are compared. As a consequence of this model fitting, a pair of measures for each retrieval run, which are related to precision and recall, emerges. For each system, these give an extrapolation for the number of relevant documents the system would have been deemed to have retrieved if an indefinitely large pool size had been used, and also a measure of the sensitivity of each system to pool size. We concur that, even on the basis of analyses of individual systems, the pool depth of 100 used by TREC is adequate.
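  The abstract does not give the exact regression models fitted, but the extrapolation it describes can be illustrated with a hypothetical saturating fit: the asymptote plays the role of the relevant documents a run would be credited with at an indefinitely large pool, and the rate parameter the run's sensitivity to pool depth. Both the functional form and the data below are assumptions for illustration, not the authors' model or TREC-6 figures.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical counts of relevant documents found for one run at increasing pool depths.
    depths = np.array([10, 20, 40, 60, 80, 100], dtype=float)
    rel_found = np.array([48, 71, 95, 107, 113, 117], dtype=float)

    def saturating(d, r_inf, k):
        # r_inf: extrapolated relevant documents at an indefinitely large pool
        # k: how quickly the run approaches that asymptote (sensitivity to pool depth)
        return r_inf * (1.0 - np.exp(-k * d))

    (r_inf, k), _ = curve_fit(saturating, depths, rel_found, p0=(120.0, 0.05))
    print(f"extrapolated relevant documents: {r_inf:.1f}, depth sensitivity: {k:.3f}")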
  3. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: The scholarly impact of TRECVid (2003-2009) (2011) 0.00
    0.0016629322 = product of:
      0.009977593 = sum of:
        0.009977593 = weight(_text_:in in 4363) [ClassicSimilarity], result of:
          0.009977593 = score(doc=4363,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 4363, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper reports on an investigation into the scholarly impact of the TRECVid (Text Retrieval and Evaluation Conference, Video Retrieval Evaluation) benchmarking conferences between 2003 and 2009. The contribution of TRECVid to research in video retrieval is assessed by analyzing publication content to show the development of techniques and approaches over time and by analyzing publication impact through publication numbers and citation analysis. Popular conference and journal venues for TRECVid publications are identified in terms of number of citations received. For a selection of participants at different career stages, the relative importance of TRECVid publications in terms of citations vis-à-vis their other publications is investigated. TRECVid, as an evaluation conference, provides data on which research teams 'scored' highly against the evaluation criteria, and the relationship between 'top scoring' teams at TRECVid and the 'top scoring' papers in terms of citations is analyzed. A strong relationship was found between 'success' at TRECVid and 'success' at citations, both for high scoring and low scoring teams. The implications of the study in terms of the value of TRECVid as a research activity, and the value of bibliometric analysis as a research evaluation tool, are discussed.
  4. Smeaton, A.F.; Kelledy, L.; O'Donnell, R.: TREC-4 experiments at Dublin City University : thresholding posting lists, query expansion with WordNet and POS tagging of Spanish (1996) 0.00
    0.0014873719 = product of:
      0.008924231 = sum of:
        0.008924231 = weight(_text_:in in 7000) [ClassicSimilarity], result of:
          0.008924231 = score(doc=7000,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.15028831 = fieldWeight in 7000, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=7000)
      0.16666667 = coord(1/6)
    
    Theme
    Semantic environment in indexing and retrieval
  5. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.00
    0.0014724231 = product of:
      0.008834538 = sum of:
        0.008834538 = weight(_text_:in in 5804) [ClassicSimilarity], result of:
          0.008834538 = score(doc=5804,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14877784 = fieldWeight in 5804, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.16666667 = coord(1/6)
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data in which features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness.
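  As a minimal sketch of the general thresholding idea described above (not the authors' specific scheme), the snippet below keeps each term's postings ordered by descending within-document weight and accumulates query scores only from the top-ranked postings, trading a small amount of effectiveness for faster processing. The index contents and the cut-off are hypothetical.

    from collections import defaultdict

    # Hypothetical inverted index: term -> postings (doc_id, weight), sorted by weight descending.
    postings = {
        "trec": [("d3", 2.4), ("d1", 1.9), ("d7", 1.2), ("d9", 0.4), ("d2", 0.1)],
        "pool": [("d1", 2.1), ("d5", 1.5), ("d3", 0.8), ("d8", 0.2)],
    }

    def ranked(query_terms, threshold):
        """Score documents using only the top `threshold` postings of each query term."""
        acc = defaultdict(float)
        for term in query_terms:
            for doc_id, weight in postings.get(term, [])[:threshold]:
                acc[doc_id] += weight
        return sorted(acc.items(), key=lambda item: item[1], reverse=True)

    print(ranked(["trec", "pool"], threshold=3))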