Search (12 results, page 1 of 1)

  • author_ss:"Smeaton, A.F."
  1. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.05
    0.046197 = product of:
      0.092394 = sum of:
        0.06271934 = weight(_text_:data in 5804) [ClassicSimilarity], result of:
          0.06271934 = score(doc=5804,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 5804, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 5804) [ClassicSimilarity], result of:
              0.05934933 = score(doc=5804,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 5804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5804)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
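     The breakdown above is Lucene's ClassicSimilarity (TF-IDF) arithmetic: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm, and coord() scales a clause by the fraction of its sub-clauses that matched. A minimal sketch that re-derives the figures shown for this result from the constants in the explanation (not a general re-implementation of Lucene):

       import math

       # Constants copied from the score explanation above.
       IDF_DATA, IDF_PROCESSING = 3.1620505, 4.048147
       QUERY_NORM = 0.046827413
       FIELD_NORM = 0.0546875

       def term_weight(freq, idf):
           query_weight = idf * QUERY_NORM                     # queryWeight = idf * queryNorm
           field_weight = math.sqrt(freq) * idf * FIELD_NORM   # fieldWeight = tf * idf * fieldNorm
           return query_weight * field_weight

       w_data = term_weight(6.0, IDF_DATA)                     # ~0.06271934
       w_processing = term_weight(2.0, IDF_PROCESSING) * 0.5   # inner coord(1/2)
       score = (w_data + w_processing) * 0.5                   # outer coord(2/4)
       print(round(score, 6))                                  # ~0.046197, as displayed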
    
    Abstract
     A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points, or thresholds, are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data where features are introduced into the retrieval process to improve response time. These features reduce response time while maintaining the same level of retrieval effectiveness.
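     One common way to realise such cut-offs, given here only as an illustrative sketch under the assumption of weight-ordered posting lists (not necessarily the variant evaluated in the paper), is to stop reading a term's postings once their weights fall below a fraction of that term's best posting:

       def ranked_retrieval(query_terms, postings, threshold=0.25):
           # postings[term] is a list of (doc_id, weight) pairs sorted by descending weight;
           # threshold is the cut-off expressed as a fraction of the best weight for the term.
           scores = {}
           for term in query_terms:
               plist = postings.get(term, [])
               if not plist:
                   continue
               cutoff = threshold * plist[0][1]
               for doc_id, weight in plist:
                   if weight < cutoff:   # postings are sorted, so the rest are below the cut-off
                       break
                   scores[doc_id] = scores.get(doc_id, 0.0) + weight
           return sorted(scores.items(), key=lambda item: item[1], reverse=True)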
  2. Richardson, R.; Smeaton, A.F.; Murphy, J.: Using WordNet for conceptual distance measurement (1996) 0.04
    0.042462487 = product of:
      0.08492497 = sum of:
        0.06271934 = weight(_text_:data in 6965) [ClassicSimilarity], result of:
          0.06271934 = score(doc=6965,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 6965, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6965)
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 6965) [ClassicSimilarity], result of:
              0.044411276 = score(doc=6965,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 6965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6965)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Reports results of research to develop an information retrieval technique employing a conceptual distance measure between words and based on a large thesaurus. The technique is specifically designed for data sharing in large-scale autonomous distributed federated databases (FDBS). The prototype federated dictionary system, FEDDICT, stores information on the location of data sets within the FDBS and on semantic relationships existing between these data sets. WordNet is used and tested as the medium for building and operating FEDDICT.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
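     A minimal sketch of a conceptual distance between two words computed over WordNet, as described in the abstract above, assuming NLTK's WordNet interface; the path-length-based measure below is an illustrative assumption, not the measure implemented in FEDDICT:

       # Requires: pip install nltk, then nltk.download('wordnet') once.
       from nltk.corpus import wordnet as wn

       def conceptual_distance(word1, word2):
           # Smallest path-based distance over all sense pairs of the two words.
           best = None
           for s1 in wn.synsets(word1):
               for s2 in wn.synsets(word2):
                   sim = s1.path_similarity(s2)    # 1 / (1 + shortest path length), or None
                   if sim:
                       dist = (1.0 / sim) - 1.0    # convert similarity back to an edge count
                       best = dist if best is None else min(best, dist)
           return best

       print(conceptual_distance("invoice", "payment"))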
  3. Sheridan, P.; Smeaton, A.F.: The application of morpho-syntactic language processing to effective phrase matching (1992) 0.02
    0.020983158 = product of:
      0.08393263 = sum of:
        0.08393263 = product of:
          0.16786526 = sum of:
            0.16786526 = weight(_text_:processing in 6575) [ClassicSimilarity], result of:
              0.16786526 = score(doc=6575,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.8855322 = fieldWeight in 6575, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6575)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 28(1992) no.3, S.349-369
  4. Smeaton, A.F.: Natural language processing used in information retrieval tasks : an overview of achievements to date (1995) 0.01
    0.014837332 = product of:
      0.05934933 = sum of:
        0.05934933 = product of:
          0.11869866 = sum of:
            0.11869866 = weight(_text_:processing in 1265) [ClassicSimilarity], result of:
              0.11869866 = score(doc=1265,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.6261658 = fieldWeight in 1265, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1265)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. Smeaton, A.F.: TREC-6: personal highlights (2000) 0.01
    0.014837332 = product of:
      0.05934933 = sum of:
        0.05934933 = product of:
          0.11869866 = sum of:
            0.11869866 = weight(_text_:processing in 6439) [ClassicSimilarity], result of:
              0.11869866 = score(doc=6439,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.6261658 = fieldWeight in 6439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6439)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 36(2000) no.1, S.87-94
  6. Smeaton, A.F.: Progress in the application of natural language processing to information retrieval tasks (1992) 0.01
    0.012717713 = product of:
      0.05087085 = sum of:
        0.05087085 = product of:
          0.1017417 = sum of:
            0.1017417 = weight(_text_:processing in 7080) [ClassicSimilarity], result of:
              0.1017417 = score(doc=7080,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.53671354 = fieldWeight in 7080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7080)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  7. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.08882255 = score(doc=2134,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2001 13:32:22
  8. Keenan, S.; Smeaton, A.F.; Keogh, G.: The effect of pool depth on system evaluation in TREC (2001) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 5908) [ClassicSimilarity], result of:
          0.03657866 = score(doc=5908,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 5908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5908)
      0.25 = coord(1/4)
    
    Abstract
     The TREC benchmarking exercise for information retrieval (IR) experiments has provided a forum and an opportunity for IR researchers to evaluate the performance of their approaches to the IR task and has resulted in improvements in IR effectiveness. Typically, retrieval performance has been measured in terms of precision and recall, and comparisons between different IR approaches have been based on these measures. These measures are in turn dependent on the so-called "pool depth" used to discover relevant documents. Whereas there is evidence to suggest that the pool depth used for TREC evaluations adequately identifies the relevant documents in the entire test data collection, we consider how it affects the evaluations of individual systems. The data used comes from the Sixth TREC conference, TREC-6. By fitting appropriate regression models we explore whether different pool depths confer advantages or disadvantages on different retrieval systems when they are compared. As a consequence of this model fitting, a pair of measures for each retrieval run emerges, related to precision and recall. For each system, these give an extrapolation for the number of relevant documents the system would have been deemed to have retrieved if an indefinitely large pool size had been used, and also a measure of the sensitivity of each system to pool size. We concur that, even on the basis of analyses of individual systems, the pool depth of 100 used by TREC is adequate.
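     A minimal sketch of the kind of extrapolation described above, assuming a saturating curve of relevant documents found against pool depth; the functional form, the scipy-based fit and the observation counts are illustrative assumptions, not the regression models of the paper:

       import numpy as np
       from scipy.optimize import curve_fit

       def saturating(depth, r_inf, sensitivity):
           # rel(depth) = r_inf * (1 - exp(-depth / sensitivity)); r_inf extrapolates to an
           # indefinitely large pool, sensitivity measures how strongly the run depends on depth.
           return r_inf * (1.0 - np.exp(-depth / sensitivity))

       depths = np.array([10.0, 20.0, 50.0, 100.0])     # pool depths sampled
       rel_found = np.array([32.0, 51.0, 78.0, 94.0])   # hypothetical relevant-found counts for one run

       (r_inf, sensitivity), _ = curve_fit(saturating, depths, rel_found, p0=[100.0, 30.0])
       print(f"extrapolated relevant documents: {r_inf:.1f}, pool-depth sensitivity: {sensitivity:.1f}")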
  9. Smeaton, A.F.: Prospects for intelligent, language-based information retrieval (1991) 0.01
    0.007418666 = product of:
      0.029674664 = sum of:
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 3700) [ClassicSimilarity], result of:
              0.05934933 = score(doc=3700,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 3700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3700)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Current approaches to text retrieval based on indexing by words or index terms and on retrieving by specifying a Boolean combination of keywords are well known, as are their limitations. Statistical approaches to retrieval, as exemplified in commercial products like STATUS/IQ and Personal Librarian, are slightly better but still have their own weaknesses. Approaches to the indexing and retrieval of text based on techniques of automatic natural language processing (NLP) may soon start to realise their potential in terms of improving the quality and effectiveness of information retrieval. Examines some of the current attempts at using various NLP techniques in both the indexing and retrieval operations.
  10. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: The scholarly impact of TRECVid (2003-2009) (2011) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 4363) [ClassicSimilarity], result of:
          0.02586502 = score(doc=4363,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 4363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
      0.25 = coord(1/4)
    
    Abstract
     This paper reports on an investigation into the scholarly impact of the TRECVid (TREC Video Retrieval Evaluation) benchmarking conferences between 2003 and 2009. The contribution of TRECVid to research in video retrieval is assessed by analyzing publication content to show the development of techniques and approaches over time, and by analyzing publication impact through publication numbers and citation analysis. Popular conference and journal venues for TRECVid publications are identified in terms of number of citations received. For a selection of participants at different career stages, the relative importance of TRECVid publications in terms of citations vis-à-vis their other publications is investigated. TRECVid, as an evaluation conference, provides data on which research teams 'scored' highly against the evaluation criteria, and the relationship between 'top scoring' teams at TRECVid and 'top scoring' papers in terms of citations is analyzed. A strong relationship was found between 'success' at TRECVid and 'success' in citations, both for high-scoring and low-scoring teams. The implications of the study in terms of the value of TRECVid as a research activity, and the value of bibliometric analysis as a research evaluation tool, are discussed.
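     A minimal sketch of one way to quantify such a relationship, assuming a rank correlation between per-team TRECVid evaluation scores and the citation counts of their TRECVid papers; both the numbers and the choice of Spearman's rho are illustrative assumptions, not the bibliometric method of the paper:

       from scipy.stats import spearmanr

       trecvid_scores = [0.71, 0.64, 0.52, 0.47, 0.33, 0.21]   # hypothetical per-team evaluation scores
       citation_counts = [310, 250, 140, 160, 60, 45]          # hypothetical citations for the same teams

       rho, p_value = spearmanr(trecvid_scores, citation_counts)
       print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # rho near 1 indicates a strong relationship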
  11. Kelledy, F.; Smeaton, A.F.: Signature files and beyond (1996) 0.00
    0.0047583506 = product of:
      0.019033402 = sum of:
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 6973) [ClassicSimilarity], result of:
              0.038066804 = score(doc=6973,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 6973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6973)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  12. O'Donnell, R.; Smeaton, A.F.: A linguistic approach to information retrieval (1996) 0.00
    0.0047583506 = product of:
      0.019033402 = sum of:
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 2575) [ClassicSimilarity], result of:
              0.038066804 = score(doc=2575,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 2575, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2575)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon