Search (18 results, page 1 of 1)

  • author_ss:"Smeaton, A.F."
  1. Smeaton, A.F.: Retrieving information from hypertext : issues and problems (1991) 0.01
    0.007138887 = product of:
      0.028555548 = sum of:
        0.028555548 = weight(_text_:information in 4278) [ClassicSimilarity], result of:
          0.028555548 = score(doc=4278,freq=18.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.46549135 = fieldWeight in 4278, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4278)
      0.25 = coord(1/4)
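    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring. A minimal sketch re-deriving the figures for this first hit, assuming the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))):

    ```python
    import math

    # Re-derive the explain tree for hit 1 (doc 4278, term "information").
    freq = 18.0                                   # termFreq within the field
    idf = 1.0 + math.log(44218 / (20772 + 1))     # ~ 1.7554779
    query_norm = 0.034944877
    field_norm = 0.0625
    coord = 0.25                                  # coord(1/4): 1 of 4 query clauses matched

    tf = math.sqrt(freq)                          # ~ 4.24264
    query_weight = idf * query_norm               # ~ 0.06134496
    field_weight = tf * idf * field_norm          # ~ 0.46549135
    score = query_weight * field_weight * coord   # ~ 0.00713889
    print(score)
    ```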
    
    Abstract
    Hypertext uses a browsing rather than a searching strategy. Hypertext systems have found applications in a number of areas. They give users a choice of information, but this can prove a drawback. Examines the effectiveness of hypertext as a way of retrieving information and reviews conventional information retrieval techniques. Considers previous attempts at combining information retrieval and hypertext, and outlines a prototype system developed to generate guided tours that direct users through hypertext to the information they have requested. Discusses how adding this kind of intelligent guidance to a hypertext system would affect its usability as an information system.
    Source
    European journal of information systems. 1(1991) no.4, S.239-247
    Theme
    Information
  2. Smeaton, A.F.: Information retrieval and hypertext : competing technologies or complementary access methods (1992) 0.01
    0.0067306077 = product of:
      0.02692243 = sum of:
        0.02692243 = weight(_text_:information in 7503) [ClassicSimilarity], result of:
          0.02692243 = score(doc=7503,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.43886948 = fieldWeight in 7503, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=7503)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 2(1992), S.221-233
  3. Agosti, M.; Smeaton, A.F.: Information retrieval and hypertext (1996) 0.01
    0.0066512655 = product of:
      0.026605062 = sum of:
        0.026605062 = weight(_text_:information in 497) [ClassicSimilarity], result of:
          0.026605062 = score(doc=497,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.43369597 = fieldWeight in 497, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=497)
      0.25 = coord(1/4)
    
    COMPASS
    Information retrieval
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Information retrieval
  4. Smeaton, A.F.: Natural language processing used in information retrieval tasks : an overview of achievements to date (1995) 0.01
    0.0058892816 = product of:
      0.023557127 = sum of:
        0.023557127 = weight(_text_:information in 1265) [ClassicSimilarity], result of:
          0.023557127 = score(doc=1265,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.3840108 = fieldWeight in 1265, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=1265)
      0.25 = coord(1/4)
    
    Source
    Encyclopedia of library and information science. Vol.55, [=Suppl.18]
  5. O'Donnell, R.; Smeaton, A.F.: ¬A linguistic approach to information retrieval (1996) 0.00
    0.004371658 = product of:
      0.017486632 = sum of:
        0.017486632 = weight(_text_:information in 2575) [ClassicSimilarity], result of:
          0.017486632 = score(doc=2575,freq=12.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2850541 = fieldWeight in 2575, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2575)
      0.25 = coord(1/4)
    
    Abstract
    An important aspect of information retrieval systems is domain independence, where the subject of the information is not restricted to certain domains of knowledge. The representation should be able to cover any topic, and although it does not involve any semantic knowledge, lexical and syntactic analysis of the text allows the representation to remain domain independent. Reports research at Dublin City University, Ireland, which concentrates on the lexical and syntactic levels of natural language analysis, and describes a domain-independent automatic information retrieval system which accesses a very large database of newspaper text from the Wall Street Journal. The system represents the text in the form of syntax trees, and these trees are used in the matching process (a toy illustration of tree matching follows). Reports early results from the study.
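    As a loose, purely hypothetical illustration of matching on syntactic structures (not the DCU system itself), a sketch in which pre-parsed trees are nested tuples scored by the number of shared subtrees:

    ```python
    # Toy tree matching: documents and queries are pre-parsed into nested
    # (head, modifiers...) tuples and scored by the count of shared subtrees.
    def subtrees(tree):
        if isinstance(tree, tuple):
            yield tree
            for child in tree[1:]:
                yield from subtrees(child)
        else:
            yield tree

    def tree_match(query_tree, doc_tree):
        q = set(map(repr, subtrees(query_tree)))
        d = set(map(repr, subtrees(doc_tree)))
        return len(q & d)

    query = ("retrieval", ("information",))
    doc = ("system", ("retrieval", ("information",)), ("large-scale",))
    # Shared: the whole query tree, ("information",), and the leaf "information" -> 3
    print(tree_match(query, doc))
    ```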
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  6. Sheridan, P.; Smeaton, A.F.: ¬The application of morpho-syntactic language processing to effective phrase matching (1992) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 6575) [ClassicSimilarity], result of:
          0.016657405 = score(doc=6575,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 6575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6575)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 28(1992) no.3, S.349-369
  7. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 5804) [ClassicSimilarity], result of:
          0.016657405 = score(doc=5804,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 5804, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.25 = coord(1/4)
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data in which features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness.
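    A minimal sketch of the general idea of thresholding, with postings sorted by descending weight so a cut-off limits how much of each list is processed (illustrative data and threshold, not the authors' TREC setup):

    ```python
    from collections import defaultdict

    def ranked_retrieval(query_terms, index, threshold=None):
        """index maps term -> list of (doc_id, weight), sorted by weight desc."""
        scores = defaultdict(float)
        for term in query_terms:
            postings = index.get(term, [])
            if threshold is not None:
                postings = postings[:threshold]   # process only the top postings
            for doc_id, weight in postings:
                scores[doc_id] += weight
        return sorted(scores.items(), key=lambda x: x[1], reverse=True)

    index = {"information": [("d1", 0.9), ("d2", 0.7), ("d3", 0.2)],
             "retrieval":   [("d2", 0.8), ("d3", 0.6)]}
    print(ranked_retrieval(["information", "retrieval"], index, threshold=2))
    ```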
  8. Richardson, R.; Smeaton, A.F.; Murphy, J.: Using WordNet for conceptual distance measurement (1996) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 6965) [ClassicSimilarity], result of:
          0.016657405 = score(doc=6965,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 6965, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6965)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of research to develop an information retrieval technique employing a conceptual distance measure between words, based on a large thesaurus. The technique is specifically designed for data sharing in large-scale autonomous distributed federated databases (FDBS). The prototype federated dictionary system, FEDDICT, stores information on the location of data sets within the FDBS and on semantic relationships existing between these data sets. WordNet is used and tested as the medium for building and operating FEDDICT.
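    A sketch of a path-based conceptual distance over WordNet, using NLTK's interface as an assumed stand-in for the FEDDICT machinery described above:

    ```python
    # Requires: nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    def conceptual_distance(word1, word2):
        """Shortest-path distance (in edges) between the closest senses of two words."""
        best = None
        for s1 in wn.synsets(word1):
            for s2 in wn.synsets(word2):
                sim = s1.path_similarity(s2)   # in (0, 1]; higher means closer
                if sim is not None and (best is None or sim > best):
                    best = sim
        # path_similarity = 1 / (path_length + 1), so invert to get edge count
        return None if best is None else 1.0 / best - 1.0

    print(conceptual_distance("car", "truck"))
    ```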
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  9. Smeaton, A.F.: TREC-6: personal highlights (2000) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 6439) [ClassicSimilarity], result of:
          0.016657405 = score(doc=6439,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 6439, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6439)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 36(2000) no.1, S.87-94
  10. Smeaton, A.F.; Harman, D.: ¬The TREC experiments and their impact on Europe (1997) 0.00
    0.004121639 = product of:
      0.016486555 = sum of:
        0.016486555 = weight(_text_:information in 7702) [ClassicSimilarity], result of:
          0.016486555 = score(doc=7702,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2687516 = fieldWeight in 7702, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7702)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the overall results of the TREC experiments in information retrieval, which differed from other information retrieval research projects in that the document collections used in the research were massive, and the groups participating in the collaborative evaluation are among the main organizations in the field. Reviews the findings of TREC, the way in which it operates and the specialist 'tracks' it supports, and concentrates on European involvement in TREC, examining the participants and the emergence of European TREC-like exercises.
    Source
    Journal of information science. 23(1997) no.2, S.169-174
  11. Smeaton, A.F.: Progress in the application of natural language processing to information retrieval tasks (1992) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 7080) [ClassicSimilarity], result of:
          0.014277775 = score(doc=7080,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 7080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=7080)
      0.25 = coord(1/4)
    
  12. Smeaton, A.F.: Prospects for intelligent, language-based information retrieval (1991) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 3700) [ClassicSimilarity], result of:
          0.011778563 = score(doc=3700,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 3700, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3700)
      0.25 = coord(1/4)
    
    Abstract
    Current approaches to text retrieval based on indexing by words or index terms and on retrieving by specifying a Boolean combination of keywords are well known, as are their limitations. Statistical approaches to retrieval, as exemplified in commercial products like STATUS/IQ and Personal Librarian, are slightly better but still have their own weaknesses. Approaches to the indexing and retrieval of text based on techniques of automatic natural language processing (NLP) may soon start to realise their potential in terms of improving the quality and effectiveness of information retrieval. Examines some of the current attempts at using various NLP techniques in both the indexing and retrieval operations
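    A minimal contrast between the two families mentioned above, Boolean keyword matching versus simple statistical ranking (toy data; the commercial systems named are not being modelled):

    ```python
    docs = {
        "d1": "natural language processing for information retrieval",
        "d2": "boolean retrieval with keyword indexing",
        "d3": "statistical ranking of retrieval results",
    }

    def boolean_and(query_terms):
        """Return only documents containing every query term (exact match)."""
        return [d for d, text in docs.items()
                if all(t in text.split() for t in query_terms)]

    def ranked(query_terms):
        """Rank documents by raw term frequency (graded relevance)."""
        scores = {d: sum(text.split().count(t) for t in query_terms)
                  for d, text in docs.items()}
        return sorted((d for d in scores if scores[d] > 0),
                      key=lambda d: scores[d], reverse=True)

    print(boolean_and(["retrieval", "indexing"]))  # ['d2']
    print(ranked(["retrieval", "indexing"]))       # ['d2', 'd1', 'd3']
    ```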
  13. Smeaton, A.F.: Indexing, browsing, and searching of digital video (2003) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 4274) [ClassicSimilarity], result of:
          0.010304097 = score(doc=4274,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 4274, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4274)
      0.25 = coord(1/4)
    
    Abstract
    Video is a communications medium that normally brings together moving pictures with a synchronized audio track into a discrete piece or pieces of information. A "piece" of video is variously referred to as a frame, a shot, a scene, a clip, a program, or an episode; these pieces are distinguished by their length and by their composition. We shall return to the definition of each of these in the section on automatically structuring and indexing digital video. In modern society, video is commonplace and is usually equated with television, movies, or home video produced by a video camera or camcorder. We also accept video recorded from closed-circuit TVs for security and surveillance as part of our daily lives. In short, video is ubiquitous. Digital video is, as the name suggests, the creation or capture of video information in digital format. Most video produced today, commercial, surveillance, or domestic, is produced in digital form, although the medium of video predates the development of digital computing by several decades. The essential nature of video has not changed with the advent of digital computing. It is still moving pictures and synchronized audio. However, the production methods and the end product have gone through significant evolution, in the last decade especially.
    Source
    Annual review of information science and technology. 38(2004), S.371-409
  14. Kelledy, F.; Smeaton, A.F.: Signature files and beyond (1996) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 6973) [ClassicSimilarity], result of:
          0.010095911 = score(doc=6973,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 6973, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6973)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  15. Richardson, R.; Smeaton, A.F.: Automatic word sense disambiguation in a KBIR application (1995) 0.00
    0.002379629 = product of:
      0.009518516 = sum of:
        0.009518516 = weight(_text_:information in 5796) [ClassicSimilarity], result of:
          0.009518516 = score(doc=5796,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1551638 = fieldWeight in 5796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5796)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the implementation and design of an automatic word sense disambiguator. The semantic tagger is used in an overall Knowledge Based Information Retrieval (KBIR) system which uses a WordNet-derived knowledge base (KB) and two independent semantic similarity estimators. The KB is used as a controlled vocabulary to represent documents and queries, and the semantic similarity estimators are employed to determine the degree of relatedness between the KB representations.
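    As a generic stand-in for the KBIR tagger (not its actual estimators), a sketch that picks the WordNet sense most similar to the surrounding context words:

    ```python
    # Requires: nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    def disambiguate(word, context_words):
        """Choose the sense of `word` with the highest summed path similarity
        to the senses of the context words."""
        best_sense, best_score = None, -1.0
        for sense in wn.synsets(word):
            score = 0.0
            for ctx in context_words:
                sims = [sense.path_similarity(s) or 0.0 for s in wn.synsets(ctx)]
                score += max(sims, default=0.0)
            if score > best_score:
                best_sense, best_score = sense, score
        return best_sense

    print(disambiguate("bank", ["river", "water", "shore"]))
    ```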
  16. Keenan, S.; Smeaton, A.F.; Keogh, G.: ¬The effect of pool depth on system evaluation in TREC (2001) 0.00
    0.0021033147 = product of:
      0.008413259 = sum of:
        0.008413259 = weight(_text_:information in 5908) [ClassicSimilarity], result of:
          0.008413259 = score(doc=5908,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13714671 = fieldWeight in 5908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5908)
      0.25 = coord(1/4)
    
    Abstract
    The TREC benchmarking exercise for information retrieval (IR) experiments has provided a forum and an opportunity for IR researchers to evaluate the performance of their approaches to the IR task and has resulted in improvements in IR effectiveness. Typically, retrieval performance has been measured in terms of precision and recall, and comparisons between different IR approaches have been based on these measures. These measures are in turn dependent on the so-called "pool depth" used to discover relevant documents. Whereas there is evidence to suggest that the pool depth size used for TREC evaluations adequately identifies the relevant documents in the entire test data collection, we consider how it affects the evaluations of individual systems. The data used comes from the Sixth TREC conference, TREC-6. By fitting appropriate regression models we explore whether different pool depths confer advantages or disadvantages on different retrieval systems when they are compared. As a consequence of this model fitting, a pair of measures for each retrieval run, which are related to precision and recall, emerge. For each system, these give an extrapolation for the number of relevant documents the system would have been deemed to have retrieved if an indefinitely large pool size had been used, and also a measure of the sensitivity of each system to pool size. We concur that even on the basis of analyses of individual systems, the pool depth of 100 used by TREC is adequate
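    A toy sketch of how pool depth feeds into precision and recall: relevance judgements exist only for documents that reached some run's top-k pool, so measured recall varies with k (hypothetical runs and judgements):

    ```python
    def judged_relevant(runs, qrels_full, pool_depth):
        """Relevant set visible to the evaluation: union of each run's top-k, judged."""
        pool = set()
        for run in runs:
            pool.update(run[:pool_depth])
        return {d for d in pool if qrels_full.get(d, 0) == 1}

    def precision_recall(run, relevant, cutoff):
        retrieved = run[:cutoff]
        hits = sum(1 for d in retrieved if d in relevant)
        precision = hits / cutoff
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    runs = [["d1", "d3", "d7", "d2"], ["d3", "d5", "d1", "d9"]]
    qrels_full = {"d1": 1, "d3": 1, "d5": 1, "d9": 1}
    for depth in (2, 4):
        rel = judged_relevant(runs, qrels_full, depth)
        print(depth, precision_recall(runs[0], rel, cutoff=4))
    ```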
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.7, S.570-574
  17. Smeaton, A.F.; Morrissey, P.J.: Experiments on the automatic construction of hypertext from texts (1995) 0.00
    0.0017847219 = product of:
      0.0071388874 = sum of:
        0.0071388874 = weight(_text_:information in 7253) [ClassicSimilarity], result of:
          0.0071388874 = score(doc=7253,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.116372846 = fieldWeight in 7253, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=7253)
      0.25 = coord(1/4)
    
    Abstract
    Describes an approach to semi-automatically generate a hypertext from linear texts, based on initially creating nodes and composite nodes composed of 'mini-hypertexts'. Node-node similarity values are computed using standard information retrieval techniques, and these similarity measures are then used to selectively create node-node links based on the strength of similarity between them. The process is a novel one because the link creation process also uses values from a dynamically computed metric which measures the topological compactness of the overall hypertext being generated. Describes experiments on generating a hypertext from a collection of 846 software product descriptions comprising 8.5 MBytes of text, which yield some guidelines on how the process should be automated. This text-to-hypertext conversion method is put into the context of an overall hypertext authoring tool currently under development.
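    A sketch of similarity-driven link creation in the spirit of the approach described, using cosine similarity over term-frequency vectors; the compactness metric is omitted and the threshold is illustrative:

    ```python
    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        num = sum(a[t] * b[t] for t in a)
        den = (math.sqrt(sum(v * v for v in a.values())) *
               math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    def build_links(nodes, threshold=0.3):
        """Create a link between two nodes when their similarity clears the threshold."""
        vectors = {name: Counter(text.lower().split()) for name, text in nodes.items()}
        names = list(nodes)
        return [(a, b, cosine(vectors[a], vectors[b]))
                for i, a in enumerate(names) for b in names[i + 1:]
                if cosine(vectors[a], vectors[b]) >= threshold]

    nodes = {"n1": "hypertext links between text nodes",
             "n2": "creating links between hypertext nodes automatically",
             "n3": "signature files for text retrieval"}
    print(build_links(nodes))   # only n1-n2 clears the threshold
    ```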
  18. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: ¬The scholarly impact of TRECVid (2003-2009) (2011) 0.00
    0.0014872681 = product of:
      0.0059490725 = sum of:
        0.0059490725 = weight(_text_:information in 4363) [ClassicSimilarity], result of:
          0.0059490725 = score(doc=4363,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.09697737 = fieldWeight in 4363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.4, S.613-627