Search (4 results, page 1 of 1)

  • author_ss:"Smeaton, A.F."
  • year_i:[1990 TO 2000}
  1. O'Donnell, R.; Smeaton, A.F.: ¬A linguistic approach to information retrieval (1996) 0.05
    0.045242097 = product of:
      0.13572629 = sum of:
        0.13572629 = sum of:
          0.09508291 = weight(_text_:reports in 2575) [ClassicSimilarity], result of:
            0.09508291 = score(doc=2575,freq=4.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.4222456 = fieldWeight in 2575, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.046875 = fieldNorm(doc=2575)
          0.04064338 = weight(_text_:22 in 2575) [ClassicSimilarity], result of:
            0.04064338 = score(doc=2575,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.23214069 = fieldWeight in 2575, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2575)
      0.33333334 = coord(1/3)
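The explain tree above is Lucene's ClassicSimilarity breakdown, and its arithmetic can be re-derived directly from the printed constants. The sketch below recomputes the `reports` weight for doc 2575, using the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight × fieldWeight); all numeric inputs are copied from the tree, nothing else is assumed:

```python
import math

# Constants copied from the explain output for term "reports" in doc 2575.
idf = 4.503953          # idf(docFreq=1329, maxDocs=44218)
query_norm = 0.04999695 # queryNorm
field_norm = 0.046875   # fieldNorm(doc=2575)
freq = 4.0              # termFreq

tf = math.sqrt(freq)                  # tf(freq=4.0) = 2.0
query_weight = idf * query_norm       # 0.2251839 = queryWeight
field_weight = tf * idf * field_norm  # 0.4222456 = fieldWeight
score = query_weight * field_weight   # ≈ 0.09508291, the "reports" weight

# Sanity check of the idf line itself: idf = 1 + ln(maxDocs / (docFreq + 1)).
idf_check = 1 + math.log(44218 / (1329 + 1))
```

The same recipe reproduces every leaf weight in the trees below; only freq, fieldNorm, and idf change per term and document.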
    
    Abstract
An important aspect of information retrieval systems is domain independence, where the subject of the information is not restricted to certain domains of knowledge. The representation should be able to cover any topic, and although it does not involve any semantic knowledge, lexical and syntactic analysis of the text allows it to remain domain independent. Reports research at Dublin City University, Ireland, which concentrates on the lexical and syntactic levels of natural language analysis, and describes a domain independent automatic information retrieval system which accesses a very large database of newspaper text from the Wall Street Journal. The system represents the text in the form of syntax trees, and these trees are used in the matching process. Reports early results from the study.
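The tree-based matching the abstract describes can be illustrated with a toy subtree-overlap measure. This is a hypothetical sketch, not the DCU system's actual tree-matching algorithm; the nested-tuple encoding of parse trees and the example phrases are assumptions for illustration only:

```python
# Toy subtree-overlap matcher: parse trees are nested tuples of the form
# (label, child, child, ...), with bare strings as leaves.
def subtrees(tree):
    """Yield every subtree of a nested-tuple parse tree, including leaves."""
    yield tree
    if isinstance(tree, tuple):
        for child in tree[1:]:
            yield from subtrees(child)

def tree_overlap(a, b):
    """Score two parse trees by the number of subtrees they share."""
    seen = list(subtrees(a))
    return sum(1 for s in subtrees(b) if s in seen)

doc = ("NP", ("ADJ", "large"), ("N", "database"))
qry = ("NP", ("ADJ", "large"), ("N", "corpus"))
print(tree_overlap(doc, qry))  # shares ("ADJ", "large") and "large" -> 2
```

A real system would weight subtrees and normalise for tree size; the point here is only that syntactic structure, not just term overlap, contributes to the match score.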
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  2. Richardson, R.; Smeaton, A.F.; Murphy, J.: Using WordNet for conceptual distance measurement (1996) 0.04
    0.041952223 = product of:
      0.12585667 = sum of:
        0.12585667 = sum of:
          0.07843939 = weight(_text_:reports in 6965) [ClassicSimilarity], result of:
            0.07843939 = score(doc=6965,freq=2.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.34833482 = fieldWeight in 6965, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.0546875 = fieldNorm(doc=6965)
          0.047417276 = weight(_text_:22 in 6965) [ClassicSimilarity], result of:
            0.047417276 = score(doc=6965,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.2708308 = fieldWeight in 6965, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=6965)
      0.33333334 = coord(1/3)
    
    Abstract
Reports results of research to develop an information retrieval technique employing a conceptual distance measure between words, based on a large thesaurus. The technique is specifically designed for data sharing in large scale autonomous distributed federated databases (FDBS). The prototype federated dictionary system, FEDDICT, stores information on the location of data sets within the FDBS and on semantic relationships existing between these data sets. WordNet is used and tested as the medium for building and operating FEDDICT.
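A common way to realise a conceptual distance measure over a thesaurus is shortest-path length in the is-a hierarchy. The sketch below applies that idea to a tiny hand-built taxonomy; the fragment is hypothetical, and FEDDICT's actual measure over the full WordNet graph is not reproduced here:

```python
from collections import deque

# Hypothetical is-a fragment: child -> parent.
TAXONOMY = {
    "car": "vehicle", "truck": "vehicle",
    "vehicle": "artifact", "tool": "artifact",
}

def adjacency(graph):
    """Build an undirected adjacency map from the child -> parent edges."""
    adj = {}
    for child, parent in graph.items():
        adj.setdefault(child, set()).add(parent)
        adj.setdefault(parent, set()).add(child)
    return adj

def conceptual_distance(a, b, graph=TAXONOMY):
    """Number of is-a links on the shortest path between two concepts."""
    adj = adjacency(graph)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path: the concepts are unrelated in this fragment

print(conceptual_distance("car", "truck"))  # 2, via "vehicle"
print(conceptual_distance("car", "tool"))   # 3, via "vehicle" and "artifact"
```

Smaller distances mean closer concepts, which is the signal a retrieval system can use to match semantically related, non-identical terms.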
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. Kelledy, F.; Smeaton, A.F.: Signature files and beyond (1996) 0.04
    0.03595905 = product of:
      0.10787715 = sum of:
        0.10787715 = sum of:
          0.06723377 = weight(_text_:reports in 6973) [ClassicSimilarity], result of:
            0.06723377 = score(doc=6973,freq=2.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.29857272 = fieldWeight in 6973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.046875 = fieldNorm(doc=6973)
          0.04064338 = weight(_text_:22 in 6973) [ClassicSimilarity], result of:
            0.04064338 = score(doc=6973,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.23214069 = fieldWeight in 6973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6973)
      0.33333334 = coord(1/3)
    
    Abstract
Proposes that signature files be used as a viable alternative to other indexing strategies, such as inverted files, for searching through large volumes of text. Demonstrates, through simulation, that search times can be further reduced by enhancing the basic signature file concept using deterministic partitioning algorithms which eliminate the need for an exhaustive search of the entire signature file. Reports research to evaluate the performance of some deterministic partitioning algorithms in a non-simulated environment using 276 MB of raw newspaper text (taken from the Wall Street Journal) and real user queries. Presents a selection of results to illustrate trends and highlight important aspects of the performance of these methods under realistic rather than simulated operating conditions. As a result of the research reported here, certain aspects of this approach to signature files are found wanting and require improvement. Suggests lines of future research on the partitioning of signature files.
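The basic signature-file idea the abstract builds on is superimposed coding: each term hashes to a few bit positions, a text block's signature ORs those masks together, and a block whose signature lacks any query bit provably cannot contain the query (matches may still be "false drops" that need verification). The constants below (64-bit width, 3 bits per term) are illustrative choices, not the paper's parameters:

```python
import hashlib

SIG_BITS = 64  # signature width; real systems tune this against false drops

def term_mask(term, bits_per_term=3):
    """Hash a term to a few bit positions (superimposed coding)."""
    mask = 0
    for i in range(bits_per_term):
        digest = hashlib.md5(f"{term}:{i}".encode()).digest()
        mask |= 1 << (int.from_bytes(digest[:4], "big") % SIG_BITS)
    return mask

def signature(terms):
    """OR together the masks of all terms in a text block."""
    sig = 0
    for term in terms:
        sig |= term_mask(term)
    return sig

def maybe_contains(block_sig, query_terms):
    """True if the block MAY contain all query terms; False proves absence."""
    query_sig = signature(query_terms)
    return block_sig & query_sig == query_sig

block_sig = signature(["signature", "files", "partitioning"])
print(maybe_contains(block_sig, ["signature", "files"]))  # True
```

The partitioning algorithms the paper evaluates group signatures so a query only has to scan the partitions whose signatures could possibly match, rather than the whole file.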
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.01
    0.013073232 = product of:
      0.039219696 = sum of:
        0.039219696 = product of:
          0.07843939 = sum of:
            0.07843939 = weight(_text_:reports in 5804) [ClassicSimilarity], result of:
              0.07843939 = score(doc=5804,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.34833482 = fieldWeight in 5804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5804)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points, or thresholds, are designed and used to reduce the amount of information processed while maintaining the quality, or minimising the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data where features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness.
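One simple form such a cut-off can take is a cap on score accumulators, in the spirit of Moffat and Zobel's "quit" strategy: process query terms from rarest to commonest and stop creating accumulators for new documents once the cap is reached. This is a hedged sketch of the general idea, not the exact thresholding scheme the paper evaluates; the postings, idf values, and cap are invented for illustration:

```python
# Accumulator-capped ranked retrieval over toy postings lists.
# postings: term -> list of (doc_id, term_frequency) pairs.
def ranked_query(postings, query_terms, idf, max_accumulators=2):
    acc = {}
    # Rarest (highest-idf) terms first, so the cap keeps the best candidates.
    for term in sorted(query_terms, key=lambda t: idf[t], reverse=True):
        for doc_id, tf in postings.get(term, []):
            if doc_id in acc:
                acc[doc_id] += tf * idf[term]       # update existing accumulator
            elif len(acc) < max_accumulators:
                acc[doc_id] = tf * idf[term]        # admit a new candidate
            # else: cap reached, posting skipped -> less work, some risk to quality
    return sorted(acc.items(), key=lambda kv: kv[1], reverse=True)

postings = {
    "thresholding": [(1, 3), (4, 1)],
    "retrieval": [(1, 2), (2, 5), (3, 1), (4, 2)],
}
idf = {"thresholding": 4.2, "retrieval": 1.3}
print(ranked_query(postings, ["thresholding", "retrieval"], idf))
```

With the cap at 2, docs 2 and 3 are never scored: the long common-term postings list is only used to refine documents already admitted by the rare term, which is exactly the response-time saving thresholding targets.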