Search (6 results, page 1 of 1)

  • × author_ss:"Smeaton, A.F."
  • × year_i:[1990 TO 2000}
  1. Richardson, R.; Smeaton, A.F.; Murphy, J.: Using WordNet for conceptual distance measurement (1996) 0.02
    0.015250247 = product of:
      0.030500494 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 6965) [ClassicSimilarity], result of:
              0.03295912 = score(doc=6965,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 6965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6965)
          0.25 = coord(1/4)
        0.022260714 = product of:
          0.04452143 = sum of:
            0.04452143 = weight(_text_:22 in 6965) [ClassicSimilarity], result of:
              0.04452143 = score(doc=6965,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2708308 = fieldWeight in 6965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6965)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
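
The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the coord factors scale by the fraction of query clauses that matched. A minimal Python sketch reproducing the figures for entry 1; the numeric constants are copied from the tree above, while the function and variable names are my own:

from math import sqrt

def term_score(freq, idf, query_norm, field_norm):
    """One term's weight under ClassicSimilarity: queryWeight * fieldWeight."""
    tf = sqrt(freq)                        # 1.4142135 for freq=2.0
    query_weight = idf * query_norm        # e.g. 3.0129938 * 0.04694356 = 0.14144066
    field_weight = tf * idf * field_norm   # e.g. 0.23302436 for the "based" term
    return query_weight * field_weight

query_norm = 0.04694356

# "based" clause: 1 of 4 sub-clauses matched -> coord(1/4)
based = term_score(2.0, 3.0129938, query_norm, 0.0546875) * (1 / 4)
# "22" clause: 1 of 2 sub-clauses matched -> coord(1/2)
t22 = term_score(2.0, 3.5018296, query_norm, 0.0546875) * (1 / 2)

# Top level: 2 of 4 query clauses matched -> coord(2/4)
score = (based + t22) * (2 / 4)
print(score)  # ~0.0152502, i.e. the 0.02 displayed for entry 1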
    
    Abstract
    Reports results of research to develop an information retrieval technique employing a conceptual distance measure between words, based on a large thesaurus. The technique is specifically designed for data sharing in large scale autonomous distributed federated databases (FDBS). The prototype federated dictionary system, FEDDICT, stores information on the location of data sets within the FDBS and on semantic relationships existing between these data sets. WordNet is used and tested as the medium for building and operating FEDDICT
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
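
The conceptual-distance technique named in entry 1 can be sketched as shortest-path length in a thesaurus graph. The toy IS-A edges, the helper names and the example terms below are purely illustrative stand-ins for WordNet and the FEDDICT knowledge base described in the abstract:

from collections import deque

# Toy IS-A thesaurus standing in for WordNet (illustrative edges only).
IS_A = {
    "poodle": ["dog"], "beagle": ["dog"], "dog": ["mammal"],
    "cat": ["mammal"], "mammal": ["animal"], "trout": ["fish"],
    "fish": ["animal"],
}

def neighbours(term):
    """Treat IS-A links as undirected edges for path counting."""
    ns = set(IS_A.get(term, []))
    ns.update(t for t, parents in IS_A.items() if term in parents)
    return ns

def conceptual_distance(a, b):
    """Shortest-path length between two terms in the thesaurus graph."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        term, dist = queue.popleft()
        if term == b:
            return dist
        for n in neighbours(term):
            if n not in seen:
                seen.add(n)
                queue.append((n, dist + 1))
    return None  # no connecting path

print(conceptual_distance("poodle", "beagle"))  # 2 (via "dog")
print(conceptual_distance("poodle", "trout"))   # 5 (via "animal")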
  2. Kelledy, F.; Smeaton, A.F.: Signature files and beyond (1996) 0.00
    0.0047701527 = product of:
      0.019080611 = sum of:
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 6973) [ClassicSimilarity], result of:
              0.038161222 = score(doc=6973,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 6973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6973)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. O'Donnell, R.; Smeaton, A.F.: A linguistic approach to information retrieval (1996) 0.00
    0.0047701527 = product of:
      0.019080611 = sum of:
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 2575) [ClassicSimilarity], result of:
              0.038161222 = score(doc=2575,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 2575, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2575)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Smeaton, A.F.: Prospects for intelligent, language-based information retrieval (1991) 0.00
    0.0035679291 = product of:
      0.014271717 = sum of:
        0.014271717 = product of:
          0.057086866 = sum of:
            0.057086866 = weight(_text_:based in 3700) [ClassicSimilarity], result of:
              0.057086866 = score(doc=3700,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.40361002 = fieldWeight in 3700, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3700)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Current approaches to text retrieval based on indexing by words or index terms and on retrieving by specifying a Boolean combination of keywords are well known, as are their limitations. Statistical approaches to retrieval, as exemplified in commercial products like STATUS/IQ and Personal Librarian, are slightly better but still have their own weaknesses. Approaches to the indexing and retrieval of text based on techniques of automatic natural language processing (NLP) may soon start to realise their potential in terms of improving the quality and effectiveness of information retrieval. Examines some of the current attempts at using various NLP techniques in both the indexing and retrieval operations
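
To make the contrast drawn in entry 4 concrete, here is a minimal sketch of Boolean versus ranked (statistical) retrieval over a toy collection; the documents, the smoothed IDF weighting and all function names are my own illustration, not the systems the paper examines:

from math import log

docs = {
    1: "boolean retrieval of keyword combinations",
    2: "statistical ranking of retrieval results by term weights",
    3: "natural language processing for indexing and retrieval",
}
tokenised = {d: text.split() for d, text in docs.items()}

def boolean_and(*terms):
    """Boolean retrieval: a document either matches all terms or not."""
    return [d for d, toks in tokenised.items() if all(t in toks for t in terms)]

def tfidf_rank(*terms):
    """Ranked retrieval: every document gets a graded score."""
    n = len(tokenised)
    def idf(t):  # simple smoothed IDF, not Lucene's exact formula
        df = sum(1 for toks in tokenised.values() if t in toks)
        return log((n + 1) / (df + 1))
    scores = {d: sum(toks.count(t) * idf(t) for t in terms)
              for d, toks in tokenised.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(boolean_and("retrieval", "boolean"))  # [1] -- all-or-nothing matching
print(tfidf_rank("retrieval", "boolean"))   # graded scores for every document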
  5. Smeaton, A.F.; Morrissey, P.J.: Experiments on the automatic construction of hypertext from texts (1995) 0.00
    0.0024970302 = product of:
      0.009988121 = sum of:
        0.009988121 = product of:
          0.039952483 = sum of:
            0.039952483 = weight(_text_:based in 7253) [ClassicSimilarity], result of:
              0.039952483 = score(doc=7253,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28246817 = fieldWeight in 7253, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7253)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Describes an approach to semi-automatically generate a hypertext from linear texts, based on initially creating nodes and composite nodes composed of 'mini-hypertexts'. Node-node similarity values are computed using standard information retrieval techniques and these similarity measures are then used to selectively create node-node links based on the strength of similarity between them. The process is a novel one because the link creation process also uses values from a dynamically computed metric which measures the topological compactness of the overall hypertext being generated. Describes experiments on generating a hypertext from a collection of 846 software product descriptions comprising 8.5 MBytes of text which yield some guidelines on how the process should be automated. This text-to-hypertext conversion method is put into the context of an overall hypertext authoring tool currently under development
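
A minimal sketch of the link-creation step described in entry 5: represent each node as a term-frequency vector, compute pairwise cosine similarities, and keep a link wherever the similarity clears a threshold. The node texts, the 0.3 threshold and the helper names are assumptions for illustration; the paper's compactness-based control of link density is not reproduced here:

from collections import Counter
from itertools import combinations
from math import sqrt

nodes = {
    "n1": "text editor with spell checking and macros",
    "n2": "spell checking library for text editors",
    "n3": "image viewer with thumbnail browsing",
}

def cosine(a, b):
    """Cosine similarity between two raw term-frequency vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

THRESHOLD = 0.3  # illustrative cut-off for creating a link

links = []
for a, b in combinations(nodes, 2):
    sim = cosine(nodes[a], nodes[b])
    if sim >= THRESHOLD:
        links.append((a, b, round(sim, 2)))

print(links)  # [('n1', 'n2', 0.46)] -- only sufficiently similar nodes get linked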
  6. Richardson, R.; Smeaton, A.F.: Automatic word sense disambiguation in a KBIR application (1995) 0.00
    0.0023542228 = product of:
      0.009416891 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 5796) [ClassicSimilarity], result of:
              0.037667565 = score(doc=5796,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 5796, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5796)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the implementation and design of an automatic word sense disambiguator. The semantic tagger is used in an overall Knowledge Based Information Retrieval (KBIR) system which uses a WordNet derived knowledge base (KB) and 2 independent semantic similarity estimators. The KB is used as a controlled vocabulary to represent documents and queries and the semantic similarity estimators are employed to determine the degree of relatedness between the KB representations
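
A minimal sketch of disambiguation by semantic relatedness, in the spirit of entry 6: each candidate sense of an ambiguous word is scored against the senses of its context words, and the best-scoring sense is chosen. The toy sense inventory and the overlap-based relatedness estimator are stand-ins for the paper's WordNet-derived KB and its two semantic similarity estimators:

# Toy sense inventory: each sense is described by a small set of related concepts.
SENSES = {
    "bank": {
        "bank#finance": {"money", "account", "loan", "deposit"},
        "bank#river":   {"river", "shore", "water", "slope"},
    },
    "deposit": {"deposit#finance": {"money", "account", "bank"}},
    "water":   {"water#liquid": {"river", "liquid", "drink"}},
}

def relatedness(sense_a, sense_b):
    """Crude stand-in for a semantic similarity estimator: concept overlap."""
    return len(sense_a & sense_b)

def disambiguate(word, context_words):
    """Pick the sense of `word` most related to the senses of the context words."""
    best_sense, best_score = None, -1
    for name, concepts in SENSES[word].items():
        score = sum(
            max(relatedness(concepts, c) for c in SENSES[ctx].values())
            for ctx in context_words if ctx in SENSES
        )
        if score > best_score:
            best_sense, best_score = name, score
    return best_sense

print(disambiguate("bank", ["deposit"]))  # bank#finance
print(disambiguate("bank", ["water"]))    # bank#river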