Search (34 results, page 1 of 2)

  • author_ss:"Willett, P."
  1. Robertson, A.M.; Willett, P.: Applications of n-grams in textual information systems (1998) 0.05
    0.05396908 = product of:
      0.16190724 = sum of:
        0.09491582 = weight(_text_:applications in 4715) [ClassicSimilarity], result of:
          0.09491582 = score(doc=4715,freq=4.0), product of:
            0.17247584 = queryWeight, product of:
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.03917671 = queryNorm
            0.5503137 = fieldWeight in 4715, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4025097 = idf(docFreq=1471, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
        0.020741362 = weight(_text_:of in 4715) [ClassicSimilarity], result of:
          0.020741362 = score(doc=4715,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33856338 = fieldWeight in 4715, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
        0.046250064 = weight(_text_:systems in 4715) [ClassicSimilarity], result of:
          0.046250064 = score(doc=4715,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 4715, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
      0.33333334 = coord(3/9)
    
    Abstract
    Provides an introduction to the use of n-grams in textual information systems, where an n-gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n-grams include spelling error detection and correction, query expansion, information retrieval with serial, inverted and signature files, dictionary look-up, text compression, and language identification.
    Source
    Journal of documentation. 54(1998) no.1, S.48-69
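The relevance value shown against each record is the explain output of Lucene's ClassicSimilarity (TF-IDF) ranking. As a worked illustration only, the following Python sketch reassembles the score of record 1 from the figures printed in its explain tree; the helper name term_weight is ours, not part of any Lucene API.

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """One branch of the explain tree: weight = queryWeight * fieldWeight."""
    query_weight = idf * query_norm                    # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(termFreq)
    return query_weight * field_weight

# Figures copied from the explain tree of record 1 (doc 4715)
query_norm = 0.03917671
weights = [
    term_weight(4.0, 4.4025097, query_norm, 0.0625),   # "applications"
    term_weight(12.0, 1.5637573, query_norm, 0.0625),  # "of"
    term_weight(4.0, 3.0731742, query_norm, 0.0625),   # "systems"
]

# coord(3/9): only 3 of the 9 query terms matched this document
score = sum(weights) * (3 / 9)
print(round(score, 8))   # ~0.05396908, the value shown beside the record
```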
  2. Furner-Hines, J.; Willett, P.: ¬The use of hypertext in libraries in the United Kingdom (1994) 0.02
    0.020505097 = product of:
      0.09227294 = sum of:
        0.022897845 = weight(_text_:of in 5383) [ClassicSimilarity], result of:
          0.022897845 = score(doc=5383,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.37376386 = fieldWeight in 5383, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5383)
        0.06937509 = weight(_text_:systems in 5383) [ClassicSimilarity], result of:
          0.06937509 = score(doc=5383,freq=16.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.57622015 = fieldWeight in 5383, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5383)
      0.22222222 = coord(2/9)
    
    Abstract
    State of the art review of hypertext systems in use in UK libraries. Systems include public access point-of-information (POI) systems that provide guidance to users of local resources, and networked document retrieval systems, such as the WWW, that enable users to access texts stored on machines linked by the Internet. Particular emphasis is placed on those systems that are produced in-house by the libraries in which they are used. The review is based on a series of telephone or face-to-face interviews conducted with representatives of those organizations that a literature review and mailed questionnaire survey identified as current users of hypertext. Considers issues relating to system development and usability, and presents a set of appropriate guidelines for the designers of future systems. Concludes that the principal application of hypertext systems in UK libraries is in the implementation of POI systems; that such development is most advanced in the academic sector; and that such development is set to increase in tandem with use of the WWW.
  3. Willett, P.: Best-match text retrieval (1993) 0.02
    0.019808581 = product of:
      0.08913861 = sum of:
        0.018332949 = weight(_text_:of in 7818) [ClassicSimilarity], result of:
          0.018332949 = score(doc=7818,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2992506 = fieldWeight in 7818, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=7818)
        0.07080566 = weight(_text_:systems in 7818) [ClassicSimilarity], result of:
          0.07080566 = score(doc=7818,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5881023 = fieldWeight in 7818, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.078125 = fieldNorm(doc=7818)
      0.22222222 = coord(2/9)
    
    Abstract
    Provides an introduction to the computational techniques that underlie best-match retrieval systems. Discusses: problems of traditional Boolean systems; characteristics of best-match searching; automatic indexing; term conflation; and matching of documents and queries (covering similarity measures, initial weights, relevance weights, and the matching algorithm). Describes operational best-match systems.
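As a rough illustration of the kind of best-match searching introduced here (not Willett's own weighting scheme), the sketch below ranks a toy document collection against a query using a simple TF-IDF dot product; all names and data are invented for the example.

```python
import math
from collections import Counter

def best_match(query_terms, documents):
    """Rank documents by a simple tf-idf dot-product similarity (a sketch,
    not the particular weighting scheme discussed in the paper)."""
    n_docs = len(documents)
    doc_terms = [Counter(doc.lower().split()) for doc in documents]
    df = Counter(term for terms in doc_terms for term in set(terms))
    idf = {term: math.log(n_docs / df[term]) for term in df}

    scored = []
    for doc_id, terms in enumerate(doc_terms):
        score = sum(terms[t] * idf.get(t, 0.0) for t in query_terms)
        scored.append((score, doc_id))
    return sorted(scored, reverse=True)   # best match first

docs = ["best match text retrieval",
        "boolean retrieval systems",
        "signature files for text search"]
print(best_match(["best", "match", "retrieval"], docs))
```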
  4. Furner, J.; Willett, P.: ¬A survey of hypertext-based public-access point-of-information systems in UK libraries (1995) 0.02
    0.019509401 = product of:
      0.08779231 = sum of:
        0.022897845 = weight(_text_:of in 2044) [ClassicSimilarity], result of:
          0.022897845 = score(doc=2044,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.37376386 = fieldWeight in 2044, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2044)
        0.06489446 = weight(_text_:systems in 2044) [ClassicSimilarity], result of:
          0.06489446 = score(doc=2044,freq=14.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.5390046 = fieldWeight in 2044, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2044)
      0.22222222 = coord(2/9)
    
    Abstract
    We have recently completed a survey of the operational use of hypertext-based information systems in academic, public and special libraries in the UK. A literature search, questionnaire and both telephone and face-to-face interviews demonstrate that the principal application of hypertext systems is for the implementation of public-access point-of-information systems, which provide guidance to the users of local information resources. In this paper, we describe the principal issues relating to the design and usage of these systems that were raised in the interviews and that we experienced when using the systems for ourselves. We then present a set of technical recommendations with the intention of helping the developers of future systems, with special attention being given to the need to develop effective methods for system evaluation.
    Source
    Journal of information science. 21(1995) no.4, S.243-255
  5. Perry, R.; Willett, P.: ¬A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.02
    0.018421704 = product of:
      0.08289766 = sum of:
        0.025666127 = weight(_text_:of in 2701) [ClassicSimilarity], result of:
          0.025666127 = score(doc=2701,freq=6.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.41895083 = fieldWeight in 2701, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=2701)
        0.057231534 = weight(_text_:systems in 2701) [ClassicSimilarity], result of:
          0.057231534 = score(doc=2701,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.47535738 = fieldWeight in 2701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.109375 = fieldNorm(doc=2701)
      0.22222222 = coord(2/9)
    
    Source
    Journal of information science. 6(1983), S.59-66
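For context, an inverted file of the kind reviewed here maps each term to a postings list of (document, within-document frequency) pairs, so that best-match scores can be accumulated term by term rather than document by document. A minimal illustrative sketch (not code from the paper):

```python
from collections import defaultdict

def build_inverted_file(documents):
    """Map each term to a postings list of (doc_id, frequency) pairs."""
    postings = defaultdict(list)
    for doc_id, text in enumerate(documents):
        counts = defaultdict(int)
        for term in text.lower().split():
            counts[term] += 1
        for term, freq in counts.items():
            postings[term].append((doc_id, freq))
    return dict(postings)

docs = ["inverted files for best match searching",
        "best match searching with signature files"]
print(build_inverted_file(docs)["best"])   # [(0, 1), (1, 1)]
```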
  6. Griffiths, A.; Luckhurst, H.C.; Willett, P.: Using interdocument similarity information in document retrieval systems (1986) 0.02
    0.016011083 = product of:
      0.07204988 = sum of:
        0.014818345 = weight(_text_:of in 2415) [ClassicSimilarity], result of:
          0.014818345 = score(doc=2415,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.24188137 = fieldWeight in 2415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=2415)
        0.057231534 = weight(_text_:systems in 2415) [ClassicSimilarity], result of:
          0.057231534 = score(doc=2415,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.47535738 = fieldWeight in 2415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.109375 = fieldNorm(doc=2415)
      0.22222222 = coord(2/9)
    
    Source
    Journal of the American Society for Information Science. 37(1986) no.1, S.3-11
  7. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.014886984 = product of:
      0.066991426 = sum of:
        0.020741362 = weight(_text_:of in 5689) [ClassicSimilarity], result of:
          0.020741362 = score(doc=5689,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33856338 = fieldWeight in 5689, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
        0.046250064 = weight(_text_:systems in 5689) [ClassicSimilarity], result of:
          0.046250064 = score(doc=5689,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.38414678 = fieldWeight in 5689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.22222222 = coord(2/9)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked-output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
    Source
    Journal of information science. 18(1992) no.2, S.139-147
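One of the three expansion methods evaluated, Soundex coding, groups terms that share a phonetic code. Below is a standard Soundex implementation plus a toy expansion step; it illustrates the general technique only, not the procedure used in this study.

```python
def soundex(word):
    """Standard four-character Soundex code."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first, rest = word[0], word[1:]
    digits = [codes.get(c, "") for c in rest if c not in "hw"]
    out, prev = [], codes.get(first, "")
    for d in digits:
        if d and d != prev:   # keep a digit only if it differs from the previous code
            out.append(d)
        prev = d              # vowels (coded "") reset the previous code
    return (first.upper() + "".join(out) + "000")[:4]

def expand_query(term, vocabulary):
    """Expand a query term with vocabulary terms sharing its Soundex code."""
    code = soundex(term)
    return [t for t in vocabulary if soundex(t) == code]

# All three non-query words code to C460, showing how coarse the expansion can be
print(expand_query("colour", ["color", "collar", "cooler", "retrieval"]))
```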
  8. Ellis, D.; Furner-Hines, J.; Willett, P.: Measuring the consistency of assignment of hypertext links in full-text documents (1994) 0.01
    0.01401974 = product of:
      0.06308883 = sum of:
        0.028401282 = weight(_text_:of in 1052) [ClassicSimilarity], result of:
          0.028401282 = score(doc=1052,freq=40.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.46359703 = fieldWeight in 1052, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1052)
        0.034687545 = weight(_text_:systems in 1052) [ClassicSimilarity], result of:
          0.034687545 = score(doc=1052,freq=4.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.28811008 = fieldWeight in 1052, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=1052)
      0.22222222 = coord(2/9)
    
    Abstract
    Studies of document retrieval systems have suggested that the degree of consistency in the terms assigned to documents by indexers is positively associated with retrieval effectiveness. The study investigated the consistency of assignment of links in separate hypertext versions of the same full-text database, assuming that a measure of agreement may be related to the subsequent utility of the resulting hypertext document. Describes the calculations involved in measuring the degree of similarity between pairs of structured objects of a certain type (those that may be represented in graph-theoretic form). Initial results show little similarity between the sets of links identified by different people, and this finding is comparable with those of studies of inter-indexer consistency, where it has been found that there is generally only a low level of agreement between the sets of indexing terms assigned to a document by different indexers.
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow 1993. Ed.: Ruben Leon
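A minimal sketch of the kind of pairwise agreement measure such a study requires, here a plain Jaccard coefficient between two linkers' sets of inter-paragraph links; the coefficients and graph-theoretic measures actually used in the paper are more elaborate.

```python
def link_set_consistency(links_a, links_b):
    """Jaccard agreement between two sets of (from_paragraph, to_paragraph) links."""
    a, b = set(links_a), set(links_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

linker_1 = {(1, 4), (2, 7), (3, 5)}
linker_2 = {(1, 4), (2, 6), (5, 9)}
print(link_set_consistency(linker_1, linker_2))   # 0.2 - little agreement
```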
  9. Furner-Hines, J.; Willett, P.: ¬The use of hypertext in libraries in the United Kingdom (1994) 0.01
    0.012589732 = product of:
      0.056653794 = sum of:
        0.023950063 = weight(_text_:of in 1792) [ClassicSimilarity], result of:
          0.023950063 = score(doc=1792,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 1792, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1792)
        0.03270373 = weight(_text_:systems in 1792) [ClassicSimilarity], result of:
          0.03270373 = score(doc=1792,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 1792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=1792)
      0.22222222 = coord(2/9)
    
    Abstract
    Presents a summary of the major findings of a survey of the use of hypertext systems and the production of hypertext products in UK libraries. Not surprisingly, academic libraries are found to be both the most enthusiastic users and producers. There are normally 4 principal stages in a library's development of a hypertext system, although the possibility of leapfrogging via the WWW is acknowledged.
  10. Ingwersen, P.; Willett, P.: ¬An introduction to algorithmic and cognitive approaches for information retrieval (1995) 0.01
    0.012589732 = product of:
      0.056653794 = sum of:
        0.023950063 = weight(_text_:of in 4344) [ClassicSimilarity], result of:
          0.023950063 = score(doc=4344,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 4344, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
        0.03270373 = weight(_text_:systems in 4344) [ClassicSimilarity], result of:
          0.03270373 = score(doc=4344,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2716328 = fieldWeight in 4344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper provides an overview of two complementary approaches to the design and implementation of information retrieval systems. The first approach focuses on the algorithms and data structures that are needed to maximise the effectiveness and the efficiency of the searches that can be carried out on text databases, while the second adopts a cognitive approach that focuses on the role of the user and of the knowledge sources involved in information retrieval. The paper argues for a holistic view of information retrieval that is capable of encompassing both of these approaches.
  11. Ellis, D.; Furner-Hines, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of inter-linker consistency (1994) 0.01
    0.011194981 = product of:
      0.050377414 = sum of:
        0.029937578 = weight(_text_:of in 7493) [ClassicSimilarity], result of:
          0.029937578 = score(doc=7493,freq=64.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.48867416 = fieldWeight in 7493, product of:
              8.0 = tf(freq=64.0), with freq of:
                64.0 = termFreq=64.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7493)
        0.020439833 = weight(_text_:systems in 7493) [ClassicSimilarity], result of:
          0.020439833 = score(doc=7493,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1697705 = fieldWeight in 7493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7493)
      0.22222222 = coord(2/9)
    
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of inter-nodal links that are intended to represent the relationships existing between objects; this operation is often undertaken manually, just as index terms are often manually assigned to documents in a conventional retrieval system. Studies of conventional systems have suggested that a degree of consistency in the terms assigned to documents by indexers is positively associated with retrieval effectiveness. It is thus of interest to investigate the consistency of assignment of links in separate hypertext versions of the same full-text document, since a measure of agreement may be related to the subsequent utility of the resulting hypertext databases. The calculation of values indicating the degree of similarity between objects is a technique that has been widely used in the fields of textual and chemical information retrieval; in this paper we describe the application of arithmetic coefficients and topological indices to the measurement of the degree of similarity between the sets of inter-nodal links in hypertext databases. We publish the results of a study in which several different sets of links are inserted, by different people, between the paragraphs of each of a number of full-text documents. Our results show little similarity between the sets of links identified by different people; this finding is comparable with those of studies of inter-indexer consistency, where it has been found that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers.
    Source
    Journal of documentation. 50(1994) no.2, S.67-98
  12. Ellis, D.; Furner-Hines, J.; Willett, P.: Measuring the degree of similarity between objects in text retrieval systems (1993) 0.01
    0.010539032 = product of:
      0.047425643 = sum of:
        0.022897845 = weight(_text_:of in 6716) [ClassicSimilarity], result of:
          0.022897845 = score(doc=6716,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.37376386 = fieldWeight in 6716, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=6716)
        0.0245278 = weight(_text_:systems in 6716) [ClassicSimilarity], result of:
          0.0245278 = score(doc=6716,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 6716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=6716)
      0.22222222 = coord(2/9)
    
    Abstract
    Describes the use of a variety of similarity coefficients in the measurement of the degree of similarity between objects that contain textual information, such as documents, paragraphs, index terms or queries. The work is intended as a preliminary to future investigation of the calculations involved in measuring the degree of similarity between structured objects that may be represented by graph theoretic forms. Discusses the role of similarity coefficients in text retrieval in terms of: document and query similarity; document and document similarity; cocitation analysis; term and term similarity; and the similarity between sets of judgements, such as relevance judgements. Describes several methods for expressing the formulae used to define similarity coefficients and compares their attributes. Concludes with details of the characteristics of similarity coefficients: equivalence and monotonicity; consideration of negative matches; geometric analyses; and the meaning of correlation coefficients.
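Three of the coefficient families surveyed, shown in their simplest binary (presence/absence) form on made-up term sets; an illustrative sketch rather than the formulations compared in the paper.

```python
import math

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cosine(a, b):
    return len(a & b) / math.sqrt(len(a) * len(b))

document = {"hypertext", "links", "full", "text", "documents"}
query = {"hypertext", "documents", "retrieval"}
for name, coefficient in [("Dice", dice), ("Jaccard", jaccard), ("Cosine", cosine)]:
    print(name, round(coefficient(document, query), 3))
```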
  13. Artymiuk, P.J.; Spriggs, R.V.; Willett, P.: Graph theoretic methods for the analysis of structural relationships in biological macromolecules (2005) 0.01
    0.0077724145 = product of:
      0.034975864 = sum of:
        0.019052157 = weight(_text_:of in 5258) [ClassicSimilarity], result of:
          0.019052157 = score(doc=5258,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3109903 = fieldWeight in 5258, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5258)
        0.015923709 = product of:
          0.031847417 = sum of:
            0.031847417 = weight(_text_:22 in 5258) [ClassicSimilarity], result of:
              0.031847417 = score(doc=5258,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.23214069 = fieldWeight in 5258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5258)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Subgraph isomorphism and maximum common subgraph isomorphism algorithms from graph theory provide an effective and an efficient way of identifying structural relationships between biological macromolecules. They thus provide a natural complement to the pattern matching algorithms that are used in bioinformatics to identify sequence relationships. Examples are provided of the use of graph theory to analyze proteins for which three-dimensional crystallographic or NMR structures are available, focusing on the use of the Bron-Kerbosch clique detection algorithm to identify common folding motifs and of the Ullmann subgraph isomorphism algorithm to identify patterns of amino acid residues. Our methods are also applicable to other types of biological macromolecule, such as carbohydrate and nucleic acid structures.
    Date
    22. 7.2006 14:40:10
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.5, S.518-528
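The Bron-Kerbosch algorithm named in the abstract enumerates the maximal cliques of a graph. This is a textbook sketch of the basic (non-pivoting) form applied to a toy graph, not the protein-structure code the authors describe.

```python
def bron_kerbosch(r, p, x, graph, cliques):
    """Report every maximal clique that extends the current clique r."""
    if not p and not x:
        cliques.append(r)          # r cannot be extended: it is maximal
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & graph[v], x & graph[v], graph, cliques)
        p = p - {v}                # v has been fully explored
        x = x | {v}

# Adjacency sets of a small undirected graph
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
cliques = []
bron_kerbosch(set(), set(graph), set(), graph, cliques)
print(cliques)   # [{1, 2, 3}, {3, 4}, {4, 5}]
```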
  14. Ellis, D.; Furner, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of retrieval effectiveness (1996) 0.00
    0.00327401 = product of:
      0.02946609 = sum of:
        0.02946609 = weight(_text_:of in 4214) [ClassicSimilarity], result of:
          0.02946609 = score(doc=4214,freq=62.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.480978 = fieldWeight in 4214, product of:
              7.8740077 = tf(freq=62.0), with freq of:
                62.0 = termFreq=62.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4214)
      0.11111111 = coord(1/9)
    
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of internodal links that are intended to represent the relationships existing between objects; this operation is often undertaken manually, just as index terms are often manually assigned to documents in a conventional retrieval system. In an earlier article (1994), the results were published of a study in which several different sets of links were inserted, each by a different person, between the paragraphs of each of a number of full-text documents. These results showed little similarity between the link-sets, a finding that was comparable with those of studies of inter-indexer consistency, which suggest that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers. In this article, a description is provided of an investigation into the nature of the relationship existing between (i) the levels of inter-linker consistency obtaining among the group of hypertext databases used in our earlier experiments, and (ii) the levels of effectiveness of a number of searches carried out in those databases. An account is given of the implementation of the searches and of the methods used in the calculation of numerical values expressing their effectiveness. Analysis of the results of a comparison between recorded levels of consistency and those of effectiveness does not allow us to draw conclusions about the consistency - effectiveness relationship that are equivalent to those drawn in comparable studies of inter-indexer consistency.
    Source
    Journal of the American Society for Information Science. 47(1996) no.4, S.287-300
  15. Ellis, D.; Furner-Hines, J.; Willett, P.: ¬The creation of hypertext links in full-text documents (1994) 0.00
    0.0032336279 = product of:
      0.029102651 = sum of:
        0.029102651 = weight(_text_:of in 1084) [ClassicSimilarity], result of:
          0.029102651 = score(doc=1084,freq=42.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.47504556 = fieldWeight in 1084, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1084)
      0.11111111 = coord(1/9)
    
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of internodal links that are intended to represent the relationships existing between objects; an operation that is usually undertaken manually, such as the allocation of subject index terms to documents. Reports results of a study in which several different sets of hypertext links were inserted, each by a different person, between the paragraphs of each of a number of full-text documents. The similarity between the members of each pair of link sets was then evaluated. Results indicated that little similarity existed among the link sets, a finding comparable with those of studies of inter-indexer consistency, which suggests that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers. Concludes with that part of the study designed to test the validity of making these kinds of assumptions in the context of hypertext link sets.
  16. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.00
    0.002661118 = product of:
      0.023950063 = sum of:
        0.023950063 = weight(_text_:of in 760) [ClassicSimilarity], result of:
          0.023950063 = score(doc=760,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.39093933 = fieldWeight in 760, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
      0.11111111 = coord(1/9)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
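One common way to compare recall across engines that index non-identical sets of pages is to pool the relevant pages found by all of them and measure each engine against that pool. The sketch below assumes such a pooling approach purely for illustration; it is not necessarily the exact procedure of the paper, and the result sets are invented.

```python
def relative_recall(relevant_by_engine):
    """Recall of each engine relative to the pooled set of relevant pages."""
    pool = set().union(*relevant_by_engine.values())
    return {engine: len(found) / len(pool)
            for engine, found in relevant_by_engine.items()}

results = {
    "AltaVista": {"a", "b", "c", "d"},
    "Excite":    {"b", "c", "e"},
    "Lycos":     {"a", "e", "f"},
}
print(relative_recall(results))   # pool of 6 relevant pages
```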
  17. Wakeling, S.; Creaser, C.; Pinfield, S.; Fry, J.; Spezi, V.; Willett, P.; Paramita, M.: Motivations, understandings, and experiences of open-access mega-journal authors : results of a large-scale survey (2019) 0.00
    0.0024947983 = product of:
      0.022453185 = sum of:
        0.022453185 = weight(_text_:of in 5317) [ClassicSimilarity], result of:
          0.022453185 = score(doc=5317,freq=36.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36650562 = fieldWeight in 5317, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5317)
      0.11111111 = coord(1/9)
    
    Abstract
    Open-access mega-journals (OAMJs) are characterized by their large scale, wide scope, open-access (OA) business model, and "soundness-only" peer review. The last of these controversially discounts the novelty, significance, and relevance of submitted articles and assesses only their "soundness." This article reports the results of an international survey of authors (n = 11,883), comparing the responses of OAMJ authors with those of other OA and subscription journals, and drawing comparisons between different OAMJs. Strikingly, OAMJ authors showed a low understanding of soundness-only peer review: two-thirds believed OAMJs took into account novelty, significance, and relevance, although there were marked geographical variations. Author satisfaction with OAMJs, however, was high, with more than 80% of OAMJ authors saying they would publish again in the same journal, although there were variations by title, and levels were slightly lower than subscription journals (over 90%). Their reasons for choosing to publish in OAMJs included a wide variety of factors, not significantly different from reasons given by authors of other journals, with the most important including the quality of the journal and quality of peer review. About half of OAMJ articles had been submitted elsewhere before submission to the OAMJ with some evidence of a "cascade" of articles between journals from the same publisher.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.754-768
  18. Willett, P.: From chemical documentation to chemoinformatics : 50 years of chemical information science (2009) 0.00
    0.002489248 = product of:
      0.022403233 = sum of:
        0.022403233 = weight(_text_:of in 3656) [ClassicSimilarity], result of:
          0.022403233 = score(doc=3656,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.36569026 = fieldWeight in 3656, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3656)
      0.11111111 = coord(1/9)
    
    Abstract
    This paper summarizes the historical development of the discipline that is now called 'chemoinformatics'. It shows how this has evolved, principally as a result of technological developments in chemistry and biology during the past decade, from long-established techniques for the modelling and searching of chemical molecules. A total of 30 papers, the earliest dating back to 1957, are briefly summarized to highlight some of the key publications and to show the development of the discipline.
  19. Robertson, A.M.; Willett, P.: Retrieval techniques for historical English text : searching the sixteenth and seventeenth century titles in the Catalogue of Canterbury Cathedral Library using spelling-correction methods (1992) 0.00
    0.0023284785 = product of:
      0.020956306 = sum of:
        0.020956306 = weight(_text_:of in 4209) [ClassicSimilarity], result of:
          0.020956306 = score(doc=4209,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 4209, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4209)
      0.11111111 = coord(1/9)
    
    Abstract
    A range of techniques has been developed for the correction of misspellings in machine-readable texts. Discusses the use of such techniques for the identification of words in the sixteenth and seventeenth century titles from the Catalogue of Canterbury Cathedral Library that are most similar to query words in modern English. The experiments used digram matching, non-phonetic coding, and dynamic programming methods for spelling correction. These allow very high recall searches to be carried out, although the latter methods are very demanding of computer resources.
    Source
    Online information 92. Proc. of the 16th Int. Online Information Meeting, London, 8-10.12.1992. Ed. by David I. Raitt
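Digram matching, one of the three techniques compared, scores the similarity of two spellings by the overlap of their adjacent character pairs. An illustrative sketch with invented historical spellings:

```python
def digrams(word):
    """Set of adjacent character pairs in a word."""
    word = word.lower()
    return {word[i:i + 2] for i in range(len(word) - 1)}

def digram_similarity(a, b):
    """Dice coefficient over the two words' digram sets."""
    da, db = digrams(a), digrams(b)
    return 2 * len(da & db) / (len(da) + len(db))

historical = ["dialoge", "catechisme", "treatyse", "historie"]
query = "dialogue"
ranked = sorted(historical, key=lambda w: digram_similarity(query, w), reverse=True)
print(ranked[0])   # 'dialoge'
```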
  20. Robertson, A.M.; Willett, P.: Generation of equifrequent groups of words using a genetic algorithm (1994) 0.00
    0.0023284785 = product of:
      0.020956306 = sum of:
        0.020956306 = weight(_text_:of in 8158) [ClassicSimilarity], result of:
          0.020956306 = score(doc=8158,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 8158, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8158)
      0.11111111 = coord(1/9)
    
    Abstract
    Genetic algorithms are a class of non-deterministic algorithms that derive from Darwinian evolution and that provide good, though not necessarily optimal, solutions to combinatorial problems. We describe their application to the identification of characteristics that occur approximately equifrequently in a database, using two different methods for the creation of the chromosome data structures that lie at the heart of a genetic algorithm. Experiments with files of English and Turkish text suggest that the genetic algorithm developed here can produce results superior to those produced by existing non-deterministic algorithms; however, the results are inferior to those produced by an existing deterministic algorithm.
    Source
    Journal of documentation. 50(1994) no.3, S.213-232
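A toy sketch of the general approach: each chromosome assigns every word to one of k groups, and fitness rewards assignments whose group frequency totals are nearly equal. The encoding, operators and parameters here are invented for illustration and are not those evaluated in the paper.

```python
import random

def fitness(chromosome, freqs, k):
    """Negative spread of the group totals: higher means more nearly equifrequent."""
    totals = [0] * k
    for word_idx, group in enumerate(chromosome):
        totals[group] += freqs[word_idx]
    return -(max(totals) - min(totals))

def evolve(freqs, k=3, pop_size=30, generations=200, seed=0):
    """Tiny genetic algorithm assigning words to k roughly equifrequent groups."""
    rng = random.Random(seed)
    n = len(freqs)
    pop = [[rng.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, freqs, k), reverse=True)
        survivors = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] = rng.randrange(k)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, freqs, k))

word_freqs = [50, 40, 30, 20, 20, 15, 10, 8, 5, 2]
best = evolve(word_freqs)
print(best, fitness(best, word_freqs, 3))
```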