Search (28 results, page 1 of 2)

  • author_ss:"Willett, P."
  1. Artymiuk, P.J.; Spriggs, R.V.; Willett, P.: Graph theoretic methods for the analysis of structural relationships in biological macromolecules (2005) 0.06
    0.061577972 = product of:
      0.10262995 = sum of:
        0.022607451 = weight(_text_:on in 5258) [ClassicSimilarity], result of:
          0.022607451 = score(doc=5258,freq=4.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.20619515 = fieldWeight in 5258, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5258)
        0.0101838745 = weight(_text_:information in 5258) [ClassicSimilarity], result of:
          0.0101838745 = score(doc=5258,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.116372846 = fieldWeight in 5258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5258)
        0.06983863 = sum of:
          0.029314637 = weight(_text_:technology in 5258) [ClassicSimilarity], result of:
            0.029314637 = score(doc=5258,freq=2.0), product of:
              0.14847288 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.049850095 = queryNorm
              0.19744103 = fieldWeight in 5258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.046875 = fieldNorm(doc=5258)
          0.040523995 = weight(_text_:22 in 5258) [ClassicSimilarity], result of:
            0.040523995 = score(doc=5258,freq=2.0), product of:
              0.17456654 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049850095 = queryNorm
              0.23214069 = fieldWeight in 5258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5258)
      0.6 = coord(3/5)
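
    The tree above is Lucene's ClassicSimilarity explanation of the relevance score: each leaf is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq), and the sum of the matching leaves is multiplied by the coordination factor coord(3/5). A minimal Python sketch that recomputes the score shown for this record from the constants in the tree (variable names are ours, not Lucene's):

    ```python
    from math import isclose, sqrt

    def term_score(freq, idf, query_norm, field_norm):
        """One leaf of the explain tree:
        score = queryWeight * fieldWeight
              = (idf * queryNorm) * (tf * idf * fieldNorm), with tf = sqrt(freq)."""
        query_weight = idf * query_norm
        field_weight = sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.049850095  # the queryNorm shown in every leaf

    # (freq, idf, fieldNorm) for the four matching terms of result 1 (doc 5258)
    leaves = [
        (4.0, 2.199415, 0.046875),   # _text_:on
        (2.0, 1.7554779, 0.046875),  # _text_:information
        (2.0, 2.978387, 0.046875),   # _text_:technology
        (2.0, 3.5018296, 0.046875),  # _text_:22
    ]

    total = sum(term_score(f, i, QUERY_NORM, fn) for f, i, fn in leaves)
    score = total * 3 / 5            # coord(3/5): 3 of the 5 query clauses matched
    print(round(score, 9))           # ~0.061577972, the value reported for result 1
    assert isclose(score, 0.061577972, rel_tol=1e-4)
    ```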
    
    Abstract
    Subgraph isomorphism and maximum common subgraph isomorphism algorithms from graph theory provide an effective and efficient way of identifying structural relationships between biological macromolecules. They thus provide a natural complement to the pattern matching algorithms that are used in bioinformatics to identify sequence relationships. Examples are provided of the use of graph theory to analyze proteins for which three-dimensional crystallographic or NMR structures are available, focusing on the use of the Bron-Kerbosch clique detection algorithm to identify common folding motifs and of the Ullmann subgraph isomorphism algorithm to identify patterns of amino acid residues. Our methods are also applicable to other types of biological macromolecule, such as carbohydrate and nucleic acid structures.
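
    The abstract names the Bron-Kerbosch clique detection algorithm. As an illustration only (not the authors' protein-specific implementation), here is a minimal sketch of the basic, non-pivoting Bron-Kerbosch recursion, which reports every maximal clique of a small undirected graph:

    ```python
    def bron_kerbosch(R, P, X, adj, cliques):
        """Basic Bron-Kerbosch: report every maximal clique exactly once.
        R: current clique, P: candidate vertices, X: already-processed vertices."""
        if not P and not X:
            cliques.append(set(R))
            return
        for v in list(P):
            bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
            P.remove(v)
            X.add(v)

    # Toy graph as an adjacency map (vertices 0..4)
    adj = {
        0: {1, 2},
        1: {0, 2},
        2: {0, 1, 3},
        3: {2, 4},
        4: {3},
    }
    cliques = []
    bron_kerbosch(set(), set(adj), set(), adj, cliques)
    print(cliques)  # e.g. [{0, 1, 2}, {2, 3}, {3, 4}]
    ```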
    Date
    22. 7.2006 14:40:10
    Footnote
    Contribution to a special issue on bioinformatics
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.5, S.518-528
  2. Robertson, A.M.; Willett, P.: Applications of n-grams in textual information systems (1998) 0.06
    0.058484394 = product of:
      0.14621098 = sum of:
        0.12269233 = weight(_text_:section in 4715) [ClassicSimilarity], result of:
          0.12269233 = score(doc=4715,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.46641576 = fieldWeight in 4715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
        0.023518652 = weight(_text_:information in 4715) [ClassicSimilarity], result of:
          0.023518652 = score(doc=4715,freq=6.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.2687516 = fieldWeight in 4715, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
      0.4 = coord(2/5)
    
    Abstract
    Provides an introduction to the use of n-grams in textual information systems, where an n-gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n-grams include spelling error detection and correction, query expansion, information retrieval with serial, inverted and signature files, dictionary look-up, text compression, and language identification
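
    For illustration, a minimal sketch of character n-gram extraction and of an n-gram overlap (Dice) coefficient of the kind used for spelling-error detection and correction; the function names and example strings are ours:

    ```python
    def ngrams(text, n=2):
        """Character n-grams of a string (here without padding)."""
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def dice(a, b, n=2):
        """Dice coefficient on the sets of n-grams of two strings."""
        ga, gb = set(ngrams(a, n)), set(ngrams(b, n))
        if not ga and not gb:
            return 1.0
        return 2 * len(ga & gb) / (len(ga) + len(gb))

    print(ngrams("retrieval", 2))                    # ['re', 'et', 'tr', ...]
    print(round(dice("retrieval", "retreival"), 3))  # substantial overlap despite the typo
    ```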
  3. Ingwersen, P.; Willett, P.: An introduction to algorithmic and cognitive approaches for information retrieval (1995) 0.03
    0.025629926 = product of:
      0.064074814 = sum of:
        0.036917817 = weight(_text_:on in 4344) [ClassicSimilarity], result of:
          0.036917817 = score(doc=4344,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.33671528 = fieldWeight in 4344, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
        0.027156997 = weight(_text_:information in 4344) [ClassicSimilarity], result of:
          0.027156997 = score(doc=4344,freq=8.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.3103276 = fieldWeight in 4344, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4344)
      0.4 = coord(2/5)
    
    Abstract
    This paper provides an overview of two complementary approaches to the design and implementation of information retrieval systems. The first approach focuses on the algorithms and data structures that are needed to maximise the effectiveness and the efficiency of the searches that can be carried out on text databases, while the second adopts a cognitive approach that focuses on the role of the user and of the knowledge sources involved in information retrieval. The paper argues for a holistic view of information retrieval that is capable of encompassing both of these approaches
  4. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.02
    0.020198528 = product of:
      0.050496317 = sum of:
        0.036917817 = weight(_text_:on in 5689) [ClassicSimilarity], result of:
          0.036917817 = score(doc=5689,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.33671528 = fieldWeight in 5689, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
        0.013578499 = weight(_text_:information in 5689) [ClassicSimilarity], result of:
          0.013578499 = score(doc=5689,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.1551638 = fieldWeight in 5689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.4 = coord(2/5)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches
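
    One of the three expansion methods is based on Soundex codes. Below is a sketch of the classic four-character Soundex coding, slightly simplified in its handling of H and W, and not necessarily the exact variant used in the study:

    ```python
    SOUNDEX_MAP = {**dict.fromkeys("BFPV", "1"),
                   **dict.fromkeys("CGJKQSXZ", "2"),
                   **dict.fromkeys("DT", "3"),
                   "L": "4",
                   **dict.fromkeys("MN", "5"),
                   "R": "6"}

    def soundex(word):
        """Classic four-character Soundex code: first letter + three digits.
        Simplification: H and W are treated like vowels here."""
        word = word.upper()
        first = word[0]
        digits = [SOUNDEX_MAP.get(c, "") for c in word]
        coded = []
        prev = digits[0]          # a repeat of the first letter's code is dropped
        for d in digits[1:]:
            if d and d != prev:   # collapse adjacent duplicates, skip vowels etc.
                coded.append(d)
            prev = d
        return (first + "".join(coded) + "000")[:4]

    print(soundex("Robert"), soundex("Rupert"))   # R163 R163
    print(soundex("Willett"), soundex("Willet"))  # W430 W430
    ```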
    Source
    Journal of information science. 18(1992) no.2, S.139-147
  5. Robertson, A.M.; Willett, P.: Use of genetic algorithms in information retrieval (1995) 0.02
    0.017933264 = product of:
      0.04483316 = sum of:
        0.02131451 = weight(_text_:on in 2418) [ClassicSimilarity], result of:
          0.02131451 = score(doc=2418,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=2418)
        0.023518652 = weight(_text_:information in 2418) [ClassicSimilarity], result of:
          0.023518652 = score(doc=2418,freq=6.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.2687516 = fieldWeight in 2418, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2418)
      0.4 = coord(2/5)
    
    Abstract
    Reviews the basic techniques involving genetic algorithms and their application to 2 problems in information retrieval: the generation of equifrequent groups of index terms; and the identification of optimal query and term weights. The algorithm developed for the generation of equifrequent groupings proved to be effective in operation, achieving results comparable with those obtained using a good deterministic algorithm. The algorithm developed for the identification of optimal query and term weights involves a fitness function that is based on full relevance information
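
    As an illustration of the second application (optimal query-term weights with a relevance-based fitness function), a toy genetic-algorithm sketch; the collection, the fitness measure (average precision) and all parameter settings are our own assumptions, not those of the paper:

    ```python
    import random

    random.seed(0)

    # Toy data: documents as term-frequency vectors over 4 query terms,
    # plus the known-relevant document ids (full relevance information).
    docs = {
        "d1": [3, 0, 1, 0],
        "d2": [0, 2, 0, 1],
        "d3": [1, 1, 2, 0],
        "d4": [0, 0, 0, 3],
        "d5": [2, 1, 0, 0],
    }
    relevant = {"d1", "d3", "d5"}

    def fitness(weights):
        """Average precision of the ranking produced by these query-term weights."""
        scores = {d: sum(w * tf for w, tf in zip(weights, vec)) for d, vec in docs.items()}
        ranking = sorted(scores, key=scores.get, reverse=True)
        hits, precisions = 0, []
        for i, d in enumerate(ranking, start=1):
            if d in relevant:
                hits += 1
                precisions.append(hits / i)
        return sum(precisions) / len(relevant)

    def evolve(pop_size=20, generations=30, n_terms=4):
        pop = [[random.random() for _ in range(n_terms)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_terms)  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:           # occasional mutation
                    child[random.randrange(n_terms)] = random.random()
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print([round(w, 2) for w in best], round(fitness(best), 3))
    ```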
  6. Wade, S.J.; Willett, P.; Bawden, D.: SIBRIS : the Sandwich Interactive Browsing and Ranking Information System (1989) 0.02
    0.015691606 = product of:
      0.039229013 = sum of:
        0.018650195 = weight(_text_:on in 2828) [ClassicSimilarity], result of:
          0.018650195 = score(doc=2828,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.17010231 = fieldWeight in 2828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2828)
        0.02057882 = weight(_text_:information in 2828) [ClassicSimilarity], result of:
          0.02057882 = score(doc=2828,freq=6.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.23515764 = fieldWeight in 2828, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2828)
      0.4 = coord(2/5)
    
    Abstract
    SIBRIS (Sandwich Interactive Browsing and Ranking Information System) is an interactive text retrieval system which has been developed to support the browsing of library and product files at Pfizer Central Research, Sandwich, UK. Once an initial ranking has been produced, the system will allow the user to select any document displayed on the screen at any point during the browse and to use that as the basis for another search. Facilities have been included to enable the user to keep track of the browse and to facilitate backtracking, thus allowing the user to move away from the original query to wander in and out of different areas of interest.
    Source
    Journal of information science. 15(1989) no.4/5, S.249-260
  7. Furner-Hines, J.; Willett, P.: The use of hypertext in libraries in the United Kingdom (1994) 0.02
    0.015148896 = product of:
      0.03787224 = sum of:
        0.027688364 = weight(_text_:on in 5383) [ClassicSimilarity], result of:
          0.027688364 = score(doc=5383,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.25253648 = fieldWeight in 5383, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5383)
        0.0101838745 = weight(_text_:information in 5383) [ClassicSimilarity], result of:
          0.0101838745 = score(doc=5383,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.116372846 = fieldWeight in 5383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5383)
      0.4 = coord(2/5)
    
    Abstract
    State of the art review of hypertext systems in use in UK libraries. Systems include public-access point-of-information (POI) systems that provide guidance to users of local resources, and networked document retrieval systems, such as the WWW, that enable users to access texts stored on machines linked by the Internet. Particular emphasis is placed on those systems that are produced in-house by the libraries in which they are used. The review is based on a series of telephone or face-to-face interviews conducted with representatives of those organizations that a literature review and mailed questionnaire survey identified as current users of hypertext. Considers issues relating to system development and usability, and presents a set of appropriate guidelines for the designers of future systems. Concludes that the principal application of hypertext systems in UK libraries is the implementation of POI systems; that such development is most advanced in the academic sector; and that such development is set to increase in tandem with use of the WWW
  8. Shaw, R.J.; Willett, P.: On the non-random nature of nearest-neighbour document clusters (1993) 0.01
    0.013957202 = product of:
      0.034893006 = sum of:
        0.02131451 = weight(_text_:on in 5817) [ClassicSimilarity], result of:
          0.02131451 = score(doc=5817,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 5817, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=5817)
        0.013578499 = weight(_text_:information in 5817) [ClassicSimilarity], result of:
          0.013578499 = score(doc=5817,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.1551638 = fieldWeight in 5817, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5817)
      0.4 = coord(2/5)
    
    Source
    Information processing and management. 29(1993) no.4, S.449-452
  9. Robertson, M.; Willett, P.: An upper bound to the performance of ranked output searching : optimal weighting of query terms using a genetic algorithm (1996) 0.01
    0.013957202 = product of:
      0.034893006 = sum of:
        0.02131451 = weight(_text_:on in 6977) [ClassicSimilarity], result of:
          0.02131451 = score(doc=6977,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 6977, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=6977)
        0.013578499 = weight(_text_:information in 6977) [ClassicSimilarity], result of:
          0.013578499 = score(doc=6977,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.1551638 = fieldWeight in 6977, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6977)
      0.4 = coord(2/5)
    
    Abstract
    Describes the development of a genetic algorithm (GA) for the assignment of weights to query terms in a ranked output document retrieval system. The GA involves a fitness function that is based on full relevance information, and the rankings resulting from the use of these weights are compared with the Robertson-Sparck Jones F4 retrospective relevance weight
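
    For reference, a sketch of the Robertson-Sparck Jones F4 retrospective relevance weight in its usual 0.5-corrected form, against which the GA-derived weights are compared; the example figures are ours:

    ```python
    from math import log

    def f4_weight(r, R, n, N):
        """Robertson-Sparck Jones F4 retrospective relevance weight (0.5-corrected):
        r = relevant docs containing the term, R = relevant docs in total,
        n = docs containing the term, N = docs in total."""
        return log(((r + 0.5) * (N - n - R + r + 0.5)) /
                   ((n - r + 0.5) * (R - r + 0.5)))

    # A term occurring in 8 of 10 relevant documents but only 50 of 1,000
    # documents overall receives a strongly positive weight.
    print(round(f4_weight(r=8, R=10, n=50, N=1000), 3))
    ```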
  10. Ellis, D.; Furner-Hines, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of inter-linker consistency (1994) 0.01
    0.0087232515 = product of:
      0.021808129 = sum of:
        0.013321568 = weight(_text_:on in 7493) [ClassicSimilarity], result of:
          0.013321568 = score(doc=7493,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.121501654 = fieldWeight in 7493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7493)
        0.0084865615 = weight(_text_:information in 7493) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=7493,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 7493, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=7493)
      0.4 = coord(2/5)
    
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of inter-nodal links that are intended to represent the relationships existing between objects; this operation is often undertaken manually, just as index terms are often manually assigned to documents in a conventional retrieval system. Studies of conventional systems have suggested that a degree of consistency in the terms assigned to documents by indexers is positively associated with retrieval effectiveness. It is thus of interest to investigate the consistency of assignment of links in separate hypertext versions of the same full-text document, since a measure of agreement may be related to the subsequent utility of the resulting hypertext databases. The calculation of values indicating the degree of similarity between objects is a technique that has been widely used in the fields of textual and chemical information retrieval; in this paper we describe the application of arithmetic coefficients and topological indices to the measurement of the degree of similarity between the sets of inter-nodal links in hypertext databases. We publish the results of a study in which several different sets of links are inserted, by different people, between the paragraphs of each of a number of full-text documents. Our results show little similarity between the sets of links identified by different people; this finding is comparable with those of studies of inter-indexer consistency, where it has been found that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers
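
    As an illustration of applying an association coefficient to two linkers' sets of inter-nodal links (here the Jaccard coefficient on links treated as unordered pairs of paragraph numbers; the paper's own arithmetic coefficients and topological indices may differ):

    ```python
    def jaccard_links(links_a, links_b):
        """Jaccard coefficient between two sets of undirected inter-nodal links."""
        a = {frozenset(pair) for pair in links_a}
        b = {frozenset(pair) for pair in links_b}
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Links proposed by two people for the same full-text document, as pairs of
    # paragraph numbers (illustrative figures, not data from the study).
    linker_1 = [(1, 4), (2, 7), (3, 5), (6, 9)]
    linker_2 = [(1, 4), (2, 8), (5, 3)]
    print(round(jaccard_links(linker_1, linker_2), 3))  # 0.4: modest agreement
    ```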
  11. Ellis, D.; Furner, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of retrieval effectiveness (1996) 0.01
    0.0087232515 = product of:
      0.021808129 = sum of:
        0.013321568 = weight(_text_:on in 4214) [ClassicSimilarity], result of:
          0.013321568 = score(doc=4214,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.121501654 = fieldWeight in 4214, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4214)
        0.0084865615 = weight(_text_:information in 4214) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=4214,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 4214, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4214)
      0.4 = coord(2/5)
    
    Source
    Journal of the American Society for Information Science. 47(1996) no.4, S.287-300
  12. Wakeling, S.; Creaser, C.; Pinfield, S.; Fry, J.; Spezi, V.; Willett, P.; Paramita, M.: Motivations, understandings, and experiences of open-access mega-journal authors : results of a large-scale survey (2019) 0.01
    0.008280397 = product of:
      0.020700993 = sum of:
        0.0084865615 = weight(_text_:information in 5317) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=5317,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 5317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5317)
        0.012214432 = product of:
          0.024428863 = sum of:
            0.024428863 = weight(_text_:technology in 5317) [ClassicSimilarity], result of:
              0.024428863 = score(doc=5317,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.16453418 = fieldWeight in 5317, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5317)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.754-768
  13. Griffiths, A.; Luckhurst, H.C.; Willett, P.: Using interdocument similarity information in document retrieval systems (1986) 0.01
    0.0067210137 = product of:
      0.03360507 = sum of:
        0.03360507 = weight(_text_:information in 2415) [ClassicSimilarity], result of:
          0.03360507 = score(doc=2415,freq=4.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.3840108 = fieldWeight in 2415, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=2415)
      0.2 = coord(1/5)
    
    Source
    Journal of the American Society for Information Science. 37(1986) no.1, S.3-11
  14. Perry, R.; Willett, P.: A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.01
    0.0067210137 = product of:
      0.03360507 = sum of:
        0.03360507 = weight(_text_:information in 2701) [ClassicSimilarity], result of:
          0.03360507 = score(doc=2701,freq=4.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.3840108 = fieldWeight in 2701, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=2701)
      0.2 = coord(1/5)
    
    Source
    Journal of information science. 6(1983), S.59-66
  15. Willett, P.: Recent trends in hierarchic document clustering : a critical review (1988) 0.01
    0.0054313997 = product of:
      0.027156997 = sum of:
        0.027156997 = weight(_text_:information in 2604) [ClassicSimilarity], result of:
          0.027156997 = score(doc=2604,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.3103276 = fieldWeight in 2604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=2604)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 24(1988) no.5, S.577-597
  16. Spezi, V.; Wakeling, S.; Pinfield, S.; Creaser, C.; Fry, J.; Willett, P.: Open-access mega-journals : the future of scholarly communication or academic dumping ground? a review (2017) 0.01
    0.0053286273 = product of:
      0.026643137 = sum of:
        0.026643137 = weight(_text_:on in 3548) [ClassicSimilarity], result of:
          0.026643137 = score(doc=3548,freq=8.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.24300331 = fieldWeight in 3548, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3548)
      0.2 = coord(1/5)
    
    Abstract
    Purpose Open-access mega-journals (OAMJs) represent an increasingly important part of the scholarly communication landscape. OAMJs, such as PLOS ONE, are large-scale, broad-scope journals that operate an open access business model (normally based on article-processing charges), and which employ a novel form of peer review, focussing on scientific "soundness" and eschewing judgement of novelty or importance. The purpose of this paper is to examine the discourses relating to OAMJs and their place within scholarly publishing, and to consider attitudes towards mega-journals within the academic community. Design/methodology/approach This paper presents a review of the literature of OAMJs structured around four defining characteristics: scale, disciplinary scope, peer review policy, and economic model. The existing scholarly literature was augmented by searches of more informal outputs, such as blogs and e-mail discussion lists, to capture the debate in its entirety. Findings While the academic literature relating specifically to OAMJs is relatively sparse, discussion in other fora is detailed and animated, with debates ranging from the sustainability and ethics of the mega-journal model, to the impact of soundness-only peer review on article quality and discoverability, and the potential for OAMJs to represent a paradigm-shifting development in scholarly publishing. Originality/value This paper represents the first comprehensive review of the mega-journal phenomenon, drawing not only on the published academic literature, but also on grey, professional and informal sources. The paper advances a number of ways in which the role of OAMJs in the scholarly communication environment can be conceptualised.
  17. Li, J.; Willett, P.: ArticleRank : a PageRank-based alternative to numbers of citations for analysing citation networks (2009) 0.00
    0.004614727 = product of:
      0.023073634 = sum of:
        0.023073634 = weight(_text_:on in 751) [ClassicSimilarity], result of:
          0.023073634 = score(doc=751,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.21044704 = fieldWeight in 751, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=751)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to suggest an alternative to the widely used Times Cited criterion for analysing citation networks. The approach involves taking account of the natures of the papers that cite a given paper, so as to differentiate between papers that attract the same number of citations. Design/methodology/approach - ArticleRank is an algorithm that has been derived from Google's PageRank algorithm to measure the influence of journal articles. ArticleRank is applied to two datasets - a citation network based on an early paper on webometrics, and a self-citation network based on the 19 most cited papers in the Journal of Documentation - using citation data taken from the Web of Knowledge database. Findings - ArticleRank values provide a different ranking of a set of papers from that provided by the corresponding Times Cited values, and overcomes the inability of the latter to differentiate between papers with the same numbers of citations. The difference in rankings between Times Cited and ArticleRank is greatest for the most heavily cited articles in a dataset. Originality/value - This is a novel application of the PageRank algorithm.
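
    For orientation, a minimal power-iteration sketch of PageRank on a toy citation network; ArticleRank, as the paper describes it, is a PageRank derivative, so this shows only the underlying iteration, not the authors' exact formulation:

    ```python
    def pagerank(citations, damping=0.85, iterations=50):
        """Power iteration of PageRank on a citation graph.
        citations[p] is the list of papers that p cites (outgoing links)."""
        papers = list(citations)
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iterations):
            new = {p: (1 - damping) / n for p in papers}
            for p, cited in citations.items():
                if not cited:            # dangling paper: spread its rank evenly
                    for q in papers:
                        new[q] += damping * rank[p] / n
                else:
                    for q in cited:
                        new[q] += damping * rank[p] / len(cited)
            rank = new
        return rank

    # Toy citation network: A and B cite C; C cites D; D cites nothing.
    citations = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
    for paper, score in sorted(pagerank(citations).items(), key=lambda x: -x[1]):
        print(paper, round(score, 3))
    ```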
  18. Furner, J.; Willett, P.: A survey of hypertext-based public-access point-of-information systems in UK libraries (1995) 0.00
    0.004554367 = product of:
      0.022771835 = sum of:
        0.022771835 = weight(_text_:information in 2044) [ClassicSimilarity], result of:
          0.022771835 = score(doc=2044,freq=10.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.2602176 = fieldWeight in 2044, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2044)
      0.2 = coord(1/5)
    
    Abstract
    We have recently completed a survey of the operational use of hypertext-based information systems in academic, public and special libraries in the UK. A literature search, questionnaire and both telephone and face-to-face interviews demonstrate that the principal application of hypertext systems is for the implementation of public-access point-of-information systems, which provide guidance to the users of local information resources. In this paper, we describe the principal issues relating to the design and usage of these systems that were raised in the interviews and that we experienced when using the systems for ourselves. We then present a set of technical recommendations with the intention of helping the developers of future systems, with special attention being given to the need to develop effective methods for system evaluation
    Source
    Journal of information science. 21(1995) no.4, S.243-255
  19. Robertson, A.M.; Willett, P.: Identification of word-variants in historical text databases : report for the period October 1990 to September 1992 (1994) 0.00
    0.004262902 = product of:
      0.02131451 = sum of:
        0.02131451 = weight(_text_:on in 939) [ClassicSimilarity], result of:
          0.02131451 = score(doc=939,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 939, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=939)
      0.2 = coord(1/5)
    
    Abstract
    Databases of historical texts are increasingly becoming available for end-user searching via online or CD-ROM databases. Many of the words in these databases are spelt differently from their modern forms, with a resultant loss of retrieval effectiveness. The project evaluated a range of techniques that can suggest historical variants of modern-language query words, building on earlier work on spelling correction
  20. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.00
    0.004262902 = product of:
      0.02131451 = sum of:
        0.02131451 = weight(_text_:on in 760) [ClassicSimilarity], result of:
          0.02131451 = score(doc=760,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=760)
      0.2 = coord(1/5)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines
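
    The abstract does not spell out the comparison method, but a common way of comparing recall across engines that index different pages is pooled relative recall: the relevant pages found by any of the engines form a pool, and each engine's recall is measured against that pool. A hedged sketch under that assumption (identifiers and figures are illustrative):

    ```python
    def relative_recall(relevant_by_engine):
        """Relative recall: relevant items an engine found, divided by the
        union of relevant items found by any of the engines (the pool)."""
        pool = set().union(*relevant_by_engine.values())
        return {engine: len(found) / len(pool)
                for engine, found in relevant_by_engine.items()}

    # Relevant pages judged for one query (illustrative identifiers).
    relevant_by_engine = {
        "AltaVista": {"p1", "p2", "p3", "p5"},
        "Excite":    {"p2", "p4"},
        "Lycos":     {"p1", "p4", "p6"},
    }
    print(relative_recall(relevant_by_engine))
    # pool of 6 pages -> AltaVista 0.67, Excite 0.33, Lycos 0.5
    ```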