Search (36 results, page 1 of 2)

  • Filter: author_ss:"Willett, P."
  1. Al-Hawamdeh, S.; Smith, G.; Willett, P.; Vere, R. de: Using nearest-neighbour searching techniques to access full-text documents (1991) 0.03
    Abstract
    Summarises the results to date of a continuing programme of research at Sheffield Univ. to investigate the use of nearest-neighbour retrieval algorithms for full text searching. Given a natural language query statement, the research methods result in a ranking of the paragraphs comprising a full text document in order of decreasing similarity with the query, where the similarity for each paragraph is determined by the number of keyword stems that it has in common with the query
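    The ranking method can be made concrete with a short sketch: score each paragraph by the number of keyword stems it shares with the query, then sort in decreasing order of that overlap. This is an illustrative reconstruction, not the Sheffield code; the crude stem() helper is a hypothetical stand-in for a real stemmer such as Porter's.

```python
# Minimal sketch of paragraph-level nearest-neighbour ranking: each
# paragraph is scored by the number of keyword stems it shares with the
# query. stem() is a deliberately crude placeholder for a real stemmer.

def stem(word: str) -> str:
    word = word.lower().strip(".,;:!?")
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def rank_paragraphs(query: str, paragraphs: list[str]) -> list[tuple[int, str]]:
    query_stems = {stem(w) for w in query.split()}
    scored = [(len(query_stems & {stem(w) for w in p.split()}), p)
              for p in paragraphs]
    # Highest stem overlap first
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

doc = [
    "Nearest-neighbour searching ranks paragraphs by query similarity.",
    "Unrelated text about library administration and budgets.",
]
for score, para in rank_paragraphs("rank paragraph similarity query", doc):
    print(score, para)
```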
    Type
    a
  2. Artymiuk, P.J.; Spriggs, R.V.; Willett, P.: Graph theoretic methods for the analysis of structural relationships in biological macromolecules (2005) 0.02
    Abstract
    Subgraph isomorphism and maximum common subgraph isomorphism algorithms from graph theory provide an effective and an efficient way of identifying structural relationships between biological macromolecules. They thus provide a natural complement to the pattern matching algorithms that are used in bioinformatics to identify sequence relationships. Examples are provided of the use of graph theory to analyze proteins for which three-dimensional crystallographic or NMR structures are available, focusing on the use of the Bron-Kerbosch clique detection algorithm to identify common folding motifs and of the Ullmann subgraph isomorphism algorithm to identify patterns of amino acid residues. Our methods are also applicable to other types of biological macromolecule, such as carbohydrate and nucleic acid structures.
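    As an illustration of the clique-detection step named in the abstract, here is a minimal version of the basic Bron-Kerbosch algorithm (without pivoting) applied to a toy graph; the authors' protein-analysis implementations are of course far more elaborate.

```python
# Basic Bron-Kerbosch: enumerate all maximal cliques of an undirected
# graph by recursively growing a clique R from candidate vertices P
# while excluding already-processed vertices X.

def bron_kerbosch(R: set, P: set, X: set, adj: dict, cliques: list) -> None:
    if not P and not X:
        cliques.append(set(R))  # R cannot be extended: it is maximal
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

adj = {  # adjacency sets of a small example graph
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3},
}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)  # the maximal cliques {1, 2, 3} and {3, 4}
```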
    Date
    22.7.2006 14:40:10
    Type
    a
  3. Griffiths, A.; Robinson, L.A.; Willett, P.: Hierarchic agglomerative clustering methods for automatic document classification (1984) 0.01
    Type
    a
  4. Willett, P.: Recent trends in hierarchic document clustering : a critical review (1988) 0.01
    Type
    a
  5. Al-Hawamdeh, S.; Smith, G.; Willett, P.: Paragraph-based access to full-text documents using a hypertext system (1991) 0.01
    Type
    a
  6. Griffiths, A.; Luckhurst, H.C.; Willett, P.: Using interdocument similarity information in document retrieval systems (1986) 0.00
    Type
    a
  7. Perry, R.; Willett, P.: A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.00
    Type
    a
  8. Li, X.; Cox, A.; Ford, N.; Creaser, C.; Fry, J.; Willett, P.: Knowledge construction by users : a content analysis framework and a knowledge construction process model for virtual product user communities (2017) 0.00
    Abstract
    Purpose: The purpose of this paper is to develop a content analysis framework and from that derive a process model of knowledge construction in the context of virtual product user communities, organization sponsored online forums where product users collaboratively construct knowledge to solve their technical problems.
    Design/methodology/approach: The study is based on a deductive and qualitative content analysis of discussion threads about solving technical problems selected from a series of virtual product user communities. Data are complemented with thematic analysis of interviews with forum members.
    Findings: The research develops a content analysis framework for knowledge construction. It is based on a combination of existing codes derived from frameworks developed for computer-supported collaborative learning and new categories identified from the data. Analysis using this framework allows the authors to propose a knowledge construction process model showing how these elements are organized around a typical "trial and error" knowledge construction strategy.
    Practical implications: The research makes suggestions about organizations' management of knowledge activities in virtual product user communities, including moderators' roles in facilitation.
    Originality/value: The paper outlines a new framework for analysing knowledge activities where there is a low level of critical thinking and a model of knowledge construction by trial and error. The new framework and model can be applied in other similar contexts.
    Type
    a
  9. Ellis, D.; Furner, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of retrieval effectiveness (1996) 0.00
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of internodal links that are intended to represent the relationships existing between objects; this operation is often undertaken manually, just as index terms are often manually assigned to documents in a conventional retrieval system. In an earlier article (1994), the results were published of a study in which several different sets of links were inserted, each by a different person, between the paragraphs of each of a number of full-text documents. These results showed little similarity between the link-sets, a finding that was comparable with those of studies of inter-indexer consistency, which suggest that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers. In this article, a description is provided of an investigation into the nature of the relationship existing between (i) the levels of inter-linker consistency obtaining among the group of hypertext databases used in our earlier experiments, and (ii) the levels of effectiveness of a number of searches carried out in those databases. An account is given of the implementation of the searches and of the methods used in the calculation of numerical values expressing their effectiveness. Analysis of the results of a comparison between recorded levels of consistency and those of effectiveness does not allow us to draw conclusions about the consistency-effectiveness relationship that are equivalent to those drawn in comparable studies of inter-indexer consistency.
    Type
    a
  10. Furner-Hines, J.; Willett, P.: The use of hypertext in libraries in the United Kingdom (1994) 0.00
    Abstract
    Presents a summary of the major findings of a survey of the use of hypertext systems and the production of hypertext products in UK libraries. Not surprisingly, academic libraries are found to be both the most enthusiastic users and producers. There are normally 4 principal stages in a library's development of a hypertext system, although the possibility of leapfrogging via WWW is acknowledged
    Type
    a
  11. Robertson, M.; Willett, P.: An upper bound to the performance of ranked output searching : optimal weighting of query terms using a genetic algorithm (1996) 0.00
    Abstract
    Describes the development of a genetic algorithm (GA) for the assignment of weights to query terms in a ranked output document retrieval system. The GA involves a fitness function that is based on full relevance information, and the rankings resulting from the use of these weights are compared with the Robertson-Sparck Jones F4 retrospective relevance weight
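    A hedged sketch of the fitness idea described above: a chromosome is a vector of query-term weights, documents are ranked by the summed weights of the query terms they contain, and fitness is computed from the full relevance information. Average precision is used here purely for illustration; all names and data are invented.

```python
# Fitness evaluation for one candidate chromosome of query-term weights:
# rank the documents by weighted term overlap, then score the ranking
# against the full relevance judgements.

def rank_documents(weights, docs):
    # docs: list of sets of query-term indices present in each document
    scores = [sum(weights[t] for t in doc_terms) for doc_terms in docs]
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)

def average_precision(ranking, relevant):
    hits, total = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

docs = [{0, 1}, {1}, {2}, {0, 2}]      # term occurrences per document
relevant = {0, 3}                      # known relevant documents
weights = [0.9, 0.1, 0.5]              # one candidate chromosome
print(f"fitness = {average_precision(rank_documents(weights, docs), relevant):.3f}")
```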
    Type
    a
  12. Ellis, D.; Furner-Hines, J.; Willett, P.: The creation of hypertext links in full-text documents (1994) 0.00
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of internodal links that are intended to represent the relationships existing between objects; this operation is usually undertaken manually, much as subject index terms are manually allocated to documents. Reports results of a study in which several different sets of hypertext links were inserted, each by a different person, between the paragraphs of each of a number of full text documents. The similarity between the members of each pair of link sets was then evaluated. Results indicated that little similarity existed among the link sets, a finding comparable with those of studies of inter-indexer consistency, which suggests that there is generally only a low level of agreement between the sets of index terms assigned to a document by indexers. Concludes with that part of the study designed to test the validity of making these kinds of assumptions in the context of hypertext link sets.
  13. Ellis, D.; Furner-Hines, J.; Willett, P.: On the creation of hypertext links in full-text documents : measurement of inter-linker consistency (1994) 0.00
    Abstract
    An important stage in the process of retrieval of objects from a hypertext database is the creation of a set of inter-nodal links that are intended to represent the relationships existing between objects; this operation is often undertaken manually, just as index terms are often manually assigned to documents in a conventional retrieval system. Studies of conventional systems have suggested that a degree of consistency in the terms assigned to documents by indexers is positively associated with retrieval effectiveness. It is thus of interest to investigate the consistency of assignment of links in separate hypertext versions of the same full-text document, since a measure of agreement may be related to the subsequent utility of the resulting hypertext databases. The calculation of values indicating the degree of similarity between objects is a technique that has been widely used in the fields of textual and chemical information retrieval; in this paper we describe the application of arithmetic coefficients and topological indices to the measurement of the degree of similarity between the sets of inter-nodal links in hypertext databases. We publish the results of a study in which several different sets of links were inserted, by different people, between the paragraphs of each of a number of full-text documents. Our results show little similarity between the sets of links identified by different people; this finding is comparable with those of studies of inter-indexer consistency, where it has been found that there is generally only a low level of agreement between the sets of index terms assigned to a document by different indexers.
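    The arithmetic-coefficient approach can be illustrated by treating each linker's output as a set of paragraph pairs and comparing two such sets with, for example, the Dice coefficient; the link data below are invented.

```python
# Dice coefficient between two link sets: 1.0 means identical sets of
# links, 0.0 means no links in common.

def dice(links_a: set, links_b: set) -> float:
    if not links_a and not links_b:
        return 1.0
    return 2 * len(links_a & links_b) / (len(links_a) + len(links_b))

linker_1 = {(1, 4), (2, 5), (3, 7)}   # links as (from-paragraph, to-paragraph)
linker_2 = {(1, 4), (2, 6)}
print(f"Dice similarity = {dice(linker_1, linker_2):.2f}")  # 0.40
```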
    Type
    a
  14. Willett, P.: From chemical documentation to chemoinformatics : 50 years of chemical information science (2009) 0.00
    Abstract
    This paper summarizes the historical development of the discipline that is now called 'chemoinformatics'. It shows how this has evolved, principally as a result of technological developments in chemistry and biology during the past decade, from long-established techniques for the modelling and searching of chemical molecules. A total of 30 papers, the earliest dating back to 1957, are briefly summarized to highlight some of the key publications and to show the development of the discipline.
    Source
    Information science in transition, Ed.: A. Gilchrist
    Type
    a
  15. Robertson, A.M.; Willett, P.: Generation of equifrequent groups of words using a genetic algorithm (1994) 0.00
    Abstract
    Genetic algorithms are a class of non-deterministic algorithms that derive from Darwinian evolution and that provide good, though not necessarily optimal, solutions to combinatorial problems. We describe their application to the identification of characteristics that occur approximately equifrequently in a database, using two different methods for the creation of the chromosome data structures that lie at the heart of a genetic algorithm. Experiments with files of English and Turkish text suggest that the genetic algorithm developed here can produce results superior to those produced by existing non-deterministic algorithms; however, the results are inferior to those produced by an existing deterministic algorithm.
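    A minimal sketch of the objective being optimised, assuming a chromosome that assigns each word to a group: fitness improves as the groups' total frequencies approach equality. The encoding and data here are illustrative only, not the paper's.

```python
# Deviation of group frequency totals from their mean; a perfectly
# equifrequent grouping scores 0, so the GA would minimise this value.

def group_deviation(chromosome, freqs, n_groups):
    totals = [0] * n_groups
    for word, group in enumerate(chromosome):
        totals[group] += freqs[word]
    mean = sum(totals) / n_groups
    return sum(abs(t - mean) for t in totals)

freqs = [50, 30, 20, 40, 10]          # word occurrence counts
chromosome = [0, 1, 1, 1, 0]          # word -> group assignment
print(group_deviation(chromosome, freqs, n_groups=2))  # totals 60 vs 90 -> 30
```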
    Type
    a
  16. Li, J.; Willett, P.: ArticleRank : a PageRank-based alternative to numbers of citations for analysing citation networks (2009) 0.00
    Abstract
    Purpose: The purpose of this paper is to suggest an alternative to the widely used Times Cited criterion for analysing citation networks. The approach involves taking account of the natures of the papers that cite a given paper, so as to differentiate between papers that attract the same number of citations.
    Design/methodology/approach: ArticleRank is an algorithm that has been derived from Google's PageRank algorithm to measure the influence of journal articles. ArticleRank is applied to two datasets - a citation network based on an early paper on webometrics, and a self-citation network based on the 19 most cited papers in the Journal of Documentation - using citation data taken from the Web of Knowledge database.
    Findings: ArticleRank values provide a different ranking of a set of papers from that provided by the corresponding Times Cited values, and overcome the inability of the latter to differentiate between papers with the same numbers of citations. The difference in rankings between Times Cited and ArticleRank is greatest for the most heavily cited articles in a dataset.
    Originality/value: This is a novel application of the PageRank algorithm.
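    The underlying recurrence is the generic PageRank iteration, sketched below on a toy citation network; this is not the authors' exact ArticleRank formulation, whose details appear in the paper.

```python
# Generic PageRank iteration: each paper's score is redistributed along
# its outgoing citation links, damped by factor d.

def pagerank(out_links: dict, d: float = 0.85, iters: int = 50) -> dict:
    nodes = list(out_links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(out_links[m])
                           for m in nodes if n in out_links[m])
            new[n] = (1 - d) / len(nodes) + d * incoming
        rank = new
    return rank

citations = {"A": {"B"}, "B": {"C"}, "C": {"A", "B"}}  # paper -> papers it cites
print(pagerank(citations))
```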
    Type
    a
  17. Ekmekcioglu, F.C.; Willett, P.: Effectiveness of stemming for Turkish text retrieval (2000) 0.00
    Type
    a
  18. Willett, P.; Robertson, S.: In memoriam: Karen Sparck Jones (2007) 0.00
    Type
    a
  19. Jones, G.; Robertson, A.M.; Willett, P.: ¬An introduction to genetic algorithms and to their use in information retrieval (1994) 0.00
    Abstract
    This paper provides an introduction to genetic algorithms, a new approach to the investigation of computationally-intensive problems that may be insoluble using conventional, deterministic approaches. A genetic algorithm takes an initial set of possible starting solutions and then iteratively improves these solutions using operators that are analogous to those involved in Darwinian evolution. The approach is illustrated by reference to several problems in information retrieval.
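    The mechanism the paper introduces can be sketched as the canonical GA loop (selection, crossover, mutation, replacement) applied to a toy bit-string objective; everything below is illustrative rather than drawn from the paper.

```python
# Steady-state genetic algorithm on a toy objective: maximise the number
# of 1-bits in a bit string.

import random

def evolve(pop_size=20, length=16, generations=40):
    fitness = lambda bits: sum(bits)          # toy objective
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection of two parents
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(2)]
        cut = random.randrange(1, length)     # one-point crossover
        child = parents[0][:cut] + parents[1][cut:]
        child[random.randrange(length)] ^= 1  # point mutation
        # replace the weakest member of the population
        pop[min(range(pop_size), key=lambda j: fitness(pop[j]))] = child
    return max(pop, key=fitness)

print(evolve())
```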
    Type
    a
  20. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.00
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
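    The recall comparison rests on pooling: since the engines index non-identical page sets, the relevant documents found by all engines are pooled and each engine's recall is measured against that pool. A small sketch with invented data:

```python
# Relative recall: each engine's relevant retrievals are scored against
# the pooled set of relevant documents found by any engine.

def relative_recall(found_relevant: dict) -> dict:
    pool = set().union(*found_relevant.values())  # all known relevant docs
    return {engine: len(docs) / len(pool)
            for engine, docs in found_relevant.items()}

found_relevant = {   # relevant documents retrieved by each engine
    "AltaVista": {"d1", "d2", "d3"},
    "Excite":    {"d2", "d4"},
    "Lycos":     {"d1", "d4", "d5"},
}
print(relative_recall(found_relevant))
# pool = {d1..d5}; e.g. AltaVista recall = 3/5 = 0.6
```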
    Type
    a