Search (38 results, page 2 of 2)

  • author_ss:"Robertson, S.E."
  1. Bovey, J.D.; Robertson, S.E.: An algorithm for weighted searching on a Boolean system (1984) 0.00
    0.002289546 = product of:
      0.012592502 = sum of:
        0.007731652 = weight(_text_:a in 788) [ClassicSimilarity], result of:
          0.007731652 = score(doc=788,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.25222903 = fieldWeight in 788, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=788)
        0.0048608496 = weight(_text_:s in 788) [ClassicSimilarity], result of:
          0.0048608496 = score(doc=788,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.16817348 = fieldWeight in 788, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.109375 = fieldNorm(doc=788)
      0.18181819 = coord(2/11)
    
    Source
    Information technology: research and development. 3(1984) no.1, S.84-87
    Type
    a
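  Note: the score breakdown attached to each result is Lucene ClassicSimilarity explain() output. As a minimal sketch of how its factors combine (assuming the classic TF-IDF definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the function and variable names below are illustrative), the total for result 1 can be reproduced as:

      from math import sqrt, log

      def classic_score(terms, coord_hits, coord_total, query_norm):
          # terms: one (freq, docFreq, maxDocs, fieldNorm) tuple per matching query term
          total = 0.0
          for freq, doc_freq, max_docs, field_norm in terms:
              tf = sqrt(freq)                               # 2.0 = tf(freq=4.0)
              idf = 1.0 + log(max_docs / (doc_freq + 1.0))  # 1.153047 = idf(docFreq=37942, maxDocs=44218)
              query_weight = idf * query_norm               # 0.030653298 = queryWeight
              field_weight = tf * idf * field_norm          # 0.25222903 = fieldWeight
              total += query_weight * field_weight          # per-term contribution to the sum
          return total * (coord_hits / coord_total)         # 0.18181819 = coord(2/11)

      # Result 1 (doc 788): query terms _text_:a and _text_:s
      score = classic_score(
          [(4.0, 37942, 44218, 0.109375),   # _text_:a
           (2.0, 40523, 44218, 0.109375)],  # _text_:s
          coord_hits=2, coord_total=11, query_norm=0.026584605)
      print(score)  # ~0.002289546, matching the total shown for result 1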
  2. Robertson, S.E.; Walker, S.; Beaulieu, M.: Experimentation as a way of life : Okapi at TREC (2000) 0.00
    0.002276249 = product of:
      0.012519369 = sum of:
        0.0066271294 = weight(_text_:a in 6030) [ClassicSimilarity], result of:
          0.0066271294 = score(doc=6030,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.2161963 = fieldWeight in 6030, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6030)
        0.00589224 = weight(_text_:s in 6030) [ClassicSimilarity], result of:
          0.00589224 = score(doc=6030,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.20385705 = fieldWeight in 6030, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=6030)
      0.18181819 = coord(2/11)
    
    Source
    Information processing and management. 36(2000) no.1, S.95-108
    Type
    a
  3. Robertson, S.E.; Belkin, N.J.: Ranking in principle (1978) 0.00
    0.002146068 = product of:
      0.011803374 = sum of:
        0.0062481174 = weight(_text_:a in 1143) [ClassicSimilarity], result of:
          0.0062481174 = score(doc=1143,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.20383182 = fieldWeight in 1143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=1143)
        0.0055552567 = weight(_text_:s in 1143) [ClassicSimilarity], result of:
          0.0055552567 = score(doc=1143,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.19219826 = fieldWeight in 1143, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=1143)
      0.18181819 = coord(2/11)
    
    Source
    Journal of documentation. 34(1978), S.93-100
    Type
    a
  4. Robertson, S.E.; Hancock-Beaulieu, M.M.: On the evaluation of IR systems (1992) 0.00
    0.002146068 = product of:
      0.011803374 = sum of:
        0.0062481174 = weight(_text_:a in 2619) [ClassicSimilarity], result of:
          0.0062481174 = score(doc=2619,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.20383182 = fieldWeight in 2619, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2619)
        0.0055552567 = weight(_text_:s in 2619) [ClassicSimilarity], result of:
          0.0055552567 = score(doc=2619,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.19219826 = fieldWeight in 2619, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=2619)
      0.18181819 = coord(2/11)
    
    Source
    Information processing and management. 28(1992) no.4, S.457-466
    Type
    a
  5. Robertson, S.E.: The probabilistic character of relevance (1977) 0.00
    0.002146068 = product of:
      0.011803374 = sum of:
        0.0062481174 = weight(_text_:a in 7399) [ClassicSimilarity], result of:
          0.0062481174 = score(doc=7399,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.20383182 = fieldWeight in 7399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=7399)
        0.0055552567 = weight(_text_:s in 7399) [ClassicSimilarity], result of:
          0.0055552567 = score(doc=7399,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.19219826 = fieldWeight in 7399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.125 = fieldNorm(doc=7399)
      0.18181819 = coord(2/11)
    
    Source
    Information processing and management. 13(1977), S.247-251
    Type
    a
  6. Robertson, S.E.: Query-document symmetry and dual models (1994) 0.00
    0.0017751341 = product of:
      0.009763237 = sum of:
        0.0069856085 = weight(_text_:a in 8159) [ClassicSimilarity], result of:
          0.0069856085 = score(doc=8159,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.22789092 = fieldWeight in 8159, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=8159)
        0.0027776284 = weight(_text_:s in 8159) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=8159,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 8159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=8159)
      0.18181819 = coord(2/11)
    
    Abstract
    The idea that there is some natural symmetry between queries and documents is explained. If symmetry can be assumed, then it leads to a conception of 'dual' models in information retrieval (given a model, we can construct a dual model in which the roles of documents and queries are reversed). But symmetry breaks down in various ways, which may invalidate this construction. And even if we can construct a dual, it is not obvious that it can be combined with the original.
    Source
    Journal of documentation. 50(1994) no.3, S.233-238
    Type
    a
  7. Robertson, S.E.: The parametric description of retrieval tests : Part II: Overall measures (1969) 0.00
    0.0017362813 = product of:
      0.009549547 = sum of:
        0.006112407 = weight(_text_:a in 4156) [ClassicSimilarity], result of:
          0.006112407 = score(doc=4156,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.19940455 = fieldWeight in 4156, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4156)
        0.00343714 = weight(_text_:s in 4156) [ClassicSimilarity], result of:
          0.00343714 = score(doc=4156,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.118916616 = fieldWeight in 4156, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4156)
      0.18181819 = coord(2/11)
    
    Abstract
    Two general requirements for overall measures of retrieval effectiveness are proposed: namely, that the measures should be as far as possible independent of generality (interpreted to mean that they can be described in terms of recall and fallout), and that they should be able to measure the effectiveness of a performance curve (not being restricted to a simple 2x2 table). Several measures that have been proposed are examined with these conditions in mind. It turns out that most of the satisfactory ones are directly or indirectly related to Swets' measure A, the area under the recall-fallout curve. In particular, Brookes' measure S and Rocchio's normalized recall are versions of A.
    Source
    Journal of documentation. 25(1969) no.2, S.93-106
    Type
    a
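  Note: measure A in the abstract above is the area under the recall-fallout curve; writing recall as R and fallout as F, this is (a standard formulation, not quoted from the paper):

      A = \int_0^1 R \, dF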
  8. Robertson, S.E.: The parametric description of retrieval tests : Part I: The basic parameters (1969) 0.00
    0.0016593148 = product of:
      0.009126231 = sum of:
        0.006695806 = weight(_text_:a in 4155) [ClassicSimilarity], result of:
          0.006695806 = score(doc=4155,freq=12.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.21843673 = fieldWeight in 4155, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4155)
        0.0024304248 = weight(_text_:s in 4155) [ClassicSimilarity], result of:
          0.0024304248 = score(doc=4155,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08408674 = fieldWeight in 4155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4155)
      0.18181819 = coord(2/11)
    
    Abstract
    Some parameters and techniques in use for describing the results of tests on IR systems are analysed. Several considerations outside the scope of the usual 2x2 table are relevant to the choice of parameters. In particular, a variable which produces a 'performance curve' of a system corresponds to an extension of the 2x2 table. Also, the statistical relationships between parameters are all-important. It is considered that precision is not as useful a measure of performance (in conjunction with recall) as fallout. A more powerful alternative to Cleverdon's 'inevitable inverse relationship between recall and precision' is proposed and justified, namely that the recall-fallout graph is convex.
    Source
    Journal of documentation. 25(1969) no.1, S.1-27
    Type
    a
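  Note: the 2x2 table referred to in both Part I and Part II is the standard retrieved/relevant contingency table. With a = relevant retrieved, b = non-relevant retrieved, c = relevant not retrieved and d = non-relevant not retrieved, the parameters discussed are conventionally defined as (standard definitions, not quoted from the paper):

      \text{recall} = \frac{a}{a+c}, \qquad
      \text{precision} = \frac{a}{a+b}, \qquad
      \text{fallout} = \frac{b}{b+d}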
  9. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing in information retrieval : an updated review (1997) 0.00
    0.0016410447 = product of:
      0.009025746 = sum of:
        0.0062481174 = weight(_text_:a in 7450) [ClassicSimilarity], result of:
          0.0062481174 = score(doc=7450,freq=8.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.20383182 = fieldWeight in 7450, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=7450)
        0.0027776284 = weight(_text_:s in 7450) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=7450,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 7450, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=7450)
      0.18181819 = coord(2/11)
    
    Abstract
    Reviews the progress of parallel computing in information retrieval. Stresses the importance of the motivation in using parallel computing for text retrieval. Analyzes parallel IR systems using a classification defined by Rasmussen, and describes some of those systems. Gives a description of the retrieval models used in parallel information processing and notes areas where research is needed.
    Source
    Journal of documentation. 53(1997) no.3, S.274-315
    Type
    a
  10. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.00
    0.0016189533 = product of:
      0.008904243 = sum of:
        0.005467103 = weight(_text_:a in 2216) [ClassicSimilarity], result of:
          0.005467103 = score(doc=2216,freq=8.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.17835285 = fieldWeight in 2216, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2216)
        0.00343714 = weight(_text_:s in 2216) [ClassicSimilarity], result of:
          0.00343714 = score(doc=2216,freq=4.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.118916616 = fieldWeight in 2216, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2216)
      0.18181819 = coord(2/11)
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general-purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference), which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval still has a long way to go.
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
    Source
    Journal of documentation. 53(1997) no.1, S.20-34
    Type
    a
  11. Huang, X.; Robertson, S.E.: Application of probabilistic methods to Chinese text retrieval (1997) 0.00
    0.0015532422 = product of:
      0.008542832 = sum of:
        0.006112407 = weight(_text_:a in 4706) [ClassicSimilarity], result of:
          0.006112407 = score(doc=4706,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.19940455 = fieldWeight in 4706, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4706)
        0.0024304248 = weight(_text_:s in 4706) [ClassicSimilarity], result of:
          0.0024304248 = score(doc=4706,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.08408674 = fieldWeight in 4706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4706)
      0.18181819 = coord(2/11)
    
    Abstract
    Discusses the use of text retrieval methods based on the probabilistic model with Chinese language material. Since Chinese text has no natural word boundaries, either a dictionary-based word segmentation method must be applied to the text, or indexing and searching must be done in terms of single Chinese characters. In either case, it becomes important to have a good way of dealing with phrases or contiguous strings of characters; the probabilistic model does not at present have such a facility. Proposes some ad hoc modifications of the probabilistic weighting function and matching method for this purpose.
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
    Source
    Journal of documentation. 53(1997) no.1, S.74-79
    Type
    a
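  Note: as a minimal sketch of the character-based option the abstract contrasts with dictionary segmentation (the tokenizers below are illustrative assumptions, not the paper's method):

      def char_tokens(text: str) -> list[str]:
          # index each single Chinese character as a term
          return list(text)

      def char_bigrams(text: str) -> list[str]:
          # contiguous two-character strings, one crude way to capture phrases
          return [text[i:i + 2] for i in range(len(text) - 1)]

      print(char_tokens("信息检索"))   # ['信', '息', '检', '索']
      print(char_bigrams("信息检索"))  # ['信息', '息检', '检索']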
  12. Robertson, S.E.; Beaulieu, M.: Research and evaluation in information retrieval (1997) 0.00
    0.0014888468 = product of:
      0.008188657 = sum of:
        0.005411029 = weight(_text_:a in 7445) [ClassicSimilarity], result of:
          0.005411029 = score(doc=7445,freq=6.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.17652355 = fieldWeight in 7445, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=7445)
        0.0027776284 = weight(_text_:s in 7445) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=7445,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 7445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=7445)
      0.18181819 = coord(2/11)
    
    Abstract
    Offered as a discussion document drawing on the experiences of the Okapi team in developing information retrieval systems. Raises some of the issues currently exercising the information retrieval community in the context of experimentation and evaluation
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
    Source
    Journal of documentation. 53(1997) no.1, S.51-57
    Type
    a
  13. Robertson, S.E.: Overview of the Okapi projects (1997) 0.00
    0.0014888468 = product of:
      0.008188657 = sum of:
        0.005411029 = weight(_text_:a in 4703) [ClassicSimilarity], result of:
          0.005411029 = score(doc=4703,freq=6.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.17652355 = fieldWeight in 4703, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4703)
        0.0027776284 = weight(_text_:s in 4703) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=4703,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 4703, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=4703)
      0.18181819 = coord(2/11)
    
    Abstract
    Gives a brief description of the Okapi projects and of the work of the Centre for Interactive Systems Research in the Department of Information Science at City University, London, UK, where these projects have been developed. Describes first one version of an information retrieval system which contains some of the central features of the Okapi projects, and follows this with an indication of the variety of systems now implemented or implementable within the present setup.
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
    Source
    Journal of documentation. 53(1997) no.1, S.3-7
    Type
    a
  14. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.00
    0.0013412925 = product of:
      0.0073771086 = sum of:
        0.0039050733 = weight(_text_:a in 7953) [ClassicSimilarity], result of:
          0.0039050733 = score(doc=7953,freq=2.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.12739488 = fieldWeight in 7953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=7953)
        0.0034720355 = weight(_text_:s in 7953) [ClassicSimilarity], result of:
          0.0034720355 = score(doc=7953,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.120123915 = fieldWeight in 7953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=7953)
      0.18181819 = coord(2/11)
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system, investigating a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson/Sparck Jones weighting function used in TREC-1.
    Pages
    19 S
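  Note: the 'standard' Robertson/Sparck Jones relevance weight mentioned in the abstract is usually written as follows (the common 0.5 point-estimate form; N = collection size, n = documents containing the term, R = known relevant documents, r = relevant documents containing the term):

      w = \log \frac{(r + 0.5)(N - n - R + r + 0.5)}{(n - r + 0.5)(R - r + 0.5)}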
  15. Robertson, S.E.: Theories and models in information retrieval (1977) 0.00
    0.0013083118 = product of:
      0.007195715 = sum of:
        0.0044180867 = weight(_text_:a in 1844) [ClassicSimilarity], result of:
          0.0044180867 = score(doc=1844,freq=4.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14413087 = fieldWeight in 1844, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1844)
        0.0027776284 = weight(_text_:s in 1844) [ClassicSimilarity], result of:
          0.0027776284 = score(doc=1844,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.09609913 = fieldWeight in 1844, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=1844)
      0.18181819 = coord(2/11)
    
    Abstract
    This paper is concerned with recent work in the theory of information retrieval. More particularly, it is concerned with theories which tackle the problem of retrieval performance, in a sense which will be explained. The aim is not an exhaustive survey of such work; rather it is an analysis and synthesis of those contributions which I feel to be important or find interesting
    Source
    Journal of documentation. 33(1977), S.126-148
    Type
    a
  16. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.00
    0.0011166352 = product of:
      0.006141493 = sum of:
        0.0040582716 = weight(_text_:a in 651) [ClassicSimilarity], result of:
          0.0040582716 = score(doc=651,freq=6.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.13239266 = fieldWeight in 651, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
        0.0020832212 = weight(_text_:s in 651) [ClassicSimilarity], result of:
          0.0020832212 = score(doc=651,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.072074346 = fieldWeight in 651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
      0.18181819 = coord(2/11)
    
    Abstract
    Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partition are investigated: TermId and DocId.
    Design/methodology/approach - We use standard parallel computing measures, such as speedup and efficiency, to examine the computing results and also the space costs of our trial indexing experiments.
    Findings - The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method.
    Practical implications - The DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration.
    Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
    Source
    Aslib proceedings. 57(2005) no.5, S.434-459
    Type
    a
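  Note: as a minimal sketch of the two partitioning schemes compared in the abstract above (the data layout and names are illustrative assumptions, not the paper's implementation):

      from collections import defaultdict

      # a toy inverted file: term -> postings list of (doc_id, term_freq)
      inverted = {
          "okapi": [(1, 3), (4, 1), (9, 2)],
          "trec": [(2, 1), (4, 2)],
          "probabilistic": [(1, 1), (9, 1)],
      }

      def docid_partition(inverted, n_parts):
          # DocId: every partition may hold any term, but only the postings
          # whose document identifier maps to that partition
          parts = [defaultdict(list) for _ in range(n_parts)]
          for term, postings in inverted.items():
              for doc_id, tf in postings:
                  parts[doc_id % n_parts][term].append((doc_id, tf))
          return parts

      def termid_partition(inverted, n_parts):
          # TermId: each partition holds the complete postings lists
          # for a disjoint subset of the terms
          parts = [{} for _ in range(n_parts)]
          for term, postings in inverted.items():
              parts[hash(term) % n_parts][term] = postings
          return parts

      print(docid_partition(inverted, 2)[0]["okapi"])  # [(4, 1)]: even doc ids only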
  17. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the update of partitioned inverted files (2007) 0.00
    0.0011094587 = product of:
      0.006102023 = sum of:
        0.0043660053 = weight(_text_:a in 819) [ClassicSimilarity], result of:
          0.0043660053 = score(doc=819,freq=10.0), product of:
            0.030653298 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.026584605 = queryNorm
            0.14243183 = fieldWeight in 819, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
        0.0017360178 = weight(_text_:s in 819) [ClassicSimilarity], result of:
          0.0017360178 = score(doc=819,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.060061958 = fieldWeight in 819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
      0.18181819 = coord(2/11)
    
    Abstract
    Purpose - An issue that tends to be ignored in information retrieval is the updating of inverted files, largely because inverted files were devised to provide fast query service, and much work has been done with the emphasis strongly on queries. This paper aims to study the effect of using parallel methods for the update of inverted files in order to reduce costs, by looking at two types of partitioning for inverted files: document identifier and term identifier.
    Design/methodology/approach - Raw update service and update with query service are studied with these partitioning schemes, using an incremental update strategy. The paper uses standard measures from parallel computing, such as speedup, to examine the computing results and also the costs of reorganising indexes while servicing transactions.
    Findings - Empirical results show that for both transaction processing and index reorganisation the document identifier method is superior. However, there is evidence that the term identifier partitioning method could be useful in a concurrent transaction processing context.
    Practical implications - There is an increasing need to service updates, which is now becoming a requirement of inverted files (for dynamic collections such as the web); this marks a shift from the past requirements of inverted file maintenance.
    Originality/value - The paper is of value to database administrators who manage large-scale and dynamic text collections, and who need to use parallel computing to implement their text retrieval services.
    Source
    Aslib proceedings. 59(2007) no.4/5, S.367-396
    Type
    a
  18. Robertson, S.E.: OKAPI at TREC (1994) 0.00
    3.156396E-4 = product of:
      0.0034720355 = sum of:
        0.0034720355 = weight(_text_:s in 7952) [ClassicSimilarity], result of:
          0.0034720355 = score(doc=7952,freq=2.0), product of:
            0.028903782 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.026584605 = queryNorm
            0.120123915 = fieldWeight in 7952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=7952)
      0.09090909 = coord(1/11)
    
    Pages
    19 S