Search (36 results, page 1 of 2)

  • author_ss:"Robertson, S.E."
  1. Robertson, S.E.: Some recent theories and models in information retrieval (1980) 0.00
    Pages
    S.131-136
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  2. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.00
    Abstract
    In this paper methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
    Source
    Aslib proceedings. 56(2004) no.4, S.201-211
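    The abstract gives no implementation detail, so the following is only a rough, hypothetical sketch of the general idea (not the Okapi passage algorithm from the paper): passages of a document are scored concurrently across worker processes, and the best passage score is kept.

      # Hypothetical sketch of parallel passage scoring (not the paper's algorithm).
      from multiprocessing import Pool

      def score_passage(args):
          # Toy scoring: count query-term occurrences in the passage.
          passage, query_terms = args
          tokens = passage.lower().split()
          return sum(tokens.count(t) for t in query_terms)

      def best_passage_score(text, query_terms, window=50, workers=4):
          # Split the document into half-overlapping fixed-size word windows.
          tokens = text.split()
          passages = [" ".join(tokens[i:i + window])
                      for i in range(0, max(len(tokens) - window + 1, 1), window // 2)]
          # Score all passages in parallel and keep the best one.
          with Pool(workers) as pool:
              scores = pool.map(score_passage, [(p, query_terms) for p in passages])
          return max(scores)

      if __name__ == "__main__":
          doc = "parallel computing for passage retrieval " * 100
          print(best_passage_score(doc, ["passage", "retrieval"]))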
  3. Robertson, S.E.: The probability ranking principle in IR (1977) 0.00
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.281-286.
    Source
    Journal of documentation. 33(1977), S.294-304
  4. Robertson, S.E.; Belkin, N.J.: Ranking in principle (1978) 0.00
    Source
    Journal of documentation. 34(1978), S.93-100
  5. Robertson, S.E.; Walker, S.: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval (1979) 0.00
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.345-453.
    Source
    Journal of documentation. 35(1979), S.285-295 (???)
  6. Robertson, S.E.; Walker, S.; Hancock-Beaulieu, M.M.: Large test collection experiments of an operational, interactive system : OKAPI at TREC (1995) 0.00
    Abstract
    The Okapi system has been used in a series of experiments on the TREC collections, investigating probabilistic methods, relevance feedback and query expansion, and interaction issues. Some new probabilistic models have been developed, resulting in simple weighting functions that take account of document length and within-document and within-query term frequency. All have been shown to be beneficial when based on large quantities of relevance data, as in the routing task. Interaction issues are much more difficult to evaluate in the TREC framework, and no benefits have yet been demonstrated from feedback based on small numbers of 'relevant' items identified by intermediary searchers.
    Source
    Information processing and management. 31(1995) no.3, S.345-360
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
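    The 'simple weighting functions' described in this abstract are the family that became known as BM25. A minimal sketch of one such function, assuming the conventional form with tuning constants k1 and b (the constant values below are common defaults, not taken from the paper):

      import math

      def bm25_term_weight(tf, df, N, doc_len, avg_doc_len, k1=1.2, b=0.75):
          # Weight of one query term in one document.
          # tf: within-document term frequency; df: documents containing the term;
          # N: collection size; doc_len/avg_doc_len: document-length normalisation.
          idf = math.log((N - df + 0.5) / (df + 0.5))        # RSJ-style idf component
          norm = k1 * ((1 - b) + b * doc_len / avg_doc_len)  # length normalisation
          return idf * tf * (k1 + 1) / (tf + norm)

      # Example: a term occurring 3 times in a document of average length.
      print(bm25_term_weight(tf=3, df=100, N=10000, doc_len=200, avg_doc_len=200))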
  7. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.00
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system which investigated a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson Sparck Jones weighting functions used in TREC-1
    Pages
    19 S
  8. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.00
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general-purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference), which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval has a long way to go.
    Source
    Journal of documentation. 53(1997) no.1, S.20-34
  9. Vechtomova, O.; Karamuftuoglum, M.; Robertson, S.E.: On document relevance and lexical cohesion between query terms (2006) 0.00
    Abstract
    Lexical cohesion is a property of text, achieved through lexical-semantic relations between words in text. Most information retrieval systems make use of lexical relations in text only to a limited extent. In this paper we empirically investigate whether the degree of lexical cohesion between the contexts of query terms' occurrences in a document is related to its relevance to the query. Lexical cohesion between distinct query terms in a document is estimated on the basis of the lexical-semantic relations (repetition, synonymy, hyponymy and sibling) that exist between their collocates - words that co-occur with them in the same windows of text. Experiments suggest that significant differences in lexical cohesion exist between relevant and non-relevant document sets. A document ranking method based on lexical cohesion shows some performance improvements.
    Source
    Information processing and management. 42(2006) no.5, S.1230-1247
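    A rough, hypothetical sketch of the idea in this abstract, using fixed-size windows and repetition as the only lexical relation (the paper also uses synonymy, hyponymy and sibling relations): cohesion between two query terms is estimated as the overlap of their collocate sets.

      def collocates(tokens, term, window=5):
          # Words co-occurring with `term` within +/- `window` positions.
          out = set()
          for i, tok in enumerate(tokens):
              if tok == term:
                  out.update(tokens[max(0, i - window):i + window + 1])
          out.discard(term)
          return out

      def cohesion(text, term_a, term_b, window=5):
          # Toy cohesion score: Jaccard overlap of the two collocate sets.
          tokens = text.lower().split()
          a = collocates(tokens, term_a, window)
          b = collocates(tokens, term_b, window)
          return len(a & b) / len(a | b) if a | b else 0.0

      print(cohesion("information retrieval systems rank documents and retrieval "
                     "effectiveness depends on ranking documents well",
                     "retrieval", "documents"))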
  10. Robertson, S.E.: Theories and models in information retrieval (1977) 0.00
    Abstract
    This paper is concerned with recent work in the theory of information retrieval. More particularly, it is concerned with theories which tackle the problem of retrieval performance, in a sense which will be explained. The aim is not an exhaustive survey of such work; rather it is an analysis and synthesis of those contributions which I feel to be important or find interesting
    Source
    Journal of documentation. 33(1977), S.126-148
  11. Robertson, S.E.: On term selection for query expansion (1990) 0.00
    Abstract
    In the framework of a relevance feedback system, term values or term weights may be used to (a) select new terms for inclusion in a query, and/or (b) weight the terms for retrieval purposes once selected. It has sometimes been assumed that the same weighting formula should be used for both purposes. This paper sketches a quantitative argument which suggests that the two purposes require different weighting formulae
    Source
    Journal of documentation. 46(1990) no.4, S.359-364
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
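    The distinction argued for here is often summarised as: weight a selected term by the relevance weight w, but select candidate expansion terms by a separate selection value (sometimes called the offer weight), commonly taken as r x w. A sketch under that assumption, with hypothetical candidate data:

      import math

      def rsj_weight(r, R, n, N):
          # Robertson/Sparck Jones relevance weight with the usual 0.5 smoothing.
          return math.log(((r + 0.5) / (R - r + 0.5)) /
                          ((n - r + 0.5) / (N - n - R + r + 0.5)))

      def selection_value(r, R, n, N):
          # Select expansion terms by r * w rather than by w alone.
          return r * rsj_weight(r, R, n, N)

      # Hypothetical candidates as (term, r, n), with R=10 relevant docs, N=100000.
      candidates = [("retrieval", 8, 500), ("system", 6, 20000), ("okapi", 4, 40)]
      for term, r, n in sorted(candidates,
                               key=lambda c: -selection_value(c[1], 10, c[2], 100000)):
          print(term, round(selection_value(r, 10, n, 100000), 2))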
  12. Robertson, S.E.: Query-document symmetry and dual models (1994) 0.00
    Abstract
    The idea that there is some natural symmetry between queries and documents is explained. If symmetry can be assumed, it leads to a conception of 'dual' models in information retrieval (given a model, we can construct a dual model in which the roles of documents and queries are reversed). But symmetry breaks down in various ways, which may invalidate this construction. If we can construct a dual, it is not obvious that it can be combined with the original.
    Source
    Journal of documentation. 50(1994) no.3, S.233-238
  13. Robertson, S.E.; Sparck Jones, K.: Relevance weighting of search terms (1976) 0.00
    Abstract
    Examines statistical techniques for exploiting relevance information to weight search terms. These techniques are presented as a natural extension of weighting methods using information about the distribution of index terms in documents in general. A series of relevance weighting functions is derived and is justified by theoretical considerations. In particular, it is shown that specific weighted search methods are implied by a general probabilistic theory of retrieval. Different applications of relevance weighting are illustrated by experimental results for test collections
    Source
    Journal of the American Society for Information Science. 27(1976), S.129-146
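    The best known of the weighting functions derived in this paper is the Robertson/Sparck Jones relevance weight, usually written (with the 0.5 point estimates) as

      w(t) = \log \frac{(r + 0.5) / (R - r + 0.5)}{(n - r + 0.5) / (N - n - R + r + 0.5)}

    where N is the collection size, R the number of known relevant documents, n the number of documents containing term t, and r the number of relevant documents containing t.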
  14. Robertson, S.E.; Beaulieu, M.: Research and evaluation in information retrieval (1997) 0.00
    Abstract
    Offered as a discussion document drawing on the experiences of the Okapi team in developing information retrieval systems. Raises some of the issues currently exercising the information retrieval community in the context of experimentation and evaluation
    Source
    Journal of documentation. 53(1997) no.1, S.51-57
  15. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing in information retrieval : an updated review (1997) 0.00
    Abstract
    Reviews the progress of parallel computing in information retrieval. Stresses the importance of the motivation for using parallel computing for text retrieval. Analyzes parallel IR systems using a classification defined by Rasmussen and describes some parallel IR systems. Gives a description of the retrieval models used in parallel information processing and notes areas where research is needed.
    Source
    Journal of documentation. 53(1997) no.3, S.274-315
  16. Robertson, S.E.: The parametric description of retrieval tests : Part II: Overall measures (1969) 0.00
    Abstract
    Two general requirements for overall measures of retrieval effectiveness are proposed: namely, that a measure should be as far as possible independent of generality (this is interpreted to mean that it can be described in terms of recall and fallout), and that it should be able to measure the effectiveness of a performance curve (it should not be restricted to a simple 2x2 table). Several measures that have been proposed are examined with these conditions in mind. It turns out that most of the satisfactory ones are directly or indirectly related to Swets' measure A, the area under the recall-fallout curve. In particular, Brookes' measure S and Rocchio's normalized recall are versions of A.
    Source
    Journal of documentation. 25(1969) no.2, S.93-106
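    Swets' measure A is the area under the recall-fallout curve; as a minimal sketch, it can be computed from a ranked list of binary relevance judgements by stepping along the fallout axis (the data below is hypothetical):

      def recall_fallout_auc(ranked_relevance):
          # Area under the recall-fallout curve for a ranked list of 0/1
          # relevance judgements (assumes at least one relevant and one
          # non-relevant document).
          R = sum(ranked_relevance)          # total relevant documents
          I = len(ranked_relevance) - R      # total non-relevant documents
          recall, auc = 0.0, 0.0
          for rel in ranked_relevance:
              if rel:
                  recall += 1.0 / R          # step up the recall axis
              else:
                  auc += recall / I          # step along the fallout axis
          return auc

      # Hypothetical ranking: 1 = relevant, 0 = non-relevant.
      print(recall_fallout_auc([1, 1, 0, 1, 0, 0, 1, 0, 0, 0]))  # 0.8333...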
  17. Robertson, S.E.: The parametric description of retrieval tests : Part I: The basic parameters (1969) 0.00
    Abstract
    Some parameters and techniques in use for describing the results of tests on IR systems are analysed. Several considerations outside the scope of the usual 2x2 table are relevant to the choice of parameters. In particular, a variable which produces a 'performance curve' of a system corresponds to an extension of the 2x2 table. Also, the statistical relationships between parameters are all-important. It is considered that precision is not such a useful measure of performance (in conjunction with recall) as fallout. A more powerful alternative to Cleverdon's 'inevitable inverse relationship between recall and precision' is proposed and justified, namely that the recall-fallout graph is convex.
    Source
    Journal of documentation. 25(1969) no.1, S.1-27
  18. Robertson, S.E.: On relevance weight estimation and query expansion (1986) 0.00
    Source
    Journal of documentation. 42(1986), S.182-188
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  19. Robertson, S.E.: OKAPI at TREC (1994) 0.00
    Abstract
    Paper presented at the Text Retrieval Conference (TREC), Washington, DC, Nov 1992. Describes the OKAPI experimental text information retrieval system in terms of its design principles: the use of simple, robust and easy-to-use techniques which use best-match searching and avoid Boolean logic.
    Pages
    19 S
  20. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.00
    Abstract
    Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partitions are investigated: TermId and DocId. Design/methodology/approach - We use standard measures used in parallel computing such as speedup and efficiency to examine the computing results and also the space costs of our trial indexing experiments. Findings - The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method. Practical implications - The practical implications are that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration. Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
    Source
    Aslib proceedings. 57(2005) no.5, S.434-459
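    A toy in-memory sketch of the two partitioning schemes compared in the paper (the real systems build on-disk partitioned inverted files; this only shows where postings end up under each scheme):

      from collections import defaultdict

      def build_index(doc_items):
          # Toy inverted index: term -> list of (doc_id, term_frequency).
          index = defaultdict(list)
          for doc_id, text in doc_items:
              counts = defaultdict(int)
              for tok in text.lower().split():
                  counts[tok] += 1
              for term, tf in counts.items():
                  index[term].append((doc_id, tf))
          return index

      def docid_partition(docs, workers):
          # DocId: each worker indexes a disjoint subset of the documents,
          # so any partition may hold postings for any term.
          items = list(enumerate(docs))
          return [build_index(items[w::workers]) for w in range(workers)]

      def termid_partition(docs, workers):
          # TermId: build one global index, then assign whole postings lists
          # to workers by hashing the term.
          full = build_index(enumerate(docs))
          parts = [{} for _ in range(workers)]
          for term, postings in full.items():
              parts[hash(term) % workers][term] = postings
          return parts

      docs = ["parallel methods for inverted files",
              "partitioned inverted files",
              "parallel computing"]
      print(docid_partition(docs, 2))
      print(termid_partition(docs, 2))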