Search (35 results, page 1 of 2)

  Active filter: author_ss:"Robertson, S.E."
  1. Robertson, S.E.: The methodology of information retrieval experiment (1981) 0.05
    0.045530416 = product of:
      0.09106083 = sum of:
        0.036650293 = weight(_text_:information in 3146) [ClassicSimilarity], result of:
          0.036650293 = score(doc=3146,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.43886948 = fieldWeight in 3146, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.125 = fieldNorm(doc=3146)
        0.054410543 = product of:
          0.10882109 = sum of:
            0.10882109 = weight(_text_:retrieval in 3146) [ClassicSimilarity], result of:
              0.10882109 = score(doc=3146,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.75622874 = fieldWeight in 3146, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.125 = fieldNorm(doc=3146)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
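The indented breakdowns under each result are Lucene "explain" trees from the ClassicSimilarity (TF-IDF) scorer. As a hedged sketch, the first result's score of 0.045530416 can be reproduced from the leaves of its tree, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm); none of this code comes from the catalog itself:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.047571484  # taken from the explain tree above

# weight(_text_:information in 3146): freq=4.0, docFreq=20772, fieldNorm=0.125
w_information = term_score(4.0, 20772, 44218, 0.125, QUERY_NORM)  # ~0.036650293

# weight(_text_:retrieval in 3146): freq=4.0, docFreq=5836, then coord(1/2)
# because only one of the two clauses in that subquery matched
w_retrieval = term_score(4.0, 5836, 44218, 0.125, QUERY_NORM) * 0.5  # ~0.054410543

# top level: sum of both parts, times coord(2/4)
doc_score = (w_information + w_retrieval) * 0.5  # ~0.045530416
print(doc_score)
```

The same arithmetic applies to every explain tree on this page; only freq, docFreq, and fieldNorm vary per document and field.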
  2. Robertson, S.E.; Beaulieu, M.: Research and evaluation in information retrieval (1997) 0.03
    0.032194868 = product of:
      0.064389735 = sum of:
        0.025915671 = weight(_text_:information in 7445) [ClassicSimilarity], result of:
          0.025915671 = score(doc=7445,freq=8.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.3103276 = fieldWeight in 7445, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7445)
        0.038474064 = product of:
          0.07694813 = sum of:
            0.07694813 = weight(_text_:retrieval in 7445) [ClassicSimilarity], result of:
              0.07694813 = score(doc=7445,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.5347345 = fieldWeight in 7445, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7445)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Offered as a discussion document drawing on the experiences of the Okapi team in developing information retrieval systems. Raises some of the issues currently exercising the information retrieval community in the context of experimentation and evaluation
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  3. Robertson, S.E.: Some recent theories and models in information retrieval (1980) 0.03
    0.031260498 = product of:
      0.062520996 = sum of:
        0.033665445 = weight(_text_:information in 1326) [ClassicSimilarity], result of:
          0.033665445 = score(doc=1326,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.40312737 = fieldWeight in 1326, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1326)
        0.02885555 = product of:
          0.0577111 = sum of:
            0.0577111 = weight(_text_:retrieval in 1326) [ClassicSimilarity], result of:
              0.0577111 = score(doc=1326,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40105087 = fieldWeight in 1326, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1326)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Theory and application of information research. Proc. of the 2nd Int. Research Forum on Information Science, 3.-6.8.1977, Copenhagen. Ed.: O. Harbo and L. Kajberg
  4. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing in information retrieval : an updated review (1997) 0.03
    0.030458847 = product of:
      0.060917694 = sum of:
        0.02244363 = weight(_text_:information in 7450) [ClassicSimilarity], result of:
          0.02244363 = score(doc=7450,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2687516 = fieldWeight in 7450, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=7450)
        0.038474064 = product of:
          0.07694813 = sum of:
            0.07694813 = weight(_text_:retrieval in 7450) [ClassicSimilarity], result of:
              0.07694813 = score(doc=7450,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.5347345 = fieldWeight in 7450, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7450)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Reviews the progress of parallel computing in information retrieval. Stresses the importance of the motivation in using parallel computing for text retrieval. Analyzes parallel IR systems using a classification defined by Rasmussen and describes some parallel IR systems. Gives a description of the retrieval models used in parallel information processing and notes areas where research is needed
  5. Sparck Jones, K.; Walker, S.; Robertson, S.E.: A probabilistic model of information retrieval : development and comparative experiments - part 1 (2000) 0.03
    0.028171634 = product of:
      0.05634327 = sum of:
        0.02748772 = weight(_text_:information in 4181) [ClassicSimilarity], result of:
          0.02748772 = score(doc=4181,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.3291521 = fieldWeight in 4181, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4181)
        0.02885555 = product of:
          0.0577111 = sum of:
            0.0577111 = weight(_text_:retrieval in 4181) [ClassicSimilarity], result of:
              0.0577111 = score(doc=4181,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40105087 = fieldWeight in 4181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4181)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information processing and management. 36(2000) no.6, S.779-808
  6. Sparck Jones, K.; Walker, S.; Robertson, S.E.: A probabilistic model of information retrieval : development and comparative experiments - part 2 (2000) 0.03
    0.028171634 = product of:
      0.05634327 = sum of:
        0.02748772 = weight(_text_:information in 4286) [ClassicSimilarity], result of:
          0.02748772 = score(doc=4286,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.3291521 = fieldWeight in 4286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4286)
        0.02885555 = product of:
          0.0577111 = sum of:
            0.0577111 = weight(_text_:retrieval in 4286) [ClassicSimilarity], result of:
              0.0577111 = score(doc=4286,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40105087 = fieldWeight in 4286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4286)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Information processing and management. 36(2000) no.6, S.809-840
  7. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.03
    0.026493195 = product of:
      0.10597278 = sum of:
        0.10597278 = sum of:
          0.054410543 = weight(_text_:retrieval in 5108) [ClassicSimilarity], result of:
            0.054410543 = score(doc=5108,freq=4.0), product of:
              0.1438997 = queryWeight, product of:
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.047571484 = queryNorm
              0.37811437 = fieldWeight in 5108, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.024915 = idf(docFreq=5836, maxDocs=44218)
                0.0625 = fieldNorm(doc=5108)
          0.051562235 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
            0.051562235 = score(doc=5108,freq=2.0), product of:
              0.16658723 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047571484 = queryNorm
              0.30952093 = fieldWeight in 5108, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5108)
      0.25 = coord(1/4)
    
    Abstract
    In this paper methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
  8. Robertson, S.E.: Theories and models in information retrieval (1977) 0.03
    0.025822332 = product of:
      0.051644664 = sum of:
        0.018325146 = weight(_text_:information in 1844) [ClassicSimilarity], result of:
          0.018325146 = score(doc=1844,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.21943474 = fieldWeight in 1844, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1844)
        0.033319518 = product of:
          0.066639036 = sum of:
            0.066639036 = weight(_text_:retrieval in 1844) [ClassicSimilarity], result of:
              0.066639036 = score(doc=1844,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.46309367 = fieldWeight in 1844, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1844)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper is concerned with recent work in the theory of information retrieval. More particularly, it is concerned with theories which tackle the problem of retrieval performance, in a sense which will be explained. The aim is not an exhaustive survey of such work; rather it is an analysis and synthesis of those contributions which I feel to be important or find interesting
  9. Robertson, S.E.: OKAPI at TREC (1994) 0.03
    0.025101941 = product of:
      0.050203882 = sum of:
        0.016197294 = weight(_text_:information in 7952) [ClassicSimilarity], result of:
          0.016197294 = score(doc=7952,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 7952, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=7952)
        0.03400659 = product of:
          0.06801318 = sum of:
            0.06801318 = weight(_text_:retrieval in 7952) [ClassicSimilarity], result of:
              0.06801318 = score(doc=7952,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.47264296 = fieldWeight in 7952, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7952)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Paper presented at the Text Retrieval Conference (TREC), Washington, DC, Nov 1992. Describes the OKAPI experimental text information retrieval system in terms of its design principles: the use of simple, robust and easy to use techniques which use best match searching and avoid Boolean logic
  10. Robertson, S.E.; Walker, S.: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval (1979) 0.03
    0.025101941 = product of:
      0.050203882 = sum of:
        0.016197294 = weight(_text_:information in 1940) [ClassicSimilarity], result of:
          0.016197294 = score(doc=1940,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 1940, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=1940)
        0.03400659 = product of:
          0.06801318 = sum of:
            0.06801318 = weight(_text_:retrieval in 1940) [ClassicSimilarity], result of:
              0.06801318 = score(doc=1940,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.47264296 = fieldWeight in 1940, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1940)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.345-453.
  11. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.02
    0.024849655 = product of:
      0.04969931 = sum of:
        0.016034503 = weight(_text_:information in 2216) [ClassicSimilarity], result of:
          0.016034503 = score(doc=2216,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.1920054 = fieldWeight in 2216, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2216)
        0.033664808 = product of:
          0.067329615 = sum of:
            0.067329615 = weight(_text_:retrieval in 2216) [ClassicSimilarity], result of:
              0.067329615 = score(doc=2216,freq=8.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.46789268 = fieldWeight in 2216, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2216)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference) which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval has a long way to go
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  12. Robertson, S.E.: Overview of the Okapi projects (1997) 0.02
    0.024824452 = product of:
      0.049648903 = sum of:
        0.02244363 = weight(_text_:information in 4703) [ClassicSimilarity], result of:
          0.02244363 = score(doc=4703,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2687516 = fieldWeight in 4703, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4703)
        0.027205272 = product of:
          0.054410543 = sum of:
            0.054410543 = weight(_text_:retrieval in 4703) [ClassicSimilarity], result of:
              0.054410543 = score(doc=4703,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.37811437 = fieldWeight in 4703, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4703)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Gives a brief description of the Okapi projects and of the work of the Centre for Interactive Systems Research in the Department of Information Science at City University, London, UK, where these projects have been developed. Describes firstly one version of an information retrieval system which contains some of the central features of the Okapi projects, and follows this with an indication of the variety of systems now implemented or implementable within the present setup
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  13. Robertson, S.E.: The probability ranking principle in IR (1977) 0.02
    0.02414615 = product of:
      0.0482923 = sum of:
        0.019436752 = weight(_text_:information in 1935) [ClassicSimilarity], result of:
          0.019436752 = score(doc=1935,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.23274569 = fieldWeight in 1935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1935)
        0.02885555 = product of:
          0.0577111 = sum of:
            0.0577111 = weight(_text_:retrieval in 1935) [ClassicSimilarity], result of:
              0.0577111 = score(doc=1935,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40105087 = fieldWeight in 1935, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1935)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.281-286.
  14. Robertson, S.E.; Sparck Jones, K.: Relevance weighting of search terms (1976) 0.02
    0.020840332 = product of:
      0.041680664 = sum of:
        0.02244363 = weight(_text_:information in 71) [ClassicSimilarity], result of:
          0.02244363 = score(doc=71,freq=6.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.2687516 = fieldWeight in 71, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=71)
        0.019237032 = product of:
          0.038474064 = sum of:
            0.038474064 = weight(_text_:retrieval in 71) [ClassicSimilarity], result of:
              0.038474064 = score(doc=71,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.26736724 = fieldWeight in 71, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=71)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Examines statistical techniques for exploiting relevance information to weight search terms. These techniques are presented as a natural extension of weighting methods using information about the distribution of index terms in documents in general. A series of relevance weighting functions is derived and is justified by theoretical considerations. In particular, it is shown that specific weighted search methods are implied by a general probabilistic theory of retrieval. Different applications of relevance weighting are illustrated by experimental results for test collections
    Source
    Journal of the American Society for Information Science. 27(1976), S.129-146
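The relevance weighting functions this paper derives include what is now usually called the Robertson/Sparck Jones weight. As a hedged sketch, the commonly cited form with 0.5 smoothing looks like the following (our own toy code, not the authors'; variable names are assumptions):

```python
import math

def rsj_weight(N: int, n: int, R: int, r: int) -> float:
    # Robertson/Sparck Jones relevance weight with 0.5 smoothing.
    # N: collection size, n: docs containing the term,
    # R: known relevant docs, r: relevant docs containing the term.
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# With no relevance information (R = r = 0) the weight reduces to a purely
# collection-based, idf-like quantity: log((N - n + 0.5) / (n + 0.5)).
print(rsj_weight(44218, 5836, 0, 0))
```

As the abstract notes, this makes relevance weighting a natural extension of distribution-based (idf-style) term weighting: the more of the known relevant documents contain the term, the higher its weight.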
  15. Robertson, S.E.: OKAPI at TREC-3 (1995) 0.02
    0.020246342 = product of:
      0.040492684 = sum of:
        0.011338106 = weight(_text_:information in 5694) [ClassicSimilarity], result of:
          0.011338106 = score(doc=5694,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.13576832 = fieldWeight in 5694, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5694)
        0.029154578 = product of:
          0.058309156 = sum of:
            0.058309156 = weight(_text_:retrieval in 5694) [ClassicSimilarity], result of:
              0.058309156 = score(doc=5694,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40520695 = fieldWeight in 5694, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5694)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Reports text information retrieval experiments performed as part of the 3rd round of Text Retrieval Conferences (TREC) using the Okapi online catalogue system at City University, UK. The emphasis in TREC-3 was: further refinement of term weighting functions; an investigation of run time passage determination and searching; expansion of ad hoc queries by terms extracted from the top documents retrieved by a trial search; new methods for choosing query expansion terms after relevance feedback, now split into methods of ranking terms prior to selection and subsequent selection procedures; and the development of a user interface procedure within the new TREC interactive search framework
    Theme
    Semantic context in indexing and retrieval
  16. Huang, X.; Robertson, S.E.: Application of probabilistic methods to Chinese text retrieval (1997) 0.02
    0.020246342 = product of:
      0.040492684 = sum of:
        0.011338106 = weight(_text_:information in 4706) [ClassicSimilarity], result of:
          0.011338106 = score(doc=4706,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.13576832 = fieldWeight in 4706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4706)
        0.029154578 = product of:
          0.058309156 = sum of:
            0.058309156 = weight(_text_:retrieval in 4706) [ClassicSimilarity], result of:
              0.058309156 = score(doc=4706,freq=6.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.40520695 = fieldWeight in 4706, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4706)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Discusses the use of text retrieval methods based on the probabilistic model with Chinese language material. Since Chinese text has no natural word boundaries, either a dictionary-based word segmentation method must be applied to the text, or indexing and searching must be done in terms of single Chinese characters. In either case, it becomes important to have a good way of dealing with phrases or contiguous strings of characters; the probabilistic model does not at present have such a facility. Proposes some ad hoc modifications of the probabilistic weighting function and matching method for this purpose
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  17. Robertson, S.E.: OKAPI at TREC-1 (1994) 0.02
    0.020121792 = product of:
      0.040243585 = sum of:
        0.016197294 = weight(_text_:information in 7953) [ClassicSimilarity], result of:
          0.016197294 = score(doc=7953,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.19395474 = fieldWeight in 7953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=7953)
        0.02404629 = product of:
          0.04809258 = sum of:
            0.04809258 = weight(_text_:retrieval in 7953) [ClassicSimilarity], result of:
              0.04809258 = score(doc=7953,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.33420905 = fieldWeight in 7953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7953)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes the work carried out on the TREC-2 project following the results of the TREC-1 project. Experiments were conducted on the OKAPI experimental text information retrieval system which investigated a number of alternative probabilistic term weighting functions in place of the 'standard' Robertson Sparck Jones weighting functions used in TREC-1
  18. Robertson, S.E.; Sparck Jones, K.: Simple, proven approaches to text retrieval (1997) 0.02
    0.019168893 = product of:
      0.038337786 = sum of:
        0.011453216 = weight(_text_:information in 4532) [ClassicSimilarity], result of:
          0.011453216 = score(doc=4532,freq=4.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.13714671 = fieldWeight in 4532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4532)
        0.026884569 = product of:
          0.053769138 = sum of:
            0.053769138 = weight(_text_:retrieval in 4532) [ClassicSimilarity], result of:
              0.053769138 = score(doc=4532,freq=10.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.37365708 = fieldWeight in 4532, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4532)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, but are easy for end users. The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only titles and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC Tests (see Harman 1993 - 1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval. These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.
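The setup this abstract describes (simple terms, statistical term weighting, best-match scoring over an inverted file, ranked output) can be sketched in a few lines. This is a minimal toy illustration under our own assumptions (invented documents, a simple idf-style weight), not the authors' implementation:

```python
import math
from collections import defaultdict

# Toy collection (invented for illustration).
docs = {
    1: "probabilistic models in information retrieval",
    2: "parallel computing for text retrieval",
    3: "information science and knowledge organization",
}

# Inverted file: term -> {doc_id: within-document term frequency}.
index = defaultdict(dict)
for doc_id, text in docs.items():
    for term in text.split():
        index[term][doc_id] = index[term].get(doc_id, 0) + 1

N = len(docs)

def search(query: str):
    # Best-match scoring: sum tf * idf-style weights per matching term,
    # then return documents ranked by descending score.
    scores = defaultdict(float)
    for term in query.split():
        postings = index.get(term, {})
        if not postings:
            continue
        weight = math.log(1 + N / len(postings))  # simple idf-style weight
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("information retrieval"))
```

Document 1 matches both query terms and ranks first; no Boolean operators are involved, exactly the "easy for end users" best-match behaviour the note argues for.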
  19. Robertson, S.E.: Query-document symmetry and dual models (1994) 0.02
    0.016097434 = product of:
      0.032194868 = sum of:
        0.012957836 = weight(_text_:information in 8159) [ClassicSimilarity], result of:
          0.012957836 = score(doc=8159,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.1551638 = fieldWeight in 8159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=8159)
        0.019237032 = product of:
          0.038474064 = sum of:
            0.038474064 = weight(_text_:retrieval in 8159) [ClassicSimilarity], result of:
              0.038474064 = score(doc=8159,freq=2.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.26736724 = fieldWeight in 8159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8159)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The idea that there is some natural symmetry between queries and documents is explained. If symmetry can be assumed, then it leads to a conception of 'dual' models in information retrieval (given a model, we can construct a dual model in which the roles of documents and queries are reversed). But symmetry breaks down in various ways, which may invalidate this construction. Even if we can construct a dual, it is not obvious that it can be combined with the original.
  20. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.02
    0.015061164 = product of:
      0.030122329 = sum of:
        0.009718376 = weight(_text_:information in 651) [ClassicSimilarity], result of:
          0.009718376 = score(doc=651,freq=2.0), product of:
            0.08351069 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047571484 = queryNorm
            0.116372846 = fieldWeight in 651, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
        0.020403953 = product of:
          0.040807907 = sum of:
            0.040807907 = weight(_text_:retrieval in 651) [ClassicSimilarity], result of:
              0.040807907 = score(doc=651,freq=4.0), product of:
                0.1438997 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.047571484 = queryNorm
                0.2835858 = fieldWeight in 651, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=651)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partitions are investigated: TermId and DocId. Design/methodology/approach - We use standard measures used in parallel computing such as speedup and efficiency to examine the computing results and also the space costs of our trial indexing experiments. Findings - The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method. Practical implications - The practical implications are that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration. Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
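    The DocId partitioning the paper finds more efficient can be sketched as follows: each partition indexes a disjoint slice of the collection, so every partition builds a complete local term list over its own documents. This is an illustrative assumption-laden toy (round-robin assignment, whitespace tokenisation), not the authors' parallel implementation.

    ```python
    from collections import defaultdict

    def partition_by_docid(docs, n_partitions):
        """DocId partitioning: assign each document wholly to one partition."""
        partitions = [dict() for _ in range(n_partitions)]
        for doc_id, text in docs.items():
            # Round-robin assignment keeps partition sizes balanced.
            partitions[doc_id % n_partitions][doc_id] = text
        return partitions

    def build_local_index(docs):
        """Each partition indexes only its own slice of the collection."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    docs = {i: f"document {i} about text retrieval" for i in range(6)}
    shards = [build_local_index(p) for p in partition_by_docid(docs, 2)]
    ```

    Under TermId partitioning, by contrast, each partition would hold the full postings list for a subset of the terms, so building it requires redistributing postings between processes rather than simply splitting the input documents.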