Search (31 results, page 1 of 2)

  • author_ss:"Robertson, S.E."
  1. Vechtomova, O.; Karamuftuoglu, M.; Robertson, S.E.: On document relevance and lexical cohesion between query terms (2006) 0.04
    0.041238464 = product of:
      0.10309616 = sum of:
        0.028076671 = weight(_text_:retrieval in 987) [ClassicSimilarity], result of:
          0.028076671 = score(doc=987,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.20052543 = fieldWeight in 987, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
        0.075019486 = weight(_text_:semantic in 987) [ClassicSimilarity], result of:
          0.075019486 = score(doc=987,freq=4.0), product of:
            0.19245663 = queryWeight, product of:
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.04628742 = queryNorm
            0.38979942 = fieldWeight in 987, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1578603 = idf(docFreq=1879, maxDocs=44218)
              0.046875 = fieldNorm(doc=987)
      0.4 = coord(2/5)
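    The score explanation above is Lucene's ClassicSimilarity breakdown: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and the per-term contributions are summed and scaled by the coordination factor. A minimal Python sketch with the constants copied from the explanation reproduces the arithmetic (illustrative only, not the search engine's code):

```python
from math import sqrt, isclose

QUERY_NORM = 0.04628742  # queryNorm from the explanation above

def term_weight(freq: float, idf: float, field_norm: float) -> float:
    """ClassicSimilarity term weight: queryWeight * fieldWeight."""
    query_weight = idf * QUERY_NORM                # idf(t) * queryNorm
    field_weight = sqrt(freq) * idf * field_norm   # tf(t,d) * idf(t) * fieldNorm(d)
    return query_weight * field_weight

# Document 987 matched "retrieval" (freq=2) and "semantic" (freq=4); 2 of 5 query clauses.
retrieval = term_weight(freq=2.0, idf=3.024915,  field_norm=0.046875)
semantic  = term_weight(freq=4.0, idf=4.1578603, field_norm=0.046875)
score = (retrieval + semantic) * (2 / 5)           # coord(2/5)

assert isclose(score, 0.041238464, rel_tol=1e-5)
print(round(score, 9))                             # ~0.041238464
```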
    
    Abstract
    Lexical cohesion is a property of text, achieved through lexical-semantic relations between words in text. Most information retrieval systems make use of lexical relations in text only to a limited extent. In this paper we empirically investigate whether the degree of lexical cohesion between the contexts of query terms' occurrences in a document is related to its relevance to the query. Lexical cohesion between distinct query terms in a document is estimated on the basis of the lexical-semantic relations (repetition, synonymy, hyponymy and sibling) that exist between their collocates - words that co-occur with them in the same windows of text. Experiments suggest that significant differences in lexical cohesion exist between relevant and non-relevant document sets. A document ranking method based on lexical cohesion shows some performance improvements.
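    As an illustration only (not the authors' implementation, and using a plain repetition-based overlap of collocates in place of the paper's richer relation set of repetition, synonymy, hyponymy and siblings), the kind of computation the abstract describes might look like this sketch; the window size and the Jaccard measure are assumptions:

```python
from typing import List, Set

def collocates(tokens: List[str], term: str, window: int = 5) -> Set[str]:
    """Words co-occurring with `term` within +/- `window` tokens."""
    found: Set[str] = set()
    for i, tok in enumerate(tokens):
        if tok == term:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            found.update(tokens[lo:i] + tokens[i + 1:hi])
    return found

def cohesion(tokens: List[str], term_a: str, term_b: str) -> float:
    """Repetition-only cohesion between two query terms' collocate sets."""
    a, b = collocates(tokens, term_a), collocates(tokens, term_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)   # Jaccard overlap as a stand-in measure

doc = ("query terms share collocates when a document "
       "discusses both query topics together").split()
print(round(cohesion(doc, "query", "document"), 2))
```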
  2. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.03
    0.03121084 = product of:
      0.0780271 = sum of:
        0.052941877 = weight(_text_:retrieval in 5108) [ClassicSimilarity], result of:
          0.052941877 = score(doc=5108,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.37811437 = fieldWeight in 5108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.025085226 = product of:
          0.05017045 = sum of:
            0.05017045 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.05017045 = score(doc=5108,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this paper, methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
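    A minimal sketch of the general idea (not the Okapi passage algorithm itself): distribute candidate passages across worker processes so that examining more passages costs roughly passages/workers in wall-clock time. The placeholder scoring function and worker count are assumptions for illustration:

```python
from multiprocessing import Pool
from typing import List, Tuple

QUERY = {"parallel", "passage", "retrieval"}

def score_passage(passage: str) -> Tuple[float, str]:
    """Placeholder score: fraction of query terms present in the passage."""
    tokens = set(passage.lower().split())
    return len(QUERY & tokens) / len(QUERY), passage

def rank_passages(passages: List[str], workers: int = 4) -> List[Tuple[float, str]]:
    """Score passages in parallel and return them best-first."""
    with Pool(processes=workers) as pool:
        scored = pool.map(score_passage, passages)
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    passages = ["parallel passage retrieval with Okapi",
                "an unrelated passage about something else"]
    for s, text in rank_passages(passages, workers=2):
        print(f"{s:.2f}  {text}")
```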
    Date
    20. 1.2007 18:30:22
  3. Robertson, S.E.: ¬The methodology of information retrieval experiment (1981) 0.02
    0.021176752 = product of:
      0.105883755 = sum of:
        0.105883755 = weight(_text_:retrieval in 3146) [ClassicSimilarity], result of:
          0.105883755 = score(doc=3146,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.75622874 = fieldWeight in 3146, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.125 = fieldNorm(doc=3146)
      0.2 = coord(1/5)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  4. Robertson, S.E.: Indexing theory and retrieval effectiveness (1979) 0.02
    0.01871778 = product of:
      0.0935889 = sum of:
        0.0935889 = weight(_text_:retrieval in 5175) [ClassicSimilarity], result of:
          0.0935889 = score(doc=5175,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.6684181 = fieldWeight in 5175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.15625 = fieldNorm(doc=5175)
      0.2 = coord(1/5)
    
  5. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the update of partitioned inverted files (2007) 0.02
    0.018682227 = product of:
      0.046705566 = sum of:
        0.033088673 = weight(_text_:retrieval in 819) [ClassicSimilarity], result of:
          0.033088673 = score(doc=819,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.23632148 = fieldWeight in 819, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=819)
        0.013616893 = product of:
          0.027233787 = sum of:
            0.027233787 = weight(_text_:web in 819) [ClassicSimilarity], result of:
              0.027233787 = score(doc=819,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.18028519 = fieldWeight in 819, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=819)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - An issue that tends to be ignored in information retrieval is the updating of inverted files. This is largely because inverted files were devised to provide fast query service, and much work has been done with the emphasis strongly on queries. This paper aims to study the effect of using parallel methods for the update of inverted files in order to reduce costs, by looking at two types of partitioning for inverted files: document identifier and term identifier.
    Design/methodology/approach - Raw update service and update with query service are studied with these partitioning schemes using an incremental update strategy. The paper uses standard measures from parallel computing, such as speedup, to examine the computing results as well as the costs of reorganising indexes while servicing transactions.
    Findings - Empirical results show that for both transaction processing and index reorganisation the document identifier method is superior. However, there is evidence that the term identifier partitioning method could be useful in a concurrent transaction processing context.
    Practical implications - There is an increasing need to service updates, which is now becoming a requirement of inverted files (for dynamic collections such as the web), demonstrating a shift from the past in the requirements of inverted file maintenance.
    Originality/value - The paper is of value to database administrators who manage large-scale and dynamic text collections and need to use parallel computing to implement their text retrieval services.
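    To make the two partitioning schemes concrete, a small illustrative sketch (my own example, not the paper's implementation): under document-identifier partitioning a node holds complete postings for its share of the documents, so an incoming document update touches one node; under term-identifier partitioning a node holds complete postings lists for its share of the vocabulary, so the same update may touch every node:

```python
from collections import defaultdict
from typing import Dict, List

Index = Dict[str, List[int]]   # term -> postings (document identifiers)

docs = {1: "parallel update of inverted files",
        2: "query service for inverted files",
        3: "partitioned text collections"}

def doc_partition(n_nodes: int) -> List[Index]:
    """Document-identifier partitioning: node i indexes only its documents."""
    nodes: List[Index] = [defaultdict(list) for _ in range(n_nodes)]
    for doc_id, text in docs.items():
        node = nodes[doc_id % n_nodes]         # one node owns the whole document
        for term in text.split():
            node[term].append(doc_id)
    return nodes

def term_partition(n_nodes: int) -> List[Index]:
    """Term-identifier partitioning: node i holds full postings for its terms."""
    nodes: List[Index] = [defaultdict(list) for _ in range(n_nodes)]
    for doc_id, text in docs.items():
        for term in text.split():              # terms of one document scatter across nodes
            nodes[hash(term) % n_nodes][term].append(doc_id)
    return nodes

print(doc_partition(2))
print(term_partition(2))
```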
  6. Robertson, S.E.; Beaulieu, M.: Research and evaluation in information retrieval (1997) 0.01
    0.014974224 = product of:
      0.07487112 = sum of:
        0.07487112 = weight(_text_:retrieval in 7445) [ClassicSimilarity], result of:
          0.07487112 = score(doc=7445,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.5347345 = fieldWeight in 7445, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=7445)
      0.2 = coord(1/5)
    
    Abstract
    Offered as a discussion document drawing on the experiences of the Okapi team in developing information retrieval systems. Raises some of the issues currently exercising the information retrieval community in the context of experimentation and evaluation
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  7. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing in information retrieval : an updated review (1997) 0.01
    0.014974224 = product of:
      0.07487112 = sum of:
        0.07487112 = weight(_text_:retrieval in 7450) [ClassicSimilarity], result of:
          0.07487112 = score(doc=7450,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.5347345 = fieldWeight in 7450, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=7450)
      0.2 = coord(1/5)
    
    Abstract
    Reviews the progress of parallel computing in information retrieval. Stresses the importance of the motivation for using parallel computing in text retrieval. Analyzes parallel IR systems using a classification defined by Rasmussen and describes some parallel IR systems. Gives a description of the retrieval models used in parallel information processing and notes areas where research is needed.
  8. Robertson, S.E.: OKAPI at TREC (1994) 0.01
    0.013235469 = product of:
      0.066177346 = sum of:
        0.066177346 = weight(_text_:retrieval in 7952) [ClassicSimilarity], result of:
          0.066177346 = score(doc=7952,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.47264296 = fieldWeight in 7952, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=7952)
      0.2 = coord(1/5)
    
    Abstract
    Paper presented at the Text Retrieval Conference (TREC), Washington, DC, Nov 1992. Describes the OKAPI experimental text information retrieval system in terms of its design principles: the use of simple, robust and easy to use techniques which use best match searching and avoid Boolean logic
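    Best match searching, as opposed to Boolean retrieval, ranks every document by an accumulated term-weight score instead of requiring an exact logical match. A toy sketch of the idea with a generic tf-idf weight (not Okapi's BM-series weighting functions):

```python
from math import log
from typing import Dict, List, Tuple

def best_match(query: List[str], docs: Dict[int, List[str]]) -> List[Tuple[int, float]]:
    """Rank documents by summed term weights; no Boolean constraints."""
    n = len(docs)
    df = {t: sum(t in toks for toks in docs.values()) for t in query}
    scores = {}
    for doc_id, toks in docs.items():
        score = 0.0
        for t in query:
            tf = toks.count(t)
            if tf and df[t]:
                score += tf * log((n + 1) / df[t])   # simple tf-idf weight
        scores[doc_id] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

docs = {1: "robust best match searching".split(),
        2: "boolean logic searching".split()}
print(best_match(["best", "match", "searching"], docs))
```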
  9. Robertson, S.E.; Walker, S.: Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval (1979) 0.01
    0.013235469 = product of:
      0.066177346 = sum of:
        0.066177346 = weight(_text_:retrieval in 1940) [ClassicSimilarity], result of:
          0.066177346 = score(doc=1940,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.47264296 = fieldWeight in 1940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=1940)
      0.2 = coord(1/5)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp.345-453.
  10. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.01
    0.013102447 = product of:
      0.06551223 = sum of:
        0.06551223 = weight(_text_:retrieval in 2216) [ClassicSimilarity], result of:
          0.06551223 = score(doc=2216,freq=8.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.46789268 = fieldWeight in 2216, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2216)
      0.2 = coord(1/5)
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference) which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval has a long way to go
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  11. Robertson, S.E.: Theories and models in information retrieval (1977) 0.01
    0.012968059 = product of:
      0.064840294 = sum of:
        0.064840294 = weight(_text_:retrieval in 1844) [ClassicSimilarity], result of:
          0.064840294 = score(doc=1844,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.46309367 = fieldWeight in 1844, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=1844)
      0.2 = coord(1/5)
    
    Abstract
    This paper is concerned with recent work in the theory of information retrieval. More particularly, it is concerned with theories which tackle the problem of retrieval performance, in a sense which will be explained. The aim is not an exhaustive survey of such work; rather it is an analysis and synthesis of those contributions which I feel to be important or find interesting
  12. Robertson, S.E.: OKAPI at TREC-3 (1995) 0.01
    0.011347052 = product of:
      0.05673526 = sum of:
        0.05673526 = weight(_text_:retrieval in 5694) [ClassicSimilarity], result of:
          0.05673526 = score(doc=5694,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40520695 = fieldWeight in 5694, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5694)
      0.2 = coord(1/5)
    
    Abstract
    Reports text information retrieval experiments performed as part of the 3rd round of Text Retrieval Conferences (TREC) using the Okapi online catalogue system at City University, UK. The emphasis in TREC-3 was: further refinement of term weighting functions; an investigation of run-time passage determination and searching; expansion of ad hoc queries by terms extracted from the top documents retrieved by a trial search; new methods for choosing query expansion terms after relevance feedback, now split into methods of ranking terms prior to selection and subsequent selection procedures; and the development of a user interface procedure within the new TREC interactive search framework.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  13. Huang, X.; Robertson, S.E.: Application of probabilistic methods to Chinese text retrieval (1997) 0.01
    0.011347052 = product of:
      0.05673526 = sum of:
        0.05673526 = weight(_text_:retrieval in 4706) [ClassicSimilarity], result of:
          0.05673526 = score(doc=4706,freq=6.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40520695 = fieldWeight in 4706, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4706)
      0.2 = coord(1/5)
    
    Abstract
    Discusses the use of text retrieval methods based on the probabilistic model with Chinese language material. Since Chinese text has no natural word boundaries, either a dictionary based word segmentation method must be applied to the text, or indexing and searching must be done in terms of single Chinese characters. In either case, it becomes important to have a good way of dealing with phrases or contiguous strings of characters; the probabilistic model does not at present have such a facility. Proposes some ad hoc modifications of the probabilistic weighting function and matching method for this purpose
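    A hedged illustration of the character-based alternative mentioned above: without word boundaries, text can be indexed as single characters or as overlapping character bigrams, which approximates phrases without a segmentation dictionary. This is a generic sketch, not the modified probabilistic weighting proposed in the paper:

```python
from typing import List

def char_unigrams(text: str) -> List[str]:
    """Index every non-space character as a separate token."""
    return [ch for ch in text if not ch.isspace()]

def char_bigrams(text: str) -> List[str]:
    """Overlapping character bigrams as a cheap stand-in for phrases."""
    chars = char_unigrams(text)
    return [chars[i] + chars[i + 1] for i in range(len(chars) - 1)]

text = "信息检索"              # "information retrieval"
print(char_unigrams(text))     # ['信', '息', '检', '索']
print(char_bigrams(text))      # ['信息', '息检', '检索']
```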
    Footnote
    Contribution to a thematic issue on Okapi and information retrieval research
  14. Robertson, S.E.: Some recent theories and models in information retrieval (1980) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 1326) [ClassicSimilarity], result of:
          0.056153342 = score(doc=1326,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 1326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=1326)
      0.2 = coord(1/5)
    
  15. Sparck Jones, K.; Walker, S.; Robertson, S.E.: ¬A probabilistic model of information retrieval : development and comparative experiments - part 1 (2000) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 4181) [ClassicSimilarity], result of:
          0.056153342 = score(doc=4181,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 4181, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4181)
      0.2 = coord(1/5)
    
  16. Sparck Jones, K.; Walker, S.; Robertson, S.E.: ¬A probabilistic model of information retrieval : development and comparative experiments - part 2 (2000) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 4286) [ClassicSimilarity], result of:
          0.056153342 = score(doc=4286,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 4286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4286)
      0.2 = coord(1/5)
    
  17. Robertson, S.E.; Walker, S.; Beaulieu, M.M.; Gatford, M.; Payne, A.: Okapi at TREC-4 (1996) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 7546) [ClassicSimilarity], result of:
          0.056153342 = score(doc=7546,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 7546, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=7546)
      0.2 = coord(1/5)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  18. Robertson, S.E.: ¬The probability ranking principle in IR (1977) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 1935) [ClassicSimilarity], result of:
          0.056153342 = score(doc=1935,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 1935, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=1935)
      0.2 = coord(1/5)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp.281-286.
  19. Beaulieu, M.M.; Gatford, M.; Huang, X.; Robertson, S.E.; Walker, S.; Williams, P.: Okapi at TREC-5 (1997) 0.01
    0.011230669 = product of:
      0.056153342 = sum of:
        0.056153342 = weight(_text_:retrieval in 3097) [ClassicSimilarity], result of:
          0.056153342 = score(doc=3097,freq=2.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.40105087 = fieldWeight in 3097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=3097)
      0.2 = coord(1/5)
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
  20. Robertson, S.E.: On term selection for query expansion (1990) 0.01
    0.010588376 = product of:
      0.052941877 = sum of:
        0.052941877 = weight(_text_:retrieval in 2650) [ClassicSimilarity], result of:
          0.052941877 = score(doc=2650,freq=4.0), product of:
            0.14001551 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04628742 = queryNorm
            0.37811437 = fieldWeight in 2650, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2650)
      0.2 = coord(1/5)
    
    Abstract
    In the framework of a relevance feedback system, term values or term weights may be used to (a) select new terms for inclusion in a query, and/or (b) weight the terms for retrieval purposes once selected. It has sometimes been assumed that the same weighting formula should be used for both purposes. This paper sketches a quantitative argument which suggests that the two purposes require different weighting formulae
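    One commonly cited way of making the distinction concrete (a sketch under the probabilistic model, with illustrative figures; the smoothing constants and example counts are assumptions): the Robertson/Sparck Jones relevance weight is used to weight a term at retrieval time, while candidate expansion terms are ranked for selection by the product r x w, often called the offer weight or selection value:

```python
from math import log

def relevance_weight(N: int, n: int, R: int, r: int) -> float:
    """Robertson/Sparck Jones relevance weight (retrieval-time term weight).
    N: collection size, n: docs containing the term,
    R: known relevant docs, r: relevant docs containing the term."""
    return log(((r + 0.5) * (N - n - R + r + 0.5)) /
               ((n - r + 0.5) * (R - r + 0.5)))

def selection_value(N: int, n: int, R: int, r: int) -> float:
    """Offer weight r * w, used to rank candidate expansion terms for selection."""
    return r * relevance_weight(N, n, R, r)

# Illustrative figures: a rare term versus a common term, both seen in relevant docs.
rare   = dict(N=100_000, n=50,    R=10, r=3)
common = dict(N=100_000, n=5_000, R=10, r=6)
for name, t in (("rare", rare), ("common", common)):
    print(name, round(relevance_weight(**t), 2), round(selection_value(**t), 2))
```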
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval