Search (33 results, page 2 of 2)

  • author_ss:"Robertson, S.E."
  1. Sparck Jones, K.; Walker, S.; Robertson, S.E.: A probabilistic model of information retrieval : development and comparative experiments - part 2 (2000) 0.00
    
  2. Robertson, S.E.; Walker, S.; Beaulieu, M.M.; Gatford, M.; Payne, A.: Okapi at TREC-4 (1996) 0.00
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  3. Robertson, S.E.: The probability ranking principle in IR (1977) 0.00
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp. 281-286.
  4. Beaulieu, M.M.; Gatford, M.; Huang, X.; Robertson, S.E.; Walker, S.; Williams, P.: Okapi at TREC-5 (1997) 0.00
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
  5. Robertson, S.E.; Sparck Jones, K.: Simple, proven approaches to text retrieval (1997) 0.00
    
    Abstract
    This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, so they are easy for end users. The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only titles and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC tests (see Harman 1993 - 1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval. These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.
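    Code sketch
    A minimal sketch of the inverted-file setup the note describes - a term list with linked document identifiers plus counts, statistically informed term weights, and scored, ranked output. The weighting below is a generic tf-idf stand-in for illustration, not the paper's exact formula.

        import math
        from collections import defaultdict, Counter

        def build_inverted_file(docs):
            """Map each term to its postings: {doc_id: term frequency}."""
            index = defaultdict(dict)
            for doc_id, text in docs.items():
                for term, tf in Counter(text.lower().split()).items():
                    index[term][doc_id] = tf
            return index

        def search(index, n_docs, query):
            """Score documents against the request terms; return a ranked list."""
            scores = defaultdict(float)
            for term in set(query.lower().split()):
                postings = index.get(term, {})
                if not postings:
                    continue
                idf = math.log(n_docs / len(postings))  # rarer terms weigh more
                for doc_id, tf in postings.items():
                    scores[doc_id] += (1 + math.log(tf)) * idf
            return sorted(scores.items(), key=lambda pair: -pair[1])

        docs = {1: "text retrieval with weighted terms",
                2: "inverted file organisation for text retrieval",
                3: "statistics of term occurrences in documents"}
        print(search(build_inverted_file(docs), len(docs), "term retrieval"))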
  6. Robertson, S.E.: On relevance weight estimation and query expansion (1986) 0.00
    
    Theme
    Semantic environment in indexing and retrieval
  7. Robertson, S.E.: The parametric description of retrieval tests : Part II: Overall measures (1969) 0.00
    
    Abstract
    Two general requirements for overall measures of retrieval effectiveness are proposed: first, that a measure should be as far as possible independent of generality (interpreted to mean that it can be described in terms of recall and fallout), and second, that it should be able to measure the effectiveness of a performance curve (it should not be restricted to a simple 2x2 table). Several measures that have been proposed are examined with these conditions in mind. It turns out that most of the satisfactory ones are directly or indirectly related to Swets' measure A, the area under the recall-fallout curve. In particular, Brookes' measure S and Rocchio's normalized recall are versions of A.
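    Code sketch
    Swets' measure A is the area under the recall-fallout operating curve. A short illustration, assuming the step curve traced by walking down a single ranked output:

        def area_under_recall_fallout(ranked_relevance, n_relevant, n_nonrelevant):
            """Area under the recall-fallout curve of one ranking.

            ranked_relevance: booleans down the ranked list, True = relevant.
            """
            recall = fallout = area = 0.0
            for is_relevant in ranked_relevance:
                if is_relevant:
                    recall += 1.0 / n_relevant   # curve steps up
                else:
                    step = 1.0 / n_nonrelevant
                    fallout += step              # curve steps right
                    area += recall * step        # area under that step
            return area

        # A perfect ranking (all relevant first) scores 1.0; random ordering
        # tends toward 0.5.
        print(area_under_recall_fallout([True, True, False, False], 2, 2))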
  8. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.00
    
    Abstract
    Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partitions are investigated: TermId and DocId. Design/methodology/approach - We use standard measures used in parallel computing such as speedup and efficiency to examine the computing results and also the space costs of our trial indexing experiments. Findings - The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method. Practical implications - The practical implications are that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration. Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
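    Code sketch
    A toy illustration of the two partitioning schemes compared above, assuming simple modulo and hash assignment: DocId partitioning splits the postings by document, so each partition indexes its own document subset, while TermId partitioning gives each partition the complete postings of its own term subset.

        def partition_docid(index, n_parts):
            """Each partition holds all postings for its own documents."""
            parts = [{} for _ in range(n_parts)]
            for term, postings in index.items():
                for doc_id, tf in postings.items():
                    parts[doc_id % n_parts].setdefault(term, {})[doc_id] = tf
            return parts

        def partition_termid(index, n_parts):
            """Each partition holds the complete postings of its own terms."""
            parts = [{} for _ in range(n_parts)]
            for term, postings in index.items():
                parts[hash(term) % n_parts][term] = dict(postings)
            return parts

        index = {"inverted": {1: 2, 2: 1}, "file": {2: 3}, "parallel": {1: 1}}
        print(partition_docid(index, 2))
        print(partition_termid(index, 2))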
  9. Vechtomova, O.; Robertson, S.E.: ¬A domain-independent approach to finding related entities (2012) 0.00
    
    Abstract
    We propose an approach to the retrieval of entities that have a specific relationship with the entity given in a query. Our research goal is to investigate whether the related entity finding problem can be addressed by combining a measure of the relatedness of candidate answer entities to the query with the likelihood that the candidate answer entity belongs to the target entity category specified in the query. An initial list of candidate entities, extracted from top-ranked documents retrieved for the query, is refined using a number of statistical and linguistic methods. The proposed method extracts the category of the target entity from the query, identifies instances of this category as seed entities, and computes similarity between candidate and seed entities. The evaluation was conducted on the Related Entity Finding task of the Entity Track of TREC 2010, as well as the QA list questions from TREC 2005 and 2006. Evaluation results demonstrate that the proposed methods are effective in finding related entities.
    Theme
    Semantic environment in indexing and retrieval
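    Code sketch
    A schematic sketch of the combination the abstract describes, with hypothetical caller-supplied scoring functions standing in for the paper's statistical and linguistic methods: candidates are ranked by relatedness to the query blended with category evidence from similarity to the seed entities.

        def rank_related_entities(candidates, seeds, relatedness, similarity,
                                  alpha=0.5):
            """Blend query relatedness with target-category evidence.

            relatedness(entity) and similarity(entity, seed) are assumed to
            return scores in [0, 1]; alpha balances the two components.
            """
            def category_evidence(entity):
                return max(similarity(entity, seed) for seed in seeds)
            scored = [(alpha * relatedness(e) + (1 - alpha) * category_evidence(e), e)
                      for e in candidates]
            return [entity for _, entity in sorted(scored, reverse=True)]

        # Toy stand-in scorers:
        relatedness = lambda e: {"okapi": 0.9, "trec": 0.6}.get(e, 0.1)
        similarity = lambda e, seed: 0.8 if e == "okapi" else 0.2
        print(rank_related_entities(["okapi", "trec", "noise"], {"bm25"},
                                    relatedness, similarity))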
  10. Robertson, S.E.: Query-document symmetry and dual models (1994) 0.00
    
    Abstract
    The idea that there is some natural symmetry between queries and documents is explained. If symmetry can be assumed, then it leads to a conception of 'dual' models in information retrieval (given a model, we can construct a dual model in which the roles of documents and queries are reversed). But symmetry breaks down in various ways, which may invalidate this construction. Even if we can construct a dual, it is not obvious that it can be combined with the original.
  11. Robertson, S.E.; Sparck Jones, K.: Relevance weighting of search terms (1976) 0.00
    
    Abstract
    Examines statistical techniques for exploiting relevance information to weight search terms. These techniques are presented as a natural extension of weighting methods using information about the distribution of index terms in documents in general. A series of relevance weighting functions is derived and justified by theoretical considerations. In particular, it is shown that specific weighted search methods are implied by a general probabilistic theory of retrieval. Different applications of relevance weighting are illustrated by experimental results for test collections.
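    Code sketch
    The weighting functions derived here include what is now known as the Robertson/Sparck Jones relevance weight; a minimal version with the customary 0.5 smoothing can be sketched as follows.

        import math

        def rsj_weight(N, n, R, r):
            """Robertson/Sparck Jones relevance weight with 0.5 smoothing.

            N: documents in the collection    n: documents containing the term
            R: known relevant documents       r: relevant documents with the term
            """
            return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                            ((n - r + 0.5) * (R - r + 0.5)))

        # A term found in 8 of 10 known relevant documents but only 50 of
        # 10,000 documents overall earns a strongly positive weight:
        print(rsj_weight(N=10_000, n=50, R=10, r=8))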
  12. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the update of partitioned inverted files (2007) 0.00
    
    Abstract
    Purpose - An issue that tends to be ignored in information retrieval is the updating of inverted files. This is largely because inverted files were devised to provide fast query service, and much work has been done with the emphasis strongly on queries. This paper aims to study the effect of using parallel methods for the update of inverted files in order to reduce costs, by looking at two types of partitioning for inverted files: document identifier and term identifier. Design/methodology/approach - Raw update service and update with query service are studied with these partitioning schemes using an incremental update strategy. The paper uses standard measures from parallel computing, such as speedup, to examine the computing results and also the costs of reorganising indexes while servicing transactions. Findings - Empirical results show that for both transaction processing and index reorganisation the document identifier method is superior. However, there is evidence that the term identifier partitioning method could be useful in a concurrent transaction processing context. Practical implications - There is an increasing need to service updates, which is now becoming a requirement of inverted files (for dynamic collections such as the web), demonstrating that the requirements of inverted file maintenance have shifted from those of the past. Originality/value - The paper is of value to database administrators who manage large-scale and dynamic text collections, and who need to use parallel computing to implement their text retrieval services.
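    Code sketch
    For reference, the standard parallel-computing measures the paper applies to transaction processing and index reorganisation, in their textbook form:

        def speedup(t_serial, t_parallel):
            """How many times faster the parallel run is than the serial baseline."""
            return t_serial / t_parallel

        def efficiency(t_serial, t_parallel, n_processors):
            """Fraction of ideal linear speedup actually achieved."""
            return speedup(t_serial, t_parallel) / n_processors

        # e.g. an update batch taking 120s serially and 20s on 8 processors:
        print(speedup(120, 20), efficiency(120, 20, 8))  # 6.0 and 0.75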
  13. Vechtomova, O.; Karamuftuoglu, M.; Robertson, S.E.: On document relevance and lexical cohesion between query terms (2006) 0.00
    
    Abstract
    Lexical cohesion is a property of text, achieved through lexical-semantic relations between words in text. Most information retrieval systems make use of lexical relations in text only to a limited extent. In this paper we empirically investigate whether the degree of lexical cohesion between the contexts of query terms' occurrences in a document is related to its relevance to the query. Lexical cohesion between distinct query terms in a document is estimated on the basis of the lexical-semantic relations (repetition, synonymy, hyponymy and sibling) that exist between their collocates - words that co-occur with them in the same windows of text. Experiments suggest that significant differences in lexical cohesion exist between relevant and non-relevant document sets. A document ranking method based on lexical cohesion shows some performance improvements.
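    Code sketch
    A rough sketch of the windowed-collocate idea, assuming plain repetition as the only lexical-semantic relation (the paper also uses synonymy, hyponymy and sibling relations):

        def collocates(tokens, term, window=5):
            """Words co-occurring with `term` inside a fixed window of text."""
            found = set()
            for i, tok in enumerate(tokens):
                if tok == term:
                    found.update(tokens[max(0, i - window): i + window + 1])
            found.discard(term)
            return found

        def lexical_cohesion(tokens, term_a, term_b, window=5):
            """Jaccard overlap between the collocate sets of two query terms."""
            a = collocates(tokens, term_a, window)
            b = collocates(tokens, term_b, window)
            return len(a & b) / len(a | b) if a | b else 0.0

        doc = ("probabilistic retrieval models rank documents while "
               "retrieval experiments compare weighting models").split()
        print(lexical_cohesion(doc, "retrieval", "models"))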