Search (38 results, page 1 of 2)

  • theme_ss:"Retrievalalgorithmen"
  • year_i:[1990 TO 2000}
  1. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.04
    0.043804124 = product of:
      0.08760825 = sum of:
        0.08760825 = sum of:
          0.038116705 = weight(_text_:systems in 1319) [ClassicSimilarity], result of:
            0.038116705 = score(doc=1319,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.23767869 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
          0.049491543 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
            0.049491543 = score(doc=1319,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.2708308 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.621-623
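The indented numeric breakdown under each hit is Lucene-style "explain" output for classic TF-IDF scoring (ClassicSimilarity); the figures shown are consistent with tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, with clause scores summed and multiplied by the coord factor. As a rough sanity check, the sketch below recomputes the first hit's score from the numbers shown; the helper names are ours, not part of the search system.

```python
from math import sqrt, log

def idf(doc_freq, max_docs):
    # ClassicSimilarity-style idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm                 # idf * queryNorm
    field_weight = sqrt(freq) * term_idf * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

# First hit (doc 1319): terms "systems" and "22" match, coord(1/2) = 0.5
systems = term_score(2, 5561, 44218, 0.052184064, 0.0546875)     # ~0.0381
twentytwo = term_score(2, 3622, 44218, 0.052184064, 0.0546875)   # ~0.0495
print(0.5 * (systems + twentytwo))                               # ~0.0438
```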
  2. Kelledy, F.; Smeaton, A.F.: Signature files and beyond (1996) 0.04
    0.03754639 = product of:
      0.07509278 = sum of:
        0.07509278 = sum of:
          0.03267146 = weight(_text_:systems in 6973) [ClassicSimilarity], result of:
            0.03267146 = score(doc=6973,freq=2.0), product of:
              0.16037072 = queryWeight, product of:
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.052184064 = queryNorm
              0.2037246 = fieldWeight in 6973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.0731742 = idf(docFreq=5561, maxDocs=44218)
                0.046875 = fieldNorm(doc=6973)
          0.042421322 = weight(_text_:22 in 6973) [ClassicSimilarity], result of:
            0.042421322 = score(doc=6973,freq=2.0), product of:
              0.1827397 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052184064 = queryNorm
              0.23214069 = fieldWeight in 6973, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6973)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. Willett, P.: Best-match text retrieval (1993) 0.02
    0.023578597 = product of:
      0.047157194 = sum of:
        0.047157194 = product of:
          0.09431439 = sum of:
            0.09431439 = weight(_text_:systems in 7818) [ClassicSimilarity], result of:
              0.09431439 = score(doc=7818,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5881023 = fieldWeight in 7818, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7818)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Provides an introduction to the computational techniques that underlie best-match retrieval systems. Discusses the problems of traditional Boolean systems, the characteristics of best-match searching, automatic indexing, term conflation, and the matching of documents and queries (similarity measures, initial weights, relevance weights, and the matching algorithm), and describes operational best-match systems
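A minimal sketch of the matching step described here, ranked ("best-match") output over an inverted file using simple tf-idf weights and an inner-product similarity, is given below; the actual weighting schemes surveyed by Willett vary from system to system.

```python
from collections import defaultdict
from math import log

def best_match(query_terms, inverted_index, num_docs, top_k=10):
    """Rank documents by summed tf-idf contributions of the query terms.
    inverted_index: term -> {doc_id: term_frequency}."""
    scores = defaultdict(float)
    for term in query_terms:
        postings = inverted_index.get(term, {})
        if not postings:
            continue
        idf = log(num_docs / len(postings))      # rarer terms weigh more
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf           # accumulate partial similarities
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```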
  4. Nakkouzi, Z.S.; Eastman, C.M.: Query formulation for handling negation in information retrieval systems (1990) 0.02
    0.018862877 = product of:
      0.037725754 = sum of:
        0.037725754 = product of:
          0.07545151 = sum of:
            0.07545151 = weight(_text_:systems in 3531) [ClassicSimilarity], result of:
              0.07545151 = score(doc=3531,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4704818 = fieldWeight in 3531, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3531)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Queries containing negation are widely recognised as presenting problems for both users and systems. In information retrieval systems such problems usually manifest themselves in the use of the NOT operator. Describes an algorithm to transform Boolean queries with negated terms into queries without negation; the transformation process is based on the use of a hierarchical thesaurus. Examines a set of user requests submitted to the Thomas Cooper Library at the University of South Carolina to determine the pattern and frequency of use of negation.
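The abstract does not give the transformation itself; one plausible reading (our illustration, with a hypothetical thesaurus interface, not the authors' published algorithm) replaces each negated term by the OR of its sibling terms under the same broader concept:

```python
def remove_negation(positive_terms, negated_terms, broader, narrower):
    """Rewrite a query with NOT-terms into a negation-free Boolean query.
    broader(term) -> parent concept; narrower(concept) -> list of child terms.
    Illustrative only: the paper's thesaurus-based procedure may differ."""
    clauses = [("TERM", t) for t in positive_terms]
    for term in negated_terms:
        siblings = [t for t in narrower(broader(term)) if t != term]
        if siblings:
            clauses.append(("OR", siblings))   # accept any sibling instead of NOT term
    return ("AND", clauses)
```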
  5. Savoy, J.: Ranking schemes in hybrid Boolean systems : a new approach (1997) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 393) [ClassicSimilarity], result of:
              0.06534292 = score(doc=393,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 393, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=393)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In most commercial online systems, the retrieval system is based on the Boolean model and its inverted file organization. Since the investment in these systems is so great and changing them could be economically infeasible, this article suggests a new ranking scheme especially adapted for hypertext environments in order to produce more effective retrieval results while maintaining the value of the investment made to date in the Boolean model. To select the retrieved documents, the suggested ranking strategy uses multiple sources of document content evidence. The proposed scheme integrates both the information provided by the index and query terms, and the inherent relationships between documents such as bibliographic references or hypertext links. We demonstrate that our scheme represents an integration of both subject and citation indexing, and results in a significant improvement over classical ranking schemes used in hybrid Boolean systems, while preserving its efficiency. Moreover, by treating nearest neighbours and hypertext links as additional sources of evidence, our strategy takes them into account to further improve retrieval effectiveness and to provide 'good' starting points for browsing in a hypertext or hypermedia environment.
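One way to picture the combination of evidence described above, our own simplified linear mixture rather than Savoy's actual scheme, is to blend a document's term-based score with the scores of the documents it is linked to by citations or hypertext links:

```python
def combined_score(doc, term_scores, links, alpha=0.7):
    """Blend a document's own retrieval status value with the average value of
    its linked neighbours (bibliographic references or hypertext links).
    term_scores: doc -> term-based score; links: doc -> list of linked docs."""
    own = term_scores.get(doc, 0.0)
    neighbours = [term_scores.get(d, 0.0) for d in links.get(doc, [])]
    link_evidence = sum(neighbours) / len(neighbours) if neighbours else 0.0
    return alpha * own + (1.0 - alpha) * link_evidence
```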
  6. Frants, V.I.; Shapiro, J.: Control and feedback in a documentary information retrieval system (1991) 0.02
    0.015401474 = product of:
      0.030802948 = sum of:
        0.030802948 = product of:
          0.061605897 = sum of:
            0.061605897 = weight(_text_:systems in 416) [ClassicSimilarity], result of:
              0.061605897 = score(doc=416,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38414678 = fieldWeight in 416, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Analyses the problem of control in documentary information retrieval systems and shows why an IR system has to be regarded as an adaptive system. Proposes feedback algorithms and shows how they depend on the type of document collection: static (no change in the collection between searches) or dynamic (the collection changes between searches). The proposed algorithms form the basis for the development of fully automated information retrieval systems
  7. Smith, M.; Smith, M.P.; Wade, S.J.: Applying genetic programming to the problem of term weight algorithms (1995) 0.02
    0.015401474 = product of:
      0.030802948 = sum of:
        0.030802948 = product of:
          0.061605897 = sum of:
            0.061605897 = weight(_text_:systems in 5803) [ClassicSimilarity], result of:
              0.061605897 = score(doc=5803,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38414678 = fieldWeight in 5803, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5803)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents the results of an initial study on the application of Genetic Programming (GP) to the production of term weighting algorithms in relevance feedback systems within information retrieval systems. Compares Porter, wpq and GP algorithms with user rankings. Offers a background to term weighting algorithms and Genetic Programming
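A toy rendering of the idea, evolving a term-weighting expression over features such as tf and idf and selecting by how well it separates judged-relevant from non-relevant documents, is sketched below; the operators, features and fitness function are our assumptions, not the study's actual setup (which also compares against the Porter and wpq schemes).

```python
import operator
import random

OPERATORS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]
FEATURES = ["tf", "idf", "doclen"]

def random_expr(depth=2):
    # Small random expression tree over the term features.
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES)
    return (random.choice(OPERATORS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, features):
    if isinstance(expr, str):
        return features[expr]
    (fn, _), left, right = expr
    return fn(evaluate(left, features), evaluate(right, features))

def fitness(expr, judged):
    # Fraction of (relevant, non-relevant) pairs the expression orders correctly.
    rel = [evaluate(expr, f) for f, is_rel in judged if is_rel]
    non = [evaluate(expr, f) for f, is_rel in judged if not is_rel]
    pairs = [(a, b) for a in rel for b in non]
    return sum(a > b for a, b in pairs) / len(pairs) if pairs else 0.0

def mutate(expr):
    # Regrow the whole tree occasionally, otherwise mutate one branch.
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr()
    op, left, right = expr
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def evolve(judged, population=50, generations=20):
    pop = [random_expr() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=lambda e: fitness(e, judged), reverse=True)
        parents = pop[: population // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in range(population - len(parents))]
    return pop[0]   # best weighting expression found
```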
  8. Pfeifer, U.; Pennekamp, S.: Incremental processing of vague queries in interactive retrieval systems (1997) 0.02
    0.015401474 = product of:
      0.030802948 = sum of:
        0.030802948 = product of:
          0.061605897 = sum of:
            0.061605897 = weight(_text_:systems in 735) [ClassicSimilarity], result of:
              0.061605897 = score(doc=735,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.38414678 = fieldWeight in 735, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The application of information retrieval techniques in interactive environments requires systems capable of efficiently processing vague queries. To reach reasonable response times, new data structures and algorithms have to be developed. In this paper we describe an approach taking advantage of the conditions of interactive usage and special access paths. As a reference, we investigated text queries and compared our algorithms to the well-known 'Buckley/Lewit' algorithm. We achieved significant improvements in response times
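The Buckley/Lewit algorithm mentioned as the baseline prunes query evaluation once the membership of the top-ranked set can no longer change. Below is a bare-bones rendering of that pruning idea (our simplification, assuming each term's query weight bounds its per-document contribution; the paper's incremental variant for interactive use is more involved).

```python
import heapq

def pruned_top_k(query_weights, postings, k=10):
    """Process query terms in decreasing weight order; stop once the unseen
    terms can no longer lift any document into the current top k.
    postings: term -> {doc_id: normalized term weight in [0, 1]}."""
    scores = {}
    terms = sorted(query_weights, key=query_weights.get, reverse=True)
    remaining = sum(query_weights.values())   # upper bound on unprocessed contributions
    for term in terms:
        for doc, w in postings.get(term, {}).items():
            scores[doc] = scores.get(doc, 0.0) + query_weights[term] * w
        remaining -= query_weights[term]
        top = heapq.nlargest(k + 1, scores.values())
        if len(top) > k and top[k] + remaining < top[k - 1]:
            break                              # top-k membership is now fixed
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```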
  9. Aigrain, P.; Longueville, V.: A model for the evaluation of expansion techniques in information retrieval systems (1994) 0.01
    0.014147157 = product of:
      0.028294314 = sum of:
        0.028294314 = product of:
          0.056588627 = sum of:
            0.056588627 = weight(_text_:systems in 5331) [ClassicSimilarity], result of:
              0.056588627 = score(doc=5331,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.35286134 = fieldWeight in 5331, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5331)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We describe an evaluation model for expansion systems in information retrieval, that is, systems expanding a user selection of documents in order to provide the user with a larger set of documents sharing the same or related characteristics. Our model leads to a test protocol and practical estimates of the efficiency of an expansion system, provided that it is possible for a sample of users to exhaustively scan the content of a subset of the database in order to decide which documents would have been selected by an 'ideal' expansion system. This condition is met only by databases whose unit contents can be quickly apprehended, such as still image databases or synthetic bibliographical references. We compare our model with other types of possible indicators, and discuss the precision to which our measure can be estimated, using data from experimentation with an image database system developed by our research team
  10. Faloutsos, C.: Signature files (1992) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.056561764 = score(doc=3499,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 5.1999 15:22:48
  11. Loughran, H.: A review of nearest neighbour information retrieval (1994) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 616) [ClassicSimilarity], result of:
              0.054452434 = score(doc=616,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=616)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Explains the concept of 'nearest neighbour' searching, also known as best match or ranked output, which it is claimed can overcome many of the inadequacies of traditional Boolean methods. Also points to some of the limitations. Identifies a number of commercial information retrieval systems which feature this search technique
  12. Zhang, W.; Korf, R.E.: Performance of linear-space search algorithms (1995) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 4744) [ClassicSimilarity], result of:
              0.054452434 = score(doc=4744,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 4744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4744)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Search algorithms in artificial intelligence systems that use space linear in the search depth are employed in practice to solve difficult problems optimally, such as planning and scheduling. Studies the average-case performance of linear-space search algorithms, including depth-first branch-and-bound, iterative-deepening, and recursive best-first search
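Of the three linear-space algorithms named, iterative deepening is the easiest to show in a few lines; this is a generic sketch, not the paper's experimental setup.

```python
def iterative_deepening(start, is_goal, successors, max_depth=50):
    """Depth-first search restarted with an increasing depth bound, keeping
    memory linear in the search depth."""
    def depth_limited(node, bound):
        if is_goal(node):
            return [node]
        if bound == 0:
            return None
        for child in successors(node):
            path = depth_limited(child, bound - 1)
            if path is not None:
                return [node] + path
        return None

    for bound in range(max_depth + 1):
        path = depth_limited(start, bound)
        if path is not None:
            return path
    return None
```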
  13. Brenner, E.H.: Beyond Boolean : new approaches in information retrieval; the quest for intuitive online search systems past, present & future (1995) 0.01
    0.013476291 = product of:
      0.026952581 = sum of:
        0.026952581 = product of:
          0.053905163 = sum of:
            0.053905163 = weight(_text_:systems in 2547) [ClassicSimilarity], result of:
              0.053905163 = score(doc=2547,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.33612844 = fieldWeight in 2547, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The challenge of effectively bringing specific, relevant information from the global sea of data to our fingertips has become increasingly difficult. Discusses how the online information industry, founded on Boolean search systems, may be evolving to take advantage of other methods, such as 'term weighting', 'relevance ranking' and 'query by example'
  14. Kantor, P.; Kim, M.H.; Ibraev, U.; Atasoy, K.: Estimating the number of relevant documents in enormous collections (1999) 0.01
    0.011789299 = product of:
      0.023578597 = sum of:
        0.023578597 = product of:
          0.047157194 = sum of:
            0.047157194 = weight(_text_:systems in 6690) [ClassicSimilarity], result of:
              0.047157194 = score(doc=6690,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.29405114 = fieldWeight in 6690, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In assessing information retrieval systems, it is important not only to know the precision of the retrieved set, but also to compare the number of retrieved relevant items to the total number of relevant items. For large collections, such as the TREC test collections, or the World Wide Web, it is not possible to enumerate the entire set of relevant documents. If the retrieved documents are evaluated, a variant of the statistical "capture-recapture" method can be used to estimate the total number of relevant documents, provided the several retrieval systems used are sufficiently independent. We show that the underlying signal detection model supporting such an analysis can be extended in two ways. First, assuming that there are two distinct performance characteristics (corresponding to the chance of retrieving a relevant, and retrieving a given non-relevant document), we show that if there are three or more independent systems available it is possible to estimate the number of relevant documents without actually having to decide whether each individual document is relevant. We report applications of this 3-system method to the TREC data, leading to the conclusion that the independence assumptions are not satisfied. We then extend the model to a multi-system, multi-problem model, and show that it is possible to include statistical dependencies of all orders in the model, and determine the number of relevant documents for each of the problems in the set. Application to the TREC setting will be presented
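The two-system case behind this approach is the classical capture-recapture (Lincoln-Petersen) estimate; the paper's contribution, extending it to three or more systems and to statistically dependent systems, goes beyond this sketch.

```python
def capture_recapture_estimate(relevant_a, relevant_b):
    """Estimate the total number of relevant documents from the relevant sets
    retrieved by two (assumed independent) systems: N ~ |A| * |B| / |A & B|."""
    overlap = len(relevant_a & relevant_b)
    if overlap == 0:
        raise ValueError("no overlap between the two samples; estimate undefined")
    return len(relevant_a) * len(relevant_b) / overlap

# e.g. capture_recapture_estimate({1, 2, 3, 4}, {3, 4, 5, 6, 7}) -> 10.0
```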
  15. Can, F.: Incremental clustering for dynamic information processing (1993) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 6627) [ClassicSimilarity], result of:
              0.043561947 = score(doc=6627,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 6627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6627)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    ACM transactions on information systems. 11(1993) no.2, S.143-164
  16. Tenopir, C.: Online databases : natural language searching with WIN (1993) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 7038) [ClassicSimilarity], result of:
              0.043561947 = score(doc=7038,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 7038, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7038)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    WESTLAW is one of the first major commercial online systems to embrace both natural language input and partial match searching. Provides a background to WESTLAW. Explains how the WESTLAW Is Natural (WIN) search engine works. Some searchers find that when searching with commands and Boolean logic, results differ drastically from those produced by searching with WIN. Discusses exact match Boolean logic search engines
  17. Hofferer, M.: Heuristic search in information retrieval (1994) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 1070) [ClassicSimilarity], result of:
              0.043561947 = score(doc=1070,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 1070, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1070)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow 1993. Ed.: Ruben Leon
  18. Sembok, T.M.T.; Rijsbergen, C.J. van: IMAGING: a relevant feedback retrieval with nearest neighbour clusters (1994) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 1071) [ClassicSimilarity], result of:
              0.043561947 = score(doc=1071,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 1071, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1071)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow 1993. Ed.: Ruben Leon
  19. Wong, S.K.M.: On modelling information retrieval with probabilistic inference (1995) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 1938) [ClassicSimilarity], result of:
              0.043561947 = score(doc=1938,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 1938, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1938)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    ACM transactions on information systems. 13(1995) no.1, S.38-68
  20. Harman, D.: Relevance feedback and other query modification techniques (1992) 0.01
    0.010890487 = product of:
      0.021780973 = sum of:
        0.021780973 = product of:
          0.043561947 = sum of:
            0.043561947 = weight(_text_:systems in 3508) [ClassicSimilarity], result of:
              0.043561947 = score(doc=3508,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2716328 = fieldWeight in 3508, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3508)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a survey of relevance feedback techniques that have been used in past research, recommends various query modification approaches for use in different retrieval systems, and gives some guidelines for the efficient design of the relevance feedback component of a retrieval system
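The best-known query-modification technique in this family is Rocchio reweighting, which moves the query toward judged-relevant documents and away from judged-non-relevant ones; the sketch below is the standard formulation, not necessarily the specific variants Harman recommends.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query modification over sparse term-weight dictionaries.
    query, and each document in relevant / nonrelevant, map term -> weight."""
    terms = set(query)
    for doc in list(relevant) + list(nonrelevant):
        terms |= set(doc)
    modified = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / len(relevant) if relevant else 0.0
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
        weight = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        if weight > 0:
            modified[t] = weight        # negative weights are usually discarded
    return modified
```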