Search (29 results, page 1 of 2)

  • × year_i:[2000 TO 2010}
  • × theme_ss:"Retrievalalgorithmen"
  1. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.03
    0.031199675 = product of:
      0.06239935 = sum of:
        0.06239935 = sum of:
          0.031563994 = weight(_text_:b in 56) [ClassicSimilarity], result of:
            0.031563994 = score(doc=56,freq=2.0), product of:
              0.16126883 = queryWeight, product of:
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.045518078 = queryNorm
              0.19572285 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.542962 = idf(docFreq=3476, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
          0.030835358 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
            0.030835358 = score(doc=56,freq=2.0), product of:
              0.15939656 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045518078 = queryNorm
              0.19345059 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
      0.5 = coord(1/2)
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
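The score shown for each hit is a Lucene ClassicSimilarity (TF-IDF) explanation, and the printed factors can be multiplied back together to reproduce it. A minimal sketch in Python, using the numbers from the explain tree of result 1 (doc=56); the function name is illustrative, not part of the Lucene API:

```python
import math

def term_score(freq, idf, field_norm, query_norm):
    """One term's contribution under Lucene's ClassicSimilarity:
    queryWeight = idf * queryNorm, fieldWeight = sqrt(freq) * idf * fieldNorm,
    and the term score is their product."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Factors copied from the explain tree of result 1 (doc=56):
score_b  = term_score(2.0, 3.542962,  0.0390625, 0.045518078)  # _text_:b
score_22 = term_score(2.0, 3.5018296, 0.0390625, 0.045518078)  # _text_:22
total = 0.5 * (score_b + score_22)  # coord(1/2): half the query clauses matched
# total ≈ 0.031199675, the displayed score of this hit
```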
  2. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.02
    0.02158475 = product of:
      0.0431695 = sum of:
        0.0431695 = product of:
          0.086339 = sum of:
            0.086339 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.086339 = score(doc=3445,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
  3. Silveira, M.; Ribeiro-Neto, B.: Concept-based ranking : a case study in the juridical domain (2004) 0.02
    0.018938396 = product of:
      0.037876792 = sum of:
        0.037876792 = product of:
          0.075753585 = sum of:
            0.075753585 = weight(_text_:b in 2339) [ClassicSimilarity], result of:
              0.075753585 = score(doc=2339,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.46973482 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.012334143 = product of:
      0.024668286 = sum of:
        0.024668286 = product of:
          0.04933657 = sum of:
            0.04933657 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.04933657 = score(doc=5108,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 18:30:22
  5. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.01
    0.012334143 = product of:
      0.024668286 = sum of:
        0.024668286 = product of:
          0.04933657 = sum of:
            0.04933657 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.04933657 = score(doc=1422,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:27:23
  6. Shah, B.; Raghavan, V.; Dhatric, P.; Zhao, X.: A cluster-based approach for efficient content-based image retrieval using a similarity-preserving space transformation method (2006) 0.01
    0.011159557 = product of:
      0.022319114 = sum of:
        0.022319114 = product of:
          0.044638228 = sum of:
            0.044638228 = weight(_text_:b in 6118) [ClassicSimilarity], result of:
              0.044638228 = score(doc=6118,freq=4.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.2767939 = fieldWeight in 6118, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6118)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The techniques of clustering and space transformation have been successfully used in the past to solve a number of pattern recognition problems. In this article, the authors propose a new approach to content-based image retrieval (CBIR) that uses (a) a newly proposed similarity-preserving space transformation method to transform the original low-level image space into a high-level vector space that enables efficient query processing, and (b) a clustering scheme that further improves the efficiency of the retrieval system. This combination is unique, and the resulting system provides the synergistic advantages of using both clustering and space transformation. The proposed space transformation method is shown to preserve the order of the distances in the transformed feature space. This strategy makes the approach to retrieval generic, as it can be applied to object types other than images and to feature spaces more general than metric spaces. The CBIR approach uses the inexpensive "estimated" distance in the transformed space, as opposed to the computationally inefficient "real" distance in the original space, to retrieve the desired results for a given query image. The authors also provide a theoretical analysis of the complexity of their CBIR approach when used for color-based retrieval, which shows that it is computationally more efficient than other comparable approaches. An extensive set of experiments to test the efficiency and effectiveness of the proposed approach has been performed. The results show that the approach offers superior response time (improvement of 1-2 orders of magnitude compared to retrieval approaches that either use pruning techniques like indexing, clustering, etc., or space transformation, but not both) with sufficiently high retrieval accuracy.
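The pruning idea in this abstract (compare a query against cluster representatives first, then scan only the most promising cluster) can be sketched independently of the image features. A hedged toy version in Python; the k-means helper and the data are illustrative stand-ins, not the authors' similarity-preserving transformation:

```python
import math
import random

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means: group the (transformed) feature vectors offline."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: euclid(p, centroids[i]))].append(p)
        centroids = [[sum(xs) / len(c) for xs in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def cluster_search(query, centroids, clusters):
    """Prune: scan only the cluster whose centroid is nearest the query."""
    best = min(range(len(centroids)), key=lambda i: euclid(query, centroids[i]))
    return min(clusters[best], key=lambda p: euclid(query, p))

vectors = [[0, 0], [0, 1], [10, 10], [10, 11]]  # stand-ins for transformed image features
centroids, clusters = kmeans(vectors, 2)
nearest = cluster_search([9, 9], centroids, clusters)
```

Only one cluster is scanned per query, which is where the response-time gain over a full linear scan comes from.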
  7. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.01
    0.010792375 = product of:
      0.02158475 = sum of:
        0.02158475 = product of:
          0.0431695 = sum of:
            0.0431695 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.0431695 = score(doc=3276,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 3.2005 16:23:22
  8. Zhu, B.; Chen, H.: Validating a geographical image retrieval system (2000) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 4769) [ClassicSimilarity], result of:
              0.037876792 = score(doc=4769,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 4769, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4769)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Drucker, H.; Shahrary, B.; Gibbon, D.C.: Support vector machines : relevance feedback and information retrieval (2002) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 2581) [ClassicSimilarity], result of:
              0.037876792 = score(doc=2581,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 2581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2581)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Dominich, S.; Skrop, A.: PageRank and interaction information retrieval (2005) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 3268) [ClassicSimilarity], result of:
              0.037876792 = score(doc=3268,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 3268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3268)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The PageRank method is used by the Google Web search engine to compute the importance of Web pages. Two different views have been developed for the interpretation of the PageRank method and values: (a) stochastic (random surfer): the PageRank values can be conceived as the steady-state distribution of a Markov chain, and (b) algebraic: the PageRank values form the eigenvector corresponding to eigenvalue 1 of the Web link matrix. The Interaction Information Retrieval (I²R) method is a nonclassical information retrieval paradigm, which represents a connectionist approach based on dynamic systems. In the present paper, a different interpretation of PageRank is proposed, namely, a dynamic systems viewpoint, by showing that the PageRank method can be formally interpreted as a particular case of the Interaction Information Retrieval method; and thus, the PageRank values may be interpreted as neutral equilibrium points of the Web.
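The "random surfer" reading of PageRank described in this abstract can be sketched as a power iteration; the vector it converges to is the eigenvector for eigenvalue 1 mentioned in the algebraic view. A minimal sketch (the toy graph and damping value are illustrative, and dangling pages are assumed away):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration for the random-surfer steady state.
    Assumes every page has at least one out-link (no dangling pages)."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
# the ranks form a probability distribution over the pages
```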
  11. Lin, J.; Katz, B.: Building a reusable test collection for question answering (2006) 0.01
    0.009469198 = product of:
      0.018938396 = sum of:
        0.018938396 = product of:
          0.037876792 = sum of:
            0.037876792 = weight(_text_:b in 5045) [ClassicSimilarity], result of:
              0.037876792 = score(doc=5045,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23486741 = fieldWeight in 5045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5045)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
              0.037002426 = score(doc=1451,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 1451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:27:36
  13. Fan, W.; Fox, E.A.; Pathak, P.; Wu, H.: The effects of fitness functions on genetic programming-based ranking discovery for Web search (2004) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 2239) [ClassicSimilarity], result of:
              0.037002426 = score(doc=2239,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 2239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 5.2004 19:22:06
  14. Furner, J.: A unifying model of document relatedness for hybrid search engines (2003) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 2717) [ClassicSimilarity], result of:
              0.037002426 = score(doc=2717,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 2717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2717)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 9.2004 17:32:22
  15. Witschel, H.F.: Global term weights in distributed environments (2008) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 2096) [ClassicSimilarity], result of:
              0.037002426 = score(doc=2096,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 2096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2096)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.2008 9:44:22
  16. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
              0.037002426 = score(doc=2419,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 2419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2419)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16.11.2008 16:22:48
  17. Campos, L.M. de; Fernández-Luna, J.M.; Huete, J.F.: Implementing relevance feedback in the Bayesian network retrieval model (2003) 0.01
    0.009250606 = product of:
      0.018501213 = sum of:
        0.018501213 = product of:
          0.037002426 = sum of:
            0.037002426 = weight(_text_:22 in 825) [ClassicSimilarity], result of:
              0.037002426 = score(doc=825,freq=2.0), product of:
                0.15939656 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045518078 = queryNorm
                0.23214069 = fieldWeight in 825, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=825)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:30:19
  18. Henzinger, M.R.: Link analysis in Web information retrieval (2000) 0.01
    0.008927646 = product of:
      0.017855292 = sum of:
        0.017855292 = product of:
          0.035710584 = sum of:
            0.035710584 = weight(_text_:b in 801) [ClassicSimilarity], result of:
              0.035710584 = score(doc=801,freq=4.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.22143513 = fieldWeight in 801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.03125 = fieldNorm(doc=801)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    The goal of information retrieval is to find all documents relevant for a user query in a collection of documents. Decades of research in information retrieval were successful in developing and refining techniques that are solely word-based (see e.g., [2]). With the advent of the web, new sources of information became available, one of them being the hyperlinks between documents and records of user behavior. To be precise, hypertexts (i.e., collections of documents connected by hyperlinks) have existed and have been studied for a long time. What was new was the large number of hyperlinks created by independent individuals. Hyperlinks provide a valuable source of information for web information retrieval, as we will show in this article. This area of information retrieval is commonly called link analysis. Why would one expect hyperlinks to be useful? A hyperlink is a reference to a web page B that is contained in a web page A. When the hyperlink is clicked on in a web browser, the browser displays page B. This functionality alone is not helpful for web information retrieval. However, the way hyperlinks are typically used by authors of web pages can give them valuable information content. Typically, authors create links because they think they will be useful for the readers of the pages. Thus, links are usually either navigational aids that, for example, bring the reader back to the homepage of the site, or links that point to pages whose content augments the content of the current page. The second kind of link tends to point to high-quality pages that might be on the same topic as the page containing the link.
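The intuition that links act as endorsements is exactly what link-analysis algorithms formalize. One canonical example (not spelled out in the excerpt above) is Kleinberg's HITS, which scores pages as hubs and authorities by mutual reinforcement; a hedged sketch with an invented toy graph:

```python
import math

def hits(links, iters=30):
    """Hub/authority scores via mutual reinforcement (HITS):
    a good authority is linked from good hubs, and vice versa."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        auth = {p: sum(hub[src] for src, outs in links.items() if p in outs)
                for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

# "good" is endorsed by two independent pages, "home" only by a navigational link
links = {"nav": ["home"], "a1": ["good"], "a2": ["good"]}
hub, auth = hits(links)
```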
  19. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 1615) [ClassicSimilarity], result of:
              0.031563994 = score(doc=1615,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 1615, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1615)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Fuhr, N.: Theorie des Information Retrieval I : Modelle (2004) 0.01
    0.007890998 = product of:
      0.015781997 = sum of:
        0.015781997 = product of:
          0.031563994 = sum of:
            0.031563994 = weight(_text_:b in 2912) [ClassicSimilarity], result of:
              0.031563994 = score(doc=2912,freq=2.0), product of:
                0.16126883 = queryWeight, product of:
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.045518078 = queryNorm
                0.19572285 = fieldWeight in 2912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.542962 = idf(docFreq=3476, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information retrieval (IR) models specify how the answer documents for a given query are determined from a document collection. Each model makes certain assumptions about the structure of documents and queries and then defines the so-called retrieval function, which determines the retrieval weight of a document with respect to a query - in the case of Boolean retrieval, for example, one of the weights 0 or 1. The documents are then sorted by descending weight and presented to the user. Before the individual models are examined more closely, some basic characteristics of retrieval models are described. As mentioned at the outset, each model makes assumptions about the structure of documents and queries. A document can be regarded either as a set or as a multiset of so-called terms, where in the latter case multiple occurrences are taken into account. Here 'term' subsumes any search expression, which may be a single word, a multi-word concept, or a complex free-text pattern. This document representation is in turn mapped onto a so-called document description, in which the individual terms may be weighted; this is the task of the indexing models described in chapter B 5. In what follows, we distinguish only between unweighted indexing (the weight of a term is either 0 or 1) and weighted indexing (the weight is a non-negative real number). Just as with documents, the terms in the query can be either unweighted or weighted. In addition, one distinguishes between linear queries (the query as a set of terms, unweighted or weighted) and Boolean queries.
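The distinction drawn in this abstract (a Boolean retrieval function yielding weight 0 or 1 vs. a linear retrieval function over weighted indexing, with documents presented in order of descending retrieval weight) can be sketched directly. A toy illustration in Python; the documents, terms, and weights are invented for the example:

```python
def boolean_retrieval(query_terms, doc_terms):
    """Boolean (conjunctive) retrieval function: weight 1 if every
    query term occurs in the document, otherwise 0."""
    return 1 if set(query_terms) <= set(doc_terms) else 0

def linear_retrieval(query_weights, doc_weights):
    """Linear retrieval function over weighted indexing: the inner
    product of query term weights and document term weights."""
    return sum(w * doc_weights.get(t, 0.0) for t, w in query_weights.items())

docs = {
    "d1": {"retrieval": 0.8, "model": 0.5},  # weighted document descriptions
    "d2": {"retrieval": 0.3},
}
query = {"retrieval": 1.0, "model": 1.0}
ranking = sorted(docs, key=lambda d: linear_retrieval(query, docs[d]), reverse=True)
# documents are presented in order of descending retrieval weight
```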