Search (25 results, page 1 of 2)

  • Filter: author_ss:"Croft, W.B."
  1. Croft, W.B.: Approaches to intelligent information retrieval (1987) 0.04
    0.044542145 = product of:
      0.22271073 = sum of:
        0.07423691 = weight(_text_:23 in 1094) [ClassicSimilarity], result of:
          0.07423691 = score(doc=1094,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.63357824 = fieldWeight in 1094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=1094)
        0.07423691 = weight(_text_:23 in 1094) [ClassicSimilarity], result of:
          0.07423691 = score(doc=1094,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.63357824 = fieldWeight in 1094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=1094)
        0.07423691 = weight(_text_:23 in 1094) [ClassicSimilarity], result of:
          0.07423691 = score(doc=1094,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.63357824 = fieldWeight in 1094, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=1094)
      0.2 = coord(3/15)
    
    Source
    Information processing and management. 23(1987), S.249-254
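The explain tree above can be reproduced directly. Below is a minimal sketch of Lucene's ClassicSimilarity arithmetic (tf x idf x fieldNorm x queryNorm, scaled by the coordination factor), plugging in the constants shown in the trace for entry 1:

```python
import math

def classic_sim_term_score(freq, idf, query_norm, field_norm):
    """One weight(...) node in the trace: score = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # 0.117170855 in the trace
    field_weight = tf * idf * field_norm  # 0.63357824 in the trace
    return query_weight * field_weight    # 0.07423691 in the trace

# Entry 1: three identical matches on "23", coord(3/15) = 0.2
term = classic_sim_term_score(freq=2.0, idf=3.5840597,
                              query_norm=0.032692216, field_norm=0.125)
total = 3 * term * (3 / 15)
print(total)  # ~0.044542, matching the 0.044542145 shown above
```

The same arithmetic, with different fieldNorm and coord values, reproduces every other trace on this page.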
  2. Croft, W.B.: Effective retrieval based on combining evidence from the corpus and users (1995) 0.03
    0.031120533 = product of:
      0.11670199 = sum of:
        0.037118454 = weight(_text_:23 in 4489) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4489,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4489)
        0.037118454 = weight(_text_:23 in 4489) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4489,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4489)
        0.037118454 = weight(_text_:23 in 4489) [ClassicSimilarity], result of:
          0.037118454 = score(doc=4489,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.31678912 = fieldWeight in 4489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0625 = fieldNorm(doc=4489)
        0.005346625 = weight(_text_:in in 4489) [ClassicSimilarity], result of:
          0.005346625 = score(doc=4489,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.120230645 = fieldWeight in 4489, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4489)
      0.26666668 = coord(4/15)
    
    Abstract
    INQUERY is a text retrieval system that is the basis of a number of WWW applications, including the THOMAS system supported by the Library of Congress. Surveys the representation, query processing, and retrieval techniques used in the system. By combining evidence about relevance from the corpus, individual documents, and users, INQUERY achieves effective overall recall and precision while avoiding occasional major failures
    Date
    17. 7.1996 20:18:23
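The evidence combination the abstract describes can be illustrated with INQUERY-style belief operators. The operator forms below follow the commonly published inference-network definitions (#and, #or, #sum), not anything stated in this record; the probabilities are made up:

```python
from functools import reduce

# Illustrative INQUERY-style belief operators over per-source relevance
# beliefs (probabilities in [0, 1]).
def bel_and(beliefs):
    """All evidence must hold: product of beliefs."""
    return reduce(lambda a, b: a * b, beliefs, 1.0)

def bel_or(beliefs):
    """Any evidence suffices: noisy-OR combination."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), beliefs, 1.0)

def bel_sum(beliefs):
    """Average the evidence."""
    return sum(beliefs) / len(beliefs)

# Combining corpus, document, and user evidence for one document:
evidence = [0.6, 0.8, 0.5]
print(bel_and(evidence))  # 0.24
print(bel_or(evidence))   # 0.96
```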
  3. Croft, W.B.: Combining approaches to information retrieval (2000) 0.02
    0.023783326 = product of:
      0.089187466 = sum of:
        0.02783884 = weight(_text_:23 in 6862) [ClassicSimilarity], result of:
          0.02783884 = score(doc=6862,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.02783884 = weight(_text_:23 in 6862) [ClassicSimilarity], result of:
          0.02783884 = score(doc=6862,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.02783884 = weight(_text_:23 in 6862) [ClassicSimilarity], result of:
          0.02783884 = score(doc=6862,freq=2.0), product of:
            0.117170855 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.032692216 = queryNorm
            0.23759183 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.005670953 = weight(_text_:in in 6862) [ClassicSimilarity], result of:
          0.005670953 = score(doc=6862,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.12752387 = fieldWeight in 6862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.26666668 = coord(4/15)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
    Date
    29.12.2001 20:23:17
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
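The meta-search combination the abstract mentions is often implemented with the classic CombSUM/CombMNZ fusion rules. The sketch below is one such scheme, named here for illustration; the paper itself surveys several formal models, not this particular recipe:

```python
def normalize(scores):
    """Min-max normalize one system's document scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def comb_sum(runs, mnz=False):
    """CombSUM: add normalized scores across systems. CombMNZ further
    multiplies by the number of systems that retrieved the document."""
    fused = {}
    for run in map(normalize, runs):
        for doc, s in run.items():
            fused[doc] = fused.get(doc, 0.0) + s
    if mnz:
        hits = {}
        for run in runs:
            for doc in run:
                hits[doc] = hits.get(doc, 0) + 1
        fused = {doc: s * hits[doc] for doc, s in fused.items()}
    return sorted(fused, key=fused.get, reverse=True)

run_a = {"d1": 2.0, "d2": 1.5, "d3": 0.5}   # hypothetical system A
run_b = {"d2": 9.0, "d4": 3.0}              # hypothetical system B
print(comb_sum([run_a, run_b]))  # d2 ranks first: retrieved highly by both
```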
  4. Jing, Y.; Croft, W.B.: An association thesaurus for information retrieval (199?) 0.00
    0.002903581 = product of:
      0.021776855 = sum of:
        0.012420262 = weight(_text_:und in 4494) [ClassicSimilarity], result of:
          0.012420262 = score(doc=4494,freq=2.0), product of:
            0.07245795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.032692216 = queryNorm
            0.17141339 = fieldWeight in 4494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
        0.009356594 = weight(_text_:in in 4494) [ClassicSimilarity], result of:
          0.009356594 = score(doc=4494,freq=8.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21040362 = fieldWeight in 4494, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
      0.13333334 = coord(2/15)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus [Conception and application of the thesaurus principle]
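The collection-dependent association idea can be sketched crudely: score candidate expansion terms by how often they co-occur with a query term in the same document. This is only the co-occurrence principle; PhraseFinder's actual phrase-level method is more involved, and the toy documents here are invented:

```python
from collections import Counter

def association_scores(docs, term):
    """Count, per candidate term, the documents it shares with `term`."""
    assoc = Counter()
    for doc in docs:
        vocab = set(doc.lower().split())
        if term in vocab:
            assoc.update(vocab - {term})
    return assoc

docs = [
    "query expansion improves retrieval",
    "retrieval systems rank documents",
    "thesaurus based query expansion",
]
# "query" co-occurs with "expansion" in two documents, so it ranks highest
print(association_scores(docs, "expansion").most_common(2))
```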
  5. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.00
    0.001574878 = product of:
      0.023623168 = sum of:
        0.023623168 = product of:
          0.070869505 = sum of:
            0.070869505 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.070869505 = score(doc=334,freq=2.0), product of:
                0.114482574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032692216 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.33333334 = coord(1/3)
      0.06666667 = coord(1/15)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  6. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    9.842988E-4 = product of:
      0.01476448 = sum of:
        0.01476448 = product of:
          0.04429344 = sum of:
            0.04429344 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.04429344 = score(doc=3103,freq=2.0), product of:
                0.114482574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032692216 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.33333334 = coord(1/3)
      0.06666667 = coord(1/15)
    
    Date
    27. 2.1999 20:55:22
  7. Turtle, H.; Croft, W.B.: Inference networks for document retrieval (1990) 0.00
    7.7171903E-4 = product of:
      0.0115757845 = sum of:
        0.0115757845 = weight(_text_:in in 1936) [ClassicSimilarity], result of:
          0.0115757845 = score(doc=1936,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.260307 = fieldWeight in 1936, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1936)
      0.06666667 = coord(1/15)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.287-298
    Source
    Proceedings of the thirteenth international conference on research and development in information retrieval
  8. Krovetz, R.; Croft, W.B.: Lexical ambiguity and information retrieval (1992) 0.00
    6.973995E-4 = product of:
      0.010460991 = sum of:
        0.010460991 = weight(_text_:in in 4028) [ClassicSimilarity], result of:
          0.010460991 = score(doc=4028,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.23523843 = fieldWeight in 4028, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4028)
      0.06666667 = coord(1/15)
    
    Abstract
    Reports on an analysis of lexical ambiguity in information retrieval text collections and on experiments to determine the utility of word meanings for separating relevant from nonrelevant documents. Results show that there is considerable ambiguity even in a specialised database. Word senses provide a significant separation between relevant and nonrelevant documents, but several factors contribute to determining whether disambiguation will improve performance. For example, resolving lexical ambiguity was found to have little impact on retrieval effectiveness for documents that have many words in common with the query. Discusses other uses of word sense disambiguation in an information retrieval context
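One way to make the word-sense idea concrete is a simplified Lesk-style disambiguator: pick the sense whose gloss overlaps most with the query context. This is an illustrative stand-in, not the sense-resolution method the paper evaluates, and the glosses below are invented:

```python
def lesk(context, senses):
    """Pick the sense whose gloss shares the most words with the context.
    `senses` maps sense name -> gloss string. Illustrative only."""
    ctx = set(context.lower().split())
    overlap = {name: len(ctx & set(gloss.lower().split()))
               for name, gloss in senses.items()}
    return max(overlap, key=overlap.get)

senses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "sloping land beside a body of water",
}
print(lesk("the bank raised its deposits interest rate", senses))  # bank/finance
```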
  9. Tavakoli, L.; Zamani, H.; Scholer, F.; Croft, W.B.; Sanderson, M.: Analyzing clarification in asynchronous information-seeking conversations (2022) 0.00
    6.5482524E-4 = product of:
      0.009822378 = sum of:
        0.009822378 = weight(_text_:in in 496) [ClassicSimilarity], result of:
          0.009822378 = score(doc=496,freq=12.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.22087781 = fieldWeight in 496, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=496)
      0.06666667 = coord(1/15)
    
    Abstract
    This research analyzes human-generated clarification questions to provide insights into how they are used to disambiguate and provide a better understanding of information needs. A set of clarification questions is extracted from posts on the Stack Exchange platform. A novel taxonomy is defined for the annotation of the questions and their responses. We investigate the clarification questions in terms of whether they add any information to the post (the initial question posted by the asker) and the accepted answer, which is the answer chosen by the asker. After identifying which clarification questions are more useful, we investigate the characteristics of these questions in terms of their types and patterns. Non-useful clarification questions are identified, and their patterns are compared with those of useful clarifications. Our analysis indicates that the most useful clarification questions have similar patterns, regardless of topic. This research contributes to an understanding of clarification in conversations and can provide insight for clarification dialogues in conversational search scenarios and for the possible system generation of clarification requests in information-seeking conversations.
  10. Callan, J.; Croft, W.B.; Broglio, J.: TREC and TIPSTER experiments with INQUERY (1995) 0.00
    6.301059E-4 = product of:
      0.009451588 = sum of:
        0.009451588 = weight(_text_:in in 1944) [ClassicSimilarity], result of:
          0.009451588 = score(doc=1944,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21253976 = fieldWeight in 1944, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1944)
      0.06666667 = coord(1/15)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.436-439.
  11. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.00
    6.2377297E-4 = product of:
      0.009356594 = sum of:
        0.009356594 = weight(_text_:in in 5004) [ClassicSimilarity], result of:
          0.009356594 = score(doc=5004,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21040362 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.06666667 = coord(1/15)
    
  12. Rajashekar, T.B.; Croft, W.B.: Combining automatic and manual index representations in probabilistic retrieval (1995) 0.00
    6.2377297E-4 = product of:
      0.009356594 = sum of:
        0.009356594 = weight(_text_:in in 2418) [ClassicSimilarity], result of:
          0.009356594 = score(doc=2418,freq=8.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.21040362 = fieldWeight in 2418, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2418)
      0.06666667 = coord(1/15)
    
    Abstract
    Results from research in information retrieval have suggested that significant improvements in retrieval effectiveness can be obtained by combining results from multiple index representations, query formulations, and search strategies. The inference net model of retrieval, which was designed from this point of view, treats information retrieval as an evidential reasoning process where multiple sources of evidence about document and query content are combined to estimate relevance probabilities. Uses a system based on this model to study the retrieval effectiveness benefits of combining the types of document and query information that are found in typical commercial databases and information services. The results indicate that substantial real benefits are possible
  13. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.00
    5.9777097E-4 = product of:
      0.008966564 = sum of:
        0.008966564 = weight(_text_:in in 2605) [ClassicSimilarity], result of:
          0.008966564 = score(doc=2605,freq=10.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.20163295 = fieldWeight in 2605, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2605)
      0.06666667 = coord(1/15)
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open source search engine. Supplements: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; the Galago search engine
  14. Croft, W.B.: What do people want from information retrieval? : the top 10 research issues for companies that use and sell IR systems (1995) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 3402) [ClassicSimilarity], result of:
          0.008019937 = score(doc=3402,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 3402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=3402)
      0.06666667 = coord(1/15)
    
    Footnote
    Also in: D-Lib Magazine, November 1995
  15. Belkin, N.J.; Croft, W.B.: Information filtering and information retrieval : two sides of the same coin? (1992) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 6093) [ClassicSimilarity], result of:
          0.008019937 = score(doc=6093,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 6093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=6093)
      0.06666667 = coord(1/15)
    
    Abstract
    One of nine articles in this issue of Communications of the ACM devoted to information filtering
  16. Liu, X.; Croft, W.B.: Cluster-based retrieval using language models (2004) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 4115) [ClassicSimilarity], result of:
          0.008019937 = score(doc=4115,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 4115, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=4115)
      0.06666667 = coord(1/15)
    
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  17. Allan, J.; Croft, W.B.; Callan, J.: The University of Massachusetts and a dozen TRECs (2005) 0.00
    5.346625E-4 = product of:
      0.008019937 = sum of:
        0.008019937 = weight(_text_:in in 5086) [ClassicSimilarity], result of:
          0.008019937 = score(doc=5086,freq=2.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.18034597 = fieldWeight in 5086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=5086)
      0.06666667 = coord(1/15)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  18. Croft, W.B.; Harper, D.J.: Using probabilistic models of document retrieval without relevance information (1979) 0.00
    5.040847E-4 = product of:
      0.00756127 = sum of:
        0.00756127 = weight(_text_:in in 4520) [ClassicSimilarity], result of:
          0.00756127 = score(doc=4520,freq=4.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.17003182 = fieldWeight in 4520, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4520)
      0.06666667 = coord(1/15)
    
    Abstract
    Based on a probabilistic model, proposes strategies for the initial search and an intermediate search. Retrieval experiments with the Cranfield collection of 1,400 documents show that this initial search strategy is better than conventional search strategies both in terms of retrieval effectiveness and in terms of the number of queries that retrieve relevant documents. The intermediate search is a useful substitute for a relevance feedback search. A cluster search would be an effective alternative strategy.
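The initial-search idea can be sketched with the idf-like weight that falls out of the probabilistic model when no relevance information exists. The toy document-frequency numbers below are invented (only the N=1,400 collection size echoes the abstract), and the exact constant in the weight is omitted since it does not affect the ranking:

```python
import math

def initial_weight(n_i, N):
    """Term weight for the initial search when no relevance information
    exists: assuming the within-relevant term probability is constant,
    the ranking reduces to an idf-like log((N - n_i) / n_i)."""
    return math.log((N - n_i) / n_i)

def rank(query_terms, doc_term_sets, df, N):
    """Return document indices ordered by summed initial weights."""
    scores = [sum(initial_weight(df[t], N) for t in query_terms if t in doc)
              for doc in doc_term_sets]
    return sorted(range(len(doc_term_sets)),
                  key=lambda i: scores[i], reverse=True)

# Hypothetical term -> document frequency, Cranfield-sized N = 1,400
df = {"boundary": 40, "layer": 60, "flow": 400}
docs = [{"boundary", "layer"}, {"flow"}, {"layer", "flow"}]
print(rank(["boundary", "layer", "flow"], docs, df, 1400))  # [0, 2, 1]
```

Rare terms ("boundary") dominate the ranking; the common term "flow" contributes little, exactly the idf intuition.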
  19. Xu, J.; Croft, W.B.: Topic-based language models for distributed retrieval (2000) 0.00
    4.6303135E-4 = product of:
      0.00694547 = sum of:
        0.00694547 = weight(_text_:in in 38) [ClassicSimilarity], result of:
          0.00694547 = score(doc=38,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1561842 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.06666667 = coord(1/15)
    
    Abstract
    Effective retrieval in a distributed environment is an important but difficult problem. Lack of effectiveness appears to have two major causes. First, existing collection selection algorithms do not work well on heterogeneous collections. Second, relevant documents are scattered over many collections and searching a few collections misses many relevant documents. We propose a topic-oriented approach to distributed retrieval. With this approach, we structure the document set of a distributed retrieval environment around a set of topics. Retrieval for a query involves first selecting the right topics for the query and then dispatching the search process to collections that contain such topics. The content of a topic is characterized by a language model. In environments where the labeling of documents by topics is unavailable, document clustering is employed for topic identification. Based on these ideas, three methods are proposed to suit different environments. We show that all three methods improve effectiveness of distributed retrieval
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
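The topic-selection step the abstract describes can be sketched by scoring each topic's unigram language model by query likelihood and searching only the best topics. The linear smoothing against a background model is an assumption for the sketch, not taken from the paper, and the toy topics are invented:

```python
import math
from collections import Counter

def topic_lm(topic_docs):
    """Unigram language model (term -> probability) over a topic's documents."""
    counts = Counter(w for doc in topic_docs for w in doc.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def query_log_likelihood(query, lm, bg, lam=0.5):
    """Query log-likelihood under a topic model, linearly smoothed with a
    background model bg (smoothing choice is illustrative)."""
    return sum(math.log(lam * lm.get(w, 0.0) + (1 - lam) * bg.get(w, 1e-6))
               for w in query.split())

topics = {
    "aero": ["boundary layer flow", "supersonic flow heat"],
    "med":  ["clinical trial drug", "drug dosage trial"],
}
all_docs = [d for docs in topics.values() for d in docs]
bg = topic_lm(all_docs)

query = "boundary layer"
best = max(topics, key=lambda t: query_log_likelihood(query, topic_lm(topics[t]), bg))
print(best)  # the "aero" topic is selected for this query
```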
  20. Murdock, V.; Kelly, D.; Croft, W.B.; Belkin, N.J.; Yuan, X.: Identifying and improving retrieval for procedural questions (2007) 0.00
    4.6303135E-4 = product of:
      0.00694547 = sum of:
        0.00694547 = weight(_text_:in in 902) [ClassicSimilarity], result of:
          0.00694547 = score(doc=902,freq=6.0), product of:
            0.044469737 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.032692216 = queryNorm
            0.1561842 = fieldWeight in 902, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=902)
      0.06666667 = coord(1/15)
    
    Abstract
    People use questions to elicit information from other people in their everyday lives, and yet the most common method of obtaining information from a search engine is by posing keywords. There has been research suggesting that users are better at expressing their information needs in natural language; however, the vast majority of work to improve document retrieval has focused on queries posed as sets of keywords or Boolean queries. This paper focuses on improving document retrieval for the subset of natural language questions asking about how something is done. We classify questions as asking either for a description of a process or for a statement of fact, with better than 90% accuracy. Further, we identify non-content features of documents relevant to questions asking about a process. Finally, we demonstrate that we can use these features to significantly improve the precision of document retrieval results for questions asking about a process. Our approach, based on exploiting the structure of documents, shows a significant improvement in precision at rank one for questions asking about how something is done.
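The process-versus-fact distinction can be shown with a toy rule-based labeler. This hand-written keyword heuristic is purely illustrative and is not the paper's learned classifier (which reaches better than 90% accuracy):

```python
def question_type(question):
    """Crudely label a question as asking about a process or a fact.
    Keyword heuristic for illustration only."""
    q = question.lower()
    process_cues = ("how do", "how can", "how to", "steps", "procedure")
    return "process" if any(cue in q for cue in process_cues) else "fact"

print(question_type("How do I replace a bicycle chain?"))       # process
print(question_type("What year was the chain drive invented?")) # fact
```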