Search (13 results, page 1 of 1)

  • Filter: author_ss:"Croft, W.B."
  1. Rajashekar, T.B.; Croft, W.B.: Combining automatic and manual index representations in probabilistic retrieval (1995) 0.05
    0.04856749 = product of:
      0.12141872 = sum of:
        0.08879929 = weight(_text_:index in 2418) [ClassicSimilarity], result of:
          0.08879929 = score(doc=2418,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.4779429 = fieldWeight in 2418, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2418)
        0.03261943 = weight(_text_:system in 2418) [ClassicSimilarity], result of:
          0.03261943 = score(doc=2418,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 2418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2418)
      0.4 = coord(2/5)
    
    Abstract
    Results from research in information retrieval have suggested that significant improvements in retrieval effectiveness can be obtained by combining results from multiple index representations, query formulations, and search strategies. The inference net model of retrieval, which was designed from this point of view, treats information retrieval as an evidential reasoning process where multiple sources of evidence about document and query content are combined to estimate relevance probabilities. Uses a system based on this model to study the retrieval effectiveness benefits of combining the types of document and query information that are found in typical commercial databases and information services. The results indicate that substantial real benefits are possible.
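    The 0.05 score for this record is explained above as a Lucene ClassicSimilarity (TF-IDF) breakdown. A minimal sketch in Python, assuming the standard ClassicSimilarity definitions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = coord * sum of queryWeight * fieldWeight) and reusing the constants shown in the explanation, reproduces the number; the function and variable names are illustrative, not taken from Lucene itself.

      import math

      def idf(doc_freq: int, max_docs: int) -> float:
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq: float, doc_freq: int, max_docs: int,
                      query_norm: float, field_norm: float) -> float:
          # weight = queryWeight * fieldWeight
          #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
          idf_t = idf(doc_freq, max_docs)
          return (idf_t * query_norm) * (math.sqrt(freq) * idf_t * field_norm)

      QUERY_NORM = 0.04251826      # queryNorm from the explanation above
      MAX_DOCS = 44218

      w_index = term_weight(4.0, 1520, MAX_DOCS, QUERY_NORM, 0.0546875)
      w_system = term_weight(2.0, 5152, MAX_DOCS, QUERY_NORM, 0.0546875)

      score = (2 / 5) * (w_index + w_system)   # coord(2/5) * sum of term weights
      print(round(score, 8))                   # ~0.04856749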
  2. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.01
    0.013047772 = product of:
      0.06523886 = sum of:
        0.06523886 = weight(_text_:system in 5004) [ClassicSimilarity], result of:
          0.06523886 = score(doc=5004,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.2 = coord(1/5)
    
  3. Croft, W.B.: Effective retrieval based on combining evidence from the corpus and users (1995) 0.01
    0.012913945 = product of:
      0.06456973 = sum of:
        0.06456973 = weight(_text_:system in 4489) [ClassicSimilarity], result of:
          0.06456973 = score(doc=4489,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.48217484 = fieldWeight in 4489, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4489)
      0.2 = coord(1/5)
    
    Abstract
    Inquery is a text retrieval system that is the basis of a number of WWW applications, including the Thomas system supported by the Library of Congress. Surveys the representation, query processing, and retrieval techniques used in the system. By combining evidence about relevance from the corpus, individual documents, and users, Inquery achieves effective overall recall and precision while avoiding occasional major failures.
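    The combination of evidence that this abstract describes can be illustrated with a small sketch. This is not INQUERY's actual inference network; the two combinators below, the evidence values, and the weights are assumptions made only to show how separate belief scores about one document might be merged into a single relevance estimate.

      from typing import List

      def weighted_sum(beliefs: List[float], weights: List[float]) -> float:
          # Weighted average of belief values in [0, 1].
          return sum(w * b for w, b in zip(weights, beliefs)) / sum(weights)

      def prob_or(beliefs: List[float]) -> float:
          # "Or"-style combination: evidence from any source raises belief.
          p_none = 1.0
          for b in beliefs:
              p_none *= (1.0 - b)
          return 1.0 - p_none

      # Hypothetical evidence for one document: corpus statistics,
      # document-level term matches, and user relevance feedback.
      evidence = [0.35, 0.60, 0.80]

      print(round(weighted_sum(evidence, [1.0, 2.0, 1.0]), 3))  # combined relevance estimate
      print(round(prob_or(evidence), 3))                        # belief if any one source suffices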
  4. Krovetz, R.; Croft, W.B.: Lexical ambiguity and information retrieval (1992) 0.01
    0.01129755 = product of:
      0.05648775 = sum of:
        0.05648775 = weight(_text_:context in 4028) [ClassicSimilarity], result of:
          0.05648775 = score(doc=4028,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 4028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4028)
      0.2 = coord(1/5)
    
    Abstract
    Reports on an analysis of lexical ambiguity in information retrieval text collections and on experiments to determine the utility of word meanings for separating relevant from nonrelevant documents. Results show that there is considerable ambiguity even in a specialised database. Word senses provide a significant separation between relevant and nonrelevant documents, but several factors determine whether disambiguation will improve performance; for example, resolving lexical ambiguity was found to have little impact on retrieval effectiveness for documents that have many words in common with the query. Discusses other uses of word sense disambiguation in an information retrieval context.
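    The finding that documents sharing many words with the query gain little from explicit disambiguation can be illustrated with a toy overlap-based sense chooser (a Lesk-style heuristic, not the procedure used in the paper). The sense inventory, query, and document below are invented for the example.

      # Toy sense inventory: each sense of "bank" is described by indicative words.
      SENSES = {
          "bank/finance": {"money", "loan", "deposit", "interest", "account"},
          "bank/river": {"river", "water", "shore", "erosion", "flood"},
      }

      def choose_sense(context: set) -> str:
          # Pick the sense whose description overlaps most with the context words.
          return max(SENSES, key=lambda s: len(SENSES[s] & context))

      query = {"bank", "loan", "interest", "rates"}
      doc = {"the", "bank", "raised", "interest", "rates", "on", "every", "loan"}

      # A document that already shares many words with the query resolves
      # "bank" to the same sense as the query, so disambiguation adds little.
      print(choose_sense(query - {"bank"}))   # bank/finance
      print(choose_sense(doc - {"bank"}))     # bank/finance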
  5. Jing, Y.; Croft, W.B.: An association thesaurus for information retrieval (199?) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 4494) [ClassicSimilarity], result of:
          0.03261943 = score(doc=4494,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 4494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4494)
      0.2 = coord(1/5)
    
    Abstract
    Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent benefits for retrieval performance, and it is difficult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in INQUERY to evaluate different types of association thesauri, and thesauri constructed for a variety of collections.
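    A collection-dependent association thesaurus of the kind described here can be approximated by recording which terms tend to co-occur in the same documents. The sketch below is a simplified stand-in, not PhraseFinder itself; the tiny corpus and the use of raw co-occurrence counts as association scores are assumptions for illustration.

      from collections import Counter, defaultdict
      from itertools import combinations

      docs = [
          "information retrieval evaluation with test collections",
          "probabilistic retrieval models for information systems",
          "query expansion using a thesaurus improves retrieval",
      ]

      # Count how often each pair of distinct terms co-occurs in a document.
      cooccur = defaultdict(Counter)
      for text in docs:
          terms = set(text.split())
          for a, b in combinations(sorted(terms), 2):
              cooccur[a][b] += 1
              cooccur[b][a] += 1

      def associated_terms(term, k=3):
          # The "thesaurus entry" for a term: its strongest co-occurring terms,
          # which could then be added to a natural language query.
          return [t for t, _ in cooccur[term].most_common(k)]

      print(associated_terms("retrieval"))   # e.g. ['information', ...]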
  6. Croft, W.B.; Turtle, H.R.: Retrieval strategies for hypertext (1993) 0.01
    0.0062004575 = product of:
      0.031002287 = sum of:
        0.031002287 = product of:
          0.09300686 = sum of:
            0.09300686 = weight(_text_:29 in 4711) [ClassicSimilarity], result of:
              0.09300686 = score(doc=4711,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.6218451 = fieldWeight in 4711, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.125 = fieldNorm(doc=4711)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 29(1993) no.3, S.313-324
  7. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.01
    0.0061446796 = product of:
      0.030723399 = sum of:
        0.030723399 = product of:
          0.092170194 = sum of:
            0.092170194 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.092170194 = score(doc=334,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Annual review of information science and technology. 22(1987), S.109-145
  8. Croft, W.B.: Advances in information retrieval : Recent research from the Center for Intelligent Information Retrieval (2000) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 6860) [ClassicSimilarity], result of:
          0.027959513 = score(doc=6860,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 6860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=6860)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: CROFT, W.B.: Combining approaches to information retrieval; GREIFF, W.R.: The use of exploratory data analysis in information retrieval research; PONTE, J.M.: Language models for relevance feedback; PAPKA, R. and J. ALLAN: Topic detection and tracking: event clustering as a basis for first story detection; CALLAN, J.: Distributed information retrieval; XU, J. and W.B. CROFT: Topic-based language models for distributed retrieval; LU, Z. and K.S. McKINLEY: The effect of collection organization and query locality on information retrieval system performance; BALLESTEROS, L.A.: Cross-language retrieval via transitive translation; SANDERSON, M. and D. LAWRIE: Building, testing, and applying concept hierarchies; RAVELA, S. and C. LUO: Appearance-based global similarity retrieval of images
  9. Luk, R.W.P.; Leong, H.V.; Dillon, T.S.; Chan, A.T.S.; Croft, W.B.; Allan, J.: A survey in indexing and searching XML documents (2002) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 460) [ClassicSimilarity], result of:
          0.027959513 = score(doc=460,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=460)
      0.2 = coord(1/5)
    
    Abstract
    XML holds the promise to yield (1) a more precise search by providing additional information in the elements, (2) a better integrated search of documents from heterogeneous sources, (3) a powerful search paradigm using structural as well as content specifications, and (4) data and information exchange to share resources and to support cooperative search. We survey several indexing techniques for XML documents, grouping them into flatfile, semistructured, and structured indexing paradigms. Searching techniques and supporting techniques for searching are reviewed, including full text search and multistage search. Because searching XML documents can be very flexible, various search result presentations are discussed, as well as database and information retrieval system integration and XML query languages. We also survey various retrieval models, examining how they would be used or extended for retrieving XML documents. To conclude the article, we discuss various open issues that XML poses with respect to information retrieval and database research.
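    One structured-indexing idea surveyed in this article, indexing each term together with the element path in which it occurs, can be sketched briefly. The example uses Python's standard ElementTree module; the inverted-index layout and the sample document are illustrative assumptions rather than a technique prescribed by the survey.

      import xml.etree.ElementTree as ET
      from collections import defaultdict

      xml_doc = """<article>
        <title>Indexing XML documents</title>
        <body><section>full text search of XML</section></body>
      </article>"""

      # Inverted index keyed by (term, element path), so a search can ask for
      # "XML" only inside /article/title: a structural as well as content query.
      index = defaultdict(set)

      def walk(elem, path=""):
          path = f"{path}/{elem.tag}"
          for term in (elem.text or "").split():
              index[(term.lower(), path)].add("doc1")
          for child in elem:
              walk(child, path)

      walk(ET.fromstring(xml_doc))
      print(index[("xml", "/article/title")])           # {'doc1'}
      print(index[("xml", "/article/body/section")])    # {'doc1'}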
  10. Tavakoli, L.; Zamani, H.; Scholer, F.; Croft, W.B.; Sanderson, M.: Analyzing clarification in asynchronous information-seeking conversations (2022) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 496) [ClassicSimilarity], result of:
          0.027959513 = score(doc=496,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 496, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=496)
      0.2 = coord(1/5)
    
    Abstract
    This research analyzes human-generated clarification questions to provide insights into how they are used to disambiguate and provide a better understanding of information needs. A set of clarification questions is extracted from posts on the Stack Exchange platform. A novel taxonomy is defined for the annotation of the questions and their responses. We investigate the clarification questions in terms of whether they add any information to the post (the initial question posted by the asker) and the accepted answer, which is the answer chosen by the asker. After identifying which clarification questions are more useful, we investigate the characteristics of these questions in terms of their types and patterns. Non-useful clarification questions are identified, and their patterns are compared with useful clarifications. Our analysis indicates that the most useful clarification questions have similar patterns, regardless of topic. This research contributes to an understanding of clarification in conversations and can provide insight for clarification dialogues in conversational search scenarios and for the possible system generation of clarification requests in information-seeking conversations.
  11. Belkin, N.J.; Croft, W.B.: Information filtering and information retrieval : two sides of the same coin? (1992) 0.00
    0.004650343 = product of:
      0.023251716 = sum of:
        0.023251716 = product of:
          0.069755144 = sum of:
            0.069755144 = weight(_text_:29 in 6093) [ClassicSimilarity], result of:
              0.069755144 = score(doc=6093,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.46638384 = fieldWeight in 6093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6093)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Communications of the Association for Computing Machinery. 35(1992) no.12, S.29-38
  12. Allan, J.; Croft, W.B.; Callan, J.: The University of Massachusetts and a dozen TRECs (2005) 0.00
    0.004650343 = product of:
      0.023251716 = sum of:
        0.023251716 = product of:
          0.069755144 = sum of:
            0.069755144 = weight(_text_:29 in 5086) [ClassicSimilarity], result of:
              0.069755144 = score(doc=5086,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.46638384 = fieldWeight in 5086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5086)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 3.1996 18:16:49
  13. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    0.0038404248 = product of:
      0.019202124 = sum of:
        0.019202124 = product of:
          0.057606373 = sum of:
            0.057606373 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.057606373 = score(doc=3103,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    27. 2.1999 20:55:22