-
Croft, W.B.: Combining approaches to information retrieval (2000)
- Abstract
- The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
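To make the combination idea in the abstract concrete, here is a minimal sketch of metasearch-style score fusion (CombMNZ with min-max normalization). The function names, toy runs, and scores are illustrative assumptions, not taken from the paper; the technique itself (summing normalized scores, weighted by how many systems retrieved the document) is one of the standard combination methods studied in this literature.

```python
def min_max_normalize(run):
    """Rescale one system's scores to [0, 1] so runs are comparable."""
    lo, hi = min(run.values()), max(run.values())
    if hi == lo:
        return {doc: 1.0 for doc in run}
    return {doc: (s - lo) / (hi - lo) for doc, s in run.items()}

def comb_mnz(runs):
    """CombMNZ: sum of normalized scores times the number of runs that
    retrieved the document, rewarding agreement between systems."""
    normalized = [min_max_normalize(r) for r in runs]
    fused = {}
    for run in normalized:
        for doc, score in run.items():
            total, hits = fused.get(doc, (0.0, 0))
            fused[doc] = (total + score, hits + 1)
    return {doc: total * hits for doc, (total, hits) in fused.items()}

# Two hypothetical retrieval runs over the same query.
run_a = {"d1": 2.0, "d2": 1.0, "d3": 0.5}
run_b = {"d2": 9.0, "d3": 3.0}
ranking = sorted(comb_mnz([run_a, run_b]).items(), key=lambda kv: -kv[1])
# "d2" rises to the top because both systems retrieved it.
```

Dropping the `hits` multiplier turns this into CombSUM; both variants fit the "combine the outputs of multiple classifiers" view described in the abstract.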
- Series
- The Kluwer international series on information retrieval; 7
- Source
- Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
-
Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987)
- Source
- Annual review of information science and technology. 22(1987), S.109-145
-
Ballesteros, L.; Croft, W.B.: Statistical methods for cross-language information retrieval (1998)
- Series
- The Kluwer International series on information retrieval
- Source
- Cross-language information retrieval. Ed.: G. Grefenstette
-
Croft, W.B.: Approaches to intelligent information retrieval (1987)
- Source
- Information processing and management. 23(1987), S.249-254
-
Rajashekar, T.B.; Croft, W.B.: Combining automatic and manual index representations in probabilistic retrieval (1995)
- Abstract
- Results from research in information retrieval have suggested that significant improvements in retrieval effectiveness can be obtained by combining results from multiple index representations, query formulations, and search strategies. The inference net model of retrieval, which was designed from this point of view, treats information retrieval as an evidential reasoning process where multiple sources of evidence about document and query content are combined to estimate relevance probabilities. Uses a system based on this model to study the retrieval effectiveness benefits of combining the types of document and query information found in typical commercial databases and information services. The results indicate that substantial real benefits are possible
- Source
- Journal of the American Society for Information Science. 46(1995) no.4, S.272-283
-
Turtle, H.; Croft, W.B.: Inference networks for document retrieval (1990)
- Footnote
- Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.287-298
- Source
- Proceedings of the thirteenth international conference on research and development in information retrieval
-
Xu, J.; Croft, W.B.: Topic-based language models for distributed retrieval (2000)
- Abstract
- Effective retrieval in a distributed environment is an important but difficult problem. Lack of effectiveness appears to have two major causes. First, existing collection selection algorithms do not work well on heterogeneous collections. Second, relevant documents are scattered over many collections and searching a few collections misses many relevant documents. We propose a topic-oriented approach to distributed retrieval. With this approach, we structure the document set of a distributed retrieval environment around a set of topics. Retrieval for a query involves first selecting the right topics for the query and then dispatching the search process to collections that contain such topics. The content of a topic is characterized by a language model. In environments where the labeling of documents by topics is unavailable, document clustering is employed for topic identification. Based on these ideas, three methods are proposed to suit different environments. We show that all three methods improve effectiveness of distributed retrieval
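The topic-selection step described above can be sketched as ranking topics by the likelihood they assign to the query, each topic being a unigram language model. This is a minimal illustration under invented assumptions: the topic names, term counts, smoothing constant, and vocabulary size are all hypothetical, and real systems would estimate topic models from document clusters as the abstract describes.

```python
import math

def topic_log_likelihood(query_terms, topic_counts, mu=0.5, vocab_size=1000):
    """Smoothed log-likelihood of the query under one topic's unigram model."""
    total = sum(topic_counts.values())
    score = 0.0
    for term in query_terms:
        # Additive smoothing so unseen terms don't zero out the product.
        p = (topic_counts.get(term, 0) + mu) / (total + mu * vocab_size)
        score += math.log(p)
    return score

def select_topics(query_terms, topics, k=1):
    """Pick the k topics most likely to have generated the query."""
    ranked = sorted(topics,
                    key=lambda t: topic_log_likelihood(query_terms, topics[t]),
                    reverse=True)
    return ranked[:k]

# Hypothetical topic models built from clustered collections.
topics = {
    "databases": {"query": 40, "index": 30, "sql": 30},
    "genomics":  {"gene": 60, "sequence": 40},
}
```

A query such as `["query", "index"]` would be routed to the "databases" topic, and the search dispatched only to collections covering that topic.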
- Series
- The Kluwer international series on information retrieval; 7
- Source
- Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
-
Croft, W.B.; Turtle, H.R.: Retrieval strategies for hypertext (1993)
- Source
- Information processing and management. 29(1993) no.3, S.313-324
-
Liu, X.; Croft, W.B.: Cluster-based retrieval using language models (2004)
- Source
- SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin, u.a
-
Belkin, N.J.; Croft, W.B.: Information filtering and information retrieval : two sides of the same coin? (1992)
- Abstract
- One of nine articles in this issue of Communications of the ACM devoted to information filtering
-
Krovetz, R.; Croft, W.B.: Lexical ambiguity and information retrieval (1992)
- Abstract
- Reports on an analysis of lexical ambiguity in information retrieval text collections and on experiments to determine the utility of word meanings for separating relevant from nonrelevant documents. Results show that there is considerable ambiguity even in a specialised database. Word senses provide a significant separation between relevant and nonrelevant documents, but several factors determine whether disambiguation will improve performance; for example, resolving lexical ambiguity was found to have little impact on retrieval effectiveness for documents that have many words in common with the query. Discusses other uses of word sense disambiguation in an information retrieval context
- Source
- ACM transactions on information systems. 10(1992) no.2, S.115-141
-
Croft, W.B.; Thompson, R.H.: I3R: a new approach to the design of document retrieval systems (1987)
- Source
- Journal of the American Society for Information Science. 38(1987), S.389-404
-
Croft, W.B.; Harper, D.J.: Using probabilistic models of document retrieval without relevance information (1979)
- Abstract
- Based on a probabilistic model, proposes strategies for the initial search and an intermediate search. Retrieval experiments with the Cranfield collection of 1,400 documents show that this initial search strategy is better than conventional search strategies both in terms of retrieval effectiveness and in terms of the number of queries that retrieve relevant documents. The intermediate search is a useful substitute for a relevance feedback search. A cluster search would be an effective alternative strategy.
-
Croft, W.B.: Hypertext and information retrieval : what are the fundamental concepts? (1990)
-
Allan, J.; Croft, W.B.; Callan, J.: ¬The University of Massachusetts and a dozen TRECs (2005)
- Source
- TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
-
Murdock, V.; Kelly, D.; Croft, W.B.; Belkin, N.J.; Yuan, X.: Identifying and improving retrieval for procedural questions (2007)
- Abstract
- People use questions to elicit information from other people in their everyday lives, and yet the most common method of obtaining information from a search engine is by posing keywords. Research suggests that users are better at expressing their information needs in natural language; however, the vast majority of work to improve document retrieval has focused on queries posed as sets of keywords or Boolean queries. This paper focuses on improving document retrieval for the subset of natural language questions asking about how something is done. We classify questions as asking either for a description of a process or for a statement of fact, with better than 90% accuracy. Further, we identify non-content features of documents relevant to questions asking about a process. Finally, we demonstrate that we can use these features to significantly improve the precision of document retrieval results for questions asking about a process. Our approach, based on exploiting the structure of documents, shows a significant improvement in precision at rank one for questions asking about how something is done.
- Source
- Information processing and management. 43(2007) no.1, S.181-203
-
Luk, R.W.P.; Leong, H.V.; Dillon, T.S.; Chan, A.T.S.; Croft, W.B.; Allen, J.: ¬A survey in indexing and searching XML documents (2002)
- Abstract
- XML holds the promise to yield (1) a more precise search by providing additional information in the elements, (2) a better integrated search of documents from heterogeneous sources, (3) a powerful search paradigm using structural as well as content specifications, and (4) data and information exchange to share resources and to support cooperative search. We survey several indexing techniques for XML documents, grouping them into flatfile, semistructured, and structured indexing paradigms. Searching techniques and supporting techniques for searching are reviewed, including full text search and multistage search. Because searching XML documents can be very flexible, various search result presentations are discussed, as well as database and information retrieval system integration and XML query languages. We also survey various retrieval models, examining how they would be used or extended for retrieving XML documents. To conclude the article, we discuss various open issues that XML poses with respect to information retrieval and database research.
- Source
- Journal of the American Society for Information Science and technology. 53(2002) no.6, S.415-437
-
Callan, J.; Croft, W.B.; Broglio, J.: TREC and TIPSTER experiments with INQUERY (1995)
- Footnote
- Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.436-439.
- Source
- Information processing and management. 31(1995) no.3, S.327-343
-
Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997)
- Date
- 27. 2.1999 20:55:22
- Source
- The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees u. D.K. Harman
-
Liu, X.; Croft, W.B.: Statistical language modeling for information retrieval (2004)
- Abstract
- This chapter reviews research and applications in statistical language modeling for information retrieval (IR), which has emerged within the past several years as a new probabilistic framework for describing information retrieval processes. Generally speaking, statistical language modeling, or more simply language modeling (LM), involves estimating a probability distribution that captures statistical regularities of natural language use. Applied to information retrieval, language modeling refers to the problem of estimating the likelihood that a query and a document could have been generated by the same language model, given the language model of the document either with or without a language model of the query. The roots of statistical language modeling date to the beginning of the twentieth century, when Markov tried to model letter sequences in works of Russian literature (Manning & Schütze, 1999). Zipf (1929, 1932, 1949, 1965) studied the statistical properties of text and discovered that the frequency of words decays as a power function of each word's rank. However, it was Shannon's (1951) work that inspired later research in this area. In 1951, eager to explore the applications of his newly founded information theory to human language, Shannon used a prediction game involving n-grams to investigate the information content of English text. He evaluated n-gram models' performance by comparing their cross-entropy on texts with the true entropy estimated using predictions made by human subjects. For many years, statistical language models have been used primarily for automatic speech recognition. Since 1980, when the first significant language model was proposed (Rosenfeld, 2000), statistical language modeling has become a fundamental component of speech recognition, machine translation, and spelling correction.
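The query-likelihood ranking described in the abstract can be illustrated with a short sketch: score each document by the probability that its smoothed unigram language model generates the query. The smoothing method shown (Jelinek-Mercer interpolation with a collection model) is one standard choice; the documents, counts, and the mixing weight are invented for illustration.

```python
import math

def query_likelihood(query_terms, doc_counts, collection_counts, lam=0.7):
    """Log P(query | document model), with Jelinek-Mercer smoothing:
    each term probability mixes the document model with the collection model."""
    doc_len = sum(doc_counts.values())
    coll_len = sum(collection_counts.values())
    score = 0.0
    for term in query_terms:
        p_doc = doc_counts.get(term, 0) / doc_len
        p_coll = collection_counts.get(term, 0) / coll_len
        score += math.log(lam * p_doc + (1 - lam) * p_coll)
    return score

# Hypothetical term counts for two documents and their collection.
doc1 = {"language": 3, "model": 2, "retrieval": 1}
doc2 = {"speech": 4, "recognition": 2}
collection = {"language": 3, "model": 2, "retrieval": 1,
              "speech": 4, "recognition": 2}
query = ["language", "model"]
# doc1 outscores doc2 because its model assigns the query terms
# higher probability; the collection term keeps doc2's score finite.
```

Ranking documents by this score is the basic LM retrieval model the chapter surveys; Dirichlet smoothing is a common alternative to the fixed-weight interpolation used here.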
- Source
- Annual review of information science and technology. 39(2005), S.3-32