-
Nie, J.-Y.; Brisebois, M.: An inferential approach to information retrieval and its implementation using a manual thesaurus (1996)
- Abstract
- Develops an inferential approach to information retrieval within a fuzzy modal logic framework, emphasising the logical component. The flexibility of this framework offers the possibility of incorporating human-defined knowledge in the inference process. Describes a method to incorporate a human-defined thesaurus into the inference by taking user relevance feedback into consideration. Experiments on the CACM corpus using a general thesaurus of English, WordNet, indicate a significant improvement in the system's performance. (A toy sketch of thesaurus-based inference follows this record.)
- Footnote
- Contribution to a special issue on the application of artificial intelligence to information retrieval
- Type
- a
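A minimal sketch (not the paper's implementation) of the kind of inference described in the abstract above: the degree to which a document implies the query is computed with fuzzy operators, query terms missing from the document can be reached through weighted thesaurus links, and positive relevance feedback strengthens the links that were used. The fuzzy operators (max/min), the link weights, and the `reinforce` update rule are illustrative assumptions.

```python
# Toy fuzzy inference through thesaurus links; all weights are illustrative.

def fuzzy_implication(doc_terms, query_terms, thesaurus):
    """Degree to which the document implies the query (fuzzy AND over query terms)."""
    degrees = []
    for q in query_terms:
        if q in doc_terms:
            degrees.append(1.0)                      # direct match
        else:
            # strongest thesaurus link from a document term to the query term
            links = [w for (t, w) in thesaurus.get(q, []) if t in doc_terms]
            degrees.append(max(links, default=0.0))
    return min(degrees, default=0.0)                 # fuzzy conjunction

def reinforce(thesaurus, q, t, delta=0.1):
    """Strengthen the link t -> q after positive user feedback (hypothetical rule)."""
    links = dict(thesaurus.get(q, []))
    links[t] = min(1.0, links.get(t, 0.0) + delta)
    thesaurus[q] = list(links.items())

doc = {"retrieval", "thesaurus"}
query = ["search", "thesaurus"]
thesaurus = {"search": [("retrieval", 0.7)]}
print(fuzzy_implication(doc, query, thesaurus))      # 0.7
```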
-
Brouard, C.; Nie, J.-Y.: Relevance as resonance : a new theoretical perspective and a practical utilization in information filtering (2004)
- Abstract
- This paper presents a new adaptive filtering system called RELIEFS. This system is based on neural mechanisms underlying an information selection process. It is inspired by the cognitive model of adaptive resonance theory [Biol. Cybernet. 23 (1976) 121], which proposes a neural explanation of how our brain selects information from its environment. In our approach, resonance, the key idea of this model, is used to model the notion of relevance in information retrieval and information filtering (IF). The comparison of resonance with previous models of relevance shows that resonance captures the very core of most existing models. Moreover, the notion of resonance provides a new angle from which to look at relevance and opens new theoretical perspectives. The proposed resonance-based mechanism has been directly implemented and tested on the TREC-9 and TREC-11 IF data. The experimental results show that this approach can achieve high effectiveness in practice. (A loose sketch of the resonance intuition follows this record.)
- Type
- a
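A loose sketch of the resonance intuition as it reads from the abstract above, not the RELIEFS system itself: relevance is treated as two-way activation, so a document scores high only when it covers the query's terms and stays focused on them. Measuring both directions with simple term overlap is an assumption made here for brevity.

```python
# Toy two-way "resonance" score; both directions use plain term overlap.

def resonance(query_terms, doc_terms):
    query_terms, doc_terms = set(query_terms), set(doc_terms)
    if not query_terms or not doc_terms:
        return 0.0
    overlap = len(query_terms & doc_terms)
    forward = overlap / len(query_terms)    # query -> document: coverage of the query
    backward = overlap / len(doc_terms)     # document -> query: focus of the document
    return forward * backward               # high only when activation flows both ways

print(resonance(["neural", "filtering"], ["neural", "filtering", "adaptive"]))  # ~0.67
```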
-
Nie, J.-Y.: Query expansion and query translation as logical inference (2003)
- Abstract
- A number of studies have examined the problems of query expansion in monolingual Information Retrieval (IR) and of query translation for cross-language IR. However, no link has been made between them. This article first shows that query translation is a special case of query expansion. There is also another set of studies on inferential IR; again, no relationship has been established with query translation or query expansion. The second claim of this article is that logical inference is a general form that covers both query expansion and query translation. This analysis provides a unified view of different subareas of IR. We further develop the inferential IR approach in two particular contexts: using fuzzy logic and using probability theory. The evaluation formulas obtained are shown to correspond strongly to those used in other IR models. This indicates that inference is indeed the core of advanced IR. (A minimal sketch of the unified inference view follows this record.)
- Type
- a
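A minimal sketch of the unified view argued in the abstract above: query expansion and query translation both reduce to the same inference step, a weighted sum over term-to-term relations, and only the relation table changes (monolingual for expansion, bilingual for translation). The probability values and toy vocabularies below are illustrative assumptions, not figures from the article.

```python
# One inference routine, two relation tables; all probabilities are toy values.

def infer_score(query_terms, doc_term_probs, relations):
    """P(q | d) ~ product over query terms of sum_t P(q_term | t) * P(t | d)."""
    score = 1.0
    for q in query_terms:
        score *= sum(relations.get((t, q), 0.0) * p for t, p in doc_term_probs.items())
    return score

translation = {("voiture", "car"): 0.8, ("rapide", "fast"): 0.6}   # cross-language relations
expansion = {("automobile", "car"): 0.5}                           # monolingual relations

doc_fr = {"voiture": 0.4, "rapide": 0.2, "le": 0.4}                # P(t | d)
doc_en = {"automobile": 0.5, "engine": 0.5}

print(infer_score(["car", "fast"], doc_fr, translation))  # query translation as inference
print(infer_score(["car"], doc_en, expansion))            # query expansion, same machinery
```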
-
Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009)
- Abstract
- It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries based on ambiguity or to answer the question of what proportion of queries is ambiguous. This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing a taxonomy of queries ranging from ambiguous to specific. Second, we ask human annotators to manually classify queries. From the manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that the approach correctly identifies 87% of the labeled queries. Finally, using our approach, we estimate that about 16% of queries in a real search log are ambiguous. (A toy sketch of the supervised-learning step follows this record.)
- Type
- a
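A toy sketch of the supervised-learning step mentioned in the abstract above. The paper derives its features from search results and labeled query sets; the two features used here (number of distinct lexicon senses, entropy of clicked result categories) and the tiny training set are hypothetical stand-ins.

```python
# Toy ambiguity classifier; features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [distinct lexicon senses, entropy of clicked result categories]
X = [[3, 1.5], [1, 0.2], [4, 1.8], [1, 0.1], [2, 1.1], [1, 0.3]]
y = [1, 0, 1, 0, 1, 0]          # 1 = ambiguous, 0 = not ambiguous

clf = LogisticRegression().fit(X, y)
print(clf.predict([[3, 1.4]]))  # -> [1], predicted ambiguous
```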
-
Bai, J.; Nie, J.-Y.: Adapting information retrieval to query contexts (2008)
- Abstract
- In current IR approaches, documents are retrieved only according to the terms specified in the query. The same answers are returned for the same query whatever the user and the search goal are. In reality, many other contextual factors strongly influence a document's relevance, and they should be taken into account in IR operations. This paper proposes a method, based on language modeling, to integrate several contextual factors so that document ranking is adapted to the specific query context. We consider three contextual factors in this paper: the topic domain of the query, the characteristics of the document collection, and context words within the query. Each contextual factor is used to generate a new query language model that specifies some aspect of the information need. All these query models are then combined to produce a more complete model of the underlying information need. Our experiments on TREC collections show that each contextual factor can positively influence IR effectiveness and that the combined model yields the highest effectiveness. This study shows that it is both beneficial and feasible to integrate more contextual factors into current IR practice. (A minimal sketch of the model-combination step follows this record.)
- Type
- a
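A minimal sketch of the combination step described in the abstract above, under standard language-modeling assumptions rather than the paper's exact estimation procedure: several unigram query models (original query, topic domain, and so on) are interpolated into one model, which then scores documents by cross-entropy against a smoothed document model. The mixture weights, smoothing constant, and toy probabilities are illustrative.

```python
# Interpolating contextual query models and scoring a document; toy numbers throughout.
import math

def mix_models(models, weights):
    """Interpolate several unigram query models into one combined model."""
    combined = {}
    for model, w in zip(models, weights):
        for term, p in model.items():
            combined[term] = combined.get(term, 0.0) + w * p
    return combined

def score(query_model, doc_model, collection_model, lam=0.8):
    """Negative cross-entropy of the query model against the smoothed document model."""
    s = 0.0
    for term, p_q in query_model.items():
        p_d = lam * doc_model.get(term, 0.0) + (1 - lam) * collection_model.get(term, 1e-6)
        s += p_q * math.log(p_d)
    return s

original = {"java": 1.0}
domain = {"programming": 0.6, "language": 0.4}        # topic-domain model (assumed)
query_model = mix_models([original, domain], [0.7, 0.3])

doc = {"java": 0.05, "programming": 0.02}
collection = {"java": 0.001, "programming": 0.002, "language": 0.001}
print(score(query_model, doc, collection))
```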