Search (5 results, page 1 of 1)

  • author_ss:"Nie, J.-Y."
  1. Nie, J.-Y.; Brisebois, M.: An inferential approach to information retrieval and its implementation using a manual thesaurus (1996) 0.00
    Abstract
    Develops an inferential approach to information retrieval within a fuzzy modal logic framework, emphasising the logical component. The flexibility of this framework offers the possibility of incorporating human-defined knowledge in the inference process. Describes a method to incorporate a human-defined thesaurus into the inference by taking user relevance feedback into consideration. Experiments on the CACM corpus using WordNet, a general thesaurus of English, indicate a significant improvement in the system's performance. (A minimal code sketch of thesaurus-based fuzzy inference follows this entry.)
    Footnote
    Contribution to a special issue on the application of artificial intelligence to information retrieval
    Type
    a
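    Code sketch
    The paper's exact fuzzy modal logic formulation is not given in the abstract; the following is a minimal sketch, assuming a toy hand-built thesaurus and a max-min reading of the implication d -> q. The term weights, the thesaurus entries, and the omission of relevance feedback are all simplifying assumptions.

    # Minimal sketch of thesaurus-based fuzzy inference for retrieval.
    # THESAURUS[a][b] ~ degree to which term a "implies" term b (illustrative values).
    THESAURUS = {
        "car": {"car": 1.0, "automobile": 0.9, "vehicle": 0.7},
        "automobile": {"automobile": 1.0, "car": 0.9, "vehicle": 0.7},
    }

    def implication(doc_terms, query_terms):
        """Fuzzy degree to which the document implies the query.

        For each query term, take the document term that best implies it via the
        thesaurus (max); combine query terms with min -- a common fuzzy reading
        of d -> q, not necessarily the paper's own evaluation formula.
        """
        degrees = []
        for q in query_terms:
            best = max(
                (THESAURUS.get(d, {}).get(q, 1.0 if d == q else 0.0) for d in doc_terms),
                default=0.0,
            )
            degrees.append(best)
        return min(degrees) if degrees else 0.0

    if __name__ == "__main__":
        doc = ["automobile", "engine", "repair"]
        query = ["car", "repair"]
        print(implication(doc, query))  # 0.9: "automobile" implies "car" at 0.9, "repair" matches exactly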
  2. Brouard, C.; Nie, J.-Y.: Relevance as resonance : a new theoretical perspective and a practical utilization in information filtering (2004) 0.00
    Abstract
    This paper presents a new adaptive filtering system called RELIEFS. The system is based on neural mechanisms underlying an information selection process. It is inspired by adaptive resonance theory [Biol. Cybernet. 23 (1976) 121], a cognitive model that proposes a neural explanation of how our brain selects information from its environment. In our approach, resonance, the key idea of this model, is used to model the notion of relevance in information retrieval and information filtering (IF). A comparison of resonance with previous models of relevance shows that resonance captures the very core of most existing models. Moreover, the notion of resonance provides a new angle from which to look at relevance and opens new theoretical perspectives. The proposed resonance-based mechanism has been implemented directly and tested on the TREC-9 and TREC-11 IF data. The experimental results show that this approach can achieve high effectiveness in practice. (A minimal code sketch of a resonance-style score follows this entry.)
    Type
    a
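    Code sketch
    The abstract does not spell out RELIEFS' equations; the following is a minimal sketch, assuming that "resonance" combines a bottom-up activation (how much of the user profile the document covers) with a top-down activation (how focused the document is on the profile). The profile weights and the multiplicative combination are illustrative assumptions, not the paper's formulas.

    # Illustrative resonance-style relevance score for information filtering.
    def resonance(profile, doc_terms):
        """profile: dict mapping a term to its learned interest weight."""
        doc = set(doc_terms)
        total = sum(profile.values()) or 1.0
        matched = doc & set(profile)
        bottom_up = sum(profile[t] for t in matched) / total   # share of profile weight activated by the document
        top_down = len(matched) / (len(doc) or 1)              # share of document terms that are about the profile
        return bottom_up * top_down                            # "resonance": both directions must be strong

    if __name__ == "__main__":
        profile = {"retrieval": 2.0, "filtering": 1.5, "relevance": 1.0}
        print(resonance(profile, ["relevance", "filtering", "experiments"]))  # ~0.37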
  3. Nie, J.-Y.: Query expansion and query translation as logical inference (2003) 0.00
    Abstract
    A number of studies have examined the problems of query expansion in monolingual Information Retrieval (IR) and of query translation for cross-language IR. However, no link has been made between them. This article first shows that query translation is a special case of query expansion. There is also another set of studies on inferential IR; again, no relationship has been established with query translation or query expansion. The second claim of this article is that logical inference is a general form that covers both query expansion and query translation. This analysis provides a unified view of different subareas of IR. We further develop the inferential IR approach in two particular settings: fuzzy logic and probability theory. The evaluation formulas obtained are shown to correspond strongly to those used in other IR models. This indicates that inference is indeed the core of advanced IR. (A minimal code sketch of this unified inference view follows this entry.)
    Type
    a
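    Code sketch
    A minimal sketch of the unified view described in the abstract, under a simple probabilistic reading: both query expansion (monolingual term relations) and query translation (cross-language term relations) score a document through the same step, summing P(query_term | doc_term) * P(doc_term | d) over document terms. The relation table and document language model below are toy values, not trained models.

    # Query expansion and query translation as the same inference step.
    import math

    # P(query_term | doc_term): association probabilities for expansion,
    # translation probabilities for cross-language retrieval (toy values).
    RELATION = {
        "voiture": {"car": 0.6, "automobile": 0.3},   # French query term -> English document terms
        "car": {"car": 0.8, "automobile": 0.2},       # monolingual expansion
    }

    def score(query_terms, doc_term_probs):
        """doc_term_probs: P(t | d), e.g. a smoothed unigram document language model."""
        log_s = 0.0
        for qi in query_terms:
            p = sum(RELATION.get(qi, {}).get(t, 0.0) * ptd
                    for t, ptd in doc_term_probs.items())
            log_s += math.log(p + 1e-9)   # small constant so unrelated terms do not zero the score
        return log_s

    if __name__ == "__main__":
        doc_lm = {"car": 0.05, "automobile": 0.02, "engine": 0.03}
        print(score(["voiture"], doc_lm))   # translation treated as inference
        print(score(["car"], doc_lm))       # expansion treated as inference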
  4. Song, R.; Luo, Z.; Nie, J.-Y.; Yu, Y.; Hon, H.-W.: Identification of ambiguous queries in web search (2009) 0.00
    Abstract
    It is widely believed that many queries submitted to search engines are inherently ambiguous (e.g., java and apple). However, few studies have tried to classify queries by ambiguity or to answer the question of what proportion of queries is ambiguous. This paper deals with these issues. First, we clarify the definition of ambiguous queries by constructing a taxonomy of queries ranging from ambiguous to specific. Second, we ask human annotators to manually classify queries. From the manually labeled results, we observe that query ambiguity is to some extent predictable. Third, we propose a supervised learning approach to automatically identify ambiguous queries. Experimental results show that the approach correctly identifies 87% of the labeled queries. Finally, using our approach, we estimate that about 16% of queries in a real search log are ambiguous. (A minimal code sketch of such a classifier follows this entry.)
    Type
    a
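    Code sketch
    The abstract names a supervised learning approach but not its features; the following is a minimal sketch using scikit-learn with invented placeholder features (query length, click entropy, presence of a named entity) and a tiny hand-made training set, purely for illustration.

    # Toy supervised classifier for flagging ambiguous queries.
    from sklearn.linear_model import LogisticRegression

    # Each row: [number of query terms, click entropy over result URLs, contains a named entity]
    X = [
        [1, 1.9, 1],   # "java"  -> ambiguous
        [1, 1.7, 1],   # "apple" -> ambiguous
        [3, 0.4, 0],   # "java runtime download" -> specific
        [4, 0.2, 1],   # "apple iphone price today" -> specific
    ]
    y = [1, 1, 0, 0]   # 1 = ambiguous, 0 = not ambiguous

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[1, 1.8, 1], [3, 0.3, 0]]))   # expected roughly: [1 0]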
  5. Bai, J.; Nie, J.-Y.: Adapting information retrieval to query contexts (2008) 0.00
    Abstract
    In current IR approaches, documents are retrieved only according to the terms specified in the query; the same answers are returned for the same query whatever the user and the search goal are. In reality, many other contextual factors strongly influence a document's relevance, and they should be taken into account in IR operations. This paper proposes a method, based on language modeling, to integrate several contextual factors so that document ranking is adapted to the specific query context. We consider three contextual factors: the topic domain of the query, the characteristics of the document collection, and context words within the query. Each contextual factor is used to generate a new query language model that specifies some aspect of the information need. All these query models are then combined to produce a more complete model of the underlying information need. Our experiments on TREC collections show that each contextual factor can positively influence IR effectiveness and that the combined model yields the highest effectiveness. This study shows that it is both beneficial and feasible to integrate more contextual factors into current IR practice. (A minimal code sketch of the model combination follows this entry.)
    Type
    a
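    Code sketch
    A minimal sketch of the combination step described in the abstract: several unigram query language models (original query, topic-domain model, context-word model; the collection-adapted model is omitted for brevity) are linearly interpolated into one model used for ranking. The mixture weights and the toy models are assumptions for illustration, not the paper's trained models.

    # Interpolating several query language models into one.
    def mix(models, weights):
        """Linear interpolation: P(t | q) = sum_i w_i * P_i(t | q)."""
        vocab = set().union(*models)
        return {t: sum(w * m.get(t, 0.0) for m, w in zip(models, weights)) for t in vocab}

    if __name__ == "__main__":
        original = {"java": 0.5, "download": 0.5}                 # terms of the query itself
        domain   = {"programming": 0.4, "jvm": 0.3, "java": 0.3}  # topic-domain model
        context  = {"windows": 0.6, "installer": 0.4}             # context words within the query
        query_model = mix([original, domain, context], [0.6, 0.25, 0.15])
        print(sorted(query_model.items(), key=lambda kv: -kv[1]))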