Search (2 results, page 1 of 1)

  • author_ss:"Belew, R.K."
  • author_ss:"Cottrell, G.W."
  1. Bartell, B.T.; Cottrell, G.W.; Belew, R.K.: Optimizing similarity using multi-query relevance feedback (1998) 0.00
    0.0028047764 = product of:
      0.005609553 = sum of:
        0.005609553 = product of:
          0.011219106 = sum of:
            0.011219106 = weight(_text_:a in 1152) [ClassicSimilarity], result of:
              0.011219106 = score(doc=1152,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.21126054 = fieldWeight in 1152, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1152)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
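
    The tree above is Lucene's ClassicSimilarity "explain" output for this result. As a check on the arithmetic, here is a minimal Python sketch that reproduces the score from the values shown in the tree; the only assumption beyond those values is ClassicSimilarity's tf = sqrt(freq), which the 4.690416 = tf(freq=22.0) line is consistent with.

        import math

        # Values taken directly from the explain tree for doc 1152.
        freq = 22.0               # termFreq of "_text_:a"
        idf = 1.153047            # idf(docFreq=37942, maxDocs=44218)
        query_norm = 0.046056706  # queryNorm
        field_norm = 0.0390625    # fieldNorm(doc=1152)

        tf = math.sqrt(freq)                  # 4.690416
        field_weight = tf * idf * field_norm  # 0.21126054
        query_weight = idf * query_norm       # 0.053105544
        weight = field_weight * query_weight  # 0.011219106

        # The two coord(1/2) factors scale the weight down to the final score.
        score = weight * 0.5 * 0.5            # 0.0028047764
        print(score)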
    
    Abstract
    We propose a novel method for automatically adjusting parameters in ranked-output text retrieval systems to improve retrieval performance. A ranked-output text retrieval system implements a ranking function which orders documents, placing documents estimated to be more relevant to the user's query before less relevant ones. The system adjusts its parameters to maximize the match between the system's document ordering and a target ordering. The target ordering is typically given by user feedback on a set of sample queries, but is more generally any document preference relation. We demonstrate the utility of the approach by using it to estimate a similarity measure (scoring the relevance of documents to queries) in a vector space model of information retrieval. Experimental results using several collections indicate that the approach automatically finds a similarity measure which performs equivalently to or better than all 'classic' similarity measures studied. It also performs within 1% of an estimated optimal measure (found by exhaustive sampling of the similarity measures). The method is compared to two alternative methods: a perceptron learning rule motivated by Wong and Yao's (1990) Query Formulation method, and a Least Squares learning rule motivated by Fuhr and Buckley's (1991) Probabilistic Learning approach. Though both alternatives have useful characteristics, we demonstrate empirically that neither can be used to estimate the parameters of the optimal similarity measure.
    Type
    a
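
    The abstract of result 1 describes adjusting the parameters of a similarity measure so that the system's document ordering matches a target ordering derived from relevance feedback. The sketch below is a minimal illustration of that idea, not the paper's algorithm: it assumes the similarity measure is a linear combination of base measures and uses a simple differentiable pairwise-preference objective as a stand-in for the paper's rank-agreement criterion; the function names and toy data are made up for illustration.

        import numpy as np

        def combined_score(theta, base_scores):
            """Parameterized similarity: weighted sum of base similarity measures."""
            return base_scores @ theta

        def fit_to_preferences(base_scores, pref_pairs, lr=0.1, epochs=200):
            """Adjust theta so preferred documents score above non-preferred ones.

            base_scores: (n_docs, n_measures) array, one column per base measure.
            pref_pairs:  list of (i, j) meaning document i should rank above j
                         in the target ordering (e.g. from user feedback).
            """
            rng = np.random.default_rng(0)
            theta = rng.normal(size=base_scores.shape[1])
            for _ in range(epochs):
                grad = np.zeros_like(theta)
                for i, j in pref_pairs:
                    diff = combined_score(theta, base_scores[i]) - combined_score(theta, base_scores[j])
                    # Gradient of log(sigmoid(diff)): push the preferred document upward.
                    grad += (1.0 - 1.0 / (1.0 + np.exp(-diff))) * (base_scores[i] - base_scores[j])
                theta += lr * grad / len(pref_pairs)
            return theta

        # Toy usage: 4 documents, 2 base measures; feedback prefers doc 0 over 2 and doc 1 over 3.
        scores = np.array([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.1, 0.6]])
        theta = fit_to_preferences(scores, [(0, 2), (1, 3)])
        print(np.argsort(-combined_score(theta, scores)))  # induced document ranking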
  2. Bartell, B.T.; Cottrell, G.W.; Belew, R.K.: Representing documents using an explicit model of their similarities (1995) 0.00
    0.001913537 = product of:
      0.003827074 = sum of:
        0.003827074 = product of:
          0.007654148 = sum of:
            0.007654148 = weight(_text_:a in 1426) [ClassicSimilarity], result of:
              0.007654148 = score(doc=1426,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14413087 = fieldWeight in 1426, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1426)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Proposes a method for creating vector space representations of documents based on modelling target interdocument similarity values. The target similarity values are assumed to capture semantic relationships, or associations, between the documents. The vector representations are chosen so that the inner product similarities between document vector pairs closely match their target interdocument similarities. The method is closely related to the Latent Semantic Indexing approach.
    Type
    a
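
    The abstract of result 2 describes choosing document vectors so that their inner products closely match target interdocument similarities. The sketch below shows one standard way to do such a fit, assuming the target matrix is symmetric: a truncated eigendecomposition, which is a standard least-squares route from a symmetric similarity matrix to inner-product embeddings and is also how this kind of model connects to Latent Semantic Indexing. The paper's own fitting procedure is not stated in the abstract; the function name and toy matrix are illustrative.

        import numpy as np

        def embed_from_similarities(S, k):
            """Return (n_docs, k) vectors whose inner products approximate S.

            S: symmetric matrix of target interdocument similarities.
            Keeps the k largest (non-negative) eigenvalues and scales the
            corresponding eigenvectors by their square roots.
            """
            S = (S + S.T) / 2.0                       # enforce symmetry
            eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
            order = np.argsort(eigvals)[::-1][:k]     # indices of the k largest
            lam = np.clip(eigvals[order], 0.0, None)  # drop any negative parts
            return eigvecs[:, order] * np.sqrt(lam)

        # Toy usage: target similarities for 3 documents, embedded in 2 dimensions.
        S = np.array([[1.0, 0.8, 0.1],
                      [0.8, 1.0, 0.2],
                      [0.1, 0.2, 1.0]])
        X = embed_from_similarities(S, 2)
        print(np.round(X @ X.T, 2))  # inner products roughly reproduce S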