Search (2 results, page 1 of 1)

  • author_ss:"Belew, R.K."
  • type_ss:"a"
  1. Bartell, B.T.; Cottrell, G.W.; Belew, R.K.: Optimizing similarity using multi-query relevance feedback (1998) 0.01
    0.0052989167 = product of:
      0.037092414 = sum of:
        0.0071344664 = weight(_text_:information in 1152) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=1152,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 1152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1152)
        0.029957948 = weight(_text_:retrieval in 1152) [ClassicSimilarity], result of:
          0.029957948 = score(doc=1152,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 1152, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1152)
      0.14285715 = coord(2/14)
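    The indented block above is Lucene's score explanation ("explain" output) for this result under ClassicSimilarity, i.e. classic TF-IDF with a coordination factor. As a sanity check, the following Python sketch recomputes the final score from the values shown; the function and field names are ours, not Lucene's API.

```python
import math

def classic_similarity_score(terms, coord_matched, coord_total):
    """Recompute a Lucene ClassicSimilarity (TF-IDF) explain tree from its parts."""
    total = 0.0
    for t in terms:
        tf = math.sqrt(t["freq"])                        # tf(freq) = sqrt(freq)
        query_weight = t["idf"] * t["query_norm"]        # queryWeight = idf * queryNorm
        field_weight = tf * t["idf"] * t["field_norm"]   # fieldWeight = tf * idf * fieldNorm
        total += query_weight * field_weight             # weight(_text_:term) = queryWeight * fieldWeight
    return total * (coord_matched / coord_total)         # coord(2/14): fraction of query terms matched

# Values copied from the explanation for doc 1152:
terms_1152 = [
    {"freq": 4.0, "idf": 1.7554779, "query_norm": 0.029633347, "field_norm": 0.0390625},  # "information"
    {"freq": 8.0, "idf": 3.024915,  "query_norm": 0.029633347, "field_norm": 0.0390625},  # "retrieval"
]
print(classic_similarity_score(terms_1152, coord_matched=2, coord_total=14))  # ~0.0052989
```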
    
    Abstract
    We propose a novel method for automatically adjusting parameters in ranked-output text retrieval systems to improve retrieval performance. A ranked-output text retrieval system implements a ranking function which orders documents, placing documents estimated to be more relevant to the user's query before less relevant ones. The system adjusts its parameters to maximize the match between the system's document ordering and a target ordering. The target ordering is typically given by user feedback on a set of sample queries, but is more generally any document preference relation. We demonstrate the utility of the approach by using it to estimate a similarity measure (scoring the relevance of documents to queries) in a vector space model of information retrieval. Experimental results using several collections indicate that the approach automatically finds a similarity measure which performs equivalently to or better than all 'classic' similarity measures studied. It also performs within 1% of an estimated optimal measure (found by exhaustive sampling of the similarity measures). The method is compared to two alternative methods: a perceptron learning rule motivated by Wong and Yao's (1990) Query Formulation method, and a Least Squares learning rule motivated by Fuhr and Buckley's (1991) Probabilistic Learning approach. Though both alternatives have useful characteristics, we demonstrate empirically that neither can be used to estimate the parameters of the optimal similarity measure.
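    To make the abstract's idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of tuning a linear similarity measure sim(q, d) = theta · phi(q, d) so that its ranking agrees with a target preference relation. It uses gradient ascent on a sigmoid-smoothed pairwise-preference objective as a stand-in for the paper's rank-order match criterion; `fit_similarity`, `phi`, and `prefs` are illustrative names only.

```python
import numpy as np

def fit_similarity(phi, prefs, dim, lr=0.1, epochs=200):
    """Tune theta so that sim(q, d) = theta . phi(q, d) ranks each preferred
    document above its alternative.

    phi(q, d) -> feature vector of similarity components for (query, document)
    prefs     -> iterable of (q, d_pos, d_neg): d_pos should outrank d_neg
    """
    theta = np.zeros(dim)
    prefs = list(prefs)
    for _ in range(epochs):
        grad = np.zeros(dim)
        for q, d_pos, d_neg in prefs:
            diff = phi(q, d_pos) - phi(q, d_neg)
            margin = theta @ diff
            # Ascend log sigmoid(margin): rewards orderings that match the target.
            grad += diff / (1.0 + np.exp(margin))  # d/dtheta log(sigmoid(margin))
        theta += lr * grad / max(len(prefs), 1)
    return theta
```

    Because the objective depends only on pairwise order, any document preference relation (not just explicit relevance judgments) can serve as the training signal, which mirrors the generality claimed in the abstract.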
    Source
    Journal of the American Society for Information Science. 49(1998) no.8, pp.742-761
  2. Bartell, B.T.; Cottrell, G.W.; Belew, R.K.: Representing documents using an explicit model of their similarities (1995) 0.00
    5.7655195E-4 = product of:
      0.008071727 = sum of:
        0.008071727 = weight(_text_:information in 1426) [ClassicSimilarity], result of:
          0.008071727 = score(doc=1426,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 1426, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1426)
      0.071428575 = coord(1/14)
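    This result matched only one of the fourteen query terms ("information", freq=2, fieldNorm=0.0625), hence coord(1/14). The sketch given after result 1 reproduces this score as well:

```python
terms_1426 = [{"freq": 2.0, "idf": 1.7554779, "query_norm": 0.029633347, "field_norm": 0.0625}]  # "information"
print(classic_similarity_score(terms_1426, coord_matched=1, coord_total=14))  # ~5.7655e-4
```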
    
    Source
    Journal of the American Society for Information Science. 46(1995) no.4, pp.254-271