Search (2 results, page 1 of 1)

  • Filter: author_ss:"Belew, R.K."
  1. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.01
    0.012481183 = product of:
      0.0748871 = sum of:
        0.0748871 = weight(_text_:suchmaschine in 3346) [ClassicSimilarity], result of:
          0.0748871 = score(doc=3346,freq=4.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.3533909 = fieldWeight in 3346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.16666667 = coord(1/6)
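    The explain tree above can be reproduced directly from Lucene's ClassicSimilarity formula, score = queryWeight × fieldWeight × coord, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch, plugging in the numbers shown for "suchmaschine" in doc 3346 (variable names are mine, not Lucene identifiers):

    ```python
    import math

    # Values taken from the explain output above.
    freq = 4.0               # termFreq of "suchmaschine" in the field
    idf = 5.6542544          # idf(docFreq=420, maxDocs=44218)
    query_norm = 0.03747799  # queryNorm
    field_norm = 0.03125     # fieldNorm(doc=3346)
    coord = 1.0 / 6.0        # coord(1/6): 1 of 6 query clauses matched

    tf = math.sqrt(freq)                  # ClassicSimilarity tf = sqrt(termFreq) -> 2.0
    query_weight = idf * query_norm       # -> 0.21191008
    field_weight = tf * idf * field_norm  # -> 0.3533909
    score = query_weight * field_weight * coord
    print(score)  # agrees with the 0.012481183 shown above (within float tolerance)
    ```

    The second result's score follows the same recipe with freq=2.0, idf=5.4090285, and fieldNorm=0.0390625.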
    
    RSWK
    Suchmaschine / World Wide Web / Information Retrieval
    Subject
    Suchmaschine / World Wide Web / Information Retrieval
  2. Bartell, B.T.; Cottrell, G.W.; Belew, R.K.: Optimizing similarity using multi-query relevance feedback (1998) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 1152) [ClassicSimilarity], result of:
          0.0605745 = score(doc=1152,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 1152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1152)
      0.16666667 = coord(1/6)
    
    Abstract
    We propose a novel method for automatically adjusting parameters in ranked-output text retrieval systems to improve retrieval performance. A ranked-output text retrieval system implements a ranking function which orders documents, placing documents estimated to be more relevant to the user's query before less relevant ones. The system adjusts its parameters to maximize the match between the system's document ordering and a target ordering. The target ordering is typically given by user feedback on a set of sample queries, but is more generally any document preference relation. We demonstrate the utility of the approach by using it to estimate a similarity measure (scoring the relevance of documents to queries) in a vector space model of information retrieval. Experimental results using several collections indicate that the approach automatically finds a similarity measure which performs equivalently to or better than all 'classic' similarity measures studied. It also performs within 1% of an estimated optimal measure (found by exhaustive sampling of the similarity measures). The method is compared to two alternative methods: a perceptron learning rule motivated by Wong and Yao's (1990) Query Formulation method, and a Least Squares learning rule motivated by Fuhr and Buckley's (1991) Probabilistic Learning approach. Though both alternatives have useful characteristics, we demonstrate empirically that neither can be used to estimate the parameters of the optimal similarity measure.
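    The core idea in the abstract, tuning a parameterized scoring function so the system's ordering agrees with a target ordering derived from relevance feedback, can be illustrated with a pairwise perceptron update, in the spirit of the Wong and Yao-style alternative the abstract mentions. This is a minimal sketch under assumed toy data, not the authors' actual optimization method:

    ```python
    # Learn weights w of a linear similarity score(d) = w . d so that documents
    # preferred in the target ordering score higher. All feature vectors and the
    # learning rate below are made-up illustrative values.

    def learn_ranking_weights(pairs, dim, epochs=100, lr=0.1):
        """pairs: list of (better, worse) feature-vector pairs taken from the
        target document preference relation."""
        w = [0.0] * dim
        for _ in range(epochs):
            for better, worse in pairs:
                margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
                if margin <= 0:
                    # misordered pair: nudge weights toward the preferred document
                    w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
        return w

    # Toy target ordering: document A should outrank document B.
    A, B = [1.0, 0.2], [0.2, 1.0]
    w = learn_ranking_weights([(A, B)], dim=2)
    score = lambda d: sum(wi * di for wi, di in zip(w, d))
    assert score(A) > score(B)  # learned ordering now matches the target ordering
    ```

    The paper's actual contribution is estimating the similarity measure directly from the ordering match; the pairwise update here only shows the shape of the problem (parameters adjusted until the induced ranking agrees with the preference relation).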