Search (2 results, page 1 of 1)

  • author_ss:"Burgin, R."
  • theme_ss:"Retrievalstudien"
  1. Shaw, W.M.; Burgin, R.; Howell, P.: Performance standards and evaluations in IR test collections : vector-space and other retrieval models (1997) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 7259) [ClassicSimilarity], result of:
              0.007030784 = score(doc=7259,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 7259, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7259)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
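The explain tree above is Lucene's ClassicSimilarity (TF-IDF) score breakdown. As a minimal sketch, the listed factors multiply out to the reported score, assuming Lucene's classic formulas tf = sqrt(freq), queryWeight = idf × queryNorm, and fieldWeight = tf × idf × fieldNorm (all numeric values copied from the tree):

```python
import math

# Factors copied from the explain output for doc 7259
freq = 6.0            # termFreq of "_text_:a" in the field
idf = 1.153047        # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.046056706
field_norm = 0.046875

tf = math.sqrt(freq)                     # 2.4494898 = tf(freq=6.0)
query_weight = idf * query_norm          # 0.053105544 = queryWeight
field_weight = tf * idf * field_norm     # 0.13239266  = fieldWeight
raw_score = query_weight * field_weight  # 0.007030784 = weight(_text_:a)

# The two nested coord(1/2) factors each halve the score
final_score = raw_score * 0.5 * 0.5      # 0.001757696

print(f"{final_score:.9f}")
```

Working the product bottom-up like this is how the 0.00175… at the top of the tree is obtained from the term-level weight at the bottom.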
    
    Abstract
Computes low performance standards for each query and for the group of queries in 13 traditional and 4 TREC test collections. Predicted by the hypergeometric distribution, the standards represent the highest level of retrieval effectiveness attributable to chance. Compares operational levels of performance for vector-space, ad-hoc-feature-based, probabilistic, and other retrieval models to the standards. The effectiveness of these techniques in small, traditional test collections can be explained by retrieving only a few more relevant documents for most queries than expected by chance. The effectiveness of retrieval techniques in the larger TREC test collections can only be explained by retrieving many more relevant documents for most queries than expected by chance. The discrepancy between deviations from chance in traditional and TREC test collections is due to a decrease in performance standards for large test collections, not to an increase in operational performance. The next generation of information retrieval systems would be enhanced by abandoning uninformative performance summaries and focusing on the effectiveness, and improvements in the effectiveness, of individual queries.
    Type
    a
  2. Burgin, R.: The Monte Carlo method and the evaluation of retrieval system performance (1999) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 2946) [ClassicSimilarity], result of:
              0.0054123 = score(doc=2946,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 2946, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2946)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
