Search (1 result, page 1 of 1)

  • author_ss:"Mitra, M."
  • theme_ss:"Retrievalstudien"
  1. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.04
    0.040354572 = product of:
      0.080709144 = sum of:
        0.080709144 = sum of:
          0.050390374 = weight(_text_:2007 in 4197) [ClassicSimilarity], result of:
            0.050390374 = score(doc=4197,freq=2.0), product of:
              0.20205033 = queryWeight, product of:
                4.514535 = idf(docFreq=1315, maxDocs=44218)
                0.044755515 = queryNorm
              0.24939516 = fieldWeight in 4197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.514535 = idf(docFreq=1315, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
          0.030318772 = weight(_text_:22 in 4197) [ClassicSimilarity], result of:
            0.030318772 = score(doc=4197,freq=2.0), product of:
              0.15672618 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044755515 = queryNorm
              0.19345059 = fieldWeight in 4197, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
      0.5 = coord(1/2)
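
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for this hit. As a rough illustration only, the short Python sketch below recomputes the displayed score 0.040354572 from the quantities shown (tf, idf, fieldNorm, queryNorm and the coord factor); the helper function is illustrative and not part of any Lucene API.

      import math

      def term_score(freq, idf, field_norm, query_norm):
          # fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
          field_weight = math.sqrt(freq) * idf * field_norm
          # queryWeight = idf * queryNorm
          query_weight = idf * query_norm
          return query_weight * field_weight

      query_norm = 0.044755515
      field_norm = 0.0390625

      w_2007 = term_score(2.0, 4.514535, field_norm, query_norm)   # ~0.05039
      w_22   = term_score(2.0, 3.5018296, field_norm, query_norm)  # ~0.03032

      # coord(1/2): only one of the two top-level query clauses matched
      score = (w_2007 + w_22) * 0.5                                # ~0.04035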
    
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
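    The reliability questions above are typically studied by pooling the top-ranked results of all participating runs for judging and then checking how stable the resulting system ranking stays when that assessment effort is cut back. The sketch below illustrates the general idea (pool construction plus a Kendall's tau comparison of two system orderings); the data structures and the scipy dependency are illustrative assumptions, not taken from the paper.

      from scipy.stats import kendalltau

      def build_pool(runs, depth):
          # Union of the top-`depth` document ids from every submitted run.
          pool = set()
          for ranking in runs:          # each run: list of doc ids, best first
              pool.update(ranking[:depth])
          return pool

      def ranking_stability(scores_full, scores_reduced):
          # Rank correlation between system scores under full judgments and
          # under a reduced pool; tau near 1.0 means the ranking is unaffected.
          systems = sorted(scores_full)
          tau, _ = kendalltau([scores_full[s] for s in systems],
                              [scores_reduced[s] for s in systems])
          return tau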
    Date
    22. 1.2011 14:20:56