Search (1 result, page 1 of 1)

  • × author_ss:"Benoit, E. III"
  • × author_ss:"Xie, I."
  1. Xie, I.; Benoit, E. III: Search result list evaluation versus document evaluation : similarities and differences (2013) 0.03
    0.029978452 = sum of:
      0.011056997 = product of:
        0.055284984 = sum of:
          0.055284984 = weight(_text_:authors in 1754) [ClassicSimilarity], result of:
            0.055284984 = score(doc=1754,freq=2.0), product of:
              0.21952313 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04815356 = queryNorm
              0.25184128 = fieldWeight in 1754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1754)
        0.2 = coord(1/5)
      0.018921455 = product of:
        0.03784291 = sum of:
          0.03784291 = weight(_text_:i in 1754) [ClassicSimilarity], result of:
            0.03784291 = score(doc=1754,freq=2.0), product of:
              0.18162222 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.04815356 = queryNorm
              0.20836058 = fieldWeight in 1754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1754)
        0.5 = coord(1/2)
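
    The nested breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a rough sketch of how those numbers combine, the snippet below recomputes the 0.03 shown for this record; the function names are illustrative rather than Lucene API calls, and the constants (queryNorm, fieldNorm, coord factors, document frequencies) are taken directly from the explanation tree. Tiny differences from the displayed values are expected, since Lucene computes in single precision.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                   # tf(freq=2.0) = 1.4142135
        w = idf(doc_freq, max_docs)
        query_weight = w * query_norm          # queryWeight
        field_weight = tf * w * field_norm     # fieldWeight
        return query_weight * field_weight

    MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.04815356, 0.0390625

    # weight(_text_:authors ...), damped by coord(1/5)
    authors = term_score(2.0, 1258, MAX_DOCS, QUERY_NORM, FIELD_NORM) * (1 / 5)
    # weight(_text_:i ...), damped by coord(1/2)
    i_term = term_score(2.0, 2765, MAX_DOCS, QUERY_NORM, FIELD_NORM) * (1 / 2)

    print(round(authors + i_term, 9))          # ~0.029978452, displayed as 0.03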
    
    Abstract
    Purpose - The purpose of this study is to compare the evaluation of search result lists and documents, in particular evaluation criteria, elements, associations between criteria and elements, pre-, post- and evaluation activities, and the time spent on evaluation.
    Design/methodology/approach - The study analyzed data collected from 31 general users through pre-questionnaires, think-aloud protocols and logs, and post-questionnaires. Types of evaluation criteria, elements, associations between criteria and elements, evaluation activities and their associated pre/post activities, and time spent were analyzed using open coding.
    Findings - The study identifies the similarities and differences between list and document evaluation by analyzing the 21 evaluation criteria applied, the 13 evaluation elements examined, the pre-, post- and evaluation activities performed, and the time spent. In addition, the authors explored the time spent evaluating lists and documents for different types of tasks.
    Research limitations/implications - This study helps researchers understand the nature of list and document evaluation. It also connects the elements that participants examined to the criteria they applied, and reveals problems associated with the lack of integration between list and document evaluation. The findings suggest that more elements, especially at the list level, be made available to support users in applying their evaluation criteria. Integrating list and document evaluation, and integrating pre-evaluation, evaluation and post-evaluation activities, in interface design is essential for effective evaluation.
    Originality/value - This study fills a gap in current research regarding the comparison of list and document evaluation.