Search (2 results, page 1 of 1)

  • author_ss:"Dunlop, M.D."
  • theme_ss:"Retrievalstudien"
  1. Dunlop, M.D.; Johnson, C.W.; Reid, J.: Exploring the layers of information retrieval evaluation (1998) 0.01
    0.006717136 = product of:
      0.026868545 = sum of:
        0.026868545 = weight(_text_:information in 3762) [ClassicSimilarity], result of:
          0.026868545 = score(doc=3762,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3035872 = fieldWeight in 3762, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3762)
      0.25 = coord(1/4)
    
    Abstract
    Presents current work on modelling interactive information retrieval systems and users' interactions with them. Analyzes the papers in this special issue in the context of evaluation in information retrieval (IR) by examining the different layers at which IR use can be evaluated. IR poses a double evaluation problem: assessing both the underlying system's effectiveness and the system's overall ability to aid users. The papers examine different issues in combining human-computer interaction (HCI) research with IR research and provide insights into the problem of evaluating the information-seeking process.
    Footnote
    Contribution to a special section of articles related to human-computer interaction and information retrieval
  2. Draper, S.W.; Dunlop, M.D.: New IR - new evaluation : the impact of interaction and multimedia on information retrieval and its evaluation (1997) 0.00
    0.0030344925 = product of:
      0.01213797 = sum of:
        0.01213797 = weight(_text_:information in 2462) [ClassicSimilarity], result of:
          0.01213797 = score(doc=2462,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 2462, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2462)
      0.25 = coord(1/4)
    
    Abstract
    The field of information retrieval (IR) traditionally addressed the problem of retrieving text documents from large collections by full-text indexing of words. It has always been characterised by a strong focus on evaluation to compare the performance of alternative designs. The emergence into widespread use of both multimedia and interactive user interfaces has extensive implications for this field and for the evaluation methods on which it depends. Discusses what we currently understand about those implications. The 'system' being measured must be expanded to include the human users, whose behaviour has a large effect on overall retrieval success, which now depends on sessions of many retrieval cycles rather than a single transaction. Multimedia raises issues not only of how users might specify a query in the same medium (e.g. sketch the kind of picture they want) but also of cross-medium retrieval. Current explorations in IR evaluation show diversity along at least two dimensions. One runs between comprehensive models, which have a place for every possible relevant factor, and lightweight methods; the other between highly standardised workbench tests that avoid human users and studies conducted in the workplace.
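
  Note on the score breakdowns: the indented factors shown with each result follow Lucene's ClassicSimilarity (TF-IDF) formula, in which tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord(1/4) reflects that one of four query clauses matched. A minimal Python sketch that reproduces both totals from the listed factors (queryNorm is taken from the output rather than derived, and the function name is ours, not Lucene's):

    import math

    def classic_similarity_score(freq, doc_freq, max_docs,
                                 query_norm, field_norm, coord=0.25):
        # tf(freq) = sqrt(freq)
        tf = math.sqrt(freq)
        # idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1))
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
        query_weight = idf * query_norm        # the queryWeight line above
        field_weight = tf * idf * field_norm   # the fieldWeight line above
        return coord * query_weight * field_weight

    # Result 1 (doc 3762): freq=10.0, fieldNorm=0.0546875 -> ~0.006717136
    print(classic_similarity_score(10.0, 20772, 44218, 0.050415643, 0.0546875))
    # Result 2 (doc 2462): freq=4.0, fieldNorm=0.0390625 -> ~0.0030344925
    print(classic_similarity_score(4.0, 20772, 44218, 0.050415643, 0.0390625))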