Search (2 results, page 1 of 1)

  • author_ss:"Dunlop, M.D."
  • theme_ss:"Retrievalstudien"
  1. Draper, S.W.; Dunlop, M.D.: New IR - new evaluation : the impact of interaction and multimedia on information retrieval and its evaluation (1997) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2462) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2462,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2462, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2462)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
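The explain tree above can be reproduced with plain arithmetic. A minimal sketch of Lucene's ClassicSimilarity factors, with tf taken as the square root of the term frequency and the idf, queryNorm, and fieldNorm values copied from the output above (doc 2462):

```python
import math

# Values copied from the explain output above (doc 2462).
freq = 12.0
idf = 1.153047           # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.046056706
field_norm = 0.0390625

# ClassicSimilarity: tf is the square root of the raw term frequency.
tf = math.sqrt(freq)                      # 3.4641016...

query_weight = idf * query_norm           # 0.053105544...
field_weight = tf * idf * field_norm      # 0.15602624...

# Per-term score, then the two coord(1/2) factors from the nested sums.
term_score = query_weight * field_weight  # 0.008285859...
final_score = term_score * 0.5 * 0.5      # ≈ 0.0020714647

print(final_score)
```

The same arithmetic applies to the second hit below, where freq=4.0 gives tf=2.0 and a correspondingly lower fieldWeight.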
    
    Abstract
    The field of information retrieval (IR) has traditionally addressed the problem of retrieving text documents from large collections by full-text indexing of words. It has always been characterised by a strong focus on evaluation to compare the performance of alternative designs. The emergence into widespread use of both multimedia and interactive user interfaces has extensive implications for this field and for the evaluation methods on which it depends. The paper discusses what we currently understand about those implications. The 'system' being measured must be expanded to include the human users, whose behaviour has a large effect on overall retrieval success, which now depends on sessions of many retrieval cycles rather than a single transaction. Multimedia raises issues not only of how users might specify a query in the same medium (e.g. sketch the kind of picture they want), but also of cross-medium retrieval. Current explorations in IR evaluation show diversity along at least two dimensions. One lies between comprehensive models that have a place for every possible relevant factor and lightweight methods. The other lies between highly standardised workbench tests that avoid human users and workplace studies.
    Type
    a
  2. Dunlop, M.D.; Johnson, C.W.; Reid, J.: Exploring the layers of information retrieval evaluation (1998) 0.00
    0.001674345 = product of:
      0.00334869 = sum of:
        0.00334869 = product of:
          0.00669738 = sum of:
            0.00669738 = weight(_text_:a in 3762) [ClassicSimilarity], result of:
              0.00669738 = score(doc=3762,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12611452 = fieldWeight in 3762, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3762)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Contribution to a special section of articles related to human-computer interaction and information retrieval
    Type
    a