Search (2 results, page 1 of 1)

  • author_ss:"Bateman, J."
  • theme_ss:"Retrievalstudien"
  1. Bateman, J.: Modelling the importance of end-user relevance criteria (1999) 0.00
    0.004289937 = product of:
      0.017159749 = sum of:
        0.017159749 = weight(_text_:information in 6606) [ClassicSimilarity], result of:
          0.017159749 = score(doc=6606,freq=26.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.2797255 = fieldWeight in 6606, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6606)
      0.25 = coord(1/4)
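    The explain tree above can be reproduced with a few lines of arithmetic. A minimal sketch of Lucene's ClassicSimilarity (TF-IDF) scoring for the term "information" in doc 6606, using only the values printed in the breakdown (variable names are illustrative, not Lucene API calls):

    ```python
    import math

    # Values taken from the explain output for doc 6606, term "information"
    freq = 26.0            # termFreq: occurrences of the term in the field
    doc_freq = 20772       # docFreq: documents containing the term
    max_docs = 44218       # maxDocs: documents in the index
    query_norm = 0.034944877
    field_norm = 0.03125   # per-field length normalization (fieldNorm)
    coord = 1 / 4          # coord(1/4): 1 of 4 query terms matched

    tf = math.sqrt(freq)                            # ~5.0990195
    idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~1.7554779
    query_weight = idf * query_norm                 # ~0.06134496
    field_weight = tf * idf * field_norm            # ~0.2797255
    score = query_weight * field_weight * coord     # ~0.004289937

    print(score)
    ```

    The product queryWeight × fieldWeight × coord reproduces the final 0.004289937 shown for this hit; the second result below follows the same formula with freq=8.0 and fieldNorm=0.0546875.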
    
    Abstract
    In most information retrieval research, the concept of relevance has been defined a priori as a single variable by the researcher, and the meaning of relevance to the end-user who is making relevance judgments has not been taken into account. However, a number of criteria that users employ in making relevance judgments have been identified (Schamber, 1991; Barry, 1993). Understanding these criteria and their importance to end-users can help researchers better understand end-user evaluation behavior. This study reports end-users' ratings of the relative importance of 40 relevance criteria as used in their own information-seeking situations, and examines relationships among the criteria that they rated most important. Data were collected from 210 graduate students who were instructed in a mail survey to rate the 40 relevance criteria by importance in their selection of the most valuable information source for a recent or current paper or project. The criteria were selected from previous studies in which open-ended interviews were used to elicit criteria from end-users making judgments in their own information-seeking situations (Schamber, 1991; Su, 1992; Barry, 1993). A model of relevance with three constructs that contribute to the concept of relevance was proposed using the eleven criteria that survey respondents rated as most important (75 or higher on a scale of 0 to 100). The development of this model was guided by similarities in criteria and criteria groupings from previous research (Barry & Schamber, 1998). Confirmatory factor analysis was used to confirm the model and verify that the constructs would produce reliable subscale scores. The three constructs are information quality, information credibility, and information completeness. Second-order factor analysis indicated that these constructs explain 48% of positive relevance judgments for these respondents. Three additional constructs, information availability, information topicality, and information currency, are also suggested. The constructs developed from this analysis are thought to underlie the concept of relevance for this group of users.
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  2. Schamber, L.; Bateman, J.: User criteria in relevance evaluation : toward development of a measurement scale (1996) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 7351) [ClassicSimilarity], result of:
          0.016657405 = score(doc=7351,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 7351, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7351)
      0.25 = coord(1/4)
    
    Abstract
    Part of a long-term project which aims to develop a simple measurement scale, based on user criteria, that will yield results applicable to the study of user evaluations in any type of information seeking and use environment. Describes two tests conducted to determine how users interpret criterion terms drawn from previous user-based relevance studies. Presents the results of these initial tests and describes the conceptual and methodological challenges in the long-term development of the instrument.
    Imprint
    Medford, NJ : Learned Information
    Source
    Global complexity: information, chaos and control. Proceedings of the 59th Annual Meeting of the American Society for Information Science, ASIS'96, Baltimore, Maryland, 21-24 Oct 1996. Ed.: S. Hardin