Search (2 results, page 1 of 1)

  • author_ss:"Barry, C.L."
  • year_i:[1990 TO 2000}
  1. Barry, C.L.: Document representations and clues to document relevance (1998) 0.01
    0.0059131454 = product of:
      0.029565725 = sum of:
        0.029565725 = product of:
          0.05913145 = sum of:
            0.05913145 = weight(_text_:etc in 2325) [ClassicSimilarity], result of:
              0.05913145 = score(doc=2325,freq=2.0), product of:
                0.19761753 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.036484417 = queryNorm
                0.2992217 = fieldWeight in 2325, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2325)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
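
    The explain tree above can be reproduced from its own inputs. This is a minimal sketch of Lucene's classic TF-IDF scoring (ClassicSimilarity), using only the freq, docFreq, maxDocs, queryNorm, and fieldNorm values printed in the listing; variable names are illustrative, not Lucene API calls.

    ```python
    import math

    # Inputs copied from the explain tree for term "etc" in doc 2325
    freq = 2.0
    doc_freq, max_docs = 533, 44218
    query_norm = 0.036484417
    field_norm = 0.0390625

    tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 5.4164915 = idf(docFreq=533, maxDocs=44218)
    query_weight = idf * query_norm                # 0.19761753 = queryWeight
    field_weight = tf * idf * field_norm           # 0.2992217 = fieldWeight
    raw = query_weight * field_weight              # 0.05913145 = weight(_text_:etc)

    # coord(1/2) and coord(1/5) down-weight for query clauses that did not match
    score = raw * (1 / 2) * (1 / 5)                # close to the listed 0.0059131454
    ```

    The same arithmetic, with the "problems" term's values substituted, reproduces the score of the second result below.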
    
    Abstract
    Research into the role of document representations in the relevance judgement process has focused on the ability of users to predict the relevance of documents based on various document representations. Conclusions have been stated as to the comparative effectiveness of various document representations, but there has been little exploration into why certain document representations seem to enable users to better predict the relevance of documents. This examination is an attempt to identify the extent to which various document representations contain clues that allow users to determine the presence or absence of traits and/or qualities that determine the relevance of the document to the user's situation. Motivated users discussed their reasons for pursuing or not pursuing documents based on information contained within representations of those documents (i.e., titles, abstracts, indexing terms, etc.). The results are presented as the co-occurrence of respondents' mentions of the various traits and/or qualities and the document representations that led to such responses. It is concluded that document representations may differ in their effectiveness as indicators of potential relevance because different types of document representations vary in their ability to present clues for specific traits and/or qualities. Suggestions for further research are provided.
  2. Barry, C.L.; Schamber, L.: Users' criteria for relevance evaluation : a cross-situational comparison (1998) 0.00
    0.0041203364 = product of:
      0.02060168 = sum of:
        0.02060168 = product of:
          0.04120336 = sum of:
            0.04120336 = weight(_text_:problems in 3271) [ClassicSimilarity], result of:
              0.04120336 = score(doc=3271,freq=2.0), product of:
                0.15058853 = queryWeight, product of:
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.036484417 = queryNorm
                0.27361554 = fieldWeight in 3271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1274753 = idf(docFreq=1937, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3271)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Takes a cognitive approach toward understanding the behaviours of end users by focusing on the values or criteria they employ in making relevance judgements, or decisions about whether to obtain and use information. Compares and contrasts the results of 2 empirical studies in which criteria were elicited directly from individuals who were seeking information to resolve their own information problems. In 1 study, respondents were faculty and students in an academic environment examining print documents from traditional text-based information retrieval systems. In the other study, respondents were occupational users of weather-related information in a multimedia environment in which sources included interpersonal communication, mass media, weather instruments, and computerised weather systems. Provides evidence that a finite range of criteria exists and that these criteria are applied consistently across types of information users, problem situations, and source environments.