Search (2 results, page 1 of 1)

  • author_ss:"Liu, L."
  • author_ss:"Wacholder, N."
  1. Wacholder, N.; Liu, L.: User preference : a measure of query-term quality (2006) 0.00
    8.826613E-4 = product of:
      0.012357258 = sum of:
        0.012357258 = weight(_text_:information in 19) [ClassicSimilarity], result of:
          0.012357258 = score(doc=19,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23754507 = fieldWeight in 19, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=19)
      0.071428575 = coord(1/14)
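    The breakdown above follows Lucene's ClassicSimilarity (TF-IDF) scoring. The short Python sketch below simply reproduces that arithmetic as an illustrative check: the statistics (freq, docFreq, maxDocs, queryNorm, fieldNorm, coord) are copied from the explain tree, and the formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are the standard ClassicSimilarity definitions.

    import math

    # Statistics copied from the explain tree above
    freq = 12.0          # occurrences of "information" in the matched field
    doc_freq = 20772     # documents containing the term
    max_docs = 44218     # documents in the index
    query_norm = 0.029633347
    field_norm = 0.0390625
    coord = 1.0 / 14.0   # 1 of 14 query clauses matched

    # Standard ClassicSimilarity formulas
    tf = math.sqrt(freq)                              # 3.4641016
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 1.7554779

    query_weight = idf * query_norm                   # 0.052020688
    field_weight = tf * idf * field_norm              # 0.23754507
    term_score = query_weight * field_weight          # 0.012357258
    final_score = term_score * coord                  # 8.826613e-4

    print(f"{final_score:.6e}")                       # matches the displayed value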
    
    Abstract
    The goal of this research is to understand what characteristics, if any, lead users engaged in interactive information seeking to prefer certain sets of query terms. Underlying this work is the assumption that query terms that information seekers prefer induce a kind of cognitive efficiency: They require less mental effort to process and therefore reduce the energy required in the interactive information-seeking process. Conceptually, this work applies insights from linguistics and cognitive science to the study of query-term quality. We report on an experiment in which we compare user preference for three sets of terms: one had been preconstructed by a human indexer, and two were identified automatically. Twenty-four participants used a merged list of all terms to answer a carefully created set of questions. By design, the interface constrained users to access the text exclusively via the displayed list of query terms. We found that participants displayed a preference for the human-constructed set of terms eight times greater than the preference for either set of automatically identified terms. We speculate about reasons for this strong preference and discuss the implications for information access. The primary contributions of this research are (a) explication of the concept of user preference as a measure of query-term quality and (b) identification of a replicable procedure for measuring preference for sets of query terms created by different methods, whether human or automatic. All other factors being equal, query terms that users prefer clearly are the best choice for real-world information-access systems.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.12, pp. 1566-1580
  2. Wacholder, N.; Liu, L.: Assessing term effectiveness in the interactive information access process (2008) 0.00
    8.826613E-4 = product of:
      0.012357258 = sum of:
        0.012357258 = weight(_text_:information in 2079) [ClassicSimilarity], result of:
          0.012357258 = score(doc=2079,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23754507 = fieldWeight in 2079, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2079)
      0.071428575 = coord(1/14)
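    If the catalogue is backed by Solr (suggested by the author_ss field and the explain format, though this is an assumption), score explanations like the two above can be requested by adding debugQuery=true to a select request. A minimal sketch, with a hypothetical host and core name:

    import requests

    params = {
        "q": 'author_ss:"Wacholder, N." AND author_ss:"Liu, L."',
        "debugQuery": "true",  # include per-document score explanations
        "wt": "json",
    }
    # Host and core name ("catalog") are placeholders, not taken from this page
    response = requests.get("http://localhost:8983/solr/catalog/select", params=params)
    explain = response.json()["debug"]["explain"]  # doc id -> explanation text
    for doc_id, tree in explain.items():
        print(doc_id, tree)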
    
    Abstract
    This study addresses the question of whether the way in which sets of query terms are identified has an impact on the effectiveness of users' information-seeking efforts. Query terms are text strings used as input to an information access system; they are products of a method or grammar that identifies a set of query terms. We conducted an experiment that compared the effectiveness of sets of query terms identified for a single book by three different methods. One had been previously prepared by a human indexer for a back-of-the-book index. The other two were identified by computer programs that used a combination of linguistic and statistical criteria to extract terms from full text. Effectiveness was measured by (1) whether selected query terms led participants to correct answers and (2) how long it took participants to obtain correct answers. Our results show that two sets of terms - the human terms and the set selected according to the linguistically more sophisticated criteria - were significantly more effective than the third set of terms. This single case demonstrates that query languages do have a measurable impact on the effectiveness of query terms in the interactive information access process. The procedure described in this paper can be used to assess the effectiveness for information seekers of query terms identified by any query language.
    Source
    Information Processing and Management. 44(2008) no.3, pp. 1022-1031