Search (9 results, page 1 of 1)

  • author_ss:"Kelly, D."
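  The listing below resembles Solr/Lucene output with the score-explanation (debugQuery) option enabled: the author facet above acts as a filter query, and each result carries a ClassicSimilarity explanation tree. The following is a minimal, hypothetical sketch of how such a page could be requested over Solr's standard HTTP API; the endpoint URL, core name, query terms, and all field names other than author_ss are assumptions, not taken from this page.

```python
import requests

# Hypothetical sketch: request a result page like the one below from a
# Solr-style endpoint. Base URL, core name, query terms and most field
# names are assumptions; only the author facet filter comes from this page.
SOLR_SELECT = "http://localhost:8983/solr/literature/select"

params = {
    "q": "data processing",           # placeholder; the underlying query is not shown on this page
    "fq": 'author_ss:"Kelly, D."',    # the active author facet shown above
    "fl": "id,title,score",           # assumed field names
    "rows": 10,
    "wt": "json",
    "debugQuery": "true",             # asks Solr for per-document score explanations
}

response = requests.get(SOLR_SELECT, params=params, timeout=10)
response.raise_for_status()
payload = response.json()

for doc in payload["response"]["docs"]:
    print(doc.get("title"), "-", doc.get("score"))

# payload["debug"]["explain"] holds the "product of / sum of / weight(...)"
# breakdowns reproduced for each result below.
```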
  1. Niu, X.; Kelly, D.: The use of query suggestions during information search (2014) 0.06
    0.05534669 = product of:
      0.11069338 = sum of:
        0.03657866 = weight(_text_:data in 2702) [ClassicSimilarity], result of:
          0.03657866 = score(doc=2702,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 2702, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2702)
        0.07411472 = sum of:
          0.042392377 = weight(_text_:processing in 2702) [ClassicSimilarity], result of:
            0.042392377 = score(doc=2702,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.22363065 = fieldWeight in 2702, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2702)
          0.03172234 = weight(_text_:22 in 2702) [ClassicSimilarity], result of:
            0.03172234 = score(doc=2702,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.19345059 = fieldWeight in 2702, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2702)
      0.5 = coord(2/4)
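    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation for this result. As a minimal sketch, the arithmetic can be reproduced for the "data" clause; queryNorm and fieldNorm are copied from the tree rather than derived, because they depend on the full query and on index-time statistics.

```python
import math

# Recompute the ClassicSimilarity numbers shown above for the "data" clause
# of result 1 (doc 2702). queryNorm and fieldNorm are taken as given.
MAX_DOCS   = 44218        # maxDocs from the explanation
DOC_FREQ   = 5088         # docFreq of "data"
FREQ       = 4.0          # termFreq of "data" in doc 2702
QUERY_NORM = 0.046827413  # queryNorm (given)
FIELD_NORM = 0.0390625    # fieldNorm(doc=2702) (given)

idf = 1.0 + math.log(MAX_DOCS / (DOC_FREQ + 1))   # ~3.1620505
tf  = math.sqrt(FREQ)                              # 2.0

query_weight = idf * QUERY_NORM                    # ~0.14807065
field_weight = tf * idf * FIELD_NORM               # ~0.24703519
term_score   = query_weight * field_weight         # ~0.03657866

# The document score sums the matching clauses and applies the coordination
# factor coord(2/4) = 0.5, because 2 of the 4 query clauses matched.
other_clauses = 0.042392377 + 0.03172234           # "processing" + "22", from the tree
doc_score = (term_score + other_clauses) * 0.5     # ~0.05534669, shown rounded as 0.06

print(f"idf={idf:.7f}  term_score={term_score:.8f}  doc_score={doc_score:.8f}")
```

    Results further down the list that match only one query clause are scaled by coord(1/4) instead of coord(2/4), which is why their totals are correspondingly smaller.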
    
    Abstract
    Query suggestion is a common feature of many information search systems. While much research has been conducted on how to generate suggestions, fewer studies have examined how people interact with and use them. The purpose of this paper is to investigate how and when people integrate query suggestions into their searches and the outcome of this usage. The paper further investigates the relationships of search expertise, topic difficulty, and temporal segment of the search with query suggestion usage. A secondary analysis was conducted of data collected in a previous controlled laboratory study, in which 23 undergraduate research participants used an experimental search system with query suggestions to conduct four topic searches. Results showed that participants integrated the suggestions into their searching fairly quickly and that participants with less search expertise used more suggestions and saved more documents. Participants also used more suggestions towards the end of their searches and when searching for more difficult topics. These results show that query suggestions can provide support when people have less search expertise or greater difficulty searching, and at specific times during the search.
    Date
    25. 1.2016 18:43:22
    Source
    Information processing and management. 50(2014) no.1, pp.218-234
  2. Kelly, D.; Harper, D.J.; Landau, B.: Questionnaire mode effects in interactive information retrieval experiments (2008) 0.02
    0.023530604 = product of:
      0.04706121 = sum of:
        0.02586502 = weight(_text_:data in 2029) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2029,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2029, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2029)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 2029) [ClassicSimilarity], result of:
              0.042392377 = score(doc=2029,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 2029, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2029)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The questionnaire is an important technique for gathering data from subjects during interactive information retrieval (IR) experiments. Research in survey methodology, public opinion polling and psychology has demonstrated a number of response biases and behaviors that subjects exhibit when responding to questionnaires. Furthermore, research in human-computer interaction has demonstrated that subjects tend to inflate their ratings of systems when completing usability questionnaires. In this study we investigate the relationship between questionnaire mode and subjects' responses to a usability questionnaire comprising closed and open questions administered during an interactive IR experiment. Three questionnaire modes (pen-and-paper, electronic and interview) were explored with 51 subjects who used one of two information retrieval systems. Results showed that subjects' quantitative evaluations of systems were significantly lower in the interview mode than in the electronic mode. With respect to open questions, subjects in the interview mode used significantly more words than subjects in the pen-and-paper or electronic modes to communicate their responses, and communicated a significantly higher number of response units, even though the total number of unique response units was roughly the same across conditions. Finally, results showed that subjects in the pen-and-paper mode were the most efficient in communicating their responses to open questions. These results suggest that researchers should use the interview mode to elicit responses to closed questions from subjects and either pen-and-paper or electronic modes to elicit responses to open questions.
    Source
    Information processing and management. 44(2008) no.1, pp.122-141
  3. Kelly, D.; Wacholder, N.; Rittman, R.; Sun, Y.; Kantor, P.; Small, S.; Strzalkowski, T.: Using interview data to identify evaluation criteria for interactive, analytical question-answering systems (2007) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 332) [ClassicSimilarity], result of:
          0.062076043 = score(doc=332,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 332, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=332)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this work is to identify potential evaluation criteria for interactive, analytical question-answering (QA) systems by analyzing evaluative comments made by users of such a system. Qualitative data collected from intelligence analysts during interviews and focus groups were analyzed to identify common themes related to performance, use, and usability. These data were collected as part of an intensive, three-day evaluation workshop of the High-Quality Interactive Question Answering (HITIQA) system. Inductive coding and memoing were used to identify and categorize these data. Results suggest potential evaluation criteria for interactive, analytical QA systems, which can be used to guide the development and design of future systems and evaluations. This work contributes to studies of QA systems, information seeking and use behaviors, and interactive searching.
  4. Kelly, D.; Fu, X.: Eliciting better information need descriptions from users of information search systems (2007) 0.01
    0.007418666 = product of:
      0.029674664 = sum of:
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 893) [ClassicSimilarity], result of:
              0.05934933 = score(doc=893,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 893, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=893)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 43(2007) no.1, pp.30-46
  5. Kelly, D.: Measuring online information seeking context : Part 1: background and method (2006) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 206) [ClassicSimilarity], result of:
          0.02586502 = score(doc=206,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=206)
      0.25 = coord(1/4)
    
    Abstract
    Context is one of the most important concepts in information seeking and retrieval research. However, the challenges of studying context are great; thus, it is more common for researchers to use context as a post hoc explanatory factor, rather than as a concept that drives inquiry. The purposes of this study were to develop a method for collecting data about information seeking context in natural online environments, and identify which aspects of context should be considered when studying online information seeking. The study is reported in two parts. In this, the first part, the background and method are presented. Results and implications of this research are presented in Part 2 (Kelly, in press). Part 1 discusses previous literature on information seeking context and behavior and situates the current work within this literature. This part further describes the naturalistic, longitudinal research design that was used to examine and measure the online information seeking contexts of users during a 14-week period. In this design, information seeking context was characterized by a user's self-identified tasks and topics, and several attributes of these, such as the length of time the user expected to work on a task and the user's familiarity with a topic. At weekly intervals, users evaluated the usefulness of the documents that they viewed, and classified these documents according to their tasks and topics. At the end of the study, users provided feedback about the study method.
  6. Kelly, D.: Measuring online information seeking context : Part 2: Findings and discussion (2006) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 215) [ClassicSimilarity], result of:
          0.02586502 = score(doc=215,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=215)
      0.25 = coord(1/4)
    
    Abstract
    Context is one of the most important concepts in information seeking and retrieval research. However, the challenges of studying context are great; thus, it is more common for researchers to use context as a post hoc explanatory factor, rather than as a concept that drives inquiry. The purpose of this study was to develop a method for collecting data about information seeking context in natural online environments, and identify which aspects of context should be considered when studying online information seeking. The study is reported in two parts. In this, the second part, results and implications of this research are presented. Part 1 (Kelly, 2006) discussed previous literature on information seeking context and behavior, situated the current study within this literature, and described the naturalistic, longitudinal research design that was used to examine and measure the online information seeking context of seven users during a 14-week period. Results provide support for the value of the method in studying online information seeking context, the relative importance of various measures of context, how these measures change over time, and, finally, the relationship between these measures. In particular, results demonstrate significant differences in distributions of usefulness ratings according to task and topic.
  7. Kelly, D.; Sugimoto, C.R.: A systematic review of interactive information retrieval evaluation studies, 1967-2006 (2013) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 684) [ClassicSimilarity], result of:
          0.02586502 = score(doc=684,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=684)
      0.25 = coord(1/4)
    
    Abstract
    With the increasing number and diversity of search tools available, interest in the evaluation of search systems, particularly from a user perspective, has grown among researchers. More researchers are designing and evaluating interactive information retrieval (IIR) systems and beginning to innovate in evaluation methods. Maturation of a research specialty relies on the ability to replicate research, provide standards for measurement and analysis, and understand past endeavors. This article presents a historical overview of 40 years of IIR evaluation studies using the method of systematic review. A total of 2,791 journal and conference units were manually examined and 127 articles were selected for analysis in this study, based on predefined inclusion and exclusion criteria. These articles were systematically coded using features such as author, publication date, sources and references, and properties of the research method used in the articles, such as number of subjects, tasks, corpora, and measures. Results include data describing the growth of IIR studies over time, the most frequently occurring and cited authors and sources, and the most common types of corpora and measures used. An additional product of this research is a bibliography of IIR evaluation research that can be used by students, teachers, and those new to the area. To the authors' knowledge, this is the first historical, systematic characterization of the IIR evaluation literature, including the documentation of methods and measures used by researchers in this specialty.
  8. Murdock, V.; Kelly, D.; Croft, W.B.; Belkin, N.J.; Yuan, X.: Identifying and improving retrieval for procedural questions (2007) 0.01
    0.0063588563 = product of:
      0.025435425 = sum of:
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 902) [ClassicSimilarity], result of:
              0.05087085 = score(doc=902,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=902)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 43(2007) no.1, pp.181-203
  9. Kelly, D.: Implicit feedback : using behavior to infer relevance (2005) 0.00
    0.004239238 = product of:
      0.016956951 = sum of:
        0.016956951 = product of:
          0.033913903 = sum of:
            0.033913903 = weight(_text_:processing in 645) [ClassicSimilarity], result of:
              0.033913903 = score(doc=645,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.17890452 = fieldWeight in 645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03125 = fieldNorm(doc=645)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The concept of relevance has a rich history in information retrieval (IR) that dates back well over 40 years (Borlund, 2003) and is necessarily a part of any theory of information seeking and retrieval. Relevance feedback also has a long history in IR (Salton, 1971) and is considered an important part of interactive IR (Spink and Losee, 1996). Relevance feedback techniques often require users to explicitly provide feedback to the system by, for instance, specifying keywords; selecting, evaluating and marking documents; or answering questions about their interests. The feedback that users provide during these interactions has been used for a variety of IR techniques and applications, including query expansion, term disambiguation, user profiling, filtering and personalization. Empirical studies have led to the general finding that users of interactive IR systems desire explicit relevance feedback features and, in particular, term suggestion features (Beaulieu, 1997; Belkin et al., 2001; Koenemann and Belkin, 1996). However, much of the evidence from laboratory studies has indicated that relevance feedback features are not used: while users often report a desire for relevance feedback and term suggestion, they do not actually use these features during their searching activities. Several reasons can be given for why this disparity exists. Users may not have additional cognitive resources available to operate the relevance feedback feature. While the extra effort required to operate the feature may seem trivial, the user is already potentially involved in a complex and cognitively burdensome task; increased effort would be required both for learning the new system and for operating its features. When features require more effort and additional cognitive processing than they appear to be worth, they may be abandoned altogether. Furthermore, if relevance feedback features are not implemented as part of the routine search activity, they may be forgotten, no matter how helpful they are. This research, in part, has led to the general belief that users are unwilling to engage in explicit relevance feedback. Recently, however, Anick (2003) demonstrated in a web-based study that users made use of a term suggestion feature to expand and refine their queries, so things may be changing. These results suggest the potential of term suggestion features in some types of information-seeking environments, especially for single-session interactions. Hence it may just be the case that traditional relevance feedback interfaces have not effectively elicited feedback from users or optimally integrated relevance feedback features into current information interaction models.
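    The premise named in this title, using behavior to infer relevance, is easiest to see in a query-expansion setting. The following is a minimal, hypothetical sketch (not a method from the chapter) of Rocchio-style expansion in which viewed or clicked documents are treated as implicitly relevant; the function name, weights, and toy tokenization are all illustrative.

```python
from collections import Counter
import math

def rocchio_expand(query_terms, clicked_docs, all_docs, alpha=1.0, beta=0.75, top_k=3):
    """Rocchio-style expansion using implicitly relevant (clicked/viewed) documents.

    clicked_docs and all_docs are lists of token lists; the weighting and
    tokenization are deliberately simplistic.
    """
    n_docs = len(all_docs)

    # Document frequencies for a simple idf weighting.
    df = Counter()
    for doc in all_docs:
        df.update(set(doc))

    def tfidf(doc):
        tf = Counter(doc)
        return {t: tf[t] * math.log(n_docs / (1 + df[t])) for t in tf}

    # Centroid of the implicitly relevant documents.
    centroid = Counter()
    for doc in clicked_docs:
        for term, weight in tfidf(doc).items():
            centroid[term] += weight / max(len(clicked_docs), 1)

    # Combine the original query with the centroid and keep the strongest new terms.
    expanded = Counter({t: alpha for t in query_terms})
    for term, weight in centroid.items():
        expanded[term] += beta * weight

    new_terms = [t for t, _ in expanded.most_common() if t not in query_terms][:top_k]
    return list(query_terms) + new_terms


# Toy usage: a single click is the only relevance signal.
docs = [
    ["query", "suggestion", "search", "interface"],
    ["implicit", "feedback", "clickthrough", "relevance"],
    ["survey", "methodology", "questionnaire", "mode"],
]
print(rocchio_expand(["relevance", "feedback"], clicked_docs=[docs[1]], all_docs=docs))
```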