Search (7 results, page 1 of 1)

  • Filter: author_ss:"Belkin, N.J."
  1. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.022252686 = product of:
      0.089010745 = sum of:
        0.07588523 = weight(_text_:cooperative in 2339) [ClassicSimilarity], result of:
          0.07588523 = score(doc=2339,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.32890874 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.02625104 = score(doc=2339,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
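The indented tree above is Lucene's ClassicSimilarity explain output for this result. As a minimal sketch (the helper function name and the rounding are mine, not part of the catalog), each term's weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm; per-term weights are then summed and scaled by the coord factors shown:

```python
import math

def classic_sim_term(freq, idf, query_norm, field_norm):
    """One term's contribution in a ClassicSimilarity explain tree:
    weight = queryWeight * fieldWeight, with
    queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Values copied from the explain tree for result 1 (doc 2339).
w_cooperative = classic_sim_term(2.0, 5.953884, 0.03875087, 0.0390625)
w_22 = classic_sim_term(2.0, 3.5018296, 0.03875087, 0.0390625) * 0.5  # coord(1/2)

score = (w_cooperative + w_22) * 0.25  # coord(2/8): 2 of 8 query clauses matched
print(f"{score:.9f}")  # ≈ 0.022252686, the displayed score
```

The same arithmetic reproduces the scores of the other results below; for example, result 2 uses tf = √22 ≈ 4.690416 for freq=22.0 with coord(1/8).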
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  2. Li, Y.; Belkin, N.J.: An exploration of the relationships between work task and interactive information search behavior (2010) 0.01
    0.011956038 = product of:
      0.0956483 = sum of:
        0.0956483 = weight(_text_:work in 3980) [ClassicSimilarity], result of:
          0.0956483 = score(doc=3980,freq=22.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.67248654 = fieldWeight in 3980, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3980)
      0.125 = coord(1/8)
    
    Abstract
    This study explores the relationships between work task and interactive information search behavior. Work task was conceptualized based on a faceted classification of task. An experiment was conducted with six work-task types and simulated work-task situations assigned to 24 participants. The results indicate that users present different behavior patterns in approaching useful information for different work tasks: They select information systems to search based on the work tasks at hand, different work tasks motivate different types of search tasks, and different facets controlled in the study play different roles in shaping users' interactive information search behavior. The results provide empirical evidence to support the view that work tasks and search tasks play different roles in a user's interaction with information systems and that work task should be considered as a multifaceted variable. The findings make it possible to predict a user's information search behavior from his or her work task, and vice versa. Thus, this study sheds light on task-based information seeking and search, and has implications for adaptive information retrieval (IR) and personalization of IR.
  3. Li, Y.; Belkin, N.J.: A faceted approach to conceptualizing tasks in information seeking (2008) 0.01
    0.006243838 = product of:
      0.049950704 = sum of:
        0.049950704 = weight(_text_:work in 2442) [ClassicSimilarity], result of:
          0.049950704 = score(doc=2442,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.35119468 = fieldWeight in 2442, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2442)
      0.125 = coord(1/8)
    
    Abstract
    The nature of the task that leads a person to engage in information interaction, as well as that of information seeking and searching tasks, has been shown to influence individuals' information behavior. Classifying tasks in a domain has been viewed as a departure point for studies on the relationship between tasks and human information behavior. However, previous task classification schemes either classify tasks with respect to the requirements of specific studies or merely classify a certain category of task. Such approaches do not lead to a holistic picture of task, since a task involves different aspects. Therefore, the present study aims to develop a faceted classification of task, which can incorporate work tasks and information search tasks into the same classification scheme and characterize tasks in such a way as to help people make predictions of information behavior. For this purpose, previous task classification schemes and their underlying facets are reviewed and discussed. Analysis identifies essential facets and categorizes them into Generic facets of task and Common attributes of task. Generic facets of task include Source of task, Task doer, Time, Action, Product, and Goal. Common attributes of task include Task characteristics and User's perception of task. Corresponding sub-facets and values are identified as well. In this fashion, a faceted classification of task is established which could be used to describe users' work tasks and information search tasks. This faceted classification provides a framework to further explore the relationships among work tasks, search tasks, and interactive information retrieval, and to advance adaptive IR systems design.
  4. Liu, J.; Belkin, N.J.: Personalizing information retrieval for multi-session tasks : examining the roles of task stage, task type, and topic knowledge on the interpretation of dwell time as an indicator of document usefulness (2015) 0.01
    0.006243838 = product of:
      0.049950704 = sum of:
        0.049950704 = weight(_text_:work in 1608) [ClassicSimilarity], result of:
          0.049950704 = score(doc=1608,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.35119468 = fieldWeight in 1608, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.125 = coord(1/8)
    
    Abstract
    Personalization of information retrieval tailors search towards individual users to meet their particular information needs by taking into account information about users and their contexts, often through implicit sources of evidence such as user behaviors. This study looks at users' dwelling behavior on documents and several contextual factors: the stage of users' work tasks, task type, and users' knowledge of task topics, to explore whether or not taking contextual factors into account could help infer document usefulness from dwell time. A controlled laboratory experiment was conducted with 24 participants, each coming 3 times to work on 3 subtasks in a general work task. The results show that task stage could help interpret certain types of dwell time as reliable indicators of document usefulness in certain task types, as could topic knowledge, and the latter played a more significant role when both were available. This study contributes to a better understanding of how dwell time can be used as implicit evidence of document usefulness, as well as how contextual factors can help interpret dwell time as an indicator of usefulness. These findings have both theoretical and practical implications for using behaviors and contextual factors in the development of personalization systems.
  5. Belkin, N.J.; Croft, W.B.: Retrieval techniques (1987) 0.01
    0.005250208 = product of:
      0.042001665 = sum of:
        0.042001665 = product of:
          0.08400333 = sum of:
            0.08400333 = weight(_text_:22 in 334) [ClassicSimilarity], result of:
              0.08400333 = score(doc=334,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.61904186 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=334)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Annual review of information science and technology. 22(1987), pp.109-145
  6. Murdock, V.; Kelly, D.; Croft, W.B.; Belkin, N.J.; Yuan, X.: Identifying and improving retrieval for procedural questions (2007) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 902) [ClassicSimilarity], result of:
          0.034606863 = score(doc=902,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=902)
      0.125 = coord(1/8)
    
    Abstract
    People use questions to elicit information from other people in their everyday lives, and yet the most common method of obtaining information from a search engine is by posing keywords. There has been research suggesting that users are better at expressing their information needs in natural language; however, the vast majority of work to improve document retrieval has focused on queries posed as sets of keywords or Boolean queries. This paper focuses on improving document retrieval for the subset of natural language questions asking about how something is done. We classify questions as asking either for a description of a process or for a statement of fact, with better than 90% accuracy. Further, we identify non-content features of documents relevant to questions asking about a process. Finally, we demonstrate that we can use these features to significantly improve the precision of document retrieval results for questions asking about a process. Our approach, based on exploiting the structure of documents, shows a significant improvement in precision at rank one for questions asking about how something is done.
  7. Yuan, X. (J.); Belkin, N.J.: Applying an information-seeking dialogue model in an interactive information retrieval system (2014) 0.00
    0.00164069 = product of:
      0.01312552 = sum of:
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 4544) [ClassicSimilarity], result of:
              0.02625104 = score(doc=4544,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 4544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4544)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    6. 4.2015 19:22:59