Search (72 results, page 1 of 4)

  • author_ss:"Spink, A."
  1. Spink, A.; Saracevic, T.: Human-computer interaction in information retrieval : nature and manifestations of feedback (1998) 0.02
    0.017420776 = product of:
      0.08129696 = sum of:
        0.018822279 = weight(_text_:system in 3763) [ClassicSimilarity], result of:
          0.018822279 = score(doc=3763,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.2435858 = fieldWeight in 3763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3763)
        0.016539034 = weight(_text_:information in 3763) [ClassicSimilarity], result of:
          0.016539034 = score(doc=3763,freq=16.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.3840108 = fieldWeight in 3763, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3763)
        0.045935646 = weight(_text_:retrieval in 3763) [ClassicSimilarity], result of:
          0.045935646 = score(doc=3763,freq=14.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.61896384 = fieldWeight in 3763, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3763)
      0.21428572 = coord(3/14)
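The indented figures above are Lucene ClassicSimilarity explain output. Each weight(...) leaf appears to follow the standard formula: tf = √freq, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the leaf score is queryWeight × fieldWeight. A minimal Python sketch (the function name `classic_term_score` is illustrative, not part of any Lucene API) reproduces the `weight(_text_:system in 3763)` leaf:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm, boost=1.0):
    """One leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                     # 1.4142135 for freq=2.0
    query_weight = boost * idf * query_norm  # idf * queryNorm = 0.07727166
    field_weight = tf * idf * field_norm     # tf * idf * fieldNorm = 0.2435858
    return query_weight * field_weight

# The weight(_text_:system in 3763) leaf shown above:
score = classic_term_score(freq=2.0, idf=3.1495528,
                           query_norm=0.02453417, field_norm=0.0546875)
# ≈ 0.018822279, matching the explain value to float precision
```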
    
    Abstract
    Develops a theoretical framework for expressing the nature of feedback as a critical process in interactive information retrieval. Feedback concepts from cybernetics and social sciences perspectives are used to develop a concept of information feedback applicable to information retrieval. Adapts models from human-computer interaction and interactive information retrieval as a framework for studying the manifestations of feedback in information retrieval. Presents results from an empirical study of real-life interactions between users, professional mediators and an information retrieval system. Presents data involving 885 feedback loops classified into 5 categories. Presents a connection between the theoretical framework and empirical observations and provides a number of pragmatic and research suggestions
    Footnote
    Contribution to a special section of articles related to human-computer interaction and information retrieval
  2. Spink, A.; Cole, C.: ¬A multitasking framework for cognitive information retrieval (2005) 0.01
    0.014440458 = product of:
      0.050541602 = sum of:
        0.015210699 = weight(_text_:system in 642) [ClassicSimilarity], result of:
          0.015210699 = score(doc=642,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.19684705 = fieldWeight in 642, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=642)
        0.0088404855 = weight(_text_:information in 642) [ClassicSimilarity], result of:
          0.0088404855 = score(doc=642,freq=14.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.20526241 = fieldWeight in 642, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=642)
        0.019842334 = weight(_text_:retrieval in 642) [ClassicSimilarity], result of:
          0.019842334 = score(doc=642,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.26736724 = fieldWeight in 642, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=642)
        0.0066480828 = product of:
          0.0132961655 = sum of:
            0.0132961655 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
              0.0132961655 = score(doc=642,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.15476047 = fieldWeight in 642, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=642)
          0.5 = coord(1/2)
      0.2857143 = coord(4/14)
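At the document level, the explain tree multiplies the sum of the matching clause scores by Lucene's coord factor (matched clauses / total clauses); the nested `_text_:22` clause above carries its own coord(1/2). A small sketch (`coord_score` is an illustrative helper; the values are copied from the tree above) reproduces the 0.014440458 total for result 2:

```python
def coord_score(clause_scores, matched, total):
    """Top level of a Lucene BooleanQuery explain:
    score = coord(matched/total) * sum of clause scores."""
    return (matched / total) * sum(clause_scores)

# Nested clause for `_text_:22`: its own coord(1/2) halves the raw weight.
clause_22 = coord_score([0.0132961655], matched=1, total=2)  # ≈ 0.0066480828

# Whole-document score: three term clauses plus the nested one, coord(4/14).
doc_score = coord_score(
    [0.015210699, 0.0088404855, 0.019842334, clause_22],
    matched=4, total=14)  # ≈ 0.014440458
```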
    
    Abstract
    Information retrieval (IR) research has developed considerably since the 1950s to include consideration of more cognitive, interactive and iterative processes during the interaction between humans and IR or Web systems (Ingwersen, 1992, 1996). Interactive search sessions by humans with IR systems have been depicted as interactive IR models (Saracevic, 1997). Human-IR system interaction is also modeled as taking place within the context of broader human information behavior (HIB) processes (Spink et al., 2002). Research into the human or cognitive (user modeling) aspects of IR is a growing body of research on user interactivity, task performance and measures for observing user interactivity. The task context and situational characteristics of users' searches and evaluation have also been identified as key elements in a user's interaction with an IR system (Cool and Spink, 2002; Vakkari, 2003). Major theorized interactive IR models have been proposed relating to the single search episode, including Ingwersen's (1992, 1996) Cognitive Model of IR Interaction, Belkin et al.'s (1995) Episodic Interaction Model, and Saracevic's (1996, 1997) Stratified Model of IR Interaction. In this chapter we examine Saracevic's Stratified Model of IR Interaction and extend the model within the framework of cognitive IR (CIR) to depict CIR as a multitasking process. This chapter provides a new direction for CIR research by conceptualizing IR within a multitasking context. The next section of the chapter defines the concept of multitasking in the cognitive sciences and Section 3 discusses the emerging understanding of multitasking information behavior. In Section 4, cognitive IR is depicted within a multitasking framework using Saracevic's (1996, 1997) Stratified Model of IR Interaction. In Section 5, we link information searching and seeking models together, via Saracevic's Stratified Model of IR Interaction, but starting with a unitask model of HIB. We begin to model multitasking in cognitive IR in Section 6. In Sections 7 and 8, we increase the complexity of our developing multitasking model of cognitive IR by adding coordinating mechanisms, including feedback loops. Finally, in Section 9, we conclude the chapter and indicate future directions for further research.
    Date
    19. 1.2007 12:55:22
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  3. Spink, A.; Cole, C.: New directions in cognitive information retrieval : conclusion and further research (2005) 0.01
    0.014390237 = product of:
      0.06715444 = sum of:
        0.034012157 = weight(_text_:system in 637) [ClassicSimilarity], result of:
          0.034012157 = score(doc=637,freq=20.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.4401634 = fieldWeight in 637, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=637)
        0.0088404855 = weight(_text_:information in 637) [ClassicSimilarity], result of:
          0.0088404855 = score(doc=637,freq=14.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.20526241 = fieldWeight in 637, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=637)
        0.024301795 = weight(_text_:retrieval in 637) [ClassicSimilarity], result of:
          0.024301795 = score(doc=637,freq=12.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.32745665 = fieldWeight in 637, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=637)
      0.21428572 = coord(3/14)
    
    Abstract
    New Directions in Cognitive Information Retrieval (IR) gathers user or cognitive approaches to IR research into one volume. The group of researchers focus on a middle-ground perspective between system and user. They ask the question: What is the nexus between the wider context of why and how humans behave when seeking information and the technological and other constraints that determine the interaction between user and machine? These researchers' concern for the application of user/cognitive-oriented research to IR system design thus serves as a meeting ground linking computer scientists with their largely system performance concerns and the social science research that examines human information behavior in the wider context of how human perception and cognitive mechanisms function, and the work and social frameworks in which we live. The researchers in this volume provide an in-depth re-evaluation of the concepts that form the basis of current IR system design. Current IR systems are in a certain sense based on design conceptualizations that view:
    - the user's role in the user-system interaction as an input and monitoring mechanism for system performance;
    - the system's role in the user-system interaction as a data acquisition system, not an information retrieval system; and
    - the central issue in the user-system interaction as the efficacy of the system's matching algorithms, matching the user request statement to representations of the document set contained in the system's database.
    But the era of matching-focused approaches to interactive IR appears to be giving way to a concern for developing interactive systems to facilitate collaboration between users in the performance of their work and social tasks. There is room for cognitive approaches to interaction to break in here.
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  4. Spink, A.: Term relevance feedback and mediated database searching : implications for information retrieval practice and systems design (1995) 0.01
    0.012989433 = product of:
      0.06061735 = sum of:
        0.016133383 = weight(_text_:system in 1756) [ClassicSimilarity], result of:
          0.016133383 = score(doc=1756,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.20878783 = fieldWeight in 1756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1756)
        0.011207362 = weight(_text_:information in 1756) [ClassicSimilarity], result of:
          0.011207362 = score(doc=1756,freq=10.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.2602176 = fieldWeight in 1756, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1756)
        0.033276606 = weight(_text_:retrieval in 1756) [ClassicSimilarity], result of:
          0.033276606 = score(doc=1756,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.44838852 = fieldWeight in 1756, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1756)
      0.21428572 = coord(3/14)
    
    Abstract
    Research into both the algorithmic and human approaches to information retrieval is required to improve information retrieval system design and database searching effectiveness. Uses the human approach to examine the sources and effectiveness of search terms selected during mediated interactive information retrieval. Focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. Results show that terms selected from particular database fields of retrieved items during term relevance feedback (TRF) were more effective than search terms from the intermediary, database thesauri or users' domain knowledge during the interaction, but not as effective as terms from the users' written question statements. Implications for the design and testing of automatic relevance feedback techniques that place greater emphasis on these sources, and for the practice of database searching, are also discussed
    Source
    Information processing and management. 31(1995) no.2, S.161-171
  5. Spink, A.; Cole, C.: Introduction (2004) 0.01
    0.011747349 = product of:
      0.054820962 = sum of:
        0.015210699 = weight(_text_:system in 2389) [ClassicSimilarity], result of:
          0.015210699 = score(doc=2389,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.19684705 = fieldWeight in 2389, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=2389)
        0.019767927 = weight(_text_:information in 2389) [ClassicSimilarity], result of:
          0.019767927 = score(doc=2389,freq=70.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.45898068 = fieldWeight in 2389, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2389)
        0.019842334 = weight(_text_:retrieval in 2389) [ClassicSimilarity], result of:
          0.019842334 = score(doc=2389,freq=8.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.26736724 = fieldWeight in 2389, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2389)
      0.21428572 = coord(3/14)
    
    Abstract
    This is the second part of a two-part special topic JASIST issue on information seeking. The first part presented papers on the topics of health information seeking and everyday life information seeking or ELIS (i.e., information seeking outside of work or school). This second issue presents papers on the topics of information retrieval and information seeking in industry environments. Information retrieval involves a specific kind of information seeking, as the user is in direct contact with an information interface and with potential sources of information from the system's database. The user conducts the search using various strategies, tactics, etc., but there is also the possibility that information processes will occur resulting in a change in the way the user thinks about the topic of the search. If this occurs, the user is, in effect, using the found data, turning it into an informational element of some kind. Such processes can be facilitated in the design of the information retrieval system. Information seeking in industry environments takes up more and more of our working day. Even companies producing industrial products are in fact mainly producing informational elements of some kind, often for the purpose of making decisions or as starting positions for further information seeking. While there may be company mechanisms in place to aid such information seeking, and to make it more efficient, if better information seeking structures were in place, not only would workers waste less time in informational pursuits, but they would also find things, discover new processes, etc., that would benefit the corporation's bottom line. In Figure 1, we plot the six papers in this issue on an information behavior continuum, following a taxonomy of information behavior terms from Spink and Cole (2001). Information Behavior is a broad term covering all aspects of information seeking, including passive or undetermined information behavior. Information-Seeking Behavior is usually thought of as active or conscious information behavior. Information-Searching Behavior describes the interactive elements between a user and an information system. Information-Use Behavior is about the user's acquisition and incorporation of data in some kind of information process. This leads to the production of information, but also back to the broad range of Information Behavior in the first part of the continuum. Though we plot all papers in this issue along this continuum, they take into account more than their general framework. The three information retrieval reports veer from the traditional information-searching approach of user-system interaction, while the three industry environment articles veer from the traditional information-seeking approach of specific context information-seeking studies.
    Footnote
    Introduction to the special issue: Information seeking research
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.9, S.767-768
  6. Spink, A.; Goodrum, A.: ¬A study of search intermediary working notes : implications for IR system design (1996) 0.01
    0.011465134 = product of:
      0.05350396 = sum of:
        0.018822279 = weight(_text_:system in 6981) [ClassicSimilarity], result of:
          0.018822279 = score(doc=6981,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.2435858 = fieldWeight in 6981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
        0.010128049 = weight(_text_:information in 6981) [ClassicSimilarity], result of:
          0.010128049 = score(doc=6981,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23515764 = fieldWeight in 6981, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
        0.024553634 = weight(_text_:retrieval in 6981) [ClassicSimilarity], result of:
          0.024553634 = score(doc=6981,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.33085006 = fieldWeight in 6981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
      0.21428572 = coord(3/14)
    
    Abstract
    Reports findings from an exploratory study investigating working notes created during encoding and external storage (EES) processes by human search intermediaries using a Boolean information retrieval system. Analysis of 221 sets of working notes created by human search intermediaries revealed extensive use of EES processes and the creation of working notes of textual, numerical and graphical entities. Nearly 70% of recorded working notes were textual/numerical entities, nearly 30% were graphical entities and 0.73% were indiscernible. Segmentation devices were also used in 48% of the working notes. The creation of working notes during the EES processes was a fundamental element within the mediated, interactive information retrieval process. Discusses implications for the design of interfaces to support users' EES processes and further research
    Source
    Information processing and management. 32(1996) no.6, S.681-695
  7. Spink, A.; Cole, C.: New directions in cognitive information retrieval : introduction (2005) 0.01
    0.009433313 = product of:
      0.044022128 = sum of:
        0.010755588 = weight(_text_:system in 647) [ClassicSimilarity], result of:
          0.010755588 = score(doc=647,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.13919188 = fieldWeight in 647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=647)
        0.011082135 = weight(_text_:information in 647) [ClassicSimilarity], result of:
          0.011082135 = score(doc=647,freq=22.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.25731003 = fieldWeight in 647, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=647)
        0.022184404 = weight(_text_:retrieval in 647) [ClassicSimilarity], result of:
          0.022184404 = score(doc=647,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.29892567 = fieldWeight in 647, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=647)
      0.21428572 = coord(3/14)
    
    Abstract
    Humans have used electronic information retrieval (IR) systems for more than 50 years as they evolved from experimental systems to full-scale Web search engines and digital libraries. The fields of library and information science (LIS), cognitive science, human factors and computer science have historically been the leading disciplines in conducting research that seeks to model human interaction with IR systems for all kinds of information related behaviors. As technology problems have been mastered, the theoretical and applied framework for studying human interaction with IR systems has evolved from systems-centered to more user-centered, or cognitive-centered approaches. However, cognitive information retrieval (CIR) research that focuses on user interaction with IR systems is still largely under-funded and is often not included at computing and systems design oriented conferences. But CIR-focused research continues, and there are signs that some IR systems designers in academia and the Web search business are realizing that user behavior research can provide valuable insights into systems design and evaluation. The goal of our book is to provide an overview of new CIR research directions. This book does not provide a history of the research field of CIR. Instead, the book confronts new ways of looking at the human information condition with regard to our increasing need to interact with IR systems. The need has grown due to a number of factors, including the increased importance of information to more people in this information age. Also, IR was once considered document-oriented, but has now evolved to include multimedia, text, and other information objects. As a result, IR systems and their complexity have proliferated as users and user purposes for using them have also proliferated. Human interaction with IR systems can often be frustrating as people often lack an understanding of IR system functionality.
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  8. Spink, A.; Park, M.; Jansen, B.J.; Pedersen, J.: Elicitation and use of relevance feedback information (2006) 0.01
    0.009007158 = product of:
      0.042033404 = sum of:
        0.013444485 = weight(_text_:system in 967) [ClassicSimilarity], result of:
          0.013444485 = score(doc=967,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=967)
        0.011050607 = weight(_text_:information in 967) [ClassicSimilarity], result of:
          0.011050607 = score(doc=967,freq=14.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.256578 = fieldWeight in 967, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=967)
        0.017538311 = weight(_text_:retrieval in 967) [ClassicSimilarity], result of:
          0.017538311 = score(doc=967,freq=4.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23632148 = fieldWeight in 967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=967)
      0.21428572 = coord(3/14)
    
    Abstract
    A user's single session with a Web search engine or information retrieval (IR) system may consist of seeking information on a single topic or on multiple topics, switching between tasks in multitasking information behavior. Most Web search sessions consist of two queries of approximately two words. However, some Web search sessions consist of three or more queries. We present findings from two studies: first, a study of two-query search sessions on the AltaVista Web search engine, and second, a study of three or more query search sessions on the AltaVista Web search engine. We examine the degree of multitasking search and information task switching during these two sets of AltaVista Web search sessions. A sample of two-query and three or more query sessions was filtered from AltaVista transaction logs from 2002 and qualitatively analyzed. Sessions ranged in duration from less than a minute to a few hours. Findings include: (1) 81% of two-query sessions included multiple topics, (2) 91.3% of three or more query sessions included multiple topics, (3) there is a broad variety of topics in multitasking search sessions, and (4) three or more query sessions sometimes contained frequent topic changes. Multitasking is found to be a growing element in Web searching. This paper proposes an approach to interactive information retrieval (IR) contextually within a multitasking framework. The implications of our findings for Web design and further research are discussed.
    Source
    Information processing and management. 42(2006) no.1, S.264-275
  9. Spink, A.; Ozmutlu, H.C.; Ozmutlu, S.: Multitasking information seeking and searching processes (2002) 0.01
    0.0087654395 = product of:
      0.040905382 = sum of:
        0.013444485 = weight(_text_:system in 600) [ClassicSimilarity], result of:
          0.013444485 = score(doc=600,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
        0.015059439 = weight(_text_:information in 600) [ClassicSimilarity], result of:
          0.015059439 = score(doc=600,freq=26.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.34965688 = fieldWeight in 600, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
        0.012401459 = weight(_text_:retrieval in 600) [ClassicSimilarity], result of:
          0.012401459 = score(doc=600,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.16710453 = fieldWeight in 600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=600)
      0.21428572 = coord(3/14)
    
    Abstract
    Recent studies show that humans engage in multitasking behaviors as they seek and search information retrieval (IR) systems for information on more than one topic at the same time. For example, a Web search session by a single user may consist of searching on single topics or multitasking. Findings are presented from four separate studies of the prevalence of multitasking information seeking and searching by Web, IR system, and library users. Incidence of multitasking identified in the four different studies included: (1) users of the Excite Web search engine who completed a survey form, (2) Excite Web search engine users filtered from an Excite transaction log from 20 December 1999, (3) mediated on-line database searches, and (4) academic library users. Findings include: (1) multitasking information seeking and searching is a common human behavior, (2) users may conduct information seeking and searching on related or unrelated topics, (3) Web or IR multitasking search sessions are longer than single topic sessions, (4) mean number of topics per Web search ranged from 1 to more than 10 topics with a mean of 2.11 topic changes per search session, and (5) many Web search topic changes were from hobbies to shopping and vice versa. A more complex model of human seeking and searching levels that incorporates multitasking information behaviors is presented, and a theoretical framework for human information coordinating behavior (HICB) is proposed. Multitasking information seeking and searching is developing as a major research area that draws together IR and information seeking studies toward a focus on IR within the context of human information behavior. Implications for models of information seeking and searching, IR/Web systems design, and further research are discussed.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.8, S.639-652
  10. Spink, A.: Study of interactive feedback during mediated information retrieval (1997) 0.01
    Abstract
    Reports results from a study exploring the nature and types of interactive feedback during mediated information retrieval. Identifies 5 different types of interactive feedback, extending the interactive information retrieval model to include relevance, magnitude, and strategy interactive feedback. Discusses implications for further research investigating the nature and model of interactive feedback in information retrieval.
    Source
    Journal of the American Society for Information Science. 48(1997) no.5, S.382-394
  11. Spink, A.; Goodrum, A.; Robins, D.: Elicitation behavior during mediated information retrieval (1998) 0.01
    Abstract
    Considers what elicitations, or requests for information, search intermediaries make of users with information requests - both prior to and during an information retrieval interaction - and for what purpose. Reports a study of elicitations during 40 mediated information retrieval interactions. Identifies a total of 1,557 search intermediary elicitations within 15 purpose categories. The elicitation purposes of search intermediaries included requests for information on search terms and strategies, database selection, search procedures, system outputs and relevance of retrieved items, and users' knowledge and previous information seeking. Investigates the transition sequences from one type of search intermediary elicitation to another. Compares these findings with results from a study of end-user questions.
    Source
    Information processing and management. 34(1998) nos.2/3, S.257-273
  12. Spink, A.; Greisdorf, H.: Regions and levels : Measuring and mapping users' relevance judgements (2001) 0.01
    Abstract
    The dichotomous bipolar approach to relevance has produced an abundance of information retrieval (IR) research. However, relevance studies that include consideration of users' partial relevance judgments are moving toward greater relevance clarity and congruity in the design of more effective IR systems. The study reported in this paper investigates the various regions across a distribution of users' relevance judgments, including how these regions may be categorized, measured, and evaluated. An instrument was designed using four scales for collecting, measuring, and describing end-user relevance judgments. The instrument was administered to 21 end-users who conducted searches on their own information problems and made relevance judgments on a total of 1059 retrieved items. Findings include: (1) overlapping regions of relevance were found to impact the usefulness of precision ratios as a measure of IR system effectiveness, (2) both positive and negative levels of relevance are important to users as they make relevance judgments, (3) topicality was used more to reject rather than accept items as highly relevant, (4) utility was used more to judge items highly relevant, and (5) the nature of the relevance judgment distribution suggested a new IR evaluation measure, the median effect. Findings suggest that the middle region of a distribution of relevance judgments, also called "partial relevance," represents a key avenue for ongoing study. The findings provide implications for relevance theory and the evaluation of IR systems.
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.2, S.161-173
  13. Kuhlthau, C.; Spink, A.; Cool, C.: Exploration into stages in the information search process in online information retrieval : communication between users and intermediaries (1992) 0.01
    Abstract
    Describes a model of information seeking behaviour that views the information search process as proceeding through a series of cognitive states through which users progressively refine and reformulate their information problem. The model suggests that searches have several stages which evolve from vague and uncertain to clearer and directed and finally to focused and confident
    Imprint
    Medford, NJ : Learned Information Inc.
    Source
    Proceedings of the 55th Annual Meeting of the American Society for Information Science, Pittsburgh, 26.-29.10.92. Ed.: D. Shaw
  14. Zhang, Y.; Jansen, B.J.; Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis (2009) 0.01
    Abstract
    In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods to optimize the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction of search engine results using the number of links visited, number of queries a user submits, and rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
    Date
    22. 3.2009 17:49:11
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.3, S.557-570
  15. Cool, C.; Spink, A.: Issues of context in information retrieval (IR) : an introduction to the special issue (2002) 0.01
    Abstract
    The subject of context has received a great deal of attention in the information retrieval (IR) literature over the past decade, primarily in studies of information seeking and IR interactions. Recently, attention to context in IR has expanded to address new problems in new environments. In this paper we outline five overlapping dimensions of context which we believe to be important constituent elements and we discuss how they are related to different issues in IR research. The papers in this special issue are summarized with respect to how they represent work that is being conducted within these dimensions of context. We conclude with future areas of research which are needed in order to fully understand the multidimensional nature of context in IR.
    Footnote
    Introduction to a special issue: "Issues of context in information retrieval (IR)"
    Source
    Information processing and management. 38(2002) no.5, S.605-611
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  16. Spink, A.; Losee, R.M.: Feedback in information retrieval (1996) 0.01
    Abstract
    State-of-the-art review of the mechanisms of feedback in information retrieval (IR) in terms of feedback concepts and models in cybernetics and the social sciences. Critically evaluates feedback research based on the traditional IR models, comparing the different approaches to automatic relevance feedback techniques, and reviews feedback research within the framework of interactive IR models. Calls for an extension of the concept of feedback beyond relevance feedback to interactive feedback. Cites specific examples of feedback models used within IR research and presents 6 challenges for future research.
    Source
    Annual review of information science and technology. 31(1996), S.33-78
  17. Spink, A.; Saracevic, T.: Interaction in information retrieval : selection and effectiveness of search terms (1997) 0.01
    Abstract
    We investigated the sources and effectiveness of search terms used during mediated on-line searching under real-life (as opposed to laboratory) circumstances. A stratified model of information retrieval (IR) interaction served as a framework for the analysis. For the analysis, we used the on-line transaction logs, videotapes, and transcribed dialogue of the presearch and on-line interaction between 40 users and 4 professional intermediaries. Each user provided one question and interacted with one of the four intermediaries. Searching was done using DIALOG. Five sources of search terms were identified: (1) the users' written question statements, (2) terms derived from users' domain knowledge during the interaction, (3) terms extracted from retrieved items as relevance feedback, (4) database thesaurus, and (5) terms derived by intermediaries during the interaction. Distribution, retrieval effectiveness, transition sequences, and correlation of search terms from different sources were investigated. Search terms from users' written question statements and term relevance feedback were the most productive sources of terms contributing to the retrieval of items judged relevant by users. Implications of the findings are discussed
    Source
    Journal of the American Society for Information Science. 48(1997) no.8, S.741-761
  18. Spink, A.; Saracevic, T.: Where do the search terms come from? (1992) 0.00
    Abstract
    Presents selected results from a large study which observed under real-life conditions the interaction between users, intermediaries and computers before and during online searching. Concentrates on the sources of search terms and the relation between given search terms and retrieval of relevant and nonrelevant items as answers. Users provided the largest proportion of search terms (61%), followed by the thesaurus (19%), relevance feedback (11%), and intermediary (9%). Only 4% of search terms resulted in retrieval of relevant items only; 60% retrieved relevant and nonrelevant items; 25% retrieved nonrelevant items only; and 11% retrieved nothing.
    Imprint
    Medford : Learned Information Inc.
  19. Spink, A.; Saracevic, T.: Sources and use of search terms in online searching (1992) 0.00
    Abstract
    Reports selected results from a larger study whose objectives are to observe, under real-life conditions, the nature and patterns of interaction between users, intermediaries, and computer systems in the context of online information searching and retrieval. Reports various analyses on the relation of search term sources and the retrieval of items judged as to their relevance. While the users generated the largest proportion of search terms (61%), which were responsible for 68% of retrieved items judged relevant, other sources in the interaction process played an important role.
    Imprint
    Medford, NJ : Learned Information Inc.
    Source
    Proceedings of the 55th Annual Meeting of the American Society for Information Science, Pittsburgh, 26.-29.10.92. Ed.: D. Shaw
  20. Spink, A.; Park, M.; Koshman, S.: Factors affecting assigned information problem ordering during Web search : an exploratory study (2006) 0.00
    Abstract
    Multitasking is the human ability to handle the demands of multiple tasks. Multitasking behavior involves the ordering of multiple tasks and switching between tasks. People often multitask when using information retrieval (IR) technologies as they seek information on more than one information problem over single or multiple search episodes. However, limited studies have examined how people order their information problems, especially during their Web search engine interaction. The aim of our exploratory study was to investigate assigned information problem ordering by forty (40) study participants engaged in Web search. Findings suggest that assigned information problem ordering was influenced by the following factors: personal interest, problem knowledge, perceived level of information available on the Web, ease of finding information, level of importance, and seeking information on information problems in order from general to specific. Personal interest and problem knowledge were the major factors during assigned information problem ordering. Implications of the findings and further research are discussed. The relationship between information problem ordering and gratification theory is an important area for further exploration.
    Source
    Information processing and management. 42(2006) no.5, S.1366-1378

Types

  • a 70
  • m 2
  • el 1