Search (57 results, page 1 of 3)

  • year_i:[2000 TO 2010}
  • theme_ss:"Benutzerstudien"
  • language_ss:"e"
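The active filters above correspond one-to-one to Solr `fq` (filter query) parameters. A minimal sketch of the underlying request, assuming a standard Solr `/select` endpoint (the host, port, and core name here are hypothetical; only the query string is derived from the page). Note that the mixed brackets in `year_i:[2000 TO 2010}` are deliberate Lucene/Solr range syntax, not a typo: the range includes 2000 but excludes 2010.

```python
from urllib.parse import urlencode

# Active filters from the result page above. The [ ... } range is
# inclusive of 2000 and exclusive of 2010 (valid Lucene/Solr syntax).
params = [
    ("q", "*:*"),
    ("fq", 'year_i:[2000 TO 2010}'),
    ("fq", 'theme_ss:"Benutzerstudien"'),
    ("fq", 'language_ss:"e"'),
    ("rows", "20"),   # 20 hits per page -> 57 results span 3 pages
    ("start", "0"),   # page 1
]

# Hypothetical endpoint; urlencode accepts a sequence of pairs, which
# preserves the repeated fq parameters.
url = "http://localhost:8983/solr/catalog/select?" + urlencode(params)
print(url)
```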
  1. Branch, J.L.: Investigating the information-seeking process of adolescents : the value of using think alouds and think afters (2000) 0.02
    0.024745772 = product of:
      0.049491543 = sum of:
        0.049491543 = product of:
          0.09898309 = sum of:
            0.09898309 = weight(_text_:22 in 3924) [ClassicSimilarity], result of:
              0.09898309 = score(doc=3924,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.5416616 = fieldWeight in 3924, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3924)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library and information science research. 22(2000) no.4, S.371-382
  2. Yoo, E.-Y.; Robbins, L.S.: Understanding middle-aged women's health information seeking on the web : a theoretical approach (2008) 0.02
    0.021210661 = product of:
      0.042421322 = sum of:
        0.042421322 = product of:
          0.084842645 = sum of:
            0.084842645 = weight(_text_:22 in 2973) [ClassicSimilarity], result of:
              0.084842645 = score(doc=2973,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.46428138 = fieldWeight in 2973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2973)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    9. 2.2008 17:52:22
  3. Matsui, S.; Konno, H.: Evaluation of World Wide Web access to OPACs of public libraries in Japan : functional survey of 46 OPAC systems and end user survey of three of those systems (2000) 0.02
    0.019251842 = product of:
      0.038503684 = sum of:
        0.038503684 = product of:
          0.07700737 = sum of:
            0.07700737 = weight(_text_:systems in 1762) [ClassicSimilarity], result of:
              0.07700737 = score(doc=1762,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.48018348 = fieldWeight in 1762, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1762)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Griesdorf, H.; Spink, A.: Median measure : an approach to IR systems evaluation (2001) 0.02
    0.019058352 = product of:
      0.038116705 = sum of:
        0.038116705 = product of:
          0.07623341 = sum of:
            0.07623341 = weight(_text_:systems in 1774) [ClassicSimilarity], result of:
              0.07623341 = score(doc=1774,freq=2.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.47535738 = fieldWeight in 1774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1774)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Walbridge, S.L.: Usability testing of user interfaces in libraries (2009) 0.02
    0.016505018 = product of:
      0.033010036 = sum of:
        0.033010036 = product of:
          0.06602007 = sum of:
            0.06602007 = weight(_text_:systems in 3899) [ClassicSimilarity], result of:
              0.06602007 = score(doc=3899,freq=6.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.41167158 = fieldWeight in 3899, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As libraries face increasing competition in providing information, we must ensure that our library systems are usable, effective, efficient, and perhaps even enticing. How do librarians know that systems give users what they need and want? One way is usability testing. Usability testing has been around the computer industry for at least a decade, but library use of the method is relatively new. It has been a common perception that library systems were designed for librarians. Even if the user was considered, it was from the perspective of librarians who worked with the user. Those perceptions were anecdotal, and librarians frequently disagreed with one another about user behavior and knowledge.
  6. Zhang, X.; Chignell, M.: Assessment of the effects of user characteristics on mental models of information retrieval systems (2001) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 5753) [ClassicSimilarity], result of:
              0.06534292 = score(doc=5753,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 5753, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5753)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article reports the results of a study that investigated effects of four user characteristics on users' mental models of information retrieval systems: educational and professional status, first language, academic background, and computer experience. The repertory grid technique was used in the study. Using this method, important components of information retrieval systems were represented by nine concepts, based on four IR experts' judgments. Users' mental models were represented by factor scores that were derived from users' matrices of concept ratings on different attributes of the concepts. The study found that educational and professional status, academic background, and computer experience had significant effects in differentiating users on their factor scores. First language had a borderline effect, but the effect was not significant at the α = 0.05 level. Specific different views regarding IR systems among different groups of users are described and discussed. Implications of the study for information science and IR system designs are suggested.
  7. Käki, M.; Aula, A.: Controlling the complexity in comparing search user interfaces via user studies (2008) 0.02
    0.01633573 = product of:
      0.03267146 = sum of:
        0.03267146 = product of:
          0.06534292 = sum of:
            0.06534292 = weight(_text_:systems in 2024) [ClassicSimilarity], result of:
              0.06534292 = score(doc=2024,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.4074492 = fieldWeight in 2024, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2024)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over time, researchers have acknowledged the importance of understanding the users' strategies in the design of search systems. However, when involving users in the comparison of search systems, methodological challenges still exist as researchers are pondering how to handle the variability that human participants bring to the comparisons. This paper presents methods for controlling the complexity of user-centered evaluations of search user interfaces through within-subjects designs, balanced task sets, time limitations, pre-formulated queries, cached result pages, and through limiting the users' access to result documents. Additionally, we will present our experiences in using three measures - search speed, qualified search speed, and immediate accuracy - to facilitate the comparison of different search systems across studies.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
  8. Gremett, P.: Utilizing a user's context to improve search results (2006) 0.01
    0.014140441 = product of:
      0.028280882 = sum of:
        0.028280882 = product of:
          0.056561764 = sum of:
            0.056561764 = weight(_text_:22 in 5299) [ClassicSimilarity], result of:
              0.056561764 = score(doc=5299,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.30952093 = fieldWeight in 5299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:17:44
  9. Markey, K.: Twenty-five years of end-user searching : part 1: research findings (2007) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 5163) [ClassicSimilarity], result of:
              0.054452434 = score(doc=5163,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 5163, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5163)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the first part of a two-part article that reviews 25 years of published research findings on end-user searching in online information retrieval (IR) systems. In Part 1 (Markey, 2007), the author seeks to answer the following questions: What characterizes the queries that end users submit to online IR systems? What search features do people use? What features would enable them to improve on the retrievals they have in hand? What features are hardly ever used? What do end users do in response to the system's retrievals? Are end users satisfied with their online searches? Summarizing searches of online IR systems by the search features people use every day makes information retrieval appear to be a very simplistic one-stop event. In Part 2, the author examines current models of the information retrieval process, demonstrating that information retrieval is much more complex and involves changes in cognition, feelings, and/or events during the information seeking process. She poses a host of new research questions that will further our understanding about end-user searching of online IR systems.
  10. Markey, K.: Twenty-five years of end-user searching : part 2: future research directions (2007) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 443) [ClassicSimilarity], result of:
              0.054452434 = score(doc=443,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 443, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=443)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the second part of a two-part article that examines 25 years of published research findings on end-user searching of online information retrieval (IR) systems. In Part 1, it was learned that people enter a few short search statements into online IR systems. Their searches do not resemble the systematic approach of expert searchers who use the full range of IR-system functionality. Part 2 picks up the discussion of research findings about end-user searching in the context of current information retrieval models. These models demonstrate that information retrieval is a complex event, involving changes in cognition, feelings, and/or events during the information seeking process. The author challenges IR researchers to design new studies of end-user searching, collecting data not only on system-feature use, but on multiple search sessions and controlling for variables such as domain knowledge expertise and expert system knowledge. Because future IR systems designers are likely to improve the functionality of online IR systems in response to answers to the new research questions posed here, the author concludes with advice to these designers about retaining the simplicity of online IR system interfaces.
  11. Kelly, D.; Harper, D.J.; Landau, B.: Questionnaire mode effects in interactive information retrieval experiments (2008) 0.01
    0.013613109 = product of:
      0.027226217 = sum of:
        0.027226217 = product of:
          0.054452434 = sum of:
            0.054452434 = weight(_text_:systems in 2029) [ClassicSimilarity], result of:
              0.054452434 = score(doc=2029,freq=8.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.339541 = fieldWeight in 2029, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2029)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The questionnaire is an important technique for gathering data from subjects during interactive information retrieval (IR) experiments. Research in survey methodology, public opinion polling and psychology has demonstrated a number of response biases and behaviors that subjects exhibit when responding to questionnaires. Furthermore, research in human-computer interaction has demonstrated that subjects tend to inflate their ratings of systems when completing usability questionnaires. In this study we investigate the relationship between questionnaire mode and subjects' responses to a usability questionnaire comprising closed and open questions administered during an interactive IR experiment. Three questionnaire modes (pen-and-paper, electronic and interview) were explored with 51 subjects who used one of two information retrieval systems. Results showed that subjects' quantitative evaluations of systems were significantly lower in the interview mode than in the electronic mode. With respect to open questions, subjects in the interview mode used significantly more words than subjects in the pen-and-paper or electronic modes to communicate their responses, and communicated a significantly higher number of response units, even though the total number of unique response units was roughly the same across conditions. Finally, results showed that subjects in the pen-and-paper mode were the most efficient in communicating their responses to open questions. These results suggest that researchers should use the interview mode to elicit responses to closed questions from subjects and either pen-and-paper or electronic modes to elicit responses to open questions.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
  12. Moulaison, H.L.: OPAC queries at a medium-sized academic library : a transaction log analysis (2008) 0.01
    0.012372886 = product of:
      0.024745772 = sum of:
        0.024745772 = product of:
          0.049491543 = sum of:
            0.049491543 = weight(_text_:22 in 3599) [ClassicSimilarity], result of:
              0.049491543 = score(doc=3599,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.2708308 = fieldWeight in 3599, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2000 17:38:22
  13. Agosto, D.E.: Bounded rationality and satisficing in young people's Web-based decision making (2002) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 177) [ClassicSimilarity], result of:
              0.042421322 = score(doc=177,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=177)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study investigated Simon's behavioral decision-making theories of bounded rationality and satisficing in relation to young people's decision making on the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors: reduction and termination. Personal preference was found to play a major role in Web site evaluation in the areas of graphic/multimedia and subject content preferences. This study has related implications for Web site designers and for adult intermediaries who work with young people and the Web.
  14. Large, A.; Beheshti, J.; Rahman, T.: Design criteria for children's Web portals : the users speak out (2002) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 197) [ClassicSimilarity], result of:
              0.042421322 = score(doc=197,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=197)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 6.2005 10:34:22
  15. Fidel, R.: ¬The user-centered approach (2000) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 917) [ClassicSimilarity], result of:
              0.042421322 = score(doc=917,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 917, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=917)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  16. Bilal, D.: Children's use of the Yahooligans! Web search engine : III. Cognitive and physical behaviors on fully self-generated search tasks (2002) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 5228) [ClassicSimilarity], result of:
              0.042421322 = score(doc=5228,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5228)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Bilal, in this third part of her Yahooligans! study, looks at children's performance with self-generated search tasks, as compared to previously assigned search tasks, looking for differences in success, cognitive behavior, physical behavior, and task preference. Lotus ScreenCam was used to record interactions, and post-search interviews were used to record impressions. The subjects, the same 22 seventh-grade children as in the previous studies, generated topics of interest that were mediated with the researcher into more specific topics where necessary. Fifteen usable sessions form the basis of the study. Eleven children were successful in finding information, a rate of 73% compared to 69% in assigned research questions, and 50% in assigned fact-finding questions. Eighty-seven percent began using one or two keyword searches. Spelling was a problem. Successful children made fewer keyword searches, and the number of search moves averaged 5.5 as compared to 2.4 on the research-oriented task and 3.49 on the factual. Backtracking and looping were common. The self-generated task was preferred by 47% of the subjects.
  17. Kim, J.: Describing and predicting information-seeking behavior on the Web (2009) 0.01
    0.010605331 = product of:
      0.021210661 = sum of:
        0.021210661 = product of:
          0.042421322 = sum of:
            0.042421322 = weight(_text_:22 in 2747) [ClassicSimilarity], result of:
              0.042421322 = score(doc=2747,freq=2.0), product of:
                0.1827397 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052184064 = queryNorm
                0.23214069 = fieldWeight in 2747, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2747)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 18:54:15
  18. Spink, A.; Ozmutlu, H.C.; Ozmutlu, S.: Multitasking information seeking and searching processes (2002) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 600) [ClassicSimilarity], result of:
              0.038503684 = score(doc=600,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 600, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=600)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent studies show that humans engage in multitasking behaviors as they seek and search information retrieval (IR) systems for information on more than one topic at the same time. For example, a Web search session by a single user may consist of searching on single topics or multitasking. Findings are presented from four separate studies of the prevalence of multitasking information seeking and searching by Web, IR system, and library users. Incidence of multitasking identified in the four different studies included: (1) users of the Excite Web search engine who completed a survey form, (2) Excite Web search engine users filtered from an Excite transaction log from 20 December 1999, (3) mediated on-line database searches, and (4) academic library users. Findings include: (1) multitasking information seeking and searching is a common human behavior, (2) users may conduct information seeking and searching on related or unrelated topics, (3) Web or IR multitasking search sessions are longer than single topic sessions, (4) mean number of topics per Web search ranged from 1 to more than 10 topics with a mean of 2.11 topic changes per search session, and (5) many Web search topic changes were from hobbies to shopping and vice versa. A more complex model of human seeking and searching levels that incorporates multitasking information behaviors is presented, and a theoretical framework for human information coordinating behavior (HICB) is proposed. Multitasking information seeking and searching is developing as a major research area that draws together IR and information seeking studies toward a focus on IR within the context of human information behavior. Implications for models of information seeking and searching, IR/Web systems design, and further research are discussed.
  19. Drabenstott, K.M.: Do nondomain experts enlist the strategies of domain experts? (2003) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 1713) [ClassicSimilarity], result of:
              0.038503684 = score(doc=1713,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 1713, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1713)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    User studies demonstrate that nondomain experts do not use the same information-seeking strategies as domain experts. Because of the transformation of integrated library systems into Information Gateways in the late 1990s, both nondomain experts and domain experts have had available to them the wide range of information-seeking strategies in a single system. This article describes the results of a study to answer three research questions: (1) do nondomain experts enlist the strategies of domain experts? (2) if they do, how did they learn about these strategies? and (3) are they successful using them? Interviews, audio recordings, screen captures, and observations were used to gather data from 14 undergraduate students who searched an academic library's Information Gateway. The few times that the undergraduates in this study enlisted search strategies that were characteristic of domain experts, it usually took perseverance, trial-and-error, serendipity, or a combination of all three for them to find useful information. Although this study's results provide no compelling reasons for systems to support features that make domain-expert strategies possible, there is a need for system features that scaffold nondomain experts from their usual strategies to the strategies characteristic of domain experts.
  20. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : I. Theory and background (2003) 0.01
    0.009625921 = product of:
      0.019251842 = sum of:
        0.019251842 = product of:
          0.038503684 = sum of:
            0.038503684 = weight(_text_:systems in 5164) [ClassicSimilarity], result of:
              0.038503684 = score(doc=5164,freq=4.0), product of:
                0.16037072 = queryWeight, product of:
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.052184064 = queryNorm
                0.24009174 = fieldWeight in 5164, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0731742 = idf(docFreq=5561, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The project proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. The project contains two parts. Part I describes the background and the model including a set of criteria and measures, and a method for implementation. It includes a literature review for two periods. The early period (1995-1996) portrays the settings for developing the model and the later period (1997-2000) places two applications of the model among contemporary evaluation work. Part II presents one of the applications that investigated the evaluation of four major search engines by 36 undergraduates from three academic disciplines. It reports results from statistical analyses of quantitative data for the entire sample and among disciplines, and content analysis of verbal data containing users' reasons for satisfaction. The proposed model aims to provide systematic feedback to engine developers or service providers for system improvement and to generate useful insight for system design and tool choice. The model can be applied to evaluating other compatible information retrieval systems or information retrieval (IR) techniques. It intends to contribute to developing a theory of relevance that goes beyond topicality to include value and usefulness for designing user-oriented information retrieval systems.
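Each score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: tf is the square root of the within-field term frequency, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each coord(m/n) factor scales the score by the fraction of query clauses matched. A minimal sketch, reproducing the top result's score from the numbers shown (small last-digit differences are expected, since Lucene computes in 32-bit floats):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coords):
    """Recompute a Lucene ClassicSimilarity (TF-IDF) explain tree."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # 0.1827397 in the trees above
    field_weight = tf * idf * field_norm  # e.g. 0.5416616 for doc 3924
    score = query_weight * field_weight
    for m, n in coords:                   # two coord(1/2) factors -> * 0.25
        score *= m / n
    return score

# Result 1 (Branch 2000, doc 3924): freq=2.0, fieldNorm=0.109375
top = classic_similarity(2.0, 3.5018296, 0.052184064, 0.109375, [(1, 2), (1, 2)])
print(top)  # ≈ 0.024745772, the displayed score
```

Result 2 differs from result 1 only in fieldNorm (0.09375 vs. 0.109375, reflecting a longer field), which is what drops its score to 0.021210661.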

Types

  • a 56
  • b 1
  • r 1