Search (236 results, page 12 of 12)

  • theme_ss:"Retrievalstudien"
  1. Ellis, D.: The dilemma of measurement in information retrieval research (1996) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 3003) [ClassicSimilarity], result of:
          0.013336393 = score(doc=3003,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 3003, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3003)
      0.33333334 = coord(1/3)
    
    Abstract
    The problem of measurement in information retrieval research is traced to its source in the first retrieval tests. The problem is seen as presenting a chronic dilemma for the field. This dilemma has taken 3 forms as the discipline has evolved: (1) the dilemma of measurement in the archetypal approach: stated relevance versus user relevance; (2) the dilemma of measurement in the probabilistic approach: realism versus formalism; and (3) the dilemma of measurement in the Information Retrieval-Expert System (IR-ES) approach: linear measures of relevance versus logarithmic measures of knowledge. It is argued that the dilemma of measurement has remained intractable, even given the different assumptions of the different approaches, for 3 connected reasons: the nature of the subject matter of the field; the nature of relevance judgement; and the nature of cognition and knowledge. Finally, it is concluded that the original vision of information retrieval research as a discipline founded on quantification proved restrictive for its theoretical and methodological development, and that increasing recognition of this is reflected in growing interest in qualitative methods in information retrieval research in relation to cognitive, behavioral, and affective aspects of the information retrieval interaction.
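    For reference, each entry's score breakdown on this page follows Lucene's ClassicSimilarity. As a quick arithmetic check, here is a minimal Python sketch that reproduces the numbers from entry 1; it assumes the ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)), and the variable names are illustrative rather than Lucene API calls:

      import math

      # Values copied from the explain tree of entry 1 (term "on", doc 3003).
      query_norm = 0.04990557
      field_norm = 0.0390625
      freq, doc_freq, max_docs = 2.0, 13325, 44218

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~2.199415
      tf = math.sqrt(freq)                             # ~1.4142135
      query_weight = idf * query_norm                  # ~0.109763056
      field_weight = tf * idf * field_norm             # ~0.121501654
      score = query_weight * field_weight * (1 / 3)    # coord(1/3) -> ~0.0044454644
      print(f"{score:.10f}")

    The same arithmetic, with only freq, fieldNorm, and the document id changing, accounts for every score explanation below.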
  2. Gluck, M.: Exploring the relationship between user satisfaction and relevance in information systems (1996) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 4082) [ClassicSimilarity], result of:
          0.013336393 = score(doc=4082,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 4082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4082)
      0.33333334 = coord(1/3)
    
    Abstract
    Aims to better understand the relationship between relevance and user satisfaction, the 2 predominant aspects of user-based performance in information systems. Unconfounds relevance and user satisfaction assessments of system performance at the retrieved-item level. To minimize the idiosyncrasies of any one system, a generalized, naturalistic information system was employed in this study. Respondents completed sensemaking timeline questionnaires in which they described a recent need they had for geographic information. Retrieved documents from the generalized system consisted of the responses users obtained while resolving their information needs. Respondents directly provided process, product, cost-benefit, and overall satisfaction assessments with the generalized geographic systems. Relevance judgements of retrieved items were obtained through content analysis of the sensemaking questionnaires as a secondary observation technique. The content analysis provided relevance values on both 5-category and 2-category scales. Results indicate that relevance has strong relationships with process, product, and overall user satisfaction measures, while relevance and cost-benefit satisfaction measures have no significant relationship. This analysis also indicates that neither relevance nor user satisfaction subsumes the other concept, and that understanding the proper units of analysis for these measures helps resolve the paradox of the management information systems and information science literatures not informing each other concerning user-based information system performance measures.
  3. Wildemuth, B.M.; Jacob, E.K.; Fullington, A.; Bliek, R. de; Friedman, C.P.: A detailed analysis of end-user search behaviours (1991) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 2423) [ClassicSimilarity], result of:
          0.013336393 = score(doc=2423,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 2423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2423)
      0.33333334 = coord(1/3)
    
    Abstract
    Each search statement in the revision process of an online search can be viewed as a 'move' in the overall search strategy. Very little is known about how end users develop and revise their search strategies. A study was conducted to analyse the moves made in 244 data base searches conducted by 26 medical students at the University of North Carolina at Chapel Hill. Students searched INQUIRER, a data base of facts and concepts in microbiology. The searches were conducted during a 3-week period in spring 1990 and were recorded by the INQUIRER system. Each search statement was categorised using Fidel's online searching moves (see Online Review 9(1985), pp.61-74) and Bates' search tactics (see JASIS 30(1979), pp.205-214). Further analyses indicated that the most common moves were Browse/Specify, Select, Exhaust, Intersect, and Vary, and that selection of moves varied by student and by problem. Analysis of search tactics (combinations of moves) identified 5 common search approaches. The results of this study have implications for future research on search behaviours, for the design of system interfaces and data base structures, and for the training of end users.
  4. Park, S.: Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases : implications for digital libraries (2000) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 4591) [ClassicSimilarity], result of:
          0.013336393 = score(doc=4591,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 4591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4591)
      0.33333334 = coord(1/3)
    
    Abstract
    This article addresses a crucial issue in the digital library environment: how to support effective interaction of users with heterogeneous and distributed information resources. In particular, this study compared usability, user preference, effectiveness, and searching behaviors in systems that implement interaction with multiple databases as if they were one (an integrated interface) versus interaction with each database separately (a common interface), in an experiment in the TREC environment. 28 volunteers were recruited from the graduate students of the School of Communication, Information & Library Studies at Rutgers University. Significantly more subjects preferred the common interface to the integrated interface, mainly because they could have more control over database selection. Subjects were also more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface. Overall, it appears that for this population, interacting with databases through a common interface is preferable on all grounds to interacting with databases through an integrated interface. These results suggest that: (1) the general assumption of the information retrieval (IR) literature that an integrated interaction is best needs to be revisited; (2) it is important to allow for more user control in the distributed environment; (3) for digital library purposes, it is important to characterize different databases to support user choice for integration; and (4) certain users prefer control over database selection while still opting for results to be merged.
  5. MacCall, S.L.; Cleveland, A.D.; Gibson, I.E.: Outline and preliminary evaluation of the classical digital library model (1999) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 6541) [ClassicSimilarity], result of:
          0.013336393 = score(doc=6541,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 6541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6541)
      0.33333334 = coord(1/3)
    
    Abstract
    The growing number of networked information resources and services offers unprecedented opportunities for delivering high-quality information to the computer desktop of a wide range of individuals. However, there is currently a reliance on a database retrieval model, in which end-users use keywords to search large collections of automatically indexed resources in order to find needed information. As an alternative to the database retrieval model, this paper outlines the classical digital library model, which is derived from traditional practices of library and information science professionals. These practices include the selection and organization of information resources for local populations of users and the integration of advanced information retrieval tools, such as databases and the Internet, into these collections. To evaluate this model, library and information professionals and end-users involved with primary care medicine were asked to respond to a series of questions comparing their experiences with a digital library developed for the primary care population to their experiences with general Internet use. Preliminary results are reported.
  6. Hood, W.W.; Wilson, C.S.: The scatter of documents over databases in different subject domains : how many databases are needed? (2001) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 6936) [ClassicSimilarity], result of:
          0.013336393 = score(doc=6936,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 6936, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6936)
      0.33333334 = coord(1/3)
    
    Abstract
    The distribution of bibliographic records in on-line bibliographic databases is examined using 14 different search topics. These topics were searched using the DIALOG database host, using as many suitable databases as possible. The presence of duplicate records in the searches was taken into consideration in the analysis, and the problem of lexical ambiguity in at least one search topic is discussed. The study answers questions such as how many databases are needed in a multifile search for particular topics, and what coverage will be achieved using a certain number of databases. The distribution of the percentages of records retrieved over a number of databases for 13 of the 14 search topics roughly fell into three groups: (1) high concentration of records in one database, with about 80% coverage in five to eight databases; (2) moderate concentration in one database, with about 80% coverage in seven to 10 databases; and (3) low concentration in one database, with about 80% coverage in 16 to 19 databases. The study conforms with earlier results, but shows that the number of databases needed for searches with varying complexities of search strategies is much more topic-dependent than previous studies would indicate.
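    To illustrate the kind of cumulative-coverage question the study answers, here is a small hedged Python sketch; the percentages are invented placeholders rather than the paper's data, and overlap from inter-database duplicates (which the study did take into account) is ignored for simplicity:

      # How many databases are needed to reach 80% coverage, given the
      # percentage of a topic's records held by each database?
      def databases_for_coverage(shares, target=80.0):
          cumulative = 0.0
          for n, pct in enumerate(sorted(shares, reverse=True), start=1):
              cumulative += pct
              if cumulative >= target:
                  return n, cumulative
          return len(shares), cumulative

      # A hypothetical "high concentration" topic: one dominant database.
      shares = [45.0, 12.0, 9.0, 7.0, 6.0, 4.0, 3.0, 3.0, 2.0, 2.0]
      n, cov = databases_for_coverage(shares)
      print(f"{n} databases give {cov:.1f}% coverage")  # 6 databases give 83.0%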
  7. Borlund, P.: Experimental components for the evaluation of interactive information retrieval systems (2000) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 4549) [ClassicSimilarity], result of:
          0.013336393 = score(doc=4549,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 4549, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4549)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a set of basic components which constitutes the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way which is as close as possible to realistic IR processes. The experimental setting consists of 3 components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information need. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search behavioural differences exist between test persons' treatment of their own real information needs versus simulated information needs. The hypothesis is that if no difference exists, one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.
  8. Radev, D.R.; Libner, K.; Fan, W.: Getting answers to natural language questions on the Web (2002) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 5204) [ClassicSimilarity], result of:
          0.013336393 = score(doc=5204,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 5204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.33333334 = coord(1/3)
    
  9. Talvensaari, T.; Laurikkala, J.; Järvelin, K.; Juhola, M.: A study on automatic creation of a comparable document collection in cross-language information retrieval (2006) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 5601) [ClassicSimilarity], result of:
          0.013336393 = score(doc=5601,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 5601, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5601)
      0.33333334 = coord(1/3)
    
  10. Vakkari, P.; Huuskonen, S.: Search effort degrades search output but improves task outcome (2012) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 46) [ClassicSimilarity], result of:
          0.013336393 = score(doc=46,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 46, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=46)
      0.33333334 = coord(1/3)
    
    Abstract
    We analyzed how effort in searching is associated with search output and task outcome. In a field study, we examined how students' search effort for an assigned learning task was associated with precision and relative recall, and how this was associated with the quality of the learning outcome. The study subjects were 41 medical students writing essays for a class in medicine. Searching in Medline was part of their assignment. The data comprised students' search logs in Medline, their assessment of the usefulness of references retrieved, a questionnaire concerning the search process, and evaluation scores of the essays given by the teachers. Pearson correlation was calculated to answer the research questions. Finally, a path model for predicting task outcome was built. We found that effort in the search process degraded precision but improved task outcome. There were two major mechanisms reducing precision while enhancing task outcome. Effort in expanding Medical Subject Heading (MeSH) terms within search sessions and effort in assessing and exploring documents in the result list between the sessions degraded precision, but led to better task outcome. Thus, human effort compensated for poor retrieval results on the way to a good task outcome. Findings suggest that traditional effectiveness measures in information retrieval should be complemented with evaluation measures for search process and outcome.
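    As a toy illustration of the correlation analysis described above (the numbers are invented, not the study's data), the negative association between search effort and precision could be computed like this in Python 3.10+:

      from statistics import correlation

      # Hypothetical per-student values: effort as, e.g., number of query
      # reformulations; precision as the share of useful references retrieved.
      effort = [3, 5, 8, 10, 14, 17, 21, 25]
      precision = [0.62, 0.58, 0.55, 0.49, 0.44, 0.41, 0.36, 0.30]

      r = correlation(effort, precision)
      print(f"Pearson r = {r:.2f}")  # strongly negative: more effort, lower precision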
  11. Angelini, M.; Fazzini, V.; Ferro, N.; Santucci, G.; Silvello, G.: CLAIRE: A combinatorial visual analytics system for information retrieval evaluation (2018) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 5049) [ClassicSimilarity], result of:
          0.013336393 = score(doc=5049,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 5049, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5049)
      0.33333334 = coord(1/3)
    
    Abstract
    Information Retrieval (IR) develops complex systems, constituted of several components, which aim at returning and optimally ranking the most relevant documents in response to user queries. In this context, experimental evaluation plays a central role, since it allows for measuring IR system effectiveness, increasing the understanding of system functioning, and better directing the efforts for improving systems. Current evaluation methodologies are limited by two major factors: (i) IR systems are evaluated as "black boxes", since it is not possible to decompose the contributions of the different components, e.g., stop lists, stemmers, and IR models; (ii) given that it is not possible to predict the effectiveness of an IR system, both academia and industry need to explore huge numbers of systems, originated by large combinatorial compositions of their components, to understand how they perform and how these components interact. We propose a Combinatorial visuaL Analytics system for Information Retrieval Evaluation (CLAIRE) which allows for exploring and making sense of the performance of a large number of IR systems, in order to quickly and intuitively grasp which system configurations are preferred, what the contributions of the different components are, and how these components interact. The CLAIRE system is then validated against use cases based on several test collections using a wide set of systems, generated by a combinatorial composition of several off-the-shelf components, representing the most common denominator almost always present in English IR systems. In particular, we validate the findings enabled by CLAIRE with respect to consolidated deep statistical analyses, and we show that the CLAIRE system allows the generation of new insights which were not detectable with traditional approaches.
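    The combinatorial system space that CLAIRE explores can be pictured with a short sketch; the component choices below are generic examples, not the paper's actual experimental grid:

      from itertools import product

      # Each IR system is one combination of off-the-shelf components.
      stop_lists = ["none", "smart", "terrier"]
      stemmers = ["none", "porter", "krovetz"]
      ir_models = ["bm25", "lm_dirichlet", "tfidf", "dfr"]

      systems = list(product(stop_lists, stemmers, ir_models))
      print(len(systems), "system configurations")  # 3 * 3 * 4 = 36
      # Visual analytics then has to make these (realistically, thousands of)
      # per-configuration effectiveness scores explorable at once.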
  12. Vegt, A. van der; Zuccon, G.; Koopman, B.: Do better search engines really equate to better clinical decisions? : If not, why not? (2021) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 150) [ClassicSimilarity], result of:
          0.013336393 = score(doc=150,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=150)
      0.33333334 = coord(1/3)
    
    Abstract
    Previous research has found that improved search engine effectiveness, evaluated using a batch-style approach, does not always translate to significant improvements in user task performance; however, these prior studies focused on simple recall- and precision-based search tasks. We investigated the same relationship, but for the realistic, complex search tasks required in clinical decision making. One hundred and nine clinicians and final-year medical students answered 16 clinical questions. Although the search engine did improve answer accuracy by 20 percentage points, there was no significant further difference when participants used a more effective, state-of-the-art search engine. We also found that the search engine effectiveness difference identified in the lab was diminished by around 70% when the search engines were used with real users. Despite the aid of the search engine, half of the clinical questions were answered incorrectly. We further identified the relative contribution of search engine effectiveness to overall end-task success, and found that the ability to interpret documents correctly was a much more important factor impacting task success. If these findings are representative, information retrieval research may need to reorient its emphasis towards helping users to better understand information, rather than just finding it for them.
  13. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0038498852 = product of:
      0.011549655 = sum of:
        0.011549655 = weight(_text_:on in 636) [ClassicSimilarity], result of:
          0.011549655 = score(doc=636,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.10522352 = fieldWeight in 636, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.33333334 = coord(1/3)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Contains the contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Review in: JASIST 58(2007) no.6, pp.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has been mainly supported by its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  14. Bateman, J.: Modelling the importance of end-user relevance criteria (1999) 0.00
    0.0035563714 = product of:
      0.010669114 = sum of:
        0.010669114 = weight(_text_:on in 6606) [ClassicSimilarity], result of:
          0.010669114 = score(doc=6606,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 6606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=6606)
      0.33333334 = coord(1/3)
    
    Abstract
    In most information retrieval research, the concept of relevance has been defined a priori as a single variable by the researcher, and the meaning of relevance to the end-user who is making relevance judgments has not been taken into account. However, a number of criteria that users employ in making relevance judgments have been identified (Schamber, 1991; Barry, 1993). Understanding these criteria and their importance to end-users can help researchers better understand end-user evaluation behavior. This study reports end-users' ratings of the relative importance of 40 relevance criteria as used in their own information-seeking situations, and examines relationships between the criteria that they rated most important. Data were collected from 210 graduate students who were instructed in a mail survey to rate 40 relevance criteria by importance in their selection of the most valuable information source for a recent or current paper or project. The criteria were selected from previous studies in which open-ended interviews were used to elicit criteria from end-users making judgments in their own information-seeking situations (Schamber, 1991; Su, 1992; Barry, 1993). A model of relevance with three constructs that contribute to the concept of relevance was proposed using the eleven criteria that survey respondents rated as most important (75 or more on a scale of 0 to 100). The development of this model was guided by similarities in criteria and criteria groupings from previous research (Barry & Schamber, 1998). Confirmatory factor analysis was used to confirm the model and verify that the constructs would produce reliable subscale scores. The three constructs are information quality, information credibility, and information completeness. Second-order factor analysis indicated that these constructs explain 48% of positive relevance judgments for these respondents. Three additional constructs, information availability, information topicality, and information currency, are also suggested. The constructs developed from this analysis are thought to underlie the concept of relevance for this group of users.
  15. Mansourian, Y.; Ford, N.: Web searchers' attributions of success and failure: an empirical study (2007) 0.00
    0.0035563714 = product of:
      0.010669114 = sum of:
        0.010669114 = weight(_text_:on in 840) [ClassicSimilarity], result of:
          0.010669114 = score(doc=840,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=840)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper reports the findings of a study designed to explore web searchers' perceptions of the causes of their search failure and success. In particular, it seeks to discover the extent to which the constructs locus of control and attribution theory might provide useful frameworks for understanding searchers' perceptions. Design/methodology/approach - A combination of inductive and deductive approaches was employed. Perceptions of failed and successful searches were derived from inductive analysis of open-ended qualitative interviews with a sample of 37 biologists at the University of Sheffield. These perceptions were classified into "internal" and "external" attributions, and the relationships between these categories and "successful" and "failed" searches were analysed deductively to test the extent to which they might be explainable using locus of control and attribution theory interpretive frameworks. Findings - All searchers were readily able to recall "successful" and "unsuccessful" searches. In a large majority of cases (82.4 per cent), they clearly attributed each search to either internal (e.g. ability or effort) or external (e.g. luck or information not being available) factors. The pattern of such relationships was analysed and mapped onto those that would be predicted by locus of control and attribution theory. The authors conclude that the potential of these theoretical frameworks to illuminate one's understanding of web searching, and associated training, merits further systematic study. Research limitations/implications - The findings are based on a relatively small sample of academic and research staff in a particular subject area. Importantly, also, the study can at best provide a prima facie case for further systematic study since, although the patterns of attribution behaviour accord with those predictable by locus of control and attribution theory, data relating to the predictive elements of these theories (e.g. levels of confidence and achievement) were not available. This issue is discussed, and recommendations are made for further work. Originality/value - The findings provide some empirical support for the notion that locus of control and attribution theory might, subject to the limitations noted above, be potentially useful theoretical frameworks for helping us better understand web-based information seeking. If so, they could have implications particularly for better understanding of searchers' motivations, and for the design and development of more effective search training programmes.
  16. Naderi, H.; Rumpler, B.: PERCIRS: a system to combine personalized and collaborative information retrieval (2010) 0.00
    0.0035563714 = product of:
      0.010669114 = sum of:
        0.010669114 = weight(_text_:on in 3960) [ClassicSimilarity], result of:
          0.010669114 = score(doc=3960,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 3960, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=3960)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This paper aims to discuss and test the claim that utilization of personalization techniques can be valuable for improving the efficiency of collaborative information retrieval (CIR) systems. Design/methodology/approach - A new personalized CIR system, called PERCIRS, is presented based on user profile similarity calculation (UPSC) formulas. To this aim, the paper proposes several UPSC formulas as well as two techniques to evaluate them. As the proposed CIR system is personalized, it could not be evaluated by Cranfield-like evaluation techniques (e.g. TREC). Hence, this paper proposes a new user-centric mechanism which enables PERCIRS to be evaluated. This mechanism is generic and can be used to evaluate any other personalized IR system. Findings - The results show that among the UPSC formulas proposed in this paper, the (query-document)-graph-based formula is the most effective. After integrating this formula into PERCIRS and comparing it with nine other IR systems, it is concluded that the results of the system are better than those of the other IR systems. In addition, the paper shows that the complexity of the system is less than the complexity of the other CIR systems. Research limitations/implications - The system asks users to explicitly rank the returned documents, while explicit ranking is still not widespread enough. However, the authors believe that users should actively participate in the IR process in order to satisfy their information needs aptly. Originality/value - The value of this paper lies in combining collaborative and personalized IR, as well as in introducing a mechanism which enables the personalized IR system to be evaluated. The proposed evaluation mechanism is very valuable for developers of personalized IR systems. The paper also introduces some significant user profile similarity calculation formulas, and two techniques to evaluate them. These formulas can also be used to find the user's community in social networks.
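    The abstract does not reproduce the UPSC formulas themselves; as a generic stand-in, cosine similarity over term-weight profiles shows the general shape such a user-profile similarity calculation takes (all names here are hypothetical, not from PERCIRS):

      import math

      # Generic cosine similarity between two user profiles represented as
      # term -> weight dictionaries. Not the paper's UPSC formulas.
      def profile_similarity(p1, p2):
          dot = sum(p1[t] * p2[t] for t in set(p1) & set(p2))
          n1 = math.sqrt(sum(w * w for w in p1.values()))
          n2 = math.sqrt(sum(w * w for w in p2.values()))
          return dot / (n1 * n2) if n1 and n2 else 0.0

      alice = {"retrieval": 0.9, "evaluation": 0.7, "databases": 0.2}
      bob = {"retrieval": 0.8, "visualization": 0.6, "evaluation": 0.3}
      print(f"similarity = {profile_similarity(alice, bob):.3f}")  # ~0.770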

Types

  • a 222
  • s 10
  • m 6
  • el 3
  • r 1