Search (11 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • type_ss:"a"
  • year_i:[2000 TO 2010}
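  These facet chips are Solr filter queries (fq clauses); the year filter uses Solr's half-open range syntax, [2000 TO 2010}, which includes 2000 but excludes 2010. As a minimal sketch of the request behind a page like this one (the host, core name, and query string are assumptions, not taken from the page; the coord(1/4) factors in the score explanations below imply a four-clause query, of which only "aspects" and "22" match):

      import requests  # hypothetical endpoint; only the fq values are taken from this page

      params = {
          "q": "<four-term query>",  # not shown on the page; the explain output reveals matches on 'aspects' and '22'
          "fq": [
              'language_ss:"e"',
              'theme_ss:"Retrievalstudien"',
              'type_ss:"a"',
              "year_i:[2000 TO 2010}",  # half-open range: 2000 included, 2010 excluded
          ],
          "wt": "json",
          "debugQuery": "true",  # asks Solr to attach the score explanations shown with each hit
      }
      resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
      print(resp.json()["response"]["numFound"])  # 11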
  1. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.03
    0.031595502 = product of:
      0.12638201 = sum of:
        0.12638201 = sum of:
          0.08872356 = weight(_text_:aspects in 2552) [ClassicSimilarity], result of:
            0.08872356 = score(doc=2552,freq=4.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.42373765 = fieldWeight in 2552, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
          0.03765845 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
            0.03765845 = score(doc=2552,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.23214069 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
      0.25 = coord(1/4)
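    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch that reproduces the displayed score from the quantities shown, assuming Lucene's standard formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); queryNorm is taken as given, since it depends on the full query and cannot be recomputed from a single hit:

      from math import log, sqrt

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """One term's contribution under ClassicSimilarity: queryWeight * fieldWeight."""
          tf = sqrt(freq)                           # 2.0 for freq=4, 1.4142135 for freq=2
          idf = 1 + log(max_docs / (doc_freq + 1))  # 4.5198684 ('aspects'), 3.5018296 ('22')
          query_weight = idf * query_norm           # 0.20938325 and 0.16222252
          field_weight = tf * idf * field_norm      # 0.42373765 and 0.23214069
          return query_weight * field_weight

      QUERY_NORM = 0.046325076  # copied from the explanation above

      aspects = term_score(4.0, 1308, 44218, QUERY_NORM, 0.046875)  # ~0.08872356
      term_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)  # ~0.03765845
      print((aspects + term_22) * 0.25)  # coord(1/4) -> ~0.031595502, shown rounded as 0.03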
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database, using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rolling (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rolling's); major-to-all term consistency (49.31% and 62.59%, respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rolling's. Although comparison with previous studies is difficult, due to methodological variations across indexing-consistency studies and the specific characteristics of the database, the results generally support the trends found in previous, similar studies.
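    For reference, a worked sketch of the two consistency measures (the term sets are invented for illustration): Hooper's measure divides the number of agreed terms by the union of both indexers' term sets, while Rolling's is the Dice-style ratio of twice the agreements to the total number of terms assigned.

      def hooper(a, b):
          """Hooper (1965): agreements / (terms of A + terms of B - agreements)."""
          common = len(a & b)
          return common / (len(a) + len(b) - common)

      def rolling(a, b):
          """Rolling (1981): 2 * agreements / (terms of A + terms of B)."""
          return 2 * len(a & b) / (len(a) + len(b))

      # Invented example: two indexers assign descriptors to the same record.
      indexer1 = {"information retrieval", "indexing", "databases", "evaluation"}
      indexer2 = {"information retrieval", "indexing", "consistency"}

      print(hooper(indexer1, indexer2))   # 2 / (4 + 3 - 2) = 0.4
      print(rolling(indexer1, indexer2))  # 2 * 2 / (4 + 3) = 0.571...

      # Rolling's ratio is never smaller than Hooper's, matching the pattern in the
      # percentages reported above (e.g. 49.02% vs. 63.32% for overall consistency).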
    Date
    9. 2.1997 18:44:22
  2. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    0.020915726 = product of:
      0.083662905 = sum of:
        0.083662905 = sum of:
          0.052280862 = weight(_text_:aspects in 1184) [ClassicSimilarity], result of:
            0.052280862 = score(doc=1184,freq=2.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.2496898 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.031382043 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.031382043 = score(doc=1184,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.25 = coord(1/4)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented; Pauline's questions concerned the logic of their approach, and mine the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects, including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and the development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  3. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08786971 = score(doc=6438,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    11. 8.2001 16:22:19
  4. Robins, D.: Shifts of focus on various aspects of user information problems during interactive information retrieval (2000) 0.01
    0.007842129 = product of:
      0.031368516 = sum of:
        0.031368516 = product of:
          0.06273703 = sum of:
            0.06273703 = weight(_text_:aspects in 4995) [ClassicSimilarity], result of:
              0.06273703 = score(doc=4995,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.29962775 = fieldWeight in 4995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4995)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  5. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4393) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4393,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4393, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4393)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well, compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, compared against pre-assessed results, is inferior to that of assessments made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both in measured assessment time and in subjects' self-reports. However, no effects on the results were detectable by traditional relevance ranking. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish ones and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  6. Greisdorf, H.; O'Connor, B.: Nodes of topicality modeling user notions of on topic documents (2003) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 5175) [ClassicSimilarity], result of:
              0.052280862 = score(doc=5175,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 5175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5175)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Greisdorf and O'Connor attempt to determine the aspects of a retrieved item that provide a questioner with evidence that the item is in fact on the topic searched, independent of its relevance. To this end they collect data from 32 participants (11 from the business community and 21 doctoral students at the University of North Texas), each of whom was asked to state whether they considered material that approaches a topic in each of 14 specific manners as "on topic" or "off topic". Chi-square tests indicate that the observed values differ significantly from the expected values; the chi-square residuals for on-topic judgements exceed plus or minus two in eight cases, and plus two in five cases. The positive values, which indicate a percentage of responses greater than chance, suggest that documents considered topical are only related to the problem at hand, contain terms that were in the query, and describe, explain or expand the topic of the query. The chi-square residuals for off-topic judgements exceed plus or minus two in ten cases, and plus two in four cases. The positive values suggest that documents considered not topical exhibit a contrasting, contrary, or confounding point of view, or merely spark curiosity. Such material might well be relevant, but is not judged topical. This suggests that topical appropriateness may best be achieved using the left compositional monotonicity approach of Bruza et al.
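    The residuals mentioned here are standardized Pearson residuals, (observed - expected) / sqrt(expected); values beyond roughly plus or minus two flag cells that depart significantly from chance. A small sketch with invented counts:

      from math import sqrt

      def pearson_residual(observed, expected):
          """Standardized residual; |value| > ~2 flags a cell departing from chance."""
          return (observed - expected) / sqrt(expected)

      # Invented counts: 32 participants judging one "manner of approaching a topic".
      # Under chance, half of the 32 judgements fall in each cell, so expected = 16.
      on_topic, off_topic, expected = 26, 6, 16.0
      print(pearson_residual(on_topic, expected))   # +2.5: "on topic" chosen more often than chance
      print(pearson_residual(off_topic, expected))  # -2.5: "off topic" chosen less often than chance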
  7. Ruthven, I.; Baillie, M.; Elsweiler, D.: The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 835) [ClassicSimilarity], result of:
              0.052280862 = score(doc=835,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic, and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated with the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between documents marked as marginally relevant and those marked as highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research was conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but mainly concern the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation, to create tasks that are interesting and relevant to experimental subjects. Originality/value - The paper examines different search variables within one study to compare the relative effects of these variables on the search outcomes.
  8. Mansourian, Y.; Ford, N.: Search persistence and failure on the web : a "bounded rationality" and "satisficing" analysis (2007) 0.01
    0.005228086 = product of:
      0.020912344 = sum of:
        0.020912344 = product of:
          0.041824687 = sum of:
            0.041824687 = weight(_text_:aspects in 841) [ClassicSimilarity], result of:
              0.041824687 = score(doc=841,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19975184 = fieldWeight in 841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=841)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - Our current knowledge of how searchers perceive, and react to, the possibility of missing potentially important information whilst searching the web is limited. The study reported here seeks to investigate such perceptions and reactions, and to explore the extent to which Simon's "bounded rationality" theory is useful in illuminating these issues. Design/methodology/approach - A total of 37 academic staff, research staff and research students in three university departments were interviewed about their web searching. The open-ended, semi-structured interviews were inductively analysed. The emergence of the concept of "good enough" searching prompted a further analysis to explore the extent to which the data could be interpreted in terms of Simon's concepts of "bounded rationality" and "satisficing". Findings - The results indicate that the risk of missing potentially important information was a matter of concern to the interviewees. Their estimations of the likely extent and importance of missed information affected individuals' decisions as to when to stop searching - decisions based on very different criteria, which map well onto Simon's concepts. On the basis of the interview data, the authors propose tentative categorizations of perceptions of the risk of missing information, including "inconsequential", "tolerable", "damaging" and "disastrous", and of search strategies, including "perfunctory", "minimalist", "nervous" and "extensive". It is concluded that there is at least a prima facie case for bounded rationality and satisficing being considered as potentially useful concepts in our quest to better understand aspects of human information behaviour. Research limitations/implications - Although the findings are based on a relatively small sample and an exploratory qualitative analysis, it is argued that the study raises a number of interesting questions, and has implications for both the development of theory and practice in the areas of web searching and information literacy. Originality/value - The paper focuses on an aspect of web searching which has not to date been well explored. Whilst research has done much to illuminate searchers' perceptions of what they find on the web, we know relatively little of their perceptions of, and reactions to, information that they fail to find. The study reported here provides some tentative models, based on empirical evidence, of these phenomena.
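    Simon's "satisficing" can be read as a stopping rule: accept the first option that meets an aspiration level rather than searching exhaustively for the optimum. A toy sketch of how the search strategies named above might differ only in threshold and patience (all names and numbers are illustrative, not taken from the study):

      def satisficing_search(results, aspiration, patience):
          """Return the first result scoring >= aspiration, or None after `patience` tries."""
          for i, (title, utility) in enumerate(results):
              if utility >= aspiration:
                  return title  # "good enough": stop searching here
              if i + 1 >= patience:
                  return None   # stop anyway, tolerating the risk of missed information
          return None

      # Invented result list with subjective utilities in [0, 1].
      results = [("page A", 0.4), ("page B", 0.7), ("page C", 0.9), ("page D", 0.95)]
      print(satisficing_search(results, aspiration=0.6, patience=2))   # "perfunctory": page B
      print(satisficing_search(results, aspiration=0.9, patience=10))  # "extensive": page C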
  9. Voorhees, E.M.; Harman, D.K.: The Text REtrieval Conference (2005) 0.00
    0.0045745755 = product of:
      0.018298302 = sum of:
        0.018298302 = product of:
          0.036596604 = sum of:
            0.036596604 = weight(_text_:aspects in 5082) [ClassicSimilarity], result of:
              0.036596604 = score(doc=5082,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.17478286 = fieldWeight in 5082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5082)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks", evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
  10. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.031382043 = score(doc=2026,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 44(2008) no.1, pp.22-38
  11. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.00
    0.003138204 = product of:
      0.012552816 = sum of:
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.025105633 = score(doc=2752,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2009 18:48:28