Search (110 results, page 1 of 6)

  • × language_ss:"e"
  • × theme_ss:"Retrievalstudien"
  • × year_i:[2000 TO 2010}
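  The three facets above are Lucene/Solr filter queries; `year_i:[2000 TO 2010}` uses mixed-bracket range syntax, where `[` makes the lower bound inclusive and `}` makes the upper bound exclusive (so the filter covers 2000-2009). A minimal sketch of issuing the same filtered search against a Solr `select` handler — the endpoint URL is hypothetical, while the field names (`language_ss`, `theme_ss`, `year_i`) are taken from the facets shown:

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; adjust host and core name for a real install.
base_url = "http://localhost:8983/solr/catalog/select"

params = urlencode({
    "q": "*:*",
    # One fq per active facet; [2000 TO 2010} is a half-open range:
    # 2000 inclusive, 2010 exclusive.
    "fq": ['language_ss:"e"',
           'theme_ss:"Retrievalstudien"',
           "year_i:[2000 TO 2010}"],
    "rows": 20,
}, doseq=True)

query_url = f"{base_url}?{params}"
```

  With `doseq=True`, the list value produces one `fq=` parameter per filter, which is how Solr expects multiple filter queries to arrive.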
  1. TREC: experiment and evaluation in information retrieval (2005) 0.04
    0.039899282 = product of:
      0.13964748 = sum of:
        0.015099285 = weight(_text_:web in 636) [ClassicSimilarity], result of:
          0.015099285 = score(doc=636,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.15613155 = fieldWeight in 636, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.009769271 = weight(_text_:information in 636) [ClassicSimilarity], result of:
          0.009769271 = score(doc=636,freq=30.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.18779588 = fieldWeight in 636, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.044308405 = weight(_text_:retrieval in 636) [ClassicSimilarity], result of:
          0.044308405 = score(doc=636,freq=70.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.49430186 = fieldWeight in 636, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.07047053 = weight(_text_:kongress in 636) [ClassicSimilarity], result of:
          0.07047053 = score(doc=636,freq=8.0), product of:
            0.19442701 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.029633347 = queryNorm
            0.36245233 = fieldWeight in 636, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.2857143 = coord(4/14)
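    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the document score is the sum of the term contributions scaled by the coordination factor coord(matched clauses / total clauses). A minimal sketch reproducing the `_text_:web` contribution and the final score for doc 636 from the figures shown:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    tf = sqrt(freq); idf = 1 + ln(maxDocs / (docFreq + 1));
    score = (idf * queryNorm) * (tf * idf * fieldNorm)."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# _text_:web in doc 636: freq=6, docFreq=4597, maxDocs=44218,
# queryNorm=0.029633347, fieldNorm=0.01953125 (all from the tree above).
web = classic_term_score(6.0, 4597, 44218, 0.029633347, 0.01953125)

# Final score: coord(4/14) scales the sum of the four matching clauses
# (web, information, retrieval, kongress).
doc_score = (4 / 14) * (web + 0.009769271 + 0.044308405 + 0.07047053)
```

    Note how the low fieldNorm (0.01953125, i.e. a long field) suppresses every contribution, while the rare term `kongress` (idf ≈ 6.56) still dominates the sum.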
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Contains the contributions:
    1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
    2. The TREC Test Collections - Donna K. Harman
    3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
    4. The TREC Ad Hoc Experiments - Donna K. Harman
    5. Routing and Filtering - Stephen Robertson and Jamie Callan
    6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
    7. Beyond English - Donna K. Harman
    8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
    9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
    10. Question Answering in TREC - Ellen M. Voorhees
    11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
    12. How Okapi Came to TREC - Stephen Robertson
    13. The SMART Project at TREC - Chris Buckley
    14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
    15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
    16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
    17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
    Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    LCSH
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Subject
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
  2. Voorhees, E.M.; Harman, D.K.: ¬The Text REtrieval Conference (2005) 0.02
    0.02436916 = product of:
      0.085292056 = sum of:
        0.022496238 = weight(_text_:wide in 5082) [ClassicSimilarity], result of:
          0.022496238 = score(doc=5082,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.171337 = fieldWeight in 5082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
        0.012204607 = weight(_text_:web in 5082) [ClassicSimilarity], result of:
          0.012204607 = score(doc=5082,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.12619963 = fieldWeight in 5082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
        0.00865008 = weight(_text_:information in 5082) [ClassicSimilarity], result of:
          0.00865008 = score(doc=5082,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16628155 = fieldWeight in 5082, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
        0.04194113 = weight(_text_:retrieval in 5082) [ClassicSimilarity], result of:
          0.04194113 = score(doc=5082,freq=32.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 5082, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5082)
      0.2857143 = coord(4/14)
    
    Abstract
    Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
    This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  3. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.02
    0.021466525 = product of:
      0.10017712 = sum of:
        0.03856498 = weight(_text_:wide in 4598) [ClassicSimilarity], result of:
          0.03856498 = score(doc=4598,freq=2.0), product of:
            0.1312982 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.029633347 = queryNorm
            0.29372054 = fieldWeight in 4598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
        0.046783425 = weight(_text_:web in 4598) [ClassicSimilarity], result of:
          0.046783425 = score(doc=4598,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.48375595 = fieldWeight in 4598, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
        0.01482871 = weight(_text_:information in 4598) [ClassicSimilarity], result of:
          0.01482871 = score(doc=4598,freq=12.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.2850541 = fieldWeight in 4598, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
      0.21428572 = coord(3/14)
    
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and to retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW-experience are more proficient in locating Web sites than are novice WWW-users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed
    Source
    Journal of the American Society for Information Science. 51(2000) no.6, S.576-581
  4. Hawking, D.; Craswell, N.: ¬The very large collection and Web tracks (2005) 0.02
    0.019264605 = product of:
      0.08990149 = sum of:
        0.041844364 = weight(_text_:web in 5085) [ClassicSimilarity], result of:
          0.041844364 = score(doc=5085,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.43268442 = fieldWeight in 5085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=5085)
        0.012107591 = weight(_text_:information in 5085) [ClassicSimilarity], result of:
          0.012107591 = score(doc=5085,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 5085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=5085)
        0.03594954 = weight(_text_:retrieval in 5085) [ClassicSimilarity], result of:
          0.03594954 = score(doc=5085,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 5085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=5085)
      0.21428572 = coord(3/14)
    
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  5. Airio, E.: Who benefits from CLIR in web retrieval? (2008) 0.02
    0.017303396 = product of:
      0.080749184 = sum of:
        0.036238287 = weight(_text_:web in 2342) [ClassicSimilarity], result of:
          0.036238287 = score(doc=2342,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 2342, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2342)
        0.00856136 = weight(_text_:information in 2342) [ClassicSimilarity], result of:
          0.00856136 = score(doc=2342,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 2342, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2342)
        0.03594954 = weight(_text_:retrieval in 2342) [ClassicSimilarity], result of:
          0.03594954 = score(doc=2342,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 2342, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2342)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - The aim of the current paper is to test whether query translation is beneficial in web retrieval.
    Design/methodology/approach - The language pairs were Finnish-Swedish, English-German and Finnish-French. A total of 12-18 participants were recruited for each language pair. Each participant performed four retrieval tasks. The author's aim was to compare the performance of the translated queries with that of the target language queries. Thus, the author asked participants to formulate a source language query and a target language query for each task. The source language queries were translated into the target language utilizing a dictionary-based system. In English-German, machine translation was also utilized. The author used Google as the search engine.
    Findings - The results differed depending on the language pair. The author concluded that dictionary coverage had an effect on the results. On average, the results of query translation were better than in traditional laboratory tests.
    Originality/value - This research shows that query translation on the web is beneficial especially for users with moderate and non-active language skills. This is valuable information for developers of cross-language information retrieval systems.
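    The dictionary-based query translation evaluated in that study can be sketched roughly as follows. The helper and the toy Finnish-Swedish dictionary are illustrative only (the study used a full dictionary-based system, plus machine translation for English-German); the pass-through of out-of-vocabulary terms illustrates why dictionary coverage affected the results:

```python
def translate_query(source_query, bilingual_dict):
    """Dictionary-based query translation: replace each source-language
    term with the OR-union of its target-language translations; terms
    absent from the dictionary (e.g. proper names) pass through
    unchanged."""
    parts = []
    for term in source_query.lower().split():
        translations = bilingual_dict.get(term, [term])
        if len(translations) == 1:
            parts.append(translations[0])
        else:
            parts.append("(" + " OR ".join(translations) + ")")
    return " ".join(parts)

# Toy Finnish -> Swedish dictionary (illustrative entries only).
fi_sv = {
    "tiedonhaku": ["informationssökning"],
    "arviointi": ["utvärdering", "bedömning"],
}

translated = translate_query("tiedonhaku arviointi", fi_sv)
```

    Ambiguous terms expand into an OR-group, which trades precision for recall; an unknown term such as a place name is simply kept in the source language.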
  6. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.016029192 = product of:
      0.0748029 = sum of:
        0.014125523 = weight(_text_:information in 6438) [ClassicSimilarity], result of:
          0.014125523 = score(doc=6438,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27153665 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.04194113 = weight(_text_:retrieval in 6438) [ClassicSimilarity], result of:
          0.04194113 = score(doc=6438,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.018736245 = product of:
          0.056208733 = sum of:
            0.056208733 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.056208733 = score(doc=6438,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Date
    11. 8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
  7. Landoni, M.; Bell, S.: Information retrieval techniques for evaluating search engines : a critical overview (2000) 0.01
    0.013749211 = product of:
      0.064162984 = sum of:
        0.020922182 = weight(_text_:web in 716) [ClassicSimilarity], result of:
          0.020922182 = score(doc=716,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 716, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
        0.012107591 = weight(_text_:information in 716) [ClassicSimilarity], result of:
          0.012107591 = score(doc=716,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 716, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
        0.031133216 = weight(_text_:retrieval in 716) [ClassicSimilarity], result of:
          0.031133216 = score(doc=716,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 716, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=716)
      0.21428572 = coord(3/14)
    
    Abstract
    The objective of this paper is to highlight the importance of a scientifically sounded approach to search engine evaluation. Nowadays there is a flourishing literature which describes various attempts at conducting such evaluation by following all sort of approaches, but very often only the final results are published with little, if any, information about the methodology and the procedures adopted. These various experiments have been critically investigated and catalogued according to their scientific foundation by Bell [1] in the attempt to provide a valuable framework for future studies in this area. This paper reconsiders some of Bell's ideas in the light of the crisis of classic evaluation techniques for information retrieval and tries to envisage some form of collaboration between the IR and web communities in order to design a better and more consistent platform for the evaluation of tools for interactive information retrieval.
  8. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.01
    0.013730608 = product of:
      0.09611425 = sum of:
        0.02421518 = weight(_text_:information in 2556) [ClassicSimilarity], result of:
          0.02421518 = score(doc=2556,freq=18.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.46549135 = fieldWeight in 2556, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
        0.07189907 = weight(_text_:retrieval in 2556) [ClassicSimilarity], result of:
          0.07189907 = score(doc=2556,freq=18.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.8021017 = fieldWeight in 2556, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
      0.14285715 = coord(2/14)
    
    LCSH
    Information storage and retrieval systems / Evaluation
    RSWK
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Subject
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Information storage and retrieval systems / Evaluation
  9. Savoy, J.: Cross-language information retrieval : experiments based on CLEF 2000 corpora (2003) 0.01
    0.012177391 = product of:
      0.056827825 = sum of:
        0.020922182 = weight(_text_:web in 1034) [ClassicSimilarity], result of:
          0.020922182 = score(doc=1034,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 1034, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1034)
        0.0104854815 = weight(_text_:information in 1034) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=1034,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 1034, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1034)
        0.025420163 = weight(_text_:retrieval in 1034) [ClassicSimilarity], result of:
          0.025420163 = score(doc=1034,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 1034, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1034)
      0.21428572 = coord(3/14)
    
    Abstract
    Search engines play an essential role in the usability of Internet-based information systems and without them the Web would be much less accessible, and at the very least would develop at a much slower rate. Given that non-English users now tend to make up the majority in this environment, our main objective is to analyze and evaluate the retrieval effectiveness of various indexing and search strategies based on test-collections written in four different languages: English, French, German, and Italian. Our second objective is to describe and evaluate various approaches that might be implemented in order to effectively access document collections written in another language. As a third objective, we will explore the underlying problems involved in searching document collections written in the four different languages, and we will suggest and evaluate different database merging strategies capable of providing the user with a single unique result list.
    Source
    Information processing and management. 39(2003) no.1, S.75-115
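     The database merging problem this abstract describes — fusing per-language result lists into a single ranking — is commonly attacked with round-robin interleaving or raw-score merging as baselines. A minimal sketch of those two strategies (function names and data layout are illustrative assumptions, not taken from the paper):

```python
from itertools import zip_longest

def round_robin_merge(result_lists):
    """Interleave ranked lists (one per language collection) into one list."""
    merged = []
    for tier in zip_longest(*result_lists):
        # Take the next-ranked document from each collection in turn.
        merged.extend(doc for doc in tier if doc is not None)
    return merged

def raw_score_merge(scored_lists):
    """Merge (doc_id, score) lists by sorting on the raw retrieval scores.

    Only meaningful when the per-collection scores are comparable;
    otherwise a normalisation step is needed first.
    """
    pooled = [pair for lst in scored_lists for pair in lst]
    return [doc for doc, _ in sorted(pooled, key=lambda p: p[1], reverse=True)]

english = [("e1", 4.2), ("e2", 2.0)]
french = [("f1", 3.1), ("f2", 1.5)]
print(raw_score_merge([english, french]))            # ['e1', 'f1', 'e2', 'f2']
print(round_robin_merge([["e1", "e2"], ["f1", "f2"]]))
```

Raw-score merging tends to win when collections are indexed identically; round-robin is the safer fallback when scores are incomparable.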
  10. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.01
    Abstract
     This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to encourage researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
     Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  11. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    Abstract
     I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  12. Buckley, C.; Voorhees, E.M.: Retrieval system evaluation (2005) 0.01
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  13. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.01
    Abstract
     This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. For most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data is hidden in the Web waiting to be extracted from the millions of Web pages
    Source
    Journal of the American Society for Information Science. 51(2000) no.5, S.432-443
  14. Cole, C.: Intelligent information retrieval : Part IV: Testing the timing of two information retrieval devices in a naturalistic setting (2001) 0.01
    Source
    Information processing and management. 37(2001) no.1, S.163-182
  15. Buckley, C.; Voorhees, E.M.: Retrieval evaluation with incomplete information (2004) 0.01
    Source
     SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin, u.a
  16. MacFarlane, A.: Evaluation of web search for the information practitioner (2007) 0.01
    Abstract
     Purpose - The aim of the paper is to put forward a structured mechanism for web search evaluation. The paper seeks to point to useful scientific research and show how information practitioners can use these methods in evaluation of search on the web for their users. Design/methodology/approach - The paper puts forward an approach which utilizes traditional laboratory-based evaluation measures, such as average precision and precision at N documents, augmented with diagnostic measures, such as broken links, which are used to show why precision measures are depressed as well as the quality of the search engine's crawling mechanism. Findings - The paper shows how to use diagnostic measures in conjunction with precision in order to evaluate web search. Practical implications - The methodology presented in this paper will be useful to any information professional who regularly uses web search as part of their information seeking and needs to evaluate web search services. Originality/value - The paper argues that the use of diagnostic measures is essential in web search, as precision measures on their own do not allow a searcher to understand why search results differ between search engines.
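     The pairing the abstract describes — precision at N alongside a diagnostic measure for unreachable results — can be sketched as follows; the label vocabulary ('rel', 'nonrel', 'broken') is an illustrative assumption, not the paper's own notation:

```python
def precision_at_n(judged, n=10):
    """Fraction of the top-n results judged relevant.

    judged: labels for the ranked results, e.g. 'rel', 'nonrel', 'broken'.
    """
    top = judged[:n]
    return sum(1 for label in top if label == "rel") / len(top)

def broken_link_rate(judged, n=10):
    """Diagnostic measure: share of the top-n results that could not be fetched,
    helping explain why the precision figure is depressed."""
    top = judged[:n]
    return sum(1 for label in top if label == "broken") / len(top)

results = ["rel", "broken", "rel", "nonrel", "broken"]
print(precision_at_n(results, 5))    # 0.4
print(broken_link_rate(results, 5))  # 0.4
```

Reporting the two numbers side by side distinguishes a poor ranking function from a stale crawl, which a precision figure alone cannot do.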
  17. Kwok, K.-L.: Ten years of ad hoc retrieval at TREC using PIRCS (2005) 0.01
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees, u. D.K. Harman
  18. Beaulieu, M.: Approaches to user-based studies in information seeking and retrieval : a Sheffield perspective (2003) 0.01
    Source
    Journal of information science. 29(2003) no.4, S.239-248
  19. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.01
    Abstract
    This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  20. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.01
    Abstract
     Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text Retrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems about precision and recall are also highlighted and discussed.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.6, S.530-543
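     ADM rewards a system for estimating relevance close to the user's own graded judgement rather than for a binary hit. A minimal sketch of the measure as the abstract describes it — one minus the average distance between system and user relevance estimates — with illustrative variable names:

```python
def adm(system_rel, user_rel):
    """Average distance measure: 1 minus the mean absolute difference between
    the system's and the user's relevance estimates (both in [0, 1])."""
    assert len(system_rel) == len(user_rel)
    distance = sum(abs(s - u) for s, u in zip(system_rel, user_rel))
    return 1.0 - distance / len(system_rel)

# Binary system estimates vs. graded user judgements for five documents
print(adm([1.0, 1.0, 0.0, 0.0, 1.0], [0.8, 1.0, 0.2, 0.0, 0.4]))  # ≈ 0.8
```

Unlike precision and recall, the measure degrades smoothly as the system's relevance estimates drift away from the user's, instead of flipping at a binary threshold.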

Types

  • a 108
  • m 2
  • el 1
  • s 1