Search (131 results, page 2 of 7)

  • theme_ss:"Retrievalstudien"
  • year_i:[2000 TO 2010}
  1. Hiemstra, D.; Kraaij, W.: A language-modeling approach to TREC (2005) 0.03
    Date
    29. 3.1996 18:16:49
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  2. Sparck Jones, K.: Metareflections on TREC (2005) 0.03
    Date
    29. 3.1996 18:16:49
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  3. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.03
    LCSH
    Information storage and retrieval systems / Evaluation
    RSWK
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Subject
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
    Information storage and retrieval systems / Evaluation
  4. Alemayehu, N.: Analysis of performance variation using query expansion (2003) 0.03
    Abstract
    Information retrieval performance evaluation is commonly based on the classical recall and precision figures or graphs. However, important information indicating causes for variation may remain hidden under the average recall and precision figures. Identifying significant causes for variation can help researchers and developers to focus on opportunities for improvement that underlie the averages. This article presents a case study showing the potential of a statistical repeated-measures analysis of variance for testing the significance of factors in retrieval performance variation. The TREC-9 Query Track performance data is used as a case study, and the factors studied are retrieval method, topic, and their interaction. The results show that retrieval method, topic, and their interaction are all significant. A topic-level analysis is also made to see the nature of variation in the performance of retrieval methods across topics. The observed retrieval performances of expansion runs are truly significant improvements for most of the topics. Analyses of the effect of query expansion on document ranking confirm that expansion affects ranking positively.
    Date
    29. 3.2003 19:28:33
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.5, S.379-391
  5. TREC: experiment and evaluation in information retrieval (2005) 0.03
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Contains the following contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Date
    29. 3.1996 18:16:49
    Footnote
    Review in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    LCSH
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Subject
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
  6. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.03
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the result of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims at stimulating researchers into considering user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  7. Voorhees, E.M.; Harman, D.K.: The Text REtrieval Conference (2005) 0.03
    Abstract
    Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
    This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
    Date
    29. 3.1996 18:16:49
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  8. Brown, E.W.; Carmel, D.; Franz, M.; Ittycheriah, A.; Kanungo, T.; Maarek, Y.; McCarley, J.S.; Mack, R.L.; Prager, J.M.; Smith, J.R.; Soffer, A.; Zien, J.Y.; Marwick, A.D.: IBM research activities at TREC (2005) 0.02
    Date
    29. 3.1996 18:16:49
    Source
    TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman
  9. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  10. Cole, C.: Intelligent information retrieval : Part IV: Testing the timing of two information retrieval devices in a naturalistic setting (2001) 0.02
    Source
    Information processing and management. 37(2001) no.1, S.163-182
  11. Mandl, T.: Web- und Multimedia-Dokumente : Neuere Entwicklungen bei der Evaluierung von Information Retrieval Systemen (2003) 0.02
    Abstract
    The amount of data on the Internet continues to grow rapidly, and with it the need for high-quality information retrieval services for orientation and problem-oriented search. Deciding whether to use or procure information retrieval software requires meaningful evaluation results. This paper presents recent developments in the evaluation of information retrieval systems and identifies a trend towards the specialisation and diversification of evaluation studies, which increases the realism of the results. The focus is on the retrieval of domain-specific texts, Web pages, and multimedia objects.
    Source
    Information - Wissenschaft und Praxis. 54(2003) H.4, S.203-210
  12. Buckley, C.; Voorhees, E.M.: Retrieval evaluation with incomplete information (2004) 0.02
    Source
    SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al.
  13. Keenan, S.; Smeaton, A.F.; Keogh, G.: ¬The effect of pool depth on system evaluation in TREC (2001) 0.02
    Abstract
    The TREC benchmarking exercise for information retrieval (IR) experiments has provided a forum and an opportunity for IR researchers to evaluate the performance of their approaches to the IR task and has resulted in improvements in IR effectiveness. Typically, retrieval performance has been measured in terms of precision and recall, and comparisons between different IR approaches have been based on these measures. These measures are in turn dependent on the so-called "pool depth" used to discover relevant documents. Whereas there is evidence to suggest that the pool depth size used for TREC evaluations adequately identifies the relevant documents in the entire test data collection, we consider how it affects the evaluations of individual systems. The data used comes from the Sixth TREC conference, TREC-6. By fitting appropriate regression models we explore whether different pool depths confer advantages or disadvantages on different retrieval systems when they are compared. As a consequence of this model fitting, a pair of measures for each retrieval run, which are related to precision and recall, emerge. For each system, these give an extrapolation for the number of relevant documents the system would have been deemed to have retrieved if an indefinitely large pool size had been used, and also a measure of the sensitivity of each system to pool size. We concur that even on the basis of analyses of individual systems, the pool depth of 100 used by TREC is adequate.
    Date
    29. 9.2001 14:01:50
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.7, S.570-574
  14. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.02
    Abstract
    This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  15. Della Mea, V.; Mizzaro, S.: Measuring retrieval effectiveness : a new proposal and a first experimental validation (2004) 0.02
    Abstract
    Most common effectiveness measures for information retrieval systems are based on the assumptions of binary relevance (either a document is relevant to a given query or it is not) and binary retrieval (either a document is retrieved or it is not). In this article, these assumptions are questioned, and a new measure named ADM (average distance measure) is proposed, discussed from a conceptual point of view, and experimentally validated on Text REtrieval Conference (TREC) data. Both conceptual analysis and experimental evidence demonstrate ADM's adequacy in measuring the effectiveness of information retrieval systems. Some potential problems about precision and recall are also highlighted and discussed.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.6, S.530-543
  16. Tague-Sutcliffe, J.: Information retrieval experimentation (2009) 0.02
    Abstract
    Jean Tague-Sutcliffe was an important figure in information retrieval experimentation. Here, she reviews the history of IR research, and provides a description of the fundamental paradigm of information retrieval experimentation that continues to dominate the field.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  17. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.02
    Abstract
    We report data fusion experiments carried out on the four best-performing retrieval models from TREC 5. Three were conceptually/algorithmically very different from one another; one was algorithmically similar to one of the former. The objective of the test was to observe the performance of the 11 logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded as a representation of a unique interpretation of information retrieval (IR). It predicts that only fusions of very different, but equally good, IR models may outperform each constituent as well as their intermediate fusions. Two kinds of experiments were carried out. One tested restricted fusions, which entails that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were used. Performance was measured by precision and recall, with document cutoff values (DCVs) at 100 and 15 documents, respectively. Results show that restricted fusions made of two, three, or four cognitively/algorithmically very different retrieval models perform significantly better than do the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion in IR. Data fusion improves retrieval performance over their constituent IR models only if the models all are quite conceptually/algorithmically dissimilar and equally well performing, in that order of importance.
    Date
    22. 3.2009 18:48:28
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.646-654
  18. Pirkola, A.; Järvelin, K.: Employing the resolution power of search keys (2001) 0.02
    Abstract
    Search key resolution power is analyzed in the context of a request, i.e., among the set of search keys for the request. Methods of characterizing the resolution power of keys automatically are studied, and the effects search keys of varying resolution power have on retrieval effectiveness are analyzed. It is shown that it often is possible to identify the best key of a query while the discrimination between the remaining keys presents problems. It is also shown that query performance is improved by suitably using the best key in a structured query. The tests were run with InQuery in a subcollection of the TREC collection, which contained some 515,000 documents.
    Date
    29. 9.2001 14:01:42
    Source
    Journal of the American Society for Information Science and technology. 52(2001) no.7, S.575-583
  19. Wilbur, W.J.: Global term weights for document retrieval learned from TREC data (2001) 0.02
    Source
    Journal of information science. 27(2001) no.5, S.303-310
  20. Hersh, W.R.; Over, P.: Interactivity at the Text Retrieval Conference (TREC) (2001) 0.02
    Source
    Information processing and management. 37(2001) no.3, S.365-367

Languages

  • e 109
  • d 20
  • m 1

Types

  • a 122
  • m 5
  • el 3
  • s 3
  • r 2
  • x 2