Search (4 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"m"
  1. TREC: experiment and evaluation in information retrieval (2005) 0.02
    0.020176543 = sum of:
      0.0075204372 = product of:
        0.030081749 = sum of:
          0.030081749 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
            0.030081749 = score(doc=636,freq=2.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.12592064 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.25 = coord(1/4)
      0.012656106 = product of:
        0.025312211 = sum of:
          0.025312211 = weight(_text_:j in 636) [ClassicSimilarity], result of:
            0.025312211 = score(doc=636,freq=6.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.1520165 = fieldWeight in 636, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.5 = coord(1/2)
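    The explain tree above can be checked by hand. Under Lucene's ClassicSimilarity, each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(tf) × idf × fieldNorm, and each clause's sum is scaled by its coord factor. A minimal Python sketch that reproduces the 0.02 score from the values shown above (the helper name is ours, not a Lucene API call):

    from math import sqrt

    def classic_term_score(freq, idf, query_norm, field_norm):
        # One term's contribution under ClassicSimilarity (TF-IDF):
        # queryWeight = idf * queryNorm, fieldWeight = sqrt(freq) * idf * fieldNorm
        return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

    QUERY_NORM = 0.052402776
    FIELD_NORM = 0.01953125   # fieldNorm(doc=636)

    # _text_:authors (freq=2.0, idf=4.558814), clause scaled by coord(1/4)
    authors = classic_term_score(2.0, 4.558814, QUERY_NORM, FIELD_NORM) * 0.25
    # _text_:j (freq=6.0, idf=3.1774964), clause scaled by coord(1/2)
    j = classic_term_score(6.0, 3.1774964, QUERY_NORM, FIELD_NORM) * 0.5

    print(authors + j)   # ~0.020176543, the score reported for result 1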
    
    Content
    Includes the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick. Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Review in: JASIST 58(2007) no.6, pp.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important and its success has been mainly supported by its continuous adaptation to emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  2. Ellis, D.: Progress and problems in information retrieval (1996) 0.01
    0.014199706 = product of:
      0.028399412 = sum of:
        0.028399412 = product of:
          0.056798823 = sum of:
            0.056798823 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
              0.056798823 = score(doc=789,freq=2.0), product of:
                0.1835056 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052402776 = queryNorm
                0.30952093 = fieldWeight in 789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=789)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26. 7.2002 20:22:46
  3. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.014199706 = product of:
      0.028399412 = sum of:
        0.028399412 = product of:
          0.056798823 = sum of:
            0.056798823 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.056798823 = score(doc=4049,freq=2.0), product of:
                0.1835056 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052402776 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks on large test collections. 93 research groups applied different techniques to information retrieval from the same large database, a procedure that makes it possible to compare the results. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  4. Cross-language information retrieval (1998) 0.00
    0.0037602186 = product of:
      0.0075204372 = sum of:
        0.0075204372 = product of:
          0.030081749 = sum of:
            0.030081749 = weight(_text_:authors in 6299) [ClassicSimilarity], result of:
              0.030081749 = score(doc=6299,freq=2.0), product of:
                0.2388945 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052402776 = queryNorm
                0.12592064 = fieldWeight in 6299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
    The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to by-pass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology which appears to apply LSI techniques in order to filter and rank incoming documents designed for testing CLIR systems. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field."
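    The seed-document protocol described above reduces to a rank-based measurement: for each query, record how high the known seed document appears in the ranking and aggregate over the queries. A minimal illustrative sketch, assuming a search(query) function that returns a ranked list of document ids from whatever CLIR system is under test (that function, and the choice of mean reciprocal rank as the aggregate, are ours, not details given in the text):

    def seed_document_score(queries, seed_doc_id, search, cutoff=100):
        # Average reciprocal rank of the seed document over all queries;
        # queries whose ranking misses the seed within the cutoff score 0.
        reciprocal_ranks = []
        for query in queries:
            ranking = search(query)[:cutoff]
            if seed_doc_id in ranking:
                reciprocal_ranks.append(1.0 / (ranking.index(seed_doc_id) + 1))
            else:
                reciprocal_ranks.append(0.0)
        return sum(reciprocal_ranks) / len(reciprocal_ranks)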