Search (2 results, page 1 of 1)

  • × classification_ss:"025.04 / dc22"
  • × year_i:[2000 TO 2010}
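The two active facet filters above correspond to Solr `fq` parameters; the mixed-bracket range `year_i:[2000 TO 2010}` is Solr syntax for an inclusive lower bound and an exclusive upper bound. A minimal sketch of the underlying request, assuming a local Solr endpoint and a core named `catalog` (the query terms are inferred from the score explanations further down and are an assumption; the two `fq` values are copied verbatim from the filters):

```python
# Hypothetical reconstruction of the request behind this results page.
from urllib.parse import urlencode

params = [
    ("q", "storage retrieval systems congresses"),  # assumed: four clauses visible in the explain trees
    ("fq", 'classification_ss:"025.04 / dc22"'),    # exact-match filter on the classification field
    ("fq", "year_i:[2000 TO 2010}"),                # [ = 2000 inclusive, } = 2010 exclusive
    ("debugQuery", "true"),                         # emits the per-document score explanations
]
url = "http://localhost:8983/solr/catalog/select?" + urlencode(params)
print(url)
```

Because `coord(4/6)` appears in the first explain tree, the real query had six clauses of which four matched; only those four can be read off the output, so the `q` above is necessarily incomplete.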
  1. TREC: experiment and evaluation in information retrieval (2005) 0.14
    0.14080381 = product of:
      0.21120572 = sum of:
        0.049735278 = weight(_text_:storage in 636) [ClassicSimilarity], result of:
          0.049735278 = score(doc=636,freq=4.0), product of:
            0.23366846 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.04288404 = queryNorm
            0.21284549 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.06412113 = weight(_text_:retrieval in 636) [ClassicSimilarity], result of:
          0.06412113 = score(doc=636,freq=70.0), product of:
            0.12972058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04288404 = queryNorm
            0.49430186 = fieldWeight in 636, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.01937652 = weight(_text_:systems in 636) [ClassicSimilarity], result of:
          0.01937652 = score(doc=636,freq=6.0), product of:
            0.13179013 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04288404 = queryNorm
            0.14702557 = fieldWeight in 636, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.0779728 = product of:
          0.1559456 = sum of:
            0.1559456 = weight(_text_:congresses in 636) [ClassicSimilarity], result of:
              0.1559456 = score(doc=636,freq=8.0), product of:
                0.347934 = queryWeight, product of:
                  8.113368 = idf(docFreq=35, maxDocs=44218)
                  0.04288404 = queryNorm
                0.44820452 = fieldWeight in 636, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  8.113368 = idf(docFreq=35, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
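The explain tree above can be reproduced by hand. Under Lucene's ClassicSimilarity, each matched term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm; ratios of matched to total clauses enter as coord factors. A sketch that recomputes the 0.14 score of result 1 from the numbers printed in the tree:

```python
import math

# Values copied from the explain output for doc 636
query_norm = 0.04288404
field_norm = 0.01953125  # length normalization stored for this document's field

# per matched term: (idf, raw term frequency)
terms = {
    "storage":    (5.4488444, 4.0),
    "retrieval":  (3.024915, 70.0),
    "systems":    (3.0731742, 6.0),
    "congresses": (8.113368, 8.0),
}

def term_score(idf, freq, coord=1.0):
    query_weight = idf * query_norm                    # e.g. 0.23366846 for "storage"
    field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
    return query_weight * field_weight * coord

# "congresses" sits in a nested boolean clause with coord(1/2) = 0.5
subtotal = sum(
    term_score(idf, freq, 0.5 if term == "congresses" else 1.0)
    for term, (idf, freq) in terms.items()
)
score = subtotal * 4 / 6  # top-level coord(4/6): 4 of 6 query clauses matched
print(score)              # ≈ 0.14080381
```

The second result follows the same arithmetic with its own freq and fieldNorm values and a top-level coord(3/6), since only three clauses matched there.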
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Review in: JASIST 58(2007) no.6, p.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important, and its success has been sustained mainly by its continuous adaptation to emerging information retrieval needs. Indeed, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The long and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were pleasantly surprised by the book. Well structured and written, its chapters are self-contained, and references to specialized and more detailed publications appear throughout, which makes it easy to pursue the different aspects analyzed in the text in more depth. This book succeeds in compiling TREC's evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort of the authors and their experience in the field, it can satisfy the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers wishing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    LCSH
    Text processing (Computer science) / Congresses
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Subject
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
    Text processing (Computer science) / Congresses
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
  2. O'Connor, B.C.; Kearns, J.; Anderson, R.L.: Doing things with information : beyond indexing and abstracting (2008) 0.08
    0.08470565 = product of:
      0.1694113 = sum of:
        0.07957644 = weight(_text_:storage in 4297) [ClassicSimilarity], result of:
          0.07957644 = score(doc=4297,freq=4.0), product of:
            0.23366846 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.04288404 = queryNorm
            0.34055278 = fieldWeight in 4297, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=4297)
        0.04247787 = weight(_text_:retrieval in 4297) [ClassicSimilarity], result of:
          0.04247787 = score(doc=4297,freq=12.0), product of:
            0.12972058 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04288404 = queryNorm
            0.32745665 = fieldWeight in 4297, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=4297)
        0.047356993 = weight(_text_:systems in 4297) [ClassicSimilarity], result of:
          0.047356993 = score(doc=4297,freq=14.0), product of:
            0.13179013 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04288404 = queryNorm
            0.3593364 = fieldWeight in 4297, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=4297)
      0.5 = coord(3/6)
    
    Abstract
    The relationship between a person with a question and a source of information is complex. Indexing and abstracting often fail because too much emphasis is put on the mechanics of description and too little on what ought to be represented. The research literature suggests that inappropriate representation causes searches to fail a significant share of the time, perhaps even in a majority of cases. "Doing Things with Information" seeks to rectify this unfortunate situation by emphasizing methods of modeling and constructing appropriate representations of such questions and documents. Students in programs of information studies will find focal points for discussion about system design and the refinement of existing systems. Librarians, scholars, and those who work within large document collections, whether paper or electronic, will find insights into the strengths and weaknesses of the access systems they use.
    Footnote
    The authors state that this book emerged from a proposal to do a second edition of Explorations in Indexing and Abstracting (O'Connor 1996); much of its content is the result of the authors' reaction to the reviews of that first edition and their realization of "the necessity to address some more fundamental questions". Review in: KO 38(2011) no.1, p.62-64 (L.F. Spiteri): "This book provides a good overview of the relationship between the document and the user; in this regard, it reinforces the importance of the client-centred approach to the design of document representation systems. In the final chapter, the authors state: "We have offered examples of new ways to think about messages in all sorts of media and how they might be discovered, analyzed, synthesized, and generated. We brought together philosophical, scientific, and engineering notions into a fundamental model for just how we might understand doing this with information" (p. 225). The authors have certainly succeeded in highlighting the complex processes, nature, and implications of document representation systems, although, as has been seen, the novelty of some of their discussions and suggestions is sometimes limited. With further explanation, the FOC model may serve as a useful way to understand how to build document representation systems that better meet user needs."; cf. http://www.ergon-verlag.de/isko_ko/downloads/ko_38_2011_1e.pdf.
    LCSH
    Information retrieval
    Information storage and retrieval systems / Design
    RSWK
    Information-Retrieval-System
    Subject
    Information-Retrieval-System
    Information retrieval
    Information storage and retrieval systems / Design