Literature on Information Organization
This database contains over 40,000 documents on topics from the fields of descriptive cataloguing, subject indexing, and information retrieval.
© 2015 W. Gödert, TH Köln, Institut für Informationswissenschaft
Powered by litecat, BIS Oldenburg
(As of: April 28, 2022)
Search results
Hits 1–12 of 12
-
1 Voorhees, E.M.: Text REtrieval Conference (TREC).
In: Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates. London : Taylor & Francis, 2009. pp.xx-xx.
Abstract: This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
Note: Cf. http://www.tandfonline.com/doi/book/10.1081/E-ELIS3.
Subject area: Retrieval studies
Object: TREC
-
2 Voorhees, E.M.: On test collections for adaptive information retrieval.
In: Information processing and management. 44(2008) no.6, pp.1879-1885.
Abstract: Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the "core competency" of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly-abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction slightly to include just a bit more characterization of the user will result in a dramatic loss of power or increase in cost of retrieval experiments. Defining a new, feasible abstraction for supporting adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior to a minimum number of essential factors.
Note: Contribution to a special issue on "Adaptive information retrieval"
Subject area: Retrieval studies
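The abstract above turns on a methodological point: under the Cranfield paradigm, systems are compared by an effectiveness score averaged over many topics, because the per-topic variability exceeds the per-system difference. The following minimal sketch illustrates that topic-averaged comparison with mean average precision; it is not code from the cited paper, and all systems and judgments in it are invented for illustration.

```python
# Sketch of Cranfield-style comparison: score each system per topic,
# then average over topics. All relevance data below is hypothetical.

def average_precision(ranked_rels, num_relevant):
    """AP for one topic: ranked_rels[i] is True if the document
    at rank i+1 was judged relevant."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / num_relevant if num_relevant else 0.0

def mean_average_precision(runs_by_topic):
    """MAP for one system: the mean of its per-topic AP values.
    For this sketch, num_relevant is taken as the relevant
    documents retrieved, not the collection-wide total."""
    aps = [average_precision(rels, sum(rels)) for rels in runs_by_topic]
    return sum(aps) / len(aps)

# Two hypothetical systems judged on the same three topics.
system_a = [[True, False, True], [False, True, True], [True, True, False]]
system_b = [[False, True, False], [True, False, True], [False, False, True]]

print(mean_average_precision(system_a))  # higher MAP
print(mean_average_precision(system_b))
```

With only three topics the comparison would be statistically meaningless; the abstract's point is that real experiments need a relatively large number of topics before such a MAP difference distinguishes the systems.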
-
3 Voorhees, E.M.: Question answering in TREC.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. pp.233-259.
Subject area: Retrieval studies ; Language retrieval
Object: TREC
-
4 Buckley, C. ; Voorhees, E.M.: Retrieval system evaluation.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. pp.53-78.
Subject area: Retrieval studies
Object: TREC
-
5 Voorhees, E.M. ; Harman, D.K.: The Text REtrieval Conference.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. pp.3-20.
Abstract: Text retrieval technology targets a problem that is all too familiar: finding relevant information in large stores of electronic documents. The problem is an old one, with the first research conference devoted to the subject held in 1958 [11]. Since then the problem has continued to grow as more information is created in electronic form and more people gain electronic access. The advent of the World Wide Web, where anyone can publish so everyone must search, is a graphic illustration of the need for effective retrieval technology. The Text REtrieval Conference (TREC) is a workshop series designed to build the infrastructure necessary for the large-scale evaluation of text retrieval technology, thereby accelerating its transfer into the commercial sector. The series is sponsored by the U.S. National Institute of Standards and Technology (NIST) and the U.S. Department of Defense. At the time of this writing, there have been twelve TREC workshops and preparations for the thirteenth workshop are under way. Participants in the workshops have been drawn from the academic, commercial, and government sectors, and have included representatives from more than twenty different countries. These collective efforts have accomplished a great deal: a variety of large test collections have been built for both traditional ad hoc retrieval and related tasks such as cross-language retrieval, speech retrieval, and question answering; retrieval effectiveness has approximately doubled; and many commercial retrieval systems now contain technology first developed in TREC.
This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC.
The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field.
The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
Subject area: Retrieval studies
Object: TREC
-
6 Voorhees, E.M. ; Garofolo, J.S.: Retrieving noisy text.
In: TREC: experiment and evaluation in information retrieval. Ed.: E.M. Voorhees and D.K. Harman. Cambridge, MA : MIT Press, 2005. pp.183-198.
Subject area: Retrieval studies
Object: TREC
-
7 Buckley, C. ; Voorhees, E.M.: Retrieval evaluation with incomplete information.
In: SIGIR'04: Proceedings of the 27th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval. Ed.: K. Järvelin et al. New York, NY : ACM Press, 2004. pp.25-32.
Subject area: Retrieval studies
-
8 Voorhees, E.M.: Variations in relevance judgements and the measurement of retrieval effectiveness.
In: Information processing and management. 36(2000) no.5, pp.697-716.
Subject area: Retrieval studies
-
9 Voorhees, E.M. ; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6).
In: Information processing and management. 36(2000) no.1, pp.3-36.
Subject area: Retrieval studies
Object: TREC
-
10 Voorhees, E.M. ; Harman, D.K.: Overview of the fifth Text Retrieval Conference (TREC-5).
In: The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman. Gaithersburg, MD : National Institute of Standards and Technology, 1997. pp.1-28.
(NIST special publication)
Subject area: Retrieval studies
Object: TREC
-
11 Voorhees, E.M.: Using WordNet to disambiguate word senses for text retrieval.
In: SIGIR forum. Special issue 1993, pp.171-180.
Object: WordNet
-
12 Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval.
In: Information processing and management. 22(1986) no.6, pp.465-476.
Subject area: Automatic indexing ; Retrieval algorithms