Search (47 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.05
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al.
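
    The relevance figure shown after each entry is a Lucene ClassicSimilarity (tf-idf) score. As a rough guide to how such scores arise, the sketch below computes a single per-term weight; the helper names are ours, and the numbers come from the catalogue's score breakdown for the term "21st" in this first entry.

      import math

      # One per-term weight in Lucene's ClassicSimilarity (tf-idf):
      # weight = queryWeight * fieldWeight
      #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          i = idf(doc_freq, max_docs)
          return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

      # Values reported for "21st" in entry 1; prints roughly 0.0755.
      print(term_weight(freq=2.0, doc_freq=385, max_docs=44218,
                        query_norm=0.041479383, field_norm=0.0390625))

    A document's overall score is then the sum of such weights over the matching query terms, multiplied by a coordination factor (e.g. 2/6 when two of six query clauses match).
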
  2. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.04
    
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  3. Robertson, S.: On the history of evaluation in IR (2009) 0.02
    
    Abstract
    This paper is a personal take on the history of evaluation experiments in information retrieval. It describes some of the early experiments that were formative in our understanding, and goes on to discuss the current dominance of TREC (the Text REtrieval Conference) and to assess its impact.
  4. Tague-Sutcliffe, J.: Information retrieval experimentation (2009) 0.01
    
    Abstract
    Jean Tague-Sutcliffe was an important figure in information retrieval experimentation. Here, she reviews the history of IR research, and provides a description of the fundamental paradigm of information retrieval experimentation that continues to dominate the field.
  5. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.01
    
    Abstract
    This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
  6. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.01
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text Retrieval Conference), which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval has a long way to go.
  7. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.01
    
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results.
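
    The "powers of 2" weighting mentioned above is commonly read as follows: each query term is assigned a distinct power of 2 as its weight (highest power for the most important term), so a document's summed weight uniquely encodes which terms it matched, and sorting by that sum groups the output by term combination. A minimal, purely illustrative sketch (terms, documents and function names are invented):

      def rank_by_power_of_two_weights(query_terms, documents):
          # query_terms: list of terms, most important first.
          # documents: dict of doc_id -> set of terms the document contains.
          n = len(query_terms)
          weights = {t: 2 ** (n - 1 - i) for i, t in enumerate(query_terms)}
          scored = [(sum(w for t, w in weights.items() if t in terms), doc_id)
                    for doc_id, terms in documents.items()]
          return sorted(((s, d) for s, d in scored if s > 0), reverse=True)

      docs = {"d1": {"retrieval", "web"}, "d2": {"retrieval", "ranking"}, "d3": {"web"}}
      print(rank_by_power_of_two_weights(["retrieval", "ranking", "web"], docs))
      # [(6, 'd2'), (5, 'd1'), (1, 'd3')]: ranked by which weighted terms matched
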
  8. Voorhees, E.M.; Harman, D.K.: The Text REtrieval Conference (2005) 0.01
    
    Abstract
    This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
  9. TREC: experiment and evaluation in information retrieval (2005) 0.01
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Footnote
    Review in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  10. Saracevic, T.: Effects of inconsistent relevance judgments on information retrieval test results : a historical perspective (2008) 0.01
    
    Abstract
    The main objective of information retrieval (IR) systems is to retrieve information or information objects relevant to user requests and possible needs. In IR tests, retrieval effectiveness is established by comparing IR systems' retrievals (system relevance) with users' or user surrogates' assessments (user relevance), where user relevance is treated as the gold standard for performance evaluation. Relevance is a human notion, and establishing relevance by humans is fraught with a number of problems, inconsistency in judgment being one of them. The aim of this critical review is to explore the relationship between relevance on the one hand and testing of IR systems and procedures on the other. Critics of IR tests raised the issue of the validity of those tests because they were based on relevance judgments that are inconsistent. This review traces and synthesizes experimental studies dealing with (1) inconsistency of relevance judgments by people, (2) effects of such inconsistency on results of IR tests, and (3) reasons for retrieval failures. A historical context for these studies and for IR testing is provided, including an assessment of Lancaster's (1969) evaluation of MEDLARS and its unique place in the history of IR evaluation.
  11. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    
    Date
    20.10.2000 12:22:23
  12. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    
    Source
    Online. 22(1998) no.6, S.57-58
  13. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    
    Date
    11. 8.2001 16:22:19
  14. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    
    Date
    22. 7.2006 18:43:54
  15. Behnert, C.; Lewandowski, D.: A framework for designing retrieval effectiveness studies of library information systems using human relevance assessments (2017) 0.00
    
    Abstract
    Purpose: This paper demonstrates how to apply traditional information retrieval evaluation methods based on standards from the Text REtrieval Conference (TREC) and web search evaluation to all types of modern library information systems including online public access catalogs, discovery systems, and digital libraries that provide web search features to gather information from heterogeneous sources. Design/methodology/approach: We apply conventional procedures from information retrieval evaluation to the library information system context considering the specific characteristics of modern library materials. Findings: We introduce a framework consisting of five parts: (1) search queries, (2) search results, (3) assessors, (4) testing, and (5) data analysis. We show how to deal with comparability problems resulting from diverse document types, e.g., electronic articles vs. printed monographs, and what issues need to be considered for retrieval tests in the library context. Practical implications: The framework can be used as a guideline for conducting retrieval effectiveness studies in the library context. Originality/value: Although a considerable amount of research has been done on information retrieval evaluation, and standards for conducting retrieval effectiveness studies do exist, to our knowledge this is the first attempt to provide a systematic framework for evaluating the retrieval effectiveness of twenty-first-century library information systems. We demonstrate which issues must be considered and what decisions must be made by researchers prior to a retrieval test.
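
    As a purely illustrative reading of the five-part framework described above (search queries, search results, assessors, testing, data analysis), a small retrieval test might be organised as in the sketch below; all queries, documents and judgements are invented, not taken from the paper.

      from statistics import mean

      def precision_at_k(ranked_ids, relevant_ids, k):
          return sum(1 for d in ranked_ids[:k] if d in relevant_ids) / k

      # (1) search queries and (2) search results per query, best first
      queries = ["discovery system usability", "open access monographs"]
      results = {queries[0]: ["d3", "d7", "d1", "d9"],
                 queries[1]: ["d2", "d5", "d8", "d4"]}
      # (3) assessors' relevance judgements per query
      judged_relevant = {queries[0]: {"d3", "d1"}, queries[1]: {"d5"}}
      # (4) testing and (5) data analysis
      p_at_4 = {q: precision_at_k(results[q], judged_relevant[q], k=4) for q in queries}
      print(p_at_4, "mean P@4 =", mean(p_at_4.values()))
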
  16. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    
    Date
    27. 2.1999 20:55:22
  17. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing tak (and other work) (1997) 0.00
    
    Date
    27. 2.1999 20:59:22
  18. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.00
    
    Pages
    S.22-25
  19. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1985) 0.00
    
    Abstract
    A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems (UDC, Facet, Uniterm, and Alphabetic-Subject Headings) to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively, and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate, as it tends to diminish Cleverdon's contribution, which was not methodological (such contributions can be left to benchmark researchers) but rather creative: the introduction of a new paradigm, one that proved to be eminently productive. The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the dependent variable, index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
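
    The recall and precision definitions that emerged from Cranfield I, as quoted above, translate directly into set arithmetic; a small worked example with invented document sets:

      # recall    = relevant documents retrieved / all relevant documents
      # precision = relevant documents retrieved / all retrieved documents
      retrieved = {"d1", "d2", "d3", "d4", "d5"}
      relevant = {"d2", "d5", "d9", "d11"}
      hits = retrieved & relevant
      print("recall =", len(hits) / len(relevant))      # 2/4 = 0.50
      print("precision =", len(hits) / len(retrieved))  # 2/5 = 0.40
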
  20. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.00
    
    Date
    19. 3.1996 11:22:12

Languages

  • e 42
  • d 3
  • f 1

Types

  • a 42
  • s 4
  • m 3