Search (44 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Robertson, S.: On the history of evaluation in IR (2009) 0.02
    0.021222979 = product of:
      0.06366894 = sum of:
        0.06366894 = product of:
          0.12733787 = sum of:
            0.12733787 = weight(_text_:history in 3653) [ClassicSimilarity], result of:
              0.12733787 = score(doc=3653,freq=4.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.5814978 = fieldWeight in 3653, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3653)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper is a personal take on the history of evaluation experiments in information retrieval. It describes some of the early experiments that were formative in our understanding, and goes on to discuss the current dominance of TREC (the Text REtrieval Conference) and to assess its impact.
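The explain tree shown with each hit has the shape of Lucene's ClassicSimilarity (tf-idf) scoring. Assuming that is the engine behind this catalog, the top score above can be reproduced from the statistics in the tree; the function below is an illustrative sketch, not part of the catalog itself.

```python
import math

# Recompute the score of result 1 from the statistics in its explain tree,
# following Lucene's ClassicSimilarity formula (assumed, not confirmed by
# the catalog): score = queryWeight * fieldWeight * coord factors.
def classic_similarity_score(freq, doc_freq, max_docs, query_norm,
                             field_norm, coord_factors):
    tf = math.sqrt(freq)                             # tf(freq=4.0) = 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=1146, maxDocs=44218)
    query_weight = idf * query_norm                  # 0.21898... in the tree
    field_weight = tf * idf * field_norm             # 0.58149... in the tree
    score = query_weight * field_weight              # 0.12733... in the tree
    for c in coord_factors:                          # coord(1/2), coord(1/3)
        score *= c
    return score

score = classic_similarity_score(
    freq=4.0, doc_freq=1146, max_docs=44218,
    query_norm=0.047072954, field_norm=0.0625,
    coord_factors=[0.5, 1 / 3],
)
```

Each factor maps to one line of the explain tree: the square-rooted term frequency, the smoothed inverse document frequency, the query and field normalizations, and the two coord factors for one of two clauses and one of three query parts matching.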
  2. Tague-Sutcliffe, J.: Information retrieval experimentation (2009) 0.02
    0.015006913 = product of:
      0.045020737 = sum of:
        0.045020737 = product of:
          0.09004147 = sum of:
            0.09004147 = weight(_text_:history in 3801) [ClassicSimilarity], result of:
              0.09004147 = score(doc=3801,freq=2.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.41118103 = fieldWeight in 3801, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3801)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Jean Tague-Sutcliffe was an important figure in information retrieval experimentation. Here, she reviews the history of IR research, and provides a description of the fundamental paradigm of information retrieval experimentation that continues to dominate the field.
  3. Voorhees, E.M.: Text REtrieval Conference (TREC) (2009) 0.02
    0.015006913 = product of:
      0.045020737 = sum of:
        0.045020737 = product of:
          0.09004147 = sum of:
            0.09004147 = weight(_text_:history in 3890) [ClassicSimilarity], result of:
              0.09004147 = score(doc=3890,freq=2.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.41118103 = fieldWeight in 3890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3890)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This entry summarizes the history, results, and impact of the Text REtrieval Conference (TREC), a workshop series designed to support the information retrieval community by building the infrastructure necessary for large-scale evaluation of retrieval technology.
  4. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.014881384 = product of:
      0.04464415 = sum of:
        0.04464415 = product of:
          0.0892883 = sum of:
            0.0892883 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.0892883 = score(doc=262,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  5. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.014881384 = product of:
      0.04464415 = sum of:
        0.04464415 = product of:
          0.0892883 = sum of:
            0.0892883 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.0892883 = score(doc=6418,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  6. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.014881384 = product of:
      0.04464415 = sum of:
        0.04464415 = product of:
          0.0892883 = sum of:
            0.0892883 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.0892883 = score(doc=6438,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
  7. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    0.014881384 = product of:
      0.04464415 = sum of:
        0.04464415 = product of:
          0.0892883 = sum of:
            0.0892883 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.0892883 = score(doc=5089,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 18:43:54
  8. Robertson, S.E.; Walker, S.; Beaulieu, M.: Laboratory experiments with Okapi : participation in the TREC programme (1997) 0.01
    0.0131310485 = product of:
      0.039393146 = sum of:
        0.039393146 = product of:
          0.07878629 = sum of:
            0.07878629 = weight(_text_:history in 2216) [ClassicSimilarity], result of:
              0.07878629 = score(doc=2216,freq=2.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.3597834 = fieldWeight in 2216, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2216)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Briefly reviews the history of laboratory testing of information retrieval systems, focusing on the idea of a general purpose test collection of documents, queries and relevance judgements. Gives an overview of the methods used in TREC (Text REtrieval Conference), which is concerned with an ideal test collection, and discusses the Okapi team's participation in TREC. Also discusses some of the issues surrounding the difficult problem of interactive evaluation in TREC. The reconciliation of the requirements of the laboratory context with the concerns of interactive retrieval still has a long way to go.
  9. Davis, C.H.: From document retrieval to Web browsing : some universal concerns (1997) 0.01
    0.0131310485 = product of:
      0.039393146 = sum of:
        0.039393146 = product of:
          0.07878629 = sum of:
            0.07878629 = weight(_text_:history in 399) [ClassicSimilarity], result of:
              0.07878629 = score(doc=399,freq=2.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.3597834 = fieldWeight in 399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=399)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Computer-based systems can produce enormous retrieval sets even when good search logic is used. Sometimes this is desirable; more often it is not. Appropriate filters can limit search results, but they represent only a partial solution. Simple ranking techniques are needed that are both effective and easily understood by the humans doing the searching. Optimal search output, whether from a traditional database or the Internet, will result when intuitive interfaces are designed that inspire confidence while making the necessary mathematics transparent. Weighted term searching using powers of 2, a technique proposed early in the history of information retrieval, can be simplified and used in combination with modern graphics and textual input to achieve these results.
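The powers-of-2 weighting mentioned in the abstract above can be sketched in a few lines: giving the i-th query term the weight 2**i makes a document's total weight a unique signature of exactly which terms it matched, so output can be ranked and explained by that sum. The names below are illustrative, not from the cited paper.

```python
# Sketch of weighted term searching with powers of 2: each matched query
# term contributes a distinct power of two, so the total score doubles as
# a bitmask identifying the matched term subset.
def powers_of_two_score(query_terms, document_text):
    words = set(document_text.lower().split())
    score = 0
    for i, term in enumerate(query_terms):
        if term.lower() in words:
            score += 2 ** i  # each matched term sets one "bit"
    return score

docs = [
    "weighted term searching in information retrieval",
    "ranking search output from a traditional database",
]
query = ["weighted", "ranking", "retrieval"]
scores = [powers_of_two_score(query, d) for d in docs]
```

A score of 5 (binary 101), for instance, says precisely that the first and third query terms matched, which is what makes the scheme easy for searchers to interpret.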
  10. Voorhees, E.M.; Harman, D.K.: ¬The Text REtrieval Conference (2005) 0.01
    0.011371821 = product of:
      0.034115463 = sum of:
        0.034115463 = product of:
          0.06823093 = sum of:
            0.06823093 = weight(_text_:history in 5082) [ClassicSimilarity], result of:
              0.06823093 = score(doc=5082,freq=6.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.31158158 = fieldWeight in 5082, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5082)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This book chronicles the evolution of retrieval systems over the course of TREC. To be sure, there has already been a wealth of information written about TREC. Each conference has produced a proceedings containing general overviews of the various tasks, papers written by the individual participants, and evaluation results. Reports on expanded versions of TREC experiments frequently appear in the wider information retrieval literature. There also have been special issues of journals devoted to particular TRECs [3; 13] and particular TREC tasks [6; 4]. No single volume could hope to be a comprehensive record of all TREC-related research. Instead, this book looks to distill the overabundance of detail into a manageable whole that summarizes the main lessons learned from TREC. The book consists of three main parts. The first part contains introductory and descriptive chapters on TREC's history, the major products of TREC (the test collections), and the retrieval evaluation methodology. Part II includes chapters describing the major TREC "tracks," evaluations of special subtopics such as cross-language retrieval and question answering. Part III contains contributions from research groups that have participated in TREC. The epilogue to the book is written by Karen Sparck Jones, who reflects on the impact TREC has had on the information retrieval field. The structure of this introductory chapter is similar to that of the book as a whole. The chapter begins with a short history of TREC; expanded descriptions of specific aspects of the history are included in subsequent chapters to make those chapters self-contained. Section 1.2 describes TREC's track structure, which has been responsible for the growth of TREC and allows TREC to adapt to changing needs. The final section lists both the major accomplishments of TREC and some remaining challenges.
  11. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.01062956 = product of:
      0.03188868 = sum of:
        0.03188868 = product of:
          0.06377736 = sum of:
            0.06377736 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06377736 = score(doc=3103,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:55:22
  12. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.01062956 = product of:
      0.03188868 = sum of:
        0.03188868 = product of:
          0.06377736 = sum of:
            0.06377736 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06377736 = score(doc=3107,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  13. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.01
    0.01062956 = product of:
      0.03188868 = sum of:
        0.03188868 = product of:
          0.06377736 = sum of:
            0.06377736 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.06377736 = score(doc=2417,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Pages
    S.22-25
  14. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.00937932 = product of:
      0.028137958 = sum of:
        0.028137958 = product of:
          0.056275915 = sum of:
            0.056275915 = weight(_text_:history in 636) [ClassicSimilarity], result of:
              0.056275915 = score(doc=636,freq=8.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.25698814 = fieldWeight in 636, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
  15. Saracevic, T.: Effects of inconsistent relevance judgments on information retrieval test results : a historical perspective (2008) 0.01
    0.00937932 = product of:
      0.028137958 = sum of:
        0.028137958 = product of:
          0.056275915 = sum of:
            0.056275915 = weight(_text_:history in 5585) [ClassicSimilarity], result of:
              0.056275915 = score(doc=5585,freq=2.0), product of:
                0.21898255 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.047072954 = queryNorm
                0.25698814 = fieldWeight in 5585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5585)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The main objective of information retrieval (IR) systems is to retrieve information or information objects relevant to user requests and possible needs. In IR tests, retrieval effectiveness is established by comparing IR systems' retrievals (systems relevance) with users' or user surrogates' assessments (user relevance), where user relevance is treated as the gold standard for performance evaluation. Relevance is a human notion, and establishing relevance by humans is fraught with a number of problems, inconsistency in judgment being one of them. The aim of this critical review is to explore the relationship between relevance on the one hand and testing of IR systems and procedures on the other. Critics of IR tests raised the issue of the validity of IR tests because they were based on relevance judgments that are inconsistent. This review traces and synthesizes experimental studies dealing with (1) inconsistency of relevance judgments by people, (2) effects of such inconsistency on results of IR tests and (3) reasons for retrieval failures. A historical context for these studies and for IR testing is provided, including an assessment of Lancaster's (1969) evaluation of MEDLARS and its unique place in the history of IR evaluation.
  16. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    0.0085036475 = product of:
      0.025510943 = sum of:
        0.025510943 = product of:
          0.051021885 = sum of:
            0.051021885 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.051021885 = score(doc=5002,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    19. 3.1996 11:22:12
  17. Sanderson, M.: ¬The Reuters test collection (1996) 0.01
    0.0085036475 = product of:
      0.025510943 = sum of:
        0.025510943 = product of:
          0.051021885 = sum of:
            0.051021885 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.051021885 = score(doc=6971,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  18. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.01
    0.0085036475 = product of:
      0.025510943 = sum of:
        0.025510943 = product of:
          0.051021885 = sum of:
            0.051021885 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.051021885 = score(doc=744,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 8.1996 22:01:00
  19. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.01
    0.0085036475 = product of:
      0.025510943 = sum of:
        0.025510943 = product of:
          0.051021885 = sum of:
            0.051021885 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.051021885 = score(doc=3087,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was discussion of retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  20. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.0085036475 = product of:
      0.025510943 = sum of:
        0.025510943 = product of:
          0.051021885 = sum of:
            0.051021885 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.051021885 = score(doc=3572,freq=2.0), product of:
                0.16484147 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047072954 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.3, S.24-26,28

Languages

  • English 39
  • German 3
  • French 1
