Search (9 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • type_ss:"s"
  1. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.05
    0.04877709 = product of:
      0.09755418 = sum of:
        0.09755418 = sum of:
          0.040948182 = weight(_text_:technology in 3087) [ClassicSimilarity], result of:
            0.040948182 = score(doc=3087,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.2632547 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
          0.05660599 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
            0.05660599 = score(doc=3087,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.30952093 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
      0.5 = coord(1/2)
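    Note: the tree above is Lucene ClassicSimilarity "explain" output. As a minimal sketch (plain arithmetic over the constants printed above, not a Lucene API call), each per-term score can be recomputed like this:
      import math

      def term_score(freq, idf, query_norm, field_norm):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight    # score = queryWeight * fieldWeight

      QUERY_NORM = 0.052224867                  # queryNorm shared by every tree

      technology = term_score(2.0, 2.978387, QUERY_NORM, 0.0625)  # _text_:technology
      term_22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.0625)    # _text_:22
      total = 0.5 * (technology + term_22)      # 0.5 = coord(1/2) from the tree
      print(technology, term_22, total)
      # ~0.0409482  ~0.0566060  ~0.0487771, matching the explain output up to
      # Lucene's 32-bit float rounding
    The same arithmetic applies to every hit below; only freq, idf, and fieldNorm change.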
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups applied different techniques, such as automated thesauri, term weighting, natural-language techniques, relevance feedback, and advanced pattern matching, to information retrieval from the same large database, which makes the results directly comparable. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  2. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.04
    0.036582813 = product of:
      0.073165625 = sum of:
        0.073165625 = sum of:
          0.030711137 = weight(_text_:technology in 3564) [ClassicSimilarity], result of:
            0.030711137 = score(doc=3564,freq=2.0), product of:
              0.15554588 = queryWeight, product of:
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.052224867 = queryNorm
              0.19744103 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.978387 = idf(docFreq=6114, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.042454492 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.042454492 = score(doc=3564,freq=2.0), product of:
              0.18288259 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052224867 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.5 = coord(1/2)
    
    Date
    9.1.1996 10:22:31
    Source
    ASIS'89. Managing information and technology. Proceedings of the 52nd annual meeting of the American Society for Information Science, Washington D.C., 30.10.-2.11.1989. Vol.26. Ed. by J. Katzer and G.B. Newby
  3. The First Text Retrieval Conference (TREC-1) (1993) 0.02
    0.020474091 = product of:
      0.040948182 = sum of:
        0.040948182 = product of:
          0.081896365 = sum of:
            0.081896365 = weight(_text_:technology in 3352) [ClassicSimilarity], result of:
              0.081896365 = score(doc=3352,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.5265094 = fieldWeight in 3352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.125 = fieldNorm(doc=3352)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
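    Note: hits 3-9 match only the "technology" clause, and their trees apply coord(1/2) at two nesting levels, so the lone term weight is quartered. A quick check in the same sketch style as above:
      print(0.081896365 * 0.5 * 0.5)  # -> 0.020474091..., the total shown for this hit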
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  4. The Fourth Text Retrieval Conference (TREC-4) (1996) 0.01
    0.010237046 = product of:
      0.020474091 = sum of:
        0.020474091 = product of:
          0.040948182 = sum of:
            0.040948182 = weight(_text_:technology in 7521) [ClassicSimilarity], result of:
              0.040948182 = score(doc=7521,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2632547 = fieldWeight in 7521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  5. The Sixth Text Retrieval Conference (TREC-6) (1998) 0.01
    0.010237046 = product of:
      0.020474091 = sum of:
        0.020474091 = product of:
          0.040948182 = sum of:
            0.040948182 = weight(_text_:technology in 4476) [ClassicSimilarity], result of:
              0.040948182 = score(doc=4476,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.2632547 = fieldWeight in 4476, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology, Information Technology Laboratory
  6. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.01
    0.007677784 = product of:
      0.015355568 = sum of:
        0.015355568 = product of:
          0.030711137 = sum of:
            0.030711137 = weight(_text_:technology in 3560) [ClassicSimilarity], result of:
              0.030711137 = score(doc=3560,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.19744103 = fieldWeight in 3560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    ASIS'88. Information technology: planning for the next fifty years. Proceedings of the 51st annual meeting of the American Society for Information Science, Atlanta, Georgia, 23-27.10.1988. Vol.25. Ed. by C.L. Borgman and E.Y.H. Pai
  7. Sievert, M.E.; McKinin, E.J.; Slough, M.: A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.01
    0.007677784 = product of:
      0.015355568 = sum of:
        0.015355568 = product of:
          0.030711137 = sum of:
            0.030711137 = weight(_text_:technology in 3563) [ClassicSimilarity], result of:
              0.030711137 = score(doc=3563,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.19744103 = fieldWeight in 3563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    ASIS'88. Information technology: planning for the next fifty years. Proceedings of the 51st annual meeting of the American Society for Information Science, Atlanta, Georgia, 23-27.10.1988. Vol.25. Ed. by C.L. Borgman and E.Y.H. Pai
  8. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.005540964 = product of:
      0.011081928 = sum of:
        0.011081928 = product of:
          0.022163857 = sum of:
            0.022163857 = weight(_text_:technology in 636) [ClassicSimilarity], result of:
              0.022163857 = score(doc=636,freq=6.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.1424908 = fieldWeight in 636, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
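    Note: this hit shows the two document-side factors moving independently: tf grows only with the square root of the term frequency, while the small fieldNorm reflects a much longer field. Assuming ClassicSimilarity's default length norm of 1/sqrt(field length), stored as a lossy 8-bit float (an assumption; index-time boosts would also fold into the norm):
      import math
      print(math.sqrt(6.0))      # 2.4494898..., the tf printed above
      print(0.01953125 ** -2)    # ~2621 terms implied by this fieldNorm, versus
                                 # ~256 terms for hit 1 (fieldNorm 0.0625)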
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Footnote
    Review in: JASIST 58(2007) no.6, pp. 910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
  9. Cross-language information retrieval (1998) 0.00
    0.0031990767 = product of:
      0.0063981535 = sum of:
        0.0063981535 = product of:
          0.012796307 = sum of:
            0.012796307 = weight(_text_:technology in 6299) [ClassicSimilarity], result of:
              0.012796307 = score(doc=6299,freq=2.0), product of:
                0.15554588 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.052224867 = queryNorm
                0.08226709 = fieldWeight in 6299, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness