Search (15 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"s"
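The two facet filters above, together with the per-document scoring breakdowns below, point to a Solr-style index queried with scoring explanations switched on. As a hedged sketch only (endpoint, collection name, title field, and the free-text query are assumptions; the field names theme_ss and type_ss and the debugQuery mechanism come from the page itself), a request of roughly this shape would produce such a listing:

  import requests  # any HTTP client would do

  # Hypothetical Solr endpoint and collection name.
  SOLR_URL = "http://localhost:8983/solr/catalog/select"

  params = {
      # The original free-text query is not recoverable from this page; the explain
      # trees below weight the terms "s" and "22" in the _text_ field, so a query
      # along these lines is assumed purely for illustration.
      "q": "s 22",
      # The two active facet filters shown above.
      "fq": ['theme_ss:"Retrievalstudien"', 'type_ss:"s"'],
      "rows": 15,
      "debugQuery": "true",  # asks Solr to return per-document explain trees
      "wt": "json",
  }

  data = requests.get(SOLR_URL, params=params).json()
  for doc in data["response"]["docs"]:
      print(doc.get("title", "<no title>"))  # "title" field name is an assumption
  # The scoring breakdowns shown beneath each hit appear under data["debug"]["explain"].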
  1. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.03
    0.02836731 = product of:
      0.05673462 = sum of:
        0.05673462 = sum of:
          0.006806435 = weight(_text_:s in 3087) [ClassicSimilarity], result of:
            0.006806435 = score(doc=3087,freq=4.0), product of:
              0.05008241 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046063907 = queryNorm
              0.1359047 = fieldWeight in 3087, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
          0.04992819 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
            0.04992819 = score(doc=3087,freq=2.0), product of:
              0.16130796 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046063907 = queryNorm
              0.30952093 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
    Pages
    546 S
    Type
    s
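The explain tree attached to this first hit follows Lucene's ClassicSimilarity (tf-idf) formula, and its numbers can be recomputed directly. A minimal sketch, using only the values printed in the tree above (queryNorm, idf, fieldNorm, and the term frequencies):

  import math

  def classic_term_weight(freq, idf, query_norm, field_norm, boost=1.0):
      """One weight(...) node of a ClassicSimilarity explain tree:
      queryWeight = idf * boost * queryNorm
      fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
      node score  = queryWeight * fieldWeight
      """
      tf = math.sqrt(freq)
      return (idf * boost * query_norm) * (tf * idf * field_norm)

  QUERY_NORM = 0.046063907  # queryNorm from the tree above

  # Term "s" in doc 3087: freq=4.0, idf=1.0872376, fieldNorm=0.0625
  w_s = classic_term_weight(4.0, 1.0872376, QUERY_NORM, 0.0625)   # ~0.0068064
  # Term "22" in doc 3087: freq=2.0, idf=3.5018296, fieldNorm=0.0625
  w_22 = classic_term_weight(2.0, 3.5018296, QUERY_NORM, 0.0625)  # ~0.0499282

  # coord(1/2) = 0.5: only one of the two top-level query clauses matched,
  # so the summed term weights are scaled down to the final document score.
  print(round(0.5 * (w_s + w_22), 8))  # ~0.02836731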
  2. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.03
    0.027370533 = product of:
      0.054741066 = sum of:
        0.054741066 = sum of:
          0.004812876 = weight(_text_:s in 4049) [ClassicSimilarity], result of:
            0.004812876 = score(doc=4049,freq=2.0), product of:
              0.05008241 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046063907 = queryNorm
              0.09609913 = fieldWeight in 4049, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
          0.04992819 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
            0.04992819 = score(doc=4049,freq=2.0), product of:
              0.16130796 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046063907 = queryNorm
              0.30952093 = fieldWeight in 4049, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks are: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
    Type
    s
  3. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.02
    0.021275483 = product of:
      0.042550966 = sum of:
        0.042550966 = sum of:
          0.0051048263 = weight(_text_:s in 3564) [ClassicSimilarity], result of:
            0.0051048263 = score(doc=3564,freq=4.0), product of:
              0.05008241 = queryWeight, product of:
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046063907 = queryNorm
              0.101928525 = fieldWeight in 3564, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.0872376 = idf(docFreq=40523, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.03744614 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.03744614 = score(doc=3564,freq=2.0), product of:
              0.16130796 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046063907 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.5 = coord(1/2)
    
    Date
    9. 1.1996 10:22:31
    Pages
    S.34-39
    Type
    s
  4. The First Text Retrieval Conference (TREC-1) (1993) 0.00
    0.0034032175 = product of:
      0.006806435 = sum of:
        0.006806435 = product of:
          0.01361287 = sum of:
            0.01361287 = weight(_text_:s in 3352) [ClassicSimilarity], result of:
              0.01361287 = score(doc=3352,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.2718094 = fieldWeight in 3352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.125 = fieldNorm(doc=3352)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    xxx S
    Type
    s
  5. Evaluation of information retrieval systems : special topic issue (1996) 0.00
    0.0025524131 = product of:
      0.0051048263 = sum of:
        0.0051048263 = product of:
          0.010209653 = sum of:
            0.010209653 = weight(_text_:s in 6812) [ClassicSimilarity], result of:
              0.010209653 = score(doc=6812,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.20385705 = fieldWeight in 6812, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6812)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.1-105
    Type
    s
  6. TREC-1: The first text retrieval conference : Rockville, MD, USA, 4-6 Nov. 1993 (1993) 0.00
    0.0025524131 = product of:
      0.0051048263 = sum of:
        0.0051048263 = product of:
          0.010209653 = sum of:
            0.010209653 = weight(_text_:s in 1315) [ClassicSimilarity], result of:
              0.010209653 = score(doc=1315,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.20385705 = fieldWeight in 1315, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1315)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 29(1993) no.4, S.411-521
    Type
    s
  7. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.00
    0.0020840368 = product of:
      0.0041680736 = sum of:
        0.0041680736 = product of:
          0.008336147 = sum of:
            0.008336147 = weight(_text_:s in 3566) [ClassicSimilarity], result of:
              0.008336147 = score(doc=3566,freq=6.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.1664486 = fieldWeight in 3566, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3566)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.113-120
    Source
    Information, knowledge, evolution. Proceedings of the 44th FID congress, Helsinki, 28.8.-1.9.1988. Ed. by S. Koshiala and R. Launo
    Type
    s
  8. The Fourth Text Retrieval Conference (TREC-4) (1996) 0.00
    0.0017016088 = product of:
      0.0034032175 = sum of:
        0.0034032175 = product of:
          0.006806435 = sum of:
            0.006806435 = weight(_text_:s in 7521) [ClassicSimilarity], result of:
              0.006806435 = score(doc=7521,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.1359047 = fieldWeight in 7521, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    776 S
    Type
    s
  9. The Sixth Text Retrieval Conference (TREC-6) (1998) 0.00
    0.0017016088 = product of:
      0.0034032175 = sum of:
        0.0034032175 = product of:
          0.006806435 = sum of:
            0.006806435 = weight(_text_:s in 4476) [ClassicSimilarity], result of:
              0.006806435 = score(doc=4476,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.1359047 = fieldWeight in 4476, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4476)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    xxx S
    Type
    s
  10. Information retrieval experiment (1981) 0.00
    0.0014889077 = product of:
      0.0029778155 = sum of:
        0.0029778155 = product of:
          0.005955631 = sum of:
            0.005955631 = weight(_text_:s in 2653) [ClassicSimilarity], result of:
              0.005955631 = score(doc=2653,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.118916616 = fieldWeight in 2653, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2653)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    352 S
    Type
    s
  11. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.00
    0.0012762066 = product of:
      0.0025524131 = sum of:
        0.0025524131 = product of:
          0.0051048263 = sum of:
            0.0051048263 = weight(_text_:s in 3560) [ClassicSimilarity], result of:
              0.0051048263 = score(doc=3560,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.101928525 = fieldWeight in 3560, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.120-126
    Type
    s
  12. Sievert, M.E.; McKinin, E.J.; Slough, M.: A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.00
    0.0012762066 = product of:
      0.0025524131 = sum of:
        0.0025524131 = product of:
          0.0051048263 = sum of:
            0.0051048263 = weight(_text_:s in 3563) [ClassicSimilarity], result of:
              0.0051048263 = score(doc=3563,freq=4.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.101928525 = fieldWeight in 3563, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.143-146
    Type
    s
  13. TREC: experiment and evaluation in information retrieval (2005) 0.00
    7.520119E-4 = product of:
      0.0015040238 = sum of:
        0.0015040238 = product of:
          0.0030080476 = sum of:
            0.0030080476 = weight(_text_:s in 636) [ClassicSimilarity], result of:
              0.0030080476 = score(doc=636,freq=8.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.060061958 = fieldWeight in 636, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo and J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    Pages
    X, 462 S
    Type
    s
  14. Cross-language information retrieval (1998) 0.00
    6.512615E-4 = product of:
      0.001302523 = sum of:
        0.001302523 = product of:
          0.002605046 = sum of:
            0.002605046 = weight(_text_:s in 6299) [ClassicSimilarity], result of:
              0.002605046 = score(doc=6299,freq=6.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.052015185 = fieldWeight in 6299, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: Machine translation review: 1999, no.10, S.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davies (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
    Pages
    VII,182 S
    Type
    s
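The review above defines the two standard effectiveness measures informally: recall (how many of the relevant documents are found) and precision (what proportion of the found documents are relevant). A small set-based sketch of exactly these definitions (function and variable names are hypothetical):

  def precision_recall(retrieved, relevant):
      """precision = |retrieved ∩ relevant| / |retrieved|
      recall    = |retrieved ∩ relevant| / |relevant|"""
      retrieved, relevant = set(retrieved), set(relevant)
      hits = len(retrieved & relevant)
      precision = hits / len(retrieved) if retrieved else 0.0
      recall = hits / len(relevant) if relevant else 0.0
      return precision, recall

  # Example: 10 documents retrieved, 6 of them among the 20 relevant documents.
  print(precision_recall(range(10), range(4, 24)))  # (0.6, 0.3)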
  15. Effektive Information Retrieval Verfahren in Theorie und Praxis : ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005 (2006) 0.00
    5.210092E-4 = product of:
      0.0010420184 = sum of:
        0.0010420184 = product of:
          0.0020840368 = sum of:
            0.0020840368 = weight(_text_:s in 5973) [ClassicSimilarity], result of:
              0.0020840368 = score(doc=5973,freq=6.0), product of:
                0.05008241 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046063907 = queryNorm
                0.04161215 = fieldWeight in 5973, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5973)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 57(2006) H.5, S.290-291 (C. Schindler): "Less than a year after the "Fourth Hildesheim Evaluation and Retrieval Workshop" (HIER 2005) in July 2005, the accompanying proceedings volume has appeared. The Hildesheim information science group had issued the invitation in order to present its research results, together with those of several external experts, on the topic of information retrieval to a professional audience and to put them up for discussion. Under the title "Effektive Information Retrieval Verfahren in Theorie und Praxis", nearly all of the workshop contributions are collected in the now-published volume of 15 papers. With its focus on information retrieval (IR), the volume presents a subfield of information science that has always stood at the center of information science research. Whether through the rise in processor and storage performance, the spread of the Internet across national borders, or the steady growth of knowledge production, it is clear that in an increasingly interconnected world, orientation within and retrieval of documents from large knowledge collections have become a central challenge. The new volume presents current approaches to this topic, information retrieval, by means of practice-oriented projects and theoretical discussions. The core theme of information retrieval is subdivided in the collection into the areas of retrieval systems, digital libraries, evaluation, and multilingual systems. The articles in the individual sections are on the whole quite heterogeneous and therefore do not overlap in content. However, complete thematic coverage of the different areas is likewise not achieved, which can only partly be expected when an institute presents its own research results and those of its cooperation partners. Thus both the structure of the volume and the individual contributions reveal a thematic concentration that reflects the specific profile and distinctive character of Hildesheim information science in the field of information retrieval. Part of this is its multilingual and interdisciplinary orientation, which focuses on the interfaces between information science, linguistics, and computer science in its practice-oriented and international research.
    Pages
    VIII, 244 S
    Type
    s