Search (6 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"m"
  1. Ellis, D.: Progress and problems in information retrieval (1996) 0.04
    0.03526516 = product of:
      0.07053032 = sum of:
        0.07053032 = sum of:
          0.020730218 = weight(_text_:d in 789) [ClassicSimilarity], result of:
            0.020730218 = score(doc=789,freq=4.0), product of:
              0.08729101 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.045945734 = queryNorm
              0.237484 = fieldWeight in 789, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
          0.049800098 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
            0.049800098 = score(doc=789,freq=2.0), product of:
              0.16089413 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045945734 = queryNorm
              0.30952093 = fieldWeight in 789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=789)
      0.5 = coord(1/2)
    
    Date
     26.7.2002 20:22:46
    Footnote
     Review in: Managing information 3(1996) no.10, p.49 (D. Bawden); Program 32(1998) no.2, p.190-192 (C. Revie)
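
     The score breakdown above is Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), and coord(1/2) halves the sum because only one of the two query clauses matched. A minimal Python sketch that reproduces the numbers for hit 1; the helper names are illustrative, not Lucene API:

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
           tf = math.sqrt(freq)                       # tf = sqrt(termFreq)
           term_idf = idf(doc_freq, max_docs)
           query_weight = term_idf * query_norm       # queryWeight
           field_weight = tf * term_idf * field_norm  # fieldWeight
           return query_weight * field_weight

       # Hit 1 (doc 789): term "d" (freq=4) and term "22" (freq=2), fieldNorm=0.0625
       w_d  = term_weight(4.0, 17979, 44218, 0.045945734, 0.0625)  # ~0.020730218
       w_22 = term_weight(2.0, 3622,  44218, 0.045945734, 0.0625)  # ~0.049800098
       print((w_d + w_22) * 0.5)  # coord(1/2) -> ~0.03526516

     The same formulas account for the remaining hits; the lower-ranked results multiply by a second coord(1/2) because their matching clause is itself nested one level deeper.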
  2. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.0124500245 = product of:
      0.024900049 = sum of:
        0.024900049 = product of:
          0.049800098 = sum of:
            0.049800098 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.049800098 = score(doc=4049,freq=2.0), product of:
                0.16089413 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045945734 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Proceedings of the 11th TREC conference, held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks on large test collections. 93 research groups applied different techniques to information retrieval from the same large database, a procedure that makes the results directly comparable. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  3. Drabenstott, K.M.; Vizine-Goetz, D.: Using subject headings for online retrieval : theory, practice and potential (1994) 0.00
    0.0027484642 = product of:
      0.0054969285 = sum of:
        0.0054969285 = product of:
          0.010993857 = sum of:
            0.010993857 = weight(_text_:d in 386) [ClassicSimilarity], result of:
              0.010993857 = score(doc=386,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.1259449 = fieldWeight in 386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.046875 = fieldNorm(doc=386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Cross-language information retrieval (1998) 0.00
    0.0016195483 = product of:
      0.0032390966 = sum of:
        0.0032390966 = product of:
          0.006478193 = sum of:
            0.006478193 = weight(_text_:d in 6299) [ClassicSimilarity], result of:
              0.006478193 = score(doc=6299,freq=4.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.07421375 = fieldWeight in 6299, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     Contains the contributions:
     GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval
     DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval
     BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval
     Distributed Cross-Lingual Information Retrieval
     Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing
     EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics
     PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying
     YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval
     GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval
     HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval
     SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents
     OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
     Review in: Machine translation review: 1999, no.10, p.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free-form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davis (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
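
     The review measures a query's success by recall and precision; under the standard set-based definitions these are straightforward to compute. A small sketch, with illustrative function and variable names:

       def precision_recall(retrieved, relevant):
           # precision = |retrieved & relevant| / |retrieved|
           # recall    = |retrieved & relevant| / |relevant|
           retrieved, relevant = set(retrieved), set(relevant)
           hits = retrieved & relevant
           precision = len(hits) / len(retrieved) if retrieved else 0.0
           recall = len(hits) / len(relevant) if relevant else 0.0
           return precision, recall

       # Example: 10 documents returned, 4 of them among the 8 relevant ones
       p, r = precision_recall(range(10), [0, 1, 2, 3, 20, 21, 22, 23])
       print(p, r)  # 0.4 0.5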
  5. Lohmann, H.: KASCADE: Dokumentanreicherung und automatische Inhaltserschließung : Projektbericht und Ergebnisse des Retrievaltests (2000) 0.00
    0.0016032709 = product of:
      0.0032065418 = sum of:
        0.0032065418 = product of:
          0.0064130835 = sum of:
            0.0064130835 = weight(_text_:d in 494) [ClassicSimilarity], result of:
              0.0064130835 = score(doc=494,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.07346786 = fieldWeight in 494, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    d
  6. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0011451935 = product of:
      0.002290387 = sum of:
        0.002290387 = product of:
          0.004580774 = sum of:
            0.004580774 = weight(_text_:d in 636) [ClassicSimilarity], result of:
              0.004580774 = score(doc=636,freq=2.0), product of:
                0.08729101 = queryWeight, product of:
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.045945734 = queryNorm
                0.052477043 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.899872 = idf(docFreq=17979, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     Contains the contributions:
     1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman
     2. The TREC Test Collections - Donna K. Harman
     3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees
     4. The TREC Ad Hoc Experiments - Donna K. Harman
     5. Routing and Filtering - Stephen Robertson and Jamie Callan
     6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin
     7. Beyond English - Donna K. Harman
     8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo
     9. The Very Large Collection and Web Tracks - David Hawking and Nick Craswell
     10. Question Answering in TREC - Ellen M. Voorhees
     11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan
     12. How Okapi Came to TREC - Stephen Robertson
     13. The SMART Project at TREC - Chris Buckley
     14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok
     15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam
     16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij
     17. IBM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick
     Epilogue: Metareflections on TREC - Karen Sparck Jones