Search (4 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"s"
  • year_i:[1990 TO 2000}
  1. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.01
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups applied different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback, and advanced pattern matching, to information retrieval from the same large database, which makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  2. The First Text Retrieval Conference (TREC-1) (1993) 0.01
    
    Editor
    Harman, D.
  3. Evaluation of information retrieval systems : special topic issue (1996) 0.01
    
    Language
    d (German)
  4. Cross-language information retrieval (1998) 0.00
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Review in: Machine translation review: 1999, no.10, p.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant).

    Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems.

    The volume brings together a number of experiments and projects in CLIR. Mark Davis (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
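    The review's sketch of how CLIR works (bilingual dictionary lookup for query translation, then results measured by recall and precision) can be made concrete with a small example. The Python sketch below is not taken from any system reviewed in the volume; the bilingual dictionary, document index, and relevance judgments are invented solely for illustration. It translates a French query by dictionary lookup, retrieves every indexed document sharing a translated term, and scores the result with the review's two measures.

      # Minimal illustration of dictionary-based CLIR and of recall/precision,
      # as described in the review. All data below is invented for the example.

      # Toy bilingual dictionary: each French term maps to candidate English
      # translations; real systems must disambiguate (e.g. 'traitement' ->
      # 'treatment' vs. 'salary', the review's own example).
      BILINGUAL_DICT = {
          "traitement": ["treatment", "salary"],
          "maladie": ["disease", "illness"],
      }

      # Toy indexed collection: document id -> set of index terms.
      INDEX = {
          "d1": {"treatment", "disease", "patient"},
          "d2": {"salary", "negotiation"},
          "d3": {"illness", "therapy"},
      }

      def translate_query(query_terms):
          """Expand each source-language term into all dictionary translations."""
          translated = set()
          for term in query_terms:
              translated.update(BILINGUAL_DICT.get(term, []))
          return translated

      def retrieve(query_terms):
          """Return ids of documents sharing at least one term with the query."""
          terms = translate_query(query_terms)
          return {doc for doc, index_terms in INDEX.items() if terms & index_terms}

      def recall_precision(retrieved, relevant):
          """Recall: share of relevant docs found; precision: share of found docs relevant."""
          hits = len(retrieved & relevant)
          recall = hits / len(relevant) if relevant else 0.0
          precision = hits / len(retrieved) if retrieved else 0.0
          return recall, precision

      if __name__ == "__main__":
          retrieved = retrieve(["traitement", "maladie"])  # -> d1, d2, d3
          relevant = {"d1", "d3"}                          # invented judgments
          r, p = recall_precision(retrieved, relevant)
          print(f"retrieved={sorted(retrieved)} recall={r:.2f} precision={p:.2f}")

    Run as written, the ambiguous 'salary' reading of 'traitement' also retrieves d2, so recall is 1.00 but precision drops to 0.67, which is precisely the translation-disambiguation problem the review highlights.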
