Document (#24301)

Editor
Grefenstette, G.
Title
Cross-language information retrieval
Imprint
Boston, MA : Kluwer Academic Publ.
Year
1998
Pages
VII, 182 p.
Isbn
0-7923-8122-X
Series
The Kluwer International Series on Information Retrieval
Content
Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
Footnote
Review in: Machine Translation Review: 1999, no.10, p.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free-form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davis (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. 
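The recall and precision measures the review defines can be sketched in a few lines; this is a minimal illustration, and the document ids are hypothetical:

```python
# Sketch of the two effectiveness measures described in the review:
# recall = fraction of relevant documents that were found,
# precision = fraction of found documents that are relevant.

def recall_precision(retrieved, relevant):
    """Return (recall, precision) for retrieved vs. truly relevant doc ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents actually retrieved
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Example: 4 documents retrieved, 3 of them among the 6 relevant ones.
r, p = recall_precision({1, 2, 3, 4}, {2, 3, 4, 7, 8, 9})
# r == 0.5 (half the relevant set was found), p == 0.75
```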
Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
Christian Fluhr et al. (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al. (Brown University) and, most clearly, by David Evans et al. (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real-time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al. (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived not from the translated texts but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. 
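The LSI procedure the review describes, a term-document matrix reduced by SVD with queries mapped into the reduced space, can be sketched as follows; the toy corpus and the choice of two latent dimensions are illustrative assumptions only:

```python
# Minimal LSI sketch: build a term-document count matrix, truncate its
# SVD to k dimensions, and compare a folded-in query to the documents.
import numpy as np

docs = [
    "cross language retrieval of documents",
    "language translation of queries",
    "parallel corpora for translation",
]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document matrix (terms as rows, documents as columns).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

k = 2  # number of latent dimensions kept
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # document coordinates in latent space

def fold_in_query(q):
    """Project a query's term counts into the same latent space."""
    counts = np.array([q.split().count(w) for w in vocab], dtype=float)
    return counts @ U[:, :k]

q = fold_in_query("translation of corpora")
# Cosine similarity of the query to each document in the reduced space.
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
best = int(np.argmax(sims))  # index of the most similar document
```

The reduction step is what lets terms that never co-occur with the query still contribute, since related documents are 'fitted' close together in the k-dimensional space.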
Denis Gachot et al. (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to by-pass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology, designed for testing CLIR systems, which appears to apply LSI techniques to filter and rank incoming documents. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field."
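A weighted Boolean match in the spirit of Hull's model, as the review describes it, can be sketched briefly: the user weights each source-language term, a bilingual dictionary supplies translation alternatives, and a document satisfies a term if it contains any of them. The dictionary, weights, and additive scoring rule below are illustrative assumptions, not Hull's exact formulation:

```python
# Hypothetical weighted Boolean scoring over dictionary translations.

def weighted_boolean_score(weighted_terms, bilingual_dict, doc_tokens):
    """Score a target-language document against a weighted source query."""
    doc = set(doc_tokens)
    score = 0.0
    for term, weight in weighted_terms:
        translations = bilingual_dict.get(term, {term})
        if doc & translations:  # any translation present satisfies the term
            score += weight
    return score

# Reusing the review's own example words:
dictionary = {"affair": {"aventure", "affaire", "liaison"},
              "treatment": {"traitement"}}
query = [("affair", 2.0), ("treatment", 1.0)]
doc = ["une", "affaire", "et", "son", "traitement"]
score = weighted_boolean_score(query, dictionary, doc)  # both terms matched
```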
Theme
Retrieval algorithms
Retrieval studies
Multilingual problems

Similar documents (content)

  1. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 1.30
    1.3016781 = sum of:
      1.3016781 = sum of:
        0.049495045 = weight(abstract_txt:information in 466) [ClassicSimilarity], result of:
          0.049495045 = score(doc=466,freq=2.0), product of:
            0.15325768 = queryWeight, product of:
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.06291715 = queryNorm
            0.3229531 = fieldWeight in 466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.09375 = fieldNorm(doc=466)
        0.10053025 = weight(abstract_txt:retrieval in 466) [ClassicSimilarity], result of:
          0.10053025 = score(doc=466,freq=1.0), product of:
            0.3096865 = queryWeight, product of:
              1.4215103 = boost
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.06291715 = queryNorm
            0.3246194 = fieldWeight in 466, product of:
              1.0 = tf(freq=1.0), with freq of:
                1.0 = termFreq=1.0
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.09375 = fieldNorm(doc=466)
        0.40057915 = weight(abstract_txt:language in 466) [ClassicSimilarity], result of:
          0.40057915 = score(doc=466,freq=5.0), product of:
            0.45519063 = queryWeight, product of:
              1.7233977 = boost
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.06291715 = queryNorm
            0.880025 = fieldWeight in 466, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.09375 = fieldNorm(doc=466)
        0.7510736 = weight(abstract_txt:cross in 466) [ClassicSimilarity], result of:
          0.7510736 = score(doc=466,freq=3.0), product of:
            0.8206142 = queryWeight, product of:
              2.3139737 = boost
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.06291715 = queryNorm
            0.9152579 = fieldWeight in 466, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.09375 = fieldNorm(doc=466)
    
  2. Li, Y.; Shawe-Taylor, J.: Advanced learning algorithms for cross-language patent retrieval and classification (2007) 1.20
    1.1955632 = sum of:
      1.1955632 = sum of:
        0.029165236 = weight(abstract_txt:information in 2932) [ClassicSimilarity], result of:
          0.029165236 = score(doc=2932,freq=1.0), product of:
            0.15325768 = queryWeight, product of:
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.06291715 = queryNorm
            0.19030195 = fieldWeight in 2932, product of:
              1.0 = tf(freq=1.0), with freq of:
                1.0 = termFreq=1.0
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.078125 = fieldNorm(doc=2932)
        0.14510292 = weight(abstract_txt:retrieval in 2932) [ClassicSimilarity], result of:
          0.14510292 = score(doc=2932,freq=3.0), product of:
            0.3096865 = queryWeight, product of:
              1.4215103 = boost
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.06291715 = queryNorm
            0.46854776 = fieldWeight in 2932, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.078125 = fieldNorm(doc=2932)
        0.29857406 = weight(abstract_txt:language in 2932) [ClassicSimilarity], result of:
          0.29857406 = score(doc=2932,freq=4.0), product of:
            0.45519063 = queryWeight, product of:
              1.7233977 = boost
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.06291715 = queryNorm
            0.6559319 = fieldWeight in 2932, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.078125 = fieldNorm(doc=2932)
        0.722721 = weight(abstract_txt:cross in 2932) [ClassicSimilarity], result of:
          0.722721 = score(doc=2932,freq=4.0), product of:
            0.8206142 = queryWeight, product of:
              2.3139737 = boost
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.06291715 = queryNorm
            0.8807074 = fieldWeight in 2932, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.078125 = fieldNorm(doc=2932)
    
  3. Lopez-Ostenero, F.; Gonzalo, J.; Verdejo, F.: Noun phrases as building blocks for cross-language search assistance (2005) 1.12
    1.1165929 = sum of:
      1.1165929 = sum of:
        0.04124587 = weight(abstract_txt:information in 3022) [ClassicSimilarity], result of:
          0.04124587 = score(doc=3022,freq=2.0), product of:
            0.15325768 = queryWeight, product of:
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.06291715 = queryNorm
            0.26912758 = fieldWeight in 3022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.078125 = fieldNorm(doc=3022)
        0.083775215 = weight(abstract_txt:retrieval in 3022) [ClassicSimilarity], result of:
          0.083775215 = score(doc=3022,freq=1.0), product of:
            0.3096865 = queryWeight, product of:
              1.4215103 = boost
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.06291715 = queryNorm
            0.2705162 = fieldWeight in 3022, product of:
              1.0 = tf(freq=1.0), with freq of:
                1.0 = termFreq=1.0
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.078125 = fieldNorm(doc=3022)
        0.36567706 = weight(abstract_txt:language in 3022) [ClassicSimilarity], result of:
          0.36567706 = score(doc=3022,freq=6.0), product of:
            0.45519063 = queryWeight, product of:
              1.7233977 = boost
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.06291715 = queryNorm
            0.80334926 = fieldWeight in 3022, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.078125 = fieldNorm(doc=3022)
        0.62589467 = weight(abstract_txt:cross in 3022) [ClassicSimilarity], result of:
          0.62589467 = score(doc=3022,freq=3.0), product of:
            0.8206142 = queryWeight, product of:
              2.3139737 = boost
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.06291715 = queryNorm
            0.76271486 = fieldWeight in 3022, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.078125 = fieldNorm(doc=3022)
    
  4. Petrelli, D.; Levin, S.; Beaulieu, M.; Sanderson, M.: Which user interaction for cross-language information retrieval? : design issues and reflections (2006) 1.11
    1.1108176 = sum of:
      1.1108176 = sum of:
        0.04124587 = weight(abstract_txt:information in 54) [ClassicSimilarity], result of:
          0.04124587 = score(doc=54,freq=2.0), product of:
            0.15325768 = queryWeight, product of:
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.06291715 = queryNorm
            0.26912758 = fieldWeight in 54, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.078125 = fieldNorm(doc=54)
        0.14510292 = weight(abstract_txt:retrieval in 54) [ClassicSimilarity], result of:
          0.14510292 = score(doc=54,freq=3.0), product of:
            0.3096865 = queryWeight, product of:
              1.4215103 = boost
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.06291715 = queryNorm
            0.46854776 = fieldWeight in 54, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.078125 = fieldNorm(doc=54)
        0.29857406 = weight(abstract_txt:language in 54) [ClassicSimilarity], result of:
          0.29857406 = score(doc=54,freq=4.0), product of:
            0.45519063 = queryWeight, product of:
              1.7233977 = boost
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.06291715 = queryNorm
            0.6559319 = fieldWeight in 54, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.078125 = fieldNorm(doc=54)
        0.62589467 = weight(abstract_txt:cross in 54) [ClassicSimilarity], result of:
          0.62589467 = score(doc=54,freq=3.0), product of:
            0.8206142 = queryWeight, product of:
              2.3139737 = boost
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.06291715 = queryNorm
            0.76271486 = fieldWeight in 54, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.078125 = fieldNorm(doc=54)
    
  5. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 1.11
    1.1082478 = sum of:
      1.1082478 = sum of:
        0.032607727 = weight(abstract_txt:information in 3165) [ClassicSimilarity], result of:
          0.032607727 = score(doc=3165,freq=5.0), product of:
            0.15325768 = queryWeight, product of:
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.06291715 = queryNorm
            0.21276405 = fieldWeight in 3165, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              2.435865 = idf(docFreq=10064, maxDocs=42306)
              0.0390625 = fieldNorm(doc=3165)
        0.15102792 = weight(abstract_txt:retrieval in 3165) [ClassicSimilarity], result of:
          0.15102792 = score(doc=3165,freq=13.0), product of:
            0.3096865 = queryWeight, product of:
              1.4215103 = boost
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.06291715 = queryNorm
            0.48768 = fieldWeight in 3165, product of:
              3.6055512 = tf(freq=13.0), with freq of:
                13.0 = termFreq=13.0
              3.4626071 = idf(docFreq=3604, maxDocs=42306)
              0.0390625 = fieldNorm(doc=3165)
        0.32536355 = weight(abstract_txt:language in 3165) [ClassicSimilarity], result of:
          0.32536355 = score(doc=3165,freq=19.0), product of:
            0.45519063 = queryWeight, product of:
              1.7233977 = boost
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.06291715 = queryNorm
            0.7147852 = fieldWeight in 3165, product of:
              4.358899 = tf(freq=19.0), with freq of:
                19.0 = termFreq=19.0
              4.197964 = idf(docFreq=1727, maxDocs=42306)
              0.0390625 = fieldNorm(doc=3165)
        0.5992486 = weight(abstract_txt:cross in 3165) [ClassicSimilarity], result of:
          0.5992486 = score(doc=3165,freq=11.0), product of:
            0.8206142 = queryWeight, product of:
              2.3139737 = boost
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.06291715 = queryNorm
            0.730244 = fieldWeight in 3165, product of:
              3.3166249 = tf(freq=11.0), with freq of:
                11.0 = termFreq=11.0
              5.636527 = idf(docFreq=409, maxDocs=42306)
              0.0390625 = fieldNorm(doc=3165)
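The explanation trees above all apply the same per-term TF-IDF formula of Lucene's ClassicSimilarity; a short sketch reproduces one leaf from the numbers printed in the first tree:

```python
# Per-term score as printed in the trees:
#   score       = queryWeight * fieldWeight
#   queryWeight = boost * idf * queryNorm
#   fieldWeight = sqrt(tf) * idf * fieldNorm
import math

def classic_sim_term_score(tf, idf, field_norm, query_norm, boost=1.0):
    query_weight = boost * idf * query_norm
    field_weight = math.sqrt(tf) * idf * field_norm
    return query_weight * field_weight

# Values copied from the first tree (term 'information' in doc 466):
s = classic_sim_term_score(tf=2.0, idf=2.435865,
                           field_norm=0.09375, query_norm=0.06291715)
# s reproduces the printed 0.049495045 to within rounding
```

The same function with boost=1.4215103 reproduces the 'retrieval' leaf, confirming that the trees differ only in tf, idf, boost, and fieldNorm.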