Document (#24300)

Editor
Grefenstette, G.
Title
Cross-language information retrieval
Imprint
Boston, MA : Kluwer Academic Publ.
Year
1998
Pages
VII, 182 p.
Isbn
0-7923-8122-X
Series
The Kluwer International Series on Information Retrieval
Content
Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; FLUHR, C. et al.: Distributed Cross-Lingual Information Retrieval; LITTMAN, M. et al.: Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
Footnote
Review in: Machine Translation Review: 1999, no.10, p.26-27 (D. Lewis): "Cross-Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free-form query, usually a brief description of a topic, into a search or retrieval engine which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair'). Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davis (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts. Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
Christian Fluhr et al. (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al. (Brown University) and, most clearly, by David Evans et al. (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic, but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al. (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived not from the texts being translated but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. Denis Gachot et al. (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket', which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to bypass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology, designed for testing CLIR systems, which appears to apply LSI techniques in order to filter and rank incoming documents. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field."
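
The opening of the review above sketches the basic CLIR pipeline: translate the query terms with a bilingual dictionary, match them against the indexed terms, and judge the result by recall and precision. The following minimal Python sketch illustrates that pipeline; the dictionary, documents and relevance judgements are invented toy data, not taken from the book.

  from typing import Dict, List, Set

  # Hypothetical English->French term dictionary; a term may have several translations.
  BILINGUAL_DICT: Dict[str, List[str]] = {
      "treatment": ["traitement"],
      "salary": ["traitement", "salaire"],
      "affair": ["affaire", "aventure", "liaison"],
  }

  def translate_query(query_terms: List[str]) -> Set[str]:
      """Replace each source-language term by all of its dictionary translations."""
      target_terms: Set[str] = set()
      for term in query_terms:
          target_terms.update(BILINGUAL_DICT.get(term, [term]))
      return target_terms

  def retrieve(target_terms: Set[str], documents: Dict[str, Set[str]]) -> List[str]:
      """Rank documents by how many translated query terms they contain."""
      scored = [(len(target_terms & doc_terms), doc_id) for doc_id, doc_terms in documents.items()]
      return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

  def recall_precision(retrieved: List[str], relevant: Set[str]) -> tuple:
      """Recall: share of relevant documents found; precision: share of retrieved documents that are relevant."""
      hits = len(set(retrieved) & relevant)
      recall = hits / len(relevant) if relevant else 0.0
      precision = hits / len(retrieved) if retrieved else 0.0
      return recall, precision

  if __name__ == "__main__":
      docs = {
          "d1": {"traitement", "médical"},     # about medical treatment
          "d2": {"salaire", "négociation"},    # about salary negotiation
          "d3": {"affaire", "judiciaire"},     # about a legal affair
      }
      ranked = retrieve(translate_query(["treatment"]), docs)
      print(ranked, recall_precision(ranked, relevant={"d1"}))   # ['d1'] (1.0, 1.0)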
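
The review also summarises Latent Semantic Indexing (Littman et al., Evans et al.): a term-document matrix is factored and truncated so that queries and documents can be compared in a common reduced space. Below is a minimal sketch of that idea using an invented four-term, four-document matrix and the common SVD folding-in convention; this is an assumption for illustration, not the chapters' exact formulation.

  import numpy as np

  terms = ["rocket", "fusée", "salary", "traitement"]
  # Columns are four hypothetical training documents; entries are term counts.
  A = np.array([
      [2, 0, 1, 0],   # rocket
      [1, 0, 2, 0],   # fusée
      [0, 3, 0, 1],   # salary
      [0, 1, 0, 2],   # traitement
  ], dtype=float)

  k = 2  # number of latent dimensions kept after truncation
  U, s, Vt = np.linalg.svd(A, full_matrices=False)
  U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k]

  def fold_in(term_vector: np.ndarray) -> np.ndarray:
      """Project a bag-of-terms query vector into the k-dimensional latent space."""
      return (term_vector @ U_k) / s_k

  def cosine(a: np.ndarray, b: np.ndarray) -> float:
      return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

  doc_vectors = Vt_k.T                               # latent-space coordinates of the documents
  q_latent = fold_in(np.array([1.0, 0.0, 0.0, 0.0]))  # English query: "rocket"
  scores = [cosine(q_latent, d) for d in doc_vectors]
  print(scores)   # the 'rocket'/'fusée' documents (columns 0 and 2) score far above the others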
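
Sheridan et al.'s 'seed document' method, as described in the review, evaluates a system by how quickly one known target document is retrieved for each query. A minimal sketch follows; the toy ranking function and the choice of mean reciprocal rank as the aggregate measure are assumptions made here for illustration only.

  from typing import Callable, Dict, List

  def seed_document_evaluation(
      queries: Dict[str, str],                     # query text -> id of its seed document
      rank_documents: Callable[[str], List[str]],  # retrieval function returning ranked doc ids
  ) -> float:
      """Return the mean reciprocal rank at which the seed documents are retrieved."""
      reciprocal_ranks = []
      for query, seed_id in queries.items():
          ranking = rank_documents(query)
          rank = ranking.index(seed_id) + 1 if seed_id in ranking else None
          reciprocal_ranks.append(1.0 / rank if rank else 0.0)
      return sum(reciprocal_ranks) / len(reciprocal_ranks) if reciprocal_ranks else 0.0

  if __name__ == "__main__":
      # Hypothetical retrieval function standing in for a real CLIR engine.
      def toy_ranker(query: str) -> List[str]:
          return ["d2", "d1", "d3"] if "rocket" in query else ["d1", "d3", "d2"]

      print(seed_document_evaluation({"big rockets": "d1", "salaires": "d1"}, toy_ranker))
      # 'big rockets' finds d1 at rank 2, 'salaires' at rank 1 -> MRR = (0.5 + 1.0) / 2 = 0.75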
Theme
Retrievalalgorithmen
Retrievalstudien
Multilinguale Probleme

Similar documents (content)

  1. Levergood, B.; Farrenkopf, S.; Frasnelli, E.: The specification of the language of the field and interoperability : cross-language access to catalogues and online libraries (CACAO) (2008) 1.30
  2. Li, Y.; Shawe-Taylor, J.: Advanced learning algorithms for cross-language patent retrieval and classification (2007) 1.19
  3. Lopez-Ostenero, F.; Gonzalo, J.; Verdejo, F.: Noun phrases as building blocks for cross-language search assistance (2005) 1.11
  4. Petrelli, D.; Levin, S.; Beaulieu, M.; Sanderson, M.: Which user interaction for cross-language information retrieval? : design issues and reflections (2006) 1.11
  5. Oard, D.W.: Alternative approaches for cross-language text retrieval (1997) 1.10