Search (6 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • theme_ss:"Multilinguale Probleme"
  1. Airio, E.: Who benefits from CLIR in web retrieval? (2008) 0.01
    0.0053607305 = product of:
      0.08577169 = sum of:
        0.08577169 = weight(_text_:author in 2342) [ClassicSimilarity], result of:
          0.08577169 = score(doc=2342,freq=6.0), product of:
            0.15482868 = queryWeight, product of:
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.032090448 = queryNorm
            0.553978 = fieldWeight in 2342, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.824759 = idf(docFreq=964, maxDocs=44218)
              0.046875 = fieldNorm(doc=2342)
      0.0625 = coord(1/16)
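The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sanity check, the reported score can be recomputed directly from the quantities it lists; this is a sketch, with variable names mirroring the explain labels:

```python
import math

# Values taken from the explain output for doc 2342, term "author"
freq = 6.0               # termFreq
idf = 4.824759           # idf(docFreq=964, maxDocs=44218)
query_norm = 0.032090448 # queryNorm
field_norm = 0.046875    # fieldNorm(doc=2342)
coord = 1 / 16           # coord(1/16): 1 of 16 query terms matched

tf = math.sqrt(freq)                  # ClassicSimilarity uses sqrt(termFreq)
query_weight = idf * query_norm       # 0.15482868 in the explain tree
field_weight = tf * idf * field_norm  # 0.553978 in the explain tree
score = query_weight * field_weight * coord

print(f"{score:.10f}")  # close to the reported 0.0053607305
```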
    
    Abstract
     Purpose - The aim of the paper is to test whether query translation is beneficial in web retrieval.
     Design/methodology/approach - The language pairs were Finnish-Swedish, English-German and Finnish-French. A total of 12-18 participants were recruited for each language pair, and each participant performed four retrieval tasks. The author's aim was to compare the performance of translated queries with that of target-language queries, so participants were asked to formulate both a source-language query and a target-language query for each task. The source-language queries were translated into the target language using a dictionary-based system; for English-German, machine translation was also used. Google was used as the search engine.
     Findings - The results differed depending on the language pair, and the author concluded that dictionary coverage had an effect on them. On average, the results of query translation were better than in traditional laboratory tests.
     Originality/value - The research shows that query translation on the web is beneficial, especially for users with moderate or non-active target-language skills. This is valuable information for developers of cross-language information retrieval systems.
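The dictionary-based approach the abstract describes can be sketched minimally: each source-language term is replaced by all of its dictionary translations, so an ambiguous entry widens the target-language query, and a term missing from the dictionary (a coverage gap, one factor the study found to matter) passes through untranslated. The toy Finnish-Swedish dictionary below is hypothetical, not from the paper:

```python
# Hypothetical toy bilingual dictionary (Finnish -> Swedish)
BILINGUAL_DICT = {
    "tietokone": ["dator"],            # computer: unambiguous
    "haku": ["sökning", "hämtning"],   # search/retrieval: ambiguous entry
}

def translate_query(query: str, dictionary: dict[str, list[str]]) -> str:
    """Replace each query term with all of its dictionary translations."""
    target_terms: list[str] = []
    for term in query.lower().split():
        # Out-of-vocabulary terms are kept as-is (dictionary coverage gap).
        target_terms.extend(dictionary.get(term, [term]))
    return " ".join(target_terms)

print(translate_query("tietokone haku", BILINGUAL_DICT))
# -> dator sökning hämtning
```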
  2. Davis, M.; Dunning, T.: ¬A TREC evaluation of query translation methods for multi-lingual text retrieval (1996) 0.00
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  3. Davis, M.: New experiments in cross-language text retrieval at NMSU's computing research lab (1997) 0.00
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
  4. Sheridan, P.; Ballerini, J.P.; Schäuble, P.: Building a large multilingual test collection from comparable news documents (1998) 0.00
    
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  5. Davis, M.W.: On the effective use of large parallel corpora in cross-language text retrieval (1998) 0.00
    
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  6. Cross-language information retrieval (1998) 0.00
    
    Footnote
     Review in: Machine translation review 1999, no.10, pp.26-27 (D. Lewis): "Cross Language Information Retrieval (CLIR) addresses the growing need to access large volumes of data across language boundaries. The typical requirement is for the user to input a free-form query, usually a brief description of a topic, into a search or retrieval engine, which returns a list, in ranked order, of documents or web pages that are relevant to the topic. The search engine matches the terms in the query to indexed terms, usually keywords previously derived from the target documents. Unlike monolingual information retrieval, CLIR requires query terms in one language to be matched to indexed terms in another. Matching can be done by bilingual dictionary lookup, full machine translation, or by applying statistical methods. A query's success is measured in terms of recall (how many potentially relevant target documents are found) and precision (what proportion of documents found are relevant). Issues in CLIR are how to translate query terms into index terms, how to eliminate alternative translations (e.g. to decide that French 'traitement' in a query means 'treatment' and not 'salary'), and how to rank or weight translation alternatives that are retained (e.g. how to order the French terms 'aventure', 'business', 'affaire', and 'liaison' as relevant translations of English 'affair').
     Grefenstette provides a lucid and useful overview of the field and the problems. The volume brings together a number of experiments and projects in CLIR. Mark Davis (New Mexico State University) describes Recuerdo, a Spanish retrieval engine which reduces translation ambiguities by scanning indexes for parallel texts; it also uses either a bilingual dictionary or direct equivalents from a parallel corpus in order to compare results for queries on parallel texts.
Lisa Ballesteros and Bruce Croft (University of Massachusetts) use a 'local feedback' technique which automatically enhances a query by adding extra terms to it both before and after translation; such terms can be derived from documents known to be relevant to the query.
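The review's definitions of recall and precision reduce to simple set arithmetic; a minimal sketch, with hypothetical document IDs for illustration:

```python
def recall_precision(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Recall: share of relevant documents that were found.
    Precision: share of found documents that are relevant."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# Hypothetical run: 6 documents retrieved, 3 of the 4 relevant ones among them.
r, p = recall_precision({"d1", "d2", "d3", "d5", "d6", "d7"},
                        {"d1", "d2", "d3", "d4"})
print(r, p)  # 0.75 0.5
```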