Search (1 results, page 1 of 1)

  • × author_ss:"Banchs, R.E."
  • × theme_ss:"Multilinguale Probleme"
  • × type_ss:"a"
  1. Gupta, P.; Banchs, R.E.; Rosso, P.: Continuous space models for CLIR (2017) 0.01
    
    Abstract
    We present and evaluate a novel technique for learning cross-lingual continuous space models to aid cross-language information retrieval (CLIR). Our model, referred to as the external-data composition neural network (XCNN), is based on a composition function implemented on top of a deep neural network, which provides a distributed learning framework. Unlike most existing models, which rely only on available parallel data for training, our learning framework provides a natural way to exploit monolingual data and its associated relevance metadata for learning continuous space representations of language. Cross-language extensions of the obtained models can then be trained using a small set of parallel data. This property makes the approach especially helpful for resource-poor languages, so we carry out experiments on the English-Hindi language pair. In the comparative evaluation, the proposed model outperforms state-of-the-art continuous space models by a statistically significant margin on two tasks: parallel sentence retrieval and ad hoc retrieval.
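    The general idea of composing word vectors in a shared continuous space and ranking by similarity can be illustrated with a toy sketch. This is not the authors' XCNN: the additive-plus-tanh composition function, the vector dimensionality, and the randomly initialised shared embedding table below are all placeholder assumptions standing in for the trained cross-lingual model.

    ```python
    import math
    import random

    random.seed(0)
    DIM = 8  # toy embedding dimensionality (placeholder assumption)

    def embed_words(words, table):
        # Look up (or lazily initialise) one vector per word.
        # In a real system these would come from a trained model.
        vecs = []
        for w in words:
            if w not in table:
                table[w] = [random.uniform(-1.0, 1.0) for _ in range(DIM)]
            vecs.append(table[w])
        return vecs

    def compose(vecs):
        # Additive composition followed by a tanh non-linearity:
        # a simple stand-in for a learned composition function.
        avg = [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]
        return [math.tanh(x) for x in avg]

    def cosine(a, b):
        # Cosine similarity used as the retrieval score.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # A single shared table stands in for a cross-lingual space in which
    # query-language and document-language words live together.
    table = {}
    query = compose(embed_words("cross language retrieval".split(), table))
    doc = compose(embed_words("cross language retrieval models".split(), table))
    score = cosine(query, doc)
    ```

    In a CLIR setting, the query would be composed from source-language words and each candidate document from target-language words, with documents ranked by the resulting similarity score.
    
    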