Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Yuille, A.L.: Explain images with multimodal recurrent neural networks (2014)
- Abstract
- In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12 [8], Flickr 8K [28], and Flickr 30K [13]). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
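- The abstract describes the m-RNN architecture at a high level: a recurrent network over the word sequence and a convolutional image feature are fused in a multimodal layer that parameterizes the next-word distribution P(w_t | w_1..t-1, image). The sketch below illustrates that fusion pattern; it assumes PyTorch, and the layer sizes, module names, and plain tanh fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultimodalRNN(nn.Module):
    """Sketch of an m-RNN-style decoder: an RNN over words and a CNN
    image feature are fused in a multimodal layer that predicts the
    next-word distribution P(w_t | w_1..t-1, image)."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 image_dim=4096, multimodal_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        # Project the recurrent state and the image feature into a shared
        # multimodal space, then map to vocabulary logits (hypothetical
        # dimensions; the paper's exact sizes differ).
        self.word_proj = nn.Linear(hidden_dim, multimodal_dim)
        self.image_proj = nn.Linear(image_dim, multimodal_dim)
        self.out = nn.Linear(multimodal_dim, vocab_size)

    def forward(self, words, image_feat):
        # words: (batch, seq) token ids; image_feat: (batch, image_dim),
        # e.g. a pretrained CNN's penultimate-layer activations.
        h, _ = self.rnn(self.embed(words))           # (batch, seq, hidden)
        m = torch.tanh(self.word_proj(h)
                       + self.image_proj(image_feat).unsqueeze(1))
        return self.out(m)                           # next-word logits

# Descriptions are generated by sampling from the predicted distribution
# and feeding each sampled word back in as the next input.
model = MultimodalRNN(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 5)), torch.randn(2, 4096))
```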