Lhadj, L.S.; Boughanem, M.; Amrouche, K.: Enhancing information retrieval through concept-based language modeling and semantic smoothing (2016)
- Abstract
- Traditionally, many information retrieval models assume that terms occur in documents independently. Although these models have already shown good performance, the word-independence assumption seems unrealistic from a natural language point of view, which considers terms to be related to each other. This assumption leads to two well-known problems in information retrieval (IR), namely polysemy and synonymy, both of which cause term mismatch. In language models, these issues have been addressed by considering dependencies such as bigrams, phrasal concepts, or word relationships, but such models are estimated using simple n-gram or concept counting. In this paper, we address polysemy and synonymy mismatch with a concept-based language modeling approach that combines ontological concepts from external resources with collocations frequently found in the document collection. In addition, the concept-based model is enriched with subconcepts and semantic relationships through a semantic smoothing technique so as to perform semantic matching. Experiments carried out on TREC collections show that our model achieves significant improvements over a single word-based model and the Markov Random Field model (using a Markov classifier).
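- The semantic-smoothing idea described above can be illustrated with a minimal sketch. This is not the authors' estimator: the interpolation weights `lam` and `beta`, the `concept_map` of related terms (standing in for ontological subconcepts and synonyms), and the function names are all illustrative assumptions. The sketch interpolates a document language model with a collection model (standard Jelinek-Mercer smoothing) and adds a semantic component that lets a query term borrow probability mass from related terms occurring in the document, so that a document mentioning only a synonym is not scored as a complete mismatch.

```python
import math
from collections import Counter


def score_document(query_terms, doc_terms, collection_terms,
                   concept_map, lam=0.5, beta=0.3):
    """Query-likelihood score with a toy semantic-smoothing component.

    concept_map maps a term to a list of related terms (hypothetical
    stand-in for ontology-derived subconcepts/synonyms).
    lam  : weight of the collection (background) model.
    beta : weight of the semantic component inside the document model.
    """
    doc_tf = Counter(doc_terms)
    coll_tf = Counter(collection_terms)
    doc_len = len(doc_terms)
    coll_len = len(collection_terms)

    score = 0.0
    for t in query_terms:
        # Maximum-likelihood estimates from document and collection.
        p_doc = doc_tf[t] / doc_len if doc_len else 0.0
        p_coll = coll_tf[t] / coll_len if coll_len else 0.0

        # Semantic component: probability mass contributed by terms
        # related to t that actually occur in the document.
        related = concept_map.get(t, [])
        p_sem = (sum(doc_tf[r] for r in related) / doc_len
                 if doc_len and related else 0.0)

        # Document model mixes literal and semantic evidence, then is
        # smoothed with the collection model (Jelinek-Mercer style).
        p = (1 - lam) * ((1 - beta) * p_doc + beta * p_sem) + lam * p_coll
        score += math.log(p) if p > 0 else math.log(1e-12)
    return score


# Usage: "automobile" is mapped as a term related to the query word "car",
# so the document containing it outscores one with no related term.
concepts = {"car": ["automobile"]}
collection = ["automobile", "engine", "banana", "engine", "car"]
s_match = score_document(["car"], ["automobile", "engine"], collection, concepts)
s_miss = score_document(["car"], ["banana", "engine"], collection, concepts)
```

  Under this sketch `s_match > s_miss`: the semantic component rewards the document containing the synonym even though the literal query term never appears in either document, which is the synonymy-mismatch behavior the abstract targets.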