Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020)
- Abstract
- In this research, we propose three approaches to measure the semantic relatedness between two words: (i) boosting the performance of the GloVe word embedding model by removing or transforming abnormal dimensions; (ii) linearly combining the information extracted from WordNet and word embeddings; and (iii) using word embeddings and 12 linguistic features extracted from WordNet as input to Support Vector Regression. We conducted experiments on eight benchmark data sets and computed Spearman correlations between the outputs of our methods and the ground truth. We report our results alongside three state-of-the-art approaches. The experimental results show that our methods outperform the state-of-the-art approaches on all the selected English benchmark data sets.
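The second approach (a linear combination of WordNet-derived and embedding-derived information) can be sketched as below. The vectors, the WordNet-based score, and the weight `alpha` are illustrative placeholders, not values or parameters taken from the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def blended_similarity(vec_a, vec_b, wordnet_score, alpha=0.5):
    # Linear combination of the embedding cosine similarity and a
    # WordNet-derived similarity score (both assumed to lie in [0, 1]).
    # alpha is a hypothetical mixing weight, not one from the paper.
    return alpha * cosine(vec_a, vec_b) + (1 - alpha) * wordnet_score

# Toy vectors standing in for GloVe embeddings (hypothetical values).
car = [0.2, 0.8, 0.5]
automobile = [0.25, 0.75, 0.55]
print(blended_similarity(car, automobile, wordnet_score=0.9))
```

In practice the WordNet score would come from a taxonomy-based measure (e.g. a path- or information-content-based similarity) and `alpha` would be tuned on held-out data against the Spearman correlation with human judgments.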
- Theme
- Semantic context in indexing and retrieval (Semantisches Umfeld in Indexierung u. Retrieval)