Lee, Y.-Y.; Ke, H.; Yen, T.-Y.; Huang, H.-H.; Chen, H.-H.: Combining and learning word embedding with WordNet for semantic relatedness and similarity measurement (2020)
- Abstract
- In this research, we propose three approaches to measuring the semantic relatedness between two words: (i) boost the performance of the GloVe word embedding model by removing or transforming abnormal dimensions; (ii) linearly combine the information extracted from WordNet and word embeddings; and (iii) use word embeddings and 12 types of linguistic information extracted from WordNet as features for Support Vector Regression. We conducted experiments on eight benchmark data sets and computed Spearman correlations between the outputs of our methods and the ground truth. We report our results alongside three state-of-the-art approaches. The experimental results show that our method outperforms the state-of-the-art approaches on all selected English benchmark data sets.
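Approach (ii), a linear combination of embedding-based and WordNet-based similarity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy vectors, the hard-coded WordNet score, and the mixing weight `alpha` are all assumptions; a real system would load GloVe vectors from file and query WordNet (e.g. for Wu-Palmer similarity).

```python
import numpy as np

# Toy vectors standing in for GloVe embeddings (hypothetical values).
embeddings = {
    "car":  np.array([0.8, 0.1, 0.3]),
    "auto": np.array([0.7, 0.2, 0.4]),
}

# Placeholder for a WordNet-derived similarity score (hypothetical value,
# hard-coded here instead of querying WordNet).
wordnet_sim = {("car", "auto"): 0.96}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def combined_similarity(w1, w2, alpha=0.5):
    """Linearly combine embedding cosine similarity with a
    WordNet similarity score; alpha is an assumed mixing weight."""
    emb = cosine(embeddings[w1], embeddings[w2])
    wn = wordnet_sim.get((w1, w2), wordnet_sim.get((w2, w1), 0.0))
    return alpha * emb + (1 - alpha) * wn

score = combined_similarity("car", "auto")
```

In practice `alpha` would be tuned on held-out data, and the outputs would be ranked against human judgments via Spearman correlation, as in the paper's evaluation.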