Yeh, J.-Y.; Ke, H.-R.; Yang, W.-P.; Meng, I.-H.: Text summarization using a trainable summarizer and latent semantic analysis (2005)
- Abstract
- This paper proposes two approaches to text summarization: a modified corpus-based approach (MCBA) and an LSA-based T.R.M. approach (LSA + T.R.M.). The first is a trainable summarizer, which takes into account several features, including position, positive keyword, negative keyword, centrality, and resemblance to the title, to generate summaries. Two new ideas are exploited: (1) sentence positions are ranked to emphasize the significance of different sentence positions, and (2) the score function is trained by a genetic algorithm (GA) to obtain a suitable combination of feature weights. The second approach uses latent semantic analysis (LSA) to derive the semantic matrix of a document or a corpus and uses semantic sentence representations to construct a semantic text relationship map. We evaluate LSA + T.R.M. both with single documents and at the corpus level to investigate the competence of LSA in text summarization. The two novel approaches were measured at several compression rates on a data corpus composed of 100 political articles. At a compression rate of 30%, average f-measures of 49% for MCBA, 52% for MCBA + GA, and 44% and 40% for LSA + T.R.M. at the single-document and corpus levels, respectively, were achieved.
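- The LSA step described above can be sketched briefly: build a term-by-sentence matrix, apply a truncated SVD, and use the resulting low-rank sentence vectors as semantic representations. The toy sentences, the binary weighting, and the norm-based salience ranking below are illustrative assumptions, not the paper's exact method (which builds a full text relationship map from these vectors).

```python
import numpy as np

# Toy sentences (hypothetical example data).
sentences = [
    "latent semantic analysis derives a semantic matrix",
    "the semantic matrix represents terms and sentences",
    "a genetic algorithm trains the score function",
]

# Binary term-by-sentence matrix A (rows = terms, columns = sentences).
vocab = sorted({w for s in sentences for w in s.split()})
A = np.array([[1.0 if w in s.split() else 0.0 for s in sentences]
              for w in vocab])

# Truncated SVD: A ≈ U_k S_k V_k^T; columns of V_k^T (scaled by S_k)
# give the semantic sentence representations.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # number of latent dimensions to keep (an assumption here)
sent_vecs = (np.diag(S[:k]) @ Vt[:k]).T  # shape: (num_sentences, k)

# Simple salience proxy: rank sentences by the norm of their semantic vector.
ranking = np.argsort(-np.linalg.norm(sent_vecs, axis=1))
```

In the paper's setting, these sentence vectors would instead feed the construction of the semantic text relationship map (T.R.M.), from which summary sentences are selected.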
- Source
- Information Processing and Management. 41(2005) no. 1, pp. 75-95
- Type
- a