Search (1 result, page 1 of 1)

  • author_ss:"Ke, H.-R."
  • theme_ss:"Automatisches Abstracting"
  1. Yeh, J.-Y.; Ke, H.-R.; Yang, W.-P.; Meng, I.-H.: Text summarization using a trainable summarizer and latent semantic analysis (2005)
    
    Abstract
    This paper proposes two approaches to text summarization: a modified corpus-based approach (MCBA) and an LSA-based T.R.M. approach (LSA + T.R.M.). The first is a trainable summarizer that takes several features into account, including position, positive keyword, negative keyword, centrality, and resemblance to the title, to generate summaries. Two new ideas are exploited: (1) sentence positions are ranked to emphasize the significance of different sentence positions, and (2) the score function is trained by a genetic algorithm (GA) to obtain a suitable combination of feature weights. The second approach uses latent semantic analysis (LSA) to derive the semantic matrix of a document or a corpus and uses the resulting semantic sentence representations to construct a semantic text relationship map. We evaluate LSA + T.R.M. both on single documents and at the corpus level to investigate the competence of LSA for text summarization. The two approaches were measured at several compression rates on a corpus of 100 political articles. At a compression rate of 30%, average F-measures of 49% for MCBA, 52% for MCBA + GA, and 44% and 40% for LSA + T.R.M. at the single-document and corpus levels, respectively, were achieved.
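    
    The LSA + T.R.M. idea, projecting sentences into a latent semantic space and selecting those that are most central on the resulting text relationship map, can be illustrated with a short sketch. The code below is not the authors' implementation; the function lsa_summarize, its parameters, and the centrality-based selection rule are illustrative assumptions, shown only to make the approach concrete.
    
    # Minimal sketch of LSA-based extractive summarization in the spirit of
    # LSA + T.R.M. (illustrative, not the paper's code): sentences are mapped
    # into a latent semantic space via truncated SVD, a text relationship map
    # is built from pairwise cosine similarities, and the most central
    # sentences are kept.
    import numpy as np
    
    def lsa_summarize(sentences, term_sentence_matrix, k=10, compression=0.3):
        """Return the most central sentences in the latent semantic space.
    
        term_sentence_matrix: terms x sentences weight matrix (e.g. tf-idf).
        k: number of latent dimensions to keep.
        compression: fraction of sentences to keep in the summary.
        """
        # Truncated SVD: A ~= U diag(s) Vt; the columns of Vt give the
        # sentence coordinates in the latent semantic space.
        U, s, Vt = np.linalg.svd(term_sentence_matrix, full_matrices=False)
        k = min(k, len(s))
        sent_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T  # sentences x k
    
        # Text relationship map: cosine similarity between every sentence pair.
        norms = np.linalg.norm(sent_vecs, axis=1, keepdims=True) + 1e-12
        unit = sent_vecs / norms
        sim = unit @ unit.T
    
        # Centrality = average similarity to the other sentences; keep the top
        # fraction and restore original order for readability.
        centrality = (sim.sum(axis=1) - 1.0) / max(len(sentences) - 1, 1)
        n_keep = max(1, int(round(compression * len(sentences))))
        chosen = sorted(np.argsort(centrality)[::-1][:n_keep])
        return [sentences[i] for i in chosen]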