Sjöbergh, J.: Older versions of the ROUGEeval summarization evaluation system were easier to fool (2007)
- Abstract
- We show some limitations of the ROUGE evaluation method for automatic summarization. We present a method for automatic summarization based on a Markov model of the source text. Using a simple greedy word selection strategy, it generates summaries with high ROUGE scores. These summaries, however, would not be judged good by human readers. The method can be adapted to trick different settings of the ROUGEeval package.
- Type
- a
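The gaming strategy the abstract describes can be illustrated with a toy sketch. The paper's actual method uses a Markov model of the source text to keep word sequences locally plausible; the sketch below omits that and shows only the core trick: greedily selecting words purely to maximize unigram overlap (ROUGE-1 recall) with a reference, which yields a high score but an ungrammatical bag of words. All names here (`greedy_rouge1_summary`, `rouge1_recall`) are hypothetical, not from the paper.

```python
from collections import Counter

def greedy_rouge1_summary(reference_tokens, budget):
    """Toy gaming strategy: pick words solely to maximize unigram
    overlap with the reference. The result scores highly on ROUGE-1
    recall yet reads as an unordered bag of words."""
    counts = Counter(reference_tokens)
    summary = []
    # Most frequent reference words first: each additional copy of a
    # word matches another reference occurrence, so recall grows fastest.
    for word, freq in counts.most_common():
        take = min(freq, budget - len(summary))
        summary.extend([word] * take)
        if len(summary) >= budget:
            break
    return summary

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: clipped unigram overlap / reference length."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(1, sum(ref.values()))

ref = "the cat sat on the mat and the dog sat nearby".split()
summ = greedy_rouge1_summary(ref, budget=5)
# summ is ['the', 'the', 'the', 'sat', 'sat'] -- nonsensical text,
# but every token matches a distinct reference occurrence.
```

In the paper's full method, candidate words are additionally constrained to follow Markov transitions observed in the source text, so the output looks superficially fluent while still exploiting ROUGE's n-gram matching.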