Search (2 results, page 1 of 1)
- author_ss:"Dorr, B.J."
- author_ss:"Schwartz, R."
- theme_ss:"Automatisches Abstracting"
- type_ss:"a"
Hobson, S.P.; Dorr, B.J.; Monz, C.; Schwartz, R.: Task-based evaluation of text summarization using Relevance Prediction (2007)
- Abstract
- This article introduces a new task-based evaluation measure called Relevance Prediction that is a more intuitive measure of an individual's performance on a real-world task than interannotator agreement. Relevance Prediction parallels what a user does in the real-world task of browsing a set of documents using standard search tools, i.e., the user judges relevance based on a short summary, and then that same user - not an independent user - decides whether to open (and judge) the corresponding document. This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard-based measure used in the summarization evaluation community. Our goal is to provide a stable framework within which developers of new automatic measures may make stronger statistical statements about the effectiveness of their measures in predicting summary usefulness. We demonstrate - as a proof-of-concept methodology for automatic metric developers - that a current automatic evaluation measure has a better correlation with Relevance Prediction than with LDC Agreement and that the significance level for detected differences is higher for the former than for the latter.
-
Zajic, D.; Dorr, B.J.; Lin, J.; Schwartz, R.: Multi-candidate reduction : sentence compression as a tool for document summarization tasks (2007)
- Abstract
- This article examines the application of two single-document sentence compression techniques to the problem of multi-document summarization: a "parse-and-trim" approach and a statistical noisy-channel approach. We introduce the multi-candidate reduction (MCR) framework for multi-document summarization, in which many compressed candidates are generated for each source sentence. These candidates are then selected for inclusion in the final summary based on a combination of static and dynamic features. Evaluations demonstrate that sentence compression is a valuable component of a larger multi-document summarization framework.
Authors
- Hobson, S.P. 1
- Lin, J. 1
- Monz, C. 1
- Zajic, D. 1