Liu, P.J.; Saleh, M.; Pot, E.; Goodrich, B.; Sepassi, R.; Kaiser, L.; Shazeer, N.: Generating Wikipedia by summarizing long sequences (2018)
- Abstract
- We show that generating English Wikipedia articles can be approached as multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information, as reflected in perplexity, ROUGE scores, and human evaluations.
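The decoder-only setup the abstract describes replaces the usual encoder-decoder split with a single stack in which every position attends causally to earlier positions. As an illustration only (this is a minimal single-head sketch, not the paper's memory-efficient attention variant; all names and shapes here are assumptions), causal self-attention can be written as:

```python
import numpy as np

def causal_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention over a token sequence.

    x: (seq_len, d_model) input embeddings.
    The causal mask lets each position attend only to itself and to
    earlier positions -- the decoder-only setting, where the source
    documents and the target article form one concatenated sequence.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                # (seq_len, seq_len)
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                          # block attention to future tokens
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                              # (seq_len, d_v)

# Toy usage: 6 tokens, model width 8
rng = np.random.default_rng(0)
seq_len, d = 6, 8
x = rng.normal(size=(seq_len, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(x, w_q, w_v=w_v, w_k=w_k)
```

Because of the mask, the first position can attend only to itself, so its output is exactly its own value projection; the paper's contribution is making this attention tractable for the very long input sequences that arise when many source documents are concatenated.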