Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.: Language models are unsupervised multitask learners
- Abstract
- Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
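The abstract's zero-shot setup conditions the language model on a document plus the conversation so far, then lets it generate the answer as a plain text continuation. A minimal sketch of that prompt construction is below; the function name and `Q:`/`A:` template are illustrative assumptions, not the paper's exact format, and the resulting string would be fed to any autoregressive language model's generation routine.

```python
# Hedged sketch of zero-shot reading comprehension as framed in the
# abstract: no task-specific training, just conditioning the model on
# a document plus questions. CoQA is conversational, so earlier Q/A
# turns are included as context. Template details are assumptions.

def build_zero_shot_prompt(document, prior_turns, question):
    """Concatenate document, prior (question, answer) turns, and the new
    question, ending with 'A:' so the model completes the answer."""
    parts = [document.strip()]
    for q, a in prior_turns:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


prompt = build_zero_shot_prompt(
    "The cat sat on the mat by the window.",
    [("What sat on the mat?", "The cat.")],
    "Where was the mat?",
)
```

The model's continuation after the trailing `A:` is taken as the predicted answer; this is how zero-shot transfer avoids the 127,000+ supervised CoQA training examples.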