Search (2 results, page 1 of 1)
- Did you mean:
- author's:"Erdelez, S." 2
- authors:"Erdelez, S." 3
Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020)
0.00 (Lucene ClassicSimilarity explain: 0.001203219 for query term "s" in doc 872; freq=8, idf=1.0872376 [docFreq=40523, maxDocs=44218], queryNorm=0.046063907, fieldNorm=0.03125, coord 1/2 applied twice; recomputed in the sketch after the result list)
- Abstract
- Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
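The abstract describes few-shot use purely through text interaction: task demonstrations and the new input are concatenated into one prompt and the model completes it, with no gradient updates. Below is a minimal sketch of that prompt construction; the prompt layout, the "=>" convention, and the `model_complete` callable are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of few-shot prompting as described in the abstract:
# demonstrations and the query are passed to the model as plain text,
# and the model continues the text; no weights are updated at any point.
from typing import Callable, List, Tuple

def build_few_shot_prompt(task_description: str,
                          demonstrations: List[Tuple[str, str]],
                          query: str) -> str:
    """Concatenate a task description, K worked examples, and the new input."""
    lines = [task_description, ""]
    for source, target in demonstrations:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model is expected to continue with the answer
    return "\n".join(lines)

def few_shot_answer(model_complete: Callable[[str], str],
                    demonstrations: List[Tuple[str, str]],
                    query: str) -> str:
    prompt = build_few_shot_prompt("Translate English to French:", demonstrations, query)
    return model_complete(prompt).strip()  # purely text in, text out

# Example with K = 2 demonstrations (model_complete would wrap an actual model):
demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
print(build_few_shot_prompt("Translate English to French:", demos, "peppermint"))
```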
Aizawa, A.; Kohlhase, M.: Mathematical information retrieval (2021)
0.00 (Lucene ClassicSimilarity explain: 0.0010528166 for query term "s" in doc 667; freq=2, idf=1.0872376 [docFreq=40523, maxDocs=44218], queryNorm=0.046063907, fieldNorm=0.0546875, coord 1/2 applied twice)
- Pages
- pp. 169-185
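The relevance values attached to both entries are Lucene ClassicSimilarity (TF-IDF) explanations for the query term "s". The sketch below recomputes them from the figures kept in the explain lines (freq, docFreq, maxDocs, queryNorm, fieldNorm, and the two coord(1/2) factors); the helper function is illustrative and not part of the Lucene API.

```python
import math

def classic_tfidf_term_score(freq: float, doc_freq: int, max_docs: int,
                             query_norm: float, field_norm: float) -> float:
    """One term's contribution as in Lucene's ClassicSimilarity explain:
    score = queryWeight * fieldWeight, with tf = sqrt(freq) and
    idf = 1 + ln(maxDocs / (docFreq + 1))."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm                      # ~0.05008241 for the values above
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Entry 1 (doc 872): freq("s") = 8, fieldNorm = 0.03125, coord(1/2) applied twice
print(classic_tfidf_term_score(8, 40523, 44218, 0.046063907, 0.03125) * 0.5 * 0.5)
# -> ~0.001203219

# Entry 2 (doc 667): freq("s") = 2, fieldNorm = 0.0546875, coord(1/2) applied twice
print(classic_tfidf_term_score(2, 40523, 44218, 0.046063907, 0.0546875) * 0.5 * 0.5)
# -> ~0.0010528166
```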
Authors
- Agarwal, S. 1
- Aizawa, A. 1
- Amodei, D. 1
- Askell, A. 1
- Berner, C. 1
- Brown, T.B. 1
- Chen, M. 1
- Chess, B. 1
- Child, R. 1
- Clark, J. 1
- Dhariwal, P. 1
- Gray, S. 1
- Henighan, T. 1
- Herbert-Voss, A. 1
- Hesse, C. 1
- Kaplan, J. 1
- Kohlhase, M. 1
- Krueger, G. 1
- Litwin, M. 1
- Mann, B. 1
- McCandlish, S. 1
- Neelakantan, A. 1
- Radford, A. 1
- Ramesh, A. 1
- Ryder, N. 1
- Sastry, G. 1
- Shyam, P. 1
- Sigler, E. 1
- Subbiah, M. 1
- Sutskever, I. 1
- Winter, C. 1
- Wu, J. 1
- Ziegler, D.M. 1