Language Models are Few-Shot Learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 30765* | 2020 |
GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models A Nichol, P Dhariwal, A Ramesh, P Shyam, P Mishkin, B McGrew, ... arXiv preprint arXiv:2112.10741, 2021 | 2138 | 2021 |
GPT-4 technical report J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... arXiv preprint arXiv:2303.08774, 2023 | 905 | 2023 |
Gemini: a family of highly capable multimodal models Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ... arXiv preprint arXiv:2312.11805, 2023 | 519 | 2023 |
Text and code embeddings by contrastive pre-training A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... arXiv preprint arXiv:2201.10005, 2022 | 236 | 2022 |
Model-Based Active Exploration P Shyam, W Jaśkowski, F Gomez International Conference on Machine Learning (ICML), 2019 | 203 | 2018 |
Language Models are Few-Shot Learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 150 | 2020 |
Attentive Recurrent Comparators P Shyam, S Gupta, A Dukkipati International Conference on Machine Learning (ICML), 3173-3181, 2017 | 146 | 2017 |
Language Models are Few-Shot Learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165 (cs.CL), 2020 | 140 | 2020 |
Training agents using upside-down reinforcement learning RK Srivastava, P Shyam, F Mutz, W Jaśkowski, J Schmidhuber arXiv preprint arXiv:1912.02877, 2019 | 114 | 2019 |
Language Models are Few-Shot Learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... CoRR abs/2005.14165, 2020, URL: https://arxiv.org/abs/2005.14165 | 72 | 2020 |
Artificial Intelligence for Prosthetics - Challenge Solutions Ł Kidziński, C Ong, SP Mohanty, J Hicks, SF Carroll, B Zhou, H Zeng, ... arXiv preprint arXiv:1902.02441, 2019 | 43 | 2019 |
Unsupervised neural machine translation with generative language models only JM Han, I Babuschkin, H Edwards, A Neelakantan, T Xu, S Polu, A Ray, ... arXiv preprint arXiv:2110.05448, 2021 | 23 | 2021 |
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ... arXiv preprint arXiv:2403.05530, 2024 | 13 | 2024 |
Language Models are Few-Shot Learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020 | 13 | 2020 |