Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, et al. arXiv preprint arXiv:2206.04615, 2022. Cited by 754.
Gemini: A family of highly capable multimodal models. Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, et al. arXiv preprint arXiv:2312.11805, 2023. Cited by 548.
NL-Augmenter: A framework for task-sensitive natural language augmentation. KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, et al. arXiv preprint arXiv:2112.02721, 2021. Cited by 64.
Explaining relationships between scientific documents. K Luu, X Wu, R Koncel-Kedziorski, K Lo, I Cachola, NA Smith. arXiv preprint arXiv:2002.00317, 2020. Cited by 56*.
What makes a good counselor? Learning to distinguish between high-quality and low-quality counseling conversations. V Pérez-Rosas, X Wu, K Resnicow, R Mihalcea. Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019. Cited by 54.
Linguistically-informed transformations (LIT): A method for automatically generating contrast sets. C Li, L Shengshuo, LZ Liu, X Wu, X Zhou, S Steinert-Threlkeld. arXiv preprint arXiv:2010.08580, 2020. Cited by 30.