Sam McCandlish
Anthropic
Verified email at anthropic.com
Title · Cited by · Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in neural information processing systems 33, 1877-1901, 2020
Cited by 29966* · 2020
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
Cited by 2299 · 2020
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 1950 · 2021
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint arXiv:2204.05862, 2022
Cited by 663 · 2022
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 568 · 2022
A stereoscopic look into the bulk
B Czech, L Lamprou, S McCandlish, B Mosk, J Sully
Journal of High Energy Physics 2016 (7), 1-47, 2016
Cited by 243 · 2016
Integral geometry and holography
B Czech, L Lamprou, S McCandlish, J Sully
Journal of High Energy Physics 2015 (10), 1-41, 2015
Cited by 237 · 2015
Scaling laws for autoregressive generative modeling
T Henighan, J Kaplan, M Katz, M Chen, C Hesse, J Jackson, H Jun, ...
arXiv preprint arXiv:2010.14701, 2020
Cited by 231 · 2020
Spontaneous segregation of self-propelled particles with different motilities
SR McCandlish, A Baskaran, MF Hagan
Soft Matter 8 (8), 2527-2534, 2012
Cited by 227 · 2012
Language models (mostly) know what they know
S Kadavath, T Conerly, A Askell, T Henighan, D Drain, E Perez, ...
arXiv preprint arXiv:2207.05221, 2022
Cited by 219 · 2022
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 210 · 2022
A general language assistant as a laboratory for alignment
A Askell, Y Bai, A Chen, D Drain, D Ganguli, T Henighan, A Jones, ...
arXiv preprint arXiv:2112.00861, 2021
Cited by 208 · 2021
An empirical model of large-batch training
S McCandlish, J Kaplan, D Amodei, OpenAI Dota Team
arXiv preprint arXiv:1812.06162, 2018
Cited by 198 · 2018
In-context learning and induction heads
C Olsson, N Elhage, N Nanda, N Joseph, N DasSarma, T Henighan, ...
arXiv preprint arXiv:2209.11895, 2022
Cited by 184 · 2022
Predictability and surprise in large generative models
D Ganguli, D Hernandez, L Lovitt, A Askell, Y Bai, A Chen, T Conerly, ...
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …, 2022
Cited by 169 · 2022
Tensor networks from kinematic space
B Czech, L Lamprou, S McCandlish, J Sully
Journal of High Energy Physics 2016 (7), 1-38, 2016
Cited by 165 · 2016
A mathematical framework for transformer circuits
N Elhage, N Nanda, C Olsson, T Henighan, N Joseph, B Mann, A Askell, ...
Transformer Circuits Thread 1, 1, 2021
Cited by 146 · 2021
Toy models of superposition
N Elhage, T Hume, C Olsson, N Schiefer, T Henighan, S Kravec, ...
arXiv preprint arXiv:2209.10652, 2022
Cited by 137 · 2022
Language models are few-shot learners. arXiv
TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ...
Computer Science, Computation and Language, 2005
1372005
Scaling laws for transfer
D Hernandez, J Kaplan, T Henighan, S McCandlish
arXiv preprint arXiv:2102.01293, 2021
Cited by 125 · 2021