Ananya Kumar
Research Scientist, OpenAI
Verified email at cs.stanford.edu
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 4319 · 2021
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
Transactions on Machine Learning Research (TMLR), 2023
Cited by 1102 · 2023
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
A Kumar, A Raghunathan, R Jones, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2022
Cited by 681 · 2022
Verified Uncertainty Calibration
A Kumar, P Liang, T Ma
Neural Information Processing Systems (NeurIPS), 2019
Cited by 391 · 2019
Understanding Self-Training for Gradual Domain Adaptation
A Kumar, T Ma, P Liang
International Conference on Machine Learning (ICML), 2020
Cited by 252 · 2020
Surgical fine-tuning improves adaptation to distribution shifts
Y Lee, AS Chen, F Tajwar, A Kumar, H Yao, P Liang, C Finn
International Conference on Learning Representations (ICLR), 2023
Cited by 196 · 2023
Finetune like you pretrain: Improved finetuning of zero-shot vision models
S Goyal, A Kumar, S Garg, Z Kolter, A Raghunathan
Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Cited by 130 · 2023
Extending the WILDS benchmark for unsupervised adaptation
S Sagawa*, PW Koh*, T Lee*, I Gao*, SM Xie, K Shen, A Kumar, W Hu, ...
International Conference on Learning Representations (ICLR), 2022
Cited by 129 · 2022
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation
K Shen*, R Jones*, A Kumar*, SM Xie*, JZ HaoChen, T Ma, P Liang
International Conference on Machine Learning (ICML), 2022
Cited by 104 · 2022
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures
J Uesato*, A Kumar*, C Szepesvari*, T Erez, A Ruderman, K Anderson, ...
International Conference on Learning Representations (ICLR), 2019
Cited by 86 · 2019
Self-training avoids using spurious features under domain shift
Y Chen*, C Wei*, A Kumar, T Ma
Neural Information Processing Systems (NeurIPS), 2020
Cited by 83 · 2020
Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization?
R Bommasani, K Creel, A Kumar, D Jurafsky, P Liang
Advances in Neural Information Processing Systems (NeurIPS), 2022
Cited by 81 · 2022
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
SM Xie*, A Kumar*, R Jones*, F Khani, T Ma, P Liang
International Conference on Learning Representations (ICLR), 2021
Cited by 61 · 2021
Selective Classification Can Magnify Disparities Across Groups
E Jones*, S Sagawa*, PW Koh*, A Kumar, P Liang
International Conference on Learning Representations (ICLR), 2021
Cited by 59 · 2021
Consistent generative query networks
A Kumar, SM Eslami, DJ Rezende, M Garnelo, F Viola, E Lockhart, ...
NeurIPS workshop on Bayesian Deep Learning, 2018
Cited by 50* · 2018
Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
A Kumar, T Ma, P Liang, A Raghunathan
Conference on Uncertainty in Artificial Intelligence (UAI), 2022
Cited by 41* · 2022
Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations
JZ HaoChen, C Wei, A Kumar, T Ma
Neural Information Processing Systems (NeurIPS), 2022
Cited by 40 · 2022
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
F Tajwar, A Kumar*, SM Xie*, P Liang
ICML UDL Workshop, 2021
Cited by 20 · 2021
How to fine-tune vision models with SGD
A Kumar, R Shen, S Bubeck, S Gunasekar
International Conference on Learning Representations (ICLR), 2024
Cited by 18 · 2024
Uncovering Surprising Behaviors in Reinforcement Learning via Worst-case Analysis
A Ruderman, R Everett, B Sikder, H Soyer, C Beattie, J Uesato, A Kumar, ...
ICLR SafeML Workshop, 2019
Cited by 16* · 2019