Aviral Kumar
CMU & Google DeepMind
Verified email at andrew.cmu.edu - Homepage
Title
Cited by
Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
2192 · 2023
Offline reinforcement learning: Tutorial, review, and perspectives on open problems
S Levine, A Kumar, G Tucker, J Fu
arXiv preprint arXiv:2005.01643, 2020
2045 · 2020
Conservative Q-learning for offline reinforcement learning
A Kumar, A Zhou, G Tucker, S Levine
Advances in Neural Information Processing Systems 33, 1179-1191, 2020
1941 · 2020
D4RL: Datasets for deep data-driven reinforcement learning
J Fu, A Kumar, O Nachum, G Tucker, S Levine
arXiv preprint arXiv:2004.07219, 2020
1208 · 2020
Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
A Kumar, J Fu, G Tucker, S Levine
NeurIPS 2019, arXiv:1906.00949, 2019
1135 · 2019
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
667 · 2024
Advantage-weighted regression: Simple and scalable off-policy reinforcement learning
XB Peng, A Kumar, G Zhang, S Levine
arXiv preprint arXiv:1910.00177, 2019
537 · 2019
COMBO: Conservative offline model-based policy optimization
T Yu, A Kumar, R Rafailov, A Rajeswaran, S Levine, C Finn
Advances in Neural Information Processing Systems 34, 28954-28967, 2021
431 · 2021
Trainable calibration measures for neural networks from kernel mean embeddings
A Kumar, S Sarawagi, U Jain
International Conference on Machine Learning, 2805-2814, 2018
313 · 2018
Graph Normalizing Flows
J Liu, A Kumar, J Ba, J Kiros, K Swersky
NeurIPS 2019, arXiv:1905.13177, 2019
296* · 2019
OPAL: Offline primitive discovery for accelerating offline reinforcement learning
A Ajay, A Kumar, P Agrawal, S Levine, O Nachum
arXiv preprint arXiv:2010.13611, 2020
186 · 2020
Diagnosing Bottlenecks in Deep Q-learning Algorithms
J Fu, A Kumar, M Soh, S Levine
International Conference on Machine Learning (ICML) 2019, https://arxiv.org …, 2019
164 · 2019
Conservative safety critics for exploration
H Bharadhwaj, A Kumar, N Rhinehart, S Levine, F Shkurti, A Garg
arXiv preprint arXiv:2010.14497, 2020
146 · 2020
When should we prefer offline reinforcement learning over behavioral cloning?
A Kumar, J Hong, A Singh, S Levine
arXiv preprint arXiv:2204.05618, 2022
143* · 2022
DisCor: Corrective feedback in reinforcement learning via distribution correction
A Kumar, A Gupta, S Levine
Advances in Neural Information Processing Systems 33, 18560-18572, 2020
121 · 2020
Why generalization in RL is difficult: Epistemic POMDPs and implicit partial observability
D Ghosh, J Rahme, A Kumar, A Zhang, RP Adams, S Levine
Advances in Neural Information Processing Systems 34, 25502-25515, 2021
118 · 2021
COG: Connecting new skills to past experience with offline reinforcement learning
A Singh, A Yu, J Yang, J Zhang, A Kumar, S Levine
arXiv preprint arXiv:2010.14500, 2020
113 · 2020
Implicit under-parameterization inhibits data-efficient deep reinforcement learning
A Kumar, R Agarwal, D Ghosh, S Levine
arXiv preprint arXiv:2010.14498, 2020
113 · 2020
One solution is not all you need: Few-shot extrapolation via structured MaxEnt RL
S Kumar, A Kumar, S Levine, C Finn
Advances in Neural Information Processing Systems 33, 8198-8210, 2020
101 · 2020
Calibration of Encoder Decoder Models for Neural Machine Translation
A Kumar, S Sarawagi
https://arxiv.org/abs/1903.00802, 2019
100 · 2019
Articles 1–20