Jack Parker-Holder
Google DeepMind, UCL
Verified email at google.com - Homepage
Title · Cited by · Year
Effective diversity in population based reinforcement learning
J Parker-Holder, A Pacchiano, KM Choromanski, SJ Roberts
Advances in Neural Information Processing Systems 33, 18050-18062, 2020
147 · 2020
Evolving curricula with regret-based environment design
J Parker-Holder, M Jiang, M Dennis, M Samvelyan, J Foerster, ...
International Conference on Machine Learning, 17473-17498, 2022
85 · 2022
Replay-guided adversarial environment design
M Jiang, M Dennis, J Parker-Holder, J Foerster, E Grefenstette, ...
Advances in Neural Information Processing Systems 34, 1884-1897, 2021
74 · 2021
Automated reinforcement learning (autorl): A survey and open problems
J Parker-Holder, R Rajan, X Song, A Biedenkapp, Y Miao, T Eimer, ...
Journal of Artificial Intelligence Research 74, 517-568, 2022
72 · 2022
Minihack the planet: A sandbox for open-ended reinforcement learning research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
arXiv preprint arXiv:2109.13202, 2021
69 · 2021
Human-timescale adaptation in an open-ended task space
AA Team, J Bauer, K Baumli, S Baveja, F Behbahani, A Bhoopchand, ...
ICML 2023 (Oral), 2023
68* · 2023
Provably efficient online hyperparameter optimization with population-based bandits
J Parker-Holder, V Nguyen, SJ Roberts
Advances in Neural Information Processing Systems 33, 17200-17211, 2020
63 · 2020
Ready policy one: World building through active learning
P Ball, J Parker-Holder, A Pacchiano, K Choromanski, S Roberts
International Conference on Machine Learning, 591-601, 2020
52 · 2020
Revisiting design choices in offline model-based reinforcement learning
C Lu, PJ Ball, J Parker-Holder, MA Osborne, SJ Roberts
arXiv preprint arXiv:2110.04135, 2021
51 · 2021
From complexity to simplicity: Adaptive ES-active subspaces for blackbox optimization
KM Choromanski, A Pacchiano, J Parker-Holder, Y Tang, V Sindhwani
Advances in Neural Information Processing Systems 32, 2019
49 · 2019
Tactical optimism and pessimism for deep reinforcement learning
T Moskovitz, J Parker-Holder, A Pacchiano, M Arbel, M Jordan
Advances in Neural Information Processing Systems 34, 12849-12863, 2021
45 · 2021
Provably robust blackbox optimization for reinforcement learning
K Choromanski, A Pacchiano, J Parker-Holder, Y Tang, D Jain, Y Yang, ...
Conference on Robot Learning, 683-696, 2020
42 · 2020
Same state, different task: Continual reinforcement learning without interference
S Kessler, J Parker-Holder, P Ball, S Zohren, SJ Roberts
Proceedings of the AAAI Conference on Artificial Intelligence 36 (7), 7143-7151, 2022
40* · 2022
Towards tractable optimism in model-based reinforcement learning
A Pacchiano, P Ball, J Parker-Holder, K Choromanski, S Roberts
Uncertainty in Artificial Intelligence, 1413-1423, 2021
38 · 2021
Learning to score behaviors for guided policy optimization
A Pacchiano, J Parker-Holder, Y Tang, K Choromanski, A Choromanska, ...
International Conference on Machine Learning, 7445-7454, 2020
38 · 2020
Augmented world models facilitate zero-shot dynamics generalization from a single offline environment
PJ Ball, C Lu, J Parker-Holder, S Roberts
International Conference on Machine Learning, 619-629, 2021
33 · 2021
Challenges and opportunities in offline reinforcement learning from visual observations
C Lu, PJ Ball, TGJ Rudner, J Parker-Holder, MA Osborne, YW Teh
arXiv preprint arXiv:2206.04779, 2022
31 · 2022
Ridge rider: Finding diverse solutions by following eigenvectors of the hessian
J Parker-Holder, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
Advances in Neural Information Processing Systems 33, 753-765, 2020
26 · 2020
Synthetic experience replay
C Lu, PJ Ball, YW Teh, J Parker-Holder
NeurIPS 2023, 2023
20 · 2023
From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers
K Choromanski, H Lin, H Chen, T Zhang, A Sehanobish, V Likhosherstov, ...
International Conference on Machine Learning, 3962-3983, 2022
18 · 2022