Jack Parker-Holder
Research Scientist at DeepMind
Effective Diversity in Population Based Reinforcement Learning
J Parker-Holder*, A Pacchiano*, K Choromanski, S Roberts
NeurIPS 2020 (Spotlight), 2020
Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits
J Parker-Holder, V Nguyen, S Roberts
NeurIPS 2020, 2020
MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
NeurIPS 2021 (Datasets and Benchmarks), 2021
From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization
KM Choromanski*, A Pacchiano*, J Parker-Holder*, Y Tang*, ...
Advances in Neural Information Processing Systems, 10299-10309, 2019
Ready Policy One: World Building Through Active Learning
P Ball*, J Parker-Holder*, A Pacchiano, K Choromanski, S Roberts
ICML 2020, 2020
Evolving Curricula with Regret-Based Environment Design
J Parker-Holder*, M Jiang*, M Dennis, M Samvelyan, J Foerster, ...
ICML 2022, 2022
Provably Robust Blackbox Optimization for Reinforcement Learning
K Choromanski*, A Pacchiano*, J Parker-Holder*, Y Tang, D Jain, Y Yang, ...
Conference on Robot Learning, 683-696, 2019
Automated Reinforcement Learning (AutoRL): A Survey and Open Problems
J Parker-Holder*, R Rajan*, X Song*, A Biedenkapp, Y Miao, T Eimer, ...
JAIR, 2022
Replay-Guided Adversarial Environment Design
M Jiang*, M Dennis*, J Parker-Holder, J Foerster, E Grefenstette, ...
NeurIPS 2021, 2021
Towards Tractable Optimism in Model-Based Reinforcement Learning
A Pacchiano*, P Ball*, J Parker-Holder*, K Choromanski, S Roberts
UAI 2021, 2020
Learning to Score Behaviors for Guided Policy Optimization
A Pacchiano*, J Parker-Holder*, Y Tang*, K Choromanski, ...
International Conference on Machine Learning, 7445-7454, 2020
Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment
PJ Ball*, C Lu*, J Parker-Holder, S Roberts
ICML 2021, 2021
Tactical Optimism and Pessimism for Deep Reinforcement Learning
T Moskovitz, J Parker-Holder, A Pacchiano, M Arbel, MI Jordan
NeurIPS 2021, 2021
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
J Parker-Holder*, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
Advances in Neural Information Processing Systems 33, 2020
Same State, Different Task: Continual Reinforcement Learning without Interference
S Kessler, J Parker-Holder, P Ball, S Zohren, SJ Roberts
AAAI 2022 (Oral), 2022
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations
C Lu, PJ Ball, TGJ Rudner, J Parker-Holder, MA Osborne, YW Teh
RSS L-DOD Workshop (Best Paper Award), 2022
Revisiting Design Choices in Model-Based Offline Reinforcement Learning
C Lu*, PJ Ball*, J Parker-Holder, MA Osborne, SJ Roberts
ICLR 2022 (Spotlight), 2022
Human-Timescale Adaptation in an Open-Ended Task Space
AA Team, J Bauer, K Baumli, S Baveja, F Behbahani, A Bhoopchand, ...
ICML 2023, 2023
ES-ENAS: Blackbox Optimization over Hybrid Spaces via Combinatorial and Continuous Evolution
X Song, KM Choromanski, J Parker-Holder, Y Tang, D Peng, D Jain, ...
Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL
J Parker-Holder, V Nguyen, S Desai, S Roberts
NeurIPS 2021, 2021