Zachary Nado
Google Brain
Verified email at google.com
Title · Cited by · Year
Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift
Y Ovadia, E Fertig, J Ren, Z Nado, D Sculley, S Nowozin, J Dillon, ...
Advances in neural information processing systems 32, 2019
Cited by 1184 · 2019
Underspecification presents challenges for credibility in modern machine learning
A D'Amour, K Heller, D Moldovan, B Adlam, B Alipanahi, A Beutel, ...
The Journal of Machine Learning Research 23 (1), 10237-10297, 2022
Cited by 462 · 2022
On empirical comparisons of optimizers for deep learning
D Choi, CJ Shallue, Z Nado, J Lee, CJ Maddison, GE Dahl
arXiv preprint arXiv:1910.05446, 2019
Cited by 242 · 2019
Evaluating prediction-time batch normalization for robustness under covariate shift
Z Nado, S Padhy, D Sculley, A D'Amour, B Lakshminarayanan, J Snoek
arXiv preprint arXiv:2006.10963, 2020
Cited by 101 · 2020
Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model
G Zhang, L Li, Z Nado, J Martens, S Sachdeva, G Dahl, C Shallue, ...
Advances in neural information processing systems 32, 2019
Cited by 96 · 2019
Uncertainty Baselines: Benchmarks for uncertainty & robustness in deep learning
Z Nado, N Band, M Collier, J Djolonga, MW Dusenberry, S Farquhar, ...
arXiv preprint arXiv:2106.04015, 2021
Cited by 59 · 2021
Plex: Towards reliability using pretrained large model extensions
D Tran, J Liu, MW Dusenberry, D Phan, M Collier, J Ren, K Han, Z Wang, ...
arXiv preprint arXiv:2207.07411, 2022
Cited by 35 · 2022
AutoGraph: Imperative-style Coding with Graph-based Performance
D Moldovan, J Decker, F Wang, A Johnson, B Lee, Z Nado, D Sculley, ...
Proceedings of Machine Learning and Systems 1, 389-405, 2019
Cited by 34 · 2019
A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes
Z Nado, JM Gilmer, CJ Shallue, R Anil, GE Dahl
arXiv preprint arXiv:2102.06356, 2021
Cited by 28 · 2021
A loss curvature perspective on training instabilities of deep learning models
J Gilmer, B Ghorbani, A Garg, S Kudugunta, B Neyshabur, D Cardoze, ...
International Conference on Learning Representations, 2022
Cited by 26* · 2022
Revisiting one-vs-all classifiers for predictive uncertainty and out-of-distribution detection in neural networks
S Padhy, Z Nado, J Ren, J Liu, J Snoek, B Lakshminarayanan
arXiv preprint arXiv:2007.05134, 2020
Cited by 25 · 2020
Benchmarking Bayesian deep learning on diabetic retinopathy detection tasks
N Band, TGJ Rudner, Q Feng, A Filos, Z Nado, MW Dusenberry, G Jerfel, ...
arXiv preprint arXiv:2211.12717, 2022
Cited by 16 · 2022
Adaptive gradient methods at the edge of stability
JM Cohen, B Ghorbani, S Krishnan, N Agarwal, S Medapati, M Badura, ...
arXiv preprint arXiv:2207.14484, 2022
Cited by 10 · 2022
A simple approach to improve single-model deep uncertainty via distance-awareness
JZ Liu, S Padhy, J Ren, Z Lin, Y Wen, G Jerfel, Z Nado, J Snoek, D Tran, ...
Journal of Machine Learning Research 23, 1-63, 2022
Cited by 10 · 2022
Stochastic gradient Langevin dynamics that exploit neural network structure
Z Nado, J Snoek, R Grosse, D Duvenaud, B Xu, J Martens
Cited by 10 · 2018
Pre-trained Gaussian processes for Bayesian optimization
Z Wang, GE Dahl, K Swersky, C Lee, Z Mariet, Z Nado, J Gilmer, J Snoek, ...
arXiv preprint arXiv:2109.08215, 2021
Cited by 7 · 2021
TensorForest: Scalable random forests on TensorFlow
T Colthurst, D Sculley, G Hendry, Z Nado
Machine learning systems workshop at NIPS, 2016
Cited by 7 · 2016
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 5 · 2023
Automatic prior selection for meta Bayesian optimization with a case study on tuning deep neural network optimizers
Z Wang, GE Dahl, K Swersky, C Lee, ZE Mariet, Z Nado, J Gilmer, ...
Cited by 3 · 2021
Pre-training helps Bayesian optimization too
Z Wang, GE Dahl, K Swersky, C Lee, Z Mariet, Z Nado, J Gilmer, J Snoek, ...
arXiv preprint arXiv:2207.03084, 2022
Cited by 2 · 2022