Pavel Izmailov
PhD Student, NYU
Verified email at nyu.edu · Homepage
Title · Cited by · Year
Averaging Weights Leads to Wider Optima and Better Generalization
P Izmailov, D Podoprikhin, T Garipov, D Vetrov, AG Wilson
Uncertainty in Artificial Intelligence (UAI), 2018
Cited by 872 · 2018
A Simple Baseline for Bayesian Uncertainty in Deep Learning
W Maddox, T Garipov, P Izmailov, D Vetrov, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2019
Cited by 505 · 2019
Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
T Garipov, P Izmailov, D Podoprikhin, DP Vetrov, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2018
Cited by 418 · 2018
Bayesian Deep Learning and a Probabilistic Perspective of Generalization
AG Wilson, P Izmailov
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 334 · 2020
There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average
B Athiwaratkun, M Finzi, P Izmailov, AG Wilson
International Conference on Learning Representations (ICLR 2019), 2018
Cited by 230* · 2018
Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data
M Finzi, S Stanton, P Izmailov, AG Wilson
International Conference on Machine Learning (ICML), 2020
Cited by 164 · 2020
What Are Bayesian Neural Network Posteriors Really Like?
P Izmailov, S Vikram, MD Hoffman, AG Wilson
International Conference on Machine Learning (ICML), 2021
Cited by 149 · 2021
Why Normalizing Flows Fail to Detect Out-of-Distribution Data
P Kirichenko, P Izmailov, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 112 · 2020
Subspace Inference for Bayesian Deep Learning
P Izmailov, WJ Maddox, P Kirichenko, T Garipov, D Vetrov, AG Wilson
Uncertainty in Artificial Intelligence (UAI), 2019
Cited by 98 · 2019
Learning Invariances in Neural Networks
G Benton, M Finzi, P Izmailov, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 80* · 2020
Semi-Supervised Learning with Normalizing Flows
P Izmailov, P Kirichenko, M Finzi, AG Wilson
International Conference on Machine Learning (ICML), 2019
Cited by 65 · 2019
Does Knowledge Distillation Really Work?
S Stanton, P Izmailov, P Kirichenko, AA Alemi, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 55 · 2021
Tensor Train decomposition on TensorFlow (T3F)
A Novikov, P Izmailov, V Khrulkov, M Figurnov, I Oseledets
Journal of Machine Learning Research 21, 2020
Cited by 51 · 2020
Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition
P Izmailov, A Novikov, D Kropotov
Artificial Intelligence and Statistics (AISTATS), 2018
Cited by 51 · 2018
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
P Kirichenko, P Izmailov, AG Wilson
International Conference on Learning Representations (ICLR 2023), 2022
Cited by 32 · 2022
Improving Stability in Deep Reinforcement Learning with Weight Averaging
E Nikishin, P Izmailov, B Athiwaratkun, D Podoprikhin, T Garipov, ...
Uncertainty in Deep Learning Workshop at UAI, 2018
Cited by 26 · 2018
Dangers of Bayesian Model Averaging under Covariate Shift
P Izmailov, P Nicholson, S Lotfi, AG Wilson
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 18 · 2021
Invertible Convolutional Networks
M Finzi, P Izmailov, W Maddox, P Kirichenko, AG Wilson
Workshop on Invertible Neural Nets and Normalizing Flows at ICML, 2019
Cited by 12 · 2019
Bayesian Model Selection, the Marginal Likelihood, and Generalization
S Lotfi, P Izmailov, G Benton, M Goldblum, AG Wilson
International Conference on Machine Learning (ICML), 2022
Cited by 11 · 2022
Fast Uncertainty Estimates and Bayesian Model Averaging of DNNs
W Maddox, T Garipov, P Izmailov, D Vetrov, AG Wilson
Uncertainty in Deep Learning Workshop at UAI 8, 2018
Cited by 6 · 2018
Articles 1–20