Ohad Shamir
Verified email at weizmann.ac.il - Homepage
Title · Cited by · Year
The power of depth for feedforward neural networks
R Eldan, O Shamir
Conference on learning theory, 907-940, 2016
Cited by 1014 · 2016
Learnability, stability and uniform convergence
S Shalev-Shwartz, O Shamir, N Srebro, K Sridharan
The Journal of Machine Learning Research 11, 2635-2670, 2010
Cited by 919* · 2010
Making gradient descent optimal for strongly convex stochastic optimization
A Rakhlin, O Shamir, K Sridharan
arXiv preprint arXiv:1109.5647, 2011
Cited by 847 · 2011
Optimal Distributed Online Prediction Using Mini-Batches.
O Dekel, R Gilad-Bachrach, O Shamir, L Xiao
Journal of Machine Learning Research 13 (1), 2012
Cited by 821 · 2012
Communication-efficient distributed optimization using an approximate Newton-type method
O Shamir, N Srebro, T Zhang
International conference on machine learning, 1000-1008, 2014
Cited by 690 · 2014
Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes
O Shamir, T Zhang
International conference on machine learning, 71-79, 2013
Cited by 676 · 2013
On the computational efficiency of training neural networks
R Livni, S Shalev-Shwartz, O Shamir
Advances in neural information processing systems 27, 2014
Cited by 627 · 2014
Size-independent sample complexity of neural networks
N Golowich, A Rakhlin, O Shamir
Conference On Learning Theory, 297-299, 2018
Cited by 481 · 2018
Better mini-batch algorithms via accelerated gradient methods
A Cotter, O Shamir, N Srebro, K Sridharan
Advances in neural information processing systems 24, 2011
Cited by 400 · 2011
Adaptively learning the crowd kernel
O Tamuz, C Liu, S Belongie, O Shamir, AT Kalai
arXiv preprint arXiv:1105.1033, 2011
Cited by 334 · 2011
Nonstochastic multi-armed bandits with graph-structured feedback
N Alon, N Cesa-Bianchi, C Gentile, S Mannor, Y Mansour, O Shamir
SIAM Journal on Computing 46 (6), 1785-1826, 2017
Cited by 319* · 2017
Spurious local minima are common in two-layer ReLU neural networks
I Safran, O Shamir
International conference on machine learning, 4433-4441, 2018
Cited by 310 · 2018
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
Cited by 309 · 2020
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B McMahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
Cited by 298 · 2020
An optimal algorithm for bandit and zero-order convex optimization with two-point feedback
O Shamir
Journal of Machine Learning Research 18 (52), 1-11, 2017
Cited by 288 · 2017
Learning and generalization with the information bottleneck
O Shamir, S Sabato, N Tishby
Theoretical Computer Science 411 (29-30), 2696-2711, 2010
Cited by 256 · 2010
Depth-width tradeoffs in approximating natural functions with neural networks
I Safran, O Shamir
International conference on machine learning, 2979-2987, 2017
Cited by 237* · 2017
Communication complexity of distributed convex learning and optimization
Y Arjevani, O Shamir
Advances in neural information processing systems 28, 2015
Cited by 234 · 2015
Failures of gradient-based deep learning
S Shalev-Shwartz, O Shamir, S Shammah
International Conference on Machine Learning, 3067-3075, 2017
Cited by 232 · 2017
On the complexity of bandit and derivative-free stochastic convex optimization
O Shamir
Conference on Learning Theory, 3-24, 2013
Cited by 227 · 2013
Articles 1–20