S. Cen, C. Cheng, Y. Chen, Y. Wei, Y. Chi. Fast global convergence of natural policy gradient methods with entropy regularization. Operations Research, 2021. Cited by 63.
B. Li, S. Cen, Y. Chen, Y. Chi. Communication-efficient distributed optimization in networks with gradient tracking and variance reduction. International Conference on Artificial Intelligence and Statistics, 1662-1672, 2020. Cited by 53*.
A. Milzarek, X. Xiao, S. Cen, Z. Wen, M. Ulbrich. A stochastic semismooth Newton method for nonsmooth nonconvex optimization. SIAM Journal on Optimization 29 (4), 2916-2948, 2019. Cited by 15.
W. Zhan, S. Cen, B. Huang, Y. Chen, J. D. Lee, Y. Chi. Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. arXiv preprint arXiv:2105.11066, 2021. Cited by 14.
S. Cen, H. Zhang, Y. Chi, W. Chen, T.-Y. Liu. Convergence of distributed stochastic variance reduced methods without sampling extra data. IEEE Transactions on Signal Processing 68, 3976-3989, 2020. Cited by 14.
S. Cen, Y. Wei, Y. Chi. Fast policy extragradient methods for competitive games with entropy regularization. Advances in Neural Information Processing Systems 34, 2021. Cited by 10.
S. Cen, F. Chen, Y. Chi. Independent natural policy gradient methods for potential games: Finite-time global convergence with entropy regularization. arXiv preprint arXiv:2204.05466, 2022.