Fast global convergence of natural policy gradient methods with entropy regularization. S Cen, C Cheng, Y Chen, Y Wei, Y Chi. Operations Research 70 (4), 2563-2578, 2022. Cited by 129.

Communication-efficient distributed optimization in networks with gradient tracking and variance reduction. B Li, S Cen, Y Chen, Y Chi. The Journal of Machine Learning Research 21 (1), 7331-7381, 2020. Cited by 98*.

Fast policy extragradient methods for competitive games with entropy regularization. S Cen, Y Wei, Y Chi. Advances in Neural Information Processing Systems 34, 27952-27964, 2021. Cited by 37.

Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. W Zhan, S Cen, B Huang, Y Chen, JD Lee, Y Chi. arXiv preprint arXiv:2105.11066, 2021. Cited by 35.
A stochastic semismooth Newton method for nonsmooth nonconvex optimization. A Milzarek, X Xiao, S Cen, Z Wen, M Ulbrich. SIAM Journal on Optimization 29 (4), 2916-2948, 2019. Cited by 28.
Convergence of distributed stochastic variance reduced methods without sampling extra data. S Cen, H Zhang, Y Chi, W Chen, TY Liu. IEEE Transactions on Signal Processing 68, 3976-3989, 2020. Cited by 24.

Faster last-iterate convergence of policy optimization in zero-sum Markov games. S Cen, Y Chi, SS Du, L Xiao. arXiv preprint arXiv:2210.01050, 2022. Cited by 8.

Independent natural policy gradient methods for potential games: Finite-time global convergence with entropy regularization. S Cen, F Chen, Y Chi. 2022 IEEE 61st Conference on Decision and Control (CDC), 2833-2838, 2022. Cited by 4.

Asynchronous gradient play in zero-sum multi-agent games. R Ao, S Cen, Y Chi. arXiv preprint arXiv:2211.08980, 2022. Cited by 2.