Publications
* denotes equal contributions.
Conference Publications
- Adagrad Under Anisotropic Smoothness.
  Yuxing Liu*, Rui Pan*, and Tong Zhang. [ICLR 2025]
- Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers.
  Yuxing Liu, Lesi Chen, and Luo Luo. [ICML 2024]
- On the Complexity of Finite-Sum Smooth Optimization under the Polyak-Łojasiewicz Condition.
  Yunyan Bai, Yuxing Liu, and Luo Luo. [ICML 2024]
- Accelerated Convergence of Stochastic Heavy Ball Method under Anisotropic Gradient Noise.
  Rui Pan*, Yuxing Liu*, Xiaoyu Wang, and Tong Zhang. [ICLR 2024]
Preprints
- Theoretical Analysis on How Learning Rate Warmup Accelerates Convergence.
  Yuxing Liu*, Yuze Ge, Rui Pan, Kang An, and Tong Zhang. [arXiv]
- ASGO: Adaptive Structured Gradient Optimization.
  Kang An*, Yuxing Liu*, Rui Pan, Yi Ren, Shiqian Ma, Donald Goldfarb, and Tong Zhang. [arXiv]
- On the Complexity of Decentralized Smooth Nonconvex Finite-Sum Optimization.
  Luo Luo, Yunyan Bai, Lesi Chen, Yuxing Liu, and Haishan Ye. [arXiv]