Proximal-Proximal-Gradient Method

Ernest Ryu, UCLA

Abstract: In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method and ADMM and applies to minimization problems expressed as a sum of many convex functions, both differentiable and possibly non-differentiable. We furthermore present a related variation, which we call stochastic PPG (S-PPG), which can be interpreted as a generalization of Finito and MISO. We present many applications that can benefit from PPG and S-PPG and prove convergence for both methods.