Distributed nonsmooth composite optimization via the proximal augmented Lagrangian

Neil Dhingra, University of Minnesota


We study a class of optimization problems in which the objective function is given by the sum of a differentiable, but possibly nonconvex, component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian, a continuously differentiable function obtained by constraining the augmented Lagrangian to the manifold that corresponds to explicit minimization over the variable in the nonsmooth term. This function is used to develop customized algorithms based on the method of multipliers (MM) and on a primal-descent dual-ascent gradient method to compute optimal primal-dual pairs. Our customized MM algorithm is applicable to a broader class of problems than proximal gradient methods, and it offers stronger convergence guarantees and more refined step-size update rules than the alternating direction method of multipliers. These features make it an attractive option for solving structured optimal control problems. When the differentiable component of the objective function is (strongly) convex and the regularization term is convex, we prove (exponential) asymptotic convergence of the primal-descent dual-ascent algorithm, which is well-suited for distributed implementation. We study edge addition in directed consensus networks and optimal placement problems to demonstrate the merits and the effectiveness of our approach.
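To make the construction concrete, the following is a minimal numerical sketch, not the paper's implementation. It assumes a simple instance with f(x) = ½‖x − b‖² and g = γ‖·‖₁, for which the proximal operator is elementwise soft-thresholding; the function names, step size `alpha`, and penalty parameter `mu` are illustrative choices. The Moreau envelope M_{μg} is evaluated via the prox, and the primal-descent dual-ascent iteration uses ∇M_{μg}(v) = (v − prox_{μg}(v))/μ:

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: elementwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def moreau_envelope_l1(v, gamma, mu):
    # Moreau envelope M_{mu g}(v) for g = gamma*||.||_1:
    # g(prox(v)) + (1/(2*mu))*||v - prox(v)||^2
    p = soft_threshold(v, mu * gamma)
    return gamma * np.sum(np.abs(p)) + np.sum((v - p) ** 2) / (2.0 * mu)

def primal_dual_flow(grad_f, b, gamma, mu=1.0, alpha=0.1, iters=20000):
    # Discretized primal-descent dual-ascent on
    #   L_mu(x; y) = f(x) + M_{mu g}(x + mu*y) - (mu/2)*||y||^2,
    # the continuously differentiable proximal augmented Lagrangian
    # obtained after eliminating the auxiliary variable.
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    for _ in range(iters):
        v = x + mu * y
        p = soft_threshold(v, mu * gamma)
        grad_M = (v - p) / mu              # gradient of the Moreau envelope
        x = x - alpha * (grad_f(x) + grad_M)  # primal descent
        y = y + alpha * (x - p)               # dual ascent (grad_y L_mu)
    return x
```

At a saddle point the dual update vanishes, so x = prox_{μg}(x + μy); with the strongly convex quadratic f above, the iterates approach the soft-thresholded solution soft(b, γ) of the corresponding lasso-type problem, consistent with the exponential convergence result stated in the abstract.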