Mathematical Programming Computation

On algorithms for projection onto the top-k-sum sublevel set
Roth J and Cui Y
The top-k-sum operator computes the sum of the k largest components of a given vector. The Euclidean projection onto the top-k-sum sublevel set serves as a crucial subroutine in iterative methods for solving composite superquantile optimization problems. In this paper, we introduce a solver that implements two finite-termination algorithms to compute this projection. Both algorithms have low floating point operation complexity when applied to a sorted n-dimensional input vector. This stands in contrast to an existing grid-search-inspired method of higher complexity, a partition-based method whose complexity depends on the number of distinct elements in the input vector, and a semismooth Newton method with a finite-termination property but unspecified floating point complexity. The improvement of our methods over the first method is significant when k depends linearly on n, a regime frequently encountered in practical superquantile optimization applications. When the input vector is unsorted, an additional cost is incurred to (partially) sort it, whereas a full sort of the input vector seems unavoidable for the other two methods. To reduce this cost, we further derive a rigorous procedure that leverages approximate sorting to compute the projection, which is particularly useful when solving a sequence of similar projection problems. Numerical results show that our methods solve large-scale instances within 0.05 s, whereas the most competitive alternative, the semismooth Newton-based method, takes about 1 s. The existing grid-search method and Gurobi's QP solver can take from minutes to hours.
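As a point of reference for the operator itself (not for the projection algorithms proposed in the paper), a minimal Python/NumPy sketch, with all names ours, evaluates the top-k-sum and tests membership in its sublevel set using a partial selection rather than a full sort:

import numpy as np

def top_k_sum(x, k):
    # Sum of the k largest components of x. np.partition places the k
    # largest entries in the last k positions in expected linear time,
    # so a full sort of x is not required to evaluate the operator.
    return float(np.partition(x, len(x) - k)[-k:].sum())

def in_top_k_sum_sublevel_set(x, k, r, tol=1e-12):
    # Membership test for the sublevel set {x : top_k_sum(x, k) <= r}.
    return top_k_sum(x, k) <= r + tol

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
print(top_k_sum(x, 100), in_top_k_sum_sublevel_set(x, 100, r=500.0))

Evaluating the operator thus requires only a partial selection of the k largest entries, which is why the abstract distinguishes the sorted case from the extra (partial) sorting cost incurred for unsorted inputs.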
Generalized Alternating Direction Method of Multipliers: New Theoretical Insights and Applications
Fang EX, He B, Liu H and Yuan X
Recently, the alternating direction method of multipliers (ADMM) has received intensive attention from a broad spectrum of areas. The generalized ADMM (GADMM) proposed by Eckstein and Bertsekas is an efficient and simple acceleration scheme for ADMM. In this paper, we take a deeper look at the linearized version of GADMM, in which one of its subproblems is approximated by a linearization strategy. This linearized version is particularly efficient for a number of applications arising from different areas. Theoretically, we show the worst-case 𝒪(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and nonergodic senses for the linearized version of GADMM. Numerically, we demonstrate the efficiency of this linearized version of GADMM on several recent and important applications in statistical learning. Code packages in Matlab for these applications are also developed.
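For orientation, one common way to state the Eckstein–Bertsekas generalized (relaxed) ADMM for min_{x,y} f(x) + g(y) subject to Ax + By = b is sketched below in LaTeX; the penalty β > 0, relaxation factor α ∈ (0, 2), and scaled multiplier u are our notation and may differ from the paper's.

\begin{align*}
x^{k+1} &\in \operatorname*{arg\,min}_x \; f(x) + \tfrac{\beta}{2}\,\lVert Ax + By^{k} - b + u^{k}\rVert^2, \\
r^{k+1} &:= \alpha A x^{k+1} - (1-\alpha)\,(B y^{k} - b), \\
y^{k+1} &\in \operatorname*{arg\,min}_y \; g(y) + \tfrac{\beta}{2}\,\lVert r^{k+1} + By - b + u^{k}\rVert^2, \\
u^{k+1} &:= u^{k} + r^{k+1} + B y^{k+1} - b.
\end{align*}

Setting α = 1 recovers the standard ADMM. The linearized variant studied in the paper approximates one subproblem by a linearization strategy, for instance by adding a proximal term of the form ½‖y − y^k‖_D² with a suitable positive semidefinite D, so that the subproblem becomes cheap to solve.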