OPTIMIZATION METHODS & SOFTWARE

Self-consistent gradient flow for shape optimization
Kraft D
We present a model for image segmentation and describe a gradient-descent method for level-set-based shape optimization. It is well known that gradient-descent methods converge slowly due to zig-zag movement. This can also be observed for our problem, especially when sharp edges are present in the image. We interpret this behaviour in our specific context to gain a better understanding of the difficulties involved. One way to overcome slow convergence is the use of second-order methods. In our situation, however, they require derivatives of the potentially noisy image data and are thus undesirable. Hence, we propose a new method that can be interpreted as a self-consistent gradient flow and does not need any derivatives of the image data. It works very well in practice and leads to a far more efficient optimization algorithm. A related idea can also be used to describe the mean-curvature flow of a mean-convex surface. For this, we formulate a mean-curvature Eikonal equation, which allows the mean-curvature flow of a surface to be propagated numerically without explicit time stepping.
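To make the level-set setting concrete, the following minimal sketch performs plain gradient descent on a piecewise-constant (Chan-Vese-type) segmentation energy. It is an illustrative baseline only, not the self-consistent flow proposed in the paper; the function name, step size and smoothing width are ad hoc choices.

import numpy as np

def chan_vese_gradient_step(phi, image, dt=0.5, eps=1.0):
    """One explicit gradient-descent step on a piecewise-constant
    (Chan-Vese-type) segmentation energy for a level-set function phi.
    Plain descent like this is prone to the zig-zagging discussed above;
    the curvature term is omitted for brevity."""
    inside = phi > 0
    c1 = image[inside].mean() if inside.any() else 0.0       # mean intensity inside the contour
    c2 = image[~inside].mean() if (~inside).any() else 0.0   # mean intensity outside the contour
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))            # smeared delta, localizes the update near the zero level set
    force = -((image - c1) ** 2 - (image - c2) ** 2)         # descent direction of the region-fit terms
    return phi + dt * delta * force

# Synthetic usage: a bright square on a dark background.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
y, x = np.mgrid[:64, :64]
phi = 25.0 - np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2)      # signed-distance-like initialization
for _ in range(200):
    phi = chan_vese_gradient_step(phi, img)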
Enhanced bounding techniques to reduce the protein conformational search space
McAllister SR and Floudas CA
The complexity and enormous size of the conformational space that must be explored for the protein tertiary structure prediction problem have led to the development of a wide assortment of algorithmic approaches. In this study, we complement state-of-the-art tertiary structure prediction algorithms and focus instead on the development of bounding techniques to reduce the conformational search space. Dihedral angle bounds on the ϕ and ψ angles are established based on the predicted secondary structure and studies of the allowed regions of ϕ/ψ space. Distance bounds are developed based on predicted secondary structure information (including β-sheet topology predictions) to further reduce the search space. This bounding strategy is entirely independent of the degree of homology between the target protein and the database of proteins with experimentally determined structures. The proposed approach is applied to the structure prediction of protein G as an illustrative example, yielding a significantly larger number of near-native protein tertiary structure predictions.
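As a rough illustration of the dihedral-angle bounding idea, the sketch below maps a predicted secondary-structure string to per-residue ϕ/ψ intervals. The numeric ranges are coarse Ramachandran-style guesses chosen for illustration; they are not the bounds derived in the paper.

PHI_PSI_BOUNDS = {
    "H": ((-90.0, -40.0), (-70.0, -5.0)),     # helix: (phi range, psi range), rough values
    "E": ((-170.0, -80.0), (90.0, 180.0)),    # beta strand, rough values
    "C": ((-180.0, 180.0), (-180.0, 180.0)),  # coil/unknown: left unrestricted
}

def dihedral_bounds(secondary_structure):
    """Return per-residue (phi, psi) bounds from a prediction string such as 'CCHHHHEEC'."""
    return [PHI_PSI_BOUNDS.get(s, PHI_PSI_BOUNDS["C"]) for s in secondary_structure]

bounds = dihedral_bounds("CCHHHHHHCCEEEEECC")
print(bounds[3])   # bounds for a residue predicted to be helical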
Proximal extrapolated gradient methods for variational inequalities
Malitsky Y
This paper is concerned with novel first-order methods for monotone variational inequalities. The methods use a very simple linesearch procedure that takes into account local information about the operator. They do not require Lipschitz continuity of the operator, and the linesearch uses only operator values. Moreover, when the operator is affine, the linesearch becomes particularly simple, requiring only vector-vector operations. For all our methods we establish an ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
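For context, the sketch below implements the classical Korpelevich extragradient method with a Khobotov-style backtracking linesearch for a monotone variational inequality over a convex set. It is a generic baseline for comparison, not the proximal extrapolated gradient scheme of the paper; the function names and constants are illustrative.

import numpy as np

def extragradient_vi(F, project, x0, lam=1.0, beta=0.7, theta=0.9, tol=1e-8, max_iter=1000):
    """Korpelevich extragradient method with a Khobotov-type backtracking
    linesearch for the VI: find x* in C with <F(x*), x - x*> >= 0 for all x in C.
    Generic baseline for illustration only."""
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        step = lam
        while True:
            y = project(x - step * Fx)
            Fy = F(y)
            # Backtrack until the local Lipschitz-type condition holds.
            if step * np.linalg.norm(Fy - Fx) <= theta * np.linalg.norm(y - x):
                break
            step *= beta
        x_new = project(x - step * Fy)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage: affine monotone operator F(x) = Ax + b with A skew-symmetric,
# constrained to the nonnegative orthant; the solution is x* = (1, 1).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([-1.0, 1.0])
sol = extragradient_vi(lambda x: A @ x + b, lambda z: np.maximum(z, 0.0), np.array([2.0, 2.0]))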
Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces
Boţ RI, Csetnek ER and Meier D
Proximal splitting algorithms for monotone inclusions (and convex optimization problems) in Hilbert spaces share the common feature that, in general, they guarantee only weak convergence of the generated sequences to a solution. In order to achieve strong convergence, one usually needs to impose more restrictive properties on the involved operators, like strong monotonicity (respectively, strong convexity for optimization problems). In this paper, we propose a modified Krasnosel'skiĭ-Mann algorithm for determining a fixed point of a nonexpansive mapping and show strong convergence of the iteratively generated sequence to the minimal norm solution of the problem. Relying on this, we derive a forward-backward and a Douglas-Rachford algorithm, both endowed with Tikhonov regularization terms, whose iterates converge strongly to the minimal norm element of the set of zeros of the sum of two maximally monotone operators. Furthermore, we formulate strongly convergent primal-dual algorithms of forward-backward and Douglas-Rachford type for highly structured monotone inclusion problems involving parallel sums and compositions with linear operators. The resulting iterative schemes are specialized to the solving of convex minimization problems. The theoretical results are illustrated by numerical experiments on the split feasibility problem in infinite-dimensional spaces.
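The sketch below illustrates the Tikhonov mechanism on a toy problem: a forward-backward step combined with a vanishing regularization sequence drives the iterates of an underdetermined least-squares problem toward its minimal norm solution. The concrete scheme, step size, and sequence beta_k = 1/(k+2) are illustrative choices and differ in detail from the algorithms analysed in the paper.

import numpy as np

def tikhonov_forward_backward(grad_f, prox_g, x0, step, n_iter=5000):
    """Forward-backward splitting with a vanishing Tikhonov term: beta_k -> 0
    and sum beta_k = +infinity. The shrinkage (1 - beta_k) steers the iterates
    toward the minimal norm solution; illustrative sketch only."""
    x = x0.copy()
    for k in range(n_iter):
        beta = 1.0 / (k + 2)
        z = (1.0 - beta) * x                      # Tikhonov shrinkage
        x = prox_g(step, z - step * grad_f(z))    # standard forward-backward step
    return x

# Toy problem: min 0.5*||Ax - b||^2 with an underdetermined A (and g = 0), which has
# infinitely many minimizers; the minimal norm one is pinv(A) @ b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = tikhonov_forward_backward(lambda z: A.T @ (A @ z - b),
                              lambda s, z: z,     # prox of g = 0 is the identity
                              np.ones(6), step)
print(np.linalg.norm(x - np.linalg.pinv(A) @ b))  # should be small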
Tensor methods for finding approximate stationary points of convex functions
Grapiglia GN and Nesterov Y
In this paper, we consider the problem of finding ε-approximate stationary points of convex functions that are p-times differentiable with ν-Hölder continuous pth derivatives. We present tensor methods with and without acceleration. Specifically, we show that the non-accelerated schemes take at most O(ε^(-1/(p+ν-1))) iterations to reduce the norm of the gradient of the objective below a given ε ∈ (0, 1). For accelerated tensor schemes, we establish improved complexity bounds of O(ε^(-(p+ν)/[(p+ν-1)(p+ν+1)])) and O(|log ε| ε^(-1/(p+ν))), when the Hölder parameter ν ∈ [0, 1] is known. For the case in which ν is unknown, we also obtain a complexity bound for a universal accelerated scheme. Finally, we obtain a lower complexity bound of Ω(ε^(-2/[3(p+ν)-2])) for finding ε-approximate stationary points using pth-order tensor methods.
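For a concrete instance, the sketch below runs the p = 2 member of this family, cubic-regularized Newton steps, until the gradient norm falls below ε on a small logistic-type convex function. The regularization constant M and the use of a generic BFGS call for the model subproblem are ad hoc choices for illustration; this does not reproduce the accelerated or universal schemes of the paper.

import numpy as np
from scipy.optimize import minimize

def cubic_newton_until_small_gradient(f, grad, hess, x0, M=10.0, eps=1e-6, max_iter=100):
    """Non-accelerated tensor method for p = 2: repeatedly minimize the
    cubic-regularized second-order model until ||grad f(x)|| <= eps."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        H = hess(x)
        model = lambda y: (f(x) + g @ (y - x) + 0.5 * (y - x) @ H @ (y - x)
                           + (M / 6.0) * np.linalg.norm(y - x) ** 3)
        model_grad = lambda y: g + H @ (y - x) + 0.5 * M * np.linalg.norm(y - x) * (y - x)
        # Solve the tensor subproblem approximately with a generic solver.
        x = minimize(model, x, jac=model_grad, method="BFGS", options={"gtol": 1e-9}).x
    return x

# Convex test function with bounded third derivatives: a logistic-type loss.
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.5, -2.0]])
def f(x):
    return np.sum(np.log1p(np.exp(A @ x)))
def grad(x):
    return A.T @ (1.0 / (1.0 + np.exp(-A @ x)))
def hess(x):
    s = 1.0 / (1.0 + np.exp(-A @ x))
    return A.T @ np.diag(s * (1.0 - s)) @ A
x_out = cubic_newton_until_small_gradient(f, grad, hess, np.array([3.0, -3.0]))
print(np.linalg.norm(grad(x_out)))  # approximately <= eps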
Exact gradient methods with memory
Florea MI
The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piecewise-linear lower model on the smooth part of the objective. However, the auxiliary problem at every iteration can only be solved within a fixed tolerance. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of O(1/k), the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM) with a worst-case rate of O(1/k²). In our preliminary computational experiments, EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently outperforms the Fast Gradient Method.
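To illustrate the "memory" idea, the sketch below keeps a small bundle of past gradients and, at every step, minimizes the piecewise-linear lower model plus a proximity term, solving the small auxiliary problem in epigraph form with SLSQP. With a bundle of size one it reduces to the plain gradient step. The bundle size, solver, and parameters are illustrative; this is a generic sketch, not the exact EGMM or AGMM schemes.

import numpy as np
from scipy.optimize import minimize

def gradient_method_with_memory(f, grad, x0, L, mem=5, n_iter=50):
    """Generic gradient method with memory: the next iterate minimizes the
    piecewise-linear lower model built from the last `mem` gradients plus the
    proximity term (L/2)||y - x_k||^2, solved here in epigraph form."""
    x = x0.copy()
    bundle = []                                   # list of (x_i, f(x_i), grad f(x_i))
    for _ in range(n_iter):
        bundle = (bundle + [(x.copy(), f(x), grad(x))])[-mem:]
        n = x.size
        # Variables z = (y, t); minimize t + (L/2)||y - x||^2 ...
        obj = lambda z: z[n] + 0.5 * L * np.sum((z[:n] - x) ** 2)
        # ... subject to t >= f_i + <g_i, y - x_i> for every bundle element.
        cons = [{"type": "ineq",
                 "fun": (lambda z, xi=xi, fi=fi, gi=gi: z[n] - fi - gi @ (z[:n] - xi))}
                for xi, fi, gi in bundle]
        z0 = np.concatenate([x, [f(x)]])
        x = minimize(obj, z0, method="SLSQP", constraints=cons).x[:n]
    return x

# Usage on a smooth convex quadratic; L is a Lipschitz estimate for the gradient.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c
x_star = gradient_method_with_memory(f, grad, np.array([5.0, 5.0]), L=np.linalg.norm(Q, 2))
print(np.linalg.norm(grad(x_star)))  # should be close to zero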