OPTIMIZATION

The checkpoint ordering problem
Hungerländer P
We suggest a new variant of a row layout problem: find an ordering of departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice: while the running time of the dynamic programming algorithm depends strongly on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality.
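To make the single-checkpoint problem concrete, here is a brute-force sketch on a toy instance. The placement convention (departments packed left-to-right from the origin, distances measured from each department's centre to the checkpoint) is an assumption for illustration, not necessarily the paper's exact model, and enumeration is of course only viable for tiny instances:

```python
from itertools import permutations

def cop_bruteforce(lengths, weights, checkpoint):
    """Brute-force the single-checkpoint ordering problem.

    Departments are packed left-to-right from position 0; the cost of
    an ordering is the weighted sum of distances from each department's
    centre to the checkpoint position.
    """
    n = len(lengths)
    best_cost, best_order = float("inf"), None
    for order in permutations(range(n)):
        pos, cost = 0.0, 0.0
        for i in order:
            centre = pos + lengths[i] / 2.0
            cost += weights[i] * abs(centre - checkpoint)
            pos += lengths[i]
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost

# Toy instance: three departments, checkpoint at position 2.
order, cost = cop_bruteforce([2.0, 1.0, 3.0], [1.0, 5.0, 1.0], 2.0)
```

The heavily weighted middle department is pulled next to the checkpoint, as one would expect.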
Proximal-gradient algorithms for fractional programming
Boţ RI and Csetnek ER
In this paper, we propose two proximal-gradient algorithms for fractional programming problems in real Hilbert spaces, where the numerator is a proper, convex and lower semicontinuous function and the denominator is a smooth function, either concave or convex. In the iterative schemes, we perform a proximal step with respect to the nonsmooth numerator and a gradient step with respect to the smooth denominator. In the case of a concave denominator, the algorithm has the particularity that it generates sequences which approach both the (global) optimal solutions set and the optimal objective value of the underlying fractional programming problem. In the case of a convex denominator, the numerical scheme approaches the set of critical points of the objective function, provided the latter satisfies the Kurdyka-Łojasiewicz property.
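The two building blocks — a proximal step on the nonsmooth numerator and a gradient step on the smooth denominator — can be sketched on a scalar toy problem. The update rule below (a prox-gradient step on a Dinkelbach-style subproblem, weighted by the current ratio) is an illustrative combination of these primitives, not the paper's exact scheme:

```python
def prox_abs(z, step):
    """Proximal map of step * |.| (soft-thresholding)."""
    return max(abs(z) - step, 0.0) * (1.0 if z >= 0 else -1.0)

def ratio_prox_grad(x, iters=50, gamma=0.5):
    """Illustrative loop for minimizing |x| / (x + 2) on x > -2.

    Each iteration takes a gradient step on the smooth denominator,
    weighted by the current objective ratio, followed by a proximal
    step on the nonsmooth numerator.  The minimizer here is x = 0.
    """
    for _ in range(iters):
        q = abs(x) / (x + 2.0)                    # current ratio f(x)/g(x)
        x = prox_abs(x + gamma * q * 1.0, gamma)  # g'(x) = 1
    return x

x_star = ratio_prox_grad(1.0)
```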
A second-order dynamical system with Hessian-driven damping and penalty term associated to variational inequalities
Boţ RI and Csetnek ER
We consider the minimization of a convex objective function subject to the set of minima of another convex function, under the assumption that both functions are twice continuously differentiable. We approach this optimization problem from a continuous perspective by means of a second-order dynamical system with Hessian-driven damping and a penalty term corresponding to the constraint function. By constructing appropriate energy functionals, we prove weak convergence of the trajectories generated by this differential equation to a minimizer of the optimization problem, as well as convergence of the objective function values along the trajectories. The performed investigations rely on Lyapunov analysis in combination with the continuous version of the Opial Lemma. When the objective function is strongly convex, we can even show strong convergence of the trajectories.
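A crude discretization illustrates the qualitative behaviour of such a flow. The toy dynamics below — second-order in time, with viscous damping, a Hessian-driven damping term, and a growing penalty on the constraint function — are a sketch; the parameter choices and the penalty schedule b(t) = t are illustrative assumptions, not taken from the paper:

```python
def hessian_damped_penalty_flow(x0=1.0, T=50.0, dt=1e-3,
                                gamma=3.0, beta=0.5):
    """Semi-implicit Euler integration of the toy flow
        x'' + gamma x' + beta f''(x) x' + f'(x) + b(t) g'(x) = 0
    for: minimize f(x) = (x - 1)^2 over argmin g, with g(x) = x^2
    (feasible set {0}) and penalty b(t) = t.  The bilevel solution
    is x = 0.
    """
    x, v, t = x0, 0.0, 0.0
    while t < T:
        fp, fpp = 2.0 * (x - 1.0), 2.0   # f', f''
        gp = 2.0 * x                     # g'
        a = -(gamma + beta * fpp) * v - fp - t * gp
        v += dt * a                      # update velocity first
        x += dt * v                      # then position (semi-implicit)
        t += dt
    return x

x_T = hessian_damped_penalty_flow()
```

As the penalty grows, the trajectory is driven from the unconstrained minimizer of f towards the feasible set {0}.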
New verifiable stationarity concepts for a class of mathematical programs with disjunctive constraints
Benko M and Gfrerer H
In this paper, we consider a sufficiently broad class of non-linear mathematical programs with disjunctive constraints, which includes, e.g., mathematical programs with complementarity/vanishing constraints. We present an extension of the concept of [Formula: see text]-stationarity which can be easily combined with the well-known notion of M-stationarity to obtain the stronger property of so-called [Formula: see text]-stationarity. We show how the property of [Formula: see text]-stationarity (and thus also of M-stationarity) can be efficiently verified for the considered problem class by computing [Formula: see text]-stationary solutions of a certain quadratic program. We further consider the situation where the point to be tested for [Formula: see text]-stationarity is not known exactly but is approximated by a convergent sequence, as is usually the case when applying a numerical method.
Inertial forward-backward methods for solving vector optimization problems
Boţ RI and Grad SM
We propose two forward-backward proximal point type algorithms with inertial/memory effects for determining weakly efficient solutions to a vector optimization problem that consists in vector-minimizing, with respect to a given closed convex pointed cone, the sum of a proper cone-convex vector function and a cone-convex differentiable one, both mapping from a Hilbert space to a Banach space. Inexact versions of the algorithms, better suited for implementation, are provided as well; as a byproduct, one can also derive a forward-backward method for solving the mentioned problem. Numerical experiments with the proposed methods are carried out in the context of solving a portfolio optimization problem.
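In the scalar (single-objective) special case, the inertial forward-backward template reduces to a proximal-gradient step preceded by a momentum extrapolation. A minimal sketch on a toy composite problem (step sizes and the inertia parameter are illustrative choices, not the paper's):

```python
def prox_l1(z, step):
    """Proximal map of step * |.| (soft-thresholding)."""
    return max(abs(z) - step, 0.0) * (1.0 if z >= 0 else -1.0)

def inertial_forward_backward(iters=200, alpha=0.3, gamma=0.25):
    """Minimize |x| + (x - 2)^2 by an inertial forward-backward scheme:
    extrapolate with momentum, take a gradient (forward) step on the
    smooth part, then a proximal (backward) step on the nonsmooth part.
    The minimizer is x = 1.5, where 1 + 2(x - 2) = 0.
    """
    x_prev, x = 0.0, 0.0
    for _ in range(iters):
        y = x + alpha * (x - x_prev)           # inertial extrapolation
        x_prev, x = x, prox_l1(y - gamma * 2.0 * (y - 2.0), gamma)
    return x

x_star = inertial_forward_backward()
```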
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Boţ RI and Böhm A
We investigate the convergence properties of incremental mirror descent type subgradient algorithms for minimizing the sum of convex functions. In each step, we evaluate the subgradient of only a single component function and project the resulting iterate back onto the feasible domain, which makes the iterations very cheap to compute. The analysis is made for a randomized selection of the component functions, which yields the deterministic algorithm as a special case. Under supplementary differentiability assumptions on the function which induces the mirror map, we are also able to deal with the presence of another term in the objective function, which is evaluated via a proximal type step. In both cases, we derive convergence rates in expectation for the k-th best objective function value and illustrate our theoretical findings by numerical experiments in positron emission tomography and machine learning.
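With the Euclidean mirror map, the scheme specializes to an incremental projected subgradient method. The sketch below minimizes a sum of absolute-value terms over an interval, picking one component at random per step (random sweeping); the diminishing step size and tail averaging are illustrative implementation choices:

```python
import random

def incremental_projected_subgradient(anchors, lo=0.0, hi=1.0,
                                      iters=20000, seed=0):
    """Minimize sum_i |x - a_i| over [lo, hi]: sample one component per
    step, take a subgradient step for it, project back onto [lo, hi],
    and average the tail iterates.  The minimizer over the reals is
    the median of the anchors.
    """
    rng = random.Random(seed)
    x, tail_sum, count = 0.0, 0.0, 0
    for k in range(1, iters + 1):
        a = rng.choice(anchors)                         # random component
        g = 1.0 if x > a else (-1.0 if x < a else 0.0)  # subgradient of |x - a|
        x = min(hi, max(lo, x - 0.5 / k ** 0.5 * g))    # step + projection
        if k > iters // 2:                              # tail averaging
            tail_sum += x
            count += 1
    return tail_sum / count

x_hat = incremental_projected_subgradient([0.2, 0.6, 0.9])
```

Each iteration touches a single component function, which is what makes the method attractive when the sum has many terms.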
On uniform regularity and strong regularity
Cibulka R, Preininger J and Roubal T
We investigate uniform versions of (metric) regularity and strong (metric) regularity on compact subsets of Banach spaces, in particular, along continuous paths. These two properties turn out to play a key role in analyzing path-following schemes for tracking a solution trajectory of a parametric generalized equation or, more generally, of a differential generalized equation (DGE). The latter model allows us to describe in a unified way several problems in control and optimization, such as differential variational inequalities and control systems with state constraints. We study two inexact path-following methods for DGEs and quantify, for each, the order of the error in terms of the grid size. We provide numerical experiments comparing the derived schemes on simple problems arising in physics. Finally, we study metric regularity of mappings associated with a particular case of the DGE arising in control theory. We establish the relationship between the pointwise version of this property and its counterpart in function spaces.
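The flavour of an inexact path-following scheme can be shown on a scalar parametric equation: march along a grid in the parameter and perform only a single Newton correction per grid point, warm-started at the previous approximate solution. The specific equation and grid are toy assumptions for illustration:

```python
def follow_path(t_end=2.0, h=0.01):
    """Inexact path-following for the parametric equation
        F(x, t) = x**3 + x - t = 0,
    performing a single Newton step per grid point, warm-started at
    the previous approximate solution.  Strong regularity of F in x
    (here F_x = 3x^2 + 1 >= 1) is what keeps the tracking error small.
    """
    x, t = 0.0, 0.0                      # x = 0 solves F(x, 0) = 0
    while t < t_end - 1e-12:
        t += h
        F = x ** 3 + x - t
        dF = 3.0 * x ** 2 + 1.0
        x -= F / dF                      # one (inexact) Newton step
    return x

x_end = follow_path()   # exact solution of x^3 + x = 2 is x = 1
```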
A forward-backward penalty scheme with inertial effects for monotone inclusions. Applications to convex bilevel programming
Boţ RI and Nguyen DK
We investigate a forward-backward splitting algorithm of penalty type with inertial effects for finding the zeros of the sum of a maximally monotone operator, a cocoercive operator and the convex normal cone to the set of zeros of another cocoercive operator. Weak ergodic convergence is obtained for the iterates, provided that a condition expressed via the Fitzpatrick function of the operator describing the underlying set of the normal cone is verified. Under strong monotonicity assumptions, strong convergence of the sequence of generated iterates is proved. As a particular instance we consider a convex bilevel minimization problem with the sum of a non-smooth and a smooth function in the upper level and another smooth function in the lower level. We show that in this context weak non-ergodic and strong convergence can also be achieved under inf-compactness assumptions on the involved functions.
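The penalty mechanism can be sketched on a smooth scalar bilevel toy problem, where the backward (resolvent) step is trivial and only the inertial extrapolation and the penalized forward step remain. The growing penalty and vanishing step size below are illustrative schedule choices, not the conditions from the paper:

```python
def inertial_forward_backward_penalty(iters=2000, alpha=0.2):
    """Toy bilevel problem: minimize f(x) = (x - 1)^2 over argmin g,
    where g(x) = x^2, so the unique bilevel solution is x = 0.
    Each step extrapolates with inertia and takes a gradient step on
    f + beta_k * g with a growing penalty beta_k and a vanishing
    step size.
    """
    x_prev, x = 1.0, 1.0
    for k in range(1, iters + 1):
        beta = float(k)                  # growing penalty parameter
        gamma = 0.1 / (1.0 + beta)       # vanishing step size
        y = x + alpha * (x - x_prev)     # inertial extrapolation
        grad = 2.0 * (y - 1.0) + beta * 2.0 * y
        x_prev, x = x, y - gamma * grad
    return x

x_out = inertial_forward_backward_penalty()
```

As the penalty parameter grows, the iterates are driven from the unconstrained minimizer of f towards the lower-level solution set {0}.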
New duality results for evenly convex optimization problems
Fajardo MD, Grad SM and Vidal J
We present new results on optimization problems where the involved functions are evenly convex. By means of a generalized conjugation scheme and the perturbation theory introduced by Rockafellar, we propose an alternative dual problem for a general optimization problem defined on a separated locally convex topological space. Sufficient conditions for converse and total duality involving the even convexity of the perturbation function and the associated generalized subdifferentials are given. Formulae for the generalized subdifferential and the biconjugate of the objective function of a general optimization problem are provided, too. We also characterize total duality by means of the saddle-point theory for a notion of Lagrangian adapted to the considered framework.