This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles on every aspect of mathematical optimization, from numerical and computational methods to theoretical questions. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge: neither authors nor readers pay to publish or to access the published papers, in line with the principles of Fair Open Access. The journal supports open data and open code whenever possible, and authors are strongly encouraged to submit code and data sets along with their manuscripts.


Indexing


Awards

The 2021 Beale–Orchard-Hays Prize of the Mathematical Optimization Society (MOS) has been awarded to a paper published in OJMO:

Giacomo Nannicini. On the implementation of a global optimization method for mixed-variable problems. Open Journal of Mathematical Optimization, Volume 2 (2021), article no. 1, 25 p. doi: 10.5802/ojmo.3


e-ISSN: 2777-5860

New articles

Screening for a Reweighted Penalized Conditional Gradient Method

The conditional gradient method (CGM) is widely used in large-scale sparse convex optimization, having a low per-iteration computational cost for structured sparse regularizers and a greedy approach for collecting nonzeros. We explore the sparsity-acquiring properties of a general penalized CGM (P-CGM) for convex regularizers and a reweighted penalized CGM (RP-CGM) for nonconvex regularizers, replacing the usual convex constraints with gauge-inspired penalties. This generalization does not noticeably increase the per-iteration complexity. Without assuming bounded iterates or using line search, we show O(1/t) convergence of the gap of each subproblem, which measures distance to a stationary point. We couple this with a screening rule which is safe in the convex case, converging to the true support at a rate O(1/δ₀²), where δ₀ measures how close the problem is to degeneracy. In the nonconvex case the screening rule converges to the true support in a finite number of iterations, but is not necessarily safe in the intermediate iterates. In our experiments, we verify the consistency of the method and adjust the aggressiveness of the screening rule by tuning the concavity of the regularizer.

Available online: PDF
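As background for this abstract, the classic conditional gradient (Frank–Wolfe) method that P-CGM generalizes can be sketched in a few lines. The quadratic objective, the ℓ1-ball constraint, and all parameter choices below are illustrative assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def frank_wolfe_l1(A, b, radius=1.0, iters=200):
    """Classic conditional gradient (Frank-Wolfe) sketch for
    min ||Ax - b||^2 over the l1 ball {x : ||x||_1 <= radius}.

    Illustrative only: the paper's P-CGM replaces this hard
    constraint with a gauge-inspired penalty."""
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        # Linear minimization oracle over the l1 ball: the
        # minimizer is a single signed vertex, which is why
        # CGM collects nonzeros greedily, one per iteration.
        i = np.argmax(np.abs(grad))
        s = np.zeros(n)
        s[i] = -radius * np.sign(grad[i])
        # Standard open-loop step size (no line search).
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * s
    return x
```

Each iteration calls a linear minimization oracle instead of a projection, so the iterate gains at most one new nonzero per step; this is the sparsity-collecting behavior the abstract refers to.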

Characterizations of Stability of Error Bounds for Convex Inequality Constraint Systems

In this paper, we study error bounds for a single convex inequality and for semi-infinite convex constraint systems, and give characterizations of the stability of error bounds via directional derivatives. For a single convex inequality, the stability of local error bounds under small perturbations is proved to be essentially equivalent to the minimum of the directional derivative at a reference point over the unit sphere being non-zero, while the stability of global error bounds is proved to be equivalent to the infimum of the directional derivatives, taken at all points on the boundary of the solution set and over the unit sphere, being strictly positive, together with a mild constraint qualification. Applying these results to semi-infinite convex constraint systems yields characterizations of the stability of local and global error bounds under small perturbations. In particular, such stability is proved to require only that all component functions in the system undergo the same linear perturbation. Our work demonstrates that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex minimization problems (defined by directional derivatives) over the unit sphere.

Available online: PDF
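For context, the local error bound property this abstract studies is conventionally stated as follows; the constants τ and δ and the notation [·]₊ are the standard ones for this concept, not taken from the paper:

```latex
% Solution set of a convex inequality: S = \{ x : f(x) \le 0 \}.
% f has a local error bound at a boundary point x_0 of S if there
% exist \tau > 0 and \delta > 0 such that
\operatorname{dist}(x, S) \le \tau \, [f(x)]_{+}
\quad \text{for all } x \in B(x_0, \delta),
% where [f(x)]_{+} = \max\{f(x), 0\} is the constraint violation.
% The global version requires the same inequality for all x.
```

Stability then asks whether such an inequality survives small perturbations of f, which the paper characterizes through directional derivatives over the unit sphere.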