This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access and free of charge: neither authors nor readers pay to access the published papers, in line with the principles of Fair Open Access. The journal supports open data and open code whenever possible, and authors are strongly encouraged to submit code and data sets along with their manuscripts.



Awards

The 2021 Beale–Orchard-Hays Prize of the Mathematical Optimization Society (MOS) was awarded to a paper published in OJMO:

Giacomo Nannicini. On the implementation of a global optimization method for mixed-variable problems. Open Journal of Mathematical Optimization, Volume 2 (2021), article no. 1, 25 p. doi:10.5802/ojmo.3


e-ISSN: 2777-5860

New articles

Characterizations of Stability of Error Bounds for Convex Inequality Constraint Systems

In this paper, we study error bounds for a single convex inequality and for semi-infinite convex constraint systems, and we give characterizations of the stability of these error bounds via directional derivatives. For a single convex inequality, the stability of local error bounds under small perturbations is proved to be essentially equivalent to the minimum of the directional derivative at a reference point over the unit sphere being non-zero, while the stability of global error bounds is proved to be equivalent to a strictly positive infimum of the directional derivatives over the unit sphere, taken at all points on the boundary of the solution set, together with a mild constraint qualification. Applying these results to semi-infinite convex constraint systems yields characterizations of the stability of local and global error bounds under small perturbations. In particular, such stability is proved to require only that all component functions in the system have the same linear perturbation. Our work demonstrates that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex minimization problems (defined by directional derivatives) over the unit sphere.
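To fix ideas, the objects in this abstract can be rendered schematically as follows; the notation ($f$, $S$, $\bar{x}$, $\tau$, $\delta$) is an illustrative assumption, not taken from the paper. For a convex function $f$ with solution set $S = \{x : f(x) \le 0\}$, a local error bound at a boundary point $\bar{x}$ of $S$ asks for constants $\tau, \delta > 0$ such that

$$\mathrm{d}(x, S) \le \tau \, [f(x)]_+ \qquad \text{for all } x \in B(\bar{x}, \delta),$$

and, as the abstract states, this bound is stable under small perturbations essentially when the directional derivative $f'(\bar{x}; \cdot)$ satisfies

$$\min_{\|h\| = 1} f'(\bar{x}; h) \ne 0.$$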

Available online: PDF

A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives

In this short note, we provide a simple version of an accelerated forward-backward method (a.k.a. Nesterov’s accelerated proximal gradient method) that may rely on approximate proximal operators and can exploit strong convexity of the objective function. The method supports both relative and absolute errors, and its behavior is illustrated on a set of standard numerical experiments.

Using the same developments, we further provide a version of the accelerated proximal hybrid extragradient method of [21] possibly exploiting strong convexity of the objective function.
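For readers unfamiliar with the baseline scheme, the following is a minimal sketch of a textbook accelerated forward-backward (FISTA-type) iteration with exact proximal steps, applied to an l1-regularized least-squares problem. The problem choice and all names are illustrative assumptions; the note's actual method additionally handles approximate proximal operators with absolute and relative error criteria, as well as strong convexity, which this sketch does not.

import numpy as np

def soft_threshold(v, t):
    # Exact proximal operator of t * ||.||_1; the note's method would
    # tolerate an approximate evaluation of this step.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_forward_backward(A, b, lam, n_iter=500):
    # Textbook FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
    # (illustrative problem; not the note's inexact / strongly convex variant).
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                          # forward (gradient) step
        x_next = soft_threshold(y - grad / L, lam / L)    # backward (proximal) step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # Nesterov extrapolation
        x, t = x_next, t_next
    return x

# Usage sketch on random data:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
b = rng.standard_normal(50)
x_hat = accelerated_forward_backward(A, b, lam=0.1)

Replacing soft_threshold with an inexact solver of the proximal subproblem is where the note's absolute and relative error criteria would enter.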

Available online: PDF