This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to the theoretical questions underlying mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access, free of charge: neither the authors nor the readers pay to access the content of published papers, in line with the principles of Fair Open Access. The journal requires the numerical results published in its papers to be reproducible by others, ideally by publishing code and data sets along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, with a fast review process.
  • Significant extensions of conference proceedings.

e-ISSN: 2777-5860

New articles

The continuous quadrant penalty formulation of logical constraints

Can continuous optimization efficiently address logical constraints? We propose a continuous-optimization alternative to the usual discrete-optimization (big-M and complementarity) formulations of logical constraints, one that can lead to effective practical methods. Based on the simple idea of guiding the search of a continuous-optimization descent method towards the parts of the domain where the logical constraint is satisfied, we introduce a smooth penalty-function formulation of logical constraints, together with related theoretical results. This formulation allows a direct use of state-of-the-art continuous optimization solvers. The effectiveness of the continuous quadrant penalty formulation is demonstrated on an aircraft conflict avoidance application.
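The general idea of penalizing the region where a logical constraint is violated can be illustrated with a toy sketch (this is an illustrative construction, not the paper's exact quadrant penalty): for a disjunction "x ≤ 0 OR y ≤ 0", the product of squared hinges vanishes exactly where the disjunction holds and is smooth, so an off-the-shelf continuous solver can be applied directly.

```python
import numpy as np
from scipy.optimize import minimize

RHO = 1e4  # penalty weight (illustrative choice)

def penalized(z):
    """Objective (x-1)^2 + (y-1)^2 plus a smooth penalty that is zero
    whenever the logical constraint 'x <= 0 OR y <= 0' is satisfied."""
    x, y = z
    f = (x - 1.0) ** 2 + (y - 1.0) ** 2
    # product of positive parts: zero iff x <= 0 or y <= 0, C^1 elsewhere
    p = (max(0.0, x) * max(0.0, y)) ** 2
    return f + RHO * p

# The unconstrained minimizer (1, 1) violates the disjunction; the penalty
# steers a standard descent method to a point where one coordinate is ~0.
res = minimize(penalized, x0=np.array([1.0, 0.5]), method="BFGS")
print(res.x)
```

Here `RHO` and the quadratic hinge product are assumptions made for the sketch; the paper develops its own smooth penalty with accompanying theory.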


Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient

We study the linear convergence of the primal-dual hybrid gradient method. After a review of current analyses, we show that they do not properly explain the behavior of the algorithm, even on the simplest problems. We thus introduce the quadratic error bound of the smoothed gap, a new regularity assumption that holds for a wide class of optimization problems. Equipped with this tool, we manage to prove tighter convergence rates. Then, we show that averaging and restarting the primal-dual hybrid gradient allows us to better leverage the regularity constant. Numerical experiments on linear and quadratic programs, ridge regression and image denoising illustrate the findings of the paper.
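The averaging-and-restart scheme can be sketched on a toy strongly convex instance (the problem, step sizes, and restart period below are assumptions for illustration, not the paper's experimental setup): run plain PDHG for a fixed number of iterations, then restart from the averaged iterates.

```python
import numpy as np

# Toy instance: min_x 0.5*||x||^2 + 0.5*||A x - b||^2, whose solution
# satisfies (I + A^T A) x = A^T b (used here as ground truth).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x_star = np.linalg.solve(np.eye(10) + A.T @ A, A.T @ b)

L = np.linalg.norm(A, 2)       # operator norm of A
tau = sigma = 0.9 / L          # step sizes with tau * sigma * L^2 < 1

def pdhg(x, y, n_iter):
    """Plain PDHG; returns the last iterates and the running averages."""
    x_bar = x.copy()
    x_avg = np.zeros_like(x)
    y_avg = np.zeros_like(y)
    for _ in range(n_iter):
        # dual prox of g*(y) = 0.5*||y||^2 + b^T y  (conjugate of 0.5*||z-b||^2)
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1 + sigma)
        # primal prox of f(x) = 0.5*||x||^2
        x_new = (x - tau * (A.T @ y)) / (1 + tau)
        x_bar = 2 * x_new - x   # overrelaxation
        x = x_new
        x_avg += x
        y_avg += y
    return x, y, x_avg / n_iter, y_avg / n_iter

# Restarted averaged PDHG: 20 cycles of 50 iterations, restarting each
# cycle from the averaged primal-dual pair.
x, y = np.zeros(10), np.zeros(20)
for _ in range(20):
    x, y, x_avg, y_avg = pdhg(x, y, 50)
    x, y = x_avg, y_avg

print(np.linalg.norm(x - x_star))
```

The restart period (50) is a hand-picked constant for this sketch; the paper's contribution is the regularity analysis that justifies when such restarts yield linear convergence.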
