This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original and high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions related to mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge, meaning that neither the authors nor the readers have to pay to access the content of the published papers, thus adhering to the principles of Fair Open Access. The journal requires the numerical results published in its papers to be reproducible by others, ideally by publishing code and data sets along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, ensuring a fast review process.
  • Significant extensions of conference proceedings.


Indexing


News


e-ISSN: 2777-5860

New articles

The backtrack Hölder gradient method with application to min-max and min-min problems

We present a new algorithm to solve min-max or min-min problems outside the convex world. We use rigidity assumptions, ubiquitous in learning, making our method – the backtrack Hölder algorithm – applicable to many optimization problems. Our approach takes advantage of hidden regularity properties and allows us, in particular, to devise a simple algorithm of ridge type. An original feature of our method is that it comes with automatic step size adaptation, which departs from the usual overly cautious backtracking methods. In a general framework, we provide theoretical convergence guarantees and rates. We apply our findings to simple Generative Adversarial Network (GAN) problems, obtaining promising numerical results. It is worth mentioning that a byproduct of our approach is a simple recipe for general Hölderian backtracking optimization.
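
As a rough, generic illustration only (this is not the authors' backtrack Hölder algorithm), the Python sketch below shows a plain gradient method with classical Armijo backtracking, the kind of step-size rule the abstract contrasts its method with; the objective f, its gradient grad_f, and all constants are hypothetical placeholders.

    # Generic gradient descent with Armijo backtracking -- an illustration of
    # automatic step size selection, NOT the paper's backtrack Hölder method.
    import numpy as np

    def backtracking_gradient_descent(f, grad_f, x0, t0=1.0, beta=0.5,
                                      c=1e-4, max_iter=200, tol=1e-8):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            t = t0
            # Shrink the trial step until a sufficient-decrease condition holds.
            while f(x - t * g) > f(x) - c * t * np.dot(g, g):
                t *= beta
            x = x - t * g
        return x

    # Toy usage on a convex quadratic (placeholder problem).
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    f = lambda x: 0.5 * x @ A @ x
    grad_f = lambda x: A @ x
    print(backtracking_gradient_descent(f, grad_f, np.array([2.0, -1.0])))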


The continuous quadrant penalty formulation of logical constraints

Could continuous optimization efficiently address logical constraints? We propose a continuous-optimization alternative to the usual discrete-optimization (big-M and complementarity) formulations of logical constraints that can lead to effective practical methods. Based on the simple idea of guiding the search of a continuous-optimization descent method towards the parts of the domain where the logical constraint is satisfied, we introduce a smooth penalty-function formulation of logical constraints, along with related theoretical results. This formulation allows direct use of state-of-the-art continuous optimization solvers. The effectiveness of the continuous quadrant penalty formulation is demonstrated on an aircraft conflict avoidance application.
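
As a rough, generic illustration only (this is not the paper's quadrant penalty), the Python sketch below replaces a toy disjunctive constraint "x[0] <= 0 OR x[1] <= 0" with a continuously differentiable penalty and hands the penalized problem to a standard solver via scipy.optimize.minimize; the objective, the disjunction, and the weight rho are hypothetical placeholders.

    # Generic smooth penalty for a logical (disjunctive) constraint -- an
    # illustration of the penalty idea, NOT the paper's quadrant penalty.
    import numpy as np
    from scipy.optimize import minimize

    def pos(t):
        # Positive part; pos(t)**2 is continuously differentiable.
        return np.maximum(0.0, t)

    def penalized_objective(x, rho=100.0):
        f = (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2   # placeholder objective
        g1, g2 = x[0], x[1]                          # disjunction: g1 <= 0 OR g2 <= 0
        # The penalty vanishes exactly where the disjunction holds.
        penalty = pos(g1) ** 2 * pos(g2) ** 2
        return f + rho * penalty

    # Asymmetric starting point so the solver commits to one branch.
    result = minimize(penalized_objective, x0=np.array([0.8, 0.2]), method="BFGS")
    print(result.x)  # roughly (1, 0): the branch x[1] <= 0 is (nearly) enforced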
