This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions about optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access, free of charge: neither authors nor readers pay any fee, in line with the principles of Diamond Open Access. The journal requires the numerical results published in its papers to be reproducible by others, ideally through code and data sets published along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, which benefit from a fast review process.
  • Significant extensions of conference proceedings.


Indexing

SCImago Journal & Country Rank

e-ISSN: 2777-5860

New articles

Short paper - The Binary Linearization Complexity of Pseudo-Boolean Functions

We consider the problem of linearizing a pseudo-Boolean function f : {0,1}^n → ℝ by means of k Boolean functions. Such a linearization yields an integer linear programming formulation with only k auxiliary variables. This motivates the definition of the linearization complexity of f as the minimum such k. Our theoretical contributions are a proof that random polynomials almost surely have a high linearization complexity, and characterizations of its value when the set of admissible Boolean functions is or is not restricted. The practical relevance is shown by devising and evaluating integer linear programming models of two such linearizations for the low-autocorrelation binary sequences problem. Still, many problems around this new concept remain open.
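
As a standard textbook illustration of the concept (not taken from the abstract above): the monomial f(x) = x_1 x_2 has linearization complexity at most 1, since it equals the single Boolean function g(x) = x_1 ∧ x_2. Introducing one auxiliary binary variable z for g yields the usual integer linear programming constraints

z ≤ x_1,   z ≤ x_2,   z ≥ x_1 + x_2 - 1,   z ∈ {0,1},

under which z = x_1 x_2 for every x ∈ {0,1}^2, so the quadratic term can be replaced by the linear term z.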

Available online: PDF

Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty

Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called Decision-Dependent Information Discovery (DDID). In the first stage, DDID selects a subset of uncertain cost coefficients to be observed; in the second stage, DDID selects a solution to the nominal problem, while the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope, other than lower and upper bounds, is constant. The proof of this result generalizes, to any polyhedral uncertainty set, a classical result showing that solving a robust combinatorial optimization problem with cost uncertainty amounts to solving the nominal counterpart several times. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.
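
To fix ideas on how the two stages interact, here is a schematic nested formulation in generic notation (an editorial sketch, not taken from the abstract above): with W the feasible observation decisions, X the feasible solutions of the nominal problem, and U the cost uncertainty polytope, DDID can be read as

min_{w ∈ W}  max_{c ∈ U}  min_{x ∈ X}  max_{c' ∈ U : c'_i = c_i whenever w_i = 1}  Σ_i c'_i x_i,

where the innermost maximization ranges only over cost vectors consistent with the coefficients observed in the first stage.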

Available online: PDF