This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions related to mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge, meaning that authors pay no publication fees and readers pay nothing to access the published papers, in keeping with the principles of Diamond Open Access. The journal requires that the numerical results published in its papers be reproducible by others, ideally by publishing code and data sets along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, with a fast review process.
  • Significant extensions of conference proceedings.


Indexing

SCImago Journal & Country Rank

e-ISSN: 2777-5860

New articles

Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty

Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called Decision-Dependent Information Discovery (DDID). In the first stage, DDID selects a subset of uncertain cost coefficients to be observed; in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope, other than lower and upper bounds, is constant. The proof of this result generalizes to arbitrary polyhedral uncertainty sets a classical result showing that solving a robust combinatorial optimization problem with cost uncertainty amounts to solving the nominal counterpart several times. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.

Available online:
PDF
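
For readers skimming the abstract, the following is a rough sketch of the nested min-max structure that decision-dependent information discovery problems of this kind typically take; the notation is assumed here for illustration and is not taken from the paper: w ∈ W is the first-stage observation decision, Ξ is the polyhedral uncertainty set for the cost vector, X is the feasible set of the nominal problem, and the innermost maximization ranges over cost vectors that agree with the observed coordinates.

  \min_{w \in W} \; \max_{\xi \in \Xi} \; \min_{x \in X} \; \max_{\xi' \in \Xi \,:\, \xi'_i = \xi_i \ \forall i \text{ with } w_i = 1} \; \sum_i \xi'_i x_i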

Tight analyses for subgradient descent I: Lower bounds

Consider the problem of minimizing functions that are Lipschitz and convex, but not necessarily differentiable. We construct a function from this class for which the T-th iterate of subgradient descent has error Ω(log(T)/√T). This matches a known upper bound of O(log(T)/√T). We prove analogous results for functions that are additionally strongly convex: there exists such a function for which the T-th iterate of subgradient descent has error Ω(log(T)/T), matching a known upper bound of O(log(T)/T). These results resolve a question posed by Shamir (2012).

Available online:
PDF
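
As a minimal, self-contained illustration of what the "error of the T-th iterate" refers to, the Python sketch below runs plain subgradient descent on the generic 1-Lipschitz convex function f(x) = |x| with the standard step size 1/sqrt(t); this toy function and the step-size choice are assumptions made for illustration, not the worst-case construction from the paper.

  import numpy as np

  def subgradient_descent(subgrad, x0, step, T):
      # Iterate x_{t+1} = x_t - eta_t * g_t and return the final (T-th) iterate.
      x = x0
      for t in range(1, T + 1):
          x = x - step(t) * subgrad(x)
      return x

  # f(x) = |x| is 1-Lipschitz and convex, minimized at x* = 0 with f(x*) = 0.
  f = lambda x: abs(x)
  g = lambda x: np.sign(x)  # a valid subgradient of |x| (equals 0 at x = 0)

  T = 1000
  x_T = subgradient_descent(g, x0=1.0, step=lambda t: 1.0 / np.sqrt(t), T=T)
  print("suboptimality of the T-th iterate:", f(x_T))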