The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles on every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions about mathematical optimization problems. The topics covered by the journal fall into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge, meaning that neither the authors nor the readers have to pay to access the content of the published papers. The journal supports open data and open code whenever possible, and authors are strongly encouraged to submit code and data sets along with their manuscripts.


Open Access Policy

This journal provides immediate open access to its content. No subscription is needed to access the articles, and no download fees are charged.

Authors are not charged for publication in this journal, and the journal does not request any submission fee nor any article processing fee.

Short Papers

Authors may submit short papers (no more than 8 pages using the journal template), for which the editorial board guarantees a first answer within 3 months. Regular papers have no page limit.

Papers previously published in conference proceedings

Papers previously published in conference proceedings are welcome, provided that the authors inform the editorial board of this in the submission cover letter, highlight the novel parts of the new submission, and ensure that the journal version contains significant extensions over the conference version.

Copyright Policy

All copyright is retained by authors.

Peer Review

Each manuscript is handled by one section editor and one associate editor. Independent referees are asked to submit their assessment within two to three months of receiving the manuscript, possibly longer depending on the length of the article.

Licensing Policy

All content is licensed under the Creative Commons Attribution 4.0 International License so that interested researchers are free to remix, transform, and build upon the material for any purpose. 

Retraction Policy

Plagiarism and duplicate publication are considered misconduct. Any paper found to constitute such misconduct will be removed from the platform and/or the journal site, and the case will be reported on the website through a formal notice that states the facts about the source of the work and provides the appropriate citations to that work.

Privacy Statement

The names and email addresses entered in this journal site will be used exclusively for the stated purposes of this journal, and will not be made available for any other purpose or to any other party.

New articles

Exact makespan minimization of unrelated parallel machines

We study methods for the exact solution of the unrelated parallel machine problem with makespan minimization, generally denoted R||C_max. Our original application arises from the automotive assembly process, where tasks need to be distributed among several robots. This involves the solution of several R||C_max instances, which proved hard for a MILP solver since the makespan objective induces weak LP relaxation bounds. To improve these bounds and to enable the solution of larger instances, we propose a branch-and-bound method based on a Lagrangian relaxation of the assignment constraints. For this relaxation we derive a criterion for variable fixing and prove the zero duality gap property for the case of two parallel machines. Our computational studies indicate that the proposed algorithm is competitive with state-of-the-art methods on different types of instances. Moreover, the impact of each proposed feature is analysed.
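For readers unfamiliar with R||C_max: each job must be assigned to exactly one machine, processing times are machine-dependent, and the objective is to minimize the maximum machine load (the makespan). The following brute-force sketch makes the problem definition concrete; it is illustrative only, not the branch-and-bound method of the paper, and is viable only for tiny instances.

```python
from itertools import product

def exact_makespan(p):
    """Exact R||C_max by enumeration (illustration only).

    p[i][j] is the processing time of job j on machine i.
    Returns the minimum achievable makespan.
    """
    m, n = len(p), len(p[0])
    best = float("inf")
    # Enumerate every assignment of the n jobs to the m machines.
    for assign in product(range(m), repeat=n):
        loads = [0] * m
        for j, i in enumerate(assign):
            loads[i] += p[i][j]
        best = min(best, max(loads))  # makespan = max machine load
    return best

# Example: 2 machines, 3 jobs with machine-dependent times.
# exact_makespan([[2, 3, 4], [3, 2, 1]]) -> 3
```

The enumeration has m^n assignments, which is exactly why exact methods such as the branch-and-bound with Lagrangian bounds described above are needed in practice.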

Available online:

On the implementation of a global optimization method for mixed-variable problems

We describe the optimization algorithm implemented in the open-source derivative-free solver RBFOpt. The algorithm is based on the radial basis function method of Gutmann and the metric stochastic response surface method of Regis and Shoemaker. We propose several modifications aimed at generalizing and improving these two algorithms: (i) the use of an extended space to represent categorical variables in unary encoding; (ii) a refinement phase to locally improve a candidate solution; (iii) interpolation models without the unisolvence condition, both to help deal with categorical variables and to initiate the optimization before a uniquely determined model is possible; (iv) a master-worker framework to allow asynchronous objective function evaluations in parallel. Numerical experiments show the effectiveness of these ideas.
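Modification (i) maps each categorical variable to a block of 0/1 coordinates in an extended continuous space. A minimal sketch of such a unary (one-hot) encoding follows; the helper names are hypothetical for illustration and are not the actual RBFOpt API.

```python
def unary_encode(value, categories):
    """Encode a categorical value as a 0/1 block in the extended space.

    One coordinate per category; exactly one coordinate is 1.0.
    (Hypothetical helper, not part of RBFOpt.)
    """
    return [1.0 if c == value else 0.0 for c in categories]

def unary_decode(vec, categories):
    """Map a relaxed point back to the category with the largest coordinate."""
    return categories[max(range(len(vec)), key=lambda k: vec[k])]

# Example: the categorical variable takes values in {"a", "b", "c"}.
# unary_encode("b", ["a", "b", "c"]) -> [0.0, 1.0, 0.0]
# unary_decode([0.2, 0.7, 0.1], ["a", "b", "c"]) -> "b"
```

Working in this extended space lets a surrogate model treat categories as continuous coordinates, with a decode step recovering a feasible categorical value from any candidate point.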

Available online: