Open Journal of Mathematical Optimization

STATEMENTS

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions about mathematical optimization problems. The topics covered by the journal are classified into four areas:

1. Continuous Optimization
2. Discrete Optimization
3. Optimization under Uncertainty
4. Computational aspects and applications

The journal publishes high-quality articles in open access free of charge, meaning that neither the authors nor the readers have to pay to access the content of the published papers. The journal supports open data and open code whenever possible, and authors are strongly encouraged to submit code and data sets along with their manuscripts.

JOURNAL POLICY

Open Access Policy

This journal provides immediate open access to its content. No subscription is needed to access the articles, and no download fees are charged.

Authors are not charged for publication in this journal, and the journal does not request any submission fee nor any article processing fee.

Short Papers

Authors may submit short papers (no more than 8 pages using the journal template), for which the editorial board guarantees a first answer within 3 months. Regular papers have no page limit.

Papers previously published in conference proceedings

Papers previously published in conference proceedings are welcome, provided that the authors inform the editorial board of this in the submission cover letter, highlight the novel parts of the new submission, and ensure that the journal version contains significant extensions over the conference version.

All copyright is retained by authors.

Peer Review

Each manuscript is handled by one section editor and one associate editor. Independent referees are asked to submit their assessment within two to three months of receiving the manuscript, possibly longer depending on the length of the article.

Retraction Policy

Plagiarism and duplicate publication constitute misconduct. Any paper involved in such misconduct will be removed from the platform and/or the journal site, and the case will be reported on the website through a formal notice that states the facts about the source of the work and provides the appropriate citations to that work.

Privacy Statement

The names and email addresses entered in this journal site will be used exclusively for the stated purposes of this journal, and will not be made available for any other purpose or to any other party.

On the implementation of a global optimization method for mixed-variable problems

We describe the optimization algorithm implemented in the open-source derivative-free solver RBFOpt. The algorithm is based on the radial basis function method of Gutmann and the metric stochastic response surface method of Regis and Shoemaker. We propose several modifications aimed at generalizing and improving these two algorithms: (i) the use of an extended space to represent categorical variables in unary encoding; (ii) a refinement phase to locally improve a candidate solution; (iii) interpolation models without the unisolvence condition, both to help deal with categorical variables and to initiate the optimization before a uniquely determined model is possible; (iv) a master-worker framework to allow asynchronous objective function evaluations in parallel. Numerical experiments show the effectiveness of these ideas.
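The unary encoding of modification (i) maps each categorical variable to a block of 0/1 coordinates, one per category, so the surrogate model works in an extended continuous space. The following sketch illustrates the idea; the function names are illustrative and are not RBFOpt's actual API.

```python
# Sketch of unary (one-hot) encoding of a categorical variable,
# as in modification (i). Names are illustrative, not RBFOpt's API.

def unary_encode(value, categories):
    """Map a categorical value to a 0/1 vector with one coordinate
    per category (unary encoding)."""
    return [1.0 if c == value else 0.0 for c in categories]

# A point with one continuous variable x and one categorical variable
# taking values in {"red", "green", "blue"} lives in an extended
# 1 + 3 = 4-dimensional space:
categories = ["red", "green", "blue"]
point = [0.7] + unary_encode("green", categories)
print(point)  # [0.7, 0.0, 1.0, 0.0]
```

Each categorical value thus becomes a vertex of a simplex, which lets a radial basis function surrogate interpolate over mixed continuous/categorical points.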

Available online:

Revisiting a Cutting-Plane Method for Perfect Matchings

In 2016, Chandrasekaran, Végh, and Vempala (Mathematics of Operations Research, 41(1):23–48) published a method to solve the minimum-cost perfect matching problem on an arbitrary graph by solving a strictly polynomial number of linear programs. However, their method requires a strong uniqueness condition, which they imposed by using perturbations of the form $c(i) = c_0(i) + 2^{-i}$. On large graphs (roughly $m > 100$), these perturbations lead to cost values that exceed the precision of floating-point formats used by typical linear programming solvers for numerical calculations. We demonstrate, by a sequence of counterexamples, that perturbations are required for the algorithm to work, motivating our formulation of a general method that arrives at the same solution to the problem as Chandrasekaran et al. but overcomes the limitations described above by solving multiple linear programs without using perturbations. The key ingredient of our method is an adaptation of an algorithm for lexicographic linear goal programming due to Ignizio (Journal of the Operational Research Society, 36(6):507–515, 1985). We then give an explicit algorithm that exploits our method, and show that this new algorithm still runs in strongly polynomial time.
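The precision limitation described above can be seen directly in IEEE 754 double precision, whose 53-bit significand is what typical LP solvers compute with: once the index $i$ exceeds the significand width, a perturbation $2^{-i}$ added to a cost of order 1 simply rounds away. A minimal sketch of this effect (not the authors' code):

```python
# Sketch: why perturbations of the form 2^{-i} exceed the precision of
# IEEE 754 doubles (53 significand bits) used by typical LP solvers.
c0 = 1.0  # a base cost of order 1

print(c0 + 2.0**-52 == c0)  # False: the perturbation is still representable
print(c0 + 2.0**-53 == c0)  # True: the perturbation rounds away entirely
```

So for indices well below the $m > 100$ threshold quoted in the abstract, the perturbed costs $c_0(i) + 2^{-i}$ become numerically indistinguishable from the unperturbed ones, which is precisely the obstacle the perturbation-free method avoids.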

Available online: