This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original and high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to the theoretical questions related to mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational aspects and applications

The journal publishes high-quality articles in open access, free of charge: authors do not pay to publish and readers do not pay to access the published papers, in line with the principles of Diamond Open Access. The journal requires the numerical results published in its papers to be reproducible by others, ideally by publishing code and data sets along with the manuscripts.

As detailed under the Policy tab, the journal also publishes:

  • Short papers, which benefit from a fast review process.
  • Significant extensions of conference proceedings.


Indexing

SCImago Journal & Country Rank

e-ISSN: 2777-5860

New articles

Short Paper - Quadratic minimization: from conjugate gradient to an adaptive Polyak’s momentum method with Polyak step-sizes

In this work, we propose an adaptive variant of the classical Heavy-ball method for convex quadratic minimization. The adaptivity relies crucially on so-called “Polyak step-sizes”, which use knowledge of the optimal value of the optimization problem at hand instead of problem parameters such as a few eigenvalues of the Hessian. The method turns out to be equivalent to a variation of the classical conjugate gradient method, and thereby inherits many of its attractive features, including finite-time convergence, instance optimality, and its worst-case convergence rates.

The classical gradient method with Polyak step-sizes is known to behave very well in the situations where it can be applied, and whether momentum can be incorporated into this method, and whether doing so improves it, appeared to be an open question. We provide a definitive answer to this question for minimizing convex quadratic functions, an arguably necessary first step for developing such methods in more general setups.
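As a hypothetical illustration of the baseline the abstract builds on, here is a sketch of the classical gradient method with Polyak step-sizes on a convex quadratic; the momentum variant proposed in the paper is not reproduced here, and all names are our own:

```python
import numpy as np

# Gradient descent with Polyak step-sizes on f(x) = 0.5 x^T A x - b^T x.
# The Polyak step uses only the optimal value f*, not eigenvalues of A.
def polyak_gradient_descent(A, b, x0, f_star, iters=200):
    x = x0.astype(float)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    for _ in range(iters):
        g = A @ x - b                   # gradient of the quadratic
        gn = g @ g
        if gn < 1e-16:                  # already at the optimum
            break
        step = (f(x) - f_star) / gn     # Polyak step-size
        x = x - step * g
    return x

# Usage: a small well-conditioned quadratic with known optimum x* = A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)
f_star = 0.5 * x_star @ A @ x_star - b @ x_star
x = polyak_gradient_descent(A, b, np.zeros(2), f_star)
```

On strongly convex quadratics the squared distance to the optimum contracts at every step, so a few hundred iterations suffice on this toy instance.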

Available online:
PDF

Iterative Linear Quadratic Optimization for Nonlinear Control: Differentiable Programming Algorithmic Templates

Iterative optimization algorithms depend on access to information about the objective function. In a differentiable programming framework, this information, such as gradients, can be automatically derived from the computational graph. We explore how nonlinear control algorithms, often employing linear and/or quadratic approximations, can be effectively cast within this framework. Our approach illuminates shared components and differences between gradient descent, Gauss–Newton, Newton, and differential dynamic programming methods in the context of discrete-time nonlinear control. Furthermore, we present line-search strategies and regularized variants of these algorithms, along with a comprehensive analysis of their computational complexities. We study the performance of the aforementioned algorithms on various nonlinear control benchmarks, including autonomous car racing simulations using a simplified car model. All implementations are publicly available in a package coded in a differentiable programming language.

Available online:
PDF

First order algorithms for computing linear and polyhedral estimates

It was recently shown [6, 8] that “properly built” linear and polyhedral estimates nearly attain minimax accuracy bounds in the problem of recovering an unknown signal from noisy observations of linear images of the signal when the signal set is an ellitope. However, the design of nearly optimal estimates relies upon solving semidefinite optimization problems with matrix variables, which puts the synthesis of such estimates beyond the reach of standard Interior Point algorithms of semidefinite optimization even for moderate-size recovery problems. Our goal is to develop First Order Optimization algorithms for the computationally efficient design of linear and polyhedral estimates. In this paper we (a) explain how to eliminate matrix variables, thus dramatically reducing the design dimension when passing from Interior Point to First Order optimization algorithms, and (b) develop and analyse a dedicated algorithm of the latter type, the Composite Truncated Level method.

Available online:
PDF

Short Paper - The Binary Linearization Complexity of Pseudo-Boolean Functions

We consider the problem of linearizing a pseudo-Boolean function f : {0,1}^n → ℝ by means of k Boolean functions. Such a linearization yields an integer linear programming formulation with only k auxiliary variables. This motivates the definition of the linearization complexity of f as the minimum such k. Our theoretical contributions are a proof that random polynomials almost surely have high linearization complexity, and characterizations of its value in case we do or do not restrict the set of admissible Boolean functions. The practical relevance is shown by devising and evaluating integer linear programming models of two such linearizations for the low auto-correlation binary sequences problem. Still, many problems around this new concept remain open.
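As a concrete illustration of the k = 1 case, the classical linearization of a single quadratic monomial replaces x1·x2 by one auxiliary Boolean variable y constrained by three linear inequalities. This standard construction is our own example, not the paper's:

```python
from itertools import product

# The Boolean function y = x1 AND x2 stands in for the quadratic term x1*x2;
# three linear inequalities force y = x1*x2 over binary variables.
def feasible_y(x1, x2):
    """Binary values of y satisfying y <= x1, y <= x2, y >= x1 + x2 - 1."""
    return [y for y in (0, 1) if y <= x1 and y <= x2 and y >= x1 + x2 - 1]

# Over all binary assignments, the constraints pin y to the product x1*x2.
checks = {(x1, x2): feasible_y(x1, x2) for x1, x2 in product((0, 1), repeat=2)}
```

Each assignment of (x1, x2) leaves exactly one feasible value of y, namely the product, which is what makes the auxiliary variable a valid linearization.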

Available online:
PDF

Combinatorial Robust Optimization with Decision-Dependent Information Discovery and Polyhedral Uncertainty

Given a nominal combinatorial optimization problem, we consider a robust two-stage variant with polyhedral cost uncertainty, called Decision-Dependent Information Discovery (DDID). In the first stage, DDID selects a subset of uncertain cost coefficients to be observed; in the second stage, DDID selects a solution to the nominal problem, where the remaining cost coefficients are still uncertain. Given a compact linear programming formulation for the nominal problem, we provide a mixed-integer linear programming (MILP) formulation for DDID. The MILP is compact if the number of constraints describing the uncertainty polytope, other than lower and upper bounds, is constant. The proof of this result generalizes to any polyhedral uncertainty set a classical result showing that solving a robust combinatorial optimization problem with cost uncertainty amounts to solving the nominal counterpart several times. We extend this formulation to more general nominal problems through column generation and constraint generation algorithms. We illustrate our reformulations and algorithms numerically on the selection problem, the orienteering problem, and the spanning tree problem.

Available online:
PDF

Tight analyses for subgradient descent I: Lower bounds

Consider the problem of minimizing functions that are Lipschitz and convex, but not necessarily differentiable. We construct a function from this class for which the T-th iterate of subgradient descent has error Ω(log(T)/√T). This matches a known upper bound of O(log(T)/√T). We prove analogous results for functions that are additionally strongly convex: there exists such a function for which the error of the T-th iterate of subgradient descent is Ω(log(T)/T), matching a known upper bound of O(log(T)/T). These results resolve a question posed by Shamir (2012).
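To make the setting concrete, here is a hypothetical sketch of subgradient descent with the standard 1/√t step-sizes on f(x) = |x|, a simple member of the Lipschitz convex, non-differentiable class the paper studies; the paper's worst-case lower-bound construction is far more intricate:

```python
import math

# Subgradient descent on f(x) = |x|: at each step take any subgradient
# (here the sign of x) and move against it with step-size 1/sqrt(t).
def subgradient_descent(x0, T):
    x = x0
    for t in range(1, T + 1):
        g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # a subgradient of |x|
        x -= g / math.sqrt(t)                           # step-size 1/sqrt(t)
    return x

# The final iterate ends up oscillating near the minimizer x* = 0,
# with amplitude on the order of the last step-sizes.
x_T = subgradient_descent(x0=5.0, T=10000)
```

On this easy instance the final iterate lands close to the optimum; the point of the paper is that on carefully constructed functions the final-iterate error cannot beat the log(T)-inflated rates above.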

Available online:
PDF

An interior proximal gradient method for nonconvex optimization

We consider structured minimization problems subject to smooth inequality constraints and present a flexible algorithm that combines interior point (IP) and proximal gradient schemes. While traditional IP methods cannot cope with nonsmooth objective functions and proximal algorithms cannot handle complicated constraints, their combined usage is shown to compensate for each other's shortcomings. We provide a theoretical characterization of the algorithm and its asymptotic properties, deriving convergence results for fully nonconvex problems, thus bridging the gap with previous works that successfully addressed the convex case. Our interior proximal gradient algorithm benefits from warm starting, generates strictly feasible iterates with decreasing objective value, and returns after finitely many iterations a primal-dual pair approximately satisfying suitable optimality conditions. As a byproduct of our analysis of proximal gradient iterations, we demonstrate that a slight refinement of traditional backtracking techniques removes the need to upper-bound the stepsize sequence, as required in existing results for the nonconvex setting.
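For readers unfamiliar with the proximal-gradient half of the combination, here is a minimal sketch of classical ISTA for an ℓ1-regularized least-squares problem; the interior-point treatment of smooth inequality constraints, which is the paper's contribution, is omitted, and all names here are our own:

```python
import numpy as np

# ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step on the
# smooth part followed by the proximal operator of the l1 term.
def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)         # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Usage: with A = I the solution is the soft-thresholded data vector.
A = np.eye(3)
b = np.array([2.0, 0.5, -1.5])
x = ista(A, b, lam=1.0)
```

With A equal to the identity, the iteration reaches its fixed point soft_threshold(b, 1) after a single step, which makes the scheme easy to sanity-check.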

Available online:
PDF

Cardinality-constrained structured data-fitting problems

A memory-efficient solution framework is proposed for the cardinality-constrained structured data-fitting problem. Dual-based atom-identification rules reveal the structure of the optimal primal solution from near-optimal dual solutions, which allows for a simple and computationally efficient algorithm that translates any feasible dual solution into a primal solution satisfying the cardinality constraint. Rigorous guarantees bound the quality of a near-optimal primal solution given any dual-based method that generates dual iterates converging to an optimal dual solution. Numerical experiments on real-world datasets support the analysis and demonstrate the efficiency of the proposed approach.

Available online:
PDF

Optimizing transient gas network control for challenging real-world instances using MIP-based heuristics

Optimizing the transient control of gas networks is a highly challenging task. The corresponding model incorporates the combinatorial complexity of determining the settings for the many active elements as well as the non-linear and non-convex nature of the physical and technical principles of gas transport. In this paper, we present the latest improvements of our ongoing work to tackle this problem for real-world, large-scale problem instances: By adjusting our mixed-integer non-linear programming model regarding the gas compression capabilities in the network, we reflect the technical limits of the underlying units more accurately while maintaining a similar overall model size. In addition, we introduce a new algorithmic approach that is based on splitting the complexity of the problem by first finding assignments for discrete variables and then determining the continuous variables as locally optimal solution of the corresponding non-linear program. For the first task, we design multiple different heuristics based on concepts for general time-expanded optimization problems that find solutions by solving a sequence of sub-problems defined on reduced time horizons. To demonstrate the competitiveness of our approach, we test our algorithm on particularly challenging historical demand scenarios. The results show that high-quality solutions are obtained reliably within short run times, making the algorithm well-suited to be applied at the core of time-critical industrial applications.

Available online:
PDF

The backtrack Hölder gradient method with application to min-max and min-min problems

We present a new algorithm for solving min-max or min-min problems beyond the convex setting. We use rigidity assumptions, ubiquitous in learning, which make our method, the backtrack Hölder algorithm, applicable to many optimization problems. Our approach takes advantage of hidden regularity properties and allows us, in particular, to devise a simple algorithm of ridge type. An original feature of our method is its automatic step-size adaptation, which departs from the usual overly cautious backtracking methods. In a general framework, we provide theoretical convergence guarantees and rates. We apply our findings to simple Generative Adversarial Network (GAN) problems, obtaining promising numerical results. It is worth mentioning that a byproduct of our approach is a simple recipe for general Hölderian backtracking optimization.
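For contrast with the paper's automatic step-size adaptation, here is a minimal sketch of the classical Armijo backtracking baseline the abstract calls overly cautious; this is our own illustration for a smooth one-dimensional objective, not the paper's backtrack Hölder method:

```python
# Armijo backtracking: shrink the trial step t until the sufficient-decrease
# condition f(x - t*g(x)) <= f(x) - c*t*||g(x)||^2 holds. The step only ever
# shrinks, which is what makes the strategy cautious.
def armijo_backtracking(f, g, x, t0=1.0, beta=0.5, c=0.5):
    t, fx, gx = t0, f(x), g(x)
    sq = gx * gx                        # ||grad||^2 in one dimension
    while f(x - t * gx) > fx - c * t * sq:
        t *= beta                       # halve the step and retry
    return t

# Usage on f(x) = x^2 at x = 3: the search settles on t = 0.5.
t = armijo_backtracking(lambda x: x * x, lambda x: 2 * x, 3.0)
```

Because the accepted step depends only on the sufficient-decrease test, it never grows back after a conservative shrink, which is exactly the behavior adaptive schemes try to avoid.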

Available online:
PDF

The continuous quadrant penalty formulation of logical constraints

Can continuous optimization efficiently address logical constraints? We propose a continuous-optimization alternative to the usual discrete-optimization (big-M and complementarity) formulations of logical constraints, which can lead to effective practical methods. Based on the simple idea of guiding the search of a continuous-optimization descent method toward the parts of the domain where the logical constraint is satisfied, we introduce a smooth penalty-function formulation of logical constraints, together with related theoretical results. This formulation allows a direct use of state-of-the-art continuous optimization solvers. The effectiveness of the continuous quadrant penalty formulation is demonstrated on an aircraft conflict avoidance application.

Available online:
PDF