This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original and high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational aspects to theoretical questions related to mathematical optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational Aspects and Applications

The journal publishes high-quality articles in open access free of charge: neither authors nor readers pay to access the published papers, in keeping with the principles of Fair Open Access. The journal supports open data and open code whenever possible, and authors are strongly encouraged to submit code and data sets along with their manuscripts.

Memberships and Indexing

e-ISSN: 2777-5860

New articles

Trading off 1-norm and sparsity against rank for linear models using mathematical optimization: 1-norm minimizing partially reflexive ah-symmetric generalized inverses

The M-P (Moore–Penrose) pseudoinverse has as a key application the computation of least-squares solutions of inconsistent systems of linear equations. Even when a given input matrix is sparse, its M-P pseudoinverse can be dense, potentially leading to a high computational burden, especially for high-dimensional matrices. The M-P pseudoinverse is uniquely characterized by four properties, but only two of them need to be satisfied for the computation of least-squares solutions. Fampa and Lee (2018) and Xu, Fampa, Lee, and Ponte (2019) propose local-search procedures to construct sparse block-structured generalized inverses that satisfy the two key M-P properties, plus one more (the so-called reflexive property). That additional M-P property is equivalent to imposing a minimum-rank condition on the generalized inverse. (Vector) 1-norm minimization is used to induce sparsity and, importantly, to keep the magnitudes of the entries of the constructed generalized inverses under control. Here, we investigate the trade-off between low 1-norm and low rank for generalized inverses that can be used in the computation of least-squares solutions. We propose several algorithmic approaches that start from a 1-norm minimizing generalized inverse satisfying the two key M-P properties and gradually decrease its rank by iteratively imposing the reflexive property. The algorithms iterate until the generalized inverse has the least possible rank. During the iterations, we produce intermediate solutions, trading off low 1-norm (and typically high sparsity) against low rank.
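The two key M-P properties referred to in this abstract are the first and third Penrose conditions, AHA = A and symmetry of AH; any H satisfying them yields least-squares solutions x = Hb. As a minimal sketch of the 1-norm minimizing starting point (not the authors' local-search or rank-reduction algorithms), assuming Python with NumPy and CVXPY and purely illustrative random data, one can solve the corresponding linear program:

    import cvxpy as cp
    import numpy as np

    # Illustrative data, not from the paper: a random 6x4 matrix of rank 2.
    rng = np.random.default_rng(0)
    m, n, r = 6, 4, 2
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

    # Minimize the entrywise (vector) 1-norm of H subject to the two key
    # M-P properties: A H A = A (P1) and symmetry of A H (P3). Both
    # constraints are linear in H, so this is a linear program.
    H = cp.Variable((n, m))
    constraints = [A @ H @ A == A, A @ H == (A @ H).T]
    cp.Problem(cp.Minimize(cp.sum(cp.abs(H))), constraints).solve()

    # Any H satisfying P1 and P3 gives a least-squares solution x = H b.
    b = rng.standard_normal(m)
    x = H.value @ b
    print(np.linalg.norm(A.T @ (A @ x - b)))  # near 0 (up to solver tolerance): normal equations hold

Additionally imposing the reflexive property HAH = H (P2), which the proposed algorithms do iteratively, forces rank(H) = rank(A), the least possible rank.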

Available online: PDF

Robust Combinatorial Optimization with Locally Budgeted Uncertainty

Budgeted uncertainty sets have become a standard tool for modeling uncertainty in robust optimization problems. A drawback of such sets is that the budget constraint only restricts the global amount of cost increase that the adversary can distribute. Local restrictions, while important for many applications, cannot be modeled this way.

We introduce a new variant of budgeted uncertainty sets, called locally budgeted uncertainty. In this setting, the uncertain parameters are partitioned into so-called regions, and a classic budgeted uncertainty set applies to each region separately.
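For intuition, here is a minimal sketch (illustrative names and data, not from the paper) of evaluating the worst case over a locally budgeted uncertainty set for a fixed 0/1 solution: within each region, the adversary raises the costs of the selected items with the largest deviations, up to that region's budget.

    # Worst-case cost of a fixed 0/1 solution x under locally budgeted
    # uncertainty: in each region r, the adversary may raise at most
    # gamma[r] item costs from c[i] to c[i] + d[i].
    def worst_case_cost(x, c, d, region, gamma):
        base = sum(c[i] for i in range(len(x)) if x[i])
        extra = 0
        for r in set(region):
            devs = sorted((d[i] for i in range(len(x))
                           if x[i] and region[i] == r), reverse=True)
            extra += sum(devs[:gamma[r]])
        return base + extra

    # Two regions: items 0-2 in region 0 (budget 1), items 3-5 in region 1 (budget 2).
    x = [1, 1, 0, 1, 1, 1]
    c = [4, 3, 5, 2, 6, 1]
    d = [2, 1, 3, 4, 1, 2]
    region = [0, 0, 0, 1, 1, 1]
    gamma = {0: 1, 1: 2}
    print(worst_case_cost(x, c, d, region, gamma))  # 16 + 2 + (4 + 2) = 24

The robust counterpart studied in the paper optimizes x against this worst case; the sketch only shows the adversary's side for a fixed solution.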

In a theoretical analysis, we show that the robust counterpart of such problems with a constant number of regions remains solvable in polynomial time whenever the underlying nominal problem can be solved in polynomial time. If the number of regions is unbounded, we show that the robust selection problem remains solvable in polynomial time, while we also provide hardness results for other combinatorial problems.

In computational experiments using both random and real-world data, we show that using locally budgeted uncertainty sets can have considerable advantages over classic budgeted uncertainty sets.

Available online: PDF