This is the home page of the Open Journal of Mathematical Optimization, an electronic journal of computer science and mathematics owned by its Editorial Board.

The Open Journal of Mathematical Optimization (OJMO) publishes original, high-quality articles dealing with every aspect of mathematical optimization, ranging from numerical and computational questions to the theory of optimization problems. The topics covered by the journal are classified into four areas:

  1. Continuous Optimization
  2. Discrete Optimization
  3. Optimization under Uncertainty
  4. Computational Aspects and Applications

The journal publishes high-quality articles in open access free of charge, meaning that authors pay no publication fees and readers pay nothing to access the published papers, thus adhering to the principles of Fair Open Access. The journal requires that the numerical results published in its papers be reproducible by others, ideally by publishing the code and data sets along with the manuscripts.


e-ISSN: 2777-5860

New articles

Inertial-relaxed splitting for composite monotone inclusions

In a spirit similar to the extension of the proximal point method developed by Alves et al. [2], we propose in this work an inertial-relaxed primal-dual splitting method for decomposing the minimization of the sum of three convex functions, one of which is smooth, over a general coupling subspace. We formalize a unified setting and apply it to different averaged maps whose fixed points correspond to the solutions of the inclusion problem associated with our extended model. An interesting feature of the resulting algorithms is that they come in two distinct versions, with a Gauss–Seidel or a Jacobi flavor, both including inertial and relaxation parameters and thereby extending former proximal ADMM methods. Finally, we report computational experiments on a class of medium-size fused LASSO instances.

Available online: PDF
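
As context for this problem class, the sketch below implements the classical Davis–Yin three-operator splitting for minimizing f + g + h with h smooth. It is a standard baseline for the same setting, not the paper's inertial-relaxed primal-dual method, and the toy problem, step size gamma, and relaxation parameter lam are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the classical Davis-Yin three-operator splitting for
# min_x f(x) + g(x) + h(x) with h smooth. Standard baseline only, NOT the
# paper's inertial-relaxed method; the toy problem and parameters below
# are assumptions chosen for illustration.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def davis_yin(prox_f, prox_g, grad_h, z0, gamma=0.5, lam=1.0, n_iters=500):
    """Davis-Yin splitting: prox steps on f and g, a gradient step on h."""
    z = z0.copy()
    for _ in range(n_iters):
        xg = prox_g(z, gamma)                                  # prox of gamma*g
        xf = prox_f(2.0 * xg - z - gamma * grad_h(xg), gamma)  # prox of gamma*f
        z = z + lam * (xf - xg)                                # relaxed update
    return prox_g(z, gamma)

# Toy problem: min 0.5*||x - b||^2 + mu*||x||_1 + indicator(x >= 0)
b = np.array([1.5, -0.3, 0.8])
mu = 0.5
x = davis_yin(
    prox_f=lambda v, g: soft_threshold(v, g * mu),  # prox of mu*||.||_1
    prox_g=lambda v, g: np.maximum(v, 0.0),         # projection onto x >= 0
    grad_h=lambda v: v - b,                         # gradient of 0.5*||x - b||^2
    z0=np.zeros(3),
)
print(x)  # approx [1.0, 0.0, 0.3]: soft-thresholded toward b, kept nonnegative
```

An inertial variant would additionally extrapolate z using the previous iterate before the prox steps; the relaxation ingredient is already visible in the lam parameter above.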

Short Paper - A note on the Frank–Wolfe algorithm for a class of nonconvex and nonsmooth optimization problems

Frank and Wolfe’s celebrated conditional gradient method is a well-known tool for solving smooth optimization problems for which minimizing a linear function over the feasible set is computationally cheap. However, when the objective function is nonsmooth, the method may fail to compute a stationary point. In this work, we show that the Frank–Wolfe algorithm can be employed to compute Clarke-stationary points for nonconvex and nonsmooth optimization problems consisting of minimizing upper-C^{1,α} functions over convex and compact sets. Furthermore, under more restrictive assumptions, we propose a new algorithm variant with stronger stationarity guarantees, namely directional stationarity and even local optimality.

Available online: PDF
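
To make the classical method concrete, here is a minimal sketch of Frank–Wolfe on a smooth toy problem over the probability simplex, where the linear minimization oracle reduces to picking a single vertex. It illustrates only the standard smooth setting, not the paper's nonsmooth analysis or its stronger variant; the step-size rule and toy objective are assumptions.

```python
import numpy as np

# A minimal sketch of the classical Frank-Wolfe (conditional gradient)
# method for min f(x) over the probability simplex, where the linear
# minimization oracle (LMO) is cheap: the best vertex of the simplex.
# Standard smooth setting only, not the paper's nonsmooth variants.

def frank_wolfe(grad_f, x0, n_iters=200):
    x = x0.copy()
    for k in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0      # LMO: vertex minimizing <g, s> over simplex
        gamma = 2.0 / (k + 2.0)    # standard open-loop step size
        x = x + gamma * (s - x)    # convex combination keeps x feasible
    return x

# Toy objective: f(x) = 0.5 * ||x - b||^2, i.e., project b onto the simplex
b = np.array([0.2, -0.1, 0.7])
x_star = frank_wolfe(lambda x: x - b, x0=np.full(3, 1.0 / 3.0))
print(x_star, x_star.sum())  # approx [0.25, 0.0, 0.75]; entries sum to 1
```

Note that each iterate is a convex combination of simplex vertices, so feasibility never requires a projection step; this is the feature that makes the method attractive when linear minimization over the feasible set is cheap.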