Learning structured approximations of combinatorial optimization problems
Open Journal of Mathematical Optimization, Volume 6 (2025), article no. 10, 27 p.

Neural networks that include a combinatorial optimization layer can yield surprisingly efficient heuristic policies for difficult combinatorial optimization problems. Three questions remain open: which architecture should be used, how the parameters of the machine learning model should be learned, and what performance guarantees can be expected from the resulting algorithms. Following the intuitions of geometric deep learning, we explain why equivariant layers should be used when designing such policies, and illustrate how to build such layers for routing, scheduling, and network design applications. We introduce a learning approach that makes it possible to learn such policies when the training set contains only instances of the difficult optimization problem, not their optimal solutions, and demonstrate its numerical performance on our three applications. Finally, using tools from statistical learning theory, we prove a theorem establishing the convergence speed of the estimator. As a corollary, we obtain that if an approximation algorithm can be encoded by the neural network for some parametrization, then the learned policy retains the approximation ratio guarantee. On our network design problem, our machine learning policy achieves both the approximation ratio guarantee of the best known approximation algorithm and the numerical efficiency of the best heuristic.
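The perturbed empirical risk minimization mentioned in the keywords rests on a simple idea: a combinatorial optimization layer maps a cost vector to a discrete solution, which is piecewise constant and hence not differentiable; averaging the layer's output over random perturbations of the costs smooths it so that gradients can flow through. The following is a minimal illustrative sketch, not the paper's implementation: the one-hot `argmax` oracle, the Gaussian noise scale `sigma`, and the Monte Carlo sample count are all hypothetical stand-ins for an actual combinatorial solver and tuned parameters.

```python
import numpy as np

def argmax_oracle(theta):
    # Stand-in combinatorial oracle: returns the vertex (one-hot vector)
    # maximizing the linear objective <theta, y> over the simplex corners.
    y = np.zeros_like(theta)
    y[np.argmax(theta)] = 1.0
    return y

def perturbed_layer(theta, sigma=0.5, n_samples=1000, rng=None):
    # Monte Carlo estimate of E[oracle(theta + sigma * Z)] with Z ~ N(0, I).
    # The expectation is a smooth function of theta even though the oracle
    # itself is piecewise constant, which makes the layer trainable.
    rng = np.random.default_rng(0) if rng is None else rng
    samples = [
        argmax_oracle(theta + sigma * rng.standard_normal(theta.shape))
        for _ in range(n_samples)
    ]
    return np.mean(samples, axis=0)

theta = np.array([1.0, 0.5, -0.2])
y_smooth = perturbed_layer(theta)
# y_smooth is a probability vector concentrating on the best solution,
# with mass on the others that shrinks as sigma decreases.
```

In an actual pipeline, `argmax_oracle` would be replaced by the problem-specific solver (e.g. a shortest-path or scheduling algorithm), and gradients of the perturbed expectation would be estimated rather than computed by automatic differentiation through the oracle.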

DOI: 10.5802/ojmo.43
Keywords: Decision Focused Learning, Combinatorial Optimization Augmented Machine Learning, Combinatorial Optimization Layer, Generalization Bound, Perturbed Empirical Risk Minimization, Stochastic Vehicle Scheduling, Single Machine Scheduling

Axel Parmentier 1

1 CERMICS, ENPC, Institut Polytechnique de Paris, Marne-la-Vallée, France
License: CC-BY 4.0
Copyrights: The authors retain unrestricted copyrights and publishing rights
