The Referenced Vertex Ordering Problem: Theory, Applications, and Solution Methods

We introduce the referenced vertex ordering problem (revorder) as a combinatorial decision problem generalizing several vertex ordering problems that already appeared in the scientific literature under different guises. In other words, revorder is a generic problem with several possible extensions corresponding to various real-life applications. Given a simple undirected graph G = (V, E), revorder basically asks whether the vertices of G can be sorted in a way to guarantee that every vertex is adjacent to a minimal number of its predecessors in the order. Previous works show that revorder, as well as its optimization counterpart, denoted in our work as min revorder, are NP-hard. We give a survey of methods and algorithms that can be applied to the solution of min revorder, and we develop a new enumeration scheme for its solution. Our theoretical analysis of this scheme yields several pruning techniques aimed at the reduction of the number of enumeration nodes. We then discuss how upper and lower bounds can be computed during the enumeration to design a branch-and-bound algorithm. Finally, we validate our branch-and-bound algorithm by conducting a large set of computational experiments on instances coming from various real-life applications. Our results highlight that the newly introduced pruning techniques allow the computation of good-quality solutions (in comparison with other solvers' solutions) while reducing the overall computational cost. Our branch-and-bound algorithm outperforms other existing solution methods: among 180 instances with 60 vertices, it solves 179 instances to optimality whereas the best existing method is only able to solve 109 of them. Moreover, our tests show that our algorithm can solve medium-scale instances with up to 500 vertices, which opens the perspective of handling new real-life problems. Our implementation of the branch-and-bound algorithm, together with all instances we have used, is publicly available on GitLab.
Digital Object Identifier 10.5802/ojmo.8


Introduction
Consider the problem of an undergraduate or graduate student organizing her university program. Courses given in the second semester can be attended only when the classes planned in the first semester have been attended, but not all the classes offered in the first semester are actually necessary. Even though the problem that every student needs to solve is very close to scheduling, the fact that only a subset of classes is to be selected makes this problem essentially different from classical scheduling. Suppose that G = (V, E) is a simple undirected graph where vertices are courses and there exists an edge between two vertices if and only if they belong to two consecutive semesters. From an initial selection of classes in the very first semester, the student's problem consists therefore in constructing an order for (a part of) the other graph vertices so that each admits a predetermined number of predecessors (the courses given in the previous semester). The courses v ∈ V of the second semester of the first year have as references the initial class selection of the first semester, while classes v held in later semesters need to have as references other classes u ∈ V given in the previous semester and such that {u, v} ∈ E. The student's problem is one example of the problem that is the focus of this article.
Consider now the same student achieving a successful academic career. Some years later she is faced with the problem of organizing a conference scientific program. Again, any researcher in operational research would

Formal definition of the problem and notation
The problem is formally represented by a simple undirected graph G = (V, E), where the number of vertices is given by |V| = n. Denoting [[a, b]] := {a, a + 1, …, b} for any a, b ∈ Z with a < b, any bijective function σ : V → [[1, n]] defines a total vertex order of G: given a vertex v ∈ V, σ associates to v an integer in [[1, n]], and reciprocally. For v ∈ V, we also say that the integer σ(v) is the rank of v in σ.
We are interested in vertex orders satisfying some specific connectivity properties. For their definitions, we introduce the following notations.
The neighborhood of a vertex v is denoted as N(v) := {u : {u, v} ∈ E}, and the degree of v, |N(v)|, is denoted as d°(v). If σ is a vertex order, then u ∈ V is called a reference in σ of v ∈ V if and only if u ∈ N(v) and σ(u) < σ(v). The set of references of v in σ is denoted as Rσ(v). To simplify notations, we will denote the set of references of the vertex with rank i ∈ [[1, n]], σ⁻¹(i), as Rσ(i).
As already discussed in the Introduction, we wish to establish whether there exists a vertex order where each vertex has a given required number L of references. Since the first L vertices cannot satisfy this constraint, it is necessary to define a subset S of at least L initial vertices, having ranks 1, …, L′ with L′ ≥ L, on which no reference constraints are imposed. Notice that, if the argument of the vertex order σ is a set A, we suppose that σ(A) represents the image of A through σ. Let S be a subset of 2^V. Formally, we introduce the following decision problem:

Definition 1 (Referenced Vertex Ordering Problem (revorder)). Given a simple undirected graph G = (V, E), a positive integer L and a family S ⊆ 2^V, decide whether there exist S ∈ S and a vertex order σ such that σ(S) = [[1, |S|]] and |Rσ(i)| ≥ L for every i ∈ [[|S| + 1, n]].

Such a vertex order is called a referenced order, and the set of referenced orders of G is denoted as Σ(S, L).
We point out that the constraint |Rσ(i)| ≥ L for i = |S| + 1 immediately implies that the set S ∈ S needs to have cardinality at least L (i.e., |S| ≥ L).
In addition to the requirements defining a referenced order, some applications may also justify that we consider an ideal number of references U ≥ L. In such a case, while it is imposed that every vertex other than the initial ones has at least L references, it is also required that a maximum number of vertices have at least U references. For a given referenced order σ with initial set S, we will then employ the following additional notation: for all v ∉ S, δσ(v) ∈ {0, 1} indicates that v lacks references, i.e., δσ(v) = 1 if and only if |Rσ(v)| ≤ U − 1. If vertex v is such that δσ(v) = 1, v will be called a partially-referenced vertex. In contrast, if δσ(v) = 0, v is called a fully-referenced vertex.
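For concreteness, the notation above can be mirrored in code (a purely illustrative sketch; the dict-based representation and helper names are ours, not the paper's):

```python
# Sketch of the notation above; representation and helper names are ours.
# A graph is a dict mapping each vertex to its neighborhood N(v);
# a vertex order sigma is a dict mapping each vertex to its rank in [[1, n]].

def references(adj, sigma, v):
    """R_sigma(v): the neighbors of v ranked before v in sigma."""
    return {u for u in adj[v] if sigma[u] < sigma[v]}

def delta(adj, sigma, v, U):
    """delta_sigma(v) = 1 iff v is partially referenced, i.e. |R_sigma(v)| <= U - 1."""
    return 1 if len(references(adj, sigma, v)) <= U - 1 else 0

# Toy example: a 4-cycle {1,2},{2,3},{3,4},{4,1} with the chord {2,4}.
adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
sigma = {1: 1, 2: 2, 4: 3, 3: 4}   # the order 1, 2, 4, 3
```

With U = 2 and S = {1}, vertex 2 is partially-referenced (one reference) while vertices 4 and 3 are fully-referenced.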
The above naturally yields the following optimization counterpart of revorder, which will be the focus of this article.
Definition 2 (Minimum Referenced Vertex Ordering Problem (min revorder)). Given a simple undirected graph G = (V, E), positive integers L and U with U ≥ L, and a family S ⊆ 2^V, find an optimal solution to the following optimization problem:

min Σv∈V\S δσ(v) subject to S ∈ S and σ ∈ Σ(S, L).

We notice that the constraint σ ∈ Σ(S, L) implies that |Rσ(v)| ≥ L if σ(v) ≥ |S| + 1. Therefore, when considering the second constraint of the optimization problem, we can verify that partially-referenced vertices are those for which the value of |Rσ(v)| lies between L and U − 1, both extremes included.

State of the art and contribution statement
In our view, one important feature of revorder and min revorder is that they share a generic and rather simple form. This opens the perspective of using these formulations for several different real-life applications, as illustrated in the introductory examples. In Section 2, we support this statement by describing how min revorder emerges as a subproblem in the solution of some distance geometry problems, and we discuss a particular case of an interdiction problem where the adversary problem can be modeled as a min revorder.
In the context of distance geometry, previous research has focused on a particular class of referenced orders, called discretization orders, where S is the set of L-cliques of the graph. The associated decision problem is the Discretization Vertex Order Problem (dvop) [15]. Lavor et al. [11] have proposed a greedy algorithm that solves the dvop in polynomial time for any fixed value of L; this algorithm was successfully used in some applications related to distance geometry [7, 21]. We point out that other works have focused on discretization orders satisfying the so-called consecutivity assumption [14], where all reference vertices for a given vertex are its immediate predecessors. This additional assumption implies that the final vertex order can be seen as a sequence of overlapping cliques of the original graph [19], and the problem of finding such a vertex order was formally introduced in [3] and named Contiguous Trilateration Ordering Problem (ctop). Notice that, in Definition 1, we make no assumption on the positioning of the reference vertices in the corresponding vertex orders.
The greedy algorithm initially presented in [11] is our starting point for the development of the original contribution of our work. In Section 4.2, we adapt this greedy search to find a referenced order, or to prove that none exists (see Algorithm 2). This implies that revorder is in P. In contrast, it was proved in [22] that a particular case of min revorder is NP-hard for any fixed integer values of L ≥ 0 and U ≥ L + 1, unless L = 0 and U = 1. The authors of [22] also established that, even for L = 1, the greedy search does not approximate the optimal value of this problem within a constant factor. Moreover, recent works dedicated to the exact solution of special cases of min revorder have also reported that several methods based on integer programming (IP), constraint programming and decomposition techniques were unable to deal with instances containing as few as 60 vertices within a reasonable computation time [21, 16].
With a view to overcoming these limits, we first improve one of the IP formulations, previously presented in [21] for a variant of min revorder. We develop new valid inequalities whose separation procedures are based on the analysis of cliques and cycles of the subgraph of G induced by low-degree vertices. We then propose a new enumeration scheme which builds upon the greedy algorithm introduced in [11]. For a faster and more accurate solution of the problem, we build a branch-and-bound framework based on this enumeration scheme. We also design several pruning techniques that are based on dominance and symmetry arguments.
We assess the performance of our branch-and-bound algorithm by a thorough comparison with four existing approaches: two constraint generation algorithms for extended IP formulations, one compact IP formulation and one constraint programming formulation. The computational experiments are carried out on a large benchmark including random artificial instances, and instances representing possible applications in structural biology, sensor network location [2] and interdiction problems. These experiments allow us to study the sensitivity of the solution methods on U , which was never previously analyzed (the value of U was set to L + 1 in all previous works due to the needs of the considered application). The experiments show that the new branch-and-bound widely outperforms existing methods and even allows us to solve medium-scale instances where the graph has 500 vertices and U = L + 1. The codes used for the experiments (see Section 6), as well as all our instances, are available on GitLab.
The remainder of the article is organized as follows. We describe two applications of min revorder in Section 2. We sketch the state of the art of existing IP formulations, and develop new valid inequalities in Section 3. In Section 4, we give a detailed presentation of the existing greedy completion algorithm, and we describe an enumeration scheme based on the same ideas to solve min revorder. The branch-and-bound algorithm based on this enumeration scheme is then developed in Section 5, where we show that several dominance rules and symmetries can be used to alleviate the computational effort. Finally, in Section 6, we assess our methodological contribution through numerical comparisons with the best existing methods.

2
Applications of revorder and min revorder

Discretization of distance geometry graphs
revorder appears as a fundamental pre-processing step for the solution of distance geometry problems (DGPs) [13]. The DGP consists in finding a realization in a K-dimensional Euclidean space of a simple edge-weighted undirected graph so that distances between realized vertices correspond to the weights on the corresponding edges.
Although the search space of the DGP is continuous in general, there exists a subclass of DGPs where it can be discretized and represented as a tree. The layer k of this search tree is associated with the vertex v ∈ V such that σ(v) = k: the nodes belonging to layer k all contain potential positions for the vertex v.
To determine whether an instance of the DGP is discretizable, one needs to verify the existence of a special order on the vertices of V [20]. In the literature devoted to the DGP, this special vertex order is named a discretization order, and it turns out that the search for a discretization order (the dvop mentioned above) is a particular case of revorder.
The subclass of DGPs that includes only discretizable instances is referred to as the Discretizable DGP (DDGP) [20]. The DDGP can be efficiently solved by employing a branch-and-prune algorithm [12], which performs a complete enumeration of the feasible branches of the search tree. This is made possible by the assumptions satisfied by a discretization order: for every vertex v having a rank larger than K, there exist at least K references for v, so that the feasible positions for v can be computed by intersecting the K spheres centered at the reference vertices and having as radii the known distances to v.
When working with the DDGP, one is therefore interested in finding referenced orders where L = K. In this version of revorder, every initial set S ∈ S (those that appear at the beginning of the order) must also form a K-clique. This is necessary because a unique realization (modulo translations and rotations) needs to be identified for the initial K vertices of the graph. This allows us to fix the coordinate system (defined by the realization of the first K vertices) where the rest of the graph realization is eventually constructed. Moreover, when a vertex v has more than K references, additional distance information is associated to v, so that a certain number of vertex positions in the tree can be immediately discarded. In other words, the total number of tree nodes can be reduced a priori when more than L reference vertices are associated to a given vertex v [7]. In fact, a vertex v with more than L = K reference vertices implies that at least K + 1 spheres are involved in the intersection which provides the set of possible positions for v. Since the intersection of K spheres in a K-dimensional space gives, with probability 1, at most two points, the additional sphere helps us in selecting one of these two points, or in showing that the intersection is actually empty [11]. The benefit is evident in this context, since the implication is that there is no branching at the layer associated to v in the search tree. As a consequence, a particular case of min revorder where U = L + 1 has been considered recently in [16] and [21] with the aim of selecting the most promising discretization order. This optimization problem has been called min double, because two positions may have to be enumerated for partially-referenced vertices.
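For intuition in dimension K = 2, the sketch below (our own illustrative code, not the branch-and-prune algorithm of [12]) shows how two reference circles yield at most two candidate positions, and how a third, extra reference selects one of them:

```python
import math

def circle_intersection(c1, r1, c2, r2):
    """Points at distance r1 from center c1 and r2 from center c2."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                              # no intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)       # distance from c1 to the chord
    h = math.sqrt(max(r1**2 - a**2, 0.0))      # half-length of the chord
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d)]

# Two references give (at most) two candidate positions ...
cands = circle_intersection((0, 0), 5, (6, 0), 5)
# ... and a third reference (an extra sphere) keeps only the compatible one.
extra_center, extra_r = (3, 5), 1
pos = [p for p in cands
       if abs(math.hypot(p[0] - extra_center[0],
                         p[1] - extra_center[1]) - extra_r) < 1e-9]
```

Here the two circles meet in the two points (3, 4) and (3, −4), and the extra reference discards the second: no branching is needed at this layer.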

An interdiction problem
Another problem that can be found in the scientific literature and that is related to min revorder is the following interdiction problem. It arises in two-player games where, in the basic version, the two players are allowed only two initial actions [8]. In this game, the graph G represents a network that Player 1 must protect from the possible attacks performed by Player 2. Player 1 plays first, by selecting a set W of vertices to protect: the attacker will never be able to take control over (or influence) such protected nodes. Player 2 can then perform his attack, by choosing a set S of vertices to influence. At this point, no further actions are possible for either player. The influence is then propagated from S to the vertices of V \ W: a non-influenced vertex becomes influenced as soon as it has at least U influenced neighbors. Stated otherwise, we determine the influenced vertices by finding the largest subgraph of G \ W that admits a referenced order starting with S and where the minimum number of references is U.
Suppose now that Player 2 has the possibility to interfere with the propagation process, when necessary. The influence rule, based on the structure of a referenced order, implies that the propagation stops when every non-influenced vertex has fewer than U influenced neighbors. It would be natural to consider that Player 2 can keep spending resources to influence vertices in order to prevent the propagation from stopping, in particular if some vertices are close to being influenced (i.e., their number of influenced neighbors is almost equal to U). This motivates us to consider a variant of the problem where two given values, L and U, are considered in the propagation of influence. From a given set S of initially influenced vertices, a vertex becomes influenced if it has U influenced neighbors; otherwise, it may become influenced if it has at least L influenced neighbors, at the expense of an additional action of Player 2. In this variant, we can remark that, once the initial action of Player 1 has been executed, the solution of min revorder provides the minimum number of actions that Player 2 needs to influence all vertices in V \ W.
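The basic propagation rule can be simulated in a few lines (our own sketch; names and the toy graph are purely illustrative): an unprotected vertex becomes influenced once at least U of its neighbors are influenced, and the process repeats until a fixed point.

```python
# Minimal simulation of the basic influence rule (our own sketch and names).

def spread_influence(adj, protected, seeds, U):
    """Return the final set of influenced vertices, with no extra actions."""
    influenced = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in influenced or v in protected:
                continue
            if sum(1 for u in adj[v] if u in influenced) >= U:
                influenced.add(v)
                changed = True
    return influenced

# Toy network: two triangles sharing the edge {2, 3}.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
```

Seeding {1, 2} with U = 2 influences the whole network, whereas protecting vertex 3 stops the propagation immediately.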

Integer programming formulations
Recall that the methods proposed in the literature for the search of optimal discretization orders can easily be extended to min revorder. In this section, we focus on the IP formulations that were proposed for min double [21, 16]. In [21], two compact formulations have been developed where binary variables indicate whether u is a predecessor of v in the order, for all pairs of vertices u and v. One of the formulations guarantees that the variables describe a vertex ordering by considering transitivity constraints, while the other uses constraints similar to those that appear in the Miller–Tucker–Zemlin formulation of the travelling salesman problem [17]. The authors then show that both formulations are outperformed by a constraint generation procedure where cycle constraints are iteratively added to the model. In [16], Bodur and MacNeil present a third compact formulation, vertexrank, where binary variables indicate whether a vertex v is at rank r in the order, for all v ∈ V and all r ∈ [[1, n]]. They also develop a decomposition scheme, witness, built on the notion of witness, which can be defined as a necessary reference: partially and fully-referenced vertices have exactly L and U witnesses, respectively. The decomposition yields a constraint generation algorithm that is also based on the separation of cycle constraints. One important difference between the models developed in [21] and in [16] is that the former enumerate all the feasible initial cliques and include one extra binary variable per clique, whereas the latter add constraints ensuring that the first vertices form a clique.
In this section, we will focus on the cycle-constrained extended formulation of [21], which appeared to be the best-performing IP approach in our tests. We first generalize this formulation to min revorder, and we develop two sets of valid inequalities based on cliques and cycles in the subgraph of G induced by low-degree vertices. In Section 4, we will see how the initial sets can be analyzed to yield other valid inequalities.
Our computational experiments will also consider vertexrank and witness: the reader is referred to Appendix A for details.

Cycle-constrained extended formulation
Instead of solving an independent IP model for each initial set in S, the IP formulations found in the literature all include the choice of an optimal initial set. The IP model thus describes a referenced order σ with the following three sets of binary variables.
Constraints (1) ensure that x defines an acyclic orientation of G. Constraint (2) states that exactly one set is chosen among S. Constraints (3) ensure that if a vertex is not in the initial set of vertices, then it must have at least L references if partially-referenced, and at least U if fully-referenced.
The above model includes one constraint per directed cycle of G, which can result in an IP with an exponential number of constraints with respect to the cardinality of V. For this reason, the authors of [21] include constraints (1) in a constraint generation algorithm. The algorithm starts by considering in (1) only the cycles whose lengths are at most three. New cycles are then separated by running a depth-first search each time an incumbent containing a cycle is found during the solution process. For more details on the separation algorithm, the reader may refer to [21]. This overall procedure will be referred to as ccg in the remainder.
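The separation step can be sketched as follows (our own illustrative code, not the implementation of [21]): collect the arcs whose variables are set to 1 in the incumbent, and search the resulting digraph for a directed cycle with a gray/black depth-first search.

```python
# DFS-based cycle separation sketch (our own code).

def find_directed_cycle(arcs, vertices):
    """Return the vertex list of one directed cycle, or None if acyclic."""
    succ = {v: [] for v in vertices}
    for u, v in arcs:
        succ[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}
    path = []                                    # current DFS path

    def dfs(u):
        color[u] = GRAY
        path.append(u)
        for v in succ[u]:
            if color[v] == GRAY:                 # back arc: a cycle is closed
                return path[path.index(v):]
            if color[v] == WHITE:
                cycle = dfs(v)
                if cycle is not None:
                    return cycle
        color[u] = BLACK
        path.pop()
        return None

    for v in vertices:
        if color[v] == WHITE:
            cycle = dfs(v)
            if cycle is not None:
                return cycle
    return None
```

Each cycle found this way yields one violated constraint of type (1) to add to the model before re-solving.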

Low degree clique and cycle valid inequalities
The IP solved by ccg can be strengthened if we are able to identify subgraphs of G that will necessarily contain partially-referenced vertices in any referenced order. This will be the case for cliques and cycles that contain vertices with degrees close to U.
We first investigate the cliques of G. We intend to show that for some cliques, there cannot be a referenced order where every vertex is fully-referenced. For this, we can always look at the ideal situation where every vertex of a clique K is among the last |K| vertices of a referenced order. In this case, an optimal ordering of K yields the maximum number of fully-referenced vertices in K. Procedure maxfull(K), whose pseudo-code is given by Algorithm 1, is based on this idea. In the description of the algorithm, vertices of K are iteratively added to a set O of ordered vertices. At each iteration, either one vertex can be ordered so that it has at least U references, or we choose the vertex with the lowest degree. Notice that maxfull(K) represents the number of vertices that can be fully-referenced, and not the set containing such vertices.

Proposition 3. Let K be a clique of G; then at most maxfull(K) vertices of K are fully-referenced in any referenced order of G.
Proof. We show the result by induction on the size of K. If |K| = 1, i.e., K = {v} for some v ∈ V, then maxfull(K) = 1 if d°(v) ≥ U and maxfull(K) = 0 otherwise, which is indeed the maximum number of fully-referenced vertices in {v}. Suppose now that the proposition is true for any clique with size p ≥ 1 and let K be a (p + 1)-clique. Let v ∈ K be the first vertex included in O during the execution of maxfull(K). The execution of maxfull(K) reduces to the inclusion of v in O followed by the execution of maxfull(K \ {v}). As a consequence, we have maxfull(K) = maxfull(K \ {v}) + 1 if d°(v) ≥ U + |K| − 1, and maxfull(K) = maxfull(K \ {v}) otherwise.

From the induction hypothesis, no more than maxfull(K \ {v}) vertices can be fully-referenced in K \ {v}. Since K contains one more vertex than K \ {v}, no more than maxfull(K \ {v}) + 1 vertices can be fully-referenced in K, so we only need to study the case where max{d°(u) : u ∈ K} < U + |K| − 1.

Assume that max{d°(u) : u ∈ K} < U + |K| − 1. Let σ be a referenced order and w be the vertex of K with smallest rank in σ. Since w has no reference among the other vertices of K, it has at most d°(w) − |K| + 1 < U references, so it is not fully-referenced.

The induction hypothesis thus yields that σ has at most maxfull(K \ {w}) fully-referenced vertices among K. Moreover, by design of Algorithm 1, we know that v ∈ argmin{d°(u) : u ∈ K}, so d°(w) ≥ d°(v). As a consequence, one can verify that maxfull(K \ {w}) ≤ maxfull(K \ {v}). Hence, σ has at most maxfull(K \ {v}) = maxfull(K) fully-referenced vertices among K.
We point out that, for a p-clique K, Proposition 3 implies that if max{d°(v) : v ∈ K} ≤ U + p − 2, then at most p − 1 vertices of K can be fully-referenced in any referenced order.
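Our reading of Algorithm 1 can be sketched as follows; the test d°(u) ≥ U + |K| − 1 − i at iteration i is our reconstruction of the condition "one vertex can be ordered so that it has at least U references", since a vertex of the clique placed after i others can collect at most d°(u) − (|K| − 1 − i) references:

```python
def maxfull(degrees, K, U):
    """Sketch of maxfull(K): order the clique K greedily and count how many
    vertices can receive at least U references in the ideal situation where
    K occupies the last |K| ranks.  degrees[v] is the degree of v in G."""
    remaining = sorted(K, key=lambda v: degrees[v])   # ascending degree
    k, count = len(K), 0
    for placed in range(k):
        need = U + k - 1 - placed     # degree required to be fully referenced
        idx = next((i for i, v in enumerate(remaining) if degrees[v] >= need), None)
        if idx is not None:           # some vertex can be ordered with >= U refs
            remaining.pop(idx)
            count += 1
        else:                         # otherwise drop the lowest-degree vertex
            remaining.pop(0)
    return count
```

For instance, an isolated triangle with U = 2 admits a single fully-referenced vertex (the last one ordered), while raising one vertex's degree to 4 allows two.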
A given vertex is fully-referenced only when it is neither partially-referenced nor in the initial set of vertices. So, based on Proposition 3, for any clique K of G, we derive a valid inequality expressing that at most maxfull(K) vertices of K are neither partially-referenced nor initial. We wish to extend the above clique cuts to cycles containing low-degree vertices. Let VC be the vertices of a cycle C. Similarly to Proposition 3, we would like to start by deriving the maximum number of fully-referenced vertices among VC. However, the subgraph induced by VC can be any subgraph of G, provided that a cycle covers all its elements. In particular, if L ≥ 2, we can show by induction on the number of edges that if G has a referenced order, then there is a (not necessarily elementary) cycle going through all its edges. So, the exact computation of the maximum number of fully-referenced vertices among VC is as hard as min revorder if L ≥ 2. Instead of deriving a procedure similar to Algorithm 1 for cycles, we thus show below that, under a condition on the degrees of the vertices, it is guaranteed that at least one vertex will not be fully-referenced. This also allows us to focus on a potentially much smaller set of cycles.
Proposition 4. Let C be a cycle of G such that d°(v) ≤ U + 1 for every v ∈ VC. Then any referenced order of G has at most |C| − 1 fully-referenced vertices among those of VC.
Proof. Let σ be a referenced order of G. By definition of a cycle, every vertex of C has at least two neighbors in C. The first vertex of C in σ cannot have any reference among the other vertices of C. So, if it has at most U + 1 neighbors, it has at most U − 1 references in σ, meaning that it is not fully-referenced.
As a consequence, for any cycle C whose vertices all have degree at most U + 1, we can add a valid inequality expressing that at least one vertex of VC is either partially-referenced or in the initial set.

A new enumeration scheme for min revorder
For a more concise and insightful description of the proposed solution algorithm, we will start in Section 4.1 with a series of definitions related to the iterative construction of a referenced order. Then, in Section 4.2, we will briefly present an adaptation to revorder of the greedy algorithm initially proposed in [11] for dvop. The basic idea behind our enumeration scheme will be detailed in Section 4.3, while Section 4.4 will describe how it can be used to solve min revorder.

Preliminary definitions
Given an incomplete referenced order σ, defined only on a subset W ⊆ V of ordered vertices, we denote the preimage W of σ as PreIm(σ) and the number |W| of ordered vertices in σ as |σ|.
We then extend the definition of a "reference" so that it also makes sense for an incomplete order.

Definition 6 (References in incomplete orders). Let {u, v} ∈ E and σ be an incomplete referenced order of G. Then u is a reference of v in σ if and only if u ∈ PreIm(σ) and, whenever v ∈ PreIm(σ), σ(u) < σ(v).

Similarly to a referenced order, we denote as Rσ(v) the set of references of v in σ. Finally, the objective value of σ is given by ∆(σ) := Σv∈PreIm(σ)\S δσ(v).

Definition 7 (Extensions and candidates).
Let σ and σ′ be two incomplete referenced orders of G. We say that σ′ is an extension of σ if it starts like σ, i.e., σ′(v) = σ(v) for every v ∈ PreIm(σ). We then say that we extend σ with vertex v ∉ PreIm(σ) if we assign the next available rank to v. More formally, σ′ is the extension of σ with v if and only if σ′ starts like σ, σ′(v) = |σ| + 1 and |σ′| = |σ| + 1.

We denote as [σ|v] the extension of σ with v.
Vertex v is a valid candidate for the extension of σ if and only if v ∉ PreIm(σ) and |Rσ(v)| ≥ L. A valid candidate v is called a full candidate if |Rσ(v)| ≥ U, and a partial candidate otherwise. The sets of partial and full candidates are respectively denoted as Partial(σ) and Full(σ).
Since the vertices of an initial set S ∈ S can be ordered in any way with no impact on the completion of the order, we will simply refer to completions of S to talk about completions of any ordering of S. We say that S is feasible if it admits a completion and infeasible otherwise. In the remainder of the article, we discuss methods that start with some initial set in S and incrementally extend it until a completion is built. At every step of the procedures, we consider only valid candidates for the extension. Hence, when extending an incomplete order with a vertex v, it will be implicit that v is a valid candidate unless explicitly stated.
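Assuming a dict-of-sets adjacency structure and an incomplete order stored as a dict from ordered vertices to ranks (our own conventions, purely illustrative), extension and candidate classification can be sketched as:

```python
# Illustrative sketch (our own conventions): a graph is a dict of neighbor
# sets, an incomplete order is a dict mapping ordered vertices to their rank.

def extend(sigma, v):
    """[sigma|v]: assign the next available rank to vertex v."""
    new = dict(sigma)
    new[v] = len(sigma) + 1
    return new

def candidates(adj, sigma, L, U):
    """Partial(sigma) and Full(sigma): valid candidates with
    L <= |R_sigma(v)| < U, respectively |R_sigma(v)| >= U."""
    partial, full = set(), set()
    for v in adj:
        if v in sigma:
            continue
        nrefs = sum(1 for u in adj[v] if u in sigma)  # ordered neighbors of v
        if nrefs >= U:
            full.add(v)
        elif nrefs >= L:
            partial.add(v)
    return partial, full

adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
```

On this toy graph, after ordering 1 and 2 with L = 1 and U = 2, vertex 3 is a partial candidate and vertex 4 a full candidate.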

Greedy search for a referenced order
As already mentioned in Section 2.1, a greedy algorithm was initially proposed for finding discretization orders in the context of distance geometry [11]. The algorithm constructs the order starting from a given initial clique in O(n log(n) + m) time, where m = |E|. A certificate of non-existence is delivered when no discretization order starting with this clique exists.
The close relationship between discretization and referenced orders makes it easy to adapt this greedy algorithm for revorder. We give the pseudo-code of this greedy search in Algorithm 2.
input: An initial set S ∈ S
output: A completion of S, or a proof that none exists

Notice that the initial clique S is preselected, so that a complete enumeration of the solution set can be achieved only by invoking the greedy algorithm for every possible initial clique [7]. Moreover, the key choice in the greedy algorithm design is that it always chooses the new candidate vertex with the largest number of references (see step 3 of Algorithm 2). The clique preselection, together with the way new candidate vertices are selected, can lead the greedy algorithm to find suboptimal solutions [21]. We thus develop a new enumeration scheme that explores every relevant choice when selecting the next vertex candidates (instead of making one arbitrary decision).
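A condensed sketch of the greedy search of Algorithm 2 (our own code, with arbitrary tie-breaking; the graph and order representations are the same illustrative dict-based conventions as above):

```python
# Greedy search sketch: always extend with the valid candidate that has the
# largest number of references; stop with None when no valid candidate exists.

def greedy_completion(adj, S, L):
    """Return a completion of the initial set S, or None when no
    referenced order of the graph starts with S."""
    sigma = {v: i + 1 for i, v in enumerate(S)}
    while len(sigma) < len(adj):
        best, best_refs = None, -1
        for v in adj:
            if v in sigma:
                continue
            nrefs = sum(1 for u in adj[v] if u in sigma)
            if nrefs >= L and nrefs > best_refs:
                best, best_refs = v, nrefs
        if best is None:
            return None   # no valid candidate left: no referenced order starts with S
        sigma[best] = len(sigma) + 1
    return sigma

adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
path = {1: {2}, 2: {1, 3}, 3: {2}}
```

On the toy graph `adj` with S = {1} and L = 1 the greedy search succeeds, while on the path graph with L = 2 it correctly reports infeasibility.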

The enumeration scheme
In this section, we study in deeper detail the properties of valid and full candidates during the construction of a referenced order. These properties will form the basis of the different components of our enumeration scheme.

Propagation of full candidates
As a preamble, we first point out that, when building a referenced order by successive extensions from a given initial set, it is always best to choose a full candidate when one exists. Indeed, a full candidate does not increase the objective once added to the order, whereas it can only increase the number of references of its non-ordered neighbors. What is more, when more than one full candidate is available, then their relative ordering in the referenced order is irrelevant. The above two observations can be formalized with the following proposition.

Proposition 9.
Let σ be an incomplete referenced order such that there is a full candidate u, i.e., u ∉ PreIm(σ) and |Rσ(u)| ≥ U. Then ∆*([σ|u]) = ∆*(σ).

Proof. Let k = |σ| and let σV be an optimal completion of σ (hence ∆(σV) = ∆*(σ)). We get a completion of [σ|u] by switching u and σV⁻¹(k + 1) in σV. Denoting this completion as σ′V, we get ∆(σ′V) ≤ ∆(σV). Indeed, u is a full candidate by assumption, so it remains fully-referenced at rank k + 1, while the vertex moved to the former rank of u can only gain references. Given that σ′V is a completion of [σ|u], we obtain ∆*([σ|u]) ≤ ∆(σ′V) ≤ ∆*(σ); the converse inequality holds because any completion of [σ|u] is also a completion of σ.

Let σ be an incomplete referenced order. Based on Proposition 9, we consider a particular extension of σ that sequentially selects full candidates until there is none. We will call such an extension the propagation of full candidates in σ and denote it as Π(σ). Algorithm 3 gives the pseudo-code of function propagate(σ), which executes this operation.

Algorithm 3: Propagation of an incomplete referenced order
The recursive application of Proposition 9 immediately shows that full candidates can always be propagated without loss in the objective function value, which yields the following fundamental motivation for an enumeration scheme based on the greedy search.

Proposition 10. For any incomplete referenced order σ, ∆*(Π(σ)) = ∆*(σ).

Indeed, Proposition 10 implies that suboptimal extensions of an incomplete order can only be made if there is no full candidate.
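The propagation performed by Algorithm 3 reduces to repeatedly extending the order with any full candidate until none remains; a sketch, using our own illustrative dict-based representation:

```python
# Sketch of the propagation Pi(sigma) of Algorithm 3 (our own representation:
# dict-of-sets graph, incomplete order as a dict from vertices to ranks).

def propagate_full(adj, sigma, U):
    """Extend sigma with full candidates (at least U ordered neighbors)
    until none remains, and return the extended order."""
    sigma = dict(sigma)
    while True:
        full = next((v for v in adj
                     if v not in sigma
                     and sum(1 for u in adj[v] if u in sigma) >= U), None)
        if full is None:
            return sigma
        sigma[full] = len(sigma) + 1

adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
```

On the toy graph, propagating the incomplete order (1, 2) with U = 2 orders vertex 4 and then vertex 3, both fully-referenced.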

Enumeration of partial candidates
Algorithm 4 describes a procedure that enumerates completions of a feasible initial set S ∈ S. It builds an enumeration tree where every node corresponds to an incomplete referenced order, and stores the pending nodes in Q. The root node is given by the propagation of the set S. Then, for each partial candidate v of an incomplete referenced order σ, one new branch is created. The child node corresponding to v is given by propagate([σ|v]). The propagation preceding every inclusion of σ in the list of pending nodes guarantees that every pending node corresponds to an incomplete referenced order without any full candidate. In the remainder, the node corresponding to the referenced order σ will simply be called enumeration node σ.

Theorem 11. Let S ∈ S be a feasible initial set. Then Algorithm 4 computes an optimal completion of S.
Proof. Using the notations of Algorithm 4, we show that at the start of any iteration of the while loop, min{UB, min τ∈Q ∆*(τ)} = ∆*(S).
At the first iteration, σ = Π(S), so Proposition 10 guarantees that ∆ * (σ) = ∆ * (S). So assume that the property remains true at the start of a given iteration, and denote as σ the element of Q selected at step 5.
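Putting the pieces together, the enumeration can be sketched as a stack-based search that propagates full candidates at every node and branches on each partial candidate (our own condensed code; the paper's Algorithm 4 additionally maintains upper and lower bounds for pruning):

```python
# Condensed enumeration sketch (our own code, no bounding or dominance rules).

def enumerate_min(adj, S, L, U):
    """Branch on partial candidates (L <= refs < U) after propagating full
    candidates (refs >= U); return (best cost, best complete order)."""
    n = len(adj)

    def refs(sigma, v):
        return sum(1 for u in adj[v] if u in sigma)

    def propagate(sigma):
        sigma = dict(sigma)
        while True:
            v = next((w for w in adj if w not in sigma and refs(sigma, w) >= U), None)
            if v is None:
                return sigma
            sigma[v] = len(sigma) + 1

    best_cost, best_order = n + 1, None
    stack = [propagate({v: i + 1 for i, v in enumerate(S)})]
    while stack:
        sigma = stack.pop()
        if len(sigma) == n:   # complete order: count partially-referenced vertices
            c = sum(1 for v in adj if v not in S
                    and sum(1 for u in adj[v] if sigma[u] < sigma[v]) < U)
            if c < best_cost:
                best_cost, best_order = c, sigma
            continue
        for v in adj:
            if v not in sigma and L <= refs(sigma, v) < U:
                child = dict(sigma)
                child[v] = len(sigma) + 1
                stack.append(propagate(child))
    return best_cost, best_order

adj = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
```

On the toy graph with L = 1 and U = 2, the initial set {1, 2} propagates directly to a completion of cost 0, while starting from {1} alone forces one partially-referenced vertex.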

Speeding-up the enumeration algorithm
The main benefit of the enumeration given in Algorithm 4 is that it allows us to focus on the partial candidates when selecting the next vertex in the order. Here, we show that some additional properties of the graph G may be used to further reduce the enumeration to a subset of the partial candidates.

Algorithm 4: Enumeration algorithm for min revorder (input: a feasible initial set S ∈ S; output: an optimal completion of S).
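The enumeration of Algorithm 4 can be sketched as follows. This is a minimal, self-contained Python sketch without the pruning techniques introduced later for Algorithm 5; the cost of a node counts one unit for each branching on a partial candidate, and all names are illustrative.

```python
import heapq

def enumerate_completions(adj, S, L, U):
    """Branch on partial candidates (L <= placed refs < U), propagate full
    candidates after every branching, and return a minimum-cost completion.
    The cost is the number of partially-referenced vertices outside S."""
    n = len(adj)

    def refs(placed, u):
        return sum(1 for w in adj[u] if w in placed)

    def propagate(order):
        order, placed = list(order), set(order)
        while True:
            full = [u for u in adj if u not in placed and refs(placed, u) >= U]
            if not full:
                return order
            order.append(full[0])
            placed.add(full[0])

    best, best_cost = None, float("inf")
    queue = [(0, propagate(S))]          # (cost, incomplete order)
    while queue:
        cost, order = heapq.heappop(queue)
        if len(order) == n:              # complete referenced order reached
            if cost < best_cost:
                best, best_cost = order, cost
            continue
        placed = set(order)
        partial = [u for u in adj if u not in placed and refs(placed, u) >= L]
        for v in partial:                # each branching costs one unit
            heapq.heappush(queue, (cost + 1, propagate(order + [v])))
    return best, best_cost
```

Branches that run out of candidates before completing the order simply die out, which corresponds to infeasible extensions.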

Dominance rules

Definition 12. Let σ and τ be two incomplete referenced orders. We say that σ dominates τ if and only if PreIm(τ) ⊆ PreIm(σ) and ∆*(σ) ≤ ∆*(τ).
Let τ be a pending node and σ be either a pending node or a treated node on a different branch than τ . This definition of dominance implies that if σ dominates τ , then τ can be pruned from the tree because σ will be completed into a referenced order with smaller or equal cost. This definition cannot be used in practice though, because it requires the computation of ∆ * (σ) and ∆ * (τ ). We propose a practical sufficient condition below.
Proposition 13 (Basic dominance rule). Let σ and τ be two incomplete referenced orders such that PreIm(τ) ⊆ PreIm(σ) and ∆(σ) ≤ ∆(τ). Then σ dominates τ.

Proof. Let τ_V be an optimal completion of τ. Starting with σ, we can construct a vertex order σ_V of G by adding the vertices of V \ PreIm(σ) after those of PreIm(σ), in the order in which they appear in τ_V. Using that PreIm(τ) ⊆ PreIm(σ), every vertex added this way has at least as many references in σ_V as in τ_V. This yields that σ_V is a referenced order and ∆(σ_V) ≤ ∆(σ) + ∆(τ_V) − ∆(τ) ≤ ∆(τ_V) = ∆*(τ), so ∆*(σ) ≤ ∆*(τ).

We first examine how the above rule applies to dominance among child nodes of a given pending node.
Proposition 14 (Child nodes dominance rules). Let σ be an incomplete order without full candidate such that there are u, v ∈ Partial(σ). If u ∈ PreIm(Π([σ|v])), then Π([σ|v]) dominates Π([σ|u]).

Proof. Vertices u and v are both partial candidates, so by Proposition 10, we know that ∆(Π([σ|u])) = ∆(Π([σ|v])) = ∆(σ) + 1. Moreover, if u ∈ PreIm(Π([σ|v])), then any full candidate added to the order during the propagation of [σ|u] will also be added during that of [σ|v], so PreIm(Π([σ|u])) ⊆ PreIm(Π([σ|v])), and the result follows from Proposition 13.

Based on the above two propositions, each new enumeration node is compared with two sets of nodes. From Proposition 14, we can see that the verification of dominance among child nodes has a rather low computational cost. Hence, when branching, we start by applying the child nodes dominance rules to compare child nodes with each other. Then, using the basic dominance rule, the search for dominance can be extended as follows. Let T be the set of all treated nodes that have not been pruned yet, and denote by T(z) the nodes of T with cost z. For some given value ∆_z, a new node σ is compared only with the nodes of T(z) for z ∈ {∆(σ) − ∆_z, . . . , ∆(σ)}. We make this restriction because the dominance criterion of Proposition 13 is transitive, so dominance by a node with a much smaller cost is unlikely if dominance by a node with a nearby cost does not occur. Moreover, in practice we set ∆_z to a small value (typically, ∆_z = 1), so this restriction saves a significant amount of computational time.
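The basic dominance test, restricted to treated nodes of nearby cost, can be sketched as follows. In this illustrative Python sketch (not the authors' implementation), each enumeration node is reduced to the pair formed by its set of placed vertices and its cost.

```python
def dominates(sigma, tau):
    """Sufficient condition of Proposition 13: sigma dominates tau when
    sigma has placed every vertex of tau at a cost that is no larger.
    A node is represented as a pair (placed_vertices, cost)."""
    placed_s, cost_s = sigma
    placed_t, cost_t = tau
    return placed_t <= placed_s and cost_s <= cost_t

def is_dominated(new_node, treated_by_cost, delta_z=1):
    """Compare a new node only with treated nodes whose cost lies within
    delta_z of its own, which limits the overhead of the test."""
    placed, cost = new_node
    for z in range(cost - delta_z, cost + 1):
        for node in treated_by_cost.get(z, []):
            if dominates(node, new_node):
                return True
    return False
```

Keeping the treated nodes indexed by cost (the dictionary `treated_by_cost` above) is what makes the ∆_z restriction cheap to apply.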
Another case of dominance can be detected by comparing the neighborhoods of partial candidates.

Proposition 15 (Neighborhood dominance rule). Let σ be an incomplete referenced order such that there exist u, v ∈ Partial(σ) with N(v) \ PreIm(σ) ⊆ N(u) \ PreIm(σ). Then ∆*([σ|u]) ≤ ∆*([σ|v]), so that the branch associated with v can be pruned.

Proof. Let σ_V be any completion of [σ|v] and build σ'_V as the completion of [σ|u] obtained by swapping the ranks of u and v in σ_V. Since every neighbor of v outside PreIm(σ) is also a neighbor of u, no vertex has fewer references in σ'_V than in σ_V, so ∆(σ'_V) ≤ ∆(σ_V); taking σ_V optimal yields the result.

The above dominance rules are all used in the final version of the enumeration algorithm, as described in steps 11-20 of Algorithm 5.

Breaking symmetries
Another direction for improvement is the identification of symmetric referenced orders. The basic idea is to avoid building several referenced orders where the list of partially-referenced vertices is the same. For this, we arbitrarily index the vertices of V and write u < v if the index of u ∈ V is smaller than that of v ∈ V. We wish to break symmetry by constructing referenced orders σ such that a vertex v is partially-referenced in σ only if every other partially-referenced vertex u with u < v either has a smaller rank than v in σ, i.e., σ(u) < σ(v), or has less than L references with rank smaller than σ(v).
The second condition implies that we cannot obtain a valid referenced order by simply swapping the ranks of u and v in σ. In the context of Algorithm 4, the above condition is equivalent to requiring that, if u and v are two partial candidates for some incomplete order σ such that u < v, then u should be removed from the list of partial candidates when extending σ with v. Therefore, in the corresponding branch of the enumeration tree, it will be possible to extend σ with u only once u is fully-referenced. In the remainder, this selection rule will be referred to as the index priority rule. As a first step, we show that there exists a solution to min revorder that satisfies this rule.

Proposition 16 (Index priority rule). Let τ be a referenced order. Then, there is a referenced order σ such that every vertex that is partially-referenced in σ is also partially-referenced in τ (hence ∆(σ) ≤ ∆(τ)), and for all u, v ∈ V with u < v such that δ_σ(u) = δ_σ(v) = 1, either σ(u) < σ(v) or u has less than L references with rank smaller than σ(v).

Proof. For any referenced order, there is a referenced order with smaller or equal cost that can be constructed by the enumeration algorithm sketched in Algorithm 4. Therefore, without loss of generality, we can consider a referenced order τ where the vertex at a given rank is partially-referenced if and only if there is no full candidate for this rank. Let v ∈ V be the partially-referenced vertex with smallest rank in τ, and consider P_τ(v) = {u ∈ V : |{w ∈ N(u) : τ(w) < τ(v)}| ≥ L}. The set P_τ(v) contains all the (partial) candidates for rank τ(v). Assume that v does not satisfy the index priority rule. This means that the vertex u with smallest index in P_τ(v) is such that u < v. Then let σ be the vertex order obtained by inserting u at the rank of v in τ. Then, u is partially-referenced in both τ and σ, and every other vertex has at least as many references in σ as in τ. As a consequence, every partially-referenced vertex of σ is also partially-referenced in τ, and the index priority rule is satisfied at least up to rank τ(v). The result thus follows by induction on the minimum rank of a vertex that does not meet the index priority rule.
We enforce the index priority rule in the enumeration algorithm by introducing one new set of candidates, Fixed(σ), for each incomplete order σ. A vertex in Fixed(σ) can be chosen to extend an order only if it has U or more references. In the enumeration algorithm, this means that those vertices can extend an order only during propagation. We then modify the branching rules by excluding fixed candidates from the list of valid candidates. When creating a node by extending some order σ with a partial candidate u, we set as fixed every other partial candidate v such that v < u. The corresponding modification appears at step 8 of Algorithm 5, and the identification of fixed vertices is detailed at step 23.
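The branching rule with fixed candidates can be sketched as follows. This is an illustrative Python sketch (names are not from the article's implementation): for each child node, it returns the candidate branched on together with the set of candidates that become fixed in that child.

```python
def branch_candidates(partial, fixed):
    """Index priority rule: when branching on partial candidate u, every
    other partial candidate with a smaller index becomes 'fixed' in the
    child node and may re-enter the order only once fully referenced."""
    children = []
    for u in sorted(partial):
        if u in fixed:
            continue                    # fixed candidates are not branched on
        newly_fixed = fixed | {v for v in partial if v < u}
        children.append((u, newly_fixed))
    return children
```

Note that the child created for the largest-index candidate fixes all smaller-index partial candidates, so symmetric subtrees exploring the same set of partially-referenced vertices in a different order are not built.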

Bound pruning
In addition to the above pruning rules, we use the traditional bound pruning of branch-and-bound algorithms. For this, we start the algorithm with the upper bound provided by running the greedy completion algorithm from every initial set of S. The upper bound, UB, will then be updated every time an improving complete order is found during the enumeration. A lower bound LB(σ) is then computed for each incomplete order σ in the pending nodes queue Q, and the order is pruned if LB(σ) ≥ UB.
In our implementation, we have tried two different methods for computing the lower bound. One trivial lower bound at node σ is given by ∆(σ) + 1, where we add one to the cost of σ because the next vertex in the order is necessarily partially-referenced. We slightly improve this bound by observing that a vertex with degree smaller than U will necessarily be partially-referenced. Likewise, if two vertices with exactly U neighbors are linked by an edge, then at least one of them will be partially-referenced. The resulting lower bound is denoted as LB trivial (σ).
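The trivial bound just described can be sketched as follows. This Python sketch uses illustrative names; combining the edge observation over possibly overlapping edges is done here with a greedy vertex-disjoint matching, which is one sound way to keep the bound valid.

```python
def lb_trivial(adj, placed, cost, U):
    """Trivial lower bound (sketch): every unplaced vertex of degree < U
    must end up partially-referenced; for each vertex-disjoint edge between
    two unplaced vertices of degree exactly U, at least one endpoint must
    as well; failing both, the next vertex still costs one unit."""
    unplaced = [v for v in adj if v not in placed]
    low = [v for v in unplaced if len(adj[v]) < U]
    tight = {v for v in unplaced if len(adj[v]) == U}
    matched, used = 0, set()
    for v in sorted(tight):             # greedy matching on tight vertices
        if v in used:
            continue
        for w in sorted(adj[v] & tight):
            if w not in used:
                used |= {v, w}
                matched += 1
                break
    return cost + max(1, len(low) + matched)
```

The low-degree vertices and the matched edges involve disjoint vertex sets (degree < U versus degree = U), so their contributions can be added safely.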
In order to compute a better lower bound, we can solve the linear relaxation of any IP formulation of the problem. As already remarked above, IP ccg includes every cycle constraint, but we can start the cycle-cut generation with 2- and 3-cycles only. Since this initial model is a relaxation of min revorder, so is its linear relaxation. For a given incomplete order σ, we can then compute a lower bound, denoted as LB ccg (σ), by fixing all the variables corresponding to PreIm(σ) and by solving the linear relaxation of the initial relaxation of IP ccg.
The overall branch-and-bound algorithm is summarized in Algorithm 5. We can deduce from the above discussions that it yields the optimal completion of the input initial set S, or it proves that no completion of S can improve the input upper bound UB. In the algorithm, the method chosen for computing the lower bound of some incomplete order σ is abstracted by a call to function lowerbound(σ) that returns either LB trivial (σ) or LB ccg (σ), depending on the chosen option. This will be discussed further in Section 6.

Preprocessing the initial set
Before any call to Algorithm 5, the set of initial sets is preprocessed in order to delete as many of its entries as possible. For this, we propagate each set of S and compare the results for dominance. We then run the greedy algorithm from every non-dominated initial set to obtain an initial upper bound. While doing so, we also obtain the list of infeasible initial sets. In the end, we execute Algorithm 5 only with feasible and non-dominated initial sets. This preprocessing step is similarly performed before the execution of ccg to reduce the number of initial sets, but we also use it to compute valid inequalities based on the propagation of each initial set. Indeed, if we consider S ∈ S and Π(S) := propagate(S), we know that no vertex among those of PreIm(Π(S)) \ S will be partially-referenced if S is chosen as an initial set. Moreover, there will necessarily be one partially-referenced vertex among the partial candidates of Π(S). As a consequence, we add the two corresponding sets of valid inequalities to IP ccg.
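The dominance part of this preprocessing can be sketched as follows. In this illustrative Python sketch (the greedy upper-bound computation and infeasibility detection are omitted), every node has cost zero at the root, so the dominance test of Proposition 13 reduces to inclusion between propagated sets.

```python
def preprocess_initial_sets(adj, initial_sets, U):
    """Propagate every initial set, keep one representative per distinct
    propagation, and drop any set whose propagation is strictly contained
    in the propagation of another initial set."""
    def propagate(S):
        placed = set(S)
        grew = True
        while grew:
            grew = False
            for u in adj:
                if u not in placed and sum(w in placed for w in adj[u]) >= U:
                    placed.add(u)
                    grew = True
        return frozenset(placed)

    unique = {}
    for S in initial_sets:
        unique.setdefault(propagate(S), set(S))   # first set found is kept
    return [S for P, S in unique.items()
            if not any(P < Q for Q in unique)]
```

On instances where S is the set of (L + 1)-cliques, many cliques propagate to the same vertex set, so this step can shrink the number of root nodes substantially.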

Computational experiments
We have tested the branch-and-bound framework given in Algorithm 5 on five different families of instances (Random, Synthetic, Protein, SensorNetwork and Interdiction), and we compared it with the other methods discussed in this article. For reproducibility of our results and more efficiency in future research on the topic, our implementation of Algorithm 5 is publicly available on GitLab. The same repository also contains the instances used in our computational experiments, together with the bash scripts that run the tests.

Sets of test instances
The Random and Synthetic sets contain randomly generated instances, obtained as described in [16]. In particular, Random contains randomly generated instances with no specific patterns, while Synthetic contains the so-called "synthetic" instances, where, starting from a randomly generated instance, edges are removed in order to build the sparsest graph that still admits a referenced order; an additional 0.15n or 0.20n edges are then added as noise. Notice that we use the same instances as those that appear in the experiments published in [16], which were made publicly available by the authors. The details about the instances in Random and Synthetic are summarized in Table 1. In all our tables, we also include the values of L, which are constant in all corresponding experiments. Moreover, S is given by the set of (L + 1)-cliques of the graph.

The Protein set contains the protein instances that were already used in previous publications on distance geometry for protein structure determination (see [11] for the details about the instance generation procedure from known protein conformations). In order to increase the difficulty in finding solutions, and as already proposed in [16] and [21], we have removed a subset of edges from the original graphs representing those protein instances. In order to select the number of edges in the graph, we first observe that the minimum number of edges that can be kept in a feasible instance is m = L(n − (L + 1)/2) ≈ L × n. It was observed in [21] that, for U = L + 1, the hardest instances are those where the ratio m/n is closest to 3.6. In our experiments, we generated instances where m/n = 3.2, 3.6, 4.0. Only the first 60 atoms of the original instances are taken into consideration, so that all the instances in this set have 60 vertices. Table 2 gives more details about those protein instances.
Another important and traditional application of distance geometry is the so-called Sensor Network Localization Problem [2,5]. The SensorNetwork set consists of randomly generated instances that resemble sensor networks in dimension 2. We implemented the following procedure for the generation of the instances used in our experiments. First of all, we select a 2D region with a predetermined shape (ranging from a square to a rectangle having one side much longer than the other), and then we generate a predetermined number of random positions in this region. The SensorNetwork instances are represented by graphs where the vertices correspond to the generated positions, and an edge is added between two vertices when their relative distance is smaller than a given threshold T. This rule simulates the fact that two related sensors must lie in each other's range of communication.
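The generation procedure just described can be sketched as follows. This is an illustrative Python sketch (parameter names are ours, not the article's), generating uniform random positions in a w-by-h rectangle and linking those within range T.

```python
import random
import math

def sensor_network_instance(n, w, h, T, seed=0):
    """Random sensor-network graph: n points uniform in a w-by-h rectangle,
    with an edge whenever the Euclidean distance is below the range T."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, w), rng.uniform(0, h)) for _ in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < T}
    return pts, edges
```

Shrinking h at constant w reproduces the elongated regions discussed below, where border sensors have fewer neighbors.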
We remark that different shapes for the 2D regions containing the sensors imply the generation of instances having specific properties in the context of distance geometry. The neighbors of a given sensor are the other sensors lying in the circle centered at its position with radius T. As a consequence, the surface of the intersection between this circle and the 2D region provides an estimate of the expected number of neighbors for each sensor. When the 2D region has a rectangular shape with constant width and decreasing height, all sensors, and in particular those lying near the borders, are likely to have fewer and fewer neighbors. This in turn impacts the number of suitable vertex orders. Table 3 gives some details about the SensorNetwork instances: we indicate with the two letters h and w the height and width, respectively, of the underlying 2D region in each sensor network. All networks are composed of 60 sensors.
The Interdiction set is related to instances of the considered variant of the interdiction problem (see Section 2.2). The procedure that we use for the generation of the instances is a simple adaptation to undirected graphs of the procedure described in [8] in the context of the interdiction problem, and previously proposed by Chung and Lu in [4]. The required total number of vertices is given in input, together with a probability p ∈ [0, 1]. Starting with a graph containing only one vertex, the procedure iterates and executes one of the following two steps until the graph contains the desired number of vertices:
1. with probability p, add one new vertex to the graph and add one edge between this new vertex and another randomly selected vertex already present in the graph;
2. with probability 1 − p, choose two vertices already in the graph and add an edge between them.
We generate instances with 60 vertices and use a probability p ranging from 0.1 to 0.9. Table 4 gives some additional details about these instances.
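The two-step growth procedure can be sketched as follows. This illustrative Python sketch stores edges in a set (so repeated edge draws are deduplicated) and forces step 1 while the graph is too small for step 2, both of which are our own implementation choices.

```python
import random

def interdiction_instance(n, p, seed=0):
    """Growth procedure adapted from Chung and Lu: with probability p, add
    a new vertex linked to a random existing vertex; otherwise, add an edge
    between two distinct existing vertices; stop at n vertices."""
    rng = random.Random(seed)
    vertices, edges = [0], set()
    while len(vertices) < n:
        if rng.random() < p or len(vertices) < 2:
            v = len(vertices)                    # new vertex
            u = rng.choice(vertices)
            edges.add((min(u, v), max(u, v)))
            vertices.append(v)
        else:
            u, v = rng.sample(vertices, 2)       # two distinct old vertices
            edges.add((min(u, v), max(u, v)))
    return vertices, edges
```

Small values of p yield denser graphs (many iterations only add edges), while values close to 1 yield tree-like graphs.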

Comparison of our branch-and-bound against existing approaches
The experimental assessment of the capability of our branch-and-bound algorithm is made through a comparison with the three IP-based approaches ccg, witness and vertexrank, and with the solution of a constraint programming model cp, introduced by [16]. A detailed description of the last three approaches is given in Appendix A. We chose them because they represent the best existing solution methods for min revorder. In particular, we do not report the results of the two compact formulations developed by [21], because their numerical experiments showed that ccg significantly outperformed these formulations in all their tests. For each method, we solve all the instances described in the previous section with three different values of U chosen so that U − L = 1, 2, 3. As a result, this benchmark includes 180 instances. Every experiment is run on a workstation equipped with an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz and 16 GB of RAM. Every IP and constraint programming formulation is solved with CPLEX 12.9. The time limit is set to 1000 seconds, and, for a fair comparison, every run is done on a single thread. In this section, the reader can find the corresponding performance profiles: developed by Dolan and Moré [6], performance profiles scale the execution times of the compared algorithms with respect to the best solver on each instance. For instance, Figure 1 shows that bb performs best for about 75% of the datasets and solves more than 95% of the instances within a factor of 5 of the best solver's time, whereas ccg is at least 5 times slower than the best performing algorithm for more than 50% of the datasets. All our performance profiles were constructed with a dedicated Julia package. For more details about the computational results, the reader can refer to Appendix B.
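The performance-profile computation of Dolan and Moré [6] can be sketched as follows. This is an illustrative Python sketch (the article uses a dedicated Julia package); failed runs can be encoded as infinite times.

```python
def performance_profile(times, taus):
    """Dolan-Moré performance profile: for each solver, the fraction of
    instances solved within a factor tau of the best time on that instance.
    times[s][i] is the run time of solver s on instance i (inf = failure)."""
    solvers = list(times)
    n = len(next(iter(times.values())))
    best = [min(times[s][i] for s in solvers) for i in range(n)]
    return {s: [sum(times[s][i] <= tau * best[i] for i in range(n)) / n
                for tau in taus]
            for s in solvers}
```

The value at tau = 1 is the fraction of instances on which a solver is the fastest, and the limit for large tau is its overall success rate.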
Our implementation of the branch-and-bound algorithm corresponds to the sketch given in Algorithm 5, where lowerbound(σ) returns the trivial lower bound, LB trivial (σ) (see Section 5.3). The value of the parameter ∆ z is set to 1 in all the experiments. The nodes in the branch-and-bound queue are explored in a best-first fashion, where the best node is the one minimizing the ratio ∆ (σ) / |σ|. These two parameters have been set to these values according to preliminary tests. In the remainder, we will refer to this default implementation of our branch-and-bound as bb.
The first performance profile (see Figure 1) considers all the experiments (the logarithmic scale on the x-axis is in base 2). The profile clearly shows the superiority of the proposed method. Moreover, when comparing the other existing methods, we can remark that the results we report for witness, cp and vertexrank are consistent with those previously published in [16]. However, the results of ccg are better than those reported in [16]. This is because our implementation of ccg is enhanced by the initial enumeration and preprocessing of (L + 1)-cliques, and by the addition of valid inequalities (see Section 5.4).
The performance profiles in Figure 2 allow us to analyze each family of instances independently. For the instances of Random, Protein and Interdiction, the profiles seem representative of the overall results. We remark that the dominance of bb is more pronounced on the sensor network instances. Looking at the tables in Appendix B, it appears that even for bb, the solution of sensor network instances takes on average more time than for the other families of instances. Due to this additional difficulty, the other methods reach the time limit more often. We can also remark that ccg outperforms bb on the Synthetic benchmark. In fact, for these particular instances, we find that the linear relaxation of ccg provides a bound that is very close to the optimal value, leading to good performance. Moreover, these instances include only 25 to 35 vertices, which yields many computational times below one second; as a consequence, machine overhead may have an impact on this comparison.

To establish the impact of the value of U − L on the performance of the various algorithms, we provide independent performance profiles where the value of U − L is fixed to 1, 2 or 3 (see Figure 3). The results confirm the intuition that the problem gets more difficult when the value of U − L increases. This is expected for our branch-and-bound algorithm, because the enumeration is based on the partial candidates, whose number must necessarily increase with the value of U: since L is constant, the number of candidate vertices at a given enumeration node is the same, but the number of full ones necessarily decreases. A similar combinatorial effect seems to affect every other approach. We also observe that witness is less impacted than the other approaches. This seems to indicate that the concept of "witness" yields a stronger formulation when U − L increases, but additional analyses will be necessary to conclude on this issue.

Assessment of improvements in the branch-and-bound algorithm
In order to assess the various improvements that we have proposed for our branch-and-bound framework (see Section 5), we propose another performance study where different versions are compared.
For a reasonable computational effort, we perform this analysis on a subset of the above benchmark, where we keep only two instances per family. In doing so, we selected those that appeared to be the most difficult in our previous tests.
The "default" version of the algorithm is the implementation used in the previous sections, bb. The "no dominance" and "no symmetry break" versions correspond to the "default" version where the dominance rules or the strategy to break symmetry, respectively, are not used. The "relaxation bound" version differs from the default one only in the computation of the lower bound at each enumeration node: lowerbound(σ) returns LB ccg (σ) instead of LB trivial (σ) (see Section 5.3), meaning that the linear relaxation of an IP is solved at each enumeration node to compute a lower bound better than the trivial one. Finally, the "plain BB" version does not use any pruning technique. Figure 4 shows the performance profile obtained when comparing these different versions against each other. The default version appears to perform best, even though its profile is very similar to that of the "relaxation bound" version. A closer look at the results shows a clear dominance of the default version on the Random, SensorNetwork and Interdiction instances, whereas the "relaxation bound" version is the best performing one on Synthetic and Protein. For the latter two families of instances, we observed that LB ccg (σ) provides lower bounds that are in general much closer to the value of the optimal completion of σ than LB trivial (σ). When this is not the case, the time spent computing LB ccg (σ) is not compensated by the resulting reduction in the number of enumeration nodes. In contrast, the performance profiles highlight a clear contribution of the pruning techniques based on dominance and symmetry.

Impact of the size of the graph
We now study the impact of the size of the graph on the performance of the branch-and-bound algorithm. For this, we run the algorithm on instances including up to 500 vertices with U − L = 1, 2, 3. In order to create a benchmark with representative entries among the instances described in Section 6.1, we keep only two instances per family and per number of vertices. The parameters we use to generate the instances are those we already identified when comparing the different versions of the branch-and-bound (which indicate the instances that are hardest to solve on average). Given that these parameters lead to infeasible or trivially solved instances for two families, we had to slightly modify some parameter values as follows.
For synthetic instances, we initially took as a reference the density (D = 0.3) related to the most difficult instances of the previous benchmark. With increasing numbers of vertices, our preliminary tests highlighted that the resulting instances had trivial solutions that included only fully-referenced vertices. As a consequence, we reduced the density by setting it to D = 0.3 × √35/√n, which matches D = 0.3 when n = 35. In order to generate larger sensor network instances, we have selected the threshold values 0.25 and 0.35 with a square shape. We then modified the lengths of the square's sides to keep a constant density of sensors in the area.
We run bb on every large instance and, given that the last section showed that the "relaxation bound" version gets better results on two families of instances, we also run this version. In Figure 5, we report the percentage of successful runs of the branch-and-bound algorithm with respect to the instance size, for separate values of U − L. The left subfigure focuses on default parameter values, whereas the right subfigure displays the percentage of instances solved by at least one of the two versions. We observe that every instance can be solved for U − L = 1 and n = 100, 150, 200 and 300 with one of the two versions. Moreover, at least 80% of the instances with up to 500 vertices are solved. In comparison, existing methods could not find provably optimal solutions for more than 50% of instances including as few as 35 to 60 vertices. Given that U − L = 1 is the value that appears in the applications related to distance geometry (see Section 2.1), these results constitute a significant step towards the exact solution of real instances where thousands of vertices need to be considered.
In contrast, as the value of U − L increases, it becomes more and more difficult for the branch-and-bound algorithm to find provable optimal solutions within the required time threshold.

Conclusions
We have introduced a vertex ordering decision problem, revorder, and its optimization counterpart, min revorder. By exploiting previous results obtained for related applications, we have proposed a brief survey of existing methods. We have then proposed new valid inequalities for an existing IP formulation and developed a new branch-and-bound framework for min revorder. The computational results highlight that the branch-and-bound clearly outperforms existing solution methods. This improvement allows us to compute solutions of instances with up to 500 vertices, even though these instances were deliberately constructed to probe the limits of solution methods.

min revorder is a generic problem for which some applications are already known; moreover, we believe it can serve as a basis for the (re)formulation of other problems arising in other research areas. One research line that we find interesting is, for example, the one related to epidemic networks [9], where the propagation of infections is studied in a way that is quite similar to the interdiction problem we have considered. To completely fulfill this aim, there are some possible extensions that we can foresee. First of all, we have limited our current study to undirected graphs G, because this setting captures the main applications we took into consideration. However, the entire work may be extended to directed graphs in order to consider other applications where the orientation of edges is important. This would be natural if relevant applications arise in the area of scheduling.
Another assumption that we have considered in this work, which may be relaxed or removed in future works, is related to the shape of the objective function δ_σ(v). Again, our choice in the current work was led by the fact that the considered applications do not need more generic objective functions. However, if G is a weighted graph (with weights on the vertices and/or on its edges), then the objective function may take these weights into consideration, so that it no longer corresponds to the simple count that we have used above. For instance, consider another distance geometry problem where distances are represented by real-valued intervals instead of precise distances [18]. In this context, a reference with a smaller distance interval range yields a reduced set of possible realizations. To account for this, the objective function needs to include a term where the length of the distance interval of a vertex is added every time it serves as a reference.

A Existing integer programming and constraint programming formulations
In this section, we give more details about the approaches from the literature that are used in our computational tests. While doing this, we will simplify the presentation by assuming that the initial sets are the (L + 1)-cliques of G, which is the case in all our test instances. The generalization to arbitrary initial sets would be possible by enumerating them as in the cycle-based extended formulation. Please see [16] for the original descriptions of the models and other approaches to the discretization of distance geometry graphs.
Constraints (9) and (10) ensure that there is exactly one vertex per rank and one rank per vertex. Constraints (11) ensure that the first L + 1 vertices induce a complete subgraph of G. By (12), vertex v is at rank r only if it has at least L references, and by (13), v is fully-referenced and at rank r only if it has at least U references. Finally, constraints (14) ensure that if v is at rank r and partially-referenced, then the rank variable δ_r is equal to 1.

A.2 The witness-based decomposition: witness
MacNeil and Bodur [16] develop a decomposition scheme based on witness vertices. A witness is a reference vertex that is necessary to satisfy the reference constraints. As a consequence, partially- and fully-referenced vertices have exactly L and U witnesses, respectively. Moreover, the initial vertices are not assigned a specific rank among the first L + 1. Instead, they are all witness to one another and to all their other neighbors. The decomposition yields an extended formulation including one constraint per directed cycle of G. In this formulation, for all {u, v} ∈ E, the binary decision variable w_{u,v} = 1 if and only if u is witness to v; for all v, δ_v = 1 if and only if vertex v is partially-referenced, and κ_v = 1 if and only if v is among the first L + 1 in the order.
Constraints (15)-(17) guarantee that the L + 1 vertices belonging to the initial clique are pairwise connected and that they are witness to all their neighbors. The valid inequalities (18) state that the first L + 1 vertices are not partially-referenced. Constraints (19) ensure that the vertices of the initial clique have exactly L witnesses, and that the other vertices have L witnesses if they are partially-referenced and U witnesses if they are fully-referenced. Constraints (20) include one cycle cut per directed cycle of G. One specificity of the model is that the first L + 1 vertices are witness to one another. As a consequence, the cycle constraints (20) are lifted in the space (w, κ), so that cycles containing only vertices among the first L + 1 in the order are actually allowed. This is required only for cycles including at most L + 1 vertices, because MacNeil and Bodur [16] have shown that it is not necessary to forbid directed cycles that include vertices both inside and outside the first L + 1 vertices. Similarly to ccg, the above extended formulation is solved with a cycle constraint generation algorithm, where every 2- and 3-cycle is included in the initial relaxation.

A.3 A constraint programming approach: cp
In [16], MacNeil and Bodur develop three different constraint programming approaches. In their experiments, two of these approaches performed similarly well, and both outperformed the third one. The model we describe, cp, is one of those two best approaches. In cp, decision variable v_r is equal to the vertex whose rank is r in an optimal referenced order, and δ_r is as in vertexrank.
Constraint (21) ensures that variables v_1, . . . , v_n describe a vertex ordering. It uses the AllDifferent constraint, which is classical in scheduling problems. Constraints (22) state that the first L + 1 vertices are pairwise connected, while constraints (23) ensure that the following ones have at least L references and that they are partially-referenced if they have less than U. These two constraints make use of element constraints, where decision variables can be used as indices.

B Details on computational experiments
The following tables show our computational experiments aimed at performing a global comparison among all methods discussed in the article. For every instance belonging to one of the five sets described in Section 6, we run the experiments for three values of U − L: 1, 2 and 3. For every instance, for every value of U − L, and for every solver, we report in the following tables: the objective value of the best solution found ("obj"), the computational time in seconds ("time"), and the number of branch-and-bound nodes explored by the solver ("#nodes"). In column "time", TL indicates that the time limit was reached. In every other case, optimality is reached, so the value indicated in column "obj" is the optimal one.

Table 6: Experiments with the instances in the Synthetic set.

Table 9: Experiments with the instances in the Interdiction set.