Characterizations of Stability of Error Bounds for Convex Inequality Constraint Systems

In this paper, we mainly study error bounds for a single convex inequality and for semi-infinite convex constraint systems, and give characterizations of the stability of error bounds via directional derivatives. For a single convex inequality, it is proved that the stability of local error bounds under small perturbations is essentially equivalent to the non-zero minimum of the directional derivative at a reference point over the unit sphere, and the stability of global error bounds is proved to be equivalent to the strictly positive infimum of the directional derivatives over the unit sphere, taken at all points in the boundary of the solution set, together with a mild constraint qualification. When these results are applied to semi-infinite convex constraint systems, characterizations of the stability of local and global error bounds under small perturbations are also provided. In particular, such stability of error bounds is proved to require only that all component functions in the semi-infinite convex constraint system have the same linear perturbation. Our work demonstrates that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex optimization/minimization problems (defined by directional derivatives) over the unit sphere.


Introduction
Our main goal in this paper is to study error bounds of a single convex inequality and of semi-infinite convex constraint systems, and to provide characterizations of the stability of local and global error bounds under perturbations. The theory of error bounds can be traced back to the pioneering work of Hoffman [22] for systems of affine functions, in which it is proved that, for a given matrix A and a vector b, the distance from x to the polyhedral set {u : Au ≤ b} is bounded above by some scalar constant (depending on A only) times the norm of the residual (Ax − b)_+, where for any vector z, (z)_+ denotes the positive part of z. Hoffman's result was extensively and intensively studied by Robinson [48], Mangasarian [40], Auslender and Crouzeix [2], Pang [44], Lewis and Pang [36], Klatte and Li [30], and Jourani [29], and there have been important developments of various aspects of error bounds for convex and nonconvex functions in recent years. We refer the readers to [3,4,6,9,14,15,17,18,19,23,27,34,37,41,43,45,52] and the references therein for a summary of the theory of error bounds and its various applications.
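Hoffman's inequality is easy to probe numerically. The following is a minimal sketch for a box {u : Au ≤ b} = [−1, 1]^2 chosen so that both the distance (via coordinatewise clipping) and the residual can be computed in closed form; the constant τ = 1 is specific to this choice of A and b.

```python
import numpy as np

# Hoffman's bound for the box [-1, 1]^2 written as {u : Au <= b}:
# the distance to the set is controlled by the norm of the residual (Ax - b)_+.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.ones(4)

def dist_to_box(x):
    # projection onto [-1, 1]^2 is coordinatewise clipping
    return np.linalg.norm(x - np.clip(x, -1.0, 1.0))

def residual(x):
    return np.linalg.norm(np.maximum(A @ x - b, 0.0))

rng = np.random.default_rng(0)
points = rng.uniform(-5.0, 5.0, size=(1000, 2))
# for this particular A and b the bound holds with constant tau = 1
ok = all(dist_to_box(x) <= residual(x) + 1e-9 for x in points)
```

For this box the two quantities in fact coincide coordinatewise, so equality holds; a general polyhedron only admits some finite constant depending on A.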
Error bounds have been applied to the sensitivity analysis of linear programs (cf. [47,49]) and to the convergence analysis of descent methods for linearly constrained minimization (cf. [20,21,28,38,51]). In addition, it is proved that error bounds play an important role in the feasibility problem of finding a point in the intersection of a finite collection of closed convex sets (cf. [7,8,9]) and have an application in the domain of image reconstruction (cf. [13]). Also, error bounds are extensively discussed in connection with weak sharp minima of functions and metric regularity/subregularity as well as Aubin property/calmness of set-valued mappings (cf. [1,6,10,11,12,19,25,26,31,32,48,54,55] and references therein).
Since real-world problems typically have inaccurate data, it is of practical and theoretical interest to know the behavior of error bounds under data perturbations. For systems of linear inequalities, this question has been studied by Luo and Tseng [39] and Azé and Corvellec [5]. Subsequently, Deng [16] studied systems of a finite number of convex inequalities. In 2005, Zheng and Ng [53] considered the stability of error bounds for systems of conic linear inequalities in a general Banach space. In 2010, Ngai, Kruger and Théra [42] studied the stability of error bounds for semi-infinite convex constraint systems in a Euclidean space and established subdifferential characterizations of the stability under small perturbations. Infinite dimensional extensions were considered by Kruger, Ngai and Théra in [35]. In 2012, by relaxing the convexity assumption, Zheng and Wei [56] discussed the stability of error bounds for quasi-subsmooth (not necessarily convex) inequalities in a general Banach space and provided Clarke subdifferential characterizations of the stability of error bounds. In 2018, Kruger, López and Théra [33] extended the development in [35,42] and characterized the stability of error bounds for convex inequalities in the Banach space setting. From the viewpoint of infinite dimensional Banach spaces, the results on the stability of error bounds in [33,35,42,56] are dual conditions, and it is a natural idea to study this issue without involving the dual space, since information on the dual space may be missing. Inspired by this observation, we study characterizations of the stability of local and global error bounds of a single convex inequality and of semi-infinite convex constraint systems via directional derivatives.
For a single convex inequality, we prove that the stability of local error bounds under small perturbations holds if and only if the minimum of the directional derivative at a reference point over the unit sphere is non-zero, and the stability of global error bounds is proved to be equivalent to the strictly positive infimum of the directional derivatives over the unit sphere, taken at all points in the boundary of the solution set, together with a mild constraint qualification. When these results are applied to semi-infinite convex constraint systems, characterizations of the stability of local and global error bounds under small perturbations are also provided. In particular, such stability of error bounds is proved to require only that all component functions in semi-infinite convex constraint systems have the same linear perturbation. Our work demonstrates that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex optimization/minimization problems (defined by directional derivatives) over the unit sphere.
The paper is organized as follows. In Section 2, we give some definitions and preliminary results. Section 3 is devoted to the study of the stability of error bounds for a single convex inequality. In terms of directional derivatives, we provide characterizations of local and global error bounds for a single convex inequality under small perturbations (see Theorem 5 and Theorem 10). When these results are applied to semi-infinite convex constraint systems in Section 4, the stability of local and global error bounds can be obtained (see Theorem 13 and Theorem 15). Conclusions are given in Section 5.

Preliminaries
In what follows we consider the Euclidean space R^m equipped with the norm ‖x‖ := √⟨x, x⟩. We denote by B_m the closed unit ball of R^m and, following standard notation, by Γ_0(R^m) the set of extended-real-valued lower semicontinuous convex functions f : R^m → R ∪ {+∞} which are supposed to be proper, that is, such that dom(f) := {x ∈ R^m : f(x) < +∞} ≠ ∅. For a subset D of R^m, we denote by d(x, D) the distance from x to D, which is defined by

d(x, D) := inf{‖x − u‖ : u ∈ D},

where we use the convention inf ∅ = +∞. We denote by bdry(D) and int(D) the boundary and the interior of D, respectively.
Let f ∈ Γ_0(R^m) and x̄ ∈ dom(f). For any h ∈ R^m, we recall that the directional derivative f′(x̄, h) of f at x̄ along the direction h is defined as

f′(x̄, h) := lim_{t→0+} (f(x̄ + th) − f(x̄))/t.

It is known from [46,50] that the function t ↦ (f(x̄ + th) − f(x̄))/t is nonincreasing as t → 0+ and thus

f′(x̄, h) = inf_{t>0} (f(x̄ + th) − f(x̄))/t.

We denote by ∂f(x̄) the subdifferential of f at x̄, which is defined by

∂f(x̄) := {x* ∈ R^m : ⟨x*, x − x̄⟩ ≤ f(x) − f(x̄) for all x ∈ R^m}.

It is known from [46,50] that if ∂f(x̄) is nonempty, one has

∂f(x̄) = {x* ∈ R^m : ⟨x*, h⟩ ≤ f′(x̄, h) for all h ∈ R^m}

and

f′(x̄, h) = sup{⟨x*, h⟩ : x* ∈ ∂f(x̄)} for all h ∈ R^m,

where we use the convention sup ∅ = −∞.
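The monotonicity of the difference quotient can be checked numerically on a simple nonsmooth convex function; the sketch below uses f(x) = x^2 + |x| at x̄ = 0, where f′(0, h) = |h|.

```python
# convexity makes the difference quotient t -> (f(x0 + t*h) - f(x0)) / t
# nondecreasing in t, so it decreases to f'(x0, h) as t -> 0+.
f = lambda x: x * x + abs(x)   # convex, nonsmooth at 0
x0, h = 0.0, -1.0

def quotient(t):
    return (f(x0 + t * h) - f(x0)) / t

ts = [2.0 ** (-k) for k in range(1, 30)]
qs = [quotient(t) for t in ts]
# quotients decrease along the shrinking step sizes ...
monotone = all(qs[k] >= qs[k + 1] for k in range(len(qs) - 1))
# ... toward the directional derivative f'(0, h) = |h| = 1
limit = qs[-1]
```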
We conclude this section with the following lemma, which is used in our analysis.

Lemma 1. Let f ∈ Γ_0(R^m) and x̄ ∈ dom(f), and set β(f, x̄) := inf_{‖h‖=1} f′(x̄, h). If β(f, x̄) < 0, then d(0, ∂f(x̄)) = −β(f, x̄).

Stability of Error Bounds for a Single Convex Inequality
In this section, we mainly study local and global error bounds for a single convex inequality, and provide characterizations of stability (in terms of directional derivatives) of error bounds. We first recall the definition of error bounds for a single convex inequality.
For a given f ∈ Γ_0(R^m), we consider the set of solutions of a single convex inequality:

S_f := {x ∈ R^m : f(x) ≤ 0}.    (7)

Recall that convex inequality (7) is said to have a global error bound if there exists a constant τ ∈ (0, +∞) such that

d(x, S_f) ≤ τ[f(x)]_+ for all x ∈ R^m,    (8)

where [f(x)]_+ := max{f(x), 0}. We denote by τ_min(f) := inf{τ > 0 : (8) holds} the global error bound modulus of S_f. For x̄ ∈ bdry(S_f), convex inequality (7) is said to have a local error bound at x̄ if there exist constants τ, δ ∈ (0, +∞) such that

d(x, S_f) ≤ τ[f(x)]_+ for all x ∈ x̄ + δB_m.    (9)

We denote by τ_min(f, x̄) := inf{τ > 0 : there exists δ > 0 such that (9) holds} the local error bound modulus of S_f at x̄.
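For a concrete instance of the moduli just defined, take f(x) = x^2 − 1 on R, for which S_f = [−1, 1], d(x, S_f) = [|x| − 1]_+ and τ_min(f) = 1/2; the sketch below verifies that τ = 1/2 works globally while a smaller constant fails.

```python
# global error bound for f(x) = x^2 - 1 with S_f = [-1, 1]:
# d(x, S_f) = max(|x| - 1, 0) and [f(x)]_+ = max(x^2 - 1, 0),
# and d(x, S_f) <= tau * [f(x)]_+ holds exactly for tau >= 1/2.
def dist(x):
    return max(abs(x) - 1.0, 0.0)

def fplus(x):
    return max(x * x - 1.0, 0.0)

xs = [k / 100.0 for k in range(-500, 501)]
holds_half = all(dist(x) <= 0.5 * fplus(x) + 1e-12 for x in xs)
# the ratio d / [f]_+ = 1/(|x| + 1) approaches 1/2 as |x| -> 1+, so tau = 0.4 fails
fails_04 = any(dist(x) > 0.4 * fplus(x) + 1e-12 for x in xs)
```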
The following theorem gives characterizations of global and local error bounds. We refer the readers to [5] for more details. This result is needed in the sequel.

has a global error bound if and only if
iii. The following equality holds:

For a mapping φ : X → Y between two normed linear spaces X and Y, we denote by Lip(φ) the Lipschitz constant of φ, which is defined by

Lip(φ) := sup{‖φ(x) − φ(y)‖/‖x − y‖ : x, y ∈ X, x ≠ y}.

Stability of Local Error Bounds
In this subsection, we mainly study local error bounds for a single convex inequality and aim to provide an equivalent criterion for the stability of local error bounds for convex inequality (7). We first give a sufficient condition for the local error bound of convex inequality (7).
Then convex inequality (7) has a local error bound at x̄ and moreover (10) holds. Indeed, for any x ≠ x̄, by (2), one can verify the required distance estimate. Suppose that β(f, x̄) < 0. Then Lemma 1 implies that d(0, ∂f(x̄)) = −β(f, x̄) and, by virtue of Theorem 2, one obtains the required bound on the error bound modulus. Hence (10) holds. The proof is complete.
Remark 4. A close analysis of the proof of Proposition 3 shows that the solution set S_f reduces to the singleton {x̄} if inf_{‖h‖=1} f′(x̄, h) > 0 and f(x̄) = 0, which means that x̄ is a sharp (or strong) minimizer of f. Further, it should be noted that the condition inf_{‖h‖=1} f′(x̄, h) ≠ 0 is only sufficient for the existence of a local error bound of (7).
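A one-dimensional sketch of both halves of this remark: f(x) = max{x, 0} satisfies inf_{|h|=1} f′(0, h) = 0 and still has a local error bound at 0, yet the linear perturbation g(x) = f(x) − εx destroys any uniform modulus, which is why the directional-derivative condition matters for stability rather than mere existence.

```python
# f(x) = max(x, 0): at x0 = 0 one has f'(0, 1) = 1 and f'(0, -1) = 0,
# so inf_{|h|=1} f'(0, h) = 0, yet d(x, S_f) = [f(x)]_+ (error bound, tau = 1)
f = lambda x: max(x, 0.0)
dist_Sf = lambda x: max(x, 0.0)          # S_f = (-inf, 0]
xs = [k / 10.0 for k in range(-50, 51)]
bound_ok = all(dist_Sf(x) <= f(x) + 1e-12 for x in xs)

# the linear perturbation g(x) = f(x) - eps*x destroys the bound:
# S_g = {0}, while [g(x)]_+ = eps*|x| for x < 0, forcing tau >= 1/eps
eps = 1e-3
g = lambda x: f(x) - eps * x
ratio_at = lambda x: abs(x) / max(g(x), 0.0)   # d(x, S_g) / [g(x)]_+ for x < 0
blow_up = ratio_at(-1.0)                        # equals 1/eps
```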
The following theorem shows that the condition inf_{‖h‖=1} f′(x̄, h) ≠ 0 can be used to characterize the stability of the local error bound for convex inequality (7). For the sake of completeness, we provide a self-contained proof of this theorem.
Then the following statements are equivalent: Let us first prove i ⇒ ii. Take any ε > 0 such that ε < |β(f, x̄)| and let c := (|β(f, x̄)| − ε)^{−1}. Take any g ∈ Γ_0(R^m) such that x̄ ∈ S_g and (11) holds. If β(f, x̄) > 0, then for any h ∈ R^m with ‖h‖ = 1, one has g′(x̄, h) ≥ f′(x̄, h) − ε ≥ β(f, x̄) − ε > 0. This and Proposition 3 imply that τ_min(g, x̄) ≤ c. If β(f, x̄) < 0, then for any h ∈ R^m with ‖h‖ = 1, one has g′(x̄, h) ≤ f′(x̄, h) + ε, so that inf_{‖h‖=1} g′(x̄, h) ≤ β(f, x̄) + ε < 0. By using Proposition 3 again, one obtains that τ_min(g, x̄) ≤ c. Note that the implication ii ⇒ iii is clear, and it remains to prove iii ⇒ i. Let ε > 0. Suppose on the contrary that there exists a sequence {h_k} in R^m with ‖h_k‖ = 1 such that α_k := f′(x̄, h_k) → 0. Without loss of generality, we can assume that |α_k| < ε for all k (considering sufficiently large k if necessary) and consider the associated function g_ε for any x ≠ x̄. By the definition of the directional derivative, there exists a sequence {δ_k} decreasing to 0 such that the corresponding approximation holds. By virtue of the Ekeland variational principle, we can select points z_k with the stated properties. This implies that z_k → x̄, g_ε(x̄) = f(x̄) = 0 and (13) holds. We claim that inf_{‖h‖=1} g′_ε(z_k, h) < 0 (otherwise, one would have inf_{‖h‖=1} g′_ε(z_k, h) ≥ 0, and then g_ε(z_k) = inf_{x∈R^m} g_ε(x), which contradicts g_ε(x̄) = 0). For any h ∈ R^m with ‖h‖ = 1 and any t > 0, by (13), one has the corresponding upper estimate. Thanks to Lemma 1 and Proposition 3, one can obtain that τ_min(g_ε, x̄) ≥ 1/(5ε), which contradicts iii since ε is arbitrary. The proof is complete.
Remark 6. (a). From [33,42], condition (11) means that g is an ε-perturbation of f near x̄, and the condition inf_{‖h‖=1} f′(x̄, h) ≠ 0 is proved to be equivalent to the stability of local error bounds under such ε-perturbations. Further, it has been shown in Theorem 5 that the stability under such ε-perturbations is essentially equivalent to the stability under ε-linear perturbations.
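The stable case can also be illustrated numerically: for f(x) = ‖x‖ one has β(f, 0) = 1 > 0, and any linear perturbation g(x) = ‖x‖ + ⟨u, x⟩ with ‖u‖ ≤ ε < 1 keeps an error bound at 0 with modulus at most (1 − ε)^{−1}, since g(x) ≥ (1 − ε)‖x‖.

```python
import numpy as np

# beta(f, 0) = inf_{||h||=1} f'(0, h) = 1 for f(x) = ||x||; any linear
# perturbation g(x) = ||x|| + <u, x> with ||u|| <= eps < 1 satisfies
# g(x) >= (1 - eps)||x||, hence an error bound with modulus <= 1/(1 - eps)
rng = np.random.default_rng(1)
eps = 0.5
u = np.array([0.3, -0.4])          # ||u|| = 0.5 = eps
g = lambda x: np.linalg.norm(x) + u @ x

pts = rng.normal(size=(500, 2))
# S_g = {0}, so d(x, S_g) = ||x||; check ||x|| <= [g(x)]_+ / (1 - eps)
stable = all(np.linalg.norm(x) <= max(g(x), 0.0) / (1 - eps) + 1e-9 for x in pts)
```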

Stability of Global Error Bounds
This subsection is devoted to the study of stability of global error bounds for a single convex inequality, and the aim is to give sufficient and/or necessary conditions for the stability via directional derivatives. The following theorem gives a criterion for the stability of global error bounds.
Consider the following statements:

i. There exists τ ∈ (0, +∞) such that (15) holds.

ii. There exist constants c, ε ∈ (0, +∞) such that for all g ∈ Γ_0(R^m) satisfying (16), one has τ_min(g) ≤ c.

iii. There exist constants c, ε ∈ (0, +∞) such that for all g ∈ Γ_0(R^m) satisfying (17), one has τ_min(g) ≤ c.

We next consider the case inf_{‖h‖=1} f′(x̄, h) ≤ 0 for all x̄ ∈ bdry(S_f). By virtue of (15) and Theorem 2, one can verify that S_f has a global error bound with the constant 1/τ; that is, d(x, S_f) ≤ (1/τ)[f(x)]_+ for all x ∈ R^m. Take any ε ∈ (0, τ). Suppose that g ∈ Γ_0(R^m) satisfies (16). Let x ∈ R^m be such that g(x) > 0. Then f(x) > 0, as S_f ⊆ S_g. We claim that (19) holds. Granting this, by Lip(f − g) < ε in (16), one can prove the corresponding estimate for g. This and Theorem 2 imply that τ_min(g) ≤ (τ − ε)^{−1}. We next prove the claim (19). Take z ∈ bdry(S_f) such that ‖x − z‖ = d(x, S_f); then (18) yields the associated inequality. Then for any t ∈ (0, 1), one obtains the corresponding estimates along the segment, and this means that (19) holds.

iii ⇒ i. Suppose that there exists a sequence {x_k} ⊆ bdry(S_f) such that (20) holds. Let ε > 0 be arbitrary and let k be sufficiently large such that (21) holds. Note that for any x ≠ x_k, one has the associated lower estimate. Choose h_k ∈ R^m with ‖h_k‖ = 1 such that (22) holds. Then we can take r_k → 0+ (as k → ∞) such that the corresponding approximation holds, and this together with (21) gives the required estimate. Applying the Ekeland variational principle, we can select y_k ∈ R^m with the stated properties, which implies y_k ≠ x_k. Let us consider a function g_ε ∈ Γ_0(R^m) defined by (25). By virtue of (20), (21), (22) and (25), one has, in particular, g_ε(y_k) > 0. If inf_{‖h‖=1} g′_ε(y_k, h) ≥ 0, then for any x ≠ y_k, one has the corresponding lower estimate. This and g_ε(y_k) > 0 imply that S_{g_ε} = ∅, and thus τ_min(g_ε) = +∞, which contradicts iii.
Next, we consider the case inf_{‖h‖=1} g′_ε(y_k, h) < 0. For any h ∈ R^m with ‖h‖ = 1 and t > 0, by (25), one has the corresponding upper estimate. Thanks to Lemma 1 and Theorem 2, we obtain τ_min(g_ε) ≥ 1/(4ε), which contradicts iii since ε is arbitrary. The proof is complete.

Remark 8. (a). In contrast with [42, Theorem 7], in which a subdifferential characterization of the stability of global error bounds was established with the aid of the so-called asymptotic qualification condition, Theorem 7 studies the stability of global error bounds via directional derivatives without additional hypotheses. It is known from Theorem 7 that condition (15) is sufficient for the stability of global error bounds as stated in Theorem 7.ii, and is necessary for the stability as stated in Theorem 7.iii.

(b). It should be noted that condition (15) is not sufficient for the stability of global error bounds as stated in Theorem 7.iii, and the assumption S_f ⊆ S_g for the stability as stated in Theorem 7.ii is crucial. To see this, let us consider the following example.

Example 9. Let f(x) := e^x − 1 for all x ∈ R, so that S_f = (−∞, 0] and f has a global error bound. However, for any ε ∈ (0, +∞), consider the function g_ε(x) := f(x) − εx for all x ∈ R. Then one can verify that g_ε has two different zero points, denoted by x_1 := x̄ < 0 and x_2 := 0, and S_{g_ε} = [x̄, 0]. Thus S_f ⊄ S_{g_ε}, and for any x < x̄ one has d(x, S_{g_ε}) = x̄ − x while [g_ε(x)]_+ = g_ε(x) < ε(x̄ − x). This implies that τ_min(g_ε) > 1/ε, and consequently the global stability (for f) as stated in Theorem 7.iii does not hold, since ε > 0 is arbitrary.
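The computations behind this example are easy to reproduce; the sketch below locates the second zero x̄ of g_ε by bisection for ε = 0.1 and exhibits a ratio d(x, S_{g_ε})/[g_ε(x)]_+ of about 1/ε = 10 far to the left of x̄.

```python
import math

# f(x) = exp(x) - 1, S_f = (-inf, 0]; d(x, S_f) = x <= e^x - 1 for x > 0,
# so f has a global error bound with tau = 1
f = lambda x: math.exp(x) - 1.0
bound_f = all(x <= f(x) + 1e-12 for x in [k / 10.0 for k in range(1, 100)])

# g_eps(x) = f(x) - eps*x has a second zero xbar < 0 and S_g = [xbar, 0];
# locate xbar by bisection
eps = 0.1
g = lambda x: f(x) - eps * x
lo, hi = -50.0, -1.0          # g(lo) > 0 > g(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
xbar = 0.5 * (lo + hi)

# far to the left g(x) ~ eps*(xbar - x) while d(x, S_g) = xbar - x,
# so the ratio d / [g]_+ is about 1/eps = 10: the modulus blows up as eps -> 0
x = xbar - 100.0
ratio = (xbar - x) / g(x)
```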
Further, a natural question arises from the above example:

Does there exist some type of stability of global error bounds that can be characterized by condition (15)?
We do not have an answer to this question. However, if the answer is affirmative, we conjecture that such global stability should be strictly stronger than that of Theorem 7.ii and weaker than that of Theorem 7.iii.
The following theorem gives characterizations of the stability of global error bounds for a convex inequality as stated in Theorem 7.iii.
Theorem 10. Let f ∈ Γ_0(R^m) be such that bdry(S_f) ⊆ f^{−1}(0). Then the following statements are equivalent:

i. There exists τ ∈ (0, +∞) such that (15) holds and the following qualification condition (QC) is satisfied:

ii. There exist constants c, ε ∈ (0, +∞) such that for all g ∈ Γ_0(R^m) satisfying (17), one has τ_min(g) ≤ c;

iii. There exist constants c, ε > 0 such that for any x̄ ∈ bdry(S_f) and u ∈ R^m with ‖u‖ ≤ 1, one has τ_min(g_{u,ε}) ≤ c, where g_{u,ε}(x) := f(x) + ε⟨u, x − x̄⟩ for all x ∈ R^m.

Proof. Let us first prove i ⇒ ii. Based on Remark 4 and the proof of Theorem 7, we only need to consider the case inf_{‖h‖=1} f′(x̄, h) ≤ 0 for all x̄ ∈ bdry(S_f). We first prove the following claim.

Claim. There exists ε_0 > 0 such that for all x_0 ∈ bdry(S_f), the required lower estimate holds.

Proof of the claim. Suppose on the contrary that there exist ε_k → 0+, x_k ∈ bdry(S_f) and z_k ∈ R^m such that (27) holds. Then f(z_k) ≤ 0 for all k (otherwise, similarly to the proof of (19), one can prove that inf_{‖h‖=1} f′(z_k, h) > τ, a contradiction). By (15), one has z_k ∈ S_f \ bdry(S_f), and it follows from (27) that the associated estimate holds. This and the qualification condition in i yield a bound which contradicts (27). Hence the claim is proved.
Let ε > 0 be such that ε < min{ε_0, τ}. Suppose that g ∈ Γ_0(R^m) satisfies (17). Take any x̄ ∈ bdry(S_f) ∩ g^{−1}(0). Then for any x ∈ R^m with g(x) > 0, one has the corresponding estimate. By virtue of Lemma 1 and Theorem 2, we derive the inequality τ_min(g) ≤ 1/(τ − ε). Note that ii ⇒ iii follows immediately, and it remains to prove iii ⇒ i. Suppose on the contrary that i does not hold. Based on iii ⇒ i in Theorem 7, we only consider the case that there exist z_k ∈ S_f \ bdry(S_f) and x_k ∈ bdry(S_f) with the stated properties. Let ε > 0 be arbitrary. Without loss of generality, we can assume that (z_k − x_k)/‖z_k − x_k‖ → h_0 (taking a subsequence if necessary). Then ‖h_0‖ = 1. Suppose that k is sufficiently large such that (29) holds. Let us consider a function g_{h_0,ε} ∈ Γ_0(R^m) defined by g_{h_0,ε}(x) := f(x) + ε⟨h_0, x − x_k⟩ for all x ∈ R^m. Then g_{h_0,ε}(z_k) = f(z_k) + ε⟨h_0, z_k − x_k⟩ > 0 by (29), and thus the required estimate holds. This together with Lemma 1 and Theorem 2 implies that τ_min(g_{h_0,ε}) ≥ 1/(2ε), which contradicts iii since ε is arbitrary. The proof is complete.
Remark 11. Note that condition (QC) is necessary for the stability of global error bounds. Consider Example 9 given in Remark 8 again, where f(x) := e^x − 1 for all x ∈ R. Then the stability of global error bounds for f as stated in Theorem 7.iii does not hold. Further, for any z_k → −∞, one can verify that inf_{|h|=1} f′(z_k, h) = −e^{z_k} → 0 as k → ∞, which means that (QC) (for f) fails.
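The degeneration of the directional-derivative infimum in this example can be tabulated directly; on R the unit sphere is just {−1, 1}.

```python
import math

# (QC)-type check for f(x) = exp(x) - 1: at interior points z_k -> -inf of
# S_f the quantity inf_{|h|=1} f'(z_k, h) = min(e^{z_k}, -e^{z_k}) = -e^{z_k}
# tends to 0, so the directional-derivative infimum degenerates at infinity
fprime = lambda z, h: math.exp(z) * h
inf_sphere = lambda z: min(fprime(z, 1.0), fprime(z, -1.0))

vals = [inf_sphere(-k) for k in (1, 5, 10, 20)]
decays_to_zero = all(v < 0 for v in vals) and abs(vals[-1]) < 1e-8
```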

Stability of Error Bounds for Semi-infinite Convex Constraint Systems
In this section, we study local and global error bounds for semi-infinite convex constraint systems, and mainly provide characterizations of the stability of error bounds by directional derivatives. We first recall the definition of error bounds for semi-infinite convex constraint systems. By a semi-infinite convex constraint system in R^m, we mean the problem of finding x ∈ R^m satisfying

f_i(x) ≤ 0 for all i ∈ I,    (30)

where I is a compact, possibly infinite, Hausdorff space and f_i : R^m → R, i ∈ I, are given convex functions such that i ↦ f_i(x) is continuous on I for each x ∈ R^m. It is known from [50, Theorem 7.10] that in this case the function

f(x) := max_{i∈I} f_i(x), x ∈ R^m,

is well defined (the maximum is attained) and convex. We denote the solution set of system (30) by

S_F := {x ∈ R^m : f_i(x) ≤ 0 for all i ∈ I} = {x ∈ R^m : f(x) ≤ 0}.

For any x ∈ R^m, we set

I_f(x) := {i ∈ I : f_i(x) = f(x)}.

Recall that system (30) is said to have a global error bound if there exists a constant τ ∈ (0, +∞) such that

d(x, S_F) ≤ τ[f(x)]_+ for all x ∈ R^m.    (33)

We denote by τ_min(F) := inf{τ > 0 : (33) holds} the global error bound modulus of S_F. For x̄ ∈ bdry(S_F), system (30) is said to have a local error bound at x̄ if there exist constants τ, δ ∈ (0, +∞) such that d(x, S_F) ≤ τ[f(x)]_+ for all x ∈ x̄ + δB_m. Then there is z* ∈ ∂f(x̄) such that the required inequality holds. Note that the subdifferential of the function f at a point x ∈ R^m is given by (see Ioffe and Tikhomirov [24])

∂f(x) = co(∪_{i∈I_f(x)} ∂f_i(x)),

where "co" denotes the convex hull of a set. Then by (37), there exist λ_1, . . . , λ_N ≥ 0 and i_1, . . . , i_N ∈ I_f(x̄) giving the corresponding convex-combination representation of z*. This and (37) imply the desired estimate. Hence (35) follows from (36) and the above inequality. The proof is complete.
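A concrete semi-infinite instance helps fix the notation: with I = [0, 2π] and f_t(x) = cos(t)x_1 + sin(t)x_2 − 1, the sup function is f(x) = ‖x‖ − 1, S_F is the closed unit ball, and d(x, S_F) = [f(x)]_+ (so τ = 1). The sketch approximates the supremum over I by a fine grid.

```python
import math

# semi-infinite system f_t(x) <= 0, t in I = [0, 2*pi] (compact), with
# f_t(x) = cos(t)*x1 + sin(t)*x2 - 1; the sup function is f(x) = ||x|| - 1
def f_t(t, x):
    return math.cos(t) * x[0] + math.sin(t) * x[1] - 1.0

def sup_f(x, n=20000):
    # grid approximation of max over the compact index set I
    return max(f_t(2 * math.pi * k / n, x) for k in range(n))

x = (3.0, -4.0)                     # ||x|| = 5
approx = sup_f(x)                   # close to ||x|| - 1 = 4
exact = math.hypot(x[0], x[1]) - 1.0
gap = abs(approx - exact)
```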
The following theorem gives characterizations (by directional derivatives) of stability of local error bounds for system (30).

Theorem 13.
Let x̄ ∈ R^m be such that f(x̄) = 0. Then the following statements are equivalent: then one has τ_min(G, x̄) ≤ c. iii. There exist constants c, ε > 0 such that for all u* ∈ R^m with ‖u*‖ ≤ 1, one has τ_min(G, x̄) ≤ c, where G ∈ C(I × R^m, R) is defined by (38).

Proof. We set β(f, x̄) := inf_{‖h‖=1} f′(x̄, h). Suppose that β(f, x̄) > 0. Then one can verify that S_F = {x̄} by Remark 4. Choose any ε ∈ (0, β(f, x̄)). Suppose that G, g_i and g satisfy all the conditions stated in ii. Then for any i ∈ I_f(x̄) ⊆ I_g(x̄), one has the corresponding estimate, and it follows from Proposition 12 that the required lower bound holds. Applying Proposition 3, we derive the desired inequality. Suppose that β(f, x̄) < 0. Choose any ε > 0 such that β(f, x̄) + ε < 0. Then for any i ∈ I_g(x̄) ⊆ I_f(x̄), one has the corresponding estimate, and it follows from Proposition 12 that the required upper bound holds. Applying Proposition 3 again, we obtain the desired inequality. ii ⇒ iii. The implication follows immediately since I_f(x̄) = I_g(x̄).
iii ⇒ i. Let u* ∈ R^m with ‖u*‖ ≤ 1 and let G ∈ C(I × R^m, R) be defined as in (38). Note that the corresponding relation holds, which means that the implication follows from iii ⇒ i in Theorem 5. The proof is complete.
Remark 14. (a). Theorem 13, given in terms of directional derivatives, can be regarded as an equivalent version of, and a supplement to, [42, Theorem 4], in which a subdifferential characterization of the stability of local error bounds for system (30) was established. Further, in contrast with [42, Theorem 4], the stability of local error bounds for system (30) requires only that all component functions in system (30) have the same ε-linear perturbation.
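To illustrate the common-linear-perturbation requirement on a toy system, take I = [0, 2π] and f_t(x) = cos(t)x_1 + sin(t)x_2 − 1 (so that sup_t f_t(x) = ‖x‖ − 1), and perturb every component by the same linear term ⟨u, x⟩. For ‖u‖ ≤ ε < 1 the gradient of the perturbed sup function keeps norm at least 1 − ε wherever the function is positive, the decrease-rate criterion behind a global error bound with modulus at most (1 − ε)^{−1}.

```python
import numpy as np

# perturb every component f_t by the same linear term <u, x>:
# sup_t [f_t(x) + <u, x>] = ||x|| + <u, x> - 1, and for ||u|| <= eps < 1 the
# gradient norm ||x/||x|| + u|| >= 1 - eps by the triangle inequality
u = np.array([0.3, 0.4])                      # ||u|| = 0.5
g = lambda x: np.linalg.norm(x) + u @ x - 1.0
grad_norm = lambda x: np.linalg.norm(x / np.linalg.norm(x) + u)

rng = np.random.default_rng(2)
pts = [x for x in rng.normal(size=(500, 2), scale=3.0) if g(x) > 0]
slope_ok = all(grad_norm(x) >= 1.0 - 0.5 - 1e-12 for x in pts)
```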
We now turn our attention to the stability of global error bounds for the semi-infinite constraint system (30) and mainly give an equivalent criterion for such stability. Based on Theorem 10, the following theorem establishes equivalent conditions for the stability of global error bounds for system (30).
Theorem 15. The following statements are equivalent: i. There exists τ ∈ (0, +∞) such that the corresponding analogue of (15) holds and (QC) as in Theorem 10 is satisfied. ii. There exist constants c, ε > 0 such that if G, g_i, g and I_g(x) are as stated in Theorem 13.ii, then one has τ_min(G) ≤ c.
iii. There exist constants c, ε > 0 such that for all x̄ ∈ bdry(S_f) and u* ∈ R^m with ‖u*‖ ≤ 1, one has τ_min(G) ≤ c, where G ∈ C(I × R^m, R) is defined by (40).

Proof. i ⇒ ii. Thanks to Remark 4 and the proof of Theorem 13, we only need to consider the case inf_{‖h‖=1} f′(x̄, h) ≤ 0 for all x̄ ∈ bdry(S_f). By virtue of the claim given in the proof of Theorem 10, there exists ε_0 > 0 such that for all x_0 ∈ bdry(S_f), the estimate (41) holds. Take any ε > 0 such that ε < min{ε_0, τ}. Suppose that G, g_i and g satisfy all the conditions stated in ii. Let x ∈ R^m be such that g(x) > 0. We claim that (42) holds. Granting this, one obtains the required estimate (thanks to sup_{i∈I} Lip(f_i − g_i) < ε and I_g(x) ⊆ I_f(x)). Applying Lemma 1 and Theorem 2, we derive the inequality τ_min(G) ≤ 1/(τ − ε). It remains to prove relation (42). For the case f(x) > 0, similarly to the proof of (19), one can verify that (42) holds. Thus we only need to consider the case f(x) ≤ 0.
From the second condition in ii, there is z_0 ∈ bdry(S_f) such that f_i(z_0) = g_i(z_0) for all i ∈ I. Then for any i ∈ I_g(x) ⊆ I_f(x), one has the corresponding estimate, and thus f(x) > −ε_0‖x − z_0‖. This and (41) imply that (42) holds.
ii ⇒ iii. The implication follows immediately since I_f(x) = I_g(x) for all x ∈ R^m.
iii ⇒ i. Let x̄ ∈ bdry(S_f), u* ∈ R^m with ‖u*‖ ≤ 1, and let G ∈ C(I × R^m, R) be defined as in (40). Note that the corresponding relation holds. Thus, the implication follows from iii ⇒ i in Theorem 10. The proof is complete.

Conclusions
This paper is devoted to the study of the stability of local and global error bounds for convex inequality constraint systems, including a single convex inequality and semi-infinite convex constraint systems. The main results provide characterizations (in terms of directional derivatives) of the stability of local and global error bounds for a single convex inequality (see Theorem 5 and Theorem 10). When these results are applied to error bounds for semi-infinite convex constraint systems, characterizations of the stability of local and global error bounds are also established in terms of directional derivatives (see Theorem 13 and Theorem 15). These results show that verifying the stability of error bounds for convex inequality constraint systems is, to some degree, equivalent to solving convex optimization/minimization problems defined by directional derivatives over the unit sphere.