
arXiv:1507.07142v1 [math.DS] 25 Jul 2015

Soumya Kundu and Marian Anghel

Abstract— Sum-of-squares (SOS) methods have been shown to be very useful in computing polynomial Lyapunov functions for systems of reasonably small size. For large-scale systems, however, it is necessary to use a scalable alternative based on vector Lyapunov functions. Earlier works have shown that, under certain conditions, the stability of an interconnected system can be studied through suitable comparison equations. However, finding such comparison equations can be non-trivial. In this work we propose an SOS based systematic procedure to directly compute the comparison equations for interconnected systems with polynomial dynamics. With an example of interacting Van der Pol systems, we illustrate how this facilitates a scalable and parallel approach to stability analysis.

I. INTRODUCTION

Lyapunov function methods have long been used in studying the stability properties of dynamical systems [1], [2]. Finding a Lyapunov function for a given dynamical system, however, is often not an easy task. Recent advances in sum-of-squares (SOS) methods and semi-definite programming [3]–[5] have enabled the algorithmic construction of polynomial Lyapunov functions [6], [7]. However, such sum-of-squares based computational methods become intractable as the system size grows beyond 6-8 states [8], [9]. It is useful to model large-scale systems in the form of many interacting subsystems and study the stability of the full interconnected system using only the subsystem Lyapunov functions. There are different functional forms for the Lyapunov function of the interconnected system, such as a scalar Lyapunov function expressed as a weighted sum of the subsystem Lyapunov functions, or applications of vector Lyapunov functions and comparison principles [10]–[13]. Particularly, the formulations using vector Lyapunov functions are computationally very attractive because of their parallel structure and scalability. Based upon the results on comparison equations [14]–[16], the authors in [17], [18] introduced the concept of vector Lyapunov functions. It was shown that if the subsystem Lyapunov functions and the interactions satisfy certain conditions, then the stability of the interconnected system can be studied by analyzing the stability of a set of linear ordinary differential equations. However, computing these comparison equations, for a given interconnected system, still remained a challenge. In the absence of suitable computational tools, analytical insights were used

*This work was supported by the U.S. Department of Energy through the LANL/LDRD Program.
1 Soumya Kundu is with the Center for Nonlinear Studies and Information Sciences Group (CCS-3), Los Alamos National Laboratory, Los Alamos, USA [email protected] 2 Marian Anghel is with the Information Sciences Group (CCS-3), Los Alamos National Laboratory, Los Alamos, USA [email protected]

to build those comparison equations, such as the trigonometric inequalities in power systems networks [19]. In this work we use sum-of-squares and semi-definite programming methods to study the stability of an interconnected system by computing the comparison equations. While this approach is applicable to any generic dynamical system, we choose a randomly generated network of modified¹ Van der Pol oscillators for illustration. Each Van der Pol oscillator can be represented as a two-state system with state dynamic equations that are polynomials of degree three [20]. The network is then decomposed into many interacting subsystems. The parameters of each subsystem are chosen so that each subsystem is individually stable when the disturbances from its neighbors are zero. An SOS based expanding interior algorithm [6], [7] is used to obtain an estimate of the region of attraction of each such subsystem as a sub-level set of a polynomial Lyapunov function. Finally, SOS optimization is used to compute the linear comparison equation that certifies stability of the network under disturbances. Following some brief background in Sec. II we outline the problem statement in Sec. III. We present the SOS-based direct approach to computing the comparison equations in Sec. IV. Sec. V shows an application of comparison equations to the stability analysis of a network of Van der Pol systems. We conclude the article in Sec. VI.

II. BASIC CONCEPTS AND BACKGROUND

A. Lyapunov Stability Methods

Let us consider the dynamical system

ẋ(t) = f(x(t)), t ≥ 0, x ∈ Rⁿ, f(0) = 0,  (1)

with an equilibrium at the origin², where f : Rⁿ → Rⁿ is locally Lipschitz. The important notions of stability are:

Definition 1 The equilibrium point at the origin is called
1) stable in the sense of Lyapunov (i.s.L) if ∀ε > 0, ∃δ > 0 s.t. ‖x(0)‖₂ < δ ⟹ ‖x(t)‖₂ < ε, ∀t,
2) asymptotically stable if it is stable i.s.L, and ∃δ̃ > 0 s.t. ‖x(0)‖₂ < δ̃ ⟹ lim_{t→+∞} ‖x(t)‖₂ = 0,
3) exponentially stable if it is asymptotically stable, and ∃b, c, δ̂ > 0 s.t. ‖x(0)‖₂ < δ̂ ⟹ ‖x(t)‖₂ < c e^{−bt} ‖x(0)‖₂, ∀t.

¹ We choose the Van der Pol ‘oscillator’ parameters in such a way that these have a stable equilibrium at the origin.
² Note that this is not a restrictive assumption, since by shifting of state variables the origin can always be made an equilibrium point.
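The exponential-stability bound in Definition 1(3) is easy to see on a toy example. The sketch below (illustrative only; the system ẋ = −x and the constants c = 1.1, b = 0.9 are our own choices, not taken from the paper) integrates a scalar linear system with forward Euler and checks the bound ‖x(t)‖ ≤ c e^{−bt} ‖x(0)‖ along the trajectory:

```python
import math

# Toy illustration of Definition 1(3): for the scalar system x' = -x the
# origin is exponentially stable, since |x(t)| = e^{-t}|x(0)|.  We verify
# the bound |x(t)| <= c e^{-bt} |x(0)| numerically with c = 1.1, b = 0.9.
h, steps, x0 = 0.001, 5000, 2.0
x = x0
for k in range(1, steps + 1):
    x = x + h * (-x)                  # forward-Euler step of x' = f(x) = -x
    t = k * h
    assert abs(x) <= 1.1 * math.exp(-0.9 * t) * abs(x0)
```

The slack constants c > 1 and b < 1 absorb the small discretization error of the Euler scheme.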

The Lyapunov stability theorem [1], [21], also called Lyapunov’s second or direct method, presents a sufficient condition of stability through the construction of a certain positive definite function.

Theorem 1 The equilibrium point x = 0 of the dynamical system in (1) is stable i.s.L in D ⊆ Rⁿ if there exists a continuously differentiable positive definite function Ṽ : D → R (henceforth referred to as a Lyapunov function) such that

Ṽ(0) = 0,  (2a)
Ṽ(x) > 0, ∀x ∈ D\{0},  (2b)
and −V̇̃(x) ≥ 0, ∀x ∈ D.  (2c)

If Ṽ satisfies −V̇̃(x) > 0, ∀x ∈ D\{0}, then the equilibrium point at the origin is asymptotically stable in D. Further, the origin is exponentially stable³ in D ⊆ Rⁿ if ∃α > 0 s.t.

−V̇̃(x) ≥ α Ṽ(x), ∀x ∈ D.  (3)

Here V̇̃(x) = ∇Ṽᵀ f(x). When there exists such a function Ṽ(x), the region of attraction (ROA) of the stable equilibrium point at the origin can be (conservatively) estimated as

R := {x ∈ D | Ṽ(x) ≤ γ^max},  (4a)
where γ^max := argmax γ s.t. {x ∈ Rⁿ | Ṽ(x) ≤ γ} ⊆ D.  (4b)

The Lyapunov function can be scaled by γ^max, so that

R := {x ∈ Rⁿ | V(x) ≤ 1},  (5a)
where V(x) = Ṽ(x)/γ^max.  (5b)

Henceforth, for simplicity, we would assume, without any serious loss of generality, that the ROA is estimated by the sub-level set {x | V(x) ≤ 1}.

B. Sum-of-Squares and Positivstellensatz Theorem

Relatively recent studies have shown that sum-of-squares based optimization techniques can be utilized in finding Lyapunov functions by restricting the search space to sum-of-squares polynomials [6], [7], [22], [23]. Let us denote by R[x] the set of all polynomials in x ∈ Rⁿ. Then,

Definition 2 A multivariate polynomial p ∈ R[x], x ∈ Rⁿ, is called a sum-of-squares (SOS) if there exist hᵢ ∈ R[x], i ∈ {1, 2, ..., r}, such that p(x) = Σ_{i=1}^r hᵢ²(x). Further, we denote the set of all SOS polynomials in x ∈ Rⁿ by Σ[x].

Checking if p ∈ R[x] is an SOS is a semi-definite problem which can be solved with the MATLAB toolbox SOSTOOLS [3], [4] along with a semidefinite programming solver such as SeDuMi [5]. The SOS technique can be used to search for polynomial Lyapunov functions by translating (2) into equivalent SOS conditions [3], [4], [6], [24]–[27]. An important result from algebraic geometry, Putinar’s Positivstellensatz theorem [28], [29], helps in translating the SOS conditions into SOS feasibility problems. Putinar’s Positivstellensatz theorem⁴ states,

Theorem 2 Let K = {x ∈ Rⁿ | u₁(x) ≥ 0, ..., u_m(x) ≥ 0} be a compact set, where u_j ∈ R[x], ∀j ∈ {1, ..., m}. Suppose

∃ u ∈ {σ₀ + Σ_j σ_j u_j | σ₀, σ_j ∈ Σ[x], ∀j}, s.t. {x ∈ Rⁿ | u(x) ≥ 0} is compact.  (6)

If p(x) > 0, ∀x ∈ K, then p ∈ {σ₀ + Σ_j σ_j u_j | σ₀, σ_j ∈ Σ[x], ∀j}.

Often, for the uᵢ, ∀i, used in this work, the existence of u(x) in (6) would be guaranteed [29].

C. Linear Comparison Principle

Before finishing this section, let us take a look at a useful result on ordinary differential equations which helps form the framework of stability analysis of interconnected systems via vector Lyapunov functions. Noting that all the elements of the matrix e^{At}, t ≥ 0, where A = [aij] ∈ R^{m×m}, are non-negative if and only if aij ≥ 0, i ≠ j, the authors in [16], [17] proposed the following result:

Lemma 1 Let A = [aij] ∈ R^{m×m} have only non-negative off-diagonal elements, i.e. aij ≥ 0, i ≠ j. Then

v̇(t) ≤ A v(t), t ≥ 0, v ∈ R^m, v(0) = v₀,  (7)

implies v(t) ≤ w(t), ∀t ≥ 0, where

ẇ(t) = A w(t), t ≥ 0, w ∈ R^m, w(0) = v(0) = v₀.  (8)

This result will henceforth be referred to as the ‘linear comparison principle’ and the differential equation in (8) as the ‘comparison equation’.

III. PROBLEM DESCRIPTION

For the rest of this work, let us make the simplifying assumptions that the dynamical system in (1) is in polynomial form⁵, denoted by f ∈ R[x]ⁿ, and that the system in (1) is (locally) asymptotically stable.

A. Decomposed System Model

The dynamical system in (1) can be expressed in the form of m (≥ 2) interacting, and asymptotically stable, subsystems: ∀i = 1, 2, ..., m,

Si : ẋi = fi(xi) + gi(x), xi ∈ R^{ni}, x ∈ Rⁿ,  (9a)
fi(0) = 0,  (9b)
gi(x̂i) = 0, ∀x̂i ∈ {x ∈ Rⁿ | xj = 0, ∀j ≠ i},  (9c)
where x = [x₁ᵀ, x₂ᵀ, ..., x_mᵀ]ᵀ ∈ Rⁿ,  (9d)
and n = Σ_{i=1}^m ni, xi ∩ xj = ∅.  (9e)

³ We will be referring to α > 0 in (3) as the ‘self-decay rate’.
⁴ For other versions of the Positivstellensatz theorem please refer to [29].
⁵ Non-polynomial dynamics can be recast into an equivalent polynomial form with the introduction of additional state variables and suitable equality constraints [7], [24], [26], [30].

Here xi represents the states that belong to the i-th subsystem Si, fi ∈ R[xi]^{ni} denotes the isolated subsystem dynamics, and gi ∈ R[x]^{ni} represents the neighbor interactions. Let us assume that the interactions can be expressed as: ∀i ∈ {1, 2, ..., m},

gi(x) = Σ_{j≠i} gij(xi, xj),  (10)

where gij ∈ R[xi, xj]^{ni} quantifies how subsystem Sj affects the dynamics of subsystem Si. Note that (10) is not a very restrictive assumption, since given the choice of states xi of the subsystem Si, the rest of the subsystems can always be chosen in a way such that (10) holds. We denote by

Ni := {i} ∪ {j | ∃ xi, xj s.t. gij(xi, xj) ≠ 0},  (11a)
and x̄i := ∪_{j∈Ni} xj,  (11b)

the set of indices of the subsystems in the neighborhood of Si (including the subsystem itself) and the states that belong to this neighborhood, respectively. The next step is to characterize the stability properties of the isolated subsystems

∀i ∈ {1, 2, ..., m}, ẋi = fi(xi), xi ∈ R^{ni},

by computing a polynomial Lyapunov function Vi ∈ R[xi] for each i, and the corresponding estimate of the ROA as in (5). An SOS based expanding interior algorithm [6], [7] is used to iteratively enlarge the estimate of the ROA by finding a ‘better’ Lyapunov function at each step of the algorithm. At the completion of this iterative algorithm, the stability of each isolated subsystem (assuming no interaction) is quantified by its Lyapunov function Vi ∈ R[xi], with a final estimate of the domain of attraction given by

Ri⁰ := {xi ∈ R^{ni} | Vi(xi) ≤ 1}, ∀i = 1, 2, ..., m.  (12)

B. Stability under Interactions

Let us define the domain

R⁰ := {x ∈ Rⁿ | xi ∈ Ri⁰, ∀i = 1, 2, ..., m},  (13)

which could be interpreted as the ROA of the ‘free’ interconnected system (9), in the absence of all the interactions. The disturbances coming from the neighbors can be expressed via the subsystem Lyapunov function level-sets. While the equilibrium at the origin corresponds to the level-sets Vi(0) = 0, ∀i, any disturbance (or initial condition) away from this equilibrium would result in positive level-sets Vi(xi(0)) = γi⁰ ∈ (0, 1] for some or all of the subsystems. A necessary and sufficient condition of asymptotic stability (Definition 1) can then be translated into the condition

∀i, Vi(xi(0)) = γi⁰ ⟹ ∀i, lim_{t→+∞} Vi(xi(t)) = 0,  (14)

where xi(t), t > 0, are solutions of the coupled dynamics in (9). Even though (14) reduces the dimensionality of the problem, it still remains a generally non-trivial problem. An attractive, and scalable, alternative approach is to construct a vector Lyapunov function V : Rⁿ → R^m,

V(x) := [V₁(x₁) V₂(x₂) ... V_m(x_m)]ᵀ,  (15)

and use a comparison equation to certify that the condition (14) holds. Restricting our focus to the linear comparison principle (Lemma 1), the aim is to seek an A = [aij] ∈ R^{m×m} and a domain D ⊂ R⁰ such that

V̇(x) ≤ A V(x), ∀x ∈ D ⊂ R⁰,  (16a)
where aij ≥ 0, ∀i ≠ j,  (16b)
A = [aij] is Hurwitz, and  (16c)
D is invariant under the dynamics (1).  (16d)
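The mechanism behind (16) is the componentwise bound of Lemma 1. As a quick numerical illustration (a toy 2×2 example with hypothetical numbers, not taken from the paper), the sketch below Euler-integrates a trajectory satisfying v̇ ≤ Av alongside the comparison equation ẇ = Aw and checks that v(t) ≤ w(t) componentwise:

```python
# Toy illustration of the linear comparison principle (Lemma 1):
# if A has non-negative off-diagonal entries, v'(t) <= A v(t)
# implies v(t) <= w(t), where w' = A w and w(0) = v(0).
# The matrix A and the slack d below are hypothetical.
A = [[-1.0, 0.5],
     [0.3, -1.0]]          # off-diagonal entries are non-negative

def matvec(M, z):
    return [sum(M[i][j] * z[j] for j in range(len(z))) for i in range(len(z))]

h, steps = 0.01, 2000
v = [1.0, 0.5]             # evolves as v' = A v - d with d >= 0, hence v' <= A v
w = list(v)                # comparison equation w' = A w, same initial condition
d = [0.1, 0.1]             # non-negative slack making the inequality strict

for _ in range(steps):
    Av, Aw = matvec(A, v), matvec(A, w)
    v = [v[i] + h * (Av[i] - d[i]) for i in range(2)]
    w = [w[i] + h * Aw[i] for i in range(2)]

assert all(v[i] <= w[i] + 1e-9 for i in range(2))  # componentwise bound holds
```

The non-negative off-diagonals are essential: they keep the gap w(t) − v(t) non-negative under the flow, which is exactly why (16b) is required of the comparison matrix.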

If there exist a ‘comparison matrix’ A = [aij] and D ⊂ R⁰ satisfying (16), then any x(0) ∈ D would guarantee exponential convergence of V(x(t)) to the origin (Lemma 1),

∃ b, c > 0 s.t. ‖V(x(t))‖₂ < c e^{−bt} ‖V(x(0))‖₂, ∀t > 0,  (17)

which also translates into exponential convergence of the states themselves [10]. Note that D ⊂ R⁰, if it exists, presents an estimate of the ROA of the full interconnected system.

IV. COMPUTING THE COMPARISON EQUATION

A. Traditional Approach

In [10], [11], [13], [19], and related works, the authors laid out a formulation of the linear comparison equation using certain conditions on the Lyapunov functions and the neighbor interactions. It was observed that if there exists a set of Lyapunov functions vi : R^{ni} → R, ∀i = 1, 2, ..., m, satisfying the following conditions: ∀i ∈ {1, 2, ..., m}, ∃ η̃i1, η̃i2, η̃i3 > 0 such that, ∀xi ∈ Di ⊂ Ri⁰,

η̃i1 ‖xi‖₂ ≤ vi(xi) ≤ η̃i2 ‖xi‖₂,  (18a)
and (∇vi)ᵀ fi ≤ −η̃i3 ‖xi‖₂,  (18b)

and if the interaction terms in (10) satisfy: ∀i ∈ {1, ..., m}, ∀j ∈ Ni\{i}, ∃ ζ̃ij > 0 such that,

∀xi ∈ Di, ∀xj ∈ Dj, (∇vi)ᵀ gij ≤ ζ̃ij ‖xj‖₂,  (19)

then the following comparison equation can be formed,

∀x(t) ∈ D, v̇(x(t)) ≤ Ã v(x(t)), Ã = [ãij] ∈ R^{m×m},  (20a)
where v(x) = [v₁(x₁) v₂(x₂) ... v_m(x_m)]ᵀ,  (20b)
D = {x ∈ Rⁿ | xi ∈ Di, ∀i ∈ {1, ..., m}} ⊂ R⁰,  (20c)
and ãij = { −η̃i3/η̃i2, j = i;  ζ̃ij/η̃j1, j ∈ Ni\{i};  0, j ∉ Ni }, ∀i, ∀j.  (20d)

If the ‘comparison matrix’ Ã = [ãij] is Hurwitz, then any invariant domain R ⊆ D provides an estimate of a region of exponential stability of the full system [11], [19].

B. Motivation for Direct Approach

While this approach provides very useful analytical insights into the construction of the comparison matrix Ã = [ãij], it has certain computational issues. It requires finding the bounds in (18) and (19), and also the Lyapunov functions vi, ∀i, that satisfy those. Clearly the polynomial Lyapunov functions Vi, ∀i, cannot satisfy the linear bounds in (18). Assuming that the polynomial Lyapunov functions Vi, ∀i, found using the expanding interior algorithm (Sec. III-A) are quadratic, we can define vi := √Vi, ∀i, which would satisfy the conditions in (18) [11], [19]. In such a case, one needs to find the following bounds: ∀i, ∀j ∈ Ni\{i}, ∀xi ∈ Di, ∀xj ∈ Dj,

ηi1 ‖xi‖₂² ≤ Vi(xi) ≤ ηi2 ‖xi‖₂²,  (21a)
∇Viᵀ fi ≤ −ηi3 ‖xi‖₂²,  (21b)
and ∇Viᵀ gij ≤ ζij ‖xi‖₂ ‖xj‖₂,  (21c)

for some positive scalars ηi1, ηi2, ηi3, ζij. Then, using simple algebra, the bounds in (18) and (19) can be obtained as: ∀i, ∀j ∈ Ni\{i},

η̃i1 = √ηi1, η̃i2 = √ηi2,  (22a)
η̃i3 = ηi3/(2√ηi2) and ζ̃ij = ζij/(2√ηi1).  (22b)
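The bookkeeping in (22) and (20d) is easy to mechanize. The sketch below assembles a two-subsystem comparison matrix Ã from hypothetical bound constants η and ζ (the numbers are made up for illustration, not computed from any actual system) and checks the Gershgorin-type sufficient condition for Ã to be Hurwitz:

```python
import math

# Assemble the 'traditional' comparison matrix (20d) from the quadratic
# Lyapunov bounds (21), converted via (22).  All constants are hypothetical.
eta = {1: (0.5, 1.5, 1.0), 2: (0.5, 1.5, 1.0)}   # (eta_i1, eta_i2, eta_i3)
zeta = {(1, 2): 0.2, (2, 1): 0.2}                # interaction bounds zeta_ij

def a_tilde(i, j):
    e_i1, e_i2, e_i3 = eta[i]
    if i == j:
        # diagonal (20d): -eta~_i3 / eta~_i2 = -eta_i3 / (2 eta_i2)
        return -(e_i3 / (2 * math.sqrt(e_i2))) / math.sqrt(e_i2)
    # off-diagonal (20d): zeta~_ij / eta~_j1 = zeta_ij / (2 sqrt(eta_i1) sqrt(eta_j1))
    return (zeta[(i, j)] / (2 * math.sqrt(e_i1))) / math.sqrt(eta[j][0])

A_tilde = [[a_tilde(i, j) for j in (1, 2)] for i in (1, 2)]

# Strict diagonal dominance with negative diagonal entries implies,
# by Gershgorin's circle theorem, that A_tilde is Hurwitz.
for i in range(2):
    assert A_tilde[i][i] < 0
    assert sum(abs(A_tilde[i][j]) for j in range(2) if j != i) < abs(A_tilde[i][i])
```

With these illustrative constants the diagonal entries are −1/3 and the off-diagonals 0.2, so Ã is diagonally dominant and hence Hurwitz.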

Thus the computation of each element of the comparison matrix Ã in (20) requires multiple optimization steps. We may also note that some of the bounds in (18) and (19), while convenient for analytical insights, need not be optimal for computing a Hurwitz comparison matrix. For example, in (19), (∇vi)ᵀ gij is a function of both xi and xj but is bounded by using only the norm of xj.

C. SOS Based Direct Approach

We propose to use SOS methods to directly compute the comparison equation in (16), in a decentralized way, by calculating each row of A = [aij] directly at each subsystem level. Note that, in (16), we will be using quadratic (or, in general, polynomial) Lyapunov functions which do not satisfy the bounds (18)-(19). But we may observe that,

Lemma 2 If there exist Lyapunov functions vi, for each i ∈ {1, 2, ..., m}, satisfying the comparison equation in (20a)-(20b) for some matrix Ã = [ãij] with

ãij ≥ 0, ∀i ≠ j, and Σ_{j=1}^m ãij < 0, ∀i,

then there exists another matrix A = [aij] satisfying the comparison equation (16a) with Vi := vi², ∀i, with

aij ≥ 0, ∀i ≠ j, and Σ_{j=1}^m aij < Σ_{j=1}^m ãij < 0, ∀i.

Proof: Please refer to Appendix A.

It will be useful to note here that an application of Gershgorin’s circle theorem [31] says that if a matrix with negative diagonal elements is strictly diagonally dominant⁶, then the matrix is Hurwitz. We are now in a position to outline the SOS based procedure to directly compute the matrix A = [aij] in the comparison equation (16).

In this work, we are interested in Di, ∀i, of the form

∀i, Di := {xi ∈ R^{ni} | Vi(xi) ≤ γi⁰}, γi⁰ ∈ (0, 1),  (23a)
and D := ∩_i {x ∈ Rⁿ | Vi(xi) ≤ γi⁰}.  (23b)

Note that we exclude the boundary of the isolated subsystem ROA, γi⁰ = 1, ∀i, for reasons explained later. The comparison equation in (16a) can then be translated into

∀i, V̇i(xi) ≤ Σ_{j=1}^m aij Vj(xj), ∀x ∈ D,  (24a)
i.e., V̇i ≤ Σ_{j∈Ni} aij Vj, when Vj ≤ γj⁰, ∀j ∈ Ni,  (24b)

since we know that aij = 0, ∀j ∉ Ni. Using the Positivstellensatz theorem (Theorem 2), with ui := γi⁰ − Vi(xi) and K = D, we can cast (24) into an SOS feasibility problem,

−∇Viᵀ (fi + gi) + Σ_{j∈Ni} ( aij Vj − σij (γj⁰ − Vj) ) ∈ Σ[x̄i],  (25a)
with σij ∈ Σ[x̄i], ∀i ∈ {1, 2, ..., m}, ∀j ∈ Ni,  (25b)

where x̄i was defined in (11). The goal is to find the ‘optimal’ scalars aij, ∀i, j ∈ Ni, satisfying (25) so as to obtain the tightest possible bound in (16a). We can thus formulate the following SOS optimization problem: ∀i ∈ {1, 2, ..., m},

min_{aij, σij} Σ_{j∈Ni} aij, subject to (25).  (26)

This simple SOS formulation helps us find the comparison equation (16) in a decentralized way, by computing each row of the comparison matrix A = [aij] in a single optimization problem at each subsystem level. The optimization problem can be easily implemented on a parallel platform, with the complexity of the problem essentially dependent on the size of the largest neighborhood Ni. Further note that if the minimal values of all the row-sums in (26) are negative, then the matrix A = [aij] thus found is strictly diagonally dominant with negative diagonal entries, and hence Hurwitz [31]. However, if Σ_{j=1}^m aij ≥ 0 for any i, then the eigenvalues of A = [aij] need to be computed.

Finally, a note on the invariance of the domain D, which, along with a Hurwitz A = [aij], guarantees that D is a domain of exponential stability, as noted in (16). According to [11], an estimate of the ROA can be given by

R := {x ∈ D | max_i ( Vi(xi)/pi ) ≤ min_j ( γj⁰/pj )},  (27a)
where pi > 0, ∀i ∈ {1, 2, ..., m},  (27b)
and A p < 0, p := (p₁, p₂, ..., p_m)ᵀ.  (27c)

Then it is easy to see that R ≡ D, if

A γ⁰ < 0, γ⁰ := (γ₁⁰, γ₂⁰, ..., γ_m⁰)ᵀ.  (28)

⁶ A = [aij] is strictly diagonally dominant if Σ_{j≠i} aij < |aii|, ∀i.
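Once the rows of A have been computed by (26), the two certificates, Hurwitz-ness via negative row-sums and invariance of D via (28), reduce to mechanical checks. The sketch below runs them on a hypothetical 3×3 comparison matrix and level-set vector (illustrative numbers only, not the matrices computed in Sec. V):

```python
# Post-processing checks on a comparison matrix A and level-set vector
# gamma0, as would be produced by the direct approach (26).  The numbers
# below are hypothetical; in practice A comes from the SOS optimization.
A = [[-0.5, 0.1, 0.2],
     [0.1, -0.6, 0.1],
     [0.2, 0.0, -0.7]]
gamma0 = [0.4, 0.5, 0.3]

m = len(A)

# (16b): off-diagonal entries must be non-negative.
assert all(A[i][j] >= 0 for i in range(m) for j in range(m) if i != j)

# Negative row-sums => strict diagonal dominance with negative diagonal,
# hence A is Hurwitz by Gershgorin's circle theorem.
row_sums = [sum(A[i]) for i in range(m)]
assert all(s < 0 for s in row_sums)

# (28): A gamma0 < 0 (componentwise) guarantees R == D, i.e. D is invariant.
A_gamma = [sum(A[i][j] * gamma0[j] for j in range(m)) for i in range(m)]
assert all(v < 0 for v in A_gamma)
```

If any row-sum is non-negative the Hurwitz property must instead be decided from the eigenvalues of A, as noted above.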

V. NUMERICAL EXAMPLE

A. Model Description

We consider a network of nine Van der Pol ‘oscillators’ [20], with the parameters of each oscillator chosen to make them individually stable. Each Van der Pol oscillator constitutes a subsystem, with the interconnections given by the neighborhoods

N₁: {1, 2, 5, 9}, N₂: {2, 1, 3}, N₃: {3, 2, 8}, N₄: {4, 6, 7}, N₅: {5, 1, 6}, N₆: {6, 4, 5}, N₇: {7, 4, 8, 9}, N₈: {8, 3, 7}, N₉: {9, 1, 7}.  (29)

The dynamics of each oscillator, in the presence of the neighbor interactions, is given by: ∀j ∈ {1, 2, ..., 9},

ẋ_{j,1} = x_{j,2},  (30a)
ẋ_{j,2} = µj x_{j,2} (1 − x²_{j,1}) − x_{j,1} + x_{j,1} Σ_{k∈Nj\{j}} βjk x_{k,2},  (30b)

where µj, ∀j, are chosen randomly from (−3, −1) and the interaction coefficients βjk, ∀j, ∀k ∈ Nj\{j}, are chosen randomly from (−0.4, 0.4). Using the expanding interior algorithm, we find estimates of the ROAs of the isolated, or ‘free’, subsystems via quadratic Lyapunov functions. As an example, Fig. 1 shows a comparison of the true ROA of isolated subsystem 9 and an estimate using a quadratic Lyapunov function,

R₉⁰ = {(x_{9,1}, x_{9,2}) | V₉ ≤ 1},  (31a)
where V₉ = 0.595 x²_{9,1} + 0.227 x_{9,1} x_{9,2} + 0.520 x²_{9,2}.  (31b)

Fig. 1. Comparison of estimated and true ROA for isolated subsystem 9.

B. Exponential Stability of Isolated Subsystems

Existence of a comparison matrix A = [aij] requires that the diagonal entries aii are negative, which necessitates that

∀i, ∃αi > 0, so that ∇Viᵀ fi ≤ −αi Vi, ∀xi ∈ Di,  (32)

where Di, ∀i, were defined in (23). Note that the condition (32) is a sufficient condition of exponential stability for the isolated subsystems, as in (3). We can use SOS optimization, similar to (25)-(26), to find the maximal αi, ∀i, the ‘self-decay rates’, for a set of given γi⁰, ∀i. Higher values of αi indicate a better chance of finding a Hurwitz comparison matrix. In Fig. 2 we show the variation of αi for each i, when the initial level-set γi⁰ is varied from 0 to 1. For each subsystem, as γi⁰ approaches 1, αi approaches 0. This shows that it is not possible to obtain a Hurwitz comparison matrix when the initial conditions lie close to the boundary of the estimated ROAs, and hence the exclusion of γi⁰ = 1 in (23).

Fig. 2. Evolution of self-decay rates against varying initial level-sets.
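As a sanity check on the model (30), the sketch below simulates a single isolated oscillator (all coupling coefficients βjk set to zero) and verifies that a small initial condition decays to the origin. The damping parameter µ = −2 and the initial state are our own illustrative choices from the stated ranges, not the randomly drawn values used in the paper:

```python
# Forward-Euler simulation of one isolated 'modified' Van der Pol
# oscillator (30) with all interaction coefficients beta_jk = 0.
# mu = -2 is a hypothetical draw from the paper's range (-3, -1).
mu = -2.0
x1, x2 = 0.5, 0.0          # illustrative initial condition
h, steps = 0.01, 5000      # integrate up to t = 50

for _ in range(steps):
    dx1 = x2                                   # (30a)
    dx2 = mu * x2 * (1 - x1 * x1) - x1         # (30b) with zero coupling
    x1, x2 = x1 + h * dx1, x2 + h * dx2

# With mu < 0 and |x1| < 1 the 'damping' term mu*(1 - x1^2)*x2 dissipates
# energy, so the trajectory decays to the stable equilibrium at the origin.
assert x1 * x1 + x2 * x2 < 1e-4
```

Note the sign reversal relative to the classical Van der Pol oscillator: with µ < 0 the origin is stable rather than surrounded by a limit cycle, which is exactly the ‘modification’ mentioned in the introduction.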

C. Comparison Equation

We recall that two sufficient criteria for a domain D to be an estimate of the ROA are that the comparison matrix in (16) is Hurwitz and that D is an invariant domain under the dynamics (9). To compare the performance of the traditional approach and the direct approach, we need to monitor how well the above mentioned criteria are satisfied for a set of arbitrarily chosen D. While this would require an exhaustive simulation over all possible domains D defined in (23), we choose to examine only those D where γ₁⁰ = γ₂⁰ = ··· = γ₉⁰ = γ*, i.e.

D := ∩_{i=1}^9 {x ∈ Rⁿ | Vi ≤ γ*},  (33)

for some γ* ∈ (0, 1). For each γ*, and domain D, we compute the comparison matrices using the traditional and the direct approaches. Denoting by Re(λ) the real parts of the eigenvalues of a matrix, we note that if the maximum of Re(λ) is negative, then the matrix is Hurwitz. Further, by applying (28), the domain (33) is guaranteed to be invariant if the maximum row-sum of the comparison matrix is negative. Fig. 3 shows the evolution of these two properties (maximum Re(λ) and maximum row-sum) for the comparison matrices, computed using the two approaches, for a range of γ*. We note that both the maximum row-sum and the maximum Re(λ) generally increase as γ* increases from 0 to 1, indicating that as the domain D ‘expands’, it becomes more difficult to certify stability. We also note that, for both approaches, the maximum row-sum becomes positive before the maximum Re(λ), indicating that the ‘invariance’ criterion is lost before the ‘Hurwitz’ criterion. Significantly, we also note that both the Hurwitz and invariance criteria are satisfied for a wider range of γ* in the case of the direct approach than in the case of the traditional one. Thus, with regards to both criteria, the direct approach is seen to perform better than the traditional approach.

Fig. 3. Evolution of the properties of comparison matrices, computed using traditional and direct approach, against varying initial level-sets.

D. Test Case

Let us illustrate how this method can be used to certify exponential convergence of a given initial condition to the origin. The system dynamics is evolved from a randomly generated initial condition, and is found to converge to the origin. Fig. 4 shows the evolution of the states belonging to subsystems 2, 6, 7 and 8.

Fig. 4. Evolution of subsystem states under an arbitrary disturbance.

The initial condition yields the following level sets,

γ⁰ = [0.08, 0.56, 0.58, 0.31, 0.08, 0.61, 0.18, 0.45, 0.14]ᵀ,

which is then used to define the domain D in (23). Then the SOS-based direct approach is used to compute the comparison matrix A ∈ R⁹ˣ⁹, with maximum Re(λ) of −0.078 and Aγ⁰ < 0. The solution, w(t) ∈ R⁹, of the corresponding comparison equation, ẇ = A w, w(0) = γ⁰, is plotted against the actual Lyapunov level-sets in Fig. 5, for subsystems 2, 6, 7 and 8. The trajectories w(t) exponentially converge to zero and, from Lemma 1, provide an upper bound on the corresponding subsystem Lyapunov function level-sets.

Fig. 5. Comparison of the actual Lyapunov level sets and their upper bounds from the linear comparison equation.

When the same procedure is carried out with the traditional approach, we obtain a Hurwitz comparison matrix, Ã, with maximum Re(λ) of −0.001, but with Ã (γ⁰)^{1/2} > 0, thus violating the invariance condition.

VI. CONCLUSIONS AND FUTURE WORKS

A. Conclusions

We have presented an SOS based direct approach to compute the linear comparison equation for stability analysis of interconnected systems. We have also discussed the traditional approach to obtaining the comparison equations, and shown how the direct approach can yield ‘better’, or less conservative, certificates of exponential stability. Using a network of Van der Pol systems we have presented a comparison of the two approaches. The proposed approach can be implemented on a suitable parallel platform where each row of the comparison matrix, corresponding to each subsystem, is computed in parallel.

B. Future Works

A decentralized control framework can be envisioned in which each subsystem computes a local control law that guarantees satisfaction of the Hurwitz and invariance conditions. SOS methods can be used to extend the stability analysis to higher order, and more general, comparison equations. It would also be interesting to see how the use of higher order (for example, quartic) Lyapunov functions in the comparison equation affects the conservativeness of the stability certificates.

REFERENCES

[1] A. M. Lyapunov, The General Problem of the Stability of Motion. Kharkov, Russia: Kharkov Math. Soc., 1892.
[2] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control. Princeton, NJ: Princeton University Press, 2008.
[3] A. Papachristodoulou, J. Anderson, G. Valmorbida, S. Prajna, P. Seiler, and P. A. Parrilo, “SOSTOOLS: Sum of squares optimization toolbox for MATLAB,” 2013, available from http://www.eng.ox.ac.uk/control/sostools.
[4] S. Prajna, A. Papachristodoulou, P. Seiler, and P. A. Parrilo, Positive Polynomials in Control. Berlin, Heidelberg: Springer-Verlag, 2005, ch. SOSTOOLS and Its Control Applications, pp. 273–292.
[5] J. F. Sturm, “Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones,” Optimization Methods and Software, vol. 11-12, pp. 625–653, Dec. 1999, software available at http://fewcal.kub.nl/sturm/software/sedumi.html.
[6] Z. W. Jarvis-Wloszek, “Lyapunov based analysis and controller synthesis for polynomial systems using sum-of-squares optimization,” Ph.D. dissertation, University of California, Berkeley, CA, 2003.
[7] M. Anghel, F. Milano, and A. Papachristodoulou, “Algorithmic construction of Lyapunov functions for power system stability analysis,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 60, no. 9, pp. 2533–2546, Sept. 2013.
[8] J. Anderson and A. Papachristodoulou, “A decomposition technique for nonlinear dynamical system analysis,” IEEE Transactions on Automatic Control, vol. 57, pp. 1516–1521, June 2012.
[9] ——, “A network decomposition approach for efficient sum of squares programming based analysis,” in American Control Conference (ACC), 2010, June 2010, pp. 4492–4497.
[10] D. Siljak, “Stability of large-scale systems under structural perturbations,” IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-2, no. 5, pp. 657–663, Nov. 1972.
[11] S. Weissenberger, “Stability regions of large-scale systems,” Automatica, vol. 9, no. 6, pp. 653–663, 1973.
[12] A. N. Michel, “On the status of stability of interconnected systems,” IEEE Transactions on Automatic Control, vol. 28, no. 6, pp. 639–653, 1983.
[13] M. Araki, “Stability of large-scale nonlinear systems: quadratic-order theory of composite-system method using M-matrices,” IEEE Transactions on Automatic Control, vol. 23, no. 2, pp. 129–142, 1978.
[14] R. Conti, “Sulla prolungabilità delle soluzioni di un sistema di equazioni differenziali ordinarie,” Bollettino dell’Unione Matematica Italiana, vol. 11, no. 4, pp. 510–514, 1956.
[15] F. Brauer, “Global behavior of solutions of ordinary differential equations,” Journal of Mathematical Analysis and Applications, vol. 2, no. 1, pp. 145–158, 1961.
[16] E. F. Beckenbach and R. Bellman, Inequalities. New York/Berlin: Springer-Verlag, 1961.
[17] R. Bellman, “Vector Lyapunov functions,” Journal of the Society for Industrial & Applied Mathematics, Series A: Control, vol. 1, no. 1, pp. 32–34, 1962.
[18] F. N. Bailey, “The application of Lyapunov’s second method to interconnected systems,” J. SIAM Control, vol. 3, pp. 443–462, 1966.
[19] L. Jocic, M. Ribbens-Pavella, and D. Siljak, “Multimachine power systems: Stability, decomposition, and aggregation,” IEEE Transactions on Automatic Control, vol. 23, no. 2, pp. 325–332, Apr. 1978.
[20] B. Van der Pol, “On relaxation-oscillations,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 2, no. 11, pp. 978–992, 1926.
[21] J.-J. E. Slotine, W. Li et al., Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall, 1991, vol. 199, no. 1.
[22] P. A. Parrilo, “Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization,” Ph.D. dissertation, Caltech, Pasadena, CA, 2000.
[23] W. Tan, “Nonlinear control analysis and synthesis using sum-of-squares programming,” Ph.D. dissertation, University of California, Berkeley, CA, 2006.
[24] A. Papachristodoulou and S. Prajna, “On the construction of Lyapunov functions using the sum of squares decomposition,” in Proceedings of the IEEE Conference on Decision and Control, Dec. 2002, pp. 3482–3487.
[25] Z. J. Wloszek, R. Feeley, W. Tan, K. Sun, and A. Packard, Positive Polynomials in Control. Berlin, Heidelberg: Springer-Verlag, 2005.
[26] A. Papachristodoulou and S. Prajna, Positive Polynomials in Control. Berlin, Heidelberg: Springer-Verlag, 2005.
[27] ——, “A tutorial on sum of squares techniques for systems analysis,” in Proceedings of the 2005 American Control Conference, June 2005, pp. 2686–2700.
[28] M. Putinar, “Positive polynomials on compact semi-algebraic sets,” Indiana University Mathematics Journal, vol. 42, no. 3, pp. 969–984, 1993.
[29] J.-B. Lasserre, Moments, Positive Polynomials and Their Applications. World Scientific, 2009, vol. 1.
[30] M. Anghel, J. Anderson, and A. Papachristodoulou, “Stability analysis of power systems using network decomposition and local gain analysis,” in Bulk Power System Dynamics and Control - IX Optimization, Security and Control of the Emerging Power Grid (IREP), 2013 IREP Symposium. IEEE, 2013, pp. 1–7.
[31] H. E. Bell, “Gershgorin’s theorem and the zeros of polynomials,” American Mathematical Monthly, pp. 292–295, 1965.

APPENDIX

A. Proof of Lemma 2

Since Vi = vi², ∀i, and ãij ≥ 0, ∀i ≠ j, we have: ∀i ∈ {1, 2, ..., m},

V̇i ≤ 2vi Σ_{j=1}^m ãij vj
   = 2ãii Vi + 2 Σ_{j≠i} ãij vi vj
   ≤ 2ãii Vi + Σ_{j≠i} ãij (Vi + Vj)
   = ( ãii + Σ_{j=1}^m ãij ) Vi + Σ_{j≠i} ãij Vj.  (34)

Choosing aii = ãii + Σ_{j=1}^m ãij, ∀i, and aij = ãij, ∀i ≠ j, and recalling that Σ_{j=1}^m ãij < 0 (so that Σ_{j=1}^m aij = 2 Σ_{j=1}^m ãij < Σ_{j=1}^m ãij < 0), we may conclude the proof.
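The bookkeeping in the proof is easy to check numerically. The sketch below takes a hypothetical Ã satisfying the hypotheses of Lemma 2, constructs A as in the proof, and verifies that A keeps non-negative off-diagonals and has row-sums equal to 2 Σ_j ãij, hence strictly smaller than the (already negative) row-sums of Ã:

```python
# Numerical check of the construction in the proof of Lemma 2.
# A_tilde is a hypothetical matrix with non-negative off-diagonal
# entries and negative row-sums, as required by the lemma.
A_tilde = [[-1.0, 0.3],
           [0.2, -0.8]]
m = len(A_tilde)
row_t = [sum(A_tilde[i]) for i in range(m)]
assert all(s < 0 for s in row_t)

# Construct A per the proof: a_ii = a~_ii + sum_j a~_ij, a_ij = a~_ij (i != j).
A = [[A_tilde[i][i] + row_t[i] if i == j else A_tilde[i][j]
      for j in range(m)] for i in range(m)]

row_a = [sum(A[i]) for i in range(m)]
for i in range(m):
    assert all(A[i][j] >= 0 for j in range(m) if j != i)   # off-diagonals kept
    assert abs(row_a[i] - 2 * row_t[i]) < 1e-12            # row-sum identity
    assert row_a[i] < row_t[i] < 0                         # Lemma 2 claim
```

In particular, the constructed A inherits strict diagonal dominance with negative diagonal entries, so it is again Hurwitz by Gershgorin's circle theorem.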