IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 54, NO. 9, SEPTEMBER 2009


Kalman Filtering Over Graphs: Theory and Applications

Ling Shi, Member, IEEE

Abstract—In this technical note we consider the problem of distributed discrete-time state estimation over sensor networks. Given a graph that represents the sensor communications, we derive the optimal estimation algorithm at each sensor. We further provide a closed-form expression for the steady-state error covariance matrices when the communication graph reduces to a directed tree. We then apply the developed theoretical tools to compare the performance of two sensor trees and to convert a random packet-delay model to a random packet-dropping model. Examples are provided throughout the technical note to support the theory.

Index Terms—Kalman filter.

I. INTRODUCTION

Advances in fabrication, modern sensor and communication technologies, and computer architecture have enabled a variety of new networked sensing and control applications. For example, wireless sensor networks form an important class of such applications and have attracted much attention in the past few years. Sensor networks can be used for environment and habitat monitoring, health care, home and office automation, traffic control, etc. [1]. This area of research brings together researchers from computer science, communication, control, etc. [2].

In many wireless sensor network applications, there is an economic incentive toward using off-the-shelf sensors and standardized communication solutions. A consequence of this is that the individual hardware components may be of relatively low quality and that communication resources are quite limited. Due to the limited communication resources, data packets generated at a particular time may arrive at the sensors at variable times, not necessarily in order, and sometimes not at all. Estimation and control over such resource-constrained networks thus require new design paradigms beyond traditional sampled-data control.

Manuscript received March 27, 2009; revised June 05, 2009. First published August 11, 2009; current version published September 04, 2009. Recommended by Associate Editor S. Mascolo. The author is with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAC.2009.2026851

Fig. 1. State estimation using a wireless sensor network.

For example, consider the problem of state estimation over such a network using a Kalman filter. The Kalman filter is a well-established methodology for model-based fusion of sensor data [3]. In the standard Kalman filter, it is assumed that sensor data are transmitted along perfect communication channels and are available to the estimator instantaneously, and no interaction between communication and control is considered. Kalman filtering under certain information constraints, such as decentralized implementation, has been extensively studied [4]. Implementations in which the computations are distributed among network nodes were considered by Alriksson and Rantzer [5]. Sinopoli et al. [6] studied Kalman filtering with intermittent sensor observations and showed that there exists a critical packet arrival rate below which the expected value of the estimation error covariance matrix becomes unbounded. The problem of Kalman filtering for systems with delayed measurements is not new and was studied even before the emergence of networked control [7], [8]. It is well known that discrete-time systems with constant or known time-varying bounded measurement delays may be handled by state augmentation in conjunction with standard Kalman filtering, or by the reorganized innovation approach [9].

This technical note focuses on developing theoretical tools for distributed estimation over sensor networks. The main contributions are summarized as follows. 1) Given an undirected graph G that represents the sensor communications, we provide an optimal estimation algorithm at each sensor. The algorithm is fully distributed and can deal with arbitrary data packet drops in the network. 2) When the communication graph reduces to a directed tree, we provide an exact expression for the steady-state error covariance matrices at each sensor. 
3) We apply the developed theoretical tools to compare the performance of two sensor trees and convert a random packet-delay model to a random packet-dropping model. The rest of the technical note is organized as follows. In Section II, we give the mathematical models of the considered problems, and provide some preliminary results on Kalman filtering to facilitate the analysis in the remaining sections. In Section III, we present the main result of the technical note. Some concluding remarks and discussions are given in Section IV. II. PROBLEM SETUP Consider the problem of distributed state estimation over a wireless sensor network (Fig. 1). The process dynamics is described by

x_{k+1} = A x_k + w_k.  (1)

A wireless sensor network consisting of N sensors {S_0, S_1, ..., S_{N-1}} is used to measure the state. When S_i takes a measurement of the state in (1), it returns

y^i_k = H_i x_k + v^i_k,  i = 0, 1, ..., N-1.  (2)
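As a concrete illustration of the model in (1)–(2), the following sketch simulates the process and two per-sensor measurements. All numerical values here (A, Q, the H_i, and the noise covariances) are illustrative assumptions, not parameters from the note:

```python
# Minimal simulation sketch of (1)-(2); all numeric parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed dynamics matrix
Q = 0.5 * np.eye(2)                      # assumed process noise covariance
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]  # two assumed sensors
Pi = [0.5 * np.eye(1), 0.5 * np.eye(1)]  # assumed measurement noise covariances

def step(x):
    """One step of x_{k+1} = A x_k + w_k with w_k ~ N(0, Q)."""
    w = rng.multivariate_normal(np.zeros(2), Q)
    return A @ x + w

def measure(x, i):
    """y^i_k = H_i x_k + v^i_k with v^i_k ~ N(0, Pi_i)."""
    v = rng.multivariate_normal(np.zeros(1), Pi[i])
    return H[i] @ x + v

x = np.zeros(2)
for k in range(10):
    x = step(x)
ys = [measure(x, i) for i in range(2)]    # one packet per sensor at time k
```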


Fig. 2. Graph Example.

In (1) and (2), x_k ∈ R^n is the state vector in the real n-dimensional vector space, y^i_k ∈ R^m is the observation vector at S_i, and w_k ∈ R^n and v^i_k ∈ R^m are zero-mean Gaussian random vectors with E[w_k w_j'] = δ_{kj} Q, Q ≥ 0, E[v^i_k v^i_t'] = δ_{kt} Π_i, Π_i > 0, E[v^i_k v^j_t'] = 0 ∀ t, k and i ≠ j, and E[w_k v^i_t'] = 0 ∀ i, t, k, where δ_{kj} = 0 if k ≠ j and δ_{kj} = 1 otherwise. We assume that (A, √Q) is controllable.

We use an undirected graph G (e.g., Fig. 2) to represent the sensor communications. The nodes of G correspond to the N sensors {S_0, S_1, ..., S_{N-1}}, and the edges E of G correspond to the active communication links between the sensors, e.g., e_{ij} ∈ E means S_i communicates with S_j. We assume G is connected, i.e., there exists a path between any S_i and S_j for i ≠ j. At time k, S_i generates the measurement packet y^i_k and sends it, together with all previously received measurements from its neighbor sensors that are taken after k - D, to all its neighbors, where D ≥ 1 is a constant. This is reasonable: when D is sufficiently large, a late-arriving measurement related to the system state in the far past may not contribute much to the accuracy of the current estimate. We assume all data packets y^i_k are time-stamped; therefore, when a sensor receives a data packet, it knows when the measurement was taken and from which sensor it comes.

Let B^i_{k|k-l} (l = 0, ..., D-1) be the set of all measurement packets that are taken at time k-l and are available at S_i at time k. For example, consider S_1 in Fig. 2 with D = 2: S_1 sends {y^1_k, y^0_{k-1}, y^3_{k-1}, y^4_{k-1}} to S_0, S_3, S_4 and receives {y^0_k, y^1_{k-1}, y^2_{k-1}} from S_0, {y^3_k, y^1_{k-1}, y^4_{k-1}, y^6_{k-1}} from S_3, and {y^4_k, y^1_{k-1}, y^3_{k-1}, y^5_{k-1}} from S_4. Therefore

B^1_{k|k-1} = {y^0_{k-1}, y^1_{k-1}, y^2_{k-1}, y^3_{k-1}, y^4_{k-1}, y^5_{k-1}, y^6_{k-1}}
B^1_{k|k} = {y^0_k, y^1_k, y^3_k, y^4_k}.

Remark 2.1: Notice that since we only require G to be connected, G may contain cycles. This implies that the same measurement packet may arrive at a sensor node multiple times, e.g., y^1_{k-1} is received twice by S_1 in the previous example. When this happens, the sensor node simply discards any packet that has been received before, and the set B^i_{k|k-l} only includes distinct measurement packets that are taken at time k-l. It is the set B^i_{k|k-l} that will be processed by S_i at time k, as we shall see in Section III.

In this technical note, we are interested in the following problem.

Problem 2.2: Given an undirected graph G that represents the sensor communications with possible data packet drops, find the optimal state estimate x̂^i_k computed at each sensor S_i.

Before we provide an optimal estimation algorithm for each sensor S_i in Section III, we first provide a short summary of Kalman filtering upon which our main result relies.

A. Kalman Filtering Preliminaries

Consider the process in (1) with the following single-sensor measurement equation:

y_k = C_k x_k + v_k  (3)

where v_k is a zero-mean Gaussian random vector with E[v_k v_j'] = δ_{kj} R_k, R_k > 0, and E[w_k v_j'] = 0 ∀ j, k. Notice that we consider time-varying C_k and R_k here. The Kalman filter in its most general form can assume time-varying A and Q; the special form we look at here suffices for deriving the optimal estimation algorithms in subsequent sections. Assume a linear estimator receives y_k and computes the optimal state estimate at each time k. Let Y_k be the set of all measurements received by the estimator up to time k. Define

x̂_k ≜ E[x_k | Y_k]  (4)
P_k ≜ E[(x_k - x̂_k)(x_k - x̂_k)' | Y_k]  (5)
P ≜ lim_{k→∞} P_k, if the limit exists.  (6)

It is well known that x̂_k and P_k can be computed as

(x̂_k, P_k) = KF(x̂_{k-1}, P_{k-1}, y_k, C_k, R_k)

where KF denotes the Kalman filter, which consists of the following update equations at time k:

x̂_{k|k-1} = A x̂_{k-1}  (7)
P_{k|k-1} = A P_{k-1} A' + Q  (8)
K_k = P_{k|k-1} C_k' [C_k P_{k|k-1} C_k' + R_k]^{-1}  (9)
x̂_k = A x̂_{k-1} + K_k (y_k - C_k x̂_{k|k-1})  (10)
P_k = (I - K_k C_k) P_{k|k-1}.  (11)
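The updates (7)–(11) can be sketched as a single routine. This is a minimal illustration, not the note's implementation; the demo parameters and the synthetic measurement sequence are assumptions:

```python
# Sketch of one KF iteration per (7)-(11); demo parameters are assumed.
import numpy as np

def kf(x_prev, P_prev, y, A, Q, C, R):
    x_pred = A @ x_prev                        # (7) time update of the estimate
    P_pred = A @ P_prev @ A.T + Q              # (8) time update of the covariance
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)  # (9) Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)      # (10) measurement update
    P_new = (np.eye(len(x_prev)) - K @ C) @ P_pred           # (11)
    return x_new, P_new

# Illustrative (assumed) parameters
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.5 * np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

rng = np.random.default_rng(0)
x_hat, P = np.zeros(2), np.eye(2)
for k in range(50):
    y = np.array([float(k)]) + rng.normal(0.0, 0.7, size=1)  # synthetic data
    x_hat, P = kf(x_hat, P, y, A, Q, C, R)
```

Note that P_k does not depend on the measurement values, only on (A, Q, C_k, R_k), which is what makes the covariance-only analysis of the next paragraphs possible.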

Let S^n_+ be the set of n × n positive semi-definite matrices. For functions f_1, f_2 : S^n_+ → S^n_+, define f_1 ∘ f_2(X) ≜ f_1(f_2(X)). Define the functions h, g̃_[C,R], g_[C,R] : S^n_+ → S^n_+ as

h(X) ≜ A X A' + Q  (12)
g̃_[C,R](X) ≜ X - X C' [C X C' + R]^{-1} C X  (13)
g_[C,R](X) ≜ h ∘ g̃_[C,R](X).  (14)

We write g_[C,R] and g̃_[C,R] as g_C and g̃_C when there is no confusion about the underlying parameter R. With some manipulation, it can then be shown that P_{k|k-1} and P_k from (8) and (11) evolve as

P_{k|k-1} = g_C(P_{k-1|k-2})  (15)
P_k = g̃_C(P_{k|k-1}).  (16)
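The operators (12)–(14) and the recursion (15)–(16) translate directly into code, and iterating g_C from an arbitrary initial covariance gives a numerical route to the steady-state covariance discussed next. A minimal sketch, with all numeric parameters assumed:

```python
# Sketch of h, g~, g from (12)-(14) and the covariance recursion (15)-(16).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # assumed parameters for the demo
Q = 0.5 * np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

def h(X):                      # (12): time update
    return A @ X @ A.T + Q

def g_tilde(X, C=C, R=R):      # (13): measurement update of the covariance
    return X - X @ C.T @ np.linalg.inv(C @ X @ C.T + R) @ C @ X

def g(X, C=C, R=R):            # (14): g = h o g~
    return h(g_tilde(X, C, R))

# Iterating (15) drives the prediction covariance to the fixed point of g_C;
# one final g~ then gives the steady-state filtered covariance P.
X = np.eye(2)
for _ in range(500):
    X = g(X)
P = g_tilde(X)
```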

When the parameters C_k and R_k are not time-varying, i.e., C_k = C and R_k = R, we have the following result on the steady-state error covariance matrix P.

Lemma 2.3: Assume C_k = C and R_k = R for all k ≥ 0. Further assume that (A, C) is observable and (A, √Q) is controllable. Then P exists and satisfies P = g̃ ∘ h(P).

Proof: This is a standard result from Kalman filtering analysis (e.g., [3]); the proof is omitted.

Now consider the case when data packets can be dropped by the network. Let γ_k be the indicator function for y_k at time k, i.e., γ_k = 1 means y_k is received and γ_k = 0 otherwise. In this case, (x̂_k, P_k) is known to be computed by a modified Kalman filter (MKF) [6]. We write (x̂_k, P_k) in compact form as

(x̂_k, P_k) = MKF(x̂_{k-1}, P_{k-1}, γ_k, y_k, C_k, R_k)

which represents the same set of update equations as in (7)–(9) together with

x̂_k = A x̂_{k-1} + γ_k K_k (y_k - C_k A x̂_{k-1})  (17)
P_k = (I - γ_k K_k C_k) P_{k|k-1}.  (18)

Notice that if γ_k = 1 for all k, then MKF simply reduces to the standard Kalman filter.

Fig. 3. Kalman filter iterations at time k.

Fig. 4. Three sensor trees.
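The modified filter (17)–(18) can be sketched with i.i.d. Bernoulli packet arrivals; the system matrices and the arrival probability 0.7 are assumptions for the demo, not values from the note:

```python
# Sketch of MKF per (17)-(18) under Bernoulli drops; parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.5 * np.eye(2)
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

def mkf(x_prev, P_prev, gamma, y):
    """Time update always runs; measurement update only when gamma == 1."""
    P_pred = A @ P_prev @ A.T + Q
    if gamma == 0:
        return A @ x_prev, P_pred              # (17)-(18) with gamma = 0
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x = A @ x_prev + K @ (y - C @ A @ x_prev)  # (17) with gamma = 1
    P = (np.eye(2) - K @ C) @ P_pred           # (18) with gamma = 1
    return x, P

x_hat, P = np.zeros(2), np.eye(2)
x = np.zeros(2)
for k in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    gamma = int(rng.random() < 0.7)            # assumed arrival rate
    x_hat, P = mkf(x_hat, P, gamma, y)
```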

III. KALMAN FILTERING OVER GRAPHS

Let the undirected graph G which represents the sensor communications be given, and consider any S_i ∈ G. Define x̂^i_k(G), P^i_k(G), and P^i(G) at S_i similarly to (4)–(6). We write x̂^i_k(G) as x̂^i_k, etc., for convenience in this section. In Section II, we denoted by B^i_{k|k-l} the set of all measurement packets that are taken at time k-l and are available at S_i at time k. It is easy to verify that

B^i_{k-l|k-l} ⊆ B^i_{k|k-l}  ∀ i, k, and 0 ≤ l ≤ D-1.

In other words, S_i has more measurements of time k-l at time k than at time k-l. Therefore, S_i can obtain a better estimate of x_{k-l} at time k than at time k-l. This inspires us to recompute the optimal estimates of the previous states and use them to generate the current estimate. That is the basic idea contained in Theorem 3.1, where we recompute the optimal estimates of x_{k-D+1}, ..., x_{k-1} at time k and then make use of the updated estimates to compute the current estimate x̂^i_k. Fig. 3 shows the overall estimation scheme at time k.

Theorem 3.1: For S_i in the undirected graph G with communication depth D, the optimal estimate x̂^i_k and its error covariance matrix P^i_k can be computed from D Kalman filters in sequence as

(x̂^i_{k-D+1}, P^i_{k-D+1}) = KF(x̂^i_{k-D}, P^i_{k-D}, B^i_{k|k-D+1}, C^i_{k|k-D+1}, R^i_{k|k-D+1})
⋮
(x̂^i_{k-1}, P^i_{k-1}) = KF(x̂^i_{k-2}, P^i_{k-2}, B^i_{k|k-1}, C^i_{k|k-1}, R^i_{k|k-1})
(x̂^i_k, P^i_k) = KF(x̂^i_{k-1}, P^i_{k-1}, B^i_{k|k}, C^i_{k|k}, R^i_{k|k})

where C^i_{k|k-l} and R^i_{k|k-l} (l = 0, ..., D-1) are the joint measurement matrix and measurement noise covariance matrix of the measurement data set B^i_{k|k-l}. In case B^i_{k|k-l} = ∅, only the time update is used to update x̂^i_{k-l} and P^i_{k-l}.

Proof: We know that the estimate x̂^i_k is generated from the estimate x̂^i_{k-1} together with all the available measurements at time k through a Kalman filter. Similarly, the estimate x̂^i_{k-1} is generated from the estimate x̂^i_{k-2} together with all the available measurements for time k-1 at time k, etc. This recursion for D steps corresponds to the D Kalman filters stated in the theorem.

Remark 3.2: Notice that when implementing the algorithm in Theorem 3.1, each S_i only processes B^i_{k|k-l}, l = 0, ..., D-1. As seen in Section II, B^i_{k|k-l} is obtained through communication with its one-hop neighbors, and no complete knowledge of the graph is needed. Therefore the estimation algorithm presented in Theorem 3.1 is fully distributed.

A. Kalman Filtering Over a Tree

In this section, we apply the estimation procedure in Theorem 3.1 to a directed tree that is rooted at S_i with depth D. The joint measurement matrix C^i_{k|k-l} and noise covariance matrix R^i_{k|k-l} in this case can be written as C^i_l and R^i_l, respectively (l = 0, ..., D-1). It is easy to verify that C^i_l and R^i_l satisfy

C^i_0 = Γ^i_0,  C^i_l = [C^i_{l-1}; Γ^i_l]  ∀ l = 1, ..., D-1

and

R^i_0 = Σ^i_0,  R^i_l = diag(R^i_{l-1}, Σ^i_l)  ∀ l = 1, ..., D-1

where Γ^i_l and Σ^i_l are the joint measurement matrix and the joint noise covariance matrix of those sensors that are exactly l + 1 hops away from S_i. Following the optimal estimation algorithm over a graph in Theorem 3.1, we have the following result.

Corollary 3.3: Consider a sensor tree T_i with depth D_i that is rooted at S_i. If (A, C^i_{D-1}) is observable, then the steady-state error covariance matrix P^i satisfies

P^i = g̃_{C^i_0} ∘ g_{C^i_1} ∘ ··· ∘ g_{C^i_{D-1}}(P^i_∞)  (19)

where P^i_∞ is the unique solution to g_{C^i_{D-1}}(P^i_∞) = P^i_∞.

Proof: Equation (19) follows directly from (15) and (16).

For a given directed tree T with root at S_0, define

S^{l-hop}(T) ≜ {S_i : S_i is within l hops away from S_0}  (20)

for l = 1, ..., D. For example, in Fig. 4, S^{1-hop}(T_2) = {S_1, S_2} and S^{2-hop}(T_2) = {S_1, S_2, S_3, S_4}.

Theorem 3.4: For two trees T_1 and T_2, if S^{l-hop}(T_1) ⊆ S^{l-hop}(T_2) ∀ l = 1, ..., D, then P(T_2) ≤ P(T_1).
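The closed-form expression (19) lends itself to a direct numerical check. The sketch below builds the stacked matrices C^i_l for two hypothetical depth-2 trees (these are illustrative assumptions, not the trees of Fig. 4), evaluates (19) by fixed-point iteration, and exhibits the covariance ordering predicted by Theorem 3.4 when the second tree has an extra one-hop sensor:

```python
# Sketch of the steady-state expression (19) for assumed depth-2 trees.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed process parameters
Q = 0.5 * np.eye(2)

def g_tilde(X, C, R):
    return X - X @ C.T @ np.linalg.inv(C @ X @ C.T + R) @ C @ X

def g(X, C, R):                           # g = h o g~, per (14)
    Xm = g_tilde(X, C, R)
    return A @ Xm @ A.T + Q

def steady_state_tree(Cs, Rs):
    """P = g~_{C_0} o g_{C_1} o ... o g_{C_{D-1}}(P_inf), per (19)."""
    X = np.eye(2)
    for _ in range(500):                  # fixed point of g_{C_{D-1}}
        X = g(X, Cs[-1], Rs[-1])
    for l in range(len(Cs) - 1, 0, -1):   # apply g_{C_{D-1}}, ..., g_{C_1}
        X = g(X, Cs[l], Rs[l])
    return g_tilde(X, Cs[0], Rs[0])       # final measurement update g~_{C_0}

C0 = np.array([[1.0, 0.0]])               # root-level measurement (assumed)
R0 = 0.5 * np.eye(1)
# Tree 1: one extra sensor one hop away; tree 2: two extra sensors one hop away
C1_T1 = np.vstack([C0, [[0.0, 1.0]]]);                R1_T1 = 0.5 * np.eye(2)
C1_T2 = np.vstack([C0, [[0.0, 1.0]], [[1.0, 1.0]]]);  R1_T2 = 0.5 * np.eye(3)

P_T1 = steady_state_tree([C0, C1_T1], [R0, R1_T1])
P_T2 = steady_state_tree([C0, C1_T2], [R0, R1_T2])
```

Since tree 2 dominates tree 1 hop-by-hop, Theorem 3.4 predicts P(T_2) ≤ P(T_1), which the traces of P_T2 and P_T1 reflect.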


Fig. 6. Estimation over a packet-dropping network.

Fig. 5. Performance of the three sensor trees.

Proof: Since S^{l-hop}(T_1) ⊆ S^{l-hop}(T_2) ∀ l = 1, ..., D, it is easy to verify that g_{C_l(T_2)}(X) ≤ g_{C_l(T_1)}(X) and g̃_{C_l(T_2)}(X) ≤ g̃_{C_l(T_1)}(X) for all X. Therefore the theorem follows immediately from (19).

Corollary 3.5: If T_1 ⊆ T_2, then P(T_2) ≤ P(T_1).

These results provide an easy way to compare the performance of different sensor trees.

Example 3.6: Consider the three sensor trees in Fig. 4. Apparently, T_1 ⊆ T_2 and S^{l-hop}(T_2) ⊆ S^{l-hop}(T_3), l = 1, 2; therefore, from Theorem 3.4 and Corollary 3.5, we immediately obtain

P(T_3) ≤ P(T_2) ≤ P(T_1).

This is indeed verified through the simulation in Fig. 5, where the following parameters are used:

A = [1  0.1; 0  1],  [H_1; H_2; H_3; H_4] = [1  0; 1  0; 0  1; 0  1],  Q = [0.5  0; 0  0.5]

and Π_i = 0.5 (i = 1, ..., 4).

B. From Packet Delay to Packet Drop

Consider the problem of state estimation over a packet-delaying network, as seen in Fig. 6. The process dynamics is the same as in (1), and the sensor measurement equation is given by

y_k = C x_k + v_k.  (21)

After taking a measurement at time k, the sensor sends y_k to a remote estimator for generating the state estimate. We assume that the measurement data packets from the sensor are sent across a packet-delaying network to the estimator. Each y_k is delayed by d_k steps, where d_k is a random variable described by a probability mass function f, i.e.,

f(j) = Pr[d_k = j],  j = 0, 1, ....  (22)

We assume d_{k_1} and d_{k_2} are independent if k_1 ≠ k_2, and the estimator discards any data y_k (or x̂_k) that are delayed by D steps or more. Given the system and the network delay models in (1) and (21)–(22), we are interested in computing Pr[P_k ≤ M], the probability that P_k is bounded by a given matrix M ∈ S^n_+. This probabilistic metric was proposed in [10] for state estimation over packet-dropping networks.

Let γ^k_{k-i} = 1 or 0 indicate whether the measurement packet generated at time k-i arrives at the estimator at time k or not. Define γ̃_{k-i} ≜ max_{0≤j≤i} γ^{k-j}_{k-i}, i.e., γ̃_{k-i} indicates whether y_{k-i} is received by the estimator at or before time k. The recursive Kalman filtering technique from Theorem 3.1 for dealing with delayed measurements provides a promising way to bridge the gap between packet-drop analysis and packet-delay analysis. The basic idea is as follows. Since y_{k-i} may arrive at time k, we can improve the estimation quality by recalculating x̂_{k-i} utilizing the newly available measurement y_{k-i}. Once x̂_{k-i} is updated, we can update x̂_{k-i+1} in a similar fashion. The following proposition summarizes the estimation process.

Proposition 3.7: Let y_{k-i}, i ∈ [0, D-1], be the oldest measurement received by the estimator at time k. Then x̂_k is computed by i + 1 MKFs as

(x̂_{k-i}, P_{k-i}) = MKF(x̂_{k-i-1}, P_{k-i-1}, 1, y_{k-i})
(x̂_{k-i+1}, P_{k-i+1}) = MKF(x̂_{k-i}, P_{k-i}, γ̃_{k-i+1}, y_{k-i+1})
⋮
(x̂_{k-1}, P_{k-1}) = MKF(x̂_{k-2}, P_{k-2}, γ̃_{k-1}, y_{k-1})
(x̂_k, P_k) = MKF(x̂_{k-1}, P_{k-1}, γ̃_k, y_k).

Proof: Similar to the proof of Theorem 3.1.

Define λ̂_i(D) as

λ̂_i(D) ≜ Σ_{j=0}^{i} f(j), if 0 ≤ i < D;  λ̂_i(D) ≜ Σ_{j=0}^{D-1} f(j), if i ≥ D.

Then it is easy to verify that

Pr[γ̃_{k-i} = 1] = λ̂_i(D).  (23)

Notice that now Pr[γ̃_{k-i} = 1] becomes a constant; thus, given a stochastic description of the packet delays in (22), we can convert the packet-delay model into a packet-drop model. Similar to [11], we are then able to obtain bounds on Pr[P_k ≤ M] using the corresponding new packet arrival rate λ̂_i(D).
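The conversion (23) is simple to compute from a delay pmf. In the sketch below, the geometric pmf f(j) = 2^{-(j+1)} and the depth D = 4 are illustrative assumptions:

```python
# Sketch of the delay-to-drop conversion (23): a delay pmf f induces a
# constant arrival rate for the measurement that is i steps old.
# The geometric pmf below is an assumed example.

def lambda_hat(f, i, D):
    """lambda_i(D) = sum_{j=0}^{i} f(j) if i < D, else sum_{j=0}^{D-1} f(j)."""
    top = i if i < D else D - 1
    return sum(f(j) for j in range(top + 1))

f = lambda j: 0.5 ** (j + 1)   # assumed delay pmf: Pr[d_k = j] = 2^{-(j+1)}
D = 4
rates = [lambda_hat(f, i, D) for i in range(6)]
```

The rates are nondecreasing in i (an older packet has had more chances to arrive) and saturate at i = D - 1, since anything delayed by D steps or more is discarded.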

IV. DISCUSSIONS

In this technical note, we consider the problem of distributed estimation over sensor networks. We derive the optimal estimation algorithm at each sensor when the sensor communications are represented by an undirected graph. When the communication graph reduces to a directed tree, we also provide an exact expression for the steady-state error covariance matrices at each sensor. We show in Section III how the developed theoretical tools can be applied to compare the performance of two sensor trees and to convert a random packet-delay model to a random packet-dropping model.

There are many interesting directions that can be pursued along the lines of this work. For example, if the sensors communicate with their neighbors using their state estimates instead of measurement data, what should the optimal estimation algorithm be in this case? What are the tradeoffs between communicating state estimates and communicating measurements? Given a desired performance metric, for example, requiring that Pr[P^i_k ≤ M] ≥ 1 - ε_i ∀ i for a given 0 < ε_i ≤ 1, how should we determine the minimum depth D? This is interesting, as D determines the computational load at each sensor (i.e., running a chain of D Kalman filters at each time). If centralized control and coordination is allowed, what is the optimal communication graph for the sensors so that max_i P^i_k is minimized? These problems will be pursued in the future.


REFERENCES

[1] N. P. Mahalik, Ed., Sensor Networks and Configuration. New York: Springer, 2007.
[2] D. Culler, D. Estrin, and M. Srivastava, "Overview of wireless sensor networks," IEEE Computer, Special Issue on Sensor Networks, vol. 37, no. 8, pp. 41–49, Aug. 2004.
[3] B. Anderson and J. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1990.
[4] D. Siljak, Large-Scale Dynamic Systems: Stability and Structure. New York: North-Holland, 1978.
[5] P. Alriksson and A. Rantzer, "Distributed Kalman filtering using weighted averaging," in Proc. 17th Int. Symp. Math. Theory Networks Syst., Kyoto, Japan, 2006.
[6] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. Jordan, and S. Sastry, "Kalman filtering with intermittent observations," IEEE Trans. Automat. Control, vol. 49, no. 9, pp. 1453–1464, Sep. 2004.
[7] A. Ray, L. W. Liou, and J. Shen, "State estimation using randomly delayed measurements," J. Dyn. Syst., Meas., Control, vol. 115, pp. 19–26, 1993.
[8] E. Yaz and A. Ray, "Linear unbiased state estimation for random models with sensor delay," in Proc. IEEE Conf. Decision Control, Dec. 1996, pp. 47–52.
[9] H. Zhang, L. Xie, D. Zhang, and Y. Soh, "A re-organized innovation approach to linear estimation," IEEE Trans. Automat. Control, vol. 49, no. 10, pp. 1810–1814, Oct. 2004.
[10] L. Shi, M. Epstein, A. Tiwari, and R. M. Murray, "Estimation with information loss: Asymptotic analysis and error bounds," in Proc. IEEE Conf. Decision Control, 2005, pp. 1215–1221.
[11] L. Shi, M. Epstein, and R. M. Murray, "Kalman filtering over a packet dropping network: A probabilistic approach," in Proc. 10th Int. Conf. Control, Automat., Robot. Vision, Hanoi, Vietnam, Dec. 2008.
