Kalman filtering for time-delayed linear systems

Science in China Series F: Information Sciences 2006 Vol.49 No.4 461—470


DOI: 10.1007/s11432-006-2008-4

LU Xiao1,2 & WANG Wei1

1. Research Center of Information and Control, Dalian University of Technology, Dalian 116024, China;
2. Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China

Correspondence should be addressed to Lu Xiao (email: [email protected])

Received November 1, 2005; accepted December 19, 2005

Abstract  This paper studies linear minimum-variance estimation for discrete-time systems. A simple approach to the problem is presented by developing a re-organized innovation analysis for systems with instantaneous and two time-delayed measurements. It is shown that the derived estimator involves solving three standard Kalman filters of the same dimension as the original system. The obtained results form the basis for solving more complicated problems such as H∞ fixed-lag smoothing, preview control, and H∞ filtering and control with time delays.

Keywords: discrete-time systems, delayed measurements, innovation analysis, Riccati equations.

The estimation problem has been one of the key research topics of the control community since Wiener filtering[1]. However, Wiener filtering can only handle time-invariant, single-variable, stationary signals. In the 1960s, Kalman filtering[2,3], which has been a major tool of state estimation and control ever since, was introduced to overcome the limitations of Wiener filtering. However, the Kalman filtering formulation applies only to standard systems without delays, while practical requirements have attracted much interest to the case of time delays. For discrete-time systems, the problem has been investigated via system augmentation[2,4], but that approach is computationally expensive, especially when the dimension of the system is high and the measurement lags are large. We present a new approach, termed the re-organized innovation sequence, to estimation for linear discrete-time systems with instantaneous and two time-delayed measurements. The main idea is to re-organize the delayed measurements into delay-free measurements from different systems and then define an innovation sequence for the re-organized measurements, from which new Kalman filtering formulae can be derived. The new approach involves three Riccati equations of the same dimension as the original system. It should be noted that the problem considered here is related to many difficult problems in control, such as H∞ fixed-lag smoothing[5,6], H∞ filtering for delayed systems[7], H∞ preview control[8], and control with delay in the control signal. The derived re-organized innovation sequence forms the basic theory for these problems[5―9].

The rest of the paper is organized as follows. The problem statement is given in section 1. Section 2 presents the re-organized innovation sequence theory, the Riccati equations and the optimal filter. Section 3 gives a numerical example. Conclusions are drawn in section 4.

1  Problem statement

Consider the linear discrete-time system

$$x(t+1) = \Phi(t)x(t) + \Gamma(t)u(t), \qquad (1)$$

$$y(t) = H(t)x(t) + v(t), \qquad (2)$$

$$y_1(t) = H_1(t)x(t_1) + v_1(t), \quad t_1 = t - d_1,\ d_1 > 0, \qquad (3)$$

$$y_2(t) = H_2(t)x(t_2) + v_2(t), \quad t_2 = t_1 - d_2,\ d_2 > 0, \qquad (4)$$

where $x(t)\in\mathbb{R}^{n}$ is the state, $u(t)\in\mathbb{R}^{r}$ is the input noise, $y(t)\in\mathbb{R}^{m}$ and $y_i(t)\in\mathbb{R}^{p_i}$ $(i=1,2)$ are the instantaneous and delayed measurements, respectively, and $v(t)\in\mathbb{R}^{m}$ and $v_i(t)\in\mathbb{R}^{p_i}$ $(i=1,2)$ are measurement noises. The coefficient matrices are of compatible dimensions, that is, $\Phi(t)\in\mathbb{R}^{n\times n}$, $\Gamma(t)\in\mathbb{R}^{n\times r}$, $H(t)\in\mathbb{R}^{m\times n}$, $H_1(t)\in\mathbb{R}^{p_1\times n}$, $H_2(t)\in\mathbb{R}^{p_2\times n}$. The initial state $x(0)$, $u(t)$, $v(t)$ and $v_i(t)$ $(i=1,2)$ are uncorrelated white noises with zero means and known covariance matrices,

$$E[x(0)x^{\mathrm T}(0)] = P_0,\quad E[u(k)u^{\mathrm T}(j)] = Q_u(k)\delta_{kj},\quad E[v(k)v^{\mathrm T}(j)] = Q_v(k)\delta_{kj},\quad E[v_i(k)v_i^{\mathrm T}(j)] = Q_{v_i}(k)\delta_{kj}\ (i=1,2).$$

In (1)―(4), $y_1(t)$ ($y_2(t)$) is a measurement of the state $x(t_1)$ ($x(t_2)$) received at time $t$ with delay $d_1$ ($d_1+d_2$), where $d_i$ is an integer. Hence the system (1)―(4) does not fit the standard Kalman filtering framework. Let $y_s(t)$ denote the observation of the system. Then

$$y_s(t) = \begin{cases} [\,y'(t)\ \ 0\ \ 0\,]', & 0 \le t < d_1, \\ [\,y'(t)\ \ y_1'(t)\ \ 0\,]', & d_1 \le t < d_1+d_2, \\ [\,y'(t)\ \ y_1'(t)\ \ y_2'(t)\,]', & t \ge d_1+d_2. \end{cases} \qquad (5)$$

The problem considered in this paper can be stated as follows: given the observations $\{y_s(i)\}_{i=0}^{t}$, find a linear least mean square error estimator $\hat{x}(t\,|\,t)$ of the state $x(t)$. This optimal estimation problem has many applications, such as communications and multiple-sensor fusion[10].
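To make the setup concrete, the following is a minimal simulation sketch (Python/NumPy, not part of the original paper) of the model (1)―(4); the function name, the time-invariant coefficients, the Gaussian noises and the random initial state are illustrative assumptions. The delayed channels are simply unavailable before times $d_1$ and $d_1+d_2$, which corresponds to the zero blocks in (5).

```python
import numpy as np

def simulate_delayed_system(Phi, Gamma, H, H1, H2, Qu, Qv, Qv1, Qv2,
                            P0, d1, d2, T, seed=0):
    """Simulate (1)-(4) for t = 0..T-1.

    Returns the state trajectory and the three measurement channels:
    y(t) for all t, y1(t) for t >= d1 and y2(t) for t >= d1+d2 (None before)."""
    rng = np.random.default_rng(seed)
    n, r = Phi.shape[0], Gamma.shape[1]
    xs = [rng.multivariate_normal(np.zeros(n), P0)]           # x(0) ~ N(0, P0), an assumption
    for _ in range(T - 1):                                     # state equation (1)
        u = rng.multivariate_normal(np.zeros(r), Qu)
        xs.append(Phi @ xs[-1] + Gamma @ u)
    y, y1, y2 = [], [], []
    for t in range(T):
        y.append(H @ xs[t] + rng.multivariate_normal(np.zeros(Qv.shape[0]), Qv))   # (2)
        y1.append(H1 @ xs[t - d1]                                                  # (3)
                  + rng.multivariate_normal(np.zeros(Qv1.shape[0]), Qv1)
                  if t >= d1 else None)
        y2.append(H2 @ xs[t - d1 - d2]                                             # (4)
                  + rng.multivariate_normal(np.zeros(Qv2.shape[0]), Qv2)
                  if t >= d1 + d2 else None)
    return np.array(xs), y, y1, y2
```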

2  Optimal estimation based on re-organized innovation

2.1  Re-organized innovation sequence

In this subsection, the instantaneous and two delayed measurements are re-organized to give the optimal estimate for the system (1)―(4). For convenience of discussion, we suppose that $t \ge d_1 + d_2$; the case $t < d_1 + d_2$ can be treated similarly. As is well known, given the measurement sequence $\{y_s(i)\}_{i=0}^{t}$, the optimal estimate $\hat{x}(t\,|\,t)$ is the projection of $x(t)$ onto the linear space spanned by the measurement sequence, denoted by $\mathcal{L}\{\{y_s(i)\}_{i=0}^{t}\}$. Note that $\mathcal{L}\{\{y_s(i)\}_{i=0}^{t}\}$ is equivalent to

$$\mathcal{L}\{Y_3(0),\cdots,Y_3(t_2);\ Y_2(t_2+1),\cdots,Y_2(t_1);\ Y_1(t_1+1),\cdots,Y_1(t)\}, \qquad (6)$$

where

$$Y_3(\tau) \triangleq \begin{bmatrix} y(\tau) \\ y_1(\tau+d_1) \\ y_2(\tau+d_1+d_2) \end{bmatrix},\qquad Y_2(\tau) \triangleq \begin{bmatrix} y(\tau) \\ y_1(\tau+d_1) \end{bmatrix},\qquad Y_1(\tau) \triangleq y(\tau). \qquad (7)$$

It is obvious that $Y_i(t)$ satisfies

$$Y_i(t) = \mathcal{H}_i(t)x(t) + V_i(t), \quad i = 1,2,3, \qquad (8)$$

where

$$\mathcal{H}_3(t) \triangleq \begin{bmatrix} H(t) \\ H_1(t+d_1) \\ H_2(t+d_1+d_2) \end{bmatrix},\qquad \mathcal{H}_2(t) \triangleq \begin{bmatrix} H(t) \\ H_1(t+d_1) \end{bmatrix},\qquad \mathcal{H}_1(t) \triangleq H(t), \qquad (9)$$

$$V_3(t) \triangleq \begin{bmatrix} v(t) \\ v_1(t+d_1) \\ v_2(t+d_1+d_2) \end{bmatrix},\qquad V_2(t) \triangleq \begin{bmatrix} v(t) \\ v_1(t+d_1) \end{bmatrix},\qquad V_1(t) \triangleq v(t). \qquad (10)$$

It is clear that $V_1(t)$, $V_2(t)$ and $V_3(t)$ are white noises with zero means and covariance matrices

$$Q_{V_3}(t) = \begin{bmatrix} Q_v(t) & 0 & 0 \\ 0 & Q_{v_1}(t+d_1) & 0 \\ 0 & 0 & Q_{v_2}(t+d_1+d_2) \end{bmatrix}, \qquad (11)$$

$$Q_{V_2}(t) = \begin{bmatrix} Q_v(t) & 0 \\ 0 & Q_{v_1}(t+d_1) \end{bmatrix},\qquad Q_{V_1}(t) = Q_v(t).$$
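The re-organization in (6) and (7) is purely a bookkeeping step on the recorded measurements. The sketch below (a hypothetical helper in the same Python setting as above, not part of the paper) builds the three delay-free sequences $\{Y_3\}$, $\{Y_2\}$, $\{Y_1\}$ from the raw channels for a given time $t \ge d_1+d_2$.

```python
import numpy as np

def reorganize(y, y1, y2, d1, d2, t):
    """Build the delay-free sequences of (6)-(7) from the raw channels.

    y[s], y1[s], y2[s] are the measurements received at time s
    (y1[s] measures x(s-d1), y2[s] measures x(s-d1-d2)); requires t >= d1+d2."""
    t1, t2 = t - d1, t - d1 - d2
    Y3 = [np.concatenate([y[tau], y1[tau + d1], y2[tau + d1 + d2]])
          for tau in range(t2 + 1)]                     # Y3(0), ..., Y3(t2)
    Y2 = [np.concatenate([y[tau], y1[tau + d1]])
          for tau in range(t2 + 1, t1 + 1)]             # Y2(t2+1), ..., Y2(t1)
    Y1 = [y[tau] for tau in range(t1 + 1, t + 1)]       # Y1(t1+1), ..., Y1(t)
    return Y3, Y2, Y1
```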

It is clear that (1) and (8) constitute a standard state-space model, so the re-organized observations of (6) are delay-free. However, the new observations come from the different measurement equations (8) $(i = 1,2,3)$. For convenience of discussion, we introduce the following definitions:

$\hat{\xi}(j\,|\,t)$: the estimate of $\xi(j)$ given the observations $\{y_s(0),\cdots,y_s(t)\}$;

$\hat{\xi}(j\,|\,t,r,s)$: the estimate of $\xi(j)$ given the observations $\{y(0),\cdots,y(t);\ y_1(d_1),\cdots,y_1(r+d_1);\ y_2(d_1+d_2),\cdots,y_2(d_1+d_2+s)\}$.

From the above definitions, it is clear that $\hat{x}(j\,|\,t,t,t)$ is the standard Kalman filter for the system (1) and (8) with $i = 3$, and the estimate $\hat{x}(t\,|\,t)$ can be rewritten as $\hat{x}(t\,|\,t,t_1,t_2)$. Introduce the following stochastic sequences based on the new observations $Y_i(t)$:

$$w_1^{t}(k) \triangleq Y_1(t+k) - \hat{Y}_1(t+k\,|\,t+k-1,\,t,\,t-d_2), \quad k > 0, \qquad (12)$$

$$w_2^{t}(k) \triangleq Y_2(t+k) - \hat{Y}_2(t+k\,|\,t+k-1,\,t+k-1,\,t), \quad k > 0, \qquad (13)$$

$$w_3^{t} \triangleq Y_3(t) - \hat{Y}_3(t\,|\,t-1,\,t-1,\,t-1), \qquad \hat{Y}_3(0\,|\,-1,-1,-1) = 0, \qquad (14)$$

where $\hat{Y}_3(t\,|\,t-1,t-1,t-1)$ is the projection of $Y_3(t)$ onto the linear space of $\{Y_3(0),\cdots,Y_3(t-1)\}$, $\hat{Y}_2(t+k\,|\,t+k-1,t+k-1,t)$ is the projection of $Y_2(t+k)$ onto the linear space of $\{Y_3(0),\cdots,Y_3(t);\ Y_2(t+1),\cdots,Y_2(t+k-1)\}$, and $\hat{Y}_1(t+k\,|\,t+k-1,t,t-d_2)$ is the projection of $Y_1(t+k)$ onto the linear space of $\{Y_3(0),\cdots,Y_3(t-d_2);\ Y_2(t-d_2+1),\cdots,Y_2(t);\ Y_1(t+1),\cdots,Y_1(t+k-1)\}$. It is clear that $w_3^{t}$ is the standard Kalman filtering innovation sequence for the system (1) and (8) with $i = 3$. We have

$$w_1^{t}(k) = \mathcal{H}_1(t+k)e_1^{t}(k) + V_1(t+k), \quad k > 0, \qquad (15)$$

$$w_2^{t}(k) = \mathcal{H}_2(t+k)e_2^{t}(k) + V_2(t+k), \quad k > 0, \qquad (16)$$

$$w_3^{t} = \mathcal{H}_3(t)e_3^{t} + V_3(t), \qquad (17)$$

where

$$e_1^{t}(k) \triangleq x(t+k) - \hat{x}(t+k\,|\,t+k-1,\,t,\,t-d_2), \qquad (18)$$

$$e_2^{t}(k) \triangleq x(t+k) - \hat{x}(t+k\,|\,t+k-1,\,t+k-1,\,t), \qquad (19)$$

$$e_3^{t} \triangleq x(t) - \hat{x}(t\,|\,t-1,\,t-1,\,t-1). \qquad (20)$$

The following theorem shows that $w$ is the innovation sequence of the re-organized observations $Y_i$.

Theorem 1.  $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(d_1)\}$ is the innovation sequence which spans the same linear space as $\mathcal{L}\{Y_3(0),\cdots,Y_3(t_2);\ Y_2(t_2+1),\cdots,Y_2(t_1);\ Y_1(t_1+1),\cdots,Y_1(t)\}$, or equivalently $\mathcal{L}\{y_s(0),\cdots,y_s(t)\}$.

Proof.  It follows easily from (1) and (8) that $w_3^{i}$, $i \ge 0$ (or $w_2^{t_2}(i)$, $i > 0$, or $w_1^{t_1}(i)$, $i > 0$) is a linear combination of the observations $Y_3(0),\cdots,Y_3(i)$ (or $Y_3(0),\cdots,Y_3(t_2);\ Y_2(t_2+1),\cdots,Y_2(t_2+i)$, or $Y_3(0),\cdots,Y_3(t_2);\ Y_2(t_2+1),\cdots,Y_2(t_1);\ Y_1(t_1+1),\cdots,Y_1(t_1+i)$). Conversely, $Y_3(i)$, $i \ge 0$ (or $Y_2(t_2+i)$, $i > 0$, or $Y_1(t_1+i)$, $i > 0$) can be written as a linear combination of $w_3^{0},\cdots,w_3^{i}$ (or $w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)$, or $w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)$). Thus, $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(d_1)\}$ spans the same linear space as $\mathcal{L}\{Y_3(0),\cdots,Y_3(t_2);\ Y_2(t_2+1),\cdots,Y_2(t_1);\ Y_1(t_1+1),\cdots,Y_1(t)\}$, or equivalently $\mathcal{L}\{y_s(0),\cdots,y_s(t)\}$.

Next, we show that $w$ is an uncorrelated sequence. In fact, for any $i > 0$, $j > 0$ and $k \ge 0$, from (15) we have

$$E[w_1^{t_1}(i)(w_3^{k})^{\mathrm T}] = E[\mathcal{H}_1(t_1+i)e_1^{t_1}(i)(w_3^{k})^{\mathrm T}] + E[V_1(t_1+i)(w_3^{k})^{\mathrm T}], \qquad (21)$$

where, since $\mathcal{H}_1(t_1+i)$ is a deterministic matrix, we have $E[\mathcal{H}_1(t_1+i)e_1^{t_1}(i)(w_3^{k})^{\mathrm T}] = \mathcal{H}_1(t_1+i)E[e_1^{t_1}(i)(w_3^{k})^{\mathrm T}]$. Since $e_1^{t_1}(i)$ is a state prediction error, it is orthogonal to the earlier innovations, that is, $E[e_1^{t_1}(i)(w_3^{k})^{\mathrm T}] = 0$. Then we have

$$E[w_1^{t_1}(i)(w_3^{k})^{\mathrm T}] = \mathcal{H}_1(t_1+i)E[e_1^{t_1}(i)(w_3^{k})^{\mathrm T}] + E[V_1(t_1+i)(w_3^{k})^{\mathrm T}] = E[V_1(t_1+i)(w_3^{k})^{\mathrm T}]. \qquad (22)$$

In the above we have used the fact that the observation noise $V_1(t_1+i)$ is uncorrelated with $w_3^{k}$. Thus $E[w_1^{t_1}(i)(w_3^{k})^{\mathrm T}] = 0$, which means $w_3^{k}$ $(k \ge 0)$ is uncorrelated with $w_1^{t_1}(i)$ $(i > 0)$. Similarly, we can verify that $E[w_1^{t_1}(i)(w_2^{t_2}(j))^{\mathrm T}] = 0$ and $E[w_2^{t_2}(j)(w_3^{k})^{\mathrm T}] = 0$. Therefore $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(d_1)\}$ is an innovation sequence, which completes the proof.

2.2  Riccati equation

Define

$$P_3^{t} \triangleq E[e_3^{t}(e_3^{t})^{\mathrm T}], \qquad (23)$$

$$P_2^{t_2}(i) \triangleq E[e_2^{t_2}(i)(e_2^{t_2}(i))^{\mathrm T}], \quad i > 0, \qquad (24)$$

$$P_1^{t_1}(i) \triangleq E[e_1^{t_1}(i)(e_1^{t_1}(i))^{\mathrm T}], \quad i > 0. \qquad (25)$$

It is clear that $P_3^{t}$ is the solution to the Riccati equation of the system (1) and (8) with $i = 3$. Next, we give the solutions $P_2^{t_2}(i)$ and $P_1^{t_1}(i)$ to the Riccati equations of the system (8) with $i = 2$ and $i = 1$, respectively. For convenience of discussion, we first give the innovation covariance matrices. From (15), (16) and (17), they can be calculated as

$$Q_3^{t} \triangleq E[w_3^{t}(w_3^{t})^{\mathrm T}] = \mathcal{H}_3(t)P_3^{t}\mathcal{H}_3^{\mathrm T}(t) + Q_{V_3}(t), \qquad (26)$$

$$Q_2^{t_2+i} \triangleq E[w_2^{t_2}(i)(w_2^{t_2}(i))^{\mathrm T}] = \mathcal{H}_2(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i) + Q_{V_2}(t_2+i), \qquad (27)$$

$$Q_1^{t_1+i} \triangleq E[w_1^{t_1}(i)(w_1^{t_1}(i))^{\mathrm T}] = \mathcal{H}_1(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i) + Q_{V_1}(t_1+i). \qquad (28)$$

Then we have

Theorem 2.  The covariance matrices $P_3^{t}$, $P_2^{t_2}(i)$ and $P_1^{t_1}(i)$ can be given by the following three Riccati equations:

● $P_3^{t}$ can be calculated recursively as

$$P_3^{t+1} = \Phi(t)P_3^{t}\Phi^{\mathrm T}(t) - \Phi(t)P_3^{t}\mathcal{H}_3^{\mathrm T}(t)(Q_3^{t})^{-1}\mathcal{H}_3(t)P_3^{t}\Phi^{\mathrm T}(t) + \Gamma(t)Q_u(t)\Gamma^{\mathrm T}(t), \qquad (29)$$

where $Q_3^{t}$ is as in (26), with the initial condition $P_3^{0} = P_0$.

● For $i > 0$, $P_2^{t_2}(i)$ can be calculated recursively as

$$P_2^{t_2}(i+1) = \Phi(t_2+i)P_2^{t_2}(i)\Phi^{\mathrm T}(t_2+i) - \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}\mathcal{H}_2(t_2+i)P_2^{t_2}(i)\Phi^{\mathrm T}(t_2+i) + \Gamma(t_2+i)Q_u(t_2+i)\Gamma^{\mathrm T}(t_2+i), \qquad (30)$$

where $Q_2^{t_2+i}$ is as in (27), with the initial condition $P_2^{t_2}(1) = P_3^{t_2+1}$.

● For $i > 0$, $P_1^{t_1}(i)$ can be calculated recursively as

$$P_1^{t_1}(i+1) = \Phi(t_1+i)P_1^{t_1}(i)\Phi^{\mathrm T}(t_1+i) - \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}\mathcal{H}_1(t_1+i)P_1^{t_1}(i)\Phi^{\mathrm T}(t_1+i) + \Gamma(t_1+i)Q_u(t_1+i)\Gamma^{\mathrm T}(t_1+i), \qquad (31)$$

where $Q_1^{t_1+i}$ is as in (28), with the initial condition $P_1^{t_1}(1) = P_2^{t_2}(d_2+1)$.
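For a time-invariant system, the three recursions (29)―(31) chain together through their initial conditions. The following sketch (illustrative Python code, not the authors' implementation; `riccati_step` and `delayed_riccati` are hypothetical names) computes $P_3^{0},\ldots,P_3^{t_2+1}$, then $P_2^{t_2}(1),\ldots,P_2^{t_2}(d_2+1)$, then $P_1^{t_1}(1),\ldots,P_1^{t_1}(d_1)$.

```python
import numpy as np

def riccati_step(P, Phi, Hc, R, Gamma, Qu):
    """One step of (29)-(31): P+ = Phi(P - P Hc'(Hc P Hc' + R)^{-1} Hc P)Phi' + Gamma Qu Gamma'."""
    Qw = Hc @ P @ Hc.T + R                       # innovation covariance, cf. (26)-(28)
    K = P @ Hc.T @ np.linalg.inv(Qw)
    return Phi @ (P - K @ Hc @ P) @ Phi.T + Gamma @ Qu @ Gamma.T

def delayed_riccati(Phi, Gamma, Qu, Hc1, Hc2, Hc3, QV1, QV2, QV3, P0, t2, d1, d2):
    """Run the three chained recursions of Theorem 2 (time-invariant case).

    Returns the lists [P3^0..P3^{t2+1}], [P2^{t2}(1)..P2^{t2}(d2+1)]
    and [P1^{t1}(1)..P1^{t1}(d1)]."""
    P3 = [P0]
    for _ in range(t2 + 1):                      # (29), initial condition P3^0 = P0
        P3.append(riccati_step(P3[-1], Phi, Hc3, QV3, Gamma, Qu))
    P2 = [P3[-1]]                                # P2^{t2}(1) = P3^{t2+1}
    for _ in range(d2):                          # (30)
        P2.append(riccati_step(P2[-1], Phi, Hc2, QV2, Gamma, Qu))
    P1 = [P2[-1]]                                # P1^{t1}(1) = P2^{t2}(d2+1)
    for _ in range(d1 - 1):                      # (31)
        P1.append(riccati_step(P1[-1], Phi, Hc1, QV1, Gamma, Qu))
    return P3, P2, P1
```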

Proof.  For $P_3^{t}$, it is clear that $P_3^{t+1}$ is the covariance matrix of the one-step-ahead prediction error of the state $x(t+1)$ associated with (1) and (8). Thus, according to the standard Kalman filtering formulae, $P_3^{t+1}$ satisfies the Riccati equation (29).

For $P_2^{t_2}(i)$, note that $\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2)$ is the projection of the state $x(t_2+i+1)$ onto the linear space of $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)\}$. Since $w$ is white noise, this estimate is given by the projection formula as

$$\begin{aligned}
\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2) &= \mathrm{Proj}\{x(t_2+i+1)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i-1)\} + \mathrm{Proj}\{x(t_2+i+1)\,|\,w_2^{t_2}(i)\} \\
&= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2) + \Phi(t_2+i)E[x(t_2+i)(e_2^{t_2}(i))^{\mathrm T}]\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}w_2^{t_2}(i) \\
&= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2) + \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}w_2^{t_2}(i). \qquad (32)
\end{aligned}$$

From (1) and (32), it follows that

$$e_2^{t_2}(i+1) = x(t_2+i+1) - \hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2) = \Phi(t_2+i)e_2^{t_2}(i) + \Gamma(t_2+i)u(t_2+i) - \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}w_2^{t_2}(i). \qquad (33)$$

Since $e_2^{t_2}(i+1)$ is uncorrelated with $w_2^{t_2}(i)$, and $u(t_2+i)$ is uncorrelated with $e_2^{t_2}(i)$ and $w_2^{t_2}(i)$, taking covariances on both sides of (33) yields

$$P_2^{t_2}(i+1) + \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}\mathcal{H}_2(t_2+i)P_2^{t_2}(i)\Phi^{\mathrm T}(t_2+i) = \Phi(t_2+i)P_2^{t_2}(i)\Phi^{\mathrm T}(t_2+i) + \Gamma(t_2+i)Q_u(t_2+i)\Gamma^{\mathrm T}(t_2+i), \qquad (34)$$

which is (30).

For $P_1^{t_1}(i)$, note that $\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2)$ is the projection of the state $x(t_1+i+1)$ onto the linear space of $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)\}$. Since $w$ is white noise, the estimate $\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2)$ is given by the projection formula as

$$\begin{aligned}
\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2) &= \mathrm{Proj}\{x(t_1+i+1)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i-1)\} + \mathrm{Proj}\{x(t_1+i+1)\,|\,w_1^{t_1}(i)\} \\
&= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2) + \Phi(t_1+i)E[x(t_1+i)(e_1^{t_1}(i))^{\mathrm T}]\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}w_1^{t_1}(i) \\
&= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2) + \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}w_1^{t_1}(i). \qquad (35)
\end{aligned}$$

From (1) and (35), we have

$$e_1^{t_1}(i+1) = x(t_1+i+1) - \hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2) = \Phi(t_1+i)e_1^{t_1}(i) + \Gamma(t_1+i)u(t_1+i) - \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}w_1^{t_1}(i). \qquad (36)$$

Since $e_1^{t_1}(i+1)$ is uncorrelated with $w_1^{t_1}(i)$, and $u(t_1+i)$ is uncorrelated with $e_1^{t_1}(i)$ and $w_1^{t_1}(i)$, taking covariances on both sides of (36) yields

$$P_1^{t_1}(i+1) + \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}\mathcal{H}_1(t_1+i)P_1^{t_1}(i)\Phi^{\mathrm T}(t_1+i) = \Phi(t_1+i)P_1^{t_1}(i)\Phi^{\mathrm T}(t_1+i) + \Gamma(t_1+i)Q_u(t_1+i)\Gamma^{\mathrm T}(t_1+i), \qquad (37)$$

which is (31).

2.3  Optimal filter

In this subsection, the optimal filter is given based on the re-organized innovation sequence and the associated Riccati equations.

Theorem 3.  Consider the system (1)―(4) with $d_1, d_2 > 0$. The optimal filter $\hat{x}(t\,|\,t) = \hat{x}(t\,|\,t,t_1,t_2)$ can be computed as

$$\hat{x}(t\,|\,t,t_1,t_2) = \hat{x}(t\,|\,t-1,t_1,t_2) + P_1^{t_1}(d_1)\mathcal{H}_1^{\mathrm T}(t)(Q_1^{t})^{-1}[Y_1(t) - \mathcal{H}_1(t)\hat{x}(t\,|\,t-1,t_1,t_2)], \qquad (38)$$

where $\hat{x}(t\,|\,t-1,t_1,t_2)$ can be calculated recursively as

$$\begin{aligned}
\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2) = {}& \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2) \\
&+ \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}[Y_1(t_1+i) - \mathcal{H}_1(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2)], \quad i = 1,2,\cdots,d_1-1, \qquad (39)
\end{aligned}$$

where $P_1^{t_1}(i)$ $(i=1,\cdots,d_1)$ is as in (31). In (39), the initial value $\hat{x}(t_1+1\,|\,t_1,t_1,t_2)$, obtained from the recursion below at $i = d_2$, can be calculated recursively as

$$\begin{aligned}
\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2) = {}& \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2) \\
&+ \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}[Y_2(t_2+i) - \mathcal{H}_2(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2)], \quad i = 1,\cdots,d_2, \qquad (40)
\end{aligned}$$

where $P_2^{t_2}(i)$ $(i=1,\cdots,d_2)$ is computed from (30). In (40), the initial value $\hat{x}(t_2+1\,|\,t_2,t_2,t_2)$ can be calculated recursively as

$$\begin{aligned}
\hat{x}(t_2+1\,|\,t_2,t_2,t_2) = {}& \Phi(t_2)\hat{x}(t_2\,|\,t_2-1,t_2-1,t_2-1) \\
&+ \Phi(t_2)P_3^{t_2}\mathcal{H}_3^{\mathrm T}(t_2)(Q_3^{t_2})^{-1}[Y_3(t_2) - \mathcal{H}_3(t_2)\hat{x}(t_2\,|\,t_2-1,t_2-1,t_2-1)], \qquad \hat{x}(0\,|\,-1,-1,-1) = 0, \qquad (41)
\end{aligned}$$

where $P_3^{t_2}$ is as in (29).

Proof.  By Theorem 1, $\hat{x}(t\,|\,t) = \hat{x}(t\,|\,t,t_1,t_2)$ is the projection of the state $x(t)$ onto $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(d_1)\}$. Since $w$ is white noise, the filter $\hat{x}(t\,|\,t,t_1,t_2)$ can be computed by the projection formula as

$$\begin{aligned}
\hat{x}(t\,|\,t,t_1,t_2) &= \mathrm{Proj}\{x(t)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(d_1-1)\} + \mathrm{Proj}\{x(t)\,|\,w_1^{t_1}(d_1)\} \\
&= \hat{x}(t\,|\,t-1,t_1,t_2) + E[x(t)(w_1^{t_1}(d_1))^{\mathrm T}](Q_1^{t})^{-1}w_1^{t_1}(d_1) \\
&= \hat{x}(t\,|\,t-1,t_1,t_2) + P_1^{t_1}(d_1)\mathcal{H}_1^{\mathrm T}(t)(Q_1^{t})^{-1}[Y_1(t) - \mathcal{H}_1(t)\hat{x}(t\,|\,t-1,t_1,t_2)], \qquad (42)
\end{aligned}$$

which is (38).

Similarly, from Theorem 1, $\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2)$ $(i > 0)$ is the projection of the state $x(t_1+i+1)$ onto the linear space of $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)\}$. By the projection formula,

$$\begin{aligned}
\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2) &= \mathrm{Proj}\{x(t_1+i+1)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)\} \\
&= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i,\,t_1,\,t_2) + \Gamma(t_1+i)\,\mathrm{Proj}\{u(t_1+i)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)\}. \qquad (43)
\end{aligned}$$

Noting that $u(t_1+i)$ is uncorrelated with the innovations $w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(d_2);\ w_1^{t_1}(1),\cdots,w_1^{t_1}(i)$, we have

$$\begin{aligned}
\hat{x}(t_1+i+1\,|\,t_1+i,\,t_1,\,t_2) &= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i,\,t_1,\,t_2) \\
&= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2) + \Phi(t_1+i)E[x(t_1+i)(w_1^{t_1}(i))^{\mathrm T}](Q_1^{t_1+i})^{-1}w_1^{t_1}(i) \\
&= \Phi(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2) + \Phi(t_1+i)P_1^{t_1}(i)\mathcal{H}_1^{\mathrm T}(t_1+i)(Q_1^{t_1+i})^{-1}[Y_1(t_1+i) - \mathcal{H}_1(t_1+i)\hat{x}(t_1+i\,|\,t_1+i-1,\,t_1,\,t_2)], \qquad (44)
\end{aligned}$$

which is (39).

From Theorem 1, $\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2)$ $(i > 0)$ is the projection of the state $x(t_2+i+1)$ onto the linear space of $\{w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)\}$, so we have

$$\begin{aligned}
\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2) &= \mathrm{Proj}\{x(t_2+i+1)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)\} \\
&= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i,\,t_2+i,\,t_2) + \Gamma(t_2+i)\,\mathrm{Proj}\{u(t_2+i)\,|\,w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)\}. \qquad (45)
\end{aligned}$$

Noting that $u(t_2+i)$ is uncorrelated with the innovations $w_3^{0},\cdots,w_3^{t_2};\ w_2^{t_2}(1),\cdots,w_2^{t_2}(i)$, we have

$$\begin{aligned}
\hat{x}(t_2+i+1\,|\,t_2+i,\,t_2+i,\,t_2) &= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i,\,t_2+i,\,t_2) \\
&= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2) + \Phi(t_2+i)E[x(t_2+i)(w_2^{t_2}(i))^{\mathrm T}](Q_2^{t_2+i})^{-1}w_2^{t_2}(i) \\
&= \Phi(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2) + \Phi(t_2+i)P_2^{t_2}(i)\mathcal{H}_2^{\mathrm T}(t_2+i)(Q_2^{t_2+i})^{-1}[Y_2(t_2+i) - \mathcal{H}_2(t_2+i)\hat{x}(t_2+i\,|\,t_2+i-1,\,t_2+i-1,\,t_2)], \qquad (46)
\end{aligned}$$

which is (40). Finally, note that $\hat{x}(t_2+1\,|\,t_2,t_2,t_2)$ is the standard Kalman filter for the system (1) and (8) with $i = 3$, so (41) follows directly. This completes the proof.

Remark.  By applying the re-organized innovation sequence, the Kalman filtering solution for the measurement-delayed system (1)―(4) has been obtained. Unlike the standard Kalman filter, it consists of three parts:

● The first part is (41) with (29), which is the Kalman filter for the system (1) and (8) with $i = 3$; the initial conditions $P_0$ and $\hat{x}(0\,|\,-1,-1,-1)$ are given.

● The second part is (40) with (30), which is the Kalman filter for the system (1) and (8) with $i = 2$; its initial value is given by the first part.

● The third part is (39) with (31), which is the Kalman filter for the system (1) and (8) with $i = 1$; its initial value is given by the second part.


Substituting the value $\hat{x}(t\,|\,t-1,t_1,t_2)$ into (38) then yields the optimal filter $\hat{x}(t\,|\,t) = \hat{x}(t\,|\,t,t_1,t_2)$.
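The three-part structure of the remark maps directly onto code. Below is a minimal sketch of (38)―(41) for time-invariant coefficients, re-using the hypothetical `delayed_riccati` and the re-organized sequences from the earlier sketches; it recomputes the whole filter at each $t$ for clarity rather than efficiency, and is an illustration only, not the authors' implementation.

```python
import numpy as np

def delayed_kalman_filter(Phi, Gamma, Qu, Hc1, Hc2, Hc3, QV1, QV2, QV3,
                          P0, d1, d2, Y3, Y2, Y1):
    """Compute x_hat(t|t) via (38)-(41) from the re-organized observations
    Y3(0..t2), Y2(t2+1..t1), Y1(t1+1..t)."""
    t2 = len(Y3) - 1
    P3, P2, P1 = delayed_riccati(Phi, Gamma, Qu, Hc1, Hc2, Hc3,
                                 QV1, QV2, QV3, P0, t2, d1, d2)
    xp = np.zeros(Phi.shape[0])                               # x_hat(0|-1,-1,-1) = 0
    for tau in range(t2 + 1):                                 # part 1, eq. (41)
        Q3 = Hc3 @ P3[tau] @ Hc3.T + QV3
        xp = Phi @ xp + Phi @ P3[tau] @ Hc3.T @ np.linalg.solve(Q3, Y3[tau] - Hc3 @ xp)
    for i in range(1, d2 + 1):                                # part 2, eq. (40)
        Q2 = Hc2 @ P2[i - 1] @ Hc2.T + QV2
        xp = Phi @ xp + Phi @ P2[i - 1] @ Hc2.T @ np.linalg.solve(Q2, Y2[i - 1] - Hc2 @ xp)
    for i in range(1, d1):                                    # part 3, eq. (39)
        Q1 = Hc1 @ P1[i - 1] @ Hc1.T + QV1
        xp = Phi @ xp + Phi @ P1[i - 1] @ Hc1.T @ np.linalg.solve(Q1, Y1[i - 1] - Hc1 @ xp)
    Q1 = Hc1 @ P1[d1 - 1] @ Hc1.T + QV1                       # final update, eq. (38)
    return xp + P1[d1 - 1] @ Hc1.T @ np.linalg.solve(Q1, Y1[d1 - 1] - Hc1 @ xp)
```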

3  Numerical example

A numerical example is given below to show the effectiveness of the presented method. Let

$$\Phi(t) = \begin{bmatrix} 0.98 & 0 \\ 0.5 & 0.2 \end{bmatrix},\quad \Gamma(t) = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\quad H(t) = [\,1\ \ 4\,],\quad H_1(t) = [\,1\ \ 1\,],\quad H_2(t) = [\,4\ \ 2\,], \qquad (47)$$

with initial condition $x(0) = \begin{bmatrix} 1 \\ 0.5 \end{bmatrix}$, initial value of the filter $\hat{x}(0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$, and $P_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. The noises $u(t)$, $v(t)$, $v_1(t)$ and $v_2(t)$ are zero-mean white noises with known covariance matrices. Let $d_1 = d_2 = 10$, $u(t)\sim N(0,1)$, $v(t)\sim N(0,1)$, $v_1(t)\sim N(0,1)$, $v_2(t)\sim N(0,1)$. According to subsection 2.1, we have

$$\mathcal{H}_1(t) = [\,1\ \ 4\,],\qquad \mathcal{H}_2(t) = \begin{bmatrix} 1 & 4 \\ 1 & 1 \end{bmatrix},\qquad \mathcal{H}_3(t) = \begin{bmatrix} 1 & 4 \\ 1 & 1 \\ 4 & 2 \end{bmatrix}. \qquad (48)$$

According to (38)―(41) in subsection 2.3, we design the filter $\hat{x}(t\,|\,t) = \begin{bmatrix} \hat{x}_1(t\,|\,t) \\ \hat{x}_2(t\,|\,t) \end{bmatrix}$; the results are shown in the figures below. It can be seen from Fig. 1 that the filter $\hat{x}_1(t\,|\,t)$ tracks $x_1(t)$ very well, and from Fig. 2 that $\hat{x}_2(t\,|\,t)$ tracks $x_2(t)$ very well. These results show that the presented approach estimates the system state accurately and efficiently.
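For illustration only, the example parameters can be plugged into the hypothetical helpers sketched earlier (the paper's figures were of course produced with the authors' own implementation; here the initial state is drawn from $N(0,P_0)$ rather than fixed at $[\,1\ \ 0.5\,]'$):

```python
import numpy as np

Phi = np.array([[0.98, 0.0], [0.5, 0.2]])
Gamma = np.array([[1.0], [1.0]])
H, H1, H2 = np.array([[1.0, 4.0]]), np.array([[1.0, 1.0]]), np.array([[4.0, 2.0]])
Qu = Qv = Qv1 = Qv2 = np.array([[1.0]])
P0, d1, d2, T = np.eye(2), 10, 10, 200

# Stacked measurement matrices of (48) and stacked noise covariances of (11);
# all scalar noise covariances equal 1, so the stacked ones are identity matrices.
Hc1, Hc2, Hc3 = H, np.vstack([H, H1]), np.vstack([H, H1, H2])
QV1, QV2, QV3 = np.eye(1), np.eye(2), np.eye(3)

xs, y, y1, y2 = simulate_delayed_system(Phi, Gamma, H, H1, H2, Qu, Qv, Qv1, Qv2,
                                        P0, d1, d2, T)
estimates = []
for t in range(d1 + d2, T):
    Y3, Y2, Y1 = reorganize(y, y1, y2, d1, d2, t)
    estimates.append(delayed_kalman_filter(Phi, Gamma, Qu, Hc1, Hc2, Hc3,
                                           QV1, QV2, QV3, P0, d1, d2, Y3, Y2, Y1))
# estimates[k] approximates x(d1+d2+k); plotting its components against xs
# reproduces the qualitative tracking behaviour of Figs. 1 and 2.
```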

Fig. 1.  The estimate of the state $x_1(t)$: 1, the original state; 2, the estimate $\hat{x}_1(t\,|\,t)$.

Fig. 2.  The estimate of the state $x_2(t)$: 1, the original state; 2, the estimate $\hat{x}_2(t\,|\,t)$.

4  Conclusion

The problem of H∞ estimation and control was studied extensively in the late 1980s[11]. However, H∞ fixed-lag smoothing and preview control were not solved until recent years[5,6,8,12,13]. The main reason is that these problems are actually equivalent to optimal filtering and control for a system with instantaneous and one delayed measurement in Krein space[5]. The re-organized innovation sequence presented here for the system with instantaneous and two delayed measurements therefore forms the basis for these complicated problems. Moreover, the theory can be used to solve more involved problems such as linear quadratic regulation and control with multiple input delays. The result complements Kalman filtering theory; the key technique is re-organized innovation analysis.

Acknowledgements  The authors would like to thank Prof. Huanshui Zhang for his kind help and an anonymous reviewer for his/her helpful comments. This work was supported by the National Natural Science Foundation of China (Grant Nos. 60574016, 60474058, 60534010).

References

1  Wiener N. Extrapolation, Interpolation and Smoothing of Stationary Time Series. New York: The Technology Press and Wiley, 1950
2  Kailath T, Sayed A H, Hassibi B. Linear Estimation. New Jersey: Prentice-Hall, 1999
3  Kalman R E. A new approach to linear filtering and prediction problems. J Basic Engin Trans ASME-D, 1960, 82(1): 35―45
4  Anderson B D O, Moore J B. Optimal Filtering. New Jersey: Prentice-Hall, 1979
5  Zhang H, Xie L, Soh Y C. A unified approach to linear estimation for discrete-time systems―Part I: H∞ estimation. In: 41st IEEE Conf Decision Contr, 2001, 2917―2922
6  Zhang H, Xie L, Zhang D, et al. H∞ fixed-lag smoothing for linear time-varying discrete-time systems. Automatica, 2005, 41(5): 839―846
7  Zhang H, Zhang D, Xie L. An innovation approach to H∞ prediction for continuous-time systems with application to systems with delayed measurements. Automatica, 2004, 40(7): 1253―1261
8  Kojima A, Ishijima S. H∞ performance of preview control systems. Automatica, 2003, 39(4): 693―701
9  Zhang H, Xie L, Zhang D, et al. A re-organized innovation approach to linear estimation. IEEE Trans Automat Contr, 2004, 49(10): 1810―1814
10 Klein L A. Sensor and Data Fusion Concepts and Applications. Society of Photo-Optical Instrumentation Engineers Press, 1999
11 Doyle J C, Glover K, Khargonekar P P, et al. State-space solutions to standard H2 and H∞ control problems. IEEE Trans Automat Contr, 1989, 34(8): 831―847
12 Bolzern P, Colaneri P, Nicolao G D. H∞ smoothing in discrete-time: a direct approach. In: 41st IEEE Conf Decision Contr, Las Vegas, 2002, 4233―4238
13 Mirkin L. On the H∞ fixed-lag smoothing: how to exploit the information preview. Automatica, 2003, 39(8): 1495―1504
