Fundamentals of Kalman Filtering: A Practical Approach
4-1
Polynomial Kalman Filters Overview
• Kalman filtering equations - scalar derivation
• Polynomial Kalman filter without process noise
• Comparing recursive least squares filter to Kalman filter
• Properties of polynomial Kalman filters
• Initial covariance matrix
• Polynomial Kalman filter with process noise
• Example of tracking a falling object
• A Kalman filter for accelerometer testing problem
Filtering Equations
General Continuous Equations

Model of real world must be placed in state-space form:

$\dot{x} = Fx + Gu + w$

where $u$ is known, $w$ is process noise (often a fudge factor), and we are interested in estimating $x$. The process noise matrix is related to the process noise by

$Q = E[ww^T]$   (matrix of spectral densities)

Measurements must be linearly related to states:

$z = Hx + v$

The measurement noise matrix is related to the measurement noise by

$R = E[vv^T]$   (matrix of spectral densities)
Modifying Real World Equations So Discrete Kalman Filter Can Be Built

Fundamental matrix: $\Phi(t) = e^{Ft} = \mathcal{L}^{-1}\left[(sI - F)^{-1}\right]$
Derivation of Scalar Riccati Equations - 2

Substitution and combining similar terms yields

$\tilde{x}_k = (1 - K_k H)\Phi_k \tilde{x}_{k-1} + (1 - K_k H) w_k - K_k v_k$

Squaring both sides and taking expectations yields

$P_k = (1 - K_k H)^2 (\Phi_k^2 P_{k-1} + Q_k) + K_k^2 R_k$

where the new definitions are

$P_k = E(\tilde{x}_k^2)$,  $Q_k = E(w_k^2)$,  $R_k = E(v_k^2)$

and the cross terms vanish:

$E(\tilde{x}_{k-1} w_k) = 0$,  $E(\tilde{x}_{k-1} v_k) = 0$,  $E(w_k v_k) = 0$
Derivation of Scalar Riccati Equations - 3

From previous slide,

$P_k = (1 - K_k H)^2 (\Phi_k^2 P_{k-1} + Q_k) + K_k^2 R_k$

By defining

$M_k = \Phi_k^2 P_{k-1} + Q_k$   (this is analogous to first Riccati equation)

and substituting, we get

$P_k = (1 - K_k H)^2 M_k + K_k^2 R_k$

To find the gain that will minimize the variance of the error in the estimate we can use calculus (i.e., take the derivative and set it to zero):

$\frac{\partial P_k}{\partial K_k} = 0 = 2(1 - K_k H) M_k (-H) + 2 K_k R_k$

Solving for the gain yields

$K_k = \frac{M_k H}{H^2 M_k + R_k} = M_k H (H^2 M_k + R_k)^{-1}$   (this is analogous to second Riccati equation)
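As a quick numerical sanity check (not from the original slides), the following Python sketch verifies that this gain minimizes $P_k(K) = (1 - KH)^2 M + K^2 R$; the values of $M$, $H$, $R$ are arbitrary positive test values:

```python
# Check that K = M*H / (H^2*M + R) minimizes P(K) = (1 - K*H)^2 * M + K^2 * R.
M, H, R = 2.0, 1.5, 0.7  # arbitrary positive test values (illustration only)

def variance(K):
    return (1.0 - K * H) ** 2 * M + K ** 2 * R

K_opt = M * H / (H ** 2 * M + R)

# Scan a grid of candidate gains; none should do better than the analytic optimum.
grid = [i / 1000.0 for i in range(-1000, 2001)]
assert all(variance(K_opt) <= variance(K) + 1e-12 for K in grid)
print("optimal gain:", K_opt)
```

Since $P(K)$ is a convex quadratic in $K$ (its second derivative $2H^2M + 2R$ is positive), the stationary point found above is the global minimum.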
Derivation of Scalar Riccati Equations - 4

Recall from previous slide

$P_k = (1 - K_k H)^2 M_k + K_k^2 R_k$

$K_k = \frac{M_k H}{H^2 M_k + R_k} = M_k H (H^2 M_k + R_k)^{-1}$   (optimal gain equation)

Substitute optimal gain into variance equation:

$P_k = \left(1 - \frac{M_k H^2}{H^2 M_k + R_k}\right)^2 M_k + \left(\frac{M_k H}{H^2 M_k + R_k}\right)^2 R_k$

which simplifies to

$P_k = \frac{R_k M_k}{H^2 M_k + R_k} = \frac{R_k K_k}{H}$   (variance equation)

By inverting the optimal gain equation,

$K_k R_k = M_k H - H^2 M_k K_k$

Substitute back into variance equation:

$P_k = \frac{R_k K_k}{H} = \frac{M_k H - H^2 M_k K_k}{H} = M_k - H M_k K_k$

or

$P_k = (1 - K_k H) M_k$   (this is analogous to third Riccati equation)
Riccati Equation Summary

Matrix Riccati equations (used in actual Kalman filter):

$M_k = \Phi_k P_{k-1} \Phi_k^T + Q_k$
$K_k = M_k H^T (H M_k H^T + R_k)^{-1}$
$P_k = (I - K_k H) M_k$

Scalar Riccati equations:

$M_k = \Phi_k^2 P_{k-1} + Q_k$
$K_k = M_k H (H^2 M_k + R_k)^{-1}$
$P_k = (1 - K_k H) M_k$
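For reference, the scalar Riccati equations above map directly to code; this is a minimal Python sketch (function and variable names are my own, not from the text):

```python
def scalar_riccati_step(p_prev, phi, q, h, r):
    """One iteration of the scalar Riccati equations."""
    m = phi ** 2 * p_prev + q      # M_k = Phi_k^2 * P_{k-1} + Q_k
    k = m * h / (h ** 2 * m + r)   # K_k = M_k * H * (H^2 * M_k + R_k)^-1
    p = (1.0 - k * h) * m          # P_k = (1 - K_k * H) * M_k
    return m, k, p

# Example: Phi = 1, Q = 0, H = 1, R = 1, previous variance P = 1.
print(scalar_riccati_step(1.0, 1.0, 0.0, 1.0, 1.0))  # -> (1.0, 0.5, 0.5)
```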
Polynomial Kalman Filter Without Process Noise
State Equations for Different Order Polynomials

Constant signal: $x = a_0$, so $\dot{x} = 0$. The state equation $\dot{x} = Fx$ has $F = 0$.

Ramp signal: $x = a_0 + a_1 t$, so $\dot{x} = a_1$ and $\ddot{x} = 0$. In state-space form,

$\begin{bmatrix}\dot{x}\\ \ddot{x}\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\end{bmatrix}$,   $F = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}$

Parabolic signal: $x = a_0 + a_1 t + a_2 t^2$, so $\dot{x} = a_1 + 2 a_2 t$, $\ddot{x} = 2 a_2$, and $\dddot{x} = 0$. In state-space form,

$\begin{bmatrix}\dot{x}\\ \ddot{x}\\ \dddot{x}\end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\\ \ddot{x}\end{bmatrix}$,   $F = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\end{bmatrix}$
We Are Assuming Measurement is Polynomial Signal Plus Noise

Constant signal: $z = x^* = a_0 + \text{noise}$, $\sigma_{\text{noise}} = \sigma_n$

$z = x + n$,   $H = 1$

Ramp signal: $z = x^* = a_0 + a_1 t + \text{noise}$, $\sigma_{\text{noise}} = \sigma_n$

$z = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\end{bmatrix} + n$,   $H = \begin{bmatrix}1 & 0\end{bmatrix}$

Parabolic signal: $z = x^* = a_0 + a_1 t + a_2 t^2 + \text{noise}$, $\sigma_{\text{noise}} = \sigma_n$

$z = \begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}x\\ \dot{x}\\ \ddot{x}\end{bmatrix} + n$,   $H = \begin{bmatrix}1 & 0 & 0\end{bmatrix}$
Example of Deriving a Fundamental Matrix

For first-order system,

$F = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}$

Squaring the system dynamics matrix,

$F^2 = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$

The fundamental matrix can be found by Taylor series expansion:

$\Phi(t) = e^{Ft} = I + Ft + \frac{(Ft)^2}{2!} + \cdots + \frac{(Ft)^n}{n!} + \cdots$

Only two terms are required because $F^2 = 0$:

$\Phi(t) = e^{Ft} = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix} t + \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}\frac{t^2}{2} = \begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}$

Therefore the discrete fundamental matrix is

$\Phi_k = \begin{bmatrix}1 & T_s\\ 0 & 1\end{bmatrix}$
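The Taylor-series construction can be reproduced numerically; here is a small self-contained Python sketch (not from the text) that sums the series for this $F$ and recovers $\Phi(t)$:

```python
from math import factorial

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp_series(f, t, terms=10):
    """Phi(t) = e^{Ft} via truncated Taylor series I + Ft + (Ft)^2/2! + ..."""
    n = len(f)
    phi = [[float(i == j) for j in range(n)] for i in range(n)]    # start with I
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # holds F^m
    for m in range(1, terms):
        power = mat_mul(power, f)
        for i in range(n):
            for j in range(n):
                phi[i][j] += power[i][j] * t ** m / factorial(m)
    return phi

F = [[0.0, 1.0], [0.0, 0.0]]
print(mat_exp_series(F, 0.1))  # -> [[1.0, 0.1], [0.0, 1.0]], since F^2 = 0
```

Because $F^2 = 0$ here, the series terminates after the linear term, exactly as on the slide; for a general $F$ more terms of the series contribute.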
Important Matrices for Different Order Polynomial Kalman Filters
Comparing Zeroth-Order Recursive Least Squares and Kalman Filters
Zeroth-Order Polynomial Kalman Filter

Kalman filter equation

$\hat{x}_k = \Phi_k \hat{x}_{k-1} + K_k (z_k - H \Phi_k \hat{x}_{k-1})$

Substituting matrices for zeroth-order filter,

$\hat{x}_k = \hat{x}_{k-1} + K_{1_k}(x_k^* - \hat{x}_{k-1})$

If we define the residual to be

$\mathrm{Res}_k = x_k^* - \hat{x}_{k-1}$

then the zeroth-order Kalman filter becomes

$\hat{x}_k = \hat{x}_{k-1} + K_{1_k} \mathrm{Res}_k$

This is identical to the equation for the zeroth-order recursive least squares filter!
Solving for Gain of Zeroth-Order Polynomial Kalman Filter - 1

For zero process noise the Riccati equations are

$M_k = \Phi_k P_{k-1} \Phi_k^T + Q_k = \Phi_k P_{k-1} \Phi_k^T$
$K_k = M_k H^T (H M_k H^T + R_k)^{-1}$
$P_k = (I - K_k H) M_k$

Assume initial covariance is infinite: $P_0 = \infty$

From first Riccati equation:

$M_1 = \Phi_1 P_0 \Phi_1^T = 1 \cdot \infty \cdot 1 = \infty$

From second Riccati equation:

$K_1 = M_1 H^T (H M_1 H^T + R_1)^{-1} = \frac{M_1}{M_1 + \sigma_n^2} = \frac{\infty}{\infty + \sigma_n^2} = 1$

From third Riccati equation:

$P_1 = (I - K_1 H) M_1 = \left(1 - \frac{M_1}{M_1 + \sigma_n^2}\right) M_1 = \frac{\sigma_n^2 M_1}{M_1 + \sigma_n^2} = \sigma_n^2$

Same gain as recursive least squares filter with k = 1. Same variance as recursive least squares filter with k = 1.
Solving for Gain of Zeroth-Order Polynomial Kalman Filter - 2

Second iteration of Riccati equations:

$M_2 = \Phi_2 P_1 \Phi_2^T = 1 \cdot \sigma_n^2 \cdot 1 = \sigma_n^2$

$K_2 = \frac{M_2}{M_2 + \sigma_n^2} = \frac{\sigma_n^2}{\sigma_n^2 + \sigma_n^2} = .5$

$P_2 = \frac{\sigma_n^2 M_2}{M_2 + \sigma_n^2} = \frac{\sigma_n^2 \sigma_n^2}{\sigma_n^2 + \sigma_n^2} = \frac{\sigma_n^2}{2}$

Same gain and variance as recursive least squares filter with k = 2.

Third iteration of Riccati equations:

$M_3 = P_2 = \frac{\sigma_n^2}{2}$

$K_3 = \frac{M_3}{M_3 + \sigma_n^2} = \frac{.5\sigma_n^2}{.5\sigma_n^2 + \sigma_n^2} = \frac{1}{3}$

$P_3 = \frac{\sigma_n^2 M_3}{M_3 + \sigma_n^2} = \frac{\sigma_n^2 \cdot .5\sigma_n^2}{.5\sigma_n^2 + \sigma_n^2} = \frac{\sigma_n^2}{3}$

Same gain and variance as recursive least squares filter with k = 3. Continuing the pattern, the fourth iteration gives $K_4 = 1/4$ and $P_4 = \sigma_n^2/4$, the same gain and variance as the recursive least squares filter with k = 4.
Summary of Comparison
Thus we can see that when the zeroth-order polynomial Kalman filter has zero process noise and an infinite initial covariance matrix, it has the same gains and variance predictions as the zeroth-order recursive least squares filter.
Comparing First-Order Recursive Least Squares and Kalman Filters
First-Order Polynomial Kalman Filter

Kalman filtering equation

$\hat{x}_k = \Phi_k \hat{x}_{k-1} + K_k (z_k - H \Phi_k \hat{x}_{k-1})$

Substitution of matrices yields

$\begin{bmatrix}\hat{x}_k\\ \hat{\dot{x}}_k\end{bmatrix} = \begin{bmatrix}1 & T_s\\ 0 & 1\end{bmatrix}\begin{bmatrix}\hat{x}_{k-1}\\ \hat{\dot{x}}_{k-1}\end{bmatrix} + \begin{bmatrix}K_{1_k}\\ K_{2_k}\end{bmatrix}\left(x_k^* - \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}1 & T_s\\ 0 & 1\end{bmatrix}\begin{bmatrix}\hat{x}_{k-1}\\ \hat{\dot{x}}_{k-1}\end{bmatrix}\right)$

Multiplying out the matrices,

$\hat{x}_k = \hat{x}_{k-1} + T_s \hat{\dot{x}}_{k-1} + K_{1_k}(x_k^* - \hat{x}_{k-1} - T_s \hat{\dot{x}}_{k-1})$
$\hat{\dot{x}}_k = \hat{\dot{x}}_{k-1} + K_{2_k}(x_k^* - \hat{x}_{k-1} - T_s \hat{\dot{x}}_{k-1})$

If we define the residual to be

$\mathrm{Res}_k = x_k^* - \hat{x}_{k-1} - T_s \hat{\dot{x}}_{k-1}$

the filter becomes

$\hat{x}_k = \hat{x}_{k-1} + T_s \hat{\dot{x}}_{k-1} + K_{1_k} \mathrm{Res}_k$
$\hat{\dot{x}}_k = \hat{\dot{x}}_{k-1} + K_{2_k} \mathrm{Res}_k$

Identical to equations of first-order recursive least squares filter.
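The pair of update equations above maps directly to code; a minimal Python sketch (function and variable names are mine, not the text's):

```python
def first_order_update(x, xd, z, k1, k2, ts):
    """One first-order polynomial Kalman filter measurement update.
    x, xd: previous position/velocity estimates; z: measurement x*_k."""
    res = z - x - ts * xd         # Res_k = x*_k - xhat_{k-1} - Ts * xdhat_{k-1}
    x_new = x + ts * xd + k1 * res
    xd_new = xd + k2 * res
    return x_new, xd_new

# Example: starting from zero estimates, one measurement of 1.0.
print(first_order_update(0.0, 0.0, 1.0, 1.0, 2.0, 0.5))  # -> (1.0, 2.0)
```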
Review of Equations for Gains and Variances of First-Order Recursive Least Squares Filter

Gains:

$K_{1_k} = \frac{2(2k-1)}{k(k+1)}$,  k = 1, 2, ..., n

$K_{2_k} = \frac{6}{k(k+1)T_s}$

Variances of errors in the estimates:

$P_{11_k} = \frac{2(2k-1)\sigma_n^2}{k(k+1)}$

$P_{22_k} = \frac{12\sigma_n^2}{k(k^2-1)T_s^2}$
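These formulas are easy to tabulate; the Python sketch below (illustrative values of my own, not from the text) evaluates them, noting that $P_{22}$ is only defined for k ≥ 2 because the $k(k^2 - 1)$ denominator vanishes at k = 1:

```python
SIG, TS = 1.0, 0.5   # arbitrary noise sigma and sample time for the check

def k1_gain(k):
    return 2.0 * (2 * k - 1) / (k * (k + 1))

def k2_gain(k):
    return 6.0 / (k * (k + 1) * TS)

def p11(k):
    return 2.0 * (2 * k - 1) * SIG ** 2 / (k * (k + 1))

def p22(k):   # valid for k >= 2: denominator k*(k^2 - 1) vanishes at k = 1
    return 12.0 * SIG ** 2 / (k * (k ** 2 - 1) * TS ** 2)

print(k1_gain(1), k2_gain(1))  # first measurement: gains 1.0 and 6/(2*TS) = 6.0
```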
MATLAB Simulation to Compare Both Filters - 1

ORDER = 2;
TS = 1.;
SIGNOISE = 1.;
PHI = [1 TS; 0 1];               % fundamental matrix for first-order polynomial Kalman filter
P = [99999999 0; 0 999999999];   % large initial covariance matrix
IDNP = eye(ORDER);
H = [1 0];
HT = H';
R = SIGNOISE^2;
PHIT = PHI';
count = 0;
for XN = 1:100