Tracking and Kalman Filtering

CMPE 264: Image Analysis and Computer Vision
Hai Tao, Department of Computer Engineering, University of California at Santa Cruz

Object tracking

Object tracking is the problem of estimating the positions and other relevant information of moving objects in image sequences.
• Two-frame tracking can be accomplished using correlation-based matching methods, optical flow techniques, or change-based moving object detection methods.
• The main difficulties in reliable tracking of moving objects include:
  - rapid appearance changes caused by image noise, illumination changes, nonrigid motion, and varying poses
  - occlusion
  - cluttered background
  - interaction between multiple objects
• In a long image sequence, if the dynamics of the moving object are known, the positions of the objects in the current image can be predicted. This prediction can be combined with the actual image observation to achieve more robust results.

Correlation-based tracking

For a given region in one frame, find the corresponding region in the next frame by maximizing the correlation score over a search region.
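A minimal sketch of this search in Python with OpenCV; the function name, the search-margin parameter, and the choice of normalized cross-correlation as the score are illustrative assumptions, not prescribed by the slides:

```python
import cv2
import numpy as np

def track_by_correlation(prev_frame, next_frame, box, search_margin=20):
    """Track the region box = (x, y, w, h) from prev_frame into next_frame
    by maximizing a normalized correlation score over a search window."""
    x, y, w, h = box
    template = prev_frame[y:y + h, x:x + w]

    # Restrict the search to a window around the previous position.
    fh, fw = next_frame.shape[:2]
    x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
    x1, y1 = min(fw, x + w + search_margin), min(fh, y + h + search_margin)
    search = next_frame[y0:y1, x0:x1]

    # Correlation scores for every offset; the peak is the new position.
    scores = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (dx, dy) = cv2.minMaxLoc(scores)
    return (x0 + dx, y0 + dy, w, h)
```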

Change-based tracking

Algorithm
• Align the background using a parametric motion model, e.g. a homography
• Use image subtraction to detect motion blobs:
  - Compute the difference image between two frames
  - Threshold the difference image to find the blobs
  - Locate the blob center as the position of the object
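A minimal sketch of the subtraction-and-threshold step in Python, assuming the two frames are already aligned grayscale images; the function name and threshold value are illustrative:

```python
import numpy as np

def detect_blob_center(frame_a, frame_b, threshold=25):
    """Difference two aligned grayscale frames, threshold the result,
    and return the centroid of the changed pixels as the object position."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    mask = diff > threshold                    # binary motion blob(s)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                            # no motion detected
    return float(xs.mean()), float(ys.mean())  # blob center (x, y)
```

For multiple objects, the mask would first be split into connected components and a centroid computed per blob.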

Change-based tracking
Example (figure not reproduced).

The best linear unbiased estimator (BLUE)

An optimal estimation algorithm produces optimal estimates of the state of a dynamic system on the basis of noisy measurements and an uncertain model of the system's dynamics. To derive the Kalman filter, we first introduce the best linear unbiased estimator (BLUE) for the measurement model

$$\mathbf{y} = H\mathbf{x} + \mathbf{n}$$

where $\mathbf{y}$ is the observation, $\mathbf{x}$ the state, $H$ the measurement matrix, and $\mathbf{n} \sim N(0, R)$ the noise. Given $\mathbf{y}$, $R$, and $H$, estimate $\mathbf{x}$.

• Linear filter: $\hat{\mathbf{x}} = L\mathbf{y}$
• Unbiased: $E(\hat{\mathbf{x}} - \mathbf{x}) = 0$
• Best: the covariance $E[(\hat{\mathbf{x}} - \mathbf{x})(\hat{\mathbf{x}} - \mathbf{x})^T]$ is minimal
• Lemma: the linear estimator is unbiased iff $LH = I$.
  Proof: $E[\mathbf{x} - \hat{\mathbf{x}}] = E[\mathbf{x} - L\mathbf{y}] = E[\mathbf{x} - L(H\mathbf{x} + \mathbf{n})] = E[(I - LH)\mathbf{x} - L\mathbf{n}] = E[(I - LH)\mathbf{x}]$, and $E[(I - LH)\mathbf{x}] = 0$ for all $\mathbf{x}$ iff $LH = I$.

The best linear unbiased estimator (BLUE)

Theorem: The best linear unbiased estimator $\hat{\mathbf{x}} = L\mathbf{y}$ for the measurement model $\mathbf{y} = H\mathbf{x} + \mathbf{n}$ is given by

$$L = (H^T R^{-1} H)^{-1} H^T R^{-1}$$

and the covariance matrix of the estimate is

$$P = (H^T R^{-1} H)^{-1}$$

Proof: The problem is equivalent to

$$\min_L \left\| E[(L\mathbf{y} - \mathbf{x})(L\mathbf{y} - \mathbf{x})^T] \right\| \quad \text{subject to } LH = I$$

Observe that $E[(L\mathbf{y} - \mathbf{x})(L\mathbf{y} - \mathbf{x})^T] = E[L\mathbf{n}\mathbf{n}^T L^T] = LRL^T$. Let $L_0 = (H^T R^{-1} H)^{-1} H^T R^{-1}$ and write any feasible solution as $L = L_0 + (L - L_0)$; then $(L - L_0)H = LH - L_0 H = I - I = 0$. The covariance can be written as

$$P = LRL^T = (L_0 + (L - L_0)) R (L_0 + (L - L_0))^T = L_0 R L_0^T + (L - L_0) R L_0^T + L_0 R (L - L_0)^T + (L - L_0) R (L - L_0)^T$$

Since $R L_0^T = R[(H^T R^{-1} H)^{-1} H^T R^{-1}]^T = H (H^T R^{-1} H)^{-1}$, it follows that $(L - L_0) R L_0^T = (L - L_0) H (H^T R^{-1} H)^{-1} = 0$ and, similarly, $L_0 R (L - L_0)^T = [(L - L_0) R L_0^T]^T = 0$. As a result,

$$P = L_0 R L_0^T + (L - L_0) R (L - L_0)^T$$

Since both terms are positive semidefinite, the covariance is minimized when $L = L_0$.
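The theorem is easy to sanity-check numerically. A minimal sketch in Python; all matrices and values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])                 # true state (illustrative)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                # 3 measurements of 2 states
R = np.diag([0.1, 0.2, 0.3])              # measurement noise covariance

# BLUE: L = (H^T R^-1 H)^-1 H^T R^-1, with covariance P = (H^T R^-1 H)^-1.
Ri = np.linalg.inv(R)
P = np.linalg.inv(H.T @ Ri @ H)
L = P @ H.T @ Ri

# Apply the estimator to many noisy observations y = Hx + n.
y = H @ x + rng.multivariate_normal(np.zeros(3), R, size=100_000)
x_hat = y @ L.T

print(L @ H)               # identity: the constraint LH = I holds
print(x_hat.mean(axis=0))  # approximately x: the estimator is unbiased
print(np.cov(x_hat.T))     # approximately P: the predicted covariance
```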

Kalman filter – problem statement

Dynamic system and the state:

$$\mathbf{x}_k = \Phi_{k-1}\mathbf{x}_{k-1} + \xi_{k-1}$$
$$\mathbf{z}_k = H_k \mathbf{x}_k + \mu_k$$

where
• $\mathbf{x}_k$ is the state vector at time instant $k$, e.g. $\mathbf{x}_k = [x_k, y_k, v_{x_k}, v_{y_k}]^T$
• $\mathbf{z}_k$ is the observation vector at time instant $k$, e.g. $\mathbf{z}_k = [z_x, z_y]^T$
• $\Phi_{k-1}$ is the state transition matrix, e.g.

$$\Phi_{k-1} = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

• $H_k$ is the measurement matrix, e.g.

$$H_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$

• $\xi_{k-1}$ is a random vector modelling the uncertainty of the model, assumed to be $N(0, Q_{k-1})$
• $\mu_k$ is a random vector modelling the additive noise in the observation, assumed to be $N(0, R_k)$
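For concreteness, these example matrices (a constant-velocity model with observed positions) can be written out in Python; the time step and the noise covariances Q and R below are illustrative choices, not from the slides:

```python
import numpy as np

dt = 1.0  # time step (illustrative)

# State x = [x, y, vx, vy]: constant-velocity motion model.
Phi = np.array([[1, 0, dt,  0],
                [0, 1,  0, dt],
                [0, 0,  1,  0],
                [0, 0,  0,  1]], dtype=float)

# Only the position (x, y) is observed.
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

Q = 0.01 * np.eye(4)  # process noise covariance (illustrative)
R = 1.0 * np.eye(2)   # measurement noise covariance (illustrative)
```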

Kalman filter – derivation

Suppose the prediction is $\hat{\mathbf{x}}_k^-$, with covariance matrix $P_k^-$, based on the observations before time $k$. As a result, if the true state is $\mathbf{x}_k$,

$$\hat{\mathbf{x}}_k^- = \mathbf{x}_k + \mathbf{e}_k, \qquad \mathbf{e}_k \sim N(0, P_k^-)$$

Another piece of information is the new observation

$$\mathbf{z}_k = H_k \mathbf{x}_k + \mu_k, \qquad \mu_k \sim N(0, R_k)$$

This can be written as the stacked measurement model

$$\begin{bmatrix} \hat{\mathbf{x}}_k^- \\ \mathbf{z}_k \end{bmatrix} = \begin{bmatrix} I \\ H_k \end{bmatrix} \mathbf{x}_k + \mathbf{n}, \qquad \mathbf{n} \sim N\left(0, \begin{bmatrix} P_k^- & 0 \\ 0 & R_k \end{bmatrix}\right)$$

The BLUE estimator of this system is

$$\hat{\mathbf{x}}_k = P_k \, [I,\; H_k^T] \begin{bmatrix} (P_k^-)^{-1} & 0 \\ 0 & R_k^{-1} \end{bmatrix} \begin{bmatrix} \hat{\mathbf{x}}_k^- \\ \mathbf{z}_k \end{bmatrix} = P_k \left[ (P_k^-)^{-1} \hat{\mathbf{x}}_k^- + H_k^T R_k^{-1} \mathbf{z}_k \right]$$

with covariance

$$P_k^{-1} = [I,\; H_k^T] \begin{bmatrix} (P_k^-)^{-1} & 0 \\ 0 & R_k^{-1} \end{bmatrix} \begin{bmatrix} I \\ H_k \end{bmatrix} = (P_k^-)^{-1} + H_k^T R_k^{-1} H_k$$

Therefore

$$\hat{\mathbf{x}}_k = P_k \left[ (P_k^-)^{-1} \hat{\mathbf{x}}_k^- + H_k^T R_k^{-1} \mathbf{z}_k \right] = P_k \left[ (P_k^{-1} - H_k^T R_k^{-1} H_k) \hat{\mathbf{x}}_k^- + H_k^T R_k^{-1} \mathbf{z}_k \right] = \hat{\mathbf{x}}_k^- + P_k H_k^T R_k^{-1} (\mathbf{z}_k - H_k \hat{\mathbf{x}}_k^-)$$

This is the update stage of the Kalman filter.

Kalman filtering

Propagation stage:

$$\hat{\mathbf{x}}_{k+1}^- = \Phi_k \hat{\mathbf{x}}_k$$
$$P_{k+1}^- = E[(\hat{\mathbf{x}}_{k+1}^- - \mathbf{x}_{k+1})(\hat{\mathbf{x}}_{k+1}^- - \mathbf{x}_{k+1})^T] = E[(\Phi_k \hat{\mathbf{x}}_k - \Phi_k \mathbf{x}_k - \xi_k)(\Phi_k \hat{\mathbf{x}}_k - \Phi_k \mathbf{x}_k - \xi_k)^T] = \Phi_k P_k \Phi_k^T + Q_k$$

Update equations:

$$P_k = \left( (P_k^-)^{-1} + H_k^T R_k^{-1} H_k \right)^{-1}$$
$$K_k = P_k H_k^T R_k^{-1}$$
$$\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + K_k (\mathbf{z}_k - H_k \hat{\mathbf{x}}_k^-)$$

or, equivalently,

$$K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$$
$$\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + K_k (\mathbf{z}_k - H_k \hat{\mathbf{x}}_k^-)$$
$$P_k = (I - K_k H_k) P_k^- (I - K_k H_k)^T + K_k R_k K_k^T = (I - K_k H_k) P_k^-$$

Propagation equations:

$$\hat{\mathbf{x}}_{k+1}^- = \Phi_k \hat{\mathbf{x}}_k$$
$$P_{k+1}^- = \Phi_k P_k \Phi_k^T + Q_k$$
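These equations translate almost line for line into code. A minimal sketch in Python using the second (gain) form of the update; the function names are illustrative:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One update step: K = P- H^T (H P- H^T + R)^-1."""
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)          # correct with the innovation
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred # updated covariance
    return x, P

def kalman_propagate(x, P, Phi, Q):
    """One propagation (prediction) step."""
    return Phi @ x, Phi @ P @ Phi.T + Q
```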

Kalman filtering – algorithm

• Initialize $Q_0$, $R_0$, $\Phi_k = \Phi$, $H_k = H$, $\mathbf{x}_0 = \mathbf{z}_0$, and $P_0$ as a large covariance matrix
• For each time instant $k = 1, \ldots$ (see the sketch after this list)
  - Predict $P_k^-$, $\hat{\mathbf{x}}_k^-$ using the propagation equations
  - Compute the gain $K_k$ and update $P_k$, $\hat{\mathbf{x}}_k$ using the update equations
  - Output the estimate $\hat{\mathbf{x}}_k$ and optionally the covariance matrix $P_k$
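A minimal sketch of this loop in Python, reusing Phi, H, Q, R and the kalman_propagate / kalman_update functions from the sketches above; the measurement values are made up:

```python
import numpy as np

# e.g. blob centers from change-based detection (illustrative values)
measurements = [np.array([1.0, 2.0]),
                np.array([2.1, 3.9]),
                np.array([3.0, 6.1])]

x = np.array([*measurements[0], 0.0, 0.0])  # x_0 = z_0, zero initial velocity
P = 1e3 * np.eye(4)                         # large initial covariance

for z in measurements[1:]:
    x, P = kalman_propagate(x, P, Phi, Q)   # predict x_k^-, P_k^-
    x, P = kalman_update(x, P, z, H, R)     # correct with the observation z_k
    print(x[:2])                            # estimated position
```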

Hidden Markov model

Hidden Markov model (HMM)
• $x_k$ is called the hidden state and $z_k$ is called the observation
• Markov chain:

$$p(x_k | x_1, \ldots, x_{k-1}) = p(x_k | x_{k-1})$$
$$p(z_k | x_1, \ldots, x_k) = p(z_k | x_k)$$

• As a result,

$$p(x_1, \ldots, x_k, z_1, \ldots, z_k) = p(x_1)\, p(z_1 | x_1) \prod_{i=2}^{k} p(x_i | x_{i-1})\, p(z_i | x_i)$$

[Graphical model: a chain of hidden states $\cdots \to x_{k-1} \to x_k$, each emitting its observation, $x_{k-1} \to z_{k-1}$ and $x_k \to z_k$.]

Forward algorithm

To compute the posterior probability $p(x_k | z_1, \ldots, z_k)$, we can use the forward algorithm:

$$p(x_k | z_k, \ldots, z_1) \propto p(z_k | x_k, z_{k-1}, \ldots, z_1)\, p(x_k | z_{k-1}, \ldots, z_1)$$
$$= p(z_k | x_k) \int_{x_{k-1}} p(x_k, x_{k-1} | z_{k-1}, \ldots, z_1)\, dx_{k-1}$$
$$= p(z_k | x_k) \int_{x_{k-1}} p(x_k | x_{k-1}, z_{k-1}, \ldots, z_1)\, p(x_{k-1} | z_{k-1}, \ldots, z_1)\, dx_{k-1}$$
$$= p(z_k | x_k) \int_{x_{k-1}} p(x_k | x_{k-1})\, p(x_{k-1} | z_{k-1}, \ldots, z_1)\, dx_{k-1}$$

Notice that Bayes' rule is used for computing the posterior probability:

$$p(x | z) = \frac{p(z | x)\, p(x)}{p(z)}$$
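For a discrete state space the integral becomes a sum over states, and one recursion step can be sketched as follows (the names are illustrative; the Kalman filter is this same recursion carried out in closed form with Gaussians):

```python
import numpy as np

def forward_step(alpha_prev, A, emission_probs):
    """One step of the discrete forward algorithm.

    alpha_prev[i]     : p(x_{k-1} = i | z_1, ..., z_{k-1})
    A[i, j]           : p(x_k = j | x_{k-1} = i)   (transition matrix)
    emission_probs[j] : p(z_k | x_k = j)           (likelihood of new obs)
    """
    alpha = emission_probs * (alpha_prev @ A)  # sum over x_{k-1}, then weight
    return alpha / alpha.sum()                 # normalize to a posterior
```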

Probabilistic interpretation of the Kalman filter

[Graphical model: $x_{k-1} \to x_k$ via $\Phi_{k-1}$ and $x_k \to z_k$ via $H_k$, with prior $p(x_{k-1} | z_1, \ldots, z_{k-1}) = N(x_{k-1} : \hat{x}_{k-1}, P_{k-1})$.]

The transition and observation densities are

$$p(x_k | x_{k-1}) = N(x_k : \Phi_{k-1} x_{k-1}, Q_{k-1}), \qquad p(z_k | x_k) = N(z_k : H_k x_k, R_k)$$

Applying the forward algorithm,

$$p(x_k | z_k, z_{k-1}, \ldots, z_1) \propto p(z_k | x_k, z_{k-1}, \ldots, z_1)\, p(x_k | z_{k-1}, \ldots, z_1) = p(z_k | x_k)\, p(x_k | z_{k-1}, \ldots, z_1)$$
$$= p(z_k | x_k) \int_{x_{k-1}} p(x_k | x_{k-1})\, p(x_{k-1} | z_{k-1}, \ldots, z_1)\, dx_{k-1}$$
$$= N(z_k : H_k x_k, R_k) \int_{x_{k-1}} N(x_k : \Phi_{k-1} x_{k-1}, Q_{k-1})\, N(x_{k-1} : \hat{x}_{k-1}, P_{k-1})\, dx_{k-1}$$
$$= N(z_k : H_k x_k, R_k)\, N(x_k : \Phi_{k-1} \hat{x}_{k-1},\; \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + Q_{k-1})$$

Both factors are Gaussian in $x_k$, so the posterior is Gaussian; matching its mean and covariance reproduces the Kalman update equations.

Homework

• In the Kalman filtering equations, does $P_k$ depend on the observation data? Does $K_k$ depend on the observation data?
• Optional: use the probabilistic interpretation to derive the Kalman filter equations.
