Multirate and Event-Driven Kalman Filters for Helicopter Flight

B. Sridhar, P. Smith, R. Suorsa, and B. Hussien

Helicopters flying at low altitude need information about objects in the vicinity of their flight path for automatic obstacle avoidance in a guidance system or as a display to warn the pilot. NASA is examining the role of a vision-based obstacle detection system to provide a range map, i.e., information about objects as a function of azimuth and elevation. The range map is computed using a sequence of images from a passive sensor, and an extended Kalman filter is used to estimate range to obstacles. The computation of a range map for a typical scene may involve several hundred Kalman filters. Optical flow computations provide the measurements

for each Kalman filter. The magnitude of the optical flow varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal-to-noise ratio in the optical flow calculations. A range estimation scheme is presented here, based on accepting the measurement only under certain conditions. The estimation results based on the standard Kalman filter are compared with results from a multirate Kalman filter and an event-driven Kalman filter on a sequence of helicopter flight images. Results show that, compared to the standard Kalman filter, either the multirate or event-driven formulation can be used to achieve a more uniform estimation accuracy over a large portion of the image with a substantial reduction in the amount of computation.

Presented at the First IEEE Conference on Control Applications, Dayton, OH, September 13-16, 1992. B. Sridhar, P. Smith, and R. Suorsa are with NASA Ames Research Center, Moffett Field, CA 94035. B. Hussien is with Sterling Federal Systems Group, Palo Alto, CA 94303.



Detection of Obstacles

Rotorcraft operating in a high-threat environment fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy [1]. Far-field planning involves the selection of goals and a nominal trajectory between the goals. Far-field planning is based on a priori information and requires a detailed map of the local terrain. However, the database for even the best surveyed landscape will not have adequate resolution to indicate objects such as trees, buildings, wires, and transmission towers. This information has to be acquired using an on-board sensor and integrated into the navigation/guidance system to modify the nominal trajectory of the rotorcraft. Because vision alone will not be adequate for detecting small obstacles such as wires, it is expected that the system will include an active sensor whose search can be directed to complement the vision system while minimizing the risk of detection [2].

The recovery of depth using electrooptical sensors, referred to as passive ranging, is based on triangulation and requires two images of the outside world from two different imaging locations. In stereo methods, two or more cameras located at different positions are used to obtain images of the outside world. In motion methods, the same camera is moved from one position to another to capture two or more images of the outside world. Given a sequence of images, employing image-object differential equations, a Kalman filter [3] can be used to estimate both the relative coordinates and the earth coordinates of objects on the ground. The Kalman filter can also be used in a predictive mode to track features in the images, leading to a significant reduction in the optical flow computations.

The computation of a range map for a typical scene may involve several hundred Kalman filters. Optical flow computations provide the measurements for each Kalman filter. The magnitude of the optical flow varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal-to-noise ratio in the optical flow calculations. In this estimation scheme, which is based on accepting the measurement only under certain conditions, estimation results are compared using a standard Kalman filter, a variable-rate Kalman filter, and an event-driven Kalman filter on a sequence of helicopter flight images.

The following section describes the relation between the image, the rotorcraft, and objects of interest under full curvilinear motion. Next we describe the nature of optical flow during a typical helicopter flight. We then describe the recursive range estimation using Kalman filters. Finally, we consider the performance of the standard, multirate, and event-driven filter algorithms using image data acquired during helicopter flight.

Passive Ranging

Passive ranging is the ability to estimate distances to various objects close to the flight path of the helicopter using passive sensing by electrooptical cameras. We will now describe the basic relations between the two-dimensional (2D) sensor image variables (displacement and velocity), the three-dimensional (3D) terrain geometry (i.e., points, lines, and other features), and the sensor motion parameters (i.e., position, attitude, and translational and rotational velocity). These relations provide the dynamic model for the estimation of range. For simplicity, the


camera is assumed to be fixed at the center of gravity of the rotorcraft with its optical axis oriented along the rotorcraft's longitudinal body axis. Fig. 1 shows the viewing geometry of the camera. In practice, the camera is mounted at a convenient location away from the center of gravity of the rotorcraft. If necessary, the camera is allowed variable orientation with respect to the body of the rotorcraft. This flexibility is provided in our implementation of the passive ranging algorithm.

Fig. 1. Viewing geometry of the imaging sensor.

Consider an earth-fixed, north-east-down coordinate system. Let $r_h = (x_h, y_h, z_h)^T$ and $r = (x, y, z)^T$, where $r$ is the vector from the earth-fixed coordinate system (ECS) origin to a point O on the ground and $r_h$ is the vector from the ECS origin to the center of gravity of the rotorcraft. The rotorcraft moves with respect to the earth at a translational velocity $V = (V_x, V_y, V_z)^T$. The orthonormal coordinate transformation from the ECS to the rotorcraft body-fixed coordinate system (BCS) is denoted by the $3 \times 3$ matrix $T$, which depends on the rotorcraft attitude. Let $T_s$ be the corresponding transformation matrix from the BCS to the sensor-fixed coordinate system (SCS). The relative position, $p$, of any point O with respect to the rotorcraft can be written as

$$p = r - r_h . \qquad (1)$$

The rate of change of this vector as viewed from the moving rotorcraft can be determined by the Coriolis equation, which relates the rate of change of a vector expressed in the ECS frame to that expressed in a moving frame. The standard vector form of the Coriolis equation [4] is

$$\dot p = \dot p_b + \omega_b \times p_b \qquad (2)$$

where $p_b$ is the vector between the rotorcraft and point O expressed in the BCS and $\omega_b$ represents the rotational velocity of the rotorcraft in the BCS. Differentiating (1), we have


$$\dot p = -V_b \qquad (3)$$

where $V_b$ represents the translational velocity of the rotorcraft in the BCS. From (2) and (3) we have the relation

$$\dot p_b = -\omega_b \times p_b - V_b . \qquad (4)$$

It is convenient to express this equation in the SCS as

$$\dot p_s = -\omega_s \times p_s - V_s \qquad (5)$$

where $p_s = [x_s, y_s, z_s]^T$, $V_s = [V_{sx}, V_{sy}, V_{sz}]^T$, and $\omega_s = [\omega_{sx}, \omega_{sy}, \omega_{sz}]^T$ represent the vector between the rotorcraft and point O, the translational velocity, and the rotational velocity of the rotorcraft, respectively, in the SCS.
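To make (5) concrete, here is a minimal numerical sketch (ours, not the paper's; the forward-Euler step and the function names are assumptions for illustration) of propagating the relative position for given sensor rates:

```python
import numpy as np

def p_s_dot(p_s, omega_s, V_s):
    """Relative-position dynamics of (5): dp_s/dt = -omega_s x p_s - V_s."""
    return -np.cross(omega_s, p_s) - V_s

def propagate_euler(p_s, omega_s, V_s, dT):
    """One forward-Euler step over a sampling interval dT (illustration only;
    the filter sections below use a discretized transition matrix instead)."""
    return p_s + p_s_dot(p_s, omega_s, V_s) * dT
```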

Let the image plane be perpendicular to the optical axis. Then, using similar triangles,

$$u = f\,\frac{x_s}{z_s} , \qquad v = f\,\frac{y_s}{z_s} \qquad (6)$$

where $f$ is the focal length of the sensor.
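Because the extended Kalman filter introduced later linearizes (6), the following sketch (ours; the unit focal length is an assumption) shows the projection and the Jacobian that plays the role of the measurement matrix:

```python
import numpy as np

F_LEN = 1.0  # focal length f (assumed unit value for illustration)

def project(p_s):
    """Pinhole projection of (6): relative SCS coordinates -> image point (u, v)."""
    x_s, y_s, z_s = p_s
    return np.array([F_LEN * x_s / z_s, F_LEN * y_s / z_s])

def projection_jacobian(p_s):
    """Partial derivatives of (u, v) with respect to (x_s, y_s, z_s);
    this matrix serves as H(k) in the filter equations below."""
    x_s, y_s, z_s = p_s
    return np.array([
        [F_LEN / z_s, 0.0,         -F_LEN * x_s / z_s**2],
        [0.0,         F_LEN / z_s, -F_LEN * y_s / z_s**2],
    ])
```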

As the rotorcraft moves, the image of the object O moves in the image plane. The velocity $(\dot u, \dot v)$ associated with each point in an image is referred to as optical flow [5]. For a rotorcraft flying in a straight line, the optical flow will be zero at a point in the image plane referred to as the focus-of-expansion (FOE). The FOE corresponds to the intersection of $V_s$ and the image plane, and it plays an important role in optical flow computations. Given a sequence of measurements $u(k), v(k)$, $k = 1, 2, \ldots, N$, and using the image-object differential equations (5) and (6), we estimate both the relative coordinates $p(k)$ and the earth coordinates $r$ of the corresponding object O on the ground. The range estimation consists of two major parts: (a) computation of optical flow by extraction of the measurements $(u(k), v(k))$ from the image, and (b) estimation of range given the sequence of measurements $u(k), v(k)$, $k = 1, 2, \ldots, N$.

The computation of optical flow requires the determination of the displacement of image points over a sequence of images. The main difficulty in the computation is due to the assumption that an object in the terrain corresponds to a unique point in the image. In an actual image, an object on the ground is more likely to be a region in the image. Another complication in the computation of the optical flow, referred to as the correspondence problem [6], results from the ambiguity in identifying features in two images that are projections of the same entity in the three-dimensional world. There are two approaches to the computation of optical flow: (a) field-based techniques and (b) feature-based techniques. These methods are discussed in greater detail in [7]-[9]. Features in an image can be points, lines, contours, regions, or any other geometrical definition that corresponds to a distinguishable part of an object. We use a feature-based approach in the computation of optical flow. The next section discusses optical flow during a typical helicopter flight.

Optical Flow

Consider a helicopter flying at 20 knots along a straight line. Fig. 2 shows an object at a distance $R$ making an angle $\theta$ to the direction of flight. The optical flow in this case can be expressed as

$$\frac{d\theta}{dt} = \frac{V \sin\theta}{R} \qquad (7)$$

where $V$ is the velocity of the helicopter. For an electrooptical sensor, $\theta$ may vary from 0° to 25°. The objects of interest may be located in the range of 100 to 600 ft. The optical flow is measured by tracking features from frame to frame. The motion of an object in the image plane between two successive frames is referred to as disparity and is measured in pixels. Let $\Delta T_m$ be the time interval between two frames. Then, the disparity or optical flow between the two frames in pixels is

$$d = C_p\,\frac{d\theta}{dt}\,\Delta T_m \qquad (8)$$

where $C_p$ is the number of pixels per radian. A typical value of 620 pixels per radian is assumed in the following discussion.

Fig. 2. Object location.

Table I shows the disparity between successive frames as a function of $R$ and $\theta$ for a 10 Hz frame rate. Fig. 3 divides the $(R, \theta)$ envelope into several regions depending on the optical flow. It is clear from Table I that the disparity can vary by a factor of 10 or more in an image. This indicates that the dynamic range and accuracy of the measurements $u(k), v(k)$ vary significantly in the image, and the accuracy with which an object's range can be estimated depends on its location in the image. Fig. 3 can also be used in establishing rules of thumb regarding the selection of $\Delta T_m$.

Table I. Variation of Disparity (pixels) with (R, θ)

  θ      R = 100 ft   200     300     400     500
  5°     1.81         0.9     0.6     0.45
  10°    3.61         1.8     1.2     0.9     0.72
  15°    5.39         2.7     1.8     1.35    1.08
  20°    7.12         3.56    2.37    1.78

Fig. 3. Regions based on optical flow (disparity in pixels versus range in feet; one region corresponds to disparities below 0.5 pixel).
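As a check on (7), (8), and Table I, this sketch (ours, not the paper's) evaluates the disparity formula with the stated constants; 20 knots is converted to ft/s:

```python
import math

C_P = 620.0           # pixels per radian
V = 20.0 * 1.6878     # 20 knots expressed in ft/s
DT_M = 0.1            # frame interval for a 10 Hz frame rate, s

def disparity(range_ft, theta_deg):
    """Disparity in pixels between successive frames, per (7) and (8)."""
    flow = V * math.sin(math.radians(theta_deg)) / range_ft  # rad/s, eq (7)
    return C_P * flow * DT_M                                 # pixels, eq (8)

for theta in (5, 10, 15, 20):
    print(theta, ["%.2f" % disparity(r, theta) for r in (100, 200, 300, 400, 500)])
```

The printed rows agree with Table I to within about one percent and make the factor-of-10 spread in disparity across the image immediate.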


Recursive Range Estimation

The object location estimation problem may be formulated as follows. Let a point object O have earth coordinates $r = (x, y, z)^T$. The image point corresponding to this object point has coordinates $(u, v)^T$. The actual image point location will differ from the true value $(u, v)^T$ due to noise in the sensor and errors introduced by the computation of the optical flow. Let $(u_m, v_m)^T$ be the measured coordinates of the image point, such that

$$u_m = u + n_u , \qquad v_m = v + n_v \qquad (9)$$

where $n_u$ and $n_v$ represent "pixel" noise of the imaging system. In vector notation, the measured or actual image point coordinates can be represented as

$$Z(t) = h(t) + \zeta(t) \qquad (10)$$

where $h(t) = (u, v)^T$ and $\zeta(t) = (n_u, n_v)^T$. If $n_u$ and $n_v$ are assumed to be independent scalar white noise processes with standard deviations $\sigma_u$ and $\sigma_v$, respectively, then the measurement error covariance matrix is given by

$$R = \mathrm{diag}(\sigma_u^2, \sigma_v^2) . \qquad (11)$$

The measured coordinates of an image point will move as the rotorcraft flies along its trajectory. Given the estimates of the rotorcraft's position and velocity (translational and rotational) along its trajectory, image point measurements from successive image frames can be used to estimate the object point coordinates in the ECS or in the SCS. Because the measurements $Z$ are a nonlinear function of the object point coordinates $r$ or $p_s$, an extended Kalman filter must be used. The Kalman filters investigated in this article have a linear continuous state model of the form

$$\dot X(t) = F(t)X(t) + G(t)U(t) + \zeta_c(t) \qquad (12)$$

where $X$ is the state vector, $U$ is the control input, $\zeta_c$ is a continuous white noise process with covariance $Q_c$ (representing modeling uncertainty), and $F(t)$ and $G(t)$ are time-varying matrices. Using a sampling interval of $\Delta T$ seconds, (12) can be replaced by the discrete form

$$X(k+1) = \Phi(k)X(k) + \Gamma(k)U(k) + \zeta_d(k) \qquad (13)$$

where $k = i \cdot \Delta T$, $i = 1, 2, 3, \ldots$, $\Phi(k)$ is the state transition matrix, and $\Gamma(k)$ is the input distribution matrix. The process noise $\zeta_d(k)$ is used to model uncertainties in the knowledge of $V_s$ and $\omega_s$; $\zeta_d(k)$ is a discrete white noise sequence with covariance $Q = Q_c/\Delta T$. The measurements $Z(t)$ are related to the state through the nonlinear vector function $h(X(t))$ and can be linearized to give the measurement equation. The measurements $Z(t)$ are generated by the computation of optical flow and are available for only some values of $k$. Let

$$Z(k) = Z(l_i\,\Delta T) \qquad (14)$$

where $l_i$ takes integral values $l_1, l_2, l_3, \ldots$ such that $l_{i+1} > l_i$. If $l_i$ is chosen such that $l_i = M \cdot i$, where $M$ is an integer, then we have a multirate Kalman filter. This is the situation where the system dynamics are propagated at a higher rate than the measurement update rate. Another approach is to compute the measurement for each value of $i$ and accept it under the condition

$$l_{i+1} = l_i + j \qquad (15)$$

where $j$ is the least positive integer such that

$$\| Z((l_i + j)\,\Delta T) - Z(l_i\,\Delta T) \| \geq \delta \qquad (16)$$

where $\delta$ is a predetermined disparity threshold. When the measurements are chosen in this manner, the Kalman filter is referred to as an event-driven or decision-driven Kalman filter.
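The two measurement-scheduling policies can be stated compactly; this is our sketch with invented helper names, not code from the paper:

```python
import numpy as np

def multirate_accept(i, M):
    """Multirate rule: accept the measurement at every M-th sample (l_i = M * i)."""
    return i % M == 0

def event_driven_accept(z_now, z_last_accepted, delta):
    """Event-driven rule of (15)-(16): accept a frame only when the disparity
    accumulated since the last accepted frame reaches the threshold delta."""
    return np.linalg.norm(z_now - z_last_accepted) >= delta
```

In an image region with small optical flow, the event-driven rule simply waits more frames between updates, which keeps the measured disparity, and hence the measurement signal-to-noise ratio, roughly constant across the image.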

Thus, for some values of $k$, as described earlier, we have

$$Z(k) = h[X(k)] + \zeta_d(k) \qquad (17)$$

where

$$h[X(k)] = [h_1, h_2]^T = f\,[\,x_s/z_s ,\; y_s/z_s\,]^T .$$

Given the state equations (13) and the image point measurements $Z(k)$ of (17), the state estimate $\hat X(k)$ and its error covariance matrix $P$ can be computed recursively using the Kalman filter [10]. The Kalman filter consists of two parts:

1. Measurement Update: The measurement update is performed whenever a new measurement is available. Prior to processing a new measurement $Z(k)$, we have the predicted value of the state $\bar X(k)$ and the covariances $\bar P(k)$, $Q(k)$, and $R(k)$. The new measurement improves our estimate of the state and its covariance. The updated values are

$$\hat X(k) = \bar X(k) + K(k)\,[Z(k) - h(\bar X(k))] \qquad (18)$$

$$P(k) = [I - K(k)H(k)]\,\bar P(k) \qquad (19)$$

where the matrix of partial derivatives $H(k) = \partial h[X]/\partial X$ is evaluated at $X = \bar X(k)$, and the Kalman filter gain $K(k)$ is computed using the equation

$$K(k) = \bar P(k)H^T(k)\,[H(k)\bar P(k)H^T(k) + R(k)]^{-1} . \qquad (20)$$

When $i \neq l_i$, no measurement update is performed, so

$$\hat X(k) = \bar X(k) \qquad (21)$$

$$P(k) = \bar P(k) . \qquad (22)$$

2. Time Update: Between measurements, the state estimate and its covariance are propagated through the state equations (13):

$$\bar X(k+1) = \Phi(k)\hat X(k) + \Gamma(k)U(k) , \qquad \bar P(k+1) = \Phi(k)P(k)\Phi^T(k) + Q(k) .$$
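A minimal numpy sketch (ours; the names and shapes are assumptions) of the update logic in (18)-(22):

```python
import numpy as np

def ekf_measurement_update(x_bar, P_bar, z, h, H_jac, R):
    """Measurement update of (18)-(20) for predicted state x_bar and
    covariance P_bar; h and H_jac evaluate the measurement function and
    its Jacobian at the predicted state."""
    H = H_jac(x_bar)
    S = H @ P_bar @ H.T + R                    # innovation covariance
    K = P_bar @ H.T @ np.linalg.inv(S)         # Kalman gain, eq (20)
    x_hat = x_bar + K @ (z - h(x_bar))         # eq (18)
    P = (np.eye(len(x_bar)) - K @ H) @ P_bar   # eq (19)
    return x_hat, P

# When no measurement is accepted at step k, (21)-(22) apply unchanged:
#     x_hat, P = x_bar, P_bar
```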

The matrix inversion in (20) can be avoided if the measurements are updated one at a time. This results in a Kalman filter gain different from $K(k)$; however, the final $\hat X$ and $P$ at the end of the measurement update are the same as before.
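A sketch of that one-at-a-time update (ours): with a diagonal $R$ and the linearization held at the predicted state, processing the two image coordinates sequentially reproduces the batch result of (18)-(20) without a matrix inversion.

```python
import numpy as np

def sequential_scalar_update(x_bar, P_bar, z, h, H_jac, r_diag):
    """Update using one scalar measurement at a time (diagonal R assumed)."""
    x, P = x_bar.copy(), P_bar.copy()
    H, z_hat = H_jac(x_bar), h(x_bar)      # linearize once, at the prediction
    for j in range(len(z)):
        Hj = H[j:j+1, :]                              # 1 x n Jacobian row
        s = float(Hj @ P @ Hj.T) + r_diag[j]          # scalar innovation variance
        K = (P @ Hj.T) / s                            # n x 1 gain for component j
        innov = z[j] - (z_hat[j] + float(Hj @ (x - x_bar)))
        x = x + K.ravel() * innov
        P = (np.eye(len(x)) - K @ Hj) @ P
    return x, P
```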

We choose to estimate the relative coordinates of the object point O with respect to the rotorcraft in the SCS as the state vector. Thus, the state vector is

$$X = p_s = (x_s, y_s, z_s)^T . \qquad (24)$$

Equation (5) is modified by adding a process noise term to model the uncertainties in the estimated rotorcraft angular and translational velocity vectors in the SCS provided by the on-board inertial navigation system. Define $[\omega_s]$, the skew-symmetric matrix such that $[\omega_s]\,p = \omega_s \times p$, as

$$[\omega_s] = \begin{bmatrix} 0 & -\omega_{sz} & \omega_{sy} \\ \omega_{sz} & 0 & -\omega_{sx} \\ -\omega_{sy} & \omega_{sx} & 0 \end{bmatrix} .$$

We have

$$\dot X = -[\omega_s]\,X - V_s + \zeta_c .$$

When $\omega_s$ is assumed constant over a sampling interval, it is possible to integrate the discrete system equations analytically. This results in a more accurate value for $\Phi(k)$ than the one that can be obtained by a numerical approximation using a Taylor series expansion.
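The analytic transition matrix can be illustrated with a sketch (ours, under the same assumption that $\omega_s$ is held constant over the interval): the exact $\Phi$ for $\dot X = -[\omega_s]X$ is a matrix exponential, which a first-order Taylor series only approximates.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Skew-symmetric matrix [w] such that skew(w) @ p equals np.cross(w, p)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def phi_exact(omega_s, dT):
    """Exact transition matrix over dT for constant omega_s (a rotation matrix)."""
    return expm(-skew(omega_s) * dT)

def phi_first_order(omega_s, dT):
    """First-order Taylor approximation I - [omega_s] dT, for comparison."""
    return np.eye(3) - skew(omega_s) * dT
```

For typical rotorcraft rates and a 0.1 s interval the two differ only at second order, but the exponential form remains exactly orthogonal, consistent with the accuracy advantage noted above.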
