Lecture on STRUCTURAL SYSTEM IDENTIFICATION USING SYSTEMS THEORY AND WAVELETS


K. C. PARK
Department of Aerospace Engineering Sciences and Center for Aerospace Structures
University of Colorado, Campus Box 429, Boulder, CO 80309
Phone: 303-492-6330 / Fax: 303-492-4990
e-mail: [email protected]

A CLASSICAL ORIENTAL IDENTIFICATION QUESTION:

The sky is blue. Is that really so, or does it look blue because it stretches off into infinity? — Chuang Tzu (365-290 B.C.)

FUNDAMENTAL BUILDING BLOCK FOR SYSTEM REALIZATION:

Availability of Accurate Impulse Response Data or Markov Parameters for Reliable Modeling of Structures from Test Measurements

WHAT IS STRUCTURAL SYSTEM IDENTIFICATION?

According to S. H. Crandall (1956), engineering problems can be classified into three categories:

• equilibrium problems
• eigenvalue problems
• propagation problems

Equilibrium problems are characterized by the structural or mechanical deformations due to quasi-static or repetitive loadings. In other words, in structural and mechanical systems the solution of equilibrium problems is a stress or deformation state under a given load. The modeling and analysis tasks are thus to obtain the system stiffness or flexibility so that the computed stresses or displacements accurately match the observed ones.

Eigenvalue problems can be considered as extensions of equilibrium problems in that their solutions are dictated by the same equilibrium states. There is an additional distinct feature in eigenvalue problems: their solutions are characterized by a unique set of system configurations such as resonance and buckling.

Propagation problems are to predict the subsequent stress or deformation states of a system under time-varying loading and deformation states. They are called initial-value problems in mathematics or disturbance transmissions in wave propagation.

Modal testing is perhaps the most widely accepted term for activities involving the characterization of mechanical and structural vibrations through testing and measurements. It is primarily concerned with the determination of mode shapes (eigenvectors) and modes (eigenvalues), and, to the extent possible, the damping ratios of a vibrating system. Therefore, modal testing can be viewed as an experimental solution of eigenvalue problems.

There is one important distinction between eigenvalue analysis and modal testing. Eigenvalue analysis obtains the eigenvalues and eigenvectors from the analytically constructed governing equations or from a given set of mass and stiffness properties; there is no disturbance or excitation in the problem description. Modal testing, on the other hand, seeks the same eigenvalues and eigenvectors by injecting disturbances into the system and measuring the system response. However, modal testing in earlier days tried to measure the so-called free-decay responses to mimic the steady-state responses of equilibrium problems.

The term structural system identification adopted in this course encompasses the modeling of all three engineering problems through measurements. Table 1 offers a comparison of engineering analysis vs. system identification for the three problems.

Table 1: Comparison of Engineering Analysis and System Identification

Problem       Engineering Analysis                           System Identification
-----------   --------------------------------------------   --------------------------------------------
Equilibrium   Construct the model first, then obtain         Measure the dynamic input/output first,
              deformations under any given load.             then obtain the flexibility.
Eigenvalue    Construct the model first, then obtain         Measure the dynamic input/output first,
              eigenvalues without any specified load.        then obtain the eigenvalues that correspond
                                                             to the specific excitation.
Propagation   Construct the model first, then obtain         Measure the dynamic input/output first,
              responses for time-varying loads.              then obtain the model that corresponds to
                                                             the specific load.

Observe from the above table that in engineering analysis the models are constructed first. In system identification the models are constructed only after the appropriate input and output are measured. Nevertheless, for both engineering analysis and system identification, modeling is a central activity. Observe also that, in engineering analysis, once the model is constructed it can be used for all three problems. On the other hand, the models obtained by system identification are usually valid only under the specific set of input and output pairs. The extent to which a model obtained through system identification can be applied to dynamic loading and transient response measurements depends greatly upon the input characteristics and the measurement setup and accuracy.

Structural Modeling by System Identification

As noted in the previous section, modeling constitutes a key activity in engineering analysis. For example, the finite element method is a discrete structural modeling methodology. Structural system identification is thus a complementary endeavor to discrete modeling techniques. A comprehensive modeling of structural systems is shown in Fig. 1. The entire process of structural modeling is made of seven blocks and seven information transmission arrows (except the feedback loop).

Testing consists of the first two blocks, Structures and Signal Conditioning, along with three actions: the application of disturbances as input to the structures, the collection of sensor output, and the processing of the sensor output via filtering for noise and aliasing treatment.

FFT and wavelet transforms are software interfaces with the signal conditioners. From the viewpoint of system identification, their primary role is to produce as accurately as possible impulse response functions, either in frequency-domain or in time-domain variables. This is perhaps the most important software task, because all the subsequent system realizations and the determination of structural model parameters depend on the extracted impulse response data. About a fourth of this course will be devoted to learning methods and techniques for extracting the impulse response functions.

[Fig. 1 Wonderful World of Structural Modeling: a block diagram. Disturbances drive the Structures block (with Sensors and Instrumentation); the sensor output passes through Signal Conditioning to Filtered Data; FFTs/Wavelets produce the Impulse Response; Realization (ẋ = Ax + Bu, y = Cx + Du) yields the Realization Parameters; these feed the Structural Models (mass M, damping D, stiffness K, nonlinearities) and, via Model Update, the FEM/Active Control Models. Applications include model-based diagnostics/health monitoring, active tailoring, repair, design modification, accident investigation, monitoring, control, vibration isolation design, and damage detection.]

System realization performs the following task:

    Given the plant model       ẋ = A x + B u,   y = C x + D u,
    measurements of the output  y, and the input u,
    determine the system characteristics A, B, C, and D.

The structural modeling block extracts physical structural quantities from the system characteristics or realization parameters (A, B, C, D). This is because the realization characteristics still constitute abstract mathematical models, not necessarily in terms of the desired structural quantities. Specifically, one obtains:

    Given the realization parameters A, B, C, and D,
    determine either the modal quantities, modes (ω) and mode shapes (φ),
    or the physical matrices: mass (M), stiffness (K), and damping (D).
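Once (A, B, C, D) are realized, the modal quantities follow from an eigenvalue computation on A. A minimal sketch in Python (the function name is hypothetical; it assumes an underdamped system, so the continuous-time A has complex conjugate eigenvalue pairs):

    import numpy as np

    def modal_quantities(A):
        """Undamped frequencies and damping ratios from a realized
        continuous-time state matrix A (assumes underdamped modes)."""
        lam = np.linalg.eigvals(A)
        lam = lam[lam.imag > 0.0]          # one eigenvalue per conjugate pair
        omega = np.abs(lam)                # undamped natural frequencies [rad/s]
        zeta = -lam.real / np.abs(lam)     # modal damping ratios
        return omega, zeta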

Finite element model updating, active controls, and health monitoring are the beneficiaries of the preceding four activities. Hence, we will try to touch upon these topics, perhaps as term projects, depending on how this course progresses before the Thanksgiving recess. Finally, if necessary, one may have to repeat testing, hopefully utilizing the experience gained from the first set of activities. Even experienced experimentalists often must repeat testing. A good experimentalist rarely believes his/her initial results, whereas a typical analyst almost always thinks his/her initial results are valid!

Discussions

As stated, structural analysis first constructs the structural models, viz., the mass, damping, and stiffness operators. Then, using known forcing functions or the best possible forcing function models, it determines the response of the structure. Structural system identification, on the other hand, determines the structural model or model parameters based on the measured forcing functions and the measured structural response data. More precisely, let us consider a general system model given by (we will derive it in the next chapter)

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)

Structural analysis obtains x for a given u, knowing that A and B are available from modeling work, usually by the finite element method or the boundary element method. Structural system identification obtains A along with the remaining three operators B, C, and D when u and the sensor output y are available. In other words, structural analysis obtains vectorial quantities from known matrix quantities, whereas structural system identification obtains matrix quantities from measured vectorial quantities.

The structural response can be obtained uniquely once the structural model (structural mass, stiffness, and damping matrices) and the forcing function are given. This is not the case in general when determining the structural model from the measured forcing function and measured response data. This is because the information (usually frequency contents) contained in the measured response can vary widely, depending on the sensor types, sensor locations, and forcing function characteristics. For example, if the forcing function consists of two tuned harmonic frequencies, the measured response would capture at most two modes, by tuning the forcing function frequencies to match two of the system modes. This is why one prefers to utilize forcing functions that contain rich enough frequency contents. This and other related issues will be discussed throughout the course. We now outline the course:

Chapter 2. MATHEMATICAL MODELS FOR SYSTEM IDENTIFICATION

Model Equations for Structural System Identification

The discrete matrix equation for structures derived in the preceding section is given in a form primarily suited for structural analysis; that is, for any specified forces, its solution gives a complete set of system states, from accelerations to strains and stresses at every discrete point. This is not possible in the measurement world: only a limited number of accelerations, velocities, displacements, and strains are usually measured. In addition, the number of applied forces is limited, and their locations play an important role in subsequent identification activities. Therefore, for system identification purposes, we specialize the equations for structures and augment them with the sensor output equation as shown below:

    M q̈(t) + D q̇(t) + K q(t) = B̂ u(t)
    y(t) = Sd q(t) + Sv q̇(t) + Sa q̈(t)                                   (1)

where q is the n-displacement vector; u is an m-input force vector; y is an ℓ-sensor output vector, consisting of displacements, velocities, accelerations, or their combinations; B̂ is the input-state influence matrix; and Sd, Sv, and Sa are the state-output influence matrices for displacement, velocity, and acceleration, respectively. The reason we introduce the input and output Boolean influence matrices is that both the number of excitations m and the number of output measurements ℓ are in general much smaller than the number of discrete internal variables n.

Normal Modes and Normal Mode Shapes

As we have already experienced in the previous chapter, the use of structural modes and their mode shapes is so widespread that it is natural to transform (2.1), whenever possible and to the extent possible, into its modal basis. To this end, we first express the discrete displacement q(t) in terms of its normal mode shapes φ and the associated modal coordinates η:

    q = φ η                                                               (2)

    η̈(t) + φᵀ D φ η̇(t) + Ω² η(t) = φᵀ B̂ u(t)
    y(t) = Sd φ η(t) + Sv φ η̇(t) + Sa φ η̈(t)                              (3)

It should be emphasized that φ is normalized with respect to the system mass matrix and is obtained from the following eigenproblem:

    K φ = M φ Ω²
    φᵀ M φ = I (n×n)
    φᵀ K φ = Ω² = diag{ ωk², k = 1, . . . , n }                           (4)

where ωk is the undamped natural frequency for mode k. Observe that the transformed damping matrix φᵀ D φ is in general non-diagonal. A popular damping model that is diagonalizable is due to Lord Rayleigh, given by

    D = α M + β K                                                         (5)

where α and β are constants to be determined depending on the measured data. The modal damping ratio ζk is then defined through

    φᵀ D φ = diag{ 2 ζk ωk, k = 1, . . . , n }                            (6)
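Combining (5) with (6) gives ζk = α/(2ωk) + β ωk/2, so α and β can be fitted from two target modal damping ratios. A short sketch (the numbers are illustrative only):

    import numpy as np

    # Rayleigh damping D = alpha*M + beta*K implies
    # phi^T D phi = diag(alpha + beta*omega_k^2) = diag(2*zeta_k*omega_k),
    # hence zeta_k = alpha/(2*omega_k) + beta*omega_k/2.
    def rayleigh_damping_ratios(omega, alpha, beta):
        omega = np.asarray(omega, dtype=float)
        return alpha / (2.0 * omega) + beta * omega / 2.0

    # Fit alpha, beta so that two modes have prescribed damping ratios:
    w1, w2, z1, z2 = 2.0, 10.0, 0.02, 0.05          # hypothetical values
    alpha, beta = np.linalg.solve([[1/(2*w1), w1/2],
                                   [1/(2*w2), w2/2]], [z1, z2])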

Transfer Function or Input/Output Relation

As stated, for an n-degree-of-freedom system, only a limited number of sensors and forces are employed in experiments. Suppose the structure is excited by m forces and the response is measured by ℓ sensors. The resulting input/output relation, called herein the transfer function or impulse response function, can be expressed via the Laplace transform as

    [ M s² + D s + K ] q(s) = B̂ u(s)
    [ Sd + Sv s + Sa s² ] q(s) = y(s)                                     (7)

where the initial conditions are assumed to be q(0) = q̇(0) = 0 without loss of generality. By eliminating the internal variable q(s) from the above equation, one obtains the following input/output relation:

    y(s) = H(s) u(s)
    H(s) = (Sd + Sv s + Sa s²) [ M s² + D s + K ]⁻¹ B̂                     (8)

where H(s) is called the transfer function or impulse response function. It should be emphasized that the transfer function is invariant with respect to the choice of the coordinate system employed. For example, when the normal mode basis is used, the transfer function is obtained by

    y(s) = (Sd + Sv s + Sa s²) φ [ φᵀ (M s² + D s + K) φ ]⁻¹ φᵀ B̂ u(s)
         = (Sd + Sv s + Sa s²) [ M s² + D s + K ]⁻¹ B̂ u(s)
         = H(s) u(s)                                                      (9)

In other words, the transfer function H(s) is invariant regardless of the definition of the internal dynamical states, q and η.
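The invariance is easy to verify numerically. The sketch below uses hypothetical 2-DOF matrices with displacement sensing only; scipy.linalg.eigh(K, M) returns mode shapes normalized so that φᵀ M φ = I, as required by (4):

    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([2.0, 1.0])                      # hypothetical 2-DOF system
    K = np.array([[6.0, -2.0], [-2.0, 4.0]])
    Dmat = 0.02 * M + 0.01 * K                   # Rayleigh damping matrix
    Bhat = np.array([[1.0], [0.0]])              # single force at DOF 1
    Sd = np.eye(2)                               # displacement sensing only

    w2, phi = eigh(K, M)                         # K phi = M phi Omega^2, phi^T M phi = I

    def H_physical(s):
        return Sd @ np.linalg.solve(M * s**2 + Dmat * s + K, Bhat)       # eq. (8)

    def H_modal(s):
        core = phi.T @ (M * s**2 + Dmat * s + K) @ phi                   # eq. (9)
        return Sd @ phi @ np.linalg.solve(core, phi.T @ Bhat)

    s = 1j * 1.3                                 # test point on the imaginary axis
    assert np.allclose(H_physical(s), H_modal(s))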

General State Space Representations

We have learned from §2.1 that, unless the damping is of Rayleigh type, the homogeneous part of the second-order structural dynamics equation (2.1) or (2.3a) is not simultaneously diagonalizable. Although Rayleigh damping is a reasonable approximation for lightly damped structures, experience indicates that many moderately or heavily damped structures do not lend themselves to a Rayleigh damping model. In other words, the transformed damping matrix φᵀ D φ in (2.4d) is fully coupled, and the modal coordinate vector η cannot be obtained in terms of the normal mode shapes φ. For these general damping cases, it is preferable to employ a first-order form that can be characterized by a complex set of eigenvectors. There is a second imperative for adopting a first-order form for general damping cases: system realization algorithms have been developed primarily on the basis of modern systems theory, whose mathematical models are expressed in first-order state space forms. To this end, we introduce a linear, time-invariant state space model

    ẋ(t) = A x(t) + B u(t)
    y(t) = C x(t) + D u(t)                                                (10)

where x is the (N × 1) internal state vector; u(t) is an (m × 1) applied force vector; y(t) is an (ℓ × 1) output vector (physical sensor measurements such as displacement, velocity, or acceleration); A is an (N × N) matrix called the state transition matrix; B is an (N × m) input (force or actuator application) location matrix; C is an (ℓ × N) output (sensor measurement) location matrix; and D is an (ℓ × m) matrix that represents any direct input/output feedthrough. Observe that the size of the internal variable vector N is twice the size of the second-order structural dynamics equation (2.1), namely, N = 2n.

At this point we recall that the input-output transfer function H(s) (2.8) remains unchanged under the normal mode coordinate change given by (2.2). We will show below that the above state space equation (2.10) also preserves the same transfer function expression if it represents an equivalent form of the structural model equation (2.1). If (2.10) is to be equivalent to the structural dynamics equation (2.1), its eigenvalues associated with the internal variable x must be the same as those of the structural internal variable q, viz.,

    | M λ² + D λ + K | = 0    ⇔    | λ I − A | = 0                        (11)

In addition, one obtains the following input-output relation:

    y(s) = Ĥ u(s),      Ĥ = C (s I − A)⁻¹ B                               (12)

It will be shown in the next several subsections that Ĥ = H regardless of how the internal variable x is represented.

Equivalent Realizations

It turns out that when one has obtained a set of realizations from measured data, each utilizing the data in a distinctive way, the resulting realization coordinates are in general in different bases, although they are related by similarity transformations. In other words, even if the internal variable vector x consists of the physical displacement q and the velocity q̇ such that

    x = [ q̇  q ]ᵀ                                                         (13)

the resulting coordinates, say z, from a realization can be in a different basis. Suppose the realization coordinates are related to the theoretical basis x by

    z = T x                                                               (14)

Nevertheless, all possible realizations are equivalent, as demonstrated below:

    y(s) = C T ( s T⁻¹ T − T⁻¹ A T )⁻¹ T⁻¹ B u(s) = H(s) u(s),
    H(s) = C (s I − A)⁻¹ B                                                (15)

which shows that, as long as each different realization basis has a similarity transformation to the reference model (2.10), they are equivalent model realizations. Therefore, it is possible to have an infinite number of equivalent realizations, all of which can be transformed to the reference model. This fact offers both validity of the realizations and a challenge in performing the desired transformations for each particular realization. We defer further discussions on this aspect to later chapters.

State Space Model with Generalized Momentum

Of a number of possible ways to cast the second-order structural model equation (2.1) into the state space model (2.10), we will first consider the use of a generalized momentum variable given by

    v(t) = E1 M q̇(t) + E2 q(t)                                            (16)

where E1 and E2 are constant matrices to be determined. With this choice, the internal state space variable becomes

    x(t) = { q(t), v(t) }ᵀ                                                (17)

Different choices of E1 and E2 will lead to different but equivalent state space representations. For example, the choice of

    E1 = M⁻¹,      E2 = 0                                                 (18)

leads to the following state space operators:

    A = [     0           I      ]
        [ −M⁻¹ K       −M⁻¹ D    ]

    B = [    0    ]
        [  M⁻¹ B̂  ]

    C = [ Sd  0 ] + [ Sv  0 ] A + [ Sa  0 ] A²
      = [ Sd − Sa M⁻¹ K      Sv − Sa M⁻¹ D ]

    D = Sa M⁻¹ B̂                                                          (19)
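A sketch of assembling the operators of (19) from given structural matrices (the function name is hypothetical; Dmat denotes the damping matrix, to avoid a clash with the feedthrough D):

    import numpy as np

    def second_order_to_state_space(M, Dmat, K, Bhat, Sd, Sv, Sa):
        """State space operators (A, B, C, D) for the choice
        E1 = M^-1, E2 = 0 of (18)-(19)."""
        n = M.shape[0]
        Minv = np.linalg.inv(M)
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-Minv @ K,        -Minv @ Dmat]])
        B = np.vstack([np.zeros((n, Bhat.shape[1])), Minv @ Bhat])
        C = np.hstack([Sd - Sa @ Minv @ K, Sv - Sa @ Minv @ Dmat])
        D = Sa @ Minv @ Bhat
        return A, B, C, D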

There is another choice for E1 and E2 which is frequently used in direct time integration of the structural model equation for transient response analysis. It is given by

    E1 = I,      E2 = D                                                   (20)

which gives the following state space operators:

    A = [ −M⁻¹ D     M⁻¹ ]
        [   −K        0  ]

    B = [ 0 ]
        [ B̂ ]

    C = [ Sd  0 ] + [ Sv  0 ] A + [ Sa  0 ] A²

    D = Sa M⁻¹ B̂                                                          (21)

Once again, regardless of the specific choices of E1 and E2, the input-output relation H remains invariant. Let us take the case of (2.19). The transfer function Ĥ for this state space model can be shown to be the same as that of the second-order case (2.8b):

    Ĥ(s) = C (s I − A)⁻¹ B
         = [ Sd − Sa M⁻¹K   Sv − Sa M⁻¹D ] ( s I − [ 0, I; −M⁻¹K, −M⁻¹D ] )⁻¹ [ 0; M⁻¹B̂ ]    (22)

which, after expansion, can be shown to be

    Ĥ(s) = [ Sd + Sv s + Sa s² ] ( M⁻¹ s⁻² − M⁻¹ D M⁻¹ s⁻³ − M⁻¹ (K − D M⁻¹ D) M⁻¹ s⁻⁴ + · · · ) B̂
         = (Sd + Sv s + Sa s²) [ M s² + D s + K ]⁻¹ B̂ = H(s)              (23)

Thus, the input-output relation, or the system transfer function, is invariant for any consistent set of state space internal variables.

State Space Model with Normal Modes

The normal mode coordinates η are perhaps the most widely used internal variables in structural system identification. This choice can be invoked from the generalized momentum equation (2.16) in conjunction with the normal-mode structural equation model (2.3). To this end, let us introduce the following generalized momentum:

    v(t) = E1 η̇(t) + E2 η(t)                                              (24)

so that the resulting state space internal variable x becomes

    x(t) = { η(t), v(t) }ᵀ                                                (25)

If one chooses

    E1 = I,      E2 = 0                                                   (26)

and utilizes (2.3), the following normal-mode state space model results:

    A = [    0           I      ]
        [  −Ω²       −φᵀ D φ    ]

    B = [   0    ]
        [  φᵀ B̂  ]

    C = [ Sd φ  0 ] + [ Sv φ  0 ] A + [ Sa φ  0 ] A²
      = [ Sd φ − Sa φ Ω²      Sv φ − Sa φ (φᵀ D φ) ]

    D = Sa φ φᵀ B̂ = Sa M⁻¹ B̂                                              (27)

Clearly, other choices are also possible, such as the one corresponding to (2.20). This is left as an exercise.

Symmetric State Space Model

Finally, we introduce a symmetric state space model so that the corresponding eigenvalue analysis can be performed by using a symmetric eigenanalysis algorithm (though not necessarily a symmetric positive definite one!). One popular symmetric state space form with displacement output sensing is given below:

    [ D  M ] { q̇ }   [ −K  0 ] { q }   { B̂ }
    [ M  0 ] { q̈ } = [  0  M ] { q̇ } + { 0 } u

    yd = [ Sd  0 ] { q, q̇ }ᵀ                                              (28)

whose companion eigenproblem is given by

    [ −K  0 ] [ Ψ  ]   [ D  M ] [ Ψ  ]
    [  0  M ] [ ΨΛ ] = [ M  0 ] [ ΨΛ ] Λ                                  (29)

with its eigenvectors Ψ and eigenvalues Λ satisfying the following properties:

    x = { q, q̇ }ᵀ = [ Ψ  ] z
                    [ ΨΛ ]

    [ Ψ  ]ᵀ [ D  M ] [ Ψ  ]          [ Ψ  ]ᵀ [ −K  0 ] [ Ψ  ]
    [ ΨΛ ]  [ M  0 ] [ ΨΛ ] = I,     [ ΨΛ ]  [  0  M ] [ ΨΛ ] = Λ

    Λ = diag{ σi ± j ωi, i = 1, . . . , n }                               (30)

where Ψ is the matrix of the corresponding complex mode shapes.

Markov Parameters

The Markov parameters of the measured output are required to vanish beyond the measured record length r:

    Y(ℓ) ⇒ 0,      ℓ > r                                                  (42)

[Diagram: a train of input samples u(0), u(1), u(2), . . . applied in time, and the corresponding Markov parameters Y(0), Y(1), Y(2), . . . over time.]

For a periodic function with period T = N Δt, we have

    Y(N + r) = Y(r)                                                       (43)

which means that the availability of y(s) for a complete period is necessary and sufficient for representing the output. However, not all the oscillatory components of an output consisting of multiple modes can be sampled so as to satisfy the periodicity requirement (2.42). This issue will be revisited several times throughout the course. In passing, as will be seen from the properties of wavelets later in the course, wavelets appear to perform well in the absence of periodicity of the measured data.

There are two distinct cases that meet the requirement (2.42): frequency band-limited excitation, and damped systems, for which

    Y(ℓ) = C Ā^(ℓ−1) B̄ → 0      as      ℓ → ∞                             (44)

Of course, when the excitations are of unit impulse type, viz.,

    u(0) = 1      and      u(s) = 0,  s > 0                               (45)

we have from (2.40)

    y(s) = Y(s),      s = 0, 1, 2, . . . , r                              (46)

Chapter 3. SYSTEM REALIZATION

In Chapter 1 a system realization was defined as the determination of the model characteristics (Ā, B̄, C) in the discrete state space equation from the measured excitation (input) and measured response (output) data. There have been two approaches: frequency-domain and time-domain system realizations. This chapter will focus on the latter, often called time-domain parameter determination. As alluded to in the previous chapters, the basis of system realization is modern state space systems theory, which was originally developed for building models of complex systems as well as for automatic control. Two fundamental aspects of systems theory we will employ for system realization are the so-called observability (measurability) and controllability (excitability). Observability is the ability to obtain a response output that contains a number of response characteristics starting from an initial response x(0). Controllability pertains to the extent to which the input can excite a number of system response components x(t) by a train of excitations u(t). Because of their relevance in system realization, we review them first below before moving on to the system realization description.

Observability and Controllability Operators

Let us recall the discrete state space model (2.36):

    x(k + 1) = A x(k) + B u(k)
    y(k) = C x(k) + D u(k)                                                (47)

where we have dropped the bar (¯) on A and B for notational simplicity. First, if there were no excitation except the initial condition x(0) ≠ 0, the corresponding output y for the successive discrete time steps t = Δt, 2Δt, 3Δt, . . . can be expressed as

    [ y0      ]
    [ y1      ]
    [ y2      ]  = Vp x0,      Vp = [ C; CA; CA²; . . . ; CA^(p−1) ]    (ℓp × n)      (48)
    [  ⋮      ]
    [ y(p−1)  ]

The (ℓp × n) matrix Vp is called the observability matrix. It will be shown that, together with the controllability matrix we are about to derive, the observability matrix plays a crucial role in two ways: it facilitates the formulation of the minimal realization, and at the same time it becomes a factor of the Hankel matrix needed for realization computations. Now, if we apply a train of excitations, the internal state variable vector xr can be written as

    xr = A^r x0 + Wr [ u(r−1); u(r−2); u(r−3); . . . ; u0 ],
    Wr = [ B  AB  A²B  . . .  A^(r−1)B ]    (n × mr)                      (49)

where Wr is called the controllability matrix. In order to appreciate the roles of the observability and controllability matrices, let us ask two questions:

1) Under what condition can the initial state x0 ≠ 0 be uniquely reconstructed from a set of measured outputs ỹpᵀ = ( y0ᵀ, y1ᵀ, y2ᵀ, . . . , y(p−1)ᵀ ), if there were no excitation?

2) If the system was initially at rest, namely x0 = 0, what is the condition that ensures the initial zero state can be driven to a desired state xr when a train of inputs ũrᵀ = ( u(r−1)ᵀ, u(r−2)ᵀ, u(r−3)ᵀ, . . . , u0ᵀ ) is applied to the system?

In addressing the first question, we note that if the dimension of x0 is (n × 1), then the row vectors of the observability matrix Vp must span the whole n-dimensional space in order for x0 to be uniquely reconstructed from the output ỹp.

With regard to the second question, suppose that the rank of A is n. Then xr will consist of n distinct response components if and only if the column vectors of the controllability matrix Wr span the same n-dimensional space. It should be noted that, while the initial state x0 is uniquely reconstructable provided the row vectors of Vp span the n-dimensional space, there exists in general no unique set of input vectors that satisfies the controllability requirement. To see this, we observe, with x0 = 0:

    xr = Wr ũr,      xr (n × 1),  Wr (n × rm),  ũr (rm × 1)               (3.3a)

Since in general rm > n, ũr is given by

    ũr = Wr⁺ xr − Nw α                                                    (50)

where Nw is a null-space basis of Wr and α is an arbitrary vector. Therefore, ũr cannot be uniquely determined. On the other hand, as long as the row rank of Vp is n, one obtains

    x0 = [ Vpᵀ Vp ]⁻¹ Vpᵀ ỹp                                              (51)

To gain further insight into the properties of the controllability matrix Wr, consider a single-point excitation so that B becomes a column vector, say b. With Ψ⁻¹ A Ψ = Λ (n × n), the controllability matrix becomes:

    Wr = [ B  AB  . . .  A^(r−1)B ]
       = Ψ Ψ⁻¹ [ B  A Ψ Ψ⁻¹ B  . . .  A^(r−1) Ψ Ψ⁻¹ B ]
       = Ψ [ Ψ⁻¹B  Λ Ψ⁻¹B  . . .  Λ^(r−1) Ψ⁻¹B ]

           [ b1   λ1 b1   . . .   λ1^(r−1) b1 ]
           [ b2   λ2 b2   . . .   λ2^(r−1) b2 ]
       = Ψ [  ⋮      ⋮                ⋮       ],      [ b1, b2, . . . , bn ]ᵀ = Ψ⁻¹ B      (52)
           [ bn   λn bn   . . .   λn^(r−1) bn ]

Physically, bi = 0 corresponds to a nodal line of the i-th vibrational mode. Hence, if the excitation is applied at one of the vibrational nodal lines, that mode is not controllable, and it introduces a rank-one loss in controllability. Careful planning in the selection of excitation points is therefore mandatory in order to excite the desired set of modes of interest. Once again, as stated previously, the ramifications of the observability and controllability matrices in system realization are both their theoretical role in providing the minimal-order realization and their computational utility. These will be discussed below.
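Both matrices are straightforward to assemble, and a rank check exposes unobservable or uncontrollable modes. A sketch (hypothetical function names):

    import numpy as np

    def observability(A, C, p):
        """V_p = [C; CA; ...; CA^(p-1)], of size (l*p x n), as in (48)."""
        blocks, CAk = [], C.copy()
        for _ in range(p):
            blocks.append(CAk)
            CAk = CAk @ A
        return np.vstack(blocks)

    def controllability(A, B, r):
        """W_r = [B, AB, ..., A^(r-1)B], of size (n x m*r), as in (49)."""
        blocks, AkB = [], B.copy()
        for _ in range(r):
            blocks.append(AkB)
            AkB = A @ AkB
        return np.hstack(blocks)

    # e.g. np.linalg.matrix_rank(controllability(A, B, n)) < n signals an
    # excitation placed on a nodal line (an uncontrollable mode).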

The Hankel Matrix

The basic building block for a system realization is the Hankel matrix Hpr(0), defined as

    Hpr(0) (ℓp × rm) = Vp Wr = [ C; CA; CA²; . . . ; CA^(p−1) ] [ B  AB  A²B  . . .  A^(r−1)B ]
                               (ℓp × n)                         (n × rm)                          (53)

The above Hankel matrix is perhaps the most fundamental building block for system-theory-based identification. A key step is thus to recognize that Hpr(0) can be constructed in terms of the Markov parameters Y(k). To this end, we expand it to read:

    Hpr(0) = [ CB           CAB         CA²B        . . .   CA^(r−1)B   ]
             [ CAB          CA²B        CA³B        . . .   CA^(r)B     ]
             [ CA²B         CA³B        CA⁴B        . . .   CA^(r+1)B   ]
             [  ⋮             ⋮            ⋮                   ⋮         ]
             [ CA^(p−1)B    CA^(p)B     CA^(p+1)B   . . .   CA^(r+p−2)B ]

           = [ Y(1)     Y(2)      Y(3)      . . .   Y(r)      ]
             [ Y(2)     Y(3)      Y(4)      . . .   Y(r+1)    ]
             [ Y(3)     Y(4)      Y(5)      . . .   Y(r+2)    ]
             [  ⋮         ⋮          ⋮                ⋮        ]
             [ Y(p)     Y(p+1)    Y(p+2)    . . .   Y(p+r−1)  ]                       (54)

Note that (3.8a) consists of the system parameters (A, B, C) that are to be determined, whereas (3.8b) consists of the Markov parameters that can be extracted from the measured data. A generalization of the basic Hankel matrix Hpr(0), also needed in system realization, can be expressed as

    Hpr(k) = Vp A^k Wr

           = [ Y(k+1)     Y(k+2)      Y(k+3)      . . .   Y(k+r)      ]
             [ Y(k+2)     Y(k+3)      Y(k+4)      . . .   Y(k+r+1)    ]
             [ Y(k+3)     Y(k+4)      Y(k+5)      . . .   Y(k+r+2)    ]
             [  ⋮           ⋮            ⋮                   ⋮         ]
             [ Y(k+p)     Y(k+p+1)    Y(k+p+2)    . . .   Y(k+p+r−1)  ]               (55)

which shows that a Hankel matrix made of the Markov parameters can be factored in terms of the observability matrix, the system dynamics operator A, and the controllability matrix. This intrinsic factored form of Hankel matrices is exploited in the development of computational algorithms for system realization.
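Assembling Hpr(k) from measured Markov parameters is then a bookkeeping exercise. A sketch (hypothetical function name; Y[k] holds the (ℓ × m) array Y(k), so the list must contain at least k + p + r entries):

    import numpy as np

    def hankel_block(Y, p, r, k=0):
        """Block Hankel matrix H_pr(k) of (55): block (i, j) is
        Y(k + i + j + 1), i = 0..p-1, j = 0..r-1."""
        rows = [np.hstack([Y[k + i + j + 1] for j in range(r)])
                for i in range(p)]
        return np.vstack(rows)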

Minimal Realization

Suppose you are given a set of test data and asked to perform a system realization. The questions that come to every analyst's mind would be: how do I determine the order of the realization model? what modes should be included in the model? and how good would the realization model be? This section addresses the first of these questions.

During the early days of modal (not model!) testing, the lack of criteria for determining the model order and modal accuracy often led to realization models of different orders and varying modal components from the same test data, depending upon the skills and tools available to each analyst. For complex problems, therefore, a series of model syntheses was required before a consensus could be reached. The problem of minimal realization was perhaps first solved by Ho and Kalman, who in 1965 presented what is now known as the minimal realization theorem for noise-free data. It states that there exists a minimum size of the system operator A for a given pair of input and output. Thus, a realization whose size is less than the minimal realization order is deficient. On the other hand, a realization whose size is greater than the minimal realization order will contain a set of dependent (superfluous) internal state vector components. To see this, we recall the Laplace-transformed transfer function (2.22):

    H(s) = C (s I − A)⁻¹ B                                                (56)

We now state the system theory-based realization:

Statement of Realization: Given the transfer function (3.10), a minimal realization (A, B, C) consists of the smallest-size A matrix. This is possible only if the ranks of the observability and controllability matrices, Vp (ℓp × n) and Wr (n × rm), are n.

Theorem of Ho and Kalman: A realization { A (n × n), B (n × m), C (ℓ × n) } is minimal if and only if it is observable and controllable.

Hence, a nonobservable or noncontrollable realization cannot be minimal. Suppose a noncontrollable realization is { Ā (n̄ × n̄), B̄ (n̄ × m), C̄ (ℓ × n̄) }, which satisfies the same transfer function H(s), but subject to

    n̄ < n                                                                 (57)

This means that { A (n × n), B (n × m), C (ℓ × n) } cannot be a minimal realization.

First, if the rank of W̄r is less than n, then one can perform a similarity transformation T such that

    Â = T⁻¹ Ā T = [ Âc (n̂c × n̂c)         Â12 (n̂c × (n−n̂c))      ]
                  [ 0 ((n−n̂c) × n̂c)      Â22 ((n−n̂c) × (n−n̂c))  ]

    B̂ = [ B̂c ]         Ĉ = [ Ĉc  Ĉ2 ]
        [ 0  ],                                                           (58)

Second, if the rank of V̄p is less than n, then, employing a step similar to that used in deriving the above noncontrollable case, we obtain

    Â = [ Âo (n̂o × n̂o)         0 (n̂o × (n−n̂o))        ]
        [ Â21 ((n−n̂o) × n̂o)    Â22 ((n−n̂o) × (n−n̂o))  ]

    B̂ = [ B̂o ]         Ĉ = [ Ĉo  0 ]
        [ B̂2 ],                                                           (59)

However, since the two realizations come from the same transfer function H(s), one must have

    H(s) = Ĉc (s Îc − Âc)⁻¹ B̂c = Ĉo (s Îo − Âo)⁻¹ B̂o = C (s I − A)⁻¹ B    (60)

It is noted that a noncontrollable (but observable) system (Ā, B̄, C̄) becomes both controllable and observable for a reduced-size system n̄c ≤ n. Likewise, a nonobservable system becomes observable and controllable for a reduced-size system n̄o ≤ n. Therefore, if (Ā, B̄, C̄) is controllable and observable, one must have

    n̄c = n̄o = n                                                           (61)

This proves the Theorem of Minimal Realization. An important practical consequence of the minimal realization theorem is that, if the data are free of noise, the smallest-size A matrices of all possible realizations must be of the same size, and they are related by similarity transformations.

A remaining question: what if the rank of the observability matrix is different (smaller or larger) from that of the controllability matrix? It is precisely for this reason that (3.12) and (3.13) were derived. For example, if n̂c < n̂o, then one must employ the reduction given by (3.12). When n̂o < n̂c, one must employ (3.13). To conclude, the system theory-based realization offers a fundamental basis for a minimal realization, at least for noise-free data.

An Example: Kalman's Problem

Let us consider a single-input/single-output sequence given by

    y = { 1, 1, 1, 2, 1, 3, 2, . . . },      u = { 1, 0, 0, 0, 0, 0, 0, . . . }

and find a minimal realization.

Step 1: Realization

Since the input is a unit impulse, the output becomes the Markov parameters. Hence, we can construct a series of Hankel matrices as follows:

    H22(1) = [ 1  1 ]     H33(0) = [ 1  1  1 ]     H44(0) = [ 1  1  1  2 ]
             [ 1  2 ]              [ 1  1  2 ]              [ 1  1  2  1 ]
                                   [ 1  2  1 ]              [ 1  2  1  3 ]
                                                            [ 2  1  3  2 ]

where H22 is taken from { 1, 1, 2, 1, . . . } instead of from the beginning of the series. Since |H22| ≠ 0, |H33| ≠ 0, and |H44| = 0, we conclude that the rank of the system is 3. In other words, the ranks of the observability and controllability matrices are three. A singular value decomposition of H33(0) yields:

    H33(0) = P S Qᵀ
           = [ 0.4597   0.0000  −0.8881 ] [ 3.7321  0.0  0.0    ] [  0.4597  0.6280   0.6280 ]
             [ 0.6280  −0.7071   0.3251 ] [ 0.0     1.0  0.0    ] [  0.0000  0.7071  −0.7071 ]
             [ 0.6280   0.7071   0.3251 ] [ 0.0     0.0  0.2679 ] [ −0.8881  0.3251   0.3251 ]

Now, if H33(0) = V3 W3 = [ C; CA; CA² ] [ B  AB  A²B ], then

    H33(1) = [ 1  1  2 ]
             [ 1  2  1 ]
             [ 2  1  3 ]

must possess the form H33(1) = V3 A W3 from (3.9). Since V3 = P S^(1/2) and W3 = S^(1/2) Qᵀ, A can be determined from

    A1 = V3⁻¹ H33(1) W3⁻¹ = S^(−1/2) Pᵀ H33(1) Q S^(−1/2)
       = [  1.2603  −0.3981  −0.2041 ]
         [  0.3981  −1.5000  −0.7691 ]
         [ −0.2041   0.7692  −0.7605 ]

and finally B1 and C1 are obtained from:

    B1 = S^(1/2) Qᵀ { 1, 0, 0 }ᵀ = { 0.8881, 0.0, −0.4597 }ᵀ
    C1 = [ 1  0  0 ] P S^(1/2) = [ 0.8881  0.0  −0.4597 ]

A check with the above realization shows that, with x0ᵀ = [ 0  0  0 ], it reconstructs the output.

Step 2: Uniqueness of Realization

The question is whether the preceding realization is the only possible one. There is a second way of realizing the above sequence. Instead of the singular value decomposition employed above, one can invoke the well-known LU decomposition, since H33(0) is symmetric:

    H33(0) = L U = [ 1  0  0 ] [ 1  1  1 ]
                   [ 1  1  0 ] [ 0  0  1 ]
                   [ 1  0  1 ] [ 0  1  0 ]

so that S = I. Following a parallel derivation, we obtain

    A2 = [ 1   1   0 ]      B2 = { 1, 0, 0 }ᵀ      C2 = [ 1  0  0 ]
         [ 0  −1   1 ]
         [ 1   0  −1 ]

One can verify that this second realization, although the sequences of x are completely different, gives the same output y.

Step 3: Eigenvalues and Similarity Transformation

The eigenvalues of the two A matrices, A1 and A2, are the same, as they should be:

    Λ = diag( 1.2056,  −1.1028 + 0.6655 j,  −1.1028 − 0.6655 j )

Of course, their bases are different; they are related by a similarity transformation φ1 = T φ2, so that the two realizations are related according to

    (A1, B1, C1) = (T⁻¹ A2 T, T⁻¹ B2, C2 T)
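Step 1 can be reproduced in a few lines of Python. numpy's SVD may return sign conventions different from the printed P and Q, but the resulting triple is then simply another equivalent realization (a sketch):

    import numpy as np

    y = np.array([1., 1., 1., 2., 1., 3., 2.])   # impulse response = Markov parameters
    H0 = np.array([[y[i + j] for j in range(3)] for i in range(3)])      # H33(0)
    H1 = np.array([[y[i + j + 1] for j in range(3)] for i in range(3)])  # H33(1)

    P, s, Qt = np.linalg.svd(H0)
    Sh, Shi = np.diag(np.sqrt(s)), np.diag(1.0 / np.sqrt(s))

    A1 = Shi @ P.T @ H1 @ Qt.T @ Shi             # A = V3^-1 H33(1) W3^-1
    B1 = (Sh @ Qt)[:, :1]                        # first column of W3
    C1 = (P @ Sh)[:1, :]                         # first row of V3

    x = B1.copy()                                # check: y[k] = C1 A1^k B1
    for k in range(len(y)):
        assert np.isclose((C1 @ x).item(), y[k])
        x = A1 @ x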

Eigensystem Realization Algorithms

The minimal realization theory of Ho and Kalman is valid for noise-free data. In practice there are two major factors that influence realization computations: incompleteness of the measured data, and data contaminated with noise. Incompleteness of the measured data implies that the collected data are either insufficient or may not satisfy the periodicity requirement (2.43) or the asymptotic property (2.44). One possible way to deal with incomplete and noisy measured data is to perform overdetermined matrix computations, which necessarily involve singular value decompositions. In the structural engineering community it was Juang and Pappa who in 1984 extended the Ho-Kalman algorithm to make it suitable for handling structural test data. A key idea used in Juang and Pappa's eigensystem realization algorithm (ERA) is to utilize the properties of the singular-value-decomposed form of the overdetermined Hankel matrix. A singular value decomposition of a rectangular matrix H̄pr(0) (ℓp × rm) can be expressed as

    H̄pr(0) = P̄p Spr Q̄rᵀ,      Spr = [ Snn  0nz ]
                                    [ 0zn  εzz ]                          (62)

where the entries of ε are close to zero and can be ignored, unless some of them represent physical rigid-body modes of the structure.

It should be mentioned that the task of truncating the near-zero singular values is not as straightforward in practice as it is on paper. This is because the singular values change continuously in most large-scale test data, and the lowest structural ones and the near-zero ones become blurred, especially for very flexible or very large structures. Nevertheless, it is this truncation concept that constitutes a central part of Juang and Pappa's ERA technique. Once the truncation is decided, we partition P̄p and Q̄rᵀ as follows:

    P̄p = [ Ppn (1:ℓp, 1:n)   Ppz (1:ℓp, 1:z) ],      Q̄rᵀ = [ Qrnᵀ (1:n, 1:rm) ]
                                                           [ Qrzᵀ (1:z, 1:rm) ]      (63)

Hence, the truncated form of the Hankel matrix becomes

    Hpr(0) ≈ Ppn Snn Qrnᵀ = Vp Wr                                         (64)

from which the observability and controllability matrices are obtained as follows:

    Vp = Ppn Snn^(1/2),      Wr = Snn^(1/2) Qrnᵀ                          (65)

Computation of A

Now, in order to compute the system operator A, we recall from (3.9)

    Hpr(1) = Vp A Wr                                                      (66)

The second computational step of the ERA technique employs the generalized inverses of the observability and controllability matrices:

    Vp⁺ = Snn^(−1/2) Ppnᵀ,      Wr⁺ = Qrn Snn^(−1/2)                      (67)

with the identities

    Ppnᵀ Ppn = Qrnᵀ Qrn = I (1:n, 1:n)                                    (68)

Notice that the generalized inverses Vp⁺ and Wr⁺ satisfy

    Vp⁺ Vp Vp⁺ = Vp⁺,      Vp Vp⁺ Vp = Vp
    Wr⁺ Wr Wr⁺ = Wr⁺,      Wr Wr⁺ Wr = Wr                                 (69)

To obtain A we perform the computational operations:

    Vp⁺ Hpr(1) Wr⁺ = Snn^(−1/2) Ppnᵀ ( Ppn Snn^(1/2) A Snn^(1/2) Qrnᵀ ) Qrn Snn^(−1/2) = A

    ⇓

    A = Vp⁺ Hpr(1) Wr⁺                                                    (70)

Computations of B and C

To extract B and C, we notice that the observability and controllability matrices are given by

    Vp = [ C; CA; CA²; . . . ; CA^(p−1) ] (1:ℓp, 1:n) = Ppn (1:ℓp, 1:n) Snn^(1/2)
    Wr = [ B  AB  A²B  . . .  A^(r−1)B ] (1:n, 1:rm) = Snn^(1/2) Qrnᵀ (1:n, 1:rm)     (71)

Therefore, C can be obtained by extracting the first (1:ℓ, 1:n) block entries from Vp. Similarly, B can be obtained by extracting the first (1:n, 1:m) entries of Wr from (3.19):

    B = Snn^(1/2) Qrnᵀ (1:n, 1:m)
    C = Ppn (1:ℓ, 1:n) Snn^(1/2)                                          (72)

So far we have described the basic form of the eigensystem realization algorithm. Experience with the basic ERA indicates that it often has to deal with nonsquare Hankel matrices, which can encounter numerical robustness difficulties. In addition, the accuracy of the mode shapes contained in C or B leaves much to be improved. One improvement over the basic ERA is to work with a symmetrized form of the Hankel matrices. This is discussed below.
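The basic ERA steps (62)-(72) condense to a short routine. A sketch (hypothetical function name; Y is the list of Markov parameters with Y[k] = Y(k) of shape (ℓ × m), and the order n is chosen after inspecting the singular values):

    import numpy as np

    def era(Y, p, r, n):
        """Basic ERA: realize (A, B, C) of order n from Markov
        parameters Y(1), Y(2), ... (Y must hold p + r + 1 entries)."""
        H0 = np.vstack([np.hstack([Y[i + j + 1] for j in range(r)])
                        for i in range(p)])
        H1 = np.vstack([np.hstack([Y[i + j + 2] for j in range(r)])
                        for i in range(p)])
        P, s, Qt = np.linalg.svd(H0, full_matrices=False)
        Pn, sn, Qtn = P[:, :n], np.sqrt(s[:n]), Qt[:n, :]
        A = (Pn / sn).T @ H1 @ (Qtn.T / sn)      # eq. (70)
        l, m = Y[1].shape
        B = ((Qtn.T * sn).T)[:, :m]              # W_r(1:n, 1:m), eq. (72)
        C = (Pn * sn)[:l, :]                     # V_p(1:l, 1:n), eq. (72)
        return A, B, C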

Realizations Based on Products of Hankel Matrices

Instead of performing realizations with nonsquare Hankel matrices, one can work with symmetric overdetermined matrices. The two symmetrized overdetermined matrices most often used can be constructed as

    H Hᵀ = H(0) Hᵀ(0) = (Pp Snn Qrᵀ)(Qr Snn Ppᵀ) = Pp Snn² Ppᵀ
    Hᵀ H = Hᵀ(0) H(0) = (Qr Snn Ppᵀ)(Pp Snn Qrᵀ) = Qr Snn² Qrᵀ            (73)

where the singular value decomposition (3.18) and the identities (3.22) have been used. The above two matrices, H Hᵀ and Hᵀ H, can be expressed in terms of the observability and controllability matrices (3.19) as

    H Hᵀ (ℓp × ℓp) = Vp (ℓp × n) Snn (n × n) Vpᵀ (n × ℓp)                 (74)

    Hᵀ H (rm × rm) = Wrᵀ (rm × n) Snn (n × n) Wr (n × rm)                 (75)

Realizations based on the preceding symmetrized matrices are known by various names: ERA with data correlation (ERA/DC), proposed by Juang and co-workers; the Principal Hankel Component Algorithm, by Kung et al.; and the Q-Markov Algorithm, by Skelton and his co-workers, among others. A major advantage of using the symmetrized matrices, in addition to their computational advantages in carrying out singular value decompositions, is their inherent noise filtering characteristics. This can be seen by examining the structure of the matrix, e.g., H Hᵀ:

    H Hᵀ = [ Y(1)    Y(2)     . . .   Y(r)      ] [ Yᵀ(1)   Yᵀ(2)    . . .   Yᵀ(p)      ]
           [ Y(2)    Y(3)     . . .   Y(r+1)    ] [ Yᵀ(2)   Yᵀ(3)    . . .   Yᵀ(p+1)    ]
           [  ⋮        ⋮                ⋮       ] [  ⋮        ⋮                ⋮        ]
           [ Y(p)    Y(p+1)   . . .   Y(p+r−1)  ] [ Yᵀ(r)   Yᵀ(r+1)  . . .   Yᵀ(p+r−1)  ]      (76)

It is seen from the above expression that H Hᵀ consists of correlation functions of the Markov parameters. Therefore, if the noise present in the Markov parameters is uncorrelated, H Hᵀ should filter out the noise much more effectively than performing realizations with H(0) alone. Specifically, since H Hᵀ works on the Markov parameter product form Y(k) Yᵀ(k) (1:ℓ, 1:ℓ), one may conjecture that it would filter the output noise more effectively than the input noise. On the other hand, since Hᵀ H works on the form Yᵀ(k) Y(k) (1:m, 1:m), it would filter the input noise more effectively than the output noise. Apart from the possible filtering effectiveness, experience indicates that either form captures the eigenvalues competitively, as does using H(0) directly, as discussed in the previous section. Thus, as an accurate determination of the system mode shapes is more critical, and the mode shapes are typically extracted from C, one should evaluate which matrix would yield a more accurate C. However, as the number of sensors ℓ is typically much larger than the number of actuators m, H Hᵀ-based realizations would be computationally more intensive than those based on Hᵀ H.

We now present two realizations following closely the so-called FastERA implementation proposed by Peterson.

Realizations based on H Hᵀ

A singular value decomposition of H Hᵀ = H(0) Hᵀ(0) yields

    H Hᵀ = Vp Snn Vpᵀ                                                     (77)

It should be cautioned that one should not simply reuse the factors of a singular value decomposition of H(0) obtained via (3.18) and take the expressions (3.27) and (3.28) literally. Noting that the computed Vp is related to the analytical expression of the observability matrix by

    Vp = [ C; CA; CA²; . . . ; CA^(p−1) ] (1:ℓp, 1:n)                     (78)

we obtain two shifted operators from Vp:

    V0(p−1) = [ C; CA; CA²; . . . ; CA^(p−2) ]      (ℓ(p−1) × n)
    V1(p−1) = [ CA; CA²; CA³; . . . ; CA^(p−1) ]    (ℓ(p−1) × n)          (79)

Note that V0(p−1) and V1(p−1) are related via

    V0(p−1) A = V1(p−1)                                                   (80)

from which one obtains A as

    A = [ V0(p−1) ]⁺ V1(p−1)                                              (81)

To obtain B, we utilize the following identity:

    V0(p−1) B = [ C; CA; CA²; . . . ; CA^(p−2) ] B = [ Y(1); Y(2); Y(3); . . . ; Y(p−1) ]      (82)

so that B is given by

    B = [ V0(p−1) ]⁺ [ Y(1); Y(2); Y(3); . . . ; Y(p−1) ]                 (83)

Finally, C is obtained from

    C = V0(p−1) (1:ℓ, 1:n)                                                (84)

as in the case of the ERA derivation.

Realizations based on Hᵀ H

For this case, we obtain the following shifted operators from the controllability matrix Wr:

    W0(r−1) = [ B  AB  A²B  . . .  A^(r−2)B ]      (1:n, 1:(r−1)m)
    W1(r−1) = [ AB  A²B  A³B  . . .  A^(r−1)B ]    (1:n, 1:(r−1)m)        (85)

Notice that W0(r−1) and W1(r−1) are related by

    A W0(r−1) = W1(r−1)                                                   (86)

from which one can obtain

    A = W1(r−1) [ W0(r−1) ]⁺                                              (87)

The B matrix is just the first m columns of W0(r−1):

    B = W0(r−1) (1:n, 1:m)                                                (88)

Finally, C can be extracted from the following identity:

    C W0(r−1) = [ Y(1)  Y(2)  Y(3)  . . .  Y(r−1) ]

    ⇓

    C = [ Y(1)  Y(2)  Y(3)  . . .  Y(r−1) ] [ W0(r−1) ]⁺    (1:ℓ, 1:n)    (89)

From the computational accuracy viewpoint, we conjecture that (3.37) would offer a more accurate solution for B, whereas (3.43b) would offer a better solution for C. Confirming or disproving this conjecture may form one of the term projects.

Continuous Realization Model from Discrete Model

The realizations (Ā, B̄, C) obtained in the preceding two sections are valid for the discrete state space model (3.1). While they are adequate for discrete-event dynamics, it will prove more convenient to deduce the continuous state space model (2.10). The second-order structural dynamics models (2.1) or (2.3) can then be obtained from the continuous state space model. In this section we focus on the derivation of a modal-form continuous state space equation. We defer the transformation of the modal-form state space equation into the second-order structural equations to the next chapter.

First, we transform the discrete system matrix Ā into its continuous counterpart A as follows. Using the relation

    Ψ⁻¹ Ā Ψ = Λ̄ = e^(Λ Δt)      ⇒      Ψ⁻¹ A Ψ = Λ

we obtain the continuous system eigenvalues from

    Λ = ln Λ̄ / Δt = diag{ σi ± j ωi, i = 1, . . . , n }                   (90)

where σi and ωi are the real and imaginary parts of the continuous state-space system (CSS) characteristic roots. It should be noted that, although the real roots represent the structural damping, the imaginary roots are not the same as the natural frequencies, as will become clear in Chapter 4.

Second, we recall the discrete operator B̄ (2.37b) for the zero-order hold case, given by

    B̄ = (I − Ā) B                                                         (91)

from which we have

    Ψ⁻¹ B̄ = (I − Ψ⁻¹ Ā Ψ) Ψ⁻¹ B                                           (92)

Therefore, the modal excitation matrix Bz can be written as

    Bz = Ψ⁻¹ B = (I − Ψ⁻¹ Ā Ψ)⁻¹ Ψ⁻¹ B̄ = (I − Λ̄)⁻¹ Ψ⁻¹ B̄                  (93)

Third, in order to obtain the modal measurement matrix Cz, we recall the output matrix relation (2.27c):

    C = [ Sd  0 ] + [ Sv  0 ] A + [ Sa  0 ] A²                            (94)

In practice, the output data from each sensor type are treated separately. This means the modal output matrix differs according to sensor type:

    Cz = Sd Ψ       for displacement sensing
    Cz = Sv Ψ Λ     for velocity sensing
    Cz = Sa Ψ Λ²    for acceleration sensing
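In code, the conversion of (90) is a matrix logarithm followed by an eigenanalysis; note that the matrix logarithm is only unique when the sampling is fast enough that no modal frequency is aliased. A sketch using scipy (hypothetical function name):

    import numpy as np
    from scipy.linalg import logm, eig

    def discrete_to_continuous(Abar, dt):
        """A = ln(Abar)/dt per (90); returns the continuous state matrix
        and its eigenvalues sigma_i +/- j*omega_i with eigenvectors Psi."""
        A = logm(Abar) / dt
        lam, Psi = eig(A)
        return A, lam.real, lam.imag, Psi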
