SPECTRAL APPROXIMATION OF TIME WINDOWS IN THE SOLUTION OF DISSIPATIVE LINEAR DIFFERENTIAL EQUATIONS

K. BURRAGE∗, Z. JACKIEWICZ†, AND B. D. WELFERT‡

Abstract. We establish a relation between the length T of the integration window of a linear differential equation x′ + Ax = b and a spectral parameter s∗. This parameter is determined by comparing the exact solution x(T) at the end of the integration window to the solution of a linear system obtained from the Laplace transform of the differential equation by freezing the system matrix. We propose a method to integrate the relation s∗ = s∗(T) into the determination of the interval of rapid convergence of waveform relaxation iterations. The method is illustrated with a few numerical examples.

Key words. Linear differential systems, time window, spectral approximation, waveform relaxation.

AMS(MOS) subject classifications. 65L05.

1. Introduction. Consider the differential system

(1.1) x′(t) + Ax(t) = b(t),  t ∈ [0, T],  x(0) = 0,

with solution

(1.2) x(t) = ∫_0^t e^{−(t−s)A} b(s) ds.

The matrix A ∈ C^{n×n} in (1.1) is constant and assumed to be positive definite, with eigenvalues λ_i of positive real part. The length T > 0 of the integration window is assumed to be fixed. Our strategy for determining s∗ = s∗(T) is based on the minimization of the norm ‖x(T) − y(T)‖_2 of the difference between the solution x(T) of (1.1) at the end of the interval of integration and the solution y(T) of (1.5), which depends on s∗; i.e., we seek s∗ such that

(1.9) ‖x(T) − y(T)‖_2 → min.
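To make the target (1.9) concrete, the following sketch locates the minimizer numerically for a small test problem. The matrix A, the forcing b(t), the window length T, and the search bounds are illustrative choices rather than data from the paper, and y(T) is evaluated in the frozen-coefficient form y(T) = (s∗I + A)^{−1} b(T) used later in Section 2.

# Minimal sketch of (1.9): minimize ||x(T) - y(T)||_2 over s* for a small
# dissipative test system.  A, b(t), T, and the search interval are
# illustrative choices, not data from the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

n, T = 6, 0.5
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)            # symmetric positive definite test matrix
b = lambda t: np.sin(t) * np.ones(n)   # smooth forcing with b(t) ~ t b'(0) near 0

# x(T): integrate x' + Ax = b, x(0) = 0, with tight tolerances.
sol = solve_ivp(lambda t, x: b(t) - A @ x, (0.0, T), np.zeros(n),
                rtol=1e-10, atol=1e-12)
xT = sol.y[:, -1]

# y(T; s) = (s I + A)^{-1} b(T): frozen-coefficient approximation of x(T).
def err(s):
    return np.linalg.norm(xT - np.linalg.solve(s * np.eye(n) + A, b(T)))

s_opt = minimize_scalar(err, bounds=(1e-8, 1e3), method="bounded").x
# Section 2 predicts s* close to (k+1)/T when b(t) ~ t^k b near t = 0 (here k = 1).
print(f"s* = {s_opt:.4f},  (k+1)/T = {2.0/T:.4f}")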

We start in Section 2 by considering specific right-hand sides b(t) and provide a detailed analysis under the assumption that A is hermitian. Section 3 deals with general right-hand sides. We show that the solution of the minimization problem (1.9) satisfies a nonlinear equation which is well-posed and guaranteed to have a solution for small enough time windows, and we suggest a method to integrate the computation of s∗(t), and the resulting window of fast convergence of waveform relaxation iterations, for each 0 < t ≤ T into the ODE solution process. We illustrate the results numerically in Section 4 using an example arising from a discretization of a one-dimensional boundary value problem for the convection-diffusion equation via the method of lines. The relationship between s∗ and T derived in this paper puts the determination of the size of the window of rapid convergence of waveform relaxation iterations on a solid theoretical and practical footing, and it is, in our opinion, the most serious attempt to date to address this problem of great practical importance.

2. Minimization with particular right-hand side. In this section we consider specific right-hand side functions of the form b(t) = f(t)b with b ∈ C^n, b ≠ 0, and f(t) a scalar function. We focus on two choices for f(t): monomials t^k with k ≥ 0 integer and (combinations of) exponentials e^{at}.

2.1. Monomial right-hand side. The choice b(t) = t^k b with k ≥ 0 integer and b ∈ C^n, b ≠ 0, is driven by the observation that b(t) ≃ t^k b^{(k)}(0)/k! for some k ≥ 0 is a reasonable approximation of b(t) on small windows, provided b(t) is analytic around t = 0. We shall denote by R_{k,0}(z) and R_{k,1}(z) the (k,0)- and (k,1)-Padé approximations of e^z at z = 0, respectively [8, p. 48]. It is easy to verify (e.g. by induction) that the solution (1.2) of (1.1) at t = T can then be expressed as

(2.1) x(T) = ∫_0^T e^{−(T−s)A} s^k b ds = (T/(k+1)) ϕ_k(−TA) b(T)

with ϕ_k defined by

(2.2) ϕ_k(z) = (k+1)! z^{−(k+1)} (e^z − R_{k,0}(z))

for z ≠ 0 and ϕ_k(0) = 1 by continuity. The functions ϕ_k(z), k = 0, …, 4, are shown in Fig. 1 and play an important role in the remainder of this section.

FIG. 1. Functions ϕ_k(z), 0 ≤ k ≤ 4, for −20 ≤ z ≤ 0.

Our first theorem shows that if s∗ = (k+1)/T the quantity y(T) is exactly equal to the approximation of x(T) obtained by replacing e^{−TA} in the expression of ϕ_k(−TA) in (2.1) by a (k,1)-Padé approximation. This can for example be the result of solving (1.1) using an appropriate Runge-Kutta or linear multistep method.

THEOREM 2.1. Assume that b(t) = t^k b, b ≠ 0, and let s∗ = (k+1)/T. Then

(2.3) y(T) = (T/(k+1)) ψ_k(−TA) b(T)

with

(2.4) ψ_k(z) = (k+1)! z^{−(k+1)} (R_{k,1}(z) − R_{k,0}(z)).

Proof. From (2.4), Lemma A.1 (see Appendix), and (1.5) the right-hand side of (2.3) reduces to

RHS = (T/(k+1)) (k+1)! (−TA)^{−(k+1)} ( ((−TA)^{k+1}/(k+1)!) (I + TA/(k+1))^{−1} ) b(T)
    = (T/(k+1)) (I + TA/(k+1))^{−1} b(T)
    = ((k+1)/T · I + A)^{−1} b(T) = y(T),

provided s∗ = (k+1)/T.
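To see numerically how closely the Padé-based ψ_k tracks ϕ_k on the negative real axis (cf. Fig. 1), one can evaluate both on a grid. The sketch below is an illustration only: it uses the Taylor form ϕ_k(z) = (k+1)! Σ_{j≥0} z^j/(j+k+1)!, which follows directly from (2.2), the closed form ψ_k(z) = (1 − z/(k+1))^{−1} implicit in the proof of Theorem 2.1, and an ad hoc truncation length.

# Sketch: compare phi_k from (2.2) with the Pade-based psi_k from (2.4) on
# -20 <= z <= 0.  The 80-term Taylor sum is an ad hoc truncation, accurate to
# roughly single precision at the left end of this range.
import numpy as np

def phi(k, z, terms=80):
    """phi_k(z) = (k+1)! * sum_{j>=0} z^j/(j+k+1)!  (Taylor form of (2.2))."""
    total, term = 0.0, 1.0
    for j in range(terms):
        total += term
        term *= z / (k + j + 2)        # term_{j+1} = term_j * z/(k+j+2)
    return total

def psi(k, z):
    """psi_k(z) = (1 - z/(k+1))^{-1}, the (k,1)-Pade counterpart."""
    return 1.0 / (1.0 - z / (k + 1.0))

zs = np.linspace(-20.0, 0.0, 201)
for k in range(5):
    diff = max(abs(phi(k, z) - psi(k, z)) for z in zs)
    print(f"k = {k}:  max |phi_k - psi_k| on [-20, 0] = {diff:.4f}")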


Theorem 2.1 shows that the choice s∗ = (k+1)/T is numerically acceptable when b(t) = t^k b. The following result then compares y(T) to the exact solution (2.1) of (1.1) rather than to an approximation. The quantity µ_2(A) represents the logarithmic norm of A with respect to the norm ‖·‖_2 [5, pp. 18–19]. The positive definiteness of A and of its hermitian part A_H = (A + A^H)/2 implies in particular that

(2.5) µ_2(−A) < 0.

By Lemma B.1(c) we have ϕ_k(z) = (k+1)! ∫_Ω e^{t_k z} dt_k ⋯ dt_0, where Ω is the region 0 ≤ t_k ≤ ⋯ ≤ t_0 ≤ 1 of R^{k+1}. Therefore

‖ϕ′_k(−TA)‖_2 = (k+1)! ‖ ∫_Ω t_k e^{−t_k TA} dt_k ⋯ dt_0 ‖_2
              ≤ (k+1)! ∫_Ω t_k ‖e^{−t_k TA}‖_2 dt_k ⋯ dt_0
              ≤ (k+1)! ∫_Ω t_k e^{t_k T µ_2(−A)} dt_k ⋯ dt_0
              = ϕ′_k(T µ_2(−A)).

The last inequality follows from the fact that the logarithmic norm µ_2(A) provides the optimal exponential bound exp(µ_2(A)t) for ‖exp(At)‖_2; see [5, p. 18]. The bound (2.6) then follows from

‖x(T) − y(T)‖_2 ≤ (T ‖A‖_2 / (k+1)) ϕ′_k(T µ_2(−A)) ‖y(T)‖_2,

which completes the proof.


The function φ_k(z) introduced in Theorem 2.2 is negative for z < 0. We can also verify using Lemma B.1(a) and (b) that

φ_k(z) = (z/(k+1)) ϕ′_k(z) = (z/(k+1)) ( (1 − (k+1)/z) ϕ_k(z) + (k+1)/z )
       = (z/(k+1)) ϕ_k(z) − ϕ_k(z) + 1 = ϕ_{k−1}(z) − ϕ_k(z)

for k ≥ 0. Since

ϕ_k(z) = 1 + z/(k+2) + O(z²)  as z → 0

and

ϕ_k(z) = −(k+1)/z + O(1/z²)  as z → −∞,

it follows that

φ_k(z) ≃ z/((k+1)(k+2))  as z → 0

and

φ_k(z) ≃ 1/z  as z → −∞.

Hence, the solution y(T) of (1.5) is a first order approximation of x(T) on short and long time windows, i.e.,

‖x(T) − y(T)‖_2 / ‖y(T)‖_2 = O(T) as T → 0,  and  O(T^{−1}) as T → ∞.

The hermitian case. A finer analysis can be carried out when A is hermitian and positive definite. We denote by A = UΛU^H the Schur decomposition of A with Λ = diag(λ_i)_{1≤i≤n}, 0 < λ_n ≤ ⋯ ≤ λ_1, and U a unitary matrix. From (2.1) and (1.5) we obtain

‖x(T) − y(T)‖_2² = ‖ (T/(k+1)) ϕ_k(−TA) b(T) − (s∗I + A)^{−1} b(T) ‖_2²
                 = ‖ (T/(k+1)) ϕ_k(−TΛ) c − (s∗I + Λ)^{−1} c ‖_2²
                 = Σ_{i=1}^n | (T/(k+1)) ϕ_k(−Tλ_i) − 1/(s∗ + λ_i) |² |c_i|²

with c = U^H b(T).

FIG. 2. Functions φ_k(z) from Theorem 2.2, 0 ≤ k ≤ 4, for z ≤ 0.

THEOREM 2.3. Assume that A is hermitian, positive definite, and that T > 0. For each k ≥ 0 there exists a value µ_k with 0 < λ_n ≤ µ_k ≤ λ_1 and such that ‖x(T) − y(T)‖_2 corresponding to b(t) = t^k b, b ≠ 0, considered as a function of s ≥ 0, is minimal for

(2.7) s∗_k = (ϕ_{k−1}(−Tµ_k)/ϕ_k(−Tµ_k)) (k+1)/T,

with the convention that ϕ_{−1}(z) ≡ e^z. Moreover, the following interlacing property holds:

(2.8) 0 < k/T ≤ s∗_k ≤ (k+1)/T ≤ s∗_{k+1},  k ≥ 0.
Proof. For s ≥ 0 and µ > 0 define the functions f(µ, s) = (T/(k+1)) ϕ_k(−Tµ) − 1/(s+µ) and F(s) = Σ_{i=1}^n |f(λ_i, s)|² |c_i|². The function F is defined and continuously differentiable for all s ≥ 0. From Lemma B.1(a) we have

f(µ, 0) = (T/(k+1)) ϕ_k(−Tµ) − 1/µ = −(−Tµ ϕ_k(−Tµ) + k + 1)/((k+1)µ) = −ϕ_{k−1}(−Tµ)/µ < 0.


Consequently, the function F′(s) = 2 Σ_{i=1}^n ( f(λ_i, s)/(s+λ_i)² ) |c_i|² admits at least one zero s = s∗_k > 0. Using Lemma B.1(a) again, the relation F′(s∗_k) = 0 can be written as

Σ_{i=1}^n ( (T(s∗_k + λ_i) ϕ_k(−Tλ_i) − k − 1) / ((k+1)(s∗_k + λ_i)³) ) |c_i|²
  = Σ_{i=1}^n α_i ( s∗_k − (ϕ_{k−1}(−Tλ_i)/ϕ_k(−Tλ_i)) (k+1)/T ) = 0

with α_i = T ϕ_k(−Tλ_i) |c_i|² / ((k+1)(s∗_k + λ_i)³) ≥ 0. Note that α_i > 0 if c_i ≠ 0, so that b ≠ 0 implies c ≠ 0 and Σ_{i=1}^n α_i > 0. By Lemma B.1(d) the function ϕ_{k−1}(z)/ϕ_k(z) is non-decreasing for z < 0. Therefore s∗_k satisfies

(2.9) (ϕ_{k−1}(−Tλ_1)/ϕ_k(−Tλ_1)) (k+1)/T ≤ s∗_k = Σ_{i=1}^n (α_i/Σ_{j=1}^n α_j) (ϕ_{k−1}(−Tλ_i)/ϕ_k(−Tλ_i)) (k+1)/T ≤ (ϕ_{k−1}(−Tλ_n)/ϕ_k(−Tλ_n)) (k+1)/T,

so that (2.7) holds for some λ_n ≤ µ_k ≤ λ_1. The fact that F(s) reaches its minimum at s∗_k follows from

F′(0) = 2 Σ_{i=1}^n ( (Tλ_i ϕ_k(−Tλ_i) − k − 1) / ((k+1)λ_i³) ) |c_i|² = −2 Σ_{i=1}^n ( ϕ_{k−1}(−Tλ_i)/λ_i³ ) |c_i|² < 0

and F′(∞) = 0⁺. Finally, the interlacing property (2.8) is a consequence of

ϕ_{k−1}(−∞)/ϕ_k(−∞) = k/(k+1)  and  ϕ_{k−1}(0)/ϕ_k(0) = 1.

For k = 0 we obtain

s∗_0 = (ϕ_{−1}(−Tµ_0)/ϕ_0(−Tµ_0)) (1/T) = µ_0/(e^{Tµ_0} − 1) ≤ (ϕ_{−1}(−Tλ_n)/ϕ_0(−Tλ_n)) (1/T) = λ_n/(e^{Tλ_n} − 1).

This completes the proof.

If c_i = c_j for all i and j, then it is easy to verify that the weights α_i in the proof of Theorem 2.3 satisfy 0 ≤ α_1 ≤ ⋯ ≤ α_n, so that, in such a case, µ_k ≃ λ_n. For small time windows T the value s∗_k given by (2.7) becomes

(2.10) s∗_k ≃ (k+1)/T

for k ≥ 0. On the other hand, for large time windows T (e.g. Tλ_n ≫ 1) we obtain

(2.11) s∗_k ≃ k/T

for k > 0 and

(2.12) s∗_0 ≃ λ_n e^{−Tλ_n}.
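The bracket k/T ≤ s∗_k ≤ (k+1)/T implied by (2.7)–(2.9), as well as the small/large-T behaviour (2.10)–(2.11), is easy to confirm numerically. The sketch below does so for a small symmetric positive definite example; the matrix, the vector b, and the window lengths are arbitrary test choices, and x(T) is obtained by one-dimensional quadrature in the eigenbasis rather than through ϕ_k.

# Sketch: for b(t) = t^k b and a small SPD test matrix, minimize
# ||x(T) - y(T)||_2 over s and check k/T <= s*_k <= (k+1)/T (cf. (2.7)-(2.9))
# together with the small/large-T behaviour (2.10)-(2.11).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)            # hermitian (real symmetric) positive definite
lam, Q = np.linalg.eigh(A)
b = rng.standard_normal(n)

def s_star(k, T):
    # x(T) = Q diag( int_0^T e^{-(T-s)lam_i} s^k ds ) Q^T b   (from (1.2))
    g = np.array([quad(lambda s: np.exp(-(T - s) * li) * s**k, 0.0, T, limit=200)[0]
                  for li in lam])
    xT = Q @ (g * (Q.T @ b))
    bT = T**k * b
    err = lambda s: np.linalg.norm(xT - np.linalg.solve(s * np.eye(n) + A, bT))
    return minimize_scalar(err, bounds=(1e-10, 1e4), method="bounded").x

for k in (0, 1, 2):
    for T in (0.01, 1.0, 20.0):
        s = s_star(k, T)
        print(f"k={k}  T={T:6.2f}:  s*_k = {s:10.4g}   bracket [{k/T:.4g}, {(k+1)/T:.4g}]")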

REMARK 2.1. If c_{p+1} = ⋯ = c_n = 0 for some 0 < p < n (i.e., c = U^H b(T) is ℓ²-orthogonal to the n − p eigenvectors of A associated with the smallest eigenvalues) then α_{p+1} = ⋯ = α_n = 0 and s∗_0 ≃ λ_p e^{−Tλ_p} instead for large T. Such a situation may occur in particular when b is randomly chosen. Indeed, such a vector is typically “oscillatory” and likely to be orthogonal to the “smooth” eigenvectors of the matrix A associated with the smallest eigenvalues. Note however that round-off errors in the numerical determination of the vector c may prevent c_n from vanishing exactly and may still yield, for larger values of T and in the case k = 0, α_n ≫ α_{n−1} if c_n ≈ c_{n−1}.

REMARK 2.2. Other strategies for finding an optimal s∗ may be used. For example it is possible, for a given window [0, T], to minimize the “average error” ‖∫_0^T (x(t) − y(t)) dt‖. If x_k and y_k denote the solutions of (1.1) and (1.5) when b(t) = t^k b, respectively, note however that

∫_0^T (x_k(t) − y_k(t)) dt = ∫_0^T ( ∫_0^t e^{−(t−s)A} s^k b ds − (s∗I + A)^{−1} t^k b ) dt
 = ∫_0^T ( (1/(k+1)) ( [e^{−(t−s)A} s^{k+1} b]_{s=0}^{s=t} − A ∫_0^t e^{−(t−s)A} s^{k+1} b ds ) − (s∗I + A)^{−1} t^k b ) dt
 = (1/(k+1)) ( ∫_0^T (t^{k+1} b − A x_{k+1}(t)) dt − (s∗I + A)^{−1} T^{k+1} b )
 = (1/(k+1)) ( ∫_0^T x′_{k+1}(t) dt − y_{k+1}(T) )
 = (1/(k+1)) (x_{k+1}(T) − y_{k+1}(T)),

i.e., the optimization will lead to s∗ = s∗_{k+1} rather than s∗ = s∗_k.

2.2. Exponential right-hand side. The choice of exponential right-hand sides b(t) = e^{at} b with a ∈ C is essentially guided by the fact that many physical processes are driven by exponentially growing forcing terms. We shall restrict ourselves to values of a such that the matrix A + aI remains positive definite, i.e., in particular Re(a) + λ_n > 0 in the hermitian case. From (1.2) we obtain

(2.13) x(T) = ∫_0^T e^{−(T−s)A} e^{as} b ds = ∫_0^T e^{−(T−s)(A+aI)} ds b(T) = T ϕ_0(−T(A + aI)) b(T),

which is exactly the solution (2.1) of (1.1) with k = 0 and A replaced by A + aI. The expression (2.13) is now to be compared to

y(T) = (s∗I + A)^{−1} b(T) = ((s∗ − a)I + A + aI)^{−1} b(T).


From Section 2.1 the optimal choice of s∗ satisfies, in the hermitian case and with a ∈ R (so that A + aI is also hermitian),

(2.14) s∗ − a = (ϕ_{−1}(−Tµ)/ϕ_0(−Tµ)) (1/T) = µ/(e^{Tµ} − 1)

for some a + λ_n ≤ µ ≤ a + λ_1. On short time windows µ/(e^{Tµ} − 1) ≃ 1/T, so that s∗ ≃ a + 1/T, while s∗ → a exponentially fast as T increases, provided a > 0.

We now consider a right-hand side b(t) = sin(ωt) b = (1/(2i)) (e^{iωt} − e^{−iωt}) b, where b ≠ 0 is a constant vector (e.g. a Fourier mode for general functions b(t)). The solution of (1.1) at t = T is now

x(T) = (T/(2i)) ( ϕ_0(−T(A + iωI)) e^{iωT} − ϕ_0(−T(A − iωI)) e^{−iωT} ) b
     = (1/(2i)) ( (A + iωI)^{−1}(e^{iωT} I − e^{−TA}) − (A − iωI)^{−1}(e^{−iωT} I − e^{−TA}) ) b
     = (1/(2i)) (A² + ω²I)^{−1} ( (A − iωI)(e^{iωT} I − e^{−TA}) − (A + iωI)(e^{−iωT} I − e^{−TA}) ) b
     = (A² + ω²I)^{−1} ( sin(ωT) A − ω cos(ωT) I + ω e^{−TA} ) b,

while the solution y(T) of (1.5) is

y(T) = sin(ωT) (s∗I + A)^{−1} b.

Note that the optimization process to determine the optimal value of s∗ breaks down when ωT = π, since y(T) then becomes independent of s∗. For values T ≪ π/ω the approximation b(t) ≃ ωt b holds and we expect the optimal choice of s∗ to follow the recommendations from Section 2.1 with k = 1.

3. Minimization with general right-hand side. We now consider a general right-hand side b(t) and derive a nonlinear equation for the optimal value s∗ obtained in the case ‖·‖ = ‖·‖_2. Let A_{s∗} = s∗I + A. We write

(3.1) x(T) − y(T) = A_{s∗}^{−1} (s∗ x(T) + A x(T) − b(T))
(3.2)             = A_{s∗}^{−1} (s∗ x(T) − x′(T)).

Since x(T) does not depend on s∗ we also have

d/ds∗ (x(T) − y(T)) = −(d/ds∗) A_{s∗}^{−1} b(T) = A_{s∗}^{−2} b(T).

Therefore

(3.3) d/ds∗ ‖x(T) − y(T)‖_2² = 2 Re( (x(T) − y(T))^H (d/ds∗)(x(T) − y(T)) )
                             = 2 Re( (s∗ x(T) − x′(T))^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) )


vanishes for

(3.4) s∗ = Re( x′(T)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) / Re( x(T)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ).

The expression (3.4) defines s∗ as the solution of a nonlinear equation. The following result shows that for sufficiently small windows this equation admits at least one solution s∗ > 0.

THEOREM 3.1. Assume that b(t) is continuous on an interval [0, T^+] for some T^+ > 0. Then there exists an interval [0, T^−] with 0 < T^− ≤ T^+ such that the equation (3.4) is well-posed and admits a solution s∗ > 0 for all 0 < T ≤ T^−.

Proof. From (1.1) we have x′(0) = b(0). We first assume that b(0) ≠ 0 and show that both numerator and denominator on the right-hand side of (3.4) are positive for any s∗ > 0 provided T > 0 is small enough. By the continuity of x′ on [0, T^+] and the positive definiteness of A_{s∗} and A_{s∗}^{−1} for any s∗ > 0 there exists 0 < T_1 ≤ T^+ such that

Re( x′(t)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) = Re( (A_{s∗}^{−1} x′(t))^H A_{s∗}^{−1} (A_{s∗}^{−1} b(T)) ) > 0

for all t such that 0 ≤ t ≤ T ≤ T_1. In particular Re( x′(T)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) > 0 and Re( x(T)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) = ∫_0^T Re( x′(t)^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) dt > 0 for all T ∈ (0, T_1] and s∗ > 0.

We next show that the minimum of ‖x(T) − y(T)‖_2 is a solution of (3.4). Similarly as (3.3) we obtain

d/ds∗ ‖x(0) − y(0)‖_2² |_{s∗=0} = −2 Re( x′(0)^H A_0^{−H} A_0^{−2} b(0) ) = −2 Re( (A^{−1} b(0))^H A^{−1} (A^{−1} b(0)) ) < 0

because A and A^{−1} are positive definite. By continuity there exists T_2 > 0 such that (d/ds∗) ‖x(T) − y(T)‖_2² |_{s∗=0} < 0 for all T ∈ [0, T_2].

Since x(0) = 0, x′(0) ≠ 0 and x′ is continuous on [0, T^+], there also exists 0 < T_3 ≤ T^+ such that (d/dT) ‖x(T)‖_2² > 0 for all T ∈ (0, T_3]. Therefore

Re( x(T)^H b(T) ) = (1/2) (d/dT) ‖x(T)‖_2² + Re( x(T)^H A x(T) ) > 0

for all T ∈ (0, T_3]. Since A_{s∗} ≃ s∗I as s∗ → ∞ we obtain

d/ds∗ ‖x(T) − y(T)‖_2² ≃ 2 Re( x(T)^H b(T) ) / (s∗)² > 0  as s∗ → ∞

for all T ∈ (0, T_3]. This shows that for any T such that 0 < T ≤ T^− = min(T_1, T_2, T_3) the quantity ‖x(T) − y(T)‖_2 reaches its minimum at a critical point 0 < s∗ < ∞ which satisfies (3.4).


The result can also be shown to hold in the case b(0) = 0 using a continuity argument based on b(t) ≠ 0 for any t > 0 sufficiently small. Observe that the equation (3.4) can be interpreted as a weak form of the condition y(T) − x(T) = 0. Indeed, we obtain from (1.5), (3.1) and (3.4)

(3.5) Re( (y(T) − x(T))^H A_{s∗}^{−1} y(T) ) = −Re( (s∗ x(T) − x′(T))^H A_{s∗}^{−H} A_{s∗}^{−2} b(T) ) = 0.

On long time windows the right-hand side of (3.4) may vanish or become negative, in particular when b(T) itself vanishes, as in the case of a sinusoidal right-hand side (see Section 2.2). In this case y(T) vanishes as well and (3.5) can no longer be used to determine s∗.

3.1. Short time window estimate. For right-hand sides b(t) of the form b(t) = f(t) b, where f(t) is a continuous scalar function and b ∈ R^n, b ≠ 0, we obtain from (1.2)

x(T) = ∫_0^T (I + O(T − t)) f(t) b dt = ( ∫_0^T f(t) dt ) (1 + O(T)) b

and x′(T) = f(T) b − A x(T) = f(T) (1 + o(1)) b for small time windows. Then (3.4) reduces to

(3.6) s∗ ≃ f(T) / ∫_0^T f(t) dt

for small T, independently of A (although the time interval on which (3.6) remains a good approximation does depend in general on A). In particular for f(t) = t^α with α > 0 we obtain the estimate s∗ ≃ (α+1)/T, which agrees with the results of Section 2 for integer values of α. For f(t) = e^{at} we also obtain s∗ ≃ a e^{aT}/(e^{aT} − 1), which is identical to (2.14) with µ = a. The choice f(t) = sin(ωt) yields s∗ ≃ ω cot(ωT/2).
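As a quick consistency check, the right-hand side of (3.6) can be evaluated numerically and compared with the three closed forms just quoted; the window length and the parameters α, a, ω below are arbitrary test values.

# Sketch: the short-window estimate (3.6), s* ~ f(T)/int_0^T f(t)dt, versus the
# closed forms quoted in Section 3.1 for f(t) = t^alpha, e^{a t}, sin(omega t).
import numpy as np
from scipy.integrate import quad

T = 0.05                                   # a short test window
cases = {
    "t^alpha, alpha=1.5": (lambda t: t**1.5,        (1.5 + 1) / T),
    "exp(a t), a=2":      (lambda t: np.exp(2 * t), 2 * np.exp(2 * T) / (np.exp(2 * T) - 1)),
    "sin(w t), w=3":      (lambda t: np.sin(3 * t), 3 / np.tan(3 * T / 2)),
}
for name, (f, closed) in cases.items():
    s_est = f(T) / quad(f, 0.0, T)[0]      # right-hand side of (3.6)
    print(f"{name:20s}  (3.6): {s_est:10.4f}   closed form: {closed:10.4f}")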

3.2. Numerical solution of (3.4). From a practical point of view s∗ can be efficiently computed from (3.4) for increasing values of T once x(T) and x′(T) have been determined (for example using an adaptive solver), until the right-hand side of (3.4) fails to remain positive. The nonlinear equation (3.4) is of the form s∗ = F(s∗). In all our numerical tests a Picard iteration s∗_{p,m} = F(s∗_{p,m−1}) for m > 0, starting with the solution of (3.4) obtained at the previous time step T (and with a predicted s∗_{1,0} = 1/h_0 for the first step), was successfully used to solve (3.4) at time T = h_0 + ⋯ + h_{p−1}, see Algorithm 1.

Algorithm 1: Adaptive determination of s∗ = s∗(T) for T = Σ_{p=0}^{q−1} h_p.


x_0 = 0, h_0 > 0 given, t_0 = 0, s∗_{1,predicted} = 1/h_0
for p = 1, 2, …, q
    compute h_{p−1} and x_p ≃ x(t_{p−1} + h_{p−1}) using an (adaptive) ODE solver
    t_p = t_{p−1} + h_{p−1}
    x′_p = b(t_p) − A x_p
    s∗_{p,0} = s∗_{p,predicted}
    for m = 1, 2, …, M
        d = A_{s∗_{p,m−1}}^{−H} A_{s∗_{p,m−1}}^{−2} b(t_p)
        s∗_{p,m} = Re( (x′_p)^H d ) / Re( x_p^H d )
    end
    s∗_{p+1,predicted} = s∗_{p,M}
end
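A compact Python transcription of this iteration might look as follows. It is a sketch only: the step size is kept fixed, the dense ODE solve and the number M of Picard sweeps are simplifying choices, and the matrix A and forcing b(t) are illustrative test data rather than anything prescribed by the paper.

# Sketch of Algorithm 1: track s*(T) along the integration by Picard iteration
# on (3.4).  Fixed steps, a dense ODE solve per step, and M inner iterations
# are simplifying choices; A and b(t) are illustrative test data.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n, h0, q, M = 6, 0.05, 40, 5
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)                    # dissipative test matrix
b = lambda t: np.cos(t) * np.ones(n)

def F(s, x, xp, bt):
    """Right-hand side of (3.4) evaluated at the current iterate s."""
    As = s * np.eye(n) + A
    d = np.linalg.solve(As.conj().T, np.linalg.solve(As, np.linalg.solve(As, bt)))
    return np.real(xp.conj() @ d) / np.real(x.conj() @ d)

x, t, s_pred = np.zeros(n), 0.0, 1.0 / h0
for p in range(1, q + 1):
    sol = solve_ivp(lambda t_, x_: b(t_) - A @ x_, (t, t + h0), x,
                    rtol=1e-9, atol=1e-11)
    t, x = t + h0, sol.y[:, -1]
    xp = b(t) - A @ x                          # x'(t_p) from the ODE itself
    s = s_pred
    for _ in range(M):                         # Picard iteration s <- F(s)
        s = F(s, x, xp, b(t))
    s_pred = s                                 # predictor for the next step
    if p % 10 == 0:
        print(f"T = {t:5.2f}   s*(T) = {s:8.4f}")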

LEMMA B.1. The functions ϕ_k defined by (2.2), with the convention ϕ_{−1}(z) = e^z, satisfy

(a) ϕ_k(z) = ((k+1)/z) (ϕ_{k−1}(z) − 1) for z ≠ 0 and k ≥ 0;

(b) ϕ′_k(z) = (1 − (k+1)/z) ϕ_k(z) + (k+1)/z for z ≠ 0 and k ≥ 0;

(c) ϕ_k(z) = (k+1)! ∫_{Ω_k} e^{t_k z} dt_k ⋯ dt_0 for z ∈ C and k ≥ 0, where Ω_k = {(t_k, t_{k−1}, …, t_0) : 0 ≤ t_k ≤ ⋯ ≤ t_0 ≤ 1} ⊂ R^{k+1};

(d) (ϕ_{k−1}(z)/ϕ_k(z))′ ≥ 0 for z ≤ 0 and k ≥ 0.
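Before turning to the proof, the representation (c) admits a quick probabilistic sanity check: since Ω_k has volume 1/(k+1)!, it can be read as ϕ_k(z) = E[e^{z U_min}] with U_min the minimum of k+1 independent uniform variables on [0,1]. The Monte Carlo sketch below (sample size chosen arbitrarily) compares this reading with the definition (2.2).

# Sketch: Monte Carlo check of Lemma B.1(c).  Reading (c) as
# phi_k(z) = E[exp(z * min(U_1,...,U_{k+1}))] with U_i iid uniform on [0,1],
# compare against the definition (2.2).  Sample size is an arbitrary choice.
import numpy as np
from math import factorial

def phi_def(k, z):
    """phi_k(z) from (2.2): (k+1)! z^{-(k+1)} (e^z - sum_{j<=k} z^j/j!)."""
    partial = sum(z**j / factorial(j) for j in range(k + 1))
    return factorial(k + 1) * (np.exp(z) - partial) / z**(k + 1)

rng = np.random.default_rng(3)
N = 200_000
for k in (0, 2, 4):
    for z in (-10.0, -1.0):
        u_min = rng.random((N, k + 1)).min(axis=1)
        mc = np.exp(z * u_min).mean()
        print(f"k={k} z={z:5.1f}:  definition {phi_def(k, z):.4f}   Monte Carlo {mc:.4f}")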

Proof. A direct calculation yields

ϕ_k(z) = (k+1)! z^{−(k+1)} ( e^z − R_{k−1,0}(z) − z^k/k! ) = ((k+1)/z) (ϕ_{k−1}(z) − 1)


for z ≠ 0 and k > 0. It is easy to check that (a) also holds for k = 0 with the given definition of ϕ_{−1}. The relation (b) can be shown for example by using (a):

ϕ′_k(z) = (k+1)! ( z^{−(k+1)} (e^z − R_{k−1,0}(z)) − (k+1) z^{−(k+2)} (e^z − R_{k,0}(z)) )
        = ((k+1)/z) (ϕ_{k−1}(z) − ϕ_k(z)) = ((k+1)/z) ( (z/(k+1)) ϕ_k(z) + 1 − ϕ_k(z) )
        = (1 − (k+1)/z) ϕ_k(z) + (k+1)/z.

The integral form (c) of ϕ_k(z) can easily be checked for k = 0 and is proved for k > 0 by induction and (a). Since ∫_{Ω_{k−1}} dt_{k−1} ⋯ dt_0 = 1/k! we obtain for z ≠ 0

ϕ_k(z) = ((k+1)/z) ( k! ∫_{Ω_{k−1}} e^{t_{k−1} z} dt_{k−1} ⋯ dt_0 − 1 )
       = (k+1)! ∫_{Ω_{k−1}} ( (e^{t_{k−1} z} − 1)/z ) dt_{k−1} ⋯ dt_0
       = (k+1)! ∫_{Ω_{k−1}} ( ∫_0^{t_{k−1}} e^{t_k z} dt_k ) dt_{k−1} ⋯ dt_0
       = (k+1)! ∫_{Ω_k} e^{t_k z} dt_k ⋯ dt_0.

Since ϕ_k(0) = 1 = (k+1)! ∫_{Ω_k} dt_k ⋯ dt_0 the result holds also for z = 0. The formula (c) implies that ϕ_k(z) > 0 for z ∈ R. Moreover, the p-th derivative of ϕ_k becomes

ϕ_k^{(p)}(z) = (k+1)! ∫_{Ω_k} t_k^p e^{t_k z} dt_k ⋯ dt_0 > 0

for z ∈ R. For functions f, g : Ω_k → R define the scalar inner product

⟨f, g⟩ = ∫_{Ω_k} f(t_k, …, t_0) g(t_k, …, t_0) dt_k ⋯ dt_0.

Applying the Cauchy–Schwarz inequality to the functions f(t_k, …, t_0) = e^{t_k z/2}, g(t_k, …, t_0) = t_k e^{t_k z/2} we obtain

(ϕ′_k(z))² / ((k+1)!)² = ⟨f, g⟩² = ( ∫_{Ω_k} t_k e^{t_k z} dt_k ⋯ dt_0 )²
                       ≤ ∫_{Ω_k} e^{t_k z} dt_k ⋯ dt_0 ∫_{Ω_k} t_k² e^{t_k z} dt_k ⋯ dt_0 = ϕ_k(z) ϕ″_k(z) / ((k+1)!)².


This leads to

(ϕ′_k(z))² ≤ ϕ_k(z) ϕ″_k(z)

for z ∈ R. Hence,

(ϕ′_k(z)/ϕ_k(z))′ = ( ϕ″_k(z) ϕ_k(z) − (ϕ′_k(z))² ) / ϕ_k²(z) ≥ 0

and, as a result, the function ϕ′_k(z)/ϕ_k(z) is non-decreasing on R for all k ≥ 0. On the other hand (a) and (b) yield

ϕ′_{k−1}(z) = ϕ_{k−1}(z) − (k/z) (ϕ_{k−1}(z) − 1) = ϕ_{k−1}(z) − (k/(k+1)) ϕ_k(z)

for z ≠ 0 and k > 0. Consequently, the function

ϕ′_{k−1}(z)/ϕ_{k−1}(z) = 1 − (k/(k+1)) ϕ_k(z)/ϕ_{k−1}(z)

is non-decreasing, so that ϕ_k(z)/ϕ_{k−1}(z) is non-increasing, i.e., (ϕ_k(z)/ϕ_{k−1}(z))′ ≤ 0 and (ϕ_{k−1}(z)/ϕ_k(z))′ ≥ 0 for z ≤ 0. The property (d) can easily be verified in the case k = 0.

C. Symbols of lower triangular Toeplitz matrices. The symbol of a lower triangular Toeplitz matrix

R = toeplitz(r_0, …, r_{n−1}) ∈ R^{n×n},

i.e., the lower triangular matrix with constant diagonals whose first column is (r_0, r_1, …, r_{n−1})^T, is defined as the (polynomial) function

σ_R(z) = Σ_{j=0}^{n−1} r_j z^j = (0, …, 0, 1) R (z^{n−1}, …, z, 1)^T

(see [13] for example).

LEMMA C.1. Let T_1 and T_2 = toeplitz(β_0, …, β_{n−1}) be two n × n lower triangular Toeplitz matrices. Then

(C.1) σ_{T_2 T_1}(z) = σ_{T_2}(z) σ_{T_1}(z) + O(z^n)

for |z| < 1. Moreover, if β_0 ≠ 0 and if Σ_{j=0}^{n−1} |β_j| can be bounded independently of n, then

(C.2) σ_{T_2^{−1} T_1}(z) = σ_{T_1}(z)/σ_{T_2}(z) + O(z^n).

Proof. The product T_2 T_1 is again a lower triangular Toeplitz matrix. The relation (C.1) is a discrete convolution formula which is easy to verify by multiplying out the symbols of the two matrices and comparing the result with the symbol of the product of the two matrices. The second relation (C.2) can be derived from (C.1) as

σ_{T_1}(z) = σ_{T_2 (T_2^{−1} T_1)}(z) = σ_{T_2}(z) σ_{T_2^{−1} T_1}(z) + O(z^n)

and the fact that |σ_{T_2}(z)| ≤ Σ_{j=0}^{n−1} |β_j| = O(1) for |z| < 1 independently of n.

Note that the condition in Lemma C.1 that Σ_{j=0}^{n−1} |β_j| be bounded independently of the dimension n of T_2 is in particular satisfied if T_2 has a fixed bandwidth.

REFERENCES

[1] K. Burrage, Z. Jackiewicz, S. P. Nørsett and R. Renaut, Preconditioning waveform relaxation iterations for differential systems, BIT, 36: 54–76, 1996.
[2] K. Burrage, Z. Jackiewicz and R. Renaut, Waveform relaxation techniques for pseudospectral methods, Numer. Methods Partial Differential Equations, 12: 245–263, 1996.
[3] K. Burrage, G. Hertono, Z. Jackiewicz and B. D. Welfert, Acceleration of convergence of static and dynamic iterations, BIT, 41: 645–655, 2001.
[4] C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, New York, Berlin, Heidelberg, 1988.
[5] K. Dekker and J. G. Verwer, Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations, North-Holland, Amsterdam, 1984.
[6] D. Gottlieb and J. S. Hesthaven, Spectral methods for hyperbolic problems, J. Comput. Appl. Math., 128: 83–131, 2001.
[7] E. Hairer, S. P. Nørsett and G. Wanner, Solving Ordinary Differential Equations I, Nonstiff Problems, Springer Series in Computational Math., 2nd rev. ed., Springer-Verlag, New York, 1993.
[8] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II, Stiff and Differential-Algebraic Problems, Springer Series in Computational Math., 2nd rev. ed., Springer-Verlag, New York, 1996.
[9] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, John Wiley & Sons, Inc., New York, 1962.
[10] Z. Jackiewicz, B. Owren and B. Welfert, Pseudospectra of waveform relaxation operators, Comput. Math. Appl., 36: 67–85, 1998.
[11] B. Leimkuhler, Estimating waveform relaxation convergence, SIAM J. Sci. Comput., 14: 872–889, 1993.
[12] E. Lelarasmee, The waveform relaxation methods for the time domain analysis of large scale nonlinear dynamical systems, Ph.D. Thesis, University of California, Berkeley, California, 1982.
[13] L. Reichel and L. N. Trefethen, Eigenvalues and pseudo-eigenvalues of Toeplitz matrices, Linear Algebra Appl., 162: 153–185, 1992.
[14] L. N. Trefethen, Pseudospectra of linear operators, SIAM Rev., 39: 383–406, 1997.
