The Analysis of Multigrid Algorithms for Pseudo-Differential Operators of Order Minus One

James H. Bramble, Zbigniew Leyk, and Joseph E. Pasciak

February 1992

Abstract. Multigrid algorithms are developed to solve the discrete systems approximating the solutions of operator equations involving pseudo-differential operators of order minus one. Classical multigrid theory deals with the case of differential operators of positive order. The pseudo-differential operator gives rise to a coercive form on $H^{-1/2}(\Omega)$. Effective multigrid algorithms are developed for this problem. These algorithms are novel in that they use the inner product on $H^{-1}(\Omega)$ as a base inner product for the multigrid development. We show that the resulting rate of iterative convergence can, at worst, depend linearly on the number of levels in these novel multigrid algorithms. In addition, it is shown that the convergence rate is independent of the number of levels (and unknowns) in the case of a pseudo-differential operator defined by a single layer potential. Finally, the results of numerical experiments illustrating the theory are presented.

1. Introduction.

The goal of this paper is to develop a technique for defining and analyzing multigrid algorithms for solving equations which involve discretizations of pseudo-differential operators of negative order. Standard multilevel methods most often apply to discretizations of differential operators of positive order (cf. [1]-[4], [7]-[9], [16], [17]). Let $\Omega$ be a polygonal domain in $R^2$. For nonnegative real $s$, let $H^s(\Omega)$ denote the Sobolev space of real valued functions with norm $\|\cdot\|_s$ (see [14]). In addition, we shall use Sobolev spaces of negative index. For the purpose of this paper, we shall define $H^{-1}(\Omega)$ to be the set of distributions for which the norm
$$\|v\|_{-1} = \sup_{\phi\in H^1(\Omega)} \frac{(v,\phi)}{\|\phi\|_1}$$
is finite. Here $(\cdot,\cdot)$ denotes the inner product in $L^2(\Omega)$. For $0 < s < 1$, the spaces $H^{-s}(\Omega)$ are defined by the real method (K-method) of interpolation [13] between $L^2(\Omega)$ and $H^{-1}(\Omega)$. These spaces are Hilbert spaces and we shall let $\langle\cdot,\cdot\rangle_{-s}$ be the corresponding inner product. As a canonical example of a pseudo-differential operator of negative order, we consider an operator which is defined in terms of a symmetric bilinear form $V(\cdot,\cdot)$ on $H^{-1/2}(\Omega)$. We will assume that this form satisfies the following coercivity and boundedness inequalities:
$$(1.1)\qquad C_0\|v\|_{-1/2}^2 \le V(v,v) \le C_1\|v\|_{-1/2}^2.$$

This manuscript has been authored under contract number DE-AC02-76CH00016 with the U.S. Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes. This work was also supported in part under National Science Foundation Grant No. DMS-9007185 and by the U.S. Army Research Office through the Mathematical Sciences Institute, Cornell University.

Here and in the remainder of this paper, $C$, with or without subscripts, denotes a generic positive constant which can take on different values in different places. These constants will always be independent of mesh sizes and the number of levels in subsequent multigrid schemes. Multigrid schemes will be developed in this paper for the efficient solution of the problem: Given a bounded linear functional $F$ on $H^{-1/2}(\Omega)$, find $U \in H^{-1/2}(\Omega)$ satisfying
$$(1.2)\qquad V(U,\phi) = F(\phi)\quad\text{for all } \phi\in H^{-1/2}(\Omega).$$
This problem has a unique solution by the Riesz Representation Theorem.

The basic philosophy of a multigrid/multilevel algorithm is that simple relaxation schemes can be used to reduce the high eigenvalue components of the errors while a coarser grid problem is used to reduce the smooth components. This works very well in the case of differential operators of positive order, since the high eigenvalue components of the differential operator correspond to highly oscillatory components of the error; thus all components are reduced either by smoothing or by correction. In contrast, in the case of pseudo-differential operators of negative order, the high eigenvalue components are smooth, and thus neither relaxation nor coarse grid correction reduces the oscillatory components. The solution to this problem is to use a base inner product which corresponds to a weaker norm than that induced by the form $V(\cdot,\cdot)$. We use the norm in $H^{-1}(\Omega)$. This effectively changes the relationship between eigenvalues and eigenvectors so that the eigenvectors with large eigenvalues correspond to highly oscillatory components in the eigenspace decomposition. Smoothers for the multigrid algorithm are developed in terms of discrete inner products which are equivalent to the inner product in $H^{-1}(\Omega)$ on the respective subspaces. The discrete inner products are defined in Section 3 in terms of a discretization of a second order problem.
The use of smoothers involving differences for this type of problem was suggested earlier in [12], although no supporting theory was included. There are three basic theories for providing estimates for V-cycle multigrid algorithms. The first is the so-called "regularity and approximation" theory and provides estimates as long as elliptic regularity results are available for the underlying operator $V$ [2], [3]. The second theory requires the weakest hypotheses but gives rise to estimates which depend on the number of levels in the multigrid algorithm [7]. The third theory does not require elliptic regularity and often leads to uniform estimates on the rate of iterative convergence [4]. We prove the conditions required for the application of the "no-regularity" theory of [7] in Section 4, provided that (1.1) holds. In Section 5, we restrict attention to the case when the form $V$ is given by the single layer potential
$$(1.3)\qquad V(u,v) = \int_\Omega\int_\Omega \frac{u(s_1)\,v(s_2)}{|s_1 - s_2|}\,ds_1\,ds_2.$$
We will show that (1.1) holds and hence [7] can be applied. We next apply [4] to develop uniform multigrid convergence estimates in the case when $\Omega$ is a polygonal domain in $R^2$. We note that integral operators of positive order are considered in [19]. Finally, the results of numerical experiments are given in Section 6. These results are in agreement with the theory of earlier sections.

2. The multigrid algorithm.

In this section, we describe the multigrid algorithm following [3]. We assume that we are given a nested sequence of finite dimensional inner product spaces
$$M_0 \subset M_1 \subset \cdots \subset M_j \subset H^{-1/2}(\Omega).$$
As described earlier, let $V(\cdot,\cdot)$ be a symmetric positive definite bilinear form on $M_j$ satisfying (1.1). We shall develop multigrid algorithms for computing the solution of the problem: Given a linear functional $F$ on $M_j$, find $u \in M_j$ satisfying
$$(2.1)\qquad V(u,\phi) = F(\phi)\quad\text{for all } \phi\in M_j.$$

We describe the multigrid algorithms in terms of certain operators. To this end, we first define $V_k : M_k \mapsto M_k$ by
$$(2.2)\qquad \langle V_k u, v\rangle_{-1} = V(u,v)\quad\text{for all } u, v\in M_k.$$
In addition, we define the projectors $\tilde P_k : H^{-1}(\Omega) \mapsto M_k$ and $P_k : H^{-1/2}(\Omega) \mapsto M_k$ by $\langle\tilde P_k w,\phi\rangle_{-1} = \langle w,\phi\rangle_{-1}$ and $V(P_k w,\phi) = V(w,\phi)$ for all $\phi\in M_k$. The final ingredient in a multigrid algorithm is a sequence of smoothing operators. These operators are defined in terms of additional discrete inner products on $M_k$, which will be denoted $[\cdot,\cdot]_k$, $k = 1,\dots,j$. The smoothing operator $R_k : M_k \mapsto M_k$ is then defined by
$$(2.3)\qquad [R_k w,\phi]_k = \lambda_k^{-1}\langle w,\phi\rangle_{-1}\quad\text{for all }\phi\in M_k.$$
Here $\lambda_k$ is an upper bound for the eigenvalue
$$\bar\lambda_k = \sup_{\phi\in M_k}\frac{V(\phi,\phi)}{[\phi,\phi]_k}.$$
The assumption on $\lambda_k$ implies that
$$V(R_k V_k\phi,\phi) \le \lambda_k^{1/2}\,[R_k V_k\phi, R_k V_k\phi]_k^{1/2}\,V(\phi,\phi)^{1/2} = V(R_k V_k\phi,\phi)^{1/2}\,V(\phi,\phi)^{1/2},$$
and hence
$$(2.4)\qquad V(R_k V_k\phi,\phi) \le V(\phi,\phi).$$
The inequality (2.4) shows that the smoothing operator is properly scaled [3]. In addition, $\lambda_k$ should be bounded by a fixed multiple of $\bar\lambda_k$. In standard applications, it is often more effective to use smoothers defined by variations of Gauss-Seidel iterative methods [5]. However, in the case of the integral equation application of this paper, Gauss-Seidel smoothing is inappropriate, whereas the smoother defined by (2.3) is both computable and theoretically justified. The multigrid algorithm is defined in terms of a sequence of operators $B_k : M_k \mapsto M_k$ which "approximately" invert $V_k$. The following algorithm provides the simplest V-cycle algorithm.


Algorithm 2.1. For $k = 0$, set $B_0 = V_0^{-1}$. For $k > 0$, $B_k$ is defined in terms of $B_{k-1}$ as follows: Let $g \in M_k$.

(1) Set $x_1 = R_k g$.

(2) Define $x_2 = x_1 + q$, where
$$(2.5)\qquad q = B_{k-1}\tilde P_{k-1}(g - V_k x_1).$$

(3) Finally, set $B_k g = x_2 + R_k(g - V_k x_2)$.

The multigrid algorithm is presented this way for theoretical purposes. Even though the use of the inner product on $H^{-1}(\Omega)$ is often not computationally feasible, it is possible to implement the above algorithm in practice provided that the inner products $[\cdot,\cdot]_k$ are appropriately defined. These inner products are defined in Section 3. They are critical from both the theoretical and implementation points of view. The concrete realization of the algorithm in terms of matrices is also given in Section 3. The first and last steps in the above algorithm correspond to smoothing. The second step is the coarser grid correction step. More general versions of this algorithm, involving increased smoothing on the various levels as well as more iterations in the coarser grid step, are defined in the usual way (cf. [1], [3], [11], [15]). Our theory extends to these algorithms as well (cf. [7]). Typical presentations of the multigrid algorithm directly give rise to an iterative process with a linear reducer. This linear reducer is equal to $I - B_j V_j$ (cf. [4]). Thus, the usual multigrid reduction process applied to the problem

$$V_j v = f$$
is equivalent to the simple linear preconditioned iterative scheme
$$v^{i+1} = v^i + B_j(f - V_j v^i),\qquad i = 0, 1,\dots,$$
with $B_j$ defined by Algorithm 2.1. Thus, our algorithm is the usual symmetric V-cycle multigrid algorithm described in a notation which is convenient for our analysis. Note that $B_k$ is clearly a linear operator for each $k$. Nonsymmetric cycling algorithms are defined by avoiding Steps 1 or 3 (cf. [3]). Both the symmetric and nonsymmetric versions are covered by the analysis to be presented. The symmetric operator $B_j$ given above can also be used as a preconditioner in, for example, the conjugate gradient algorithm.
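The preconditioned iteration above can be illustrated numerically. The sketch below (NumPy) runs $v^{i+1} = v^i + B(f - Vv^i)$ for a stand-in SPD matrix $V$ and a scaled-identity preconditioner $B = \lambda_{\max}(V)^{-1}I$, which mirrors the $\lambda_k^{-1}$ scaling of (2.3); the matrix, the preconditioner, and the name `solve_richardson` are illustrative assumptions, not the paper's operators.

```python
import numpy as np

def solve_richardson(V, B, f, v0, num_iters):
    """Run the linear iteration v <- v + B (f - V v) and return all iterates."""
    v = v0.copy()
    iterates = [v.copy()]
    for _ in range(num_iters):
        v = v + B @ (f - V @ v)
        iterates.append(v.copy())
    return iterates

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
V = A @ A.T + n * np.eye(n)                # stand-in SPD matrix, not the paper's V_j
B = np.eye(n) / np.linalg.eigvalsh(V)[-1]  # properly scaled preconditioner, cf. (2.4)
u_exact = rng.standard_normal(n)
f = V @ u_exact
iterates = solve_richardson(V, B, f, np.zeros(n), 40)
errs = [np.linalg.norm(u_exact - v) for v in iterates]
# With B = lambda_max^{-1} I, the reducer I - B V is symmetric with spectrum
# in [0, 1 - lambda_min/lambda_max), so the error contracts monotonically.
assert errs[-1] < 1e-2 * errs[0]
assert all(errs[i + 1] <= errs[i] + 1e-12 for i in range(len(errs) - 1))
print("error reduced by factor", errs[-1] / errs[0])
```

The multigrid operator $B_j$ of Algorithm 2.1 plays the role of `B` here; the point of the paper is that it yields a contraction factor bounded independently of (or only linearly dependent on) the number of levels.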

3. The discrete inner products.

We define in this section the discrete inner products $[\cdot,\cdot]_k$ used in the definition of the smoothing operators. As we shall see, there are two distinct cases, depending on whether or not the functions in $M_k$ are continuous. The discrete inner product will always be defined in terms of a difference operator $A_k : M_k \mapsto M_k$. We shall only consider multigrid algorithms for finite element approximations to (1.2) in this paper. Because the form $V(\cdot,\cdot)$ is so weak, the finite element approximation subspaces need only be in $H^{-1/2}(\Omega)$. However, for simplicity, we shall first consider the case when $M_k$ consists of continuous piecewise linear finite element functions. To this end, we start with a coarse triangulation $\{\tau_0^i\}$ of $\Omega$. Assuming that $\{\tau_{k-1}^i\}$ has been defined, the finer triangulation $\{\tau_k^i\}$ is defined by breaking each triangle in $\{\tau_{k-1}^i\}$ into four by connecting the midpoints of the sides. The space $M_k$, for $k = 0,\dots,j$, is defined to be the set of functions which are piecewise linear with respect to $\{\tau_k^i\}$ and continuous on $\Omega$.

To avoid the inversion of Gram matrices in the multigrid implementation, we next consider a diagonal inner product approximating the $L^2(\Omega)$ inner product on the subspace. Let $(\cdot,\cdot)_k$ be defined for $v\in M_k$ by
$$(v,v)_k = \frac13\sum_{\tau_k^i} |\tau_k^i|\,\big[v(x_{k,1}^i)^2 + v(x_{k,2}^i)^2 + v(x_{k,3}^i)^2\big].$$
Here $|\tau_k^i|$ denotes the area of $\tau_k^i$ and $x_{k,l}^i$, $l = 1,2,3$, denote the vertices of $\tau_k^i$. It is known that
$$(3.1)\qquad |(v,w)_k - (v,w)| \le Ch_k\|v\|\,\|w\|_1\quad\text{for all } v\in M_k,\ w\in M_k,$$
where $h_k$ ($h_k = 2^{-k}h_0$) is the mesh size of the $k$'th mesh. It is also immediate that
$$(3.2)\qquad c\,(v,v)_k \le (v,v) \le C\,(v,v)_k\quad\text{for all } v\in M_k.$$

The discrete form $[\cdot,\cdot]_k$ is defined in terms of the finite element discretization of a second order problem. Let
$$(3.3)\qquad A(v,w) = \int_\Omega \nabla v\cdot\nabla w + vw\,dx.$$
Clearly, $A(\cdot,\cdot)$ is symmetric and positive definite on $H^1(\Omega)$ and $\|u\|_1 = A(u,u)^{1/2}$. For $k = 0,\dots,j$, let the operators $A_k : M_k \mapsto M_k$ be defined by
$$(3.4)\qquad (A_k w,\phi)_k = A(w,\phi)\quad\text{for all } w,\phi\in M_k.$$
The operator $A_k$ is clearly symmetric (in both the $A(\cdot,\cdot)$ and $(\cdot,\cdot)_k$ inner products) and positive definite. In this case, the discrete inner product used in the multigrid algorithm is defined by
$$(3.5)\qquad [u,v]_k = (A_k^{-1}u, v)_k\quad\text{for all } u, v\in M_k.$$
As we shall see later, the implementation of the multigrid algorithm using this discrete inner product only requires the evaluation of the action of $A_k$ (not $A_k^{-1}$). The next lemma shows that the norm corresponding to this inner product is uniformly equivalent to the norm in $H^{-1}(\Omega)$.
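The vertex-lumped quadrature defining $(\cdot,\cdot)_k$ can be checked on a single triangle. The sketch below (NumPy; the helper names are ours, not the paper's) compares the lumped value $(|\tau|/3)\sum_l v_l^2$ with the exact $L^2$ norm of the linear function with those vertex values, whose mass matrix is $(|\tau|/12)[(2,1,1),(1,2,1),(1,1,2)]$; the spectral equivalence (3.2) holds here with explicit constants $1/4$ and $1$.

```python
import numpy as np

def lumped_inner(v, area):
    """Vertex-lumped (v, v)_k on one triangle with vertex values v and area |tau|."""
    return (area / 3.0) * np.dot(v, v)

def exact_inner(v, area):
    """Exact L2 inner product of the linear function with vertex values v."""
    M = (area / 12.0) * (np.eye(3) + np.ones((3, 3)))  # consistent mass matrix
    return v @ M @ v

rng = np.random.default_rng(1)
area = 0.37
for _ in range(100):
    v = rng.standard_normal(3)
    q_lump = lumped_inner(v, area)
    q_exact = exact_inner(v, area)
    # one-triangle version of (3.2): (1/4)(v,v)_k <= (v,v) <= (v,v)_k
    assert 0.25 * q_lump - 1e-12 <= q_exact <= q_lump + 1e-12
print("lumped and exact inner products are spectrally equivalent on a triangle")
```

Both quadratures agree exactly on constants, which is why replacing the Gram matrix by its lumped diagonal loses no consistency order here.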

Lemma 3.1. Let $[\cdot,\cdot]_k$ be defined by (3.5). There exist positive constants $C_0, C_1$ which are independent of $j$ and satisfy
$$(3.6)\qquad C_0\|v\|_{-1}^2 \le [v,v]_k \le C_1\|v\|_{-1}^2$$
for all $v\in M_k$ and $k = 0,\dots,j$.

Proof: Let $Q_k$ denote the $L^2(\Omega)$ orthogonal projector onto $M_k$. It is well known that (see, for example, [10])
$$(3.7)\qquad \|(I - Q_k)v\| \le Ch_k\|v\|_1,\qquad \|Q_k v\|_1 \le C\|v\|_1.$$
Moreover, it easily follows from (3.7) and the fact that $M_k\subset H^1(\Omega)$ that
$$(3.8)\qquad C_2\|v\|_{-1}^2 \le \sup_{\phi\in M_k}\frac{(v,\phi)^2}{A(\phi,\phi)} \le \|v\|_{-1}^2\qquad\text{for all } v\in M_k.$$
The constant $C_2$ is independent of $k$. We note that
$$(3.9)\qquad [v,v]_k = (A_k^{-1}v,v)_k = \sup_{\phi\in M_k}\frac{(v, A_k^{-1/2}\phi)_k^2}{(\phi,\phi)_k} = \sup_{\phi\in M_k}\frac{(v,\phi)_k^2}{A(\phi,\phi)}.$$
Using the well known inverse property,
$$A(\phi,\phi) \le ch_k^{-2}\|\phi\|^2\qquad\text{for all }\phi\in M_k,$$
and (3.8) implies
$$(3.10)\qquad \|v\|^2 = \sup_{\phi\in M_k}\frac{(v,\phi)^2}{\|\phi\|^2} \le Ch_k^{-2}\sup_{\phi\in M_k}\frac{(v,\phi)^2}{A(\phi,\phi)} \le Ch_k^{-2}\|v\|_{-1}^2.$$
The same argument and (3.2) give
$$(3.11)\qquad \|v\|^2 \le Ch_k^{-2}\sup_{\phi\in M_k}\frac{(v,\phi)_k^2}{A(\phi,\phi)}.$$
Applying (3.1) and (3.10) gives
$$(3.12)\qquad |(v,\phi)_k| \le |(v,\phi)_k - (v,\phi)| + |(v,\phi)| \le Ch_k\|v\|\,\|\phi\|_1 + \|v\|_{-1}\|\phi\|_1 \le C\|v\|_{-1}\|\phi\|_1.$$
Combining (3.9) and (3.12) proves the upper inequality of (3.6). For the lower inequality, because of (3.8), we need only show
$$(3.13)\qquad \sup_{\phi\in M_k}\frac{(v,\phi)^2}{A(\phi,\phi)} \le C\sup_{\phi\in M_k}\frac{(v,\phi)_k^2}{A(\phi,\phi)}.$$
Applying (3.1) gives
$$(3.14)\qquad |(v,\phi)| \le |(v,\phi) - (v,\phi)_k| + |(v,\phi)_k| \le Ch_k\|v\|\,\|\phi\|_1 + |(v,\phi)_k|.$$

Combining (3.11) and (3.14), (3.13) follows. This completes the proof of the lemma.

We next consider the case when the subspaces consist of discontinuous functions. In this case we assume for simplicity that $\Omega$ is a rectangle in $R^2$. We divide this rectangle into a rectangular mesh to define the partitioning $\{\tau_0^i\}$. Subsequently finer meshes are defined by mathematical induction. Given $\{\tau_{k-1}^i\}$, the rectangles of $\{\tau_k^i\}$ are defined by breaking each rectangle in $\{\tau_{k-1}^i\}$ into four congruent smaller rectangles. Functions in $M_k$ are defined to be piecewise constant with respect to the mesh $\{\tau_k^i\}$. In this application, we do not require the introduction of a discrete $L^2(\Omega)$ inner product, since the Gram matrix for $M_k$ (with the natural basis) is already diagonal. The discrete operators $A_k$ are defined directly in terms of differences. Functions in $M_k$ are determined by doubly indexed arrays of coefficients $\{c_{ij}\}$, $(i,j)\in N_k$. The set $N_k$ consists of the indices labeling the rectangles in $\{\tau_k^i\}$. $A_k$ is defined to be the symmetric difference operator satisfying
$$(3.15)\qquad (A_k c, c) = h_k^2\sum_{(i,j)\in N_k} c_{ij}^2 + \sum (c_{ij} - c_{i,j+1})^2 + \sum (c_{ij} - c_{i+1,j})^2.$$
Terms are included in the last two sums above if and only if both of the coefficients' indices are in $N_k$. Note that $A_k$ is the standard five point difference operator with appropriate modification near the boundary. We define $[\cdot,\cdot]_k$ by
$$(3.16)\qquad [v,w]_k = (A_k^{-1}v, w)\quad\text{for all } v, w\in M_k.$$
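The quadratic form (3.15) can be assembled directly. The sketch below (NumPy, dense for clarity; the helper names are ours) builds $A_k = h_k^2 I + D_x^T D_x + D_y^T D_y$, where $D_x$, $D_y$ take neighbor differences with boundary terms simply omitted, and verifies that the matrix reproduces both (3.15) and the interior five point stencil.

```python
import numpy as np

def difference_matrix(m):
    """(m-1) x m matrix of neighbor differences: (D c)_i = c_i - c_{i+1}."""
    D = np.zeros((m - 1, m))
    for i in range(m - 1):
        D[i, i] = 1.0
        D[i, i + 1] = -1.0
    return D

def build_Ak(m, h):
    """Dense matrix of the quadratic form (3.15) on an m x m coefficient grid."""
    I = np.eye(m)
    D = difference_matrix(m)
    Dx = np.kron(D, I)   # differences c_{ij} - c_{i+1,j} (row-major flattening)
    Dy = np.kron(I, D)   # differences c_{ij} - c_{i,j+1}
    return h**2 * np.eye(m * m) + Dx.T @ Dx + Dy.T @ Dy

m, h = 8, 1.0 / 8
Ak = build_Ak(m, h)
c = np.random.default_rng(2).standard_normal(m * m)
C = c.reshape(m, m)
# evaluate (3.15) directly and compare with the matrix quadratic form
direct = (h**2 * (C**2).sum()
          + ((C[:, :-1] - C[:, 1:])**2).sum()
          + ((C[:-1, :] - C[1:, :])**2).sum())
assert np.allclose(c @ Ak @ c, direct)
p = 3 * m + 4  # an interior cell: diagonal h^2 + 4, off-diagonal -1 to each neighbor
assert np.isclose(Ak[p, p], h**2 + 4.0) and np.isclose(Ak[p, p - 1], -1.0)
print("A_k reproduces (3.15) and the five point stencil away from the boundary")
```

Only products with $A_k$ are ever needed in the multigrid algorithm, so in practice one applies the stencil matrix-free rather than forming $A_k$ (and never $A_k^{-1}$) explicitly.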

We then have the following lemma.

Lemma 3.2. Let $A_k$ be defined by (3.15). Then (3.6) holds for $[\cdot,\cdot]_k$ defined by (3.16).

Proof: We first note that for $v\in M_k$,
$$(3.17)\qquad [v,v]_k = (A_k^{-1}v,v) = \sup_{\phi\in M_k}\frac{(v,\phi)^2}{(A_k\phi,\phi)}.$$
We again let $Q_k$ denote the $L^2(\Omega)$ orthogonal projection onto $M_k$. It is easy to check that
$$(A_k Q_k\phi, Q_k\phi) \le C\|\phi\|_1^2\qquad\text{for all }\phi\in H^1(\Omega).$$
Consequently, for $v\in M_k$,
$$\|v\|_{-1}^2 = \sup_{\phi\in H^1(\Omega)}\frac{(v,Q_k\phi)^2}{\|\phi\|_1^2} \le C\sup_{\phi\in H^1(\Omega)}\frac{(v,Q_k\phi)^2}{(A_k Q_k\phi,Q_k\phi)} \le C\,(A_k^{-1}v,v),$$
which is the lower inequality of (3.6).

Let $\phi$ be in $M_k$. We define a shifted rectangular mesh $\{\tilde\tau_k^i\}$ by connecting the centers of the original rectangles and denote by $\Omega_k$ the union of the shifted rectangles contained in $\Omega$. Note that $\Omega_k$ differs from $\Omega$ by a strip of width $h_k/2$. Let $\bar\phi$ denote the function which is continuous on $\Omega_k$, piecewise bilinear with respect to the shifted rectangles, and interpolates $\phi$ at the vertices of the shifted rectangles. We extend $\bar\phi$ to all of $\Omega$ in a piecewise bilinear fashion. For example, the bilinear function $\bar\phi$ on the shifted rectangle $\tilde\tau_k^i$ in Figure 1 is extended bilinearly into the shaded region. This extension is also denoted by $\bar\phi$. It is easy to see that $\bar\phi\in H^1(\Omega)$ and, furthermore, it is elementary to see that
$$(3.18)\qquad c\,(A_k\phi,\phi) \le \|\bar\phi\|_1^2 \le C\,(A_k\phi,\phi),\qquad \|\phi - \bar\phi\| \le Ch_k\|\bar\phi\|_1,$$
for all $\phi\in M_k$.

Figure 1. Local extension regions.

We now prove the upper bound of (3.6). By (3.18),
$$(3.19)\qquad |(v,\phi)| \le |(v,\phi - \bar\phi)| + |(v,\bar\phi)| \le C(h_k\|v\| + \|v\|_{-1})\|\bar\phi\|_1 \le C(h_k\|v\| + \|v\|_{-1})(A_k\phi,\phi)^{1/2}.$$
Thus, using (3.17), the upper inequality of (3.6) will follow if we prove that
$$(3.20)\qquad \|v\| \le Ch_k^{-1}\|v\|_{-1}.$$
For any $\phi\in M_k$, there exists a function $\psi\in H^1(\Omega)$ which has the same average values as $\phi$ on the rectangles of $\{\tau_k^i\}$ and satisfies
$$(3.21)\qquad \|\psi\|_1 \le Ch_k^{-1}\|\phi\|.$$
In fact, $\psi$ can be constructed by taking linear combinations of smooth functions supported on the rectangles of $\{\tau_k^i\}$. Since $v$ is piecewise constant, $(v,\phi) = (v,\psi)$, and thus
$$(3.22)\qquad \|v\| = \sup_{\phi\in M_k}\frac{(v,\phi)}{\|\phi\|} \le Ch_k^{-1}\sup_{\phi\in M_k}\frac{(v,\psi)}{\|\psi\|_1} \le Ch_k^{-1}\|v\|_{-1}.$$

This proves (3.20) and hence completes the proof of the lemma.

We now present the matrix form of Algorithm 2.1, since this elucidates its computational implementation. A common notation for the two cases can be developed if we define $(\cdot,\cdot)_k = (\cdot,\cdot)$ in the case when $M_k$ consists of discontinuous constants. We shall use a bar for denoting matrices and vectors. Let $n_k = \dim(M_k)$ and let $\{\phi_l^k\}_{l=1}^{n_k}$ be the natural basis functions for $M_k$. Note that by scaling the basis functions, we may assume that $(\phi_i^k,\phi_l^k)_k = \delta_{il}$, where $\delta_{il}$ is the Kronecker delta. Let
$$\bar A_k = \big[(A_k\phi_i^k,\phi_l^k)_k\big]_{i,l=1}^{n_k}\qquad\text{and}\qquad \bar V_k = \big[V(\phi_i^k,\phi_l^k)\big]_{i,l=1}^{n_k}.$$
In addition, let the matrix $\bar C_k = [c_{il}^k]$ be defined from the coefficients $\{c_{il}^k\}$ satisfying $\phi_i^k = \sum_{l=1}^{n_{k+1}} c_{il}^k\phi_l^{k+1}$. In terms of matrices, (2.1) can be represented as
$$(3.23)\qquad \bar V_j\bar u = \bar F.$$
Here $\bar u$ is the vector of coefficients in the expansion of the function $u$ in the basis $\{\phi_i^j\}$ and $\bar F$ is the vector $\{F(\phi_i^j)\}$. We consider the matrix operator $\bar B_j$ defined by the following algorithm. As we shall see in the subsequent proposition, this algorithm provides a concrete realization of the operator $B_j$.

Algorithm 3.1. (Matrix form of Algorithm 2.1) Set $\bar B_0 = \bar V_0^{-1}$. Assume that $\bar B_{k-1}$ has been defined and define $\bar B_k\bar G$ for $\bar G\in R^{n_k}$ as follows:

(1) Set $\bar x_1 = \lambda_k^{-1}\bar A_k\bar G$.

(2) Define $\bar x_2 = \bar x_1 + \bar C_{k-1}^t\bar q$, where
$$\bar q = \bar B_{k-1}\bar C_{k-1}(\bar G - \bar V_k\bar x_1).$$

(3) Finally, set $\bar B_k\bar G = \bar x_2 + \lambda_k^{-1}\bar A_k(\bar G - \bar V_k\bar x_2)$.
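The recursion of Algorithm 3.1 can be sketched directly in terms of the level matrices. In the code below (NumPy), `Vs[k]`, `As[k]`, `lams[k]`, and `Cs[k]` stand for $\bar V_k$, $\bar A_k$, $\lambda_k$, and $\bar C_k$; the two-level data are stand-ins (a random SPD matrix, an aggregation basis change with Galerkin coarse matrix, and $\bar A_1 = I$ so the smoother is a scaled Richardson sweep), not the paper's discretization.

```python
import numpy as np

def Bk_apply(k, G, Vs, As, lams, Cs):
    """One application of the level-k V-cycle operator of Algorithm 3.1."""
    if k == 0:
        return np.linalg.solve(Vs[0], G)                   # B_0 = V_0^{-1}
    x1 = (As[k] @ G) / lams[k]                             # step 1: smoothing
    r = Cs[k - 1] @ (G - Vs[k] @ x1)                       # restrict the residual
    q = Bk_apply(k - 1, r, Vs, As, lams, Cs)               # coarse-level correction
    x2 = x1 + Cs[k - 1].T @ q                              # step 2
    return x2 + (As[k] @ (G - Vs[k] @ x2)) / lams[k]       # step 3: smoothing

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
V1 = A @ A.T + n * np.eye(n)                 # stand-in SPD fine-level matrix
C0 = np.array([[1.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 1.0]])        # aggregation-type basis change
Vs = [C0 @ V1 @ C0.T, V1]                    # Galerkin coarse matrix
As = [None, np.eye(n)]
lams = [None, np.linalg.eigvalsh(V1)[-1]]    # properly scaled smoother
Cs = [C0]

u = rng.standard_normal(n)
f = V1 @ u
v = np.zeros(n)
errs = [np.linalg.norm(u - v)]
for _ in range(30):
    v = v + Bk_apply(1, f - V1 @ v, Vs, As, lams, Cs)
    errs.append(np.linalg.norm(u - v))
assert errs[-1] < 1e-2 * errs[0]
print("two-level V-cycle iteration contracts:", errs[0], "->", errs[-1])
```

With an exact coarse solve and a properly scaled symmetric smoother, the reducer $I - \bar B_j\bar V_j$ is a contraction in the $\bar V_j$-inner product, which is what the iteration above exhibits.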

Proposition 3.1. Let $v\in M_k$ and let $\bar v$ denote the vector of coefficients for the expansion of the function $v$ in the basis $\{\phi_i^k\}$. Then $\bar B_k\bar V_k\bar v$ is the vector of coefficients for the expansion of the function $B_k V_k v$.

Proof: The proof is by induction. The result is obvious for $k = 0$. Let $k$ be greater than zero and let $v$ and $\bar v$ be as above. We consider applying Algorithm 2.1 to $V_k v$ and Algorithm 3.1 to $\bar V_k\bar v$. In the first step of Algorithm 2.1, we compute the function $x_1$ satisfying
$$(3.24)\qquad [x_1,\phi]_k = (A_k^{-1}x_1,\phi)_k = \lambda_k^{-1}V(v,\phi)\qquad\text{for all }\phi\in M_k.$$
In contrast, $\bar G = \bar V_k\bar v$ in Algorithm 3.1. Let $G\in M_k$ satisfy
$$(3.25)\qquad (G,\phi)_k = V(v,\phi)\qquad\text{for all }\phi\in M_k.$$
Note that $\bar G$ is the vector of coefficients for the expansion of $G$ in the basis for $M_k$. Combining (3.24) and (3.25) and changing variables gives
$$(x_1,\phi_i^k)_k = \lambda_k^{-1}(A_k G,\phi_i^k)_k,\qquad i = 1,\dots,n_k.$$
This means that the coefficients of $x_1$ (denoted by $\bar x_1$) satisfy
$$\bar x_1 = \lambda_k^{-1}\bar A_k\bar G,$$
which coincides with the first step of Algorithm 3.1.

To compare the results of the second steps of the algorithms, we note that the previous conclusion immediately implies that $\bar G - \bar V_k\bar x_1 = \bar V_k(\bar v - \bar x_1)$ equals the vector $\{V(v - x_1,\phi_i^k)\}$. Consequently, $\bar C_{k-1}(\bar G - \bar V_k\bar x_1)$ is the vector $\{V(v - x_1,\phi_i^{k-1})\}$. However,
$$(3.26)\qquad V(v - x_1,\phi_i^{k-1}) = \langle\tilde P_{k-1}V_k(v - x_1),\phi_i^{k-1}\rangle_{-1} = \langle V_{k-1}P_{k-1}(v - x_1),\phi_i^{k-1}\rangle_{-1}.$$
Note that the right hand side of (3.26) is $\bar V_{k-1}$ applied to the coefficients of $P_{k-1}(v - x_1)$ expanded in the basis for $M_{k-1}$. Thus, by the induction hypothesis, $\bar q$ in Algorithm 3.1 is the vector of coefficients (with respect to the basis for $M_{k-1}$) of the expansion of
$$B_{k-1}V_{k-1}P_{k-1}(v - x_1) = q.$$
Consequently, $\bar x_2$ defined by Algorithm 3.1 gives the coefficients of $x_2$ defined by Algorithm 2.1. The proof that the final step of Algorithm 3.1 results in the coefficients of the function developed in the final step of Algorithm 2.1 is similar. This completes the proof of the proposition.

Remark 3.1: The proposition immediately implies that for all $v\in M_j$,
$$(\bar V_j(I - \bar B_j\bar V_j)\bar v)\cdot\bar v = V((I - B_j V_j)v, v)\qquad\text{and}\qquad (\bar V_j\bar v)\cdot\bar v = V(v,v).$$
Thus, contraction estimates for $I - B_j V_j$ and estimates on the condition number $K(B_j V_j)$ lead to the same results for their matrix counterparts.

4. "No-Regularity" Multigrid analysis.

We apply the theory of [7] to develop a convergence theory for Algorithm 2.1. To do this, we need operators $\tilde Q_k : M_j \mapsto M_k$ with $\tilde Q_j = I$ and satisfying
$$(4.1)\qquad \|(\tilde Q_k - \tilde Q_{k-1})u\|_{-1}^2 \le C_2\lambda_k^{-1}V(u,u),\qquad\text{for } k = 1,\dots,j,$$
$$(4.2)\qquad V(\tilde Q_k u,\tilde Q_k u) \le C_3\,V(u,u),\qquad\text{for } k = 0,\dots,j-1.$$
The inequalities (4.1) and (4.2) must hold for all $u\in M_j$.

Lemma 4.1. Let $M_k$ consist of continuous piecewise linear functions on triangles as described in the first application of the previous section and set $\tilde Q_k = Q_k$. Then (4.1) and (4.2) hold with constants $C_2$ and $C_3$ which are independent of $j$.

Proof: The inequalities (3.7) and duality immediately imply that
$$(4.3)\qquad \|(I - Q_k)v\|_{-1} \le Ch_k\|v\|,\qquad \|Q_k v\|_{-1} \le C\|v\|_{-1}.$$
Applying the real method of interpolation gives
$$\|(I - Q_k)v\|_{-1} \le ch_k^{1/2}\|v\|_{-1/2}.$$
Hence (1.1) implies that for all $v\in M_j$,
$$\|(Q_k - Q_{k-1})v\|_{-1}^2 \le 2\|(I - Q_k)v\|_{-1}^2 + 2\|(I - Q_{k-1})v\|_{-1}^2 \le Ch_k\|v\|_{-1/2}^2 \le Ch_k\,V(v,v).$$
Similarly, by interpolation,
$$V(Q_k v, Q_k v) \le C\|Q_k v\|_{-1/2}^2 \le C\|v\|_{-1/2}^2 \le C\,V(v,v).$$
To complete the proof, we need only show that
$$(4.4)\qquad h_k \le C\lambda_k^{-1}.$$
By definition,
$$\bar\lambda_k = \sup_{\phi\in M_k}\frac{V(\phi,\phi)}{[\phi,\phi]_k} \le C\sup_{\phi\in M_k}\frac{\|\phi\|_{-1/2}^2}{\|\phi\|_{-1}^2}.$$
By (3.8) and (3.10),
$$\|\phi\|_{-1/2}^2 \le C\|\phi\|\,\|\phi\|_{-1} \le Ch_k^{-1}\|\phi\|_{-1}^2.$$
Since $\lambda_k$ is bounded by a fixed multiple of $\bar\lambda_k$, (4.4) follows. This completes the proof of the lemma.

Lemma 4.2. Let $M_k$ consist of discontinuous piecewise constant functions on rectangles as described in the second application of the previous section and set $\tilde Q_k = \tilde P_k$. Then (4.1) and (4.2) hold with constants $C_2$ and $C_3$ which are independent of $j$.

Proof: As in the proof of Lemma 4.1, it follows from (3.22) that $h_k \le C\lambda_k^{-1}$. In addition,
$$(4.5)\qquad \|(I - Q_k)v\|_{-1} = \sup_{\phi\in H^1(\Omega)}\frac{(v,(I - Q_k)\phi)}{\|\phi\|_1} \le Ch_k\|v\|.$$
Since $(I - \tilde P_k)v$ is the minimizer,
$$\|(I - \tilde P_k)v\|_{-1} \le Ch_k\|v\|.$$
The operator $I - \tilde P_k$ is clearly bounded from $H^{-1}(\Omega)$ into $H^{-1}(\Omega)$ and hence, by interpolation,
$$\|(I - \tilde P_k)v\|_{-1} \le Ch_k^{1/2}\|v\|_{-1/2}.$$
Inequality (4.1) follows from (1.1). For inequality (4.2), we first note that by (3.22),
$$\|(Q_k - \tilde P_k)v\| \le Ch_k^{-1}\|(Q_k - \tilde P_k)v\|_{-1} \le C\|v\|.$$
Consequently, $\tilde P_k$ is bounded both as an operator on $L^2(\Omega)$ and as an operator on $H^{-1}(\Omega)$. By interpolation,
$$\|\tilde P_k v\|_{-1/2} \le C\|v\|_{-1/2}.$$
Inequality (4.2) now follows from (1.1). This completes the proof of the lemma.

The following theorem is a consequence of [7] and Lemmas 4.1 and 4.2.

Theorem 4.1. Let $M_k$ be as in Section 3. There exists a constant $C$ not depending on $j$ such that
$$0 \le V((I - B_j V_j)u, u) \le \delta_j\,V(u,u)\qquad\text{for all } u\in M_j,$$
where $\delta_j = 1 - (Cj)^{-1}$. In terms of matrices,
$$0 \le (\bar V_j(I - \bar B_j\bar V_j)\bar v)\cdot\bar v \le \delta_j\,(\bar V_j\bar v)\cdot\bar v\qquad\text{for all } \bar v\in R^{n_j}.$$

Proof: We need only verify that the smoothing operators satisfy appropriate conditions. The appropriate upper bound for the smoothing operator is given by (2.4). By the definition of $R_k$ and Lemmas 3.1 and 3.2, for $w\in M_k$,
$$\|w\|_{-1}^2 = \lambda_k[R_k^{1/2}w, R_k^{1/2}w]_k \le \lambda_k[R_k w, R_k w]_k^{1/2}[w,w]_k^{1/2} = \big(\lambda_k\langle R_k w, w\rangle_{-1}\big)^{1/2}[w,w]_k^{1/2} \le c\,\big(\lambda_k\langle R_k w, w\rangle_{-1}\big)^{1/2}\|w\|_{-1},$$
and hence $\|w\|_{-1}^2 \le c\lambda_k\langle R_k w, w\rangle_{-1}$. This provides the appropriate lower bound and completes the proof of the theorem.

Remark 4.1: The above theorem provides an estimate for the contraction associated with the linear preconditioned iteration
$$u^{i+1} = u^i + B_j(F - V_j u^i),\qquad i = 0, 1,\dots.$$
In fact, the error $e^i = u - u^i$ satisfies
$$(\bar V_j\bar e^i)\cdot\bar e^i \le \delta_j^{2i}\,(\bar V_j\bar e^0)\cdot\bar e^0.$$
In addition, the theorem implies that the condition number $K(B_j V_j)$ is bounded by $Cj$. Such a bound implies that the corresponding preconditioned conjugate gradient iteration also converges rapidly.

Remark 4.2: There is no problem extending the results of this section to the case when $\Omega$ consists of a union of polygonal faces and forms the boundary of a domain in $R^3$.

5. The case of a single layer potential.

In this section, we consider the case when the form $V$ is defined from the single layer potential (1.3). It is first shown that (1.1) holds. Next, conditions for the application of the theory of [4] are verified; these lead to iterative convergence estimates for the multigrid algorithm which are independent of the number of levels.

Let $S$ be a smooth boundary of a domain in $R^3$. It was shown in [18] that
$$(5.1)\qquad C_0\|v\|_{-1/2,S}^2 \le V_{(S)}(v,v) \le C_1\|v\|_{-1/2,S}^2\qquad\text{for all } v\in H^{-1/2}(S),$$
where $V_{(S)}$ is defined by
$$V_{(S)}(u,v) = \int_S\int_S\frac{u(s_1)\,v(s_2)}{|s_1 - s_2|}\,ds_1\,ds_2.$$
In addition, they also showed that the operator $\mathcal V_{(S)}$ defined by
$$(\mathcal V_{(S)}u)(s_2) = \int_S\frac{u(s_1)}{|s_1 - s_2|}\,ds_1$$
is an isomorphism of $H^s(S)$ onto $H^{s+1}(S)$ for all real $s$. This implies that there are constants $C_0$ and $C_1$ such that
$$(5.2)\qquad C_0\|w\|_{s,S} \le \big\|\mathcal V_{(S)}w\big\|_{s+1,S} \le C_1\|w\|_{s,S}\qquad\text{for all } w\in H^s(S).$$
The above norms denote the norms in $H^s(S)$ and $H^{s+1}(S)$.

Remark 5.1: (5.1) holds for $\Omega = S$ when $S$ consists of the union of polygonal faces and is the boundary of a region in $R^3$. Consequently, we can apply the results of Section 4 in this case (see Remark 4.2).

The following lemma will be critical in the analysis provided in the remainder of this section. Its proof will be given at the end of this section.

Lemma 5.1. Let $\phi\in L^2(\Omega)$ and let $\tilde\phi$ denote the extension by zero of $\phi$ to $L^2(S)$. There exist constants $C_2$ and $C_3$ such that for $s\in[-1,0]$,
$$C_2\|\phi\|_s \le \|\tilde\phi\|_{s,S} \le C_3\|\phi\|_s\qquad\text{for all }\phi\in H^s(\Omega).$$

The domain $\Omega$ can be extended to be the smooth boundary of a bounded domain in $R^3$. We shall denote this extended surface by $S$. By (5.1), $V(\phi,\phi) = V_{(S)}(\tilde\phi,\tilde\phi)$ is equivalent to $\|\tilde\phi\|_{-1/2,S}^2$. But by Lemma 5.1, $\|\tilde\phi\|_{-1/2,S}$ is equivalent to $\|\phi\|_{-1/2}$, and hence (1.1) holds.

We next verify inequality (3.5) of [4]. For our application, this translates into proving the inequality
$$(5.3)\qquad \lambda_k^{-1}\|V_k w\|_{-1}^2 \le (\tilde C\gamma^{k-l})^2\,V(w,w)\qquad\text{for all } w\in M_l.$$
The above inequality must be proved for $l\le k$ and some $\gamma < 1$ not depending on $l$, $k$, or $j$. We first note that
$$\|V_k w\|_{-1}^2 = \langle V_k w, V_k w\rangle_{-1} = V(w, V_k w) = (\mathcal V_{(S)}\tilde w,\widetilde{V_k w})_S \le \big\|\mathcal V_{(S)}\tilde w\big\|_{1,S}\,\big\|\widetilde{V_k w}\big\|_{-1,S},$$
where $(\cdot,\cdot)_S$ is the inner product in $L^2(S)$. Applying (5.2) and Lemma 5.1 gives
$$\|V_k w\|_{-1}^2 \le C\|w\|\,\|V_k w\|_{-1},$$
and hence
$$(5.4)\qquad \|V_k w\|_{-1}^2 \le C\|w\|^2.$$

The inverse inequality
$$(5.5)\qquad \|w\|^2 \le Ch_l^{-1}\|w\|_{-1/2}^2\qquad\text{for all } w\in M_l$$
easily follows from standard inverse inequalities, convexity and duality in the case when $M_l$ consists of continuous piecewise linear functions. In the case of discontinuous constants, (5.5) still holds. This can be seen by noting that for $\phi\in M_k$, the function $\bar\phi$ constructed in the proof of Lemma 3.2 satisfies the inequality
$$\|\bar\phi\| \le C\|\phi\|.$$
By convexity and (3.18),
$$\|\bar\phi\|_{1/2} \le Ch_k^{-1/2}\|\phi\|.$$
Consequently, for $w\in M_k$,
$$\|w\| = \sup_{\phi\in M_k}\frac{(w,\phi)}{\|\phi\|} \le Ch_k^{-1/2}\sup_{\phi\in M_k}\frac{(w,\phi)}{\|\bar\phi\|_{1/2}} \le Ch_k^{-1/2}\|w\|_{-1/2}.$$
Combining (5.4) and (5.5) gives
$$\lambda_k^{-1}\|V_k w\|_{-1}^2 \le C(\lambda_k h_l)^{-1}\,V(w,w)\qquad\text{for all } w\in M_l.$$
The proof of (5.3) will be complete once we show that
$$(5.6)\qquad \lambda_k^{-1} \le Ch_k.$$
Let $\tilde Q_k$ be as in Lemmas 4.1 and 4.2. In either case, it was shown that $\tilde Q_k$ is simultaneously bounded as an operator on both $H^{-1}(\Omega)$ and $L^2(\Omega)$ (with bounds which are independent of $k$). Thus, it follows from (3.2) and Lemmas 3.1 and 3.2 that
$$(\tilde Q_k w,\tilde Q_k w)_k^{1/2} \le C\|w\|,\qquad (A_k^{-1}\tilde Q_k w,\tilde Q_k w)_k^{1/2} \le C\|w\|_{-1}.$$
Since $A_k$ is positive definite and symmetric with respect to $(\cdot,\cdot)_k$, its powers define a Hilbert scale. Using the real method of interpolation gives
$$(A_k^{-1/2}\tilde Q_k w,\tilde Q_k w)_k^{1/2} \le C\|w\|_{-1/2}\qquad\text{for all } w\in H^{-1/2}(\Omega).$$
We then have, by (1.1) and Lemmas 3.1 and 3.2,
$$\bar\lambda_k = \sup_{\phi\in M_k}\frac{V(\phi,\phi)}{[\phi,\phi]_k} \ge c\sup_{\phi\in M_k}\frac{(A_k^{-1/2}\phi,\phi)_k}{(A_k^{-1}\phi,\phi)_k} = c\sup_{\phi\in M_k}\frac{(A_k^{1/2}\phi,\phi)_k}{(\phi,\phi)_k} \ge ch_k^{-1}.$$

The last inequality above follows easily from the fact that the largest eigenvalue of $A_k$ is bounded from below by $ch_k^{-2}$. This completes the proof of (5.3).

The second condition which one must verify before applying [4] is as follows. We must show that there is a constant $C_0$ not depending on $j$ and satisfying
$$(5.7)\qquad V(w,w) \le C_0\Big(V(P_0 w, w) + \sum_{k=1}^{j}\lambda_k^{-1}\|V_k P_k w\|_{-1}^2\Big)\qquad\text{for all } w\in M_j.$$
Let $\hat P_k$ denote the orthogonal projection operator onto the subspace $M_k$ with respect to the inner product $\langle\cdot,\cdot\rangle_{-1/2}$. Lemma 3.1 of [4] shows that (5.7) will follow if we prove the analogous inequality for the equivalent form $\langle\cdot,\cdot\rangle_{-1/2}$, i.e., (5.7) follows from
$$(5.8)\qquad \langle w,w\rangle_{-1/2} \le C\Big(\langle\hat P_0 w, w\rangle_{-1/2} + \sum_{k=1}^{j}\lambda_k^{-1}\big\|\hat V_k\hat P_k w\big\|_{-1}^2\Big)\qquad\text{for all } w\in M_j.$$

Here $\hat V_k : M_k \mapsto M_k$ is defined by
$$(5.9)\qquad \langle\hat V_k w,\phi\rangle_{-1} = \langle w,\phi\rangle_{-1/2}\qquad\text{for all }\phi\in M_k.$$
By Remark 3.1 of [4], (5.8) will follow if we prove full regularity and approximation for $\hat P_k$. More precisely, it suffices to show that there exists a constant $C$, not depending on $k = 1,2,\dots,j$, satisfying
$$(5.10)\qquad \langle(I - \hat P_{k-1})w, w\rangle_{-1/2} \le C\lambda_k^{-1}\big\|\hat V_k w\big\|_{-1}^2\qquad\text{for all } w\in M_k.$$

It is well known that the spaces $H^s(\Omega)$ form a Hilbert scale, and hence there is an unbounded selfadjoint operator $\hat V$ defined on $H^{-1}(\Omega)$ (with domain $L^2(\Omega)$) such that for $\alpha\in[0,1]$,
$$\|v\|_{-1+\alpha} = \big\|\hat V^{\alpha}v\big\|_{-1}.$$
It follows that
$$(5.11)\qquad \big\|(I - \hat P_{k-1})w\big\|_{-1} = \sup_{\phi\in L^2(\Omega)}\frac{(\hat V^{-1}(I - \hat P_{k-1})w,\phi)}{\|\phi\|} = \sup_{\phi\in L^2(\Omega)}\frac{\langle(I - \hat P_{k-1})w,(I - \hat P_{k-1})\phi\rangle_{-1/2}}{\|\phi\|}.$$
By convexity, the $L^2(\Omega)$ boundedness of $\tilde Q_{k-1}$, (4.3) and (4.5),
$$\|(I - \tilde Q_{k-1})\phi\|_{-1/2} \le Ch_k^{1/2}\|\phi\|.$$
Using the minimization property of the orthogonal projector gives
$$(5.12)\qquad \langle(I - \hat P_{k-1})\phi,\phi\rangle_{-1/2} \le \langle(I - \tilde Q_{k-1})\phi,(I - \tilde Q_{k-1})\phi\rangle_{-1/2} \le Ch_k\|\phi\|^2.$$
Combining (5.11) and (5.12) with the inequality $h_k \le C\lambda_k^{-1}$ gives
$$\big\|(I - \hat P_{k-1})w\big\|_{-1} \le C\lambda_k^{-1/2}\langle(I - \hat P_{k-1})w, w\rangle_{-1/2}^{1/2},$$
and hence
$$\langle(I - \hat P_{k-1})w, w\rangle_{-1/2} = \langle(I - \hat P_{k-1})w, \hat V_k w\rangle_{-1} \le C\lambda_k^{-1/2}\big\|\hat V_k w\big\|_{-1}\langle(I - \hat P_{k-1})w, w\rangle_{-1/2}^{1/2}.$$

The inequality (5.10) immediately follows. This completes the proof of (5.7).

The following theorem is a consequence of (5.3), (5.7) and [4].

Theorem 5.1. Let $B_j$ be defined by Algorithm 2.1 with $V$ defined by (1.3). Then
$$0 \le V((I - B_j V_j)v, v) \le \delta\,V(v,v)\qquad\text{for all } v\in M_j,$$
where $\delta < 1$ is a constant independent of $j$. In terms of matrices,
$$0 \le (\bar V_j(I - \bar B_j\bar V_j)\bar v)\cdot\bar v \le \delta\,(\bar V_j\bar v)\cdot\bar v\qquad\text{for all } \bar v\in R^{n_j}.$$

Remark 5.2: Additive versions of the multigrid algorithm can also be defined (as developed in [4]). Theorem 3.1 of [4] guarantees that the additive version will lead to a preconditioned system which has a condition number which is independent of the number of levels.

Remark 5.3: We proved full regularity and approximation for the equivalent quadratic form $\langle\cdot,\cdot\rangle_{-1/2}$ as part of the proof of (5.7). In contrast, it is unlikely that the original form $V(\cdot,\cdot)$ satisfies the full regularity and approximation condition (with constants that are independent of $j$).

We conclude this section with the proof of Lemma 5.1. We first prove the upper inequality. This is obvious for $s = 0$. Moreover,
$$\|\tilde\phi\|_{-1,S} = \sup_{\psi\in H^1(S)}\frac{(\tilde\phi,\psi)_S}{\|\psi\|_{1,S}} \le \sup_{\psi\in H^1(\Omega)}\frac{(\phi,\psi)}{\|\psi\|_1} \le \|\phi\|_{-1}.$$
The upper inequality follows by interpolation. To prove the lower inequality of the lemma, we use a simultaneously bounded extension operator $E : H^s(\Omega) \mapsto H^s(S)$, for $s\in[0,1]$. The existence of such an extension is well known (cf. [14]). Then,
$$\|\phi\|_{-s} = \sup_{\psi\in H^s(\Omega)}\frac{(\phi,\psi)}{\|\psi\|_s} \le C\sup_{\psi\in H^s(\Omega)}\frac{(\tilde\phi, E\psi)_S}{\|E\psi\|_{s,S}} \le C\|\tilde\phi\|_{-s,S}.$$
This completes the proof of Lemma 5.1.

6. Numerical computations.

In this section, we provide the results of numerical examples illustrating the theory developed in earlier sections. We shall consider the integral equation (1.3) defined on $\Omega = [-1,1]\times[-1,1]$. We consider the case when the subspaces are given by piecewise constant functions on a rectangular mesh. Let $m_k = 2^{k+1}$ and define the $k$'th mesh by partitioning the domain $\Omega$ into $m_k\times m_k$ square subdomains of side length $1/m_k$. The approximation space $M_k$ is defined to be the set of functions which are piecewise constant with respect to this mesh. Equations involving $B_0$ are solved exactly. Because the mesh lines are parallel to the $x$ and $y$ axes, the integrals required for the entries of the matrix $\bar V_k$ can be computed analytically. Moreover, since $\bar V_k$ is translation invariant, its action can be computed in $O(kn_k)$ operations by use of the fast discrete Fourier transform [6]. We will present results using the multigrid operator as a preconditioner in a preconditioned conjugate gradient iteration. One factor which can be used to interpret the efficiency of the proposed iterative scheme is the number of iterations required to achieve a given accuracy. Let $N_I$ be the number of steps required to reduce the initial error by a factor of $10^{-6}$, i.e.

keNI k  10?6ke0k;

(6.1)

Here ei = u ? vi , u is the solution of (3.23), and vi is the i'th iterate in the iterative algorithm. Table 6.1.

CG preconditioned with

1=h 16 32 64 128 256

(Bj Vj ) 1.92 1.98 2.00 2.01 2.01

Bj . NI 7 8 8 8 8

We compare two iterative schemes for computing the solution of (3.23). The first is the conjugate gradient algorithm using the multigrid preconditioner of Algorithm 3.1. The second is the conjugate gradient algorithm (CG) applied directly to (3.23). In Tables 6.1 and 6.2, we report the condition numbers, $\kappa(B_jV_j)$ and $\kappa(V_j)$ respectively, and the number of iterative steps required to satisfy the condition (6.1). The use of the multigrid preconditioner results in significant improvements in both the condition number and the number of iterations required to satisfy (6.1). Note that the condition numbers in Table 6.1 appear bounded. This is in agreement with the theoretical results of Theorem 5.1.

Table 6.2. CG directly applied to (3.23).

    1/h               16      32      64      128     256
    $\kappa(V_j)$     44.3    89.0    176.9   344.9   657.0
    $N_I$             18      26      37      51      68
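Since $V_k$ is translation invariant, its matrix is block-Toeplitz, and the matrix-vector product underlying the iteration can be formed with two-dimensional FFTs as indicated above. The sketch below applies a generic real translation-invariant kernel `t` (an assumed stand-in for the actual analytically computed entries of $V_k$):

```python
import numpy as np

def toeplitz2d_matvec(t, x):
    """Apply a 2-D block-Toeplitz (translation-invariant) matrix to x.

    t : (2m-1, 2m-1) array; t[i + m - 1, j + m - 1] is the kernel value for
        offset (i, j), with i, j in [-(m-1), m-1].
    x : (m, m) array of coefficients.
    Returns y with y[i, j] = sum_{i', j'} t[i - i', j - j'] * x[i', j'].
    """
    m = x.shape[0]
    n = 2 * m - 1                       # FFT length; could pad further to a power of two
    T = np.fft.fft2(t, s=(n, n))
    X = np.fft.fft2(x, s=(n, n))
    y = np.fft.ifft2(T * X).real        # circular convolution of size n x n
    # the central block of the circular convolution is alias-free and
    # equals the Toeplitz matrix-vector product
    return y[m - 1:2 * m - 1, m - 1:2 * m - 1]
```

Padding the FFT length up to the next power of two yields the $O(n_k \log n_k)$ operation count per application.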
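The preconditioned conjugate gradient iteration itself can be sketched as follows. Here the operator `A` and preconditioner `M` are generic callables (stand-ins for $V_j$ and the multigrid operator $B_j$), and the stopping test uses the relative preconditioned residual norm as a computable surrogate for the error criterion (6.1):

```python
import numpy as np

def pcg(A, b, M, tol=1e-6, maxit=500):
    """Preconditioned conjugate gradients for a symmetric positive definite
    operator A with symmetric positive definite preconditioner M.

    A, M : callables applying the operator and the preconditioner to a vector.
    Stops when the preconditioned residual norm has dropped by the factor tol.
    Returns the approximate solution and the iteration count.
    """
    x = np.zeros_like(b)
    r = b - A(x)
    z = M(r)
    p = z.copy()
    rz = r @ z
    rz0 = rz
    for it in range(maxit):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M(r)
        rz_new = r @ z
        if np.sqrt(rz_new / rz0) < tol:     # relative preconditioned residual
            return x, it + 1
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

With a bounded condition number $\kappa(B_jV_j)$, as in Table 6.1, the number of iterations needed by such a scheme stays bounded independently of the mesh size.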

References

1. R.E. Bank and T. Dupont, An optimal order process for solving finite element equations, Math. Comp. 36 (1981), 35–51.
2. D. Braess and W. Hackbusch, A new convergence proof for the multigrid method including the V-cycle, SIAM J. Numer. Anal. 20 (1983), 967–975.
3. J.H. Bramble and J.E. Pasciak, New convergence estimates for multigrid algorithms, Math. Comp. 49 (1987), 311–329.
4. J.H. Bramble and J.E. Pasciak, New estimates for multigrid algorithms including the V-cycle, Brookhaven Nat. Lab. Rep. #BNL-46730.
5. J.H. Bramble and J.E. Pasciak, The analysis of smoothers for multigrid algorithms, Math. Comp. (to appear).
6. J.H. Bramble and J.E. Pasciak, An efficient numerical procedure for the computation of steady state harmonic currents in flat plates, IEEE Trans. on Mag. Mag-19 (1983), 2409–2412.
7. J.H. Bramble, J.E. Pasciak, J. Wang, and J. Xu, Convergence estimates for multigrid algorithms without regularity assumptions, Math. Comp. 57 (1991), 23–45.
8. J.H. Bramble, J.E. Pasciak and J. Xu, The analysis of multigrid algorithms with non-nested spaces or non-inherited quadratic forms, Math. Comp. 56 (1991), 1–34.
9. J.H. Bramble, J.E. Pasciak and J. Xu, Parallel multilevel preconditioners, Math. Comp. 55 (1990), 1–22.
10. J.H. Bramble and J. Xu, Some estimates for weighted $L^2$ projections, Math. Comp. 56 (1991), 463–476.
11. A. Brandt, Multi-level adaptive solutions to boundary-value problems, Math. Comp. 31 (1977), 333–390.
12. A. Brandt and A.A. Lubrecht, Multilevel multi-integration and fast solution of integral equations, (preprint).
13. P.L. Butzer and H. Berens, "Semi-Groups of Operators and Approximation," Springer-Verlag, New York, 1967.
14. P. Grisvard, "Elliptic Problems in Non Smooth Domains," Pitman, Boston, 1985.
15. W. Hackbusch, "Multi-Grid Methods and Applications," Springer-Verlag, New York, 1985.
16. J.F. Maitre and F. Musy, Algebraic formalization of the multigrid method in the symmetric and positive definite case - a convergence estimation for the V-cycle, in "Multigrid Methods for Integral and Differential Equations," D.J. Paddon and H. Holstein (eds.), Clarendon Press, Oxford, 1985.
17. J. Mandel, S. McCormick and R. Bank, Variational multigrid theory, in "Multigrid Methods," S. McCormick (ed.), SIAM, Philadelphia, Penn., pp. 131–178.
18. J.C. Nedelec and J. Planchard, Une méthode variationnelle d'éléments finis pour la résolution numérique d'un problème extérieur dans $R^3$, R.A.I.R.O. 7, 105–129.


19. T. von Petersdorff and E.P. Stephan, On the convergence of the multigrid method for a hypersingular integral equation of the first kind, Numer. Math. 57 (1990), 379–391.

1980 Mathematics subject classifications: Primary 65N30; Secondary 65F10

Department of Mathematics, Cornell University, Ithaca, NY 14853
E-mail: [email protected]

Statistics Research Section, Australian National University, Canberra, ACT 2601, Australia
E-mail: [email protected]

Department of Applied Science, Brookhaven National Laboratory, Upton, NY 11973
E-mail: [email protected]
