A Note on Besov Regularity of Functions With Random Coefficients

ADVANCES IN MATHEMATICS(CHINA)

Vol.43, No.5 Sep., 2014 doi: 10.11845/sxjz.2012164b

A Note on Besov Regularity of Functions With Random Coefficients

XU Tiange (School of Mathematics, University of Chinese Academy of Sciences, Beijing, 100049, P. R. China)

Abstract: In this paper we apply an alternative probabilistic approach to prove the Besov regularity of functions with sparse random coefficients introduced by Bochkina. Our result extends Bochkina's result from B^s_{p,p} (p > 1) to B^s_{p,p} (p < 1), providing theoretical support for the study of adaptive wavelet numerical schemes for stochastic partial differential equations.

Keywords: Besov regularity; adaptive wavelet algorithm; stochastic partial differential equation

MR(2010) Subject Classification: 60H15; 46E35 / CLC number: O211
Document code: A  Article ID: 1000-0917(2014)05-0660-07

0 Background

There is a large literature on numerical solutions of stochastic partial differential equations[1-2, 8-12], with diverse ideas and methodologies. Kloeden and Platen[9] introduced numerical algorithms, such as the Euler-Maruyama, Milstein and Runge-Kutta methods, which can be regarded as extensions of classical numerical methods for deterministic ordinary differential equations to the stochastic case. To cope with the complicated nature of SPDEs, a variety of numerical simulation methods have been developed, of which the Monte-Carlo (MC for short) method is the most widely accepted. The drawback of MC simulation is its slow convergence rate, about 1/√M, where M is the total number of realizations of the randomness. Aiming to improve on the MC convergence rate and to provide a practical technique for nonlinear SPDEs, Lototsky, Rozovskii[10] and Luo[11] established a new numerical method for SPDEs driven by white noise based on the Wiener chaos expansion (WCE for short). Its main idea is to separate the deterministic effects from the randomness in the solution through the Wiener chaos expansion in the Gaussian field; once the solution is expanded as an infinite series, standard deterministic numerical methods can be applied.

As for numerical methods for deterministic PDEs, a massive amount of research has been done, producing many impressive results. Among them, we put emphasis on the adaptive wavelet method (AWM for short), since this technique will be involved in our future study: establishing a stochastic AWM. The purpose of this paper is to provide the necessary

Received date: 2012-10-25.
Foundation item: Supported by NSFC (No. 11101419) and the President Fund of GUCAS (B) (No. Y15102AN00).
E-mail: [email protected]




preliminaries for it.

Similar to the well-known adaptive finite element method (AFM for short), the AWM takes advantage of adaptivity to deal with singularities of solutions of elliptic equations. With the adaptive technique, the discretization procedure keeps adjusting to the information from the last iteration in order to obtain a desirable convergence rate. It is worth noting that the adaptive technique is a kind of nonlinear approximation, different from the widely used linear one. Moreover, the AFM relies primarily on the regularity of the solution in the Sobolev scale, which is known to govern the accuracy of linear approximation, while the AWM exploits the regularity of the solution in the Besov scale, in which the nonlinear method has a more appropriate smoothness calibration. This fundamental difference offers the possibility of improving the approximation efficiency of the AFM by the AWM. Cohen, Dahmen and DeVore[5] discussed the theoretical performance of the AWM in detail and established the related algorithm. Moreover, they proved that the algorithm leads to the optimal approximation, in the sense that the minimum energy norm of the difference between the theoretical solution and all its possible Galerkin approximations in the wavelet framework is attained by the designed algorithm, with convergence rate O(N^{-s}), where N is the dimension of the Galerkin projection space spanned by the wavelet basis and s is the scale of the Besov regularity.

Now, we start with the simplest stochastic partial differential equation,

du(t, x) = Δu(t, x) dt + G dW(t),  t ∈ [0, T], x ∈ O,    (0.1)

where Δ is the Laplacian operator, W(t, ·) is a cylindrical Wiener process on a Hilbert space H, G ∈ L(H_0^2, H), and O is a bounded domain with Lipschitz boundary. By time discretization, the above SPDE can be transformed into

u(t_n, x) − u(t_{n−1}, x) = Δu(t_{n−1}, x)(t_n − t_{n−1}) + G(W(t_n, x) − W(t_{n−1}, x)).    (0.2)
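For intuition, the explicit time stepping (0.2) can be sketched numerically for the stochastic heat equation. The following is a minimal sketch under assumptions of our own, not the scheme studied here: O = (0, 1) with zero boundary conditions, G the identity on a K-mode spectral truncation, and the cylindrical Wiener increment represented by i.i.d. N(0, Δt) coordinates in the sine eigenbasis of Δ. The function name `heat_spde_step` and all truncation parameters are illustrative.

```python
import numpy as np

def heat_spde_step(u, dt, K, rng):
    """One explicit Euler step of du = u_xx dt + dW on O = (0,1), zero BC.

    u holds the K sine-basis (spectral) coefficients.  The Laplacian is
    diagonal in this basis with eigenvalues -(k*pi)^2, and the cylindrical
    Wiener increment has i.i.d. N(0, dt) coordinates.
    """
    k = np.arange(1, K + 1)
    lam = -(k * np.pi) ** 2
    dW = rng.normal(0.0, np.sqrt(dt), size=K)
    return u + dt * lam * u + dW

rng = np.random.default_rng(0)
K, dt, n_steps = 50, 1e-5, 1000
u = np.zeros(K)                      # start from u(0, x) = 0
for _ in range(n_steps):
    u = heat_spde_step(u, dt, K, rng)
```

Since Δ is diagonal in this basis, each mode evolves independently; the explicit Euler step is stable here because Δt < 2/(Kπ)².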

Dahlke and DeVore[7] studied the Besov regularity of the solution of the Laplace equation on a Lipschitz domain under a Besov regularity condition on the external force. Motivated by their result, we wish to analyze the Besov regularity of the random noise with respect to the space variable x. Bochkina[3] offered an appropriate and reasonable decomposition of the random noise, but in her paper only the Besov spaces B^s_{p,p} with p ≥ 1 are considered. The convergence rate of the adaptive wavelet method, however, depends on the Besov regularity parameter s, which increases as p decreases, since the smoothness scale measured in L^p (p < 1) is finer than that in L^q (q > 1). It is therefore necessary to extend the regularity in the Besov spaces B^s_{p,p} (p ≥ 1) to p < 1. We would like to point out that the Besov regularity of the random noise X in Section 2 has also been treated in [4], but our approach is different; comparatively, ours is more explicit and straightforward. Besides, thanks to [5], once the random noise G(W(t_n, x) − W(t_{n−1}, x)) is expanded into the form in Section 2, it can be approximated by the best N-term approximation. Cioica et al.[4] demonstrated, through computer simulations, the superiority of the nonlinear best N-term approximation of the random noise X over the classical linear approximation. Though the algorithm for the solution of the elliptic equation driven by the random noise X was




established in [4], based on the regularity of the solution of the deterministic elliptic equation from [7] and the corresponding adaptive algorithm from [5], the time discretization of SPDEs, which is quite common, is not covered by that paper. How to provide a time discretization compatible with the adaptive spatial discretization, how to control the overall optimal approximation rate in which both the time and the spatial discretizations are involved, and how to establish the corresponding algorithm are questions for our future study.

The paper is organized as follows. In Section 1, we present some preliminaries about wavelet bases and Besov spaces. Section 2 is devoted to the main result of the paper.

1 Preliminaries

We start this section with a brief introduction to Besov spaces; for details, please refer to [6]. Let Ω be a subset of R^d. Set Ω_{h,n} := {x ∈ Ω : x + kh ∈ Ω, k = 0, 1, ..., n} and define the n-th order finite difference operator by

Δ^1_h f(x) = f(x + h) − f(x),  Δ^n_h f(x) = Δ^1_h(Δ^{n−1}_h f)(x).

Definition 1.1  The Besov space B^s_{p,q}(Ω) consists of the functions f ∈ L^p(Ω) such that

∑_{j≥0} 2^{sjq} ω_n(f, 2^{−j})_p^q < ∞,

where ω_n(f, t)_p = sup_{|h|≤t} ||Δ^n_h f||_{L^p(Ω_{h,n})} is the n-th order L^p modulus of smoothness of f.

Remark  Roughly speaking, the Besov space B^s_{p,q} provides another calibration for measuring smoothness in the space L^p, and s denotes the degree of smoothness. If s_1 ≥ s_2, then B^{s_1}_{p,q_1} ⊂ B^{s_2}_{p,q_2} for all q_1, q_2. Besides, the Sobolev space W^{s,p} = B^s_{p,p} for s ∉ Z, and W^{m,p} = B^m_{p,p} for m ∈ Z, p = 2.

The following theorem provides an equivalent representation of the Besov norm of a function f in terms of its wavelet coefficients. For the proof, please refer to [6, Theorem 3.7.7].

Theorem 1.1[6]  Let ψ ∈ L^r(R^d) be a wavelet function, and let a function f be decomposed as

f = ∑_{j≥−1} ∑_{λ∈∇_j} c_λ ψ_{λ,j},

where {ψ_{λ,j}} is a wavelet basis obtained by shifting and dilating ψ, and ∇_j is the index set of the multiscale subspace at level j. Then we have the norm equivalence

||f||_{B^t_{p,q}} ∼ || ( 2^{tj} 2^{d(1/2 − 1/p)j} ||(c_λ)_{λ∈∇_j}||_{ℓ^p} )_{j≥−1} ||_{ℓ^q},

where ℓ^p denotes the norm of the p-power summable sequences.
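To make the norm equivalence concrete, the weighted sequence norm on the right-hand side can be evaluated directly from wavelet coefficients. The sketch below is our own illustration, not from the paper: it uses a plain Haar decomposition on [0, 1] with d = 1, and the helper names `haar_coeffs` and `besov_seq_norm` are ours. A function with a jump should register a much larger B^1_{2,2}-type sequence norm than a smooth one.

```python
import numpy as np

def haar_coeffs(f_vals):
    """Haar wavelet coefficients of a signal sampled at 2^J dyadic points.

    Returns a list: entry 0 holds the coarsest scaling coefficient,
    entry j+1 holds the 2^j detail coefficients (L2-normalized filters).
    """
    c = f_vals / np.sqrt(len(f_vals))      # L2-normalize the samples
    levels = []
    while len(c) > 1:
        even, odd = c[0::2], c[1::2]
        levels.append((even - odd) / np.sqrt(2))   # detail coefficients
        c = (even + odd) / np.sqrt(2)              # scaling coefficients
    levels.append(c)
    return levels[::-1]        # [scaling, details_0, details_1, ...]

def besov_seq_norm(levels, t, p, q, d=1):
    """Weighted norm ||(2^{tj} 2^{d(1/2-1/p)j} ||c_j||_{l^p})_j||_{l^q}."""
    terms = []
    for j, cj in enumerate(levels[1:]):            # levels j = 0, 1, ...
        w = 2.0 ** (t * j) * 2.0 ** (d * (0.5 - 1.0 / p) * j)
        terms.append((w * np.sum(np.abs(cj) ** p) ** (1.0 / p)) ** q)
    return (abs(levels[0][0]) ** q + sum(terms)) ** (1.0 / q)

x = np.linspace(0, 1, 256, endpoint=False)
norm_smooth = besov_seq_norm(haar_coeffs(np.sin(2 * np.pi * x)), t=1, p=2, q=2)
norm_rough = besov_seq_norm(haar_coeffs(np.sign(x - 0.37)), t=1, p=2, q=2)
```

Here the jump contributes an O(1)-sized detail at every level, so the weighted sum grows with the number of levels, while the smooth function's details decay fast enough for the sum to stay small.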

2 Main Results

We consider the following random function

X(ω, ·) := ∑_{j=0}^∞ ∑_{k∈{0,1,...,2^j−1}^d} I_{j,k}(ω) σ_j β_{j,k}(ω) ψ_{j,k}(·)    (2.1)




defined on a probability space (Ω, F, P) as the representation of the random noise in the equation, where
• I_{j,k} is a Bernoulli random variable with parameter p_j := 2^{−dβj} (0 < β ≤ 1);
• β_{j,k} (j ≥ 0, k ∈ {0, 1, ..., 2^j − 1}^d) are i.i.d. random variables, independent of the I_{j,k};
• σ_j := j^{dγ/2} 2^{−dαj/2} (γ ∈ R, α ≥ 0);
• ψ(·) is a wavelet function and {ψ_{j,k}(·) := 2^{jd/2} ψ(2^j · − k)} forms a wavelet basis of L^2([0,1]^d).

The aim of this section is to establish necessary and sufficient conditions under which the random function X belongs to a given Besov space. To this end, we make use of Theorem 1.1, that is,

X ∈ B^s_{p,p}, P-a.s.  ⇔  ∑_{j=0}^∞ j^{dγp/2} 2^{jp(s + d(1/2 − β/p − α/2))} · 2^{−d(1−β)j} ∑_{k∈{0,1,...,2^j−1}^d} |I_{j,k} β_{j,k}|^p < ∞, P-a.s.
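This equivalence is easy to probe by simulation. In the sketch below (our own illustration; d = 1 and standard Gaussian β_{j,k} are assumptions, and `besov_partial_sum` is our name), the indicators I_{j,k} are sampled with success probability p_j = 2^{−dβj} and the partial sums of the series are evaluated for parameter choices that make the level exponent negative (summable) or positive (divergent).

```python
import numpy as np

def besov_partial_sum(s, p, beta, alpha, gamma, d=1, J=15, seed=0):
    """Partial sum (levels j = 1..J) of the series deciding X in B^s_{p,p}.

    Term j:  j^{d*gamma*p/2} * 2^{j*p*(s + d*(1/2 - beta/p - alpha/2))}
             * 2^{-d*(1-beta)*j} * sum_k |I_{j,k} beta_{j,k}|^p.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for j in range(1, J + 1):
        n = 2 ** (d * j)
        I = rng.random(n) < 2.0 ** (-d * beta * j)   # sparse indicators
        b = rng.standard_normal(n)                   # beta_{j,k} (assumed Gaussian)
        a_j = j ** (d * gamma * p / 2) * 2.0 ** (
            j * p * (s + d * (0.5 - beta / p - alpha / 2)))
        total += a_j * 2.0 ** (-d * (1 - beta) * j) * np.sum(np.abs(I * b) ** p)
    return total

# Level exponent s + d(1/2 - beta/p - alpha/2): negative for `small`
# (terms decay geometrically), positive for `large` (terms blow up).
small = besov_partial_sum(s=0.5, p=1.0, beta=0.5, alpha=2.0, gamma=0.0)
large = besov_partial_sum(s=0.5, p=1.0, beta=0.5, alpha=0.0, gamma=0.0)
```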

To simplify the notation, we denote by {ξ_{j,k}, j = 0, 1, ...; k = 1, 2, ..., 2^{dj}} an i.i.d. sequence with ξ_{0,1} =^D β_{0,{0}^d} (equality in distribution), and set

a_j := j^{dγp/2} 2^{jp(s + d(1/2 − β/p − α/2))},  η_j := 2^{−d(1−β)j} ∑_{k=1}^{N_j} |ξ_{j,k}|^p,

where N_j is the total number of nonzero I_{j,k} at level j. Then, in the distribution sense,

∑_{k∈{0,1,...,2^j−1}^d} |I_{j,k} β_{j,k}|^p =^D ∑_{k=1}^{N_j} |ξ_{j,k}|^p.

Theorem 2.1  Let 0 < p ≤ 1 and suppose that E e^{t|ξ_{0,1}|^p} < ∞ for all t ∈ R. Then ∑_{j=0}^∞ a_j η_j < ∞, P-a.s., if and only if

s + d(1/2 − β/p − α/2) < 0,    (2.2)

or

s + d(1/2 − β/p − α/2) = 0 and γ < −2/(dp).    (2.3)

Before giving the proof of Theorem 2.1, we need the following lemmas. Set ν_p := E|ξ_{0,1}|^p.

Lemma 2.1  (1) If 0 < β < 1, then η_j → ν_p in probability as j → ∞.
(2) If β = 1, then η_j → ∑_{k=1}^N |ξ_k|^p in distribution as j → ∞, where N is a Poisson random variable with parameter 1 and {ξ_k, k ≥ 1} is an i.i.d. sequence with the same distribution as ξ_{0,1}.

Proof  If 0 < β < 1,

E e^{itη_j} = E e^{it 2^{−d(1−β)j} ∑_{k=1}^{N_j} |ξ_{j,k}|^p} = (1 − 2^{−dβj} + 2^{−dβj} b_j(t))^{2^{dj}},

where b_j(t) := E e^{it 2^{−d(1−β)j} |ξ_{0,1}|^p}. Taking the limit as j → ∞, we get

lim_{j→∞} log E e^{itη_j} = lim_{j→∞} 2^{dj} log(1 − 2^{−dβj} + 2^{−dβj} b_j(t)) = it E|ξ_{0,1}|^p = it ν_p,




which implies η_j → ν_p in probability as j → ∞.

Similarly, if β = 1, let b(t) = E e^{it|ξ_1|^p}. Then

lim_{j→∞} E e^{itη_j} = e^{E e^{it|ξ_1|^p} − 1} = e^{b(t)−1} = E e^{N log b(t)} = E (b(t))^N = E e^{it ∑_{k=1}^N |ξ_k|^p}.

Thus, we deduce that η_j → ∑_{k=1}^N |ξ_k|^p in distribution.

Lemma 2.2  (1/n) ∑_{j=0}^n η_j converges in probability to ν_p as n → ∞.

Proof  It is sufficient to prove

lim_{n→∞} E e^{(t/n) ∑_{j=0}^n η_j} = e^{tν_p},  t ∈ R.    (2.4)

Define f_j(t) = E e^{tη_j}. By the independence of the η_j, we have

E e^{(t/n) ∑_{j=0}^n η_j} = ∏_{j=0}^n f_j(t/n).

Set b(t) := E e^{t|ξ_{0,1}|^p}.

Case 1: β = 1. By the proof of Lemma 2.1,

f_j(t) = (1 − 2^{−dj} + 2^{−dj} b(t))^{2^{dj}}.

Define the function g_t(x) := (1 − x + x b(t))^{x^{−1}}; clearly f_j(t) = g_t(2^{−dj}). To prove that g_t(x) is monotone on [0, ∞) with respect to x, we consider log g_t(x) and calculate

(log g_t(x))′ = [ (b(t)−1)x / (1 − x + x b(t)) − log(1 − x + x b(t)) ] / x² =: h(x)/x².

It is easy to verify that h(0) = 0 and

h′(x) = −(b(t) − 1)² x / (1 − x + x b(t))² < 0,

which implies that g_t(x) is monotonically decreasing in x. Thus {f_j(t)} is an increasing sequence with respect to j. On the other hand, by Lemma 2.1, we have

lim_{j→∞} f_j(t) = e^{b(t)−1} =: f(t),  t ∈ R.

Therefore,

f(t/n)^{n+1} ≥ E e^{(t/n) ∑_{j=0}^n η_j} ≥ f_0(t/n)^{n+1}.

Letting n → ∞ in the above inequality, we have

lim_{n→∞} f(t/n)^{n+1} = lim_{n→∞} e^{(n+1)(b(t/n)−1)} = e^{tν_p},




and

lim_{n→∞} f_0(t/n)^{n+1} = lim_{n→∞} (E e^{(t/n)|ξ_{0,1}|^p})^{n+1} = e^{tν_p}.

Hence,

lim_{n→∞} E e^{(t/n) ∑_{j=0}^n η_j} = e^{tν_p}.

Case 2: 0 < β < 1. Similarly,

f_j(t) = (1 − 2^{−dβj} + 2^{−dβj} b(t 2^{−d(1−β)j}))^{2^{dj}}.

As in Case 1, let

g_t(x) = (1 − x^β + x^β b(t x^{1−β}))^{x^{−1}},

so that f_j(t) = g_t(2^{−dj}). Calculating the first derivative of log g_t(x) with respect to x gives

(log g_t(x))′ = [ β m(x) + (1−β) t x E(|ξ_{0,1}|^p e^{t x^{1−β}|ξ_{0,1}|^p}) − (1 + m(x)) log(1 + m(x)) ] / [x²(1 + m(x))] =: h(x) / [x²(1 + m(x))],

where m(x) = x^β(b(t x^{1−β}) − 1). For h(x), we have h(0) = 0 and

lim_{x→0} h′(x) = β(1−β) t E|ξ_{0,1}|^p > 0,

implying that there exists δ > 0 such that h(x) ≥ 0 for all x ∈ [0, δ]. Hence g_t(x) is increasing on [0, δ] with respect to x. Following the same procedure as in Case 1, we obtain (2.4).

Proof of Theorem 2.1  (i) (⇐) If (2.2) or (2.3) holds, then ∑_{j=0}^∞ a_j < +∞. Moreover, by Wald's equation, we have

Eη_j = 2^{−d(1−β)j} E|ξ_{0,1}|^p · EN_j = E|ξ_{0,1}|^p.

Therefore,

E ∑_{j=0}^∞ a_j η_j = ∑_{j=0}^∞ a_j Eη_j = E|ξ_{0,1}|^p ∑_{j=0}^∞ a_j < +∞,

which implies ∑_{j=0}^∞ a_j η_j < ∞, P-a.s.

(ii) (⇒) If ∑_{j=0}^∞ a_j η_j < +∞, P-a.s., then a_j η_j → 0 in distribution. By Lemma 2.1, we deduce that lim_{j→∞} a_j = 0, which is equivalent to

s + d(1/2 − β/p − α/2) < 0,

or

s + d(1/2 − β/p − α/2) = 0 and γ < 0.    (2.5)



Set λ := −dγp/2; note that λ > 0 by (2.5). If s + d(1/2 − β/p − α/2) = 0, then a_j = 1/j^λ, and

∑_{j≥1} a_j η_j = ∑_{j≥1} η_j / j^λ < +∞, P-a.s.

Using Kronecker's lemma, we have

(1/n^λ) ∑_{j=1}^n η_j = n^{1−λ} · (1/n) ∑_{j=1}^n η_j → 0, P-a.s., as n → ∞.

Since (1/n) ∑_{j=1}^n η_j → ν_p > 0 in probability by Lemma 2.2, this forces n^{1−λ} → 0, that is, λ > 1. Thus, we conclude that γ < −2/(dp).
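The boundary case in the proof above can also be probed numerically: when s + d(1/2 − β/p − α/2) = 0, the series is ∑_j η_j/j^λ with λ = −dγp/2, and since η_j concentrates near ν_p, it behaves like ν_p ∑_j j^{−λ}, finite precisely when λ > 1. A minimal Monte Carlo sketch follows (our own illustration; d = 1 and standard Gaussian ξ_{j,k} are assumptions, and `weighted_sum` is our name).

```python
import numpy as np

def weighted_sum(lam, beta, p, J, d=1, seed=0):
    """Partial sum sum_{j=1}^J j^{-lam} * eta_j, where
    eta_j = 2^{-d(1-beta)j} * sum_{k=1}^{N_j} |xi_{j,k}|^p,
    N_j ~ Binomial(2^{dj}, 2^{-d*beta*j}), xi_{j,k} standard Gaussian.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for j in range(1, J + 1):
        N_j = rng.binomial(2 ** (d * j), 2.0 ** (-d * beta * j))
        xi = rng.standard_normal(N_j)
        eta = 2.0 ** (-d * (1 - beta) * j) * np.sum(np.abs(xi) ** p)
        total += j ** (-lam) * eta
    return total

# Fixing the seed makes all calls reuse the same eta_j draws, so the only
# difference between the two rows is the weight j^{-lam}.
# lam > 1: partial sums stabilize; lam <= 1: they keep growing.
convergent = [weighted_sum(2.0, 0.5, 1.0, J) for J in (10, 15, 20)]
divergent = [weighted_sum(0.5, 0.5, 1.0, J) for J in (10, 15, 20)]
```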

References
[1] Jentzen, A., Pathwise numerical approximations of SPDEs with additive noise under non-global Lipschitz coefficients, Potential Anal., 2009, 31(4): 375-404.
[2] Jentzen, A. and Kloeden, P., Taylor expansions of solutions of SPDEs with additive noise, Ann. Probab., 2010, 38(2): 532-569.
[3] Bochkina, N., Besov regularity of functions with sparse random wavelet coefficients, Preprint.
[4] Cioica, P.A., Dahlke, S., Döhring, N., Kinzel, S., Lindner, F., Raasch, T., Ritter, K. and Schilling, R.L., Adaptive wavelet methods for elliptic stochastic partial differential equations, Lecture Notes in Computational Science and Engineering, Vol. 102, 2014.
[5] Cohen, A., Dahmen, W. and DeVore, R., Adaptive wavelet methods for elliptic operator equations: convergence rates, Math. Comput., 2001, 70(233): 27-75.
[6] Cohen, A., Numerical Analysis of Wavelet Methods, Amsterdam: Elsevier Science B.V., 2003.
[7] Dahlke, S. and DeVore, R.A., Besov regularity for elliptic boundary value problems, Comm. Partial Diff. Eqns., 1997, 22(1/2): 1-16.
[8] Gyöngy, I., Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise I, II, Potential Anal., 1998, 9(1): 1-25; 1999, 11(1): 1-37.
[9] Kloeden, P.E. and Platen, E., Numerical Solution of Stochastic Differential Equations, New York: Springer-Verlag, 1992.
[10] Lototsky, S.V. and Rozovskii, B.L., Wiener chaos solutions of linear stochastic evolution equations, Ann. Probab., 2006, 34(2): 638-662.
[11] Luo, W., Wiener chaos expansion and numerical solutions of stochastic partial differential equations, Ph.D. Thesis, Pasadena, CA: California Institute of Technology, 2006.
[12] Walsh, J.B., Finite element methods for parabolic stochastic PDE's, Potential Anal., 2005, 23(1): 1-43.
