A priori and a posteriori analyses of the DPG method

Jay Gopalakrishnan, Portland State University

ICERM Workshop on Robust Discretization and Fast Solvers for Computable Multi-Physics Models, Brown University, May 2013

Thanks: AFOSR, NSF

Contents

Principal collaborator in DPG research: Leszek Demkowicz.

- Three avenues to DPG methods
- A priori error analysis
- A posteriori error analysis
- Fast solvers
- Examples

Three avenues to DPG methods

DPG methods can be reached from three directions:
- Petrov-Galerkin with optimal test space
- Least-squares Galerkin method
- Mixed Galerkin method

“Petrov-Galerkin” schemes (PG)

PG schemes are distinguished by different trial and test (Hilbert) spaces.

The problem: P.D.E. + boundary conditions.

Variational form: Find x in a trial space X satisfying

    b(x, y) = ℓ(y),   for all y in a test space Y.

Discretization: Find x_h in a discrete trial space X_h ⊂ X satisfying

    b(x_h, y_h) = ℓ(y_h),   for all y_h in a discrete test space Y_h ⊂ Y.

For PG schemes, X_h ≠ Y_h in general.

Elements of theory

Variational formulation: the exact inf-sup condition

    C ‖x‖_X ≤ sup_{y∈Y} |b(x, y)| / ‖y‖_Y,

together with a uniqueness condition, implies well-posedness.

Babuška–Brezzi theory: the discrete inf-sup condition

    C ‖x_h‖_X ≤ sup_{y_h∈Y_h} |b(x_h, y_h)| / ‖y_h‖_Y

implies ‖x − x_h‖_X ≤ C inf_{w_h∈X_h} ‖x − w_h‖_X.

Difficulty: the exact inf-sup condition does NOT imply the discrete inf-sup condition.

Is there a way to find a stable test space for any given trial space (thus giving a stable method automatically)?

The ideal method

Pick any X_h ⊆ X. The ideal DPG method finds x_h ∈ X_h such that

    b(x_h, y) = ℓ(y),   ∀y ∈ Y_h^opt := T(X_h),

where T : X → Y is defined by

    (T w, y)_Y = b(w, y),   ∀w ∈ X, y ∈ Y.     [Demkowicz+G 2011]

Rationale:
Q: For any given x, which function y maximizes |b(x, y)| / ‖y‖_Y ?
A: y = T x is the maximizer — the optimal test function.

DPG idea: if the discrete test space contains the optimal test functions, the exact inf-sup condition implies the discrete inf-sup condition. Assume:

[A.1] {w ∈ X : b(w, y) = 0 ∀y ∈ Y} = {0}.
[A.2] ∃ C₁, C₂ > 0 such that C₁ ‖y‖_Y ≤ sup_{w∈X} |b(w, y)| / ‖w‖_X ≤ C₂ ‖y‖_Y.

Theorem (DPG quasioptimality)
[A.1–A.2] ⟹ ‖x − x_h‖_X ≤ (C₂/C₁) inf_{w_h∈X_h} ‖x − w_h‖_X.

But . . . can we really compute T x? For a few problems, T x can be calculated in closed form. When T x cannot be hand calculated, we overcome two difficulties:
- Redesign the formulation so that T is local (by hybridization).
- Approximate T by a computable (finite-rank) T^r.

Abbreviation: the ideal DPG method = the iDPG method.
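In finite dimensions the trial-to-test operator is a concrete matrix computation. The following sketch is a hypothetical illustration only: random matrices stand in for assembled forms, and the dimensions and Gram matrix G are assumptions, not choices made in the talk. It verifies the defining property of T and that y = Tx maximizes |b(x, y)| / ‖y‖_Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-in: X = R^n, Y = R^m, with inner product
# (y, v)_Y = y^T G v for an SPD Gram matrix G, and b(w, y) = y^T B w.
n, m = 3, 5
B = rng.standard_normal((m, n))          # b(w, y) = y @ B @ w
M = rng.standard_normal((m, m))
G = M @ M.T + m * np.eye(m)              # SPD Gram matrix of (., .)_Y

# Trial-to-test operator: (Tw, y)_Y = b(w, y) for all y  <=>  G T = B.
T = np.linalg.solve(G, B)

# Check the defining property for random w, y.
w = rng.standard_normal(n)
y = rng.standard_normal(m)
assert np.isclose(y @ G @ (T @ w), y @ B @ w)

# Check that y = Tx maximizes |b(x, y)| / ||y||_Y over random candidates.
x = rng.standard_normal(n)
normY = lambda v: np.sqrt(v @ G @ v)
opt_ratio = abs((T @ x) @ B @ x) / normY(T @ x)
for _ in range(1000):
    y = rng.standard_normal(m)
    assert abs(y @ B @ x) / normY(y) <= opt_ratio + 1e-12
print("optimal test ratio:", opt_ratio)
```

The key point the sketch illustrates: applying T is a solve with the Gram matrix of the Y-inner product, which is why localizing that inner product matters later in the talk.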

Trivial Example 1: Standard FEM is an iDPG method

Problem: Given F ∈ H⁻¹(Ω), find u ∈ H₀¹(Ω) solving

    ∫_Ω ∇u · ∇v = F(v),   ∀v ∈ H₀¹(Ω).

Set X = Y = H₀¹(Ω) and (v, y)_Y = ∫_Ω ∇v · ∇y.

Then (·, ·)_Y = b(·, ·) ⟹ T = identity, so Y_h^opt = X_h and the ideal DPG method coincides with the standard finite element method.

Next

- Three avenues to DPG methods
  - Petrov-Galerkin with optimal test functions ✓
  - Least-squares Galerkin method
- A priori error analysis
  - Ideal DPG method ✓
- A posteriori error analysis
- Fast solvers
- Examples
  - Example 1 (Standard FEM) ✓

Trivial Example 2: L²-based least squares is an ideal DPG method

Problem: Given f ∈ L²(Ω) and a continuous linear bijection A : X → L²(Ω), find u ∈ X satisfying A u = f.

Set Y = L²(Ω), b(x, y) = (Ax, y)_Y, ℓ(y) = (f, y)_Y.

Then (T w, y)_Y = (Aw, y)_Y ⟹ T = A ⟹ Y_h^opt = A X_h, and the iDPG equations become the normal equations:

    (A x_h, A w_h)_Y = (f, A w_h)_Y,   ∀w_h ∈ X_h.

The least-squares avenue

We now take the second of the three routes to DPG methods: the least-squares Galerkin viewpoint.

Definitions

Riesz map R_Y : Y → Y*:

    (R_Y y)(v) = (y, v)_Y,   ∀y, v ∈ Y.

Operator generated by the form, B : X → Y*:

    (Bx)(y) = b(x, y),   ∀x ∈ X, y ∈ Y.

The trial-to-test operator T : X → Y was defined by (T w, y)_Y = b(w, y) for all w ∈ X, y ∈ Y, hence

    T = R_Y⁻¹ ∘ B.

Energy norm on X:

    |||z|||_X := ‖T z‖_Y.

Residual minimization

Theorem (DPG methods are least-squares methods)
The following statements are equivalent:
 i) x_h ∈ X_h is the unique solution of the ideal DPG method.
 ii) x_h is the best approximation to x from X_h in the energy norm:
     |||x − x_h|||_X = inf_{z_h∈X_h} |||x − z_h|||_X.
 iii) x_h minimizes the residual in the following sense:
     x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y*}.

Proof of (i) ⟺ (ii):
    b(x − x_h, y_h) = 0 ∀y_h ∈ Y_h^opt
    ⟺ b(x − x_h, T z_h) = 0 ∀z_h ∈ X_h
    ⟺ (T(x − x_h), T z_h)_Y = 0 ∀z_h ∈ X_h.

Proof of (ii) ⟺ (iii):
    |||x − z_h|||_X = ‖T(x − z_h)‖_Y = ‖R_Y⁻¹ B(x − z_h)‖_Y = ‖B(x − z_h)‖_{Y*} = ‖ℓ − B z_h‖_{Y*}.
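The best-approximation property (ii) can be checked numerically in a finite-dimensional stand-in. In the sketch below, random matrices replace assembled forms, and the choice X_h = span of the first two coordinates of X = ℝ⁴ is an arbitrary assumption for illustration, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite-dimensional check of the best-approximation property.
n, nh, m = 4, 2, 6
B = rng.standard_normal((m, n))          # b(w, y) = y @ B @ w
M = rng.standard_normal((m, m))
G = M @ M.T + m * np.eye(m)              # SPD Gram matrix of (., .)_Y
x = rng.standard_normal(n)               # "exact" solution; ell = B x
l = B @ x

# Energy norm |||z|||_X = ||Tz||_Y with T = G^{-1} B,
# i.e. sqrt(z^T B^T G^{-1} B z).
E = B.T @ np.linalg.solve(G, B)
energy = lambda z: np.sqrt(z @ E @ z)

# Ideal DPG solution on X_h (restrict B to the first nh columns).
Bh = B[:, :nh]
xh = np.zeros(n)
xh[:nh] = np.linalg.solve(Bh.T @ np.linalg.solve(G, Bh),
                          Bh.T @ np.linalg.solve(G, l))

# |||x - xh||| is minimal over X_h: no random competitor beats it.
for _ in range(1000):
    z = np.zeros(n)
    z[:nh] = rng.standard_normal(nh)
    assert energy(x - xh) <= energy(x - z) + 1e-12
print("energy error:", energy(x - xh))
```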

Example 3: An ODE — Pavlovian integration by parts, or not?

1D transport equation: u′ = f in (0, 1), with inflow b.c. u(0) = u₀.

Variational form: Find u in H¹, satisfying u(0) = u₀ and

    ∫₀¹ u′ v = ∫₀¹ f v,   ∀v in L²,

with b(u, v) the left side and ℓ(v) the right side. (Here DPG gives least squares with A u = u′.)

Ultra-weak form: Find u ∈ L² and a number û₁ ∈ ℝ satisfying

    −∫₀¹ u v′ + û₁ v(1) = ∫₀¹ f v + u₀ v(0),   ∀v ∈ H¹,

with b((u, û₁), v) the left side and ℓ(v) the right side. (Here DPG gives something new.)
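For the first (non-ultra-weak) form, the talk notes that DPG reduces, as in Example 2, to minimizing ‖u′ − f‖_{L²}. The following is a minimal sketch of that least-squares solve, assuming a uniform grid of continuous piecewise-linear trial functions — a discretization chosen for illustration, not one taken from the talk.

```python
import numpy as np

# Least-squares solve of u' = f on (0,1) with u(0) = u0: minimize
# ||u' - f||_{L2} over continuous piecewise linears on a uniform grid.
n = 8                                    # number of elements
h = 1.0 / n
u0 = 1.0
f = lambda x: 2 * np.ones_like(x)        # exact solution is u = u0 + 2x

# Unknowns: nodal values u_1..u_n (u_0 = u0 is fixed). On element i,
# u' = (u_i - u_{i-1})/h is constant, so the residual is D @ u_free - rhs
# with D the scaled difference matrix.
D = (np.eye(n) - np.eye(n, k=-1)) / h    # free nodal values -> u' per element
mid = (np.arange(n) + 0.5) * h           # element midpoints for sampling f
rhs = f(mid).copy()
rhs[0] += u0 / h                         # move the fixed node to the right side

# Weighted least squares (weight sqrt(h) gives the L2 norm of the residual).
u_free, *_ = np.linalg.lstsq(np.sqrt(h) * D, np.sqrt(h) * rhs, rcond=None)
u = np.concatenate([[u0], u_free])
print(u)                                 # nodal values of u0 + 2x
```

Because f here is piecewise constant, the residual can be driven to zero and the nodal values are exact; for general f the solve returns the L²-best derivative fit.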

One-dimensional results using spectral trial space

[Figure: convergence results for the 1D transport example. FEniCS code for this experiment accompanies the talk.]


The practical method

Pick any X_h ⊆ X. The practical DPG method finds x_h^r ∈ X_h, using a (finite-dimensional) Y^r ⊆ Y, such that

    b(x_h^r, y) = ℓ(y),   ∀y ∈ Y_h^r := T^r(X_h),

where T^r : X → Y^r is defined by

    (T^r w, y)_Y = b(w, y),   ∀w ∈ X, y ∈ Y^r.

Equivalently, in residual-minimization form,

    x_h^r = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{(Y^r)*},

in contrast with the ideal method's x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y*}.
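In matrix terms the practical method is a symmetric positive definite solve. The sketch below uses random stand-in matrices for the assembled Gram matrix G of a Y^r basis, the rectangular matrix B[i,j] = b(φ_j, ψ_i), and the load vector; these stand-ins are assumptions of the sketch, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Practical DPG in coordinates: dim X_h = n trial dofs, dim Y^r = m >= n
# enriched test dofs.
n, m = 4, 7
B = rng.standard_normal((m, n))          # B[i, j] = b(phi_j, psi_i)
M = rng.standard_normal((m, m))
G = M @ M.T + m * np.eye(m)              # SPD Gram matrix of Y^r
l = rng.standard_normal(m)               # l[i] = ell(psi_i)

# T^r in coordinates solves G T = B; testing with T^r z for all z turns
# the DPG system into the SPD normal equations B^T G^{-1} B x = B^T G^{-1} l.
Ginv_B = np.linalg.solve(G, B)
Ginv_l = np.linalg.solve(G, l)
x = np.linalg.solve(B.T @ Ginv_B, B.T @ Ginv_l)

# Sanity check: b(x, T^r z) - ell(T^r z) vanishes for every basis z.
residual = B.T @ np.linalg.solve(G, l - B @ x)
print(np.linalg.norm(residual))          # ~ 0
```

In an actual DPG code, G and B are block diagonal over elements (the 'D' in DPG), so forming G⁻¹B reduces to small element-local solves.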

Error analysis of the practical DPG method

[A.1] {w ∈ X : b(w, y) = 0 ∀y ∈ Y} = {0}.
[A.2] ∃ C₁, C₂ > 0 such that

    C₁ ‖y‖_Y ≤ sup_{w∈X} |b(w, y)| / ‖w‖_X ≤ C₂ ‖y‖_Y.

[A.3] ∃ Π : Y → Y^r and C_Π > 0 such that for all w_h ∈ X_h and y ∈ Y,

    b(w_h, y − Πy) = 0,   ‖Πy‖_Y ≤ C_Π ‖y‖_Y.

Theorem (A priori estimates for the practical DPG method [G+Qiu 2013])
[A.1–A.3] ⟹ ‖x − x_h^r‖_X ≤ (C₂ C_Π / C₁) inf_{w_h∈X_h} ‖x − w_h‖_X.

The ‘D’ in ‘DPG’

For the residual minimization in x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y*} to be feasible, the dual norm ‖·‖_{Y*} must be easily computable!

“Negative-norm least-squares” uses multigrid or other operators spectrally equivalent to the dual norm. [Bramble+Pasciak+Lazarov 1997]

DPG methods instead reformulate the problem to localize the dual norm computation (into parallel element-by-element computations). DPG methods use a discontinuous test space

    Y = ∏_{K∈mesh} Y(K),

whose Riesz map is invertible locally, element by element.

Example 4: The Dirichlet problem — a new weak form for the old Laplacian

Find u:  −Δu = f on Ω,  u = 0 on ∂Ω.

Let Ω_h be a mesh of Ω and K ∈ Ω_h a mesh element. Then:

    ∫_K ∇u · ∇v − ∫_{∂K} (n · ∇u) v = ∫_K f v.

Introducing an independent flux unknown q̂_n in place of n · ∇u and summing over elements:

    Σ_{K∈Ω_h} [ ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ] = ∫_Ω f v.

This allows the test function v ∈ Y to be in a “broken” Sobolev space

    Y = H¹(Ω_h) := ∏_{K∈Ω_h} H¹(K).

Functional setting for the Laplacian

We want X and Y to make B : X → Y* a continuous bijection, i.e., the form b(x, y) = (Bx)(y) on X × Y must satisfy a uniqueness and an inf-sup condition. Set

    b((u, q̂_n), v) = Σ_{K∈Ω_h} [ ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ].

We seek u in H₀¹(Ω) and q̂_n in H^{−1/2}(∂Ω_h).

Definition (of H^{−1/2}(∂Ω_h), the space of numerical fluxes)
Define the element-by-element trace operator tr_n by

    tr_n : H(div, Ω) → ∏_{K∈Ω_h} H^{−1/2}(∂K),   tr_n r|_{∂K} = r · n|_{∂K},

and set H^{−1/2}(∂Ω_h) = ran(tr_n).

Theorem
With X = H₀¹(Ω) × H^{−1/2}(∂Ω_h) and Y = H¹(Ω_h), the operator B is a continuous bijection and has a continuous inverse. [Demkowicz+G 2013]

Discrete spaces for the Laplacian

Trial subspace X_h ⊆ X ≡ H₀¹(Ω) × H^{−1/2}(∂Ω_h): approximate

    u    by Lagrange finite elements of degree ≤ p + 1 on every K ∈ Ω_h,
    q̂_n  by polynomials of degree ≤ p on every mesh edge.

Test subspace Y^r ⊆ H¹(Ω_h): set, for some r ≥ 0,

    Y^r = {v : v|_K ∈ P_r(K), ∀K ∈ Ω_h}.

Computation of T^r is local. Applying (T^r w, y)_Y = b(w, y):

    (T^r(u, q̂_n), y)_{H¹(Ω_h)} = b((u, q̂_n), y),   ∀y ∈ Y^r
⟹  (T^r(u, q̂_n), y)_{H¹(K)} = ∫_K ∇u · ∇y − ∫_{∂K} q̂_n y,   ∀K ∈ Ω_h.

To prove optimal convergence, we must choose r so that [A.3] holds.

Theorem (Verification of [A.3])
Let Ω_h be a simplicial shape-regular finite element mesh in N space dimensions. For any p ≥ 0, whenever r ≥ p + N, there exists a continuous Π : Y → Y^r such that for all (w_h, ŝ_{n,h}) ∈ X_h,

    ∫_K ∇w_h · ∇(v − Πv) − ∫_{∂K} ŝ_{n,h} (v − Πv) = 0,   ∀K ∈ Ω_h.


Preconditioning

Abstractly,

    b(x_h^r, y) = ℓ(y),  ∀y ∈ Y_h^r = T^r(X_h)
⟹  b(x_h^r, T^r z_h) = ℓ(T^r z_h),  ∀z_h ∈ X_h
⟹  (T^r x_h^r, T^r z_h)_Y = ℓ(T^r z_h),  ∀z_h ∈ X_h.

Lemma
[A.1–A.3] ⟹ (C₁/C_Π) ‖x‖_X ≤ ‖T^r x‖_Y ≤ C₂ ‖x‖_X for all x ∈ X_h.

This implies that any preconditioner spectrally equivalent to the (·,·)_X-inner product is also a preconditioner for the practical DPG method.

Example: A BDDC preconditioner

For the Laplacian form

    b((u, q̂_n), v) = Σ_{K∈Ω_h} [ ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ],
    X = H₀¹(Ω) × H^{−1/2}(∂Ω_h),

an implementation in NGSolve (with Lukas Kogler & Joachim Schöberl):
1. Statically condense the stiffness matrix to u|_{∂Ω_h} and q̂_n.
2. Apply a BDDC preconditioner as follows:
   1. Do a wire-basket coarse solve.
   2. Add inverses of small blocks of u|_{∂Ω_h}-unknowns on each interface.
   3. Add inverses of small blocks of q̂_n-unknowns on each interface.

Numbers of preconditioned conjugate gradient iterations, on a small fixed 8 × 8 mesh:

    p+1    diagonal    BDDC
     4       142        60
     5       159        65
     6       180        77
     7       202        78
     8       209        88
     9       243        90


Built-in error estimator in DPG methods

Results for Carter’s flat plate problem (courtesy of Jesse Chan): supersonic flow impinging over a flat plate (Ma = 3, Re = 1000), computed with the Petrov-Galerkin implementation in the Camellia package with h-adaptivity, p = 2, starting with a mesh of just two elements. Adaptivity shows no preasymptotics.

[Figures: adaptive meshes and solutions at iterations 0, 5, and 10.]

The mixed method approach

We now take the third of the three routes to DPG methods: the mixed Galerkin viewpoint.

Error representation function

Residual:  ρ = ℓ − B x_h.

Error representation function:  ε^r = R_{Y^r}⁻¹(ℓ − B x_h). It can be practically computed by

    (ε^r, y)_Y = ℓ(y) − b(x_h, y),   ∀y ∈ Y^r.     [Demkowicz+G+Niemi 2012]

Error estimator:  η = ‖ε^r‖_Y.

The three avenues view ε^r differently:
- Petrov-Galerkin solve: ε^r is obtained by local postprocessing.
- Least-squares: ε^r is the Riesz inverse of the residual.
- Mixed method: ε^r is one of the variables.

DPG as a mixed method

Theorem (Reinterpretation of DPG as a mixed method)
The following statements are equivalent:
 i) x_h ∈ X_h solves the practical DPG method.
 ii) x_h ∈ X_h and ε^r ∈ Y^r solve the mixed formulation

    (ε^r, y)_Y + b(x_h, y) = ℓ(y),   ∀y ∈ Y^r,     (1a)
    b(z_h, ε^r) = 0,                 ∀z_h ∈ X_h.   (1b)

Proof. (i) ⟹ (ii): Eq. (1a) is just the definition of ε^r. For (1b),

    b(z_h, ε^r) = (T^r z_h, ε^r)_Y = (T^r z_h, R_{Y^r}⁻¹(ℓ − B x_h))_Y = (T^r z_h, T^r(x − x_h))_Y = b(x − x_h, T^r z_h) = 0.

(ii) ⟹ (i): Similar.

[Dahmen+Huang+Schwab+Welper 2012] studied similar mixed formulations and found techniques other than localization by discontinuous spaces to make the method practical.
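In coordinates, equations (1a)–(1b) form a symmetric saddle-point system. The sketch below is a generic finite-dimensional illustration (random stand-in matrices, not an assembled PDE problem): it solves the saddle system, checks that eliminating ε^r recovers the DPG normal equations, and evaluates the built-in estimator η = ‖ε^r‖_Y.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mixed reinterpretation in coordinates:
#   [ G   B ] [eps]   [l]
#   [ B^T 0 ] [ x ] = [0]
# with G the Y^r Gram matrix, B[i,j] = b(phi_j, psi_i), l[i] = ell(psi_i).
n, m = 4, 7
B = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
G = M @ M.T + m * np.eye(m)
l = rng.standard_normal(m)

K = np.block([[G, B], [B.T, np.zeros((n, n))]])
rhs = np.concatenate([l, np.zeros(n)])
sol = np.linalg.solve(K, rhs)
eps, x_mixed = sol[:m], sol[m:]

# Eliminating eps = G^{-1}(l - B x) recovers the DPG normal equations.
x_dpg = np.linalg.solve(B.T @ np.linalg.solve(G, B),
                        B.T @ np.linalg.solve(G, l))
assert np.allclose(x_mixed, x_dpg)
assert np.allclose(B.T @ eps, 0)         # (1b): b(z, eps) = 0 for all z

# The built-in error estimator is the Y-norm of the eps variable.
eta = np.sqrt(eps @ G @ eps)
print("eta =", eta)
```

This is why the mixed form is convenient in practice: a standard Galerkin code can assemble and solve the saddle system directly, and the estimator η comes out as a byproduct.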

Recall the previous assumptions

Optimal a priori estimates followed from assumptions [A.1–A.3] stated earlier. We now show that a posteriori error estimators also follow from the same assumptions [A.1–A.3].

A posteriori error estimates

Theorem (Reliability & efficiency of the DPG error estimator [Carstensen+Demkowicz+G 2014])
Suppose [A.1–A.3] hold. Let F ∈ Y*, x = B⁻¹F, let x_h ∈ X_h be the DPG solution, let

    η = ‖F − B x_h‖_{(Y^r)*} = ‖ε^r‖_Y

be the error estimator, and define osc(F) := ‖F ∘ (1 − Π)‖_{Y*}. Then

    C₁² ‖x − x_h‖²_X ≤ η² + (C_Π η + osc(F))²,   ← reliability
    η² ≤ C₂² ‖x − x_h‖²_X.                        ← efficiency

“Efficiency” is trivial in least-squares methods. The proof of “reliability” uses Π critically.

Error estimator in the Laplace example

Results for the Dirichlet problem with f(x, y) = e^{−100(x² + y²)}.

[Figures: adaptively refined meshes at iterations 0, 6, and 11.]

Two observations aside:
- There is no need to code an error estimator for driving adaptivity in DPG methods.
- The mixed formulation is standard Galerkin, so it is easily implementable in codes without support for Petrov-Galerkin forms.

FEniCS code for this experiment accompanies the talk.

Example 5: Stresses in Stokes flow

Second-order system:

    ½ Δu⃗ − ∇p = f⃗   in Ω,
    ∇ · u⃗ = 0       in Ω,

with the no-slip b.c. u⃗ = 0⃗ on ∂Ω, and (p, 1)_Ω = 0 for uniqueness.

Convert to a first-order system:

    σ + p δ − ε(u⃗) = 0   (definition of the true fluid stress σ),
    ∇ · σ = f⃗            (since ∇ · σ = ½ Δu⃗ − ∇p).

Apply the deviatoric Dτ = τ − (tr τ / N) δ to eliminate the pressure:

    Dσ − ε(u⃗) = 0,
    ∇ · σ = f⃗,

and add the constraint (tr σ, 1)_Ω = 0.

DPG form with x = (σ, u⃗, û, σ̂_n, α) and y = (τ, v⃗, ω):

    b(x, y) = (Dσ, τ)_Ω + (u⃗, ∇ · τ)_{Ω_h} − ⟨û, τ n⟩_{∂Ω_h} + (α, tr τ)_Ω
            + (σ, ε(v⃗))_{Ω_h} − ⟨σ̂_n, v⃗⟩_{∂Ω_h} + (tr σ, ω)_Ω.

Spaces for the Stokes example

Trial and test spaces:

    X = L²(Ω; S) × L²(Ω)^N × H₀^{1/2}(∂Ω_h)^N × H^{−1/2}(∂Ω_h)^N × ℝ,
    Y = H(div, Ω_h; S) × H¹(Ω_h)^N × ℝ.

Discrete spaces:

    X_h = {(σ, u⃗, û, σ̂_n, α) ∈ X :
           σ|_K ∈ P_p(K; S), u⃗|_K ∈ P_p(K)^N, ∀elements K,
           û|_F ∈ P_{p+1}(F)^N, σ̂_n|_F ∈ P_p(F)^N, ∀interfaces F,
           α ∈ ℝ},
    Y^r = {(τ, v⃗, ω) ∈ Y :
           τ|_K ∈ P_{p+2}(K; S), v⃗|_K ∈ P_{p+N}(K)^N, ∀elements K,
           ω ∈ ℝ}.

A priori and a posteriori estimates for the Stokes example

Theorem
Suppose Ω_h is a shape-regular simplicial mesh of Ω and p ≥ 0. Then [A.1–A.3] hold for the Stokes example. Consequently, there exist mesh-independent constants c₁, …, c₄ > 0 such that

    ‖x − x_h‖_X ≤ c₁ min_{ξ_h∈X_h} ‖x − ξ_h‖_X,
    c₄ ‖x − x_h‖²_X − c₂ osc(F)² ≤ η² ≤ c₃ ‖x − x_h‖²_X.

Verification of [A.3] uses degrees of freedom of symmetric matrix polynomials from [G+Guzmán 2011]. The proof proceeds by taking the incompressible limit of a similar elasticity discretization.

Stokes solution on an L-shaped domain

Osborn’s singular solution:

    u = curl(a₊ s₊ + a₋ s₋ + c₊ − c₋),

where

    s± = r^{1+z} sin((z ± 1)θ),   c± = r^{1+z} cos((z ± 1)θ),
    a± = −z cot(3zπ/2)/(z ± 1),   z² = sin²(3zπ/2)

[z = the root with smallest real part].

Results from an h-adaptive algorithm with η as estimator and p = 2 follow.

[Figures: computed stress components σ_xx, σ_xy, σ_yy.]

[Figure: error ‖x − x_h‖_X and estimator η vs. number of degrees of freedom for h-adaptivity on the L-shaped domain (log-log); both decay like N^{−1.5}.]

[Figure: effectivity index ρ = η / ‖x − x_h‖_X vs. number of degrees of freedom during the adaptive process.]

[Figure: effectivity ρ̃ = η / ‖x − x̃_h‖_X vs. number of degrees of freedom, after x_h is randomly perturbed by 5% to give x̃_h.]

Conclusion

All announced topics were covered:

- Three avenues to DPG methods
  - Petrov-Galerkin with optimal test functions ✓
  - Least-squares Galerkin method ✓
  - Mixed Galerkin method ✓
- A priori error analysis
  - Ideal DPG method ✓
  - Practical DPG method ✓
- A posteriori error analysis ✓
- Fast solvers ✓
- Examples
  - Example 1 (Standard FEM) ✓
  - Example 2 (L²-based least-squares) ✓
  - Example 3 (An ODE) ✓
  - Example 4 (Diffusion) ✓
  - Example 5 (Stokes) ✓
