COMO 201: Numerical Methods and Structural Modelling, 2003

Computational Modelling 201 (Numerical Methods)
* DRAFT * Lecture Notes
Dr John Enlow. Last modified July 22, 2003.

[Cover figure: fractal generated by Newton's method (see Section 3.4), plotted on the square −2 ≤ x, y ≤ 2.]

Contents

1 Preliminary Material ............................................. 4

2 Introduction to Numerical Analysis ............................... 5
  2.1 What is Numerical Analysis? .................................. 5
    2.1.1 Numerical Algorithms ..................................... 5
    2.1.2 The Importance of Numerical Methods ...................... 7
  2.2 Unavoidable Errors in Computing .............................. 7
    2.2.1 Floating Point Numbers ................................... 8
    2.2.2 Roundoff (Machine) Error ................................. 8
    2.2.3 Truncation Error ......................................... 9
    2.2.4 Overflow and Underflow ................................... 9
    2.2.5 Error Propagation ........................................ 9

3 Iteration and Root Finding ....................................... 11
  3.1 The Standard Iteration Method ................................ 11
  3.2 Aitken's Method for Acceleration (∆² Method) ................. 13
    3.2.1 The Mean Value Theorem (MVT) ............................. 13
    3.2.2 Aitken's Approximate Solution ............................ 13
    3.2.3 The ∆² Notation .......................................... 14
    3.2.4 Aitken's Algorithm ....................................... 15
  3.3 Newton's Method .............................................. 15
    3.3.1 Graphical Derivation of Newton's Method .................. 16
    3.3.2 Pathological Failure of Newton's Method .................. 17
    3.3.3 Convergence Rate ......................................... 18
  3.4 Newton's Method and Fractals ................................. 18
  3.5 Systems of Equations ......................................... 20
    3.5.1 Two Dimensions ........................................... 20
    3.5.2 Higher Dimensions ........................................ 22
    3.5.3 Newton-Raphson Method for Nonlinear Systems of Equations . 22
    3.5.4 Example: Newton's Method for Two Equations ............... 23

4 Interpolation and Extrapolation .................................. 25
  4.1 Lagrange (Polynomial) Interpolation .......................... 25
    4.1.1 Examples of Lagrange Interpolation ....................... 27
  4.2 Least Squares Interpolation .................................. 28
    4.2.1 Fitting a Straight Line to Data ("Linear Regression") .... 28
    4.2.2 Least Squares from an Algebraic Perspective .............. 29
    4.2.3 General Curve Fitting with Least Squares ................. 31

5 Numerical Differentiation ........................................ 32
  5.1 Finite Differencing .......................................... 32
    5.1.1 Second Derivative ........................................ 33
  5.2 Ordinary Differential Equations .............................. 33
    5.2.1 High Order ODEs .......................................... 33
    5.2.2 Example: An Initial Value Problem (IVP) .................. 33

6 Numerical Integration ............................................ 35
  6.1 The Trapezoid Rule ........................................... 35
  6.2 Simpson's Rule ............................................... 35
  6.3 Gauss-Legendre Integration ................................... 36
    6.3.1 Computational Determination of Weights and Nodes ......... 37

7 Fundamentals of Statics and Equilibrium .......................... 40
  7.1 Forces and Particles ......................................... 40
  7.2 Newton's Laws ................................................ 40
  7.3 Static Equilibrium ........................................... 40
  7.4 Solution of Mechanics Problems ............................... 41
    7.4.1 SI Units in Mechanics .................................... 41
  7.5 More on Forces ............................................... 41
    7.5.1 Coordinates .............................................. 42
    7.5.2 Example I ................................................ 43
    7.5.3 Example II ............................................... 43
  7.6 Equilibrium of a Particle .................................... 44
    7.6.1 Example .................................................. 44
  7.7 Free Body Diagrams ........................................... 45
    7.7.1 Example .................................................. 45
  7.8 Rigid Bodies: Force Equivalents .............................. 46
    7.8.1 Moment ................................................... 46
    7.8.2 Varigon's Theorem ........................................ 47
    7.8.3 Equivalent Forces Theorem ................................ 48
    7.8.4 Beam Corollary ........................................... 49

Chapter 1

Preliminary Material

Lecturer
  First half:  Dr John Enlow, Room 518, Science III ([email protected]).
  Second half: Dr R.L. Enlow, Room 220, Science III.

Course structure:
• Two lectures (Tuesday and Thursday at 1pm, Seminar Room),
• One one-hour tutorial (not at the computers, Thursday 2pm, Seminar Room),
• One two-hour laboratory (using computers, Friday 10am-12pm, Lab A).

The tutorial exercise and the laboratory exercise are both due at 1pm on Friday.

Assessment: The internal assessment comprises the mid-semester test (40%) and the weekly exercises (60%). The final grade is either your final exam score, or one third of your internal assessment plus two thirds of your final exam score, whichever is greater.

Computer Packages Used (First Half of Course): All problems will be solved in, at your preference, MATLAB or C. It is assumed you have good knowledge of one of these.

Books: There is no prescribed text for this half of the course. Useful references include:
• Numerical Recipes in C, Press, Teukolsky, Vetterling and Flannery.
• Numerical Methods with MATLAB, Recktenwald.
• Fundamentals of Computer Numerical Analysis, Friedman and Kandel.

Chapter 2

Introduction to Numerical Analysis

2.1 What is Numerical Analysis?

Numerical analysis is a mathematical discipline. It is the study of numerical methods, which are mathematical techniques for generating approximate solutions to problems (for example, by using an iterative process or a series approximation). Numerical methods are particularly suited to problems that are very difficult or impossible to solve analytically.

2.1.1 Numerical Algorithms

Numerical methods are implemented via algorithms. An algorithm is a set of instructions, each of which is composed of a sequence of logical operations. These instructions use the techniques described by the numerical method to solve a problem. A desired accuracy for the answer is generally specified, so that the algorithm terminates with an approximate solution after a finite number of steps.

An example of a numerical method is a scheme based on Newton's method to calculate √a for any given a > 0. We assume that we are given an initial estimate x_0 ≈ √a. Each step of the algorithm consists of calculating x_{n+1}, a (hopefully) better approximation to √a, from the previous approximation x_n. This is done using the rule

$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right), \qquad n \in \mathbb{N}.$$

It can be shown that x_n → √a as n → ∞. To enable an algorithm (which implements the above method) to terminate after a finite number of steps, we might test x_n after each step, stopping when

$$|x_n - x_{n-1}| < 10^{-6}.$$
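A minimal MATLAB sketch of this scheme (our illustration, not part of the original notes; the function name and the starting value x_0 = a are our own choices):

function x = newton_sqrt(a)
% Newton's method for sqrt(a), a > 0: iterate x_{n+1} = (x_n + a/x_n)/2,
% stopping when successive iterates differ by less than 1e-6.
x = a;            % initial estimate x0 (assumed)
xold = Inf;
while abs(x - xold) >= 1e-6
    xold = x;
    x = 0.5*(xold + a/xold);
end

For example, newton_sqrt(2) returns 1.41421356... after a handful of iterations.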


There is more to numerical analysis than just development of methods and algorithms. We also need to know how to predict the behaviour of both the method and its associated algorithms. Common questions include:

• Under what conditions will the method/algorithm produce a correct approximate solution to the problem? (e.g. convergence issues, stability).
• What can we say about the errors in the approximate solutions? (error analysis).
• How long does it take to find a solution to a given accuracy? (O(n log(n)) etc.).

Answering these questions may well be more difficult than developing either the method or the algorithm, and the answers are often non-trivial. For example, consider the standard iterative method for solving x = f(x) on a ≤ x ≤ b (where at each step we calculate x_{n+1} = f(x_n)). Sufficient conditions for both a unique solution to exist and for the method to be guaranteed to converge to this unique solution are:

1. f(x) continuous on [a, b],
2. x ∈ [a, b] ⇒ f(x) ∈ [a, b],
3. ∃L ∈ ℝ such that |f′(x)| ≤ L < 1 for all x ∈ [a, b].

[Figure: fixed-point iteration, showing f(x) against the line g(x) ≡ x.]

This course will focus on numerical methods and algorithms rather than on numerical analysis. MATH 361, which some of you will take in first semester 2003, focuses on numerical analysis. It provides a greater depth of understanding of the derivation of the methods and of their associated errors and limitations. Numerical analysis does not require the use of a computer because it focuses on theoretical

ideas. In contrast, we will make regular use of computers in this course, spending roughly 2/3 of the exercise time in Lab A.

2.1.2 The Importance of Numerical Methods

Computers are used for a staggering variety of tasks, such as:

• accounting and inventory control,
• airline navigation and tracking systems,
• translation of natural languages,
• monitoring of process control, and
• modelling of everything from stresses in bridge structures during earthquakes to possum breeding habits.

One of the earliest and largest uses of computers is to solve problems in science and engineering. The techniques used to obtain such solutions are part of the general area called scientific computing, which in turn is a subset of computational modelling, where a problem is modelled mathematically and then solved on computer (the solution process being scientific computing). Nearly every area of modern industry, science and engineering relies heavily on computers. In almost all applications of computers, the underlying software relies on mathematically based numerical methods and algorithms. While examples done in this course focus on solving seemingly simple mathematical problems with numerical methods, the importance of these same methods to an enormous range of real world problems cannot be overstated. An understanding of these fundamental ideas will greatly assist you in using complex and powerful computer packages designed to solve real problems. They are also a crucial component of your ability, as computational modellers, to solve modelled systems.

2.2 Unavoidable Errors in Computing

Representation of modelled systems in a computer introduces several sources of error, primarily due to the way numbers are stored in a computer. These sources of error include:

• Roundoff or machine error.
• Truncation error.
• Propagated errors.

2.2.1 Floating Point Numbers

Definition. The representation of a number f as

$$f = s \cdot m \cdot M^x$$

where
• the sign, s, is either +1 or −1,
• the mantissa, m, satisfies 1/M ≤ m ≤ 1,
• the exponent, x, is an integer,

is called the floating point representation of f in base M.

The decimal number f = 8.75, written in its floating point representation in base 10, is 0.875 · 10^1, and has s = 1, m = 0.875 and x = 1. Its binary equivalent is

$$f = (1000.11)_2 = +(0.100011)_2 \cdot 2^{(100)_2}$$

where s = 1, m = (0.100011)_2 and x = (100)_2.
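As a quick check (our addition, not in the original notes), MATLAB's two-output log2 performs exactly this decomposition in base 2, returning m and x with f = m · 2^x and 1/2 ≤ |m| < 1:

[m, x] = log2(8.75)   % m = 0.546875 = (0.100011)_2, x = 4 = (100)_2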

2.2.2 Roundoff (Machine) Error

In a computer the mantissa m is represented using a fixed number of digits. This means that there is a limit to the precision of numbers represented in a computer. The computer either truncates or rounds the mantissa after operations that produce extra mantissa digits.

The absolute error in storing a number on computer in base 10 using d digits can be calculated as follows. Let m̂ be the truncated mantissa; then the magnitude of the absolute error is

$$\epsilon = |s \cdot m \cdot 10^x - s \cdot \hat{m} \cdot 10^x| = |m - \hat{m}| \cdot 10^x < 10^{-d} \cdot 10^x.$$

Roundoff error can be particularly problematic when a number is subtracted from an almost equal number (subtractive cancellation). Careful algebraic rearrangement of the expressions being evaluated may help in some cases. For example,

$$10^{15}(\pi - 3.14159265359) = -206.7615373566167\ldots$$

However, when evaluated on my calculator, the answer given is 590,000! Clearly the calculator's answer is catastrophically incorrect.
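The same cancellation is easy to demonstrate in MATLAB (our illustration); the subtraction wipes out almost all of the significant digits of pi, so only a few digits of the result are meaningful even in IEEE double precision:

err = 1e15 * (pi - 3.14159265359)   % approx -206.76; a machine with a
                                    % shorter mantissa produces garbage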

2.2.3 Truncation Error

This type of error occurs when a computer is unable to evaluate explicitly a given quantity and instead uses an approximation. For example, an approximation to sin(x) might be calculated using the first three terms of the Taylor series expansion,

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} + E(x)$$

where

$$|E(x)| \le \frac{|x|^7}{7!}.$$

If x is in the interval [−0.2, 0.2] then the truncation error for this operation is at most

$$\frac{0.2^7}{7!} < 3 \cdot 10^{-9}.$$
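Checking this bound numerically (our sketch, not from the original notes):

x = linspace(-0.2, 0.2, 1001);
approx = x - x.^3/6 + x.^5/120;        % x - x^3/3! + x^5/5!
max_err = max(abs(sin(x) - approx))    % approx 2.5e-9, below the 3e-9 bound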

2.2.4 Overflow and Underflow

Underflow and overflow errors are due to a limited number of digits being assigned to the exponent, so that there is a largest and a smallest magnitude number that can be stored (N and M, say). Attempts to store a number of magnitude smaller than M result in underflow, generally causing the stored number to be set to zero. Overflow errors are more serious, occurring when a calculated or stored number has magnitude larger than N. These generally cause abnormal termination of the code.

Consider the modulus of a complex number a + ib. The obvious way to calculate this is using

$$|a + ib| = \sqrt{a^2 + b^2},$$

however this method is clearly prone to overflow errors for large a or b. A much safer approach is with the relation

$$|a + ib| = \begin{cases} |a|\sqrt{1 + (b/a)^2}, & |a| \ge |b| \\ |b|\sqrt{1 + (a/b)^2}, & |a| < |b|. \end{cases}$$
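A direct MATLAB translation of this relation (our sketch; the function name is ours, and note that MATLAB's built-in abs and hypot already handle this safely):

function r = safe_abs(a, b)
% Overflow-safe modulus of a + ib: the ratio under the square root
% is at most 1, so no intermediate quantity overflows.
if a == 0 && b == 0
    r = 0;                          % avoid 0/0
elseif abs(a) >= abs(b)
    r = abs(a) * sqrt(1 + (b/a)^2);
else
    r = abs(b) * sqrt(1 + (a/b)^2);
end

For example, safe_abs(1e200, 1e200) returns about 1.414e200, whereas sqrt((1e200)^2 + (1e200)^2) overflows to Inf.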

2.2.5 Error Propagation

Errors in calculation and storage of intermediate results propagate as the calculations continue. It is relatively simple to derive bounds on the propagated errors after a single operation, but combining many operations can be problematic and may lead to complicated expressions.

Consider quantities x̂ and ŷ approximating ideal values x and y. Let ε_x and ε_y be bounds on the magnitudes of the absolute errors in these approximations, so that

$$|x - \hat{x}| \le \epsilon_x, \qquad |y - \hat{y}| \le \epsilon_y.$$

Then, using the triangle inequality, the propagated error under the addition operation is

$$|(x + y) - (\hat{x} + \hat{y})| \le |x - \hat{x}| + |y - \hat{y}| \le \epsilon_x + \epsilon_y.$$
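A small worked example (ours): with x̂ = 1.414 approximating x = √2 and ŷ = 2.718 approximating y = e, each with absolute error at most ε_x = ε_y = 0.0005, the bound guarantees |(x + y) − 4.132| ≤ 0.001. Indeed √2 + e = 4.13249..., which differs from 4.132 by about 0.0005.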

Chapter 3

Iteration and Root Finding

Solving nonlinear equations is a major task of numerical analysis, and the iteration method is a prototype for many methods dealing with these equations.

3.1 The Standard Iteration Method

The standard iteration method is designed to solve equations of the form x = f(x), where f(x) is a real-valued function defined over some interval [a, b].

An algorithm for the standard iteration method:

1. Choose an initial approximate solution x_0, a tolerance ε and a maximal number of iterations N. Set n = 1.
2. Create the next iterate using the relation x_n = f(x_{n−1}).
3. If |x_n − x_{n−1}| > ε and n < N, then increment n and go to step 2.
4. If |x_n − x_{n−1}| > ε, then the maximal number of iterations has been reached without the desired convergence. Raise a warning that this is the case.
5. Output x_n, the approximate solution to x = f(x).

Clearly the algorithm finishes when successive approximations are less than ε apart. Ideally we have

$$\lim_{n \to \infty} x_n = s, \quad \text{where } s = f(s),$$


and |x_n − s| monotonically decreasing as n increases. In fact we have already stated a theorem that gives us sufficient conditions for behaviour similar to this; see the example in Section 2.1.1.

Example where |f′(x)| < 1 on the interval, with good convergence:

$$f(x) = e^{-x} + \frac{1}{10}\sin(10x), \qquad x_0 = \frac{1}{5}$$

[Figure: the iterates converging, plotted on 0 ≤ x ≤ 1.]

Example where ∃x ∈ [0, 1] such that |f′(x)| > 1, with poor convergence:

$$f(x) = e^{-x} + \frac{1}{5}\sin(20x), \qquad x_0 = \frac{1}{5}$$

[Figure: the iterates failing to settle, plotted on 0 ≤ x ≤ 1.]
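A minimal MATLAB sketch of the Section 3.1 algorithm (our code, not part of the original notes):

function x = sim_solve(f, x0, tol, N)
% Standard iteration method for x = f(x).
x = x0;
for n = 1:N
    xnew = f(x);
    if abs(xnew - x) <= tol
        x = xnew; return
    end
    x = xnew;
end
warning('Maximum iterations reached without convergence.');

Running sim_solve(@(x) exp(-x) + sin(10*x)/10, 0.2, 1e-6, 100) converges quickly, as in the first example; replacing f by the second example's function converges far more slowly, if at all.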


3.2 Aitken's Method for Acceleration (∆² Method)

Aitken's ∆² method improves on the convergence rate of the S.I.M. in cases where the S.I.M. converges to a unique solution. It does this by generating an approximate expression for f′(s), then using this information to predict what the S.I.M. is going to converge to. In deriving Aitken's method we'll need to use the mean value theorem.

3.2.1 The Mean Value Theorem (MVT)

Recall the Mean Value Theorem (M.V.T.): let f(x) be differentiable on the open interval (a, b) and continuous on the closed interval [a, b]. Then there is at least one point c in (a, b) such that

$$f'(c) = \frac{f(b) - f(a)}{b - a}.$$

[Figure: the MVT; at interior points c_1 and c_2 the tangent to f(x) is parallel to the chord joining (a, f(a)) and (b, f(b)).]

3.2.2 Aitken's Approximate Solution

Let f be a 'nice' function, with both f(x) and f′(x) continuous and differentiable in the domain of interest. Suppose that we have an approximation x_a to the solution s (where f(s) = s). Assume that the S.I.M., starting at x_a, will converge to s. Let x_b and x_c represent two iterations of the S.I.M., so that x_b = f(x_a) and x_c = f(x_b). Then, for some c_a between x_a and s,

$$x_b - s = f(x_a) - f(s) = (x_a - s)\,f'(c_a) \quad \text{by the M.V.T.} \qquad (3.1)$$


and similarly, for some c_b between x_b and s,

$$x_c - s = (x_b - s)\,f'(c_b). \qquad (3.2)$$

Both x_a and x_b are close to s, since x_a is an approximate solution, so that c_a and c_b are also close to s (because |c_a − s| < |x_a − s| and |c_b − s| < |x_b − s|). With this in mind, we approximate f′(c_a) and f′(c_b) by f′(s) in equations 3.1 and 3.2, giving:

$$x_b - s \approx (x_a - s)\,f'(s) \qquad (3.3)$$

$$x_c - s \approx (x_b - s)\,f'(s) \qquad (3.4)$$

Subtracting Eq. 3.3 from Eq. 3.4 yields:

$$f'(s) \approx \frac{x_c - x_b}{x_b - x_a}$$

Substituting this expression back into Eq. 3.3 and rearranging yields the following approximate solution to f(x) = x:

$$s \approx x^* \equiv x_a - \frac{(x_b - x_a)^2}{x_c - 2x_b + x_a}$$

In general x* will be much closer to s than x_a, x_b and x_c. We use this expression roughly as follows:

• Start at an estimated solution x_a.
• At each iteration calculate x_b = f(x_a), x_c = f(x_b) and x* (using the boxed equation above).
• Set x_a for the next iteration to be the current value of x*.

Before developing a full algorithm we'll explain why this method is called Aitken's ∆² method.

3.2.3 The ∆² Notation

This expression can be written more compactly using the forward difference operator ∆, which is defined by the three relations:

$$\Delta x_n \equiv x_{n+1} - x_n, \qquad n \ge 0$$

$$\Delta\left(\sum_{j=1}^{m} \alpha_j x_{n_j}\right) \equiv \sum_{j=1}^{m} \alpha_j\,\Delta x_{n_j}, \qquad m \ge 1,\ \alpha_j \in \mathbb{R},\ n_j \ge 0\ (\forall j)$$

$$\Delta^n E \equiv \Delta\left(\Delta^{n-1} E\right), \qquad n \ge 2$$

The more compact form of the boxed equation, which gives the method its name, is given by

$$x^* = x_a - \frac{(\Delta x_a)^2}{\Delta^2 x_a}.$$
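To see that the two forms agree (a one-line check, ours): with x_b = f(x_a) and x_c = f(x_b) we have ∆x_a = x_b − x_a, and ∆²x_a = ∆x_b − ∆x_a = (x_c − x_b) − (x_b − x_a) = x_c − 2x_b + x_a, which are exactly the numerator and denominator of the boxed equation.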


3.2.4 Aitken's Algorithm

Aitken's algorithm is as follows. x*_a represents Aitken's accelerated approximate solution; x_b and x_c are two iterations of the S.I.M. starting at x*_a.

1. Choose an initial approximate solution x*_a, a tolerance ε and a maximal number of iterations N. Set n = 0.
2. Compute x_b = f(x*_a) and x_c = f(x_b).
3. Update x*_a with

$$x_a^* \leftarrow x_a^* - \frac{(x_b - x_a^*)^2}{x_c - 2x_b + x_a^*}$$

4. Increment n.
5. If |f(x*_a) − x*_a| > ε and n < N, then go to step 2.
6. If |f(x*_a) − x*_a| > ε, then raise a warning that the maximal number of iterations has been reached without the desired convergence.
7. Output the approximate solution x*_a.
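A direct MATLAB sketch of this algorithm (our code, not from the original notes):

function xa = aitken(f, xa, tol, N)
% Aitken's Delta^2 acceleration of the standard iteration for x = f(x).
for n = 1:N
    xb = f(xa);
    xc = f(xb);
    xa = xa - (xb - xa)^2 / (xc - 2*xb + xa);
    if abs(f(xa) - xa) <= tol, return, end
end
warning('Maximum iterations reached without convergence.');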

3.3 Newton's Method

Newton's method, sometimes called the Newton-Raphson method, is designed to find solutions of the equation f(x) = 0, where f(x) is some function whose roots are non-trivial to find analytically.

Suppose that we have already iterated a number of times, arriving at our approximation x_k to the true solution. Ideally the next iteration of the method will choose x_{k+1} so that f(x_{k+1}) = 0. If we approximate f(x_{k+1}) using known quantities, then set this approximation to zero, we can solve for a good value of x_{k+1} (so that f(x_{k+1}) ≈ 0). But what approximation should we use?

Newton's method arises from approximating f(x_{k+1}) using a Taylor series of f(x) expanded about x_k. Using the standard forward difference operator ∆, so that ∆x_k ≡ x_{k+1} − x_k, this Taylor series is



$$f(x_{k+1}) = f(x_k + \Delta x_k) = f(x_k) + \Delta x_k \left.\frac{df}{dx}\right|_{x_k} + \frac{(\Delta x_k)^2}{2} \left.\frac{d^2 f}{dx^2}\right|_{x_k} + \ldots$$

Assuming that ∆x_k is small, i.e. that we are close to the solution, f(x_{k+1}) can be approximated using the first two terms:

$$f(x_{k+1}) \approx f(x_k) + \Delta x_k \left.\frac{df}{dx}\right|_{x_k}$$

Setting this approximation of f(x_{k+1}) to zero gives

$$f(x_k) + (x_{k+1} - x_k) \left.\frac{df}{dx}\right|_{x_k} = 0$$

and solving for x_{k+1} yields

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$

This iteration is the basis of Newton’s method.
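A minimal MATLAB sketch of the iteration (our code; df is the derivative f′, supplied by the caller):

function x = newton(f, df, x0, tol, N)
% Newton's method for f(x) = 0.
x = x0;
for k = 1:N
    dx = -f(x) / df(x);   % x_{k+1} - x_k = -f(x_k)/f'(x_k)
    x = x + dx;
    if abs(dx) <= tol, return, end
end
warning('Maximum iterations reached without convergence.');

For example, newton(@(x) x^2 - 2, @(x) 2*x, 1.5, 1e-12, 20) returns √2 in a handful of iterations.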

3.3.1 Graphical Derivation of Newton's Method

Consider a root-finding method which sets x_{k+1} to be the intercept of the tangent line to f(x) at x_k with the x-axis.

[Figure: the tangent to f(x) at x_k crossing the x-axis at x_{k+1}.]

The slope of the tangent is ∆y/∆x, so

$$f'(x_k) = \frac{0 - f(x_k)}{x_{k+1} - x_k}$$

$$\Rightarrow \quad f'(x_k)(x_{k+1} - x_k) = -f(x_k)$$

$$\Rightarrow \quad x_{k+1} - x_k = -\frac{f(x_k)}{f'(x_k)}$$

and so

$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$


3.3.2 Pathological Failure of Newton's Method

How else could Newton's method fail?

• Divergence:

[Figure: iterates x_0, x_1, ... moving steadily away from the root.]

• Cycles / oscillation:

[Figure: iterates alternating between x_0, x_2, x_4, ... and x_1, x_3, x_5, ....]

• Poor convergence (near-zero slope).

These issues are often located near turning points.


3.3.3 Convergence Rate

Newton's method has quadratic convergence[1] when close to the root[2]: the error at step k + 1 is of the order of the square of the error at step k. This is much better than the other methods described (including Aitken's ∆² method), which have only linear convergence.

[1] In the case of repeated roots we have f′(x) = 0 at the roots, meaning that Newton's method only manages linear convergence.
[2] Choosing a starting value close to the root is very important; Newton's method depends on a good starting value for convergence.

3.4 Newton's Method and Fractals

[Figure: fractal produced by Newton's method applied to z^3 − 1 = 0, plotted on −2 ≤ Re(z), Im(z) ≤ 2.]

The fractal shown is obtained by considering Newton's method when used on the complex-valued equation z^3 − 1 = 0.


Newton’s method gives the iteration zj+1

zj3 − 1 = zj − . 3zj2

Looking at the region |z| < 2, we colour a pixel white if Newton's method converges to z = 1 and black if it converges to one of the two complex roots. All points converge, with the exception of the origin and three points along the negative real axis. Although we may have expected the three regions (one for each root) to be identically shaped due to symmetry, the fractal behaviour is quite surprising! Points near local extrema cause Newton's method to shoot off towards other remote values.

Source File: nfrac.m

function nfrac
% Generates a fractal based on which root Newton's
% method converges to when considering
%           f(z) = z^3 - 1.
% We colour anything converging to z=1 (the real root) black.

n=500; m=zeros(n,n);
for (it1=1:n)
    for (it2=1:n)
        x = ((it1-1)/(n-1) -0.5)*4;
        y = ((it2-1)/(n-1) -0.5)*4;
        if (x*x+y*y < 4)   % the original listing is truncated here;
                           % the remainder is our reconstruction
            z = x + 1i*y;
            for (it3=1:40)                % assumed iteration count
                z = z - (z^3 - 1)/(3*z^2);
            end
            if (abs(z-1) < 1e-6)
                m(it2,it1) = 1;           % marks convergence to the real root
            end
        end
    end
end
imagesc(m); colormap(flipud(gray));       % real root shown black (our display choice)