Numerical Analysis EE, NCKU Tien-Hao Chang (Darby Chang)
In the previous slide
Rootfinding – multiplicity
Bisection method – Intermediate Value Theorem – convergence measures
False position – yet another simple enclosure method – advantages and disadvantages in comparison with the bisection method
In this slide
Fixed point iteration scheme – what is a fixed point? – iteration function – convergence
Newton’s method – tangent line approximation – convergence
Secant method
Rootfinding
Simple enclosure
– Intermediate Value Theorem
– guaranteed to converge • convergence rate is slow
– bisection and false position
Fixed point iteration
– Mean Value Theorem
– rapid convergence • loss of guaranteed convergence
2.3 Fixed Point Iteration Schemes
There is at least one point on the graph at which the tangent line is parallel to the secant line
Mean Value Theorem
𝑓′(𝜉) = (𝑓(𝑏) − 𝑓(𝑎)) / (𝑏 − 𝑎)
We use a slightly different formulation
𝑓(𝑏) − 𝑓(𝑎) = 𝑓′(𝜉)(𝑏 − 𝑎)
An example of using this theorem
– prove the inequality |sin 𝑏 − sin 𝑎| ≤ |𝑏 − 𝑎|
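As a sketch of that example: applying the second form of the theorem with 𝑓 = sin gives, for some 𝜉 between 𝑎 and 𝑏,

```latex
% MVT with f = sin: for some \xi between a and b,
\lvert \sin b - \sin a \rvert
  = \lvert \cos\xi \rvert \, \lvert b - a \rvert
  \le \lvert b - a \rvert ,
\quad\text{since } \lvert \cos\xi \rvert \le 1 .
```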
Fixed points
Consider the function sin 𝑥
– thought of as moving the input value 𝑥 to the output value sin 𝑥
– the sine function maps 𝜋/6 to 1/2
– the sine function maps 0 to 0 • the sine function fixes the location of 0
– 𝑥 = 0 is said to be a fixed point of the function sin 𝑥
Number of fixed points
According to the previous figure, a natural question is
– how many fixed points does a given function have?
|𝑔′(𝑥)| ≤ 𝑘 < 1
Only sufficient conditions
Namely, not necessary conditions – it is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point
Fixed point iteration
Fixed point iteration
If it is known that a function 𝑔 has a fixed point, one way to approximate the value of that fixed point is a fixed point iteration scheme. It can be defined as follows:
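The scheme can be sketched in a few lines (a minimal illustration of the idea, not the course’s reference code; function and variable names are mine):

```python
import math

def fixed_point_iteration(g, p0, tol=1e-10, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("no convergence within max_iter iterations")

# g(x) = cos(x) is a contraction near its fixed point (|g'| ~ 0.67 there),
# so the iteration converges from a nearby starting value.
p = fixed_point_iteration(math.cos, 1.0)
```

With 𝑔 = cos the multiplier |𝑔′| is about 0.67 near the fixed point, so convergence is guaranteed locally but only linear, which previews the role of 𝑘 in the later slides.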
In action
Any Questions? About fixed point iteration
Relation to rootfinding
Now we know what fixed point iteration is, but how do we apply it to rootfinding? More precisely, given a rootfinding equation 𝑓(𝑥) = 𝑥^3 + 𝑥^2 − 3𝑥 − 3 = 0, what is its iteration function 𝑔(𝑥)?
Iteration function
Algebraically transform 𝑓(𝑥) = 0 to the form
– 𝑥 = ⋯
𝑓(𝑥) = 𝑥^3 + 𝑥^2 − 3𝑥 − 3
– 𝑥 = 𝑥^3 + 𝑥^2 − 2𝑥 − 3
– 𝑥 = (𝑥^3 + 𝑥^2 − 3) / 3
– …
Every rootfinding problem can be transformed into any number of fixed point problems
– (fortunately or unfortunately?)
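As a quick numerical check (my own snippet, not from the slides): the root √3 of 𝑓 is a fixed point of the second rearrangement above, though, as the following analysis shows, not every rearrangement yields a convergent iteration:

```python
import math

f = lambda x: x**3 + x**2 - 3*x - 3   # f(x) = (x + 1)(x^2 - 3)
g = lambda x: (x**3 + x**2 - 3) / 3   # rearrangement x = (x^3 + x^2 - 3)/3

r = math.sqrt(3)
print(abs(f(r)))       # ~0: r is a root of f
print(abs(g(r) - r))   # ~0: the same r is a fixed point of g
```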
In action
Analysis
#1 iteration function converges
– but to a fixed point outside the interval (1, 2)
#2 fails to converge
– despite attaining values quite close to #1
#3 and #5 converge rapidly
– #3 adds one correct decimal every iteration
– #5 doubles the number of correct decimals every iteration
#4 converges, but very slowly
Convergence
This analysis suggests a natural question
– whether convergence to the fixed point of 𝑔 is justified by our previous theorem
|𝑝𝑛 − 𝑝| ≤ (𝑘^𝑛 / (1 − 𝑘)) |𝑝1 − 𝑝0| demonstrates the importance of the parameter 𝑘
– when 𝑘 → 0, rapid
– when 𝑘 → 1, dramatically slow
– when 𝑘 → 1/2, roughly the same as the bisection method
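To make the effect of 𝑘 concrete, one can solve the bound for the number of iterations 𝑛 needed to guarantee a given tolerance (a rough sketch; the tolerance 10⁻⁸ and |𝑝1 − 𝑝0| = 1 are my assumptions):

```python
import math

def iterations_needed(k, tol=1e-8, first_step=1.0):
    """Smallest n with k**n / (1 - k) * first_step < tol, from the error bound."""
    return math.ceil(math.log(tol * (1 - k) / first_step) / math.log(k))

for k in (0.1, 0.5, 0.9):
    print(k, iterations_needed(k))
```

With 𝑘 = 1/2 the count comes out near the ~27 interval halvings the bisection method needs for the same tolerance, matching the slide’s remark.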
Order of convergence of fixed point iteration schemes
All about the derivatives, 𝑔^(𝑘)(𝑝)
Stopping condition
Two steps
The first step
lim𝑛→∞ |𝑝𝑛+1 − 𝑝| / |𝑝𝑛 − 𝑝|^𝛼 = 𝜆
⇒ lim𝑛→∞ |𝑝𝑛+1 − 𝑝| / |𝑝𝑛 − 𝑝| = lim𝑛→∞ 𝜆 ∙ |𝑝𝑛 − 𝑝|^(𝛼−1)
∵ lim𝑛→∞ |𝑝𝑛 − 𝑝| = 0
∴ lim𝑛→∞ |𝑝𝑛+1 − 𝑝| / |𝑝𝑛 − 𝑝| = 0 when 𝛼 > 1
The second step
∵ |𝑝𝑛+1 − 𝑝𝑛| / |𝑝𝑛 − 𝑝| = |𝑝𝑛+1 − 𝑝 + 𝑝 − 𝑝𝑛| / |𝑝𝑛 − 𝑝|
⇒ (|𝑝𝑛 − 𝑝| − |𝑝𝑛+1 − 𝑝|) / |𝑝𝑛 − 𝑝| ≤ |𝑝𝑛+1 − 𝑝𝑛| / |𝑝𝑛 − 𝑝| ≤ (|𝑝𝑛 − 𝑝| + |𝑝𝑛+1 − 𝑝|) / |𝑝𝑛 − 𝑝|
∵ lim𝑛→∞ |𝑝𝑛+1 − 𝑝| / |𝑝𝑛 − 𝑝| = 0 when 𝛼 > 1
∴ 1 − 0 ≤ lim𝑛→∞ |𝑝𝑛+1 − 𝑝𝑛| / |𝑝𝑛 − 𝑝| ≤ 1 + 0
∴ lim𝑛→∞ |𝑝𝑛+1 − 𝑝𝑛| / |𝑝𝑛 − 𝑝| = 1 when 𝛼 > 1
Any Questions? 2.3 Fixed Point Iteration Schemes
2.4 Newton’s Method
Newton’s Method
Definition
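A minimal sketch of the iteration 𝑝𝑛+1 = 𝑝𝑛 − 𝑓(𝑝𝑛)/𝑓′(𝑝𝑛), applied to the slides’ polynomial (function names are mine, not from the course):

```python
def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method: follow the tangent line from the current approximation."""
    p = p0
    for _ in range(max_iter):
        step = f(p) / fprime(p)   # assumes f'(p) != 0
        p -= step
        if abs(step) < tol:
            return p
    raise RuntimeError("no convergence within max_iter iterations")

# f(x) = x^3 + x^2 - 3x - 3 has the root sqrt(3) in (1, 2).
root = newton(lambda x: x**3 + x**2 - 3*x - 3,
              lambda x: 3*x**2 + 2*x - 3,
              p0=1.5)
```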
In action
In the previous example
Newton’s method used 8 function evaluations
The bisection method requires 36 evaluations starting from (1, 2)
False position requires 31 evaluations starting from (1, 2)
Any Questions?
Initial guess
Are these comparisons fair?
𝑓(𝑥) = tan(𝜋𝑥) − 𝑥 − 6
– 𝑝0 = 0.48, converges to 0.4510472613 after 5 iterations
– 𝑝0 = 0.4, fails to converge after 5000 iterations
– 𝑝0 = 0, converges to 697.4995475 after 42 iterations
𝑝0 in Newton’s method
Not guaranteed to converge
– 𝑝0 = 0.4, fails to converge
May converge to a value very far from 𝑝0
– 𝑝0 = 0, converges to 697.4995475
Heavily dependent on the choice of 𝑝0
Convergence analysis for Newton’s method
The simplest plan is to apply the general fixed point iteration convergence theorem
Analysis strategy
To do this, it must be shown that there exists such an interval, 𝐼, which contains the root 𝑝, for which
Any Questions?
Newton’s Method
Guaranteed to Converge?
Why does Newton’s method sometimes fail to converge? This theorem guarantees that 𝛿 exists, but it may be very small
Oh no! After all these annoying analyses, Newton’s method is still not guaranteed to converge!?
Don’t worry
Actually, there is an intuitive method
Combine Newton’s method and the bisection method
– Newton’s method first
– if an approximation falls outside the current interval, then apply the bisection method to obtain a better guess
(Can you write an algorithm for this method?)
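One possible answer to that question — a sketch of my own, assuming the usual bracketing setup, not the course’s reference algorithm: take the Newton step when it stays inside the current bracket, otherwise bisect, and shrink the bracket either way so convergence is guaranteed:

```python
def newton_bisection(f, fprime, a, b, tol=1e-10, max_iter=200):
    """Hybrid scheme: Newton step when it stays in [a, b], bisection otherwise."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = (a + b) / 2
    for _ in range(max_iter):
        fp = fprime(p)
        cand = p - f(p) / fp if fp != 0 else None   # Newton candidate
        if cand is None or not (a < cand < b):
            cand = (a + b) / 2                       # bisection fallback
        if abs(cand - p) < tol:
            return cand
        # Keep the sign change inside the bracket.
        if f(a) * f(cand) <= 0:
            b = cand
        else:
            a = cand
        p = cand
    return p

root = newton_bisection(lambda x: x**3 + x**2 - 3*x - 3,
                        lambda x: 3*x**2 + 2*x - 3,
                        1.0, 2.0)
```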
Newton’s Method
Convergence analysis
At least quadratic
– 𝑔′(𝑥) = 𝑓(𝑥)𝑓′′(𝑥) / 𝑓′(𝑥)^2
– 𝑔′(𝑝) = 0, since 𝑓(𝑝) = 0
Stopping condition
– |𝑝𝑛 − 𝑝𝑛−1| < 𝜖
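As a quick derivation sketch, the expression for 𝑔′ follows from differentiating Newton’s iteration function with the quotient rule:

```latex
g(x) = x - \frac{f(x)}{f'(x)}
\quad\Longrightarrow\quad
g'(x) = 1 - \frac{f'(x)^2 - f(x)\,f''(x)}{f'(x)^2}
      = \frac{f(x)\,f''(x)}{f'(x)^2} ,
```

so 𝑔′(𝑝) = 0 at a simple root, which is what makes the convergence at least quadratic.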
Is Newton’s method always faster?
In action
Any Questions? 2.4 Newton’s Method
2.5 Secant Method
Secant method
Because Newton’s method
– requires 2 function evaluations per iteration
– requires the derivative
The secant method is a variation on either false position or Newton’s method
– 1 additional function evaluation per iteration
– does not require the derivative
Let’s see the figure first
Secant method
The secant method is a variation on either false position or Newton’s method
– 1 additional function evaluation per iteration
– does not require the derivative
– does not maintain an interval
– 𝑝𝑛+1 is calculated with 𝑝𝑛 and 𝑝𝑛−1
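A minimal sketch (names mine): replace 𝑓′(𝑝𝑛) in Newton’s formula with the difference quotient through the last two iterates, so each iteration needs only one new function evaluation:

```python
def secant(f, p0, p1, tol=1e-10, max_iter=100):
    """Secant method: Newton's formula with f' replaced by a difference quotient."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # assumes f1 != f0
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)                      # the only new evaluation
    raise RuntimeError("no convergence within max_iter iterations")

# Same polynomial as before, starting from the endpoints of (1, 2).
root = secant(lambda x: x**3 + x**2 - 3*x - 3, 1.0, 2.0)
```

Note that, unlike false position, no bracketing interval is maintained: 𝑝𝑛+1 is built from 𝑝𝑛 and 𝑝𝑛−1 regardless of signs.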
Any Questions? 2.5 Secant Method