Least Squares, The Normal Equation, Curve Fitting

Author: Wilfrid Powell
Math 344 Linear Algebra II, Nov. 18, 2013

Least Squares via the Normal Equation

We will need some procedures in the LinearAlgebra package.

> with(LinearAlgebra):

An Example. Let A and b be defined as follows.

> A, b := <1, 2; 3, 4; 5, 6>, <1, 0, 1>

    [ 1  2 ]   [ 1 ]
    [ 3  4 ] , [ 0 ]                                                (1)
    [ 5  6 ]   [ 1 ]

The equation A x = b has no solutions. If it did, Maple's LinearSolve procedure would find them. The syntax is simply

> LinearSolve(A, b)
Error, (in LinearAlgebra:-LinearSolve) inconsistent system

The error message advises us that the linear system A x = b is inconsistent. We will obtain the least squares solution by solving the normal equation A^T A x = A^T b, which here reads

    [ 35 x1 + 44 x2 ]   [ 6 ]
    [ 44 x1 + 56 x2 ] = [ 8 ]                                       (2)

> x := LinearSolve(Transpose(A) . A, Transpose(A) . b)

    [ -2/3 ]
    [  2/3 ]                                                        (3)

The entries of the vector A x - b are the signed errors in each coordinate direction. They are referred to as the residuals.

> residuals = A . x - b

                [ -1/3 ]
    residuals = [  2/3 ]
                [ -1/3 ]

The "Squared Error" (SE) is the sum of the squares of the residuals.

> SE = Norm(A . x - b, 2)^2; evalf(%)

    SE = 2/3                                                        (4)
    SE = 0.6666666667                                               (5)

The square root of the squared error is called the "Root Squared Error" (RSE).

> RSE = Norm(A . x - b, 2); evalf(%)

    RSE = sqrt(6)/3
    RSE = 0.8164965809                                              (6)

Note that RSE is just the distance from the vector A x to the target vector b. The least squares solution can also be obtained as

> 'x' = (Transpose(A) . A)^(-1) . Transpose(A) . b

        [ -2/3 ]
    x = [  2/3 ]                                                    (7)
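The same computation can be sketched outside Maple. The following Python fragment is not part of the original worksheet; it is an illustrative translation that uses the standard-library fractions module for exact arithmetic and mirrors the normal-equation solve and the error measures above.

```python
import math
from fractions import Fraction as F

# The worksheet's example: a 3x2 matrix A and a target b with A x = b inconsistent.
A = [[F(1), F(2)], [F(3), F(4)], [F(5), F(6)]]
b = [F(1), F(0), F(1)]

# Form the normal equation A^T A x = A^T b: here a 2x2 system, as in output (2).
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
assert AtA == [[F(35), F(44)], [F(44), F(56)]] and Atb == [F(6), F(8)]

# Solve the 2x2 system by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]
assert x == [F(-2, 3), F(2, 3)]          # matches output (3)

# Residuals, squared error, and root squared error (outputs (4)-(6)).
residuals = [sum(A[k][j] * x[j] for j in range(2)) - b[k] for k in range(3)]
SE = sum(r * r for r in residuals)
RSE = math.sqrt(SE)
assert residuals == [F(-1, 3), F(2, 3), F(-1, 3)]
assert SE == F(2, 3) and abs(RSE - 0.8164965809) < 1e-9
```

Using fractions instead of floats keeps every intermediate value exact, so the assertions can compare against the worksheet's rational outputs directly.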

The Projection Matrix P

As was announced in class on Friday, because A is of full rank, the matrix A^T A is invertible. Consequently, the orthogonal projection matrix P for the column space of A can be obtained as follows.

> P := A . (Transpose(A) . A)^(-1) . Transpose(A)

    [  5/6  1/3  -1/6 ]
    [  1/3  1/3   1/3 ]                                             (8)
    [ -1/6  1/3   5/6 ]

Check: Transpose(P), P^2, P . A

    [  5/6  1/3  -1/6 ]   [  5/6  1/3  -1/6 ]   [ 1  2 ]
    [  1/3  1/3   1/3 ] , [  1/3  1/3   1/3 ] , [ 3  4 ]            (9)
    [ -1/6  1/3   5/6 ]   [ -1/6  1/3   5/6 ]   [ 5  6 ]
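The projector identities checked above can also be verified numerically. This Python sketch (illustrative, not from the worksheet; the `matmul` helper is my own) builds P = A (A^T A)^{-1} A^T with exact fractions and confirms symmetry, idempotence, and P A = A.

```python
from fractions import Fraction as F

A = [[F(1), F(2)], [F(3), F(4)], [F(5), F(6)]]

def matmul(X, Y):
    # Plain triple-loop matrix product; sufficient for these small matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(row) for row in zip(*A)]     # A^T
AtA = matmul(At, A)                     # the invertible 2x2 Gram matrix

# Invert the 2x2 matrix A^T A by the adjugate formula.
d = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
inv = [[AtA[1][1] / d, -AtA[0][1] / d],
       [-AtA[1][0] / d, AtA[0][0] / d]]

# P = A (A^T A)^{-1} A^T projects R^3 orthogonally onto Col(A), as in (8).
P = matmul(matmul(A, inv), At)
assert P == [[F(5, 6), F(1, 3), F(-1, 6)],
             [F(1, 3), F(1, 3), F(1, 3)],
             [F(-1, 6), F(1, 3), F(5, 6)]]

# The three checks from the worksheet: P^T = P, P^2 = P, P A = A.
assert P == [list(row) for row in zip(*P)]
assert matmul(P, P) == P
assert matmul(P, A) == A
```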

Maple's LeastSquares Procedure

Maple's LeastSquares procedure will find the least squares solution to A x = b like this.

> LeastSquares(A, b)

    [ -2/3 ]
    [  2/3 ]                                                        (10)

Using Least Squares to Model Data (Curve Fitting)

> with(plots): setoptions(font = [times, roman, 8], labelfont = [times, roman, 8])

Consider the 5 data points whose plot suggests a rising periodicity.

> Data := <X, 1, 2, 3, 4, 6; Y, 1, 3, 2, 5, 4>

    Data := [ X  1  2  3  4  6 ]
            [ Y  1  3  2  5  4 ]                                    (11)

> plot(['[Data[1, j], Data[2, j]]' $ j = 2 .. 6], style = point, symbol = solidcircle, symbolsize = 18, view = [0 .. 6, 0 .. 6], scaling = constrained); DataPlot := %:

[Point plot of the five data points on a 0..6 by 0..6 window.]

We will model the data with the function f defined below. The periodic part has period 5/2.

> unassign('x', 'a')
> f := x -> a[1] + a[2]*x + a[3]*cos(2*Pi*x/(5/2)) + a[4]*sin(2*Pi*x/(5/2)): f(x)

    a[1] + a[2] x + a[3] cos(4 Pi x/5) + a[4] sin(4 Pi x/5)         (12)

We would like the coefficients to satisfy this system of 5 linear equations.

> eqns := seq(f(Data[1, j]) = Data[2, j], j = 2 .. 6)

    a[1] + a[2] - a[3] cos(Pi/5) + a[4] sin(Pi/5) = 1,
    a[1] + 2 a[2] + a[3] cos(2 Pi/5) - a[4] sin(2 Pi/5) = 3,
    a[1] + 3 a[2] + a[3] cos(2 Pi/5) + a[4] sin(2 Pi/5) = 2,
    a[1] + 4 a[2] - a[3] cos(Pi/5) - a[4] sin(Pi/5) = 5,
    a[1] + 6 a[2] - a[3] cos(Pi/5) + a[4] sin(Pi/5) = 4             (13)

Fortunately, Maple has a procedure that will convert such a system to matrix form. It is called GenerateMatrix. In the next entry, the coefficient matrix is named B and the column vector of constants is named c.

> B, c := GenerateMatrix([eqns], [a[j] $ j = 1 .. 4])

    [ 1  1  -cos(Pi/5)     sin(Pi/5)   ]   [ 1 ]
    [ 1  2   cos(2 Pi/5)  -sin(2 Pi/5) ]   [ 3 ]
    [ 1  3   cos(2 Pi/5)   sin(2 Pi/5) ] , [ 2 ]                    (14)
    [ 1  4  -cos(Pi/5)    -sin(Pi/5)   ]   [ 5 ]
    [ 1  6  -cos(Pi/5)     sin(Pi/5)   ]   [ 4 ]
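What GenerateMatrix produces can be imitated directly: the sketch below (Python, not part of the worksheet; the `rank` helper is my own) evaluates the four basis functions of the model at the five abscissas to assemble B, and confirms that B has full column rank.

```python
import math

xs = [1, 2, 3, 4, 6]          # data abscissas (row X of Data)
ys = [1, 3, 2, 5, 4]          # data ordinates (row Y of Data)

# Row j of B holds 1, x_j, cos(4 Pi x_j/5), sin(4 Pi x_j/5); c holds y_j.
B = [[1.0, float(x), math.cos(4 * math.pi * x / 5), math.sin(4 * math.pi * x / 5)]
     for x in xs]
c = [float(y) for y in ys]

def rank(M, tol=1e-10):
    # Numerical rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

assert rank(B) == 4    # full column rank, so B^T B is invertible
```

Full column rank is what guarantees that the least squares solution in the next entry is unique.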

Check the rank of B: Rank(B) = 4.

The next entry assigns the name a to the least squares solution to B x = c. We input numerical approximations for B and c, with 10-digit accuracy. Observe that the LeastSquares procedure outputs 15-digit approximations.

> a := LeastSquares(evalf(B), evalf(c))

    [  0.972398705248088 ]
    [  0.645825411336815 ]                                          (15)
    [ -0.281415699896448 ]
    [ -1.19820272558846  ]

We now have a function modeling the data. Its formula is shown below, with 4-digit constants.

> evalf[4](f(x))

    0.9724 + 0.6458 x - 0.2814 cos(2.514 x) - 1.198 sin(2.514 x)    (16)

Here is a picture showing how the model fits the data.

> display(DataPlot, plot(f(x), x = 1 .. 6))

[Plot of the data points together with the graph of the fitted model, x from 1 to 6.]


Error Analysis

The residuals are in the vector B . a - c, so the RSE is

> RSE = evalf(Norm(B . a - c, 2))

    RSE = 0.787732693228721                                         (17)

This can be checked as follows.

> RSE = sqrt(add((f(Data[1, k]) - Data[2, k])^2, k = 2 .. 6)): evalf(%)

    RSE = 0.787732693228721                                         (18)
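For readers without Maple, the whole fit can be reproduced in a short Python sketch. This is illustrative only: rather than calling a least squares routine, it solves the normal equations B^T B a = B^T c by Gauss-Jordan elimination, then recovers the coefficients in (15) and the RSE in (17).

```python
import math

xs = [1, 2, 3, 4, 6]
ys = [1.0, 3.0, 2.0, 5.0, 4.0]

# Design matrix for f(x) = a1 + a2 x + a3 cos(4 Pi x/5) + a4 sin(4 Pi x/5).
B = [[1.0, x, math.cos(4 * math.pi * x / 5), math.sin(4 * math.pi * x / 5)]
     for x in xs]

# Augmented normal equations [B^T B | B^T c], solved by Gauss-Jordan elimination.
n = 4
M = [[sum(B[k][i] * B[k][j] for k in range(5)) for j in range(n)]
     + [sum(B[k][i] * ys[k] for k in range(5))] for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda i: abs(M[i][col]))
    M[col], M[piv] = M[piv], M[col]
    for i in range(n):
        if i != col:
            f = M[i][col] / M[col][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[col])]
a = [M[i][n] / M[i][i] for i in range(n)]

# Root squared error: the 2-norm of the residual vector B a - c.
res = [sum(B[k][j] * a[j] for j in range(n)) - ys[k] for k in range(5)]
RSE = math.sqrt(sum(r * r for r in res))

assert abs(a[0] - 0.972398705248088) < 1e-6    # matches output (15)
assert abs(a[3] + 1.19820272558846) < 1e-6
assert abs(RSE - 0.787732693228721) < 1e-6     # matches output (17)
```

Because B has full column rank, the normal equations have this unique solution, which is why the direct solve agrees with Maple's LeastSquares output.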

Exercises

1. Load the LinearAlgebra package.

   a) Define the matrix

          A = [ 1  2  3 ]
              [ 4  0  6 ]
              [ 7  6  5 ]
              [ 2  2  2 ]

      and the vector b = <0, 2, 0, 1>, and verify that the system A x = b has no solutions.

   b) Check that A has rank 3, and then use the normal equation to obtain the least squares solution to A x = b.

   c) Check the least squares solution you found in part b) by comparing it to Maple's LeastSquares solution. Then display the residuals vector and calculate the SE and RSE associated with the least squares approximation.

   d) Use the transpose formula to obtain the symmetric matrix P that projects R^4 orthogonally onto the column space of the matrix A. Verify that P^2 = P and P A = A.

   e) Replace the third column of A with the column vector <-1, 4, 1, 0>, thereby reducing its rank to 2 (check this). This implies that the least squares solution to A x = b is no longer unique. How is this reflected in Maple's least squares solution?

2. Load the plots package.

   a) Define the Data matrix used above and plot the points. Name the plot DataPlot.

   b) Model the data with a 3rd degree polynomial, f(x) = a1 + a2 x + a3 x^2 + a4 x^3.

   c) Display the data and the graph of f.

   d) Calculate the RSE. How does it compare to the RSE for the trig approximation?
