Author: Allen Baker
Answers to some of the problems (The numbering is just for convenience; the solutions are not indexed to the text properly yet. I’ll try and post these as I get them. Please report any mistakes!)

Chapter 1

1. Suppose that A is invertible. Show that its inverse is unique. That is, suppose B and C are both inverses of A; show that B = C.

Solution: For any matrices which can be multiplied together, we have, by the associative law, (BA)C = B(AC). If B and C are both inverses, then this reads IC = BI, and therefore B = C.

2. Show that A(B + C) = AB + AC.

Solution: We have to show that the (ij)th entries on both sides are the same:

(A(B + C))_{ij} = \sum_{k=1}^{s} a_{ik}(B + C)_{kj}
                = \sum_{k=1}^{s} a_{ik}(b_{kj} + c_{kj})
                = \sum_{k=1}^{s} a_{ik} b_{kj} + \sum_{k=1}^{s} a_{ik} c_{kj}
                = (AB + AC)_{ij}

(You should be able to give a reason for each step above.)

3. Show that A(cB) = c(AB).

Solution: Again, we need to show that the corresponding entries on both sides are the same:

[A(cB)]_{ij} = \sum_{k=1}^{s} a_{ik}(cB)_{kj}
             = \sum_{k=1}^{s} a_{ik} c b_{kj}
             = c \sum_{k=1}^{s} a_{ik} b_{kj}
             = [c(AB)]_{ij}
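(Not part of the original solutions: here is a minimal NumPy sanity check of problems 2 and 3, with randomly chosen matrices of compatible sizes.)

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # sample sizes, my own choice
B = rng.standard_normal((4, 2))   # B and C must have the same shape
C = rng.standard_normal((4, 2))
c = 2.5

# Problem 2: A(B + C) = AB + AC
print(np.allclose(A @ (B + C), A @ B + A @ C))   # True

# Problem 3: A(cB) = c(AB)
print(np.allclose(A @ (c * B), c * (A @ B)))     # True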

4. Show that the matrix

A = \begin{pmatrix} 3 & 1 \\ 3 & 1 \end{pmatrix}

has no inverse.

Solution: Suppose

B = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, and AB = I.

Then writing out the product gives

AB = \begin{pmatrix} 3a + c & 3b + d \\ 3a + c & 3b + d \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Equating the entries that must be equal leads to inconsistent equations: 3a + c = 1 and 3a + c = 0; similarly for the other pair. So there is no such matrix B.

5. Show that (AB)^t = B^t A^t.

Solution: As above, we need to show that the corresponding entries of both sides are identical:

[(AB)^t]_{ij} = (AB)_{ji}
              = \sum_{k=1}^{s} a_{jk} b_{ki}
              = \sum_{k=1}^{s} (A^t)_{kj}(B^t)_{ik}
              = \sum_{k=1}^{s} (B^t)_{ik}(A^t)_{kj}
              = [B^t A^t]_{ij}
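(A numerical check of problems 4 and 5, my addition: NumPy reports a zero determinant for the matrix of problem 4 and raises LinAlgError when asked to invert it, and the transpose identity can be spot-checked on random matrices.)

import numpy as np

# Problem 4: [[3, 1], [3, 1]] has no inverse.
A = np.array([[3.0, 1.0], [3.0, 1.0]])
print(np.linalg.det(A))                 # 0.0, so A is singular
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as e:
    print("no inverse:", e)

# Problem 5: (AB)^t = B^t A^t, on random matrices of my own choosing.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
print(np.allclose((A @ B).T, B.T @ A.T))   # True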

Chapter 2

1. Show that, if we replace the ith equation by equation i + α · equation j, then we don’t change the solutions to the system of equations.

Solution: Suppose the two equations are

a_{i1} x_1 + \cdots + a_{im} x_m = b_i  and  a_{j1} x_1 + \cdots + a_{jm} x_m = b_j.

Then the new equation i is

α(a_{j1} x_1 + \cdots + a_{jm} x_m) + (a_{i1} x_1 + \cdots + a_{im} x_m) = αb_j + b_i.

But since a_{i1} x_1 + \cdots + a_{im} x_m = b_i, and a_{j1} x_1 + \cdots + a_{jm} x_m = b_j, the left-hand side is just αb_j + b_i, so the new equation is true, provided the original equations were true. And conversely, we can subtract α · equation j from the new equation i to get back to our original system.
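(An illustration I have added, assuming an invertible coefficient matrix so the solution is unique: the row replacement leaves the solution unchanged.)

import numpy as np

# A sample system Ax = b with a unique solution (values are my own choice).
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 0.0,  1.0]])
b = np.array([1.0, 4.0, 2.0])

# Replace equation 0 by equation 0 + alpha * equation 1,
# applying the same operation to the right-hand side.
alpha = 5.0
A2, b2 = A.copy(), b.copy()
A2[0] += alpha * A2[1]
b2[0] += alpha * b2[1]

print(np.allclose(np.linalg.solve(A, b), np.linalg.solve(A2, b2)))   # True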

Chapter 3

Chapter 4

1. Do elementary matrices commute? That is, does it matter in which order they’re multiplied?

Solution: Not necessarily. Here’s an example where they don’t:

\begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ -6 & 1 \end{pmatrix},

but

\begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ -2 & 1 \end{pmatrix}.

In the first case, we multiply the 1st row by 3 before adding to the 2nd row. In the second case, we do it afterwards. Give an instance in which two elementary matrices do commute.
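(The same pair of elementary matrices in NumPy, my addition; the last lines also give one answer to the "find an instance" prompt: diagonal matrices always commute.)

import numpy as np

E_mult = np.array([[3, 0], [0, 1]])    # multiply row 1 by 3
E_add  = np.array([[1, 0], [-2, 1]])   # add -2 * row 1 to row 2

print(E_add @ E_mult)    # [[ 3  0] [-6  1]]
print(E_mult @ E_add)    # [[ 3  0] [-2  1]], a different matrix

# An instance where two elementary matrices do commute:
# two row-scaling (diagonal) matrices.
D1 = np.array([[3, 0], [0, 1]])
D2 = np.array([[1, 0], [0, 5]])
print(np.allclose(D1 @ D2, D2 @ D1))   # True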

2. Define elementary column operations and show that they can be implemented by multiplying the matrix A on the right by elementary column matrices.

Solution:

(a) To multiply col_i(A) by c, use the matrix obtained from I by multiplying its ith column by c.

(b) To add c · col_i(A) to col_j(A), use the matrix obtained by adding c · col_i(I) to col_j(I).

(c) To interchange columns i and j, multiply by the identity matrix with its ith and jth columns interchanged.

To sum up: the column operations are performed by doing the operation on the appropriate columns of the identity matrix, and then multiplying A by this on the right.
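(A NumPy sketch of the three column operations, my addition; the sample matrix and constants are arbitrary choices.)

import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # a sample 3x3 matrix
I = np.eye(3)

# (a) multiply column 0 by c = 4
Ea = I.copy(); Ea[:, 0] *= 4
print(np.allclose(A @ Ea, A * np.array([4, 1, 1])))   # True

# (b) add 2 * column 0 to column 2
Eb = I.copy(); Eb[:, 2] += 2 * Eb[:, 0]
B = A.copy(); B[:, 2] += 2 * B[:, 0]
print(np.allclose(A @ Eb, B))                          # True

# (c) interchange columns 0 and 1
Ec = I[:, [1, 0, 2]]
print(np.allclose(A @ Ec, A[:, [1, 0, 2]]))            # True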

Chapter 5

1. If the Gauss-Jordan form of A has a row of zeros, are there necessarily any free variables? If there are free variables, is there necessarily a row of zeros?

Solution: The answer is no to both questions. Counterexamples are

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, and \begin{pmatrix} 1 & 0 & * \\ 0 & 1 & * \end{pmatrix}.

Remember, these are the original coefficient matrices after reduction, not the augmented ones.
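(To see the counterexamples concretely, a SymPy sketch of my own: rref returns the reduced form together with the pivot columns, and the free variables are the non-pivot columns. The 7 and 9 below stand in for the starred entries.)

from sympy import Matrix

# Already-reduced 4x3 matrix: a row of zeros, but no free variables.
M1 = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
print(M1.rref())   # pivots (0, 1, 2): every column is a pivot column

# Already-reduced 2x3 matrix: a free variable, but no row of zeros.
M2 = Matrix([[1, 0, 7], [0, 1, 9]])   # 7, 9 stand in for the * entries
print(M2.rref())   # pivots (0, 1): column 2 is free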

Chapter 6

From the additional problems

1. If A and B are two 3 × 3 lower triangular matrices, then AB is lower triangular as well.

Solution: Let

A = \begin{pmatrix} a & 0 & 0 \\ b & c & 0 \\ d & e & f \end{pmatrix}, and B = \begin{pmatrix} g & 0 & 0 \\ h & i & 0 \\ j & k & l \end{pmatrix}.

Then

AB = \begin{pmatrix} ag & 0 & 0 \\ bg + ch & ci & 0 \\ dg + eh + fj & ei + fk & fl \end{pmatrix},

which is lower triangular, as claimed.

2. If A and B are lower triangular n × n matrices, then AB is also lower triangular.

Solution: In this case, we need to write out a general term that’s above the main diagonal and show that it must be zero. So consider the element (AB)_{ij}, where i < j. It has the form

(AB)_{ij} = a_{i1} b_{1j} + \cdots + a_{ii} b_{ij} + a_{i,i+1} b_{i+1,j} + \cdots + a_{ij} b_{jj} + a_{i,j+1} b_{j+1,j} + \cdots

Now B is lower triangular, so all the terms b_{kj} with k < j vanish in B. So all the terms in the above sum to the left of a_{ij} b_{jj} must vanish. And all the remaining terms have a factor of the form a_{ik} with i < k, so they vanish too. Therefore (AB)_{ij} = 0 when i < j, and this means that AB is lower triangular.
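(A quick randomized check of problem 2, my addition: np.tril zeroes everything above the main diagonal, so AB is lower triangular exactly when np.tril(AB) equals AB.)

import numpy as np

rng = np.random.default_rng(2)
n = 5
A = np.tril(rng.standard_normal((n, n)))   # random lower triangular matrices
B = np.tril(rng.standard_normal((n, n)))

AB = A @ B
print(np.allclose(np.tril(AB), AB))   # True: AB is lower triangular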

Chapter 7

Chapter 8

1. Suppose row_i(A) is a linear combination of row_j(A) and row_k(A), where i ≠ j ≠ k. Show that det(A) = 0.

Solution: Using the properties of the determinant, we have

det(A) = det(r_1, ..., r_i, ..., r_j, ..., r_k, ...)
       = det(r_1, ..., a r_j + b r_k, ..., r_j, ..., r_k, ...)
       = a det(r_1, ..., r_j, ..., r_j, ..., r_k, ...) + b det(r_1, ..., r_k, ..., r_j, ..., r_k, ...)
       = a · 0 + b · 0 = 0,

because each of the two determinants has two equal rows.

2. Show that det(A) = det(A^t).

Solution: We are going to use the fact that the determinant of a product is equal to the product of the determinants (see the next problem). First, you can easily verify that if E is the matrix corresponding to an elementary row operation, then det(E) = det(E^t). Now suppose that det(A) ≠ 0. Then A can be written as the product A = E_1 E_2 \cdots E_k of elementary matrices, and by the product rule for determinants,

det(A) = det(E_1 E_2 \cdots E_k) = det(E_1) det(E_2) \cdots det(E_k).

But then A^t = E_k^t E_{k-1}^t \cdots E_1^t, and

det(A^t) = det(E_k^t) det(E_{k-1}^t) \cdots det(E_1^t) = det(E_k) det(E_{k-1}) \cdots det(E_1) = det(A).

Now, if det(A) = 0, row reduction of A leads to E_k \cdots E_1 A = R, where R has at least one row of zeros, and det(R) = 0. Thus A = E_1^{-1} \cdots E_k^{-1} R and A^t = R^t (E_k^{-1})^t \cdots (E_1^{-1})^t, where det(R^t) = 0. So by the product rule, det(A^t) = 0.

3. Show that for any two n × n matrices A and B, det(AB) = det(A) det(B).

Solution: Let B be arbitrary and E elementary. Then det(EB) = det(E) det(B): if E replaces row_i(B) with row_i(B) + c · row_j(B), then, as we know, det(EB) = det(B). Since det(E) = 1, this means that det(E) det(B) = det(EB) in this case. The other two cases are easily checked. So it follows that if E_1 and E_2 are elementary matrices,

det(E_1 (E_2 B)) = det(E_1) det(E_2 B) = det(E_1) det(E_2) det(B),

and similarly for any finite product of elementary matrices. If A is invertible, then it can be written as the product A = E_1 E_2 \cdots E_k of elementary matrices, and by what we’ve just said, det(A) = det(E_1) det(E_2) \cdots det(E_k). So then

det(AB) = det(E_1 E_2 \cdots E_k B) = det(E_1) det(E_2) \cdots det(E_k) det(B) = det(A) det(B).

If A is singular, then A can be written as a product A = E_1 E_2 \cdots E_k R, where R is upper triangular with at least one row of zeros, and det(R) = 0. Now if R has a row of zeros, then RB has a row of zeros in the same location (check this), and therefore det(RB) = 0. So

det(AB) = det(E_1) \cdots det(E_k) det(RB) = 0 = det(A) det(B),

and this completes the proof.
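(A numerical spot-check of all three problems, my addition; NumPy works in floating point, so the comparisons hold only up to rounding.)

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Problem 2: det(A) = det(A^t)
print(np.allclose(np.linalg.det(A), np.linalg.det(A.T)))   # True

# Problem 3: det(AB) = det(A) det(B)
print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))    # True

# Problem 1: if row 0 = 2 * row 1 + 3 * row 2, then det(A) = 0.
A[0] = 2 * A[1] + 3 * A[2]
print(np.isclose(np.linalg.det(A), 0.0))                   # True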

Chapter 9

Chapter 10

Chapter 11

1. Show that, in the product matrix AB, the columns of AB are linear combinations of the columns of A.

Solution: If A is m × n, and B is n × p, then the jth column of the matrix AB has the entries

\begin{pmatrix} a_{11} b_{1j} + a_{12} b_{2j} + \cdots + a_{1n} b_{nj} \\ a_{21} b_{1j} + a_{22} b_{2j} + \cdots + a_{2n} b_{nj} \\ \vdots \\ a_{m1} b_{1j} + a_{m2} b_{2j} + \cdots + a_{mn} b_{nj} \end{pmatrix},

and this is the same thing as

b_{1j} col_1(A) + b_{2j} col_2(A) + \cdots + b_{nj} col_n(A).

A similar computation shows that the rows of AB are linear combinations of the rows of B.
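(The column computation can be checked directly, a sketch of mine: column j of AB equals the combination of the columns of A with weights from column j of B.)

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))   # m x n
B = rng.standard_normal((4, 2))   # n x p
j = 1

# Column j of AB as a linear combination of the columns of A.
combo = sum(B[k, j] * A[:, k] for k in range(A.shape[1]))
print(np.allclose((A @ B)[:, j], combo))   # True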

Chapter 12

1. Show that any basis for R^n has precisely n elements. Don’t use any facts about dimension.

Solution: First, consider any set {v_1, v_2, ..., v_n, v_{n+1}} of n + 1 vectors in R^n. Then the system

c_1 v_1 + c_2 v_2 + \cdots + c_{n+1} v_{n+1} = 0

is a homogeneous linear system with more unknowns than equations, so it has non-trivial solutions; any such set must therefore be linearly dependent and cannot form a basis. The same holds for any finite set of vectors with more than n elements, for the same reason. Second, consider any set of n − 1 linearly independent vectors {v_1, v_2, ..., v_{n-1}} ⊂ R^n. If v ∈ R^n is arbitrary, then we have a basis if there exist constants c_i such that

c_1 v_1 + c_2 v_2 + \cdots + c_{n-1} v_{n-1} = v.

Writing this out as a linear system gives us an augmented matrix which is n × n, and we know that the reduced form of this system will have a non-zero leading entry in the last column, unless the entries of the vector v satisfy some linear equation. So we can’t have a basis. Similarly for fewer than n − 1 vectors. So by the first part, no basis can have more than n elements, and by the second part no basis can have fewer than n elements. Therefore, a basis, if it exists, must have exactly n elements. (And we know that such a basis exists, namely the standard basis.)
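(For the first half of the argument, a rank computation makes the dependence visible; this numerical illustration is my addition. Any n + 1 vectors in R^n span a space of dimension at most n, so they are dependent.)

import numpy as np

rng = np.random.default_rng(5)
n = 4
V = rng.standard_normal((n, n + 1))   # n + 1 column vectors in R^n

# The rank is at most n, so the n + 1 columns must be linearly dependent.
print(np.linalg.matrix_rank(V))       # at most 4, fewer than the 5 columns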

Chapter 13

Chapter 14

Chapter 15

1. Fix λ and let E_λ = {x | Ax = λx}. Show that E_λ is a subspace. It’s called the eigenspace corresponding to λ.

Solution: Let x_1 and x_2 ∈ E_λ, and c_1, c_2 ∈ R. Then

A(c_1 x_1 + c_2 x_2) = c_1 A x_1 + c_2 A x_2 = c_1 λ x_1 + c_2 λ x_2 = λ(c_1 x_1 + c_2 x_2).

So c_1 x_1 + c_2 x_2 ∈ E_λ, and thus E_λ is a subspace.
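(A numerical illustration of mine: take two eigenvectors for the same eigenvalue of a sample matrix and check that a combination of them lies in E_λ as well.)

import numpy as np

A = np.diag([2.0, 2.0, 5.0])     # eigenvalue 2 has a two-dimensional eigenspace
lam = 2.0
x1 = np.array([1.0, 0.0, 0.0])   # two eigenvectors for lambda = 2
x2 = np.array([0.0, 1.0, 0.0])

y = 3.0 * x1 - 7.0 * x2          # an arbitrary linear combination
print(np.allclose(A @ y, lam * y))   # True: y is in the eigenspace too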

Chapter 16

1. An arbitrary 2 × 2 symmetric matrix can be written in the form

A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}.

Show that A always has real eigenvalues; when are the two eigenvalues equal?

Solution: Writing out det(A − λI) = 0 gives λ^2 − (a + c)λ + (ac − b^2) = 0, and using the quadratic formula gives (under the radical sign)

(a + c)^2 − 4(ac − b^2) = a^2 − 2ac + c^2 + 4b^2 = (a − c)^2 + 4b^2,

which is clearly non-negative, so the roots are real. They are equal if this quantity vanishes, namely if a = c and b = 0. In this case, A = cI is a scalar multiple of the identity matrix.

2. Let

A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}.

Find the eigenvalues and corresponding eigenvectors of A.

Solution: The characteristic equation is (1 − λ)^2 + 4 = λ^2 − 2λ + 5 = 0, with roots

λ_± = (2 ± \sqrt{4 − 20})/2 = 1 ± 2i.

For λ_+ = 1 + 2i, we have the eigenvector

z_+ = \begin{pmatrix} a \\ b \end{pmatrix},

with (1 − (1 + 2i))a − 2b = −2ia − 2b = 0 ⇒ a = 1, b = −i. And z_− is the complex conjugate of z_+.
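(Both computations can be spot-checked with NumPy's eigenvalue routines, my addition; the entries of the symmetric example are arbitrary.)

import numpy as np

# Problem 1: a real symmetric matrix has real eigenvalues.
a, b, c = 1.3, -0.7, 2.1               # sample values, not from the text
S = np.array([[a, b], [b, c]])
print(np.linalg.eigvals(S))            # two real eigenvalues

# Problem 2: eigenvalues of [[1, -2], [2, 1]] are 1 +/- 2i.
A = np.array([[1.0, -2.0], [2.0, 1.0]])
w, v = np.linalg.eig(A)
print(w)                               # 1+2j and 1-2j (order may vary)
print(v[:, 0] / v[0, 0])               # eigenvector rescaled so a = 1;
                                       # for 1+2i this is [1, -i], up to rounding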

Chapter 17

1. Show that under a change of basis given by the matrix E, the matrix G of the inner product (x, y)_G = x^t G y becomes G_e = E^t G E.

Solution: We have x = E x_e, y = E y_e, and therefore

x^t G y = (E x_e)^t G (E y_e) = x_e^t E^t G E y_e,

so G_e = E^t G E, as claimed.

2. (This should really go in the next chapter.) A matrix E is said to preserve the scalar product determined by the matrix G if E^t G E = G. Find the set of all 2 × 2 matrices that preserve the dot product.

Solution: In this case, since G = I, we have E^t E = I. Let

E = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

Then

E^t E = \begin{pmatrix} a & c \\ b & d \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} a^2 + c^2 & ab + cd \\ ab + cd & b^2 + d^2 \end{pmatrix} = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

The four equations simply state that the two columns of E form an orthonormal basis for R^2. Since a^2 + c^2 = 1, we can write a = cos θ and c = sin θ. Then the two possibilities for the other two entries are b = − sin θ, d = cos θ, or their negatives. The first choice gives det E = 1, while the second gives −1 for the determinant.
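(A check of both problems in NumPy, my sketch: the change-of-basis rule for G, and the rotation form of a dot-product-preserving E. The matrices G and E are sample choices.)

import numpy as np

# Problem 1: (x, y)_G is unchanged when computed in the new basis via G_e = E^t G E.
rng = np.random.default_rng(6)
G = np.array([[2.0, 1.0], [1.0, 3.0]])    # a sample inner-product matrix
E = np.array([[1.0, 1.0], [0.0, 1.0]])    # an invertible change of basis
xe = rng.standard_normal(2)               # coordinates in the new basis
ye = rng.standard_normal(2)
x, y = E @ xe, E @ ye
Ge = E.T @ G @ E
print(np.allclose(x @ G @ y, xe @ Ge @ ye))   # True

# Problem 2: a rotation matrix satisfies E^t E = I and has determinant 1.
t = 0.8
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(R.T @ R, np.eye(2)), np.linalg.det(R))   # True 1.0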

Chapter 18

Chapter 19

1. If E and F are orthogonal and of the same dimension, then EF is orthogonal.

Solution: (EF)^t (EF) = F^t E^t E F = F^t (E^t E) F = F^t F = I. So by definition, EF is orthogonal.

2. (*) If E is orthogonal, then det(E) = ±1.

Solution: If E is orthogonal, then E^t E = I. Taking determinants gives

det(E^t E) = det(E^t) det(E) = (det E)^2 = det(I) = 1.

Therefore, det E = ±1.
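(Continuing the rotation example from Chapter 17, my addition: products of orthogonal matrices stay orthogonal, and determinants come out ±1.)

import numpy as np

def rotation(t):
    # 2x2 rotation matrix: orthogonal with determinant +1
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

E, F = rotation(0.3), rotation(1.1)
EF = E @ F
print(np.allclose(EF.T @ EF, np.eye(2)))     # True: EF is orthogonal

# A reflection is orthogonal with determinant -1.
M = np.array([[1.0, 0.0], [0.0, -1.0]])
print(np.linalg.det(EF), np.linalg.det(M))   # 1.0 -1.0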

Chapter 20

1. Show that the function Π_V : R^n → V is linear.

Solution: We need to show that Π_V(c_1 v_1 + c_2 v_2) = c_1 Π_V(v_1) + c_2 Π_V(v_2). Suppose that {e_1, ..., e_k} is an o.n. basis for V. Then we have

Π_V(c_1 v_1 + c_2 v_2) = \sum_{i=1}^{k} [(c_1 v_1 + c_2 v_2) • e_i] e_i
                       = c_1 \sum_{i=1}^{k} (v_1 • e_i) e_i + c_2 \sum_{i=1}^{k} (v_2 • e_i) e_i
                       = c_1 Π_V(v_1) + c_2 Π_V(v_2),

where, in the second line, we have used the bilinearity of the dot product.
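(A numerical check of linearity, my sketch, for a sample plane V in R^3 with orthonormal basis e_1, e_2; the vectors and constants are arbitrary.)

import numpy as np

# Orthonormal basis for a plane V in R^3 (the xy-plane here).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

def proj_V(v):
    # Pi_V(v) = (v . e1) e1 + (v . e2) e2
    return (v @ e1) * e1 + (v @ e2) * e2

rng = np.random.default_rng(7)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)
c1, c2 = 2.0, -3.5
print(np.allclose(proj_V(c1 * v1 + c2 * v2),
                  c1 * proj_V(v1) + c2 * proj_V(v2)))   # True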