
Lecture 19
Change of Basis

...
Let B = {v1, ..., vn} be a basis for Rn and let

PB = [v1 v2 · · · vn]
...

The components of the vector x are the coordinates of x in the standard basis E = {e1, ..., en}
...

In other words,
[x]E = x
...

We can therefore interpret PB as the matrix that maps the B-coordinates of x to the E-coordinates of x
...

If we multiply the equation
[x]E = (E PB )[x]B
on the left by the inverse of E PB we obtain
(E PB )−1 [x]E = [x]B
Hence, the matrix (E PB)−1 maps standard coordinates to B-coordinates; see Figure 19
...
It is natural then to introduce the notation

B PE = (E PB)−1

[Figure 19...: Change of basis in V = Rn. The matrix B PE = (E PB)−1 maps the vector x to its B-coordinates [x]B.]

Example 19
...
Let

v1 = (1, 0, 0),   v2 = (−3, 4, 0),   v3 = (3, −6, 3),   x = (−8, 2, 3)
...

(b) Find the change-of-coordinates matrix from B to standard coordinates
...


Solution
...
Therefore, B is a basis for R3
...
The
B-coordinate vector [x]B = (c1 , c2 , c3 ) is the unique solution to the linear system
x = PB [x]B
Solving the linear system with augmented matrix [PB x] we obtain
[x]B = (−5, 2, 1)
We verify that [x]B = (−5, 2, 1) are indeed the coordinates of x = (−8, 2, 3) in the basis B = {v1, v2, v3}:

(−5)v1 + (2)v2 + (1)v3 = −5(1, 0, 0) + 2(−3, 4, 0) + (3, −6, 3)
                       = (−5, 0, 0) + (−6, 8, 0) + (3, −6, 3)
                       = (−8, 2, 3)
                       = x
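
This computation is easy to check numerically. The following short Python sketch (assuming NumPy is available; the variable names are our own, not from the notes) solves PB [x]B = x for the B-coordinates:

import numpy as np

# Columns of PB are the basis vectors v1, v2, v3 from the example.
PB = np.array([[1.0, -3.0,  3.0],
               [0.0,  4.0, -6.0],
               [0.0,  0.0,  3.0]])
x = np.array([-8.0, 2.0, 3.0])

# [x]B is the unique solution of PB @ coords = x.
coords = np.linalg.solve(PB, x)
print(coords)                      # [-5.  2.  1.]

# Sanity check: mapping the B-coordinates back recovers x.
assert np.allclose(PB @ coords, x)

Using np.linalg.solve rather than explicitly inverting PB is the idiomatic choice: it is cheaper and numerically more stable.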

...
We now consider the situation of dealing with two bases B and C where neither is assumed to be the standard basis E... Let B = {v1, ..., vn} and let C = {w1, ..., wn}...

Then if [x]C is the coordinate vector of x in the basis C then

x = (E PC)[x]C
...
Then

(E PC)[x]C = (E PB)[x]B

and because E PC is invertible we have that

[x]C = (E PC)−1(E PB)[x]B
...
For this reason, it is natural to use the notation (see Figure 19...)

C PB = (E PC)−1(E PB)

[Figure 19...: In V = Rn, the matrix E PC maps [x]C to x, the matrix E PB maps [x]B to x, and C PB = (E PC)−1(E PB) maps [x]B to [x]C.]

If we expand (E PC)−1(E PB) we obtain

(E PC)−1(E PB) = [ (E PC)−1 v1   (E PC)−1 v2   · · ·   (E PC)−1 vn ]

... each column (E PC)−1 vi is the coordinate vector of vi in the basis C = {w1, ..., wn}
...

Example 19
...
Let

B = {v1, v2} = { (1, −3), (−2, 4) },   C = {w1, w2} = { (−7, 9), (−5, 7) }
It can be verified that B = {v1 , v2 } and C = {w1 , w2 } are bases for R2
...

(b) Find the matrix that takes C-coordinates to B-coordinates
...
Find [x]B and [x]C
...
The matrix E PB = [v1 v2 ] maps B-coordinates to standard E-coordinates
...
As we just showed, the matrix that maps B-coordinates to C-coordinates is

C PB = (E PC)−1(E PB)
It is straightforward to compute that
(E PC)−1 = [ −7/4  −5/4 ]
           [  9/4   7/4 ]

Therefore,

C PB = (E PC)−1(E PB) = [ −7/4  −5/4 ] [  1  −2 ]   [  2  −3/2 ]
                        [  9/4   7/4 ] [ −3   4 ] = [ −3   5/2 ]

To compute B PC, we can simply invert C PB
...
x = (E PC)[x]C
...
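
These change-of-basis computations are easy to reproduce numerically. The following Python sketch (assuming NumPy; the names E_PB, E_PC, and C_PB are ours) builds C PB = (E PC)−1(E PB) directly:

import numpy as np

# E_PB and E_PC have the basis vectors of B and C as columns.
E_PB = np.array([[ 1.0, -2.0],
                 [-3.0,  4.0]])
E_PC = np.array([[-7.0, -5.0],
                 [ 9.0,  7.0]])

# C_PB maps B-coordinates to C-coordinates: C_PB = (E_PC)^(-1) (E_PB).
C_PB = np.linalg.inv(E_PC) @ E_PB
print(C_PB)          # [[ 2.  -1.5]
                     #  [-3.   2.5]]

# B_PC is obtained by inverting C_PB.
B_PC = np.linalg.inv(C_PB)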
After this lecture you should know the following:
• how to compute a change-of-basis matrix
• how to use the change-of-basis matrix to map one set of coordinates into another

Lecture 20
Inner Products and Orthogonality

...
Definition 20...: Let u = (u1, u2, ..., un) and let v = (v1, v2, ..., vn) be vectors in Rn. The inner product of u and v is

u • v = u1 v1 + u2 v2 + · · · + un vn
...


...

Theorem 20...: Let u, v, w ∈ Rn and let α be a scalar. Then
(a) u • v = v • u
(b) (u + v) • w = u • w + v • w
(c) (αu) • v = α(u • v) = u • (αv)
(d) u • u ≥ 0, and u • u = 0 if and only if u = 0

Example 20
...
Let u = (2, −5, −1) and let v = (3, 2, −3)
...

Solution
...


We now define the length or norm of a vector in Rn
...
Definition 20.4: The length or norm of a vector u ∈ Rn is defined as

kuk = √(u • u) = √(u1² + u2² + · · · + un²)
...

Below is an important property of the inner product
...
Theorem 20.5: Let u ∈ Rn and let α be a scalar. Then kαuk = |α| · kuk
...

Proof
...
By Theorem 20... Indeed, suppose that u is non-zero so that kuk ≠ 0... Then by Theorem 20..., the vector

v = (1/kuk) u

satisfies

kvk = kαuk = |α| · kuk = (1/kuk) · kuk = 1

where α = 1/kuk; thus v is a unit vector in the same direction as u, see Figure 20...

Example 20
...
Let u = (2, 3, 6)
...

Solution
...


Then the unit vector that is in the same direction as u is

v = (1/kuk) u = (1/7)(2, 3, 6) = (2/7, 3/7, 6/7)

Verify that kvk = 1:

kvk = √((2/7)² + (3/7)² + (6/7)²) = √(4/49 + 9/49 + 36/49) = √(49/49) = √1 = 1
...
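
As a quick numerical check of the normalization procedure, here is a minimal Python sketch (assuming NumPy; the variable names are ours):

import numpy as np

u = np.array([2.0, 3.0, 6.0])

# The norm ||u|| = sqrt(u . u).
norm_u = np.linalg.norm(u)     # 7.0

# Unit vector in the same direction as u.
v = u / norm_u
print(v)                       # [0.2857... 0.4285... 0.8571...]
print(np.linalg.norm(v))       # 1.0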

Definition 20...: Let u, v ∈ Rn. The distance between u and v is the length of the vector u − v... In other words,

d(u, v) = ku − vk

Example 20.8...

Solution
...



...
Below is the general definition
...
Definition 20.9: Two vectors u and v in Rn are said to be orthogonal if u • v = 0
...
In fact, using the Law of Cosines in R2 or R3, one can prove that

u • v = kuk · kvk cos(θ)    (20.1)

where θ is the angle between u and v. If θ = π/2 then clearly u • v = 0... We can use equation (20.1) to define the angle between vectors u and v in Rn:

θ = arccos( (u • v) / (kuk · kvk) )

The general notion of orthogonality in Rn leads to the following theorem from grade school
...
Theorem 20.10: (Pythagorean Theorem) Two vectors u and v are orthogonal if and only if ku + vk² = kuk² + kvk²
...
First recall that ku + vk = √((u + v) • (u + v)) and therefore

ku + vk² = (u + v) • (u + v)
         = u • u + u • v + v • u + v • v
         = kuk² + 2(u • v) + kvk²
...

We now introduce orthogonal sets
...
Definition 20.11: A set of vectors {u1, u2, ..., up} in Rn is called an orthogonal set if ui • uj = 0 whenever i ≠ j
...

In the following theorem we prove that orthogonal sets are linearly independent
...
Theorem 20.12: Let {u1, u2, ..., up} be an orthogonal set of non-zero vectors in Rn. Then the set {u1, u2, ..., up} is linearly independent. In particular, if p = n then the set {u1, u2, ..., un} is a basis for Rn
...

Proof
... Suppose there are scalars c1, c2, ..., cp such that

c1 u1 + c2 u2 + · · · + cp up = 0
...
Take the inner product of u1 with both sides. Since the set is orthogonal, the left-hand side of the last equation simplifies to c1 (u1 • u1)... Hence,

c1 (u1 • u1) = 0

and since u1 is non-zero, u1 • u1 ≠ 0 and thus c1 = 0...

Repeat the above steps using u2, u3, ..., up to conclude that c2 = c3 = · · · = cp = 0... Therefore, {u1, ..., up} is linearly independent... In particular, if p = n, the linearly independent set {u1, ..., up} is automatically a basis for Rn
...
Example 20.13: Let u1 = (1, −2, 1), u2 = (0, 1, 2), and u3 = (−5, −2, 1)... Compute

u1 • u2 = (1)(0) + (−2)(1) + (1)(2) = 0
u1 • u3 = (1)(−5) + (−2)(−2) + (1)(1) = 0
u2 • u3 = (0)(−5) + (1)(−2) + (2)(1) = 0

Therefore, {u1, u2, u3} is an orthogonal set... By Theorem 20.12, the set {u1, u2, u3} is linearly independent
...
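
The pairwise inner products can also be checked with NumPy. A minimal sketch, with the vectors taken from the example:

import numpy as np

u1 = np.array([ 1.0, -2.0, 1.0])
u2 = np.array([ 0.0,  1.0, 2.0])
u3 = np.array([-5.0, -2.0, 1.0])

# All pairwise inner products vanish, so the set is orthogonal.
for a, b in [(u1, u2), (u1, u3), (u2, u3)]:
    print(np.dot(a, b))        # 0.0 each time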


We now introduce orthonormal sets
...
Definition 20.14: A set of vectors {u1, u2, ..., up} is called an orthonormal set if it is an orthogonal set and each vector has unit length, kuik = 1
...

Consider the previous orthogonal set in R3:

{u1, u2, u3} = { (1, −2, 1), (0, 1, 2), (−5, −2, 1) }

This set is orthogonal but is not an orthonormal set: explicitly, ku1k = √6, ku2k = √5, and ku3k = √30... Hence, the set

{v1, v2, v3} = { (1/√6, −2/√6, 1/√6), (0, 1/√5, 2/√5), (−5/√30, −2/√30, 1/√30) }

is an orthonormal set
...
20.3 Coordinates in an Orthonormal Basis

As we will see in this section, a basis B = {u1, u2, ..., un} that is also an orthonormal set is especially convenient to work with... To see why, let x be any vector in Rn and suppose we want to find the coordinates of x in the basis B, that is, we seek to find [x]B = (c1, c2, ..., cn)... By definition, the coordinates c1, c2, ..., cn satisfy

x = c1 u1 + c2 u2 + · · · + cn un

Taking the inner product of u1 with both sides of the above equation and using the fact that u1 • u2 = 0, u1 • u3 = 0, ..., u1 • un = 0, we obtain

u1 • x = c1 (u1 • u1) = c1 (1) = c1

where we also used the fact that u1 is a unit vector... Repeating this computation with u2, ..., un we obtain the remaining coefficients c2, ..., cn...

Our previous computation proves the following theorem
...
Theorem 20.15: Let B = {u1, u2, ..., un} be an orthonormal basis for Rn and let x ∈ Rn... The coordinate vector of x in the basis B is

[x]B = (u1 • x, u2 • x, ..., un • x)
Hence, computing coordinates with respect to an orthonormal basis can be done without
performing any row operations and all we need to do is compute inner products! We make
the important observation that an alternate expression for [x]B is

  
       [ u1 • x ]   [ u1T ]
[x]B = [ u2 • x ] = [ u2T ] x = UT x
       [  ...   ]   [ ... ]
       [ un • x ]   [ unT ]

where U = [u1 u2 · · · un]
...
If we compare the two identities
[x]B = U−1 x and [x]B = UT x

we suspect then that U−1 = UT
...
To see this, let B = {u1, u2, ..., un} be an orthonormal basis for Rn and let U = [u1 u2 · · · un]...

Consider the matrix product UTU, and recalling that ui • uj = uiT uj, we obtain

      [ u1T ]                      [ u1Tu1  u1Tu2  · · ·  u1Tun ]
UTU = [ u2T ] [u1 u2 · · · un]  =  [ u2Tu1  u2Tu2  · · ·  u2Tun ]
      [ ... ]                      [  ...    ...    ...    ...  ]
      [ unT ]                      [ unTu1  unTu2  · · ·  unTun ]

    = In

since uiT ui = kuik² = 1 and uiT uj = 0 for i ≠ j
...

A matrix U ∈ Rn×n such that

UTU = UUT = In

is called an orthogonal matrix... Hence, if {u1, ..., un} is an orthonormal set then the matrix

U = [u1 u2 · · · un]

is an orthogonal matrix
...
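
The identity UTU = UUT = In can be verified numerically. The sketch below (assuming NumPy) normalizes the orthogonal set from the earlier example and checks that the resulting matrix is orthogonal:

import numpy as np

# Normalize the orthogonal set {u1, u2, u3} from above and stack the
# resulting unit vectors as the columns of U.
u1 = np.array([ 1.0, -2.0, 1.0])
u2 = np.array([ 0.0,  1.0, 2.0])
u3 = np.array([-5.0, -2.0, 1.0])
U = np.column_stack([u / np.linalg.norm(u) for u in (u1, u2, u3)])

# For an orthogonal matrix, U^T U = U U^T = I.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(U @ U.T, np.eye(3))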
Example 20.16...

(a) Show that {v1, v2, v3} is an orthogonal basis for R3
...
(c) For the given x find [x]B
...
(a) We compute that v1 • v2 = 0, v1 • v3 = 0, and v2 • v3 = 0, and thus {v1, v2, v3} is an orthogonal set
...
(b) We compute that kv1k = 2, kv2k = √18, and kv3k = 3
...
(c) Finally, computing coordinates in an orthonormal basis is easy:

[x]B = (u1 • x, u2 • x, u3 • x) = (0, 2/√18, 5/3)
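
In code, computing coordinates in an orthonormal basis is a single matrix-vector product. A minimal Python sketch (assuming NumPy; it reuses the orthonormal basis built earlier, and the vector x is a sample of our own choosing, not from the notes):

import numpy as np

# Orthonormal basis from the earlier example, as columns of U.
U = np.column_stack([
    np.array([ 1.0, -2.0, 1.0]) / np.sqrt(6.0),
    np.array([ 0.0,  1.0, 2.0]) / np.sqrt(5.0),
    np.array([-5.0, -2.0, 1.0]) / np.sqrt(30.0),
])

x = np.array([1.0, 2.0, 3.0])      # sample vector (our choice)

# No row reduction needed: [x]_B = U^T x.
coords = U.T @ x
assert np.allclose(U @ coords, x)  # mapping back recovers x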

Example 20
...
The standard unit basis

E = {e1, e2, e3} = { (1, 0, 0), (0, 1, 0), (0, 0, 1) }

in R3 is an orthonormal basis
...
On the other
hand, clearly
x1 = x • e1
x2 = x • e2
x3 = x • e3
Example 20
...
(Orthogonal Complements) Let W be a subspace of Rn
...
Using set notation:
W⊥ = {u ∈ Rn : u • w = 0 for every w ∈ W}
...

(b) Let w1 = (0, 1, 1, 0), let w2 = (1, 0, −1, 0), and let W = span{w1 , w2 }
...

Solution
...
Thus, 0 ∈ W⊥
...
Then for any vector w ∈ W it holds that
(u1 + u2 ) • w = u1 • w + u2 • w = 0 + 0 = 0
...
Lastly, let α be any scalar and let u ∈ W⊥
...


Therefore, αu is orthogonal to w and since w is an arbitrary vector in W then (αu) ∈ W⊥
...

(b) A vector u = (u1, u2, u3, u4) is in W⊥ if u • w1 = 0 and u • w2 = 0
...
The general solution to the linear system is

u = t (1, −1, 1, 0) + s (0, 0, 0, 1)
...
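
This null-space computation can be done exactly with SymPy. A minimal sketch (assuming SymPy is installed; the variable M is our name for the coefficient matrix whose rows are w1 and w2):

import sympy as sp

# Rows are w1 and w2; W-perp is the null space of this matrix.
M = sp.Matrix([[0, 1,  1, 0],
               [1, 0, -1, 0]])
print(M.nullspace())
# [Matrix([[1], [-1], [1], [0]]), Matrix([[0], [0], [0], [1]])]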


After this lecture you should know the following:
• how to compute inner products, norms, and distances
• how to normalize vectors to unit length
• what orthogonality is and how to check for it
• what an orthogonal and orthonormal basis is
• the advantages of working with an orthonormal basis when computing coordinate vectors

Lecture 21
Eigenvalues and Eigenvectors

...
In some cases, the new output vector Ax is simply
a scalar multiple of the input vector x, that is, there exists a scalar λ such that Ax = λx
...

Definition 21...: Let A ∈ Rn×n and let v be a non-zero vector.
If Av = λv
for some scalar λ then we call the vector v an eigenvector of A and we call the scalar λ
an eigenvalue of A corresponding to v
...

Eigenvectors are by definition nonzero vectors because A0 is clearly a scalar multiple of 0 and then it is not clear what the corresponding eigenvalue should be
...
Example 21.2: Let

A = [ 4 −1  6 ]
    [ 2  1  6 ] ,   v = (−3, 0, 1),   u = (−1, 2, 1)
    [ 2 −1  8 ]
...
Compute

Av = [ 4 −1  6 ] [ −3 ]   [ −6 ]       [ −3 ]
     [ 2  1  6 ] [  0 ] = [  0 ]  =  2 [  0 ]  = 2v
     [ 2 −1  8 ] [  1 ]   [  2 ]       [  1 ]

Hence, Av = 2v and thus v is an eigenvector of A with corresponding eigenvalue λ = 2
...

On the other hand,

Au = A(−1, 2, 1) = (0, 6, 4)

There is no scalar λ such that

(0, 6, 4) = λ(−1, 2, 1)
...
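
The eigenvector check Av = 2v is immediate in NumPy. A minimal sketch with the matrix and vectors from the example:

import numpy as np

A = np.array([[4.0, -1.0, 6.0],
              [2.0,  1.0, 6.0],
              [2.0, -1.0, 8.0]])
v = np.array([-3.0, 0.0, 1.0])
u = np.array([-1.0, 2.0, 1.0])

print(A @ v)   # [-6.  0.  2.] = 2*v, so v is an eigenvector for lambda = 2
print(A @ u)   # [0. 6. 4.], not a scalar multiple of u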

Example 21...
Is v an eigenvector of the given matrix A?

Solution
... Compute

Av = 0 = 0v
...
Therefore, v is an eigenvector of A with
corresponding eigenvalue λ = 0
...
In this section, however, we will instead suppose that we have already found
the eigenvalues of A and concern ourselves with finding the associated eigenvectors
...
How do we find an eigenvector v corresponding
to the eigenvalue λ? To answer this question, we note that if v is to be an eigenvector of A
with eigenvalue λ then v must satisfy the equation
Av = λv

or equivalently, Av − λv = 0, that is, (A − λI)v = 0
...

The last equation says that if v is to be an eigenvector of A with eigenvalue λ then v must
be in the null space of A − λI:
v ∈ Null(A − λI)
...

Recall that the null space of any matrix is a subspace and for this reason we call the subspace
Null(A − λI) the eigenspace of A corresponding to λ
...
Example 21.4: Let

A = [ −4  6  3 ]
    [  1  7  9 ]
    [  8 −6  1 ]

Find a basis for the eigenspace of A corresponding to λ = 4
...
First compute

A − 4I = [ −4  6  3 ]   [ 4 0 0 ]   [ −8  6  3 ]
         [  1  7  9 ] − [ 0 4 0 ] = [  1  3  9 ]
         [  8 −6  1 ]   [ 0 0 4 ]   [  8 −6 −3 ]

Find a basis for the null space of A − 4I:

[ −8  6  3 ]           [  1  3  9 ]                      [ 1   3   9 ]
[  1  3  9 ]  R1↔R2→   [ −8  6  3 ]  8R1+R2, −8R1+R3→    [ 0  30  75 ]
[  8 −6 −3 ]           [  8 −6 −3 ]                      [ 0 −30 −75 ]

Finally,

[ 1   3   9 ]           [ 1  3  9 ]
[ 0  30  75 ]  R2+R3→   [ 0 30 75 ]
[ 0 −30 −75 ]           [ 0  0  0 ]

Hence, the general solution to the homogeneous system (A − 4I)x = 0 is

x = t (−3/2, −5/2, 1)

where t is an arbitrary scalar
...
The vector v = (−3/2, −5/2, 1) is of course an eigenvector of A with eigenvalue λ = 4 and also (of course) any multiple of v is also an eigenvector of A with λ = 4
...
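
The eigenspace basis can be computed exactly with SymPy, which performs the same row reduction. A minimal sketch:

import sympy as sp

A = sp.Matrix([[-4, 6, 3],
               [ 1, 7, 9],
               [ 8, -6, 1]])

# The eigenspace for lambda = 4 is the null space of A - 4I.
basis = (A - 4 * sp.eye(3)).nullspace()
print(basis)   # [Matrix([[-3/2], [-5/2], [1]])]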
Example 21.5: Let

A = [ 11 −4 −8 ]
    [  4  1 −4 ]
    [  8 −4 −5 ]

Find the eigenspace of A corresponding to λ = 3
...
First compute

A − 3I = [ 11 −4 −8 ]   [ 3 0 0 ]   [ 8 −4 −8 ]
         [  4  1 −4 ] − [ 0 3 0 ] = [ 4 −2 −4 ]
         [  8 −4 −5 ]   [ 0 0 3 ]   [ 8 −4 −8 ]

Now find the null space of A − 3I:

[ 8 −4 −8 ]           [ 4 −2 −4 ]                       [ 4 −2 −4 ]
[ 4 −2 −4 ]  R1↔R2→   [ 8 −4 −8 ]  −2R1+R2, −2R1+R3→    [ 0  0  0 ]
[ 8 −4 −8 ]           [ 8 −4 −8 ]                       [ 0  0  0 ]

Hence, any vector in the null space of A − 3I can be written as

x = t1 (1, 0, 1) + t2 (1, 2, 0)

Therefore, the eigenspace of A corresponding to λ = 3 is

Null(A − 3I) = span{v1, v2} = span{ (1, 0, 1), (1, 2, 0) }
...

Therefore {v1 , v2 } is a basis for the eigenspace of A with eigenvalue λ = 3
...
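
NumPy's eigenvalue routine confirms that λ = 3 occurs twice for this matrix (the remaining eigenvalue works out to 1). A minimal sketch:

import numpy as np

A = np.array([[11.0, -4.0, -8.0],
              [ 4.0,  1.0, -4.0],
              [ 8.0, -4.0, -5.0]])

# lambda = 3 appears twice, so its eigenspace is two-dimensional.
eigvals, eigvecs = np.linalg.eig(A)
print(np.round(eigvals, 6))   # approximately [3, 1, 3], up to ordering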

As shown in the last example, there may exist more than one linearly independent eigenvector of A corresponding to the same eigenvalue; in other words, it is possible that the dimension of the eigenspace Null(A − λI) is greater than one
...
Theorem 21.6: Let v1, ..., vk be eigenvectors of A corresponding to distinct eigenvalues λ1, ..., λk of A. Then {v1, ..., vk} is a linearly independent set.

Proof... Suppose by contradiction that {v1, ..., vk} is linearly dependent... The eigenvalues {λ1, ..., λk} are distinct... Let p be the largest index such that {v1, ..., vp} is linearly independent, and write

vp+1 = c1 v1 + c2 v2 + · · · + cp vp    (21.1)

Multiplying equation (21.1) by A gives

λp+1 vp+1 = c1 λ1 v1 + c2 λ2 v2 + · · · + cp λp vp    (21.2)

Now multiply equation (21.1) by λp+1:

λp+1 vp+1 = c1 λp+1 v1 + c2 λp+1 v2 + · · · + cp λp+1 vp    (21.3)

Now subtract equations (21.2) and (21.3):

0 = c1 (λ1 − λp+1)v1 + c2 (λ2 − λp+1)v2 + · · · + cp (λp − λp+1)vp

The set {v1, ..., vp} is linearly independent and thus ci (λi − λp+1) = 0... The eigenvalues {λ1, ..., λk} are all distinct and so we must have c1 = c2 = · · · = cp = 0... From (21.1) this implies that vp+1 = 0, which is a contradiction because eigenvectors are by definition non-zero... Therefore {v1, ..., vk} is a linearly independent set
...
Example 21.7: Consider again the matrix

A = [ −4  6  3 ]
    [  1  7  9 ]
    [  8 −6  1 ]

from Example 21.4, with eigenvalues λ1 = 1 and λ2 = −1. Find bases for the eigenspaces corresponding to λ1 and λ2 and show that any two vectors from these distinct eigenspaces are linearly independent
...
Compute

A − λ1 I = [ −5  6  3 ]
           [  1  6  9 ]
           [  8 −6  0 ]

and one finds that

Null(A − λ1 I) = span{ (−3, −4, 3) }
Hence, v1 = (−3, −4, 3) is an eigenvector of A with eigenvalue λ1 = 1, and {v1 } forms a
basis for the corresponding eigenspace
...
Now verify that v1 = (−3, −4, 3) and v2 = (−1, −1, 1) are linearly independent:

[v1 v2] = [ −3 −1 ]           [ −3 −1 ]
          [ −4 −1 ]  R1+R3→   [ −4 −1 ]
          [  3  1 ]           [  0  0 ]
The last matrix has rank r = 2, and thus v1 , v2 are indeed linearly independent
...
21.2 When λ = 0 is an eigenvalue

What can we say about A if λ = 0 is an eigenvalue of A? Suppose then that A has eigenvalue λ = 0... Then there is a non-zero vector v such that Av = 0v = 0. In other words, v is in the null space of A... Hence A has a non-trivial null space and is therefore not invertible.

Theorem 21...: The matrix A is invertible if and only if λ = 0 is not an eigenvalue of A
...

In fact, later we will see that det(A) is the product of its eigenvalues
...
Lecture 22
The Characteristic Polynomial of a Matrix

Recall that a number λ is an eigenvalue of A ∈ Rn×n if there exists a non-zero vector v such
that
Av = λv
or equivalently if v ∈ Null(A − λI)
...
We know that
any matrix M has a non-trivial null space if and only if M is non-invertible if and only if
det(M) = 0
...
Let’s compute the expression det(A − λI) for a generic 2 × 2 matrix:

det(A − λI) = det [ a11 − λ    a12   ]
                  [   a21    a22 − λ ]

            = (a11 − λ)(a22 − λ) − a12 a21

            = λ² − (a11 + a22)λ + (a11 a22 − a12 a21)
...
This motivates the following definition
...
Definition 22.1: Let A be an n × n matrix. The characteristic polynomial of A is defined as p(λ) = det(A − λI)
...

In summary, to find the eigenvalues of A we must find the roots of the characteristic polynomial:
p(λ) = det(A − λI)
...

Theorem 22...: For any n × n matrix A, the characteristic polynomial p(λ) = det(A − λI) is a polynomial of degree n
...
Proof
... Therefore, the claim holds for n = 2
...
If A is an (n + 1) × (n + 1) matrix then expanding det(A − λI) along the first row:

det(A − λI) = (a11 − λ) det(A11 − λI) + Σ (−1)^(1+k) a1k det(A1k − λI),   k = 2, ..., n + 1

... By induction, det(A11 − λI) is a polynomial of degree n. Hence, (a11 − λ) det(A11 − λI) is an (n + 1)th degree polynomial
...

Example 22...: Find the characteristic polynomial of

A = [ −2  4 ]
    [  ...  ]
...
Compute

det(A − λI) = det [ −2 − λ   4 ]  = ...
                  [    ...     ]

Therefore, the eigenvalues of A are λ1 = 4 and λ2 = 2
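
In practice, characteristic polynomials and their roots are computed numerically. The sketch below (assuming NumPy) uses the 3 × 3 matrix from Example 21.4 above, whose eigenvalues we found to be 4, 1, and −1:

import numpy as np

# Matrix from Examples 21.4 and 21.7.
A = np.array([[-4.0,  6.0, 3.0],
              [ 1.0,  7.0, 9.0],
              [ 8.0, -6.0, 1.0]])

# np.poly returns the coefficients of the characteristic
# polynomial det(lambda*I - A).
coeffs = np.poly(A)
print(np.round(coeffs, 6))   # [ 1. -4. -1.  4.] -> lambda^3 - 4 lambda^2 - lambda + 4
print(np.roots(coeffs))      # eigenvalues 4., 1., -1. (up to ordering)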