Search for notes by fellow students, in your own course and all over the country.
Browse our notes for titles which look like what you need, you can preview any of the notes via a sample of the contents. After you're happy these are the notes you're after simply pop them into your shopping cart.
Title: Introduction to Linear Algebra with Applications by Jim DeFranza (Solution Manual)
Description: Complete Solution Manual from cover to cover
Description: Complete Solution Manual from cover to cover
Document Preview
Extracts from the notes are below, to see the PDF you'll receive please use the links above
Instructor Solutions Manual
for
Introduction to
Linear Algebra with Applications
Jim DeFranza
Contents
1 Systems of Linear Equations and Matrices
Exercise Set 1
...
Exercise Set 1
...
Exercise Set 1
...
Exercise Set 1
...
Exercise Set 1
...
Exercise Set 1
...
Exercise Set 1
...
8 Applications of Systems of Linear Equations
Review Exercises
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
1
1
7
11
15
19
22
27
32
37
40
2 Linear Combinations and Linear Independence
Exercise Set 2
...
Exercise Set 2
...
Exercise Set 2
...
Review Exercises
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
1 Definition of a Vector Space
...
2 Subspaces
...
3 Basis and Dimension
...
4 Coordinates and Change of Basis
...
5 Application: Differential Equations
Review Exercises
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
1 Linear Transformations
...
2 The Null Space and Range
...
3 Isomorphisms
...
4 Matrix Transformation of a Linear Transformation
Exercise Set 4
...
Exercise Set 4
...
Review Exercises
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
88
88
93
98
101
106
110
113
116
...
Equations
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
118
118
123
128
130
5 Eigenvalues
Exercise Set
Exercise Set
Exercise Set
Exercise Set
and Eigenvectors
5
...
5
...
5
...
4 Application: Markov Chains
...
132
Chapter Test
...
1 The Dot Product on Rn
...
2 Inner Product Spaces
...
3 Orthonormal Bases
...
4 Orthogonal Complements
...
5 Application: Least Squares Approximation
...
6 Diagonalization of Symmetric Matrices
...
7 Application: Quadratic Forms
...
8 Application: Singular Value Decomposition
Review Exercises
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
137
137
140
144
151
157
161
165
166
168
171
A Preliminaries
Exercise Set A
...
2
Exercise Set A
...
4
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
173
173
174
177
178
Algebra of Sets
...
Techniques of Proof
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
1 Systems of Linear Equations
1
Solutions to All Exercises
1
Systems of Linear Equations and
Matrices
Exercise Set 1
...
1 of the text, Gaussian Elimination is used to solve a linear system
...
Equivalent means that the linear systems have the same solutions
...
• Multiply any equation by a nonzero constant
...
When used judiciously these three operations allow us to reduce a linear system to a triangular linear system,
which can be solved
...
Every linear system has either a unique solution, infinitely many solutions or no solutions
...
In the second linear system,
the variable x3 is a free variable, and once assigned any real number the values of x1 and x2 are determined
...
If a linear system has the same form as the second
system, but also has the additional equation 0 = 0, then the linear system will still have free variables
...
In some cases, the conditions on the
right hand side of a linear system are not specified
...
2x1 + 2x2 + x3 = b which is equivalent to
x3 = b + 2a
⎪
−− − − − − − −→⎪
−−−−−−−− ⎩
⎩
2x3 = c
0
= c − 2b − 4a
This linear system is consistent only for values a, b and c such that c − 2b − 4a = 0
...
Applying the given operations we obtain the equivalent triangular system
⎧
⎪ x1 − x2 − 2x3
⎨
−x + 2x2 + 3x3
⎪ 1
⎩
2x1 − 2x2 − 2x3
⎧
⎪
=3
⎨ x1 − x2 − 2x3
= 1 E1 + E2 → E2
x2 + x3
− − − − − →⎪
−−−−− ⎩
= −2
2x1 − 2x2 − 2x3
=3
= 4 (−2)E1 + E3 → E3
−− − − − − −→
−−−−−−−
= −2
2
Chapter 1 Systems of Linear Equations and Matrices
⎧
⎪ x1 − x2 − 2x3
⎨
x2 + x3
⎪
⎩
2x3
=3
= 4
...
2
...
x2 − x3
= 4 (−4)E2 + E3 → E3
x2 − x3 = 4
⎪
−− − − − − −→⎪
−−−−−−− ⎩
⎩
4x2 − 3x3 = 1
x3 = −15
Using back substitution, the linear system has the unique solution x1 = −20, x2 = −11, x3 = −15
...
Applying the given operations we obtain the equivalent triangular system
⎧
+ 3x4
⎪x1
⎪
⎪
⎨x + x
+ 4x4
1
2
⎪2x1
+ x3 + 8x4
⎪
⎪
⎩
x1 + x2 + x3 + 6x4
⎧
=2
+ 3x4
=2
⎪x1
⎪
⎪
⎨
=3
x2
+ x4
=1
(−1)E1 + E2 → E2
−−−−−−− ⎪
= 3 − − − − − − − → ⎪2x1
+ x3 + 8x4 = 3
⎪
⎩
=2
x1 + x2 + x3 + 6x4 = 2
⎧
+ 3x4
=2
⎪x1
⎪
⎪
⎨
x2
+ x4
=1
(−2)E1 + E3 → E3
(−1)E1 + E4 → E4
−− − − − − −→⎪
−−−−−−− ⎪
−−−−−−−
+x3 + 2x4 = −1 − − − − − − − →
⎪
⎩
x1 + x2 + x3 + 6x4 = 2
⎧
⎧
+ 3x4 = 2
+ 3x4 = 2
⎪x1
⎪x1
⎪
⎪
⎪
⎪
⎨
⎨
x2
+ x4 = 1
x2
+ x4 = 1
(−1)E2 + E4 → E4
⎪
−−−−−−− ⎪
+x3 + 2x4 = −1 − − − − − − − → ⎪
x3 + 2x4
= −1
⎪
⎪
⎪
⎩
⎩
x2 + x3 + 3x4 = 0
x3 + 2x4
= −1
⎧
+ 3x4 = 2
⎪x1
⎪
⎪
⎨
x2
+ x4 = 1
(−1)E3 + E4 → E4
...
As a
result there are infinitely many solutions
...
4
...
1 Systems of Linear Equations
3
⎧
⎪x1
⎨
+ x3
= −2
x2 + 3x3 = 1
...
5
...
Substituting the value x = 0 into the first equation,
we have that y = − 2
...
3
3
6
...
7
...
Hence,
the linear system has the unique solution x = 1, y = 0
...
The operation 3E2 + E1 → E1 gives 5x = −1, so x = − 1
...
5
5
9
...
Since each equation has infinitely many solutions the linear system has infinitely many solutions with solution
2t+4
set S =
t∈R
...
Since the first equation is −3 times the second, the equations describe the same line and hence, there are
infinitely many solutions, given by x = 5 y + 1 , y ∈ R
...
The operations E1 ↔ E3 , E1 + E2 → E2 , 3E1 + E3 → E3
to the equivalent triangular system
⎧
⎪ x − 2y + z
⎨
−5y + 2z
⎪
⎩
9
5z
and − 8 E2 + E3 → E3 , reduce the linear system
5
= −2
= −5
...
12
...
= 1
2
So the unique solution is x = 0, y = 1 , z = 1
...
The operations E1 ↔ E2 , 2E1 + E2 → E2 , −3E1 + E3 → E3 and E2 + E3 → E3 , reduce the linear system
to the equivalent triangular system
⎧
⎪ x
+ 5z = −1
⎨
−2y + 12z = −1
...
Reducing the linear system gives
⎧
⎪−x + y + 4z
⎨
3x − y + 2z
⎪
⎩
2x − 2y − 8z
⎧
⎪−x + y + 4z
= −1
⎨
to
= 2 reduces−
2y + 14z
−− − → ⎪
−−− ⎩
=2
2x − 2y − 8z
−1 − 5t, 6t + 1 , t
2
t∈R
...
=0
There are infinitely many solutions with solution set S = −3t + 1 , −7t − 1 , t | t ∈ R
...
Adding the two equations yields 6x1 + 6x3 = 4, so that x1 = 2 − x3
...
The linear system has infinitely many solutions with solution set
2
S = −t + 2 , − 1 , t t ∈ R
...
Reducing the linear system gives
4
Chapter 1 Systems of Linear Equations and Matrices
−2x1 + x2
3x1 − x2 + 2x3
=2
−2x1 + x2
reduces to
− − −→
= 1 −− − −
x2 + 4x3
=2
...
17
...
Hence, the linear system has two
free variables, x3 and x4
...
3
3
18
...
The solution set is a two parameter family given by
S = 1 s − 3t + 5 , 3t − 2, s, t | s, t ∈ R
...
The operation −2E1 + E2 → E2 gives x = b − 2a
...
20
...
21
...
22
...
23
...
24
...
25
...
26
...
3
=a
6x − 3y
reduces →
− − − to
= b −− − −
0
=a
,
= 1a + b
3
1
...
The linear system is equivalent to the triangular linear system
⎧
⎪ x − 2y + 4z = a
⎨
5y − 9z = −2a + b
⎪
⎩
0 =c−a−b
and hence, is consistent for all a, b, and c such that c − a − b = 0
...
Since
⎧
⎪ x − y + 2z
⎨
2x + 4y − 3z
⎪
⎩
4x + 2y + z
⎧
⎪ x
=a
⎨
to
= b reduces−
−− − → ⎪
−−− ⎩
=c
− y + 2z
6y − 7z
0
=a
,
= b − 2a
= c − 2a − b
the linear system is consistent if c − 2a − b = 0
...
The operation −2E1 + E2 → E2 gives the equivalent linear system
x+y
(a − 2)y
= −2
...
30
...
Notice that if a = −6, then 2 − 2a ̸= 0
...
The operation −3E1 + E2 → E2 gives the equivalent linear system
x−y
0
=2
...
32
...
33
...
25
⎨
a + b + c = −1
...
⎪
⎩
a − b + c = 4
...
2
1
4
= x−
3 2
2
− 2
...
To find the parabola y = ax2 + bx + c that passes through the specified points we solve the linear system
⎧
⎪
⎨
c
9a − 3b + c
⎪
⎩
0
...
5b + c
=2
= −1
...
75
6
Chapter 1 Systems of Linear Equations and Matrices
The unique solution is a = −1, b = −2 and c = 2, so the parabola is y = −x2 − 2x + 2 = −(x + 1)2 + 3
...
35
...
5)2 a − (0
...
25
⎨
...
3) a + (2
...
91
The unique solution is a = −1, b = −4, and c = −1, so the parabola is y = −x2 + 4x − 1 = −(x − 2)2 + 3
...
36
...
= 5525
The unique solution is a = 2800, b = −5600, and c = −2875, so the parabola is y = 2800x2 − 5600x − 2875
...
The y coordinate of the vertex is y = −5675
...
a
...
found by solving the linear system
y
⎧
5
⎪−x + y
=1
⎨
−6x + 5y = 3
...
25
38
...
The point of intersection of the three lines can be b
...
⎪3x + y = 1
⎪
⎪
⎩
4x + y = 2
This linear system has the unique solution (1, −2)
...
a
...
x+y
x−y
y
5
25
25
=2
has the unique solution x = 1 and y = 1
...
Notice that the second equation is twice the first and the equations represent the same
line
...
The linear system
is inconsistent
...
Using the operations dE1 → E1 , bE2 → E2 , followed by (−1)E2 + E1 → E1 gives (ad − bc)x = dx1 − bx2
...
In a similar way, we have that y = ax2 −cx1
...
The linear system
41
...
S = {(3 − 2s − t, 2 + s − 2t, s, t) | s, t ∈ R} b
...
2 Matrices and Elementary Row Operations
7
42
...
Let x4 = s, and x5 = t, so that x3 = 2 + 2s − 3t, x2 = −1 + s + t, and x1 = −2 + 3t
...
Let x3 = s
and x5 = t, so that x4 = −1 + 1 s + 3 t, x2 = −2 + 1 s + 5 t, and x1 = −2 + 3t
...
Applying kE1 → E1 , 9E2 → E2 , and −E1 + E2 → E2 gives the equivalent linear system
9kx +
k2 y
(9 − k 2 )y
= 9k
...
a
...
b
...
c
...
44
...
=0
a
...
b
...
c
...
Exercise Set 1
...
Reducing a linear system to
triangular form is then equivalent to row reducing the augmented matrix corresponding to the linear system
to a triangular matrix
...
is ⎣ 2
2x + 2x2 + x3 − 2x4 = 2
⎪ 1
⎩
1 −2 1
2 −2
x1 − 2x2 + x3 + 2x4
= −2
The coefficient matrix is the 3 × 4 matrix consisting of the coefficients of each variable, that is, the augmented
⎡
⎤
1
matrix with the augmented column ⎣ 2 ⎦ deleted
...
Reducing the linear system using the three valid operations is equivalent to reducing
the augmented matrix to a triangular matrix using the row operations:
• Interchange two rows
...
• Add a multiple of one row to another
...
The framed terms
are the pivots of the matrix
...
In this example, the free variable is x4 and x1 , x2 , and x3 depend on x4
...
For a linear system with the same number of equations as variables, there will be a
unique solution if and only if the coefficient matrix can be row reduced to the matrix with each diagonal entry
1 and all others 0
...
⎡
2 −3 5
−1 1 −3
2
3
...
⎡
2
7
...
⎣ 0 0 −4 0 ⎦
−4 2 −3 1
2
...
⎤
4
2
2 −2
−2 −3 −2 2 ⎦
3
3 −3 −4
⎡
9
...
2
11
...
There are infinitely many solutions given
by x = −3 − 2z, y = 2 + z, z ∈ R
...
The variable z = 2 and y is a free variable,
so the linear system has infinitely many solutions
given by x = −3 + 2y, z = 2, y ∈ R
...
The last row of the matrix represents the
impossible equation 0 = 1, so the linear system is
inconsistent
...
The linear system is consistent with free
variables z and w
...
19
...
21
...
23
...
25
...
4 1 −1 1
4 −4 2 −2
3 0
8
...
The linear system has the unique solution
2
x = 2, y = 0, z = − 3
...
The linear system is consistent with free variable z
...
3
3
14
...
16
...
18
...
The solutions are given by
x = 1 − 3y + 3z, w = 4, y ∈ R, z ∈ R
...
The linear system has infinitely many solutions given by x = −1 − 2 z, y = 1 + 3z, w = 4 , z ∈
5
5
R
...
The matrix is in reduced row echelon form
...
Since the pivot in row two is not a one, the
matrix is not in reduced row echelon form
...
The matrix is in reduced row echelon form
...
Since the first nonzero term in row three is 28
...
trix is not in reduced row echelon form
...
To find the reduced row echelon form of the matrix we first reduce the matrix to triangular form using
2 3
2 3
R +R → R
...
This gives
2 3
0 4
30
...
2 0
0 1
1
R1 → R1
2
−− − −
− − −→
1
0
0
1
...
2 Matrices and Elementary Row Operations
9
31
...
The remaining operations are
used to change all pivots to ones and eliminate nonzero entries above and below them
...
−− − − −→
−−−−−
−−−−−
−4− − − → 0
0
0 1 −− − − −
0 1
0 0 1 −− − − −→ 0 0 1
⎡
⎤
⎡
⎤
0
2
1
1 0 0
32
...
−− − −
− − −→
1
2 −3
0 0 1
33
...
0 1 0
35
...
0 0 1 0
34
...
1
0 1 −4
36
...
5
0 0 1 8
5
37
...
The unique solution to the linear system is x = −1, y = 2
...
The augmented matrix for the linear system
−3 1 1
4 2 0
reduces−
−− − →
− − − to
1 0
0 1
1
−5
...
5
5
39
...
3
40
...
1
3
5
6
⎤
−1 ⎦
...
6
6
41
...
1
10
Chapter 1 Systems of Linear Equations and Matrices
The linear system is inconsistent
...
The augmented matrix
⎡
3 0
⎣ −2 0
0 0
⎤
⎡
−2 −3
1 0
1 −2 ⎦ reduces to ⎣ 0 1
−− − −
− − −→
−1 2
0 0
⎤
0
0 ⎦
...
43
...
As a result, the variable x3 is free and there are infinitely many solutions to the linear system given by
1
x1 = − 2 − 2x3 , x2 = − 3 + 3 x3 , x3 ∈ R
...
The augmented matrix
0 −3 −1
1 0
1
2
−2
reduces−
−− − →
− − − to
1 0
0 1
1
−2
2
−3
1
3
...
3
3
45
...
1
1
2
1
2
1
2
0 0
1 0
0 1
As a result, the variable x4 is free and there are infinitely many solutions to the linear system given by
x1 = 1 − 1 x4 , x2 = 1 − 1 x4 , x3 = 1 − 1 x4 , x4 ∈ R
...
The augmented matrix
⎡
−3
⎣ 1
−3
⎤
⎡
−1 3 3 −3
1
−1 1 1 3 ⎦ reduces to ⎣ 0
−− − −
− − −→
3 −1 2 1
0
0
1
0
0
0
1
3
4
9
4
5
2
⎤
4
6 ⎦
...
4
4
2
47
...
0
As a result, the variables x3 and x4 are free and there are infinitely many solutions to the linear system given
by x1 = −8 − x3 − 8x4 , x2 = −11 − x3 − 11x4 , x3 ∈ R, x4 ∈ R
...
The augmented matrix
⎡
−3
⎣ 1
4
⎤
⎡
2 −1 −2 2
1
−1 0 −3 3 ⎦ reduces to ⎣ 0
−− − −
− − −→
−3 1 −1 1
0
0
1
0
⎤
1 8
8
1 11 −11 ⎦
...
49
...
3 Matrix Algebra
11
⎡
1
2
⎣ 2
3
−1 −1
⎤
⎡
−1 a
1
−2 b ⎦ −→ ⎣ 0
1 c
0
⎤
2 −1
a
−1 0
−2a + b ⎦
...
The linear system is consistent precisely when the last equation, from the row echelon form, is consistent
...
b
...
c
...
d
...
If the variables are denoted by x, y
and z, then one solution is obtained by setting z = 1, that is, x = −2, y = 2, z = 1
...
Reducing the augmented matrix gives
a
2
1
a−1
1
1
→
2
a
a−1
1
1
1
→
1
a
a−1
2
1
1
2
1
→
a−1
1
2
a(a−1)
0 − 2 +1
1
2
1 − 1a
2
...
If a ̸= −1, the linear system is consistent
...
If the linear system is consistent and a ̸= 2, then the
solution is unique
...
c
...
The unique solution is x = 1 , y = 1
...
The augmented matrix for the linear system and the reduced row echelon form are
⎡
−2 3
⎣ 1 1
0 5
⎤
⎡
1 a
1
−1 b ⎦ −→ ⎣ 0
−1 c
0
0
1
0
4
−5
1
−5
0
⎤
3
− 1 a + 10 b
2
1
⎦
...
The linear system is consistent precisely when the last equation, from the reduced row echelon form, is
consistent
...
b
...
c
...
d
...
If the variables
are denoted by x, y and z, then one solution is obtained by setting z = 1, that is, x = 4 , y = 1 , z = 1
...
,
,
,
...
3
Addition and scalar multiplication are defined componentwise allowing algebra to be performed on expressions involving matrices
...
For
example, addition is commutative and associative, the matrix of all zeros plays the same role as 0 in the real
numbers since the zero matrix added to any matrix A is A
...
Matrix multiplication is also defined
...
The order of multiplication is important since it is not always the case that AB and BA
are the same matrix
...
The distributive
property does hold for matrices, so that A(B + C) = AB + AC
...
The transpose
of a matrix A, denoted by At , is obtained by interchanging the rows and columns of a matrix
...
Of particular importance is (AB)t = B t At
...
A class of matrices that is introduced in Section 1
...
A matrix A is symmetric it is equal to its transpose, that is, At = A
...
c d
b d
12
Chapter 1 Systems of Linear Equations and Matrices
Here we used that two matrices are equal if and only if corresponding components are equal
...
For example, to show that
the product of two matrices AB is symmetric, requires showing that (AB)t = AB
...
Since addition of matrices is defined componentwise, we have that
2
4
A+B =
−3
1
−1 3
−2 5
+
=
2 − 1 −3 + 3
4−2
1+5
=
1 0
2 6
...
2 −3
4
1
2
...
To evaluate the matrix expression (A + B) + C requires we first add A + B and then add C to the result
...
Since addition of real
2 1
numbers is associative the two results are the same, that is (A + B) + C =
= A + (B + C)
...
3(A + B) − 5C = 3
=3
1
2
−3
1
0
6
+
−5
−1 3
1
1
−5
−2 5
5 −2
1
1
−2 −5
=
5 −2
−19 28
5
...
A + 2B − C = ⎣ 1
0 −2 3
1 2
number, we have that
⎤
3 9
10 6 ⎦
...
Notice that, A and B are examples of matrices
0 −8
7 −7
that do not commute, that is, the order of multiplication can not be reversed
...
The products are AB =
8
...
AB =
7
0
−2
−8
=
21 −6
0 −24
and A(3B) =
5
11
...
First, adding the matrices B and C gives
A(B + C) =
=
−2 −3
3
0
2
−2
6
0
3 −6
=
⎤
9 −7 −9
10
...
AB = ⎣ −10 0 9 ⎦
6
0 −2
−9 4
−13 7
⎡
3 1
−2 4
⎡
0
0
+
2
0
−1 −1
=
(−2)(4) + (−3)(−3) (−2)(0) + (−3)(−1)
(3)(4) + (0)(−3)
(3)(0) + (0)(−1)
21 −6
0 −24
−2 −3
3
0
=
1 3
12 0
4
0
−3 −1
...
3 Matrix Algebra
14
...
2A(B − 3C) =
13
0 −3
1
0
2
0
−1 −1
=
3 3
2 0
10 −18
−24
0
17
...
So At and B t are 3 × 2
matrices and the operation is defined
...
−3 −2
18
...
−7 −5
−4
1
19
...
(At + B t )C = ⎣ 6
4 12
⎡
⎤
0
20
15
0
0 ⎦
23
...
BAt =
25
...
If A =
−5 −1
5
1
22
...
24
...
AB =
27
...
(A + 2B)(3C) =
a
0
b
c
a b
0 c
=
0 0
0 0
a2
0
0 2
0 5
and B =
1 1
0 0
, then
...
That is, a = ±1, c = ±1, and b(a + c) = 0, so that A
1 0
1
b
−1 b
−1
0
has one of the forms
,
,
, or
...
Since AM =
and M A =
, then AM = M A if and only if 2a + c =
a+c
b+d
2c + d c + d
2a + b, 2b + d = a + b, a + c = 2c + d, bed = c + d
...
b a−b
1 1
−1 −1
0 0
and B =
...
Notice that, this can not happen with real numbers
...
29
...
So bg − cf and cf − bg must both be 1, which
(ce + dg) − (ag + ch)
cf − bg
0 1
is not possible
...
The product
30
...
Let A =
c d
will equal
if and only b + 2 = 6, 3a = 12, and ab = 16
...
e f
g h
and B =
...
33
...
1
0 0 1
⎡
⎤
1 0 0
We can see that if n is even, then An is the identity matrix, so in particular A20 = ⎣ 0 1 0 ⎦
...
0 0 1
34
...
1
A2 = ⎣ 0
0
0
1
0
35
...
Since AB = BA, then A2 B = AAB = ABA =
BAA = BA2
...
a
...
b
...
Then select any two matrices
0 1
1 0
0 1
that do not commute
...
1 0
0 1
⎡
⎤
1
⎢ 0 ⎥
⎢
⎥
37
...
⎥ gives the first column vector of the matrix A
...
⎦
...
Then let x = ⎢
⎣
that each column vector of A is the zero vector
...
1−n
n
38
...
Let A =
−n
1+n
and Am =
1−m
m
−m
1+m
(1 − n)(1 − m) − nm (1 − n)(−m) − (1 + m)n
n(1 − m) + m(1 + n)
−mn + (1 + n)(1 + m)
a
c
b
d
, so that At =
AAt =
a
c
b
d
a c
b d
a c
b d
0
1
...
...
Then
=
1 − (m + n)
m+n
−(m + n)
1 + (m + n)
= Am+n
...
Then
=
a2 + b2 ac + bd
ac + bd c2 + d2
=
0 0
0 0
if and only if a2 +b2 = 0, c2 +d2 = 0, and ac+bd = 0
...
1
...
Since A and B are symmetric, then At = A and B t = B
...
41
...
Since (AAt )t = (At )t At = AAt , then the matrix AAt is
symmetric
...
42
...
In addition, if AB = BA, then
(AB)2 = ABAB = AABB = A2 B 2 = AB
and hence, AB is idempotent
...
Let A = (aij ) be an n × n matrix
...
44
...
a
...
a1n + b1n
⎜⎢ a21 + b21 a22 + b22
...
...
...
...
...
...
...
ann + bnn
the diagonal entries satisfy aii = −aii and hence,
⎞⎤
⎟⎥
⎟⎥
⎟⎥ = (a11 + b11 ) + (a22 + b22 ) + · · · + (ann + bnn )
⎠⎦
= (a11 + a22 + · · · + ann ) + (b11 + b22 + · · · + bnn ) = tr(A) + tr(B)
...
⎛⎡
⎜⎢
⎜⎢
tr(cA) = tr ⎜⎢
⎝⎣
ca11
ca21
...
...
...
...
...
...
...
can1
can2
...
⎠⎦
Exercise Set 1
...
The n × n identity matrix I, with each diagonal entry a 1 and all other entries 0, satisfies AI = IA = A for
all n × n matrices
...
In the case of 2 × 2 matrices
A=
a
c
b
d
has an inverse if and only if ad − bc ̸= 0 and A−1 =
1
ad − bc
d −b
−c
a
...
If in the reduction process A is transformed to the identity matrix,
then the resulting augmented part of the matrix is the inverse
...
−4 −2
The inverse of the product of two invertible matrices A and B can be found from the inverses of the individual
matrices A−1 and B −1
...
Solutions to Exercises
1
...
5
−3 1
2
...
7
−1 −3
3
...
Since (1)(2) − (1)(2) = 0, then the matrix is
is not invertible
...
5
...
Since the original matrix has been
0 0 1 −5 −1 3
⎡
⎤
3
1 −2
exists and A−1 = ⎣ −4 −1 3 ⎦
...
Since
⎡
⎤
⎡
0 2 1 1 0 0
1 0
⎣ −1 0 0 0 1 0 ⎦ reduces to ⎣ 0 1
−− − −
− − −→
2 1 1 0 0 1
0 0
⎡
⎤
0 −1 0
the matrix is invertible and A−1 = ⎣ 1 −2 −1 ⎦
...
4 The Inverse of a Matrix
17
7
...
⎡
⎤
1/3 −1 −2 1/2
⎢ 0
1
2
−1 ⎥
⎥
9
...
A−1 = 1 ⎢
3 ⎣ 1
−2 −1 0 ⎦
1
1
1 1
8
...
13
...
A = ⎢
⎣ 1
0
14
...
A is not invertible
...
A−1
⎡
1
⎢ 0
=⎢
⎣ 0
0
⎤
−3 3
0
1 −1 1/2 ⎥
⎥
0 1/2 1/2 ⎦
0
0 1/2
12
...
The matrix A is not invertible
...
Performing the operations, we have that AB +A =
(A + I)B
...
Since the distributive property holds for matrix multiplication and addition, we have that (A+I)(A+I) =
A2 + A + A + I = A2 + 2A + I
...
Let A =
1 2
−2 1
...
Since A2 =
−3
4
−4 −3
and −2A =
−2 −4
4 −2
, then A2 − 2A + 5I = 0
...
1 −2
= 1 (2I − A)
...
If A −2A+5I = 0, then A −2A = −5I, so that A 5 (2I − A) = 5 A− 5 A = − 5 (A2 −2A) = − 5 (−5I) = I
...
Since (1)(1) − (2)(−2) = 5, the inverse exists and A−1 =
1
5
20
...
So if λ = 2 , then the matrix can not be reduced to the identity
3
−− − −
− − −→
1 2 1
1
2
1
and hence, will not be invertible
...
The matrix is row equivalent to ⎣ 0 3 − λ 1 ⎦
...
⎡
⎤
1
2
1
22
...
23
...
If λ ̸= 1, then the matrix A is invertible
...
b
...
If λ = 0, λ =
√
√
2 or λ = − 2, then the matrix is not invertible
...
The matrices A =
1 0
0 0
and B =
0
0
26
...
0
0
is not invertible
...
Using the distributive property of matrix multiplication, we have that
(A + B)A−1 (A − B) = (AA−1 + BA−1 )(A − B) = (I + BA−1 )(A − B)
= A − B + B − BA−1 B = A − BA−1 B
...
28
...
29
...
If A is invertible and AB = 0, then A−1 (AB) = A−1 0, so that B = 0
...
If A is not invertible, then Ax = 0 has infinitely many solutions
...
, xn be nonzero solutions of
Ax = 0 and B the matrix with nth column vector xn
...
30
...
First notice that At (A−1 )t =
(A−1 A)t = I, so (A−1 )t = (At )−1
...
31
...
Since A and B are symmetric, then At = A
and B t = B
...
Finally, since AB = BA, we have that (AB)t = B t At = BA = AB,
so that AB is symmetric
...
Consider (A−1 B)t = B t (A−1 )t = B(At )−1 = BA−1
...
Using this last observation, we have (A−1 B)t = A−1 B and hence,
A−1 B is symmetric
...
If AB = BA, then B −1 AB = A, so that B −1 A = AB −1
...
Since (B −1 )t B t =
(BB −1 )t = I, we have that (B −1 )t = (B t )−1
...
34
...
Assuming At = A−1 and B t = B −1 , we need to show that (AB)t = (AB)−1
...
36
...
37
...
Using the associative property of matrix multiplication, we have that
(ABC)(C −1 B −1 A−1 ) = (AB)CC −1 (B −1 A−1 ) = ABB −1 A−1 = AA−1 = I
...
The proof is by induction on the number of matrices k
...
2
1
Inductive Hypothesis: Suppose that (A1 A2 · · · Ak )−1 = A−1 A−1 · · · A−1
...
Since [A1 A2 · · · Ak ] and Ak+1 can be considered as
1
...
Finally, by the
k+1
inductive hypothesis
([A1 A2 · · · Ak ]Ak+1 )−1 = A−1 [A1 A2 · · · Ak ]−1 = A−1 A−1 A−1 · · · A−1
...
Since akk ̸= 0 for each k, then
⎡
a11 0
...
0
⎢
⎢
...
...
...
...
...
...
...
...
0 ann
⎤⎡
1
a11
⎥⎢ 0
⎥⎢
⎥⎢
...
...
...
0
...
...
...
...
...
...
1
ann
⎤
⎡
⎥ ⎢
⎥ ⎢
⎥=⎢
⎦ ⎣
1
0
...
...
0
1 0
...
...
...
...
...
...
...
0 1
and hence, A is invertible
...
If A is invertible, then the augmented matrix [A|I] can be row reduced to [I|A−1 ]
...
Similarly, the inverse for an invertible lower triangle matrix is also lower triangular
...
Since A is invertible, then A is row equivalent to the identity matrix
...
41
...
Expanding the matrix equation
1 0
0 1
a
c
b
d
x1
x3
x2
x4
=
1 0
0 1
, gives
ax1 + bx3
cx1 + dx3
ax2 + bx4
cx2 + dx4
=
...
From part (a), we have the two linear systems
ax1 + bx3
cx1 + dx3
=1
and
=0
ax2 + bx4
cx2 + dx4
=0
...
Since the assumption is that ad − bc = 0, then d = 0
...
c
...
Notice that
if in addition either a = 0 or c = 0, then the matrix is not invertible
...
If a and c are not zero, then these equations are inconsistent and the
matrix is not invertible
...
5
A linear system can be written as a matrix equation Ax = b, where A is the coefficient matrix of the linear
system, x is the vector of variables and b is the vector of constants on the right hand side of each equation
...
−x + 2x2 + 2x3 = −3 is ⎣ −1 2
⎪ 1
⎩
−1 2
1
x3
1
−x1 + 2x2 + x3
=1
If the coefficient matrix A, as in the previous example, has an inverse, then the linear system always has a
unique solution
...
In the above example, since the coefficient matrix is invertible the linear system has
a unique solution
...
x = ⎣ x2 ⎦ = A−1 ⎣ −3 ⎦ =
3
3
x3
1
0
3 −3
1
−12
−4
Every homogeneous linear system Ax = 0 has at least one solution, namely the trivial solution, where each
component of the vector x is 0
...
That is, the unique solution is x = A−1 0 = 0
...
One
additional fact established in the section and that is useful in solving the exercises is that when the linear
system Ax = b has two distinct solutions, then it has infinitely many solutions
...
Solutions to Exercises
1
...
⎡
2 −3
1
2
3
...
3
⎡
4
3 −2
1
5
...
2x − y − z
=1
⎪
⎩
3x − y + 2z = −1
⎤
⎡
⎤
x
⎦, x = ⎣ y ⎦, and
z
⎤
−3
0 ⎦,
−4
⎤
⎦
...
11
...
A =
⎡
⎤
⎡
⎤
0 3 −2
x
4 ⎦, x = ⎣ y ⎦, and b =
4
...
A = ⎣ 0 4 −2 −4 ⎦,
1 3 −2
0
⎡
⎤
⎡
⎤
x1
−4
⎢ x2 ⎥
⎥
⎣ 0 ⎦
x=⎢
⎣ x3 ⎦, and b =
3
x4
−2x − 4y = −1
3y = 1
⎧
⎪−4x − 5y + 5z = −3
⎨
10
...
2x1
+ x3 + x4 = −3
⎪
⎩
x1
+ x3 − 2x4 = 1
⎡
8
...
The solution is x = A−1 b = ⎣ 4 ⎦
...
The solution is x = A−1 b = ⎢
⎣ −8 ⎦
...
The solution is x = A−1 b = ⎣ 8 ⎦
...
The solution is x = A−1 b = ⎢
⎣ 1 ⎦
1
17
...
...
5 Matrix Equations
21
18
...
19
...
3
1
1
12
20
...
2
−1 2
0
2
−2
21
...
−1
22
...
a
...
x = A−1 ⎣ −1 ⎦ = ⎣ −3
0
−8
25
...
a
...
x =
1
5
⎤
⎡
⎤
−9
2
⎥ ⎢ 5 ⎥
⎥=⎢
⎥
⎦ ⎣ 7 ⎦
5
2
=
hence, the linear system has infinitely many solutions with solution set S =
1
5
−7
8
−4t
t
t ∈ R
...
26
...
22
Chapter 1 Systems of Linear Equations and Matrices
⎡
1 2
27
...
A = ⎣ 1
1
⎤
1 −1
1 −1 ⎦
1 −1
29
...
Therefore, A is not invertible
...
Since u is a solution to A(x) = b and v is a solution to A(x) = 0, then Au = b and Av = b
...
⎡
⎤
⎡
⎤
2
1
1
x
31
...
Let A = ⎣ −1 −1 ⎦ , x =
, and b = ⎣ −2 ⎦
...
b
...
The solution to the linear system is also given by Cb =
1
3
32
...
Since the augmented matrix
⎡
⎤
⎡
2
1
3
2
⎣ −1 −1 −2 ⎦ reduces to ⎣ 0
−− − −
− − −→
3
2
5
0
the unique solution is x =
1
1
...
C =
1
1 0
−1 −2 0
c
...
⎤
3
⎣ −2 ⎦ =
5
1
1
Exercise Set 1
...
If the matrix
is the coefficient matrix of a linear system, then the determinant gives information about the solutions of the
a b
linear system
...
Another class of matrices where finding
c d
the determinant requires a simple computation are the triangular matrices
...
So if A = (aij ) is an n × n matrix, then det(A) = a11 a22 · · · ann
...
• If two rows are interchanged, then the new determinant is the negative of the original determinant
...
• If a multiple of one row is added to another, then the new determinant is unchanged
...
The same properties hold if row is replaced with column
...
1
Two other useful properties are det(At ) = det(A) and if A is invertible, then det(A−1 ) = det(A)
...
6 is that a square matrix is invertible if and only if its
determinant is not zero
...
6 Determinants
23
A is invertible ⇔ det(A) ̸= 0 ⇔ Ax = b has a unique solution
⇔ Ax = 0 has only the trivial solution
⇔ A is row equivalent to I
...
Solutions to Exercises
1
...
Hence the
determinant is 24
...
Since the matrix is triangular, the determinant
is the product of the diagonal entries
...
5
...
7
...
9
...
Expanding along row one
det(A) = 2
−1
4
1 −2
2
...
4
...
6
...
8
...
− (0)
3
4
−4 −2
+ (1)
3 −1
−4
1
= −5
...
b
...
Expanding along column two
det(A) = −(0)
⎛⎡
3
−4
4
−2
+ (−1)
−1 1
3 0
0 0
2
−1
1
2
1
−4 −2
+ (1)
2 1
3 4
= −5
...
det ⎝⎣ 3 −1 4 ⎦⎠ = 5 e
...
2
0
1
Then det(B ′ ) = −2 det(B) = −10
...
f
...
The row
2
operation does not change the determinant, so det(B ′′ ) = det(B ′ ) = −10
...
Since det(A) ̸= 0, the matrix
A does have an inverse
...
a
...
b
...
c
...
d Since row two contains two zeros this is the preferred expansion
...
Since the determinant is
nonzero, the matrix is invertible
...
Determinant: 13; Invertible
12
...
Determinant: −16; Invertible
14
...
Determinant: 0; Not invertible
16
...
Determinant: 30; Invertible
18
...
Determinant: −90; Invertible
20
...
Determinant: 0; Not invertible
22
...
Determinant: −32; Invertible
24
...
Determinant: 0; Not invertible
26
...
Since multiplying a matrix by a scalar multiplies each row by the scalar, we have that
det(3A) = 33 det(A) = 270
...
det(2A−1 ) = 23 det(A−1 ) =
29
...
Since the matrix is the transpose of the original matrix but with two columns interchanged,
the determinant is −10
...
Expanding along row 3
x2
2
0
x
1
0
2
1
−5
Then the determinant of the matrix
32
...
is 0 when −5x2 + 10x = −5x(x − 2) = 0,
⎤
⎡
1 1 1
1 1 1 1 1
⎢ 0 1 1 1 1
1 1 1 ⎥
⎥
⎢
1 1 1 ⎥ reduces to ⎢ 0 0 1 1 1
⎥ −− − − ⎢
− − −→ ⎣
0 1 1 ⎦
0 0 0 1 1
1 0 1
0 0 0 0 1
that is x = 0 or x = 2
...
Since the reduced matrix is triangular and the product of the diagonal entries is 1, then the determinant
of the original matrix is also 1
...
Since the determinant is a1 b2 − b1 a2 − xb2 + xa2 + yb1 − ya1 , then the determinant will be zero precisely
2 −a1
when y = b2 −a2 x + b1 a1 −a1 b2
...
b1 −a1
b
1
1
b
...
2 −2
1 1 3
1 1 3
Only the matrix C has an inverse
...
Since
reduces to
, the linear system is
− − − → 0 0 −5
2 2 1 −− − −
1 1 3
1 1 3
inconsistent
...
Since
reduces →
− − − to 0 0 0 , there are infinitely many solutions given by
2 2 6 −− − −
1 1 3
1 1
3
x = 3 − y, y ∈ R
...
Since
reduces →
− − − to 0 −4 −5 , the linear system has the unique
2 −2 1 − − − −
solution x = 7 , y = 5
...
a
...
det(A) = 2 c
...
d
...
−4
34
...
A =
1
2
1
2
,B =
1 1
2 2
,C =
1
...
a
...
det(A) = 0 c
...
d
...
⎡
⎤
−1 0 −1
0
2 ⎦ b
...
a
...
c
...
2
2
Therefore, the linear system has either no solutions or infinitely many solutions
...
Since the augmented matrix reduces to
⎡
⎤
⎡
⎤
−1 0 −1 −1
1 0 1 0
⎣ 2
0
2
1 ⎦ −→ ⎣ 0 1 4 0 ⎦
3
1 −3 −3 1
0 0 0 1
the linear system is inconsistent
...
a
...
y
20
= 3x2 − 27x − 12y + 36,
the equation of the parabola is
3x2 − 27x − 12y + 36
...
a
...
x
y 1
−2 −2 1
3
2 1
4 −3 1
y
5
= −29y 2 + 20x − 25y + 106,
the equation of the parabola is
−29y 2 + 20x − 25y + 106 = 0
...
a
...
x2 + y 2
18
5
9
x
y
−3 −3
−1 2
3
0
1
1
1
1
y
5
= −24x2 −24y 2 −6x−60y+234,
the equation of the circle is
−24x2 − 24y 2 − 6x − 60y + 234 = 0
...
a
...
y2 x
16 0
16 0
4 1
9 2
y
−4
4
−2
3
1
1
1
1
1
y
5
= 136x2 − 16y 2 − 328x + 256,
5
25
x
the equation of the hyperbola is
136x2 − 16y 2 − 328x + 256 = 0
...
a
...
y
5
= −84x2 −294y 2+84x+630y+924,
5
25
x
the equation of the ellipse is
−84x2 − 294y 2 + 84x + 630y + 924 = 0
...
a
...
y
5
2
2
= −12+12x −36xy+42y −30y,
5
25
the equation of the ellipse is
25
2
44
...
x=
4
4
2
2
3
2
3
2
y=
2
2
2
2
4
4
3
2
46
...
x=
=0
−5
−3
−5
−3
9
=− ,
5
y=
= 2,
7
6
5
2
5
2
5
2
7
6
−5
−3
=−
47
...
7 Elementary Matrices and LU Factorization
48
...
−12 −7
5 11
−10 −7
12 11
97
=
,
26
−10 −12
12
5
−10 −7
12 11
47
=−
23
50
...
x =
−2 3
2
−2 −3 −8
2
2 −7
2
3
2
−1 −3 −8
−3 2 −7
91
= − 68 , y =
160
= − 103 , y =
x=
2 −2 2
−1 −2 −8
−3 2 −7
2
3
2
−1 −3 −8
−3 2 −7
=−
25
,
28
y=
−2 −8 −4
0
3
1
4 −8 −1
−2 1 −4
0 −4 1
4
0 −1
4 −3
3
4
−1 −3
−8
4
−1 4
−8 3
−1 −3
−8
4
=−
29
28
3
= − 34 , z =
=
10
103 , z
=
−2 1 −8
0 −4 3
4
0 −8
−2 1 −4
0 −4 1
4
0 −1
=
45
17
2
3 −2
−1 −3 −2
−3 2
2
2
3
2
−1 −3 −8
−3 2 −7
=
42
103
52
...
Then det(A) = det(At ) = det(−A) = (−1)n det(A)
...
Therefore, A is not invertible
...
Expansion of the determinant of A across row one equals the expansion down column one of At , so
det(A) = det(At )
...
If A = (aij ) is upper triangular, then det(A) = a11 a22 · · · ann
...
Exercise Set 1
...
Just like the resulting linear factors of a quadratic are useful and provide information about
the original quadratic polynomial, the lower triangular and upper triangular factors in an LU factorization
are easier to work with and can be used to provide information about the matrix
...
For example,
⎡
⎤
⎡
⎤
1 0 0
1 0 0
⎣ 0 1 0 ⎦ (−1)R1 + R3 → R3 ⎣ 0 1 0 ⎦
...
For example, using the elementary matrix above
⎡
⎤⎡
⎤ ⎡
⎤
1 0 0
1 3 1
1 3 1
EA = ⎣ 0 1 0 ⎦ ⎣ 2 1 0 ⎦ = ⎣ 2 1 0 ⎦
...
To find an LU
factorization of A :
28
Chapter 1 Systems of Linear Equations and Matrices
• Row reduce the matrix A to an upper triangular matrix U
...
• If row interchanges are not required, then each of the elementary matrices is lower triangular, so that
−1
−1
A = E1 · · · Ek U is an LU factorization of A
...
When A = LU is an LU factorization of A, and A is invertible, then A = (LU )−1 = U −1 L−1
...
An LU factorization can also be used to solve
a linear system
...
That is,
⎡
⎤⎡
⎤
1 0 0
1 −1 2
A = LU = ⎣ 2 1 0 ⎦ ⎣ 0 4 −3 ⎦
...
As the final step solve U x = y = ⎣ −4 ⎦ using back substitution, so that x3 = 3 , x2 =
2
3
1
1
7
4 (−4 + 3x3 ) = 8 , x1 = 2 + x2 − 2x3 = − 8
...
b
...
b
...
E = ⎣ 2 1 0 ⎦
⎡ 0 0 1 ⎤
1 2 1
EA = ⎣ 5 5 4 ⎦
1 1 −4
⎡
⎤
1 0 0
a
...
b
...
b
...
E = ⎣ 1 0 0 ⎦
⎡ 0 0 1 ⎤
3 1 2
EA = ⎣ 1 2 1 ⎦
1 1 −4
⎡
⎤
1 0 0
a
...
a
...
The corresponding
elementary matrices that transform A to the identity are given in
I = E3 E2 E1 A =
1
0
−3
1
1
0
0
1
10
1 0
2 1
A
...
Since elementary matrices are invertible, we have that
−1 −1 −1
A = E1 E2 E3 =
1 0
−2 1
1 0
0 10
1 3
0 1
...
7 Elementary Matrices and LU Factorization
29
1
1
6
...
The required row operations are R1 + R2 → R2 , 10 R2 → R2 , −5R2 + R1 → R1 , and − 2 R1 → R1
...
b
...
7
...
The identity matrix can be written as I = E5 E4 E3 E2 E1 A, where the elementary matrices are
⎡
⎡
1
E1 = ⎣ −2
0
1
E5 = ⎣ 0
0
8
...
Row
⎤
⎡
0 0
1 0
1 0 ⎦ , E2 = ⎣ 0 1
0 1
−1 0
⎤
⎡
0
1
0 ⎦ , E3 = ⎣ 0
1
0
⎤
⎡
−2 0
1 0
1 0 ⎦ , E4 = ⎣ 0 1
0 1
0 0
⎤
11
0 ⎦ , and
1
⎤
0 0
−1 −1 −1 −1 −1
1 −5 ⎦
...
A = E1 E2 E3 E4 E5
0 1
operations to reduce the matrix to the identity matrix are
3R1 + R2 → R2
4R2 + R3 → R3
−R3 → R3
−2R1 + R3 → R3
−R1 → R1
R2 + R1 → R1
R2 ↔ R3
−R2 → R2
−R3 + R2 → R2
with corresponding elementary matrices
⎡
⎤
⎡
⎤
⎡
⎤
⎡
1 0 0
1 0 0
1 0 0
1
E1 = ⎣ 3 1 0 ⎦ , E2 = ⎣ 0 1 0 ⎦ , E3 = ⎣ 0 0 1 ⎦ , E4 = ⎣ 0
0 0 1
−2 0 1
0 1 0
0
⎡
⎤
⎡
⎤
⎡
⎤
⎡
−1 0 0
1 0 0
1 0 0
1
E5 = ⎣ 0 1 0 ⎦ , E6 = ⎣ 0 −1 0 ⎦ , E7 = ⎣ 0 1 0 ⎦ , E8 = ⎣ 0
0 0 1
0 0 1
0 0 −1
0
⎡
⎤
1 0 0
−1 −1 −1 −1 −1 −1 −1 −1 −1
E9 = ⎣ 0 1 −1 ⎦
...
A = E1 E2 E3 E4 E5 E6 E7 E8 E9
0 0 1
0
1
4
1
1
0
⎤
0
0 ⎦,
1
⎤
0
0 ⎦ , and
1
9
...
The identity matrix can be written as I = E6 · · · E1 A, where the elementary matrices are
⎡
⎤
⎡
⎤
⎡
⎤
⎡
0 1 0
1 −2 0
1 0 0
1 0
E1 = ⎣ 1 0 0 ⎦ , E2 = ⎣ 0 1 0 ⎦ , E3 = ⎣ 0 1 0 ⎦ , E4 = ⎣ 0 1
0 0 1
0 0 1
0 −1 1
0 0
⎡
⎤
⎡
⎤
1 0 1
1 0 0
−1 −1
−1
E5 = ⎣ 0 1 0 ⎦ , and E6 = ⎣ 0 1 0 ⎦
...
A = E1 E2 · · · E6
0 0 1
0 0 −1
10
...
There are only two row interchanges needed, R1 ↔ R4 and R2 ↔ R3
...
0 ⎦
1
⎤
0
1 ⎦,
1
30
Chapter 1 Systems of Linear Equations and Matrices
−1 −1
b
...
1 −2
, by means of the one
0
1
1 0
, so that EA = U
...
The matrix A can be row reduced to an upper triangular matrix U =
operation 3R1 + R2 → R2
...
L =
1 0
1/6 1
,U =
1 0
−3 1
1
0
−2
1
...
The matrix A can be row reduced to an upper triangular matrix U = ⎣ 0 1 3 ⎦ , by means of
0 0 1
the operations (−2)R1 + R2 → R2 and 3R1 + R3 → R3
...
Then the LU factorization of A is A = LU =
0 0 1
⎡
⎤⎡ 3 0 1 ⎤
1 0 0
1 2 1
−1 −1
E1 E2 = ⎣ 2 1 0 ⎦ ⎣ 0 1 3 ⎦
...
L = ⎣ −1 1 0 ⎦ ,
15
...
A = LU = ⎢
⎣ 1 0 1 0 ⎦⎣ 0 0
1
5 ⎦
3 0 0 1
0 0
0
1
17
...
We have that A = LU =
...
The last step is to solve U x = y, which has the unique solution x1 = 2 and x2 = 3
...
An LU factorization of the matrix A is given by A =
Ly =
2
−7/2
1 0
−2 1
3 −2
0
1
...
2
2
⎡
1 0
19
...
To solve
1
and hence, the solution is y1 = 0, y2 = −3, and y3 = 1
...
1
...
An LU factorization of the matrix A is given by A = ⎣ 2 1 0 ⎦ ⎣ 0 1
−2 0 1
0 0
⎡
⎤
−1
to Ly = ⎣ 8 ⎦ is y1 = −1, y2 = 10, y3 = 2, so the solution to U x = y is x1 = 1, x2
4
21
...
Then the solution
1
= 2, x3 = 2
...
An LU factorization of the matrix A is given by A = ⎢
⎣ −1 0 1 0 ⎦ ⎣ 0 0
2 −2 0 1
0 0
⎡
⎤
5
⎢ −2 ⎥
⎥
solution to Ly = ⎢
⎣ 1 ⎦ is y1 = 5, y2 = −2, y3 = 6, y4 = −13, so the solution to U x
1
51
13
−34, x3 = 2 , x4 = − 2
...
Then the
1 3 ⎦
0 2
= y is x1 =
31
2 , x2
=
23
...
This is reflected in the matrix P in the factorization
⎡
⎤⎡
⎤⎡
⎤
1 0 0
1 −3 2
0 0 1
A = P LU = ⎣ 0 1 0 ⎦ ⎣ 2 5 0 ⎦ ⎣ 0 1 − 4 ⎦
...
In order to row reduce the matrix A to an upper triangular matrix requires the operation of switching
rows
...
2
2
0 1 0
0 0 1
0 0
1
25
...
26
...
Using the LU factorization A = LU = ⎣ 1
1
⎡
1
2
A−1 = U −1 L−1 = ⎣ 0
0
−1
2
1
0
0
1
3
1
3
⎤⎡
0 0
2 1
1 0 ⎦⎣ 0 1
1 1
0 0
⎤⎡
1
0
⎦ ⎣ −1 1
0 −1
⎤
−1
−1 ⎦ , we have that
3
⎤ ⎡
1
0
1 −2
2
0 ⎦ = ⎣ −1 3
1
1
0 −3
0
1
3
1
3
⎤
⎦
...
A−1 = (LU )−1
⎛⎡
⎤⎡
1
0 0
−3 2
= ⎝⎣ −1 1 0 ⎦ ⎣ 0 1
1 −1 1
0 0
⎡ 1
⎤
1
− 3 −1
3
= ⎣ 1 −1 −2 ⎦
0
1
1
⎤⎞−1 ⎡ 1
1
−3
2 ⎦⎠ = ⎣ 0
1
0
2
3
1
0
⎤⎡
1 0
−1
−2 ⎦ ⎣ 1 1
1
0 1
⎤
0
0 ⎦
1
29
...
This gives the system of equations ad = 0, ae = 1, bd = 1, be + cf = 0
...
But this incompatible with the third equation
...
Since A is row equivalent to B there are elementary matrices such that B = Em
...
D1 B
...
D1 B =
Dn
...
E1 A and hence, A is row equivalent to C
...
If A is invertible, there are elementary matrices E1 ,
...
Similarly, there
−1
−1
are elementary matrices D1 ,
...
Then A = Ek · · · E1 Dℓ · · · D1 B, so A is row
equivalent to B
...
a
...
b
...
c
...
Exercise Set 1
...
We need to find positive whole numbers x1 , x2 , x3 , and x4 such that x1 Al3 + x2 CuO −→ x3 Al2 O3 + x4 Cu
is balanced
...
9
3
= x4
A particular solution that balances the equation is given by x1 = 2, x2 = 9, x3 = 3, x4 = 9
...
To balance the equation x1 I2 + x2 Na2 S2 O3 −→ x3 NaI + x4 Na2 S4 O6 , we solve the linear system
⎧
⎪2x1 = x3
⎪
⎪
⎨2x = x + 2x
2
3
4
, so that x1 = x4 , x2 = 2x4 , x3 = 2x4 , x4 ∈ R
...
1
...
We need to find positive whole numbers x1 , x2 , x3 , x4 and x5 such that
x1 NaHCO3 + x2 C6 H8 O7 −→ x3 Na3 C6 H5 O7 + x4 H2 O + x5 CO2
...
−1 0 ⎦
3
−1 0
Hence the solution set for the linear system is given by x1 = x5 , x2 = 1 x5 , x3 =
3
particular solution that balances the equation is x1 = x4 = x5 = 3, x2 = x3 = 1
...
To balance the equation
1
3 x5 , x4
= x5 , x5 ∈ R
...
= x6
= 4x4 + 12x6 + x7
= x4 + 3x5 + 2x7
matrix for the equivalent homogeneous linear system
⎤
⎡
0 0 0
0
0
0 0
1 0
⎢ 0 1
0 1 0
0
−3
0 0 ⎥
⎥
⎢
⎢
2 0 0 −1
0
0 0 ⎥
⎥ reduces to ⎢ 0 0
⎥ −− − − ⎢ 0 0
− − −→ ⎢
10 0 0
0
−1
0 0 ⎥
⎣ 0 0
35 4 −4 0 −12 −1 0 ⎦
0 2 −1 −3
0
−2 0
0 0
0
1
0
0
0
−1
0
0
1
0
0
0
0
0
0
1
0
0
0
0
0
0
1
16
− 327
13
− 327
374
− 327
16
− 327
26
− 327
130
− 327
0
0
0
0
0
0
⎤
⎥
⎥
⎥
⎥
...
5
...
, x7 be defined as in the figure
...
300
x1
x2
800
x3
500
x6
x5
x4
700
300
Flow In
700+300+500+300
x2 + x3
x5 + 700
x6 + 300
500+300
Flow Out
x1 + 800 + x4 + x7
x1 + 800
x3 + x4
x5 + x7
x2 + x6
x7
Equating the total flows in and out gives a linear system with solution x1 = 1000 − x4 − x7 , x2 = 800 − x6, x3 =
1000 − x4 + x6 − x7 , x5 = 300 + x6 − x7 , with x4 , x6 , and x7 free variables
...
As a sample solution let, x4 = 200, x6 = 300, x7 = 100,
then x1 = 700, x2 = 500, x3 = 1000, x5 = 500
...
Let x1 , x2 ,
...
x1
100
x5
500
400
x2
300
300
500
x4
x7
400
x3
x8
200
x6
500
Balancing all in and out flows generates the linear system
⎧
⎪x1 + 500 + x6 + 200 + 500
⎪
⎪
⎪x + 500
⎪ 1
⎪
⎪
⎪
⎪x5 + 300
⎪
⎨
x + 500
⎪ 7
⎪
⎪
⎪x6 + 400
⎪
⎪
⎪x + 200
⎪ 3
⎪
⎪
⎩
x2 + x4
= 100 + 400 + x8 + 500 + 300
= x2 + x5
= 100 + 400
= x4 + 300
= x7 + x8
= 400 + 500
= x3 + 300
...
In order to have all positive flows, for example, let x7 = 300 and x8 = 200, so
x1 = 200, x2 = 500, x3 = 700, x4 = 500, x5 = 200, x6 = 100
...
Equating total flows in and out gives the linear system
⎧
+ x4
= 150
⎪x1
⎪
⎪
⎨x − x
− x5 = 100
1
2
⎪ x2 + x3
= 100
⎪
⎪
⎩
−x3 + x4 + x5 = −50
with solution x1 = 150 − x4 , x2 = 50 − x4 − x5 , and x3 = 50 + x4 + x5
...
8
...
If x8 ≥ 150, then all the flows will remain positive
...
If x1 , x2 , x3 , and x4 denote the number of grams required from each of the four food groups, then the
specifications yield the linear system
⎧
⎪20x1 + 30x2 + 40x3 + 10x4 = 250
⎪
⎪
⎨40x + 20x + 35x + 20x = 300
1
2
3
4
...
4, x2 = 3
...
6, x4 = 6
...
10
...
1
...
02 0
...
05
300
22
11
...
A = ⎣ 0
...
02 0
...
The internal demand vector is A ⎣ 150 ⎦ = ⎣ 20 ⎦
...
03 0
...
1
200
74
external demand for the three sectors is 300 − 22 = 278, 150 − 20 = 130, and 200 − 74 = 126, respectively
...
02 0
...
06
c
...
03 1
...
05 ⎦ d
...
05 0
...
13
⎡
⎤⎡
⎤ ⎡
⎤
1
...
06 0
...
2
X = (I − A)−1 D = ⎣ 0
...
04 0
...
9 ⎦
...
05 0
...
13
600
832
...
The level of production for each sector of the economy is given by X = (I − A)−1 D and hence
⎡
⎤
56
...
5 ⎥
⎢
⎥
⎢ 23
...
1 ⎥
⎢
⎥
⎢ 57
...
X≈⎢
⎥
⎢ 41
...
2 ⎥
⎢
⎥
⎢ 30
...
0 ⎦
55
...
a
...
If the parabola is y = ax2 + bx + c, then
plot
...
1000
⎪
⎩
800
3960100a + 1990b + c = 690
600
400
200
2000
1995
1990
1985
1980
1975
1970
1965
0
c
...
(b) is a = 27 , b = − 10631 , c = 5232400, so that
20
2
the parabola that approximates the data is y =
27 2
10631
20 x −
2 x + 5232400
...
The model gives an estimate, in billions of
dollars, for health care costs in 2010 at
1400
1200
1000
800
600
400
200
2000
1995
1990
1985
1980
1975
1970
0
1965
27
10631
(2010)2 −
(2010) + 5232400 = 2380
...
Using the data for 1985, 1990, and 2000, a parabola y = ax2 + bx + c will fit the three points provided
there is a solution to the linear system
⎧
⎪a(1985)2 + b(1985) + c = = 1
⎨
a(1990)2 + b(1990) + c
= 11
...
The estimated number of subscribers predicted by the model in 2010 is
15
3
71
56080223
2
≈ 2418 million
...
This reflects the
fact that the rate of growth during this period is slowing down
...
0
...
08
1500000
1398000
1500000
1314360
15
...
A =
b
...
A2
=
d
...
1 0
...
a
...
8 0
...
2 0
...
Since A
800
200
=
800
200
, after the first week there are 800 healthy mice and
800
800
=
, after the second week there are 800 healthy mice and
200
200
800
800
200 infected mice
...
Since A6
≈
, after six weeks the number of healthy and infected mice
200
200
800
still has not changed
...
4, we will see that
is the steady state solution to this problem
...
9 0
...
1
17
...
1 0
...
3 ⎦ , so the number of the population in each category after
0
...
6 ⎤
⎡
⎤0 ⎡
⎡
⎤ ⎡
⎤
20000
23000
20000
24900
one month are given by A ⎣ 20000 ⎦ = ⎣ 15000 ⎦ , after two months by A2 ⎣ 20000 ⎦ = ⎣ 13400 ⎦ , and
10000 ⎡
12000
10000
11700
⎡
⎤
⎤
20000
30530
after one year by A12 ⎣ 20000 ⎦ ≈ ⎣ 11120 ⎦
...
c
...
Let c denote the number of consumers
...
98 0
...
02 0
...
a
...
4I1 + 3I2
3I2 + 5I3
=8
= 10
n
c
0
≈
0
...
2c
...
797c
, so it takes
0
...
4I1 + 3I2
⎪
⎩
3I2 + 5I3
The solution to the linear system is I1 ≈ 0
...
7, I3 ≈ 0
...
a
...
6I2 + 4I3
+ 2I5 + 3I6 = 18
⎪
⎪
⎪I4 + I6 = I3
⎩
⎪
4I3 + 6I4
= 16
⎩
I1 + I6 = I2
approximately 17 months
...
The augmented matrix for the linear system
⎡
1 −1 0 0 1
⎢ 0 0 −1 1 1
⎢
⎢ 0 0 −1 1 0
⎢
⎢ 1 −1 0 0 0
⎢
⎢ 4 6
0 0 0
⎢
⎣ 0 6
4 0 2
0 0
4 6 0
⎤
0 0
1 0 ⎥
⎥
1 0 ⎥
⎥
1 0 ⎥,
⎥
0 14 ⎥
⎥
3 18 ⎦
0 16
so an approximate solution is I1 = 1
...
5, I3 = 1
...
5, I5 = 0
...
3
...
Denote the average temperatures of the four points by a, b, c, and d, clockwise starting with the upper
left point
...
For example, at the first point a =
22
...
4
The solution is a ≈ 24
...
6, c ≈ 23
...
9
...
4
25
...
4
26
...
3
23
...
3
23
...
7
21
...
⎥
⎥
⎥
⎥
⎥
⎥
⎦
Review Exercises Chapter 1
⎡
⎢
1
...
A = ⎢
⎣
0, the matrix
⎤
1 1 2 1
−1 0 1 2 ⎥
⎥ b
...
Since the determinant of the coefficient matrix is not
2 2 0 1 ⎦
1 1 2 3
is invertible and the linear system is consistent and has a unique solution
...
Since the linear system Ax = b has a unique solution for every b, the only solution to the homogeneous
system is the trivial solution
...
From part (b), since the determinant is not zero the inverse exists and
⎡
⎤
−3 −8 −2 7
1⎢ 5
8
6 −9 ⎥
⎥
...
The solution can be found by using the inverse matrix and is given by
⎡
⎤
⎡
⎤
3
11
⎢ 1 ⎥ 1 ⎢ −17 ⎥
⎥
⎢
⎥
x = A−1 ⎢
⎣ −2 ⎦ = 4 ⎣ 7 ⎦
...
a
...
b
...
c
...
d
...
e
...
f
...
2
2
a b
3
...
The matrix A is idempotent provided A2 = A, that is,
0 c
a2 ab + bc
0
c2
a b
0 c
=
...
From the first and third equations, we have that a = 0 or a = 1 and c = 0 or c = 1
...
Given these constraints, the possible solutions
are a = 0, c = 0, b = 0; a = 0, c = 1, b ∈ R; a = 1, c = 0, b ∈ R; a = 1, b = 0, c = 1
a b
1 0
4
...
If A is to commute with every 2 × 2 matrix, then it must commute with
and
c d
0 0
0 0
...
0 d
0 0
,
...
Then A will commute with every 2 × 2 matrix if and only if it has the form
a 0
A=
, a ∈ R
...
a
...
b
...
c
...
By part (a), M 2 = kI, for some k
...
6
...
Using the labels shown
in the figure the resulting linear system is
x5
100
x1
300
x6
500
300
x2
200
x7
400
x3
400
x8
600
x4
500
The augmented
⎡
1 0
⎢ 0 1
⎢
⎢ 0 0
⎢
⎢ 1 0
⎢
⎢ 0 1
⎢
⎣ 0 1
0 0
⎧
⎪x1
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎨
x
⎪ 1
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎩
= 300
= 100
= −200
= 300
= 200
= 900
= 1200
...
7
...
Since the matrix A is triangular, the determinant is the product of the diagonal entries and hence,
det(A) = 1
...
b
...
40
Chapter 1 Systems of Linear Equations and Matrices
8
...
Since det(At ) = det(A), then At is also invertible
...
Therefore, (A−1 )t = (At )−1
...
a
...
Using the properties of the
transpose operation, we have that B t = (A + At )t = At + (At )t = At + A = B
...
b
...
Let α and β be scalars such that α + β = 1
...
So
A(αu + βv) = A(αu) + A(βv) = αA(u) + βA(v) = αb + βb = (α + β)b = b
...
T
1
1
1
1
2
...
All linear systems have either one solution, infinitely many
solutions, or no solutions
...
F
...
F
...
8
...
T
10
...
T
12
...
T
14
...
F
...
16
...
F
...
T
7
...
−1 −1
1
1
and B =
...
T
(ABC)−1 = C −1 B −1 A−1
B=
1 0
0 1
−1
0
0 −1
, and
18
...
F
...
20
...
F
...
T
23
...
T
25
...
T
28
...
F
...
F
...
30
...
4
4
1 1
−3 2
...
T
32
...
F
...
T
35
...
T
38
...
⎡
36
...
T
1
⎣ −1
0
40
...
T
43
...
F
...
45
...
F
...
1
A vector in standard position in the Euclidean spaces R2 and R3 is a directed line segment that begins at
the origin and ends at some point in either 2 or 3 space
...
In Rn if vectors are viewed as matrices with n rows and one column, then addition
and scalar multiplication of vectors agree with the componentwise operations defined for matrices
...
For example, cv changes
the length of the vector v and possibly the direction
...
If 0 < c < 1, the length of the vector is decreased and the direction is unchanged
...
Using addition and scalar multiplication of vectors one vector can be obtained from combinations of other
vectors
...
5
0
−1
1
That is, are there scalars c1 , c2 and c3 such that
⎡
⎤
⎡
⎤
⎡
⎤
⎡
⎤ ⎡
⎤
−6
−1
−1
0
−c1 − c2
⎣ 3 ⎦ = c1 ⎣ 2 ⎦ + c2 ⎣ 2 ⎦ + c3 ⎣ 1 ⎦ = ⎣ 2c1 + 2c2 + c3 ⎦?
5
0
−1
1
−c2 + c3
Since two vectors
⎧
⎪−c1 − c2
⎨
2c + 2c2 + c3
⎪ 1
⎩
−c2 + c3
echelon form are
are equal if and only if corresponding components are equal, this leads to the linear system
= −6
= 3
...
As a result the vector can be
⎤
0
1 ⎦
...
As a consequence, every vector v in R3 can be written in terms of the three given vectors
...
The linear system can also have infinitely many solutions for a given
v, which says the vector can be written in terms of the other vectors in many ways
...
1 Vectors in Rn
43
Solutions to Exercises
1
...
3
⎡
⎤
11
3
...
−3(u + v) − w = ⎣ −7 ⎦
−8
7
...
−6
2
...
3
−1
2
⎡
⎤
−6
4
...
2u − 3(u − 2w) = ⎣ −10 ⎦
−3
⎡
⎤
−3
⎢ −10 ⎥
⎥
8
...
To show that (x1 + x2 )u = x1 u + x2 u, we will expand the left hand side to obtain the right hand side
...
0
0
0
0
10
...
Simply multiply each of the vectors e1 , e2
and e3 by the corresponding component of the
vector
...
v = −e1 + 3e2 + 2e3
13
...
v = −e1 + 1 e3
2
⎡
⎤
a
15
...
Then −u + 3v − 2w = 0 if and only if
c
⎡
⎤ ⎡
⎤ ⎡
⎤ ⎡
⎤
−1
−6
2a
0
⎣ −4 ⎦ + ⎣ 6 ⎦ − ⎣ 2b ⎦ = ⎣ 0 ⎦
...
−1
⎡
⎤
a
16
...
Then −u + 3v − 2w = 0 if and only if
c
⎡
⎤ ⎡
⎤ ⎡
⎤ ⎡
⎤
2
6
2a
0
⎣ 0 ⎦ + ⎣ −9 ⎦ − ⎣ 2b ⎦ = ⎣ 0 ⎦
...
11
−2
c1 + 3c2
−2c1 − 2c2
1
be written as the combination 7
4
−2
17
...
Thus, the vector
4
4
= −1
3
−5
...
Thus, the vector
5c1 − 2c2
=5
2
−1
written as the combination 5
+ 10
...
The linear system is
19
...
Hence, the vector
2c1 − 2c2 = 1
−1
and
...
Hence, the vector
can not be
1
3c1 − 6c2 = 1
−1
2
written as a combination of
and
...
The linear system is 4c1 + 3c2 + c3
= −3
...
The vector ⎣ −3 ⎦ is a
4
combination of the three vectors
...
The linear system is −c1 + c2 + c3 = 0 , which has the unique solution c1 = −1, c2 = −1, c3 = 0
⎪
⎩
c
− c3 = −1
⎡
⎤1
−1
and hence, the vector ⎣ 0 ⎦ can be written as a combination of the other vectors
...
The linear system is
2
...
The linear system is
, which is inconsistent and hence, the vector ⎣ 0
c2 − c3 = 0
⎪
⎩
2
c1 + c2 − c3 = 2
be written as a combination of the other vectors
...
The linear system is 2c1 + 2c2 + c3
= 7 , which is inconsistent and hence, the vector ⎣ 7
⎪
⎩
3
4c1 + 4c2 + 2c3 = 3
be written as a combination of the other vectors
...
All 2×2 vectors
...
All 2×2 vectors
...
3 a + 3 b
...
Row reduction of the matrix
1 2
0 0
gives
⎤
⎦ cannot
=
2
a
, which is consistent when b = 1 a
...
3a
⎡
⎤
a
29
...
Moreover, c1 = 1 a − 2 b + 30
...
R
...
All vectors of the form
32
...
b
that a, b ∈ R
...
Let u = ⎢
...
⎥ , and w = ⎢
...
⎦
⎣
...
⎦
...
...
⎥ = ⎢
⎥=⎢
⎥ u + (v + w)
...
...
...
⎦ ⎣
...
⎝⎣
⎦⎠ ⎣
...
...
un + vn
wn
(vn + un ) + wn
un + (vn + wn )
28
...
Let u = ⎢
⎣
u1
u2
...
...
a+b
a
such that a ∈ R
...
Then u = ⎢
⎦
⎣
u1
u2
...
...
⎡
⎤
⎡
u1
⎢ u2 ⎥
⎢
⎢
⎥
⎢
35
...
Then u + (−u) = ⎢
⎣
...
un
the additive inverse of u
...
...
un
0
0
...
...
...
0
−u1
−u2
...
...
...
un
⎡
⎥ ⎢
⎥ ⎢
⎥=⎢
⎦ ⎣
0
0
...
...
...
un
⎤
⎥
⎥
⎥
...
Hence, the vector −u is
⎦
46
Chapter 2 Linear Combinations and Linear Independence
⎡
⎢
⎢
36
...
Let u = ⎢
⎣
u1
u2
...
...
Let u = ⎢
⎣
⎡
⎢
⎢
39
...
...
un
un
u1
u2
...
...
Then
⎦
u1 + v1
u2 + v2
...
...
...
cun + cvn
⎥
⎥
⎥ and c and d scalars
...
...
⎤
⎤
⎢
⎢
(c + d)u = ⎢
⎣
⎡
v1
v2
...
...
...
(c + d)un
⎤
⎤
⎡
⎥ ⎢
⎥ ⎢
⎥=⎢
⎦ ⎣
cu1 + du1
cu2 + du2
...
...
Then (1)u = ⎢
⎥ = u
...
...
(1)un
cun
⎡
⎥ ⎢
⎥ ⎢
⎥=⎢
⎦ ⎣
⎡
⎤
cu1
cu2
...
...
Then c(du) = c ⎢
⎦
⎣
⎤
⎡
cu1
cu2
...
...
...
dun
⎤
⎤
⎡
⎥ ⎢
⎥ ⎢
⎥+⎢
⎦ ⎣
⎤
⎡
⎥ ⎢
⎥ ⎢
⎥+⎢
⎦ ⎣
⎡
⎥ ⎢
⎥ ⎢
⎥=⎢
⎦ ⎣
cv1
cv2
...
...
...
dun
cdu1
cdu2
...
...
⎦
⎤
⎥
⎥
⎥ = cu + du
...
⎦
40
...
Then u + z1 = u + z2 and hence, z1 = z2
...
2
The vectors in R2 and R3 , called the coordinate vectors, are the unit vectors that define the standard axes
...
Every vector in the Euclidean spaces is a combination of the coordinate
vectors
...
v3
0
0
1
The coordinate vectors are special vectors but not in the sense of generating all the vectors in the space
...
The vector v is a linear combination of v1 , v2 ,
...
, cn such that
v = c1 v1 + c2 v2 + · · · + cn vn
...
2 Linear Combinations
47
For specific vectors we have already seen that an equation of this form generates a linear system
...
Notice that the set of all linear
combinations of the vectors e1 and e2 in R3 is the xy plane and hence, not all of R3
...
In the exercises when asked to determine whether or not
a vector is a linear combination of other vectors, first set up the equation above and then solve the ⎡
⎡
⎤ resulting
⎤
1
0
linear system
...
Then form the augmented matrix
−1
c
⎡
⎤
⎡
⎤
1 0
2 a
1 0 2
a
⎣ 1 1
⎦
...
This is not all the vectors in R3
...
−1
1
−1
Solutions to Exercises
1
...
In matrix form the linear system is
1 −2 −4
1 0 2
that row− − − −
, which has the unique solution c1 = 2 and c2 = 3
...
−1 3 13
1 0 −1
row− − − −
− reduces to
2 0 −2 − − − − − → 0 1 4
solution c1 = −1, c2 = 4, then the vector v is a linear combination of v1 and v2
...
Since the resulting linear system
−2 3 1
4 −6 1
v is not a linear combination of v1 and v2
...
Since the resulting linear system
1
−2
is not a linear combination of v1 and v2
...
Since the resulting linear system
4
...
6
...
7
...
Since
⎡
⎤
⎡
⎡
⎤
⎡
⎤
1 −2 3
5
1
2
3 −2 2
1 0 0 −4
⎣ −1 −1 −1 −4 ⎦ −→ ⎣ 0
⎣ −2 0
0 8 ⎦ −→ ⎣ 0 1 0 2/3 ⎦ ,
0 −1 −3 −7
0
0 −3 −1 2
0 0 1 −4
0 0
1 0
0 1
⎤
1
1 ⎦,
2
is consistent and has a unique solution, then v can
be written in only one way as a linear combination
of v1 , v2 , and v3
...
Since
⎡
⎤
⎡
⎤
1 −1 0 −1
1 0 0 0
⎣ 2 −1 1 1 ⎦ −→ ⎣ 0 1 1 0 ⎦ ,
−1 3 2 5
0 0 0 1
is consistent and has a unique solution, then v can
be written in only one way as a linear combination
of v1 , v2 , and v3
...
the linear system is inconsistent, so v can not be
written as a linear combination of v1 , v2 , and v3
...
Since
⎡
⎤
2
1 −1
3
⎢ −3 6 −1 −17 ⎥
⎢
⎥
⎣ 4 −1 2
17 ⎦
1
2
3
7
⎡
1
⎢ 0
⎢
⎣ 0
0
0
1
0
0
0
0
1
0
⎤
3
−1 ⎥
⎥,
2 ⎦
0
10
...
12
...
3 0 −1 3
1 0 −1/3 1
reduces →
− − − to 0 1 −7/3 1 , there are infinitely many ways in which scalars
1 −1 2 0 − − − −
can be selected so v is a linear combination of v1 , v2 , and v3
...
3
3
13
...
Specifically, any set of scalars given by
c1 = 1 − c3 , c2 = 2 + c3 , c3 ∈ R
...
Since ⎣ 1 −1 −3 −1 −1 ⎦ reduces− ⎣ 0 1 0 1 −2 ⎦ , there are infinitely many ways in
−− − →
− − − to
1 2 −1 −2 −3
0 0 1 −2 2
which scalars can be selected so v is a linear combination of v1 , v2 , v3 , and v4
...
⎡
⎤
⎡
⎤
−1 0
0 −3 −3
1 0 0
3
3
16
...
Specifically, any set of
scalars given by c1 = 3 − 3c4 , c2 = −5 + 12c4 , c3 = 5 − 10c4 , c4 ∈ R
...
Since
2
...
The matrix equation
c1 M 1 + c2 M 2 + c3 M 3 = c1
leads to the augmented matrix
⎡
1 −2 −1
⎢ 2
3
3
⎢
⎣ 1
1
2
−1 4
1
1
2
1 −1
+ c2
−2
1
3
4
−1 3
2 1
+ c3
⎤
⎡
−2
⎢
4 ⎥
⎥ , which reduces to ⎢
−−−−−−
4 ⎦ −− − − − −→⎣
0
1
0
0
0
0
1
0
0
0
0
1
0
=
−2 4
4 0
⎤
−1
−1 ⎥
⎥,
3 ⎦
0
and hence, the linear system has the unique solution c1 = −1, c2 = −1, and c3 = 3
...
18
...
Consequently, the matrix M is a linear combination of the three
matrices
...
The matrix M is not a linear combination of the three matrices since
⎡
⎤
⎡
2
3
3
2
⎢ 2 −1 −1 1 ⎥
⎢
⎢
⎥ reduces to ⎢
⎣ −1 2
⎦ −− − − ⎣
− − −→
2 −1
3 −2 2
2
1
0
0
0
0
1
0
0
0
0
1
0
⎤
0
0 ⎥
⎥,
0 ⎦
1
so that the linear system is inconsistent
...
Since the augmented matrix ⎢
⎣ 0 0 0 3 ⎦ −− − − ⎣ 0 0 1 0 ⎦
− − −→
−1 0 1 4
0 0 0 1
sistent
...
Consequently, the matrix M
can not be written as a linear combination of M1 , M2 , and M3
...
Ax = 2
−
22
...
The matrix product is AB = ⎣ 13 −3 5 ⎦ , and the column vectors are also given by (AB)1 =
−16 −6 −3
⎡
⎤
⎡
⎤
⎡
⎤
⎡
⎤ ⎡
⎤ ⎡
⎤
2
0
−1
2
0
−1
3 ⎣ 1 ⎦ − 2 ⎣ −1 ⎦ + 2 ⎣ 4 ⎦ , (AB)2 = 2 ⎣ 1 ⎦ + ⎣ −1 ⎦ − ⎣ 4 ⎦ , and
−4 ⎡
−4
3
1
⎤3
⎡
⎤1 ⎡
⎤
2
0
−1
(AB)3 = ⎣ 1 ⎦ + 0 ⎣ −1 ⎦ + ⎣ 4 ⎦
...
The matrix product is AB =
50
Chapter 2 Linear Combinations and Linear Independence
25
...
These two polynomials will agree for all x if and only if the coefficients of like terms agree
...
Therefore, the polynomial p(x) can not be written
as a linear combination of 1 + x and x2
...
Since c1 (1 + x) + c2 (x2 ) = −x3 + 3x + 3 has the solution c1 = 3, c2 = −1, then p(x) can be written as a
linear combination of 1 + x and x2
...
Consider the equation c1 (1 + x) + c2 (−x) + c3 (x2 + 1) + c4 (2x3 − x + 1) = x3 − 2x + 1, which is equivalent
to
(c1 + c3 + c4 ) + (c1 − c2 − c4 )x + c3 x2 + 2c4 x3 = x3 − 2x + 1
...
Hence, x3 − 2x + 1 = 1 (1 + x) + 2(−x) + 0(x2 + 1) +
2
2
2
1
3
2 (2x − x + 1)
...
Equating the coefficients of like terms in the equation c1 (1 + x)+ c2 (−x)+ c3 (x2 + 1)+ c4 (2x3 − x+ 1) = x3
gives the linear system c1 + c3 + c4 = 0, c1 − c2 − c4 = 0, c3 = 0, 2c4 = 1, which has the unique solution
c1 = − 1 , c2 = −1, c3 = 0, c4 = 1
...
2
2
2
2
29
...
c
30
...
31
...
v = v1 + v2 + v3 + v4
= v1 + v2 + v3 + (v1 − 2v2 + 3v3 )
v = v1 + v2 + v3 + v4
= v1 + (2v1 − 4v3 ) + v3 + v4
= 2v1 − v2 + 4v3
33
...
c1
c1
= 3v1 − 3v3 + v4
34
...
In order to show that S1 = S2 , we will show that each is a subset of the other
...
, ck such that v = c1 v1 + · · · + ck vk
...
If v ∈ S2 , then v = c1 v1 + · · · + (cck )vk , so v ∈ S1
...
2
...
Let v ∈ S1
...
Now let v ∈ S2 , so
v = c1 v1 + · · · + ck vk + ck+1 (v1 + v2 ) = (c1 + ck+1 )v1 + (c2 + ck+1 )v2 + · · · + ck vk
and hence, v ∈ S1
...
Since both containments hold, S1 = S2
...
If A3 = cA1 , then det(A) = 0
...
38
...
Since the linear system is assumed to be consistent, then it must
have infinitely many solutions
...
If f(x) = ex and g(x) = ex/2, then f′(x) = ex = f′′(x), g′(x) = (1/2)ex/2, and g′′(x) = (1/4)ex/2
...
In a similar manner, for arbitrary constants c1 and c2 , the function
c1 f (x) + c2 g(x) is also a solution to the differential equation
...
Exercise Set 2.3
In Section 2.3 linear independence of a set of vectors is introduced. In R2 and R3, two nonzero vectors are linearly independent if and only if they are not scalar multiples of each other, so they do not lie on the same line. To determine whether the set S = {v1, . . . , vk} is linearly independent, set up the vector equation
c1v1 + c2v2 + · · · + ckvk = 0.
If the only solution is the trivial one c1 = · · · = ck = 0, then the vectors are linearly independent; if there is one or more nontrivial solutions, then the vectors are linearly dependent. An alternative method for determining linear independence is to form a matrix A with column vectors the vectors to test. Then:
• If det(A) ̸= 0, then the vectors are linearly independent.
For example, to determine whether or not the vectors $\begin{bmatrix} -1 \\ 1 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$, and $\begin{bmatrix} 3 \\ 5 \\ -1 \end{bmatrix}$ are linearly independent, start with the equation
$c_1\begin{bmatrix} -1 \\ 1 \\ 3 \end{bmatrix} + c_2\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} + c_3\begin{bmatrix} 3 \\ 5 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$.
Row reducing the corresponding coefficient matrix A shows the only solution is the trivial one, so the vectors are linearly independent. This implies the inverse A−1 exists and that det(A) ̸= 0. In addition, since A−1 exists, the linear system $A\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \mathbf{b}$ has a unique solution for every vector b in R3. The uniqueness is a key result of the linear independence. Two further facts are used repeatedly:
• If S consists of m vectors in Rn and m > n, then S is linearly dependent.
• Any subset of a set of linearly independent vectors is linearly independent.
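The determinant and row-reduction tests above are easy to check by machine. The following is a minimal sketch (not part of the original text) using SymPy; the matrix `A` carries the three example vectors above as its columns.

```python
import sympy as sp

# Columns are the vectors (-1, 1, 3), (2, 1, 2), (3, 5, -1) from the example above.
A = sp.Matrix([[-1, 2, 3],
               [1, 1, 5],
               [3, 2, -1]])

# For a square matrix, a nonzero determinant means the columns are linearly independent.
print(A.det())  # 40, nonzero, so the vectors are linearly independent

# The general test: columns are independent iff rank equals the number of columns,
# i.e. the homogeneous system Ax = 0 has only the trivial solution.
print(A.rank() == A.cols)  # True
```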
Solutions to Exercises
−1
1
independent
...
Since
1
−2
dependent
...
Since
3 −1
−3 2
−1 −2
1
...
2
...
Since v3 = v1 + v2, the vectors are linearly dependent.
...
To solve the linear system $c_1\begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix} + c_2\begin{bmatrix} 2 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$, we have that
$\begin{bmatrix} -1 & 2 \\ 2 & 2 \\ 1 & 3 \end{bmatrix}$ reduces to $\begin{bmatrix} -1 & 2 \\ 0 & 6 \\ 0 & 0 \end{bmatrix}$, so the only solution is the trivial solution and hence, the vectors are linearly independent
...
Since ⎣ 2 −1 ⎦ reduces− ⎣ 0 0 ⎦ , 7
...
Also,
linearly dependent
...
2
⎡
−1
3
1
= 16, the vectors are linearly independent
...
Since ⎢
to ⎢
⎣ −1 2 0 ⎦ reduces− ⎣ 0 0
−− − →
−−−
1 ⎦
2 1 1
0 0
0
trivial solution and hence, the vectors are linearly independent
...
Since ⎢
⎣ 1
− − −→
0
2 ⎦ −− − − ⎣ 0 0 0 ⎦
1
4
6
0 0 0
infinitely many solutions and hence, the vectors are linearly dependent
...
3 3
0 1
1 −1
0 0
From the linear system c1
+ c2
+ c3
=
, we have that
2 1
0 0
−1 −2
0 0
⎤
⎡
⎤
0 1
3 0
1
⎢ 0 1 −2 ⎥
1 −1 ⎥
⎥ −→ ⎢
⎥
⎣ 0 0 −5/3 ⎦
...
⎡
⎤
⎡
⎤
−1 1 2
−1 1 2
⎢ 2 4 2 ⎥
⎢ 0 1 1 ⎥
⎥
⎢
⎥
12
...
−− − →
− − − to
1 1 0
0 0 0
⎡
⎤
⎡
⎤
1
0 −1 1
1 0 −1
1
⎢ −2 −1 1
⎢ 0 −1 −1
1 ⎥
3 ⎥
⎥
⎢
⎥ , the homogeneous linear system c1 M1 +
13
...
0 0
⎡
⎤
⎡
⎤
0 −2 2 −2
−1 −1 0 2
⎢ −1 −1 0
⎢ 0 −2 2 −2 ⎥
2 ⎥
⎥
⎥ , the homogeneous linear system c1 M1 +
14
...
0 0
11
...
Since v2 = − 2 v1 , the vectors are linearly 16
...
linearly dependent
...
Any set of vectors containing the zero vector 18
...
dependent
...
a
...
b
...
20
...
Any set of four or more vectors in R3 is linearly dependent
...
b
...
Form the matrix with column vectors the three given vectors, that is, let A = ⎣ 2 0 a ⎦
...
⎡
⎤
⎡
⎤
1 1 1
1 1
1
⎢ 2 0 −4 ⎥
⎢ 0 −2 −6 ⎥
⎥
⎢
⎥
22
...
1 1 1
1 2 1 = 1,
1
2
⎡
⎤ 3 ⎡
1 1 1 2
1 0
⎣ 1 2 1 1 ⎦ −→ ⎣ 0 1
1 3 2 3
0 0
Hence v = −v2 + 3v3
...
a
...
b
...
3
⎡
⎤
⎡
1 1 0 0
⎢ 0 1 1 0 ⎥
⎢
⎥
⎢
24
...
Since ⎢
⎣ −1 1 1 0 ⎦ reduces− ⎣
−− − →
− − − to
0 0 1 0
⎡
⎤
⎡
1 1 0 3
1
⎢ 0 1 1 5 ⎥
⎢ 0
⎥
⎢
b
...
25
...
0 ⎦
0
⎤
⎥
⎥ , the matrix M = M1 + 2M2 + 3M3
...
Since
⎦
linear system is inconsistent and hence, M can not be written
= 13, the matrix A is invertible, so that Ax = b has a unique solution for every
vector b
...
Since
3
1
0
2
4
−1 4
2 −4
= 4, the matrix A is invertible, so that Ax = b has a unique solution for every
vector b
...
Since the equation c1(1) + c2(−2 + 4x2) + c3(2x) + c4(−12x + 8x3) = 0, for all x, gives that c1 = c2 = c3 = c4 = 0, the polynomials are linearly independent.
Since the equation c1(1) + c2(x) + c3(5 + 2x − x2) = 0, for all x, gives that c1 = c2 = c3 = 0, the polynomials are linearly independent.
Since the equation c1(2) + c2(x) + c3(x2) + c4(3x − 1) = 0, for all x, gives that c1 = (1/2)c4, c2 = −3c4, c3 = 0, c4 ∈ R, the polynomials are linearly dependent.
Since the equation c1(x3 − 2x2 + 1) + c2(5x) + c3(x2 − 4) + c4(x3 + 2x) = 0, for all x, gives that c1 = c2 = c3 = c4 = 0, the polynomials are linearly independent.
In the equation c1 cos πx + c2 sin πx = 0, if x = 0, then c1 = 0, and if x = 1/2, then c2 = 0. Hence, the functions are linearly independent.
32. Let x = 0, x = ln 2, and x = ln 5 to obtain a linear system in c1, c2, and c3 whose only solution is the trivial one, so the functions are linearly independent.
In the equation c1 x + c2 x2 + c3 ex = 0, if x = 0, then c3 = 0
...
This system has solution c1 = 0 and c2 = 0
...
34
...
Let x = 1, x = 0, and x = 1 to obtain the linear
2
⎧
⎪c1 + ec2
=0
⎨
system
c2
= 0
...
2 c1 + e
independent
...
Suppose u and v are linearly dependent, so there are scalars a and b, not both zero, such that au + bv = 0. If a ̸= 0, then u = −(b/a)v, so u and v are scalar multiples. Conversely, suppose u = cv for some scalar c. Then u − cv = 0 and hence, the vectors are linearly dependent.
Consider the equation c1 w1 + c2 w2 + c3 w3 = 0
...
Since S is linearly independent, then c1 = 0, c1 + c2 = 0, c1 + c2 + c3 = 0 and hence, the only solution is the
trivial solution
...
37
...
Since the vectors v1, v2, v3 are linearly independent, then c1 = 0, c1 + c2 + c3 = 0, and −c2 + c3 = 0
...
38
...
Since w1 = v2 , w2 = v1 + v3 , and w3 = v1 + v2 + v3 ,
then
c1 (v2 ) + c2 (v1 + v3 ) + c3 (v1 + v2 + v3 ) = 0 ⇔ (c2 + c3 )v1 + (c1 + c3 )v2 + (c2 + c3 )v3 = 0
...
Therefore, the set T is linearly dependent
...
Consider c1 v1 + c2 v2 + c3 v3 = 0, which is true if and only if c3 v3 = −c1 v1 − c2 v2
...
Therefore,
c3 = 0
...
40
...
v1 = v3 − v2 , v1 = 2v3 − 2v2 − v1 , v1 = 3v3 − 3v2 − 2v1 b
...
Then all solutions are given by c1 = 1 − c3 , c2 = −c3 , c3 ∈ R
...
Since A1 , A2 ,
...
Hence, the only solution to Ax = 0 is x = 0
...
Consider
0 = c1 Av1 + c2 Av2 + · · · + ck Avk = A(c1 v1 ) + A(c2 v2 ) + · · · + A(ck vk ) = A(c1 v1 + c2 v2 + · · · + ck vk )
...
Since the vectors v1 , v2 ,
...
, Avk are linearly independent
...
v1 =
0
1
1 1
1 1
...
Let
, which are linearly independent
...
If ad − bc = 0, then the column
c d
vectors are linearly dependent
...
Since
2
...
⎧
⎪ c1
+ c3 = 0
⎨
Since S is linearly independent, then
c2 + c3 = 0 , which has the unique solution c1 = c2 = c3 = 0
...
a2 0 1
3
...
So the vectors
1 2 1
are linearly independent if and only if a ̸= ±1, and a ̸= 0
...
a
...
Since the
vectors are not scalar multiples of each other they are linearly independent
...
a
...
b
...
If a = 1, b = 1, c = 3,
⎤
1
then the system is inconsistent and v = ⎣ 1 ⎦ is not a linear combination of the vectors
...
All vectors ⎣ b ⎦ such that −2a + b + c = 0
...
Since 0 1 0 = −2, the vectors are linearly
c
2 1 0
independent
...
The augmented matrix of the equation
⎡
⎤
⎡
⎤
⎡
⎤ ⎡
⎤
1
1
1
a
c1 ⎣ 0 ⎦ + c2 ⎣ 1 ⎦ + c3 ⎣ 0 ⎦ = ⎣ b ⎦
2
1
0
c
is
⎡
1
⎣ 0
2
1 1
1 0
1 0
⎤
⎡
a
1
b ⎦→⎣ 0
c
0
0
1
0
⎤
0 −3b + 1c
2
2
⎦
...
That is, all vectors in R3 can be written
as a linear combination of the three given vectors
...
a
...
1 2 0
b
...
c
...
5
5
5
−− − −
− − −→
1 1 1 1
0 0 1 9/5
d
...
⎡
⎤
⎡
⎤
⎡
⎤
1 1 2 1
x
3
⎢ −1 0 1 2 ⎥
⎢ y ⎥
⎢ 1 ⎥
⎥
⎢
⎥
⎢
⎥
7
...
Let A = ⎢
⎣ 2 2 0 1 ⎦ , x = ⎣ z ⎦ , and b = ⎣ −2 ⎦
...
det(A) = −8 c
...
d
...
e
...
a
...
c1 M 1 + c2 M 2 + c3 M 3 ,
⎡
1
⎢ 0
⎢
⎣ −1
1
1
1
2
1
0
0
0
0
1
0
0
0
0
1
0
0
⎤
0
0 ⎥
⎥,
1 ⎦
0
has only the trivial solution
...
The augmented matrix for the linear system
⎤
⎡
0 1
⎢
1 −1 ⎥
⎥ reduces to ⎢
− − −→
1 2 ⎦ −− − − ⎣
0 1
1
0
0
0
0
1
0
0
0
0
1
0
1 −1
2
1
=
⎤
−1
2 ⎥
⎥,
−3 ⎦
0
1 −1
so the unique solution is c1 = −1, c2 = 2, and c3 = −3
...
The equation
= c1 M 1 + c2 M 2 + c3 M 3
1
2
⎧
=1
⎪ c1 + c2
⎪
⎪
⎨
c2 + c3
= −1
holds if and only if the linear system
has a solution
...
d
...
⎤
⎡
⎤
⎡
⎤ ⎡
⎤
1
3
2
b1
9
...
x1 ⎣ 2 ⎦ + x2 ⎣ −1 ⎦ + x3 ⎣ 3 ⎦ = ⎣ b2 ⎦ b
...
c
...
d
...
⎡
⎤
v1
⎢ v2 ⎥
⎢
⎥
2
2
10
...
If v = ⎢
...
b
...
, n, so
⎣
...
vn
v · v > 0
...
⎡
⎢
⎢
u · (v + w) = ⎢
⎣
and
u1
u2
...
...
...
un
d
...
...
⎡
⎥ ⎢
⎥ ⎢
⎥+⎢
⎦ ⎣
⎤ ⎡
⎥ ⎢
⎥ ⎢
⎥·⎢
⎦ ⎣
v1
v2
...
...
...
wn
⎤⎞
⎡
⎥⎟ ⎢
⎥⎟ ⎢
⎥⎟ = ⎢
⎦⎠ ⎣
⎤
⎡
⎥ ⎢
⎥ ⎢
⎥+⎢
⎦ ⎣
u1
u2
...
...
...
un
⎤ ⎡
⎥ ⎢
⎥ ⎢
⎥·⎢
⎦ ⎣
⎤ ⎡
⎥ ⎢
⎥ ⎢
⎥·⎢
⎦ ⎣
w1
w2
...
...
...
vn + wn
⎤
⎥
⎥
⎥ = (u1 v1 + u1 w1 ) + · · ·+ (un vn + un wn )
⎦
⎤
⎥
⎥
⎥ = (u1 v1 + u1 w1 ) + · · · + (un vn + un wn )
...
Using part (b), we have
c1 vi · v1 + c2 vi · v2 + · · · + ci vi · vi + · · · + cn vi · vn = 0. Since the vectors are pairwise orthogonal, vi · vj = 0 for j ̸= i, so the equation reduces to ci vi · vi = 0. But vi · vi ̸= 0, so ci = 0.
...
, n
the vectors are linearly independent
...
5
...
Since
⎡
⎤
⎡
⎤
1 2 4
1 2 4
⎣ 0 1 3 ⎦ → ⎣ 0 1 3 ⎦
...
T
2
...
For example,
4
...
F
...
T
6
...
⎡
1
⎣ 0
1
Since
⎤
⎡
4 2
1
3 1 ⎦→⎣ 0
−1 0
0
⎤
4 2
1 0 ⎦
...
T
⎤
⎡
2
4 1
3 0 ⎦→⎣ 0
−1 1
0
9
...
Since p(x) is not a scalar
multiple of q1 (x) and any linear
combination of q1 (x) and q2 (x)
with nonzero scalars will contain
an x2
...
T
12
...
T
15
...
Since
⎤
4 1
1 −1 ⎦
...
F
...
13
...
The set of all linear combinations of matrices in T is not
all 2 × 2 matrices, but the set of
all linear combinations of matrices from S is all 2 × 2 matrices
...
16
...
T
19
...
T
22
...
At least one is a linear
combination of the others
...
F
...
26
...
An n × n matrix is invertible if and only if the column
vectors are linearly independent
...
T
25
...
F
...
31
...
F
...
18
...
Since the column vectors are linearly independent,
det(A) ̸= 0
21
...
If the vector v3 is a linear combination of v1 and v2 ,
then the vectors will be linearly
dependent
...
F
...
27
...
F
...
33
...
1
A vector space V is a set with an addition and scalar multiplication defined on the vectors in the set
that satisfy the ten axioms
...
To show a set V with an addition and scalar multiplication defined is a vector space
requires showing all ten properties hold
...
The operations defined on a set, even a familiar set, are ours to choose. For example, if the addition on Mn×n is defined by A ⊕ B = AB, then Mn×n is not a vector space, since AB may not equal BA, so that A ⊕ B ̸= B ⊕ A for all matrices in Mn×n.
For example, define an addition on R3 by
$\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} \oplus \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 + 1 \\ y_1 + y_2 + 2 \\ z_1 + z_2 - 1 \end{bmatrix}$.
The additive identity (Axiom (4)) is the unique vector, let's call it vI, such that v ⊕ vI = v for all vectors v ∈ R3. Solving x1 + xI + 1 = x1, y1 + yI + 2 = y1, and z1 + zI − 1 = z1 gives $v_I = \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix}$. So the additive identity in this case is not the zero vector, which is the additive identity for the vector space R3 with the standard operations.
...
So w is the additive inverse of v provided v ⊕ w = vI, that is,
$\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} \oplus \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 + 1 \\ y_1 + y_2 + 2 \\ z_1 + z_2 - 1 \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \\ 1 \end{bmatrix} \Leftrightarrow w = \begin{bmatrix} -x_1 - 2 \\ -y_1 - 4 \\ -z_1 + 2 \end{bmatrix}$.
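The identity and inverse computations above are easy to sanity-check numerically. This is a minimal sketch, not part of the original text; `oplus` encodes the nonstandard addition defined above.

```python
# Nonstandard addition on R^3 from the example above.
def oplus(u, v):
    return (u[0] + v[0] + 1, u[1] + v[1] + 2, u[2] + v[2] - 1)

v_I = (-1, -2, 1)  # additive identity computed above

def inverse(v):
    # additive inverse: solves v (+) w = v_I componentwise
    return (-v[0] - 2, -v[1] - 4, -v[2] + 2)

v = (5, -7, 2)
assert oplus(v, v_I) == v            # v (+) v_I = v
assert oplus(v, inverse(v)) == v_I   # v (+) (-v) = v_I
print("identity and inverse verified for", v)
```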
For example, let V = { (x, y, z) ∈ R3 | x − y + z = 0 } and define addition and scalar multiplication as the standard operations on R3.
This applies to most, but not all of the vector space properties
...
So to show V is a vector space we would need to show the sum of two vectors from V is another vector in V. Let u = (x1, y1, z1) and v = (x2, y2, z2) be vectors in V. Then
$u \oplus v = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix} + \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix} = \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \\ z_1 + z_2 \end{bmatrix}$,
and since (x1 + x2) − (y1 + y2) + (z1 + z2) = (x1 − y1 + z1) + (x2 − y2 + z2) = 0, the sum is in V. Similarly, cu is in V for all scalars c.
...
Solutions to Exercises
1
...
To show that V is not a vector space it is sufficient to show any
one of the properties does not hold
...
2
...
Since the addition of two vectors is another vector, V is closed under addition
...
Therefore,
V is not a vector space
...
The operation ⊕ is not associative so V is not a vector space
...
4
...
5
...
x2
y2
+
x1 + x2
y1 + y2
=
x1
x2
+
+
y1
y2
x1 + x2 + x3
and
y1 + y2 + y3
3
...
x3
y3
x
and −u =
y
(−u) = −u + u = 0
...
Then 0 + u =
...
c
x
y
8
...
cx + dx
cy + dy
=
=
...
(cd)x
(cd)y
10
...
...
c
x
y
0
0
4
...
Let u =
9
...
x3
y3
+
is in R2
...
a
c
M2×2
...
e f
g h
a+e b+f
c+g d+h
=
is in
=
3
...
a b
and −u =
c d
Then u + (−u) = −u + u = 0
...
Let u =
a b
+
c d
a+e b+f
=k
c+g d+h
a b
= kc
+k
c d
7
...
e
g
−a −b
−c −d
...
4
...
k
a
c
b
d
=
a+e
c+g
=
e
g
f
h
and u =
ka
kc
kb
kd
+
a
c
b
d
...
8
...
c d
c d
c d
...
Since the real numbers have the associative
property,
a b
a b
k l
= (kl)
...
1
a
c
b
d
=
a
c
b
d
...
7
...
⎡
⎤ ⎡
⎤
⎡
⎤
a
c
a+c
8
...
Since ⎣ b ⎦ + ⎣ d ⎦ = ⎣ b + d ⎦ , the sum of two vectors in V is not another vector in V, then
1
1
2
V is not a vector space
...
If the third component always remains 1, then to show V is a vector space is
equivalent to showing R2 is a vector space with the standard componentwise operations
...
Since the operation ⊕ is not commutative, 10
...
vector space
...
Since 12
...
this vector is not in V, then V is not a vector
space
...
a
...
That is, if two matrices
from V are added, then the row two, column two entry of the sum has the value 2 and hence, the sum is
not in V
...
Each of the ten vector space axioms are satisfied with vector addition and scalar multiplication
defined in this way
...
Suppose A and B are skew symmetric. Then (A + B)t = At + Bt = −A − B = −(A + B) and (cA)t = cAt = −(cA), so the set of skew symmetric matrices is closed under addition and scalar multiplication. The other vector space properties also hold and V is a vector space. Likewise, the set of upper triangular matrices with the standard componentwise operations is a vector space.
Suppose A and B are symmetric. Then (A + B)t = At + Bt = A + B and (cA)t = cAt = cA, so the set of symmetric matrices is closed under addition and scalar multiplication. The other vector space properties also hold and V is a vector space. The set of invertible matrices, however, is not a vector space: if A is invertible then so is −A, but then A + (−A) = 0 is not invertible, and hence not in V.
The zero vector is given by 0 =
18
...
Since (AB)2 = ABAB = A2 B 2 = AB if and only if A and
B commute, the set of idempotent matrices is not a vector space
...
If A and C are in V and k is a scalar, then (A + C)B = AB + BC = 0, and (kA)B = k(AB) = 0, so
V is closed under addition and scalar multiplication
...
Hence, V is a vector space
...
The set V is closed under addition and scalar multiplication
...
So V is a
vector space
...
Since A ⊕ A−1 = AA−1 = I, then the additive inverse of A is
0 1
A−1
...
If c = 0, then cA is not in V
...
21
...
The additive identity is 0 =
64
Chapter 3 Vector Spaces
22
...
Since
t
1+t
0
1
the additive identity is
t
1+t
+
s
1+s
=
t+s
1 + (t + s)
⇔ s = 0,
...
Since the other nine vector space properties also hold, V is a vector space
...
1 ⎤
1
⎡
⎡
⎤
1
1+a
23
...
The additive identity is 0 = ⎣ 2 ⎦
...
Then the additive inverse is −u =
3
3 + 2a ⎡
⎡
⎤
⎤ ⎡
⎤ ⎡ ⎤
1−a
1+t
1 + 0t
1
⎣ 2 + a ⎦
...
Each of the ten vector space axioms is satisfied
...
0⊙⎣ 2 − t ⎦ = ⎣ 2 − 0t ⎦ = ⎣ 2 ⎦
3 − 2a
3 + 2t
3 + 2(0)t
3
c
...
Since S is a subset of R3 with the same standard operations only vector space axioms (1) and
(6) need to be verified since the others are inherited from the vector space R3
...
Then
w1 + w2 = (a + c)u + (b + d)v and k(au + bv) =
(ka)u + (kb)v are also in S
...
Each of the ten vector space axioms is satisfied
...
The set S is a plane through the origin in
R3 , so the sum of vectors in S remains in S and a
scalar times a vector in S remains in S
...
27
...
28
...
Since cos(0) = 1 and sin(0) = 0, the additive identity is
is
cos(−t1 )
sin(−t1 )
=
cos t1
− sin t1
1
0
...
b
...
0
...
additive identity in this case is
cos t1
sin t1
c
...
Since (f + g)(0) = f (0) + g(0) = 1 + 1 = 2, then V is not closed under addition and hence is not a vector
space
...
Since c ⊙ (d ⊙ f)(x) = c ⊙ (f(x + d)) = f(x + c + d) and (cd) ⊙ f(x) = f(x + cd) do not agree for all scalars c and d, V is not a vector space.
...
a
...
b
...
Exercise Set 3.2
A subset W of a vector space V is a subspace provided W is itself a vector space with the operations inherited from V; most of the axioms hold automatically in W. For example, if u and v are vectors in W, then they are also vectors in V, so that u ⊕ v = v ⊕ u. To show that a subset is a subspace it is sufficient to verify that if u and v are in W and c is a scalar, then u + cv is another vector in W.
For example, let
$W = \left\{ \begin{bmatrix} s - 2t \\ t \\ s + t \end{bmatrix} \,\middle|\, s, t \in \mathbb{R} \right\}$,
which is a subset of R3. Let u = (s − 2t, t, s + t) and v = (a − 2b, b, a + b) be two vectors in W and let c be a scalar. Next we form the linear combination
$u + cv = \begin{bmatrix} s - 2t \\ t \\ s + t \end{bmatrix} + c\begin{bmatrix} a - 2b \\ b \\ a + b \end{bmatrix}$
and simplify the sum to one vector. Continuing the simplification, we have that
$u + cv = \begin{bmatrix} (s + ca) - 2(t + cb) \\ t + cb \\ (s + ca) + (t + cb) \end{bmatrix}$,
and now the vector u + cv is in the required form with two parameters s + ca and t + cb, so u + cv is in W and W is a subspace. An arbitrary vector in W can also be written as
$\begin{bmatrix} s - 2t \\ t \\ s + t \end{bmatrix} = s\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + t\begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}$,
and in this case (1, 0, 1) and (−2, 1, 1) are linearly independent, so W is a plane in R3, namely W = span{(1, 0, 1), (−2, 1, 1)}.
The span of a set of vectors is always a subspace. The coordinate vectors of R3 are a simple example; a quick computational check is sketched below.
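Deciding whether a vector lies in a span reduces to solving a linear system, which is easy to automate. A minimal sketch (not from the text), using the plane W = span{(1,0,1), (−2,1,1)} from the example above:

```python
import sympy as sp

# Spanning vectors of W from the example above, as columns.
B = sp.Matrix([[1, -2],
               [0, 1],
               [1, 1]])

def in_span(v):
    # v is in the span iff appending it as a column does not raise the rank.
    return B.rank() == B.row_join(sp.Matrix(v)).rank()

print(in_span([1, 1, 4]))   # True:  (1,1,4) = 3*(1,0,1) + 1*(-2,1,1)
print(in_span([0, 0, 1]))   # False: (0,0,1) is not on the plane
```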
• Two linearly dependent vectors cannot span R2. For example, if S = {(2, 1), (−4, −2)} and v is in S, then
v = c1(2, 1) + c2(−4, −2) = (c1 − 2c2)(2, 1),
and hence, every vector in the span of S is a linear combination of only one vector.
• If S = {(2, 3), (1, 0), (0, 1)}, then since the coordinate vectors are in S, span(S) = R2, but the vectors are linearly dependent since (2, 3) = 2(1, 0) + 3(0, 1).
To determine whether or not a vector v is in span{u1, . . . , uk}, start with the equation
c1u1 + c2u2 + · · · + ckuk = v,
and then solve the resulting linear system. For example, if S = { A ∈ M2×2 | A is invertible }, then S is not a subspace of the vector space of all 2 × 2 matrices: the invertible matrices $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ have the non-invertible sum $\begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}$. To determine whether or not $\begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix}$ is in the span of the two matrices $\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$ and $\begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix}$, start with the equation
$c_1\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} + c_2\begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix} \Leftrightarrow \begin{bmatrix} c_1 - c_2 & 2c_1 \\ c_2 & c_1 + c_2 \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix}$.
Solutions to Exercises
0
0
and
be two vectors in S and c a scalar
...
0
y1
1
...
The set S is not a subspace of R2
...
The set S is not a subspace of R2
...
The set S is not a subspace of R2
...
The set S is a subspace since
x
3x
+c
0
−1
∈ S
...
0
0
is in
∈ S
...
5
...
If u =
0
y1 + cy2
=
∈ S
...
7
...
8
...
Then
(x1 + y1 )(x2 + y2 )(x3 + y3 ) ̸= 0, so S is not a subspace
...
Since for all real numbers s, t, c, we have that
⎡
⎤
⎡
⎤ ⎡
⎤
s − 2t
x − 2y
(s + cx) − 2(t + cy)
⎣
⎦ + c⎣
⎦=⎣
⎦,
s
x
s + cx
t+s
y+x
(t + cy) + (s + cx)
is in S, then S is a subspace
...
For any two vectors in S, we have ⎣ 2 ⎦ + ⎣ 2 ⎦ = ⎣
4
x3
y3
x2 + y2
not a subspace
...
If A and B are symmetric matrices and c is 12
...
B 2 = A + B if and only if AB = −BA, so S is
not a subspace
...
Since the sum of invertible matrices may not 14
...
c is a scalar, then (A + cB)t = At + cB t = −A +
c(−B) = −(A + cB), so S is a subspace
...
If A and B are upper triangular matrices,
then A + B and cB are also upper triangular, so
S is a subspace
...
If A and B are diagonal matrices and c is
a scalar, then A + cB is a diagonal matrix and
hence, S is a subspace
...
The set S is a subspace
...
a b
e f
,B =
such
c d
g h
that a + c = 0 and e + g = 0, then A + B =
a+e b+f
with
c+g d+h
If A =
(a + e) + (d + h) = (a + d) + (e + h) = 0
19
...
and hence, S is a subspace
...
The set S is not a subspace since
(x2 + x) − x2 = x, which is not a polynomial of
even degree
...
If p(x) and q(x) are polynomials with p(0) =
0 and q(0) = 0, then
22
...
(p + q)(0) = p(0) + q(0) = 0,
and
(cq)(0) = cq(0) = 0,
so S is a subspace
...
The set S is not a subspace, since for example
(2x2 + 1) − (x2 + 1) = x2 , which is not in S
...
The set S is a subspace (assuming the zero
polynomial is in the set)
...
The vector v is in the span of S = {v1 , v2 , v3 } provided there are scalars c1 , c2 , and c3 such that
v = c1 v1 + c2 v2 + c3 v3
...
We have that
⎡
⎤
⎡
⎤
1 −1 −1 1
1 −1 −1 1
⎣ 1 −1 2 −1 ⎦ reduces to ⎣ 0 1
0
1 ⎦,
−− − −
− − −→
0 1
0
1
0 0
3 −2
and since the linear system has a (unique) solution, the vector v is in the span
...
Since
⎡
1
⎣ 1
0
⎤
⎡
−1 −1 −2
1
−1 2
7 ⎦ reduces to ⎣ 0
−− − −
− − −→
1
0 −3
0
⎤
−1 −1 −2
1
0 −3 ⎦ ,
0
3
9
the linear system has a (unique) solution and hence, the vector v is in the span
...
Since
⎡
1
⎢ 1
⎢
⎣ 0
−1
⎤
⎡
0 1 −2
⎢
1 −1 1 ⎥
⎥ reduces to ⎢
⎦ −− − − ⎣
− − −→
2 −4 6
1 −3 5
28
...
0
1
0
0
1
−2
0
0
⎤
1
0 ⎥
⎥,
2 ⎦
0
the linear system is inconsistent and hence, the matrix M is not in the span
...
Since
c1 (1 + x) + c2 (x2 − 2) + c3 (3x) = 2x2 − 6x − 11 if and only if (c1 − 2c2 ) + (c1 + 3c3 )x + c2 x2 = 2x2 − 6x − 11,
we have that c1 = −7, c2 = 2, c3 =
1
3
and hence, the polynomial is in the span
...
Since
c1 (1 + x) + c2 (x2 − 2) + c3 (3x) = 3x2 − x − 4 if and only if (c1 − 2c2 ) + (c1 + 3c3 )x + c2 x2 = 3x2 − x − 4,
we have that c1 = 2, c2 = 3, c3 = −1 and hence, the polynomial is in the span
...
Since
⎧⎡
⎫
⎤
⎡
⎤
⎤
2
1 a
−1 3
b
⎨ a
⎬
⎣ −1 3 b ⎦ reduces to ⎣ 0 7 a + 2b ⎦ , then span(S) = ⎣ b ⎦ a + c = 0
...
Since
⎡
1
⎣ 1
2
⎤
⎡
2 1 a
1
3 2 b ⎦ reduces to ⎣ 0
−− − −
− − −→
1 −1 c
0
33
...
−a + b
⎩
⎭
−5a + 3b + c
c
−1
1
=
a
a+b
3
a
c
b
d
b
2a−b
3
, leads to the linear system c1 + c2 = a, 2c1 − c2 =
a, b ∈ R
...
34
...
span(S) =
36
...
⎡
⎤
−4 2 2 a
⎣ 0 −1 1 b ⎦
1
0 1 c
⎡
⎤
−4 2 2
a
⎦,
b
reduces− ⎣ 0 −1 1
−− − →
− − − to
0
0 2 c + 1a + 1b
4
2
span(S) = ax2 + bx + c a − c = 0
...
a
...
The set S is linearly independent
...
the span is all polynomials of degree two or less
...
a
...
b
⎭
3a − b
39
...
span(S) = R3
b
...
b
...
40
...
Since
⎡
⎤
⎡
1 −1 0 2 a
1 −1 0
⎣ 2 0 1 1 b ⎦ reduces to ⎣ 0 2
1
−− − −
− − −→
1 3 1 1 c
0 0 −1
⎤
2
a
−3
−2a + b ⎦ ,
5 3a − 2b + c
every vector in R3 is a linear combination of the vectors in S and hence, span(S) = R3
...
The set S is
linearly dependent since there are four vectors in R3
...
a
...
The set S is linearly dependent
...
The set T is also
linearly dependent and span(T ) = R3
...
The set H is linearly independent and we still have span(H) =
R3
...
a
...
The set S is linearly independent
...
The set T is also linearly independent and span(T ) = M2×2
...
a
...
The set S is linearly dependent
...
2x2 + 3x + 5 = 2(1) − (x − 3) + 2(x2 + 2x) d
...
44
...
Since
⎡
⎤
⎡
⎤ ⎡
⎤
2s1 − t1
2s2 − t2
2(s1 + cs2 ) − (t1 + ct2 )
⎢
⎥
⎢
⎥ ⎢
⎥
s1
s2
s1 + cs2
⎢
⎥ + c⎢
⎥=⎢
⎥ ∈ S,
⎣
⎦
⎣
⎦ ⎣
⎦
t1
t2
t1 + ct2
−s1
−s2
−(s1 + cs2 )
⎧⎡
⎡
⎤
⎡
⎤
⎡
⎤
⎤ ⎡
⎤⎫
2s − t
2
−1
2
−1 ⎪
⎪
⎪
⎪
⎨⎢
⎢
⎥
⎢
⎥
⎢
⎥
⎥ ⎢
⎥⎬
s
⎢
⎥ = s ⎢ 1 ⎥ + t ⎢ 0 ⎥ , then S = span ⎢ 1 ⎥ , ⎢ 0 ⎥
...
b
...
Since the vectors ⎢
⎣ 0 ⎦ and ⎣ 1 ⎦ are not multiples of each other they are linearly independent
...
S R
45
...
, b
...
⎩
⎭
2s + 3t
2
3
2
3
Therefore, S is a subspace
...
The vectors found in part (b) are linearly independent
...
Since the span of
two linearly independent vectors in R3 is a plane, then S ̸= R3
a b
such that −a − 2b + c + d = 0
...
From part (a)
c d
not all matrices can be written as a linear combination of the matrices in S and hence, the span of S is not
equal to M2×2
...
The matrices that generate the set S are linearly independent
...
a
...
Since A(x + cy) =
1
2
+c
1
2
=
if and only if c = 0, then S is not a subspace
...
If u and v are in S, then A(u + cv) = Au + cAv = 0 + 0 = 0 and hence S is a subspace
...
Let B1 , B2 ∈ S
...
50
...
Let c be a scalar
...
Since S and T are subspaces, then u1 + cu2 ∈ S and v1 + cv2 ∈ T
...
51
...
Then there are scalars c1 ,
...
, dn such that w = i=1 ci ui + i=1 di vi
...
, um , v1 ,
...
, um , v1 ,
...
Now let w ∈ span{u1 ,
...
, vn }, so there are scalars c1 ,
...
Therefore, span{u1 ,
...
, vn } ⊆ S + T
...
a
...
Similarly, T is a subspace
...
The sets S and T are given by
1 0
−1 0
, T = span
0
0
,
1
0
,
0
0
0
1
1 0
−1 0
,
0
0
,
,
1
0
1 0
−1 0
,
0
0
1
0
0 0
0 1
,
0 1
0 0
, so
0 0
0 1
a + kd −(a + kd)
b+e
c+f
= M2×2
...
Exercise Set 3.3
In Section 3.3 bases and dimension are introduced. The minimal spanning sets, minimal in the sense of the number of vectors in the set, are those that are linearly independent. For example,
• B = {e1, e2, . . . , en} is a basis for Rn, and {1, x, . . . , xn} is a basis for Pn.
A basis is not unique; for example, if c ̸= 0, then B = {ce1, e2, . . . , en} is also a basis for Rn. But all bases for a vector space have the same number of vectors, called the dimension of the vector space, and denoted by dim(V). If S = {v1, v2, . . . , vm} is a subset of a vector space V with dim(V) = n, then:
• If m > n, then span(S) can equal V, but in this case some of the vectors are linear combinations of others and the set S can be trimmed down to a basis for V.
• If m < n, then S cannot be a basis for V, since in this case span(S) ̸= V.
• If m = n, then S will be a basis for V if either S is linearly independent or span(S) = V.
The two vectors v1 = (1, −1, 2) and v2 = (3, −1, 2) are linearly independent but cannot be a basis for R3, since all bases for R3 must have three vectors. To extend {v1, v2} to a basis, row reduce the matrix [v1 v2 e1 e2 e3]:
$\begin{bmatrix} 1 & 3 & 1 & 0 & 0 \\ -1 & -1 & 0 & 1 & 0 \\ 2 & 2 & 0 & 0 & 1 \end{bmatrix}$ reduces to $\begin{bmatrix} 1 & 3 & 1 & 0 & 0 \\ 0 & 2 & 1 & 1 & 0 \\ 0 & 0 & 0 & 2 & 1 \end{bmatrix}$.
The pivots in the echelon form matrix are located in columns one, two and four, so the corresponding column vectors in the original matrix, {v1, v2, e2}, form the basis. To trim a set of vectors that span the space to a basis the procedure is the same. Since
$\begin{bmatrix} 0 & 2 & 0 & 3 \\ -1 & 2 & 2 & -1 \\ -1 & 1 & 2 & -1 \end{bmatrix}$ reduces to $\begin{bmatrix} -1 & 2 & 2 & -1 \\ 0 & 2 & 0 & 3 \\ 0 & 0 & 0 & 3 \end{bmatrix}$,
the span of the four column vectors is R3, the pivot columns are one, two, and four, and so a basis for R3 is
$\left\{ \begin{bmatrix} 0 \\ -1 \\ -1 \end{bmatrix}, \begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ -1 \\ -1 \end{bmatrix} \right\}$.
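The pivot-column procedure above is entirely mechanical, so it is worth seeing once in code. A minimal sketch (not part of the text) using SymPy's rref on the trimming example above:

```python
import sympy as sp

# Columns are the four spanning vectors from the example above.
A = sp.Matrix([[0, 2, 0, 3],
               [-1, 2, 2, -1],
               [-1, 1, 2, -1]])

# rref returns the reduced row echelon form and the indices of the pivot columns.
R, pivots = A.rref()
print(pivots)  # (0, 1, 3): columns one, two, and four

# The corresponding columns of the ORIGINAL matrix form a basis for the span.
basis = [A.col(j) for j in pivots]
for b in basis:
    print(b.T)
```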
Solutions to Exercises
1
...
Therefore, since S has only two vectors
it is not a basis for R3
...
Since the third vector can be written as the
sum of the first two, the set S is linearly dependent and hence, is not a a basis for R3
...
Since the third polynomial is a linear combination of the first two, the set S is linearly dependent and hence is not a basis for P3
...
The two vectors in S are not scalar multiples and hence, the set S is linearly independent
...
9
...
11
...
Since dim(R2 ) = 2 every basis for R2 has two
vectors
...
4
...
6
...
−1
1
8
...
10
...
12
...
trices in M2×2
...
,
3
...
Since
⎡
−1 1
⎣ 2 0
1 1
the set S
vectors in
15
...
⎤
⎡
2 −1
⎢
4 2 ⎥
⎥ reduces to ⎢
⎦ −− − − ⎣
− − −→
2 0
5 3
1
0
0
0
2
−1
0
0
14
...
16
...
4
and is therefore not a basis for R
...
Notice that 1 (2x2 + x + 2 + 2(−x2 + x) − 2(1)) = x and 2x2 + x + 2 + (−x2 + x) − 2x − 2 = x2 , so the
3
span of S is P2
...
18
...
⎡
⎤
⎡
⎤
⎡ ⎤
⎡
⎤
s + 2t
1
2
1
19
...
Since the vectors ⎣ −1 ⎦ and
t ⎧⎡
0
⎡
⎤
⎤0 ⎡
⎤⎫ 1
2
1
2 ⎬
⎨
⎣ 1 ⎦ are linear independent a basis for S is B = ⎣ −1 ⎦ , ⎣ 1 ⎦ and dim(S) = 2
...
Since every matrix in S can be written in the form
a a+d
a+d
d
1 1
1 0
=a
+d
0 1
1 1
,
and the two matrices on the right hand side are linearly independent, a basis for S is
1 1
0 1
B=
,
...
1 0
1 1
21
...
Since the
1 0
0 1
1 0
0 1
0 0
,
,
0 0
1 0
0 1
and dim(S) = 3
...
Every 2 × 2 skew symmetric matrix has the form
B=
0 1
−1 0
with dim(S) = 1
...
Since every polynomial p(x) in S satisfies
p(0) = 0, we have that p(x) = ax+bx2
...
⎛⎡
24
...
If in addition, p(1) = 0, then a + b + c = 0, so
c = −a − b and hence
p(x) = ax3 +bx2 +(−a−b)x = a(x3 −x)+b(x2 −x)
...
⎤⎞
2 1
0 2 ⎦⎠ = −4, the set S is already a basis for R3 since it is a linearly independent
−2 1
R3
...
Since the first two vectors
3
2
⎧⎡
⎤ ⎡
⎤⎫
4
⎨ −2
⎬
are not scalar multiples of each other a basis for span(S) is ⎣ 1 ⎦ , ⎣ −1 ⎦
...
Since det ⎝⎣ 2
−1
set of three vectors in
⎡
⎤ ⎡
2
26
...
The vectors can not be a basis since a set of four vectors in R3 is linearly dependent
...
This gives
⎡
⎤
⎡
⎤
2 0 −1 2
2 0 −1 2
⎣ −3 2 −1 3 ⎦ reduces to ⎣ 0 2 − 5
6 ⎦
...
So a basis for the span of S is given by
⎧⎡
⎤ ⎡ ⎤ ⎡
⎤⎫
2
0
−1 ⎬
⎨
B = ⎣ −3 ⎦ , ⎣ 2 ⎦ , ⎣ −1 ⎦
...
⎩
⎭
0
2
0
28
...
To trim the set
down to a basis for the span row reduce the matrix with column vectors the vectors in S
...
−− − −
− − −→
2 −3 −2 −2
0 0 −3 2
A basis for the span consists of the column vectors in the original matrix corresponding to the pivot columns
of the row echelon matrix
...
Observe that span(S) = R3
...
The vectors can not be a basis since a set of four vectors in R3 is linearly dependent
...
This gives
⎡
⎤
⎡
⎤
2 0 2 4
2 0 2 4
⎣ −3 2 −1 0 ⎦ −→ ⎣ 0 2 2 6 ⎦
...
So a basis for the span of S is given by
3
...
The vectors can not be
down to a basis for the span
⎤⎫
⎬
⎦
...
⎭
a basis since a set of four vectors in R3 is linearly dependent
...
This gives
⎡
⎤
⎡
⎤
2 1 0 2
2 1 0 2
⎣ 2 −1 2 3 ⎦ reduces to ⎣ 0 −2 2 1 ⎦
...
So⎤a basis for the span of S is given by
⎧⎡
⎫
⎤ ⎡
⎤ ⎡
1
0 ⎬
⎨ 2
B = ⎣ 2 ⎦ , ⎣ −1 ⎦ , ⎣ 2 ⎦
...
⎩
⎭
0
0
2
31
...
Reducing this matrix, we have that
⎡
⎤
⎡
⎤
2 1 1 0 0
2 1 1
0 0
⎣ −1 0 0 1 0 ⎦ reduces to ⎣ 0 1 1
2 0 ⎦
...
So a basis for R3 containing S is B = ⎣ −1 ⎦ , ⎣ 0 ⎦ , ⎣ 0 ⎦
...
Form the 3 × 5 matrix with first two column vectors the vectors in S and then augment the identity
matrix
...
−− − −
− − −→
3 1 0 0 1
0 0 1 −2 1
A basis for R3 consists of the column vectors in the original⎧⎡
matrix corresponding to⎤⎫ pivot columns of the
the
⎤ ⎡
⎤ ⎡
1
1 ⎬
⎨ −1
row echelon matrix
...
⎩
⎭
3
1
0
4
4
33
...
A basis for R containing S is
⎧⎡
⎧⎡
⎤ ⎡
⎤ ⎡
⎤ ⎡ ⎤⎫
⎤ ⎡
⎤ ⎡
⎤ ⎡
⎤⎫
1
3
1
0 ⎪
1
1
1 ⎪
⎪
⎪ −1
⎪
⎪
⎪
⎪
⎨⎢
⎬
⎨⎢
⎬
−1 ⎥ ⎢ 1 ⎥ ⎢ 0 ⎥ ⎢ 0 ⎥
1 ⎥ ⎢ −3 ⎥ ⎢ −2 ⎥ ⎢ 0 ⎥
⎥,⎢
⎥,⎢
⎥,⎢ ⎥
...
B= ⎢
B= ⎢
⎪⎣ 2 ⎦ ⎣ 1 ⎦ ⎣ 0 ⎦ ⎣ 1 ⎦ ⎪
⎪⎣ 1 ⎦ ⎣ −1 ⎦ ⎣ −1 ⎦ ⎣ 0 ⎦⎪
⎪
⎪
⎪
⎪
⎩
⎭
⎩
⎭
4
2
0
0
−1
2
3
0
35
...
⎩
⎭
3
1
0
36
...
⎩
⎭
−1
3
0
37
...
Then B = {eii | 1 ≤ i ≤ n} is a basis for the subspace of all n × n diagonal matrices
...
Consider the equation c1 (cv1 ) + c2 (cv2 ) + · · · + cn (cvn ) = 0
...
Now since S is a basis, it is linearly independent, so
c1 = c2 = · · · = cn = 0 and hence S ′ is a basis
...
It is sufficient to show that the set S ′ is linearly independent
...
This is equivalent to A(c1 v1 + c2 v2 + · · · + cn vn ) = 0
...
Since S is linearly independent,
then c1 = c2 = · · · cn = 0, so that S ′ is linearly independent
...
To solve the homogeneous equation Ax = 0 consider the matrix
⎡
⎤
⎡
⎤
3 3 1
3
1 0
1
0
⎣ −1 0 −1 −1 ⎦ that reduces to ⎣ 0 1 −2/3 0 ⎦
...
⎣ 1 ⎦⎪
⎪
⎪
⎪
⎩
⎭
0
⎤
⎥
⎥
...
Since H is a subspace of V, then H ⊆ V
...
, vn } be a basis for H, so that S is a linearly
independent set of vectors in V
...
Now let v be a vector in V
...
cn such that c1 v1 + c2 v2 + · · · + cn vn = v
...
Hence, V ⊆ H
and we have that H = V
...
Since S = {ax3 + bx2 + cx | a, b, c ∈ R}, then dim(S) = 3
...
Hence, a polynomial in T has the form
q(x) = a(x3 − 1) + b(x2 − 1) + c(x − 1), so that dim(T ) = 3
...
Hence, a polynomial q(x) is in S ∩ T if and only if q(x) = a(x3 − x) + b(x2 − x) and hence, dim(S ∩ T ) = 2
...
Every vector in W can be written as a linear combination of the form
⎡
⎤
⎡
⎤
⎡
⎤
⎡ ⎤
2s + t + 3r
2
1
3
⎣ 3s − t + 2r ⎦ = s ⎣ 3 ⎦ + t ⎣ −1 ⎦ + r ⎣ 2 ⎦
...
⎩
⎭
⎩
⎭
2 ⎧⎡ 1⎤ ⎡
1⎫
1
1
1
1
2
⎤
1
⎨ 2
⎬
Since B = ⎣ 3 ⎦ , ⎣ −1 ⎦ is linear independent, B is a basis for W, so that dim(W ) = 2
...
Since
44
...
For the intersection, since
T = s⎢ ⎦ + t⎢
⎣ 0
⎣ 1 ⎦
⎪
⎪
⎪
⎪
⎩
⎭
0
0
3
...
S ∩ T = s⎣ ⎦
0
⎪
⎪
⎪
⎪
⎩
⎭
0
Exercise Set 3.4
If B = {v1, . . . , vn} is an ordered basis for a vector space, then for each vector v there are scalars c1, c2, . . . , cn such that v = c1v1 + · · · + cnvn. The unique scalars are called the coordinates of the vector relative to the ordered basis B, written as
$[v]_B = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}$.
For example, since every vector in R3 can be written as
$v = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + y\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + z\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$,
the coordinates relative to the standard basis are $[v]_B = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$. The coordinates depend on the order of the basis vectors. For example, if B = {e1, e2} and B′ = {e2, e1}, then
$[v]_B = \begin{bmatrix} x \\ y \end{bmatrix}$ and $[v]_{B'} = \begin{bmatrix} y \\ x \end{bmatrix}$.
A transition matrix converts the coordinates of a vector relative to one basis B = {v1, . . . , vn} into the coordinates of the same vector relative to a second basis B′ = {v′1, . . . , v′n}. To determine a transition matrix:
• Find the coordinates of each vector in B relative to the basis B′. That is,
$[I]_B^{B'} = [\;[v_1]_{B'}\;\; [v_2]_{B'}\;\; \cdots\;\; [v_n]_{B'}\;]$.
• The coordinates of v relative to B′, given the coordinates relative to B, are given by the formula
$[v]_{B'} = [I]_B^{B'}\,[v]_B$.
1
−1
,
and B ′ =
1
1
transition matrix from B to B ′ are:
Let B =
1
2
,
−2
−1
be two bases for R2
...
1/3 1
−1/3 1
′
• [I]B =
B
3
−2
• As an example,
1/3 1
−1/3 1
=
B′
3
−2
1/3 1
−1/3 1
=
B
−7/3
−8/3
...
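The transition matrix can be computed by solving B′X = B, since each column of X expresses a vector of B in B′-coordinates. A minimal sketch (not part of the text) reproducing the example above with SymPy:

```python
import sympy as sp

# Bases from the example above, vectors as columns.
B  = sp.Matrix([[1, -1],
                [1, 1]])
Bp = sp.Matrix([[1, -2],
                [2, -1]])

# [I]_B^{B'} solves B' * X = B: its columns are the B'-coordinates of B's vectors.
T = Bp.solve(B)
print(T)  # Matrix([[1/3, 1], [-1/3, 1]])

# Change coordinates: for v = (3, -2), [v]_B = B^{-1} v and [v]_{B'} = T [v]_B.
v = sp.Matrix([3, -2])
v_B = B.solve(v)
print(v_B.T)        # (1/2, -5/2)
print((T * v_B).T)  # (-7/3, -8/3)
```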
The coordinates of
c1
3
1
−2
2
+ c2
=
, relative to the basis B are the scalars c1 and c2 such that
8
0
...
Hence, [v]B =
−2
4
2
...
−1/2
3
, then [v]B =
...
To find the coordinates we form and row reduce the matrix
⎡
⎡
2
4
...
2
1 2 9
0 0 1 3
3
⎤
⎡
⎤
⎡
⎤
1 0 0
2 1 0 0
1/2
0 0 1 ⎦ reduces to ⎣ 0 −1 0 1 ⎦ , then [v]B = ⎣ −1 ⎦
...
Since c1 + ⎤2 (x − 1) + c3 x2 = 3 + 2x − 2x2 if and only if c1 − c2 = 3, c2 = 2, and c3 = −2, we have that
c
⎡
5
[v]B = ⎣ 2 ⎦
...
The equation c1 (x2 + 2x + 2) + c2 (2x + 3) + c3 (−x2 + x + 1)
matrix form,
⎡
⎤
⎡
2 3 1
8
⎣ 2 2 1
6 ⎦ , which reduces to ⎣
−− − − − −→
−−−−−−
1 0 −1 −3
⎡
⎤
−1/3
so [v]B = ⎣ 2 ⎦
...
Since c1
1 −1
0
0
+ c2
0
1
1
0
+ c3
1
0
0
−1
+ c4
= 8 + 6x − 3x2 gives the linear system, in
1 0
0 1
0 0
1 0
−1 0
=
⎤
0 −1/3
0
2 ⎦
1 8/3
1 3
−2 2
if and only if
3
...
4
1 −1
0 1
1 −1
1 1
2 −2
8
...
− − −reduces →
− − − − − to
1 2 0 3 3
0 0 0 1 −1
−1
⎡
⎤
⎡
⎤
1
−2
−1/4
1/2
9
...
[v]B1 = ⎣ 1 ⎦;[v]B2 = ⎣ 2 ⎦
1/8
−1/2
1
0
⎡
⎤
⎡ 1 ⎤
⎡
⎤
⎡
⎤
1
3
1
1
⎢ 1 ⎥
⎢ 1 ⎥
⎥
⎢
⎥
11
...
[v]B1 = ⎢
⎣ 1 ⎦; [v]B2 = ⎣ 7 ⎦
3
−1
0
−1
−1
3
13
...
Hence, [I]B2 =
=
...
[I]B2 =
B1
15
...
[I]B2
B1
1/3 2/3
−7/6 1/6
; [v]B2 = [I]B2 [v]B1
B1
1
−1
⎡
−1
5
...
Notice that the only difference in the bases B1 and B2 is the order in which the polynomials 1, x, and x2
are given
...
⎡
⎤
⎤
0 0 1
5
That is, [I]B2 = ⎣ 1 0 0 ⎦
...
B1
B1
0 1 0
3
⎡
⎤
1/4 5/8
3/8
18
...
B1
2
11/8
⎡
⎤
⎡
−1
1
19
...
Since
⎡
⎤
⎡
1 0
0 −1 a
⎢ 0 −1 −1 0 b ⎥
⎢
⎢
⎥
to ⎢
⎣ 1 1 −1 0 c ⎦ reduces− ⎣
−− − →
−−−
0 −1 0 −1 d
21
...
[I]B2
B1
⎡
0
=⎣ 1
0
1 −1
1
0
23
...
[I]B =
S
4
4
=
B
1
0
⎤ ⎡
⎤
−1
a
⎦ + c3 ⎣ 1 ⎦ = ⎣ b ⎦ gives
0
c
⎤
⎡
⎤
⎡
⎤
−a − b + c
a
−a − b + c
⎦ , we have that ⎣ b ⎦ = ⎣
⎦
...
[v]B2 = [I]B2 ⎣ 2 ⎦ = ⎣ 1 ⎦
B1
0 1
3
3
1 −1
1
0
0 1
=
−1 0
22
...
[I]B2 =
B1
1
0
0
0
⎤
1
2
b
...
[I]B1 =
B2
1 0
0 1
1
2
c
...
B1
B2
=
B
3
4
;
1
4
=
B
5
8
8
8
c
...
a
...
cos θ
sin θ
− sin θ
cos θ
x
y
=
x cos θ − y sin θ
x sin θ + y cos θ
;
4
2
=
B
6
4
;
⎤⎤
⎡
a
2a + b − c − 2d
⎢ −a − b + c + d
b ⎥⎥
⎥⎥ = ⎢
⎣
c ⎦⎦
a−c−d
d
a + b − c − 2d
B
⎤
⎥
⎥
...
5 Application: Differential Equations
b
...
1
0
B
=
B
0
0
=
0
1
,
,
1
1
0
1
=
B
=
B
−1
0
−1
1
,
y
y
1
1
21
1
21
1
25
...
Since u1 = −v1 + 2v2 , u2 = −v1 + 2v2 − v3 , and u3 = −v2 + v3 , the coordinates
relative to B2 are
⎡
⎤
⎡
⎤
⎡
⎤
⎡
−1
−1
0
−1 −1
2
[u1 ]B2 = ⎣ 2 ⎦ , [u2 ]B2 = ⎣ 2 ⎦ , [u3 ]B2 = ⎣ −1 ⎦ , so [I]B2 = ⎣ 2
B1
0
−1
1
0 −1
b
...
5
⎡
x
x
⎤ ⎡
⎤
2
1
= [I]B2 ⎣ −3 ⎦ = ⎣ −3 ⎦
B1
1
4
of u1 , u2 , and u3
⎤
0
−1 ⎦
...
a
...
Substituting these into the differential equation gives the auxiliary equation r2 − 5r + 6 = 0, so two distinct solutions are y1 = e2x and y2 = e3x. Since
$W[y_1, y_2](x) = \begin{vmatrix} e^{2x} & e^{3x} \\ 2e^{2x} & 3e^{3x} \end{vmatrix} = e^{5x} > 0$
for all x, the two solutions are linearly independent. The general solution is the linear combination y(x) = C1 e2x + C2 e3x, where C1 and C2 are arbitrary constants.
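The auxiliary-equation computation is easy to confirm with a computer algebra system. A minimal sketch (not part of the text), for the first exercise above (auxiliary equation r² − 5r + 6 = 0, i.e. y″ − 5y′ + 6y = 0):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - 5y' + 6y = 0: auxiliary equation r^2 - 5r + 6 = (r - 2)(r - 3) = 0.
sol = sp.dsolve(y(x).diff(x, 2) - 5*y(x).diff(x) + 6*y(x), y(x))
print(sol)  # the general solution C1*exp(2*x) + C2*exp(3*x)

# The Wronskian of e^{2x} and e^{3x} is e^{5x} != 0, so they are independent.
W = sp.wronskian([sp.exp(2*x), sp.exp(3*x)], x)
print(sp.simplify(W))  # exp(5*x)
```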
a
...
Then the auxiliary equation is r2 + 3r + 2 = (r + 1)(r + 2) = 0 and hence, two distinct solutions are y1 = e−x and y2 = e−2x. Since
$W[y_1, y_2](x) = \begin{vmatrix} e^{-x} & e^{-2x} \\ -e^{-x} & -2e^{-2x} \end{vmatrix} = -e^{-3x} \neq 0$
for all x, the two solutions are linearly independent. The general solution is the linear combination y(x) = C1 e−x + C2 e−2x, where C1 and C2 are arbitrary constants.
a
...
Substituting these into the differential equation gives the auxiliary equation r2 + 4r + 4 = 0. Since the auxiliary equation has only one root, r = −2, of multiplicity 2, two distinct solutions are y1 = e−2x and y2 = xe−2x. Since
$W[y_1, y_2](x) = \begin{vmatrix} e^{-2x} & xe^{-2x} \\ -2e^{-2x} & e^{-2x} - 2xe^{-2x} \end{vmatrix} = e^{-4x} > 0$
for all x, the two solutions are linearly independent. The general solution is the linear combination y(x) = C1 e−2x + C2 xe−2x, where C1 and C2 are arbitrary constants.
a
...
Then the auxiliary equation is r2 − 4r − 5 = (r + 1)(r − 5) = 0 and hence, two distinct solutions are y1 = e−x and y2 = e5x. Since
$W[y_1, y_2](x) = \begin{vmatrix} e^{-x} & e^{5x} \\ -e^{-x} & 5e^{5x} \end{vmatrix} = 6e^{4x} > 0$
for all x, the two solutions are linearly independent. The general solution is the linear combination y(x) = C1 e−x + C2 e5x, where C1 and C2 are arbitrary constants.
Since W [y1 , y2 ](x) =
5
...
Substituting these into the differential equation gives the
auxiliary equation r2 − 2r + 1 = 0
...
Since the auxiliary equation has
only one root of multiplicity 2, two distinct and linearly independent solutions are y1 = ex and y2 = xex
...
The initial value conditions now allow us to find the
specific values for C1 and C2 to give the solution to the initial value problem
...
Further, since y ′ (x) = ex + C2 (ex + xex ), and y ′ (0) = 3, we
have that 3 = 1 + C2 , so C2 = 2
...
6
...
Then the auxiliary equation is
r2 − 3r + 2 = (r − 1)(r − 2) = 0 and hence, two distinct and linearly independent solutions are y1 = ex and
y2 = e2x
...
Using the initial condition y(1) = 0, we have that
0 = y(1) = C1 e + C2 e2 and 1 = y ′ (1) = C1 e + C2 (2e2 )
...
Then the solution to the initial value problem is y(x) = −e−1 ex + e−2 e2x
...
a
...
′
′′
b
...
Equating coefficients of like terms, we have that a = 1, b = 3, and c = 4
...
If f (x) = yc (x) + yp (x), then
f ′ (x) = 3C1 e3x + C2 ex + 2x + 3 and f ′′ (x) = 9C1 e3x + C2 ex + 2
...
8
...
The auxiliary equation for y′′ + 4y′ + 3y = 0 is r2 + 4r + 3 = (r + 3)(r + 1) = 0, so the complementary solution is yc(x) = C1 e−x + C2 e−3x. For a particular solution, let yp(x) = A cos 2x + B sin 2x. Since y′p(x) = −2A sin 2x + 2B cos 2x and y′′p(x) = −4A cos 2x − 4B sin 2x, after substitution in the differential equation we have that (−A + 8B) cos 2x + (−B − 8A) sin 2x = 3 sin 2x and hence A = −24/65 and B = −3/65. The general solution is y(x) = yc(x) + yp(x) = C1 e−x + C2 e−3x − (24/65) cos 2x − (3/65) sin 2x.
Since the damping coefficient is c = 0 and there is no external force acting on the system, so that f(x) = 0, the differential equation describing the problem has the form my′′ + ky = 0, here with m = 1/16 and k = 4. Since the mass is pulled down 0.25 feet and then released, the initial conditions on the system are y(0) = 0.25 and y′(0) = 0. The roots of the auxiliary equation for (1/16)y′′ + 4y = 0 are the complex values r = ±8i, so the general solution is y(x) = C1 sin(8x) + C2 cos(8x). Applying the initial conditions we obtain C1 = 0 and C2 = 1/4. The equation of motion of the spring is y(x) = (1/4) cos(8x).
Since the mass is m = w/g = 8/32 = 1/4, the spring constant is k = 4, the damping coefficient is c = −2, and there is no external force, the differential equation that models the motion is (1/4)y′′ − 2y′ + 4y = 0. The initial conditions are y(0) = 1 and y′(0) = −2. The auxiliary equation has the repeated root r = 4, so the general solution is y(x) = C1 e4x + C2 xe4x, and the first initial condition gives C1 = 1. The derivative of the general solution is y′(x) = 4C1 e4x + C2[e4x + 4xe4x], so the second initial condition gives −2 = y′(0) = 4C1 + C2[1 + 0] and hence, C2 = −6.
Review Exercises Chapter 3
1
...
1
11 ⎦
0 k − 69
Hence, det(A) = k − 69
...
2
...
Since the matrix A with column vectors the three given vectors is upper triangle, then det(A) = acf
...
3
...
Since the sum of two 2 × 2 matrices and a scalar times a 2 × 2 matrix are 2 × 2 matrices, S is closed
under vector addition and scalar multiplication
...
b
...
c
...
d
...
1 0
1 −1
2 1
and the matrices
1 1
0 1
,
4
...
Let p(x) = a + bx + cx2 such that a + b + c = 0 and let q(x) = d + ex + f x2 such that d + e + f = 0
...
Therefore S is a subspace
...
If p(x) = a + bx + cx2 is in S, then a + b + c = 0, so p(x) = a + bx + (−a − b)x2 = a(1 − x2 ) + b(x − x2 )
...
5
...
Consider the equation
c1 v1 + c2 (v1 + v2 ) + c3 (v1 + v2 + v3 ) = (c1 + c2 + c3 )v1 + (c2 + c3 )v2 + c3 v3 = 0
...
The only solution to this system
is the trivial solution, so that the set T is linearly independent
...
b
...
Since S is linearly independent, we have that set W is linearly independent if and only if the linear system
⎧
⎡
⎤
⎡
⎤
⎪
3c2 + c3 = 0
0 3 1
−1 2 −1
⎨
−c + 2c2 − c3 = 0 has only the trivial solution
...
Therefore, W is not basis
...
a
...
Therefore, S is not a basis
...
v3 = 2v1 + v2
c
...
So the basis is
⎧⎡
1
⎪
⎪
⎨⎢
−3
⎢
⎪⎣ 1
⎪
⎩
1
dimension of the span of
⎤
⎡
0
1 2
⎢ 0 5
0 ⎥
⎥ reduces to ⎢
− − −→
0 ⎦ −− − − ⎣ 0 0
1
0 0
S is 2
...
Since
⎤
0 0
0 0 ⎥
⎥
5 0 ⎦
−1 1
5, the basis consists of the corresponding column vectors of the
⎤⎫
0 ⎪
⎪
⎥ ⎢ 0 ⎥⎬
⎥,⎢ ⎥
...
Since
= −4, the vectors are linearly independent
...
Let B = {v1 , v2 , v3 , v4 }
...
Since
⎡
⎤
⎡
⎤
1 1 1
1
1 2 1 0
1 0 0 0
0
1/2
1/2 −1/2
⎢ 2 0 0
⎢
2 −3 1 0 0 ⎥
2
2
0
1 ⎥
⎢
⎥ reduces to ⎢ 0 1 0 0
⎥,
⎣ −1 1 1
− − −→
1
1 1 0 1 ⎦ − − − − ⎣ 0 0 1 0 1/2 −1/2
1
−1 ⎦
1 0 −1 −1 1 1 0 0
0 0 0 1 −3/2
0
−1/2 1/2
then
so
⎡
⎤
⎡
⎤
⎡
0
1/2
1/2
⎢ 2 ⎥
⎢ 2 ⎥
⎢ 0
⎥
⎢
⎥
⎢
[v1 ]T = ⎢
⎣ 1/2 ⎦ , [v2 ]T = ⎣ −1/2 ⎦ , [v3 ]T = ⎣ 1
−3/2
0
−1/2
[I]T
B
⎡
−1
⎢
1 ⎢ 5
B
T −1
g
...
[v]B = ⎢
⎣ −6 ⎦
−4
0 −1
0 −3
1 11
4 8
⎤
⎤
−1/2
⎥
⎢
⎥
⎥ , [v4 ]T = ⎢ 1 ⎥ ,
⎦
⎣ −1 ⎦
1/2
⎡
⎡
⎤
0
1/2
1/2 −1/2
⎢ 2
2
0
1 ⎥
⎥
...
[v]T = [I]
⎢
B ⎣ −2 ⎦ = ⎣ −8
9 ⎦
8
5
2
⎤
⎥
⎥
⎦
7
...
Since v is a linear combination of the other vectors it does not contribute to the span of the set
...
, vn }
...
a
...
d e
a + kd
b+e
=
and the terms on the diagonal are equal, then S is a
f d
c + f a + kd
x y
p q
x + kp y + kq
subspace
...
c
...
Since
a
c
b
a
+k
a
c
1 0
0 1
a basis for S is
,
0
0
b
a
1
0
0
1
1
0
a basis for S ∩ T is
1
0
0
1
1
0
=x
1 0
0 0
,
0
1
0 0
1 0
,
x y
y z
a basis for T is
1
0
=a
,
,
0 1
1 0
0
0
0
1
+b
0 1
0 0
+c
0 0
1 0
and dim(S) = 3
...
0
0
0
1
d
...
9
...
The set B = {u, v} is a basis for R2 since it is linearly independent
...
Now take the dot product of both sides first with u and then v
...
1
2
Since u2 + u2 = 1 and u · v = 0, we have that a = 0
...
Hence, B is a set of two linearly independent vectors in R2 and therefore, is a basis
...
If [w]B =
, then
β
x
αu1 + βv1
x
αu + βv =
⇔
=
...
u1 v2 − v1 u2
̸= 0 since the vectors are linearly independent
...
a
...
Consider
c1 + c2 (x + c) + c3 (x2 + 2cx + c2 ) = (c1 + cc2 + c2 c3 ) + (c2 + 2cc3 )x + c3 x2 = 0
...
To find the coordinates of a polynomial f (x) = a0 + a1 x + a2 x2 , we solve the linear system
⎧
⎪c1 + cc2 + c2 c3 = a0
⎨
c2 + 2cc3 = a1 ⇔ c1 = a0 − ca1 + c2 a2 , c2 = a1 − 2ca2 , c3 = a2
...
a1 − 2ca2
So [a0 + a1 x + a2 x2 ]B = ⎣
a2
b
...
F
...
T
3
...
Only lines that pass
through the origin are subspaces
...
F
...
5
...
F
...
10
...
F
...
6
...
For example, the
−1
vector
is in S but
−1
−1
1
−
=
is not in S
...
T
12
...
T
14
...
T
16
...
If a set spans a vector
space, then adding more vectors
can change whether the set is linearly independent or dependent
but does not change the span
...
F
...
19
...
F
...
If the number of
vectors exceeds the dimension,
then the set is linearly dependent
...
T
22
...
T
24
...
Also x ⊕ y ̸=
y ⊕ x
...
T
28
...
F
...
T
1
1
=
B1
1
1/2
9
...
T
27
...
F
...
T
32
...
3
33
...
F
...
T
⎤
⎥
⎥
⎦
⎡
⎤
−1
⎢ 2 ⎥
⎥
=⎢
⎣ 0 ⎦
1
4 Linear Transformations
Exercise Set 4.1
To verify T : V −→ W is a linear transformation from V to W, we must show that T satisfies the two properties
T(u + v) = T(u) + T(v) and T(cu) = cT(u),
or equivalently just the one property
T(u + cv) = T(u) + cT(v).
...
For example, T : R2 −→ R2 defined by
$T\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x + 2y \\ x - y \end{bmatrix}$
is a linear transformation. Notice that the definition of T requires the input of only one vector, so to apply T first simplify the expression to one vector, then apply the definition of the mapping, resulting in a vector with two components. So
$T\left(\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + c\begin{bmatrix} x_2 \\ y_2 \end{bmatrix}\right) = T\begin{bmatrix} x_1 + cx_2 \\ y_1 + cy_2 \end{bmatrix} = \begin{bmatrix} (x_1 + cx_2) + 2(y_1 + cy_2) \\ (x_1 + cx_2) - (y_1 + cy_2) \end{bmatrix}$.
This gives
$\begin{bmatrix} (x_1 + 2y_1) + c(x_2 + 2y_2) \\ (x_1 - y_1) + c(x_2 - y_2) \end{bmatrix} = \begin{bmatrix} x_1 + 2y_1 \\ x_1 - y_1 \end{bmatrix} + c\begin{bmatrix} x_2 + 2y_2 \\ x_2 - y_2 \end{bmatrix} = T\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + cT\begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$,
and hence T is a linear transformation.
...
Other useful observations made in Section 4.1:
• If T is linear, then T(0) = 0 and T(−v) = −T(v).
• A mapping that shifts by a constant, such as T(x, y) = (x + 1, y), is not linear: T(u) + T(v) = (x1 + x2 + 2, y1 + y2), which differs from T(u + v) = (x1 + x2 + 1, y1 + y2).
• T(c1v1 + c2v2 + · · · + cnvn) = c1T(v1) + c2T(v2) + · · · + cnT(vn).
The third property can be used to find the image of a vector when the action of a linear transformation is known only on a specific set of vectors, for example on the vectors of a basis. Suppose the values of T on (1, 1, 0), (1, 0, 1), and (0, 1, −1) are known. Then the image of an arbitrary input vector can be found, since
$\left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} \right\}$
is a basis for R3. The first step is to write the input vector in terms of the basis vectors, so
$\begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix} = -\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + 2\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}$,
and then T(1, −2, 3) = −T(1, 1, 0) + 2T(1, 0, 1) − T(0, 1, −1).
3
Solutions to Exercises
1
...
Since
u1 + cv1
u2 + cv2
=
u2 + cv2
u1 + cv1
=
u2
u1
v2
v1
+c
= T (u) + cT (v),
then T is a linear transformation
...
Let u =
and v =
be vectors in R2 and c a scalar
...
For example, if u =
=
(u1 + cv1 ) + (u2 + cv2 )
(u1 + cv1 ) − (u2 + cv2 ) + 2
(u1 + cv1 ) + (u2 + cv2 )
(u1 + cv1 ) − (u2 + cv2 ) + 4
, and c = 1, then T (u + v) =
2
2
...
Hence,
3
...
Since
u1 + v1
u2 + v2
T (u + v) = T
=
u1 + v1
2
u2 + 2u2 v2 + v2
1
u1 + v1
2
u2 + v2
2
and
= T (u) + T (v),
which do not agree for all vectors, T is not a linear transformation
...
Since
T (u + cv) = T
u1
u2
and
T (u) + cT (v) =
+c
v1
v2
2u1 − u2
u1 + 3u2
u1 + cv1
u2 + cv2
=T
+c
then T is a linear transformation
...
Since T
+c
=
u2
v2
0
scalars c, T is a linear transformation
...
Since
u1 + cv1
T (u + cv) = T
u2 + cv2
2v1 − v2
v1 + 3v2
2(u1 + cv1 ) − (u2 + cv2 )
u1 + cv1 + 3(u2 + cv2 )
=
u1
u2
=T
2(u1 + cv1 ) − (u2 + cv2 )
u1 + cv1 + 3(u2 + cv2 )
=
v1
v2
+ cT
(u1 +cv1 )+(u2 +cv2 )
2
(u1 +cv1 )+(u2 +cv2 )
2
=
,
, for all pairs of vectors and
= T (u) + cT (v),
then T is a linear transformation
...
Since T (x + y) = T (x) + T (y), if and only if at least one of x or y is zero, T is not a linear transformation
...
Since T describes a straight line passing
through the origin, then T defines a linear transformation
...
x
y
cT
x
y
=
c
x
y
=
c2 (x2 + y 2 )
=
= c(x2 + y 2 ) if and only if c = 1 or
0
0
, T is not a linear transformation
...
Since T is the identity mapping on the first 11
...
tion
...
Since cos 0 = 1, then T (0) ̸= 0 and hence, T is not a linear transformation
...
Since
T (p(x) + q(x)) = 2(p′′ (x) + q ′′ (x)) − 3(p′ (x) + q ′ (x)) + (p(x) + q(x))
= (2p′′ (x) − 3p′ (x) + p(x)) + (2q ′′ (x) − 3q ′ (x) + q(x)) = T (p(x)) + T (q(x))
and similarly, T (cp(x)) = cT (p(x)) for all scalars c, T is a linear transformation
...
Since T (p(x)+q(x)) = p(x)+q(x)+x and T (p(x))+T (q(x)) = p(x)+q(x)+2x these will not always agree
and hence, T is not a linear transformation
...
15
...
16
...
2
−2
0
0
17
...
T (u) =
; T (v) =
b
...
The mapping T is a linear transformation
...
18
...
T (u) = x2 − 7x + 9; T (v) = −x + 1 b
...
The mapping T is a linear transformation
...
T (u + v) =
0
−1
c
...
−1
−1
19
...
T (u) =
0
−1
̸= T (u) + T (v) =
⎤⎞
1/2
−3/4
20
...
T (u) =
; T (v) =
b
...
The mapping T is not a⎛⎡
linear ⎤ ⎡
transformation
...
Alternatively, T (0) =
̸=
x+y
z+w
x+y+z+w
0
y
w
0
...
Since
⎛⎡
−3/4
0
1
0
=
−3
0
1
and T is a linear transformation, we have that
1
1
=T
−3
−3
0
⎡
⎤ ⎡
⎤
⎡ ⎤
⎡
1
1
0
0
22
...
⎦ and T is a linear operator, then
⎤⎞
⎛⎡
⎤⎞
⎛⎡
⎤⎞
⎛⎡ ⎤ ⎞ ⎡
⎤ ⎡
⎤ ⎡
⎤ ⎡
⎤
1
1
0
0
1
14
5
20
T ⎝⎣ 7 ⎦⎠ = T ⎝⎣ 0 ⎦⎠ + 7T ⎝⎣ 1 ⎦⎠ + 5T ⎝⎣ 0 ⎦⎠ = ⎣ −1 ⎦ + ⎣ 0 ⎦ + ⎣ −5 ⎦ = ⎣ −6 ⎦
...
Since T (−3 + x − x2 ) = T (−3(1) + 1(x) + (−1)x2 and T is a linear operator, then
T (−3 + x − x2 ) = −3(1 + x) + (2 + x2 ) − (x − 3x2 ) = −1 − 4x + 4x2
...
2 1
−1 3
T
=2
25
...
In particular, T
⎡
+T
3
7
=T
7
1
1
⎤
⎤
⎡ ⎤
⎡
1
2
3
26
...
T (e1 ) = ⎣ 2 ⎦ , T (e2 ) = ⎣ 1 ⎦ , T (e3) = ⎣ 3 ⎦
1
3
2
+4
⎡
−1
0
=
22
−11
...
T (3e1 − 4e2 + 6e3 ) = 3T (e1) − 4T (e2 ) + 6T (e3 ) = 3 ⎣ 2 ⎦ − 4 ⎣ 1 ⎦ + 6 ⎣ 3 ⎦ = ⎣ 20 ⎦
1
3
2
3
27
...
Since the polynomial 2x2 − 3x + 2 cannot be written as a linear combination of x2 , −3x, and −x2 + 3x,
from the given information the value of T (2x2 − 3x + 2) can not be determined
...
b
...
3
3
3
3
⎡
⎤
⎡ ⎤
⎡
⎤
⎛⎡
⎤⎞
⎡
⎤
⎡
⎤ ⎡
⎤
2
1
1
2
−1
2
3
28
...
Since ⎣ −5 ⎦ = 7 ⎣ 0 ⎦ − 5 ⎣ 1 ⎦ , then T ⎝⎣ −5 ⎦⎠ = 7 ⎣ 2 ⎦ + 5 ⎣ −2 ⎦ = ⎣ 4 ⎦
...
Since the 0 1 3 has a row of zeros its value is 0, so the vectors ⎣ 0 ⎦ , ⎣ 1 ⎦ , and ⎣ 3 ⎦ are
0 0 0
0
0
0
3
linearly dependent
...
−1
0
x
x
−x
−1
29
...
If A =
, then T
=A
=
...
T (e1 ) =
and T (e2 ) =
0 −1
y
y
−y
0
0
...
−1
⎡
⎤
⎡
⎤
⎡
⎤
1 −2
1
−2
30
...
Let A = ⎣ 3 1 ⎦
...
T (e1 ) = ⎣ 3 ⎦ , T (e2 ) = ⎣ 1 ⎦
0 2
0
2
⎡
⎤
x
x+y
0
31
...
Consequently, T ⎝⎣ 0 ⎦⎠ =
, for all z ∈ R
...
Since T ⎝⎣ y ⎦⎠ = ⎣ 0 ⎦ if and only if
and
−x + 5y + z = 0
z
0
1 2 1
reduces to
− − −→
−1 5 1 − − − −
⎡ 3 ⎤
−7z
⎣ − 2 z ⎦ , z ∈ R
...
a
...
0
z
b
...
1 2
2
−9
34
...
Since T(ax2 + bx + c) = (2ax + b) − c, then T(p(x)) = 0 if and only if 2a = 0 and b − c = 0
...
b
...
As a second choice let q(x) = 3x2 − 5x − 2, so q(0) = −2 and T (q(x)) = q ′ (x) − q(0) =
6x − 5 + 2 = 6x − 3
...
The mapping T is a linear operator
...
Since T (cv + w) =
transformation
...
36
...
37
...
38
...
If b = 0, then we also have that T (cx) = cmx = cT (x)
...
39
...
Using the properties of the Riemann Integral, we have that
1
T (cf + g) =
1
(cf (x) + g(x)) dx =
0
1
cf (x)dx +
0
so T is a linear operator
...
T (2x2 − x + 3) =
1
g(x)dx = c
0
1
f (x)dx +
0
g(x)dx = cT (f ) + T (g)
0
19
6
40
...
41
...
Hence, if either T (v) = 0 or T (w) = 0, then the
conclusion holds
...
So there exist scalars
a and b, not both 0, such that aT (v) + bT (w) = 0
...
Hence, since T is linear, then aT (v) + bT (w) = T (av + bw) = 0, and we have shown that T (u) = 0 has a
nontrivial solution
...
Since {v1 ,
...
, cn , not all zero, such that
c1 v1 + c2 v2 + · · · + cn vn = 0
...
Therefore, {T (v1 ),
...
43
...
44
...
Since {v1 ,
...
, cn such that v =
c1 v1 + · · · + cn vn
...
Since T1 (vi ) = T2 (vi ), for each i = 1, 2,
...
45
...
Then L(U, V ) with these operations satisfy all ten of the vector
space axioms
...
Exercise Set 4.2
Any transformation defined by a matrix product is a linear transformation. The null space of T(x) = Ax, denoted by N(T), is the null space of the matrix, N(A) = {x ∈ R3 | Ax = 0}. For example, for a 3 × 3 matrix A whose first two column vectors begin (1, 2, . . .) and (3, 0, . . .), since A reduces to
$\begin{bmatrix} 1 & 3 & 0 \\ 0 & -6 & 3 \\ 0 & 0 & 0 \end{bmatrix}$,
the homogeneous equation Ax = 0 has infinitely many solutions, given by x1 = −(3/2)x3, x2 = (1/2)x3, and x3 a free variable, so
$N(T) = \left\{ t\begin{bmatrix} -3/2 \\ 1/2 \\ 1 \end{bmatrix} \,\middle|\, t \in \mathbb{R} \right\}$.
Also, since the pivots in the reduced matrix are in columns one and two, a basis for the range consists of the first two column vectors of A and hence, the range is a plane in three space. This illustrates a fundamental theorem: if T : V −→ W is a linear transformation defined on finite dimensional vector spaces, then
dim(V) = dim(R(T)) + dim(N(T)).
A number of useful statements are added to the list of equivalences concerning n × n linear systems:
A is invertible ⇔ Ax = b has a unique solution for every b ⇔ Ax = 0 has only the trivial solution ⇔ A is row equivalent to I ⇔ det(A) ̸= 0 ⇔ the column vectors of A are linearly independent ⇔ the column vectors of A span Rn ⇔ the column vectors of A are a basis for Rn ⇔ rank(A) = n ⇔ R(A) = col(A) = Rn ⇔ N(A) = {0} ⇔ row(A) = Rn ⇔ the number of pivot columns in the row echelon form of A is n.
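The null space, range, and the rank–nullity identity can all be read off from one rref computation. A minimal sketch (not part of the text); the matrix `A` is an assumed 3 × 3 example whose reduced form matches the one shown above:

```python
import sympy as sp

# Assumed example: row reduces to [[1, 3, 0], [0, -6, 3], [0, 0, 0]] as above.
A = sp.Matrix([[1, 3, 0],
               [2, 0, 3],
               [3, 3, 3]])

null = A.nullspace()       # basis for N(T)
col = A.columnspace()      # basis for R(T), built from the pivot columns
print([v.T for v in null])  # [(-3/2, 1/2, 1)]
print([v.T for v in col])   # [(1, 2, 3), (3, 0, 3)]

# Rank-nullity: dim(domain) = dim(R(T)) + dim(N(T)).
assert A.cols == len(col) + len(null)
```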
Since T (v) =
0
0
3
...
, v is not in N (T )
...
Since p′ (x) = 2x − 3 and p′′ (x) = 2, then
T (p(x)) = 2x, so p(x) is not in N (T )
...
Since T (v) =
0
0
, v is in N (T )
...
Since T (v) =
0
0
, v is in N (T )
...
Since p′ (x) = 5 and p′′ (x) = 0, then T (p(x)) =
0, so p(x) is in N (T )
...
Since T (p(x)) = −2x, then p(x) is not in 8
...
N (T )
...
Since ⎣ 2 1 3 3 ⎦ reduces− ⎣ 0 1 −1 1 ⎦ there are infinitely many vectors that are mapped
−− − →
− − − to
1 −1 3 0 ⎛⎡
⎡
⎤
⎤⎞ 0 0 ⎤ 0 0
⎡
⎡
⎤
1
−1
1
1
to ⎣ 3 ⎦
...
0
1
0
0
4
...
Since ⎣ 2 1 3 3 ⎦ reduces− ⎣ 0 1 −1
−− − →
− − − to
1 −1 3 4
0 0 0 1
⎡
⎤
2
⎣ 3 ⎦ is not in R(T )
...
Since ⎣ 2 1 3 1 ⎦ reduces− ⎣ 0 1 −1 0 ⎦ , the linear system is inconsistent, so the vector
−− − →
− − − to
1 −1 3 −2
0 0 0 1
⎡
⎤
−1
⎣ 1 ⎦ is not in R(T )
...
Since ⎣ 2 1 3 −5 ⎦ reduces− ⎣ 0 1 −1 −1 ⎦ there are infinitely many vectors that are
−− − →
− − − to
1 −1 3 −1
0 0 0
0
⎡
⎤
⎡
⎤
−2
−2
mapped to ⎣ −5 ⎦ and hence, the vector ⎣ −5 ⎦ is in R(T )
...
The matrix A is in R(T )
...
The matrix A is not in R(T )
...
The matrix A is not in R(T )
...
The matrix A is in R(T )
...
That is, N (T ) =
y
Hence, the null space has dimension 0, so does not have a basis
...
A vector v =
A vector is in the null space if and only if
18
...
=0
, that is x = y
...
⎤ ⎡
⎤
x + 2z
0
19
...
Hence, a basis for the null space is ⎣ 1 ⎦
...
Since ⎣ 3 5 1 ⎦ reduces− ⎣ 0 1 1/2 ⎦ , then N (T ) = t ⎣ −1/2 ⎦ t ∈ R and a basis for
to
−− − →
−−−
⎩
⎭
0 ⎧⎡ 1
2
0 0
0
1
⎤⎫
1/2
⎨
⎬
the null space is ⎣ −1/2 ⎦
...
A basis for the null space is ⎢ 6 ⎥
...
Since N (T ) =
⎩
⎭
⎪⎣ 1 ⎦ ⎪
⎪
⎪
t
⎩
⎭
⎧⎡
⎤ ⎡
⎤⎫
0
1 ⎬
⎨ 2
basis for the null space is ⎣ 1 ⎦ , ⎣ 0 ⎦
...
Since T (p(x)) = 0 if and only if p(0) = 0 a 24
...
A basis for the null space a = 0
...
is x, x2
...
Since det ⎝⎣ 0 1 −1 ⎦⎠ = −5, the column vectors of the matrix are a basis for the column
2 0 1
space of the matrix
...
⎩
⎭
2
0
1
⎡
⎤
⎡
⎤
1 −2 −3 1 5
1 0 1 0 1
26
...
⎩
⎭
1
1
1
⎧⎡
⎤ ⎡ ⎤⎫
0 ⎬
⎨ 1
27
...
⎩
⎭
0
0
⎡
⎤
⎡
⎤
⎡
⎤
⎡
⎤
x − y + 3z
1
−1
3
28
...
⎩
⎭
−1
3
−5
29
...
30
...
31
...
The vector w is in the range of T if the linear system
⎡
⎤
⎡
⎤
⎡
⎤ ⎡
⎤
−2
0
−2
−6
c1 ⎣ 1 ⎦ + c2 ⎣ 1 ⎦ + c3 ⎣ 2 ⎦ = ⎣ 5 ⎦
1
−1
0
0
⎡
⎤
⎡
−2 0 −2 −6
−2 0
1
2
5 ⎦ −→ ⎣ 0 1
has a solution
...
Hence, ⎣ 5 ⎦ is not in R(t)
...
Since
−2 0 −2
1
1
2
1 −1 0
⎤
−2 −6
1
2 ⎦ , so that the linear system is inconsis0 −1
= 0, the column vectors are linearly dependent
...
Since the pivots are in columns one and
0
4
...
c
...
⎧⎡
⎡
⎤
⎤ ⎡ ⎤ ⎡
⎤⎫
−2
0
−1 ⎬
⎨ −1
32
...
The vector ⎣ 1 ⎦ is in R(T )
...
⎣ 2 ⎦ , ⎣ 5 ⎦ , ⎣ −1 ⎦
c
...
33
...
The polynomial 2x2 − 4x + 6 is not in R(T )
...
Since the null space of T is the set of all constant functions, then dim(N (T )) = 1 and hence, dim(R(T )) = 2
...
34
...
The polynomial x2 −x−2 is not in R(T )
...
Since the null space of T is the set of all polynomials of the
form ax2 , then dim(N (T )) = 1 and hence, dim(R(T )) = 2
...
35
...
For example, the
⎛
⎞
x
x
mapping to the xy-plane is T ⎝⎣ y ⎦⎠ =
...
Define T : R2 → R2 , by T
x
y
=
y
0
...
37
...
The range R(T ) is the subspace of Pn consisting of all polynomials of degree n − 1 or less
...
dim(R(T )) = n
c
...
38
...
Hence dim(N (T )) = k
...
a
...
dim(N (T )) = 1
40
...
a
c
41
...
Hence a basis for N (T ) is
0
0 1
, then T (B) = AB − BA =
a, d ∈ R
=
a
1
0
...
If B is an n × n matrix, then T (B t ) = (B t )t = B and hence, R(T ) = Mn×n
...
a
...
Also if B is any symmetric matrix, then T 1 B = 1 B + 1 B t = B
...
b
...
44
...
Notice that (A − At )t = At − A = −(A − At ), so that the range of T is a subset of the skew-symmetric
matrices
...
Therefore, R(T ) is the set
2
2
2
of all skew-symmetric matrices
...
Since a matrix A is in N (T ) if and only if T (A) = A − At = 0, which is if
and only if A = At , then the null space of T is the set of symmetric matrices
...
If the matrix A is invertible and B is any n×n matrix, then T (A−1 B) = A(A−1 B) = B, so R(T ) = Mn×n
...
a
...
Any zero rows of A correspond to diagonal entries that are 0, so the echelon form of
A will have pivot columns corresponding to each nonzero diagonal term
...
b
...
Exercise Set 4.3
If T : V −→ W is a one-to-one and onto linear transformation, then T is called an isomorphism. If {v1, . . . , vn} is a basis for V and T is an isomorphism, then {T(v1), . . . , T(vn)} is a basis for R(T). Important observations made in Section 4.3 are:
• If V is a vector space with dim(V) = n, then V is isomorphic to Rn.
For example, there is a correspondence between the very different vector spaces P3 and M2×2. Since every polynomial a + bx + cx2 + dx3 = a(1) + b(x) + c(x2) + d(x3), use the coordinate map
$L_1: a + bx + cx^2 + dx^3 \longmapsto [a + bx + cx^2 + dx^3]_S = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}$ followed by $L_2: \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} \longmapsto \begin{bmatrix} a & b \\ c & d \end{bmatrix}$,
so that the composition $L_2(L_1(a + bx + cx^2 + dx^3)) = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ defines an isomorphism between P3 and M2×2.
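The composition L2 ∘ L1 above is just a relabeling of the four coefficients, which a few lines of code make concrete. A minimal sketch (not part of the text); the helper names `to_matrix` and `to_poly` are hypothetical:

```python
import numpy as np

def to_matrix(coeffs):
    # L2(L1(a + bx + cx^2 + dx^3)) = [[a, b], [c, d]]
    a, b, c, d = coeffs
    return np.array([[a, b], [c, d]])

def to_poly(M):
    # Inverse map M2x2 -> P3, recovering the coefficients (a, b, c, d).
    return tuple(M.reshape(4))

p = (1, -2, 0, 5)            # represents 1 - 2x + 5x^3
assert to_poly(to_matrix(p)) == p   # one-to-one and onto: the maps invert each other

# Linearity: the map respects addition and scalar multiplication of coefficients.
q = (0, 3, 4, -1)
assert np.array_equal(to_matrix(tuple(2*np.array(p) + np.array(q))),
                      2*to_matrix(p) + to_matrix(q))
```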
Since N (T ) =
0
0
, then T is one-to-one
...
Since N (T ) = ⎣ 0 ⎦ , then T is one-to⎩
⎭
0
one
...
Since N (T ) =
not one-to-one
...
Since ⎣ −2 −1
−2 −4
⎧⎡
⎨ 0
then N (T ) = ⎣ 0
⎩
0
5
...
Then
a ∈ R , then T is
⎤
⎡
⎤
−2
2 −2 −2
−1 ⎦ reduces to ⎣ 0 −3 −3 ⎦ ,
−− − −
− − −→
−1
0 0
3
⎤⎫
⎬
⎦ , so T is one-to-one
...
That is, p(x) is in N (T ) if and only if p(x) = 0
...
6
...
Therefore, N (T )
consists of only the zero polynomial and hence, T is one-to-one
...
A vector
a
b
=a
has a solution
...
Notice the result also follows from det
= 4,
1
1
is in the range of T if the linear system
is consistent for every vector
a
b
so the inverse exists
...
Since
reduces →
− − − to
1 −1/2 b − − − −
0 0
only if a = −2b and hence, T is not onto
...
⎡
⎤
1 −1 2
9
...
11
...
Since T (e2 ) =
0
0
−2
0
and T (e2 ) =
10
...
are two linear independent vectors in R2 , they form a basis
...
14
...
2
2
4
dependent and
16
...
Since
3 −1
6 3
= 0, the set is linearly
9 2
hence, is not a basis
...
⎡
⎤
⎡
⎤
−1
−1
15
...
13
...
Is a basis
...
Since
= 6, the set is linearly independent and hence, is a basis
...
Since T (1) = x2 , T (x) = x2 + x and T (x2 ) = 20
...
x2 + x + 1, are three linearly independent polynomials the set is a basis
...
a
...
b
...
Let w =
...
That is,
A−1 T
x
y
=
1
0
−2/3 −1/3
x
−2x − 3y
=
x
y
...
b
...
Let w =
...
That is,
22
...
Since det(A) = det
A−1 T
x
y
=
1
5
−1 −3
1 −2
−2x + 3y
−x − y
=
1
5
5x
5y
=
x
y
...
a
...
0
1
⎡
⎤
⎛⎡ 0 ⎤⎞ ⎡
⎤⎡
⎤ ⎡
⎤
−1 −1 −1
x
−1 −1 −1
−2x + z
x
0
1 ⎦ c
...
b
...
24
...
Since det(A) = det ⎝⎣ −1
0 ⎛⎡
1
0
⎡
⎤
⎤⎞ ⎡
⎤⎡
⎤ ⎡
⎤
1
1 0
x
1
1 0
2x − y + z
x
0 1 ⎦ c
...
b
...
25
...
Since
−1 −1
0
2
= −10, then the ma1 −3
is an isomorphism
...
26
...
28
...
Since T (cA + B) = (cA + B)t = cAt + B t = cT (A) + T (B), T is linear
...
To show that T is onto let B be a matrix in Mn×n
...
Hence, T is an isomorphism
...
The transformation is linear and if p(x) = ax3 + bx2 + cx + d, then
T (p(x)) = ax3 + (3a + b)x2 + (6a + 2b + c)x + (6a + 2b + c + d)
...
Since N (T ) = {0} and
dim(N (T )) + dim(R(T )) = 4, then dim(R(T )) = 4 so that R(T ) = P3 and hence, T is onto
...
Since T (kB + C) = A(kB + C)A−1 = kABA−1 + ACA−1 = kT (B) + T (C), then T is linear
...
If C is a matrix in Mn×n and
B = A−1 CA, then T (B) = T (A−1 CA) = A(A−1 CA)A−1 = C, so T is onto
...
32
...
c d
d
34
...
Define an isomorphism T : R4 → P3 , by
⎛⎡
⎤⎞
a
⎜⎢ b ⎥ ⎟
3
2
⎥⎟
T ⎜⎢
⎝⎣ c ⎦⎠ = ax + bx + cx + d
...
⎧⎡
⎫
x
⎨
⎬
⎦ x, y ∈ R define an isomorphism T : V → R2 by
35
...
y
T ⎝⎣
y
x + 2y
36
...
4
...
Let v be a nonzero vector in R3
...
That is, L = {tv| t ∈ R}
...
Since T is linear, then T (tv) = tT (v)
...
Hence, the set
L′ = {tT (v)| t ∈ R} is also a line in R3 through the origin
...
That is, P = {su + tv| s, t ∈ R}
...
Exercise Set 4
...
In Section 4
...
The matrix representation is given relative to bases for the vector spaces V and W and is
defined using coordinates relative to these bases
...
, vn } is a basis for V and B ′ a basis for W,
two results are essential in solving the exercises:
• The matrix representation of T relative to B and B ′ is defined by
′
[T ]B = [ [T (v1 )]B ′ [T (v2 )]B ′
...
B
• Coordinates of T (v) can be found using the formula
′
[T (v))]B ′ = [T ]B [v]B
...
• Apply T to each basis vector in B
...
Since
⎤
1 0 0 1/2 −1 1/2
1 0 1 −1/2 1
1/2 ⎦ ,
1 1 0 −1/2 0 −1/2
⎤⎤
⎡
⎤ ⎡⎡
⎤⎤
⎡
⎤ ⎡⎡
⎤⎤
⎡
⎤
−1
1/2
0
−1
0
1/2
⎣⎣ 0 ⎦⎦ = ⎣ −1/2 ⎦ , ⎣⎣ −1 ⎦⎦ = ⎣ 1 ⎦ , ⎣⎣ 0 ⎦⎦ = ⎣ 1/2 ⎦
...
⎡
⎤
1/2 −1 1/2
′
[T ]B = ⎣ −1/2 1
1/2 ⎦
B
−1/2 0 −1/2
102
Chapter 4 Linear Transformations
• The coordinates of any vector T (v) can be found using the matrix product
′
[T (v))]B ′ = [T ]B [v]B
...
−1/2
−4
B
Since B is the standard basis the coordinates of a vector are just
⎡ ⎛⎡
⎤⎞⎤
⎡
⎤⎡
1
1/2 −1 1/2
⎣T ⎝⎣ −2 ⎦⎠⎦ = ⎣ −1/2 1
1/2 ⎦ ⎣
−4
−1/2 0 −1/2
B′
This vector is not T (v), but the coordinates relative
⎛⎡
⎤⎞
⎡
⎤
⎡
1
1
1⎣
9⎣
1 ⎦−
T ⎝⎣ −2 ⎦⎠ =
2
2
−4
1
the components, so
⎤ ⎡
⎤
1
1/2
−2 ⎦ = ⎣ −9/2 ⎦
...
Then
⎤
⎡
⎤ ⎡
⎤
1
2
−1
3⎣
0 ⎦+
1 ⎦ = ⎣ 2 ⎦
...
a
...
To find the matrix representation for A relative to B, the column
vectors are the coordinates of T (e1) and T (e2 ) relative to B
...
Hence, [T ]B = [ [T (e1 ]B [T (e2 ]B ] =
...
The direct computation is T
=
and using part (a), the result is
1
−1
2
5 −1
2
9
T
=
=
...
a
...
The direct computation is T
−1
3
=
1
3
−1
3
1
3
and using part (a), the result is
...
Let B = {e1 , e2 , e3 } be the standard basis
...
b
...
3
1 0 −1
3
−2
3
...
4 Matrix Transformation of a Linear Transformation
⎡
1 0 0
4
...
[T ]B = ⎣ 0 1 0
0 0 ⎡ −1
⎡
⎤
2
1
result is T ⎣ −5 ⎦ = ⎣ 0
1
0
⎤
103
⎡
⎤ ⎡
⎤
2
2
b
...
−1
1
−1
⎦
0
1
0
5
...
The column vectors of the matrix representation relative to B and B ′ are the coordinates relative to
′
1
2
B ′ of the images of the vectors in B by T
...
Since B ′
B
−1
0
′
′
B
B
1
2
is the standard basis, the coordinates are the components of the vectors T
and T
, so
−1
0
′
−3 −2
[T ]B =
...
The direct computation is T
−1
−2
T
6
...
[T ]B
B
′
⎡
−3 −2
3
6
=
⎤
−3 2 1
=⎣ 2 1 2 ⎦
2 0 2
−3
−3
=
−1
−2
⎡
and using part (a)
=
B
−3 −2
3
6
2
−3/2
=
−3
−3
...
⎣ −1 ⎦ = T ⎣ −1 ⎦ = [T ]B ⎣ −1 ⎦ = [T ]B ⎣ −3 ⎦
B
B
2
1
1
5/2
B
7
...
The matrix representation is given by
′
[T ]B =
B
T
−1
−2
1
1
T
B′
=
T
B′
−2
−3
T
B′
2
2
...
−1
−2
=
...
and then use these coordinates to fine T (v
...
The direct computation is T
T
−1
−3
= [T ]B
B
B′
⎡
′
−1
−3
= [T ]B
B
B
′
2
1
=
2
−3
8
3
, so T
−1
−3
2
3
⎤
=−
3
−2
+
8
3
0
−2
=
−2
−4
⎤
⎡
⎡ ⎤
−1 1 1
−2
1
′
8
...
[T ]B = ⎣ −3 1 −1 ⎦ b
...
Using the matrix in
B
−2
4
⎡ −3 1 ⎤⎤
⎡
⎡ ⎡
⎤⎤
⎡
⎤ ⎡3
⎤
−2
−2
1
1
′
′
part (a) gives ⎣T ⎣ 1 ⎦⎦ = [T ]B ⎣T ⎣ 1 ⎦⎦ = [T ]B ⎣ 1 ⎦ = ⎣ −3 ⎦ , so that
B
B
3
3
1
−4
B′
B
⎡
⎤ ⎡ ⎤
⎡
⎤
⎡
⎤ ⎡
⎤
−2
0
1
−1
1
T ⎣ 1 ⎦ = ⎣ 0 ⎦ − 3 ⎣ 0 ⎦ − 4 ⎣ −1 ⎦ = ⎣ 4 ⎦
...
104
Chapter 4 Linear Transformations
⎡
⎤
1 1
1
9
...
Since B ′ is the standard basis for P2 , then [T ]B = ⎣ 0 −1 −2 ⎦
...
The direct computation is
B
0 0
1
T (x2 − 3x + 3) = x2 − 3x + 3
...
B
B
1
1
′
⎡
⎤
1 −1 −2
d
10
...
[T ]B = ⎣ −1 0
1 ⎦ b
...
B
−3 1
3
⎡
⎤ ⎡
⎤
1
0
′
′
Using the matrix in part (a) gives [T (1 − x)]B ′ = [T ]B [1 − x]B = [T ]B ⎣ −1 ⎦ = ⎣ 0 ⎦ , so
B
B
1
−1
T (1 − x) = 0(−1 + x) + 0(−1 + x + x2 ) − x = −x
...
First notice that if A =
⎡
0 0
a
...
The direct computation is T
2
2
3
T
1
−2
so
T
12
...
0 −2b
2c
0
, then T (A) =
2
3
1
−2
a
c
=0
b
d
=
1
0
0
−1
B
2
3
= [T ]B
B
1
−1
3
2
B
1
−2
0
6
=
−2
0
...
The direct computation gives T
1 3
−1 2
...
[T ]B = ⎢
⎣ 0
0
3 1
...
⎤
0
0 ⎥
⎥
0 ⎦
3
the matrix in part (a) gives
1 3
−1 2
=
3
5
1
6
′
1
2
1 22
5 −2
b
...
[T ]B = 1
B
9
9
1 −1
11 −1
1
5
′
′
5 2
−2 5
22 1
d
...
[T ]B = 1
f
...
a
...
[T ]B ′ = 1 ⎣ 0 2 ⎦ c
...
[T ]B′ =
B
B
C
C
2
1
2
1 1
2
1
⎡
⎤
1
2
′′
3 ⎦
e
...
13
...
[T ]B =
⎡
⎤
−3 1
1⎣
2 0 ⎦
2
1 1
4
...
a
...
[T ]B = ⎣ 0 1 ⎦ c
...
B
C
C
0 1/2
1/2 0 ⎤
1/2 0
⎡
0 0 0
′
′
1 0
e
...
The function S ◦ T is the
B
B
B
B
0 1
0 0 1
(S ◦ T )(ax + b) = ax + b so S reverses the action of T
...
[T ]B ′ = ⎢ 0 0 −2 2 ⎥
16
...
[T ]B = 1 ⎢
B
B
4 ⎣ −3 −1
⎦
⎣ −1 0 −1 −1 ⎦
1
2
−3 1 −1 0
0 0 2⎤ 2
⎡
⎤
⎡
⎡
4 0 0 0
2
0
0 2
1 0
⎢ −2 0 −6 2 ⎥
⎢ 0
⎢ 0 1
′
′
2
2 0 ⎥
⎥ d
...
[T ]B ′ = 1 ⎢
B
B
B
4 ⎣ −1 0
4 ⎣ −1
⎣ 0 1
3 7 ⎦
1 −1 1 ⎦
−3 0 5 1
−1 −1 1 1
1 0
′
⎡
1
0
...
[S]B ′ =
B
0 1
0 0
identity map, that is,
⎤
−1 −1
1 −1 ⎥
⎥
−1 1 ⎦
1
1
17
...
The transformation rotates a vector by θ radians in the counterclockwise direction
...
[T ]B = cI
20
...
[T ]B = [1 0 0 1]
B
23
...
[2T + S]B = 2[T ]B + [S]B =
−4
23
5 2
−1 7
2
1
1
4
24
...
[T ◦ S]B = [T ]B [S]B =
b
...
a
...
⎣ −26 ⎦
−9
⎡
⎤
4 −4 −4
29
...
[S ◦ T ]B = ⎣ 1 −1 −1 ⎦ b
...
25
...
[S ◦ T ]B = [S]B [T ]B =
⎤
0
0 ⎥
⎥
0 ⎦
0
22
...
[−3S]B = −3[S]B = −3
b
...
=
⎡
⎤
2 −2 −2
26
...
[2T ]B = 2[T ]B = ⎣ 0
4
4 ⎦
−2 2
2
⎡
⎤
−10
b
...
a
...
⎣ 4 ⎦
8
30
...
31
...
Since B is the standard basis, then [T (1)]B = [1]B = ⎣ 0 ⎦ , [T (x)]B = [2x]B = ⎣ 2 ⎦ , and [T (x2 )]B =
0
0
⎡
⎤
⎡
⎤
0
1 0 0
[3x2 ]B = ⎣ 0 ⎦ , so [T ]B = ⎣ 0 2 0 ⎦
...
[S]B = ⎢
B
⎣ 0 1 0 ⎦ , [D]B = 0 0 2 0 , [D]B [S]B = 0 2 0 = [T ]B
0 0 0 3
0 0 3
0 0 1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
⎤
0
24
0
0
0
⎡
34
...
If A =
b
d
−y
−x
−1
−1
1
1
, that is reflects across
, so
B
−1
0
=
B
−1
0
−1
1
...
Since T (v) = v is the identity map, then
′
[T ]B = [ [T (v1 )]B ′ [T (v2 )]B ′ [T (v3 )]B ′ ] = [ [v1 ]B ′ [v2 ]B ′
B
⎡
⎡
0 1
[v3 ]B ′ ] = ⎣ 1 0
0 0
⎤
⎡
⎤
a
b
′
If [v]B = ⎣ b ⎦ , then [v]B ′ = ⎣ a ⎦
...
37
...
[T ]B = [ [T (v1 )]B [T (v2 )]B
...
[vn−1 +vn ]B ] = ⎢
...
⎢ 0 0
⎢
⎣ 0 0
0 0
⎤
0
b ⎥
⎥
...
1
identity matrix by
0 0
1 0
1 1
...
...
...
...
...
...
0
...
0
...
...
1
1
0
Exercise Set 4
...
...
⎤
⎥
⎥
⎥
⎥
⎥
⎥
⎥
0 ⎥
⎥
1 ⎦
1
4
...
However, the action of the operator does not change, so does not depend on the matrix
′
representation
...
, vn } and B2⎤ 1 ,
...
⎥ and [T (v)]B2 = ⎢
...
⎦
⎣
...
...
The matrix representations are related by the formula
[T ]B2 = [I]B2 [T ]B1 [I]B1
...
Two n × n matrices A and B are called
B1
B2
similar provided there exists an invertible matrix P such that B = P −1 AP
...
The coordinates of the image of the vector v =
[T (v)]B1 = [T ]B1 [v]B1 =
1 2
−1 3
4
−1
2
−7
=
relative to the two bases are
and [T (v)]B2 = [T ]B2 [v]B2 =
2 1
−1 2
−1
−5
=
−7
−9
Then using the coordinates relative to the respective bases the vector T (v) is
1
1
−7
−1
0
+ (−9)
2
−7
=
1
0
=2
−7
0
1
,
so the action of the operator is the same regardless of the particular basis used
...
The coordinates of the image of the vector v =
relative to the two bases are
2
[T (v)]B1 = [T ]B1 [v]B1 =
0
1
2 −1
5
2
2
8
=
and [T (v)]B2 = [T ]B2 [v]B2 =
−2 0
4 1
−3
8
=
6
−4
Then using the coordinates relative to the respective bases the vector T (v) is
1
2
6
1
1
−4
=
2
8
=2
1
0
0
1
+8
,
so the action of the operator is the same regardless of the particular basis used
...
a
...
The coordinates of the image of the vector v =
[T ]B1 [v]B1 =
1 1
1 1
3
−2
=
1
1
, and T (e2 ) =
−1
1
, and T
3
−2
1
1 1
, then [T ]B1 =
...
1
0
0 0
relative to the two bases are
2
0
and [T ]B2 [v]B2 =
0
0
1/2
−5/2
Then using the coordinates relative to the respective bases the vector T (v) is
1
1
1
+ (0)
−1
1
=
1
1
=
1
0
+
0
1
,
=
1
0
...
...
−1 0
4
...
Since B1 is the standard basis, then [T ]B1 =
...
b
...
Then using the coordinates relative to the respective bases the vector T (v) is
−2
1
0
0
1
−2
=
−2
−2
2
−1
= −2
−2
−1
2
,
so the action of the
⎡
1
5
...
[T ]B1 = ⎣ 0
0
⎡
operator is the same regardless of the particular basis used
...
Since
b
...
⎡
⎤
⎡
⎤
1
1
0
−2
1
2
1 ⎦ , [T ]B2 = ⎣ 3 −2 −4 ⎦ b
...
a
...
Relative to the basis B2 , we have
T (v) = [T ]B1 [v]B1 = ⎣ 1 −1
0 ⎡ 1 −1
−1 ⎡
0 ⎡
⎤
⎤
⎤
⎡
⎤
⎡ ⎤
⎡
⎤
−2
1
2
−1
2
−1
0
1
[T (v]B2 = [T ]B2 [v]B2 = ⎣ 3 −2 −4 ⎦ ⎣ −2 ⎦ = ⎣ −3 ⎦
...
0
7
...
By Theorem 15,
B2
−1
1
−1
1
B
B
1
[T ]B2 = P −1 [T ]B1 P =
1
1
2
1
1
1
3
1 1
3 2
3 −1
−1
1
=
9/2 −1/2
23/2 −3/2
...
Since B1 is the standard basis, then the transition matrix relative to B2 and B1 is
−1
1
−1 1
P = [I]B1 =
=
...
4
...
The transition matrix is P = [I]B1 =
B2
10
...
Since P = [I]B1 =
B2
2
3
1
2
1
2
3
5
1
2
−1 −1
2
1
2
2
−1/2
−1
[T ]B2 = P −1 [T ]B1 P =
2 0
0 0
, and [T ]B1 =
1
1
−2 −1
, and [T ]B1 =
1 −1/2
2
−2
[T ]B2
0 2
=⎣ 0 0
0 0
⎤
⎡
0
1 0
1 ⎦
...
Since T (1) = 0, T (x) = x, T (x2 ) = 2x2 + 2, and
[T ]B2
⎡
0 0
=⎣ 0 1
0 0
⎤
⎡
0
1 0
0 ⎦
...
...
2
3
1
2
=
−1 −2
6
6
...
Since T (1) = 0, T (x) = 1, and T (x2 ) = 2x, then [T ]B1
⎡
=
, by Theorem 15,
1 −1
1
2
2 −1
−5
3
1/3
1
1/3 −1
−1 0
0 1
2 −1
−3
2
and [T ]B1 =
[T ]B2 = P −1 [T ]B1 P =
14
...
Since P = [I]B1 =
B2
1
0
0 −1
1/2
1/2
−1/2 −3/2
[T ]B2 = P −1 [T ]B1 P =
12
...
By Theorem 15
3/2
3/2
1/2 −1/2
[T ]B2 = P −1 [T ]B1 P =
[T ]B2 = P −1 [T ]B1 P = −
1/3
1
1/3 −1
−1 −1
2
1
1
2
1
1
...
⎤
0
2 ⎦ and
0
⎤
−2
0 ⎦ , then by Theorem 15, [T ]B2 = P −1 [T ]B1 P
...
1
17
...
Also since B and
C are similar, there is an invertible matrix Q such that C = Q−1 BQ
...
110
Chapter 4 Linear Transformations
18
...
Then
det(B) = det(P −1 AP ) = det(P −1 ) det(B) det(P ) = det(B)
...
For any square matrices A and B, the trace function satisfies the property tr(AB) = tr(BA)
...
Hence
tr(B) = tr(P −1 AP ) = tr(AP P −1 ) = tr(A)
...
If A is similar to B, then there is an invertible matrix P such that B = P −1 AP
...
21
...
Hence
B n = (P −1 AP )n = P −1 An P
...
22
...
Then
det(B − λI) = det(P −1 AP − λI) = det(P −1 (AP − λP )) = det(P −1 (A − λI)P ) = det(A − λI)
...
6
1
...
Since the triangle is reflected across the x-axis, the matrix representation relative to the standard basis
1
0
for T is
...
Since the triangle is reflected across the y-axis, the matrix representation relative
0 −1
−1 0
to the standard basis is
...
Since the triangle is vertically stretched by a factor of 3, the matrix
0 1
1 0
representation relative to the standard basis is
...
a
...
b
...
c
...
3 1
3
...
The matrix representation relative to the standard basis S is the product of the matrix representations
for the three separate operators
...
4
...
The matrix that will reverse the action of the operator T is the inverse of [T ]S
...
S
0 −2
b
...
a
...
That is,
[T ]S =
1
0
2
1
−1 0
0 1
=
−1 2
0 1
...
The matrix that will reverse the action of the operator T is the inverse of [T ]S
...
S
0 −1
b
...
a
...
[T ]S =
c
...
a
...
c
...
The transformation is a reflection through the y-axis
...
a
...
⎣ −1/2
3/2 −1 ⎦
0
0
1
b
...
a
...
⎣ 0 1 −2 ⎦
0 0 1
b
...
a
...
1
1
0
0
=
B
2
2
,
1
−1
y
x
, then [T ]S =
B
0
0
=
=
0
0
1
1
,
2
0
1
1
=
B
T
1
−1
2
0
0
2
,
B
−1
1
T
S
2
2
=
1
1
=
b
...
0 1
0
0
d
...
a
...
...
The figure shows the parallelogram determined by
the original points
...
Since T
=
and
y
−y
0
0
1
1
T
=
, then
1
−1
1
0
[T ]B =
[T ]B
c
...
2
−1
y
x
1
21
...
The transformation relative to the standard basis is given by a horizontal shear by a factor of one followed
by a reflection across the x-axis
...
Review Exercises Chapter 4
1
...
The vectors are not scalar multiples, so S is a basis
b
...
The resulting linear system
+ 3 y and c2 = 1 x − 1 y
...
N (T ) = {0} d
...
⎡
c1 + 3c2
c1 − c2
3
−1
=x
has the unique solution c1 =
=y
⎡
⎤
x
⎢ x+y ⎥
⎥
=⎢
⎣ x− y ⎦
...
Since the range consists of all vectors of the form ⎢
⎣ x − y ⎦ = x ⎣ 1 ⎦ + y ⎣ −1 ⎦ and the vectors ⎣ 1 ⎦
2y
2
2
⎧2
⎡
⎤
⎡
⎤ ⎡
⎤⎫
0
0
⎪
⎪
⎪ 1
⎪
⎨⎢
⎢ 1 ⎥
⎥ ⎢
⎥⎬
⎢
⎥ are linearly independent, then a basis for the range is ⎢ 1 ⎥ , ⎢ 1 ⎥
...
Since dim(R(T )) = 2 and dim(R ) = 4, then T is not onto
...
d
114
Chapter 4 Linear Transformations
⎧⎡
⎤ ⎡
⎤ ⎡
⎤ ⎡
⎤⎫
⎡
⎤
−1
1
0 ⎪
−1 2
⎪ 1
⎪
⎪
⎨⎢
⎬
⎢ 5 −4 ⎥
0 ⎥ ⎢ 1 ⎥ ⎢ 0 ⎥ ⎢ 1 ⎥
C
⎥ ⎢
⎥ ⎢
⎥ ⎢
⎥
⎢
⎥
g
...
[T ]B = ⎣ 7 −5 ⎦
⎪
⎪
⎪
⎩
⎭
1
1
0
0
−2 4
i
...
That is
⎡
⎤
⎡
⎤
⎡
⎤
−1 2
−1 2
x−y
⎢ 5 −4 ⎥
⎢ 5 −4 ⎥ 1 x + 1 y
⎢ −x + 3y ⎥
x
x
3
⎥
⎥ 3
⎢
⎥
A
=⎢
=⎢
⎣ 7 −5 ⎦
⎣ 7 −5 ⎦ 2 x − 1 y = ⎣ −x + 4y ⎦
...
2
...
The composition H ◦ T (p(x)) = H(T (p(x)) = H(xp(x) + p(x)) = p(x) + xp′ (x) + p′ (x) + p(0) and hence,
S ◦ (H ◦ T )(p(x)) = S(p(x) + xp′ (x) + p′ (x) + p(0))
...
b
...
Then
⎡
⎤
⎡
⎤
⎡
⎤
0 1 0 0
1 0 0 0
1 1 0 0 0
⎢ 0 0 0 0 ⎥
⎢ 1 1 0 0 ⎥
⎢ 0 0 2 0 0 ⎥
⎢
⎥
⎢
⎥
⎢
⎥
B′
⎢ 0 0 0 0 ⎥ , [T ]B ′ = ⎢ 0 1 1 0 ⎥ , [H]B ′ = ⎢ 0 0 0 3 0 ⎥
...
Since T (p(x)) = T (a + bx + cx2 + dx3 ) = 0 if and only if a + (a + b)x + (b + c)x2 + (c + d)x3 + dx4 = 0,
then a polynomial is in the null space of T if and only if it is the zero polynomial
...
d
...
x
x
3
...
A reflection through the x-axis is given by the operator S
=
and a reflection through the
y
−y
x
−x
y-axis by T
=
...
Similarly, T is also a linear operator
...
c
...
1 3
4
...
Let B =
...
Let A =
...
−a + c −b + d
0 0
1 3
Since the matrix
is invertible, the mapping T is also onto
...
[S]B =
115
Review Chapter 4
1 0
1 0
b
...
Since the matrix
the mapping T is neither one-to-one nor onto
...
The linear transformation S : R(T ) → R2 defined by S
a b
one-to-one and onto and hence R(T ) and R2 are isomorphic
...
H
=
−1
0
−1
2
x
x
x
x
e
...
T
=
⇔ x = y = 0, S
=
⇔y=0
y
y
y
y
7
...
The normal vector for the plane is the cross product of the linearly independent vectors
⎡
⎡
⎤
⎡
⎤
1
0
0
⎣ 0 ⎦ and ⎣ 1 ⎦ , that is, n = ⎣ −1 ⎦
...
0
0
1
0 1 0
⎧⎡ ⎤ ⎫
⎡
⎤ ⎡ ⎡
⎤⎤
⎡
⎤⎡
⎤ ⎡
⎤
−1
−1
1 0 0
−1
−1
⎨ 0 ⎬
b
...
N (T ) = ⎣ 0 ⎦
d
...
[T ]B = ⎣ 0 0 1 ⎦ =
⎪ ⎣ 0 0 1 ⎦ , if n is odd
⎪
0 1 0
⎩
0 1 0
1
1
1
8
...
Since T (p(x) + cq(x)) = 0 (p(x) + cq(x))dx = 0 p(x)dx + c 0 q(x)dx = T (p(x)) + cT (q(x)), then T is
a linear transformation
...
c
...
a
...
Simply switch the column vectors of the matrix found in part (a)
...
B
0 1
0 −1
1
0
−2
−1
2
6
...
[T ]B =
, [S]B =
b
...
Since 0 (ax2 +bx+c)dx = a + 2 +c, then N (T ) = ax2 + bx + c | a + 2 + c = 0
...
Since c = − a − 2 ,
3
3
3
b
then N (T ) consists of all polynomials of the form ax2 + bx + − a − 2 so a basis for the null space is
3
′
1
1
1
2
x − 3 , x − 2
...
If r ∈ R, then 0 rdx = r and hence T ⎡ onto
...
[T ]B = 1 1 1
is
g
...
h
...
6
x
Since S(xex ) = 0 tet dt, the integration requires integration by parts
...
Then
x
x
d
(S ◦ T )(f ) = S(f ′ (x)) = 0 f ′ (t)dt = f (x) and (T ◦ S)(f ) = T (S(f )) = dx 0 f (t)dt = f (x)
...
Since T 2 − T + I = 0, T − T 2 = I
...
10
...
The point-slope equation of the line that passes through the points given by u =
is y =
v2 −u2
v1 −u1 (x − u1 ) + u2
...
That is,
tu1 + (1 − t)v1
tu2 + (1 − t)v2
u1
u2
and v =
v1
v2
and show the components satisfy
v2 − u2
v2 − u2
(tu1 + (1 − t)v1 − u1 ) + u2 =
(t(u1 − v1 ) + ((v1 − u1 )) + u2
v1 − u1
v1 − u1
= (v2 − u2 )(1 − t) + u2 = (1 − t)v2 − u2 (1 − t) + u2
= tu2 + (1 − t)v2
...
Since T (tu + (1 − t)v) = tT (u) + (1 − t)T (v), then the image of a line segment is another line segment
...
Let w1 and w2 be two vectors in T (S)
...
Since S is a convex set for 0 ≤ t ≤ 1, we have tv1 + (1 − t)v2 is in
S and hence, T (tv1 +(1−t)v2) is in S
...
d
...
To find the image of S, let
be a vector in S and let T
=
, so that u = 2x,
y
y
v
u 2
u2
u
and v = y
...
Therefore, T (S) =
+ v 2 = 1 , which is an ellipse in
v
2
4
R2
...
F
...
F
...
4
...
T
T (x + y) = 2x + 2y − 1
but
T (x) + T (y) = 2x + 2y − 2
...
T
6
...
Since
1
T (2u − v) +
3
1 1
1
=
+
3 1
3
T (u) =
7
...
Since
1
T (u + v)
3
0
1/3
=
1
2/3
8
...
If T is one-to-one, then the
set is linearly independent
...
T
10
...
For example, T (1) = 0 =
T (2)
...
T
12
...
Since, for every k,
k
0
T
=
...
T
14
...
T
16
...
T
18
...
117
Chapter Test Chapter 4
19
...
T
22
...
Since
dim(N (T )) + dim(R(T )) =
dim(R4 ) = 4, then
dim(R(t)) = 2
...
T
21
...
The transformation is a
constant mapping
...
F
...
=⎣ 0 1
1 −1 −1
25
...
F
...
27
...
It projects each vector
onto the xz-plane
...
T
29
...
Define T : Rn → R such
that T (ei ) = i for i = 1,
...
, T (en )} =
{1, 2,
...
This set is not a basis for R, but T is onto
...
T
31
...
False, let T : R2 → R2
by T (v) = v
...
1 0
32
...
Since N (T ) consists of
only the zero vector, the null
space has dimension 0
...
T
34
...
Any idempotent matrix is
in the null space
...
F
...
36
...
37
...
T
...
T
⎡
1
39
...
Let A = ⎣ 0
0
⎡
x
x
T (v) = A
=⎣ y
y
0
to-one
...
1
An eigenvalue of the n × n matrix A is a number λ such that there is a nonzero vector v with Av = λv
...
Notice that
if v is an eigenvalue corresponding to the eigenvalue λ, then
A(cv) = cAv = c(λv) = λ(cv),
so A will have infinitely many eigenvectors corresponding to the eigenvalue λ
...
An eigenspace is the set of all
eigenvectors corresponding to an eigenvalue λ along with the zero vector, and is denoted by Vλ = {v ∈ Rn |
Av = λv}
...
The eigenspace can also be viewed as the null space
of A − λI
...
The last equation is the characteristic equation for A
...
To then find the corresponding eigenvectors,
for each eigenvalue λ, the equation Av = λv is solved for v
...
Expanding across row three, we have that
−1 − λ
1
−2
1
−1 − λ
2
1
0
1−λ
=
1
−1 − λ
−2
2
+ (1 − λ)
−1 − λ
1
1 −1 − λ
= −λ2 − λ3
...
• To find the eigenvectors corresponding to λ1 = 0 solve Av = 0
...
Similarly, the eigenvectors of λ2 = −1 have the
t
⎡
⎤
−2t
from ⎣ 2t ⎦ , t ̸= 0
...
⎩
⎭
⎩
⎭
1
1
• Notice that there are only two linearly independent eigenvectors of A, the algebraic multiplicity of λ1 = 0
is 2, the algebraic multiplicity of λ2 = −1 is 1, and the geometric multiplicities are both 1
...
1 Eigenvalues and Eigenvectors
119
The matrix can have three distinct real eigenvalues and three linearly independent eigenvectors
...
But the matrix can still have three linearly independent eigenvectors,
two from the eigenvalue of multiplicity 2 and one form the other
...
Since the matrix equation
corresponding to the eigenvector
0
3
0
1
0
1
0
1
= λ
is satisfied if and only if λ = 3, the eigenvalue
is λ = 3
...
Since the matrix equation
=λ
−1
1
is satisfied if and only if λ = −2, the eigenvalue
is λ = −2
...
The corresponding eigenvalue is λ = 0
...
The corresponding eigenvalue is λ = −2
...
The corresponding eigenvalue is λ = 1
...
The corresponding eigenvalue is λ = −1
...
a
...
b
...
c
...
Hence the eigenvectors are v1 =
and v2 =
,
3 −3
3 −3
1
3
respectively
...
−2
2
3 −3
Av1 =
1
1
=
0
0
1
1
=0
, and Av2 =
8
...
λ2 + 4λ + 3 = 0 b
...
v1 =
= (−3)
1
1
9
...
(λ − 1)2 = 0 b
...
v1 =
1
0
−
1
−1
,
−2
−1
−1
−2
1
1
1
−1
−2
1
1
0
10
...
λ2 + 3λ + 2 = 0 b
...
v1 =
−2
1
(−1)
−2
1
,
0
2
−1 −3
−1
1
= (−2)
d
...
= (1)
d
...
a
...
Expanding down column one, we have that
0=
−1 − λ
0
1
0
1−λ
0
0
2
−1 − λ
= (−1 − λ)
1−λ
0
2 −1 − λ
= (1 + λ)2 (1 − λ)
...
λ1 = −1, λ2 = 1 c
...
⎡
⎤⎡
⎤ ⎡
⎤
⎡
⎤
⎡
⎤⎡
⎤ ⎡
⎤
⎡
⎤
−1 0
1
1
−1
1
−1 0
1
1
1
1
⎣ 0 1
0 ⎦ ⎣ 0 ⎦ = ⎣ 0 ⎦ = (−1) ⎣ 0 ⎦ and ⎣ 0 1
0 ⎦ ⎣ 2 ⎦ = ⎣ 2 ⎦ = (1) ⎣ 2 ⎦
0 2 −1
0
0
0
0 2 −1
2
2
2
⎡
⎤
⎡
⎤
⎡
1
−2
2
12
...
λ(λ + 1)(λ − 1) = 0 b
...
v1 = ⎣ 0 ⎦, v2 = ⎣ 1 ⎦ , v3 = ⎣ 1
0
0
2
⎡
⎤
⎡
⎤
1
−3
13
...
(λ − 2)(λ − 1)2 = 0 b
...
v1 = ⎣ 0 ⎦, v2 = ⎣ 1 ⎦
0
1
⎡
⎤⎡
⎤ ⎡
⎤
⎡
⎤
⎡
⎤⎡
⎤ ⎡
⎤
⎡
2 1
2
1
2
1
2 1
2
−3
−3
d
...
a
...
λ = 1 c
...
a
...
λ1 = −1, λ2 = 2, λ3 = −2, λ4 = 4
⎡
⎤
⎡
⎤
⎡
⎤
⎡
⎤
1
0
0
0
⎢ 0 ⎥
⎢ 1 ⎥
⎢ 0 ⎥
⎢ 0 ⎥
⎥
⎢
⎥
⎢
⎥
⎢
⎥
c
...
For the eigenvalue λ = −1, the verification is
0
0
0
1
⎡
⎤⎡
⎤ ⎡
⎤
⎡
⎤
−1 0
0 0
1
−1
1
⎢ 0 2
⎢
⎥
0 0 ⎥⎢ 0 ⎥ ⎢ 0 ⎥
⎢
⎥⎢
⎥=⎢
⎥ = (−1) ⎢ 0 ⎥
...
⎣ 0 0 −2 0 ⎦ ⎣ 0 ⎦ ⎣ 0 ⎦
⎣ 0 ⎦
0 0
0 4
0
0
0
⎡
⎤
−1
⎢ 1 ⎥
⎥
16
...
(λ + 1)(λ − 1)(λ − 2)(λ − 3) = 0 b
...
v1 = ⎢
⎣ 0 ⎦,
−2
⎡
⎤
⎡
⎤
⎡
⎤
−1
−7
1
⎢ 1 ⎥
⎢ 2 ⎥
⎢ 0 ⎥
⎥
⎢
⎥
⎢
⎥
v2 = ⎢
⎣ 0 ⎦,v3 = ⎣ 1 ⎦ , v4 = ⎣ 0 ⎦
0
0
0
5
...
The characteristic
z w
equation det(A − λI) = 0, is given by (x − λ)(w − λ) − zy = 0 and simplifies to λ2 − (x + w)λ + (xw − yz) = 0
...
17
...
Since A is invertible, then λ ̸= 0
...
) Now since Av = λv, then A−1 Av = A−1 (λv) =
1
λA−1 v and hence A−1 v = λ v
...
Suppose A is not invertible
...
Since
Ax0 = 0 = 0x0 , then x0 is an eigenvector of A corresponding to the eigenvalue λ = 0
...
Then there exists a nonzero vector x0 such that Ax0 = 0, so A is not
invertible
...
If λ is an eigenvalue for T with geometric multiplicity n, then corresponding to λ are n linearly independent
eigenvectors v1 ,
...
If v is a vector in V there are scalars c1 ,
...
If v is also a nonzero vector, then
T (v) = T (c1 v1 + · · · + cn vn ) = c1 T (v1 ) + · · · + cn T (vn ) = c1 λv1 + · · · + cn λvn = λv
and hence, v is an eigenvector
...
Let A be an idempotent matrix, that is, A satisfies A2 = A
...
We also have that Av = A2 v = A(Av) = A(λv) = λAv = λ2 v
...
But v is an eigenvector,
so that v ̸= 0
...
22
...
For the second part of the question, let
1 1
1 0
A=
and B =
...
But
0 0
1 0
−1
1
1
0
the eigenvectors of A are
,
and the eigenvectors of B are
,
...
Let A be such that An = 0 for some n and let λ be an eigenvalue of A with corresponding eigenvector v,
so that Av = λv
...
Continuing in this way we see that An v = λn v
...
Since v ̸= 0, then λn = 0 and hence, λ = 0
...
a
...
T (f ) =
1
0
1
0
0 −1
0
−1
0
1
0
0
0
0
1
0
−
−
0
1
0 1
1
0
=
0 0
0 −1
0
1
0
0
=
0
0 −1
−2
0
0
2
0
0
0
= 2e
= −2f
25
...
Notice that for a matrix C, we can use the multiplicative
property of the determinant to show that
det(A−1 CA) = det(A−1 ) det(C) det(A) = det(A−1 ) det(A) det(C) = det(A−1 A) det(C) = det(I) det(C) = det(C)
...
26
...
Since the
only eigenvalue of the identity matrix I is λ = 1, then AB − BA cannot equal I
...
Since the matrix is triangular, then the characteristic equation is given by
det(A − λI) = (λ − a11 )(λ − a22 ) · · · (λ − ann ) = 0,
so that the eigenvalues are the diagonal entries
...
Suppose λ is an eigenvalue of A, so there is a nonzero vector v such that Av = λv
...
For the inductive hypothesis assume that λn is an eigenvalue of An
...
If v is an eigenvector of A, then v is an eigenvector of An for all n
...
Let λ be an eigenvalue of C = B −1 AB with corresponding eigenvector v
...
Multiplying both sides of the previous equation on the left by B gives A(Bv) = λ(Bv)
...
30
...
Then there are scalars c1 ,
...
Then
Av = c1 Av1 + · · · + cm Avm = c1 λ1 v1 + · · · + cm λm vm ,
which is in S
...
If λ = 1, then y = 0 and
1
eigenvalues are λ = 1 and λ = −1 with corresponding eigenvectors
0
31
...
Then T
=λ
−y
y
y
if λ = −1, then x = 0
...
1
32
...
=0
corresponding to λ1 = 1, and
λ2 = −1
...
When θ = 0, then T is the identity map
x
x
T
=
...
If θ = π, then T
=
=−
...
33
...
Hence T
34
...
Since T (ekx ) = k 2 ekx − 2kekx − 3ekx = (k 2 − 2k − 3)ekx , the function ekx is an eigenfunction
...
c
...
35
...
Notice that, for example, [T (x − 1)]B = [−x2 − x]B and since − 1 (x − 1) − 1 (x + 1) − x2 = −x − x2 ,
2
2
⎡ 1 ⎤
−2
then [−x2 − x]B = ⎣ − 1 ⎦
...
−1 −1 1
⎡
⎤
1
1 0
b
...
−1
0 1
c
...
5
...
2
Diagonalization of a matrix is another type of factorization
...
An n × n matrix A is diagonalizable
if and only if A has n linearly independent eigenvectors
...
• The diagonal entries of D are the corresponding eigenvalues, placed on the diagonal of D in the same
order as the eigenvectors in P
...
⎡
0 −1
A = ⎣ 0 −1
1 1
Eigenvalues:
−3, 0, 1
Eigenvectors:
⎡
⎤ ⎡
1
⎣ 1 ⎦,⎣
−1
⎤
2
2 ⎦
−1
⎤ ⎡
⎤
−1
1
2 ⎦,⎣ 1 ⎦
1
1
Diagonalizable:
⎡
⎤
1 −1 1
2 1 ⎦
P =⎣ 1
−1 1 1
⎡
Three Examples
⎤
−1 2 0
A = ⎣ −1 2 0 ⎦
1 −1 1
Eigenvalues:
0, 1 multiplicity 2
Eigenvectors:
⎡
⎤ ⎡ ⎤ ⎡ ⎤
2
1
0
⎣ 1 ⎦,⎣ 1 ⎦,⎣ 0 ⎦
−1
0
1
⎡
⎤
−3 0 0
D=⎣ 0 0 0 ⎦
0 0 1
Diagonalizable:
⎡
2 1
P =⎣ 1 1
−1 0
Other useful results given in Section 5
...
⎤
0
0 ⎦
1
⎤
0
0 ⎦
1
• If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable
...
• If T : V −→ V is a linear operator and B1 and B2 are bases for V, then [T ]B1 and [T ]B2 have the same
eigenvalues
...
That is, suppose A has eigenvalues λ1 ,
...
, dk
...
Solutions to Exercises
1
...
P −1 AP =
−4
0
0 −2
124
Chapter 5 Eigenvalues and Eigenvectors
⎡
0
3
...
The eigenvalues for the matrix A are −2 and
−1
...
Hence, A is diagonalizable
...
There is only one eigenvalue −1, with multi1
plicity 2 and eigenvector
...
9
...
The eigenvectors are not necessary in this case,
since A is a 2 × 2 matrix with two distinct eigenvalues
...
11
...
The eigenvectors are not necessary in this
case, since A is a 3 × 3 matrix with three distinct
eigenvalues
...
13
...
Since there are only two linearly independent eigenvectors, A is not diagonalizable
⎡
3
4
...
The eigenvalues for the matrix A are −2 + 6
√
and −2 − 6
...
8
...
Since A does not have two linearly
1
independent eigenvectors, A is not diagonalizable
...
The only eigenvalue for the matrix A is 0
with multiplicity 2 and corresponding eigenvec−1
tor
...
12
...
Since there are only two
corresponding linearly independent eigenvectors,
⎡
⎤
⎡
⎤
1
5
⎣ 1 ⎦ and ⎣ 1 ⎦ , the matrix A is not diagonal1
1
izable
...
The eigenvalues are 3, 0, and 1
...
√ √
15
...
The eigenvalues for the matrix A are 2, 2,
0 with multiplicity 2
...
Hence A is diagonalizable
...
In this case there are three linearly indepen⎡
⎤ ⎡
⎤
⎡
⎤
−1
0
0
dent eigenvectors ⎣ 1 ⎦ , ⎣ 1 ⎦ , and ⎣ 0 ⎦
...
17
...
In addition there are four linearly inde⎡
⎤ ⎡ ⎤ 2,
⎤
0
0
0
−1
⎢ −1 ⎥ ⎢ 1 ⎥ ⎢ −1 ⎥
⎢ 0 ⎥
⎥ ⎢ ⎥ ⎢
⎥
⎢
⎥
pendent eigenvectors ⎢
⎣ 1 ⎦ , ⎣ 2 ⎦ , ⎣ 0 ⎦ , and ⎣ 1 ⎦ corresponding to the two distinct eigenvalues
...
18
...
Since there are only two corresponding
⎡
⎤
⎤
1
1
⎢ 1 ⎥
⎢ 1 ⎥
⎥
⎢
⎥
linearly independent eigenvectors ⎢
⎣ 1 ⎦ and ⎣ 1 ⎦ , the matrix is not diagonalizable
...
2 Diagonalization
125
19
...
Since the characteristic equation is
2−λ
0
= (2 − λ)(−1 − λ) = 0, the eigenvalues are λ1 = 2 and λ2 = −1
...
This gives eigenvectors
y
y
y
y
−3
0
corresponding to λ1 = 2 and
corresponding to λ2 = −1
...
0 −1
√
√
√
5−2
20
...
If P =
, then A = P
P −1
...
The eigenvalues of A are −1, 1, and 0 with corresponding linearly independent eigenvectors ⎣ 1 ⎦ , ⎣ 1 ⎦ ,
1
3
⎡
⎤
⎡
⎤
⎡
⎤
0
0 2 0
−1 0 0
and ⎣ 1 ⎦ , respectively
...
2
1 3 2
0 0 0
⎡
⎤ ⎡
⎤
1
4
22
...
If P = ⎣ 0 −2 −2 ⎦ , then A = P ⎣ 0 1 0 ⎦ P −1
...
The eigenvalues of the matrix A are
⎡
⎤
2
linearly independent eigenvectors, ⎣ 1 ⎦ ,
0
⎡
⎤
⎡
2 0 0
−1
P = ⎣ 1 1 0 ⎦ , then P −1 AP = ⎣ 0
0 0 1
0
λ1 = −1 and λ2 = 1 of ⎡ ⎤ ⎡ two
...
If
0
1
⎤
0 0
1 0 ⎦
...
The⎡
and 0 with ⎤
corresponding linearly independent eigenvectors
⎡
⎤ eigenvalues of A are 1 with multiplicity 2,⎡
⎤
⎡
⎤
⎡
⎤
0
1
0
0 1 0
1 0 0
⎣ 0 ⎦ , ⎣ 0 ⎦ , and ⎣ 1 ⎦ , respectively
...
1
0
1
1 0 1
0 0 0
25
...
There are
four ⎡
linearly independent eigenvectors corresponding to ⎤ eigenvalues given as the column vectors in
the
⎤
⎡
−1 0 −1 1
1 0 0 0
⎢ 0 1 0 0 ⎥
⎢ 0 1 0 0 ⎥
−1
⎥
⎢
⎥
P =⎢
⎣ 0 0 1 1 ⎦
...
1 0 0 0
0 0 0 2
126
Chapter 5 Eigenvalues and Eigenvectors
26
...
0 0 0 1
3, 0 ⎤
with multiplicity 2, and 1 with corresponding linearly
⎡
⎤
⎡
−1
1
1 −1 −1
⎢ −1 ⎥
⎢ 0 0
0 ⎥
0
⎥ and ⎢
⎥
⎢
⎣ 0 ⎦ , respectively
...
The proof is by induction on the power k
...
The inductive hypothesis is to assume the result holds for a natural number k
...
We need to show that the result holds for the next positive integer k + 1
...
28
...
486 243
, respectively
...
The eigenvalues of the matrix A are ⎡ and 1 of multiplicity 2, with corresponding linearly independent
0
⎤
⎡
⎤
1 0 1
0 0 0
eigenvectors the column vectors of P = ⎣ 1 −2 2 ⎦
...
Notice that the eigenvalues on the diagonal have
the same order as the corresponding eigenvectors in ⎤ with the eigenvalue 1 repeated two times since the
P,
⎡ k
0
0 0
algebraic multiplicity is 2
...
2 −1 −1
30
...
31
...
Since B is similar to A there is an invertible Q such that B = Q−1 AQ, so that A = QBQ−1
...
32
...
Then
A−1 = (P DP −1 )−1 = P D−1 P −1
...
For the second part of the question, let
0 1
A =
, so that A is not invertible
...
2 Diagonalization
127
1 1
0 0
1 −1
...
If A is diagonalizable with an eigenvalue of multiplicity n, then A = P (λI)P −1 = (λI)P P −1 = λI
...
34
...
Suppose A is diagonalizable
with A = P DP −1
...
But this means D = 0 which
implies A = 0, a contradiction
...
a
...
[T ]B2 = ⎣ −1 −1 0 ⎦ c
...
d
...
For example, for [T ]B2 , the
⎡
−1
eigenvector corresponding to λ = 0 is ⎣ 1 ⎦
...
0
36
...
Since the matrix
[T ]B = [ [T (sin x)]B [T (cos x)]B ] = [ [cos x]B [− sin x]B ] =
0 −1
1
0
has the eigenvalues the complex numbers i and −i with corresponding eigenvectors
T is diagonalizable over the complex numbers but not over the real numbers
...
To show that T is not diagonalizable it suffices to show that [T ]B is
of R3
...
0
The eigenvalues ⎤ [T ]B are λ1 = 1 with multiplicity 2, and λ2 = 2
...
Since there are only two linearly independent eigenvectors, [T ]B and hence,
1
1
T is not diagonalizable
...
Let B be the standard basis for R3
...
⎤
2 4
2 4 ⎦,
0 4
39
...
Let A = Q−1 BQ
...
Then
D = P −1 (Q−1 BQ)P = (QP )−1 B(QP ),
so that B is diagonalizable
...
128
Chapter 5 Eigenvalues and Eigenvectors
Exercise Set 5
...
The strategy is to uncouple the system of differential equations
...
0 −2
The next step is to diagonalize the matrix A
...
So A = P DP −1 ,
0
1
1 −1
1 1
−1
0
where P =
, P −1 =
, and D =
...
The general solution to the uncoupled system is w(t) =
w(0)
...
That is,
0 e−2t
y1 (t) = (y1 (0) + y2 (0))e−t − y2 (0)e−2t , y2 (t) = y2 (0)e−2t
...
Let A =
−2
1
and
, respectively
...
The eigenvalues of A are 4 and −2
, respectively, so that
...
Let A =
and
1/3 2/3
−1/3 1/3
,
1
2
1
1
(y1 (0) + 2y2 (0))et + (y1 (0) − y2 (0))e−2t , y2 (t) = (y1 (0) + 2y2 (0))et + (−y1 (0) + y2 (0))e−2t
...
Using the same approach as in Exercise (1), we let A =
A=
1
0
...
Then the eigenvalues of A are 1 and −2 with corresponding eigenvectors
y2 (t) =
e4t
0
1
1
(−y1 (0) + y2 (0))e4t + (y1 (0) + y2 (0))e−2t
...
Then the general solution to the uncoupled system is w(t) =
0 e−2t
1
0
hence y(t) = P
P −1 y(0), that is,
0 e2t
y1 (t) =
P −1 y(0),
...
So A = P DP −1 =
P −1 AP w =
0
e−2t
1
1
,
and hence, w′ (t) =
1
0
0 e2t
w(0) and
1
1
1
1
(y1 (0) + y2 (0)) + (y1 (0) − y2 (0))e2t , y2 (t) = (y1 (0) + y2 (0)) + (−y1 (0) + y2 (0))e2t
...
3 Application: Systems of Linear Differential Equations
129
⎡
⎤
−4 −3 −3
3
3 ⎦
...
Let A = ⎣ 2
4
3
⎡ 2
⎤⎡
⎤⎡
⎤
−1 0
1
−1 0 0
−2 −2 −1
1
2 ⎦ , where the column vectors of P are the
A = P DP −1 = ⎣ 0 −1 −2 ⎦ ⎣ 0 1 0 ⎦ ⎣ 2
1
1
0
0 0 2
−1 −1 −1
eigenvectors of A and the ⎤
diagonal entries of D are the corresponding eigenvalues
...
⎡
⎤
−3 −4 −4
11
13 ⎦
...
Let A = ⎣ 7
−5 −8 −10
⎡
⎤⎡
⎤⎡
⎤
0
2
1
−2 0 0
3
4
5
1
1 ⎦ , where the column vectors of P are
A = P DP −1 = ⎣ −1 1 −2 ⎦ ⎣ 0 −1 0 ⎦ ⎣ 1
1 −2 1
0
0 1
−1 −2 −2
the eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues
...
7
...
The eigenvalues of A =
are 1 and −1, with corresponding eigenvectors
0
1
and
−1
1
−1 0
2 1
, respectively
...
Then
0 e−t
y1 (t) = e−t y1 (0), y2 (t) = et (y1 (0) + y2 (0)) − e−t y1 (0)
...
⎡
⎤
5 −12 20
8
...
The eigenvalues of A are 1, 3 and −1, so that
2 ⎡
−4 7
⎤⎡
⎤⎡
⎤
1 2 2
1 0 0
−1 2 −2
A = P DP −1 = ⎣ 2 2 1 ⎦ ⎣ 0 3 0 ⎦ ⎣ 1 −2 3 ⎦ , where the column vectors of P are the
1 1 0
0 0 −1
0
1 −2
eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues
...
The solution to the initial value problem is
y1 (t) = −4et + 8e3t − 2e−t , y2 (t) = −8et + 8e3t − e−t , y3 (t) = −4et + 4e3t
...
a
...
Let y1 (t) and y2 (t) denote the amount of salt in each take after t minutes,
′
′
so that y1 (t) and y2 (t) are the rates of change for the amount of salt in each tank
...
This gives
the initial value problem
′
y1 (t) = −
1
1
y1 +
y2 ,
60
120
′
y2 (t) =
1
1
y1 −
y2 ,
60
120
1
y1 (0) = 12, y2 (0) = 0
...
The solution to the system is y1 (t) = 4 + 8e− 40 t , y2 (t) = 8 − 8e− 40 t
...
Since the exponentials in both y1 (t) and y2 (t) go to 0 as t goes to infinity, then limt→∞ y1 (t) = 4 and
limt→∞ y2 (t) = 8
...
10
...
Let x(t) and y(t) denote the temperature of the first and second floor, respectively
...
10
10
10
10
x′ (t) = −
with initial conditions x(0) = 70, y(0) = 60
...
The solution to the initial value problem is
x(t) ≈ 61
...
15t + 8
...
15t, y(t) ≈ 67
...
15t + 7
...
15t
...
The first floor will reach 32◦ in approximately 4
...
Exercise Set 5
...
a
...
85 0
...
Notice that the transition matrix is a probability matrix since each column sum
0
...
92
is 1
...
4 currently in the city, the population of the city
is 1
...
7 and the population of the suburbs is 0
...
3
...
7
the current population is
...
After 10 years an estimate for the population distribution is given by
0
...
7
0
...
35
T 10
≈
...
The steady state probability vector is
, which is the unit probability
0
...
63
0
...
2
...
The transition matrix is T =
T2
0
...
65
≈
0
...
6125
0
...
2
0
...
8
...
The initial probability state vector is
0
...
65
...
35
0
...
65
0
...
is approximately 39%
...
4 Application: Markov Chains
131
c
...
57+0
...
57
0
...
4
0
...
57
0
...
The steady state vector is the
...
5 0
...
1
3
...
4 0
...
2 ⎦
...
1 0
...
7
⎡
⎤
0
there are only plants with pink flowers, the initial probability state vector is ⎣ 1 ⎦
...
36
0
the probabilities of each variety are given by T 3 ⎣ 1 ⎦ ≈ ⎣ 0
...
29
0
⎡
⎤
0
...
33 ⎦
...
33
4
...
6 0
...
4
T = ⎣ 0
...
4 0
...
0
...
3 0
...
48
a
...
22 ⎦ , the probability of the taxi being in location S after three fares is 0
...
0
0
...
3
0
...
Since T 5 ⎣ 0
...
23 ⎦ , the probability of the taxi being in location A after three fares is 0
...
35
0
...
23, and location S is 0
...
c
...
82
0
...
47
1
⎣ 0
...
The steady state vector is the probability eigenvector
⎣ 0
...
23 ⎦
...
82 + 0
...
52
0
...
52
0
...
5 0
0
5
...
5 0
...
The eigenvector of T, corresponding to
0
0
...
Hence, the steady state probability vector is ⎣ 0 ⎦ and the disease will not
1
1
be eradicated
...
45 0
...
7
6
...
55 0
...
3
0
...
27
of smokers and non-smokers after 5 years are given by T 5
=
and after 10 years are also
0
...
73
0
...
27
0
...
The eigenvector of T corresponding to the eigenvalue λ = 1 is
...
3
0
...
04
1
0
...
27
The steady state vector is the probability eigenvector
=
, so in the long run
0
...
38 + 1
...
04
approximately 27% will be smoking
...
33 0
...
17 0
...
5(0
...
25
0
...
25 0
...
25 0
...
25
⎥ b
...
⎢ 0
...
a
...
17 0
...
33 0
...
5(0
...
25 ⎦
⎣ 0
...
25 0
...
25 0
...
25
0
...
b
...
If n is odd, then T n = T
v1
v2
v1
v2
and if n is even, then T n = I
...
Hence T v = v if and only if v =
...
The population is split in two groups which do not
1/2
intermingle
...
a
...
The eigenvalues of T are λ1 = −q + p + 1 and λ2 = 1, with corresponding eigenvectors
q/p
1
−1
1
and
...
Let T =
So let T =
a
b
a b
b a
b
c
q
p
q/p
1
=
q
p+q
p
p+q
...
If T is also stochastic, then a + b = 1 and b + c = 1, so a = c
...
a
...
b
...
Hence, the steady state probability vector is
...
a
...
But
...
b
...
First notice that A is
1
1
diagonalizable since it has two linearly independent eigenvectors
...
That is, P =
and D =
...
c
...
a
...
b
...
c
...
d
...
e
...
f
...
0 0 2
3
...
The eigenvalues are λ1 = 0 and λ2 = 1
...
No conclusion can be made from part (a) about whether
or not A is diagonalizable, since the matrix does not have four ⎡
distinct eigenvalues
...
Each eigenvalue
⎤
⎡
⎤
0
−1
⎢ 0 ⎥
⎢ 0 ⎥
⎥
⎢
⎥
has multiplicity 2
...
d
...
e
...
4
...
The characteristic polynomial is det(A − λI) = (λ − 1)2 (λ − 2)
...
The eigenvalues are the solutions
to (λ − 1)2 (λ⎡ 2) = 0, that is λ1 = 1 with multiplicity ⎤ and λ2 = 2
...
The eigenvectors of A are
−
2,
⎡
⎤
⎤
⎡
1
0
0
⎣ −3 ⎦ and ⎣ −1 ⎦ corresponding to λ1 = 1 and ⎣ −2 ⎦ corresponding to λ2 = 2
...
d
...
Equivalently, since dim(Vλ1 ) + dim(Vλ2 ) = 3, then the matrix A is diagonalizable
...
P = ⎣ −3 −1 −2 ⎦ , D = ⎣ 0 1 0 ⎦ f
...
For example,
0
1
1
0 0 2
⎡
⎤
⎡
⎤
⎡
⎤
⎡
⎤
1
0
0
1 0 0
0
0
1
2 0 0
P1 = ⎣ −3 −2 −1 ⎦ , D1 = ⎣ 0 2 0 ⎦ and P2 = ⎣ −2 −1 −3 ⎦ , D2 = ⎣ 0 1 0 ⎦
...
a
...
+ (−λ)
−λ
0
−k −λ
− (1)
−λ 1
−k 3
= λ3 − 3λ + k
...
The graphs of y(λ) = λ3 − 3λ+ k for different values c
...
when the graph of y(λ) = λ3 − 3λ + k crosses the x-axis
three time
...
k=3
k = 2
...
5
k = -3
k = -4
6
...
Suppose Bv = λv, so v is an eigenvector of B corresponding
to the eigenvalue λ
...
⎡
⎤
1
⎢ 1 ⎥
⎢
⎥
7
...
Let v = ⎢
...
⎦
...
That is, Av = ⎢
⎣
λ
λ
...
...
...
⎤
⎥
⎥
⎥ , so λ is an eigenvalue of A corresponding to the eigenvector v
...
⎦
λ
1
Since A and At have the same eigenvalues, then the same result holds if the sum of each column of A is equal
to λ
...
a
...
And for every v in V, then T (v) is in V so
V is invariant
...
Since dim(W ) = 1, there is a nonzero vector w0 such that W = {aw0 | a ∈ R}
...
So there is some λ such that w1 = λw0
...
c
...
Since the
matrix representation for T is given relative to the standard basis,
T (v) = λv ⇔ T (v) =
0 −1
1
0
v=
0
1
−1
0
v1
v2
⇔ v1 = v2 = 0
...
9
...
Suppose w is in S(Vλ0 ), so that w = S(v) for some eigenvector v of T corresponding to λ0
...
Hence, S(Vλ0 ) ⊆ Vλ0
...
Let v be an eigenvector of T corresponding to the eigenvalue λ0
...
Now by part (a), T (S(v)) = λ0 (S(v)), so that S(v) is also an eigenvector
of T and in span{v}
...
c
...
, vn } be a basis for V consisting of eigenvectors of T and S
...
, λn and µ1 , µ2 ,
...
Now let v be a
vector in V
...
, cn such that v = c1 v1 +c2 v2 +
...
Applying the operator ST to both sides of this equation we obtain
ST (v) = ST (c1 v1 + c2 v2 +
...
+ cn λn vn )
= c1 λ1 µ1 v1 + c2 λ2 µ2 v2 +
...
+ cn µn λn vn
= T (c1 µ1 v1 + c2 µ2 v2 +
...
+ cn vn ) = T S(v)
...
⎡
⎤ ⎡
⎤
⎡
⎤
1
−1
0
d
...
Then
B
...
a
...
...
...
...
...
...
...
0
0
...
...
eλn
λk
1
0
...
...
...
0
⎤
0
0
...
...
...
...
...
...
1
...
m
k=1
...
...
λk
n
⎥
⎥
⎥
⎥
⎥
⎥
⎦
⎥
⎥
⎥
⎥
...
Suppose that A is diagonalizable and A = P DP −1 so Ak = P Dk P −1
...
m→∞
2!
m!
2!
m!
c
...
...
Chapter Test Chapter 5
1
...
P −1 AP =
4
...
F
...
The eigenvalues of A are
−1 and 1, and the only eigenvalue of D is −1, so the matrices
are not similar
...
T
3
...
The matrix has only
two linearly ⎤
independent eigen⎡
⎡
⎤
2
0
vectors ⎣ 0 ⎦ and ⎣ 0 ⎦
...
T
136
Chapter 5 Eigenvalues and Eigenvectors
7
...
T
9
...
T
11
...
Since det(A − λI) is
12
...
The det(A − λI) is
(λ − (a + b))(λ − (a − b)),
the eigenvalues are a+b and a−b
...
F
...
λ2 − 2λ + (1 − k),
so that, for example, if k = 1,
then A has eigenvalues 0 and 2
...
T
16
...
T
15
...
The matrix has
one eigenvalue λ = 1, of multiplicity 2, but does not have
two linearly independent eigenvectors
...
T
19
...
F
...
T
2α − β
α−β
2β − 2α 2β − α
22
...
T
23
...
The matrix is similar to a
diagonal matrix but it is unique
up to permutation of the diagonal entries
...
T
24
...
The matrix can still have
n linearly independent eigenvectors
...
27
...
T
29
...
F
...
T
32
...
33
...
F
...
37
...
T
36
...
T
39
...
T
6
...
1
In Section 6
...
The dot
product of two vectors is of central importance
...
So
⎡
⎤ ⎡
⎤
u1
v1
⎢ u2 ⎥ ⎢ v2 ⎥
⎢
⎥ ⎢
⎥
u · v = ⎢
...
⎣
...
...
For example, the length
of a vector √ Rn is then defined as ||u|| = u2 + u2 + · · · + u2 and the distance between two vectors is
in
n
1
2
||u − v|| = u · u
...
||u||||v||
• Two vectors are perpendicular if and only if u · v = 0
...
For example, if u = ⎣ 2 ⎦ , then ||u|| = 1 + 4 + 9 = 14,
||u||
−3
⎡
⎤
1
1
so a vector of length 1 and in the direction of u is √ ⎣ 2 ⎦
...
, vn } is pairwise orthogonal, which means vi · vj = 0 whenever i ̸= j, then using the properties of
dot product
vi · (c1 v1 + c2 v2 + · · · + ci vi + · · · + cn vn ) = c1 (vi · v1 ) + c2 (vi · v2 ) + ci (vi · vi ) + · · · + cn (vi · vn )
= 0 + · · · + 0 + ci (vi · vi ) + 0 + · · · + 0
= ci (vi · vi ) = ci ||vi ||2
...
u · v = (0)(1) + (1)(−1) + (3)(2) = 5
3
...
∥ u ∥=
√
√
12 + 52 = 26
(−3)2 + (−2)2 + 32 =
u·v
v·v
=
0−1+6
1+1+4
=
√
22
⎤
⎡
⎤
1
1
u·w
8
4
...
√1 ⎣ −2 ⎦
22
3
⎡
⎤
−1
15
...
Since two vectors in R2 are orthogonal if and
c
only if their dot product is zero, solving
·
3
−1
= 0, gives −c + 6 = 0, that is, c = 6
...
Since cos θ = ||u||||v|| =
tors are not orthogonal
...
The vector w is orthogonal to u and v if and
only if w1 + 5w2 = 0 and 2w1 + w2 = 0, that is,
w1 = 0 = w2
...
||u − v|| =
(u − v) · (u − v)
⎡
⎤ ⎡
⎤
−2
−2
√
⎣ −1 ⎦ · ⎣ −1 ⎦ = 41
6
6
=
⎡
5
6
⎡
6
...
Divide each component of the vector by the
1
norm of the vector, so that √1
is a unit
26
5
vector in the direction of u
...
√5
1
11
...
√
14
...
16
...
So all vectors in
9w3
−12w
9
⎨
⎬
span ⎣ −12 ⎦ are orthogonal to the two vec⎩
⎭
1
tors
...
⎣ c ⎦ · ⎣ 2 ⎦ = 0 + 2c − 2 = 0 ⇔ c = 1
2
−1
19
...
20
...
21
...
22
...
1 The Dot Product on Rn
23
...
w =
y
y
5
5
u
25
u
v 5
w
x
25
25
25
...
w = ⎣ 0 ⎦
0
3
1
3
2
v 5
w
y
5
u
v
w
5
25
v
w
x
u
25
⎡
⎤
⎡
⎤
0
3
28
...
w = 6 ⎣ 2 ⎦
1
v
w
u
w
u
v
29
...
Then there exist scalars c1 , c2 , · · · , cn such that
u = c1 u1 + c2 u2 + · · · + cn un
...
+ cn un )
= c1 (v · u1 ) + c2 (v · u2 ) + · · · + cn (v · un )
= c1 (0) + c2 (0) + · · · + cn (0) = 0
...
If u and w are in S and c is a scalar, then
(u + cw) · v = u · v + c(w · v) = 0 + 0 = 0
and hence, S is a subspace
...
Consider the equation
c1 v1 + c2 v2 + · · · + cn vn = 0
...
Since
v1 · (c1 v1 + c2 v2 + · · · + cn vn ) = v1 · 0, we have that c1 (v1 · v1 ) + c2 (v1 · v2 ) + · · · + cn (v1 · vn ) = 0
...
Now since the vectors are nonzero their lengths are positive, so ||v1 || = 0 and hence, c1 = 0
...
Hence, S is linearly independent
...
Since AA−1 = I, then
n
k=1
aik a−1 = 0 for i ̸= j
...
For any vector w, the square of the norm and the dot product are related by the equation ||w||2 = w · w
...
⎡
⎤
1
34
...
The normal vector to the plane, n = ⎣ 2 ⎦ , is orthogonal to every vector in the plane
...
A = ⎣ 0 0 0 ⎦
0 0 0
35
...
Consequently, (At A)ij = 0 if i ̸= j
...
Thus,
⎡
⎤
||A1 ||2
0
···
0
⎢
⎥
...
⎢
⎥
0
||A2 ||2 0
...
At A = ⎢
⎢
⎥
...
...
...
First notice that u · (Av) = ut (Av)
...
37
...
By Exercise 36, u · (Av) = (At u) · v
...
Let u = ei and v = ej , so (At )ij = Aij
...
For the converse, suppose that A = At
...
Exercise Set 6
...
An inner product on the
vector space V is a mapping from V × V, that is the input is a pair of vectors, to R, so the output is a number
...
1
...
2 Inner Product Spaces
141
determine if a mapping on V × V defines an inner product we must verify that:
• ⟨u, u⟩ ≥ 0 and ⟨u, u⟩ = 0 if and only if u = 0
• ⟨u, v⟩ = ⟨v, u⟩
• ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
• ⟨cu, v⟩ = c ⟨u, v⟩
In addition the definition of length, distance and angle between vectors are given in the same way with dot
product replaced with inner product
...
||u||||v||
Also two vectors in an inner product space are orthogonal if and only if ⟨u, v⟩ = 0 and a set S = {v1 , v2 ,
...
That is, ⟨vi , vj ⟩ = 0, whenever i ̸= j
...
, k, ||vi || = 1, then S is orthonormal
...
If B = {v1 , v2 ,
...
⟨v1 , v1 ⟩
⟨v2 , v2 ⟩
⟨vn , vn ⟩
If B is orthonormal, then the expansion is
v = ⟨v, v1 ⟩ v1 + ⟨v, v2 ⟩ v2 + · · · + ⟨v, vn ⟩ vn
...
Let u =
u1
u2
...
Then
⟨u, u⟩ = u2 − 2u1 u2 − 2u2 u1 + 3u2 = (u1 − 3u2 )(u1 + u2 )
...
For example, if
u1 = 3 and u2 = 1, then ⟨u, u⟩ = 0 but u is not the zero vector
...
Since ⟨u, u⟩ = −u2 + 2u1 u2 = u1 (−u1 + 2u2 ) = 0 if and only if u1 = 0 or u1 = 2u2 , then V is not an inner
1
product space
...
In this case ⟨u, u⟩ = 0 if and only if u is the zero vector and ⟨u, v⟩ = ⟨v, u⟩ for all pairs of vectors
...
1
2
For example if u =
Now let w =
−1
−1
1
1
and v =
2
2
2
2
2
2
, then ⟨u + v, w⟩ = 9w1 + 9w2 and ⟨u, v⟩ + ⟨v, w⟩ = 5w1 + 5w2
...
4
...
5
...
142
Chapter 6 Inner Product Spaces
6
...
Since (At A)ii =
m
m
m
t
2
k=1 aik aki =
k=1 aki aki =
k=1 aki , then ⟨A, A⟩ ≥ 0 and ⟨A, A⟩ = 0 if and only if A = 0, so the first
required property holds
...
Using the fact that tr(A + B) = tr(A) + tr(B), the third property follows in a similar manner
...
7
...
For the first requirement
m
n
⟨A, A⟩ = i=1 j=1 a2 , which is nonnegative and 0 if and only if A is the zero matrix
...
Also,
m
⟨A + B, C⟩ =
n
(aij cij + bij cij ),
i=1 j=1
then ⟨A + B, C⟩ = ⟨A, C⟩ + ⟨B, C⟩
...
8
...
1
10
...
9
...
11
...
That is, provided ⟨1, sin x⟩ =
0, ⟨1, cos x⟩ = 0, and ⟨cos x, sin x⟩ = 0
...
Similarly, ⟨1, cos x⟩ = 0
...
1
1
12
...
13
...
14
...
π
Also, since −π cos xdx = 0, then the set is orthogonal
...
2 Inner Product Spaces
143
15
...
The distance between the two functions is
1
||f − g|| =∥ −3 + 3x − x2 ∥=
0
b
...
a
...
π
||f − g|| =∥ cos x − sin x ∥=
−π
(1 − sin 2x)dx =
b
...
a
...
cos θ =
370
...
⟨f,g⟩
||f ||||g||
= 0, so f and g are orthogonal
...
a
...
cos θ = 2
e − e−2
√
20
...
∥ p − q ∥= ||3 − x|| = 10 b
...
a
...
cos θ = − 2
3
21
...
To find the distance between the matrices, we have that
∥A−B ∥=
1
2
=
2
−1
tr
−
2 1
1 3
−1
1
1 −4
−1
1
1 −4
=
−1
1
1 −4
=
2 −5
−5 17
tr
b
...
a
...
a
...
a
...
Since
x
y
tr
⎡
8
tr ⎣ 0
8
⎡
√
=2 3
10 −4
−4
2
8
tr ⎣ 6
0
b
...
cos θ =
2
3
so the set of vectors that are orthogonal to
is
x
y
=
−1
1
1 −4
√
19
...
5 6
√ 26
√
38 39
√ 19
√
33 28
·
x
y
⟨A,B⟩
||A||||B||
=
,
√4
99
b
...
Notice that the set describes a
line
...
Since
x
y
·
1
−b
= 0 ⇔ x − by = 0, the set of all vectors that are orthogonal to
x − by = 0
...
1
−b
is the line
144
Chapter 6 Inner Product Spaces
⎡
⎤ ⎡
⎤
⎡
⎤
x
2
2
27
...
This set describes the plane with normal vector n
...
The set of all vectors orthogonal to ⎣ 1 ⎦ , is ⎣ y ⎦ x + y = 0
...
a
...
a
...
⟨ex , e−x ⟩ = 0 dx
6
0
√
3
= 33 d
...
∥ 1 − x
1 0
0 1
= 1 c
...
2 −1
, so ⟨u, v⟩ = 2u1 v1 − u1 v2 − u2 v1 + 2u2 v2 , which defines an inner product
...
Let A =
, so ⟨u, v⟩ = 3u1 v1 + 2u1 v2 + 2u2 u1
...
3
b
...
If f is an even function and g is an odd function, then f g is an odd function
...
32
...
33
...
Since u1 and u2 are orthogonal, ⟨u1 , u2 ⟩ = 0 and hence, ⟨c1 u1 , c2 u2 ⟩ = 0
...
34
...
Exercise Set 6
...
Given any basis B the GramSchmidt process is a method for constructing an orthogonal basis from B
...
3 Orthonormal Bases
145
projecting one vector onto another
...
v·v
⟨v, v⟩
Notice that the vectors projv u and u − projv u are orthogonal
...
−1
1
1
Notice that B is not an orthogonal basis, since v1 and v3 are not orthogonal, even though v1 and v2 are
orthogonal
...
Gram-Schmidt Process to Covert B to the Orthogonal Basis {w1 , w2 , w3 }
...
Solutions to Exercises
1
...
projv u =
1/2
1/2
−3/2
3/2
b
...
a
...
7/5
−14/5
b
...
146
Chapter 6 Inner Product Spaces
−3/5
−6/5
3
...
projv u =
b
...
a
...
Since
u − projv u =
8/5
−4/5
u − projv u =
,
v · (u − projv u) =
·
8/5
−4/5
v · (u − projv u) =
= 0,
so the dot product is 0 and hence, the two vectors
are orthogonal
...
a
...
Since
4/3
⎡
⎤
1/3
u − projv u = ⎣ 5/3 ⎦ ,
−4/3
we have that
⎤ ⎡
⎤
1
1/3
v · (u − projv u) = ⎣ −1 ⎦ · ⎣ 5/3 ⎦ = 0
−1
−4/3
⎡
⎤
0
7
...
projv u = ⎣ 0 ⎦ b
...
⎡
⎤
3
6
...
projv u = 1 ⎣ 2 ⎦
7
−1
b
...
⎡
⎤
1
8
...
projv u = 3 ⎣ 0 ⎦ b
...
5
12
0
0
⎡
⎤
3
1⎣
4 ⎦,
u − projv u =
2
3
we have that
9
...
projq p = 5 x −
4
·
⎡
so the dot product is 0 and hence, the two vectors
are orthogonal
...
b
...
dx = 0
6
...
a
...
Since
p − projq p = x2 − x + 1,
we have that
1
q, p − projq p =
0
(2x − 1) x2 − x + 1 dx = 0
so the dot product is 0 and hence, the two vectors are orthogonal
...
a
...
Since
12
...
projq p = − 2 x b
...
3
x − x + 1 dx = 0
2
so the dot product is 0, and hence, the two vectors
are orthogonal
...
Let B = {v1 , v2 } and denote the orthogonal basis by {w1 , w2 }
...
To obtain an orthonormal basis, divide each vector in the orthogonal basis by their norm
...
−1
−1
14
...
15
...
Then
⎡
⎤
⎡
⎤
⎡
⎤
1
1
1
w1 = v1 = ⎣ 0 ⎦ , w2 = v2 − projw1 v2 = ⎣ 2 ⎦ , w3 = v3 − projw1 v3 − projw2 v3 = ⎣ −1 ⎦
...
⎩
⎭
1
−1
−1
⎧⎡
⎤ ⎡
⎤ ⎡
1
1/2
⎨
16
...
√
√
√
⎪
⎪
⎩ − 2
⎭
6
3
2
6
3
√
√ 2
√
√
17
...
30(x2 − x), 8 5 x2 − 3 x
6
2
2
norm
...
An orthonormal basis for
span(W ) is
⎧
⎡
⎤
⎡
⎤⎫
1
2
⎨ 1
⎬
1
√ ⎣ 1 ⎦ , √ ⎣ −1 ⎦
...
An orthonormal basis for
span(W ) is
⎧
⎡
⎤
⎡
⎤
⎡
−1
−2
1
⎪
⎪
⎨ 1 ⎢
−2 ⎥ 1 ⎢ 1 ⎥ 1 ⎢ 0
⎥, √ ⎢
⎥, √ ⎢
√ ⎢
⎪ 6⎣ 0 ⎦
6 ⎣ −1 ⎦
6 ⎣ −2
⎪
⎩
1
0
1
23
...
An orthonormal basis for
span(W ) is
⎧
⎡
⎤
⎡
⎤⎫
√
−1 ⎬
⎨ √2 0
⎣ 1 ⎦ , 3 ⎣ −1 ⎦
...
⎦⎪
⎪
⎭
√
3x, − 3x + 2
...
An orthonormal basis for
span(W ) is
⎧
⎡
⎤
⎡
⎤
⎡
1
2
⎪√
√
√
⎪
⎨ 5⎢
⎥
⎢
⎥
⎢
⎢ −2 ⎥ , 15 ⎢ 1 ⎥ , − 30 ⎢
⎣ 0 ⎦ 15 ⎣ 1 ⎦
⎪ 5
30 ⎣
⎪
⎩
0
−1
24
...
0 ⎦⎪
⎪
⎭
5
5
9
1
1, 12x − 6, − x3 + x −
2
4
2
and an orthonormal basis is
√
1
4 7
5
9
1
1, √ (12x − 6),
− x3 + x −
3
2
4
2
12
25
...
√
, √3 ⎢
⎣ −1 ⎦⎪
⎪ 3⎣ 1 ⎦
⎪
⎪
⎩
⎭
1
1
...
⎡An ⎤orthonormal ⎤basis for span(W ) is
⎧
⎫
⎡
2
1
⎨
⎬
1
√ ⎣ 0 ⎦ , √1 ⎣ 5 ⎦
...
Let v be a vector in V and B = {u1 , u2 ,
...
Then there exist scalars
c1 , c2 ,
...
Then
||v||2 = v · v = c2 (u1 · u1 ) + c2 (u2 · u2 ) + · · · + c2 (un · un )
...
Hence,
||v||2 = c2 + c2 + · · · + c2 = |v · u1 |2 + · · · + |v · un |2
...
To show the three statements are equivalent we will show that (a)⇒(b), (b)⇒(c) and (c)⇒(a)
...
Since AAt = I, then the row vectors are orthonormal
...
• (b)⇒(c): Suppose the row vectors of A are orthonormal
...
• (c)⇒(a): Suppose the column vectors are orthonormal
...
6
...
Let
149
⎡
⎢
⎢
A=⎢
⎣
a11
a21
...
...
...
...
...
...
...
an1
an2
...
So that
a11
a12
...
...
...
...
...
...
ann
n
k=1
aki akj =
an1
an2
...
...
⎦
0 if i ̸= j
1 if i = j
...
Conversely, if At A = I, then A has orthonormal columns
...
We have x · (Ay) = xt Ay = (xt A)y = (At x)t y = (At x) · y
...
Recall that ||Ax|| = Ax · Ax
...
By Exercise 30, we have that
Ax · Ax = (At Ax) · x = xt · (At Ax) = x · x
...
32
...
33
...
Then Ax · Ay = 0 if and only if x · y = 0
...
Since a vector ⎢
⎣ z ⎦ is orthogonal to both ⎣ −1 ⎦ and ⎣ −1 ⎦ if and only if 2x + 3y − z + 2w = 0
w
1
2
1 0 −1 1
1 0 −1 1
and
reduces →
− − − to 0 1 1/3 0 , then all vectors that are orthogonal to both vectors have
2 3 −1 2 − − − −
⎧⎡
⎡
⎤
⎤ ⎡
⎤⎫
s−t
1
−1 ⎪
⎪
⎪
⎪
⎨⎢
⎢ −1s ⎥
⎥ ⎢
⎥⎬
⎢ 3 ⎥ , s, t ∈ R
...
the form ⎣
s ⎦
⎪⎣ 1 ⎦ ⎣ 0 ⎦⎪
⎪
⎪
⎩
⎭
t
0
1
35
...
m}
...
, m
...
, n
...
36
...
• By definition of ⟨·, ·⟩ , we have ⟨u, u⟩ > 0 for all nonzero vectors and if u = 0, then ⟨u, u⟩ = 0
...
• ⟨u + v, w⟩ = (u + v)t Aw = (ut + vt )Aw = ut Aw + vt Aw = ⟨u, w⟩ + ⟨v, w⟩
• ⟨cu, v⟩ = (cu)t Av = cut Av = c ⟨u, v⟩
150
Chapter 6 Inner Product Spaces
37
...
38
...
Then ei t Aei = ei t A1 = aii > 0 since A is positive definite
...
, n, then the diagonal entries are all positive
...
Since, when defined, (BC)t = C t B t , we have that
xt At Ax = (Ax)t Ax = (Ax) · (Ax) = ||Ax||2 ≥ 0,
so At A is positive semi-definite
...
We will show that the contrapositive statement holds
...
Then there is a
nonzero vector x such that Ax = 0 and hence xt Ax = 0
...
41
...
Now multiply both
sides by xt to obtain
xt Ax = xt (λx) = λ(xt x) = λ(x · x) = λ||x||2
...
42
...
Since
−2
−1
·
2
−4
= 0, the vectors are orthogonal
...
det
c
...
−2 −1
2 −4
= 100
y
5
5
25
x
25
d
...
Let A =
e
...
The area of the rectangle spanned
by the two vectors is
det(A) = |ad − bc| =
(ad − bc)2 =
a2 d2 − 2abcd + b2 c2
=
a2 d2 + 2a2 c2 + b2 c2
=
a2 d2 + a2 c2 + a2 c2 + b2 c2 =
(since ac = −bd)
= a2 (c2 + d2 ) + b2 (c2 + d2 ) =
= ||v1 ||||v2 ||
...
The area of a parallelogram is the height times the base and is equal to the are of the rectangle shown
in the figure of the exercise
...
a
c
Since p = kv1 for some scalar k, if v1 =
, and v2 =
, then the area of the parallelogram is
b
d
a
b
det
...
Therefore, the area of the parallelogram is | det(A)|
...
The volume of a box in R3 spanned by the vectors v1 , v2 , and v3 (mutually orthogonal) is ||v1 ||||v2 ||||v3 ||
...
4 Orthogonal Complements
151
Let A be the matrix with row vectors v1 , v2 and v3 , respectively
...
AAt = ⎣
0
||v2 ||2
0
2
0
0
||v3 ||
Therefore, the volume of the box is
(det(A))2 = | det(A)|
...
4
The orthogonality of two vectors is extended to a vector being orthogonal to a subspace of an inner product
space
...
For example, the
normal vector of a plane through the origin in R3 is orthogonal to every vector in the plane
...
If
⎧⎡
⎫ ⎧ ⎡
⎫
⎧⎡
⎤
⎤
⎡
⎤
⎤ ⎡
⎤⎫
1
−1
−1 ⎬
⎨ x
⎬ ⎨
⎬
⎨ 1
W = ⎣ y ⎦ x − 2y + z = 0 = y ⎣ 2 ⎦ + z ⎣ 0 ⎦ y, z ∈ R = span ⎣ 2 ⎦ , ⎣ 0 ⎦ ,
⎩
⎭ ⎩
⎭
⎩
⎭
z
0
1
0
1
⎡
⎤
⎡
⎤
1
−1
then since the two vectors ⎣ 2 ⎦ and ⎣ 0 ⎦ are linearly independent W is a plane through the origin in
0
1
⎡
⎤
a
R3
...
⎩
⎭
1
⎤
1
So the orthogonal complement is the line in the direction of the vector (normal vector) ⎣ −1/2 ⎦
...
Other results to remember when solving the exercises are:
• W ⊥ is a subspace
• W ∩ W ⊥ = {0}
• (W ⊥ )⊥ = W
• dim(V ) = dim(W ) ⊕ dim(W ⊥ )
⎡
152
Chapter 6 Inner Product Spaces
Solutions to Exercises
1
...
W ⊥ =
=0
x − 2y = 0
=
y = 1x
2
x
y
x
y
x
y
·
1
0
=0
x=0
0
1
So the orthogonal complement is the y-axis
...
3
...
4
...
⎤
⎡
⎤
2
1
5
...
That
−1
0
⎡
⎤
x
is, the set of all vectors ⎣ y ⎦ satisfying
z
⎡
⎤ ⎡
⎤
x
2
⎣ y ⎦ · ⎣ 1 ⎦ = 0 and
z
−1
⎡
⎤ ⎡
⎤
x
1
⎣ y ⎦·⎣ 2 ⎦=0⇔
z
0
⎡
2x + y − z
x + 2y
=0
...
Thus, the orthogonal complement is a line in three space
...
A vector ⎣ y ⎦ is in W ⊥ if and only if
, that is, x = − 2 z, y = −z
...
⎩
⎭
1
7
...
This
leads to the system of equations
6
...
6
3
2
=0
⎡ 1
⎤
−6z + 2w
3
⎢
⎥
−1z
2
⎥ , for all real numbers z and w, that is,
Hence a vector is in W ⊥ if it only if it has the form ⎢
⎣
⎦
z
w
⎧⎡
⎤ ⎡
⎤⎫
2/3 ⎪
⎪ −1/6
⎪
⎪
⎨⎢
⎬
−1/2 ⎥ ⎢ −1 ⎥
⎥,⎢
⎥
...
A vector ⎢
+ z + w = 0 , that is, x = y = z = − 2 w So W ⊥ =
⎣ z ⎦
⎪
⎩
y + z + w = 0,
w⎫
⎧⎡
⎤
⎪ −1/2 ⎪
⎪
⎪
⎨⎢
⎥⎬
⎢ −1/2 ⎥ , which is a line
...
⎣ −1/3 ⎦
10
...
⎣
12
...
A polynomial p(x) = ax2 + bx + c is in W ⊥ if and only if ⟨p(x), x − 1⟩ = 0 and p(x), x2 = 0
...
14
...
5 4 3
= − 52 , c ∈ R, then a basis for the orthogonal
9
+1
...
The set W consists of all vectors w = ⎢ 2 ⎥ such that w4 = −w1 − w2 − w3 ,
⎣ w3 ⎦
w4
⎧⎡
⎫ ⎧ ⎡
⎤
⎤
⎡
⎤
⎡
s
1
0
0
⎪
⎪ ⎪
⎪
⎪ ⎪
⎨⎢
⎬ ⎨ ⎢
⎥
⎢ 1 ⎥
⎢ 0
t
0 ⎥
⎥ s, t, u ∈ R = s ⎢
⎥ + t⎢
⎥
⎢
W = ⎢
⎦
⎣ 0 ⎦+u⎣ 1
u
⎪⎣
⎪ ⎪ ⎣ 0 ⎦
⎪
⎪ ⎪
⎩
⎭ ⎩
−s − t − u
−1
−1
−1
that is
⎤
⎫
⎪
⎪
⎬
⎥
⎥ s, t, u ∈ R
...
that x − w = 0, y − w = 0, z − w = 0, z ∈ R
...
The two vectors that span⎧⎡ are linearly independent but are not orthogonal
...
Then
⎩
⎭
−1
3/2
⎡
⎤ ⎡
⎤
1
1
⎣ −2 ⎦ · ⎣ 0 ⎦ ⎡
⎤
1
2
−1
⎤ ⎡
⎤⎣ 0 ⎦+
projW v = ⎡
1
1
−1
⎣ 0 ⎦·⎣ 0 ⎦
−1
−1
⎡
⎤ ⎡
⎤
1
3/2
⎣ −2 ⎦ · ⎣ 1 ⎦ ⎡
⎤
⎡
⎤
3/2
2
2
3/2
1 ⎣
⎡
⎤ ⎡
⎤⎣ 1 ⎦=
5 ⎦
...
The two vectors that span W are linearly independent and orthogonal, so that an orthogonal basis for
⎧⎡
⎤ ⎡
⎤⎫
0
⎨ 2
⎬
W is B = ⎣ 0 ⎦ , ⎣ −1 ⎦
...
0
⎤
0
⎤
−1 ⎦ ⎡
0
0
⎤ ⎣ −1 ⎦
0
0
−1 ⎦
0
18
...
Using the Gram-Schmidt
⎧⎡
⎤ ⎡
⎤ but
3
2 ⎬
⎨
process an orthogonal basis for W is B = ⎣ −1 ⎦ , ⎣ 14 ⎦
...
11
11
2
2
8
1
8
1
⎣ 14 ⎦ · ⎣ 14 ⎦
8
8
subspace W
...
4 Orthogonal Complements
155
19
...
Using the Gram-Schmidt
⎡ ⎤
⎡
⎤
1
−13
precess an orthogonal basis for W consists of the two vectors ⎣ 2 ⎦ and 1 ⎣ 4 ⎦
...
Then
1
5
⎡
⎤ ⎡
⎤
1
1
⎣ 2 ⎦ · ⎣ −3 ⎦ ⎡
⎤
1
1
5
⎤ ⎣ 2 ⎦+
projW v = ⎡ ⎤ ⎡
1
1
1
⎣ 2 ⎦·⎣ 2 ⎦
1
1
⎡
⎤ ⎡
⎤
−13
1
⎣ 4 ⎦ · ⎣ −3 ⎦ ⎡
⎤ ⎡
⎤
−13
0
5
5
⎡
⎤ ⎡
⎤ ⎣ 4 ⎦ = ⎣ 0 ⎦
...
The three vectors that span W are linearly independent ⎡ not⎫
⎧⎡
⎤ ⎡ ⎤ but ⎤ orthogonal
...
Then
process an orthogonal basis for W is B = ⎣
⎪ −1 ⎦ ⎣ 2 ⎦ ⎣ 3 ⎦⎪
⎪
⎪
⎩
⎭
1
0
4
⎡
⎤
5
1 ⎢ −10 ⎥
⎢
⎥
...
The spanning vectors for W are linearly independent but⎫ not orthogonal
...
Then
precess an orthogonal basis for W is B = ⎣
⎪ −1 ⎦ ⎣ −3 ⎦⎪
⎪
⎪
⎩
⎭
2
6
⎡
⎤
−5
⎢
⎥
4 ⎢ 21 ⎥
projW v = 73 ⎣
...
a
...
1
5
3
6
·
−2
1
−2
1
b
...
u = v − projW v =
2
−1
1
5
e
...
a
...
projW v =
1
10
−3
1
c
...
Chapter 6 Inner Product Spaces
1
10
3
9
·
−3
1
=0
e
...
a
...
projW v = ⎣ 1 ⎦ c
...
The vector u is one of the spanning vectors of W ⊥
...
W⊥
v
W
⎡
⎤
⎡
⎤
projW v
1
−1
25
...
Using the Gram-Schmidt process an
⎧⎡ −1 ⎤ ⎡
⎧⎡
⎤⎫4
⎤⎫
1
0 ⎬
2
⎨
⎨
⎬
orthogonal basis for W is ⎣ 1 ⎦ , ⎣ 3 ⎦
...
W ⊥ = span ⎣ −1 ⎦
⎩
⎭
⎩
⎭
−1
3
1
⎡
⎤
⎡
⎤
2
4
b
...
u = v − projW v = 1 ⎣ −2 ⎦
3
3
1
2
⎡
⎤
2
d
...
1
W
projW v
then u is in W ⊥
...
If v is in V ⊥ , then ⟨v, u⟩ = 0 for every vector in V
...
On the
other hand, since V = {0} ⊕ {0}⊥, then {0}⊥ = V
...
⊥
27
...
Since W1 ⊆ W2 , then ⟨w, u⟩ = 0 for all u ∈ W1
...
6
...
a
...
Then
(f + cg)(−x) = f (−x) + cg(−x) = f (x) + cg(x) = (f + cg)(x),
so f + cg is in W and hence, W is a subspace
...
Suppose g(−x) = −g(x)
...
Let h(x) = f (x)g(x)
...
0
0
c
...
Since f ∈ W, then f (−x) = f (x) and since f ∈ W ⊥ , then f (x) = −f (x)
...
Hence, W ∩ W ⊥ = 0
...
If g(x) = 1 (f (x) + f (−x)), then
2
g(−x) = 1 (f (−x) + f (x)) = g(x)
...
2
2
d e
a b
29
...
Let A =
and B =
be a matrix in W
...
So A ∈ W ⊥ if and only if ad + bf + be + cg = 0 for all real numbers a, b, and c
...
b
...
Let T : R → R , be defined by T (v) = projW v =
c
...
5
1
2 1
1
3/5
4 2 1 4 2
20 10
4 2
1
= 25
= 1
...
Since T (e1 ) =
b
...
b−c
2
0
2
1
2
1
...
...
Let w0 be in W
...
Hence, w0 is orthogonal to every vector in W ⊥ , so w0 ∈ (W ⊥ )⊥
...
Now let w0 ∈ (W ⊥ )⊥
...
So ⟨v, w0 ⟩ = 0,
since w0 ∈ (W ⊥ )⊥ and v ∈ W ⊥ and ⟨v, w⟩ = 0, since w ∈ W and v ∈ W ⊥
...
Therefore, since V is an inner product space v = 0, so w0 = w ∈ W
...
Since both
containments hold, we have W = (W ⊥ )⊥
...
5
1
...
To find the least squares solution it is equivalent to solving the normal equation
⎡
⎤
4
x
At A
= At ⎣ 1 ⎦
...
5/2
Hence, the least squares solution is x =
...
Since the orthogonal projection of b onto W is Ax, we
0
⎡
⎤
⎡
⎤
5/2
3/2
have that w1 = Ax = ⎣ 5/2 ⎦ , and w2 = b − w1 = ⎣ −3/2 ⎦
...
a
...
That is,
y
1
⎡
⎤
⎡
⎤
2 2
−2
2 1 1 ⎣
x
2 1 1 ⎣
6 7
x
−3
1 2 ⎦
0 ⎦⇔
=
=
...
b
...
−3/5
8/5
3
...
⎥
⎥
⎥
⎥
⎥
⎦
a
...
The least squares solution is given by the solution
approximates a linear increasing trend
...
At A
= At b, which is equivalent to
b
35459516 17864
17864
9
m
b
=
34790257
17491
...
13148
3287
6
...
6. Let
A = (1955 1; 1960 1; 1965 1; 1970 1; 1975 1; 1980 1; 1985 1; 1990 1; 1995 1; 2000 1; 2005 1)
and b = (157, 141, 119, 104, 93, 87, 78, 70, 66, 62, 57).
a. The figure shows the scatter plot of the data, which approximates a linear decreasing trend. b. The least squares solution is given by the solution to the normal equation AᵗA(m, b) = Aᵗb. The solution to the system gives the line that best fits the data,
y = −(529/275)x + 19514/5.
c. Also shown in the figure is the best fit line found in part (b).
7. a. The figure shows the scatter plot of the data, which approximates a linear increasing trend. b. The line that best fits the data is y = 0.7162857143x − 137… c. Also shown in the figure is the best fit line found in part (b).
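As a check, the slope and intercept for the year data can be recomputed with NumPy's least squares routine (this sketch is mine, using exactly the data listed above):

```python
import numpy as np

years = np.arange(1955, 2010, 5)      # 1955, 1960, ..., 2005
y = np.array([157, 141, 119, 104, 93, 87, 78, 70, 66, 62, 57], float)

# Columns correspond to x and 1, matching the matrix A in the exercise.
A = np.column_stack([years, np.ones_like(years, dtype=float)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, c)    # m = -529/275 ~ -1.9236 and c = 19514/5 = 3902.8
```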
8. To use a least squares approach to finding the best fit parabola y = a + bx + cx² requires using a 3 × 3 matrix, where the columns correspond to 1, x, and x². So let
A = (1 0 0; 1 5 25; 1 10 100; 1 12 144; 1 15 225; 1 20 400; 1 25 625)
and let b be the vector of observed values. a. The figure shows the scatter plot of the data. b. The least squares solution is given by the solution to the normal equation AᵗA(a, b, c) = Aᵗb. c. Also shown in the figure is the best fit parabola found in part (b).
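A sketch of the parabola fit in NumPy. The x-values are those in the matrix A above; the y-values below are placeholders of mine, since the data vector b is not fully legible here:

```python
import numpy as np

x = np.array([0, 5, 10, 12, 15, 20, 25], dtype=float)
y = np.array([0.7, 2.5, 10.0, 16.0, 29.9, 57.0, 82.9])  # placeholder data

# Columns 1, x, x^2 as in the exercise; solve the normal equation for (a, b, c).
A = np.column_stack([np.ones_like(x), x, x**2])
a, b, c = np.linalg.solve(A.T @ A, A.T @ y)
print(f"best fit parabola: y = {a:.3f} + {b:.3f}x + {c:.4f}x^2")
```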
7. The Fourier polynomials are
p₂(x) = 2 sin x − sin 2x,
p₃(x) = 2 sin x − sin 2x + (2/3) sin 3x,
p₄(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x,
p₅(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x + (2/5) sin 5x.
8. a. The graph of the function f(x) = x, on the interval −π ≤ x ≤ π, and the Fourier approximations are shown in the figure.
9. a. The graph of the function f(x) = x², on the interval −π ≤ x ≤ π, and the Fourier approximations are shown in the figure; the first of these is p₂(x) = π²/3 − 4 cos x + cos 2x.
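The sine coefficients 2, −1, 2/3, −1/2, 2/5 of f(x) = x can be recovered numerically; a small sketch of mine, not part of the manual:

```python
import numpy as np

# b_n = (1/pi) * integral over [-pi, pi] of x*sin(nx) dx;
# the exact values are 2, -1, 2/3, -1/2, 2/5, matching p5 above.
x, dx = np.linspace(-np.pi, np.pi, 200001, retstep=True)
for n in range(1, 6):
    b_n = np.sum(x * np.sin(n * x)) * dx / np.pi
    print(n, round(b_n, 4))
```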
10. To solve the normal equation AᵗAx = Aᵗb, we have that
(QR)ᵗ(QR)x = (QR)ᵗb, so Rᵗ(QᵗQ)Rx = RᵗQᵗb.
Since the column vectors of Q are orthonormal, then QᵗQ = I, so
RᵗRx = RᵗQᵗb. Since R is invertible, so is Rᵗ, and hence Rx = Qᵗb.
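This reduction is exactly how least squares is solved stably in practice; a sketch with NumPy's QR (the example A and b are mine):

```python
import numpy as np

# Least squares via QR: with A = QR, the normal equation R^t R x = R^t Q^t b
# reduces to R x = Q^t b, as derived above.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0, 2.0])

Q, R = np.linalg.qr(A)             # reduced QR: Q is 3x2, R is 2x2
x = np.linalg.solve(R, Q.T @ b)    # solve Rx = Q^t b
print(x, np.linalg.lstsq(A, b, rcond=None)[0])   # same answer
```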
Exercise Set 6.6

An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. If A is an n × n real symmetric matrix, then the eigenvalues are all real numbers and the eigenvectors corresponding to distinct eigenvalues are orthogonal. Every real symmetric matrix is orthogonally diagonalizable: that is, there is an orthogonal matrix P, so that P⁻¹ = Pᵗ, and a diagonal matrix D such that A = PDP⁻¹ = PDPᵗ. If the symmetric matrix A has an eigenvalue of geometric multiplicity greater than 1, then the corresponding linearly independent eigenvectors that generate the eigenspace may not be orthogonal; in that case the Gram-Schmidt process is applied within each eigenspace. To orthogonally diagonalize a symmetric matrix:
• Since A is diagonalizable there are n linearly independent eigenvectors; normalize them to obtain orthonormal eigenvectors.
• Form the orthogonal matrix P with column vectors determined in the previous step.
• Form the diagonal matrix D: the eigenvalues are placed on the diagonal in the same order as the eigenvectors are used to define P.
• The matrix A has the factorization A = PDP⁻¹ = PDPᵗ.
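A sketch of this factorization with NumPy (numpy.linalg.eigh is designed for symmetric matrices and returns orthonormal eigenvectors; the sample matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # sample symmetric matrix

eigenvalues, P = np.linalg.eigh(A)         # columns of P: orthonormal eigenvectors
D = np.diag(eigenvalues)
print(np.allclose(P.T @ P, np.eye(2)))     # True: P is orthogonal, P^{-1} = P^t
print(np.allclose(A, P @ D @ P.T))         # True: A = P D P^t
```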
1. The eigenvalues are the solutions to the characteristic equation det(A − λI) = 0, that is, λ₁ = 3 and λ₂ = −1.
3. λ₁ = 2, λ₂ = −4.
5. Since the eigenvalues are λ₁ = −3 with eigenvector v₁ and λ₂ = 2 with eigenvector v₂, and v₁·v₂ = 0, the eigenvectors are orthogonal.
6. Since the eigenvalues are λ₁ = −1 with eigenvector v₁ = (−1, 1) and λ₂ = −5 with eigenvector v₂ = (1, 1), then v₁·v₂ = 0, so the eigenvectors are orthogonal.
7./8. In each case the eigenvalues are real and the eigenvectors corresponding to distinct eigenvalues are orthogonal.
9. There are three linearly independent eigenvectors, and the eigenspaces are V₃ = span{(1, 0, 1)} and V₋₁ = span{(−1, 0, 1), (0, 1, 0)}.
10. There are three one-dimensional eigenspaces, each spanned by one of three pairwise orthogonal eigenvectors.
11. The eigenspaces are V₋₁, V₋₃, and the eigenspace of the remaining eigenvalue, and the spanning eigenvectors are pairwise orthogonal.
12. One eigenspace is spanned by a single eigenvector, V₋₁ is spanned by two eigenvectors, and V₋₂ is spanned by a single eigenvector.
13. Since AᵗA = I, the inverse of A is Aᵗ, so the matrix A is orthogonal.
14. Since AᵗA ≠ I, then Aᵗ is not the inverse of A and hence, A is not orthogonal.
15. Since
AᵗA = (√2/2 √2/2 0; −√2/2 √2/2 0; 0 0 1)(√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1) = (1 0 0; 0 1 0; 0 0 1),
the inverse of A is Aᵗ, so the matrix A is orthogonal.
16. The eigenvalues of the matrix A are −1 and 7 with corresponding orthogonal eigenvectors (1/√2)(−1, 1) and (1/√2)(1, 1), respectively. With P formed from these eigenvectors, PᵗAP = D = (−1 0; 0 7).
18. An orthonormal pair of eigenvectors is (1/√2)(−1, 1) and (1/√2)(1, 1), so with P formed from these eigenvectors, PᵗAP = D = diag(…, 7).
19. If P = (1/√5)(2 1; 1 −2), then P⁻¹AP = D = (−4 0; 0 2).
20. With P formed from unit eigenvectors, P⁻¹AP = D = (2 0; 0 −3).
21. The eigenvectors are pairwise orthogonal, so let P be the matrix with column vectors the unit eigenvectors:
P = (−1/√3 1/√2 −1/√6; 1/√3 0 −2/√6; 1/√3 1/√2 1/√6).
Then PᵗAP = D = diag(1, 1, −2).
22. Similarly, P = (1/√2)(1 −1 0; 0 0 √2; 1 1 0) and D = diag(0, 0, −1).
23. Since (AB)ᵗ(AB) = BᵗAᵗAB = BᵗB = I, the inverse of AB is (AB)ᵗ, and hence AB is an orthogonal matrix.
24. Since A is orthogonal, Aᵗ = A⁻¹. Then
det(A) = det(Aᵗ) = det(A⁻¹) = 1/det(A),
so (det(A))² = 1 and hence, det(A) = ±1.
25. We need to show that the inverse of Aᵗ is (Aᵗ)ᵗ = A. Since A is orthogonal, AᵗA = AAᵗ = I, so Aᵗ is invertible with inverse A, and hence Aᵗ is orthogonal.
26. Since A is orthogonal, A⁻¹ = Aᵗ. Then
(A⁻¹)⁻¹ = (Aᵗ)⁻¹ = (A⁻¹)ᵗ
and hence A⁻¹ is orthogonal.
27. b. Since A is orthogonal, then
(a b; c d)(a b; c d)ᵗ = (a² + b² ac + bd; ac + bd c² + d²) = (1 0; 0 1),
that is, a² + b² = 1, ac + bd = 0, and c² + d² = 1. Since a² + b² = 1, then v₁ = (a, b) is a unit vector, so a = cos θ and b = sin θ for some angle θ. Now let v₂ = (c, d). There are two cases. Case 1: c = cos(θ + π/2) = −sin θ and d = sin(θ + π/2) = cos θ. Case 2: c = cos(θ − π/2) = sin θ and d = sin(θ − π/2) = −cos θ.
c. If det(A) = 1, then by part (b), T(v) = Av with A = (cos θ −sin θ; sin θ cos θ), so
T(v) = (cos θ −sin θ; sin θ cos θ)(x, y),
a rotation through the angle θ. If det(A) = −1, then by part (b), T(v) = A′v with
A′ = (cos θ sin θ; sin θ −cos θ) = (cos θ −sin θ; sin θ cos θ)(1 0; 0 −1).
Hence, in this case, T is a reflection through the x-axis followed by a rotation through the angle θ.
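A quick numerical check of the two cases (the angle and matrices below are my own choices):

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # det = +1: rotation
F = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])   # det = -1: reflection then rotation

for M in (R, F):
    # Both are orthogonal; only the determinant distinguishes the two cases.
    print(np.allclose(M.T @ M, np.eye(2)), round(np.linalg.det(M), 6))
```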
28. Suppose A and B are orthogonally similar, so B = PᵗAP, where P is an orthogonal matrix.
a. Suppose A is symmetric. Then Bᵗ = (PᵗAP)ᵗ = PᵗAᵗP = PᵗAP = B and hence, B is symmetric. Conversely, since B = PᵗAP = P⁻¹AP, then A = PBP⁻¹ = PBPᵗ, and the same argument shows that A is symmetric when B is.
b. Suppose A is orthogonal. Then B⁻¹ = (PᵗAP)⁻¹ = P⁻¹A⁻¹P = PᵗAᵗP = (PᵗAP)ᵗ = Bᵗ and hence, B is orthogonal. Since B = PᵗAP = P⁻¹AP, then A = PBP⁻¹, and similarly A is orthogonal when B is.
29. Suppose A is orthogonally diagonalizable, so there is an orthogonal matrix P and a diagonal matrix D with D = PᵗAP. Then
Dᵗ = (PᵗAP)ᵗ = PᵗAᵗP,
and since Dᵗ = D, we have PᵗAP = PᵗAᵗP. Then
P(PᵗAP)Pᵗ = P(PᵗAᵗP)Pᵗ,
so A = Aᵗ and A is symmetric.
30. Since D = PᵗAP = P⁻¹AP, then A = PDP⁻¹.
31. a. If v = (v₁, v₂, …, vₙ), then vᵗv = v₁² + v₂² + ⋯ + vₙ².
b. Consider the equation Av = λv. Taking transposes gives vᵗAᵗ = λvᵗ, and since A is skew symmetric this is equivalent to vᵗ(−A) = λvᵗ. Multiplying on the right by v gives −vᵗ(Av) = λvᵗv, that is, −λvᵗv = λvᵗv. Hence,
2λvᵗv = 0, so that by part (a),
2λ(v₁² + ⋯ + vₙ²) = 0.
Since an eigenvector v is nonzero, this forces λ = 0. Therefore, the only eigenvalue of A is λ = 0.
Exercise Set 6.7

1. Let x = (x, y), A = (27 −9; −9 3), and b = (1, 3). Then the quadratic equation is equivalent to xᵗAx + bᵗx = 0. The next step is to diagonalize the matrix A. The eigenvalues are 30 and 0, and unit orthogonal eigenvectors are
v₁ = (3√10/10, −√10/10) and v₂ = (√10/10, 3√10/10).
By interchanging the column vectors the resulting matrix is orthogonal and is a rotation, and the transformed equation is 30(y′)² + √10 x′ = 0.
2. Let x = (x, y), A = (2 −4; −4 8), and b = (2, 1). Then the quadratic equation is equivalent to xᵗAx + bᵗx = 0. The eigenvalues of A are 10 and 0 with eigenvectors (1, −2) and (2, 1); notice that the eigenvectors are orthogonal. Let
P = (1/√5)(1 2; −2 1) and D = (10 0; 0 0).
The transformed equation is 10(x′)² + √5 y′ = 0.
3. Let x = (x, y), A = (12 4; 4 12), and b = (0, 0). The matrix form of the quadratic equation is xᵗAx − 8 = 0. The eigenvalues of A are 16 and 8 with orthogonal eigenvectors (1, 1) and (−1, 1). If
D = (16 0; 0 8) and P = (√2/2 −√2/2; √2/2 √2/2),
then the transformed equation is 16(x′)² + 8(y′)² = 8, that is, 2(x′)² + (y′)² = 1.
4. The eigenvalues of A are 20 and 10 with eigenvectors (1, −3) and (3, 1), respectively. Orthonormal eigenvectors are v₁ = (1/√10)(1, −3) and v₂ = (1/√10)(3, 1). With the quadratic equation written as xᵗAx + bᵗx + f = 0, the transformed quadratic equation is
20(x′)² + 10(y′)² − ⋯ − 16 = 0,
which after completing the square has the form (x′)²/a² + (y′)²/b² = 1, an ellipse.
6. b. Then P(10 −6; −6 10)Pᵗ = (4 0; 0 16), so the equation that describes the original conic rotated 45° is
(x y)(10 −6; −6 10)(x, y) − 16 = 0, that is, 10x² − 12xy + 10y² − 16 = 0.
8. (x y)(1 0; 0 −1)(x, y) − 1 = 0. The action of the matrix
P = (cos(−π/6) −sin(−π/6); sin(−π/6) cos(−π/6))
on a vector is a clockwise rotation of 30°.
9. a. 7x² + 6√3 xy + 13y² − 16 = 0. b.
(3/4)(x − 2)² + (√3/2)(x − 2)(y − 1) + (1/4)(y − 1)² + (1/2)(x − 2) + (√3/2)(y − 1) = 0.
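The diagonalization behind these rotations is easy to verify numerically. The sketch below uses the conic 10x² − 12xy + 10y² − 16 = 0 from above; the test point is my own:

```python
import numpy as np

# Diagonalize the quadratic form 10x^2 - 12xy + 10y^2 - 16 = 0.
A = np.array([[10.0, -6.0], [-6.0, 10.0]])
eigenvalues, P = np.linalg.eigh(A)     # columns of P: orthonormal eigenvectors
print(eigenvalues)                     # [ 4. 16.]: rotated form 4x'^2 + 16y'^2 = 16

p = np.array([np.sqrt(2), np.sqrt(2)]) # a point on the original conic
print(p @ A @ p)                       # 16.0
q = P.T @ p                            # coordinates in the rotated frame
print(eigenvalues @ (q * q))           # also 16.0: same conic, no cross term
```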
Exercise Set 6.8

1. The singular values of the matrix are σ₁ = √λ₁ and σ₂ = √λ₂, where λ₁ and λ₂ are the eigenvalues of AᵗA.
2. Computing AᵗA, its eigenvalues are 8 and 4, so σ₁ = 2√2 and σ₂ = 2.
3./4. The singular values are obtained in the same way from the eigenvalues of AᵗA.
5. Step 1: The eigenvalues of AᵗA are λ₁ = 64 and λ₂ = 4, so the singular values are σ₁ = √64 = 8 and σ₂ = √4 = 2. Step 2: An orthonormal pair of eigenvectors is v₁ = (1/√2)(1, −1) and v₂ = (1/√2)(1, 1), which determine V. Step 3: The matrix U is defined by
U = ((1/σ₁)Av₁ (1/σ₂)Av₂) = (1/√2 1/√2; 1/√2 −1/√2).
6. Let V = (1 0; 0 1). Since the matrix Σ is the same size as A, we have that
Σ = (2√5 0; 0 √5).
Step 4: The SVD of A is A = UΣVᵗ.
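Steps 1 through 4 can be reproduced with numpy.linalg.svd; the matrix below is an arbitrary example of mine, not the exercise's:

```python
import numpy as np

A = np.array([[2.0, 2.0], [-1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
Sigma = np.diag(s)
print(s)                                 # singular values: [2*sqrt(2), sqrt(2)]
print(np.allclose(A, U @ Sigma @ Vt))    # True: A = U Sigma V^t
# Columns of V are eigenvectors of A^t A with eigenvalues sigma_i^2:
print(np.allclose(A.T @ A @ Vt.T, Vt.T @ Sigma**2))
```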
7. The SVD of A is A = (0 1; 1 0)ΣVᵗ, with the factors found as in the previous exercise.
8. a. If x = (x₁, x₂), then the solution to the linear system Ax = b is x₁ = 1, x₂ = 1. b. The condition number of the matrix A is σ₁/σ₂. Notice that a small change in the vector b in the linear system Ax = b results in a significant difference in the solutions.
9. The singular values of the matrix A are approximately σ₁ ≈ 3.…, σ₂ ≈ ….3, and σ₃ ≈ 0.… × 10⁻⁴. The condition number of the matrix A is σ₁/σ₃. Notice that a small change in the entries of the matrix A results in only a small change in the solution.
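A sketch of the same sensitivity experiment (the matrix and perturbation are chosen by me to make the effect visible):

```python
import numpy as np

# The condition number sigma_1 / sigma_n measures the sensitivity of Ax = b.
A = np.array([[1.0, 1.0], [1.0, 1.0001]])      # nearly singular example
s = np.linalg.svd(A, compute_uv=False)
print(s[0] / s[-1], np.linalg.cond(A))         # both give the 2-norm condition number

b1 = np.array([2.0, 2.0])
b2 = np.array([2.0, 2.0002])                   # tiny change in b
print(np.linalg.solve(A, b1))                  # [2. 0.]
print(np.linalg.solve(A, b2))                  # [0. 2.]  -- drastically different
```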
Review Exercises Chapter 6

1. a. Since the matrix with the vectors of B as columns row reduces to the identity matrix, the only solution to the homogeneous linear system is the trivial solution, so B is linearly independent and hence, is a basis for R³. Notice that the vectors in B are not pairwise orthogonal. c. The Gram-Schmidt process yields the orthogonal vectors (1, 0, 1) and (1/2, 0, −1/2), which also span W. Then
projW v = ((v·w₁)/(w₁·w₁)) w₁ + ((v·w₂)/(w₂·w₂)) w₂.
2. a. The spanning vectors for W are not linearly independent, so they can be trimmed to a basis for the span: the matrix with these vectors as columns has pivots in columns one, two, and three, so the first three vectors form a basis for W. b. W⊥ = {tw | t ∈ R} = span{w} for a single vector w orthogonal to each basis vector. e. projW v is computed from an orthogonal basis for W as before.
3. If (x, y, z) ∈ W, then (x, y, z)·(a, b, c) = ax + by + cz = 0, so (a, b, c) is in W⊥. The orthogonal complement of W is the set of all vectors that are orthogonal to everything in W, that is, W⊥ = span{(a, b, c)}. d. Then
projW⊥ v = ((ax₁ + bx₂ + cx₃)/(a² + b² + c²))(a, b, c),
so that
∥projW⊥ v∥ = |ax₁ + bx₂ + cx₃|/√(a² + b² + c²).
Note that this norm is the distance from the point (x₁, x₂, x₃) to the plane.
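This distance formula wraps naturally as a small function; a sketch (the names and sample inputs are mine):

```python
import numpy as np

def distance_to_plane(point, normal):
    """Distance from a point to the plane ax + by + cz = 0 with normal (a, b, c)."""
    point, normal = np.asarray(point, float), np.asarray(normal, float)
    # |a*x1 + b*x2 + c*x3| / sqrt(a^2 + b^2 + c^2), as derived above.
    return abs(point @ normal) / np.linalg.norm(normal)

print(distance_to_plane([1, 2, 3], [1, 1, 1]))   # 6/sqrt(3) = 2*sqrt(3)
```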
4. a. ⟨p, q⟩ = ∫₋₁¹ p(x)q(x) dx. b. For example,
∥p − q∥ = (∫₋₁¹ (x⁴ − 4x³ + 6x² − 4x + 1) dx)^{1/2} = (4/5)√10.
c. cos θ = ⟨p, q⟩/(∥p∥∥q∥).
5. b. For example, ∥1∥ = (∫₋π^π dx)^{1/2} = √(2π). c.
projW x² = ⟨x², 1/√(2π)⟩ (1/√(2π)) + ⟨x², (1/√π) cos x⟩ (1/√π) cos x + ⟨x², (1/√π) sin x⟩ (1/√π) sin x = π²/3 − 4 cos x,
and ∥projW x²∥ = (1/3)√(2π⁵ + 144π).
6. Since v = c₁v₁ + ⋯ + cₙvₙ and B is orthonormal, so that ⟨v, vᵢ⟩ = cᵢ, we have
projᵥᵢ v = (⟨v, vᵢ⟩/⟨vᵢ, vᵢ⟩) vᵢ = ⟨v, vᵢ⟩ vᵢ = cᵢvᵢ and [v]_B = (⟨v, v₁⟩, ⟨v, v₂⟩, …, ⟨v, vₙ⟩).
The coordinates are given by c₁ = ⟨v, v₁⟩ = 1, c₂ = ⟨v, v₂⟩ = …, and c₃ = ⟨v, v₃⟩ = ….
7. Let B = {v₁, …, vₙ} be an orthonormal basis and [v]_B = (c₁, …, cₙ), so there are scalars c₁, …, cₙ with v = c₁v₁ + ⋯ + cₙvₙ. Using the properties of an inner product and the fact that the vectors are orthonormal,
∥v∥ = √⟨v, v⟩ = √⟨c₁v₁ + ⋯ + cₙvₙ, c₁v₁ + ⋯ + cₙvₙ⟩ = √(c₁²⟨v₁, v₁⟩ + ⋯ + cₙ²⟨vₙ, vₙ⟩) = √(c₁² + ⋯ + cₙ²).
8. Let {v₁, …, vₘ} be an orthonormal set and let v be a vector in V. Then
0 ≤ ∥v − Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ∥² = ⟨v − Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ, v − Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ⟩
= ⟨v, v⟩ − 2⟨v, Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ⟩ + ⟨Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ, Σᵢ₌₁ᵐ ⟨v, vᵢ⟩vᵢ⟩
= ∥v∥² − 2Σᵢ₌₁ᵐ ⟨v, vᵢ⟩² + Σᵢ₌₁ᵐ ⟨v, vᵢ⟩²
= ∥v∥² − Σᵢ₌₁ᵐ ⟨v, vᵢ⟩²,
so
∥v∥² ≥ Σᵢ₌₁ᵐ ⟨v, vᵢ⟩².
9. a. … b. An orthogonal basis is
B₁ = {(1, 1, 1, 1), (1/2, −1/2, 1/2, −1/2), (−1, 0, 1, 0)}.
c. Normalizing the vectors of B₁ gives
Q = (1/2 1/2 −√2/2; 1/2 −1/2 0; 1/2 1/2 √2/2; 1/2 −1/2 0).
10. Consider
α₁(c₁v₁) + α₂(c₂v₂) + ⋯ + αₙ(cₙvₙ) = (α₁c₁)v₁ + (α₂c₂)v₂ + ⋯ + (αₙcₙ)vₙ = 0.
Since the vectors v₁, …, vₙ are linearly independent, αᵢcᵢ = 0 for each i, and since cᵢ ≠ 0 for i = 1, …, n, each αᵢ = 0, so B₁ is a basis. The basis B₁ is orthonormal if and only if
1 = ∥cᵢvᵢ∥ = |cᵢ|∥vᵢ∥ ⇔ |cᵢ| = 1/∥vᵢ∥ for all i.
Chapter Test Chapter 6

4. T. Every set of pairwise orthogonal vectors is also linearly independent.
7. T. ∥v₁∥ = √(2² + 1² + (−4)² + 3²) = √30.
9. cos θ = ⟨v₁, v₂⟩/(√24 √10).
10. F. ⟨v₁, v₂⟩ = −4 + 1 − 8 + 3 = −8 ≠ 0, so the vectors are not orthogonal.
15. If v₁ = (1, 0, 1) and v₂ = (−1, 1, 1), then W⊥ = span{(−1, −2, 1)}.
23. F. A basis for W⊥ is {x}.
24. T. The only eigenvalue of the n × n identity matrix is λ = 1.
29. ⟨2u, 2v + 2w⟩ = 4⟨u, v⟩ + 4⟨u, w⟩.
30. F. It is the line perpendicular to W, given by y = −(1/2)x.
36. F. If dim(W) = dim(W⊥), then the sum cannot be 5.
Exercise Set A.1

1. A ∪ B = {−4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
4. A ∪ B = {…, −6, −5, 11, 12, 13, …}
6. A ∩ B = [0, 3]
8. A\B = {−4, 0, 1, 3, 5, 7}
10. A\B = (−11, 0)
11. (A ∪ B)ᶜ ∩ C = (8, ∞)
14. (A ∪ B)\C = (−11, −9)
20. (A ∩ B) ∩ C = {5} = A ∩ (B ∩ C)
22. A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) = {1, 2, 3, 5, 7, 9, 11, 14}
23. A\(B ∪ C) = (A\B) ∩ (A\C) = {3, 9, 11}
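Python's built-in set type mirrors these operations directly; a quick check of the identities above, with sample sets of my own choosing:

```python
# Verifying the set identities used in these exercises.
A = {1, 2, 3, 5, 7, 9, 11}
B = {2, 3, 5, 14}
C = {1, 5, 9, 11, 14}

print(A | (B & C) == (A | B) & (A | C))   # distributive law
print(A - (B | C) == (A - B) & (A - C))   # difference over a union
print((A & B) & C == A & (B & C))         # associativity of intersection
```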
26. Let x ∈ (Aᶜ)ᶜ. Then x ∉ Aᶜ, so x ∈ A and hence (Aᶜ)ᶜ ⊆ A. If x ∈ A, then x is not in Aᶜ, that is, x ∈ (Aᶜ)ᶜ, so A ⊆ (Aᶜ)ᶜ. Therefore A = (Aᶜ)ᶜ.
28. …
29. Let x ∈ A ∩ B. Then x ∈ A and x ∈ B, so x ∈ B and x ∈ A, that is, x ∈ B ∩ A. Similarly, we can show that if x ∈ B ∩ A, then x ∈ A ∩ B. Hence A ∩ B = B ∩ A.
30. An element x ∈ A ∪ B if and only if
x ∈ A or x ∈ B ⇔ x ∈ B or x ∈ A ⇔ x ∈ B ∪ A,
and hence, A ∪ B = B ∪ A.
31. Let x ∈ (A ∩ B) ∩ C. Then x ∈ A ∩ B and x ∈ C, so x ∈ A and (x ∈ B and x ∈ C), and hence, (A ∩ B) ∩ C ⊆ A ∩ (B ∩ C). The reverse containment is similar.
32. …
33. Let x ∈ A ∪ (B ∩ C). Then x ∈ A or x ∈ (B ∩ C), so x ∈ A or (x ∈ B and x ∈ C). Therefore, x ∈ (A ∪ B) ∩ (A ∪ C), so we have that A ∪ (B ∩ C) ⊆ (A ∪ B) ∩ (A ∪ C). The reverse containment is similar.
34. Let x ∈ A\(B ∩ C). Then x ∈ A and x ∉ B ∩ C, so x ∈ A and (x ∉ B or x ∉ C), and hence, x ∈ (A\B) or x ∈ (A\C). Similarly, we can show that (A\B) ∪ (A\C) ⊆ A\(B ∩ C).
35. Let x ∈ A\B. Then x ∈ A and x ∉ B, so x ∈ A and x ∈ Bᶜ. Hence, A\B ⊆ A ∩ Bᶜ. Reversing the steps shows A ∩ Bᶜ ⊆ A\B.
37. We have that (A ∪ B) ∩ Aᶜ = (A ∩ Aᶜ) ∪ (B ∩ Aᶜ) = φ ∪ (B ∩ Aᶜ) = B\A.
38. Let x ∈ (A ∪ B)\(A ∩ B). Since an element can not be both in a set and not in a set, we have that (x ∈ A and x ∉ B) or (x ∈ B and x ∉ A), so (A ∪ B)\(A ∩ B) ⊆ (A\B) ∪ (B\A).
39. Let (x, y) ∈ A × (B ∩ C). Then x ∈ A and (y ∈ B and y ∈ C). Therefore, A × (B ∩ C) ⊆ (A × B) ∩ (A × C).
40. …
Exercise Set A.2

1. Since for each first coordinate there is a unique second coordinate, f is a function. Since f(1) = −2 = f(4), then f is not one-to-one.
3. The range of f is the set {−2, −1, 3, 9, 11}.
5. f(A) = {−2, 3}.
7. For example, {(1, −2), (2, −1), (3, 3), (4, 5), (5, 9), (6, 11)}.
9. Since Y contains more elements than X, it is not possible. If g : Y → X is defined by
{(−2, 1), (−1, 2), (3, 3), (5, 4), (9, 5), (11, 6), (14, 6)},
then g is onto.
10. The inverse image is the set of all numbers that are mapped to −2 by the function f, that is, f⁻¹({−2}) = {1, 4}, and f⁻¹(f({1})) = f⁻¹({−2}) = {1, 4}.
11. f⁻¹(C ∪ D) = f⁻¹([1, ∞)) = (−∞, −1] ∪ [1, ∞) = f⁻¹(C) ∪ f⁻¹(D).
12. Since f(A ∪ B) = f((−3, 7)) = [0, 49) and f(A) ∪ f(B) = [0, 25] ∪ [0, 49) = [0, 49), the two sets are equal.
14. Since f(A ∩ B) and f(A) ∩ f(B) = [0, 4] ∩ [0, 4] = [0, 4] agree, the two sets are equal. The function g is one-to-one but f is not one-to-one.
15. Solving y = ax + b for x gives x = (y − b)/a.
16. Suppose x₁ < x₂. Then x₁⁵ < x₂⁵ and 2x₁ < 2x₂, so x₁⁵ + 2x₁ < x₂⁵ + 2x₂ and hence, f is one-to-one.
17. If n is odd, then f⁽ⁿ⁾(x) = −x + c, and if n is even, then f⁽ⁿ⁾(x) = x. The figure shows the graphs of y = f(x) and y = (f ∘ f)(x).
19. To show that f is one-to-one, we have that
e^(2x₁−1) = e^(2x₂−1) ⇔ 2x₁ − 1 = 2x₂ − 1 ⇔ x₁ = x₂.
Since the exponential function is always positive, f is not onto R. Define g : R → (0, ∞) by g(x) = e^(2x−1). Let y = e^(2x−1); solving for x gives x = (1 + ln y)/2.
21. To show that f is one-to-one, we have that 2n₁ = 2n₂ if and only if n₁ = n₂. Since every image is an even number, the range of f is a proper subset of N, and hence, the function f is not onto. Since every natural number is mapped to an even natural number, we have that f⁻¹(E) = N and f⁻¹(O) = φ.
22. f(E) = O, f(O) = E.
23. a. Let p and q be odd numbers, so there are integers m and n such that p = 2m + 1 and q = 2n + 1. Hence,
f(A) = {2k + 1 | k ∈ Z}.
b. f(B) = {2k + 1 | k ∈ Z}. d. f⁻¹(O) = {(m, n) | n is odd}.
24. To show f is one-to-one, we have
f((x₁, y₁)) = f((x₂, y₂)) ⇔ (2x₁, 2x₁ + 3y₁) = (2x₂, 2x₂ + 3y₂)
⇔ 2x₁ = 2x₂ and 2x₁ + 3y₁ = 2x₂ + 3y₂
⇔ x₁ = x₂ and y₁ = y₂
⇔ (x₁, y₁) = (x₂, y₂).
Suppose (a, b) ∈ R². First solve 2x = a, so x = a/2. Next solve a + 3y = b ⇔ y = (b − a)/3. Hence f is onto.
25. Let y ∈ f(A ∪ B). Then there is some x ∈ A ∪ B such that f(x) = y; if x ∈ A then y ∈ f(A), and if x ∈ B then y ∈ f(B). Hence, f(A ∪ B) ⊆ f(A) ∪ f(B). Now let y ∈ f(A) ∪ f(B). So there exists x₁ ∈ A or x₂ ∈ B such that f(x₁) = y or f(x₂) = y. Therefore, f(A) ∪ f(B) ⊆ f(A ∪ B).
26. Let x ∈ f⁻¹(C ∪ D). Then f(x) ∈ C or f(x) ∈ D. Hence, x ∈ f⁻¹(C) or x ∈ f⁻¹(D), and we have shown f⁻¹(C ∪ D) ⊆ f⁻¹(C) ∪ f⁻¹(D). The reverse containment is similar.
27. Let y ∈ f(f⁻¹(C)). So there is some x ∈ f⁻¹(C) such that f(x) = y, and hence, y ∈ C.
28. Let c ∈ C. Since f is a surjection, there is some b ∈ B such that f(b) = c, and since g is a surjection there is some a ∈ A such that g(a) = b. Then (f ∘ g)(a) = f(g(a)) = f(b) = c, so that f ∘ g is a surjection.
29. Suppose (f ∘ g)(a₁) = (f ∘ g)(a₂), that is, f(g(a₁)) = f(g(a₂)). Since f is one-to-one, g(a₁) = g(a₂); now since g is one-to-one, then a₁ = a₂ and hence, f ∘ g is one-to-one.
30. Suppose g(a₁) = g(a₂). Then f(g(a₁)) = f(g(a₂)), and since f ∘ g is an injection, then a₁ = a₂ and hence, g is an injection.
31. Let y ∈ f(A)\f(B). So there is some x ∈ A, but which is not in B, with y = f(x). Hence y ∈ f(A\B), so f(A)\f(B) ⊆ f(A\B).
32. Let x ∈ f⁻¹(C\D). Then f(x) ∈ C and f(x) ∉ D, so x ∈ f⁻¹(C) and x ∉ f⁻¹(D). Therefore, f⁻¹(C\D) ⊆ f⁻¹(C)\f⁻¹(D).
Exercise Set A.3

1. If the side is x, then by the Pythagorean Theorem the hypotenuse is given by h² = x² + x² = 2x², so h = √2 x.
2. If the side is x, then the height is h = (√3/2)x and the area is A = (1/2)x·(√3/2)x = (√3/4)x².
3. Since a = c/√2, the area is A = (1/2)bh = (1/2)a² = c²/4.
4. Let s = p/q and t = u/v; then s/t = pv/(qu), and hence, s/t is a rational number.
5. If a divides b, there is some k such that ak = b, and if b divides c, there is some ℓ such that bℓ = c. Then c = bℓ = a(kℓ), so a divides c.
7. Let n = 2k + 1. Then n² = (2k + 1)² = 2(2k² + 2k) + 1, so n² is odd.
8. Since n² + n + 3 = n(n + 1) + 3 and the product of two consecutive natural numbers is an even number, then n² + n + 3 is odd.
9. If b = a + 1, then (a + b)² = (2a + 1)² = 2(2a² + 2a) + 1, so (a + b)² is odd.
10. If m = 2k + 1 and n = 2ℓ + 1, then mn = (2k + 1)(2ℓ + 1) = 2(2kℓ + k + ℓ) + 1 and hence mn is odd.
11. f(x) ≤ g(x) ⇔ x² − 2x + 1 ≤ x + 1 ⇔ x(x − 3) ≤ 0 ⇔ 0 ≤ x ≤ 3.
12. Let m = 2 and n = 3.
13. This implies n² = 2k + 1 for some integer k, but taking square roots does not lead to the conclusion. Instead, suppose n is even, so there is some k such that n = 2k; then n² = 4k² is even, which proves the contrapositive.
14. Let n = 2k + 1. Then
(2k + 1)³ = 8k³ + 12k² + 6k + 1 = 2(4k³ + 6k² + 3k) + 1
and hence n³ is odd.
15. Since p and q are positive, √(pq) = √(p²) = p = (p + q)/2.
16. By the Quadratic Formula, the solutions to n² + n − c = 0 are
n = (−1 ± √(4c + 1))/2 = (−1 ± √(8k + 5))/2,
which is not an integer.
17. Using the contrapositive argument we suppose x > 0.
18. To prove the contrapositive statement suppose that y is rational with y = p/q.
19. Then 2q³ = p³, so p³ is even and hence p is even.
20. If n = 1, then since 1/2 > 1/3, the result holds for n ≥ 1.
21. If 7xy ≤ 3x² + 2y², then 3x² − 7xy + 2y² = (3x − y)(x − 2y) ≥ 0. The first case is not possible since the assumption is that x < 2y.
22. Let A = [−1, 1] and B = [0, 1].
23. Let C = [−4, 4] and D = [0, 4].
24. Let y ∈ f(A), so y = f(x) for some x ∈ A. Since A ⊆ B, then x ∈ B and hence, y ∈ f(B).
25. Let x ∈ f⁻¹(C). Since C ⊂ D, then f(x) ∈ D, so x ∈ f⁻¹(D).
26. We have already shown that f(A ∩ B) ⊆ f(A) ∩ f(B). Now let y ∈ f(A) ∩ f(B). Then there are x₁ ∈ A and x₂ ∈ B such that f(x₁) = y = f(x₂). Since f is one-to-one, x₁ = x₂ ∈ A ∩ B, so y ∈ f(A ∩ B).
27. Let y ∈ f(A\B), so y = f(x) with x ∈ A and x ∉ B. This part of the proof does not require that f be one-to-one. Since f is one-to-one, y ∉ f(B), so y ∈ f(A)\f(B) and hence, f(A\B) ⊂ f(A)\f(B). Now let y ∈ f(A)\f(B). So there is some x ∈ A, with x ∉ B, such that y = f(x). Therefore, f(A)\f(B) ⊂ f(A\B).
28. In Theorem 3 of Section A.2, we showed A ⊆ f⁻¹(f(A)). Now let x ∈ f⁻¹(f(A)), so y = f(x) ∈ f(A). Then f(x) = f(x₁) for some x₁ ∈ A, and since f is one-to-one, we have x = x₁ ∈ A.
29. By Theorem 3 of Section A.2, f(f⁻¹(C)) ⊂ C. Now let y ∈ C. Since f is onto, there is some x such that y = f(x); then x ∈ f⁻¹(C), so y ∈ f(f⁻¹(C)). Therefore, C ⊂ f(f⁻¹(C)).
Exercise Set A.4

1. For the base case n = 1, the left hand side of the summation is 1 and the right hand side is 1(2)(3)/6 = 1, so the base case holds. For the inductive hypothesis assume the summation formula
1² + 2² + 3² + ⋯ + n² = n(n + 1)(2n + 1)/6
holds for the natural number n. Then
1² + 2² + 3² + ⋯ + n² + (n + 1)² = n(n + 1)(2n + 1)/6 + (n + 1)² = (n + 1)(n + 2)(2n + 3)/6.
Hence, the summation formula holds for all natural numbers n.
2. • Base case: n = 1: 1³ = 1²(2)²/4. • Inductive hypothesis: Assume 1³ + 2³ + ⋯ + n³ = n²(n + 1)²/4. • Inductive step:
1³ + 2³ + ⋯ + n³ + (n + 1)³ = n²(n + 1)²/4 + (n + 1)³ = (n + 1)²(n + 2)²/4.
3. For the base case n = 1, we have that the left hand side of the summation is 1 and the right hand side is 1(3 − 1)/2 = 1, so the base case holds. Next consider
n(3n − 1)/2 + (3n + 1) = (3n² + 5n + 2)/2 = (n + 1)(3n + 2)/2.
4. Consider
3 + 11 + 19 + ⋯ + (8n − 5) + (8(n + 1) − 5) = 4n² − n + 8n + 3 = 4n² + 7n + 3 = (4n² + 8n + 4) − 4 − n + 3 = 4(n + 1)² − (n + 1).
5. For the base case n = 1, we have that the left hand side of the summation is 2 and the right hand side is 1(4)/2 = 2, so the base case holds. Next consider
(1/2)(3n² + 7n + 4) = (n + 1)(3n + 4)/2 = (n + 1)(3(n + 1) + 1)/2.
6. Consider
3 + 7 + 11 + ⋯ + (4n − 1) + (4(n + 1) − 1) = n(2n + 1) + 4n + 3 = 2n² + 5n + 3 = (2n + 3)(n + 1) = (n + 1)(2(n + 1) + 1).
7. For the base case n = 1, we have that the left hand side of the summation is 3 and the right hand side is 3(2)/2 = 3, so the base case holds. Next consider
3 + 6 + 9 + ⋯ + 3n + 3(n + 1) = (1/2)(3n² + 9n + 6) = (3/2)(n² + 3n + 2) = 3(n + 1)(n + 2)/2.
8. Consider
n(n + 1)(n + 2)/3 + (n + 1)(n + 2) = (n + 1)(n + 2)(n + 3)/3.
9. For the base case n = 1, we have that the left hand side of the summation is 2¹ = 2 and the right hand side is 2² − 2 = 2, so the base case holds. Next consider
Σₖ₌₁ⁿ⁺¹ 2ᵏ = Σₖ₌₁ⁿ 2ᵏ + 2ⁿ⁺¹ = 2ⁿ⁺¹ − 2 + 2ⁿ⁺¹ = 2ⁿ⁺² − 2.
11. The table
n:              1       2       3       4       5
2 + 4 + ⋯ + 2n: 2 = 1(2) 6 = 2(3) 12 = 3(4) 20 = 4(5) 30 = 5(6)
suggests the sum is 2 + 4 + 6 + ⋯ + (2n) = n(n + 1). For the inductive hypothesis assume the summation formula holds for the natural number n. Then
2 + 4 + ⋯ + 2n + 2(n + 1) = n(n + 1) + 2(n + 1) = (n + 1)(n + 2).
Hence, the summation formula holds for all natural numbers n.
12. Then
1 + 5 + 9 + ⋯ + (4n − 3) = 1 + (1 + 4) + (1 + 2·4) + ⋯ + (1 + (n − 1)·4) = n + 4(1 + 2 + 3 + ⋯ + (n − 1)) = n + 4·(n − 1)n/2 = n + 2(n − 1)n = 2n² − n.
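The closed forms established in these exercises are easy to spot-check by machine; a short Python verification (the ranges are my own choice):

```python
# Spot-checking the summation formulas proved by induction above.
for n in range(1, 50):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(k ** 3 for k in range(1, n + 1)) == n ** 2 * (n + 1) ** 2 // 4
    assert sum(4 * k - 3 for k in range(1, n + 1)) == 2 * n * n - n
    assert sum(2 ** k for k in range(1, n + 1)) == 2 ** (n + 1) - 2
print("all formulas verified")
```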
13. The base case n = 5 holds since 2⁵ = 32 > 25 = 5². Consider 2ⁿ⁺¹ = 2(2ⁿ), so that by the inductive hypothesis 2ⁿ⁺¹ = 2(2ⁿ) > 2n² > (n + 1)² for n ≥ 5.
14. Using the inductive hypothesis, (n + 1)² = n² + 2n + 1 > (2n + 1) + (2n + 1) = 4n + 2 > 2n + 3 = 2(n + 1) + 1.
15. The inductive hypothesis is that n² + n is divisible by 2. Since
(n + 1)² + (n + 1) = (n² + n) + 2(n + 1),
and by the inductive hypothesis n² + n is divisible by 2, both terms on the right are divisible by 2, so (n + 1)² + (n + 1) is divisible by 2.
16. • Inductive hypothesis: Assume xⁿ − yⁿ is divisible by x − y. • Inductive step: Since
xⁿ⁺¹ − yⁿ⁺¹ = x(xⁿ − yⁿ) + yⁿ(x − y),
and x − y divides both terms on the right, then x − y divides xⁿ⁺¹ − yⁿ⁺¹.
17. For the base case n = 1, we have that the left hand side of the summation is 1 and the right hand side is (r − 1)/(r − 1) = 1, so the base case holds. Next consider
1 + r + r² + ⋯ + rⁿ⁻¹ + rⁿ = (rⁿ − 1)/(r − 1) + rⁿ = (rⁿ⁺¹ − 1)/(r − 1).
Hence, the summation formula holds for all natural numbers n.
18. c. Consider
f₁ + f₂ + ⋯ + fₙ + fₙ₊₁ = fₙ₊₂ − 1 + fₙ₊₁ = fₙ₊₃ − 1.
19. By the distributive law of Section A.1, A ∩ (B₁ ∪ B₂) = (A ∩ B₁) ∪ (A ∩ B₂), so the base case n = 2 holds. Consider
A ∩ (B₁ ∪ B₂ ∪ ⋯ ∪ Bₙ ∪ Bₙ₊₁) = A ∩ ((B₁ ∪ B₂ ∪ ⋯ ∪ Bₙ) ∪ Bₙ₊₁)
= [A ∩ (B₁ ∪ B₂ ∪ ⋯ ∪ Bₙ)] ∪ (A ∩ Bₙ₊₁)
= (A ∩ B₁) ∪ (A ∩ B₂) ∪ ⋯ ∪ (A ∩ Bₙ) ∪ (A ∩ Bₙ₊₁).
21. • Base case: n = 1: Since the grid is 2 × 2, if one square is removed the remaining three squares can be covered with the L-shape. • Inductive step: Consider a 2ⁿ⁺¹ × 2ⁿ⁺¹ grid and divide it into four 2ⁿ × 2ⁿ grids. If one square is removed from the entire grid, then it must be removed from one of the four grids, and by the inductive hypothesis that grid with one square removed can be covered. The remaining three grids have a total of 3(2ⁿ × 2ⁿ) squares and hence, can also be covered.
22. By the Binomial Theorem,
2ⁿ = (1 + 1)ⁿ = Σₖ₌₀ⁿ C(n, k).
23. C(n, r − 1) + C(n, r) = n!/((r − 1)!(n − r + 1)!) + n!/(r!(n − r)!) = (n!/((r − 1)!(n − r)!))(1/(n − r + 1) + 1/r) = (n + 1)!/(r!(n − r + 1)!) = C(n + 1, r).
24. 0 = (1 + (−1))ⁿ = Σₖ₌₀ⁿ (−1)ᵏ C(n, k).