Search for notes by fellow students, in your own course and all over the country.
Browse our notes for titles which look like what you need, you can preview any of the notes via a sample of the contents. After you're happy these are the notes you're after simply pop them into your shopping cart.
Title: Introduction to Linear Algebra with Applications by Jim DeFranza
Description: Complete book from cover to cover in pdf format
Description: Complete book from cover to cover in pdf format
Document Preview
Extracts from the notes are below, to see the PDF you'll receive please use the links above
DeFranza
Linear Algebra
Introduction to Linear Algebra with Applications by Jim DeFranza and Daniel Gagliardi provides
the proper balance between computation, problem solving, and abstraction that will equip students with
the necessary skills and problem solving strategies to allow for a greater understanding and appreciation
of linear algebra and its numerous applications
...
Each concept is fully developed presenting natural connections
between topics giving students a working knowledge of the theory and techniques for each module
covered
...
MD DALIM 976667 7/29/08 CYAN MAG YELO BLACK
Ranging from routine to more challenging, each exercise set extends the concepts
or techniques by asking the student to construct complete arguments
...
Examples are designed to develop intuition and prepare students to think more
conceptually about new topics as they are introduced
...
Summaries conclude each section with important facts and techniques providing
students with easy access to the material needed to master the exercise sets
...
mhhe
...
mhhe
...
Lawrence University
Dan Gagliardi
SUNY Canton
First Pages
INTRODUCTION TO LINEAR ALGEBRA WITH APPLICATIONS
Published by McGraw-Hill, a business unit of The McGraw-Hill Companies, Inc
...
Copyright © 2009 by The McGraw-Hill Companies, Inc
...
No part of this
publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system,
without the prior written consent of The McGraw-Hill Companies, Inc
...
Some ancillaries, including electronic and print components, may not be available to customers outside the United States
...
1 2 3 4 5 6 7 8 9 0 DOC/DOC 0 9 8
ISBN 978–0–07–353235–6
MHID 0–07–353235–5
Editorial Director: Stewart K
...
Kane
Senior Media Project Manager: Tammy Juran
Designer: Laurie B
...
Leland
Supplement Producer: Melissa M
...
25/12 Times
Printer: R
...
Donnelly Crawfordsville, IN
Library of Congress Cataloging-in-Publication Data
DeFranza, James, 1950–
Introduction to linear algebra / James DeFranza, Daniel Gagliardi
...
p
...
Includes index
...
paper)
1
...
2
...
I
...
II
...
QA184
...
D44 2009
515′
...
mhhe
...
Jim DeFranza is Professor of Mathematics
at St
...
St
...
It is this many years of working closely with students that has
shaped this text in Linear Algebra and the other texts he has written
...
D
...
Dr
...
Dr
...
Jim is married
and has two children David and Sara
...
Daniel Gagliardi is an Assistant Professor of Mathematics at SUNY Canton, in
Canton New York
...
Gagliardi began his career as a software engineer at IBM
in East Fishkill New York writing programs to support semiconductor development
and manufacturing
...
D
...
Dr
...
In particular, his current work
is concerned with developing algorithmic formulations to describe the fine structure
(characters and Weyl groups) of local symmetric spaces
...
Gagliardi also does
research in Graph Theory
...
In addition to his work as a mathematician, Dr
...
Dr
...
v
Revised Pages
Contents
Preface ix
CHAPTER
1
Systems of Linear Equations and Matrices
1
1
...
1 12
1
...
2 23
1
...
3 37
1
...
4 45
1
...
5 51
1
...
6 65
1
...
7 77
1
...
8 84
Review Exercises 89
Chapter Test 90
CHAPTER
2
Linear Combinations and Linear Independence
2
...
1 99
2
...
2 108
2
...
3 120
Review Exercises 123
Chapter Test 125
vi
93
Revised Pages
Contents
CHAPTER
3
Vector Spaces
127
3
...
1 137
3
...
2 154
3
...
3 171
3
...
4 182
3
...
5 193
Review Exercises 194
Chapter Test 195
CHAPTER
4
Linear Transformations
199
4
...
1 211
4
...
2 223
4
...
3 233
4
...
4 245
4
...
5 253
4
...
6 268
Review Exercises 270
Chapter Test 272
CHAPTER
5
235
Eigenvalues and Eigenvectors 275
5
...
1 285
5
...
2 298
5
...
3 309
300
vii
Revised Pages
viii
Contents
5
...
4 315
Review Exercises 316
Chapter Test 318
CHAPTER
6
Inner Product Spaces
310
321
6
...
1 331
6
...
2 341
6
...
3 352
6
...
4 364
6
...
5 375
6
...
6 383
6
...
7 392
6
...
8 403
Review Exercises 404
Chapter Test 406
Appendix 409
Answers to Odd-Numbered Exercises 440
Index 479
Revised Pages
Preface
Introduction to Linear Algebra with Applications is an introductory text targeted to
second-year or advanced first-year undergraduate students
...
The centerpiece
of our philosophy regarding the presentation of the material is that each topic should
be fully developed before the reader moves onto the next
...
We take great care to meet both of these objectives
...
As a result, the reader is prepared
for each new unit, and there is no need to repeat a concept in a subsequent chapter
when it is utilized
...
Our approach is to take advantage of this
opportunity by presenting abstract vector spaces as early as possible
...
To motivate the definition of an abstract vector space, and the subtle concept of
linear independence, we use addition and scalar multiplication of vectors in Euclidean
space
...
This approach equips students with the necessary skills and problemsolving strategies in an abstract setting that allows for a greater understanding and
appreciation for the numerous applications of the subject
...
Linear systems, matrix algebra, and determinants: We have given a streamlined, but complete, discussion of solving linear systems, matrix algebra, determinants, and their connection in Chap
...
Computational techniques are introduced,
and a number of theorems are proved
...
Determinants are no longer central in linear
algebra, and we believe that in a course at this level, only a few lectures should
be devoted to the topic
...
1
...
ix
Revised Pages
x
Preface
2
...
1, providing students with a familiar
structure to work with as they start to explore the properties which are used later
to characterize abstract vector spaces
...
Linear independence: We have found that many students have difficulties with
linear combinations and the concept of linear independence
...
When students fail to grasp them, the full benefits of the course cannot
be realized
...
2 to a careful exposition of linear combinations and linear independence
in the context of Euclidean space
...
First, by placing
these concepts in a separate chapter their importance in linear algebra is highlighted
...
Third, many of the important ramifications of linear combinations and linear independence are considered
in the familiar territory of Euclidean spaces
...
Euclidean spaces ޒn : The Euclidean spaces and their algebraic properties are
introduced in Chap
...
3
...
5
...
6
...
Formal definitions and theorems are then given to describe the situation
in general
...
7
...
These questions are designed to help the student
connect concepts and better understand the facts presented in the chapter
...
Rigor and intuition: The approach we have taken attempts to strike a balance
between presenting a rigorous development of linear algebra and building intuition
...
When a proof is
not present, we include a motivating discussion describing the importance and
use of the result and, if possible, the idea behind a proof
...
Abstract vector spaces: We have positioned abstract vector spaces as a central
topic within Introduction to Linear Algebra with Applications by placing their
introduction as early as possible in Chap
...
We do this to ensure that abstract
vector spaces receive the appropriate emphasis
...
However, Euclidean spaces still play a central
role in our approach because of their familiarity and since they are so widely
used
...
Revised Pages
Preface
xi
10
...
They are written, whenever possible,
using nontechnical language and mostly without notation
...
Our intention is to help students to make connections between the concepts of
the section as they survey the topic from a greater vantage point
...
Much of this growth is fueled by the power of modern computers and the availability
of computer algebra systems used to carry out computations for problems involving
large matrices
...
Recently, a consortium of mathematics educators has placed its importance, relative to applications, second only to calculus
...
Whether
the intended audience is engineering, economics, science, or mathematics students,
the abstract theory is essential to understanding how linear algebra is applied
...
1
...
However, many types of applications involve the more sophisticated concepts we develop in the text
...
1, and are
presented at the end of a chapter as soon as the required background material is completed
...
It is our
hope that the topics we have chosen will interest the reader and lead to further inquiry
...
4
...
An introduction to the connection between differential equations and linear algebra
is given in Secs
...
5 and 5
...
Markov chains and quadratic forms are examined in
Secs
...
4 and 6
...
Section 6
...
One of the most familiar applications
here is the problem of finding the equation of a line that best fits a set of data points
...
6
...
Technology
Computations are an integral part of any introductory course in mathematics and
certainly in linear algebra
...
That said, we also encourage the
student to make appropriate use of the available technologies designed to facilitate,
or to completely carry out, some of the more tedious computations
...
Our approach in Introduction to Linear Algebra with
Applications is to assume that some form of technology will be used, but leave the
choice to the individual instructor and student
...
Note that this text can be
used with or without technology
...
From our own experience, we have found that Scientific Notebook,TM
A
which offers a front end for LTEX along with menu access to the computer algebra
system MuPad, allows the student to gain experience using technology to carry out
A
computations while learning to write clear mathematics
...
Another aspect of technology in linear algebra has to do with the accuracy and
efficiency of computations
...
Moreover, the
accuracy of the results can be affected by computer roundoff error
...
Overcoming problems of this kind is extremely important
...
In our
text, the fundamental concepts of linear algebra are introduced using simple examples
...
Other Features
1
...
These provide additional
motivation and emphasize the relevance of the material that is about to be covered
...
Writing style: The writing style is clear, engaging, and easy to follow
...
We limit the use of jargon and provide explanations that are as readerfriendly as possible
...
Introduction to Linear Algebra with Applications is specifically designed to be a
readable text from which a student can learn the fundamental concepts in linear
algebra
...
Exercise sets: Exercise sets are organized with routine exercises at the beginning
and the more difficult problems toward the end
...
The early portion of each
exercise set tests the student’s ability to apply the basic concepts
...
The latter portion of each exercise set extends the concepts and
techniques by asking the student to construct complete arguments
...
Review exercise sets: The review exercise sets are organized as sample exams
with 10 exercises
...
At least one problem
in each of these sets presents a new idea in the context of the material of the
chapter
...
Length: The length of the text reflects the fact that it is specifically designed for
a one-semester course in linear algebra at the undergraduate level
...
Appendix: The appendix contains background material on the algebra of sets,
functions, techniques of proof, and mathematical induction
...
Course Outline
The topics we have chosen for Introduction to Linear Algebra with Applications
closely follow those commonly covered in a first introductory course
...
Nevertheless, we have written the text to be flexible, allowing for some permutations
of the order of topics without any loss of consistency
...
1 we present all the
basic material on linear systems, matrix algebra, determinants, elementary matrices,
and the LU decomposition
...
2 is entirely devoted to a careful exposition of linear combinations and linear independence in ޒn
...
The addition of this chapter gives us
the opportunity to develop all the important ideas in a familiar setting
...
3
...
3 is a discussion
of subspaces, bases, and coordinates
...
4
...
Also, in Chap
...
Chap
...
An abundance of examples are given to illustrate the techniques of computing
eigenvalues and finding the corresponding eigenvectors
...
In Chap
...
We also give a description
of the Gram-Schmidt process used to find an orthonormal basis for an inner product
space and present material on orthogonal complements
...
The Appendix contains
a brief summary of some topics found in a Bridge Course to higher mathematics
...
Application sections are placed at the end of chapters as soon
as the requisite background material has been covered
...
Instructor solutions manual: This manual contains detailed solutions to all
exercises
...
Student solutions manual: This manual contains detailed solutions to oddnumbered exercises
...
Text website www
...
com/defranza: This website accompanies the text and
is available for both students and their instructors
...
In addition to these assets, instructors will be able to access
additional quizzes, sample exams, the end of chapter true/false tests, and the
Instructor’s Solutions Manual
...
Their thoughtful comments and
excellent suggestions have helped us enormously with our efforts to realize our vision
of a reader-friendly introductory text on linear algebra
...
We are also grateful to Ernie Stitzinger of North Carolina State University who had
the tiring task of checking the complete manuscript for accuracy, including all the
exercises
...
Sponsoring Editor), Michelle Driscoll (Developmental Editor), and Joyce Watters
(Project Manager) who have helped us in more ways than we can name, from the
inception of this project to its completion
...
Finally, we want to express our gratitude to the staff
at McGraw-Hill Higher Education, Inc
...
Confirming Pages
Preface
...
Leong Wah June, Universiti Putra Malaysia
Cerry Klein, University of Missouri–Columbia
Kevin Knudson, Mississippi State University
Hyungiun Ko, Yonsei University
Jacob Kogan, University of Maryland–Baltimore County
David Meel, Bowling Green State University
Martin Nakashima, California State Poly University–Pomona
Eugene Spiegel, University of Connecticut–Storrs
Dr
...
Like calculus, linear
algebra is a subject with elegant theory and many diverse applications
...
To help
with this transition, some colleges and universities offer a Bridge Course to Higher
Mathematics
...
All this is in the context of a specific body of knowledge
...
Whether you are
taking this course as part of a mathematics major or because linear algebra is applied
in your specific area of study, a clear understanding of the theory is essential for
applying the concepts of linear algebra to mathematics or other fields of science
...
The organization of the material is based on our philosophy that each
topic should be fully developed before readers move onto the next
...
It is particularly
applicable to the study of mathematics
...
In our text, this material is
contained in Chaps
...
All other branches of the tree, representing more
advanced topics and applications, extend from the foundational material of the trunk or
from the ancillary material of the intervening branches
...
If you remain committed to learning this beautiful subject, the
rewards will be significant in other courses you may take, and in your professional
career
...
edu
Dan Gagliardi
gagliardid@canton
...
1
1
...
3
1
...
5
1
...
7
1
...
The chemical reaction that occurs in
the leaves of plants converts carbon dioxide and
water to carbohydrates with the release of oxygen
...
The law of conservation of mass states
that the total mass of all substances present before
and after a chemical reaction remains the same
...
To balance the photosynthesis reaction equation, the same number of carbon atoms must appear on both
sides of the equation, so
a = 6d
The same number of oxygen atoms must appear on both sides, so
2a + b = 2c + 6d
and the same number of hydrogen atoms must appear on both sides, so
2b = 12d
1
Confirming Pages
Chapter 1 Systems of Linear Equations and Matrices
This gives us the system of three linear equations in four variables
⎧
⎪ a
− 6d = 0
⎨
2a + b − 2c − 6d = 0
⎪
⎩
2b
− 12d = 0
Any positive integers a, b, c, and d that satisfy all three equations are a solution to
this system which balances the chemical equation
...
Many diverse applications are modeled by systems of equations
...
In
this chapter we develop systematic methods for solving systems of linear equations
...
1
ß
2
Systems of Linear Equations
As the introductory example illustrates, many naturally occurring processes are
modeled using more than one equation and can require many equations in many variables
...
To develop this idea, consider the set of equations
2x − y = 2
x + 2y = 6
which is a system of two equations in the common variables x and y
...
In this example we proceed by solving the first equation for y, so that
y = 2x − 2
To find the solution, substitute y = 2x − 2 into the second equation to obtain
x + 2(2x − 2) = 6
and solving for x gives
x=2
Substituting x = 2 back into the first equation yields 2(2) − y = 2, so that y = 2
...
Since both of these
equations represent straight lines, a solution exists provided that the lines intersect
...
1(a)
...
If there are no
solutions, the system is inconsistent
...
The two lines have different slopes and hence intersect at a unique point, as shown
in Fig
...
2
...
1(b)
...
1 Systems of Linear Equations
x + 2y = 6
y
5
2x − y = 2
y
y
x+y =1 5
5
(2, 2)
Ϫ5
x−y =1
5
Ϫ5
(a)
−x + y = 1
x
Ϫ5
5
Ϫ5
(b)
x
2x + 2y = 2
Ϫ5
5
x
Ϫ5
(c)
Figure 1
3
...
1(c)
...
A linear equation
in the n variables x1 , x2 ,
...
The first subscript indicates the equation number while the second
specifies the term of the equation
...
...
...
⎪
...
...
...
...
...
⎪
⎪
⎩
am1 x1 + am2 x2 + · · · + amn xn = bm
This is also referred to as an m × n linear system
...
A solution to a linear system with n variables is an ordered sequence
(s1 , s2 ,
...
, xn = sn
...
Confirming Pages
4
Chapter 1 Systems of Linear Equations and Matrices
The Elimination Method
The elimination method, also called Gaussian elimination, is an algorithm used to
solve linear systems
...
An m × n linear system is in triangular form provided that the coefficients
aij = 0 whenever i > j
...
Two examples of triangular systems are
⎧
⎧
⎨ x1 − 2x2 + x3 = −1
⎨ x1 + x2 − x3 − x4 = 2
x2 − 3x3 = 5
x2 − x3 − 2x4 = 1
and
⎩
⎩
x3 = 2
2x3 − x4 = 3
When a linear system is in triangular form, then the solution set can be obtained
using a technique called back substitution
...
Substituting this into the second equation,
we obtain x2 − 3(2) = 5, so x2 = 11
...
The solution is also written as (19, 11, 2)
...
The next theorem gives three operations that transform a linear system into an
equivalent system, and together they can be used to convert any linear system to an
equivalent system in triangular form
...
1 Systems of Linear Equations
THEOREM 1
Let
⎧
⎪ a11 x1 + a12 x2 + · · · + a1n xn = b1
⎪
⎪
⎪
⎪ a21 x1 + a22 x2 + · · · + a2n xn = b2
⎨
a31 x1 + a32 x2 + · · · + a3n xn = b3
⎪
...
...
...
...
...
⎪
...
...
...
Performing any one of the following operations on the linear
system produces an equivalent linear system
...
Interchanging any two equations
...
Multiplying any equation by a nonzero constant
...
Adding a multiple of one equation to another
...
If equation i is multiplied by a
constant c ̸= 0, then equation i of the new system is
cai1 x1 + cai2 x2 + · · · + cain xn = cbi
Let (s1 , s2 ,
...
Since
ai1 s1 + ai2 s2 + · · · + ain sn = bi ,
then
cai1 s1 + cai2 s2 + · · · + cain sn = cbi
Hence (s1 , s2 ,
...
Consequently, the
systems are equivalent
...
Thus, equation j of the new system
becomes
(cai1 + aj 1 )x1 + (cai2 + aj 2 )x2 + · · · + (cain + aj n )xn = cbi + bj
or equivalently,
c(ai1 x1 + ai2 x2 + · · · + ain xn ) + (aj 1 x1 + aj 2 x2 + · · · + aj n xn ) = cbi + bj
Now let (s1 , s2 ,
...
Then
ai1 s1 + ai2 s2 + · · · + ain sn = bi
and
aj 1 s1 + aj 2 s2 + · · · + aj n sn = bj
Therefore,
c(ai1 s1 + ai2 s2 + · · · + ain sn ) + (aj 1 s1 + aj 2 s2 + · · · + aj n sn ) = cbi + bj
so that (s1 , s2 ,
...
5
Confirming Pages
6
Chapter 1 Systems of Linear Equations and Matrices
EXAMPLE 1
Use the elimination method to solve the linear system
...
Using back substitution gives x = 0
...
2
...
y
x+y =1 5
y
−x + y = 1
x+y =1 5
y=1
(1, 0)
Ϫ5
−x + y = 1
5
x
Ϫ5
Ϫ5
5
x
Ϫ5
(a)
(b)
Figure 2
Converting a linear system to triangular form often requires many steps
...
To articulate this process, the notation, for example,
(−2) · E1 + E3 −→ E3
will mean add −2 times equation 1 to equation 3, and replace equation 3 with the
result
...
EXAMPLE 2
Solution
Solve the linear system
...
1 Systems of Linear Equations
⎧
⎨ x+y+ z= 4
−x − y + z = −2
⎩
2x − y + 2z = 2
E1 + E2 → E2
− 2E1 + E3 → E3
7
⎧
⎨x + y + z = 4
2z = 2
−→
⎩
− 3y
= −6
Interchanging the second and third equations gives the triangular linear system
⎧
⎧
⎨x + y + z = 4
⎨x + y + z = 4
2z = 2
E2 ↔ E3
− 3y
= −6
−→
⎩
⎩
− 3y
= −6
2z = 2
Using back substitution, we have z = 1, y = 2, and x = 4 − y − z = 1
...
Recall from solid geometry that the graph of an equation of the form
ax + by + cz = d is a plane in three-dimensional space
...
3(a)
...
3(b) are the lines of the
pairwise intersections of the three planes
...
(1, 2, 1)
(a)
(b)
Figure 3
Similar to the 2 × 2 case, the geometry of Euclidean space helps us better understand
the possibilities for the general solution of a linear system of three equations in three
variables
...
Alternatively, a 3 × 3 system can
have infinitely many solutions if
1
...
3
...
The three planes intersect in a line (like the pages of a book)
...
example, the linear system given by
⎧
⎨−y +z=0
y
=0
⎩
z=0
Confirming Pages
8
Chapter 1 Systems of Linear Equations and Matrices
Figure 4
EXAMPLE 3
Solution
represents three planes whose intersection is the x axis
...
Finally, there are two cases in which a 3 × 3 linear system has no solutions
...
Certainly, when all three planes are parallel, the system has
no solutions, as illustrated by the linear system
⎧
⎪ z = 0
⎨
z = 1
⎪
⎩
z = 2
Also, a 3 × 3 linear system has no solutions, if the lines of the pairwise intersections
of the planes are parallel, but not the same, as shown in Fig
...
From the previous discussion we see that a 3 × 3 linear system, like a 2 × 2 linear
system, has no solutions, has a unique solution, or has infinitely many solutions
...
1
...
In Example 3 we consider a linear system with four variables
...
Solve the linear system
...
After we do so, the coefficient of x1 is 1
...
1 Systems of Linear Equations
9
which is an equivalent system in triangular form
...
It is common in this case to replace x4
with the parameter t
...
The reader can check that
x1 = 3t − 2, x2 = t − 3, x3 = 2t + 1, and x4 = t is a solution for any t by substituting these values in the original equations
...
For example, if t = 0, then a particular solution is
(−2, −3, 1, 0)
...
In this case we call x4 a free variable
...
In this
case, the solution set is an r-parameter family of solutions where r is equal to the
number of free variables
...
⎧
⎨
x1 − x2 − 2x3 − 2x4 − 2x5 = 3
3x1 − 2x2 − 2x3 − 2x4 − 2x5 = −1
⎩
−3x1 + 2x2 + x3 + x4 − x5 = −1
After performing the operations E3 + E2 → E3 followed by E2 − 3E1 → E2 , we
have the equivalent system
⎧
3
⎨ x1 − x2 − 2x3 − 2x4 − 2x5 =
x2 + 4x3 + 4x4 + 4x5 = − 10
⎩
− x3 − x4 − 3x5 = − 2
The variables x4 and x5 are both free variables, so to write the solution, let x4 = s
and x5 = t
...
EXAMPLE 5
Solution
Solve the linear system
...
This is accomplished
by using the following operations
...
In the previous examples the algorithm for converting a linear system to triangular
form is based on using a leading variable in an equation to eliminate the same variable
in each equation below it
...
Confirming Pages
1
...
Find the vertex of the parabola
...
Conditions on a, b,
and c are imposed by substituting the given points into this equation
...
In particular, we have
⎧
⎧
⎨ a− b+c= 1
⎨a − b + c = 1
−4E1 + E2 → E2
4a + 2b + c = −2
6b − 3c = −6
→
−9E1 + E3 → E3
⎩
⎩
9a + 3b + c = 1
12b − 8c = −8
Next, with b as the leading variable we, eliminate b from equation 3, so that
⎧
⎧
⎨a − b + c = 1
⎨a − b + c = 1
6b − 3c = −6
−2E2 + E3 → E3
6b − 3c = −6
→
⎩
⎩
12b − 8c = −8
− 2c = 4
Now, using back substitution on the last system gives c = −2, b = −2, and a = 1
...
5
...
A m × n linear system has a unique solution, infinitely many solutions, or
no solutions
...
Interchanging any two equations in a linear system does not alter the set of
solutions
...
Multiplying any equation in a linear system by a nonzero constant does not
alter the set of solutions
...
Replacing an equation in a linear system with the sum of the equation and a
scalar multiple of another equation does not alter the set of solutions
...
Every linear system can be reduced to an equivalent triangular linear
system
...
1
Perform the operations −E1 + E2 → E2 and
−2E1 + E3 → E3 , and write the new equivalent
system
...
In Exercises 5–18, solve the linear system using
the elimination method
...
Consider the linear system
⎧
⎨ x1 − x2 − 2x3 = 3
−x1 + 2x2 + 3x3 = 1
⎩
2x1 − 2x2 − 2x3 = −2
Perform the operations E1 + E2 → E2 and
−2E1 + E3 → E3 , and write the new equivalent
system
...
2
...
Solve the linear system
...
Consider the linear system
⎧
+ 3x4 = 2
⎪ x1
⎪
⎨
x1 + x2
+ 4x4 = 3
+ x3 + 8x4 = 3
⎪ 2x1
⎪
⎩
x1 + x2 + x3 + 6x4 = 2
Perform the operations −E1 + E2 → E2 ,
−2E1 + E3 → E3 , −E1 + E4 → E4 , −E2 +
E4 → E4 , and −E3 + E4 → E4 , and write the
new equivalent system
...
4
...
2x + 3y = −2
−2x
= 0
6
...
4x
= 4
−3x + 2y = −3
8
...
3x − 2y = 4
x − 2y = 4
3
3
3x − 5y = 1
−x + 5 y = − 1
3
3
10
...
⎩
x − 2y + z = −2
12
...
1 Systems of Linear Equations
⎧
⎨ −2x − 2y + 2z = 1
x
+ 5z = −1
13
...
⎩
2x − 2y − 8z = 2
15
...
3x1 + 4x2 + 3x3 = 0
3x1 − 4x2 + 3x3 = 4
−2x1 + x2
=2
3x1 − x2 + 2x3 = 1
13
⎧
⎨ x − 2y + 4z = a
2x + y − z = b
27
...
⎩
4x + 2y + z = c
In Exercises 29–32, determine the value of a that
makes the system inconsistent
...
x + y = −2
2x + ay = 3
30
...
x1 − 2x2 − 2x3 − x4 = −3
− 2x1 + x2 + x3 − 2x4 = −3
31
...
=1
2x1 + 2x2 − x3
− x2
+ 3x4 = 2
x− y=2
3x − 3y = a
32
...
19
...
−2x + y = a
−3x + 2y = b
2x + 3y = a
x+ y=b
⎧
⎨ 3x + y + 3z = a
−x −
z=b
21
...
⎩
x − y − 2z = c
In Exercises 23–28, give restrictions on a, b, and c
such that the linear system is consistent
...
x − 2y = a
−2x + 4y = 2
24
...
x − 2y = a
−x + 2y = b
26
...
Find the vertex of the parabola
...
(0, 0
...
75), (−1, 4
...
(0, 2), (−3, −1), (0
...
75)
35
...
5, −3
...
3, 2
...
(0, −2875), (1, −5675), (3, 5525)
37
...
Sketch the lines
...
Find the point where the four lines
2x + y = 0, x + y = −1, 3x + y = 1, and
4x + y = 2 intersect
...
39
...
Has a unique solution
b
...
Is inconsistent
40
...
Confirming Pages
14
Chapter 1 Systems of Linear Equations and Matrices
41
...
Describe the solution set where the variables x3
and x4 are free
...
Describe the solution set where the variables x2
and x4 are free
...
Consider the system
⎧
⎨ x1 − x2 + x3 − x4 + x5 = 1
x2
− x4 − x5 = −1
⎩
x3 − 2x4 + 3x5 = 2
a
...
b
...
ß
1
...
Determine the values of k such that the linear
system
9x + ky = 9
kx + y = −3
has
a
...
Infinitely many solutions
c
...
Determine the values of k such that the linear
system
⎧
⎨ kx + y + z = 0
x + ky + z = 0
⎩
x + y + kz = 0
has
a
...
A one-parameter family of solutions
c
...
1
...
The algorithm can be streamlined
by introducing matrices to represent linear systems
...
For example, the array of numbers
⎤
⎡
2 3 −1
4
⎣ 3 1
0 −2 ⎦
−2 4
1
3
is a 3 × 4 matrix
...
The variables are placeholders
...
For example, the coefficients and constants of the linear system
⎧
− 3x4 = 11
⎪ −4x1 + 2x2
⎪
⎨
2x1 − x2 − 4x3 + 2x4 = − 3
⎪
3x2
− x4 =
0
⎪
⎩
−2x1
+ x4 =
4
Confirming Pages
1
...
Notice that for an
m × n linear system the augmented matrix is m × (n + 1)
...
Notice that we always use a 0 to record any missing
terms
...
The relationship is
illustrated below:
Linear system
⎧
⎨ x+y− z= 1
2x − y + z = −1
⎩
−x − y + 3z = 2
Corresponding augmented matrix
⎤
⎡
1
1 −1
1
⎣ 2 −1
1 −1 ⎦
−1 −1
3
2
Using the operations −2E1 + E2 → E2 Using the operations −2R1 + R2 → R2
and E1 + E3 → E3 , we obtain the equiv- and R1 + R3 → R3 , we obtain the equivalent triangular system
alent augmented matrix
⎡
⎤
⎧
1
1 −1
1
⎨x + y − z= 1
⎣ 0 −3
3 −3 ⎦
− 3y + 3z = −3
⎩
0
0
2
3
2z = 3
The notation used to describe the operations on an augmented matrix is similar
to the notation we introduced for equations
...
Analogous to the triangular
form of a linear system, a matrix is in triangular form provided that the first nonzero
entry for each row of the matrix is to the right of the first nonzero entry in the row
above it
...
1
...
Confirming Pages
16
Chapter 1 Systems of Linear Equations and Matrices
THEOREM 2
Any one of the following operations performed on the augmented matrix, corresponding to a linear system, produces an augmented matrix corresponding to an
equivalent linear system
...
Interchanging any two rows
...
Multiplying any row by a nonzero constant
...
Adding a multiple of one row to another
...
An m × n matrix A is called
row equivalent to an m × n matrix B if B can be obtained from A by a sequence of
row operations
...
1
...
3
...
Write the augmented matrix of the linear system
...
Interpret the final matrix as a linear system (which is equivalent to the original)
...
Example 1 illustrates how we can carry out steps 3 and 4
...
⎡
⎤
⎡
⎤
⎡
⎤
1 0 0 1
1 0
0 0 5
1 2
1 −1 1
0 1 ⎦
a
...
⎣ 0 1 −1 0 1 ⎦ c
...
Reading directly from the augmented matrix, we have x3 = 3, x2 = 2, and
x1 = 1
...
b
...
So the variable x3 is free, and the general solution is
S = {(5, 1 + t, t, 3) | t ∈
...
The augmented matrix is equivalent to the linear system
x1 + 2x2 + x3 − x4 = 1
3x2 − x3
=1
Using back substitution, we have
1
x2 = (1 + x3 )
and
3
x1 = 1 − 2x2 − x3 + x4 =
1 5
− x3 + x4
3 3
Confirming Pages
1
...
⎧
x − 6y − 4z = −5
⎨
2x − 10y − 9z = −4
⎩
− x + 6y + 5z = 3
To solve this system, we write the augmented matrix
⎡
⎤
1
−6 −4 −5
⎣
2
−10 −9 −4 ⎦
−1
6
5
3
where we have shaded the entries to eliminate
...
Echelon Form of a Matrix
In Example 2, the final augmented matrix
⎡
1 −6 −4
⎣ 0
2 −1
0
0
1
⎤
−5
6 ⎦
−2
is in row echelon form
...
1
...
1 by *, is to the right of the first nonzero term in the previous
row
...
Confirming Pages
18
Chapter 1 Systems of Linear Equations and Matrices
0
...
...
1 is one row, a step may extend over
several columns
...
The
matrix is in reduced row echelon form if, in addition, each pivot is a 1 and all other
entries in this column are 0
...
If we read from
the last matrix above, the solution to the corresponding linear system is, as before,
x = −1, y = 2, and z = −2
...
We summarize the previous discussion on row echelon form in the
next definition
...
2 Matrices and Elementary Row Operations
DEFINITION 2
19
Echelon Form An m × n matrix is in row echelon form if
1
...
2
...
, k are the rows with nonzero entries and if the leading
nonzero entry (pivot) in row i occurs in column ci , for 1, 2,
...
The matrix is in reduced row echelon form if, in addition,
3
...
4
...
The process of transforming a matrix to reduced row echelon form is called
Gauss-Jordan elimination
...
⎧
⎪ x1 − x2 − 2x3 + x4 = 0
⎪
⎨
2x1 − x2 − 3x3 + 2x4 = −6
⎪ −x1 + 2x2 + x3 + 3x4 = 2
⎪
⎩
x1 + x2 − x3 + 2x4 = 1
The augmented matrix of the linear system is
⎡
⎤
1 −1 −2 1
0
⎢ 2 −1 −3 2 −6 ⎥
⎢
⎥
⎣ −1
2
1 3
2 ⎦
1
1 −1 2
1
To transform the matrix into reduced row echelon form, we first use the leading 1
in row 1 as a pivot to eliminate the terms in column 1 of rows 2, 3, and 4
...
The
required row operations are
R2 + R1 → R1
−R2 + R3 → R3
−2R2 + R4 → R4
Confirming Pages
20
Chapter 1 Systems of Linear Equations and Matrices
reducing the matrix
⎡
1 −1 −2 1
⎢ 0
1
1 0
⎢
⎣ 0
1 −1 4
0
2
1 1
⎤
0
−6 ⎥
⎥
2 ⎦
1
to
⎡
0 −1 1
1
1 0
0 −2 4
0 −1 1
1
⎢ 0
⎢
⎣ 0
0
⎤
−6
−6 ⎥
⎥
8 ⎦
13
Notice that each entry in row 3 is evenly divisible by 2
...
Specifically, the operations
applied to the last matrix give
⎡
R4 + R1 → R1
−2R4 + R2 → R2
2R4 + R3 → R3
1
⎢ 0
⎢
⎣ 0
0
0
1
0
0
0
0
1
0
0
0
0
1
⎤
−19
16 ⎥
⎥
−22 ⎦
−9
which is in reduced row echelon form
...
Confirming Pages
1
...
21
⎧
⎨
3x1 − x2 + x3 + 2x4 = −2
x1 + 2x2 − x3 + x4 = 1
⎩
− x1 − 3x2 + 2x3 − 4x4 = −6
The linear system in matrix form is
⎡
3 −1
1
2
⎣ 1
2 −1
1
−1 −3
2 −4
which can be reduced to
⎡
1 2
⎣ 0 1
0 0
−1
1
−1
3
1 − 20
3
⎤
−2
1 ⎦
−6
⎤
1
5 ⎦
−10
Notice that the system has infinitely many solutions, since from the last row we
see that the variable x4 is a free variable
...
EXAMPLE 5
Solution
Solve the linear system
...
The
following steps describe the process
...
As this
system has no solution, the system is inconsistent
...
2
...
The reduced
augmented matrix for a consistent linear system can have a row of zeros
...
Example 6 gives
an illustration
...
⎡
⎤
1
0 2 a
⎣ 2
1 5 b ⎦
1 −1 1 c
The operation −2R1 + R2 → R2 followed by −R1 + R3 → R3 and finally followed
by R2 + R3 → R3 reduces the augmented matrix to
⎡
⎤
1 0 2
a
⎣ 0 1 1
b − 2a ⎦
0 0 0 b + c − 3a
Hence, the corresponding linear system is consistent provided that b + c − 3a = 0
...
Notice also that when the system is consistent, the
third row will contain all zeros and the variable x3 is a free variable
...
1
...
Then divide each entry of row 1 by the
leading entry
...
2 Matrices and Elementary Row Operations
23
2
...
3
...
Note that the leading entry may
not be in column 2
...
Continue in this way, making sure that the leading entry of each row is a 1 with
zeros elsewhere in that column
...
The leading 1 in any row should be to the right of a leading 1 in the row above it
...
All rows of zeros are placed at the bottom of the matrix
...
It is an important fact that we
will state here as a theorem without proof
...
Fact Summary
1
...
2
...
3
...
4
...
5
...
Exercise Set 1
...
Do not solve the
system
...
2x − 3y = 5
−x + y = −3
2
...
⎩
4x + y − z = 1
Confirming Pages
24
Chapter 1 Systems of Linear Equations and Matrices
⎧
⎨ −3x + y + z = 2
− 4z = 0
4
...
2x1
− x3 = 4
x1 + 4x2 + x3 = 2
6
...
⎩
x1 + 3x2 + 3x3 − 3x4 = −4
⎧
− 3x3 + 4x4 = −3
⎨ 3x1
−4x1 + 2x2 − 2x3 − 4x4 = 4
8
...
⎡
1 0 0
9
...
⎣ 0 1 0
0 0 1
⎡
1 0
2
11
...
⎣ 0 1
3
0 0
0
⎡
1 −2 0
0 1
13
...
⎣ 0 0 0
0 0 0
−1
1
2
0
⎤
⎦
⎤
2
0 ⎦
−2
3
⎤
−3
2 ⎦
0
4
4
3
0
⎤
⎦
⎤
−3
2 ⎦
0
⎤
−1
0 ⎦
0
⎡
1 0
15
...
⎣ 0 0
0 0
⎤
0
0 ⎦
1
⎤
0
0 ⎦
1
0
0
0
0
1
0
17
...
1
0
3 −3 0
0
0 1
1
4
⎡
1 0
19
...
⎣ 0 1
0 0
0 −3
0 −1
1
2
2
5
0
−3 0
0 1
⎤
1
7 ⎦
−1
⎤
−1
1 ⎦
4
5
In Exercises 21–28, determine whether the matrices
are in reduced row echelon form
...
0 1 3
1
0
22
...
⎣ 0 1 2 ⎦
0 0 1
⎤
⎡
1 2 0
24
...
⎣ 0 0 1 −2 ⎦
0 0 0
0
⎤
⎡
1 0 −3 4
1 5 ⎦
26
...
⎣ 0 0 1 5
0 1 0 0 −1
⎡
Confirming Pages
1
...
⎣ 0 1 1 5
0 0 0 1
2
3
⎤
6 ⎦
1
3
In Exercises 29–36, find the reduced row echelon
form of the matrix
...
2 3
−2 1
30
...
⎣ 3 −1 0 ⎦
−1 −1 2
⎤
⎡
0
2
1
32
...
−4 −2 −1
−2 −3
0
⎧
− 4z = 1
⎨ 2x
4x + 3y − 2z = 0
40
...
⎩
x + y + z=2
⎧
⎨ 3x − 2z = −3
−2x + z = −2
42
...
44
...
−4 1
4
3 4 −3
34
...
⎣ 0
1 −4
2
2
⎤
⎡
4 −3 −4 −2
2
1 −4 ⎦
36
...
Convert the augmented matrix to
reduced row echelon form, and find the solution of the
linear system
...
38
...
⎩
−2x − 2y
= −2
25
46
...
48
...
The augmented matrix of a linear system has the
form
⎤
⎡
1
2 −1 a
⎣ 2
3 −2 b ⎦
−1 −1
1 c
a
...
b
...
c
...
Give a specific consistent linear system and
find one particular solution
...
The augmented matrix of a linear system has the
form
ax
2x
y
(a − 1)y
a
...
b
...
c
...
Give a specific consistent linear system and
find one particular solution
...
Determine the values of a for which the linear
system is consistent
...
When it is consistent, does the linear system
have a unique solution or infinitely many
solutions?
c
...
ß
1
...
The augmented matrix of a linear system has the
form
⎤
⎡
−2 3
1 a
⎣ 1 1 −1 b ⎦
0 5 −1 c
52
...
Matrix Algebra
Mathematics deals with abstractions that are based on natural concepts in concrete
settings
...
Numbers can be added and multiplied, and they have properties
such as the distributive and associative properties
...
For example, we can define addition and multiplication so that
algebra can be performed with matrices
...
Let A be an m × n matrix
...
1
...
⎢
...
Row i −→ ⎢ ai1
⎢
⎢
...
...
...
aij
...
...
⎥
...
⎥
· · · ain ⎥ = A
⎥
...
⎦
...
3 Matrix Algebra
27
then
a11 = −2
a21 = 5
a31 = 2
a12 = 1
a22 = 7
a32 = 3
a13 = 4
a23 = 11
a33 = 22
A vector is an n × 1 matrix
...
For a given matrix A, it is convenient to refer to its row vectors and its column
vectors
...
Thus, A = B if and only if
aij = bij , for 1 ≤ i ≤ m and 1 ≤ j ≤ n
...
DEFINITION 1
EXAMPLE 1
Addition and Scalar Multiplication If A and B are two m × n matrices,
then the sum of the matrices A + B is the m × n matrix with the ij term given by
aij + bij
...
Perform the operations on the matrices
⎤
⎡
2 0
1
and
A = ⎣ 4 3 −1 ⎦
−3 6
5
a
...
2A − 3B
⎤
−2 3 −1
6 ⎦
B=⎣ 3 5
4 2
1
⎡
Confirming Pages
28
Chapter 1 Systems of Linear Equations and Matrices
Solution
a
...
To evaluate this expression, we first multiply each entry of the matrix A by 2
and each entry of the matrix B by −3
...
This gives
⎤
⎤
⎡
⎡
−2 3 −1
2 0
1
6 ⎦
2A + (−3B) = 2 ⎣ 4 3 −1 ⎦ + (−3) ⎣ 3 5
4 2
1
−3 6
5
⎤
⎤ ⎡
⎡
6
−9
3
4
0
2
6 −2 ⎦ + ⎣ −9 −15 −18 ⎦
=⎣ 8
−12 −6
−3
−6 12 10
⎤
⎡
10 −9
5
= ⎣ −1 −9 −20 ⎦
−18
6
7
In Example 1(a) reversing the order of the addition of the matrices gives the
same result
...
This is so because addition of real numbers
is commutative
...
Some other familiar properties that hold for real numbers also
hold for matrices and scalars
...
THEOREM 4
Properties of Matrix Addition and Scalar Multiplication Let A, B, and
C be m × n matrices and c and d be real numbers
...
A + B = B + A
2
...
c(A + B) = cA + cB
4
...
c(dA) = (cd)A
6
...
7
...
Confirming Pages
1
...
We will prove property 2 and leave the others as exercises
...
Let Ai , Bi , and Ci
denote the ith column vector of A, B, and C, respectively
...
⎥
(Ai + Bi ) + Ci = ⎝⎣
...
...
⎡
⎢
=⎣
ami
⎤
bmi
⎡
a1i + b1i
c1i
⎥ ⎢
...
...
...
ami + bmi
cmi
cmi
⎡
⎤
(a1i + b1i ) + c1i
⎥ ⎢
⎥
...
⎦=⎣
⎦
...
...
⎡
(ami + bmi ) + cmi
⎤
a1i + (b1i + c1i )
⎢
⎥
...
=⎣
⎦ = Ai + (Bi + Ci )
...
Matrix Multiplication
We have defined matrix addition and a scalar multiplication, and we observed that
these operations satisfy many of the analogous properties for real numbers
...
Matrix multiplication is more difficult
to define and is developed from the dot product of two vectors
...
⎥
⎣
⎣
...
un
v1
v2
...
...
For example,
⎤
⎤ ⎡
⎡
−5
2
⎣ −3 ⎦ · ⎣ 1 ⎦ = (2)(−5) + (−3)(1) + (−1)(4) = −17
4
−1
Now to motivate the concept and need for matrix multiplication we first introduce
the operation of multiplying a vector by a matrix
...
The first component of Bv is the dot product of the first row vector of B with v,
while the second component is the dot product of the second row vector of B with v,
so that
Bv =
1 −1
−2
1
1
3
(1)(1) + (−1)(3)
(−2)(1) + (1)(3)
=
Using this operation, the matrix B transforms the vector v =
−1 2
0 1
−2
...
3 Matrix Algebra
Thus, we see that A(Bv) is the product of the matrix
a11 b11 + a12 b21
a21 b11 + a22 b21
a11 b12 + a12 b22
a21 b12 + a22 b22
x
...
2
...
Using the matrices A and B given above, we have
AB =
=
1 −1
−1 2
−2
1
0 1
(−1)(1) + (2)(−2) (−1)(−1) + (2)(1)
(0)(1) + (1)(−2)
(0)(−1) + (1)(1)
This matrix transforms the vector
(AB)v =
1
3
to
−5 3
−2 1
4
1
=
−5 3
−2 1
in one step
...
The notion of matrices as transformations is taken up
again in Chap
...
For another illustration of the matrix product let
⎡
⎤
⎡
⎤
3 −2
5
1
3
0
⎢
⎥
A=⎣
and
B = ⎣ −1
4 −2 ⎦
2
1 −3 ⎦
−4
6
2
1
0
3
The entries across the first row of the product matrix AB are obtained from the dot
product of the first row vector of A with the first, second, and third column vectors of
Confirming Pages
32
Chapter 1 Systems of Linear Equations and Matrices
B, respectively
...
Finally, the terms in the third row of AB are the dot products of the third row vector
of A again with the first, second, and third column vectors of B, respectively
...
This condition can be
relaxed somewhat
...
DEFINITION 3
Matrix Multiplication Let A be an m × n matrix and B an n × p matrix; then
the product AB is an m × p matrix
...
Because matrix multiplication is only defined when the number
of columns of the matrix on the left equals the number of rows of the matrix on the
right, it is possible for AB to exist with BA being undefined
...
As a result, we cannot interchange the order when multiplying two matrices
unless we know beforehand that the matrices commute
...
Example 2 illustrates that even when AB and BA are both defined, they might
not be equal
...
3 Matrix Algebra
EXAMPLE 2
Verify that the matrices
A=
1 0
−1 2
0 1
1 1
B=
and
do not satisfy the commutative property for multiplication
...
0 1
2 1
=
−1 2
0 2
=
In Example 3 we describe all matrices that commute with a particular matrix
...
Let S be the set of all 2 × 2 matrices defined by
S=
a
c
0
a
a, c ∈ ޒ
Then each matrix in S commutes with the matrix A
...
A(B + C)
Solution
and
C=
3 2 −2 1
1 6 −2 4
b
...
Also since the matrices B and C have the same number of rows
and columns, the matrix B + C is defined, so the expressions in parts (a) and (b)
are defined
...
We first add the matrices B and C inside the parentheses and then multiply
on the left by the matrix A
...
In this case we compute AB and AC separately and then add the two resulting
matrices
...
3 Matrix Algebra
35
Notice that in Example 4 the matrix equation
A(B + C) = AB + AC
holds
...
They are listed in Theorem 5
...
1
...
3
...
A(BC) = (AB)C
c(AB) = (cA)B = A(cB)
A(B + C) = AB + AC
(B + C)A = BA + CA
We have already seen that unlike with real numbers, matrix multiplication does
not commute
...
Recall that if x and y are real numbers such that xy = 0, then either x = 0
or y = 0
...
For example, let
A=
1 1
1 1
Then
AB =
1 1
1 1
and
−1
1
−1
1
B=
−1
1
=
−1
1
0 0
0 0
Transpose of a Matrix
The transpose of a matrix is obtained by interchanging the rows and columns of a
matrix
...
For example, the transpose of the matrix
⎤
⎡
1 2 −3
4 ⎦
is
A=⎣ 0 1
−1 2
1
⎤
1 0 −1
2 ⎦
At = ⎣ 2 1
−3 4
1
⎡
Notice that the row vectors of A become the column vectors of At
...
Confirming Pages
36
Chapter 1 Systems of Linear Equations and Matrices
THEOREM 6
Suppose A and B are m × n matrices, C is an n × p matrix, and c is a scalar
...
(A + B)t = At + B t
2
...
(At )t = A
4
...
Since
AC is m × p, then (AC)t is p × m
...
So the sizes of the products agree
...
DEFINITION 5
EXAMPLE 5
Solution
Symmetric Matrix
An n × n matrix is symmetric provided that At = A
...
Let
A=
a
c
b
d
=
a
b
Then A is symmetric if and only if
A=
a
c
b
d
c
d
= At
which holds if and only if b = c
...
1
...
This allows algebra to be carried
out with matrices
...
3 Matrix Algebra
2
...
3
...
Even when AB and
BA are both defined, it is possible for AB ̸= BA
...
The distributive properties hold
...
5
...
6
...
3
7
...
In Exercises 1–4, use the matrices
A=
2
4
−3
1
C=
B=
1
5
1
...
−1 3
−2 5
8
...
In Exercises 9 and 10, use the matrices
1
−2
2 −3 −3
−3 −2
0
⎤
⎡
3 −1
B = ⎣ 2 −2 ⎦
3
0
A=
2
...
3
...
4
...
In Exercises 5 and 6, use the matrices
⎤
⎡
−3 −3 3
0 2 ⎦
A=⎣ 1
0 −2 3
⎤
⎡
−1 3 3
B = ⎣ −2 5 2 ⎦
1 2 4
⎤
⎡
−5 3 9
C = ⎣ −3 10 6 ⎦
2 2 11
9
...
10
...
11
...
12
...
Find (A − B) + C and 2A + B
...
Show that A + 2B − C = 0
...
⎤
−1
1 1
A = ⎣ 3 −3 3 ⎦
−1
2 1
⎤
⎡
−2
3 −3
2 ⎦
B = ⎣ 0 −1
3 −2 −1
⎡
⎤
−2 −2 −1
2
1 ⎦
A = ⎣ −3
1 −1 −1
⎤
⎡
1 −1 −2
3 ⎦
B = ⎣ −2 −2
−3
1 −3
⎡
37
Confirming Pages
38
Chapter 1 Systems of Linear Equations and Matrices
In Exercises 13–16, use the matrices
−2 −3
3
0
A=
B=
2 0
−2 0
2
0
−1 −1
C=
Find a 2 × 2 matrix B that is not the zero matrix,
such that AB is the zero matrix
...
Find all 2 × 2 matrices of the form
A=
such that
13
...
A2 = AA =
14
...
15
...
M=
2 0 −1
1 0 −2
A=
−3
−3
1
1
−3 −2
3 −1
C=
−1 −3
Whenever possible, perform the operations
...
17
...
Find all matrices of the form
1 1
a b
such that AM = MA
...
Find matrices A and B such that AB = 0 but
BA ̸= 0
...
Show there are no 2 × 2 matrices A and B such
that
1 0
AB − BA =
0 1
31
...
B t − 2A
19
...
If A and B are 2 × 2 matrices, show that the sum
of the terms on the diagonal of AB − BA is 0
...
BAt
21
...
b
c
28
...
Find (A + 2B)(3C)
...
Let
Bt )
23
...
24
...
Let
A=
⎤
1
0 0
A = ⎣ 0 −1 0 ⎦
0
0 1
⎡
−1
1
−2
2
B=
1
2
7
5
−1 −2
Show that AB = AC and yet B ̸= C
...
Let
A=
0 2
0 5
3
−1
34
...
If the matrices A and B commute, show that
A2 B = BA2
...
Suppose A, B, and C are n × n matrices and B
and C both commute with A
...
Show that BC and A commute
...
Give specific matrices to show that BC and
CB do not have to be equal
...
4 The Inverse of a Square Matrix
37
...
Show that if
for each vector x in ޒn , Ax = 0, then A is the
zero matrix
...
For each positive integer n, let
An =
1−n
n
Show that An Am = An+m
...
Find all 2 × 2 matrices that satisfy AAt = 0
...
Suppose that A and B are symmetric matrices
...
41
...
ß
1
...
An n × n matrix A is called idempotent provided
that A2 = AA = A
...
Show that if
AB = BA, then the matrix AB is idempotent
...
An n × n matrix A is skew-symmetric provided
At = −A
...
44
...
a
...
b
...
The Inverse of a Square Matrix
In the real number system, the number 1 is the multiplicative identity
...
For an n × n matrix A, we can
check that the n × n matrix
⎡
⎤
1 0 0 ··· 0
⎢ 0 1 0 ··· 0 ⎥
⎢
⎥
⎢
⎥
I = ⎢ 0 0 1 ··· 0 ⎥
⎢
...
⎥
⎣
...
⎦
...
0 0 0 ··· 1
is the multiplicative identity
...
For
4 × 4 identity matrices are, respectively,
⎤
⎡
1 0 0
1 0
⎣ 0 1 0 ⎦
and
0 1
0 0 1
example, the 2 × 2, 3 × 3, and
⎡
1
⎢ 0
⎢
⎣ 0
0
0
1
0
0
0
0
1
0
⎤
0
0 ⎥
⎥
0 ⎦
1
Confirming Pages
40
Chapter 1 Systems of Linear Equations and Matrices
DEFINITION 1
Inverse of a Square Matrix
matrix B such that
Let A be an n × n matrix
...
EXAMPLE 1
Find an inverse of the matrix
1 1
1 2
A=
Solution
In order for a 2 × 2 matrix B =
satisfy
1 1
1 2
x1
x3
x2
x4
x1
x3
=
x2
x4
to be an inverse of A, matrix B must
x1 + x3
x1 + 2x3
x2 + x4
x2 + 2x4
=
1 0
0 1
This matrix equation is equivalent to the linear system
⎧
=1
⎪ x1 + x3
⎪
⎨
x2
+ x4 = 0
=0
⎪
⎪ x1 + 2x3
⎩
x2
+ 2x4 = 1
The augmented matrix
⎡
1
⎢ 0
⎢
⎣ 1
0
and the reduced row echelon
⎤
⎡
0 1 0 1
1 0
⎢ 0 1
1 0 1 0 ⎥
⎥→⎢
⎣ 0 0
0 2 0 0 ⎦
1 0 2 1
0 0
form are given by
⎤
2
0 0
0 0 −1 ⎥
⎥
1 0 −1 ⎦
0 1
1
Thus, the solution is x1 = 2, x2 = −1, x3 = −1, x4 = 1, and an inverse matrix is
B=
2
−1
−1
1
The reader should verify that AB = BA = I
...
THEOREM 7
The inverse of a matrix, if it exists, is unique
...
That is, AB = BA = I and AC = CA = I
...
Indeed,
B = BI = B(AC) = (BA)C = (I )C = C
Confirming Pages
1
...
When
the inverse of a matrix A exists, we call A invertible
...
THEOREM 8
The inverse of the matrix A =
a
c
b
d
exists if and only if ad − bc ̸= 0
...
That is, if ad − bc = 0,
then the inverse does not exist
...
Confirming Pages
42
Chapter 1 Systems of Linear Equations and Matrices
To illustrate the use of the formula, let
A=
−1
3
2
1
then
1
6 − (−1)
A−1 =
3 1
−1 2
1
7
2
7
3
7
−1
7
=
For an example which underscores the necessity of the condition that ad − bc ̸= 0,
we consider the matrix
1 1
A=
1 1
Observe that in this case ad − bc = 1 − 1 = 0
...
To find the inverse of larger square matrices, we extend the method of augmented matrices
...
Let B be another n × n matrix, and let
B1 , B2 ,
...
Since AB1 , AB2 ,
...
⎥
...
⎥
AB1 = ⎢
...
⎦
⎣
...
⎦
...
...
⎥
⎣
...
0
⎡
⎢
⎢
Ax = ⎢
⎣
0
1
...
...
⎡
⎢
⎢
Ax = ⎢
⎣
0
0
...
...
But all n linear systems can be solved simultaneously
by row-reducing the n × 2n augmented matrix
⎡
⎤
a11 a12
...
0
⎢ a21 a22
...
0 ⎥
⎢
⎥
⎢
...
...
...
...
...
...
...
...
...
ann
0 0
...
4 The Inverse of a Square Matrix
43
On the left is the matrix A, and on the right is the matrix I
...
In this case, each
of the linear systems can be solved
...
Example 2 illustrates the procedure
...
The final result is
⎡
⎤
1 0 0 2 1 4
⎣ 0 1 0 1 1 2 ⎦
0 0 1 1 1 3
so the inverse matrix is
⎤
⎡
2 1 4
A−1 = ⎣ 1 1 2 ⎦
1 1 3
The reader should check that AA−1 = A−1 A = I
...
Solution
Following the procedure described
⎡
1
⎣ 3
3
above, we start with the matrix
⎤
−1 2 1 0 0
−3 1 0 1 0 ⎦
−3 1 0 0 1
Confirming Pages
44
Chapter 1 Systems of Linear Equations and Matrices
After the two row operations −3R1 + R2 → R2 followed by −3R1 + R3 → R3 ,
this matrix is reduced to
⎡
⎤
1 −1
2
1 0 0
⎣ 0
0 −5 −3 1 0 ⎦
0
0 −5 −3 0 1
Next perform the row operation −R2 + R3 → R3 to obtain
⎡
1 −1
2
⎣ 0
0 −5
0
0
0
⎤
1
0 0
−3
1 0 ⎦
0 −1 1
The 3 × 3 matrix of coefficients on the left cannot be reduced to the identity matrix,
and therefore, the original matrix does not have an inverse
...
The matrix A of Example 3 has two equal rows and cannot be row-reduced to the
identity matrix
...
Theorem 9 gives a formula for the inverse of the product of invertible
matrices
...
Then AB is invertible and
(AB)−1 = B −1 A−1
Proof Using the properties of matrix multiplication, we have
(AB)(B −1 A−1 ) = A(BB −1 )A−1 = AI A−1 = AA−1 = I
and
(B −1 A−1 )(AB) = B −1 (A−1 A)B = B −1 I B = BB −1 = I
Since, when it exists, the inverse matrix is unique, we have shown that the inverse
of AB is the matrix B −1 A−1
...
4 The Inverse of a Square Matrix
EXAMPLE 4
Solution
Suppose that B is an invertible matrix and A is any matrix with AB = BA
...
Since AB = BA, we can multiply both sides on the right by B −1 to obtain
(AB)B −1 = (BA)B −1
By the associative property of matrix multiplication this last equation can be written as
A(BB −1 ) = BAB −1
and since BB −1 = I, we have
A = BAB −1
Next we multiply on the left by B −1 to obtain
B −1 A = B −1 BAB −1
so
B −1 A = AB −1
as required
...
1
...
d −b
a b
1
...
If A =
−c
a
c d
3
...
4
...
Exercise Set 1
...
Find A−1 or
indicate that it does not exist
...
1
...
−3 1
1 2
45
3
...
1
2
1
2
⎤
0 1 −1
1 ⎦
5
...
⎣ −1 0 0 ⎦
2 1 1
⎤
⎡
3 −3
1
0
1 ⎦
7
...
⎣ 1
0 −1 3
⎤
−2 −3
3
0
⎢ 2
0 −2
0 ⎥
⎥
16
...
Let
A=
1
⎢ 0
10
...
⎢
⎣ −3 −2 −3 0 ⎦
0
1
3 3
⎡
⎤
1
0
0 0
⎢ −2
1
0 0 ⎥
⎥
12
...
⎢
⎣ 0
0 0
2 ⎦
0
0 0 −1
⎤
⎡
3
0 0 0
⎢ −6
1 0 0 ⎥
⎥
14
...
⎢
⎣ −1 0
0
0 ⎦
−2 1 −1
1
⎡
1
−4
B=
1 2
−1 3
Verify that AB + A can be factored as A(B + I )
and AB + B can be factored as (A + I )B
...
⎢
⎣ 0 0 −1 −1 ⎦
0 0
0 −2
⎡
2
3
18
...
19
...
Show that A2 − 2A + 5I = 0
...
Show that A−1 = 1 (2I − A)
...
Show in general that for any square matrix A
satisfying A2 − 2A + 5I = 0, the inverse is
A−1 = 1 (2I − A)
...
Determine those values
⎡
1
⎣ 3
1
of
λ
2
2
21
...
Determine those values
⎡
2
⎣ 3
1
of λ for which the matrix
⎤
λ 1
2 1 ⎦
2 1
is not invertible
...
is not invertible
...
Let
⎡
1
A=⎣ 1
0
λ for which the matrix
⎤
0
0 ⎦
1
⎤
λ 0
1 1 ⎦
0 1
Confirming Pages
47
1
...
Determine those values of λ for which A is
invertible
...
For those values found in part (a) find the
inverse of A
...
Determine those values of λ for which the matrix
⎤
⎡
λ −1
0
⎣ −1
λ −1 ⎦
0 −1
λ
is invertible
...
Find 2 × 2 matrices A and B that are not
invertible but A + B is invertible
...
Find 2 × 2 matrices A and B that are invertible
but A + B is not invertible
...
If A and B are n × n matrices and A is invertible,
show that
(A + B)A−1 (A − B) = (A − B)A−1 (A + B)
28
...
, B k , where k
is any positive integer, in terms of A, P , and
P −1
...
Let A and B be n × n matrices
...
Show that if A is invertible and AB = 0, then
B = 0
...
If A is not invertible, show there is an n × n
matrix B that is not the zero matrix and such
that AB = 0
...
Show that if A is symmetric and invertible, then
A−1 is symmetric
...
31
...
−1
32
...
33
...
34
...
35
...
Show that the product of two
orthogonal matrices is orthogonal
...
Show the matrix
A=
cos θ − sin θ
sin θ
cos θ
is orthogonal
...
)
37
...
If A, B, and C are n × n invertible matrices,
show that
(ABC)−1 = C −1 B −1 A−1
b
...
, Ak are n × n
invertible matrices, then
(A1 A2 · · · Ak )−1 = A−1 A−1 · · · A−1
k
k−1
1
38
...
Show that if ann ̸= 0 for
all n, then A is invertible and the inverse is
⎡ 1
⎤
0
0
...
0 ⎥
⎢
⎥
a22
⎢
...
...
⎥
...
...
...
...
...
⎢
⎥
1
0 ⎦
0
...
0
ann
39
...
Show that if
A is in upper (lower) triangular form, then A−1 is
also in upper (lower) triangular form
...
Suppose B is row equivalent to the n × n
invertible matrix A
...
41
...
a
...
Show the 2 × 2 linear system in the variables
x1 and x3 that is generated in part (a) yields
d = 0
...
c
...
Confirming Pages
Chapter 1 Systems of Linear Equations and Matrices
1
...
We can then write a linear system as a single
equation, using a matrix and two vectors, which generalizes the linear equation ax = b
for real numbers
...
To illustrate the process, consider the linear system
⎧
⎨ x − 6y − 4z = −5
2x − 10y − 9z = −4
⎩
−x + 6y + 5z = 3
The matrix of coefficients is given by
⎡
⎤
1 −6 −4
A = ⎣ 2 −10 −9 ⎦
−1
6
5
Now let x and b be the vectors
⎤
⎡
x
x=⎣ y ⎦
z
and
⎤
−5
b = ⎣ −4 ⎦
3
⎡
Then the original linear system can be rewritten as
Ax = b
We refer to this equation as the matrix form of the linear system and x as the vector
form of the solution
...
In particular, if A is invertible, we can multiply both sides
of the previous equation on the left by A−1 , so that
A−1 (Ax) = A−1 b
Since matrix multiplication is associative, we have
A−1 A x = A−1 b
therefore,
x = A−1 b
For the example above, the inverse of the matrix
⎤
⎡
⎡
2
1
−6 −4
is
A−1 = ⎣ − 1
A = ⎣ 2 −10 −9 ⎦
2
−1
6
5
1
3
7
1
2
1
2
0
1
⎤
⎦
Confirming Pages
1
...
This fact is recorded in Theorem 10
...
EXAMPLE 1
Write the linear system in matrix form and solve
...
By
Theorem 8, of Sec
...
4, the inverse is
1
10
3
4
−1
2
Now, by Theorem 10, the solution to the linear system is
x=
1
10
3
4
−1
2
1
2
=
1
10
1
8
=⎣
so that
x=
DEFINITION 1
1
10
and
y=
⎡
1
10
8
10
⎤
⎦
8
10
Homogeneous Linear System A homogeneous linear system is a system of
the form Ax = 0
...
Confirming Pages
50
Chapter 1 Systems of Linear Equations and Matrices
EXAMPLE 2
Solution
Let
⎤
1 2 1
A=⎣ 1 3 0 ⎦
1 1 2
Find all vectors x such that Ax = 0
...
To find the general solution, we row-reduce
the augmented matrix
⎡
⎤
⎡
⎤
1 2 1 0
1 2
1 0
⎣ 1 3 0 0 ⎦
⎣ 0 1 −1 0 ⎦
to
1 1 2 0
0 0
0 0
From the reduced matrix we see that x3 is free with x2 = x3 , and x1 = −2x2 − x3 =
−3x3
...
ޒ
S= ⎣
⎭
⎩
t
Notice that the trivial solution is also included in S as a particular solution with
t = 0
...
If a homogeneous linear system Ax = 0 is such that A is invertible, then by
Theorem 10, the only solution is x = 0
...
1
...
EXAMPLE 3
Solution
Show that if x and y are distinct solutions to the homogeneous system Ax = 0,
then x + cy is a solution for every real number c
...
The result of Example 3 shows that if the homogeneous equation Ax = 0 has
two distinct solutions, then it has infinitely many solutions
...
5 Matrix Equations
51
equation Ax = 0 either has one solution (the trivial solution) or has infinitely many
solutions
...
To see this, let u and v be distinct solutions to Ax = b and c a real number
...
THEOREM 11
If A is an m × n matrix, then the linear system Ax = b has no solutions, one
solution, or infinitely many solutions
...
1
...
2
...
3
...
4
...
Exercise Set 1
...
1
...
2x + 3y = −1
−x + 2y = 4
−4x − y = 3
−2x − 5y = 2
⎧
⎨ 2x − 3y + z = −1
−x − y + 2z = −1
3
...
⎧
⎨
−x
⎩
−x
3y − 2z = 2
+ 4z = − 3
− 3z = 4
⎧
⎨ 4x1 + 3x2 − 2x3 − 3x4 = −1
−3x1 − 3x2 + x3
= 4
5
...
⎧
⎨
3x2 + x3 − 2x4 = −4
4x2 − 2x3 − 4x4 = 0
⎩
x1 + 3x2 − 2x3
= 3
Confirming Pages
52
Chapter 1 Systems of Linear Equations and Matrices
In Exercises 7–12, given the matrix A and vectors x
and b, write the equation Ax = b as a linear system
...
A =
2
2
−5
1
x=
x
y
b=
−2 4
0 3
x=
x
y
b=
A−1
3
2
8
...
−1
1
⎤
0 −2
0
9
...
A = ⎣ 4 −1 1 ⎦
−4
3 5
⎤
⎤
⎡
⎡
−3
x
x = ⎣ y ⎦b = ⎣ 2 ⎦
1
z
11
...
A = ⎣ 2
1
⎡
x1
⎢ x2
⎢
x=⎣
x3
x4
15
...
5 −5
3
1 −2 −4
⎤
⎥
⎥b =
⎦
2
0
⎤
−2 4 −2
0 1
1 ⎦
0 1 −2
⎤
⎤
⎡
4
⎥
⎥ b = ⎣ −3 ⎦
⎦
1
In Exercises 13–16, use the information given to solve
the linear system Ax = b
...
A−1
A−1
⎤
2 0 −1
4 ⎦
=⎣ 4 1
1 2
4
⎤
⎡
1
b = ⎣ −4 ⎦
1
⎡
⎤
−4 3 −4
0 ⎦
=⎣ 2 2
1 2
4
⎤
⎡
2
b=⎣ 2 ⎦
−2
⎡
⎤
−3 −2
0
3
⎢ −1
2 −2
3 ⎥
⎥
=⎢
⎣ 0
1
2 −3 ⎦
−1
0
3
1
⎤
⎡
2
⎢ −3 ⎥
⎥
b=⎢
⎣ 2 ⎦
3
⎡
⎤
3
0 −2 −2
⎢ 2
0
1 −1 ⎥
⎥
=⎢
⎣ −3 −1 −1
1 ⎦
2 −1 −2 −3
⎤
⎡
1
⎢ −4 ⎥
⎥
b=⎢
⎣ 1 ⎦
1
⎡
In Exercises 17–22, solve the linear system by finding
the inverse of the coefficient matrix
...
x + 4y = 2
3x + 2y = −3
18
...
⎩
x − 3y + 2z = 1
⎧
⎨ −2x − 2y − z = 0
−x − y
=−1
20
...
5 Matrix Equations
⎧
⎪ − x1 − x2 − 2x3 + x4 = −1
⎪
⎨
2x1 + x2 + 2x3 − x4 = 1
21
...
Find a nonzero 3 × 3 matrix A such that the
vector
⎤
⎡
1
⎣ −1 ⎦
1
23
...
Find a nonzero 3 × 3 matrix A such that the
vector
⎤
⎡
−1
⎣ 2 ⎦
1
⎧
+ x4 = −3
⎪ −x1 − 2x2
⎪
⎨
−x1 + x2 − 2x3 + x4 = −2
22
...
1 −1
2
3
Use the inverse matrix to solve the linear system
Ax = b for the given vector b
...
b =
2
1
b
...
Let
⎤
−1
0 −1
1 −3 ⎦
A = ⎣ −3
1 −3
2
⎡
Use the inverse matrix to solve the linear system
Ax = b for the given vector b
...
b = ⎣ 1 ⎦
1
⎤
⎡
1
b
...
Let
⎡
A=⎣
⎤
−1 −4
3 12 ⎦
2
8
Find a nontrivial solution to Ax = 0
...
Let
⎤
1 −2 4
A = ⎣ 2 −4 8 ⎦
3 −6 12
Find a nontrivial solution to Ax = 0
...
29
...
Show that if Au = Av and
u ̸= v, then A is not invertible
...
Suppose that u is a solution to Ax = b and that v
is a solution to Ax = 0
...
31
...
Write the linear system in matrix form Ax = b
and find the solution
...
Find a 2 × 3 matrix C such that CA = I
...
)
c
...
32
...
Write the linear system in matrix form Ax = b
and find the solution
...
Find a 2 × 3 matrix C such that CA = I
...
Show that the solution to the linear system is
given by x = Cb
...
6
ß
54
Determinants
In Sec
...
4 we saw that the number ad − bc, associated with the 2 × 2 matrix
A=
a
c
b
d
has special significance
...
In particular, using this terminology, the matrix
A is invertible if and only if the determinant is not equal to 0
...
The information
provided by the determinant has theoretical value and is used in some applications
...
For this reason the information desired is generally
found by using other more efficient methods
...
EXAMPLE 1
Find the determinant of the matrix
...
A =
Solution
a
...
|A| =
c
...
A =
3 5
4 2
c
...
DEFINITION 2
Determinant of a 3 × 3 Matrix
The determinant of the matrix
⎤
a11 a12 a13
A = ⎣ a21 a22 a23 ⎦
a31 a32 a33
⎡
Confirming Pages
1
...
EXAMPLE 2
Find the determinant of the matrix
⎡
2
A=⎣ 3
5
Solution
⎤
1 −1
1
4 ⎦
−3
3
By Definition 2, the determinant is given by
3
1
3 4
1 4
+ (−1)
−1
5 −3
5 3
−3 3
= (2) [3 − (−12)] − (1)(9 − 20) + (−1)(−9 − 5)
= 30 + 11 + 14
= 55
det(A) = |A| = 2
⎡
+
⎣ −
+
−
+
−
⎤
+
− ⎦
+
Figure 1
In Example 2, we found the determinant of a 3 × 3 matrix by using an expansion
along the first row
...
The pattern for the signs is shown in Fig
...
The
expansion along the second row is given by
2
2 −1
1 −1
−4
+1
5
5
3
−3
3
= −3(3 − 3) + (6 + 5) − 4(−6 − 5) = 55
det(A) = |A| = −3
1
−3
The 2 × 2 determinants in this last equation are found from the original matrix by
deleting the second row and first column, the second row and second column, and
the second row and the third column, respectively
...
In this case
2 1
2 −1
1 −1
+3
− (−3)
3 1
3
4
1
4
= 5(4 + 1) + 3(8 + 3) + 3(2 − 3) = 55
det(A) = |A| = 5
Confirming Pages
56
Chapter 1 Systems of Linear Equations and Matrices
The determinant can also be computed using expansions along any column in a
similar manner
...
DEFINITION 3
Minors and Cofactors of a Matrix If A is a square matrix, then the minor
Mij , associated with the entry aij , is the determinant of the (n − 1) × (n − 1)
matrix obtained by deleting row i and column j from the matrix A
...
For the matrix of Example 2, several minors are
M11 =
1 4
−3 3
M12 =
3 4
5 3
and
M13 =
3
5
1
−3
Using the notation of Definition 3, the determinant of A is given by the cofactor
expansion
det(A) = a11 C11 + a12 C12 + a13 C13
= 2(−1)2 (15) + 1(−1)3 (−11) − 1(−1)4 (−14)
= 30 + 11 + 14 = 55
DEFINITION 4
Determinant of a Square Matrix
If A is an n × n matrix, then
n
det(A) = a11 C11 + a12 C12 + · · · + a1n C1n =
a1k C1k
k=1
Similar to the situation for 3 × 3 matrices, the determinant of any square matrix
can be found by expanding along any row or column
...
Then the determinant of A equals the cofactor expansion
along any row or any column of the matrix
...
, n and
j = 1,
...
One
such class of matrices is the square triangular matrices
...
A square matrix is a
diagonal matrix if aij = 0, for all i ̸= j
...
6 Determinants
Some examples of upper triangular matrices are
⎤
⎡
2 −1 0
1 1
⎣ 0
0 3 ⎦
and
0 2
0
0 2
⎤
1 1 0 1
⎣ 0 0 0 1 ⎦
0 0 1 1
⎡
and some examples of lower triangular matrices are
1 0
1 1
THEOREM 13
⎤
2 0 0
⎣ 0 1 0 ⎦
1 0 2
⎡
57
⎡
1
⎢ 0
⎢
⎣ 1
0
and
0
0
3
1
0
0
1
2
⎤
0
0 ⎥
⎥
0 ⎦
1
If A is an n × n triangular matrix, then the determinant of A is the product of the
terms on the diagonal
...
The proof for a
lower triangular matrix is identical
...
If n = 2, then
det(A) = a11 a22 − 0 and hence is the product of the diagonal terms
...
We need to show
that the same is true for an (n + 1) × (n + 1) triangular matrix A
...
⎥
...
...
...
...
...
⎥
...
...
...
⎢
⎥
⎣ 0
an,n+1 ⎦
0
0 · · · ann
0
0
0 · · · 0 an+1,n+1
Using the cofactor expansion along row n + 1, we have
a12
a22
0
...
...
...
0
det(A) = (−1)(n+1)+(n+1) an+1,n+1
a11
0
0
...
...
...
...
· · · ann
Since the determinant on the right is n × n and upper triangular, by the inductive
hypothesis
det(A) = (−1)2n+2 (an+1,n+1 )(a11 a22 · · · ann )
= a11 a22 · · · ann an+1,n+1
Properties of Determinants
Determinants for large matrices can be time-consuming to compute, so any properties
of determinants that reduce the number of computations are useful
...
Confirming Pages
58
Chapter 1 Systems of Linear Equations and Matrices
THEOREM 14
Let A be a square matrix
...
If two rows of A are interchanged to produce a matrix B, then det(B) =
− det(A)
...
If a multiple of one row of A is added to another row to produce a matrix B,
then det(B) = det(A)
...
If a row of A is multiplied by a real number α to produce a matrix B, then
det(B) = αdet(A)
...
For the case n = 2 let
A=
a
c
b
d
Then det(A) = ad − bc
...
Assume that the result holds for n × n matrices and A is an (n + 1) × (n + 1)
matrix
...
Expanding
the determinant of A along row i and of B along row j, we have
and
det(A) = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin
det(B) = aj 1 Dj 1 + aj 2 Dj 2 + · · · + aj n Dj n
= ai1 Dj 1 + ai2 Dj 2 + · · · + ain Dj n
where Cij and Dij are the cofactors of A and B, respectively
...
If the signs of the cofactors Cij and Dij are the same, then
they differ by one row interchanged
...
In either case, by the inductive
hypothesis, we have
det(B) = − det(A)
The proofs of parts 2 and 3 are left as exercises
...
To highlight the usefulness of this theorem, recall that by Theorem 13, the
determinant of a triangular matrix is the product of the diagonal entries
...
This
method is illustrated in Example 3
...
6 Determinants
Solution
Since column 1 has two zeros, an expansion along this column will involve the
fewest computations
...
THEOREM 15
Let A and B be n × n matrices and α a real number
...
The determinant computation is multiplicative
...
3
...
5
...
det(αA) = αn det(A)
det(At ) = det(A)
If A has a row (or column) of all zeros, then det(A) = 0
...
If A has a row (or column) that is a multiple of another row (or column), then
det(A) = 0
...
Verify Theorem 15, part 1
...
We also have det(A) det(B) = (−8)(5) = −40
...
THEOREM 16
A square matrix A is invertible if and only if det(A) ̸= 0
...
To establish the converse, we will prove the contrapositive statement
...
By the remarks at the end of Sec
...
4, the matrix A is
row equivalent to a matrix R with a row of zeros
...
Then
det(A−1 ) =
1
det(A)
Proof If A is invertible, then as in the proof of Theorem 16, det(A) ̸= 0,
det(A−1 ) ̸= 0, and
det(A) det(A−1 ) = 1
Therefore,
det(A−1 ) =
1
det(A)
The final theorem of this section summarizes the connections between inverses,
determinants, and linear systems
...
Then the following statements are equivalent
...
The matrix A is invertible
...
The linear system Ax = b has a unique solution for every vector b
...
6 Determinants
61
3
...
4
...
5
...
The graph of the equation
(x − h)2
(y − k)2
+
=1
a2
b2
is an ellipse with center
(h, k), horizontal axis of
length 2a, and vertical
axis of length 2b
...
In the 17th century, Johannes Kepler’s observations of the orbits of
planets about the sun led to the conjecture that these orbits are elliptical
...
The graph of an
equation of the form
Ax 2 + Bxy + Cy 2 + Dx + Ey + F = 0
is a conic section
...
An astronomer who wants to determine the approximate orbit of an object traveling about the sun sets up a coordinate system in the plane of the orbit with the
sun at the origin
...
31), (1, 1), (1
...
21), (2, 1
...
5, 1)
...
We need to find the equation of an ellipse in the form
Ax 2 + Bxy + Cy 2 + Dx + Ey + F = 0
Each data point must satisfy this equation; for example, since the point (2, 1
...
31) + C(1
...
31) + F = 0
so
4A + 2
...
7161C + 2D + 1
...
1C +
0
...
62B + 1
...
31E + F
⎪
⎪ 2
...
82B + 1
...
5D + 1
...
25A + 2
...
5D +
E+F
=0
=0
=0
=0
=0
Since the equation Ax 2 + Bxy + Cy 2 + Dx + Ey + F = 0 describing the
ellipse passing through the five given points has infinitely many solutions, by
Theorem 17, we have
Revised Confirming Pages
62
Chapter 1 Systems of Linear Equations and Matrices
x2
xy
y2
x
y 1
0
0 0
...
31 1
1
1
1
1
1 1
4 2
...
72
2 1
...
25 1
...
46 1
...
21 1
6
...
5
1 2
...
014868x 2 + 0
...
039y 2 + 0
...
003y + 0
...
2
...
To illustrate the technique
consider the 2 × 2 linear system
ax + by = u
cx + dy = v
with ad − bc ̸= 0
...
To eliminate the variable y, we multiply the first equation by d and the second
equation by b, and then we subtract the two equations
...
av − cu
y=
ad − bc
Using determinants, we can write the solution as
x=
u b
v d
a b
c d
and
y=
Notice that the solutions for x and y are similar
...
The determinant in the numerator for x is formed
by replacing the first column of the coefficient matrix with the column of constants
on the right-hand side of the linear system
...
This method of solving a linear system is called Cramer’s rule
...
6 Determinants
EXAMPLE 6
63
Use Cramer’s rule to solve the linear system
...
The solution
is given by
x=
THEOREM 18
2 3
3 7
29
14 − 9
5
=
=
29
29
and
y=
2 2
−5 3
29
=
6 − (−10)
16
=
29
29
Cramer’s Rule Let A be an n × n invertible matrix, and let b be a column vector
with n components
...
If x = ⎢
...
⎦
...
, n
Proof Let Ii be the matrix obtained by replacing the ith column of the identity
matrix with x
...
Therefore,
xi =
det(Ai )
det(A)
Confirming Pages
64
Chapter 1 Systems of Linear Equations and Matrices
If a unique solution exists, then Cramer’s rule can be used to solve larger square
linear systems
...
EXAMPLE 7
Solution
Solve the linear system
...
Fact Summary
Let A and B be n × n matrices
...
c d
2
...
...
...
...
...
...
...
1
...
The matrix A is invertible if and only if det(A) ̸= 0
...
6 Determinants
65
4
...
5
...
6
...
7
...
8
...
If A has a row or column of zeros, then det(A) = 0
...
If one row or column of A is a multiple of another row or column, then
det(A) = 0
...
If A is invertible, then det(A−1 ) = det(A)
...
6
In Exercises 1–4, evaluate the determinant of the
matrix by inspection
...
⎣ 0
0
0 4
⎤
⎡
1 2 3
2
...
⎢
⎣ 4
2 2 0 ⎦
1
1 6 5
⎤
⎡
1 −1
2
4 ⎦
4
...
5
...
1
5
3
−2
⎤
1 0
0
0 ⎦
7
...
⎣ 7 2 1 ⎦
3 6 6
⎡
9
...
Find the determinant of the matrix by using an
expansion along row 1
...
Find the determinant of the matrix by using an
expansion along row 2
...
Find the determinant of the matrix by using an
expansion along column 2
...
Interchange rows 1 and 3 of the matrix, and
find the determinant of the transformed matrix
...
Multiply row 1 of the matrix found in part (d)
by −2, and find the determinant of the new
matrix
...
Confirming Pages
66
Chapter 1 Systems of Linear Equations and Matrices
f
...
First, use an expansion along row 3 of
the new matrix
...
g
...
10
...
Find the determinant of the matrix by using an
expansion along row 4
...
Find the determinant of the matrix by using an
expansion along row 3
...
Find the determinant of the matrix by using an
expansion along column 2
...
In (a), (b), and (c), which computation do you
prefer, and why?
e
...
In Exercises 11–26, find the determinant of the
matrix
...
11
...
4 3
9 2
13
...
1 2
4 13
15
...
1 1
2 2
⎤
5 −5 −4
5 ⎦
17
...
⎣ 2
−3 −1 −5
⎤
⎡
−3
4 5
1 4 ⎦
19
...
⎣ 1
−4
0
4
⎤
⎡
1 −4 1
21
...
⎣ 4 0 0 ⎦
1 2 4
⎡
2 −2 −2 −2
⎢ −2
2
3
0
23
...
⎢
⎣ −1 −1 −3
2
−1 −2
2
1
⎡
−1
1
1
0
⎢ 0
0 −1
0
⎢
0
1 −1
25
...
⎢ 1
⎢
⎣ 0
1
1
1
−1
1
1 −1
⎡
In Exercises 27–30, let
⎡
⎤
⎥
⎥
⎦
⎤
⎥
⎥
⎦
a
A=⎣ d
g
and assume det(A) = 10
...
6 Determinants
27
...
28
...
29
...
30
...
Find x, assuming
⎡
g
h
i
⎤
d
e ⎦
f
⎤
x
2
1
1 ⎦=0
0 −5
x2
det ⎣ 2
0
32
...
Suppose a1 ̸= b1
...
Use the three systems to answer the questions
...
Form the coefficient matrices A, B, and C,
respectively, for the three systems
...
Find det(A), det(B), and det(C)
...
Which of the coefficient matrices have
inverses?
d
...
e
...
f
...
67
35
...
⎧
⎨ x − y − 2z = 3
−x + 2y + 3z = 1
⎩
2x − 2y − 2z = −2
a
...
b
...
c
...
d
...
36
...
⎧
⎨ x + 3y − 2z = −1
2x + 5y + z = 2
⎩
2x + 6y − 4z = −2
a
...
b
...
c
...
d
...
37
...
⎧
− z=−1
⎨ −x
2x
+ 2z = 1
⎩
x − 3y − 3z = 1
a
...
b
...
c
...
d
...
In Exercises 38–43, use the fact that the graph of the
general equation
Ax 2 + Bxy + Cy 2 + Dx + Ey + F = 0
is essentially a parabola, circle, ellipse, or hyperbola
...
a
...
b
...
Confirming Pages
68
Chapter 1 Systems of Linear Equations and Matrices
39
...
Find the equation of the parabola in the form
2
Cy + Dx + Ey + F = 0
In Exercises 44–51, use Cramer’s rule to solve the
linear system
...
2x + 3y = 4
2x + 2y = 4
45
...
that passes through the points (−3, −3), (−1, 2),
and (3, 0)
...
Sketch the graph of the circle
...
−9x − 4y = 3
−7x + 5y = −10
41
...
Find the equation of the hyperbola in the form
48
...
−x − 3y = 4
−8x + 4y = 3
that passes through the points (−2, −2), (3, 2),
and (4, −3)
...
Sketch the graph of the parabola
...
a
...
b
...
42
...
Find the equation of the ellipse in the form
Ax 2 + Cy 2 + Dx + Ey + F = 0
that passes through the points (−3, 2), (−1, 3),
(1, −1), and (4, 2)
...
Sketch the graph of the ellipse
...
a
...
b
...
ß
1
...
⎩
4x
− z = −8
⎧
⎨ 2x + 3y + 2z = −2
−x − 3y − 8z = −2
51
...
An n × n matrix is skew-symmetric provided
At = −A
...
53
...
54
...
Elementary Matrices and LU Factorization
In Sec
...
2 we saw how the linear system Ax = b can be solved by using Gaussian
elimination on the corresponding augmented matrix
...
The
upper triangular form of the resulting matrix made it easy to find the solution by using
back substitution
...
1
...
) In a similar manner, if an augmented
matrix is reduced to lower triangular form, then forward substitution can be used to
find the solution of the corresponding linear system
...
7 Elementary Matrices and LU Factorization
69
first equation of the linear system
⎧
= 3
⎨ x1
−x1 + x2
= −1
⎩
2x1 − x2 + x3 = 5
we obtain the solution x1 = 3, x2 = 2, and x3 = 1
...
In this section we show how, in certain cases, an m × n matrix A can be written
as A = LU, where L is a lower triangular matrix and U is an upper triangular matrix
...
For example, an LU factorization of the
−3 −2
is given by
matrix
3
4
−3
3
−2
4
=
−1 0
1 2
3 2
0 1
3 2
−1 0
...
with L =
Elementary Matrices
As a first step we describe an alternative method for carrying out row operations using
elementary matrices
...
As an illustration, the elementary matrix E1 is formed by interchanging the first
and third rows of the 3 × 3 identity matrix I , that is,
⎤
⎡
0 0 1
E1 = ⎣ 0 1 0 ⎦
1 0 0
Corresponding to the three row operations given in Theorem 2 of Sec
...
2, there
are three types of elementary matrices
...
Also, the row operation kR1 + R2 −→ R2 applied to I yields the
elementary matrix
⎤
⎡
1 0 0
E2 = ⎣ k 1 0 ⎦
0 0 1
Confirming Pages
70
Chapter 1 Systems of Linear Equations and Matrices
Next, if c ̸= 0, the row operation cR2 −→ R2
⎡
1 0
E3 = ⎣ 0 c
0 0
performed on I produces the matrix
⎤
0
0 ⎦
1
Using any row operation, we can construct larger elementary matrices from larger
identity matrices in a similar manner
...
To illustrate the process, let A be the 3 × 3 matrix given by
⎤
⎡
1 2 3
A=⎣ 4 5 6 ⎦
7 8 9
Multiplying A by the matrix E1 , defined above,
⎡
7 8
E1 A = ⎣ 4 5
1 2
we obtain
⎤
9
6 ⎦
3
Observe that E1 A is the result of interchanging the first and third rows of A
...
THEOREM 19
Let A be an m × n matrix and E the elementary matrix obtained from the m × m
identity matrix I by a single row operation R
...
Then R(A) = EA
...
Specifically, let Ei be the elementary matrix corresponding to the
row operation Ri with 1 ≤ i ≤ k
...
⎡
The elementary matrices corresponding to these
⎤
⎡
⎡
1 0 0
1 0
E2 = ⎣ 0 1
E1 = ⎣ −3 1 0 ⎦
0 0 1
1 0
row operations are given
⎤
⎡
0
1 0
0 ⎦
E3 = ⎣ 0 1
1
0 3
by
⎤
0
0 ⎦
1
Confirming Pages
1
...
⎡
The Inverse of an Elementary Matrix
An important property of elementary matrices is that they are invertible
...
Then E is invertible
...
Proof Let E be an elementary matrix
...
1
...
There are three cases depending
on the form of E
...
Second, if E is the result of multiplying one row
of I by a nonzero scalar c, then det(E) = c det(I ) = c ̸= 0
...
In either case, det(E) ̸= 0 and hence E is invertible
...
1
...
In this
case starting with the n × 2n augmented matrix
[E | I ]
we reduce the elementary matrix on the left (to I ) by applying the reverse operation
used to form E, obtaining
I | E −1
That E −1 is also an elementary matrix follows from the fact that the reverse of
each row operation is also a row operation
...
The corresponding elementary matrix
is given by
⎤
⎡
1 2 0
E=⎣ 0 1 0 ⎦
0 0 1
Since det(E) = 1, then E is invertible with
⎤
⎡
1 −2 0
1 0 ⎦
E −1 = ⎣ 0
0
0 1
Confirming Pages
72
Chapter 1 Systems of Linear Equations and Matrices
Observe that E −1 corresponds to the row operation R2 : −2R2 + R1 −→ R1 which
says to subtract 2 times row 2 from row 1, reversing the original row operation R
...
1
...
Theorem 21
gives a restatement of this fact in terms of elementary matrices
...
The matrix A is row equivalent to B if and only if
there are elementary matrices E1 , E2 ,
...
In light of Theorem 21, if A is row equivalent to B, then B is row equivalent to
A
...
, Ek
...
, and E1 , we obtain
−1
−1
−1
A = E1 · · · Ek−1 Ek B
−1
−1
−1
Since each of the matrices E1 , E2 ,
...
Theorem 22 uses elementary matrices to provide a characterization of invertible
matrices
...
Proof First assume that there are elementary matrices E1 , E2 ,
...
To show this,
we multiply both sides of A = E1 E2 · · · Ek−1 Ek by B to obtain
−1
−1 −1
−1
−1 −1
BA = (Ek · · · E2 E1 )A = (Ek · · · E2 E1 )(E1 E2 · · · Ek−1 Ek ) = I
establishing the claim
...
In Sec
...
4,
we showed that A is row equivalent to the identity matrix
...
, Ek such that I = Ek Ek−1 · · · E2 E1 A
...
Since E1 ,
...
LU Factorization
There are many reasons why it is desirable to obtain an LU factorization of a matrix
...
Finding input
vectors xi requires that we solve k linear systems
...
7 Elementary Matrices and LU Factorization
A=
0
0
Figure 1
73
same for each linear system, the process is greatly simplified if A is replaced with its
LU factorization
...
If A is an n × n matrix with an LU factorization
given by A = LU, then L and U are also n × n
...
1
...
1
...
If this determinant
is not zero, then by Theorem 9 of Sec
...
4 the inverse of the matrix A is given by
A−1 = (LU )−1 = U −1 L−1
To describe the process of obtaining an LU factorization of an m × n matrix A,
suppose that A can be reduced to an upper triangular matrix by a sequence of row
operations which correspond to lower triangular elementary matrices
...
, Lk such that
Lk Lk−1 · · · L1 A = U
Since each of the matrices Li with 1 ≤ i ≤ k is invertible, we have
A = L−1 L−1 · · · L−1 U
k
1
2
By Theorem 20, L−1 , L−1 ,
...
They are also lower
k
1
2
triangular
...
Observe that L is lower triangular as it is
k
1
2
the product of lower triangular matrices
...
EXAMPLE 2
Solution
Find an LU factorization of the matrix
⎡
3
A=⎣ 6
−1
⎤
6 −3
15 −5 ⎦
−2
6
Observe that A can be row-reduced to an upper triangular matrix by means of
the row operations R1 : 1 R1 −→ R1 , R2 : −6R1 + R2 −→ R2 , and R3 : R1 +
3
R3 −→ R3
...
Specifically, A must be reducible to
upper triangular form without any row interchanges
...
Theorem 23
summarizes these results
...
Lk
...
k
1
2
A simple example of a matrix that cannot be reduced to upper triangular form
0 1
...
(See Exercise 29
...
EXAMPLE 3
Solution
Find an LU factorization of the matrix
⎤
⎡
1 −3 −2
0
1 −1 ⎦
A = ⎣ 1 −2
2 −4
3
2
⎤
−3 −2
0
1
3 −1 ⎦
0
1
4
⎡
1
Observe that A can be reduced to the upper triangular matrix U =⎣ 0
0
by means of the elementary matrices
⎤
⎤
⎡
⎡
⎡
1 0 0
1 0 0
E1 = ⎣ −1 1 0 ⎦
E2 = ⎣ 0 1 0 ⎦
E3 = ⎣
0 0 1
−2 0 1
1
0
1
⎤
0 0
1 0 ⎦
−2 1
Confirming Pages
1
...
To illustrate the procedure, consider the linear system Ax = b with
⎤
⎡
3
b = ⎣ 11 ⎦ and A the matrix of Example 2
...
Next we solve the linear system U x = y
...
The following steps summarize the procedure for solving the linear system
Ax = b when A admits an LU factorization
...
2
...
4
...
Define the vector y by means of the equation U x = y
...
Use back substitution to solve the system U x = y for x
...
PLU Factorization
We have seen that a matrix A has an LU factorization provided that it can be rowreduced without interchanging rows
...
In this
case the matrix A can be factored as A = P LU, where P is a permutation matrix,
that is, a matrix that results from interchanging rows of the identity matrix
...
The corresponding elementary matrices are given by
⎤
⎤
⎤
⎡
⎡
⎡
0 0 1
1 0 0
1
0 0
1 0 ⎦
E2 = ⎣ −1 1 0 ⎦
and
E3 = ⎣ 0
E1 = ⎣ 0 1 0 ⎦
1 0 0
0 0 1
0 −1 1
Confirming Pages
1
...
Hence,
−1
A = E1
⎡
0
=⎣ 0
1
= P LU
−1 −1
E2 E3 U
⎤
⎤⎡
⎤⎡
1 2
0
1 0 0
0 1
3 ⎦
1 0 ⎦⎣ 1 1 0 ⎦⎣ 0 2
0 0 −5
0 1 1
0 0
Fact Summary
1
...
2
...
3
...
4
...
5
...
6
...
Exercise Set 1
...
Find the 3 × 3 elementary matrix E that performs
the row operation
...
Compute EA, where
⎤
⎡
1 2
1
2 ⎦
A=⎣ 3 1
1 1 −4
1
...
R1 ↔ R2
3
...
−R1 + R3 −→ R3
In Exercises 5–10:
a
...
b
...
5
...
A =
−2 5
2 5
⎤
1 2 −1
3 ⎦
7
...
A = ⎣ 3 1 0 ⎦
−2 1 1
⎤
⎡
0 1 1
9
...
A = ⎢
⎣ 0
1
0
0
1
0
0
1
0
0
⎤
1
0 ⎥
⎥
0 ⎦
0
In Exercises 11–16, find the LU factorization of the
matrix A
...
A =
1 −2
−3
7
12
...
14
...
16
...
−2x + y = −1
4x − y = 5
3x − 2y = 2
−6x + 5y = − 7
2
⎧
⎨ x + 4y − 3z = 0
−x − 3y + 5z = −3
19
...
⎩
−2x + 4y − z = 4
⎧
⎪ x − 2y + 3z + w = 5
⎪
⎨
x − y + 5z + 3w = 6
21
...
⎤
⎡
0
1 −1
0 ⎦
23
...
A = ⎣ 2 1
1 0 −3
In Exercises 25–28, find the inverse of the matrix A
by using an LU factorization
...
A =
⎤
⎥
⎥
⎦
1
−3
26
...
17
...
⎪ −x − 2y − z + 4w = 1
⎪
⎩
2x + 2y + 2z + 2w = 1
1
2
4
−11
7
20
⎤
2 1 −1
27
...
A = ⎣ 3 −1 1 ⎦
−3
1 0
⎡
29
...
30
...
Show that if
A is row equivalent to B and B is row equivalent
to C, then A is row equivalent to C
...
Show that if A and B are n × n invertible
matrices, then A and B are row equivalent
...
Suppose that A is an n × n matrix with an LU
factorization, A = LU
...
What can be said about the diagonal entries
of L?
b
...
c
...
Confirming Pages
1
...
8
79
Applications of Systems of Linear Equations
In the opening to this chapter we introduced linear systems by describing their connection to the process of photosynthesis
...
Balancing Chemical Equations
Recall from the introduction to this chapter that a chemical equation is balanced if
there are the same number of atoms, of each element, on both sides of the equation
...
EXAMPLE 1
Propane is a common gas used for cooking and home heating
...
When propane burns, it combines with oxygen gas, O2 , to form carbon
dioxide, CO2 , and water, H2 O
...
Solution
We need to find whole numbers x1 , x2 , x3 , and x4 , so that the equation
x1 C3 H8 + x2 O2 −→ x3 CO2 + x4 H2 O
is balanced
...
For example, if t = 8, then
x1 = 2, x2 = 10, x3 = 6, and x4 = 8
...
In these models, edges and points are
used to represent streets and intersections, respectively
...
To balance a traffic network, we assume that the outflow of
each intersection is equal to the inflow, and that the total flow into the network is
equal to the total flow out
...
1
...
100
300
500
300
200
400
400
600
500
Figure 1
Solution
To complete the traffic model, we need to find values for the eight unknown flows,
as shown in Fig
...
100
x1
x6
300
x2
200
x8
x3
x7
400
500
300
400
x4
600
x5
500
Figure 2
Our assumptions about the intersections give us the set of linear equations
⎧
= 300 + x1
⎪x2 + x6
⎪
⎪
⎪
⎪100 + 500 = x6 + 300
⎪
⎪
⎪
⎨200 + x
= x2 + x7
3
⎪
= 400 + x4
⎪300 + x7
⎪
⎪
⎪400 + 500 = x + x
⎪
3
8
⎪
⎪
⎩
x4 + 600
= 400 + x5
Confirming Pages
1
...
However, to obtain particular solutions,
we must choose numbers for s and t that produce positive values for each xi in the
system (otherwise we will have traffic going in the wrong direction!) For example,
s = 400 and t = 300 give a viable solution
...
EXAMPLE 3
Table 1 gives the amount, in milligrams (mg), of vitamin A, vitamin C, and calcium
contained in 1 gram (g) of four different foods
...
Suppose
that a dietician wants to prepare a meal that provides 200 mg of vitamin A, 250
mg of vitamin C, and 300 mg of calcium
...
The
amounts for each of the foods needed to satisfy the dietician’s requirement can be
found by solving the linear system
⎧
⎪ 10x1 + 30x2 + 20x3 + 10x4 = 200
⎨
50x1 + 30x2 + 25x3 + 10x4 = 250
⎪
⎩
60x1 + 20x2 + 40x3 + 25x4 = 300
Rounded to two decimal places, the solution to the linear system is given by
x1 = 0
...
11t
x3 = 5 − 0
...
13 + 0
...
Hence, particular solutions
can be found by choosing nonnegative values of t such that
0 ≤ 5 − 0
...
4
0
...
In a real
economy there are tens of thousands of goods and services
...
For example, consider an economy
for which the outputs are services, raw materials, and manufactured goods
...
Table 2
Services
Raw materials
Manufacturing
Services
0
...
05
0
...
03
0
...
04
Manufactured goods
0
...
3
0
...
00 worth of service, the service sector requires $0
...
05 worth of raw materials, and $0
...
The
data in Table 2 are recorded in the matrix
⎤
⎡
0
...
05 0
...
03 0
...
04 ⎦
0
...
3 0
...
8 Applications of Systems of Linear Equations
83
This matrix is called the input-output matrix
...
Each
component of Ax represents the level of production that is used by the corresponding
sector and is called the internal demand
...
04
Ax = ⎣ 0
...
02
given by
⎤
⎤ ⎡
⎤⎡
16
200
0
...
02
0
...
04 ⎦ ⎣ 100 ⎦ = ⎣ 16 ⎦
64
150
0
...
2
This result means that the service sector requires $16 billion of services, raw materials, and manufactured goods
...
Alternatively, suppose that the external demand D is given
...
Thus, to balance the economy, x must satisfy
x − Ax = D
that is,
(I − A)x = D
When I − A is invertible, then
x = (I − A)−1 D
EXAMPLE 4
Suppose that the external demand for services, raw materials, and manufactured
goods in the economy described in Table 2 is given by
⎤
⎡
300
D = ⎣ 500 ⎦
600
Find the levels of production that balance the economy
...
96 −0
...
02
⎣ −0
...
96 −0
...
02 −0
...
8
x3
Since the matrix on the left is invertible, the production vector x can be found by
multiplying both sides by the inverse
...
04 0
...
03
⎣ x2 ⎦ = ⎣ 0
...
06 0
...
04 0
...
27
x3
⎤
⎡
360
≈ ⎣ 569 ⎦
974
⎡
So the service sector must produce approximately $360 billion worth of services, the raw material sector must produce approximately $569 billion worth of raw
materials, and the manufacturing sector must produce approximately $974 billion
worth of manufactured goods
...
8
In Exercises 1–4, use the smallest possible positive
integers to balance the chemical equation
...
When subjected to heat, aluminium reacts with
copper oxide to produce copper metal and
aluminium oxide according to the equation
Al3 + CuO −→ Al2 O3 + Cu
Balance the chemical equation
...
When sodium thiosulfate solution is mixed with
brown iodine solution, the mixture becomes
colorless as the iodine is converted to colorless
sodium iodide according to the equation
I2 + Na2 S2 O3 −→ NaI + Na2 S4 O6
Balance the chemical equation
...
Cold remedies such as Alka-Seltzer use the
reaction of sodium bicarbonate with citric acid in
solution to produce a fizz (carbon dioxide gas)
...
For every 100 mg
of sodium bicarbonate, how much citric acid
should be used? What mass of carbon dioxide will
be produced?
4
...
Find the traffic flow pattern for the network in the
figure
...
Give one
specific solution
...
8 Applications of Systems of Linear Equations
8
...
Flow rates are in cars per half-hour
...
Find the traffic flow pattern for the network in the
figure
...
Give one
specific solution
...
The table lists the number of milligrams of
vitamin A, vitamin B, vitamin C, and niacin
contained in 1 g of four different foods
...
Determine
how many grams of each food must be included,
and describe any limitations on the quantities of
each food that can be used
...
Find the traffic flow pattern for the network in the
figure
...
What
is the current status of the road labeled x5 ?
150
Group 2
Group 3
Group 4
Vitamin A
20
30
40
10
Vitamin B
40
20
35
20
Vitamin C
50
40
10
30
5
5
10
5
Niacin
x4
x5
100
200
x2
100 200
500
x1
x3
x4
x2
50
x3
100
10
...
Also listed are the daily
recommended amounts based on a 2000-calorie
diet
...
Write the input-output matrix A for the
economy
...
If the levels of production, in billions, of the
three sectors of the economy are 300, 150, and
200, respectively, find the internal demand
vector for the economy
...
An economy is divided into three sectors as
described in the table
...
Services
Raw materials
0
...
04
0
...
03
0
...
04
Manufacturing
0
...
3
d
...
Manufacturing
Services
c
...
0
...
Economies are, in general, very complicated with many sectors
...
If the external demands to the sectors are given in the vector D, determine
the levels of production that balance the economy
...
041
0
...
018
...
022
0
...
018
0
...
01
0
...
032
0
...
03
0
...
019
0
...
001
0
...
011
0
...
018
0
...
039
0
...
021
0
...
049
0
...
039
0
...
041
0
...
05
0
...
009
0
...
011
0
...
025
0
...
009
0
...
038
0
...
007
0
...
043
0
...
005
0
...
002
0
...
011
0
...
035
0
...
003
0
...
029
0
...
039
0
...
049
0
...
044
0
...
024
0
...
024
0
...
048
0
...
001
0
...
023
0
...
047
0
...
023
0
...
04
0
...
028
0
...
019
0
...
027
0
...
021
0
...
The table contains estimates for national health
care in billions of dollars
...
021
0
...
047
0
...
019
0
...
042
0
...
042
0
...
Make a scatter plot of the data
...
Use the 1970, 1980, and 1990 data to write a
system of equations that can be used to find a
parabola that approximates the data
...
Solve the system found in part (b)
...
Plot the parabola along with the data points
...
Use the model found in part (c) to predict an
estimate for national health care spending in
2010
...
The number of cellular phone subscribers
worldwide from 1985 to 2002 is given in the
Confirming Pages
1
...
Use the data from 1985, 1990, and 2000 to
fit a parabola to the data points
...
Year
Cellular Phone
Subscribers(millions)
1985
1
1990
11
2000
741
2001
955
2002
1155
In Exercises 15–18, use the power of a matrix to solve
the problems
...
Demographers are interested in the movement of
populations or groups of populations from one
region to another
...
a
...
b
...
Multiply the
matrices to find the populations
...
If in the year 2002 the population of a city was
1,500,000 and of the suburbs was 600,000,
write a matrix product that gives a 2 × 1 vector
containing the populations in the city and in
87
the suburbs in the year 2004
...
d
...
16
...
The researcher estimates that
it is likely that 80 percent of the infected mice
will recover in a week and 20 percent of healthy
mice will contract the disease in the same week
...
Write a 2 × 2 matrix that describes the
percentage of the population that transition
from healthy to healthy, healthy to infected,
infected to infected, and infected to healthy
...
Determine the number of healthy and infected
mice after the first week
...
Determine the number of healthy and infected
mice after the second week
...
Determine the number of healthy and infected
mice after six weeks
...
In a population of 50,000 there are 20,000
nonsmokers, 20,000 smokers of one pack or less a
day, and 10,000 smokers of more than one pack a
day
...
After
one month what part of the population is in each
category? After two months how many are in
each category? After one year how many are in
each category?
18
...
She hired an advertising firm to develop a
Confirming Pages
88
Chapter 1 Systems of Linear Equations and Matrices
campaign to introduce her product to the market
...
How long will
it take for the new company to acquire 20 percent
of the consumers?
In Exercises 19 and 20, the figure shows an electrical
network
...
Batteries are
represented using two parallel line segments of
unequal length, and it is understood the current flows
out of the terminal denoted by the longer line segment
...
To analyze an
electrical network requires Kirchhoff’s laws, which
state all current flowing into a junction, denoted using
a black dot, must flow out and the sum of the products
of current I and resistance R around a closed path (a
loop) is equal to the total voltage in the path
...
a
...
b
...
c
...
I1
R1 = 4
Loop
I2
I5
R2 = 6
R5 = 2
18 V
Loop
I3
R3 = 4
I4
R4 = 6
I6
R6 = 3
Loop
16 V
In Exercises 21 and 22, use the fact that if a plate has
reached a thermal equilibrium, then the temperature at
a grid point, not on the boundary of the plate, is the
average of the temperatures of the four closest grid
points
...
Estimate the
temperature at each interior grid point
...
30
20
25
20
22
...
a
...
b
...
c
...
25
20
Confirming Pages
1
...
Consider the linear system
⎧
⎪ x + y + 2z + w = 3
⎪
⎨
−x
+ z + 2w = 1
+ w=−2
⎪ 2x + 2y
⎪
⎩
x + y + 2z + 3w = 5
a
...
b
...
c
...
d
...
e
...
f
...
2
...
Can you decide by inspection whether the
determinant of the coefficient matrix is 0?
Explain
...
Can you decide by inspection whether the
linear system has a unique solution for every
choice of a, b, c, and d? Explain
...
Find all idempotent matrices of the form
a
0
b
c
4
...
Find all
a b
that will commute with every
matrices
c d
matrix in S
...
Let A and B be 2 × 2 matrices
...
Show that the sum of the terms on the main
diagonal of AB − BA is 0
...
If M is a 2 × 2 matrix and the sum of the main
diagonal entries is 0, show there is a constant c
such that
M 2 = cI
c
...
Find the traffic flow pattern for the network in the
figure
...
Give one
specific solution
...
Determine the values of a, b, c, and d for
which the linear system is consistent
...
Determine the values of a, b, c, and d for
which the linear system is inconsistent
...
Does the linear system have a unique solution
or infinitely many solutions?
f
...
500
600
Confirming Pages
90
Chapter 1 Systems of Linear Equations and Matrices
7
...
Explain why the matrix
⎡
1 1
⎢ 0 1
⎢
A=⎢ 0 0
⎢
⎣ 0 0
0 0
1
1
1
0
0
1
1
1
1
0
1
1
1
1
1
⎤
⎥
⎥
⎥
⎥
⎦
is invertible
...
Determine the maximum number of 1’s that
can be added to A such that the resulting
matrix is invertible
...
Show that if A is invertible, then At is invertible
and (At )−1 = (A−1 )t
...
A matrix A is skew-symmetric provided At = −A
...
Let A be an n × n matrix and define
B = A + At
and
C = A − At
Show that B is symmetric and C is
skew-symmetric
...
Show that every n × n matrix can be written as
the sum of a symmetric and a skew-symmetric
matrix
...
Suppose u and v are solutions to the linear system
Ax = b
...
Chapter 1: Chapter Test
In Exercises 1–45, determine whether the statement is
true or false
...
A 2 × 2 linear system has one solution, no
solutions, or infinitely many solutions
...
A 3 × 3 linear system has no solutions, one
solution, two solutions, three solutions, or
infinitely many solutions
...
If A and B are n × n matrices with no zero
entries, then AB ̸= 0
...
Homogeneous linear systems always have at least
one solution
...
If A is an n × n matrix, then Ax = 0 has a
nontrivial solution if and only if the matrix A has
an inverse
...
If A and B are n × n matrices and Ax = Bx for
every n × 1 matrix x, then A = B
...
If A, B, and C are invertible n × n matrices, then
(ABC)−1 = A−1 B −1 C −1
...
If A is an invertible n × n matrix, then the linear
system Ax = b has a unique solution
...
If A and B are n × n invertible matrices and
AB = BA, then A commutes with B −1
...
If A and B commute, then A2 B = BA2
...
The matrix ⎡
⎢
⎢
⎢
⎢
⎣
⎤
1 −2 3 1
0
0 −1 4 3
2 ⎥
⎥
0
0 3 5 −2 ⎥
⎥
0
0 0 0
4 ⎦
0
0 0 0
6
does not have an inverse
...
Interchanging two rows of a matrix changes the
sign of its determinant
...
Multiplying a row of a matrix by a nonzero
constant results in the determinant being
multiplied by the same nonzero constant
...
If two rows of a matrix are equal, then the
determinant of the matrix is 0
...
Performing the operation aRi + Rj → Rj on a
matrix multiplies the determinant by the
constant a
...
If A =
1 2
, then A2 − 7A = 2I
...
If A and B are invertible matrices, then A + B is
an invertible matrix
...
If A and B are invertible matrices, then AB is an
invertible matrix
...
8 Applications of Systems of Linear Equations
19
...
20
...
21
...
The matrix
2 −1
4 −2
does not have an inverse
...
If the n × n matrix A is idempotent and
invertible, then A = I
...
If A and B commute, then At and B t commute
...
If A is an n × n matrix and det(A) = 3, then
det(At A) = 9
...
The coefficient matrix is
2
2
A=
1 −1
27
...
The linear system has a unique solution
...
The only solution to the linear system is
x = −7/4 and y = −5/4
...
The inverse of the coefficient matrix A is
A−1 =
1
4
1
4
1
2
−1
2
31
...
The solution to the system is given by the matrix
equation
x
y
=
1
4
1
4
1
2
−1
2
3
1
In Exercises 33–36, use the linear system
⎧
⎨ x1 + 2x2 − 3x3 = 1
2x1 + 5x2 − 8x3 = 4
⎩
−2x1 − 4x2 + 6x3 = −2
33
...
The determinant of the coefficient matrix is 0
...
A solution to the linear system is
x1 = −4, x2 = 0, and x3 = −1
...
The linear system has infinitely many solutions,
and the general solution is given by x3 is free,
x2 = 2 + 2x3 , and x1 = −3 − x3
...
After the operation R1 ←→ R2 is performed, the
matrix becomes
⎤
⎡
1
0 1 −1
⎣ −1 −2 1
3 ⎦
2
1 2
1
38
...
The matrix A is row equivalent to
⎤
⎡
1
0 1 −1
⎣ 0 −2 2
2 ⎦
0
0 1
4
Confirming Pages
92
Chapter 1 Systems of Linear Equations and Matrices
40
...
If A is viewed as the augmented matrix of a
linear system, then the solution to the linear
system is x = −5, y = 3, and z = 4
...
The matrix products AB and BA are both defined
...
The matrix expression −2BA + 3B simplifies to a
2 × 3 matrix
...
The matrix expression −2BA + 3B equals
−3 −5 3
−5
7 16
In Exercises 42–45, use the matrices
⎤
1 1
2
1 ⎦
A = ⎣ −2 3
4 0 −3
⎡
B=
1 2 1
−1 3 2
45
...
1
2
...
3
Vectors in ޒn 94
Linear Combinations 101
Linear Independence 111
I
n the broadest sense a signal is any timevarying quantity
...
A seismic disturbance is detected
as signals from within the earth
...
A video signal is a sequence of
images
...
A compact disc contains
discrete signals representing sound
...
The period of
a wave is the time it takes for one cycle of the
wave, and the frequency is the number of cycles
that occur per unit of time
...
Every periodic motion is the mixture of
sine and cosine waves with frequencies proportional to a common frequency, called
the fundamental frequency
...
T
T
T
T
T
T
and for any n, the signal can be approximated by the fundamental set
πx
πx
2πx
2πx
nπx
nπx
1, cos
, sin
, cos
, sin
,
...
A
square wave on the interval [−π, π] along with the approximations
4
4
4
4
4
4
sin x, sin x +
sin 3x, sin x +
sin 3x +
sin 5x
π
π
3π
π
3π
5π
and
4
4
4
4
sin x +
sin 3x +
sin 5x +
sin 7x
π
3π
5π
7π
are shown in Fig
...
As more terms are added, the approximations become better
...
1 we defined a vector, with n entries, as an n × 1 matrix
...
In this chapter
we study sets of vectors and analyze their additive properties
...
In
Chap
...
2
...
1 Vectors in ޒn
95
Similarly Euclidean 3-space, denoted by , 3ޒis the set of all vectors with three entries,
that is,
⎧⎡
⎫
⎤
⎨ x1
⎬
⎣ = 3ޒx2 ⎦ x1 , x2 , x3 are real numbers
⎩
⎭
x3
In general, Euclidean n-space consists of vectors with n entries
...
⎥ xi ∈ ,ޒfor i = 1, 2,
...
⎣ = ޒ
⎪
⎪
...
(1, 2)
x
Figure 2
y
v
v
x
Figure 3
DEFINITION 2
Geometrically, in 2ޒand 3ޒa vector is a directed line segment from the origin to
the point whose coordinates are equal to the components of the vector
...
2
...
The length of a vector is the length of the line segment from the initial point
√
√
1
is 12 + 22 = 5
...
For example, the length of v =
2
vector is unchanged if it is relocated elsewhere in the plane, provided that the length
and direction remain unchanged
...
See Fig
...
When the initial point of a vector is the origin, we say
vector v =
2
that the vector is in standard position
...
The operations of addition and scalar multiplication are defined
componentwise as they are for matrices
...
1
...
⎥+⎢
⎣
...
⎡
un
Let u and v be vectors in
⎤ ⎡
v1
u 1 + v1
v 2 ⎥ ⎢ u 2 + v2
...
...
...
vn
un + vn
⎤
⎥
⎥
⎦
Confirming Pages
Chapter 2 Linear Combinations and Linear Independence
2
...
⎥
⎣
...
...
Two vectors u and v are added according to the
parallelogram rule, as shown in Fig
...
The vector cu is a scaling of the vector u
...
4(b) are examples of scaling a vector with 0 < c < 1 and c > 1
...
4(b)
...
4(c)
...
4(c),
it is common to draw the difference vector u − v from the terminal point of v to the
terminal point of u
...
⎡
⎤
−1
v=⎣ 4 ⎦
3
⎡
and
⎤
4
w=⎣ 2 ⎦
6
⎡
Using the componentwise definitions of addition and scalar multiplication, we have
⎤⎞
⎡ ⎤
⎤ ⎡
⎛ ⎡
4
−1
1
(2u + v) − 3w = ⎝2 ⎣ −2 ⎦ + ⎣ 4 ⎦⎠ − 3 ⎣ 2 ⎦
6
3
3
⎤
⎤⎞ ⎡
⎤ ⎡
⎛⎡
−12
−1
2
= ⎝⎣ −4 ⎦ + ⎣ 4 ⎦⎠ + ⎣ −6 ⎦
−18
3
6
⎤
⎤ ⎡
⎤ ⎡
⎡
−11
−12
1
= ⎣ 0 ⎦ + ⎣ −6 ⎦ = ⎣ −6 ⎦
−9
−18
9
y
y
y
2u
v
EXAMPLE 1
v
u
u+
96
1
2u
u
x
(a)
v
x
−2u
(b)
Figure 4
u−v
u
x
−v
(c)
Confirming Pages
2
...
1
...
If u and v are vectors in ޒn , then
⎡
u1
⎢ u2
u+v=⎢
...
...
⎥=⎢
...
⎦ ⎣
...
...
...
...
...
⎣
...
0
⎤
⎥
⎥
⎦
⎤
⎥
⎥=v+u
⎦
each component equal to 0, that is,
⎤
⎥
⎥
⎦
Hence, for any vector v in ޒn , we have v + 0 = v
...
This enables us to define the
additive inverse of any vector v as the vector
⎡
⎤
−v1
⎢ −v2 ⎥
−v = ⎢
...
⎦
...
Theorem 1 summarizes the essential algebraic properties of vectors in ޒn
...
3
...
The
remaining justifications are left as exercises
...
The following algebraic
properties hold
...
Commutative property:
2
...
Additive identity: The vector 0 satisfies 0 + u = u + 0 = u
...
Additive inverse: For every vector u, the vector −u satisfies
u + (−u) =−u + u = 0
...
c(u + v) = cu + cv
6
...
c(du) = (cd)u
8
...
This will be important in Sec
...
2
...
Also verify that
for any scalars c and d, c(du) = (cd)u
...
For the second verification, we have
1
−1
=c
d
−d
4
−3
7
−1
u + (v + w) =
c(du) = c d
+
=
cd
−cd
7
−1
= cd
1
−1
= (cd)u
The properties given in Theorem 1 can be used to establish other useful properties
of vectors in ޒn
...
⎥ = 0
⎣
...
...
1 Vectors in ޒn
99
We also have the property that (−1)u = −u
...
In the case of real numbers, the statement xy = 0 is
equivalent to x = 0 or y = 0
...
That
is, if cu = 0, then either c = 0 or u = 0
...
⎥
⎣
...
...
, cun = 0
...
Otherwise, u1 = u2 = · · · = un = 0, that is, u = 0
...
The definitions of vector addition and scalar multiplication in ޒn agree with
the definitions for matrices in general and satisfy all the algebraic properties
of matrices
...
The zero vector, whose components are all 0, is the additive identity for
vectors in ޒn
...
3
...
Multiplying such a vector by a positive scalar changes
the length of the vector but not the direction
...
Exercise Set 2
...
Find u + v and v + u
...
Find (u + v) + w and u + (v + w)
...
Find u − 2v + 3w
...
Find −u + 1 v − 2w
...
Find −3(u + v) − w
...
Find 2u − 3(v − 2w)
...
Find −2(u + 3v) + 3u
...
Find 3u − 2v
...
If x1 and x2 are real scalars, verify that
(x1 + x2 )u = x1 u + x2 u
...
If x1 is a real scalar, verify that
x1 (u + v) = x1 u + x1 v
...
⎡ ⎤
2
11
...
v = ⎣ 3 ⎦
2
⎤
⎡
0
13
...
v = ⎣ 0 ⎦
1
2
In Exercises 15 and 16, find w such that
−u + 3v − 2w = 0
...
u = ⎣ 4 ⎦
0
2
⎤
⎤
⎡
⎡
2
−2
v = ⎣ −3 ⎦
16
...
Explain what the solution to the linear system implies
about the vector equation
...
c1
1
−2
+ c2
3
−2
=
−2
−1
18
...
c1
1
2
+ c2
−1
−2
=
3
1
20
...
c1 ⎣ 4 ⎦ + c2 ⎣ 3 ⎦ + c3 ⎣ 1 ⎦ = ⎣ −3 ⎦
4
3
−1
−5
⎡
⎤
⎤
⎤
⎤ ⎡
⎡
⎡
−1
0
1
1
22
...
c1 ⎣ 0 ⎦ + c2 ⎣ 1 ⎦ + c3 ⎣ −1 ⎦ = ⎣ 0 ⎦
2
1
1
−1
⎡
⎤
⎤
⎤
⎡
⎡ ⎤ ⎡
6
−1
0
2
24
...
25
...
c1
1
1
27
...
c1
3
1
2
1
=
a
b
−1
1
=
a
b
+ c2
+ c2
2
−2
+ c2
+ c2
6
2
=
=
a
b
a
b
In Exercises 29–32, find all vectors
⎤
⎡
a
v=⎣ b ⎦
c
so that the vector equation c1 v1 + c2 v2 + c3 v3 = v can
be solved
...
2 Linear Combinations
v3
30
...
v1
v3
⎤
0
v2 = ⎣ 1 ⎦
1
⎡
⎤
0
v2 = ⎣ 1 ⎦
0
⎡
⎤
−1
32
...
33
...
34
...
35
...
⎤
2
v2 = ⎣ 1 ⎦
1
⎡
36
...
37
...
38
...
39
...
40
...
2
...
v1
⎤
1
=⎣ 0 ⎦
1
⎤
⎡
2
=⎣ 1 ⎦
0
⎤
⎡
1
=⎣ 1 ⎦
1
⎤
⎡
1
=⎣ 1 ⎦
0
⎤
⎡
1
=⎣ 1 ⎦
−1
⎤
⎡
3
=⎣ 2 ⎦
0
⎡
101
Linear Combinations
In three-dimensional Euclidean space 3ޒthe coordinate vectors that define the three
axes are the vectors
⎤
⎡
⎡ ⎤
⎡ ⎤
1
0
0
e2 = ⎣ 1 ⎦
and
e3 = ⎣ 0 ⎦
e1 = ⎣ 0 ⎦
0
0
1
Every vector in 3ޒcan then be obtained from
example, the vector
⎤
⎡
⎤
⎡
⎡
1
2
v = ⎣ 3 ⎦ = 2⎣ 0 ⎦+ 3⎣
0
3
these three coordinate vectors, for
⎤
⎤
⎡
0
0
1 ⎦ + 3⎣ 0 ⎦
1
0
Geometrically, the vector v is obtained by adding scalar multiples of the coordinate vectors, as shown in Fig
...
The vectors e1 , e2 , and e3 are not unique in this
respect
...
Combining vectors in this manner plays a
central role in describing Euclidean spaces and, as we will see in Chap
...
DEFINITION 1
⎡
Linear Combination Let S = {v1 , v2 ,
...
, ck be scalars
...
Any vector v that can be written
in this form is also called a linear combination of the vectors of S
...
EXAMPLE 1
Determine whether the vector
⎤
−1
v=⎣ 1 ⎦
10
is a linear combination of the vectors
⎤
⎤
⎡
⎡
1
−2
v1 = ⎣ 0 ⎦
v2 = ⎣ 3 ⎦
and
1
−2
⎡
⎤
−6
v3 = ⎣ 7 ⎦
5
⎡
Confirming Pages
2
...
EXAMPLE 2
Determine whether the vector
⎤
−5
v = ⎣ 11 ⎦
−7
is a linear combination of the vectors
⎤
⎡
⎡ ⎤
1
0
v1 = ⎣ −2 ⎦
v2 = ⎣ 5 ⎦
and
2
5
⎡
⎤
2
v3 = ⎣ 0 ⎦
8
⎡
Confirming Pages
104
Chapter 2 Linear Combinations and Linear Independence
Solution
The vector v is a linear combination of the vectors v1 , v2 , and v3 if there are scalars
c1 , c2 , and c3 , such that
⎤
⎤
⎤
⎤
⎡
⎡
⎡
⎡
1
0
2
−5
⎣ 11 ⎦ = c1 ⎣ −2 ⎦ + c2 ⎣ 5 ⎦ + c3 ⎣ 0 ⎦
2
5
8
−7
The augmented matrix corresponding to this equation is given by
⎡
⎤
1 0 2 −5
⎣ −2 5 0 11 ⎦
2 5 8 −7
Reducing the augmented matrix
⎡
⎤
1 0 2 −5
⎣ −2 5 0 11 ⎦
2 5 8 −7
to
⎡
1 0 2
⎣ 0 5 4
0 0 0
⎤
−5
1 ⎦
2
shows that the linear system is inconsistent
...
To see this geometrically, first observe that the vector v3 is a linear combination
of v1 and v2
...
Specifically,
4
c1 v1 + c2 v2 + c3 v3 = c1 v1 + c2 v2 + c3 2v1 + v2
5
4
= (c1 + 2c3 )v1 + c2 + c3 v2
5
The set of all vectors that are linear combinations of v1 and v2 is a plane in , 3ޒ
which does not contain the vector v, as shown in Fig
...
v
Figure 2
Confirming Pages
2
...
⎥
⎣
...
0
n vectors given by
⎤
⎡
0
0
⎢ 0
1 ⎥
...
...
...
0
1
These vectors can also be defined by the equations
(ek )i =
105
⎤
⎥
⎥
⎦
1 if i = k
0 if i ̸= k
where 1 ≤ k ≤ n
...
Indeed, for any vector v in
ޒn , let the scalars be the components of the vector, so that
⎡
⎤
⎡ ⎤
⎡
⎤
⎡ ⎤
v1
1
0
0
⎢ v2 ⎥
⎢ 0 ⎥
⎢ 1 ⎥
⎢ 0 ⎥
v = ⎢
...
⎥ + v2 ⎢
...
⎥
⎣
...
⎦
⎣
...
⎦
...
...
0
vn
0
1
= v 1 e 1 + v2 e 2 + · · · + vn e n
Linear combinations of more abstract objects can also be formed, as illustrated
in Example 3 using 2 × 2 matrices
...
3 when we consider abstract vector spaces
...
Thus, the
matrix A is a linear combination of the matrices M1 , M2 , and M3
...
Show that if x1 , x2 ,
...
Since x1 , x2 ,
...
Axn = 0
Then using the algebraic properties of matrices, we have
A(c1 x1 + c2 x2 + · · · + cn xn ) = A(c1 x1 ) + A(c2 x2 ) + · · · + A(cn xn )
= c1 (Ax1 ) + c2 (Ax2 ) + · · · + cn (Axn )
= c1 0 + c2 0 + · · · + cn 0
=0
The result of Example 4 is an extension of the one given in Example 3 of
Sec
...
5
...
...
...
...
...
...
...
If we use the
column vectors of the coefficient matrix A, then the matrix equation can be written
Confirming Pages
2
...
⎣
...
am1
⎤
⎡
⎤
⎡
a12
a1n
⎥
⎢ a22 ⎥
⎢ a2n
⎥ + x2 ⎢
...
⎦
⎣
...
...
am2
amn
⎤
107
⎡
⎤
b1
⎥ ⎢ b2 ⎥
⎥=⎢
...
⎦
...
This equation can also
be written as
x1 A1 + x2 A2 + · · · + xn An = b
where Ai denotes the ith column vector of the matrix A
...
THEOREM 2
The linear system Ax = b is consistent if and only if the vector b can be expressed
as a linear combination of the column vectors of A
...
Let A be an m × n matrix and B an n × p
matrix
...
a1n ⎡ b1i ⎤
⎢ a21 a22
...
...
⎥
...
...
⎦
⎣
...
...
...
amn
⎡
⎤
a11 b1i + a12 b2i + · · · + a1n bni
⎢ a21 b1i + a22 b2i + · · · + a2n bni ⎥
⎢
⎥
=⎢
⎥
...
...
...
⎣
⎦
...
...
...
...
...
ni ⎥
...
amn bni
= b1i A1 + b2i A2 + · · · + bni An
Since for i = 1, 2,
...
Confirming Pages
108
Chapter 2 Linear Combinations and Linear Independence
Fact Summary
1
...
, en
...
If x1 , x2 ,
...
3
...
The left side is a linear combination of the
column vectors of A
...
The linear system Ax = b is consistent if and only if b is a linear
combination of the column vectors of A
...
2
In Exercises 1–6, determine whether the vector v is a
linear combination of the vectors v1 and v2
...
v =
v2 =
2
...
v =
v2 =
4
...
v = ⎣ 10 ⎦
10
⎤
⎡
1
v2 = ⎣ 4 ⎦
2
⎤
⎡
−2
6
...
⎤
⎡
⎡ ⎤
2
2
v1 = ⎣ −2 ⎦
7
...
2 Linear Combinations
⎤
5
8
...
v = ⎣ 1 ⎦
5
⎡
⎡
⎤
−1
v2 = ⎣ −1 ⎦
3
⎤
−3
10
...
v = ⎢
⎣ 17 ⎦
7
⎡
⎤
1
v1 = ⎣ −1 ⎦
0
⎤
⎡
3
v3 = ⎣ −1 ⎦
−3
⎤
⎡
1
v1 = ⎣ 2 ⎦
−1
⎤
⎡
0
v3 = ⎣ 1 ⎦
2
⎡
⎤
−3
v1 = ⎣ 2 ⎦
1
⎤
⎡
−1
v3 = ⎣ 10 ⎦
3
⎤
⎡
2
⎢ −3 ⎥
⎥
v1 = ⎢
⎣ 4 ⎦
1
⎡
⎤
⎡
1
−1
⎢ 6 ⎥
⎢ −1
⎥
v2 = ⎢
v3 = ⎢
⎣ −1 ⎦
⎣ 2
2
3
⎡ ⎤
⎤
⎡
6
2
⎢ 3 ⎥
⎢ 3 ⎥
⎥
v1 = ⎢
12
...
3
0
13
...
v = ⎣ −1 ⎦
−3
⎤
⎡
−2
v2 = ⎣ −1 ⎦
2
⎤
⎡
2
v4 = ⎣ −1 ⎦
−2
⎤
⎡
−3
16
...
v =
3
1
v3 =
3
0
⎤
0
v1 = ⎣ 1 ⎦
1
⎤
⎡
−2
v3 = ⎣ −3 ⎦
−1
⎡
⎤
−1
v1 = ⎣ −1 ⎦
2
⎤
⎡
0
v3 = ⎣ −1 ⎦
−2
⎡
In Exercises 17–20, determine if the matrix M is a
linear combination of the matrices M1 , M2 , and M3
...
M =
4 0
M1 =
1
1
2
−1
M3 =
−1 3
2 1
M2 =
−2 3
1 4
Confirming Pages
110
Chapter 2 Linear Combinations and Linear Independence
18
...
M =
M2 =
1+x
26
...
M =
M2 =
1 + x, −x, x 2 + 1
28
...
Describe all vectors in 3ޒthat can be written as a
linear combination of the vectors
⎤
⎡ ⎤
⎤
⎡
⎡
1
3
1
⎣ 3 ⎦
⎣ 7 ⎦
⎣ 2 ⎦
and
0
−2
−1
0 0
0 1
21
...
⎡
1
22
...
p(x) = x 3 − 2x + 1
2 1
3 4
M1 =
x2
25
...
Write the
linear combination of the column
⎤
⎤
⎡
−1
2 −1
3
4 ⎦ and x = ⎣ −1 ⎦
...
3 2
−1 −2
...
Let A =
2 5
3
4
each column vector of AB as a linear
combination of the column vectors of A
...
Let A = ⎣ 1 −1
−4
3
1
⎤
⎡
3
2 1
1 0 ⎦
...
30
...
If v = v1 + v2 + v3 + v4 and
v4 = v1 − 2v2 + 3v3 , write v as a linear
combination of v1 , v2 , and v3
...
If v = v1 + v2 + v3 + v4 and v2 = 2v1 − 4v3 ,
write v as a linear combination of v1 , v3 , and v4
...
Suppose that the vector v is a linear combination
of the vectors v1 , v2 ,
...
Show
that v is a linear combination of v2 ,
...
34
...
, vn , and w1 , w2 ,
...
Show that v is a linear
combination of v1 , v2 ,
...
, wm
...
Let S1 be the set of all linear combinations of the
vectors v1 , v2 ,
...
,
Confirming Pages
2
...
Show that
S1 = S2
...
Let S1 be the set of all linear combinations of the
vectors v1 , v2 ,
...
,
vk , v1 + v2
...
37
...
If there is a scalar c such that
A3 = cA1 , then show that the linear system has
infinitely many solutions
...
3
y
v
u = 1v
2
x
−v
38
...
If A3 = A1 + A2 , then show that the
linear system has infinitely many solutions
...
The equation
2y ′′ − 3y ′ + y = 0
is an example of a differential equation
...
Then show that any linear
combination of f (x) and g(x) is another solution
to the differential equation
...
2
...
At the
other extreme, there are infinitely many different subsets S such that the collection
of all linear combinations of vectors from S is ޒn
...
, en } is ޒn , but so
is the collection of linear combinations of T = {e1 ,
...
In this way S
and T both generate ޒn
...
As motivation let two vectors u and v in
2ޒlie on the same line, as shown in Fig
...
Thus, there is a nonzero scalar c such
that
Figure 1
This condition can also be written as
y
111
u = cv
u − cv = 0
v
u
x
In this case we say that the vectors u and v are linearly dependent
...
On the other hand,
the vectors shown in Fig
...
This concept is generalized to
sets of vectors in ޒn
...
, vm } in ޒn is linearly independent provided that the only solution to
the equation
c1 v1 + c2 v2 + · · · + cm vm = 0
is the trivial solution c1 = c2 = · · · = cm = 0
...
Confirming Pages
112
Chapter 2 Linear Combinations and Linear Independence
For example, the set of coordinate vectors
in ޒn is linearly independent
...
, en }
Determine whether the vectors
⎤
⎡
⎡
1
0
⎢ 0 ⎥
⎢ 1
⎥
v2 = ⎢
v1 = ⎢
⎣ 1 ⎦
⎣ 1
2
2
⎤
⎥
⎥
⎦
and
are linearly independent or linearly dependent
...
Then, from
equation 2, we have c3 = 0 and from equation 1 we have c1 = 0
...
Therefore, the
vectors are linearly independent
...
Solution
⎤
−2
v3 = ⎣ 3 ⎦
1
⎡
⎤
2
v4 = ⎣ 1 ⎦
1
⎡
As in Example 1, we need to solve
⎤
⎤
⎤
⎤ ⎡ ⎤
⎡
⎡
⎡
⎡
0
1
−1
−2
2
c1 ⎣ 0 ⎦ + c 2 ⎣ 1 ⎦ + c 3 ⎣ 3 ⎦ + c 4 ⎣ 1 ⎦ = ⎣ 0 ⎦
0
2
2
1
1
Confirming Pages
2
...
In Example 2, we verified that the set of vectors {v1 , v2 , v3 , v4 } is linearly dependent
...
THEOREM 3
Let S = {v1 , v2 ,
...
If n > m, then the
set S is linearly dependent
...
, n
In this way we have
c1 v1 + c2 v2 + · · · + cn vn = 0
in matrix form, is the homogeneous linear system
⎡
c1
⎢ c2
Ac = 0
where
c=⎢
...
...
Thus, the solution
is not unique and S = {v1 ,
...
Notice that from Theorem 3, any set of three or more vectors in , 2ޒfour or
more vectors in , 3ޒfive or more vectors in , 4ޒand so on, is linearly dependent
...
In this case, a set of n vectors
in ޒm may be either linearly independent or linearly dependent
...
Confirming Pages
114
Chapter 2 Linear Combinations and Linear Independence
EXAMPLE 3
Determine whether the matrices
1 0
M2 =
M1 =
3 2
−1 2
3 2
5 −6
−3 −2
M3 =
and
are linearly independent
...
Criteria to determine if a set of vectors is linearly independent or dependent are
extremely useful
...
THEOREM 4
If a set of vectors S = {v1 , v2 ,
...
Proof Suppose that the vector vk = 0, for some index k, with 1 ≤ k ≤ n
...
Confirming Pages
2
...
Proof Let S = {v1 , v2 ,
...
Then there are scalars c1 , c2 ,
...
Then solving the previous equation for the
vector vk , we have
c1
ck−1
ck+1
cn
vk = − v1 − · · · −
vk−1 −
vk+1 − · · · − vn
ck
ck
ck
ck
Conversely, let vk be such that
vk = c1 v1 + c2 v2 + · · · + ck−1 vk−1 + ck+1 vk+1 + · · · + cn vn
Then
c1 v1 + c2 v2 + · · · + ck−1 vk−1 − vk + ck+1 vk+1 + · · · + cn vn = 0
Since the coefficient of vk is −1, the linear system has a nontrivial solution
...
As an illustration, let S be the set of vectors
⎧⎡ ⎤ ⎡
⎤⎫
⎤⎡
2 ⎬
−1
⎨ 1
S = ⎣ 3 ⎦, ⎣ 2 ⎦, ⎣ 6 ⎦
⎭
⎩
2
1
1
Notice that the third vector is twice the first vector, that is,
⎤
⎤
⎡
⎡
1
2
⎣ 6 ⎦ = 2⎣ 3 ⎦
1
2
Thus, by Theorem 5, the set S is linearly dependent
...
Then show that not every vector can be written as a linear
combination of the others
...
Now, observe that
v1 and v3 are linear combinations of the other two vectors, that is,
v1 = 0v2 − v3
and
v3 = 0v2 − v1
Confirming Pages
116
Chapter 2 Linear Combinations and Linear Independence
However, v2 cannot be written as a linear combination of v1 and v3
...
3, any linear combination of the vectors v1 and v3 is a vector that
is along the x axis
...
Figure 3
THEOREM 6
1
...
2
...
Proof (1) Let T be a subset of S
...
, vk } and S = {v1 ,
...
, vm }
...
(2) Let T = {v1 ,
...
Label the vectors of S that are
not in T as vk+1 ,
...
Since T is linearly dependent, there are scalars c1 ,
...
, ck , ck+1 = ck+2 = · · · = cm = 0 is a collection of m scalars, not
all 0, with
c1 v1 + c2 v2 + · · · + ck vk + 0vk+1 + · · · + 0vm = 0
Consequently, S is linearly dependent
...
, vn } and an arbitrary vector not in S, we have
seen that it may or may not be possible to write v as a linear combination of S
...
That this cannot happen for a linearly independent set
is the content of Theorem 7
...
3 Linear Independence
THEOREM 7
117
Let S = {v1 , v2 ,
...
Suppose that there are scalars
c1 , c2 ,
...
Proof To prove the result, let v be written as
n
v=
n
ck vk
v=
and as
k=1
dk vk
k=1
Then
n
0=v−v=
k=1
n
ck vk −
dk vk
k=1
n
=
k=1
(ck − dk )vk
Since the set of vectors S is linearly independent, the only solution to this last
equation is the trivial one
...
, cn − dn = 0,
or c1 = d1 , c2 = d2 ,
...
2
...
Theorem 8 gives criteria for when the solution is
unique
...
The solution is unique if and only
if the column vectors of A are linearly independent
...
Suppose that the column
vectors A1 , A2 ,
...
⎥
and
d=⎢
...
⎦
⎣
...
...
In vector form, we have
c 1 A1 + c 2 A2 + · · · + c n An = b
and
d1 A1 + d2 A2 + · · · + dn An = b
By Theorem 7, c1 = d1 , c2 = d2 ,
...
Hence, c = d and the solution to
the linear system is unique
...
Let v be a
solution to the linear system Ax = b, and assume that the column vectors of A are
linearly dependent
...
, cn , not all 0, such that
⎡
⎢
that is, if c = ⎢
⎣
⎤
c 1 A1 + c 2 A2 + · · · + c n An = 0
c1
c2 ⎥
...
Since matrix multiplication satisfies the dis
...
cn
tributive property,
A(v + c) = Av + Ac = b + 0 = b
Therefore, the vector v + c is another solution to the linear system, and the solution
is not unique
...
Therefore,
we have shown that if the solution is unique, then the column vectors of A are
linearly independent
...
1
...
Linear Independence and Determinants
In Chap
...
(See Theorem 16 of Sec
...
6
...
This
gives an alternative method for showing that a set of vectors is linearly independent
...
EXAMPLE 5
Solution
⎧⎡
⎤ ⎡ ⎤⎫
⎤⎡
1 ⎬
1
⎨ 1
S = ⎣ 0 ⎦, ⎣ 2 ⎦, ⎣ 4 ⎦
⎭
⎩
5
4
3
Determine whether the set S is linearly independent
...
3 Linear Independence
119
The determinant of A can be found by expanding along the first column, so that
1 1
2 4
−0
4 5
4 5
= −6 − 0 + 3(2) = 0
det(A) = 1
+3
1 1
2 4
Therefore, by the previous remarks S is linearly dependent
...
THEOREM 9
Let A be a square matrix
...
1
...
3
...
5
...
The
The
The
The
The
The
matrix A is invertible
...
homogeneous linear system Ax = 0 has only the trivial solution
...
determinant of the matrix A is nonzero
...
Fact Summary
Let S be a set of m vectors in ޒn
...
If m > n, then S is linearly dependent
...
If the zero vector is in S, then S is linearly dependent
...
If u and v are in S and there is a scalar c such that u = cv, then S is
linearly dependent
...
If any vector in S is a linear combination of other vectors in S, then S is
linearly dependent
...
If S is linearly independent and T is a subset of S, then T is linearly
independent
...
If T is linearly dependent and T is a subset of S, then S is linearly
dependent
...
If S = {v1 ,
...
, cm is uniquely determined
...
The linear system Ax = b has a unique solution if and only if the column
vectors of A are linearly independent
...
If A is a square matrix, then the column vectors of A are linearly
independent if and only if det(A) ̸= 0
...
3
In Exercises 1–10, determine whether the given
vectors are linearly independent
...
v1 =
−1
1
v2 =
2
−3
2
...
v1 =
1
−4
v2 =
−2
8
4
...
v1
v3 =
v2 =
v3
10
...
v1 = ⎣ 2 ⎦ v2 = ⎣ 2 ⎦
1
3
⎡
⎡
6
...
v1 = ⎣ 4 ⎦ v2 = ⎣ 3 ⎦
−1
3
⎡
⎤
3
v3 = ⎣ −5 ⎦
5
⎡
⎤
⎤
⎡
3
−1
8
...
11
...
M1 =
−1 2
1 1
M3 =
2 2
−1 0
13
...
M1 =
M3 =
1 −2
−2 −2
−1 1
−2 2
0 −1
−1
1
2 0
−1 2
M2 =
M2 =
M4 =
M2 =
M4 =
1 4
0 1
0 −1
2
2
1
1
−1 −2
−2 −1
1 −1
−2
2
2 −1
Confirming Pages
121
2
...
15
...
v1 =
2
−1
v3 =
⎡
17
...
v1 = ⎣
⎡
v3 = ⎣
v2 =
v2 =
1
2
−2
21
...
−1
−2
22
...
a
...
A = ⎢
⎣ −1
3
−4
−2
are linearly independent
...
Let
⎤
1
v1 = ⎣ 1 ⎦
1
⎡
⎤
1
v2 = ⎣ 2 ⎦
3
⎡
⎤
1
v3 = ⎣ 1 ⎦
2
⎡
a
...
In Exercises 19 and 20, explain, without solving a
linear system, why the column vectors of the matrix A
are linearly dependent
...
a
...
A = ⎣ 1 0 1 ⎦
−1 1 0
⎡
1
a
1 0
1 0
1 2
0 1
1
3
⎤
−1
2
6
−2
0
2 ⎦
4 −3 −2
⎤
2 3
3 1 ⎥
⎥
−1 0 ⎦
5 2
b
...
Let
M1 =
1 0
−1 0
M3 =
M2 =
1 1
1 0
0 1
1 1
a
...
b
...
Show that the matrix
0 3
3 1
M=
cannot be written as a linear combination of
M1, M2 , and M3
...
⎤
⎡
1 2 0
25
...
A = ⎣ 1 −1
0
2 −4
In Exercises 27–30, determine whether the set of
polynomials is linearly independent or linearly
dependent
...
, pn (x)} is linearly independent
provided
c1 p1 (x) + c2 p2 (x) + · · · + cn pn (x) = 0
31
...
f1 (x) = ex f2 (x) = e−x
f3 (x) = e2x
33
...
f1 (x) = x f2 (x) = ex
f3 (x) = sin πx
35
...
36
...
37
...
p1 (x) = 1 p2 (x) = −2 +
p3 (x) = 2x p4 (x) = −12x + 8x 3
28
...
p1 (x) = 2 p2 (x) = x p3 (x) = x 2
p4 (x) = 3x − 1
30
...
A set of
functions S = {f1 (x), f2 (x),
...
38
...
39
...
Show that if v3 cannot be written as
a linear combination of v1 and v2 , then
{v1 , v2 , v3 } is linearly independent
...
Let S = {v1 , v2 , v3 }, where v3 = v1 + v2
...
Write v1 as a linear combination of the vectors
in S in three different ways
...
3 Linear Independence
b
...
41
...
, An are linearly independent,
then
{x ∈ ޒn | Ax = 0} = {0}
...
Let v1 ,
...
Define vectors wi = Avi , for i = 1,
...
Show
that the vectors w1 ,
...
Show, using a 2 × 2 matrix, that the
requirement of invertibility is necessary
...
If ad − bc ̸= 0, show that the vectors
a
b
c
d
and
are linearly independent
...
What can you say about the two
vectors?
2
...
Show that
T = {v1 , v2 , v1 + v2 + v3 } is also linearly
independent
...
Determine for which nonzero values of a the
vectors
⎤
⎤
⎡
⎡ 2 ⎤
⎡
1
0
a
⎣ 0 ⎦
⎣ 0 ⎦
⎣ a ⎦
and
1
2
1
are linearly independent
...
Let
⎧⎡
⎪ 2s − t
⎪
⎨⎢
s
S= ⎢
t
⎪⎣
⎪
⎩
s
⎤
⎫
⎪
⎪
⎬
⎥
⎥ s, t ∈ ޒ
⎦
⎪
⎪
⎭
a
...
b
...
Let
⎤
1
v1 = ⎣ 0 ⎦
2
⎡
and
⎤
1
v2 = ⎣ 1 ⎦
1
⎡
a
...
Find a vector ⎣ b ⎦ that cannot be written
c
as a linear combination of v1 and v2
...
Describe all vectors in 3ޒthat can be written
as a linear combination of v1 and v2
...
Let
⎤
1
v3 = ⎣ 0 ⎦
0
⎡
Is T = {v1 , v2 , v3 } linearly independent or
linearly dependent?
e
...
6
...
Show that S = {v1 , v2 , v3 , v4 } is linearly
dependent
...
Show that T = {v1 , v2 , v3 } is linearly
independent
...
Show that v4 can be written as a linear
combination of v1 , v2 , and v3
...
How does the set of all linear combinations of
vectors in S compare with the set of all linear
combinations of vectors in T ?
7
...
Let
a
...
b
...
c
...
Without solving the linear system, determine
whether it has a unique solution
...
Solve the linear system
...
Let
M1 =
1 0
−1 1
M3 =
0 1
1 0
M2 =
1
2
1
1
a
...
b
...
d
...
Can the matrix
1 −1
1
2
be written as a linear combination of M1, M2 ,
and M3 ?
⎡
1
A=⎣ 2
1
⎤
3
2
−1
3 ⎦
1 −1
a
...
b
...
What can you conclude as to
whether the linear system is consistent or
inconsistent?
c
...
Without solving the linear system, does the
system have a unique solution? Give two
reasons
...
Two vectors in ޒn are perpendicular provided
their dot product is 0
...
,
vn } is a set of nonzero vectors which are pairwise
perpendicular
...
a
...
b
...
c
...
Consider the equation
c1 v1 + c2 v2 + · · · + cn vn = 0
Use the dot product of vi , for each 1 ≤ i ≤ n,
with the expression on the left of the previous
equation to show that ci = 0, for each
1 ≤ i ≤ n
...
3 Linear Independence
125
Chapter 2: Chapter Test
In Exercises 1–33, determine whether the statement is
true or false
...
Every vector in 3ޒcan be written as a linear
combination of
⎤
⎡ ⎤
⎤
⎡
⎡
0
0
1
⎣ 0 ⎦
⎣ 1 ⎦
⎣ 0 ⎦
1
0
0
2
...
Every 2 × 2 matrix can be written as a linear
combination of
0
0
1
0
0
0
1 0
0 0
0 0
1 0
0
1
In Exercises 4–8, use the vectors
⎤
⎤
⎡
⎡
1
2
v1 = ⎣ 0 ⎦
v2 = ⎣ 1 ⎦
1
0
⎤
⎡
4
v3 = ⎣ 3 ⎦
−1
4
...
5
...
6
...
7
...
8
...
3ޒ
9
...
10
...
In Exercises 11–14, use the matrices
−1
0
M1 =
1
0
M3 =
0 0
0 1
0 0
1 0
M2 =
M4 =
2
1
−1
3
11
...
12
...
13
...
14
...
The vectors
⎡
⎤
1
⎣ s ⎦
1
⎤
s
⎣ 0 ⎦
0
⎡
⎤
0
⎣ 1 ⎦
s
⎡
are linearly independent if and only if s = 0 or
s = 1
...
The set S = {v1 , v2 } is linearly independent
...
Every vector in 2ޒcan be written as a linear
combination of v1 and v2
...
If the column vectors of a matrix A are v1 and v2 ,
then det(A) = 0
...
If b is in
2ޒ
and c1 v1 + c2 v2 = b, then
c1
c2
= A−1 b
where A is the 2 × 2 matrix with column vectors
v1 and v2
...
The column vectors of the matrix
cos θ
− sin θ
sin θ
cos θ
are linearly independent
...
If v1 and v2 are linearly independent vectors
in ޒn and v3 cannot be written as a scalar
multiple of v1 , then v1 , v2 , and v3 are linearly
independent
...
If S = {v1 , v2 ,
...
23
...
24
...
25
...
26
...
27
...
28
...
29
...
30
...
31
...
32
...
33
...
, v5 } is a subset of , 4ޒthen S is
linearly dependent
...
1
3
...
3
3
...
5
Definition of a Vector Space 129
Subspaces 140
Basis and Dimension 156
Coordinates and Change of Basis 173
Application: Differential Equations 185
W
hen a digital signal is sent through space
(sometimes across millions of miles),
errors in the signal are bound to occur
...
One obvious
method is to send messages repeatedly to increase
the likelihood of receiving them correctly
...
An innovative
methodology developed by Richard Hamming in
1947 involves embedding in the transmission a
means for error detection and self-correction
...
Some of these vectors are identified as codewords © Brand X Pictures/PunchStock/RF
depending on the configuration of the 1s and 0s within it
...
⎥
⎣
...
b7
127
Confirming Pages
128
Chapter 3 Vector Spaces
is a codeword, a test using matrix multiplication is
⎡
1 1 1 0 1
C=⎣ 0 1 1 1 0
1 0 1 1 0
performed
...
To carry out the test, we compute the product of C and b,
using modulo 2 arithmetic, where an even result corresponds to a 0 and an odd result
corresponds to a 1
...
Put another way, b
is a codeword if it is a solution to the homogeneous equation Cb ≡ 0 (mod 2)
...
On the other hand, if the vector received is not
a codeword, an algorithm involving the syndrome vector can be applied to restore
it to the original
...
1 Definition of a Vector Space
transmission
...
To see this, observe that if u and v are codewords, then
the sum u + v is also a codeword since
C(u + v) = Cu + Cv = 0 + 0 = 0 (mod 2)
It also has the property that every codeword can be written as a linear combination
of a few key codewords
...
The set of codewords in the chapter opener is an example
...
1
Definition of a Vector Space
In Chap
...
With respect to these
operations, we saw in Theorem 1 of Sec
...
1 that sets of vectors satisfy many of
the familiar algebraic properties enjoyed by numbers
...
In particular,
we consider as vectors any class of objects with definitions for addition and scalar
multiplication that satisfy the properties of this theorem
...
DEFINITION 1
Vector Space A set V is called a vector space over the real numbers provided
that there are two operations—addition, denoted by ⊕, and scalar multiplication,
denoted by ⊙—that satisfy all the following axioms
...
ޒ
1
...
2
...
(u ⊕ v) ⊕ w = u ⊕ (v ⊕ w)
Closed under addition
Addition is commutative
Addition is associative
Confirming Pages
130
Chapter 3 Vector Spaces
4
...
5
...
6
...
Additive identity
Additive inverse
Closed under scalar
multiplication
7
...
(c + d) ⊙ u = (c ⊙ u) ⊕ (d ⊙ u)
9
...
We also will
point out that for general vector spaces the set of scalars can be chosen from any
field
...
EXAMPLE 1
Solution
EXAMPLE 2
Solution
Euclidean Vector Spaces The set V = ޒn with the standard operations of
addition and scalar multiplication is a vector space
...
2
...
The fact that ޒn is closed under addition and scalar multiplication is a direct consequence of how these operations are defined
...
Vector Spaces of Matrices Show that the set V = Mm×n of all m × n matrices
is a vector space over the scalar field ,ޒwith ⊕ and ⊙ defined componentwise
...
Thus, the closure
axioms (axioms 1 and 6) are satisfied
...
The other
seven axioms are given in Theorem 4 of Sec
...
3
...
1 Definition of a Vector Space
131
When we are working with more abstract sets of objects, the operations of addition
and scalar multiplication can be defined in nonstandard ways
...
This is illustrated in the next several examples
...
ޒDefine addition and scalar multiplication by
a ⊕ b = 2a + 2b
k ⊙ a = ka
and
Show that addition is commutative but not associative
...
To determine whether addition is associative, we evaluate and compare the
expressions
(a ⊕ b) ⊕ c
and
a ⊕ (b ⊕ c)
In this case, we have
(a ⊕ b) ⊕ c = (2a + 2b) ⊕ c
= 2(2a + 2b) + 2c
= 4a + 4b + 2c
a ⊕ (b ⊕ c) = a ⊕ (2b + 2c)
= 2a + 2(2b + 2c)
= 2a + 4b + 4c
and
We see that the two final expressions are not equal for all choices of a, b, and
c
...
EXAMPLE 4
Let V =
...
Solution
In this case
a ⊕ b = ab
and
b ⊕ a = ba
Since a b ̸= ba for all choices of a and b, the commutative property of addition is
not upheld, and V is not a vector space
...
Confirming Pages
132
Chapter 3 Vector Spaces
EXAMPLE 5
Let V = {(a, b) | a, b ∈
...
Define
(v1 , v2 ) ⊕ (w1 , w2 ) = (v1 + w1 + 1, v2 + w2 + 1)
c ⊙ (v1 , v2 ) = (cv1 + c − 1, cv2 + c − 1)
and
Verify that V is a vector space
...
Since addition of real
numbers is commutative and associative, axioms 2 and 3 hold for the ⊕ defined
here
...
Specifically, 0 = (−1, −1),
so axiom 4 holds
...
The
remaining axioms all follow from the similar properties of the real numbers
...
, an are real numbers and an ̸= 0
...
Polynomials
comprise one of the most basic sets of functions and have many applications in
mathematics
...
Denote by Pn
the set of all polynomials of degree n or less
...
That is, if
p(x) = a0 + a1 x + a2 x 2 + · · · + an−1 x n−1 + an x n
Confirming Pages
3
...
Solution
Since the sum of two polynomials of degree n or less is another polynomial of
degree n or less, with the same holding for scalar multiplication, the set V is closed
under addition and scalar multiplication
...
For
example,
p(x) ⊕ q(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x 2 + · · · + (an + bn )x n
= (b0 + a0 ) + (b1 + a1 )x + (b2 + a2 )x 2 + · · · + (bn + an )x n
= q(x) ⊕ p(x)
In the sequel we will use Pn to denote the vector space of polynomials of degree
n or less along with the zero polynomial
...
The latter set is not closed under addition
...
EXAMPLE 7
Vector Space of Real-Valued Functions Let V be the set of real-valued
functions defined on a common domain given by the interval [a, b]
...
Show that V is a real vector space
...
Similarly, the set V is closed
under scalar multiplication
...
Then
(f ⊕ g)(x) = f (x) + g(x) = g(x) + f (x) = (g ⊕ f )(x)
Addition is also associative since for any functions f, g, and h in V , we have
(f ⊕ (g ⊕ h))(x) = f (x) + (g ⊕ h)(x)
= f (x) + g(x) + h(x)
= (f ⊕ g)(x) + h(x)
= ((f ⊕ g) ⊕ h)(x)
The zero element of V , denoted by 0, is the function that is 0 for all real
numbers in [a, b]
...
The
distributive property of real numbers gives us
(c + d) ⊙ f (x) = (c + d)f (x) = cf (x) + df (x)
= (c ⊙ f )(x) ⊕ (d ⊙ f )(x)
so (c + d) ⊙ f = (c ⊙ f ) ⊕ (d ⊙ f ), establishing property 8
...
The set of complex numbers, denoted by ,ރis defined by
{ = ރa + bi | a, b ∈ }ޒ
where i satisfies
√
or equivalently
i = −1
i 2 = −1
The set of complex numbers is an algebraic extension of the real numbers, which
it contains as a subset
...
With the appropriate definitions of addition and scalar multiplication, the set of
complex numbers ރis a vector space
...
Define vector addition on ރby
z ⊕ w = (a + bi) + (c + di) = (a + c) + (b + d)i
and scalar multiplication by
α ⊙ z = α ⊙ (a + bi) = αa + (αb)i
Verify that ރis a vector space
...
1 Definition of a Vector Space
Solution
135
For each element z = a + bi in ,ރassociate the vector in 2ޒwhose components
are the real and imaginary parts of z
...
2ޒIn this way ރand 2ޒhave the same algebraic structure
...
ރ
In Example 8, we showed that ރis a vector space over the real numbers
...
We leave the
details to the reader
...
EXAMPLE 9
Let a, b, and c be fixed real numbers
...
Verify that V is a
vector space
...
The vectors u and v are in V provided that
au1 + bu2 + cu3 = 0
and
av1 + bv2 + cv3 = 0
Now by definition
u ⊕ v = (u1 + v1 , u2 + v2 , u3 + v3 )
We know that u ⊕ v is in V since
a(u1 + v1 ) + b(u2 + v2 ) + c(u3 + v3 ) = au1 + av1 + bu2 + bv2 + cu3 + cv3
= (au1 + bu2 + cu3 ) + (av1 + bv2 + cv3 )
=0
Similarly, V is closed under scalar multiplication since for any scalar α, we have
and
α ⊙ u = (αu1 , αu2 , αu3 )
a(αu1 ) + b(αu2 ) + c(αu3 ) = α(au1 + bu2 + cu3 ) = α(0) = 0
In this case the zero vector is (0, 0, 0), which is also on the plane P
...
Confirming Pages
136
Chapter 3 Vector Spaces
We conclude this section by showing that some familiar algebraic properties of
ޒn extend to abstract vector spaces
...
Proof Let u be an element of V
...
We show that v = w
...
THEOREM 2
Let V be a vector space, u a vector in V , and c a real number
...
2
...
4
...
Proof (1) By axiom 8, we have
0 ⊙ u = (0 + 0) ⊙ u = (0 ⊙ u) ⊕ (0 ⊙ u)
Adding the inverse −(0 ⊙ u) to both sides of the preceding equation gives the
result
...
Combining this with axiom 7 gives
c ⊙ 0 = c ⊙ (0 ⊕ 0) = (c ⊙ 0) ⊕ (c ⊙ 0)
Again adding the inverse −(c ⊙ 0) to both sides of the last equation gives the
result
...
Since −u is by definition the additive inverse of u and by Theorem 1 additive inverses are unique, we have
(−1) ⊙ u = −u
...
If c = 0, then the conclusion holds
...
Then
multiply both sides of
c⊙u=0
Confirming Pages
3
...
137
by
1⊙u=0
Fact Summary
1
...
2
...
The set of polynomials of
degree n or less with termwise operations is a vector space
...
In all vector spaces, additive inverses are unique
...
Exercise Set 3
...
3ޒShow that V with the
given operations for ⊕ and ⊙ is not a vector space
...
⎣ y1 ⎦ ⊕ ⎣ y2 ⎦ = ⎣ y1 − y2 ⎦
z1
z2
z1 − z2
⎡
⎡
⎤ ⎡
⎤
x1
cx1
c ⊙ ⎣ y1 ⎦ = ⎣ cy1 ⎦
z1
cz1
⎤
⎤ ⎡
⎤ ⎡
x1
x2
x1 + x2 − 1
2
...
⎣ y1 ⎦ ⊕ ⎣ y2 ⎦ = ⎣ 2y1 + 2y2 ⎦
z1
z2
2z1 + 2z2
⎡
⎡
⎤ ⎡
⎤
x1
cx1
c ⊙ ⎣ y1 ⎦ = ⎣ cy1 ⎦
z1
cz1
⎤ ⎡
⎤ ⎡
⎤
x1
x2
x1 + x2
4
...
Write out all 10 vector space axioms to show 2ޒ
with the standard componentwise operations is a
vector space
...
Write out all 10 vector space axioms to show that
M2×2 with the standard componentwise operations
is a vector space
...
Let V = 2ޒand define addition as the standard
componentwise addition and define scalar
multiplication by
c⊙
x
y
x+c
y
=
Show that V is not a vector space
...
Determine
whether V is a vector space
...
Let
a b
a, b, c ∈ ޒ
c 1
a
...
b
...
Let
⎫
⎧⎡
⎤
⎬
⎨ a
V = ⎣ b ⎦ a, b ∈ ޒ
⎭
⎩
1
a
...
b
...
9
...
c⊙
10
...
Determine
whether V is a vector space
...
k⊙
b+e
1
kb
1
In Exercises 14–19, let V be the set of 2 × 2 matrices
with the standard (componentwise) definitions for
vector addition and scalar multiplication
...
If V is not a vector space,
show that at least one of the 10 axioms does not hold
...
Let V be the set of all skew-symmetric matrices,
that is, the set of all matrices such that At = −A
...
Let V be the set of all upper triangular matrices
...
Let V be the set of all real symmetric matrices,
that is, the set of all matrices such that At = A
...
Let V be the set of all invertible matrices
...
Let V be the set of all idempotent matrices
...
Let B be a fixed matrix, and let V be the set of
all matrices A such that AB = 0
...
Let
11
...
Determine
whether V is a vector space
...
Let
V =
a
c
b
0
a, b, c ∈ ޒ
V =
a
c
b
−a
a, b, c ∈ ޒ
and define addition and scalar multiplication as
the standard componentwise operations
...
21
...
Define
A ⊕ B = AB
c ⊙ A = cA
Confirming Pages
3
...
Determine the additive identity and additive
inverse
...
Show that V is not a vector space
...
Let
t
1+t
V =
Define
t1
1 + t1
v⊕v=v
⊕
t2
1 + t2
t1 + t2
1 + t1 + t2
ct
1 + ct
23
...
Find the additive identity and inverse
...
Show that V is a vector space
...
Verify 0 ⊙ v = 0 for all v
...
Let
and
⎡
⎤
1
u=⎣ 0 ⎦
1
⎡
⎤
2
v = ⎣ −1 ⎦
1
S = {au + bv | a, b ∈ }ޒ
c⊙v=v
Show that S is a vector space
...
Find the additive identity and inverse
...
Show that V is a vector space
...
Verify that 0 ⊙ v = 0 for all v
...
Let v be a vector in ޒn , and let
Define ⊕ and ⊙ by
=
t
1+t
Show that S with the standard componentwise
operations is a vector space
...
Let
⎧⎡
⎤
⎨ x
S= ⎣ y ⎦
⎩
z
⎫
⎬
3x − 2y + z = 0
⎭
Show that S with the standard componentwise
operations is a vector space
...
Let S be the set of all vectors
⎤
⎡
x
⎣ y ⎦
z
in 3ޒsuch that x + y − z = 0 and
2x − 3y + 2z = 0
...
28
...
Determine the additive identity and additive
inverse
...
Show that V is a vector space
...
Show that if ⊕ and ⊙ are the standard
componentwise operations, then V is not a
vector space
...
Let V be the set of all real-valued functions
defined on ޒwith the standard operations that
satisfy f (0) = 1
...
Determine whether V is a vector space
...
Let f (x) = x 3 defined on ޒand let
V = {f (x + t) | t ∈ }ޒ
30
...
ޒ
Define
f (x + t1 ) ⊕ f (x + t2 ) = f (x + t1 + t2 )
Define f ⊕ g by
c ⊙ f (x + t) = f (x + ct)
(f ⊕ g)(x) = f (x) + g(x)
a
...
b
...
and define c ⊙ f by
(c ⊙ f )(x) = f (x + c)
ß
3
...
For example, the xy plane in 3ޒgiven by
⎫
⎧⎡
⎤
⎬
⎨ x
⎣ y ⎦ x, y ∈ ޒ
⎭
⎩
0
is a subset of
...
3ޒAnother example of a subspace of a vector space is given
in Example 9 of Sec
...
1
...
DEFINITION 1
Subspace A subspace W of a vector space V is a nonempty subset that is itself
a vector space with respect to the inherited operations of vector addition and scalar
multiplication on V
...
For example, let V be the vector space 2ޒwith the
standard definitions of addition and scalar multiplication
...
2 Subspaces
141
In this way we say that W is closed under addition
...
On the other hand, the subset
u⊕v
W
a∈ޒ
is not closed under addition, since
1
u
a
1
W=
y
a
1
v
x
W is not a subspace of V
Figure 1
⊕
b
1
=
a+b
2
which is not in W
...
1
...
Now let us suppose that a nonempty subset W is closed under both of the operations on V
...
Fortunately, our task is simplified as most of
these properties are inherited from the vector space V
...
Since u and v are
also in V , then
u⊕v=v⊕u
Similarly, any three vectors in W satisfy the associative property, as this property is
also inherited from V
...
Since W is closed under scalar multiplication, 0 ⊙ w ∈ W
...
3
...
Thus, 0 ∈ W
...
All the other vector space properties, axioms 7 through 10, are inherited
from V
...
Conversely, if W is a subspace of
V , then it is necessarily closed under addition and scalar multiplication
...
THEOREM 3
Let W be a nonempty subset of the vector space V
...
By Theorem 3, the first of the examples above with
W=
a
0
a∈ޒ
Confirming Pages
142
Chapter 3 Vector Spaces
is a subspace of 2ޒwhile the second subset
W =
a
1
a∈ޒ
is not
...
We also have that any vector space
V , being a subset of itself, is a subspace
...
Determine whether W is a subspace of V
...
Let
u=
u
u+1
and
v=
be vectors in W
...
Thus, W is not a subspace of V
...
In particular, if 0 ∈ W or the additive inverse of a vector is not in W , then W is not a
/
subspace
...
EXAMPLE 2
The trace of a square matrix is the sum of the entries on the diagonal
...
Confirming Pages
3
...
The sum of the two matrices is
a2 b2
a1 + a2 b1 + b2
a1 b1
⊕
=
w1 ⊕ w2 =
c1 d1
c2 d2
c1 + c2 d1 + d2
Since the trace of w1 ⊕ w2 is
(a1 + a2 ) + (d1 + d2 ) = (a1 + d1 ) + (a2 + d2 ) = 0
then W is closed under addition
...
Thus, W is also closed
under scalar multiplication
...
EXAMPLE 3
Solution
Let W be the subset of V = Mn×n consisting of all symmetric matrices
...
Show that W is a subspace of V
...
1
...
Let A
and B be matrices in W and c be a real number
...
1
...
EXAMPLE 4
Solution
Let V = Mn×n with the standard operations and W be the subset of V consisting
of all idempotent matrices
...
Recall that a matrix A is idempotent provided that A2 = A (See Exercise 42 of
Sec
...
3
...
Then
(c ⊙ A)2 = (cA)2 = c2 A2 = c2 A = c2 ⊙ A
so that
(c ⊙ A)2 = c ⊙ A
if and only if
c2 = c
Since this is not true for all values of c, then W is not closed under scalar multiplication and is not a subspace
...
THEOREM 4
A nonempty subset W of a vector space V is a subspace of V if and only if for
each pair of vectors u and v in W and each scalar c, the vector u ⊕ (c ⊙ v) is in W
...
By Theorem 3 it suffices to
show that W is closed under addition and scalar multiplication
...
Next, since W is nonempty, let u be any vector in W
...
Now, if c is any scalar, then c ⊙ u = 0 ⊕ (c ⊙ u)
and hence is in W
...
Conversely, if W is a subspace with u and v in W , and c a scalar, then since W
is closed under addition and scalar multiplication, we know that u ⊕ (c ⊙ v) is
in W
...
Solution
t ∈ޒ
⎫
⎬
⎭
Let u and v be vectors in W and c be a real number
...
Alternatively, the set W can be written as
⎫
⎧ ⎡
⎤
3
⎬
⎨
W = t⎣ 0 ⎦ t ∈ޒ
⎭
⎩
−2
which is a line through the origin in
...
2 Subspaces
145
We now consider what happens when subspaces are combined
...
Then the intersection W1 ∩ W2 is
also a subspace of V
...
Since W1 and W2 are both subspaces, then by Theorem 4, u ⊕ (c ⊙ v) is
in W1 and is in W2 , and hence is in the intersection
...
The extension to an arbitrary number of subspaces is stated in Theorem 5
...
Example 6 shows that the union of two subspaces need not be a subspace
...
Solution
The subspaces W1 and W2 consist of all vectors that lie on the x axis and the y axis,
respectively
...
2
...
These subspaces are used to analyze certain properties
of the vector space
...
2
...
, vk } be a set of vectors in a vector
space V , and let c1 , c2 ,
...
A linear combination of the vectors of
S is an expression of the form
(c1 ⊙ v1 ) ⊕ (c2 ⊙ v2 ) ⊕ · · · ⊕ (ck ⊙ vk )
When the operations of vector addition and scalar multiplication are clear, we
will drop the use of the symbols ⊕ and ⊙
...
Care is still needed when interpreting
expressions defining linear combinations to distinguish between vector space operations and addition and multiplication of real numbers
...
, vn } be
a (finite) set of vectors in V
...
, cn ∈ }ޒ
PROPOSITION 1
If S = {v1 , v2 ,
...
Proof Let u and w be vectors in span(S) and c a scalar
...
, cn and d1 ,
...
Confirming Pages
3
...
Solution
147
To determine if v is in the span of S, we consider
⎤
⎤
⎡
⎡
⎡
2
1
c1 ⎣ −1 ⎦ + c2 ⎣ 3 ⎦ + c3 ⎣
0
−2
Solving this linear system, we obtain
c1 = −2
c2 = 1
and
the equation
⎤
⎤ ⎡
−4
1
1 ⎦=⎣ 4 ⎦
−6
4
c3 = −1
This shows that v is a linear combination of the vectors in S and is thus in span(S)
...
3
...
5v
span{v1 , v2 }
v2
x
Figure 3
Since every line through the origin in 2ޒand , 3ޒand every plane through the
origin in , 3ޒcan be written as the span of vectors, these sets are subspaces
...
Solution
Recall that a 2 × 2 matrix is symmetric provided that it has the form
a
b
b
c
Since any matrix in span(S) has the form
a
1 0
0 0
+b
0
1
1
0
0 0
0 1
+c
=
a
b
b
c
span(S) is the collection of all 2 × 2 symmetric matrices
...
3ޒThe vector v is in span(S) provided that there are
scalars c1 , c2 , and c3 such that
⎤
⎤
⎤ ⎡
⎡
⎡ ⎤
⎡
a
1
1
1
c1 ⎣ 1 ⎦ + c 2 ⎣ 0 ⎦ + c 3 ⎣ 1 ⎦ = ⎣ b ⎦
c
1
2
0
This linear system in matrix form is given
⎡
1 1
⎣ 1 0
1 2
After row-reducing, we obtain
⎡
1 0 0
⎣ 0 1 0
0 0 1
by
1
1
0
⎤
a
b ⎦
c
⎤
−2a + 2b + c
⎦
a− b
2a − b − c
Confirming Pages
3
...
Thus, every vector in 3ޒcan be written as a linear combination of the three
given vectors
...
3ޒ
EXAMPLE 10
Solution
z
y
x
7x − y + 9z = 0
Figure 4
Show that
⎧⎡
⎤⎫
⎤⎡
⎤⎡
−6 ⎬
4
⎨ −1
span ⎣ 2 ⎦, ⎣ 1 ⎦, ⎣ 3 ⎦ ̸= 3ޒ
⎭
⎩
5
−3
1
We approach this problem in the same manner as in Example 9
...
We can see this by
reducing the augmented matrix
⎡
⎤
⎡
⎤
a
−1 4 −6
−1
4 −6 a
⎣ 2
⎣ 0 9 −9
⎦
b + 2a
1
3 b ⎦
to
7
1
1 −3
5 c
0 0
0 c + 9a − 9b
This last augmented matrix shows that the original system is consistent only if
7a − b + 9c = 0
...
3ޒSee Fig
...
Notice that the solution to the equation 7a − b + 9c = 0 can be written in
parametric form by letting b = s, c = t, and a = 1 s − 9 t, so that
7
7
⎫
⎧⎡
⎤⎫ ⎧ ⎡ 1 ⎤
⎤⎡
⎤⎡
⎡ 9 ⎤
−6 ⎬ ⎨
4
−7
⎬
⎨ −1
7
span ⎣ 2 ⎦, ⎣ 1 ⎦, ⎣ 3 ⎦ = s ⎣ 1 ⎦ + t ⎣ 0 ⎦ s, t ∈ ޒ
⎭ ⎩
⎭
⎩
5
−3
1
0
1
In this way, we see that the span is the subspace of all linear combinations of
two linearly independent vectors, highlighting the geometric interpretation of the
solution as a plane
...
3
...
Specifically, in
Example 9, we saw that the set of vectors
⎧⎡ ⎤ ⎡
⎤⎫
⎤ ⎡
1 ⎬
1
⎨ 1
S = {v1 , v2 , v3 } = ⎣ 1 ⎦, ⎣ 0 ⎦, ⎣ 1 ⎦
⎭
⎩
0
2
1
Confirming Pages
150
Chapter 3 Vector Spaces
spans
...
To see this, observe that the
⎤
1 1
0 1 ⎦
2 0
whose column vectors are the vectors of S, is row equivalent to the 3 × 3 identity
matrix, as seen in the solution to Example 9
...
] Consequently, by Theorem 7
of Sec
...
3, we have that every vector in 3ޒcan be written in only one way as a
linear combination of the vectors of S
...
Hence, not every vector in 3ޒ
can be written as a linear combination of the vectors in S ′
...
The vectors v′1 and v′2 are linearly independent vectors
which span the plane shown in Fig
...
3ޒ
To pursue these notions a bit further, there are many sets of vectors which span
...
2
...
The
ideal case, in terms of minimizing the number of vectors, is illustrated in Example 9
where the three linearly independent vectors of S span
...
3
...
EXAMPLE 11
Show that the set of matrices
S=
−1 0
,
2 1
does not span M2×2
...
1 1
1 0
Confirming Pages
3
...
Solution
An arbitrary vector in P2 can be written in the form ax 2 + bx + c
...
Therefore, span(S) = P2
...
DEFINITION 4
Null Space and Column Space Let A be an m × n matrix
...
The null space of A, denoted by N (A), is the set of all vectors in ޒn such
that Ax = 0
...
The column space of A, denoted by col(A), is the set of all linear combinations
of the column vectors of A
...
Moreover,
by Proposition 1, col(A) is a subspace of ޒm
...
2
...
THEOREM 6
EXAMPLE 13
Let A be an m × n matrix
...
Let
⎤
1 −1 −2
2
3 ⎦
A = ⎣ −1
2 −2 −2
⎡
a
...
b
...
Solution
and
⎤
3
b=⎣ 1 ⎦
−2
⎡
a
...
The corresponding augmented matrix is given by
⎡
⎤
⎡
1 −1 −2
1 0
3
⎣ −1
⎣ 0 1
2
3
1 ⎦
which reduces to
2 −2 −2 −2
0 0
vector x such
0
0
1
⎤
3
8 ⎦
−4
Confirming Pages
3
...
Specifically,
⎤
⎤
⎡
⎤
⎡
⎤
⎡
⎡
−2
−1
1
3
⎣ 1 ⎦ = 3 ⎣ −1 ⎦ + 8 ⎣ 2 ⎦ − 4 ⎣ 3 ⎦
−2
−2
2
−2
b
...
The corresponding augmented matrix for this linear system is the same as in
part (a), except for the right column that consists of three zeros
...
In Theorem 7 we show that the null space of a matrix also is a subspace
...
Then the null space of A is a subspace of ޒn
...
That is, A0 = 0
...
Then
A(u + cv) = Au + A(cv)
= Au + cA(v)
= 0 + c0 = 0
Hence, u + cv is in N (A), and therefore by Theorem 4, N (A) is a subspace
...
1
...
2
...
3
...
The span of two linearly independent vectors in 3ޒis a
plane that passes through the origin
...
4
...
The union of two subspaces
may not be a subspace
...
If A is an m × n matrix, the null space of A is a subspace of ޒn and the
column space of A is a subspace of ޒm
...
The linear system Ax = b is consistent if and only if b is in the column
space of A
...
2
In Exercises 1–6, determine whether the subset S of
2ޒis a subspace
...
0
y∈ޒ
1
...
S =
x
y
xy ≥ 0
3
...
S =
x
y
x2 + y2 ≤ 1
5
...
S =
x
2x − 1
x
3x
x∈ޒ
x∈ޒ
In Exercises 7–10, determine whether the subset S of
3ޒis a subspace
...
S = ⎣ x2 ⎦ x1 + x3 = −2
⎭
⎩
x3
⎧⎡
⎫
⎤
⎨ x1
⎬
8
...
S = ⎣
⎭
⎩
t +s
⎧⎡
⎫
⎤
⎨ x1
⎬
10
...
11
...
12
...
13
...
14
...
15
...
16
...
17
...
18
...
In Exercises 19–24, determine whether the subset S of
P5 is a subspace
...
Let S be the set of all polynomials with degree
equal to 3
...
Let S be the set of all polynomials with even
degree
...
Let S be the set of all polynomials such that
p(0) = 0
...
Let S be the set of all polynomials of the form
p(x) = ax 2
...
Let S be the set of all polynomials of the form
p(x) = ax 2 + 1
...
Let S be the set of all polynomials of degree less
than or equal to 4
...
v = ⎣ −1 ⎦
1
⎤
⎡
−2
26
...
2 Subspaces
27
...
S = ⎣ 2 ⎦, ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ 1 ⎦
⎭
⎩
1
1
3
1
−2 1
6 5
28
...
Let
⎧⎡
⎤
⎤⎡
⎤⎡
1
−1
⎨ 1
S = ⎣ 2 ⎦, ⎣ 3 ⎦, ⎣ 2 ⎦,
⎩
−1
−1
2
⎤⎫
⎤⎡
⎡
−3 ⎬
0
⎣ 6 ⎦, ⎣ 4 ⎦
⎭
5
1
In Exercises 29 and 30, determine if the polynomial
p(x) is in the span of
S = {1 + x, x 2 − 2, 3x}
29
...
3x 2 − x − 4
In Exercises 31–36, give an explicit description of the
span of S
...
S = ⎣ −1 ⎦, ⎣ 3 ⎦
⎭
⎩
−1
−2
⎧⎡
⎤⎫
⎤⎡ ⎤⎡
1 ⎬
2
⎨ 1
32
...
34
...
S = x, (x + 1)2 , x 2 + 3x + 1
36
...
a
...
b
...
S = ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
−2
−1
⎧⎡
⎤ ⎡ ⎤⎫
⎤⎡
2 ⎬
0
⎨ 1
38
...
S = ⎣ 3 ⎦, ⎣ 1 ⎦, ⎣ −1 ⎦
⎭
⎩
−1
0
2
a
...
b
...
Let
⎧⎡
⎤
⎤⎡
⎤⎡
1
−1
⎨ 1
T = ⎣ 2 ⎦, ⎣ 3 ⎦, ⎣ 2 ⎦,
⎩
−1
−1
2
⎤⎫
⎡
−3 ⎬
⎣ 4 ⎦
⎭
5
Is span(T ) = ? 3ޒIs T linearly independent?
d
...
Let
S=
−3
,
0
2
0
1 1
,
1 0
−3 1
1 0
a
...
b
...
Let
2
0
−3
,
0
0
0
T =
0
1
1 1
,
1 0
Is span(T ) = M2×2 ? Is T linearly
independent?
−3 1
,
1 0
Confirming Pages
156
Chapter 3 Vector Spaces
43
...
Find span(S)
...
Are the three matrices that generate S linearly
independent?
47
...
Is S linearly independent?
c
...
d
...
Is T linearly
independent? Is span(T ) = P3 ?
44
...
Show that S is a subspace of
...
Find two vectors that span S
...
Let
⎧⎡
⎤
−s
⎨
S = ⎣ s − 5t ⎦
⎩
3t + 2s
s, t ∈ ޒ
a
...
3ޒ
⎫
⎬
⎭
b
...
1
2
Is S a subspace? Explain
...
Let A be an m × n matrix and let
S = x ∈ ޒn
Ax = 0
Is S a subspace? Explain
...
Let A be a fixed n × n matrix and let
S = { B ∈ Mn×n | AB = BA}
Is S a subspace? Explain
...
Suppose S and T are subspaces of a vector
space V
...
Are the two vectors found in part (b) linearly
independent?
d
...
51
...
um }) and
T = span({v1 , v2 ,
...
Show that
S + T = span({u1 ,
...
vn })
(See Exercise 50
...
Let
S=
d
...
Are the two vectors found in part (b) linearly
independent?
a
−a
−x
z
x, y, z ∈ ޒ
b
c
a, b, c ∈ ޒ
and
46
...
Describe the subspace S
...
Is S = M2×2 ?
ß
3
...
Show that S and T are subspaces
...
Describe all matrices in S + T
...
)
Basis and Dimension
In Sec
...
3 we introduced the notion of linear independence and its connection to
the minimal sets that can be used to generate or span ޒn
...
Confirming Pages
3
...
As a first step, we
generalize the concept of linear independence to abstract vector spaces introduced in
Sec
...
1
...
, vm } in a vector space V is called linearly independent provided that
the only solution to the equation
c1 v1 + c2 v2 + · · · + cm vm = 0
is the trivial solution c1 = c2 = · · · = cm = 0
...
EXAMPLE 1
Let
⎤
1
v1 = ⎣ 0 ⎦
−1
⎡
and let W = span{v1 , v2 , v3 }
...
Show that v3 is a linear combination of v1 and v2
...
Show that span{v1 , v2 } = W
...
Show that v1 and v2 are linearly independent
...
To solve the vector equation
⎤
⎤
⎤ ⎡
⎡
⎡
−3
1
0
c1 ⎣ 0 ⎦ + c 2 ⎣ 2 ⎦ = ⎣ 4 ⎦
7
−1
2
we row-reduce the corresponding augmented matrix for the linear system to
obtain
⎡
⎤
⎡
⎤
1 0 −3
1 0 −3
⎣ 0 2
4 ⎦ −→ ⎣ 0 1
2 ⎦
−1 2
0 0
7
0
The solution to the vector equation above is c1 = −3 and c2 = 2, therefore
v3 = −3v1 + 2v2
Notice that the vector v3 lies in the plane spanned by v1 and v2 , as shown
in Fig
...
Figure 1
b
...
As a
result, the vector v3 is not needed to generate W, so that span{v1 , v2 } = W
...
Since neither vector is a scalar multiple of the other, the vectors v1 and v2 are
linearly independent
...
We
accomplished this by eliminating the vector v3 from the set, which, as we saw in the
solution, is a linear combination of the vectors v1 and v2 and hence does not affect
the span
...
THEOREM 8
Let v1 ,
...
, vn }
...
, vn−1 , then
W = span{v1 ,
...
, vn−1 }, then there are scalars c1 , c2 ,
...
Then v = c1 v1 + · · · + cn−1 vn−1 + 0vn , so that v
is also in span{v1 ,
...
Therefore,
span{v1 ,
...
, vn }
Conversely, if v is in span{v1 ,
...
, cn such
that v = c1 v1 + · · · + cn vn
...
, vn−1 ,
there are scalars d1 ,
...
Then
v = c1 v1 + · · · + cn−1 vn−1 + cn vn
= c1 v1 + · · · + cn−1 vn−1 + cn (d1 v1 + · · · + dn−1 vn−1 )
= (c1 + cn d1 )v1 + · · · + (cn−1 + cn dn−1 )vn−1
so that v ∈ span{v1 ,
...
, vn } ⊆ span{v1 ,
...
Therefore,
W = span{v1 ,
...
, vn−1 }
EXAMPLE 2
Compare the column spaces of the matrices
⎡
1
⎢ 2
A=⎢
⎣ 1
3
⎤
0 −1 1
0
1 7 ⎥
⎥
1
2 7 ⎦
4
1 5
⎡
1
⎢ 2
and B = ⎢
⎣ 1
3
⎤
0 −1 1
2
0
1 7 −1 ⎥
⎥
1
2 7
1 ⎦
4
1 5 −2
Confirming Pages
3
...
2 it can be shown that the the column
vectors of the matrix A are linearly independent
...
2
...
In addition, the first four column vectors of B are the same as
the linearly independent column vectors of A, hence by Theorem 5 of Sec
...
3 the
fifth column vector of B must be a linear combination of the other four vectors
...
As a consequence of Theorem 8, a set of vectors {v1 ,
...
, vn } is minimal, in the sense of the number of spanning vectors, when
they are linearly independent
...
2 that when a vector in ޒn can
be written as a linear combination of vectors from a linearly independent set, then the
representation is unique
...
THEOREM 9
If B = {v1 , v2 ,
...
Motivated by these ideas, we now define what we mean by a basis of a vector
space
...
B is a linearly independent set of vectors in V
2
...
, en }
ޒn
spans
and is linearly independent, so that S is a basis for ޒn
...
In Example 3 we give a basis for , 3ޒwhich is
not the standard basis
...
3ޒ
⎧⎡ ⎤ ⎡
⎤⎫
⎤⎡
0 ⎬
1
⎨ 1
B = ⎣ 1 ⎦, ⎣ 1 ⎦, ⎣ 1 ⎦
⎭
⎩
−1
1
0
Confirming Pages
160
Chapter 3 Vector Spaces
Solution
First, to show that S spans , 3ޒwe must show that the equation
⎤
⎤
⎤
⎤ ⎡
⎡
⎡
⎡
a
1
1
0
c1 ⎣ 1 ⎦ + c2 ⎣ 1 ⎦ + c3 ⎣ 1 ⎦ = ⎣ b ⎦
c
0
1
−1
has a solution for every choice of a, b, and c in
...
For example, sup⎤
⎡
1
pose that v = ⎣ 2 ⎦; then
3
c1 = 2(1) − 2 − 3 = −3
c2 = −1 + 2 + 3 = 4
c3 = −1 + 2 = 1
so that
⎤ ⎡ ⎤
⎤ ⎡
⎤
⎡
1
0
1
1
−3 ⎣ 1 ⎦ + 4 ⎣ 1 ⎦ + ⎣ 1 ⎦ = ⎣ 2 ⎦
3
−1
1
0
Since the linear system is consistent for all choices of a, b, and c, we know that
span(B) =
...
2
...
Therefore, B is a basis for
...
Again by Theorem 9 of Sec
...
3, B is linearly independent
...
For example, consider the standard basis B = {e1 , e2 , e3 } for
...
Confirming Pages
3
...
THEOREM 10
Let B = {v1 ,
...
Then
Bc = {cv1 , v2 ,
...
Proof If v is an element of the vector space V , then since B is a basis there are
scalars c1 ,
...
But since c ̸= 0, we can
also write
c1
v = (cv1 ) + c2 v2 + · · · + cn vn
c
so that v is a linear combination of the vectors in Bc
...
To
show that Bc is linearly independent, consider the equation
c1 (cv1 ) + c2 v2 + · · · + cn vn = 0
By vector space axiom 9 we can write this as
(c1 c)(v1 ) + c2 v2 + · · · + cn vn = 0
Now, since B is linearly independent, the only solution to the previous equation is
the trivial solution
c1 c = 0
c2 = 0
cn = 0
...
Therefore, Bc is linearly independent and hence is a
basis
...
Solution
0 1
,
0 0
1
0
,
0 −1
0 0
1 0
In Example 2 of Sec
...
2 we showed that W is a subspace of M2×2
...
We also know that S is linearly independent
system
0
1
0
0 1
0 0
+ c2
+ c3
=
c1
0
0 −1
0 0
1 0
is equivalent to
c1
0 0
c2
=
0 0
c3 −c1
which has only the trivial solution c1 = c2 = c3 = 0
...
Similar to the situation for ޒn , there is a natural set of matrices in Mm×n that
comprise a standard basis
...
The set S = {eij | 1 ≤ i ≤ m, 1 ≤ j ≤ n} is the standard basis for Mm×n
...
Solution
0 0
0 1
1
−4
Let
a b
c d
be an arbitrary matrix in M2×2
...
Hence, B does not span M2×2 , and therefore is not a basis
...
3 Basis and Dimension
163
Notice that in Example 5 the set B is linearly independent, but the three matrices
do not span the set of all 2 × 2 matrices
...
Another vector space we have already considered is Pn , the vector space of
polynomials of degree less than or equal to n
...
, x n }
Indeed, if p(x) = a0 + a1 x + a2 x 2 + · · · + an x n is any polynomial in Pn , then it is a
linear combination of the vectors in B, so span(B) = Pn
...
We can write this equation as
c0 + c1 x + c2 x 2 + · · · + cn x n = 0 + 0x + 0x 2 + · · · + 0x n
Since two polynomials are identical if and only if the coefficients of like terms are
equal, then c1 = c2 = c3 = · · · = cn = 0
...
EXAMPLE 6
Solution
Show that B = {x + 1, x − 1, x 2 } is a basis for P2
...
To verify that B spans P2 , we
must show that scalars c1 , c2 , and c3 can be found such that
c1 (x + 1) + c2 (x − 1) + c3 x 2 = ax 2 + bx + c
for every choice of a, b, and c
...
This linear system has the unique solution
c1 = 1 (b + c)
2
c2 = 1 (b − c)
2
c3 = a
Therefore, span(B) = P2
...
Therefore, the set B is also linearly independent and hence is a basis
...
Specifically, we have
1 = 1 (x + 1) − 1 (x − 1)
2
2
and x 2 is already in B
...
2
...
Hence, any basis of ޒn contains
at most n vectors
...
For example, as we have already seen, two linearly
independent vectors in 3ޒspan a plane
...
The number n, an invariant of ޒn , is called the dimension of ޒn
...
THEOREM 11
If a vector space V has a basis with n vectors, then every basis has n vectors
...
, vn } be a basis for V , and let T = {u1 , u2 ,
...
We claim that T is linearly dependent
...
That is,
u1 = λ11 v1 + λ12 v2 + · · · + λ1n vn
u2 = λ21 v1 + λ22 v2 + · · · + λ2n vn
...
...
After collecting like terms, we obtain
(c1 λ11 + c2 λ21 + · · · + cm λm1 )v1
+ (c1 λ12 + c2 λ22 + · · · + cm λm2 )v2
...
...
...
c1 λ1n + c2 λ2n + · · · + cm λmn = 0
This last linear system is not square with n equations in the m variables c1 ,
...
Since m > n, by Theorem 3 of Sec
...
3 the linear system has a nontrivial solution,
and hence T is linearly dependent
...
3 Basis and Dimension
165
Now, suppose that T = {u1 , u2 ,
...
By the result we just established it must be the case that m ≤ n
...
Consequently n = m as desired
...
DEFINITION 3
Dimension of a Vector Space The dimension of the vector space V , denoted
by dim(V ), is the number of vectors in any basis of V
...
, en }
{e11 , e12 , e21 , e22 }
{eij | 1 ≤ i ≤ m, i ≤ j ≤ n}
{1, x, x 2 ,
...
If such a basis does not exist, then V is called infinite
dimensional
...
In this text our focus is on finite
dimensional vector spaces, although infinite dimensional vector spaces arise naturally
in many areas of science and mathematics
...
THEOREM 12
Suppose that V is a vector space with dim(V ) = n
...
If S = {v1 , v2 ,
...
2
...
, vn } and span(S) = V , then S is linearly independent and
S is a basis
...
If
v is in S, then v is in span(S)
...
As in the proof of
Confirming Pages
166
Chapter 3 Vector Spaces
Theorem 11, the set {v, v1 , v2 ,
...
Thus, there are scalars
c1 ,
...
Solving for v gives
c1
c2
cn
v=−
v1 −
v2 − · · · −
vn
cn+1
cn+1
cn+1
As v was chosen arbitrarily, every vector in V is in span(S) and therefore V =
span(S)
...
Then by
Theorem 5 of Sec
...
3 one of the vectors in S can be written as a linear combination of the other vectors
...
We continue this process until we arrive at a linearly independent spanning
set with less than n elements
...
EXAMPLE 7
Determine whether
is a basis for
...
Let
⎤
1 0
1 0 ⎦
0 1
be the matrix whose column vectors are the vectors of B
...
2
...
We can also show that B is a basis by showing
that B spans
...
3
...
, vm } is
a subspace
...
From Theorem 12, this is equivalent to determining whether S is linearly independent
...
, vm are in ޒn , as in Example 7, form the matrix A with ith
column vector equal to vi
...
1
...
Now if the column
vectors of A are linearly dependent, then there are scalars c1 ,
...
3 Basis and Dimension
167
that c1 v1 + · · · + cm vm = 0
...
⎥
⎣
...
cm
Then Ac = 0 = Bc
...
Observe that the column vectors of B associated with the
pivots are linearly independent since none of the vectors can be a linear combination
of the column vectors that come before
...
By Theorem 12,
these same column vectors form a basis for col(A)
...
For example, the row-reduced
echelon form of the matrix
⎤
⎤
⎡
⎡
1 0 1
1 0 1
is the matrix
B=⎣ 0 1 1 ⎦
A=⎣ 0 0 0 ⎦
0 0 0
0 1 1
However, the column spaces of A and B are different
...
See Fig
...
and
z
z
v2
span{v1 , v2 }
xz plane
w2
y
w1
v1
x
xy plane
Figure 2
x
y
span{w1 , w2 }
Confirming Pages
168
Chapter 3 Vector Spaces
The details of these observations are made clearer by
example
...
⎡
1
⎣ 1
0
system, we reduce the corresponding augmented
That is,
⎤
⎡
1 2 2 3 0
1 0 0 1
0 1 1 1 0 ⎦ reduces to ⎣ 0 1 0 1
1 2 1 3 0
0 0 1 0
matrix to reduced
0
1
1
⎤
0
0 ⎦
0
In the general solution, the variables c1 , c2 , and c3 are the dependent variables corresponding to the leading ones in the reduced matrix, while c4 and c5 are free
...
To establish the claim in this case, let s = 1 and t = 0
...
Also, to see that v5 is a linear combination of v1 , v2 , and v3 , we let s = 0 and t = 1
...
Observe that S ′ is linearly independent since each of these vectors corresponds to a
column with a leading 1
...
Confirming Pages
3
...
Given a set S = {v1 , v2 , v3 ,
...
Form a matrix A whose column vectors are v1 , v2 ,
...
2
...
3
...
In Example 8 we use the process described above to show how to obtain a basis
from a spanning set
...
Solution
Start by constructing the
reduce the matrix
⎡
1 0 1
⎣ 0 1 1
1 1 2
matrix whose column vectors are the vectors in S
...
Therefore, a basis B for span(S) is given by {v1 , v2 , v4 }, that is,
⎧⎡
⎤⎫
⎤⎡ ⎤⎡
1 ⎬
0
⎨ 1
B = ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ 2 ⎦
⎭
⎩
1
1
1
A set of vectors in a vector space that is not a basis can be expanded to a basis
by using Theorem 13
...
, vn } is a linearly independent subset of a vector space
V
...
, vn } is linearly
independent
...
Thus, cn+1 = 0 and the starting
equation is equivalent to
c1 v1 + c2 v2 + · · · + cn vn = 0
Confirming Pages
170
Chapter 3 Vector Spaces
Since S is linearly independent, then
c1 = 0
c2 = 0
...
An alternative method for expanding a set of vectors in ޒn to a basis is to add the
coordinate vectors to the set and then trim the resulting set to a basis
...
EXAMPLE 9
Solution
Find a basis for 4ޒthat contains the vectors
⎡ ⎤
1
⎢ 0 ⎥
and
v1 = ⎢ ⎥
⎣ 1 ⎦
0
⎤
−1
⎢ 1 ⎥
⎥
v2 = ⎢
⎣ −1 ⎦
0
⎡
Notice that the set {v1 , v2 } is linearly independent
...
4 = ) 4ޒTo find a basis, form the set S = {v1 , v2 , e1 , e2 , e3 , e4 }
...
4ޒNow proceed as in
Example 8 by reducing the matrix
⎤
⎡
1 −1 1 0 0 0
⎢ 0
1 0 1 0 0 ⎥
⎥
⎢
⎣ 1 −1 0 0 1 0 ⎦
0
0 0 0 0 1
to reduced row echelon form
⎡
1
⎢ 0
⎢
⎣ 0
0
0
1
0
0
0
0
1
0
⎤
1
1 0
1
0 0 ⎥
⎥
0 −1 0 ⎦
0
0 1
Observe that the pivot columns are 1, 2, 3, and 6
...
The following useful corollary results from repeated application of Theorem 13
...
, vr } be a linearly independent set of vectors in an n-dimensional
vector space V with r < n
...
That is, there
are vectors {vr+1 , vr+2 ,
...
, vr , vr+1 ,
...
Confirming Pages
3
...
1
...
The set of vectors can be
linearly independent or linearly dependent
...
2
...
3
...
4
...
, en
...
The standard basis for M2×2 consists of the four matrices
1 0
0 0
0 0
0 1
0 0
1 0
0 1
0 0
6
...
, x n }
...
If vn is a linear combination of v1 , v2 ,
...
9
...
11
...
n
span{v1 , v2 ,
...
, vn−1 , vn }
dim( = ) ޒn, dim(Mm×n ) = mn, dim(Pn ) = n + 1
If a set B of n vectors of V is linearly independent, then B is a basis for V
...
Every linearly independent subset of V can be expanded to a basis for V
...
Exercise Set 3
...
⎧⎡
⎤⎫
⎤⎡
0 ⎬
⎨ 2
1
...
S = ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ 1 ⎦ V = 3ޒ
⎭
⎩
1
0
1
2
...
S = {2, x, x 3 + 2x 2 − 1}V = P3
5
...
S =
0 1
,
0 0
1 0
,
0 1
0 0
,
1 0
2
1
−3
2
V = M2×2
In Exercises 7–12, show that S is basis for the vector
space V
...
S =
1
,
1
8
...
S = ⎣ −1 ⎦, ⎣ −2 ⎦, ⎣ 2 ⎦
⎭
⎩
−2
−3
1
V = 3ޒ
⎧⎡
⎤⎫
⎤⎡
⎤⎡
1 ⎬
2
⎨ −1
10
...
S =
1 0
,
1 0
V = M2×2
1 1
,
−1 0
V = 4ޒ
⎧⎡
⎤⎡
2
⎪ −1
⎪
⎨⎢
1 ⎥⎢ 1
⎥, ⎢
16
...
S = {1, 2x 2 + x + 2, −x 2 + x} V = P2
18
...
Let S be the subspace of V = M2×2 consisting of
all 2 × 2 symmetric matrices
...
S = {p(x) | p(0) = 0} V = P2
12
...
⎧⎡
⎤⎡ ⎤⎡
1
⎨ −1
13
...
S = ⎣ −2 ⎦, ⎣ 1 ⎦, ⎣
⎩
2
1
⎧⎡
⎤⎡ ⎤⎡
2
1
⎪
⎪
⎨⎢
1 ⎥⎢ 1 ⎥⎢
⎥, ⎢ ⎥, ⎢
15
...
S =
22
...
1 0
0 1
0 1
,
−1 2
In Exercises 19–24, find a basis for the subspace S of
the vector space V
...
⎫
⎧
s + 2t
⎬
⎨
− s + t s, t ∈ ޒV = 3ޒ
19
...
S = {p(x) | p(0) = 0, p(1) = 0} V = P3
In Exercises 25–30, find a basis for the span(S) as a
subspace of
...
S = ⎣ 2 ⎦, ⎣ 0 ⎦, ⎣ 2 ⎦
⎭
⎩
1
−2
−1
⎧⎡
⎤⎫
⎤⎡
⎤⎡
2 ⎬
4
⎨ −2
26
...
S = ⎣ −3 ⎦, ⎣ 2 ⎦, ⎣ −1 ⎦, ⎣ 3 ⎦
⎭
⎩
−1
0
2
0
⎧⎡
⎤⎫
⎤⎡
⎤⎡
⎤⎡
1 ⎬
−3
1
⎨ −2
28
...
S = ⎣ −3 ⎦, ⎣ 2 ⎦, ⎣ −1 ⎦, ⎣ 0 ⎦
⎭
⎩
4
2
2
0
⎧⎡
⎤⎫
⎤⎡ ⎤⎡
⎤⎡
2 ⎬
0
1
⎨ 2
30
...
⎧⎡
⎤⎫
⎤⎡
1 ⎬
2
⎨
31
...
4 Coordinates and Change of Basis
⎧⎡
⎤ ⎡ ⎤⎫
1 ⎬
⎨ −1
32
...
S = ⎢
⎪⎣ 2 ⎦ ⎣
⎪
⎩
4
⎤⎫
3 ⎪
⎪
⎬
1 ⎥
⎥ V = 4ޒ
1 ⎦⎪
⎪
⎭
2
⎧⎡
⎤⎡
1
⎪ −1
⎪
⎨⎢
1 ⎥ ⎢ −3
⎥, ⎢
34
...
S = ⎣ 1 ⎦, ⎣ 1 ⎦ V = 3ޒ
⎭
⎩
1
3
⎧⎡
⎨
36
...
Find a basis for the subspace of Mn×n consisting
of all diagonal matrices
...
Show that if S = {v1 , v2 ,
...
, cvn } is also a basis for V
...
Show that if S = {v1 , v2 ,
...
, Avn } is also a basis
...
4
40
...
Suppose that V is a vector space with
dim(V ) = n
...
42
...
43
...
⎫
⎬
⎭
44
...
and
Coordinates and Change of Basis
From our earliest experiences with Euclidean space we have used rectangular coordinates, (or xy coordinates), to specify the location of a point in the plane
...
Equipped with our knowledge of linear combinations, we now understand these xy
coordinates to be the scalar multiples required to express the vector as a linear com2
bination of the standard basis vectors e1 and e2
...
1(a)
...
For
example, since
2
1
−1
=5
+1
2
2
3
1
1
5 1
2, 2
the x ′ y ′ coordinates of v are given by
...
1(b)
...
Let V be
a vector space with basis B = {v1 , v2 ,
...
From Theorem 7 of Sec
...
3, every
vector v in V can be written uniquely as a linear combination of the vectors of B
...
, cn such that
v = c1 v1 + c2 v2 + · · · + cn vn
It is tempting to associate the list of scalars {c1 , c2 ,
...
However, changing the order of the basis vectors in B will change the
order of the scalars
...
To remove this ambiguity, we introduce
the notion of an ordered basis
...
2ޒThen the list of scalars associated with the vector
DEFINITION 1
Ordered Basis An ordered basis of a vector space V is a fixed sequence of
linearly independent vectors that span V
...
4 Coordinates and Change of Basis
DEFINITION 2
175
Coordinates Let B = {v1 , v2 ,
...
Let v be a vector in V , and let c1 , c2 ,
...
, cn are called the coordinates of v relative to B
...
⎥
⎣
...
cn
and refer to the vector [v]B as the coordinate vector of v relative to B
...
, en }
are simply the components of the vector
...
, x n } are the coefficients of the polynomial
...
The coordinates c1 and c2 are found by writing v as a linear combination of the
two vectors in B
...
We therefore have that the coordinate vector of
1
relative to B is
v=
5
3
[v]B =
2
EXAMPLE 2
Let V = P2 and B be the ordered basis
B = 1, x − 1, (x − 1)2
Confirming Pages
176
Chapter 3 Vector Spaces
Find the coordinates of p(x) = 2x 2 − 2x + 1 relative to B
...
Let
1
0
B=
0
,
0
0 1
,
1 0
0 0
0 1
Show that B is a basis for W and find the coordinates of
2 3
v=
3 5
relative to B
...
3
...
The matrices in B are also
linearly independent and hence are a basis for W
...
4 Coordinates and Change of Basis
177
Change of Basis
Many problems in applied mathematics are made easier by changing from one basis of
a vector space to another
...
Let V be a vector space of dimension 2 and let
B = {v1 , v2 } and B ′ = {v′1 , v′2 }
be ordered bases for V
...
Since B ′ is a basis, there are scalars a1 , a2 , b1 , and b2 such
that
v1 = a1 v′1 + a2 v′2
v2 = b1 v′1 + b2 v′2
Then v can be written as
v = x1 (a1 v′1 + a2 v′2 ) + x2 (b1 v′1 + b2 v′2 )
Collecting the coefficients of v′1 and v′2 gives
v = (x1 a1 + x2 b1 )v′1 + (x1 a2 + x2 b2 )v′2
so that the coordinates of v relative to the basis B ′ are given by
[v]B ′ =
x1 a1 + x2 b1
x1 a2 + x2 b2
Now by rewriting the vector on the right-hand side as a matrix product, we have
[v]B ′ =
a1
a2
b1
b2
x1
x2
=
a1
a2
b1
b2
[v]B
Notice that the column vectors of the matrix are the coordinate vectors [v1 ]B ′ and
[v2 ]B ′
...
Find the transition matrix from B to B ′
...
b
...
By denoting the vectors in B by v1 and v2 and those in B ′ by v′1 and v′2 , the
column vectors of the transition matrix are [v1 ]B ′ and [v2 ]B ′
...
Since
0
−1
′
[v]B ′ = [I ]B [v]B
B
then
6
3
2
0
=
11
−2
3 −1
Observe that the same vector, relative to the different bases, is obtained from
the coordinates [v]B and [v]B ′
...
The result is stated in Theorem 14
...
, vn }
and
B ′ = {v′1 , v′2 ,
...
4 Coordinates and Change of Basis
179
Moreover, a change of coordinates is carried out by
′
[v]B ′ = [I ]B [v]B
B
In Example 5 we use the result of Theorem 14 to change from one basis of P2
to another
...
Find the transition matrix [I ]B
...
Let p(x) = 3 − x + 2x 2 and find [p(x)]B ′
...
To find the first column vector of the transition matrix, we must find scalars
a1 , a2 , and a3 such that
a1 (1) + a2 (x + 1) + a3 (x 2 + x + 1) = 1
By inspection we see that the solution is a1 = 1, a2 = 0, and a3 = 0
...
The solutions are given by b1 = −1, b2 = 1, and b3 = 0, and
c1 = 0, c2 = −1, and c3 = 1
...
The basis B is the standard basis for P2 , so the coordinate vector of
p(x) = 3 − x + 2x 2 relative to B is given by
⎤
⎡
3
[p(x)]B ′ = ⎣ −1 ⎦
2
Confirming Pages
180
Chapter 3 Vector Spaces
Hence,
[p(x)]B ′
⎤
⎤ ⎡
⎤⎡
4
3
−1
0
1 −1 ⎦ ⎣ −1 ⎦ = ⎣ −3 ⎦
2
2
0
1
⎡
1
=⎣ 0
0
Notice that 3 − x + 2x 2 = 4(1) − 3(x + 1) + 2(x 2 + x + 1)
...
and let v =
4
a
...
b
...
c
...
d
...
Solution
a
...
The
2
2
1
2
1
2
−1
2
′
[I ]B =
B
1
2
=0
=1
1
2
b
...
By Theorem 14, the coordinates of v relative to B ′ are given by
4
[v]B ′ =
−1
2
1
2
1
2
1
2
3
4
=
1
2
7
2
Revised Confirming Pages
3
...
Using the coordinates of v relative to the two bases, we have
3
1
0
0
1
+4
=v=
1 ′
7
v + v′
2 1 2 2
d
...
2 shows the location of the terminal point (3, 4) of
the vector v relative to the e1 e2 axes and the v′1 v′2 axes
...
4 is that the transition matrix [I ]B between bases
B
B and B ′ of a finite dimensional vector space is invertible
...
To see this, suppose that V is a vector
B
space of dimension n with ordered bases
B = {v1 , v2 ,
...
, v′n }
and
′
To show that [I ]B is invertible, let x ∈ ޒn be such that
B
′
[I ]B x = 0
B
Observe that the left-hand side of this equation in vector form is x1 [v1 ]B ′ + · · · +
xn [vn ]B ′
...
Hence, so are the vectors [v1 ]B ′ , · · · , [vn ]B ′
...
Since
′
the only solution to the homogeneous equation [I ]B x = 0 is the trivial solution, then
B
B′
by Theorem 17 of Sec
...
6, the matrix [I ]B is invertible
...
Confirming Pages
182
Chapter 3 Vector Spaces
THEOREM 15
Let V be a vector space of dimension n with ordered bases
B = {v1 , v2 ,
...
, v′n }
and
′
Then the transition matrix [I ]B from B to B ′ is invertible and
B
′
[I ]B ′ = ([I ]B )−1
B
B
Fact Summary
Let V be a vector space with dim(V ) = n
...
In ޒn , the coordinates of a vector with respect to the standard basis are the
components of the vector
...
Given any two ordered bases for V , a transition matrix can be used to
change the coordinates of a vector relative to one basis to the coordinates
relative to the other basis
...
If B and B ′ are two ordered bases for V , the transition matrix from B to B ′
′
is the matrix [I ]B whose column vectors are the coordinates of the basis
B
vectors of B relative to the basis B ′
...
If B and B ′ are two ordered bases for V , the transition matrix from B to B ′
is invertible and the inverse matrix is the transition matrix from B ′ to B
...
4
In Exercises 1–8, find the coordinates of the vector v
relative to the ordered basis B
...
B =
3
,
1
−2
2
v=
8
0
−2
−1
−2
v=
,
1
1
4
⎧⎡
⎤ ⎡ ⎤⎫
⎤⎡
1 ⎬
3
1
⎨
3
...
B =
⎤
2
v = ⎣ −1 ⎦
9
⎧⎡ ⎤ ⎡
⎤ ⎡ ⎤⎫
0 ⎬
1
⎨ 2
4
...
4 Coordinates and Change of Basis
5
...
B = {x 2 + 2x + 2, 2x + 3, −x 2 + x + 1}
v = p(x) = −3x 2 + 6x + 8
1
0
7
...
B =
−1
,
1
0 1
,
0 2
2
1
1 1
0 3
−2
3
In Exercises 9–12, find the coordinates of the vector v
relative to the two ordered bases B1 and B2
...
B1 =
B2 =
−3
,
1
2
,
1
2
2
0
1
⎤⎡
v=
1
0
⎧⎡
⎤⎫
⎤⎡
−1 ⎬
1
⎨ −2
10
...
B1 = {x 2 − x + 1, x 2 + x + 1, 2x 2 }
B2 = {2x 2 + 1, −x 2 + x + 2, x + 3}
v = p(x) = x 2 + x + 3
1
1
1 0
,
0 2
0 1
,
1 0
B2 =
3
0
−1
,
1
0 0
,
1 0
v=
1 −1
,
1
0
v=
1 0
−1 0
12
...
13
...
B1 =
−2
,
1
[v]B1 =
2
3
1
2
2
1
1
[v]B1 =
,
0
3
−1
⎧⎡ ⎤ ⎡
⎤ ⎡ ⎤⎫
0 ⎬
0
⎨ 1
15
...
B1 = ⎣ 1 ⎦, ⎣ 0 ⎦, ⎣ 1 ⎦
⎭
⎩
0
2
0
⎧⎡ ⎤ ⎡
⎤⎫
⎤⎡
−1 ⎬
0
⎨ 1
B2 = ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ −1 ⎦
⎭
⎩
0
2
1
B2 =
Confirming Pages
184
Chapter 3 Vector Spaces
[v]B1
⎤
2
=⎣ 1 ⎦
1
⎡
17
...
B1 = {x 2 − 1, 2x 2 + x + 1, −x + 1}
B2 = {(x − 1)2 , x + 2, (x + 1)2 }
⎡ ⎤
1
[v]B1 = ⎣ 1 ⎦
2
19
...
20
...
21
...
a
...
b
...
22
...
2ޒ
B
a
...
Find [I ]B1
2
B
c
...
5 Application: Differential Equations
23
...
If [v]S =
1
2
cos θ − sin θ
sin θ
cos θ
x
, then find [v]B
...
Draw the rectangle in the plane with vertices
be a second ordered basis
...
Find [I ]B
S
b
...
c
...
d
...
24
...
5
0
1
1
0
1
1
c
...
Draw the rectangle in the plane
2
with vertices the coordinates of the vectors,
given in part (b), relative to the ordered
basis B
...
Suppose that B1 = {u1 , u2 , u3 } and
B2 = {v1 , v2 , v3 } are ordered bases for a vector
space V such that u1 = −v1 + 2v2 , u2 =
−v1 + 2v2 − v3 , and u3 = −v2 + v3
...
Find the transition matrix [I ]B2
1
b
...
They are used extensively by scientists and engineers to solve problems
concerning growth, motion, vibrations, forces, or any problem involving the rates
of change of variable quantities
...
As it
turns out, linear algebra is highly useful to these efforts
...
In this section and in Sec
...
3 we give a brief
introduction to the connection between linear algebra and differential equations
...
An equation that
involves x, y, y ′ , y ′′ ,
...
We will henceforth drop the qualifier ordinary since
none of the equations we investigate will involve partial derivatives
...
Confirming Pages
186
Chapter 3 Vector Spaces
The Exponential Model
One of the simplest kinds of differential equations is the first-order equation given by
y ′ = ky
where k is a real number
...
A solution to a differential equation is a function y = f (t) that satisfies the
equation, that is, results in an identity when substituted for y in the original equation
...
As an illustration, consider the differential equation y ′ = 3y
...
Since the parameter C in the solution is arbitrary,
the solution produces a family of functions all of which satisfy the differential equation
...
In certain cases a physical constraint imposes a condition on the solution that
allows for the identification of a particular solution
...
This is
called an initial condition
...
The solution to the previous initial-value problem
is given by
y(t) = 2e3t
From a linear algebra perspective we can think of the general solution to the
differential equation y ′ = ky as the span, over ,ޒof the vector ekt which describes a
one-dimensional subspace of the vector space of differentiable functions on the real
line
...
After computing the first and
Confirming Pages
3
...
As this equation is quadratic there
are three possibilities for the roots r1 and r2
...
The auxiliary equation can have
two distinct real roots, one real root, or two distinct complex roots
...
Case 1 The roots r1 and r2 are real and distinct
...
Let y = erx
...
Although the auxiliary equation has only one
root, there are still two distinct solutions, given by
y1 (x) = erx
EXAMPLE 2
Solution
and
y2 (x) = xerx
Find two distinct solutions to the differential equation y ′′ − 2y ′ + y = 0
...
Since the auxiliary equation r 2 − 2r + 1 = (r − 1)2 = 0 has the
repeated root r = 1, two distinct solutions of the differential equation are
y1 (x) = ex
and
y2 (x) = xex
Confirming Pages
188
Chapter 3 Vector Spaces
Case 3 The auxiliary equation has distinct complex (conjugate) roots given by
r1 = α + βi and r2 = α − βi
...
Let y = erx , so the auxiliary equation corresponding to y ′′ − 2y ′ + 5y = 0 is given
by r 2 − 2r + 5 = 0
...
The two solutions to the differential equation are then given
by
y1 (x) = ex cos 2x
and
y2 (x) = ex sin 2x
In what follows we require Theorem 16 on existence and uniqueness for secondorder linear differential equations
...
THEOREM 16
Let p(x), q(x), and f (x) be continuous functions on the interval I
...
Fundamental Sets of Solutions
With solutions in hand for each one of these cases, we now consider the question
as to whether there are other solutions to equations of this type, and if so, how they
can be described
...
We will see that in each case
the functions y1 (x) and y2 (x) form a basis for the vector space of solutions to the
equation y ′′ + ay ′ + by = 0
...
Toward this end, for a positive integer n ≥ 0, let V = C (n) (I ) be the vector space
of all functions that are n times differentiable on the real interval I
...
We first show that the solution
set to the differential equation y ′′ + ay ′ + by = 0 is a subspace of V = C (2) (I )
...
If y1 (x) and y2 (x) are solutions to the differential equation y ′′ + ay ′ +
by = 0 and c is any scalar, then y1 (x) + cy2 (x) is also a solution
...
5 Application: Differential Equations
189
Proof Since y1 (x) and y2 (x) are both solutions, then
′′
′
y1 (x) + ay1 (x) + by1 (x) = 0
and
′′
′
y2 (x) + ay2 (x) + by2 (x) = 0
Now to show that y(x) = y1 (x) + cy2 (x) is a solution to the differential equation,
observe that
′
′
y ′ (x) = y1 (x) + cy2 (x)
and
′′
′′
y ′′ (x) = y1 (x) + cy2 (x)
Substituting the values for y, y ′ , and y ′′ in the differential equation and rearranging
the terms gives
′′
′′
′
′
y1 (x) + cy2 (x) + a[y1 (x) + cy2 (x)] + b[y1 (x) + cy2 (x)]
′′
′′
′
′
= y1 (x) + cy2 (x) + ay1 (x) + acy2 (x) + by1 (x) + bcy2 (x)
′′
′
′′
′
= [y1 (x) + ay1 (x) + by1 (x)] + c[y2 (x) + ay2 (x) + by2 (x)]
=0+0=0
Let S be the set of solutions to the differential equation y ′′ + ay ′ + by = 0
...
3
...
To analyze the algebraic structure of S, we recall from Exercise 31 of Sec
...
3
that a set of functions U = {f1 (x), f2 (x),
...
Theorem 18 provides a useful test
to decide whether two functions are linearly independent on an interval
...
Define the function W [f, g] on I by
W [f, g](x) =
f (x) g(x)
f ′ (x) g ′ (x)
= f (x)g ′ (x) − f ′ (x)g(x)
If W [f, g](x0 ) is nonzero for some x0 in I , then f (x) and g(x) are linearly independent on I
...
Observe that the determinant of the corresponding coefficient
matrix is W f, g (x)
...
1
...
Accordingly, f (x) and g(x)
are linearly independent
...
The
Wronskian, and the result of Theorem 18, can be extended to any finite set of functions
that have continuous derivatives up to order n
...
THEOREM 19
Let y1 (x) and y2 (x) be solutions to the differential equation y ′′ + ay ′ + by = 0
...
At this point we are now ready to show that any two linearly independent solutions
to the differential equation y ′′ + ay ′ + by = 0 span the subspace of solutions
...
Proof Let y(x) be a particular solution to the initial-value problem
y ′′ + ay ′ + by = 0
with
y(x0 ) = y0
and
′
y ′ (x0 ) = y0
for some x0 in I
...
Since y1 (x) and y2 (x) are linearly independent, then by Theorem 19,
the determinant of the coefficient matrix is nonzero
...
5 Application: Differential Equations
191
of Sec
...
6, there exist unique numbers c1 and c2 that provide a solution for the
linear system
...
By the uniqueness
part of Theorem 16,
y(x) = g(x) = c1 y1 (x) + c2 y2 (x)
as claimed
...
In light of this theorem, the fundamental set
{y1 (x), y2 (x)} is a basis for the subspace S of solutions to y ′′ + ay ′ + by = 0
...
We now return to the specific cases for the solutions to y ′′ + ay ′ + by = 0
...
To show that these functions form a fundamental set, we compute the
Wronskian, so that
e r1 x
e r2 x
r1 x
r1 e
r 2 e r2 x
r1 x r2 x
= r2 (e e ) − r1 (er1 x er2 x )
= r2 e(r1 +r2 )x − r1 e(r1 +r2 )x
= e(r1 +r2 )x (r2 − r1 )
W [y1 , y2 ](x) =
Since the exponential function is always greater than 0 and r1 and r2 are distinct, the
Wronskian is nonzero for all x, and therefore the functions are linearly independent
...
For case 2, the Wronskian is given by
W [erx , xerx ] = e2rx
Since e2rx is never zero, {erx , xerx } is a fundamental set of solutions for problems of
this type
...
If β = 0,
then the differential equation becomes y ′′ + ay ′ = 0 which reduces to case 1
...
Two important areas are in mechanical and electrical oscillations
...
The motion of the object is described by the solution of an initial-value problem of
the form
my ′′ + cy ′ + ky = f (x)
y(0) = A
y ′ (0) = B
where m is the mass of the object attached to the spring, c is the damping coefficient,
k is the stiffness of the spring, and f (x) represents some external force
...
EXAMPLE 4
Solution
Let the mass of an object attached to a spring be m = 1, and the spring constant
k = 4
...
The differential equation describing the position of the object is given by
y ′′ + cy ′ + 4y = 0
When c = 2, the auxiliary equation for y ′′ + 2y ′ + 4y = 0 is
r 2 + 2r + 4 = 0
√
√
Since the roots are the complex values r1 = −1 + 3i and r2 = −1 − 3i, the
general solution for the differential equation is
√
√
y(x) = e−x c1 cos( 3x) + c2 sin( 3x)
From the initial conditions, we have
y(x) = 2e
−x
√
cos( 3x) +
√
√
3
sin( 3x)
3
When c = 4, the auxiliary equation for y ′′ + 4y ′ + 4y = 0 is
r 2 + 4r + 4 = (r + 2)2 = 0
Since there is one repeated real root, the general solution for the differential
equation is
y(x) = c1 e−2x + c2 xe−2x
From the initial conditions,
y(x) = 2e−2x (2x + 1)
When c = 5, the auxiliary equation for y ′′ + 5y ′ + 4y = 0 is
r 2 + 5r + 4 = (r + 1)(r + 4) = 0
Since there are two distinct real roots, the general solution for the differential
equation is
y(x) = c1 e−x + c2 e−4x
Confirming Pages
3
...
1
...
5
In Exercises 1–4, find the general solution to the
differential equation
...
Find two distinct solutions to the homogeneous differential equation
...
Show that the two solutions from part (a) are linearly independent
...
Write the general solution
...
y ′′ − 5y ′ + 6y = 0
2
...
y ′′ + 4y ′ + 4y = 0
4
...
5
...
y ′′ − 3y ′ + 2y = 0 y(1) = 0 y ′ (1) = 1
7
...
Find the general solution to the associated
homogeneous differential equation for which
g(x) = 0
...
b
...
c
...
8
...
Find the general solution to the associated
homogeneous differential equation for which
g(x) = 0
...
Assume there exists a particular solution to the
nonhomogeneous equation of the form
yp (x) = A cos 2x + B sin 2x
Substitute yp (x) into the differential equation
to find conditions on the coefficients A and B
...
Verify that yc (x) + yp (x) is a solution to the
differential equation
...
Let w be the weight of an object attached to a
spring, g the constant acceleration due to gravity
of 32 ft/s2 , k the spring constant, and d the
distance in feet that the spring is stretched by the
weight
...
Suppose that a 2-lb weight stretches a
d
spring by 6-in
...
Notice that this system is
undamped ; that is, the damping coefficient is 0
...
Suppose an 8-lb object is attached to a spring
with a spring constant of 4 lb/ft and that the
damping force on the system is twice the velocity
...
Review Exercises for Chapter 3
1
...
4ޒ
of k the vectors
⎡
⎡ ⎤
2
0
⎢ 3
⎢ 0 ⎥
⎢
⎢ ⎥
⎣ 4
⎣ 1 ⎦
k
4
⎤
⎥
⎥
⎦
2
...
Let
S=
a−b
b+c
a
a−c
a, b, c ∈ ޒ
a
...
5 3
in S?
b
...
Find a basis B for S
...
Give a 2 × 2 matrix that is not in S
...
Let S = {p(x) = a + bx + cx 2 | a + b + c = 0}
...
Show that S is a subspace of P2
...
Find a basis for S
...
5
...
a
...
b
...
6
...
Explain why the set S is not a basis for
...
Show that v3 is a linear combination of v1 and
v2
...
Find the dimension of the span of the set S
...
Find a basis B for 4ޒthat contains the vectors
v1 and v2
...
5 Application: Differential Equations
e
...
f
...
g
...
h
...
i
...
⎡
7
...
, vn } = V and
c1 v1 + c2 v2 + · · · + cn vn = 0
with c1 ̸= 0
...
, vn } = V
...
Let V = M2×2
...
Give a basis for V and find its dimension
...
Show that S and T are subspaces of the vector
space V
...
Give bases for S and T and specify their
dimensions
...
Give a description of the matrices in S ∩ T
...
9
...
Show that B = {u, v} is a basis for
...
Find the coordinates of the vector w =
relative to the ordered basis B
...
Let c be a fixed scalar and let
p1 (x) = 1
p2 (x) = x + c
p3 (x) = (x + c)2
a
...
b
...
Chapter 3: Chapter Test
In Exercises 1–35, determine whether the statement is
true or false
...
If V = ޒand addition and scalar multiplication
are defined as
x ⊕ y = x + 2y
then V is a vector space
...
The set
⎧⎡
⎤ ⎡ ⎤⎫
⎤⎡
0 ⎬
2
⎨ 1
S = ⎣ 3 ⎦, ⎣ 1 ⎦, ⎣ 4 ⎦
⎭
⎩
3
−1
1
is a basis for
...
A line in 3ޒis a subspace of dimension 1
...
The set
4
...
5
...
The set
x
y
S=
y≤0
is a subspace of
...
The set
S = {A ∈ M2×2 | det(A) = 0}
is a subspace of M2×2
...
The set
{2, 1 + x, 2 − 3x 2 , x 2 − x + 1}
is a basis for P3
...
The set
{x 3 − 2x 2 + 1, x 2 − 4, x 3 + 2x, 5x}
is a basis for P3
...
The dimension of the subspace
⎫
⎧⎡
⎤
⎬
⎨ s + 2t
S = ⎣ t − s ⎦ s, t ∈ ޒ
⎭
⎩
s
of 3ޒis 2
...
If
S=
and
T =
1
,
4
1
,
4
then span(S) = span(T )
...
⎫
⎬
⎭
13
...
14
...
15
...
16
...
17
...
18
...
19
...
, vn } is a linearly independent set
of vectors in ޒn , then S is a basis
...
If A is a 3 × 3 matrix and for every vector
⎤
⎡
a
b = ⎣ b ⎦ the linear system Ax = b has a
c
solution, then the column vectors of A span
...
If an n × n matrix is invertible, then the column
vectors form a basis for ޒn
...
If a vector space has bases S and T and the
number of elements of S is n, then the number of
elements of T is also n
...
In a vector space V , if
3
5
span{v1 , v2 ,
...
, wm are any elements of V , then
span{v1 , v2 ,
...
, wm } = V
...
5 Application: Differential Equations
24
...
25
...
and
30
...
3
−1
32
...
The coordinates of
1
1
, relative to B1 , are
...
27
...
28
...
The transition matrix from B2 to B1 is
B
[I ]B1
2
=
1 3
1 1
In Exercises 30–35, use the bases of P3 ,
B1 = {1, x, x 2 , x 3 }
B2 = {x, x 2 , 1, x 3 }
⎤
⎡
1
[x 3 + 2x 2 − x]B1 = ⎣ 2 ⎦
−1
⎤
⎡
0
⎢ −1 ⎥
⎥
[x 3 + 2x 2 − x]B1 = ⎢
⎣ 2 ⎦
1
⎤
⎡
0
⎢ −1 ⎥
⎥
[x 3 + 2x 2 − x]B2 = ⎢
⎣ 2 ⎦
1
⎤
⎡
−1
⎢ 2 ⎥
⎥
[x 3 + 2x 2 − x]B2 = ⎢
⎣ 0 ⎦
1
34
...
The transition matrix from B1 to
⎡
0 1 0
⎢ 0 0 1
B2
[I ]B1 = ⎢
⎣ 1 0 0
0 0 0
B2 is
⎤
0
0 ⎥
⎥
0 ⎦
1
197
Confirming Pages
Confirming Pages
CHAPTER
Linear Transformations
CHAPTER OUTLINE
4
...
2
4
...
4
4
...
6
Linear Transformations 200
The Null Space and Range 214
Isomorphisms 226
Matrix Representation of a Linear Transformation 235
Similarity 249
Application: Computer Graphics 255
A
critical component in the design of an airplane is
the airflow over the wing
...
Lift and
drag are aerodynamic forces that are generated by the
Yaw
movement of the aircraft through the air
...
Lift,
created by the rush of air over the wing, must overcome the force of gravity before the airplane can fly
...
These
models involve linear systems with millions of
x
y
equations and variables
...
1, linear algebra provides systematic methods for solv- Roll
Pitch
ing these equations
...
To check the feasibility of their designs, aeronautical engineers use
computer graphics to visualize simulations of the aircraft in flight
...
The pitch
measures the fore and aft tilt of an airplane, relative to the earth, while the roll measures the tilt from side to side
...
Using
the figure above, the pitch is a rotation about the y axis, while a roll is a rotation
about the x axis
...
During a simulation, the attitude and heading of
the aircraft can be changed by applying a transformation to its coordinates relative
to a predefined center of equilibrium
...
Specifically, if the angles of
199
Confirming Pages
Chapter 4 Linear Transformations
rotation for pitch, roll, and yaw are given by θ, ϕ, and ψ, respectively, then the matrix
representations for these transformations are given by
⎤
⎤
⎡
⎤ ⎡
⎡
cos ψ sin ψ 0
1
0
0
cos θ 0 − sin θ
⎦ ⎣ 0 cos ϕ − sin ϕ ⎦ and ⎣ − sin ψ cos ψ 0 ⎦
⎣ 0
1
0
0
0
1
0 sin ϕ cos ϕ
sin θ 0 cos θ
This type of transformation is a linear map between vector spaces, in this case from
3ޒto
...
Due to their wide applicability linear transformations on vector spaces are of
general interest and are the subject of this chapter
...
That
is, the image of a linear combination under a linear transformation is also a linear
combination in the range
...
4
...
One may metaphorically refer to elements of the set as nouns
and functions that operate on elements as verbs
...
3, and linear transformations on vector spaces
are the functions
...
In this case we say that T
maps V into W , and we write T: V −→ W
...
EXAMPLE 1
Define a mapping T: 2ޒ →− 2ޒby
T
x
y
=
x+y
x−y
a
...
b
...
c
...
ޒ
Confirming Pages
4
...
Since e1 =
T (e1 ) =
0
1
and e2 =
1+0
1−0
, we have
1
1
=
T (e2 ) =
and
b
...
Thus, the only vector that is mapped
0
0
...
To show that the mapping T preserves vector space addition, let
u=
u1
u2
v=
and
v1
v2
Then
v1
u1
+
u2
v2
u 1 + v1
=T
u 2 + v2
(u1 + v1 ) + (u2 + v2 )
=
(u1 + v1 ) − (u2 + v2 )
u 1 + u2
v1 + v 2
=
+
u1 − u2
v 1 − v2
v1
u1
+T
=T
u2
v2
= T (u) + T (v)
T (u + v) = T
We also have
cu1
cu2
cu1 + cu2
=
cu1 − cu2
= cT (u)
T (cu) = T
=c
u 1 + u2
u 1 − u2
A mapping T between vector spaces V and W that satisfies the two properties,
as in Example 1,
T (u + v) = T (u) + T (v)
and
T (cu) = cT (u)
Confirming Pages
202
Chapter 4 Linear Transformations
is called a linear transformation from V into W
...
Definition 1 combines the two requirements for the linearity of T into one
statement
...
The mapping T: V → W
is called a linear transformation if and only if
T (cu + v) = cT (u) + T (v)
for every choice of u and v in V and scalars c in
...
The mapping T defined in Example 1 is a linear operator on
...
EXAMPLE 2
Let A be an m × n matrix
...
Show that T is a linear transformation
...
Let A be the 2 × 3 matrix
A=
Find the images of
⎤
1
⎣ 1 ⎦
1
⎡
1 2 −1
−1 3
2
and
⎤
7
⎣ −1 ⎦
5
⎡
under the mapping T: 2ޒ → 3ޒwith T (x) = Ax
...
By Theorem 5 of Sec
...
3, for all vectors u and v in ޒn and all scalars c in ,ޒ
A(cu + v) = cAu + Av
Therefore,
T (cu + v) = cT (u) + T (v)
b
...
1 Linear Transformations
and
⎤⎞
7
T ⎝⎣ −1 ⎦⎠ =
5
⎛⎡
1 2 −1
−1 3
2
⎤
7
⎣ −1 ⎦ =
5
⎡
203
0
0
Later in this chapter, in Sec
...
4, we show that every linear transformation between
finite dimensional vector spaces can be represented by a matrix
...
In Example 3
we consider the action of a linear transformation from a geometric perspective
...
Discuss the action of T on
of the equation
⎤ ⎡
⎛⎡
1
T ⎝⎣ 0 ⎦ + ⎣
1
b
...
Find the image of the set
d
...
Solution
a vector in , 3ޒand give a geometric interpretation
⎤⎞
⎤⎞
⎛⎡
⎤⎞
⎛⎡
0
1
0
1 ⎦⎠ = T ⎝⎣ 0 ⎦⎠ + T ⎝⎣ 1 ⎦⎠
1
1
1
⎫
⎧ ⎡
⎤
1
⎬
⎨
S1 = t ⎣ 2 ⎦ t ∈ ޒ
⎭
⎩
1
⎫
⎧⎡
⎤
⎬
⎨ x
S2 = ⎣ y ⎦ x, y ∈ ޒ
⎭
⎩
3
⎫
⎧⎡
⎤
⎬
⎨ x
S3 = ⎣ 0 ⎦ x, z ∈ ޒ
⎭
⎩
z
a
...
Let
⎤
⎤
⎡ ⎤
⎡
⎡
1
0
1
v1 = ⎣ 0 ⎦
v2 = ⎣ 1 ⎦
and
v3 = v1 + v2 = ⎣ 1 ⎦
1
1
2
Confirming Pages
204
Chapter 4 Linear Transformations
The images of these vectors are shown in Fig
...
We see from the figure that
0
1
1
,
is equal to the vector sum T (v1 ) + T (v2 ) =
+
T (v3 ) =
1
1
0
as desired
...
The set S1 is a line in 3-space with direction vector ⎣ 2 ⎦
...
c
...
In
this case,
x
x, y ∈ ޒ
T (S2 ) =
y
Thus, the image of S2 is the entire xy plane, which from the description of T
as a projection is the result we expect
...
The set S3 is the xz plane
...
Again, this is the expected result, given our description
of T
...
Confirming Pages
4
...
a
...
b
...
c
...
Solution
First observe that if p(x) is in P3 , then it has the form
so that
p(x) = ax 3 + bx 2 + cx + d
T (p(x)) = p′ (x) = 3ax 2 + 2bx + c
Since p′ (x) is in P2 , then T is a map from P3 into P2
...
To show that T is linear, let p(x) and q(x) be polynomials of degree 3 or less,
and let k be a scalar
...
Consequently,
d
T (kp(x) + q(x)) =
(kp(x) + q(x))
dx
d
d
(kp(x)) +
(q(x))
=
dx
dx
= kp′ (x) + q ′ (x)
= kT (p(x)) + T (q(x))
Therefore, the mapping T is a linear transformation
...
The image of the polynomial p(x) = 3x 3 + 2x 2 − x + 2 is
T (p(x)) =
d
(3x 3 + 2x 2 − x + 2) = 9x 2 + 4x − 1
dx
c
...
PROPOSITION 1
Let V and W be vector spaces, and let T: V → W be a linear transformation
...
Proof Since T (0) = T (0 + 0) and T is a linear transformation, we know that
T (0) = T (0 + 0) = T (0) + T (0)
...
Confirming Pages
206
Chapter 4 Linear Transformations
EXAMPLE 5
Define a mapping T: 2ޒ →− 2ޒby
T
x
y
ex
ey
=
Determine whether T is a linear transformation
...
EXAMPLE 6
Define a mapping T: Mm×n −→ Mn×m by
T (A) = At
Show that the mapping is a linear transformation
...
1
...
EXAMPLE 7
Coordinates Let V be a vector space with dim(V ) = n, and B =
{v1 , v2 ,
...
Let T: V −→ ޒn be the map that sends a
vector v in V to its coordinate vector in ޒn relative to B
...
3
...
Show that the map T is also a linear transformation
...
Since B is a basis, there are
unique sets of scalars c1 ,
...
, dn such that
u = c1 v1 + · · · + cn vn
and
v = d1 v1 + · · · + dn vn
Confirming Pages
4
...
2 ⎥ + ⎢
...
=⎢
...
⎦
...
...
As mentioned earlier, when T: V −→ W is a linear transformation, then the
structure of V is preserved when it is mapped into W
...
To see this, let V and W be vector
spaces and T: V → W be a linear transformation
...
This is illustrated in Example 8
...
3ޒ
If
1
−1
0
T (e2 ) =
and
T (e3 ) =
T (e1 ) =
1
2
1
find T (v), where
⎤
⎡
1
v=⎣ 3 ⎦
2
To find the image of the vector v, we first write the vector as a linear combination
of the basis vectors
...
Hence,
⎤⎞
⎡
⎤⎞
⎛ ⎡ ⎤ ⎡ ⎤
⎛⎡
1
1
1
2
T ⎝⎣ 3 ⎦⎠ = T ⎝−1 ⎣ 1 ⎦ + ⎣ 2 ⎦ + 2 ⎣ 1 ⎦⎠
2
3
1
6
By the linearity of T, we have
⎤⎞
⎛⎡
⎤⎞
⎛⎡
⎛⎡
1
2
T ⎝⎣ 3 ⎦⎠ = (−1)T ⎝⎣ 1 ⎦⎠ + T ⎝⎣
1
6
⎤
⎡
⎡ ⎤ ⎡
−1
1
= − ⎣ 1 ⎦ + ⎣ −2 ⎦ + 2 ⎣
−3
1
⎤
⎡
2
=⎣ 1 ⎦
4
⎤⎞
⎤⎞
⎛⎡
1
1
2 ⎦⎠ + 2T ⎝⎣ 1 ⎦⎠
2
3
⎤
2
2 ⎦
4
Confirming Pages
4
...
For example, let S, T: 2ޒ → 2ޒbe
defined by
S
x
y
=
x+y
−x
and
x
y
T
=
2x − y
x + 3y
We then define
(S + T )(v) = S(v) + T (v)
To illustrate this definition, let v =
(S + T )(v) = S(v) + T (v) =
2
−1
(cS)(v) = c(S(v))
and
; then
2 + (−1)
−2
+
2(2) − (−1)
2 + 3(−1)
=
6
−3
For scalar multiplication let c = 3
...
THEOREM 1
Let V and W be vector spaces and let S, T: V → W be linear transformations
...
If c is any scalar, the function cS
defined by
(cS)(v) = cS(v)
is a linear transformation from V into W
...
Then
(S + T )(du + v) = S(du + v) + T (du + v)
= S(du) + S(v) + T (du) + T (v)
= dS(u) + S(v) + dT (u) + T (v)
= d(S(u) + T (u)) + S(v) + T (v)
= d(S + T )(u) + (S + T )(v)
Confirming Pages
210
Chapter 4 Linear Transformations
so that S + T is a linear transformation
...
Using the sum of two linear transformations and the scalar product defined above,
the set of all linear transformations between two given vector spaces is itself a vector
space, denoted by £(U, V )
...
As we saw in Example 2, every m × n matrix A defines a linear map from ޒn to
m
...
The
ޒ
product matrix AB, which is an m × p matrix, then defines a linear transformation
from ޒp to ޒm
...
4
...
The desire for this correspondence is what motivated
the definition of matrix multiplication given in Sec
...
3
...
If T: V → U and S : U → W are linear transformations, then the composition map S ◦T: V → W , defined by
(S ◦T )(v) = S(T (v))
is a linear transformation
...
2
...
Applying S ◦T to cv1 + v2 , we obtain
(S ◦T )(cv1 + v2 ) = S(T (cv1 + v2 ))
= S(cT (v1 ) + T (v2 ))
= S(cT (v1 )) + S(T (v2 ))
= cS(T (v1 )) + S(T (v2 ))
= c(S ◦T )(v1 ) + (S ◦T )(v2 )
This shows that S ◦T is a linear transformation
...
If, in
addition, we define a product on £(V , V ) by
ST (v) = (S ◦T )(v)
then the product satisfies the necessary properties making £(V , V ) a linear algebra
...
1 Linear Transformations
211
Fact Summary
Let V , W , and Z be vector spaces and S and T functions from V into W
...
The function T is a linear transformation provided that for all u, v in V and
all scalars c, T (cu + v) = cT (u) + T (v)
...
If A is an m × n matrix and T is defined by T (x) = Ax, then T is a linear
transformation from ޒn into ޒm
...
If T is a linear transformation, then the zero vector in V is mapped to the
zero vector in W , that is, T (0) = 0
...
If B = {v1 , v2 ,
...
5
...
, vn } is a set of vectors in V and T is a linear transformation,
then
T (c1 v1 + c2 v2 + · · · + cn vn ) = c1 T (v1 ) + c2 T (v2 ) + · · · + cn T (vn )
for all scalars c1 ,
...
6
...
7
...
Exercise Set 4
...
1
...
T
x
y
=
x+y
x−y+2
3
...
T
x
y
=
2x − y
x + 3y
5
...
T
x
y
In Exercises 7–16, determine whether the function is a
linear transformation between vector spaces
...
T: ,ޒ → ޒT (x) = x 2
8
...
T: ,ޒ → 2ޒT
10
...
T: , 3ޒ → 3ޒ
⎛⎡
⎛⎡
x
y
⎤
⎤⎞ ⎡
x+y−z
x
⎦
2xy
T ⎝⎣ y ⎦⎠ = ⎣
x+z+1
z
Confirming Pages
212
Chapter 4 Linear Transformations
Let
12
...
T: P3 → P3 ,
⎤
1
u=⎣ 2 ⎦
3
T
T (p(x)) = p(x) + x
15
...
T: M2×2 → M2×2 , T (A) = A + At
In Exercises 17–20, a function T: V → W between
vector spaces and two vectors u and v in V are given
...
Find T (u) and T (v)
...
Is T (u + v) = T (u) + T (v)?
c
...
Define T: 2ޒ → 2ޒby
Let
=
−2
3
u=
−x
y
v=
2
−2
18
...
Define T: P3 → 2ޒby
3
2
T (ax + bx + cx + d) =
−a − b + 1
c+d
1
0
=
2
3
T
14
...
Define T: 2ޒ → 3ޒby
⎤⎞
⎛⎡
x
T ⎝⎣ y ⎦⎠ =
z
...
If T: 3ޒ → 3ޒis a linear operator and
⎤
⎤⎞ ⎡
⎛⎡
1
1
T ⎝⎣ 0 ⎦⎠ = ⎣ −1 ⎦
0
0
⎤
⎤⎞ ⎡
⎛⎡
2
0
T ⎝⎣ 1 ⎦⎠ = ⎣ 0 ⎦
1
0
⎤
⎤⎞ ⎡
⎛⎡
1
0
T ⎝⎣ 0 ⎦⎠ = ⎣ −1 ⎦
1
1
⎤⎞
⎛⎡
1
then find T ⎝⎣ 7 ⎦⎠
...
If T: P2 → P2 is a linear operator and
T (1) = 1 + x
T (x) = 2 + x 2
T (x 2 ) = x − 3x 2
then find T (−3 + x − x 2 )
...
If T: M2×2 → M2×2 is a linear operator and
T (e11 ) =
0 1
0 0
T (e12 ) =
1
0
T (e21 ) =
1 1
0 0
T (e22 ) =
0 0
2 0
Let
3
⎤
−1
2
v = ⎣ −1 ⎦
1
21
...
1 Linear Transformations
then find
25
...
Define a linear operator T: 2ޒ → 2ޒby
? If so, find
→ 3ޒby
⎤
3
3 ⎦
2
27
...
Find a matrix A such that T (v) = Av
...
Find T (e1 ) and T (e2 )
...
Define a linear transformation T: 3ޒ → 2ޒby
⎤
⎡
x − 2y
x
= ⎣ 3x + y ⎦
T
y
2y
a
...
b
...
2
a
...
Is it possible to determine T (v) for all vectors
v in ? 3ޒExplain
...
26
...
Is it possible to determine T (2x 2 − 3x + 2)? If
so, find it; and if not, explain why
...
Is it possible to determine T (3x 2 − 4x)? If so,
find it; and if not, explain why
...
Suppose that T: 3ޒ → 3ޒis a linear operator
such that
⎤
⎤⎞ ⎡
⎛⎡
−1
1
T ⎝⎣ 0 ⎦⎠ = ⎣ 2 ⎦
3
0
⎤
⎤⎞ ⎡
⎛⎡
2
1
T ⎝⎣ 1 ⎦⎠ = ⎣ −2 ⎦
1
0
a
...
b
...
31
...
32
...
33
...
Find all vectors in 3ޒthat are mapped to the
zero vector
...
Let w = ⎣ −6 ⎦
...
Define T: C (0) [0, 1] → ޒby
1
T (f ) =
3
a vector v in ޒsuch that T (v) = w
...
Define T: P2 → P2 by
T (p(x)) = p′ (x) − p(0)
a
...
b
...
c
...
Suppose T1: V → ޒand T2: V → ޒare linear
transformations
...
36
...
Show
that T is a linear transformation
...
Suppose that B is a fixed n × n matrix
...
Show
that T is a linear operator
...
Define T: ޒ → ޒby T (x) = mx + b
...
ß
4
...
a
...
b
...
40
...
If T (v) = 0, then find T (u + v)
...
Suppose that T: ޒn → ޒm is a linear
transformation and {v, w} is a linearly
independent subset of ޒn
...
42
...
, vn } is linearly dependent
...
, T (vn )} is linearly dependent
...
Let S = {v1 , v2 , v3 } be a linearly independent
subset of
...
44
...
, vn } is a basis for V
...
, n, show
that T1 (v) = T2 (v) for all v in V
...
Verify that £(U, V ) is a vector space
...
3
...
We also defined the column space of A as the subspace of
ޒm of all linear combinations of the column vectors of A
...
DEFINITION 1
Null Space and Range Let V and W be vector spaces
...
2 The Null Space and Range
215
The null space of a linear transformation is then the set of all vectors in V that are
mapped to the zero vector, with the range being the set of all images of the mapping,
as shown in Fig
...
T
V
T
U
N (T )
V
0
U
R(T )
Figure 1
In Theorem 3 we see that the null space and the range of a linear transformation
are both subspaces
...
1
...
2
...
Proof (1) Let v1 and v2 be in N (T ), so that T (v1 ) = 0 and T (v2 ) = 0
...
3
...
(2) Let w1 and w2 be in R(T )
...
Then for any scalar c,
T (cv1 + v2 ) = cT (v1 ) + T (v2 ) = cw1 + w2
so that cw1 + w2 is in R(T ) and hence R(T ) is a subspace of W
...
Find a basis for the null space of T and its dimension
...
Give a description of the range of T
...
Find a basis for the range of T and its dimension
...
The null space of T is found by setting each component of the image vector
equal to 0
...
b
...
Therefore,
⎧⎡
⎤⎫
⎤⎡
⎤⎡
⎤⎡
0 ⎬
0
1
⎨ 1
R(T ) = span ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ −1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
0
0
1
c
...
Consequently, the four vectors found to span the range in part (b) are linearly
dependent and do not form a basis
...
3
...
2 The Null Space and Range
Since the reduced matrix has pivots in
range of T is
⎧⎡
⎤⎡
⎨ 1
B = ⎣ 0 ⎦, ⎣
⎩
1
217
the first three columns, a basis for the
⎤⎫
⎤⎡
0 ⎬
1
1 ⎦, ⎣ −1 ⎦
⎭
0
0
Therefore, dim(R(T )) = 3
...
3ޒ
EXAMPLE 2
Define the linear transformation T: P4 −→ P3 , by
T (p(x)) = p′ (x)
Find the null space and range of T
...
Since these are the only
polynomials for which the derivative is 0, we know that N (T ) is the set of constant
polynomials in P4
...
To see this, let
q(x) = ax 3 + bx 2 + cx + d be an arbitrary element of P3
...
That is, to find p(x),
we integrate q(x) to obtain
a
b
c
p(x) = q(x) dx = (ax 3 + bx 2 + cx + d) dx = x 4 + x 3 + x 2 + dx + e
4
3
2
which is an element of P4 , with p′ (x) = q(x)
...
In Sec
...
1, we saw that the image of an arbitrary vector v ∈ V can be computed if
the image T (vi ) is known for each vector vi in a basis for V
...
THEOREM 4
Let V and W be finite dimensional vector spaces and B = {v1 , v2 ,
...
If T: V → W is a linear transformation, then
R(T ) = span{T (v1 ), T (v2 ),
...
First, if w is in R(T ), then there is a vector v in V such that T (v) = w
...
, cn with
v = c1 v1 + c2 v2 + · · · + cn vn
so that
T (v) = T (c1 v1 + c2 v2 + · · · + cn vn )
Confirming Pages
218
Chapter 4 Linear Transformations
From the linearity of T, we have
w = T (v) = c1 T (v1 ) + c2 T (v2 ) + · · · + cn T (vn )
As w is a linear combination of T (v1 ), T (v2 ),
...
, T (vn )}
...
, T (vn )}
On the other hand, suppose that w ∈ span{T (v1 ), T (v2 ),
...
Then
there are scalars c1 ,
...
Therefore, span{T (v1 ), T (v2 ),
...
EXAMPLE 3
Let T: 3ޒ → 3ޒbe a linear operator and B = {v1 , v2 , v3 } a basis for
...
Is ⎣ 2 ⎦ in R(T )?
1
b
...
c
...
Solution
⎤
1
a
...
}ޒIn particular, if t = 0, then a solution is c1 = 2, c2 = −1, and c3 = 0
...
Confirming Pages
4
...
To find a basis for R(T ), we row-reduce the matrix
⎤
⎡
⎡
1 0
1
1
2
⎣ 0 1
⎣ 1
0
1 ⎦
to obtain
0 0
0 −1 −1
219
⎤
1
1 ⎦
0
Since the leading 1s are in columns 1 and 2, a basis for R(T ) is given by
⎧⎡
⎤⎫
⎤⎡
1 ⎬
⎨ 1
R(T ) = span ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
−1
0
Observe that since the range is spanned by two linearly independent vectors,
R(T ) is a plane in , 3ޒas shown in Fig
...
c
...
That is,
N (T ) = span {−v1 − v2 + v3 }
3
which is a line in
...
2
...
Confirming Pages
220
Chapter 4 Linear Transformations
THEOREM 5
Let V and W be finite dimensional vector spaces
...
To establish the result, we consider three cases
...
In this case, the image of every
vector in V is the zero vector (in W ), so that R(T ) = {0}
...
Now suppose 1 ≤ r = dim(N (T )) < n
...
, vr } be a basis for N (T )
...
3
...
, vn }, such
that {v1 , v2 ,
...
, vn } is a basis for V
...
, T (vn )} is a basis for R(T )
...
, T (vr ), T (vr+1 ),
...
, T (vn ) and hence R(T ) = span(S)
...
Since T is linear, the previous
equation can be written as
T (cr+1 vr+1 + cr+2 vr+2 + · · · + cn vn ) = 0
From this last equation, we have cr+1 vr+1 + cr+2 vr+2 + · · · + cn vn is in N (T )
...
, vr } is a basis for N (T ), there are scalars c1 , c2 ,
...
, vr , vr+1 ,
...
In particular, cr+1 = cr+2 = · · · = cn = 0
...
, T (vn ) are a basis for R(T )
...
If {v1 ,
...
, T (vn )}
A similar argument to the one above shows that {T (v1 ),
...
Thus, dim(R(T )) = n = dim(V ), and the result also holds in this
case
...
2 The Null Space and Range
EXAMPLE 4
221
Define a linear transformation T: P4 −→ P2 by
T (p(x)) = p′′ (x)
Find the dimension of the range of T, and give a description of the range
...
Since p(x) is in N (T ) if and
only if its degree is 0 or 1, the null space is the subspace of P4 consisting of polynomials with degree 1 or less
...
Since dim(P4 ) = 5, by Theorem 5 we have
2 + dim(R(T )) = 5
so
dim(R(T )) = 3
Then as in the proof of Theorem 5, we have
T (x 2 ), T (x 3 ), T (x 4 ) = 2, 6x, 12x 2
is a basis for R(T )
...
Matrices
In Sec
...
2 we defined the column space of a matrix A, denoted by col(A), as the
span of its column vectors
...
We further examine these notions here
in the context of linear transformations
...
In this way we see that the range of T, which is a subspace of ޒm , is equal to the
column space of A, that is,
R(T ) = col(A)
The dimension of the column space of A is called the column rank of A
...
Applying Theorem 5, we have
column rank(A) + nullity(A) = n
Another subspace of ޒn associated with the matrix A is the row space of A,
denoted by row(A), and is the span of the row vectors of A
...
3
...
In particular, the columns with the leading 1s in the rowreduced form of A correspond to the column vectors of A needed for a basis of col(A)
...
On the other hand, row-reducing A eliminates row vectors that are linear
combinations of the others, so that the nonzero row vectors of the reduced form of A
form a basis for row(A)
...
We have now established Theorem 6
...
We can now define the rank of a matrix A as dim(row(A)) or dim(col(A))
...
For example, suppose that
a linear system consists of 20 equations each with 22 variables
...
That
is, every solution to the homogeneous linear system Ax = 0 is a linear combination
of two linearly independent vectors in
...
Hence, col(A) = , 02ޒ
and consequently every vector b in 02ޒis a linear combination of the columns of A
...
02ޒIn general,
if A is an m × n matrix, nullity(A) = r, and dim(col) = n − r = m, then the linear
system Ax = b is consistent for every vector b in ޒm
...
2
...
THEOREM 7
Let A be an n × n matrix
...
1
...
3
...
The
The
The
The
matrix A is invertible
...
homogeneous linear system Ax = 0 has only the trivial solution
...
Confirming Pages
4
...
6
...
8
...
10
...
12
...
223
The determinant of the matrix A is nonzero
...
The column vectors of A span ޒn
...
rank(A) = n
R(A) = col(A) = ޒn
N (A) = {0}
row(A) = ޒn
The number of pivot columns of the reduced row echelon form of A is n
...
1
...
2
...
, vn } is a basis for V , then
R(T ) = span{T (v1 ),
...
If V and W are finite dimensional vector spaces, then
dim(V ) = dim(R(T )) + dim(N (T ))
4
...
If A is an m × n matrix, then the rank of A is the number of leading 1s in
the row-reduced form of A
...
If A is an n × n invertible matrix, in addition to Theorem 9 of Sec
...
3, we
know that rank(A) = n, R(A) = col(A) = ޒn , N (A) = {0}, and the
number of leading 1s in the row echelon form of A is n
...
2
In Exercises 1–4, define a linear operator T: 2ޒ → 2ޒ
by
x − 2y
x
=
T
−2x + 4y
y
Determine whether the vector v is in N (T )
...
v =
0
0
Confirming Pages
224
Chapter 4 Linear Transformations
2
...
v =
1
3
4
...
5
...
p(x) = 5x + 2
7
...
p(x) = 3
In Exercises 9–12, define a linear operator
T: 3ޒ → 3ޒby
⎤
⎤⎞ ⎡
⎛⎡
x+
2z
x
T ⎝⎣ y ⎦⎠ = ⎣ 2x + y + 3z ⎦
x − y + 3z
z
Determine whether the vector v is in R(T )
...
v = ⎣ 3 ⎦
0
⎡ ⎤
2
10
...
v = ⎣ 1 ⎦
−2
⎤
⎡
−2
12
...
⎤
⎡
−1 −1
13
...
A = ⎣ 3 −3 ⎦
−2
2
⎤
⎡
1 0
15
...
A = ⎣ −1
6 −2
In Exercises 17–24, find a basis for the null space of
the linear transformation T
...
T: , 2ޒ → 2ޒ
T
18
...
T: , 3ޒ → 3ޒ
⎛⎡
⎤
⎤⎞ ⎡
x + 2z
x
T ⎝⎣ y ⎦⎠ = ⎣ 2x + y + 3z ⎦
x − y + 3z
z
20
...
T: , 3ޒ → 3ޒ
⎛⎡
⎤
⎤⎞ ⎡
x − 2y − z
x
T ⎝⎣ y ⎦⎠ = ⎣ −x + 2y + z ⎦
2x − 4y − 2z
z
22
...
2 The Null Space and Range
23
...
Determine whether
⎤
−6
w=⎣ 5 ⎦
0
T (p(x)) = p(0)
24
...
25
...
T: , 3ޒ → 5ޒ
⎤
1 −2 −3 1 5
1 0 4 ⎦v
T (v) = ⎣ 3 −1
1
1
3 1 2
27
...
T: , 3ޒ → 3ޒ
⎛⎡
⎤
⎤⎞ ⎡
x − y + 3z
x
T ⎝ ⎣ y ⎦⎠ = ⎣ x + y + z ⎦
−x + 3y − 5z
z
29
...
T: P2 → P2 ,
T (ax 2 + bx + c) = (a + b)x 2 + cx + (a + b)
3ޒ
3ޒ
225
31
...
3ޒSuppose
⎤
⎤
⎡
⎡
−2
0
T (v2 ) = ⎣ 1 ⎦
T (v1 ) = ⎣ 1 ⎦
1
−1
⎤
⎡
−2
T (v3 ) = ⎣ 2 ⎦
0
⎡
is in the range of T
...
Find a basis for R(T )
...
Find dim(N (T ))
...
Let T: 3ޒ → 3ޒbe a linear operator and
B = {v1 , v2 , v3 } a basis for
...
Determine whether
⎤
−2
w=⎣ 1 ⎦
2
⎡
is in the range of T
...
Find a basis for R(T )
...
Find dim(N (T ))
...
Let T: P2 → P2 be defined by
T (ax 2 + bx + c) = ax 2 + (a − 2b)x + b
a
...
b
...
34
...
Determine whether p(x) = x 2 − x − 2 is in the
range of T
...
Find a basis for R(T )
...
Find a linear transformation T: 2ޒ → 3ޒsuch
that R(T ) =
...
Find a linear operator T: 2ޒ → 2ޒsuch that
R(T ) = N (T )
...
Define a linear operator T: Pn → Pn by
′
T (p(x)) = p (x)
a
...
b
...
c
...
38
...
Show dim(N (T )) = k
...
Suppose T: 6ޒ → 4ޒis a linear transformation
...
If dim(N (T )) = 2, then find dim(R(T ))
...
If dim(R(T )) = 3, then find dim(N (T ))
...
Show that if T: V → V is a linear operator such
that R(T ) = N (T ), then dim(V ) is even
...
Let
1
0
0 −1
→ M2×2 by
A=
Define T: M2×2
T (B) = AB − BA
ß
4
...
42
...
Show
that R(T ) = Mn×n
...
Define T: Mn×n → Mn×n by T (A) = A + At
...
Find R(T )
...
Find N (T )
...
Define T: Mn×n → Mn×n by T (A) = A − At
...
Find R(T )
...
Find N (T )
...
Let A be a fixed n × n matrix, and define
T: Mn×n → Mn×n by T (B) = AB
...
Let A be a fixed n × n diagonal matrix, and
define T: ޒn → ޒn by T (v) = Av
...
Show dim(R(T )) is the number of nonzero
entries on the diagonal of A
...
Find dim(N (T ))
...
In this section we show how an isomorphism, which is a special kind of
linear transformation, can be used to establish a correspondence between two vector
spaces
...
For a more detailed description see App
...
A
...
DEFINITION 1
One-to-One and Onto
mapping
...
The mapping T is called one-to-one (or injective) if u ̸= v implies that T (u) ̸=
T (v)
...
2
...
That is, the range
of T is W
...
When we are trying to show that a mapping is one-to-one, a useful equivalent formulation comes from the contrapositive statement
...
To show that a mapping is onto, we must show that
if w is an arbitrary element of W , then there is some element v ∈ V with T (v) = w
...
3 Isomorphisms
EXAMPLE 1
Let T: 2ޒ → 2ޒbe the mapping defined by T (v) = Av, with
1 1
−1 0
A=
Show that T is one-to-one and onto
...
Thus, u = v, establishing that the mapping is oneto-one
...
2ޒWe
Next, to show that T is onto, let w =
b
must show that there is a vector v =
T (v) =
v1
v2
1 1
−1 0
in 2ޒsuch that
v1
v2
=
a
b
Applying the inverse of A to both sides of this equation, we have
v1
v2
=
0
1
−1
1
a
b
Thus, T is onto
...
As verification, observe that
3
1 1
−1 0
−2
3
=
1
2
An alternative argument is to observe that the column vectors of A are linearly
independent and hence are a basis for
...
2ޒ
Confirming Pages
228
Chapter 4 Linear Transformations
Theorem 8 gives a useful way to determine whether a linear transforation is
one-to-one
...
Proof First suppose that T is one-to-one
...
To show
this, let v be any vector in the null space of T, so that T (v) = 0
...
4
...
Since T is one-to-one, then v = 0, so only
the zero vector is mapped to the zero vector
...
Since the null space consists of only the zero vector, u − v =
0, that is, u = v
...
Solution
The vector
x
y
2x − 3y
5x + 2y
is in the null space of T if and only if
2x − 3y = 0
5x + 2y = 0
This linear system has the unique solution x = y = 0
...
The mapping of Example 2 can alternatively be defined by using the matrix
A=
2
5
−3
2
so that T (x) = Ax
...
This allows us to show
that the map is also onto
...
In Theorem 4 of Sec
...
2, we showed that if T: V −→ W is a linear transformation between vector spaces, and B = {v1 ,
...
, T (vn )}
...
Confirming Pages
4
...
, vn } is a basis
for V
...
, T (vn )} is a basis for R(T )
...
4
...
, T (vn )} =
R(T ), so it suffices to show that {T (v1 ),
...
To do
so, we consider the equation
c1 T (v1 ) + c2 T (v2 ) + · · · + cn T (vn ) = 0
which is equivalent to
T (c1 v1 + c2 v2 + · · · + cn vn ) = 0
Since T is one-to-one, the null space consists of only the zero vector of V , so that
c1 v1 + c2 v2 + · · · + cn vn = 0
Finally, since B is a basis for V , it is linearly independent; hence
c1 = c 2 = · · · = c n = 0
Therefore, {T (v1 ),
...
We note that in Theorem 9 if T is also onto, then {T (v1 ),
...
We are now ready to define an isomorphism on vector spaces
...
A linear transformation T: V −→
W that is both one-to-one and onto is called an isomorphism
...
Proposition 2 builds on the remarks following Example 2 and gives a useful
characterization of linear transformations defined by a matrix that are isomorphisms
...
Then T is an isomorphism if and only if A is invertible
...
Then x = A−1 b is the
preimage of b
...
To show that T is one-to-one, observe
that by Theorem 10 of Sec
...
5 the equation Ax = 0 has only the solution x =
A−1 0 = 0
...
Conversely, suppose that T is an isomorphism
...
Hence, by Theorem 7 of Sec
...
2 the matrix
A is invertible
...
THEOREM 10
If V is a vector space with dim(V ) = n, then V and ޒn are isomorphic
...
, vn } be an ordered basis for V
...
4
...
We claim that T is an isomorphism
...
Since B is a basis, there are unique scalars c1 ,
...
⎥
⎣
...
...
Therefore, N (T ) = {0}, and by Theorem
8, T is one-to-one
...
⎥
⎣
...
kn
ޒn
...
Observe that T (v) = w
and hence T is onto
...
So far in our experience we have seen that dim(P2 ) = 3 and dim(S2×2 ) = 3,
where S2×2 is the vector space of 2 × 2 symmetric matrices
...
Next we show that in fact all vector
spaces of dimension n are isomorphic to one another
...
DEFINITION 3
Inverse of a Linear Transformation Let V and W be vector spaces and
T: V −→ W a one-to-one linear transformation
...
If T is onto, then T −1 is defined on all of W
...
3 Isomorphisms
231
By Theorem 4 of Sec
...
2, if T is one-to-one, then the inverse map is well
defined
...
Applying T gives T (T −1 (w)) = T (u) and T (T −1 (w)) = T (v), so that T (u) = T (v)
...
The inverse map of a one-to-one linear transformation is also a linear transformation, as we now show
...
Then the mapping T −1: R(T ) −→ V is also a linear transformation
...
Also let v1 and
v2 be vectors in V with T −1 (w1 ) = v1 and T −1 (w2 ) = v2
...
Proposition 4 shows that the inverse transformation of an isomorphism defined
by matrix multiplication can be written using the inverse of the matrix
...
PROPOSITION 4
EXAMPLE 3
Let A be an n × n invertible matrix and T: ޒn −→ ޒn the linear transformation
defined by T (x) = Ax
...
Let T: 2ޒ → 2ޒbe the mapping of Example 1 with T (v) = Av, where
A=
1 1
−1 0
Verify that the inverse map T −1: 2ޒ →− 2ޒis given by T −1 (w) = A−1 w, where
A−1 =
Solution
Let v =
v1
v2
0 −1
1
1
be a vector in
...
Proof By Theorem 10, there are isomorphisms T1: V −→ ޒn and T2: W −→ ޒn ,
as shown in Fig
...
Let φ = T2−1 ◦T1: V −→ W
...
Next by Theorem 2 of Sec
...
1, the
composition T2−1 ◦T1 is linear
...
A
...
T1
V
ޒn
φ
T2
W
φ = T2−1 ◦T1: V −→ W
Figure 1
EXAMPLE 4
Solution
Find an explicit isomorphism from P2 onto the vector space of 2 × 2 symmetric
matrices S2×2
...
Let T1 and T2 be the respective
coordinate maps from P2 and S2×2 into
...
3 Isomorphisms
Observe that T2−1: →− 3ޒS2×2 maps the vector
⎤
⎡
c
⎣ b ⎦
to the symmetric matrix
a
a
b
b
c
Thus, the desired isomorphism is given by (T2−1 ◦T1 ): P2 −→ S2×2 with
For example,
(T2−1 ◦T1 )(ax 2 + bx + c) =
a
b
b
c
⎤⎞
2
(T2−1 ◦T1 )(x 2 − x + 2) = T2−1 (T1 (x 2 − x + 2)) = T2−1 ⎝⎣ −1 ⎦⎠ =
1
⎛⎡
1
−1
−1
2
Fact Summary
Let V and W be vector spaces and T a linear transformation from V into W
...
The mapping T is one-to-one if and only if the null space of T consists of
only the zero vector
...
If {v1 ,
...
, T (vn )} is a basis for the range of T
...
3
...
4
...
5
...
6
...
Then the mapping T is an
isomorphism if and only if A is invertible
...
If A is an invertible matrix and T (x) = Ax, then T −1 (x) = A−1 x
...
3
In Exercises 1–6, determine whether the linear
transformation is one-to-one
...
T: , 2ޒ → 2ޒ
T
2
...
T: ⎡⎛ޒ → 3ޒ
⎤
⎤⎞ ⎡
x+y−z
x
⎦
y
T ⎝⎣ y ⎦⎠ = ⎣
y−z
z
4
...
T: P2 → P2 ,
T (p(x)) = p′ (x) − p(x)
6
...
7
...
T: , 2ޒ → 2ޒ
x
y
T
−2x + y
x − 1y
2
=
9
...
T: , 2ޒ → 2ޒ
T
1
10 x
1
5x
+ 1y
5
+ 2y
5
15
...
T: , 3ޒ → 3ޒ
⎛⎡
⎤
⎤⎞ ⎡
2x + 3y − z
x
T ⎝⎣ y ⎦⎠ = ⎣ 2x + 6y + 3z ⎦
4x + 9y + 2z
z
17
...
T: , 3ޒ → 3ޒ
⎛⎡
10
...
Determine whether the set {T (e1 ), T (e2 )} is a basis
for
...
T: , 2ޒ → 2ޒ
T
x
y
12
...
T:
=
In Exercises 15–18, T: 3ޒ → 3ޒis a linear operator
...
3ޒ
⎤
⎤⎞ ⎡
x − y + 2z
x
⎦
y−z
T ⎝ ⎣ y ⎦⎠ = ⎣
2z
z
2ޒ
x
y
=
x
−3x
, 2ޒ
T
x
y
=
3x − y
−3x − y
⎤⎞ ⎡
⎤
4x − 2y + z
x
⎦
2x + z
T ⎝⎣ y ⎦⎠ = ⎣
z
2x − y + 3 z
2
⎤
⎤⎞ ⎡
x − y + 2z
x
T ⎝⎣ y ⎦⎠ = ⎣ −x + 2y − z ⎦
−y + 5z
z
In Exercises 19 and 20, T: P2 → P2 is a linear
operator
...
19
...
T (p(x)) = xp′ (x)
In Exercises 21–24, let T: V → V be the linear
operator defined by T (v) = Av
...
Show that T is an isomorphism
...
Find A−1
...
Show directly that T −1 (w) = A−1 w for all
w ∈ V
...
T
y
−2 −3
y
Confirming Pages
4
...
T
⎛⎡
x
23
...
T ⎝⎣ y
z
x
−2
3
y
−1 −1
⎤⎞ ⎡
−2
0
1
⎦⎠ = ⎣ 1 −1 −1
0
1
0
⎤⎞ ⎡
2 −1
1
⎦⎠ = ⎣ −1
1 −1
0
1
0
=
30
...
25
...
T ⎝⎣ y ⎦⎠ = ⎣ 2
z
1
1 −3
z
⎤
⎤⎡
⎤⎞ ⎡
⎛⎡
x
1
3
0
x
28
...
T
x
y
=
29
...
ß
4
...
31
...
Show that
T: Mn×n → Mn×n defined by
T (B) = ABA−1
is an isomorphism
...
Find an isomorphism from M2×2 onto
...
Find an isomorphism from 4ޒonto P3
...
Find an isomorphism from M2×2 onto P3
...
Let
⎫
⎧⎡
⎤
⎬
⎨ x
V = ⎣ y ⎦ x + 2y − z = 0
⎭
⎩
z
Find an isomorphism from V onto
...
Let
a
b
a, b, c ∈ ޒ
c −a
Find an isomorphism from P2 onto V
...
Suppose T: 3ޒ → 3ޒis an isomorphism
...
Matrix Representation of a Linear Transformation
Matrices have played an important role in our study of linear algebra
...
To illustrate
the idea, recall from Sec
...
1 that given any m × n matrix A, we can define a linear
transformation T: ޒn −→ ޒm by
T (v) = Av
In Example 8 of Sec
...
1, we showed how a linear transformation T: 2ޒ →− 3ޒis
completely determined by the images of the coordinate vectors e1 , e2 , and e3 of
...
Then
1 −1 0
v = Av
T (v) =
1
2 1
That is, the linear transformation T is given by a matrix product
...
, n
...
In this section we show that every linear transformation between finite dimensional
vector spaces can be written as a matrix multiplication
...
If
T: V −→ W is a linear transformation, then there exists a matrix A such that
[T (v)]B ′ = A[v]B
In the case for which V = ޒn , W = ޒm , and B and B ′ are, respectively, the standard
bases, the last equation is equivalent to
T (v) = Av
as above
...
Let V and W be vector spaces with ordered bases B = {v1 , v2 ,
...
, wm }, respectively, and let T: V −→ W be a linear transformation
...
⎥
⎣
...
cn
be the coordinate vector of v relative to the basis B
...
, n the vector T (vi ) is in W
...
4 Matrix Representation of a Linear Transformation
237
T (v1 ) = a11 w1 + a21 w2 + · · · + am1 wm
T (v2 ) = a12 w1 + a22 w2 + · · · + am2 wm
...
...
⎥
for i = 1, 2,
...
⎦
...
4
...
Thus, the coordinate vector of T (v) relative to B ′ can be written in vector
form as
⎡
⎤
⎡
⎤
⎡
⎤
a11
a12
a1n
⎢ a21 ⎥
⎢ a22 ⎥
⎢ a2n ⎥
[T (v)]B ′ = c1 ⎢
...
⎥ + · · · + cn ⎢
...
⎦
⎣
...
⎦
...
...
...
a12
a22
...
...
...
...
am1
am2
...
...
⎤⎡
⎤
c1
⎥⎢ c ⎥
⎥⎢ 2 ⎥
⎥⎣
...
⎦
...
In the case for which T: V → V
is a linear operator and B is a fixed ordered basis for V , the matrix representation for
the mapping T is denoted by [T ]B
...
THEOREM 12
Let V and W be finite dimensional vector spaces with ordered bases B = {v1 ,
v2 ,
...
wm }, respectively, and let T: V −→ W be a linear
′
transformation
...
Moreover, the coordinates of T (v) relative to B ′ are given by
′
[T (v)]B ′ = [T ]B [v]B
B
Confirming Pages
238
Chapter 4 Linear Transformations
Suppose that in Theorem 12 the vector spaces V and W are the same, B and B ′
are two different ordered bases for V , and T: V −→ V is the identity operator, that
′
′
is, T (v) = v for all v in V
...
3
...
EXAMPLE 1
Solution
Define the linear operator T: 3ޒ →− 3ޒby
⎤
⎤⎞ ⎡
⎛⎡
x
x
T ⎝⎣ y ⎦⎠ = ⎣ −y ⎦
z
z
a
...
3ޒ
b
...
Let B = {e1 , e2 , e3 } be the standard basis for
...
Since B is the standard basis for , 3ޒthe
by its components
...
1
...
Confirming Pages
4
...
For the given basis B = {v1 , v2 ,
...
, T (vn )
...
Find the coordinates of T (v1 ), T (v2 ),
...
, wm } of W
...
, [T (vn )]B ′
...
Define the m × n matrix [T ]B with ith column vector equal to [T (vi )]B ′
...
Compute [v]B
...
Compute the coordinates of T (v) relative to B ′ by
⎡
⎤
c1
⎢ c2 ⎥
′
[T (v)]B ′ = [T ]B [v]B = ⎢
...
⎦
...
Then T (v) = c1 w1 + c2 w2 + · · · + cm wm
...
be ordered bases for ޒ
′
Solution
a
...
B
−3
...
b
...
We first apply T to the basis vectors of B, which gives
⎤
⎡
2
3
1
and
T
=⎣ 3 ⎦
T
1
2
−1
⎤
1
=⎣ 4 ⎦
2
⎡
Next we find the coordinates of each of these vectors relative to the basis B ′
...
Using the definition of T directly, we have
⎤
⎤ ⎡
⎡
−2
−2
−3
= ⎣ −3 − 2 ⎦ = ⎣ −5 ⎦
T
−2
−1
−3 + 2
Now, to use the matrix found in part (a), we need to find the coordinates of
v relative to B
...
EXAMPLE 3
⎡
Define a linear transformation T: P2 −→ P3 by
T (f (x)) = x 2 f ′′ (x) − 2f ′ (x) + xf (x)
Find the matrix representation of T relative to the standard bases for P2 and P3
...
4 Matrix Representation of a Linear Transformation
Solution
241
Since the standard basis for P2 is B = {1, x, x 2 }, we first compute
T (1) = x
T (x) = x 2 − 2
T (x 2 ) = x 2 (2) − 2(2x) + x(x 2 ) = x 3 + 2x 2 − 4x
Since the standard basis for P3 is B ′ = {1, x, x 2 , x 3 }, the coordinates relative to
B ′ are
[T (1)]B ′
⎤
0
⎢ 1 ⎥
=⎢ ⎥
⎣ 0 ⎦
0
⎡
[T (x)]B ′
⎤
−2
⎢ 0 ⎥
⎥
=⎢
⎣ 1 ⎦
0
⎡
and
[T (x 2 )]B ′
⎤
0
⎢ −4 ⎥
⎥
=⎢
⎣ 2 ⎦
1
⎡
Hence, the matrix of the transformation is given by
⎤
⎡
0 −2
0
⎢ 1
′
0 −4 ⎥
⎥
[T ]B = ⎢
B
⎣ 0
1
2 ⎦
0
0
1
As an example, let f (x) = x 2 − 3x + 1
...
In Sec
...
1 we discussed the addition, scalar multiplication, and composition of
linear maps
...
The proofs are omitted
...
If S and T are linear transformations from V to W , then
′
′
′
1
...
[kT ]B = k[T ]B
for any scalar k
B
B
As before in the special case for which S and T are linear operators on a finite
dimensional vector space V , and B is a fixed ordered basis for V , the notation becomes
[S + T ]B = [S]B + [T ]B and [kT ]B = k[T ]B
...
Solution
The matrix representations for the linear operators S and T are, respectively,
⎤ ⎡
⎤ ⎤
⎡⎡
1
2
[S]B = ⎣⎣ S(e1 ) ⎦ ⎣ S(e2 ) ⎦ ⎦ =
0 −1
B
and
B
⎤
⎡⎡
[T ]B = ⎣⎣ T (e1 ) ⎦
⎣ T (e2 ) ⎦ ⎦ =
B
and
1
2
0 −1
[3S]B = 3
−1 1
3 0
B
Then by Theorem 13,
[S + T ]B =
⎤ ⎤
⎡
+
1
0
2
−1
−1 1
3 0
=
=
3
0
0
3
3
−1
6
−3
As we mentioned in Sec
...
1, the matrix of the composition is the product of the
matrices of the individual maps, as given in Theorem 14
...
If T: U → V and S: V → W are linear transformations, then
′′
′′
[S ◦T ]B = [S]B ′ [T ]B
B
B
B
′
Confirming Pages
4
...
COROLLARY 1
EXAMPLE 5
Let V be a finite dimensional vector space with ordered basis B
...
Find the matrix of D relative to the standard basis B = {1, x, x 2 , x 3 }
...
b
...
Use this matrix to find the second derivative of p(x) = 1 − x + 2x 3
...
By Theorem 12, we have
⎡⎡
⎤
[D]B = ⎣⎣ D(1) ⎦
⎡
0 1
⎢ 0 0
=⎢
⎣ 0 0
0 0
0
2
0
0
⎡
B
⎤
⎣ D(x) ⎦
⎡
B
⎤
0
0 ⎥
⎥
3 ⎦
0
⎤
⎣ D(x 2 ) ⎦
B
⎡
⎤ ⎤
⎣ D(x 3 ) ⎦ ⎦
B
Since the coordinate vector of p(x) = 1 − x + 2x 3 , relative to B, is given by
⎤
⎡
1
⎢ −1 ⎥
⎥
[p(x)]B = ⎢
⎣ 0 ⎦
2
then
⎡
0
⎢ 0
[D(p(x))]B = ⎢
⎣ 0
0
1
0
0
0
0
2
0
0
⎤
⎤ ⎡
⎤⎡
−1
1
0
0 ⎥ ⎢ −1 ⎥ ⎢ 0 ⎥
⎥
⎥=⎢
⎥⎢
3 ⎦⎣ 0 ⎦ ⎣ 6 ⎦
0
2
0
Confirming Pages
244
Chapter 4 Linear Transformations
Therefore, as expected, D(p(x)) = −1 + 6x 2
...
By Corollary 1, the matrix we need is given by
⎡
0 0
⎢ 0 0
[D 2 ]B = ([D]B )2 = ⎢
⎣ 0 0
0 0
If p(x) = 1 − x + 2x 3 , then
⎡
0
⎢ 0
2
[D (p(x))]B = ⎢
⎣ 0
0
so that p′′ (x) = 12x
...
COROLLARY 2
Let T be an invertible linear operator on a finite dimensional vector space V and
B an ordered basis for V
...
Fact Summary
Let V and W be vector spaces, B = {v1 ,
...
, wm } ordered
bases of V and W , respectively, and T a linear transformation from V into W
...
The matrix of T relative to B and B ′ is given by
′
[T ]B = [ [T (v1 )]B ′ [T (v2 )]B ′ · · · [T (vn )]B ′ ]
B
2
...
4 Matrix Representation of a Linear Transformation
245
3
...
That is, if [T (v)]B ′ = [b1 b2
...
If S is another linear transformation from V into W , then the matrix
representation of S + T relative to B and B ′ is the sum of the matrix
′
′
′
representations for S and T
...
B
B
B
5
...
That is, [cT ]B = c[T ]B
...
If S is a linear transformation from W into Z and B ′′ is an ordered basis
′′
′′
′
for Z, then [S ◦T ]B = [S]B ′ [T ]B
...
[T n ]B = ([T ]B )n
8
...
Exercise Set 4
...
a
...
b
...
4
...
T: , 2ޒ → 2ޒ
T
x
y
2
...
T: , 3⎛ → 3ޒ
⎡ޒ
v=
2
1
⎛⎡
In Exercises 5–12, T: V → V is a linear operator with
B and B ′ ordered bases for V
...
Find the matrix representation for T relative to
the ordered bases B and B ′
...
Find T (v), using a direct computation and
using the matrix representation
...
T: , 2ޒ → 2ޒ
T
x
y
=
B=
−x + 2y
3x
1
−1
,
2
0
Confirming Pages
246
Chapter 4 Linear Transformations
B′ =
v=
1
,
0
−1
−2
9
...
T: , 3ޒ → 3ޒ
⎤
⎤⎞ ⎡
⎛⎡
2x − z
x
T ⎝⎣ y ⎦⎠ = ⎣ −x + y + z ⎦
2z
z
⎧⎡
⎤ ⎡ ⎤⎫
⎤⎡
1 ⎬
1
⎨ −1
B = ⎣ 0 ⎦, ⎣ 2 ⎦, ⎣ 2 ⎦
⎭
⎩
1
0
1
⎫
⎧⎡
⎤
⎤⎡ ⎤⎡
0 ⎬
0
⎨ 1
B ′ = ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
0
0
⎤
⎡
1
v = ⎣ −1 ⎦
1
7
...
T: , 3ޒ → 3ޒ
⎤
⎤⎞ ⎡
⎛⎡
x+z
x
T ⎝⎣ y ⎦⎠ = ⎣ 2y − x ⎦
y+z
z
⎧⎡
⎤⎡
⎤⎡
0
−1
⎨ −1
B = ⎣ 1 ⎦, ⎣ −1 ⎦, ⎣ 1
⎩
1
1
1
⎧⎡
⎤⎡
⎤⎡
−1
1
⎨ 0
B ′ = ⎣ 0 ⎦, ⎣ 0 ⎦, ⎣ −1
⎩
0
−1
1
⎤
⎡
−2
v=⎣ 1 ⎦
3
B ′ = {1, x, x 2 }
v = x 2 − 3x + 3
10
...
Let
1
0
0 −1
H =
and let T be the linear operator on all 2 × 2
matrices with trace 0, defined by
T (A) = AH − H A
1
0
B=
0 −1
B′ = B
2
1
v=
3 −2
0 1
0 0
,
,
0 0
1 0
12
...
Let T: 2ޒ → 2ޒbe the linear operator defined by
T
x
y
x + 2y
x− y
=
Let B be the standard ordered basis for 2ޒand B ′
the ordered basis for 2ޒdefined by
B′ =
a
...
b
...
1
2
,
4
−1
Confirming Pages
4
...
Find [T ]B
...
Find [T ]B ′
...
Let C be the ordered basis obtained by
switching the order of the vectors in B
...
C
f
...
Find
′
[T ]B ′
...
Let T: 3ޒ → 2ޒbe the linear transformation
defined by
⎤
⎡
x−y
x
⎦
x
=⎣
T
y
x + 2y
b
...
Find
′
[T ]B
...
Let C ′ be the ordered basis obtained by
switching the first and second vectors in B ′
...
C
d
...
B
′
e
...
B
B
f
...
Find [T ]B
...
Find [T ]B ′
...
Let C be the ordered basis obtained by
switching the order of the vectors in B
...
C
d
...
Find
′′
[T ]B ′
...
Let C ′′ be the ordered basis obtained by
switching the order of the first and third
′′
vectors in B ′′
...
B
15
...
′
a
...
B
247
in terms of the functions T and S
...
Define a linear operator T: M2×2 → M2×2 by
a
c
T
b
d
2a
−d
=
c−b
d −a
Let B be the standard ordered basis for M2×2 and
B ′ the ordered basis
B′ =
a
...
c
...
e
...
B
Find [T ]B ′
...
′
Find [I ]B and [I ]B ′
...
Define a linear operator T: 2ޒ → 2ޒby
T
x
y
=
x
−y
Find the matrix for T relative to the standard
basis for
...
2ޒ
Confirming Pages
248
Chapter 4 Linear Transformations
18
...
2ޒ
19
...
⎥⎟ = c ⎢
...
⎦⎠
⎣
...
...
20
...
21
...
⎤
⎤⎞ ⎡
3x − z
x
⎦
x
S ⎝⎣ y ⎦⎠ = ⎣
z
z
⎛⎡
by
a
...
b
...
26
...
−3T + 2S
28
...
S ◦T
30
...
Find the matrix representation for the given
linear operator relative to the standard basis
...
Compute the image of v =
3
and using the matrix found in part (a)
...
−3S
23
...
T ◦S
25
...
31
...
Use the matrix to find the third
derivative of p(x) = −2x 4 − 2x 3 + x 2 − 2x − 3
...
Let T: P2 → P2 be defined by
T (p(x)) = p(x) + xp′ (x)
Find the matrix [T ]B where B is the standard
basis for P2
...
5 Similarity
33
...
S(p(x)) = xp(x)
35
...
Observe
that the operator T in Exercise 32 satisfies
T = D ◦S
...
B
B
Find the matrix for T relative to the standard
basis for M2×2
...
Let B = {v1 , v2 , v3 } and B ′ = {v2 , v1 , v3 } be
ordered bases for the vector space V
...
Describe the relationship between [v]B and
B
[v]B ′ and the relationship between the identity
′
matrix I and [T ]B
...
a
...
line perpendicular to
1
b
...
2ޒFind [T ]B
where T: 2ޒ → 2ޒis the linear operator that
ß
4
...
Let V be a vector space and B = {v1 , v2 ,
...
Define v0 = 0 and
T: V → V by
T (vi ) = vi + vi−1
for i = 1,
...
Similarity
We have just seen in Sec
...
4 that if T: V −→ V is a linear operator on the vector
space V , and B is an ordered basis for V , then T has a matrix representation relative
to B
...
However, the action of the
operator T on V is always the same regardless of the particular matrix representation,
as illustrated in Example 1
...
be a second basis for
...
Next, observe that
[v]B1 =
2
3
and
[v]B2 =
1
1
Applying the matrix representations of the operator T relative
obtain
2
1 1
=
[T (v)]B1 = [T ]B1 [v]B1 =
3
−2 4
and
1
2 0
[T (v)]B2 = [T ]B2 [v]B2 =
=
1
0 3
To see that the result is the same, observe that
T (v) = 5
1
0
+8
0
1
=
5
8
and
T (v) = 2
1
1
to B1 and B2 , we
5
8
2
3
+3
1
2
=
5
8
Theorem 15 gives the relationship between the matrices for a linear operator
relative to two distinct bases
...
Let P = [I ]B1 be the transition matrix from B2
2
to B1
...
By Theorem 12 of Sec
...
4, we have
[T (v)]B2 = [T ]B2 [v]B2
Alternatively, we can compute [T (v)]B2 as follows: First, since P is the transition
matrix from B2 to B1 ,
[v]B1 = P [v]B2
Thus, the coordinates of T (v) relative to B1 are given by
[T (v)]B1 = [T ]B1 [v]B1 = [T ]B1 P [v]B2
Now, to find the coordinates of T (v) relative to B2 , we multiply on the left by
P −1 , which is the transition matrix from B1 to B2 , to obtain
[T (v)]B2 = P −1 [T ]B1 P [v]B2
Since both representations for [T (v)]B2 hold for all vectors v in V , then [T ]B2 =
P −1 [T ]B1 P
...
1
...
5 Similarity
[v]B1
251
[T (v)]B1
[T ]B1
P −1
P
[T ]B2
[v]B2
[T (v)]B2
Figure 1
EXAMPLE 2
Let T, B1 , and B2 be the linear operator and bases of Example 1
...
3
...
2ޒFind the matrix of T relative to B1 , and then use Theorem 15
to find the matrix of T relative to B2
...
Using Theorem 15, we can define
similarity for square matrices without reference to a linear operator
...
We say that A is similar to
B if there is an invertible matrix P such that B = P −1 AP
...
This relation is
symmetric; that is, if the matrix A is similar to the matrix B, then B is similar to A
...
For this reason we say that
A and B are similar if either A is similar to B or B is similar to A
...
This relation is also transitive; that is, if A is similar to B and B is similar
to C, then A is similar to C
...
Any relation satisfying these three
properties is called an equivalence relation
...
5 Similarity
253
Fact Summary
Let V be a finite dimensional vector space, B1 and B2 ordered bases of V, and T
a linear operator on V
...
The matrix representations [T ]B1 and [T ]B2 are similar
...
In addition, the matrix
P is the transition matrix from B2 to B1
...
A matrix is similar to itself
...
If
A is similar to B and B is similar to C, then A is similar to C
...
5
In Exercises 1 and 2, [T ]B1 is the matrix
representation of a linear operator relative to the basis
B1 , and [T ]B2 is the matrix representation of the same
operator relative to the basis B2
...
1 2
−1 3
, [T ]B2 =
B1 =
1
,
0
1
,
1
−1
0
v=
4
−1
x
y
, [T ]B2 =
1
,
0
4
...
a
...
b
...
[T ]B1 =
0
2
2 1
−1 2
3
...
[T ]B1 =
representation of T relative to the bases B1
and B2
...
T ⎝⎣ y ⎦⎠ = ⎣ 0 ⎦
z
z
−1
2
B1 = {e1 , e2 , e3 }
⎧⎡ ⎤ ⎡
⎤⎫
⎤⎡
0 ⎬
−1
⎨ 1
B2 = ⎣ 0 ⎦, ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
0
1
⎤
⎡
1
v=⎣ 2 ⎦
−1
Revised Confirming Pages
254
Chapter 4 Linear Transformations
⎤
⎤⎞ ⎡
x+y
x
6
...
[T ]B1 =
1 1
3 2
0
2
2
−3
[T ]B2 =
[T ]B2 =
B1 = {e1 , e2 }
−1
,
B2 =
2
9
...
[T ]B1 =
In Exercises 11–14, find the matrix representation of
the linear operator T relative to B1
...
11
...
[T ]B1 =
B2 =
B1 = {e1 , e2 }
In Exercises 7–10, [T ]B1 and [T ]B2 are, respectively,
the matrix representations of a linear operator relative
to the bases B1 and B2
...
1
,
0
−1 0
0 1
[T ]B2 =
9
2
23
2
12
...
T
x
y
=
B1 =
−4 1
0 1
1
0
0
1
3
B2 =
14
...
Let T: P2 −→ P2 be the linear operator defined
by T (p(x)) = p′ (x)
...
Find the
B
transition matrix P = [I ]B1 , and use Theorem 15
2
Confirming Pages
4
...
18
...
16
...
Find the matrix
representation [T ]B1 relative to the basis
B1 = {1, x, x 2 } and the matrix representation
[T ]B2 relative to B2 = {1, x, 1 + x 2 }
...
19
...
17
...
ß
4
...
Show that if A and B are similar matrices, then
At and B t are similar matrices
...
Show that if A and B are similar matrices, then
An and B n are similar matrices for each positive
integer n
...
Show that if A and B are similar matrices and λ
is any scalar, then det(A − λI ) = det(B − λI )
...
Computer-generated visual content is ubiquitous, found
in almost every arena from advertising and entertainment to science and medicine
...
Computer graphics are based
on displaying two- or three-dimensional objects in two-dimensional space
...
A single picture can be comprised of millions
of pixels, which collectively determine the image
...
1
...
The saddle shown in Fig
...
Figure 1
Confirming Pages
256
Chapter 4 Linear Transformations
Graphics Operations in 2ޒ
To manipulate images, computer programmers use linear transformations
...
2ޒOne of the properties of linear transformations that is especially useful to our work here is that linear
transformations map lines to lines, and hence polygons to polygons
...
) Therefore, to visualize the result of a linear
transformation on a polygon, we only need to transform the vertices
...
Scaling and Shearing
y
5
T
6
x
A transformation on an object that results in a horizontal contraction or dilation
(stretching) is called a horizontal scaling
...
2 with vertices (1, 1), (2, 1), and 3 , 3
...
The transformed triangle T ′ is obtained
by multiplying the x coordinate of each vertex by 3
...
3
...
2ޒ
Then by Theorem 12 of Sec
...
4, we have
Figure 2
T
x
y
'
T
6
x
Let vi and v′i , for i = 1, 2, and 3, be, respectively, the vertices (in vector form) of T
and T ′
...
Specifically,
v′1 =
Figure 3
3 0
0 1
3 0
0 1
1
1
=
3
1
v′2 =
3 0
0 1
2
1
=
6
1
and
v′3 =
3 0
0 1
3
2
3
=
9
2
3
These results are consistent with the transformed triangle T ′, as shown in Fig
...
In general, a horizontal scaling by a factor k is given by the linear transformation
Sh defined by
kx
x
=
Sh
y
y
Confirming Pages
4
...
In all the above cases, if k > 1, then the transformation is called a
dilation, or stretching; and if 0 < k < 1, then the operator is a contraction
...
4
...
Stretch the triangle horizontally by a factor of 2
...
Contract the triangle vertically by a factor of 3
...
Stretch the triangle horizontally by a factor of 2, and contract the triangle
vertically by a factor of 3
...
To stretch the triangle horizontally by a factor of 2, we apply the matrix
y
2 0
0 1
5
T
to each vertex to obtain
Ϫ5
5
Ϫ5
Figure 4
x
v′1 =
0
1
v′2 =
4
1
v′3 =
2
3
Connecting the new vertices by straight-line segments gives the triangle T ′
shown in Fig
...
b
...
5(b)
...
This operator is the composition of the linear operators of parts (a) and (b)
...
4
...
6
...
The linear operator S: 2ޒ → 2ޒused to produce a horizontal shear has the
form
x + ky
x
=
S
y
y
where k is a real number
...
7(a) with vertices v1 =
0
2
1
, and v3 =
, and let k = 2
...
6 Application: Computer Graphics
0
0
to each of the vertices of T, we obtain v′1 =
The resulting triangle
T′
is shown in Fig
...
2
0
, v′2 =
y
, and v′3 =
y
5
5
'
T
T
Ϫ5
5
x
Ϫ5
Ϫ5
5
x
Ϫ5
(a)
(b)
Figure 7
A vertical shear is defined similarly by
S
x
y
x
y + kx
=
In this case the matrix for S, relative to the standard basis B, is given by
[S]B =
EXAMPLE 2
Solution
Perform a vertical shear, with k = 2, on the triangle of Fig
...
The matrix of this operator, relative to the standard basis for , 2ޒis given by
1 0
2 1
y
5
Ϫ5
Applying this matrix to the vertices
5
v1 =
v2 =
2
1
v3 =
v′1 =
x
1
1
1
3
v′2 =
2
5
v′3 =
we obtain
Ϫ5
Figure 8
1 0
k 1
3
2
3
3
2
6
The images of the original triangle and the sheared triangle are shown in Fig
...
3
1
...
The linear operator that reflects a vector through the x axis is
given by
x
x
=
Rx
−y
y
A reflection through the y axis is given by
Ry
x
y
−x
y
=
and a reflection through the line y = x is given by
x
y
Ry=x
=
y
x
The matrix representations, relative to the standard basis B, for each of these are given
by
[Rx ]B =
EXAMPLE 3
Solution
1
0
0 −1
[Ry ]B =
−1 0
0 1
[Ry=x ]B =
0 1
1 0
Perform the following reflections on the triangle T of Fig
...
a
...
b
...
c
...
a
...
4 are given by
0
1
v1 =
v2 =
2
1
v3 =
1
3
Applying the matrix [Rx ]B to the vertices of the original triangle, we obtain
v′1 =
0
−1
v′2 =
2
−1
v′3 =
1
−3
The image of the triangle is shown in Fig
...
b
...
9(b)
...
Finally, applying the matrix [Rx=y ]B to the vertices of the original triangle,
we obtain
1
1
3
v′2 =
v′3 =
v′1 =
0
2
1
The image of the triangle is shown in Fig
...
Revised Confirming Pages
4
...
By Corollary 2 of Sec
...
4, to
reverse one of these operations, we apply the inverse matrix to the transformed image
...
a
...
b
...
Solution
a
...
4
...
By Corollary 2 of Sec
...
4, the matrix which reverses the operation of part (a)
is given by
[S −1 ]B = ([S]B )−1 = −
1
2
0
−1
−2
0
=
0 1
1
0
2
As we noted in Example 4(a), if a graphics operation S is given by a sequence
of linear operators S1 , S2 ,
...
, [Sn ]−1 in succession reverses the proB
B
B
cess one transformation at a time
...
For example, to translate the point (1, 3) three units to the right and two units up, add
3 to the x coordinate and 2 to the y coordinate to obtain the point (4, 5)
...
An
v2
b2
operation S: 2ޒ → 2ޒof the form
S(v) = v + b =
v1 + b1
v2 + b2
is called a translation by the vector b
...
Consequently, when b ̸= 0, then S cannot be accomplished by
means of a 2 × 2 matrix
...
The homogeneous coordinates
of a vector in 2ޒare obtained by adding a third component whose value is 1
...
As an illustration of this, let b =
x + b1
y + b2
1
...
6 Application: Computer Graphics
Now let v =
263
3
...
0
In the previous illustration the translation can be accomplished with less work
by simply adding the vector b to v
...
To do this,
we note that all the previous linear operators can be represented by 3 × 3 matrices
...
5 0 0
1
0 0
⎣ 0 −1 0 ⎦ ⎣ 0
1 0 ⎦⎣
0
0 1
0
0 1
operations is given
⎤ ⎡
1 0 −5
0 1
3 ⎦=⎣
0 0
1
by the product
1
...
5
−1 −3 ⎦
0
1
The vertices of the original triangle in homogeneous coordinates are given by
⎤
⎡ ⎤
⎡
⎡ ⎤
0
2
1
v1 = ⎣ 1 ⎦
v2 = ⎣ 1 ⎦
v3 = ⎣ 3 ⎦
1
1
1
y
10
Q
P
R
10 x
Ϫ10
Find the image of the triangle T of Fig
...
5, followed by
b=
3
a reflection through the x axis
...
5
−4
...
10
...
11(a) to the
triangle shown in Fig
...
Confirming Pages
264
Chapter 4 Linear Transformations
y
y
6
6
9
9
x
x
(b) Triangle T'
(a) Triangle T
Figure 11
Solution
y
Triangle T ′ is obtained from triangle T through a horizontal scaling by a factor of
3, followed by a vertical scaling by a factor of 2, without changing the left vertex
(1, 1)
...
One way to correct this
is to first translate the triangle so that the left vertex is located at the origin, perform
the scaling, and then translate back
...
The matrix is given by
⎤
⎤ ⎡
⎤⎡
⎤⎡
⎤⎡
⎡
3 0 −2
1 0 −1
3 0 0
1 0 0
1 0 1
⎣ 0 1 1 ⎦ ⎣ 0 2 0 ⎦ ⎣ 0 1 0 ⎦ ⎣ 0 1 −1 ⎦ = ⎣ 0 2 −1 ⎦
0 0
1
0 0
1
0 0 1
0 0 1
0 0 1
Notice that
x
⎤ ⎡
1 0
1 0 1
⎣ 0 1 1 ⎦=⎣ 0 1
0 0
0 0 1
⎡
that is, the matrix representation for translation by
◦
Rotation by 45
representation for translation by
y
x
Figure 12
−1
−1
⎤−1
−1
−1 ⎦
1
1
1
is the inverse of the matrix
...
See Fig
...
To
describe how a point is rotated, let (x, y) be the coordinates of a point in 2ޒand θ a
real number
...
If θ < 0,
the direction is clockwise
...
6 Application: Computer Graphics
265
given by
Sθ
x
y
x cos θ − y sin θ
x sin θ + y cos θ
=
The matrix of Sθ relative to the standard basis B = {e1 , e2 } for 2ޒis given by
[Sθ ]B =
cos θ − sin θ
sin θ
cos θ
When using homogeneous coordinates, we apply
⎡
cos θ − sin θ
⎣ sin θ cos θ
0
0
EXAMPLE 7
Solution
y
5
Ϫ5
Figure 13
Find the image of the triangle of Fig
...
The matrix for the combined operations is given by
⎡ √
⎤⎡
⎤
⎤
⎡
⎤⎡
3
−1 0
1 0
1
cos π − sin π 0
1 0
1
2
6
6
⎢ 2
⎥⎢
√
⎥
⎣ sin π
3
cos π
0 ⎦ ⎣ 0 1 −1 ⎦ = ⎢ 1
0 ⎥ ⎣ 0 1 −1 ⎦
⎣ 2
6
6
⎦
2
0 0
1
0 0
1
0
0
1
0
0 1
⎡ √
⎤
√
3
3
1
−1
2
2 + 2 ⎥
⎢ 2
√
√
1
3
3 ⎥
=⎢ 1
⎣ 2
2
2 − 2 ⎦
0
0
1
The vertices of the triangle in homogeneous coordinates are given by
⎤
⎤
⎡
⎡
⎡ ⎤
0
2
1
v2 = ⎣ 1 ⎦
and
v3 = ⎣ 3 ⎦
v1 = ⎣ 1 ⎦
1
1
1
5
Ϫ5
the matrix
⎤
0
0 ⎦
1
x
After applying the above matrix to each of these vectors, we obtain
⎡ √3 ⎤
⎡ 3√3 ⎤
⎡ √
⎤
2
2
√3 − 1
⎢
⎢
⎥
⎥
v′1 = ⎣ 1 ⎦
v′2 = ⎣ 3 ⎦
and
v′3 = ⎣ 3 + 1 ⎦
2
2
1
1
1
The resulting triangle is shown in Fig
...
Projection
Rendering a picture of a three-dimensional object on a flat computer screen requires
projecting points in 3-space to points in 2-space
...
Confirming Pages
266
Chapter 4 Linear Transformations
Parallel projection simulates the shadow that is cast onto a flat surface by a far away
light source, such as the sun
...
14 are rays intersecting an object in
3-space and the projection into 2-space
...
14 is such
that the xy plane represents the computer screen
...
If (x0 , y0 , z0 ) is a point in , 3ޒthen the parametric
equations of the line going through the point and in the direction of vd are given by
⎧
⎪x(t) = x0 + txd
⎨
y(t) = y0 + tyd
⎪
⎩
z(t) = z0 + tzd
for all t ∈
...
Solving for t, we obtain
t =−
z0
zd
Now, substituting this value of t into the first two equations above, we find the
coordinates of the projected point, which are given by
xp = x0 −
z0
xd
zd
yp = y0 −
z0
yd
zd
and
zp = 0
The components of vd can also be used to find the angles that the rays make with the
z axis and the xz plane
...
On the other hand, if the angles ψ and φ are given, then these equations can
be used to find the components of the projection vector vd
...
6◦
...
Find the direction vector vd and project the cube, shown in Fig
...
2ޒ
The vertices of the cube are located at the points (0, 0, 1), (1, 0, 1), (1, 0, 0),
(0, 0, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0), and (0, 1, 0)
...
6 Application: Computer Graphics
267
y
x
z
Figure 15
b
...
and another that will translate the cube by the vector
1
Solution
a
...
Then
yd
◦
◦
2
2
tan ψ = tan 30 ≈ 0
...
6 )2 ≈ (0
...
577xd
and
xd + yd = 1
4
Solving the last two equations gives xd ≈ 0
...
25, so that the
direction vector is
⎤
⎡
0
...
25 ⎦
−1
Using the formulas for a projected point given above, we can project each
vertex of the cube into
...
16
...
Table 1
Vertex
Projected Point
(0,0,1)
(0
...
25)
(1,0,1)
(1
...
25)
(1,0,0)
(1, 0)
(0,0,0)
(0, 0)
(0,1,1)
(0
...
25)
(1,1,1)
(1
...
25)
(1,1,0)
(1, 1)
(0,1,0)
(0, 1)
Confirming Pages
268
Chapter 4 Linear Transformations
b
...
Depictions of the results when the original cube is rotated and
then the result is translated are shown in Figs
...
y
y
y
x
x
Figure 16
x
Figure 17
Figure 18
Exercise Set 4
...
Find the matrix representation relative to the
standard basis for the linear transformation
T: 2ޒ →− 2ޒthat transforms the triangle with
vertices at the points (0, 0), (1, 1), and (2, 0) to
the triangle shown in the figure
...
c
...
y
5
Ϫ5
5
5
Ϫ5
2
...
y
a
...
6 Application: Computer Graphics
b
...
y
5
Ϫ5
5
x
Ϫ5
3
...
a
...
b
...
c
...
4
...
a
...
b
...
c
...
5
...
a
...
269
b
...
c
...
6
...
a
...
b
...
c
...
d
...
Verify your answer
...
Let T: 2ޒ →− 2ޒbe the (nonlinear)
transformation that performs a translation by the
1
, followed by a rotation of 30◦
...
Using homogeneous coordinates, find the 3 × 3
matrix that performs the translation and
rotation
...
Apply the transformation to the parallelogram
with vertices (0, 0), (2, 0), (3, 1), and (1, 1),
and give a sketch of the result
...
Find the matrix that reverses T
...
Let T: 2ޒ →− 2ޒbe the (nonlinear)
transformation that performs a translation by the
−4
, followed by a reflection through
vector
2
the y axis
...
Using homogeneous coordinates, find the 3 × 3
matrix that performs the translation and
reflection
...
Apply the transformation to the trapezoid with
vertices (0, 0), (3, 0), (2, 1), and (1, 1), and
give a sketch of the result
...
Find the matrix that reverses T
...
Let
B=
1
1
,
−1
1
Confirming Pages
270
Chapter 4 Linear Transformations
be a basis for , 2ޒand let A be the triangle in the
xy coordinate system with vertices (0, 0), (2, 2),
and (0, 2)
...
Find the coordinates of the vertices of A
relative to B
...
Let T be the transformation that performs a
reflection through the line y = x
...
2ޒ
c
...
Sketch the
result
...
Show the same result is obtained by applying
0 1
to the original coordinates
...
Let
1
0
B=
,
1
1
be a basis for , 2ޒand let A be the parallelogram
in the xy coordinate system with vertices
(0, 0), (1, 1), (1, 0), and (2, 1)
...
Find the coordinates of the vertices of A
relative to B
...
Find the matrix representation relative to B of
the transformation T that performs a reflection
through the horizontal axis
...
Apply the matrix found in part (b) to the
coordinates found in part (a)
...
d
...
Apply this matrix to the original
coordinates, and verify the result agrees with
part (c)
...
Let T: 4ޒ →− 2ޒbe a linear transformation
...
Verify that
1
1
S=
is a basis for
...
If
T
T
determine
T
x
y
1
1
3
−1
,
3
−1
⎤
1
⎢ 2 ⎥
⎥
=⎢
⎣ 0 ⎦ and
2
⎤
⎡
3
⎢ 2 ⎥
⎥
=⎢
⎣ 4 ⎦
−2
⎡
for all
x
y
∈ 2ޒ
(Hint: Find the coordinates of
c
...
e
...
g
...
)
Describe all vectors in N (T )
...
Find a basis for R(T )
...
Find a basis for 4ޒthat contains the vectors
⎤
⎡
⎡ ⎤
−1
1
⎢ 1 ⎥
⎢ 0 ⎥
⎥
⎢
⎢ ⎥
and
⎣ 0 ⎦
⎣ 1 ⎦
1
1
h
...
Confirming Pages
271
4
...
Apply the matrix A found in part (h) to an
x
...
Define linear transformations S, T: P3 → P4 and
H: P4 → P4 by
S(p(x)) = p′ (0)
T (p(x)) = (x + 1)p(x)
H (p(x)) = p′ (x) + p(0)
a
...
b
...
c
...
d
...
3
...
a
...
Show the
mappings are linear transformations
...
Find the matrix for S and for T relative to the
standard basis for
...
Find the matrix for the linear transformations
T ◦S and S ◦T
...
4
...
Let T: M2×2 → M2×2 be defined by
1 3
−1 1
T (A) =
A
Is T a linear transformation? Is T one-to-one?
Is T an isomorphism?
b
...
2ޒ
5
...
a
...
′
b
...
B
6
...
Let B
vector across the line span
0
denote the standard basis for
...
Find [T ]B and [S]B
...
Find T
−2
1
and S
2
3
...
Find the matrix representation for the linear
operator H : 2ޒ → 2ޒthat reflects a vector
1
and
across the subspace span
−1
1
...
d
...
Find N (T ) and N (S)
...
Find all vectors v such that T (v) = v and all
vectors v such that S(v) = v
...
Let T: 3ޒ → 3ޒbe the linear operator that
reflects a vector across the plane
⎧⎡
⎤⎫
⎤⎡
0 ⎬
⎨ 1
span ⎣ 0 ⎦, ⎣ 1 ⎦
⎭
⎩
1
0
The projection of a vector u onto a vector v is the
vector
u·v
projv u =
v
v·v
and the reflection of v across the plane with
normal vector n is
v − 2 projn v
Let B denote the standard basis for
...
Find [T⎛⎡
]B
...
Find T ⎝⎣ 2 ⎦⎠
...
Find N (T )
...
Find R(T )
...
Find the matrix relative to B for T n , n ≥ 2
...
Define a transformation T: P2 → ޒby
tu + (1 − t)v
1
T (p(x)) =
10
...
a
...
b
...
d
...
f
...
Compute T (−x 2 − 3x + 2)
...
Is T one-to-one?
Find a basis for N (T )
...
Let B be the standard basis for P2 and
′
B ′ = {1}, a basis for
...
B
g
...
h
...
Let T: V → V be a linear operator such that
T 2 − T + I = 0, where I denotes the identity
mapping
...
Not a convex set
Suppose T: 2ޒ → 2ޒis an isomorphism and S
is a convex set in
...
d
...
Describe S ◦T and
T ◦S
...
Show that the image of a line segment under
the map T is another line segment
...
A set in 2ޒis called convex if for every pair
of vectors in the set, the line segment between
the vectors is in the set
...
x
S(f ) = F,
for
x
y
=
2x
y
Show that T is an isomorphism
...
Chapter 4: Chapter Test
In Exercises 1–40, determine whether the statement is
true or false
...
The transformation T: 2ޒ → 2ޒdefined by
T
x
y
=
is a linear transformation
...
The transformation T: ޒ → ޒdefined by
T (x) = 2x − 1 is a linear transformation
...
If b = 0, then the transformation T: ޒ → ޒ
defined by T (x) = mx + b is a linear
transformation
...
If A is an m × n matrix, then T defined by
T (v) = Av
is a linear transformation from ޒn into ޒm
...
6 Application: Computer Graphics
5
...
Define a
transformation T: Mn×n → Mn×n by
T (B) = (B + A)2 − (B + 2A)(B − 3A)
If A2 = 0, then T is a linear transformation
...
If T: 2ޒ →− 2ޒis
1
1
and v =
0
a linear operator and
6
...
If T: V −→ W is a linear transformation and
{v1 ,
...
, T (vn )} is a linearly independent
subset of W
...
The vector spaces P8 and M3×3 are isomorphic
...
If a linear map T: P4 −→ P3 is defined by
T (p(x)) = p′ (x), then T is a one-to-one map
...
If A is an n × n invertible matrix, then as a
mapping from ޒn into ޒn the null space of A
consists of only the zero vector
...
The linear operator T: 2ޒ → 2ޒdefined by
x
y
=
x−y
0
is one-to-one
...
If T: 2ޒ → 2ޒis the transformation that reflects
each vector through the origin, then the matrix for
T relative to the standard basis for 2ޒis
−1
0
0 −1
16
...
17
...
then T is an isomorphism
...
Every linear transformation between finite
dimensional vector spaces can be defined using a
matrix product
...
If T: 2ޒ → 2ޒis defined by
T
273
14
...
18
...
If U is isomorphic to V and V is
isomorphic to W , then U is isomorphic to W
...
If T: V → V is a linear operator and u ∈ N (T ),
then
T (cu + v) = T (v)
for all v ∈ V and scalars c
...
If P : 3ޒ → 3ޒis the projection defined by
⎤
⎤⎞ ⎡
⎛⎡
x
x
P ⎝ ⎣ y ⎦⎠ = ⎣ y ⎦
0
z
then P 2 = P
...
If T: V → W is a linear transformation between
vector spaces such that T assigns each element of
a basis for V to the same element of W , then T is
the identity mapping
...
If T: 5ޒ → 4ޒand dim(N (T )) = 2, then
dim(R(T )) = 3
...
If T: 5ޒ → 4ޒand dim(R(T )) = 2, then
dim(N (T )) = 2
...
If T: 3ޒ → 3ޒis defined by
⎤
⎤⎞ ⎡
⎛⎡
2x − y + z
x
⎦
x
T ⎝⎣ y ⎦⎠ = ⎣
y−x
z
Confirming Pages
274
Chapter 4 Linear Transformations
then the matrix for T −1 relative to the standard
basis for 3ޒis
⎤
⎡
0
1 0
⎣ 0
0 1 ⎦
−1 −2 1
25
...
The linear transformation T: 3ޒ → 3ޒdefined by
⎤
⎤⎞ ⎡
⎛⎡
x
x
T ⎝ ⎣ y ⎦⎠ = ⎣ 0 ⎦
y
z
projects each vector in 3ޒonto the xy plane
...
The linear operator T: 2ޒ → 2ޒdefined by
T
=
x
y
reflects each vector in 2ޒacross the line y = x
...
Let T: V → W be a linear transformation and
B = {v1 ,
...
If T is onto, then
{T (v1 ),
...
30
...
If T: V → V is the identity transformation, then
the matrix for T relative to any pair of bases B
and B ′ for V is the identity matrix
...
If T: 3ޒ → 3ޒis defined by
⎤
⎤⎞ ⎡
⎛⎡
x+y+z
x
T ⎝⎣ y ⎦⎠ = ⎣ y − x ⎦
y
z
then dim(N (T )) = 1
...
If T: M2×2 → M2×2 is defined by
35
...
then N (T ) = {0}
...
There exists a linear transformation T between
vector spaces such that T (0) ̸= 0
...
If T: P2 → P2 is defined by
T (p(x)) = p′′ (x) − xp′ (x)
then T is onto
...
If T: P3 → P3 is defined by
T (p(x)) = p′′ (x) − xp′ (x)
then q(x) = x 2 is in R(T )
...
The linear operator T: 3ޒ → 3ޒ
⎤⎞ ⎡
⎛⎡
3 −3
x
2
T ⎝ ⎣ y ⎦⎠ = ⎣ 1
3 −1
z
is an isomorphism
...
If A is an m × n matrix and T: ޒn → ޒm is
defined by
T (v) = Av
then the range of T is the set of all linear
combinations of the column vectors of A
...
If A is an m × n matrix with m > n and
T: ޒn → ޒm is defined by
T (v) = Av
then T cannot be one-to-one
...
If A is an m × n matrix with m > n and
T: ޒn → ޒm is defined by
T (v) = Av
then T cannot be onto
...
1
5
...
3
5
...
3
C
0
...
1
⎢
S ⎣
0
...
2
W
0
...
4
0
...
1
0
...
4
0
...
2
0
...
1
⎡
E
0
...
2
0
...
3
0
...
1
0
...
1 ⎥
⎦
0
...
5
Markov chain is a mathematical model used
to describe a random process that, at any
given time t = 1, 2, 3,
...
Between the times t and t + 1
the process moves from state j to state i with
a probability pij
...
As an
example, consider a city C with surrounding residential areas N, S, E, and W
...
In this
case a state is the location of a resident at any
given time
...
1
describes the situation with the probabilities of
moving from one location to another shown in the
corresponding transition matrix A = (pij )
...
2 is the probability that U
...
Geological Survery/DAL
a resident in region E moves to region S
...
A square matrix with each entry between
0 and 1 and column sums all equal to 1 is called a stochastic matrix
...
Assume that the
initial population distribution is given by the vector
⎤
⎡
0
...
2 ⎥
⎥
⎢
v = ⎢ 0
...
2 ⎦
0
...
For example, after 10 time steps, the population distribution
(rounded to two decimal places) is
⎤
⎡
0
...
20 ⎥
⎥
⎢
10
A v = ⎢ 0
...
20 ⎦
0
...
Starting with some initial distribution vector, the long-term behavior of the Markov
chain, that is, An v as n tends to infinity, gives the limiting population distribution
in the five regions into the future
...
If a transition
matrix for a Markov chain is a stochastic matrix with positive terms, then for any
initial probability vector v, there is a unique steady-state vector s
...
Finding the steady-state vector is equivalent to
solving the matrix equation
Ax = λx
with λ = 1
...
In our Markov chain example, the steady-state vector corresponds
to the eigenvalue λ = 1 for the transition matrix A
...
Google’s page rank algorithm is essentially a Markov chain
with transition matrix consisting of numerical weights for each site on the World Wide
Web used as a measure of its relative importance within the set
...
For any n × n matrix A, there exists at least one number-vector pair λ, v such that
Av = λv (although λ may be a complex number)
...
Many applications require finding such number-vector pairs
...
1
ß
276
Eigenvalues and Eigenvectors
One of the most important problems in linear algebra is the eigenvalue problem
...
A number λ is called
an eigenvalue of A provided that there exists a nonzero vector v in ޒn such that
Av = λv
Confirming Pages
277
5
...
The zero vector is a trivial solution to the eigenvalue equation for any number λ
and is not considered as an eigenvector
...
We
0
also have
1
−1
1
1
2
= −1
=
−1
1
−1
0 −1
1
so v2 =
is another eigenvector of A corresponding to the eigenvalue λ2 = −1
...
EXAMPLE 1
Let
A=
0 1
1 0
a
...
b
...
Solution
a
...
1
...
For λ1 = 1, a vector v1 =
is an eigenvector if
0 1
1 0
x
y
=
x
y
This yields the linear system
−x + y = 0
x−y=0
with solution set
S=
t
t
t ∈ޒ
t
, for t ̸= 0, is an eigenvector corret
sponding to the eigenvalue λ1 = 1
...
Specific eigenvectors of A can be found by choosing
any value for t so that neither v1 nor v2 is the zero vector
...
Geometric Interpretation of Eigenvalues and Eigenvectors
A nonzero vector v is an eigenvector of a matrix A only when Av is a scaling of
1 −1
...
For example, let A =
2
4
the eigenvalues of A are λ1 = 2 and λ2 = 3 with corresponding eigenvectors
1
1
and v2 =
, respectively
...
1 Eigenvalues and Eigenvectors
In Fig
...
Observe that this is not the
1
, then
case for an arbitrary vector
...
This is the case in general
...
If c is any nonzero real number, then
A(cv) = cA(v) = c(λv) = λ(cv)
so cv is another eigenvector associated with the eigenvalue λ
...
Building on the procedure used in Example 1, we now describe a general method
for finding eigenvalues and eigenvectors
...
1
...
THEOREM 1
The number λ is an eigenvalue of the matrix A if and only if
det(A − λI ) = 0
The equation det(A − λI ) = 0 is called the characteristic equation of the matrix
A, and the expression det(A − λI ) is called the characteristic polynomial of A
...
Notice that Vλ is the union of the
set of eigenvectors corresponding to λ and the zero vector
...
Therefore,
to show that Vλ is a subspace of ޒn , we need to show that it is also closed under
addition
...
Then
A(u + v) = Au + Av = λu + λv = λ(u + v)
Confirming Pages
280
Chapter 5 Eigenvalues and Eigenvectors
Alternatively, the set
Vλ = {v ∈ ޒn | Av = λv} = {v ∈ ޒn | (A − λI )v = 0} = N (A − λI )
Since Vλ is the null space of the matrix A − λI , by Theorem 3 of Sec
...
2 it is a
subspace of ޒn
...
Solution
By Theorem 1 to find the eigenvalues, we solve the characteristic equation
2−λ
−12
1
−5 − λ
= (2 − λ)(−5 − λ) − (1)(−12)
= λ2 + 3λ + 2
= (λ + 1)(λ + 2) = 0
det(A − λI ) =
Thus, the eigenvalues are λ1 = −1 and λ2 = −2
...
First, for
λ1 = −1,
2
1
A − λ1 I = A + I =
−12
−5
1 0
0 1
+
3
1
=
−12
−4
The null space of A + I is found by row-reducing the augmented matrix
3
1
−12
−4
0
0
1
0
to
The solution set for this linear system is given by S =
t = 1, we obtain the eigenvector v1 =
to λ1 = −1 is
For λ2 = −2,
Vλ1 =
t
4
1
−4
0
0
0
4t
t
t ∈
...
Hence, the eigenspace corresponding
1
t is any real number
A − λ2 I =
In a similar way we find that the vector v2 =
4
1
to λ2 = −2
...
1 Eigenvalues and Eigenvectors
Vλ2 =
3
1
t
t is any real number
The eigenspaces Vλ1 and Vλ2 are lines in the direction of the eigenvectors
and
3
1
4
1
, respectively
...
In Example 3 we illustrate how the eigenspace associated with a single eigenvalue
can have dimension greater than 1
...
Solution
The characteristic equation of A is
det(A − λI ) =
1−λ
0
0
0
1−λ
5
1
0
2−λ
1
0
0
Thus, the eigenvalues are
λ1 = 1
λ2 = 2
0
−10
0
3−λ
and
= (λ − 1)2 (λ − 2)(λ − 3) = 0
λ3 = 3
Since the exponent of the factor λ − 1 is 2, we say that the eigenvalue λ1 = 1 has
algebraic multiplicity 2
...
Since dim(Vλ1 ) = 2, we
say that λ1 has geometric multiplicity equal to 2
...
This is not the case in general
...
Thus, λ = 1 has algebraic multiplicity
2
...
Although eigenvectors are always nonzero, an eigenvalue can be zero
...
These cases are illustrated in Example 4
...
1 Eigenvalues and Eigenvectors
Solution
283
The characteristic equation is
det(A − λI ) =
−λ
0
0 −λ
0
1
0
−1
−λ
= −λ3 − λ = −λ(λ2 + 1) = 0
Thus, the eigenvalues are λ1 = 0, λ2 = i, and λ3 = −i
...
For example, let
A=
2
0
4
−3
Since det(A − λI ) = 0 if and only if (2 − λ)(−3 − λ) = 0, we see that the eigenvalues of A are precisely the diagonal entries of A
...
PROPOSITION 1
The eigenvalues of an n × n triangular matrix are the numbers on the diagonal
...
By Theorem 13 of Sec
...
6, the
characteristic polynomial is given by
det(A − λI ) = (a11 − λ)(a22 − λ) · · · (ann − λ)
Hence, det(A − λI ) = 0 if and only if λ1 = a11 , λ2 = a22 ,
...
Eigenvalues and Eigenvectors of Linear Operators
The definitions of eigenvalues and eigenvectors can be extended to linear operators
...
A number λ is an eigenvalue of T provided
that there is a nonzero vector v in V such that T (v) = λv
...
As an illustration define T: P2 → P2 by
T (ax 2 + bx + c) = (−a + b + c)x 2 + (−b − 2c)x − 2b − c
Observe that
T (−x 2 + x + 1) = 3x 2 − 3x − 3 = −3(−x 2 + x + 1)
Confirming Pages
284
Chapter 5 Eigenvalues and Eigenvectors
so p(x) = −x 2 + x + 1 is an eigenvector of T corresponding to the eigenvalue
λ = −3
...
EXAMPLE 5
Interpret the solutions to the equation
f ′ (x) = kf (x)
as an eigenvalue problem of a linear operator
...
Examples of such functions are polynomials, the trigonometric functions sin(x) and cos(x), and the natural exponential function ex on
...
That is, f (x) satisfies the differential equation
f ′ (x) = λf (x)
Nonzero solutions to this differential equation are eigenvectors of the operator T,
called eigenfunctions, corresponding to the eigenvalue λ
...
This class of functions is a model for exponential
growth and decay with extensive applications
...
1
...
2
...
3
...
4
...
Vλ = {v ∈ ޒn | Av = λv}
Confirming Pages
5
...
The eigenspace corresponding to λ is the null space of the matrix A − λI
...
The eigenvalues of a square triangular matrix are the diagonal entries
...
1
In Exercises 1–6, a matrix A and an eigenvector v are
given
...
1
...
A =
−1
1
0 −2
3
...
5
...
v=
0
1
v=
−1
1
⎡
⎤
⎤
1
−3
2 3
A = ⎣ −1 −2 1 ⎦ v = ⎣ 0 ⎦
1
−3
2 3
⎤
⎡
1 0
1
0 ⎦
A=⎣ 3 2
3 0 −1
⎡ 4 ⎤
−3
v=⎣ 1 ⎦
4
⎤
⎡
1 0 1 1
⎢ 0 1 0 0 ⎥
⎥
A=⎢
⎣ 1 1 0 0 ⎦
0 1 0 1
⎤
⎡
−1
⎢ 0 ⎥
⎥
v=⎢
⎣ −1 ⎦
1
⎤
⎡
1
1
1
0
⎢ −1 −1
0 −1 ⎥
⎥
A=⎢
⎣ −1
1
0
1 ⎦
0 −1 −1
0
⎤
⎡
0
⎢ 1 ⎥
⎥
v=⎢
⎣ −1 ⎦
0
⎡
In Exercises 7–16, a matrix A is given
...
Find the characteristic equation for A
...
Find the eigenvalues of A
...
Find the eigenvectors corresponding to each
eigenvalue
...
Verify the result of part (c) by showing that
Avi = λi vi
...
A =
−2
3
2
−3
8
...
A =
1
0
10
...
12
...
14
...
0
−1
−2
1
2
−3
⎤
−1 0
1
0 ⎦
A=⎣ 0 1
0 2 −1
⎤
⎡
0
2 0
A = ⎣ 0 −1 1 ⎦
0
0 1
⎤
⎡
2 1
2
A = ⎣ 0 2 −1 ⎦
0 1
0
⎤
⎡
1 1 1
A=⎣ 0 1 0 ⎦
0 0 1
⎡
−1 0
0 0
⎢ 0 2
0 0
A=⎢
⎣ 0 0 −2 0
0 0
0 4
⎡
⎤
⎥
⎥
⎦
Confirming Pages
286
Chapter 5 Eigenvalues and Eigenvectors
⎡
3
⎢ 0
16
...
Show that if λ2 + bλ + c is the characteristic
polynomial of the 2 × 2 matrix A, then
b = −tr(A) and c = det(A)
...
Let A be an invertible matrix
...
19
...
Show that A is not
invertible if and only if λ = 0 is an eigenvalue
of A
...
Let V be a vector space with dim(V ) = n and
T: V −→ V a linear operator
...
21
...
Show that if λ is
an eigenvalue of A, then λ = 0 or λ = 1
...
Show that A and At have the same eigenvalues
...
23
...
24
...
Define an operator
T (B) = AB − BA
0
0
corresponding to the
0
b
...
Let λ be an eigenvalue of A
...
What can be said
about corresponding eigenvectors?
29
...
Show that if v is an eigenvector
of C corresponding to the eigenvalue λ, then Bv
is an eigenvector of A corresponding to λ
...
Let A be an n × n matrix and suppose v1 ,
...
If S = span{v1 ,
...
31
...
Find the
eigenvalues and corresponding eigenvectors for T
...
Define a linear operator T: 2ޒ → 2ޒby
T
x
y
=
y
x
Show that the only eigenvalues of T are λ = ±1
...
33
...
Show that e =
27
...
1
is an eigenvector
0
eigenvalue λ = 2
...
25
...
Show that AB and BA have the same eigenvalues
...
Show that no such matrices A and B exist such
that
AB − BA = I
rotation of a vector by a nonnegative angle θ
...
34
...
Show that for each k, the function f (x) = ekx
is an eigenfunction for the operator T
...
Find the corresponding eigenvalues for each
eigenfunction f (x) = ekx
...
2 Diagonalization
c
...
Find the matrix representation for T relative to
the basis B
...
Find the matrix representation for T relative to
the basis B ′
...
Show that the eigenvalues for the matrices
found in parts (a) and (b) are the same
...
Define a linear operator T: P2 → P2 by
T (ax 2 + bx + c) = (a − b)x 2 + cx
Define two ordered bases for P2 by
B = {x − 1, x + 1, x 2 } and B ′ = {x + 1, 1, x 2 }
...
2
287
Diagonalization
Many applications of linear algebra involve factoring a matrix and writing it as the
product of other matrices with special properties
...
1
...
In this section, we determine if a
matrix A has a factorization of the form
A = P DP −1
where P is an invertible matrix and D is a diagonal matrix
...
4
...
Recall
that if A and B are n × n matrices, then A is similar to B if there exists an invertible
matrix P such that
B = P −1 AP
If B is a diagonal matrix, then the matrix A is called diagonalizable
...
One of the immediate benefits of diagonalizing a matrix
A is realized when computing powers of A
...
To see this, suppose that A is diagonalizable with
A = P DP −1
Then
A2 = (P DP −1 )(P DP −1 ) = P D(P −1 P )DP −1 = P D 2 P −1
Continuing in this way (see Exercise 27), we see that
Ak = P D k P −1
for any positive whole number k
...
As we shall soon see, diagonalization of a matrix A depends on the number of
linearly independent eigenvectors, and fails when A is deficient in this way
...
A square matrix has an inverse if and only if the matrix has only
nonzero eigenvalues (see Exercise 19 of Sec
...
1)
...
The diagonal entries of the matrix P −1 AP, in Example 1, are the eigenvalues of
the matrix A, and the column vectors of P are the corresponding eigenvectors
...
With Theorem 2 this idea is extended to n × n matrices
...
Moreover, if D = P −1 AP , with D a diagonal matrix, then the diagonal entries of D are the eigenvalues of A and the column vectors of P are the
corresponding eigenvectors
...
, vn , corresponding to the eigenvalues λ1 , λ2 ,
...
Note that the
Confirming Pages
5
...
Let
⎡
⎡
⎡
⎤
⎤
⎤
p11
p12
p1n
⎢ p21 ⎥
⎢ p22 ⎥
⎢ p2n ⎥
v1 = ⎢
...
⎥
...
⎥
⎣
...
⎦
⎣
...
...
pn1
pn2
pnn
and define the n × n matrix P so that the ith column vector is vi
...
2
...
Next, since the ith column vector of the product AP is
we have
APi = Avi = λi vi
⎡
⎢
⎢
AP = ⎢
⎣
⎡
λ1 p11
λ1 p21
...
...
...
λ1 pn1
λ2 pn2
p11
p21
...
...
...
pn1
= PD
pn2
⎢
⎢
=⎢
⎣
...
...
λn p1n
λn p2n
...
...
λn pnn
⎤⎡
λ1
...
p2n ⎥ ⎢ 0
⎥⎢
...
...
...
...
...
...
...
0
0
...
...
λn
⎤
⎥
⎥
⎥
⎦
where D is a diagonal matrix with diagonal entries the eigenvalues of A
...
Conversely, suppose that A is diagonalizable, that is, a diagonal matrix D and
an invertible matrix P exist such that
D = P −1 AP
As above, denote the column vectors of the matrix P by v1 , v2 ,
...
, λn
...
, n,
we have
Avi = λi vi
Hence, v1 , v2 ,
...
Since P is invertible, then by Theorem
9 of Sec
...
3 the vectors v1 , v2 ,
...
EXAMPLE 2
Use Theorem 2 to diagonalize the matrix
⎤
⎡
1
0 0
A = ⎣ 6 −2 0 ⎦
7 −4 2
Confirming Pages
290
Chapter 5 Eigenvalues and Eigenvectors
Solution
Since A is a triangular matrix, by Proposition 1 of Sec
...
1, the eigenvalues of the
matrix A are the diagonal entries
λ1 = 1
λ2 = −2
and
λ3 = 2
The corresponding eigenvectors, which are linearly independent, are given, respectively, by
⎤
⎤
⎡
⎡
⎡ ⎤
1
0
0
v2 = ⎣ 1 ⎦
and
v3 = ⎣ 0 ⎦
v1 = ⎣ 2 ⎦
1
1
1
Therefore, by Theorem 2, D = P −1 AP , where
⎤
⎡
1
0 0
and
D = ⎣ 0 −2 0 ⎦
0
0 2
⎡
1 0
P =⎣ 2 1
1 1
⎤
0
0 ⎦
1
To verify that D = P −1 AP , we can avoid finding P −1 by showing that
P D = AP
In this case,
⎤ ⎡
⎤⎡
1
1
0 0
1 0 0
P D = ⎣ 2 1 0 ⎦ ⎣ 0 −2 0 ⎦ = ⎣ 2
1
0
0 2
1 1 1
⎡
1
=⎣ 6
7
= AP
EXAMPLE 3
Solution
Let
⎡
⎤
0 0
−2 0 ⎦
−2 2
⎤
⎤⎡
1 0 0
0 0
−2 0 ⎦ ⎣ 2 1 0 ⎦
1 1 1
−4 2
⎤
⎤
⎡
−1
1 0
0 1 1
and
B = ⎣ 0 −1 1 ⎦
A=⎣ 1 0 1 ⎦
0
0 2
1 1 0
Show that A is diagonalizable but that B is not diagonalizable
...
To find the eigenvectors, we find the null space of
Confirming Pages
5
...
For λ1 = −1 we reduce the matrix
⎤
⎤
⎡
⎡
1 1 1
1 1 1
⎣ 0 0 0 ⎦
⎣ 1 1 1 ⎦
to
0 0 0
1 1 1
Hence,
⎧⎡
⎤⎫
⎤⎡
−1 ⎬
⎨ −1
N (A + I ) = span ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
0
In a similar manner we find
⎧⎡
⎤⎫
⎨ 1 ⎬
N (A − 2I ) = span ⎣ 1 ⎦
⎭
⎩
1
⎤
⎤
⎡
⎤⎡
⎡
1
−1
−1
Since the three vectors ⎣ 1 ⎦, ⎣ 0 ⎦, and ⎣ 1 ⎦ are linearly independent,
1
1
0
by Theorem 2 the matrix A is diagonalizable
...
However, in this case
⎧⎡
⎧⎡
⎤⎫
⎤⎫
⎨ 1 ⎬
⎨ 1 ⎬
and
N (B − 2I ) = span ⎣ 3 ⎦
N (B + I ) = span ⎣ 0 ⎦
⎭
⎭
⎩
⎩
9
0
Since B does not have three linearly independent eigenvectors, by Theorem 2, B
is not diagonalizable
...
For example, if
the columns of P are permuted, then the resulting matrix also diagonalizes A
...
Theorem 3 gives sufficient conditions for a matrix to be diagonalizable
...
, λn be distinct eigenvalues with
corresponding eigenvectors v1 , v2 ,
...
Then the set {v1 , v2 ,
...
Proof The proof is by contradiction
...
, λn are distinct
eigenvalues of A with corresponding eigenvectors v1 , v2 ,
...
Then by Theorem 5 of Sec
...
3,
at least one of the vectors can be written as a linear combination of the others
...
, vm , with m < n,
are linearly independent, but v1 , v2 ,
...
Therefore, there are scalars
c1 ,
...
We multiply the last equation
by A to obtain
Avm+1 = A(c1 v1 + · · · + cm vm )
= c1 A(v1 ) + · · · + cm A(vm )
Further, since vi is an eigenvector corresponding to the eigenvalue λi , then Avi =
λi vi , and after substitution in the previous equation, we have
λm+1 vm+1 = c1 λ1 v1 + · · · + cm λm vm
Now multiplying both sides of vm+1 = c1 v1 + · · · + cm vm by λm+1 , we also have
λm+1 vm+1 = c1 λm+1 v1 + · · · + cm λm+1 vm
By equating the last two expressions for λm+1 vm+1 we obtain
c1 λ1 v1 + · · · + cm λm vm = c1 λm+1 v1 + · · · + cm λm+1 vm
or equivalently,
c1 (λ1 − λm+1 )v1 + · · · + cm (λm − λm+1 )vm = 0
Since the vectors v1 , v2 ,
...
cm (λm − λm+1 ) = 0
Since all the eigenvalues are distinct, we have
λ1 − λm+1 ̸= 0
λ2 − λm+1 ̸= 0
...
cm = 0
This contradicts the assumption that the nonzero vector vm+1 is a nontrivial linear
combination of v1 , v2 ,
...
Confirming Pages
5
...
Show that every 2 × 2 real symmetric matrix is diagonalizable
...
Every 2 × 2 symmetric
matrix has the form
a b
A=
b d
See Example 5 of Sec
...
3
...
If (a − d)2 + 4b2 = 0, then (a − d)2 = 0 and b2 = 0, which holds
if and only if a = d and b = 0
...
If
(a − d)2 + 4b2 > 0, then A has two distinct eigenvalues; so by Corollary 1, the
matrix A is diagonalizable
...
In Theorem 4 we show that
the same can be said about any two similar matrices
...
Then A and B have the same eigenvalues
...
Now
det(B − λI ) = det(P −1 AP − λI )
= det(P −1 (AP − P (λI )))
= det(P −1 (AP − λI P ))
= det(P −1 (A − λI )P )
Confirming Pages
294
Chapter 5 Eigenvalues and Eigenvectors
Applying Theorem 15 and Corollary 1 of Sec
...
6, we have
det(B − λI ) = det(P −1 ) det(A − λI ) det(P )
= det(P −1 ) det(P ) det(A − λI )
= det(A − λI )
Since the characteristic polynomials of A and B are the same, their eigenvalues
are equal
...
Solution
The characteristic equation for A is
det(A − λI ) = (1 − λ)(3 − λ) = 0
so the eigenvalues of A are λ1 = 1 and λ2 = 3
...
In Sec
...
5, we saw that a linear operator on a finite dimensional vector space
can have different matrix representations depending on the basis used to construct the
matrix
...
These matrix representations also have the same eigenvalues
...
Then [T ]B1 and [T ]B2 have the same eigenvalues
...
Then by Theorem 15 of
Sec
...
5, P is invertible and [T ]B2 = P −1 [T ]B1 P
...
Confirming Pages
5
...
For the matrix A the eigenspaces corresponding to λ1 = −1
and λ2 = 2 are
⎧⎡
⎧⎡
⎤⎫
⎤⎡
⎤⎫
−1 ⎬
⎨ −1
⎨ 1 ⎬
and
Vλ2 = span ⎣ 1 ⎦
Vλ1 = span ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎭
⎩
⎩
1
0
1
whereas the eigenspaces for B are
⎧⎡
⎤⎫
⎨ 1 ⎬
Vλ1 = span ⎣ 0 ⎦
⎭
⎩
0
and
Vλ2
⎧⎡
⎤⎫
⎨ 1 ⎬
= span ⎣ 3 ⎦
⎭
⎩
9
Notice that for the matrix A, we have dim(Vλ1 ) = 2 and dim(Vλ2 ) = 1, which,
respectively, are equal to the corresponding algebraic multiplicities in the characteristic polynomial
...
Moreover, for A, we have dim(Vλ1 ) +
dim(Vλ2 ) = 3 = n
...
THEOREM 5
Let A be an n × n matrix, and suppose that the characteristic polynomial is
c(x − λ1 )d1 (x − λ2 )d2 · · · (x − λk )dk
...
, n, and
d1 + d2 + · · · + dk = dim(Vλ1 ) + dim(Vλ2 ) + · · · + dim(Vλk ) = n
To summarize Theorem 5, an n × n matrix A is diagonalizable if and only if the
algebraic multiplicity for each eigenvalue is equal to the dimension of the corresponding eigenspace, which is the corresponding geometric multiplicity, and the common
sum of these multiplicities is n
...
4
...
The particular matrix for the
operator depends on the ordered basis used
...
This allows us to make
the following definition
...
The operator T is called diagonalizable if there
is a basis B for V such that the matrix for T relative to B is a diagonal matrix
...
, vn } a basis for V consisting of n eigenvectors
...
, n the vector vi is an eigenvector, then T (vi ) = λi vi , where
λi is the corresponding eigenvalue
...
, vn , we have
vi = 0v1 + · · · + 0vi−1 + vi + 0vi+1 + · · · + 0vn
Then the coordinate vector of T (vi ) relative to B is
⎡
⎤
0
⎢
...
⎥
⎢
...
⎥
⎣
...
0
Therefore, [T ]B is a diagonal matrix
...
As an illustration, define the
linear operator T: 2ޒ →− 2ޒby
x
y
T
=
2x
x+y
Observe that
v1 =
1
1
and
v2 =
0
1
are eigenvectors of T with corresponding eigenvalues λ1 = 2 and λ2 = 1, respectively
...
In practice it is not always so easy to determine the eigenvalues and eigenvectors
of T
...
That is, if B ′ is the basis consisting of the
column vectors of P , then [T ]B ′ = P −1 [T ]B P is a diagonal matrix
...
Confirming Pages
5
...
Solution
Let B = {e1 , e2 , e3 } be the standard basis for
...
1
...
The
matrix D is a diagonal matrix with diagonal entries the eigenvalues of A
...
2
...
If the
columns of P are permuted, then the diagonal entries of D are permuted in
the same way
...
The matrix A is diagonalizable if and only if A has n linearly independent
eigenvectors
...
If A has n distinct eigenvalues, then A is diagonalizable
...
Every 2 × 2 real symmetric matrix is diagonalizable and has real
eigenvalues
...
Similar matrices have the same eigenvalues
...
If A is diagonalizable, then the algebraic multiplicity for each eigenvalue is
equal to the dimension of the corresponding eigenspace (the geometric
multiplicity)
...
8
...
If V has an ordered basis B consisting of eigenvectors of T, then
[T ]B is a diagonal matrix
...
Let T: V −→ V be a linear operator and B1 and B2 ordered bases for V
...
Exercise Set 5
...
−2 0
1
0
P =
1
...
A =
−1
0
1
−2
6
...
A =
−1
1
−3 −5
7
...
A =
−3
2
−2
1
9
...
A = ⎣ 2 −2 0 ⎦
0
2 0
⎡
⎤
0
0 3
2
P = ⎣ 0 −1 1 ⎦
1
1 2
⎤
⎡
−1
2 2
2 0 ⎦
4
...
10
...
A = ⎣ 2 2 2 ⎦
0 0 3
⎤
⎡
−1 3 2
12
...
A = ⎣ 2 −1 −1 ⎦
−1
1
2
⎡
Confirming Pages
5
...
15
...
17
...
⎤
−1
0
0
2 −1 ⎦
A = ⎣ −1
0 −1
2
⎤
⎡
1 0 0
⎣ −1 0 0 ⎦
A=
−1 0 0
⎤
⎡
0
1
0
0 −1 ⎦
A=⎣ 1
0 −1
0
⎤
⎡
0 0 0 0
⎢ 1 0 1 0 ⎥
⎥
A=⎢
⎣ 0 1 0 1 ⎦
1 1 1 1
⎤
⎡
1 0 0 1
⎢ 0 0 1 1 ⎥
⎥
A=⎢
⎣ 1 0 0 1 ⎦
0 1 0 1
⎡
In Exercises 19–26, diagonalize the matrix A
...
A =
2
0
−1 −1
20
...
A = ⎢
⎣ 1
1
0
1
1
1
1
0
1
1
299
⎤
1
0 ⎥
⎥
1 ⎦
1
27
...
Show that for any positive integer k,
Ak = P D k P −1
28
...
Then find A6
...
29
...
Then find Ak , for any positive
integer k
...
30
...
Find a matrix that diagonalizes At
...
A = ⎣ 0 −2 1 ⎦
1 −2 1
⎤
⎡
0 −1 2
2 2 ⎦
22
...
A = ⎣ −1 1 0 ⎦
0 0 1
⎤
⎡
1
0 0
0 0 ⎦
24
...
A = ⎢
⎣ 1 0 1 1 ⎦
0 0 0 1
31
...
Show that if B is a matrix similar to A,
then B is diagonalizable
...
Show that if A is invertible and diagonalizable,
then A−1 is diagonalizable
...
33
...
Show that A is
diagonalizable if and only if A = λI
...
An n × n matrix A is called nilpotent if there is a
positive integer k such that Ak = 0
...
35
...
Find the matrix A for T relative to the
standard basis {1, x, x 2 }
...
Find the matrix B for T relative to the basis
{x, x − 1, x 2 }
...
Show the eigenvalues of A and B are the same
...
Explain why T is not diagonalizable
...
Define a vector space V = span{sin x, cos x} and
a linear operator T: V → V by T (f (x)) = f ′ (x)
...
37
...
3
Show that T is not diagonalizable
...
Define a linear operator T: 3ޒ → 3ޒby
⎛⎡
⎤⎞ ⎡
⎤
x1
4x1 + 2x2 + 4x3
T ⎝⎣ x2 ⎦⎠ = ⎣ 4x1 + 2x2 + 4x3 ⎦
x3
4x3
Show that T is diagonalizable
...
Let T be a linear operator on a finite dimensional
vector space, A the matrix for T relative to a
basis B1 , and B the matrix for T relative to a
basis B2
...
Application: Systems of Linear Differential
Equations
In Sec
...
5 we considered only a single differential equation where the solution
involved a single function
...
It is more likely
that the rate of change of a variable quantity will be linked to other functions outside
itself
...
One of
the most familiar examples of this is the predator-prey model
...
The growth rate of the foxes is dependent on not only the number of foxes but also the
number of rabbits in their territory
...
The mathematical model required to describe this relationship
is a system of differential equations of the form
′
y1 (t)
′
y2 (t)
= f (t, y1 , y2 )
= g(t, y1 , y2 )
In this section we consider systems of linear differential equations
...
Uncoupled Systems
At the beginning of Sec
...
5 we saw that the differential equation given by
y ′ = ay
has the solution y(t) = Ceat , where C = y(0)
...
3 Application: Systems of Linear Differential Equations
301
where a and b are constants and y1 and y2 are functions of a common variable t
...
The general solution of the system is found by solving each equation separately and
is given by
y1 (t) = C1 eat
y2 (t) = C2 ebt
and
where C1 = y1 (0) and C2 = y2 (0)
...
To do this, define
y′ =
′
y1
′
y2
A=
0
b
a
0
and
y=
y1
y2
Then the uncoupled system above is equivalent to the matrix equation
y′ = Ay
The matrix form of the solution is given by
eat
0
y(t) =
0
ebt
y(0)
y1 (0)
y2 (0)
As an illustration, consider the system of differential equations
where y(0) =
′
y1
′
y2
= −y1
= 2y2
In matrix form the system is written as
−1 0
0 2
y′ = Ay =
y
The solution to the system is
y=
e−t
0
0
e2t
y(0)
that is,
y1 (t) = y1 (0)e−t
and
y2 (t) = y2 (0)e2t
The Phase Plane
In the case of a single differential equation, it is possible to sketch particular solutions
in the plane to see explicitly how y(t) depends on the independent variable t
...
A particular solution can be viewed as
a parameterized curve or trajectory in the plane, called the phase plane
...
1 are trajectories for several particular solutions of the system
′
y1
′
y2
= −y1
= 2y2
Confirming Pages
302
Chapter 5 Eigenvalues and Eigenvectors
Figure 1
The vectors shown in Fig
...
This sketch is called the phase
portrait for the system
...
We have done so here to give a more complete picture of the system and its solutions
...
We now consider more general systems of the form
y′ = Ay
for which A is not a diagonal matrix, but is diagonalizable with real distinct eigenvalues
...
To develop this idea, let A be a 2 × 2 diagonalizable matrix with distinct real
eigenvalues
...
5
...
The column vectors of P are the corresponding eigenvectors
...
3 Application: Systems of Linear Differential Equations
303
Differentiating both sides of the last equation gives
w′ = (P −1 y)′ = P −1 y′
= P −1 Ay
= P −1 (P DP −1 )y = (P −1 P )(DP −1 )y
= DP −1 y
= Dw
Since D is a diagonal matrix, the original linear system y′ = Ay is transformed into
the uncoupled linear system
w′ = P −1 AP w = Dw
The general solution of this new system is given by
w(t) =
eλ1 t
0
0
eλ2 t
w(0)
Now, to find the solution to the original system, we again use the substitution w =
P −1 y to obtain
0
eλ1 t
P −1 y(0)
P −1 y(t) =
0 eλ2 t
Hence, the solution to the original system is
y(t) = P
EXAMPLE 1
eλ1 t
0
0
eλ2 t
P −1 y(0)
Find the general solution to the system of differential equations
′
y1
′
y2
= −y1
= 3y1 + 2y2
Sketch several trajectories in the phase plane
...
5
...
2
...
In particular, notice in Fig
...
This is so because the sign of λ1 = −1 is negative
...
Figure 2
Confirming Pages
5
...
EXAMPLE 2
Find the general solution to the system of differential equations
′
y1
′
y2
Solution
= y1 + 3y2
=
2y2
The system of differential equations is given in matrix form by
1 3
0 2
y′ = Ay =
y
The eigenvalues of A are λ1 = 1 and λ2 = 2 with corresponding eigenvectors
1
0
v1 =
v2 =
3
1
P −1 =
1
0
and
The matrix that diagonalizes A is then
P =
1 3
0 1
with
−3
1
The uncoupled system is given by
w′ =
=
−3
1
1
0
1 0
0 2
1 3
0 1
1 3
0 2
w
w
with general solution
et
0
w(0)
0 e2t
Hence, the solution to the original system is given by
w(t) =
y(t) =
=
1 3
0 1
et
0
et
0
0
e2t
−3et + 3e2t
e2t
1 −3
0
1
y(0)
y(0)
The general solution can also be written in the form
y1 (t) = y1 (0) − 3y2 (0) et + 3y2 (0)e2t
and
y2 (t) = y2 (0)e2t
Confirming Pages
306
Chapter 5 Eigenvalues and Eigenvectors
The phase portrait is shown in Fig
...
For this example, since λ1 and λ2 are both
positive, the flow is oriented outward along the lines spanned by v1 and v2
...
EXAMPLE 3
Solution
Find the general solution to the system of differential equations
⎧
′
⎪y1 = −y1
⎨
y ′ = 2y1 + y2
⎪ 2
⎩ ′
y3 = 4y1 + y2 + 2y3
The system of differential equations in matrix
⎡
−1
y′ = Ay = ⎣ 2
4
form is
⎤
0 0
1 0 ⎦y
1 2
Since A is triangular, the eigenvalues of A are the diagonal entries λ1
and λ3 = 2 with corresponding eigenvectors
⎤
⎤
⎡
⎡
⎡
−1
0
v2 = ⎣ 1 ⎦
and
v3 = ⎣
v1 = ⎣ 1 ⎦
1
−1
= −1, λ2 = 1,
⎤
0
0 ⎦
2
respectively
...
5
...
Now, by Theorem 2 of Sec
...
2, the diagonalizing
matrix is given by
⎤
⎡
⎡
⎤
−1 0 0
−1
0 0
1 0 ⎦
with
P −1 = ⎣ 1 1 0 ⎦
P =⎣ 1
1
1 −1 2
1 1 2
2
Confirming Pages
5
...
Example 4 gives an illustration of how a linear system of differential equations
can be used to model the concentration of salt in two interconnected tanks
...
The first pipe allows water from tank 1 to enter tank 2 at
a rate of 5 gal/min
...
Initially, the first tank contains
a well-mixed solution of 8 lb of salt in 50 gal of water, while the second tank
contains 100 gal of pure water
...
Find the linear system of differential equations to describe the amount of salt
in each tank at time t
...
Solve the system of equations by reducing it to an uncoupled system
...
Determine the amount of salt in each tank as t increases to infinity and explain
the result
...
Let y1 (t) and y2 (t) be the amount of salt (in pounds) in each tank after t min
...
To develop a system of equations, note that for each
tank
Rate of change of salt = rate in − rate out
Since the volume of brine in each tank remains constant, for tank 1, the rate in
5
5
5
is 100 y2 (t) while the rate out is 50 y1 (t)
...
The system of differential equations is then given by
′
y1 (t) =
5
100
5
′
y2 (t) = 50
y2 (t) −
y1 (t) −
5
50
y1 (t)
5
100
1
′
y1 (t) = − 10 y1 (t) +
that is,
1
10
′
y2 (t) =
y2 (t)
y1 (t) −
1
20
y2 (t)
1
20
y2 (t)
Since the initial amounts of salt in tank 1 and tank 2 are 8 and 0 lb, respectively,
the initial conditions on the system are y1 (0) = 8 and y2 (0) = 0
...
The system of equations in matrix form is given by
y′ =
1
20
1
− 20
1
− 10
1
10
y
with
y(0) =
8
0
3
The eigenvalues of the matrix are λ1 = − 20 and λ2 = 0 with corresponding
1
−1
...
3 Application: Systems of Linear Differential Equations
309
Hence, the solution to the original system is given by
−2
3
3
e− 20 t 0
0 1
−1 1
1 2
y(t) =
3
1
3
3
=
1
3
2e− 20 t + 1 −e− 20 t + 1
3
3
−2e− 20 t + 2
e− 20 t + 2
=
8
3
1
3
1
3
y(0)
2e− 20 t + 1
3
−2e− 20 t + 2
8
0
3
c
...
Exercise Set 5
...
1
...
′
y1
′
y2
′
y1
′
y2
= −y1 + y2
=
− 2y2
= −y1 + 2y2
= y1
3
...
′
y1
′
y2
= y1 − y2
= −y1 + y2
⎧
′
⎪ y1
⎨
5
...
y′
⎪ 2
⎩ ′
y3
= −4y1 − 3y2 − 3y3
= 2y1 + 3y2 + 2y3
= 4y1 + 2y2 + 3y3
= −3y1 − 4y2 − 4y3
= 7y1 + 11y2 + 13y3
= −5y1 − 8y2 − 10y3
In Exercises 7 and 8, solve the initial-value problem
...
′
y1
′
y2
= −y1
y1 (0) = 1
= 2y1 + y2
y2 (0) = −1
Confirming Pages
310
⎧
′
⎪y1
⎨
′
8
...
Suppose that two brine storage tanks are
connected with two pipes used to exchange
solutions between them
...
The second pipe reverses the process,
allowing water to flow from tank 2 to tank 1, also
at a rate of 1 gal/min
...
a
...
b
...
c
...
10
...
m
...
ß
5
...
Further suppose the temperature of the
first floor is 70◦ F and that of the second floor is
60◦ F when the furnace fails
...
2
0
...
1
0
...
5
1
2
0
...
Use the balance law
Net rate of change = rate in − rate out
to set up an initial-value problem to model the
heat flow
...
Solve the initial-value problem found in
part (a)
...
Compute how long it takes for each floor to
reach 32◦ F
...
A critical factor when computing the probabilities of a succession of events is whether the events are dependent on one another
...
A Markov process is useful in describing the tendencies of
conditionally dependent random events, where the likelihood of each event depends
on what happened previously
...
7
0
...
3
0
...
If today is sunny, then there is a 70 percent chance that tomorrow will be sunny
...
If today is cloudy, then there is a 50 percent chance that tomorrow will be cloudy
...
The column headings in Table 1 describe today’s weather, and the row headings
the weather for tomorrow
...
4 Application: Markov Chains
311
followed by another sunny day tomorrow is 0
...
3
...
In a Markov process, these observations are applied iteratively, giving us the
ability to entertain questions such as, If today is sunny, what is the probability that it
will be sunny one week from today?
State Vectors and Transition Matrices
To develop the Markov process required to make predictions about the weather using
v1
the observations above, we start with a vector v =
whose components are the
v2
probabilities for the current weather conditions
...
Each day the components of v change in accordance with the probabilities, listed in Table 1, giving us
the current state of the weather
...
Using Table 1, the state
′
v1
vector v′ =
for the weather tomorrow has components
′
v2
′
v1 = 0
...
5v2
and
′
v1 = 0
...
5(0) = 0
...
3v1 + 0
...
7 times the probability of a
sunny day today plus 0
...
Likewise, the
′
probability v2 of a cloudy day tomorrow is 0
...
5 times the probability of a cloudy day today
...
3(1) + 0
...
3
which is in agreement with the observations above
...
7 0
...
3 0
...
7 0
...
3 0
...
If n is the number of possible states, then the transition matrix
T is an n × n matrix where the ij entry is the probability of moving from state j
to state i
...
5 gives the probability that a cloudy day
is followed by one that is sunny
...
A matrix whose column vectors are probability vectors
is called a stochastic matrix
...
Confirming Pages
312
Chapter 5 Eigenvalues and Eigenvectors
Returning to the weather example, to predict the weather 2 days forward, we
apply the transition matrix T to the vector v′ so that
′′
v1
′′
v2
=
=
0
...
5
0
...
5
′
v1
′
v2
0
...
5
0
...
5
2
v1
v2
0
...
60
0
...
40
=
v1
v2
Thus, for example, if today is sunny, the state vector for the weather 2 days from now
is given by
′′
v1
1
0
0
...
60
0
...
40
=
′′
v2
0
...
36
=
In general, after n days the state vector for the weather is given by
T nv =
n
0
...
5
0
...
5
v1
v2
To answer the question posed earlier about the weather one week after a sunny day,
we compute
0
...
5
0
...
5
7
1
0
=
1
0
0
...
625
0
...
375
=
0
...
375
That is, if today is sunny, then the probability that it will be sunny one week after
today is 0
...
375
...
To facilitate the computations, we use the methods
of Sec
...
2 to diagonalize the transition matrix
...
Observe that T has distinct eigenvalues given by
λ1 = 1
and
λ2 =
and
v2 =
2
10
with corresponding eigenvectors
v1 =
5
3
1
−1
1
Confirming Pages
5
...
Observe that this new vector
5
8
3
8
v1 =
is also an eigenvector since it is in the eigenspace Vλ1
...
5
...
5
...
5
...
Another benefit from this representation is that the matrix D n
approaches
1 0
0 0
as n gets large
...
Steady-State Vector
Given an initial state vector v, of interest is the long-run behavior of this vector in a
Markov chain, that is, the tendency of the vector T n v for large n
...
In our weather model we saw that the transition matrix T has an eigenvalue λ = 1
and a corresponding probability eigenvector given by
v1 =
5
8
3
8
=
0
...
375
Confirming Pages
314
Chapter 5 Eigenvalues and Eigenvectors
We claim that this vector is a steady-state vector for the weather model
...
4
...
6
T 10 u =
0
...
3750000046
and
T 20 u =
0
...
3750000002
which suggests that T n u approaches v1
...
Before doing so, we note that a regular transition matrix T is a transition
matrix such that for some n, all the entries of T n are positive
...
Moreover, s is the steady-state vector for
any initial probability vector
...
Suppose that the percentages of the total number of participants enrolled in
each plan are 25 percent, 30 percent, and 45 percent, respectively
...
A
B
C
A 0
...
25 0
...
15 0
...
4
C 0
...
3
0
...
Find the percent of participants enrolled in each plan after 5 years
...
Find the steady-state vector for the system
...
75 0
...
2
T = ⎣ 0
...
45 0
...
1 0
...
4
⎡
a
...
47
0
...
49776 0
...
45608
T 5 v = ⎣ 0
...
30432 0
...
30 ⎦ = ⎣ 0
...
22
0
...
21760 0
...
23728
so approximately 47 percent will be enrolled in plan A, 30 percent in plan B,
and 22 percent in plan C
...
The steady-state vector for the system is the probability eigenvector corresponding to the eigenvalue λ = 1, that is,
⎤
⎡
0
...
30 ⎦
0
...
4 Application: Markov Chains
315
Exercise Set 5
...
Each year it is estimated that 15 percent of the
population in a city moves to the surrounding
suburbs and 8 percent of people living in the
suburbs move to the city
...
4 million living in the
city
...
Write the transition matrix for the Markov
chain describing the migration pattern
...
Compute the expected population after 10
years
...
Find the steady-state probability vector
...
After opening a new mass transit system, the
transit authority studied the user patterns to try to
determine the number of people who switched
from using an automobile to the system
...
Suppose that the population
remains constant and that initially 35 percent of
the commuters use mass transit
...
Write the transition matrix for the Markov
chain describing the system
...
Compute the expected number of commuters
who will be using the mass transit system in 2
years
...
c
...
3
...
When a variety with red flowers is cross-bred
with another variety, the probabilities of the new
plant having red, pink, or white flowers are given
in the table
...
5
0
...
1
P
0
...
4
0
...
1
0
...
7
Suppose initially there are only plants with pink
flowers which are bred with other varieties with
the same likelihood
...
After 10
generations
...
A fleet of taxis picks up and delivers commuters
between two nearby cities A and B and the
surrounding suburbs S
...
The taxi company is interested in
knowing on average where the taxis are
...
6
0
...
4
B
0
...
4
0
...
3
0
...
3
a
...
Suppose 30 percent of the taxis are in city A,
35 percent are in city B, and 35 percent are in
the suburbs
...
c
...
5
...
Determine whether
the epidemic will be eradicated
...
6
...
If 70
percent of the population are smokers, what
fraction will be smoking in 5 years? In 10 years?
In the long run?
7
...
The pads are arranged in a square
...
Each time the frog jumps, the probability of
Confirming Pages
316
Chapter 5 Eigenvalues and Eigenvectors
jumping to an adjacent pad is 1/4, the probability
of jumping to the diagonal pad is 1/6, and the
probability of landing on the same pad is 1/3
...
Write the transition matrix for the Markov
process
...
Find the probability state vector after the frog
has made n jumps starting at pad A
...
Find the steady-state vector
...
Let the transition matrix for a Markov process be
T =
constant population where residents can move
between two locations
...
9
...
Find the eigenvalues of T
...
Find T n for n ≥ 1
...
c
...
Suppose the transition matrix T for a Markov
process is a 2 × 2 stochastic matrix that is also
symmetric
...
Find the eigenvalues for the matrix T
...
Find the steady-state probability vector for the
Markov process
...
Let
a b
A=
b a
for some real numbers a and b
...
a
...
Find the eigenvalues of A
...
Find the eigenvectors corresponding to each
eigenvalue found in part (b)
...
Diagonalize the matrix A, using the
eigenvectors found in part (b)
...
Specify the diagonal matrix
...
Let
⎤
0 0
2
0 ⎦
A=⎣ 0 2
0 0 −1
⎡
a
...
b
...
c
...
d
...
e
...
f
...
Specify the diagonal
matrix D such that D = P −1 AP
...
Repeat Exercise 2 with
⎡
1
⎢ 1
A=⎢
⎣ 0
1
0
1
0
0
1
1
0
1
⎤
0
0 ⎥
⎥
0 ⎦
0
4
...
Find the characteristic polynomial for A
...
Find the eigenvalues of A
...
Find the dimension of each eigenspace of A
...
4 Application: Markov Chains
d
...
e
...
f
...
5
...
Show the characteristic equation of A is
λ3 − 3λ + k = 0
...
Sketch the graph of y(λ) = λ3 − 3λ + k for
k < −2, k = 0, and k > 2
...
Determine the values of k for which the matrix
A has three distinct real eigenvalues
...
Suppose that B = P −1 AP and v is an eigenvector
of B corresponding to the eigenvalue λ
...
7
...
a
...
b
...
Let V be a vector space and T: V −→ V a linear
operator
...
a
...
b
...
c
...
9
...
Two linear operators S and T on a vector
space V are said to commute if S(T (v)) =
T (S(v)) for every vector v in V
...
b
...
Suppose that
T has n distinct eigenvalues
...
c
...
Show that if
S and T are simultaneously diagonalizable
linear operators on an n-dimensional vector
space V, then S and T commute
...
Show directly that the matrices
⎡
and
3 0
A=⎣ 0 2
1 0
⎤
1
0 ⎦
3
⎤
1 0 −2
0 ⎦
B=⎣ 0 1
−2 0
1
are simultaneously diagonalizable
...
The Taylor series expansion (about x = 0) for the
natural exponential function is
ex = 1 + x +
1 2
1
x + x3 + · · · =
2!
3!
∞
k=0
1 k
x
n!
If A is an n × n matrix, we can define the matrix
exponential as
1 2
1
A + A3 + · · ·
2!
3!
1
1
1 m
A )
= lim (I + A + A2 + A3 + · · · +
m→∞
2!
3!
m!
eA = I + A +
Confirming Pages
318
Chapter 5 Eigenvalues and Eigenvectors
a
...
0
⎢ 0 λ2 0
...
...
...
...
...
...
0
...
⎤
⎥
⎥
⎥
⎦
b
...
Show that eA = P eD P −1
...
Use parts (a) and (b) to compute eA for the
matrix
−1
2
6
3
A=
Chapter 5: Chapter Test
In Exercises 1–40, determine whether the statement is
true or false
...
The matrix
P =
6
...
The matrix
⎡
A=⎣
is diagonalizable
...
−1
0 0
0
1 0 ⎦
−1 −1 1
9
...
10
...
11
...
0
−3
12
...
is λ3 + 2λ2 + λ − 4
...
8
...
The characteristic polynomial
⎡
−1 −1
0
A=⎣ 0
2 −2
1
3
3 −2
2 −1
has an eigenvalue λ1 = 1 and Vλ1 has
dimension 1
...
The eigenvalues of
A=
0
1
7
...
The matrix
−4
0
3 −5
A=
1
1
k
1
has
only one eigenvalue
...
If A is a 2 × 2 invertible matrix, then A and A−1
have the same eigenvalues
...
If A is similar to B, then tr(A) = tr(B)
...
The matrix A =
1 1
0 1
is diagonalizable
...
4 Application: Markov Chains
16
...
If an n × n matrix A is diagonalizable, then A has
n linearly independent eigenvectors
...
If A and B are n × n matrices, then AB and BA
have the same eigenvalues
...
29
...
In Exercises 17–19, let
⎡
⎤
1 0
0
0 ⎦
A=⎣ 0 2
0 0 −1
and
⎡
−1 0
B=⎣ 0 1
0 0
30
...
31
...
⎤
0
0 ⎦
2
17
...
⎤
0 1 0
P =⎣ 0 0 1 ⎦
1 0 0
then B = P −1 AP
...
If A and B are n × n diagonalizable matrices with
the same diagonalizing matrix, then AB = BA
...
If a 2 × 2 matrix has eigenvectors
1
−2
−1
1
and
, then it has the form
2α − β
β − 2α
32
...
18
...
19
...
The only matrix similar to the identity matrix is
the identity matrix
...
If λ = 0 is an eigenvalue of A, then the matrix A
is not invertible
...
If A is diagonalizable, then A is similar to a
unique diagonal matrix
...
If an n × n matrix A has only m distinct
eigenvalues with m < n, then A is not
diagonalizable
...
If an n × n matrix A has n distinct eigenvalues,
then A is diagonalizable
...
If an n × n matrix A has a set of eigenvectors that
is a basis for ޒn , then A is diagonalizable
...
If λ is an eigenvalue of the n × n matrix A, then
the set of all eigenvectors corresponding to λ is a
subspace of ޒn
...
If each column sum of an n × n matrix A is a
constant c, then c is an eigenvalue of A
...
If A and B are similar, then they have the same
characteristic equation
...
If λ is an eigenvalue of A, then λ2 is an
eigenvalue of A2
...
If A is a 2 × 2 matrix with characteristic
polynomial λ2 + λ − 6, then the eigenvalues of
A2 are λ1 = 4 and λ2 = 9
...
Define a linear operator T: P1 → P1 , by
T (a + bx) = a + (a + b)x
...
40
...
Confirming Pages
Confirming Pages
CHAPTER
Inner Product Spaces
CHAPTER OUTLINE
6
...
2
6
...
4
6
...
6
6
...
8
The Dot Product on ޒn 323
Inner Product Spaces 333
Orthonormal Bases 342
Orthogonal Complements 355
Application: Least Squares Approximation 366
Diagonalization of Symmetric Matrices 377
Application: Quadratic Forms 385
Application: Singular Value Decomposition 392
A
Billion Tons
8
2000
1995
1990
1985
1980
1975
1970
1965
1960
1955
1950
Figure 1
ccording to a growing number of scientists,
a contributing factor in the rise in global
temperatures is the emission of greenhouse gases
such as carbon dioxide
...
Table 1∗ gives the global
carbon emissions, in billions of tons, from burning fossil fuels during the period from 1950
through 2000
...
1, exhibits an increasing trend which can
be approximated with a straight line, also shown
in Fig
...
63
1980
5
...
04
1985
5
...
58
1990
6
...
14
1995
6
...
08
2000
6
...
62
∗ Worldwatch Institute, Vital Signs 2006–2007
...
W
...
321
Confirming Pages
322
Chapter 6 Inner Product Spaces
there is no one line that passes through all the points
...
, 11, denote the data points where xi is the year, starting with x1 = 1950,
and yi is the amount of greenhouse gas being released into the atmosphere
...
63 − (1950m − b)]2 + · · · + [6
...
One method for finding the numbers m and b uses results from multivariable calculus
...
To use this approach, we attempt to look for numbers
m and b such that the linear system
⎧
⎪m(1950) + b = 1
...
04
...
⎪
⎪
...
64
is satisfied
...
63
2
...
58
3
...
08
4
...
32
5
...
14
6
...
64
⎤
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎦
Now, since there is no one line going through each of the data points, an exact solution
to the previous linear system does not exist! However, as we will see, the best-fit line
comes from finding a vector x so that Ax is as close as possible to b
...
1, is given by
y = 0
...
462
In the last several chapters we have focused our attention on algebraic properties
of abstract vector spaces derived from our knowledge of Euclidean space
...
2
...
1 The Dot Product on ޒn
323
Sec
...
1
...
These geometric ideas are developed from
a generalization of the dot product of two vectors in ޒn , called the inner product,
which we define in Sec
...
2
...
ß
6
...
1
...
⎥
u=⎢
...
⎦
⎣
...
...
Using the distance formula, the length (or
norm) of v, which we denote by || v ||, is defined as the distance from the terminal
point of v to the origin and is given by
|| v || =
2
2
2
v 1 + v2 + v3
Observe that the quantity under the square root symbol can be written as the dot
product of v with itself
...
DEFINITION 1
Length of a Vector in ޒn
The length (or norm) of a vector
⎡
⎤
v1
⎢ v2 ⎥
⎢
⎥
v=⎢
...
⎦
...
Then
−1
⎡
y
u−v
u
v
|| v || =
x
√
v·v =
12 + 22 + (−1)2 =
√
6
In Sec
...
1, it was shown that the difference u − v, of two vectors u and v in
standard position, is a vector from the terminal point of v to the terminal point of u,
as shown in Fig
...
This provides the rationale for the following definition
...
⎥
⎣
...
un
be vectors in
ޒn
...
...
vn
⎤
⎥
⎥
⎥
⎦
The distance between u and v is defined by
|| u − v || =
(u − v) · (u − v)
Since the orientation of a vector does not affect its length, the distance from u to
v is equal to the distance from v to u, so that
|| u − v || = || v − u ||
EXAMPLE 1
Show that if v is a vector in ޒn and c is a real number, then
|| cv || = | c | || v ||
Solution
Let
⎤
v1
⎢ v2 ⎥
v=⎢
...
⎦
...
1 The Dot Product on ޒn
325
Then
|| cv || =
=
(cv1 )2 + (cv2 )2 + · · · + (cvn )2
(cv) · (cv) =
2
2
2
c 2 v1 + c 2 v2 + · · · + c 2 vn =
= |c|
2
2
2
c2 (v1 + v2 + · · · + vn )
2
2
2
v1 + v2 + · · · + vn = | c | || v ||
The result of Example 1 provides verification of the remarks following Definition
2 of Sec
...
1 on the effect of multiplying a vector v by a real number c
...
If, in addition, c < 0, then the
direction of cv is reversed
...
Then 2v has length 20
...
If the length of a vector in ޒn is 1, then v is called a unit vector
...
Then
uv =
is a unit vector in the direction of v
...
EXAMPLE 2
Solution
Let
⎤
1
v=⎣ 2 ⎦
−2
Find the unit vector uv in the direction of v
...
Then by Proposition 1, we have
⎤
⎡
1
1⎣
1
2 ⎦
uv = v =
3
3 −2
Revised Confirming Pages
326
Chapter 6 Inner Product Spaces
Theorem 1 gives useful properties of the dot product
...
THEOREM 1
Let u, v, and w be vectors in ޒn and c a scalar
...
2
...
4
...
EXAMPLE 3
Solution
u·u ≥ 0
u · u = 0 if and only if u = 0
u·v = v·u
u · (v + w) = u · v + u · w and (u + v) · w = u · w + v · w
(cu) · v = c(u · v)
Let u and v be vectors in ޒn
...
By repeated use of part 4, we have
(u + v) · (u + v) = (u + v) · u + (u + v) · v
= u·u + v·u + u·v + v·v
Now, by part 3, v · u = u · v, so that
(u + v) · (u + v) = u · u + 2u · v + v · v
or equivalently,
(u + v) · (u + v) = || u ||2 + 2u · v + || v ||2
The next result, know as the Cauchy-Schwartz inequality, is fundamental in developing a geometry on ޒn
...
THEOREM 2
Cauchy-Schwartz Inequality
If u and v are in vectors in ޒn , then
|u · v| ≤ || u || || v ||
Proof If u = 0, then u · v = 0
...
Now suppose that u ̸= 0 and k is a real number
...
By Theorem 1, part 1,
we have
(ku + v) · (ku + v) ≥ 0
Now, by Theorem 1, part 4, the left-hand side can be expanded to obtain
k 2 (u · u) + 2k(u · v) + v · v ≥ 0
Observe that the expression on the left-hand side is quadratic in the variable k
with real coefficients
...
1 The Dot Product on ޒn
327
This inequality imposes conditions on the coefficients a, b, and c
...
Thus, by the
quadratic formula, the discriminant (2b)2 − 4ac ≤ 0, or equivalently,
(u · v)2 ≤ (u · u)(v · v)
After taking the square root of both sides, we obtain
|u · u| ≤ || v || || v ||
as desired
...
To motivate this idea, let u and v be nonzero vectors in
2ޒwith u − v the vector connecting the terminal point of v to the terminal point of
u, as shown in Fig
...
As these three vectors form a triangle in , 2ޒwe apply the law
of cosines to obtain
|| u − v ||2 = || u ||2 + || v ||2 − 2 || u || || v || cos θ
Figure 3
Using Theorem 1, we rewrite this equation as
u · u − 2u · v + v · v = u · u + v · v − 2 || u || || v || cos θ
After simplifying and solving for cos θ, we obtain
cos θ =
u·v
|| u || || v ||
Our aim now is to extend this result and use it as the definition of the cosine
of the angle between vectors in n-dimensional Euclidean space
...
But this fact follows immediately from the CauchySchwartz inequality
...
Revised Confirming Pages
328
Chapter 6 Inner Product Spaces
DEFINITION 3
EXAMPLE 4
Solution
Angle Between Vectors in ޒn If u and v are vectors in ޒn , then the cosine
of the angle θ between the vectors is defined by
u·v
cos θ =
|| u || || v ||
Find the angle between the two vectors
⎤
⎡
2
and
u = ⎣ −2 ⎦
3
The lengths of the vectors are
|| u || =
22 + (−2)2 + 32 =
√
17
and
⎤
−1
v=⎣ 2 ⎦
2
⎡
|| v || =
(−1)2 + 22 + 22 = 3
and the dot product of the vectors is
u · v = 2(−1) + (−2)2 + 3(2) = 0
By Definition 3, the cosine of the angle between u and v is given by
u·v
=0
cos θ =
|| u || || v ||
Hence, θ = π/2 and the vectors are perpendicular
...
DEFINITION 4
Orthogonal Vectors The vectors u and v are called orthogonal if the angle
between them is π/2
...
On the other hand, if u and v
are orthogonal, then cos θ = 0, so that
u·v
=0
therefore
u·v = 0
|| u || || v ||
The zero vector is orthogonal to every vector in ޒn since 0 · v = 0, for every vector v
...
Revised Confirming Pages
6
...
The
zero vector is orthogonal to every vector in ޒn
...
Theorem 3 gives several useful properties of the norm in ޒn
...
2
...
4
...
|| v || ≥ 0
|| v || = 0 if and only if v = 0
|| cv || = |c| || v ||
(Triangle inequality) || u + v || ≤ || u || + || v ||
Proof Parts 1 and 2 follow immediately from Definition 1 and Theorem 1
...
To establish part 4, we have
|| u + v ||2 = (u + v) · (u + v)
= (u · u) + 2(u · v) + (v · v)
= || u ||2 + 2(u · v) + || v ||2
≤ || u ||2 + 2|u · v| + || v ||2
y
Now, by the Cauchy-Schwartz inequality, |u · v| ≤ || u || || v ||, so that
|| u + v ||
|| v ||
|| v ||
|| u ||
= (|| u || + || v ||)2
After taking square roots of both sides of this equation, we obtain
x
Figure 4
|| u + v ||2 ≤ || u ||2 + 2 || u || || v || + || v ||2
|| u + v || ≤ || u || + || v ||
Geometrically, part 4 of Theorem 3 confirms our intuition that the shortest distance
between two points is a straight line, as seen in Fig
...
Revised Confirming Pages
330
Chapter 6 Inner Product Spaces
PROPOSITION 3
Let u and v be vectors in ޒn
...
Proof First suppose that the vectors have the same direction
...
Therefore,
|| u + v ||2 = (u + v) · (u + v)
= || u ||2 + 2(u · v) + || v ||2
= || u ||2 + 2 || u || || v || + || v ||2
= (|| u || + || v ||)2
Taking square roots of both sides of the previous equation gives || u + v || =
|| u || + || v ||
...
After squaring both sides,
we obtain
|| u + v ||2 = || u ||2 + 2 || u || || v || + || v ||2
However, we also have
|| u + v ||2 = (u + v) · (u + v) = || u ||2 + 2u · v + || v ||2
Equating both expressions for || u + v ||2 gives
|| u ||2 + 2 || u || || v || + || v ||2 = || u ||2 + 2u · v + || v ||2
Simplifying the last equation, we obtain u · v = || u || || v || and hence
u·v
=1
|| u || || v ||
Therefore, cos θ = 1, so that θ = 0 and the vectors have the same direction
...
1
...
3ޒ
2
...
The dot product of two vectors is
commutative and distributes through vector addition
...
By using the Cauchy-Schwartz inequality |u · v| ≤ || u || || v ||, the angle
between vectors is defined by
u·v
cos θ =
|| u || || v ||
4
...
Confirming Pages
6
...
The norm of a vector is nonnegative, is 0 only when the vector is the zero
vector, and satisfies
|| cu || = |c| || u ||
and
|| u + v || ≤ || u || + || v ||
Equality holds in the last inequality only when the vectors are in the same
direction
...
If u and v are orthogonal vectors, then the Pythagorean theorem
|| u + v ||2 = || u ||2 + || v ||2
holds
...
1
In Exercises 1–4, let
⎡ ⎤
0
u=⎣ 1 ⎦
3
⎤
⎡
1
w=⎣ 1 ⎦
−3
⎤
⎡
1
v = ⎣ −1 ⎦
2
10
...
In Exercises 11–16, let
⎤
⎡
−3
u = ⎣ −2 ⎦
3
⎤
−1
v = ⎣ −1 ⎦
−3
⎡
11
...
Compute the quantity
...
Find the distance between u and v
...
u · v
u·v
2
...
u · (v + 2w)
u·w
w
4
...
Find a unit vector in the direction of u
...
Find the cosine of the angle between the two
vectors
...
15
...
16
...
1
5
v=
2
1
5
...
6
...
7
...
8
...
Are the vectors orthogonal? Explain
...
Find a vector in the direction of v with length 10
...
Find a scalar c, so that
−1
...
Find a scalar c, so that ⎣ c ⎦ is orthogonal to
2
⎤
⎡
0
⎣ 2 ⎦
...
Determine which of the vectors are orthogonal
...
Determine which of the vectors are in the same
direction
...
Determine which of the vectors are in the
opposite direction
...
Determine which of the vectors are unit vectors
...
Sketch the three vectors u, v, and w
...
u =
2
3
24
...
u =
4
3
v=
4
0
v=
v=
⎤
⎡
5
26
...
u = ⎣ 0 ⎦ v = ⎣
0
⎡
4
0
3
1
⎤
1
0 ⎦
0
⎤
5
2 ⎦
1
⎤
⎤
⎡
0
2
28
...
Let S = {u1 , u2 ,
...
, n
...
30
...
Show that S is a subspace of
ޒn
...
Let S = {v1 , v2 ,
...
That is, if
i ̸= j , then vi · vj = 0
...
32
...
Show that if
i ̸= j , then row vector i of A and column vector
j of A−1 are orthogonal
...
Show that for all vectors u and v in ޒn ,
|| u + v ||2 + || u − v ||2
= 2 || u ||2 + 2 || v ||2
34
...
Find a vector that is orthogonal to every vector
in the plane P : x + 2y − z = 0
...
Find a matrix A such that the null space N (A)
is the plane x + 2y − z = 0
...
Suppose that the column vectors of an n × n
matrix A are pairwise orthogonal
...
36
...
Show that
u · (Av) = (At u) · v
37
...
Show that A is
symmetric if and only if
(Au) · v = u · (Av)
for all u and v in ޒn
...
Confirming Pages
6
...
2
333
Inner Product Spaces
In Sec
...
1 we introduced the concepts of the length of a vector and the angle between
vectors in Euclidean space
...
Notice that the dot product on ޒn defines a
function from ޒn × ޒn into
...
To extend these ideas to an abstract vector space V ,
we require a function from V × V into ޒthat generalizes the properties of the dot
product given in Theorem 1 of Sec
...
1
...
ޒAn inner product on V is
a function that associates with each pair of vectors u and v in V a real number,
denoted by ⟨u, v⟩, that satisfies the following axioms:
1
...
3
...
⟨u, u⟩ ≥ 0 and ⟨u, u⟩ = 0 if and only if u = 0 (positive definite)
⟨u, v⟩ = ⟨v, u⟩ (symmetry)
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
⟨cu, v⟩ = c ⟨u, v⟩
The last two properties make the inner product linear in the first variable
...
⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
′
4
...
A vector space V with an inner product is called an inner product space
...
6
...
Thus, ޒn with the dot product is an inner product space
...
Find all vectors u in 2ޒsuch that ⟨u, v⟩ = 0, where the inner
3
product is the dot product
...
Therefore, the set of all vectors such
3
v
1
...
1
...
that ⟨u, v⟩ = 0 is given by S = span
Ϫ5
5
x
S
Ϫ5
Figure 1
For another example, consider the vector space of polynomials P2
...
Now
let ⟨ · , · ⟩ : P2 × P2 → ޒbe the function defined by
⟨p, q⟩ = a0 b0 + a1 b1 + a2 b2
Notice that this function is similar to the dot product on
...
6
...
Another way to define an inner product on P2 is to use the definite integral
...
The justification, in this case, is based
on the fundamental properties of the Riemann integral which can be found in any text
on real analysis
...
Let p(x) = 1 − x 2 and q(x) = 1 − x + 2x 2
...
b
...
Verify that ⟨p, p⟩ > 0
...
Using the definition given for the inner product, we have
1
⟨p, q⟩ =
=
0
0
1
(1 − x 2 )(1 − x + 2x 2 ) dx
(1 − x + x 2 + x 3 − 2x 4 ) dx
1
1
1
2
x − x2 + x3 + x4 − x5
2
3
4
5
41
=
60
=
1
0
Confirming Pages
6
...
The inner product of p with itself is given by
1
⟨p, p⟩ =
=
0
0
1
(1 − x 2 )(1 − x 2 ) dx
(1 − 2x 2 + x 4 ) dx
2
1
x − x3 + x5
3
5
8
>0
=
15
=
1
0
Example 3 gives an illustration of an inner product on ޒn that is not the dot
product
...
Let k be a fixed positive real number, and define the function
⟨ · , · ⟩: ޒ → 2ޒ × 2ޒby
⟨u, v⟩ = u1 v1 + ku2 v2
Show that V is an inner product space
...
From the definition above, we have
⟨u, u⟩ = u2 + ku2
1
2
Since k > 0, then u2 + ku2 ≥ 0 for every vector u
...
The property of symmetry also holds since
⟨u, v⟩ = u1 v1 + ku2 v2 = v1 u1 + kv2 u2 = ⟨v, u⟩
Next, let w =
w1
w2
be another vector in
...
Notice that in Example 3 the requirement that k > 0 is necessary
...
Again using ޒn as our model, we now define the length (or norm) of a vector v
in an inner product space V as
|| v || =
⟨v, v⟩
The distance between two vectors u and v in V is then defined by
⟨u − v, u − v⟩
|| u − v || =
The norm in an inner product space satisfies the same properties as the norm in
ޒn
...
THEOREM 4
Properties of the Norm in an Inner Product Space Let u and v be vectors
in an inner product space V and c a scalar
...
2
...
4
...
EXAMPLE 4
|| v || ≥ 0
|| v || = 0 if and only if v = 0
|| cv || = |c| || v ||
| ⟨u, v⟩ | ≤ || u || || v || (Cauchy-Schwartz inequality)
|| u + v || ≤ || u || + || v || (Triangle inequality)
Let V = 2ޒwith inner product defined by
⟨u, v⟩ = u1 v1 + 3u2 v2
Let
u=
2
−2
and
v=
1
4
a
...
b
...
Revised Confirming Pages
6
...
Using the given definition for the inner product, we have
|⟨u, v⟩| = |(2)(1) + 3(−2)(4)| = | − 22| = 22
The norms of u and v are given, respectively, by
|| u || =
(2)2 + 3(−2)2 =
⟨u, u⟩ =
and
Since
|| v || =
(1)2 + 3(4)2 =
⟨v, v⟩ =
√
16 = 4
√
49 = 7
22 = |⟨u, v⟩| < || u || || v || = 28
the Cauchy-Schwartz inequality is satisfied for the vectors u and v
...
To verify the triangle inequality, observe that
u+v=
so that
Since
2
−2
+
=
(3)2 + 3(2)2 =
|| u + v || =
√
1
4
3
2
√
21
21 = || u + v || < || u || + || v || = 4 + 7 = 11
the triangle inequality holds for u and v
...
6
...
EXAMPLE 5
Let V = P2 with inner product defined by
1
⟨p, q⟩ =
a
...
b
...
Confirming Pages
338
Chapter 6 Inner Product Spaces
Solution
a
...
b
...
, vn } in an inner product space is
called orthogonal if the vectors are mutually orthogonal; that is, if i ̸= j , then
vi , vj = 0
...
n, then the set of vectors
is called orthonormal
...
They do not, however, form an orthonormal set
...
PROPOSITION 4
Let V be an inner product space
...
Proof Let v be a vector in V
...
A useful property of orthogonal sets of nonzero vectors is that they are linearly
independent
...
2 Inner Product Spaces
339
and linearly independent
...
THEOREM 5
If S = {v1 , v2 ,
...
Proof Since the set S is an orthogonal set of nonzero vectors,
vi , vj = 0
for i ̸= j
⟨vi , vi ⟩ = || vi ||2 ̸= 0
and
for all i
Now suppose that
c1 v1 + c2 v2 + · · · + cn vn = 0
The vectors are linearly independent if and only if the only solution to the previous
equation is the trivial solution c1 = c2 = · · · = cn = 0
...
Take the inner product on both sides of the previous equation with vj so that
vj , (c1 v1 + c2 v2 + · · · + cj −1 vj −1 + cj vj + cj +1 vj +1 + · · · + cn vn ) = vj , 0
By the linearity of the inner product and the fact that S is orthogonal, this equation
reduces to
cj vj , vj = vj , 0
Now, by Proposition 4 and the fact that
cj
vj
2
=0
vj
̸= 0, we have
so that
cj = 0
Since this holds for each j = 1,
...
COROLLARY 1
If V is an inner product space of dimension n, then any orthogonal set of n nonzero
vectors is a basis for V
...
3
...
Theorem
6 provides us with an easy way to find the coordinates of a vector relative to
an orthonormal basis
...
THEOREM 6
If B = {v1 , v2 ,
...
, n
...
Taking the inner product on both sides of
v = c1 v1 + c2 v2 + · · · + ci−1 vi−1 + ci vi + ci+1 vi+1 + · · · + cn vn
Confirming Pages
340
Chapter 6 Inner Product Spaces
with vi on the right gives
⟨v, vi ⟩ = ⟨(c1 v1 + c2 v2 + · · · + ci−1 vi−1 + ci vi + ci+1 vi+1 + · · · + cn vn ), vi ⟩
= c1 ⟨v1 , vi ⟩ + · · · + ci ⟨vi , vi ⟩ + · · · + cn ⟨vn , vi ⟩
Since B is an orthonormal set, this reduces to
⟨v, vi ⟩ = ci ⟨vi , vi ⟩ = ci
As this argument can be carried out for any vector in B, then ci = ⟨v, vi ⟩ for all
i = 1, 2,
...
In Theorem 6, if the ordered basis B is orthogonal and v is any vector in V , then
the coordinates relative to B are given by
ci =
so that
v=
⟨v, vi ⟩
⟨vi , vi ⟩
for each i = 1,
...
1
...
2
...
3
...
Thus, any set of n
orthogonal vectors is a basis for an inner product space of dimension n
...
When an arbitrary vector is written in terms of the vectors in an orthogonal
basis, the coefficients are given explicitly by an expression in terms of the
inner product
...
, vn } is the orthogonal basis and v is an arbitrary
vector, then
⟨v, v1 ⟩
⟨v, v2 ⟩
⟨v, vn ⟩
v1 +
v2 + · · · +
vn
⟨v1 , v1 ⟩
⟨v2 , v2 ⟩
⟨vn , vn ⟩
If case B is an orthonormal basis, then
v=
v = ⟨v, v1 ⟩ v1 + · · · + ⟨v, vn ⟩ vn
Confirming Pages
6
...
2
In Exercises 1–10, determine whether V is an inner
product space
...
V = 2ޒ
⟨u, v⟩ = u1 v1 − 2u1 v2 − 2u2 v1 + 3u2 v2
2
...
V = 2ޒ
2
2
⟨u, v⟩ = u2 v1 + u2 v2
1
2
4
...
V = ޒn
⟨u, v⟩ = u · v
6
...
V = Mm×n
m
⟨A, B⟩ =
8
...
Find the distance between the vectors f and g
...
Find the cosine of the angle between the
vectors f and g
...
f (x) = 3x − 2, g(x) = x 2 + 1; a = 0, b = 1
16
...
f (x) = x, g(x) = ex ; a = 0, b = 1
18
...
Find the distance between the vectors p and q
...
Find the cosine of the angle between the
vectors p and q
...
p(x) = x − 3, q(x) = 2x − 6
pi qi
i=0
9
...
V = C (0) [−1, 1]
1
⟨f, g⟩ = −1 f (x)g(x)x dx
In Exercises 11–14, let V = C (0) [a, b] with inner
product
b
⟨f, g⟩ =
f (x)g(x) dx
a
19
...
Find the distance between the vectors A and B
...
Find the cosine of the angle between the
vectors A and B
...
A =
1
2
2
−1
B=
22
...
11
...
1, x, 1 (5x 3 − 3x) ; a = −1, b = 1
2
13
...
{1, cos x, sin x, cos 2x, sin 2x}; a = −π, b = π
In Exercises 15–18, let V = C (0) [a, b] with inner
product
⎡
2 1
1 3
0
1
⎤
1
0 −2
1
1 ⎦
23
...
A = ⎣ 3 1 0 ⎦
3 2 1
⎤
⎡
0 0 1
B=⎣ 3 3 2 ⎦
1 0 2
⎡
30
...
Verify that if A = I , then the function defines
an inner product
...
Show that if A =
−1
2
function defines an inner product
...
Show that if A =
2 0
does not define an inner product
...
Describe the set of all vectors in 2ޒthat are
2
...
Describe the set of all vectors in 2ޒthat are
1
...
Describe the set of all vectors in 3ޒthat are
⎤
⎡
2
orthogonal to ⎣ −3 ⎦
...
Describe the set of all vectors in 3ޒthat are
⎤
⎡
1
orthogonal to ⎣ 1 ⎦
...
Define an inner product on C (0) [−a, a] by
a
⟨f, g⟩ =
Show that if f is an even function and g is an
odd function, then f and g are orthogonal
...
Define an inner product on C (0) [−π, π] by
π
⟨f, g⟩ =
Show
29
...
Find ex , e−x
...
Find the angle between f (x) = 1 and
g(x) = x
...
Find the distance between f (x) = 1 and
g(x) = x
...
b
...
d
...
3
f (x)g(x) dx
−π
{1, cos x, sin x, cos 2x, sin 2x,
...
(See Exercise 31
...
In an inner product space, show that if the set
{u1 , u2 } is orthogonal, then for scalars c1 and c2
the set {c1 u1 , c2 u2 } is also orthogonal
...
Show that if ⟨u, v⟩ and ⟨⟨u, v⟩⟩ are two different
inner products on V , then their sum
⟨⟨⟨u, v⟩⟩⟩ = ⟨u, v⟩ + ⟨⟨u, v⟩⟩
defines another inner product
...
6
...
, vn } is an ordered orthonormal
basis of an inner product space V , then the coordinates of any vector v in V are
given by an explicit formula using the inner product on the space
...
n
...
As we have already seen,
Revised Confirming Pages
6
...
, en } is an orthonormal basis for ޒn
...
Orthogonal Projections
Of course, most of the bases we encounter are not orthonormal, or even orthogonal
...
The method, called the Gram-Schmidt process, involves projections
of vectors onto other vectors
...
1(a)
...
1(b)
...
6
...
Now, to find w, we take
the product of the scalar projection with a unit vector in the direction of v, so that
cos θ =
w=
u·v
|| v ||
u·v
v
=
v
|| v ||
|| v ||2
Moreover, since || v ||2 = v · v, the vector w can be written in the form
u·v
v
v·v
This vector is called the orthogonal projection of u onto v and is denoted by projv u,
so that
u·v
projv u =
v
v·v
w=
u
u − projv u
projv u
Figure 2
v
Another useful vector, shown in Fig
...
From the manner in which projv u is defined, the vector u − projv u
is orthogonal to projv u, as shown in Fig
...
To verify algebraically that projv u and
u − projv u are orthogonal, we show that the dot product of these two vectors is zero
...
1, the angle θ shown is an acute angle
...
3
...
Let
u=
1
3
and
1
1
v=
a
...
b
...
Solution
a
...
Using the result of part (a), we have
u − projv u =
y
u − projv u
u
projv u
v
x
Figure 4
1
3
−
2
2
To show that projv u is orthogonal to u − projv u, we compute the dot product
...
4
...
Confirming Pages
345
6
...
The
orthogonal projection of u onto v, denoted by projv u, is defined by
⟨u, v⟩
v
⟨v, v⟩
The vector u − projv u is orthogonal to projv u
...
a
...
b
...
Solution
a
...
From part (a), we have
5
p − projq p = x − x 2
4
To show that the vectors p and p − projq p are orthogonal, we show that the
inner product is zero
...
The key to this construction is the projection of one vector onto
another
...
5
...
6
...
2ޒ
v2
v2
v1
v1 = w1
w2
projv1 v2
B = {v1 , v2 } is a basis for 2ޒ
Figure 5
w2 = v2 − projv1 v2
Figure 6
To construct an orthonormal basis for ޒn , we first need to extend this idea to
general inner product spaces
...
Proof The proof is by induction on the dimension n of the inner product space
...
Now assume that every inner
product space of dimension n has an orthogonal basis
...
, vn , vn+1 } is a basis
...
, vn }
...
By the inductive hypothesis, W has an orthogonal basis B
...
, wn }
...
, wn , vn+1 } is another basis for V
...
6
...
(Here is
where we extend the idea presented just prior to the theorem
...
To complete the proof, we must show that w is orthogonal to
Confirming Pages
6
...
To see this, let wi be a vector in B
...
, wn } is mutually orthogonal, the previous
equation reduces to
⟨w, wi ⟩ = ⟨vn+1 , wi ⟩ − 0 − 0 − · · · −
B′
= ⟨vn+1 , wi ⟩ − ⟨vn+1 , wi ⟩ = 0
⟨vn+1 , wi ⟩
⟨wi , wi ⟩ − 0 − · · · − 0
⟨wi , wi ⟩
= {w1 , w2 ,
...
Therefore
That B ′ is a basis for V is due to Corollary 1 of Sec
...
2
...
That is, if B = {w1 , w2 ,
...
,
|| w1 || || w2 ||
|| wn ||
Gram-Schmidt Process
Theorem 7 guarantees the existence of an orthogonal basis in a finite dimensional inner
product space
...
The algorithm, called the
Gram-Schmidt process, is summarized here
...
Let B = {v1 , v2 ,
...
2
...
...
w2 = v2 − projw1 v2 = v2 −
wn = vn − projw1 vn − projw2 vn − · · · − projwn−1 vn
= vn −
⟨vn , w1 ⟩
⟨vn , w2 ⟩
⟨vn , wn−1 ⟩
w1 −
w2 − · · · −
wn−1
⟨w1 , w1 ⟩
⟨w2 , w2 ⟩
⟨wn−1 , wn−1 ⟩
Confirming Pages
348
Chapter 6 Inner Product Spaces
3
...
, wn } is an orthogonal basis for V
...
Dividing each of the vectors in B ′ by its length gives an orthonormal basis for
the vector space V
w2
wn
w1
,
,
...
See Figs
...
As
seen above, to construct an orthogonal basis from the basis B = {v1 , v2 , v3 }, the first
step in the Gram-Schmidt process is to let w1 = v1 , and then perform an orthogonal
projection of v2 onto span{v1 }
...
Our aim in the next step is to find a vector w3 that is orthogonal to the two-dimensional
subspace span{w1 , w2 }
...
7
...
7
...
7
...
, span{wn }
...
, wn is obtained by subtracting each projection from the vector vn
...
3 Orthonormal Bases
EXAMPLE 3
349
Let B be the basis for 3ޒgiven by
⎧⎡
⎤⎫
⎤ ⎡
⎤ ⎡
−1 ⎬
−1
⎨ 1
B = {v1 , v2 , v3 } = ⎣ 1 ⎦, ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
0
1
Apply the Gram-Schmidt process to B to find an orthonormal basis for
...
Notice that v1 · v2 = 0,
so that the vectors v1 and v2 are already orthogonal
...
Following the steps outlined above, we
have
⎤
⎡
1
w1 = v1 = ⎣ 1 ⎦
1
⎤
⎡
−1
v2 · w1
w2 = v2 −
w1 = v2 − 0 w1 = v2 = ⎣ 1 ⎦
3
w 1 · w1
0
Next note that v1 and v3 are also orthogonal, so that in this case only one projection
is required
...
3ޒSee Fig
...
An orthonormal basis is then given
⎧
⎤
⎤
⎡
⎡
⎡
1
−1
−1
⎨ 1
1 ⎣
1 ⎣
w2
w3
w1
−1
1 ⎦, √
,
,
= √ ⎣ 1 ⎦, √
B ′′ =
⎩ 3
|| w1 || || w2 || || w3 ||
6
2
1
2
0
w3
v3
z
v3 − projw2 v3
w1
w2
x
y
Figure 8
by
⎤⎫
⎬
⎦
⎭
Confirming Pages
350
Chapter 6 Inner Product Spaces
Example 4 illustrates the use of the Gram-Schmidt process on a space of polynomials
...
First note that B is not orthogonal, since
x, x 3 =
1
−1
x 4 dx =
2
5
We can simplify some of the work by noting that since the interval [−1, 1] is a
symmetric interval,
1
p(x) dx = 0
When p is an odd function, then
−1
1
When p is an even function, then
−1
1
p(x) dx = 2
p(x) dx
0
Now, since f (x) = x, g(x) = x 3 , and h(x) = x 5 are all odd functions,
1
⟨v1 , v2 ⟩ =
−1
1
⟨v4 , v1 ⟩ =
−1
1
x dx = 0
⟨v2 , v3 ⟩ =
x 3 dx = 0
⟨v4 , v3 ⟩ =
−1
1
−1
x 3 dx = 0
x 5 dx = 0
Since v1 and v2 are orthogonal, proceeding with the Gram-Schmidt process, we
have
w1 = v1
and
w2 = v2
Next, to find w3 , the required computation is
⟨v3 , w1 ⟩
⟨v3 , w2 ⟩
w1 −
w2
w3 = v3 −
⟨w1 , w1 ⟩
⟨w2 , w2 ⟩
⟨v3 , v1 ⟩
⟨v3 , v2 ⟩
v1 −
v2
= v3 −
⟨v1 , v1 ⟩
⟨v2 , v2 ⟩
⟨v3 , v1 ⟩
⟨v2 , v3 ⟩
v1 −
v2
= v3 −
⟨v1 , v1 ⟩
⟨v2 , v2 ⟩
But we have already noted above that 0 = ⟨v2 , v3 ⟩ and since
1
⟨v3 , v1 ⟩ =
then
−1
x 2 dx = 2
1
0
x 2 dx =
1
2
3
w3 = x 2 −
and
1
3
⟨v1 , v1 ⟩ =
−1
dx = 2
Confirming Pages
351
6
...
Hence,
3
1
⟨v4 , w3 ⟩ =
Consequently,
−1
x 5 − 1 x 3 dx = 0
3
⟨v4 , v2 ⟩
v2 = x 3 − 3 x
5
⟨v2 , v2 ⟩
An orthogonal basis for P3 is therefore given by
w4 = v4 −
B ′ = 1, x, x 2 − 1 , x 3 − 3 x
3
5
By normalizing each of these vectors, we obtain the orthonormal basis
√
√ √
√
2
6 3 10 2 1 5 14 3 3
′′
,
x,
x −3 ,
x − 5x
B =
2
2
4
4
EXAMPLE 5
Let U be the subspace of 4ޒwith basis
⎧⎡
⎤ ⎡
⎤ ⎡
1
−1
⎪ −1
⎪
⎨⎢
1 ⎥ ⎢ 0 ⎥ ⎢ 0
⎥, ⎢
⎥, ⎢
B = {u1 , u2 , u3 } = ⎢
⎪⎣ 1 ⎦ ⎣ 1 ⎦ ⎣ 0
⎪
⎩
1
0
0
⎤⎫
⎪
⎪
⎥⎬
⎥
⎦⎪
⎪
⎭
where the inner product is the dot product
...
Solution
Following the Gram-Schmidt process, we let w1 = u1
...
To find w3 , we use the
computation
u3 · w1
u3 · w2
w3 = u3 −
w1 −
w2
w 1 · w1
w2 · w⎤
2
⎤
⎤
⎤
⎡
⎡
⎡
⎡
1
−1
1
1
⎢ 0 ⎥
1 ⎢ 1 ⎥ 1⎢ 2 ⎥ 1⎢ 0 ⎥
⎥
⎥
⎥
⎥
⎢
⎢
⎢
=⎢
⎣ 0 ⎦ − − 3 ⎣ 1 ⎦ − 6 ⎣ −1 ⎦ = 2 ⎣ 1 ⎦
1
0
0
2
As before we replace w3 with
⎤
1
⎢ 0 ⎥
w3 = ⎢ ⎥
⎣ 1 ⎦
2
An orthogonal basis for U is then given by
⎧⎡
⎤ ⎡
⎤ ⎡
1
⎪ −1
⎪
⎨⎢
1 ⎥ ⎢ 2 ⎥ ⎢
⎥ ⎢
⎥ ⎢
B′ = ⎢
⎣ 1 ⎦, ⎣ −1 ⎦, ⎣
⎪
⎪
⎩
0
0
⎡
⎤⎫
1 ⎪
⎪
⎬
0 ⎥
⎥
1 ⎦⎪
⎪
⎭
2
Normalizing each of the vectors of B ′ produces the orthonormal basis
⎧
⎤
⎤
⎤⎫
⎡
⎡
⎡
−1
1
1 ⎪
⎪
⎪
⎪
⎬
⎨ 1 ⎢
1 ⎥ 1 ⎢ 2 ⎥ 1 ⎢ 0 ⎥
⎥, √ ⎢
⎥, √ ⎢
⎥
B ′′ = √ ⎢
⎪ 3 ⎣ 1 ⎦ 6 ⎣ −1 ⎦ 6 ⎣ 1 ⎦⎪
⎪
⎪
⎭
⎩
0
0
2
Fact Summary
1
...
2
...
Exercise Set 6
...
a
...
b
...
1
...
u =
3
−2
v=
1
−2
Confirming Pages
6
...
u =
1
−2
v=
4
...
B = ⎣
⎩
1
2
−2
−2
⎡
5
...
u = ⎣
⎡
7
...
u = ⎣
⎤
⎤
⎡
1
−1
3 ⎦ v = ⎣ −1 ⎦
−1
0
⎤
⎤
⎡
3
1
0 ⎦v = ⎣ 2 ⎦
−1
1
⎤
⎤
⎡
0
1
−1 ⎦ v = ⎣ 0 ⎦
1
−1
⎤
⎤
⎡
1
3
2 ⎦v = ⎣ 0 ⎦
−1
0
In Exercises 9–12, use the inner product on P2
defined by
1
⟨p, q⟩ =
p(x)q(x) dx
0
a
...
b
...
9
...
p(x) = x 2 − x + 1, q(x) = 2x − 1
11
...
p(x) = −4x + 1, q(x) = x
In Exercises 13–16, use the standard inner product on
ޒn
...
13
...
B =
1
−1
,
2
−1
,
1
−2
3
−2
⎧⎡
⎤⎫
⎤ ⎡
⎤ ⎡
0 ⎬
0
⎨ 1
15
...
17
...
B = {x 2 − x, x, 2x + 1}
In Exercises 19–22, use the standard inner product on
ޒn to find an orthonormal basis for the subspace
span(W )
...
W = ⎣ 1 ⎦, ⎣ −1 ⎦
⎭
⎩
−1
1
⎧⎡
⎤⎫
⎤ ⎡
−1 ⎬
⎨ 0
20
...
W = ⎢
⎪⎣ 0 ⎦ ⎣ −1 ⎦ ⎣ 0 ⎦⎪
⎪
⎪
⎭
⎩
1
−1
1
⎧⎡
⎤⎫
⎤ ⎡
⎤ ⎡
0 ⎪
−1
1
⎪
⎪
⎪
⎬
⎨⎢
−2 ⎥ ⎢ 3 ⎥ ⎢ −1 ⎥
⎥
⎥, ⎢
⎥, ⎢
22
...
23
...
W = {1, x + 2, x 3 − 1}
Confirming Pages
354
Chapter 6 Inner Product Spaces
25
...
Find a basis for the
2
⎡
inner product find an
⎤ ⎡
−1
2
⎥ ⎢ −3 ⎥ ⎢ 2
⎥ ⎢
⎥, ⎢
⎦ ⎣ 5 ⎦, ⎣ −3
1
−1
⎤ ⎡
⎤⎫
⎪
⎪
⎥⎬
⎥
⎦⎪
⎪
⎭
26
...
Let {u1 , u2 ,
...
Show that
|| v ||2 = |v · u1 |2 + · · · + |v · un |2
for every vector v in ޒn
...
Let A be an n × n matrix
...
a
...
The row vectors of A form an orthonormal
basis for ޒn
...
The column vectors of A form an orthonormal
basis for ޒn
...
Show that an n × n matrix A has orthonormal
column vectors if and only if At A = I
...
Let A be an m × n matrix, x a vector in ޒm , and
y a vector in ޒn
...
31
...
32
...
33
...
34
...
35
...
, um } be a set of vectors in ޒn
...
In Exercises 36–41, a (real) n × n matrix A is called
positive semidefinite if A is symmetric and ut Au ≥ 0
for every nonzero vector u in ޒn
...
36
...
Show that the
function ⟨u, v⟩ = ut Av defines an inner product
on ޒn
...
)
3 1
1 3
37
...
39
...
41
...
...
Show that if A is positive definite, then the
diagonal entries are positive
...
Show that At A is
positive semidefinite
...
Show that the eigenvalues of a positive definite
matrix are positive
...
Are the vectors v1 =
v2 =
2
−4
−2
−1
and
orthogonal?
−2 −1
...
Find det(At A)
...
Show that the area of the rectangle spanned by
√
−2
2
and v2 =
is det(At A)
...
Show that the area of the rectangle is | det(A)|
...
If v1 and v2 are any two orthogonal vectors in
, 2ޒshow that the area of the rectangle spanned
Confirming Pages
6
...
f
...
Show
that the area of the parallelogram is | det(A)|,
where A is the matrix with row vectors v1
and v2
...
If v1 , v2 , and v3 are mutually orthogonal
vectors in , 3ޒshow that the volume of the box
spanned by the three vectors is | det(A)|, where
A is the matrix with row vectors v1 , v2 , and v3
...
4
355
y
v2
p
v1
x
Orthogonal Complements
Throughout this chapter we have seen the importance of orthogonal vectors and bases
in inner product spaces
...
In this section we extend the notion of orthogonality to subspaces
of inner product spaces
...
We say that v is orthogonal to W if and only if
⟨v, w⟩ = 0
for each vector w ∈ W
As an illustration, let W be the yz plane in the Euclidean space
...
3
...
Using the dot product as the inner product on , 3ޒthe coordinate vector
⎡ ⎤
1
⎣ 0 ⎦
e1 =
0
is orthogonal to W since
⎤
⎤⎡
0
1
⎣ 0 ⎦·⎣ y ⎦ = 0
z
0
⎡
for every y, z ∈
...
Example 1 gives an illustration of how to find vectors orthogonal to a subspace
...
Solution
Let
⎤
1
w = ⎣ −2 ⎦
3
⎡
Thus, any vector in W has the form cw, for some real number c
...
This last equation is equivalent
to the equation
whose solution set is given by
x − 2y + 3z = 0
S = {(2s − 3t, s, t) | s, t ∈ }ޒ
Therefore, the set of vectors orthogonal to W is given by
⎫
⎫ ⎧ ⎡
⎧⎡
⎤
⎤
⎡
⎤
−3
2
⎬
⎬ ⎨
⎨ 2s − 3t
⎦ s, t ∈ = ޒs ⎣ 1 ⎦ + t ⎣ 0 ⎦ s, t ∈ ޒ
s
S′ = ⎣
⎭
⎭ ⎩
⎩
1
0
t
Letting s = t = 1 gives the particular vector
⎤
⎡
−1
v=⎣ 1 ⎦
1
w
S′
5
z
which is orthogonal to W since
⎤
⎤ ⎡
⎡
1
−1
v· w = ⎣ 1 ⎦· ⎣ −2 ⎦ = (−1)(1) + (1)(−2) + (1)(3) = 0
3
1
0
Ϫ5
0x
Ϫ5
y 0
If the vectors in S ′ are placed in standard position, then the solution set describes
a plane in , 3ޒas shown in Fig
...
This is in support of our intuition as the set of
vectors orthogonal to a single vector in 3ޒshould all lie in a plane perpendicular
to that vector, which is called the normal vector
...
The following definition generalizes this idea to
inner product spaces
...
4 Orthogonal Complements
DEFINITION 1
357
Orthogonal Complement Let W be a subspace of an inner product space V
...
That is,
W ⊥ = {v ∈ V | ⟨v, w⟩ = 0 for all w ∈ W }
EXAMPLE 2
Let V = P3 and define an inner product on V by
1
⟨p, q⟩ =
p(x)q(x) dx
0
Find W ⊥ if W is the subspace of constant polynomials
...
Then f is in W ⊥ if and only if
1
⟨f, p⟩ =
0
k(a + bx + cx 2 + dx 3 ) dx = k a +
Since this equation must hold for all k ∈ ,ޒ
W⊥ =
a + bx + cx 2 + dx 3 a +
d
b c
+ +
2 3 4
=0
b
c
d
+ + =0
2 3 4
Notice in Examples 1 and 2 that the zero vector is an element of the orthogonal
complement W ⊥
...
This leads to Theorem 8
...
1
...
2
...
Proof (1) Let u and v be vectors in W ⊥ , and w a vector in W, so that
⟨u, w⟩ = 0
and
⟨v, w⟩ = 0
Now for any scalar c, we have
⟨u + cv, w⟩ = ⟨u, w⟩ + ⟨cv, w⟩
= ⟨u, w⟩ + c ⟨v, w⟩
=0+0=0
Thus, u + cv is in W ⊥ , and therefore by Theorem 4 of Sec
...
2, W ⊥ is a subspace
of V
...
Then
⟨w, w⟩ = 0
and hence w = 0 (see Definition 1 of Sec
...
2)
...
To determine whether a vector v is in the orthogonal complement of a subspace,
it suffices to show that v is orthogonal to each one of the vectors in a basis for the
subspace
...
, wm } a basis
for W
...
Proof First suppose that v is orthogonal to each vector in B
...
Then there are scalars c1 ,
...
, m, we have ⟨v, w⟩ = 0 and hence v ∈ W ⊥
...
In
particular, v is orthogonal to wj , for all j = 1, 2,
...
EXAMPLE 3
Let V = 4ޒwith the dot product as the inner product,
⎧⎡
⎤⎡
0
1
⎪
⎪
⎨⎢
0 ⎥⎢ 1
⎥, ⎢
W = span ⎢
⎪⎣ −1 ⎦ ⎣ −1
⎪
⎩
1
−1
a
...
c
...
Find a basis for W
...
Find an orthonormal basis for
...
and let
⎤⎫
⎪
⎪
⎥⎬
⎥
⎦⎪
⎪
⎭
⎤
⎥
⎥
⎦
of a vector from W and a vector
Confirming Pages
6
...
Let
⎤
1
⎢ 0 ⎥
⎥
w1 = ⎢
⎣ −1 ⎦
−1
⎡
359
⎤
0
⎢ 1 ⎥
⎥
w2 = ⎢
⎣ −1 ⎦
1
⎡
and
Notice that w1 and w2 are orthogonal and hence by Theorem 5 of Sec
...
2 are
linearly independent
...
b
...
This requirement leads to
the linear system
−z−w =0
y−z+w =0
The two-parameter solution set for this linear system is
⎫
⎧⎡
⎤
⎪
⎪ s+t
⎪
⎪
⎬
⎨⎢
s−t ⎥
⎥ s, t ∈ ޒ
S= ⎢
⎪
⎪⎣ s ⎦
⎪
⎪
⎭
⎩
t
x
The solution to this system, in vector form, provides a description of the
orthogonal complement of W and is given by
⎧⎡
⎤⎫
⎤ ⎡
1 ⎪
⎪ 1
⎪
⎪
⎬
⎨⎢
1 ⎥ ⎢ −1 ⎥
⎥
⎥, ⎢
W ⊥ = span ⎢
⎪ ⎣ 1 ⎦ ⎣ 0 ⎦⎪
⎪
⎪
⎭
⎩
1
0
Let
⎤
1
⎢ 1 ⎥
v1 = ⎢ ⎥
⎣ 1 ⎦
0
⎡
and
⎤
1
⎢ −1 ⎥
⎥
v2 = ⎢
⎣ 0 ⎦
1
⎡
Since v1 and v2 are orthogonal, by Theorem 5 of Sec
...
2 they are linearly
independent and hence a basis for W ⊥
...
Let B be the set of vectors B = {w1 , w2 , v1 , v2 }
...
6
...
4ޒDividing each of these vectors by its length, we obtain the (ordered)
orthonormal basis for 4ޒgiven by
Confirming Pages
360
Chapter 6 Inner Product Spaces
⎧
⎪
⎪
⎨
⎤
⎡
1
0
⎢ 0 ⎥ 1 ⎢ 1
1 ⎢
′ = {b , b , b , b } =
⎥, √ ⎢
√
B
1
2
3
4
⎪ 3 ⎣ −1 ⎦ 3 ⎣ −1
⎪
⎩
−1
1
⎡
⎤
⎤⎫
⎡
1
1 ⎪
⎪
⎥ 1 ⎢ 1 ⎥ 1 ⎢ −1 ⎥⎬
⎥, √ ⎢
⎥, √ ⎢
⎥
⎦ 3 ⎣ 1 ⎦ 3 ⎣ 0 ⎦⎪
⎪
⎭
0
1
⎤
⎡
d
...
6
...
So
1
c1 = √
3
c2 = 0
1
c3 = √
3
1
c4 = √
3
Now, observe that the first two vectors of B ′ are an orthonormal basis for W
while the second two vectors are an orthonormal basis for W ⊥
...
The
situation, in general, is the content of Definition 2 and Theorem 9
...
If each vector in V can be written uniquely as the sum of a vector from W1 and a vector
from W2 , then V is called the direct sum of W1 and W2
...
Confirming Pages
6
...
Then
W1 ∩ W2 = {0}
...
Then
v = w1 + 0
and
v = 0 + w2
with w1 ∈ W1 and w2 ∈ W2
...
THEOREM 9
Projection Theorem If W is a finite dimensional subspace of an inner product
space V , then
V = W ⊕ W⊥
Proof The proof of this theorem has two parts
...
Then
we must show that this representation is unique
...
, wn } be a basis for W
...
6
...
Now, let v be a vector
in V , and let the vectors w and u be defined by
w = ⟨v, w1 ⟩ w1 + ⟨v, w2 ⟩ w2 + · · · + ⟨v, wn ⟩ wn
and
u=v−w
Since w is a linear combination of the vectors in B, then w ∈ W
...
, n and invoke Proposition 5
...
Then
⟨u, wi ⟩ = ⟨v − w, wi ⟩
= ⟨v, wi ⟩ − ⟨w, wi ⟩
n
= ⟨v, wi ⟩ −
Since B is an orthonormal basis,
Hence,
⟨wi , wi ⟩ = 1
and
v, wj
w j , wi
j =1
wj , wi = 0
for i ̸= j
⟨u, wi ⟩ = ⟨v, wi ⟩ − ⟨v, wi ⟩ ⟨wi , wi ⟩ = 0
Since this holds for each i = 1,
...
For the second part of the proof, let
v=w+u
and
v = w′ + u′
with w and w′ in W and u and u′ in W ⊥
...
However, u − u′ is also in
W ⊥ since it is the difference of two vectors in W ⊥
...
This being the case, we now have w′ − w = 0, so
that w′ = w
...
Motivated by the terminology of Example 3, we call the vector w, of Theorem 9,
the orthogonal projection of v onto W , which we denote by projW v, and call u the
component of v orthogonal to W
...
3
...
The column space of
A, denoted by col(A), is the subspace of ޒm spanned by the column vectors of A
...
Finally, the row space of A, which we discussed in
Sec
...
2, denoted by row(A), is the subspace of ޒn spanned by the row vectors of
A
...
These four
subspaces
N (A)
N (At )
col(A)
and
col(At )
are referred to as the four fundamental subspaces associated with the matrix A
...
THEOREM 10
Let A be an m × n matrix
...
N (A) = col(At )⊥
2
...
, vm denote the row vectors of A
...
Ax = ⎢
...
vm · x
First let x be a vector in N (A) so that Ax = 0
...
, m
...
On the other hand, let x
be a vector in col(At )⊥ = row(A)⊥
...
, m, so that
Ax = 0
...
Hence, N (A) = col(At )⊥
...
Confirming Pages
6
...
In light of Theorem 10, we are now in a position to
provide an analysis of the linear system Ax = b in terms of the geometric structure
of Euclidean space and the fundamental subspaces of A
...
Since row(A) = col(At ), by Theorem 10, N (A)
is the orthogonal complement of row(A)
...
Now, multiplying
x by A, we have
Ax = A(xrow + xnull ) = Axrow + Axnull
Since Axnull = 0, the mapping T : ޒn −→ ޒm defined by T (x) = Ax maps the row
space of A to the column space of A
...
See Fig
...
xrow
A
col(A)
row(A)
x
Ax
dim r
dim n − r
dim r
A
xnull
N (At )
A
dim m − r
N (A)
ޒn
ޒm
Figure 2
We now consider, again from a geometric point of view, the consistency of the
linear system Ax = b for an m × n matrix A and a given vector b in ޒm
...
3
...
By Theorem 10, this system is consistent if and only if b is perpendicular
to the left null space of A, or equivalently, if and only if b is orthogonal to every
vector in ޒm , which is orthogonal to the column vectors of A
...
However, in cases where a basis for the null space of At consists of only
a few vectors, we can perform an easy check to see if Ax = b is consistent
...
Therefore, the linear system Ax = b is consistent
...
1
...
2
...
3
...
4
...
That is, V = W ⊕ W ⊥
...
If B is a basis for W, then v is in W ⊥ if and only if v is orthogonal to each
vector in B
...
If A is an m × n matrix, then N (A) = col(At )⊥ and N (At ) = col(A)⊥
...
4
In Exercises 1–8, find the orthogonal complement of
W in ޒn with the standard inner product
...
W = span
1
−2
2
...
W
4
...
W
6
...
W = span ⎢
⎪⎣
⎪
⎩
⎧⎡
⎪
⎪
⎨⎢
8
...
⎧⎡
⎤⎫
⎤ ⎡
−1 ⎬
⎨ 2
9
...
W = span ⎣ −1 ⎦, ⎣ 2 ⎦
⎭
⎩
−2
1
Confirming Pages
6
...
W = span ⎢
⎪⎣
⎪
⎩
⎧⎡
⎪
⎪
⎨⎢
12
...
W = span{x − 1, x 2 }
14
...
Let W be the subspace of , 4ޒwith the standard
inner product, consisting of all vectors w such that
w1 + w2 + w3 + w4 = 0
...
In Exercises 16–21, W is a subspace of ޒn with the
standard inner product
...
, wm }
is an orthogonal basis for W, then the orthogonal
projection of v onto W is given by
m
projW v =
i=1
⟨v, wi ⟩
wi
⟨wi , wi ⟩
Find the orthogonal projection of v onto W
...
⎧⎡
⎤ ⎡ ⎤⎫
2 ⎬
1
⎨
16
...
W = span ⎣ 0 ⎦, ⎣ −1 ⎦
⎭
⎩
0
0
⎤
⎡
1
v=⎣ 2 ⎦
−3
⎧⎡
⎤⎫
⎤ ⎡
−2 ⎬
3
⎨
18
...
W = span ⎣ 2 ⎦, ⎣ 3 ⎦
⎭
⎩
2
1
⎤
⎡
1
v = ⎣ −3 ⎦
5
⎧⎡
1
⎪
⎪
⎨⎢
2
20
...
W = span ⎢
⎪⎣ −1
⎪
⎩
2
365
⎤⎫
⎪
⎪
⎥⎬
⎥
⎦⎪
⎪
⎭
⎤ ⎡
⎤
0
⎢ 0 ⎥
v=⎢ ⎥
⎣ 1 ⎦
0
⎡
⎤
1
⎢ 2 ⎥
⎥
v=⎢
⎣ 1 ⎦
−1
⎡
⎤ ⎡
3
1
⎥ ⎢ 3 ⎥ ⎢ 0
⎥ ⎢
⎥, ⎢
⎦ ⎣ −1 ⎦, ⎣ 1
−1
0
⎤⎫
−6 ⎪
⎪
⎥ ⎢ 0 ⎥⎬
⎥
⎥, ⎢
⎦ ⎣ 2 ⎦⎪
⎪
⎭
4
In Exercises 22–25, W is a subspace of ޒn with the
standard inner product
...
Find W ⊥
...
Find the orthogonal projection of v onto W
...
)
c
...
d
...
e
...
Confirming Pages
366
Chapter 6 Inner Product Spaces
1
2
22
...
W = span ⎣ 1 ⎦
⎭
⎩
0
23
...
W = span ⎣
⎩
⎤
2
v=⎣ 1 ⎦
1
⎡
c
...
d
...
Verify
2
g(−x) = g(x) and h(−x) = −h(x), so every
f can be written as the sum of a function in W
and a function in W ⊥
...
Let V = M2×2 with the inner product
⟨A, B⟩ = tr(B t A)
Let W = {A ∈ V | A is symmetric}
...
Show that
⎤⎫
⎤ ⎡
−1 ⎬
1
1 ⎦, ⎣ 2 ⎦
⎭
4
−1
26
...
27
...
28
...
a
...
b
...
ß
6
...
Show that every A in V can be written as the
sum of matrices from W and W ⊥
...
In 2ޒwith the standard inner product, the
transformation that sends a vector to the
orthogonal projection onto a subspace W is a
2
...
Let W = span
1
a
...
1
...
Let v =
1
result is the same by applying the matrix P
found in part (a)
...
Show P 2 = P
...
If W is a finite dimensional subspace of an inner
product space, show that (W ⊥ )⊥ = W
...
Consider the problem of finding the equation
of a line going through the points (1, 2), (2, 1), and (3, 3)
...
1 that
this problem has no solution as the three points are noncollinear
...
There are different ways
of solving this new problem
...
5 Application: Least Squares Approximation
367
y
5
Ϫ5
5
x
Ϫ5
Figure 1
of finding the line that minimizes the sum of the square distances between itself and
each of the points
...
To illustrate the technique, we consider the original problem
of finding an equation of the form y = mx + b that is satisfied by the points (1, 2),
(2, 1), and (3, 3)
...
As a first step toward finding an
optimal approximate solution, we let
⎤
⎡
⎤
⎡
2
1 1
m
and
b=⎣ 1 ⎦
x=
A=⎣ 2 1 ⎦
b
3
3 1
and we write the linear system as Ax = b
...
Thus, the best we can do is to look
for a vector w in col(A) that is as close as possible to b, as shown in Fig
...
b
w−b
w
col(A)
Figure 2
Confirming Pages
368
Chapter 6 Inner Product Spaces
We will soon see that the optimal choice is to let w be the orthogonal projection of b
onto col(A)
...
6
...
By Theorem 10 of Sec
...
4, we have W ⊥ = N (At )
...
6
...
To find
y, we use Definition ⎤ of Sec
...
3 and compute the orthogonal projection of b onto
1
⎡
1
W ⊥
...
3
...
Solving the linear system, we obtain m = 1 and b = 1,
2
1
y = 2 x + 1 giving us the slope and the y intercept, respectively, for the best-fit line y = 1 x + 1
...
3
...
Confirming Pages
6
...
An exact solution exists if b is in col(A); moreover, the solution
is unique if the columns of A are linearly independent
...
Using the standard inner product on ޒm to define the length of a vector,
we have
|| b − Ax ||2 = (b − Ax) · (b − Ax)
= [b1 − (Ax)1 ]2 + [b2 − (Ax)2 ]2 + · · · + [bm − (Ax)m ]2
This equation gives the rationale for the term least squares solution
...
As W is a finite dimensional subspace of ޒm , by Theorem 9 of Sec
...
4, the
vector b can be written uniquely as
W⊥
b
w2
b = w 1 + w2
w1
where w1 is the orthogonal projection of b onto W and w2 is the component of b
col(A) orthogonal to W , as shown in Fig
...
Figure 4
We now show that the orthogonal projection minimizes the error term || b − Ax ||,
for all x in col(A)
...
We call any vector x in ޒn such
that Ax = w1 a least squares solution of Ax = b
...
Occasionally, as was the case for the example at the beginning of this section,
it is possible to find w1 directly
...
In most cases, however, the vector w1 is hard to obtain
...
At Ax = At b
Confirming Pages
370
Chapter 6 Inner Product Spaces
THEOREM 11
Let A be an m × n matrix and b a vector in ޒm
...
Proof From the discussion just before the theorem, we know that a least squares
solution x to Ax = b exists
...
6
...
First assume that x is a least squares solution
...
Moreover, since x is a least squares solution,
Ax = w1
...
Conversely, we now show that if x is a solution to At Ax = At b, then it is also
a least squares solution to Ax = b
...
Since the columns of A span W, the vector b − Ax is in W ⊥
...
Again, by Theorem 9 of Sec
...
4,
this decomposition of the vector b is unique and hence Ax = w1
...
EXAMPLE 1
Solution
Let
⎤
⎤
⎡
1
−2
3
and
b = ⎣ −1 ⎦
A = ⎣ 1 −2 ⎦
2
1 −1
a
...
b
...
⎡
a
...
By Theorem 11, the least squares solution
can be found by solving the normal equation
At Ax = At b
In this case the normal equation becomes
⎤
⎡
−2
3
x
−2
1
1 ⎣
1 −2 ⎦
y
3 −2 −1
1 −1
=
−2
3
1
−2
1
−1
⎤
1
⎣ −1 ⎦
2
⎡
Confirming Pages
6
...
To find the orthogonal projection w1 of b onto col(A), we use the fact that
w1 = Ax
...
Linear Regression
Example 2 illustrates the use of least squares approximation to find trends in data sets
...
5, give the
average temperature, in degree celsius (o C), of the earth’s surface from 1975 through
2002
...
∗ Worldwatch Institute, Vital Signs 2006–2007
...
W
...
Confirming Pages
Chapter 6 Inner Product Spaces
Table 1
Average Global Temperatures 1975–2002
1975 13
...
03 1994 14
...
86 1986 14
...
37
1977 14
...
27 1996 14
...
02 1988 14
...
40
1979 14
...
19 1998 14
...
16 1990 14
...
32
1981 14
...
32 2000 14
...
04 1992 14
...
46
1983 14
...
14 2002 14
...
07
Year
Figure 5
Solution
Denote the data points by (xi , yi ), for i = 1, 2,
...
A line with
equation y = mx + b will pass through all the data points if the linear system
⎧
⎪m(1975) + b = 13
...
86
...
⎪
⎪
...
52
has a solution
...
94
1975 1
⎢ 13
...
⎥
⎥
⎢
...
⎦
⎣
...
...
52
Since the linear system is inconsistent, to obtain the best fit of the data we seek
m
is a least squares solution
...
94
1975 1
1975 · · · 2002 ⎢
1975 · · · 2002 ⎢
...
...
⎦
⎦ b =
...
1
···
1
1
···
1
14
...
5 Application: Least Squares Approximation
373
which simplifies to
110,717,530 55,678
55,678
28
m
b
=
791,553
...
05
The least squares solution is
m
b
=
0
...
31197592
Temperature
The line that best fits the data is then given by y = 0
...
31197592,
as shown in Fig
...
Year
Figure 6
The procedure used in Example 2 can be extended to fit data with a polynomial
of any degree n ≥ 1
...
See Exercise 6
...
, an , bn are real numbers
...
The vector space PC[−π, π] is an inner product space with inner product defined by
π
⟨f, g⟩ =
f (x)g(x) dx
−π
Suppose now that given a piecewise continuous function f defined on [−π, π], which
may or may not be a trigonometric polynomial, we wish to find the trigonometric
polynomial of degree n that best approximates the function
...
Let f0 (x) = 1/ 2π, and for k ≥ 1, let
1
fk (x) = √ cos kx
π
and
1
gk (x) = √ sin kx
π
Define the set B by
B = {f0 , f1 , f2 ,
...
, gn }
=
1
√1 , √
π
2π
1
1
1
1
1
cos x, √π cos 2x,
...
, √π sin nx
It can be verified that relative to the inner product above, B is an orthonormal basis
for W
...
Since W is finite dimensional, f has
the unique decomposition
W ⊥
...
6
...
In this case we have
fW = ⟨f, f0 ⟩ f0 + ⟨f, f1 ⟩ f1 + · · · + ⟨f, fn ⟩ fn + ⟨f, g1 ⟩ g1 + · · · + ⟨f, gn ⟩ gn
We now claim that fW , defined in this way, is the best approximation for f in W
...
So
2
|| f − w ||2 = fW ⊥ + || fW − w ||2
Observe that the right-hand side is minimized if w = fW , that is, if we choose w to
be the orthogonal projection of f onto W
...
EXAMPLE 3
Let
−1
−π ≤ x < 0
1
0
...
5 Application: Least Squares Approximation
Solution
The graph of y = f (x) is shown in Fig
...
Since f (x) is an odd function and
fk (x) is an even function for k ≥ 0, the product f (x)fk (x) is also an odd function
...
8 we see the function and its Fourier approximations for n = 1, 3, and 5
...
5
1
...
Find the least squares solution to Ax = b
...
Find the orthogonal projection of b onto
W = col(A) and the decomposition of the
vector b = w1 + w2 , where w1 is in W and w2
is in W ⊥
...
Let
⎤
2 2
A=⎣ 1 2 ⎦
1 1
⎡
and
⎤
−2
b=⎣ 0 ⎦
1
⎡
5
...
1950
1960
3
...
1965
927
1990
2185
1970
1187
1995
2513
1975
1449
2000
1710
2004
1980
4
...
28
2000
6
...
Sketch a scatter plot of the data
...
Find the linear function that is the best fit to
the data
...
71
2713
1980
3
...
Find the least squares solution to Ax = b
...
Find the orthogonal projection of b onto
W = col(A) and the decomposition of the
vector b = w1 + w2 , where w1 is in W and w2
is in W ⊥
...
56
2004
6
...
1980
1955
157
1985
78
1960
141
1990
70
1965
119
1995
66
1970
104
2000
62
1975
93
2005
57
1980
87
a
...
b
...
29
...
7
1997
40
...
The table gives world infant mortality rates in
deaths per 1000 live births
...
4
2000
57
...
5
2002
67
...
7
1992
a
...
b
...
0
...
1
a
...
b
...
7
...
a
...
b
...
8
...
6 Diagonalization of Symmetric Matrices
a
...
b
...
9
...
a
...
b
...
ß
6
...
Let A be an m × n matrix with rank(A) = n, and
suppose A = QR is a QR factorization of A
...
)
Show that the best least squares solution to the
linear system Ax = b can be found by back
substitution on the upper triangular system
Rx = Qt b
...
5
...
A characterization was also provided to determine which n × n matrices were diagonalizable
...
5
...
As we have seen, the
application of this theorem requires finding all eigenvectors of a matrix
...
An example of
such a case was given in Example 4 of Sec
...
2, where it was shown that any 2 × 2
real symmetric matrix is diagonalizable with real eigenvalues
...
In the remarks preceding Example 8 of Sec
...
1, we defined the set of complex numbers
...
In particular, if
z = a + bi is a complex number, then the conjugate of z, denoted by z, is given by
z = a − bi
...
From this we know that a complex number z = z if and only if z is a real
number
...
Then bi = −bi or 2bi = 0 and hence
b = 0
...
Conversely, if z is a real
number, then z = a + 0i = a and z = a − 0i = a so that z = z
...
So if v is a vector
with complex components and M is a matrix with complex entries, then
⎡
⎤
⎡
⎤
a11 a12
...
a2n ⎥
⎢ v2 ⎥
⎢
⎥
v=⎢
...
and
...
⎥
...
⎦
...
⎦
⎣
...
...
...
vn
am1 a12
...
Confirming Pages
378
Chapter 6 Inner Product Spaces
THEOREM 12
The eigenvalues of an n × n real symmetric matrix A are all real numbers
...
To show
that λ is a real number, we will show that λ = λ
...
1
...
Also since A has real entries, then A = A
...
By an extension
to the complex numbers of Theorem 2 (part 4) of Sec
...
1, we have λ − λ = 0;
hence λ = λ, establishing that λ is a real number
...
To see this, let A be a symmetric matrix with real
entries and v an eigenvector corresponding to the real eigenvalue λ = a
...
By Theorem 7 of
Sec
...
2, N (A − aI ) is a subspace of ޒn
...
EXAMPLE 1
Let A be the symmetric matrix defined by
⎤
⎡
2
0
2
0 −2 ⎦
A=⎣ 0
2 −2
1
Verify that the eigenvalues and corresponding eigenvectors of A are real
...
6 Diagonalization of Symmetric Matrices
Solution
379
The characteristic equation of A is
det(A − λI ) = −λ3 + 3λ2 + 6λ − 8 = 0
After factoring the characteristic polynomial we obtain
(λ − 1)(λ + 2)(λ − 4) = 0
Thus, the eigenvalues of A are λ1 = 1, λ2 = −2, and λ3 = 4
...
To do this, we see that
⎤
⎤
⎡
⎡
1 0 2
1
0
2
⎣ 0 1 2 ⎦
reduces to
A − I = ⎣ 0 −1 −2 ⎦
0 0 0
2 −2
0
⎤
⎡
−2
Thus, an eigenvector corresponding to λ1 = 1 is v1 = ⎣ −2 ⎦
...
−2
−2
Orthogonal Diagonalization
In Sec
...
1 we showed that two vectors u and v in ޒn are orthogonal if and only if their
dot product u · v = 0
...
To do this, observe that if u and v are vectors in ޒn ,
then vt u is a matrix with a single entry equal to u · v
...
Theorem 13 shows that eigenvectors which correspond to distinct eigenvalues of
a real symmetric matrix are orthogonal
...
Then v1 and v2 are orthogonal
...
To show that
they are orthogonal, we show that vt v2 = 0
...
Hence, by Theorem 2, part 4, of Sec
...
1, we
have vt v2 = 0, which, by the remarks preceding this theorem, gives that v1 is
1
orthogonal to v2
...
Solution
The characteristic equation of A is
det(A − λI ) = −(λ − 1)2 (λ + 1) = 0
so the eigenvalues are λ1 = 1 and λ2 = −1
...
5
...
6
...
Hence, every eigenvector in Vλ2 is
orthogonal to every eigenvector in Vλ1
...
Notice, moreover, that the vectors within Vλ1
are orthogonal to one another
...
We
normalize the spanning vectors of the eigenspaces to obtain
⎡
⎤
⎡
⎤
0
⎤
⎡
0
1
1
√ ⎥
⎢
1
⎢ −√ ⎥
⎢ 2 ⎥
⎣ 0 ⎦
and
2 ⎦
⎣
⎣ 1 ⎦
1
√
0
√
2
2
Confirming Pages
6
...
⎡
⎤
⎡
1
0
0
1
1
1 ⎥
⎢ 0
√
√
2
2 ⎥⎣ 0
P −1 AP = ⎢
⎣
⎦
1
1
0
0 − √2 √2
That is,
381
⎤
0
1
− √2 ⎥
⎥
⎦
1
√
2
⎤⎡ 1
0 0
⎢
0 1 ⎦⎣ 0
1 0
0
0
1
√
2
1
√
2
⎤ ⎡
0
1 0
1
− √2 ⎥ = ⎣ 0 1
⎦
1
0 0
√
2
⎤
0
0 ⎦
−1
Observe in this case that the diagonalizing matrix P has the special property that
P P t = I, so that P −1 = P t
...
DEFINITION 1
Orthogonal Matrix A square matrix P is called an orthogonal matrix if it is
invertible and P −1 = P t
...
That is, the
vectors of this basis are all mutually orthogonal and have unit length
...
So by Theorem 2 of Sec
...
2 a
real symmetric matrix has n linearly independent eigenvectors
...
Producing an orthogonal
matrix P to diagonalize A required only that we normalize the eigenvectors
...
Specifically, by Theorem 2, eigenvectors corresponding
to distinct eigenvalues are orthogonal
...
In this case we can use the Gram-Schmidt
process, given in Sec
...
3, to find an orthonormal basis from the linearly independent
eigenvectors
...
THEOREM 14
Let A be an n × n real symmetric matrix
...
The eigenvalues are the
diagonal entries of D
...
Confirming Pages
382
Chapter 6 Inner Product Spaces
1
...
2
...
If necessary, use the Gram-Schmidt process to find an orthonormal set of eigenvectors
...
Form the orthogonal matrix P with column vectors determined in Step 2
...
The matrix P −1 AP = P t AP = D is a diagonal matrix
...
Solution
The characteristic equation for A is given by
det(A − λI ) = −λ3 + 3λ + 2 = −(λ − 2)(λ + 1)2 = 0
Thus, the eigenvalues are λ1 = −1 and λ2 = 2
...
5
...
To find an orthogonal matrix P which diagonalizes A, we use
the Gram-Schmidt process on B
...
6
...
Morevover,
⎤
⎡
2
0
0
0 ⎦
P t AP = ⎣ 0 −1
0
0 −1
Confirming Pages
6
...
1
...
3
...
The eigenvalues of A are all real numbers
...
The matrix A is diagonalizable
...
Exercise Set 6
...
1
...
A =
−1
3
3 −1
⎡
1
3
...
A = ⎣ 1 −1
−2
2
1
In Exercises 5–8, verify that the eigenvectors of the
symmetric matrix corresponding to distinct
eigenvalues are orthogonal
...
A =
1
2
2
−2
6
...
A = ⎣ 2 −1 −2 ⎦
0 −2
1
⎤
⎡
1
0 −2
0 ⎦
8
...
⎤
⎡
1
0 2
9
...
A = ⎣ 0 −1 0 ⎦
1
0 1
⎡
2
1
1
⎢ 1 −2
1
11
...
A = ⎢
⎣ 0
0 −1
0
0
0
⎡
⎤
1
1 ⎥
⎥
0 ⎦
−1
⎤
0
0 ⎥
⎥
0 ⎦
1
In Exercises 13–16, determine whether the matrix is
orthogonal
...
A =
14
...
A = ⎣
⎡
⎢
16
...
17
...
A =
5 2
2 5
19
...
A =
1
2
2
−2
⎤
1 −1 1
21
...
A = ⎣ 0 −1
−1
0
1
⎡
23
...
24
...
25
...
26
...
27
...
Show that the matrix
cos θ
A=
sin θ
− sin θ
cos θ
is orthogonal
...
Suppose that A is a 2 × 2 orthogonal matrix
...
)
A=
c
...
Show that if det(A) = 1, then
T is a rotation and if det(A) = −1, then T is a
reflection about the x axis followed by a
rotation
...
Matrices A are B are orthogonally similar if there
is an orthogonal matrix P such that B = P t AP
...
a
...
b
...
29
...
(Matrix A is
called orthogonally diagonalizable
...
30
...
Show that A−1 is orthogonally
diagonalizable
...
)
31
...
a
...
b
...
[Hint: Consider the quantity
vt (λv)
...
7 Application: Quadratic Forms
ß
6
...
In this
section we show how certain transformations of the coordinate axes in 2ޒcan be
used to simplify equations that describe conic sections, that is, equations in x and
y whose graphs are parabolas, hyperbolas, circles, and ellipses
...
The graph
is shown in Fig
...
To further simplify this equation, we can translate the coordinate
axes by means of the equations
x′ = x − 2
y′ = y − 3
and
The equation of the circle then becomes
x′
2
+ y′
2
= 16
This is the equation of the circle in standard position in the x ′ y ′ plane with center at
the origin
...
The graph of a quadratic equation in x and y is a conic section (including
possible degenerate cases), the particular one being dependent on the values of the
coefficients
...
The
expression
ax 2 + bxy + cy 2
is called the associated quadratic form
...
This fact enables us to develop a transformation that we can use
to simplify the equation
...
To produce such a mapping, first recall from Theorem 14 of Sec
...
6 that if A is a
real symmetric matrix, then there exists an orthogonal matrix P and a diagonal matrix
D such that A = P DP −1 = P DP t
...
Confirming Pages
6
...
6
...
4
...
The
matrix B ′ is not a rotation (relative to any basis)
...
Then
1
0
B′ =
0 −1
Relative to the basis Q, this matrix produces a reflection through the line spanned by
v1
...
These results are summarized in Theorem 15
...
The change of coordinates given by
x′
y′
=B
x
y
is a rotation if and only if det(B) = 1
...
Start
with C a conic section with equation
xt Ax + bt x + f = 0
Let P be the orthogonal matrix that diagonalizes A, so that
A = P DP t
where
D=
λ1
0
0
λ2
with λ1 and λ2 being the eigenvalues of A
...
If det(P ) = −1, then interchange
the column vectors of P , along with the diagonal entries of D
...
To obtain the equation for C
in the x ′ y ′ coordinate system, substitute x = P x′ into xt Ax + bt x + f = 0 to obtain
t
P x′ A P x′ + bt P x′ + f = 0
By Theorem 6 of Sec
...
3, if the product of A and B is defined, then (AB)t = B t At ,
and since matrix multiplication is associative, we have
(x′ )t P t AP x′ + bt P x′ + f = 0
Let bt P =
d′
e′
that is,
(x′ )t Dx′ + bt P x′ + f = 0
...
The type of conic section depends on the eigenvalues
...
An ellipse if λ1 and λ2 have the same sign
2
...
A parabola if either λ1 or λ2 is zero
EXAMPLE 1
Let C be the conic section whose equation is x 2 − xy + y 2 − 8 = 0
...
Transform the equation to x ′ y ′ coordinates so that C is in standard position
with no x ′ y ′ term
...
Find the angle of rotation between the standard coordinate axes and the x ′ y ′
coordinate system
...
The matrix form of this equation is given by
xt Ax − 8 = 0
The eigenvalues of A are λ1 =
vectors
1
1
v1 = √
2 1
Then the orthogonal matrix
with A =
1
2
1
−1
2
−1
2
1
and λ2 = 3 , with corresponding (unit) eigen2
1
v2 = √
2
and
−1
1
1
1 −1
P =√
1
2 1
diagonalizes A
...
Making the substitution x = P x′ in the matrix
equation above gives
t
x′ P t AP x′ − 8 = 0
that is,
1
0
where
D= 2 3
(x′ )t Dx′ − 8 = 0
0 2
This last equation can now be written as
[x ′ y ′ ]
1
2
0
0
3
2
x′
y′
−8=0
so that the standard form for the equation of the ellipse in the x ′ y ′ coordinate
system is
3(y ′ )2
(x ′ )2
+
=1
16
16
Confirming Pages
6
...
2
...
To find the angle between the original axes and the x ′ y ′ coordinate system,
observe that the eigenvector v1 points in the direction of the x ′ axis
...
6
...
EXAMPLE 2
Describe the conic section C whose equation is
2x 2 − 4xy − y 2 − 4x − 8y + 14 = 0
Solution
The equation for C has the form xt Ax + bt x + f = 0 given by
[x y]
2 −2
−2 −1
The eigenvalues of A =
(unit) eigenvectors
2 −2
−2 −1
1
v1 = √
5
1
2
x
y
+ [−4 − 8]
x
y
+ 14 = 0
are λ1 = −2 and λ2 = 3, with corresponding
and
1
v2 = √
5
−2
1
Confirming Pages
Chapter 6 Inner Product Spaces
Since the eigenvalues have opposite sign, the conic section C is a hyperbola
...
Using the unit eigenvectors, the
orthogonal matrix that diagonalizes A is
1
−2 0
1 −2
= P t AP
with
P =√
0 3
1
5 2
Making the substitution x = P x′ in the equation xt Ax + bt x + f = 0 gives
′
′
[x y ]
−2 0
0 3
x′
y′
1
√
5
2
√
5
+ [−4 − 8]
−2
√
5
1
√
5
x′
y′
+ 14 = 0
After simplification of this equation we obtain
√
−2(x ′ )2 − 4 5x ′ + 3(y ′ )2 + 14 = 0
that is,
√
−2[(x ′ )2 + 2 5(x ′ )] + 3(y ′ )2 + 14 = 0
After completing the square on x ′ , we obtain
√
−2[(x ′ )2 + 2 5(x ′ ) + 5] + 3(y ′ )2 = −14 − 10
that is,
√
(y ′ )2
(x ′ + 5)2
−
=1
12
8
This last equation describes a hyperbola with x ′ as the major axis
...
If we let
√
x ′′ = x ′ + 5
and
y ′′ = y ′
then the equation now becomes
(y ′′ )2
(x ′′ )2
−
=1
12
8
The graph is shown in Fig
...
y
10
x'
10
10
y'
Ϫ10
Ϫ10
Figure 3
Ϫ1
0
10
Ϫ1
0
390
x
Confirming Pages
6
...
As in the two-dimensional case,
the terms gx, hy, and iz produce translations from standard form, while the mixed
terms xy, xz, and yz produce rotations
...
We omit the details
...
Solution
Let
⎤
5 0
4
0 ⎦
A=⎣ 0 4
4 0 −5
Then the quadratic equation can be written as
⎤
⎤⎡
⎡
x
5 0
4
0 ⎦ ⎣ y ⎦ = 36
[x y z] ⎣ 0 4
z
4 0 −5
⎡
The eigenvalues of the matrix A are
√
√
λ1 = 41
λ2 = − 41
λ3 = 4
Hence, the quadric surface, in standard position, has the equation
√
√
41(x ′ )2 − 41(y ′ )2 + 4(z′ )2 = 36
Figure 4
The graph of the surface, which is a hyperboloid of one sheet, is shown in Fig
...
Confirming Pages
392
Chapter 6 Inner Product Spaces
Exercise Set 6
...
Transform the equation to x ′ y ′
coordinates so that C is in standard position with no
x ′ y ′ term
...
27x 2 − 18xy + 3y 2 + x + 3y = 0
2
...
12x 2 + 8xy + 12y 2 − 8 = 0
4
...
−x 2 − 6xy − y 2 + 8 = 0
a
...
b
...
9
...
a
...
b
...
6
...
Let C denote the conic section in standard
position given by the equation 4x 2 + 16y 2 = 16
...
Write the quadratic equation in matrix form
...
Find the quadratic equation that describes the
conic C rotated by 45◦
...
Let C denote the conic section in standard
position given by the equation x 2 − y 2 = 1
...
8
10
...
a
...
b
...
Application: Singular Value Decomposition
In earlier sections we have examined various ways to write a given matrix as a product
of other matrices with special properties
...
1
...
Also in Sec
...
7,
we showed that if A is invertible, then it could be written as the product of elementary
matrices
...
5
...
As a special case, if A is symmetric,
then A has the factorization
A = QDQt
where Q is an orthogonal matrix
...
Specifically, we introduce the singular value decomposition, abbreviated as SVD,
which enables us to write any m × n matrix as
A = U Vt
where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and
an m × n matrix with numbers, called singular values, on its diagonal
...
8 Application: Singular Value Decomposition
393
Singular Values of an m × n Matrix
To define the singular values of an m × n matrix A, we consider the matrix At A
...
This new matrix is symmetric since (At A)t = At Att = At A
...
6
...
...
...
...
...
⎥
...
λn
Since by Exercise 39 of Sec
...
3 the matrix At A is positive semidefinite, we also
have, by Exercise 41 of Sec
...
3, that λi ≥ 0 for 1 ≤ i ≤ n
...
DEFINITION 1
Singular Values Let A be an m × n matrix
...
, λn of
At A
...
5
...
EXAMPLE 1
Let A be the matrix given by
Find the singular values of A
...
We have already seen that orthogonal bases are desirable and the Gram-Schmidt
process can be used to construct an orthogonal basis from any basis
...
, vr are the eigenvectors of At A, then we will see that
{Av1 ,
...
We begin with the connection
between the singular values of A and the vectors Av1 ,
...
THEOREM 16
Let A be an m × n matrix and let B = {v1 , v2 ,
...
, λn
...
|| Avi || = σi for each i = 1, 2,
...
2
...
Proof For the first statement recall from Sec
...
6 that the length of a vector v in
√
Euclidean space can be given by the matrix product || v || = vt v
...
Part 1 is established
due
by noting that σi = λi = || Avi ||
...
6
...
Thus, since B is an orthonormal basis of ޒn , if
i ̸= j , then
(Avi ) · (Avj ) = (Avi )t (Avj ) = vt (At A)vj = vt λj vi = λj vt vj = 0
i
i
i
In Theorem 16, the set of vectors {Av1 , Av2 ,
...
In Theorem 17 we establish that the eigenvectors of At A, after multiplication by A,
are an orthogonal basis for col(A)
...
, vn } an orthonormal basis of ޒn consisting of eigenvectors of At A
...
Then B ′ = {Av1 , Av2 ,
...
Revised Confirming Pages
6
...
, Avr are all nonzero vectors in col(A)
...
, Avr } is an orthogonal set
of vectors in ޒm
...
6
...
Now
to show that these vectors span the column space of A, let w be a vector in col(A)
...
Since B = {v1 , v2 ,
...
, cn such that
v = c1 v1 + c2 v2 + · · · + cn vn
Multiplying both sides of the last equation by A, we obtain
Av = c1 Av1 + c2 Av2 + · · · + cn Avn
Now, using the fact that Avr+1 = Avr+2 = · · · = Avn = 0, then
Av = c1 Av1 + c2 Av2 + · · · + cr Avr
so that w = Av is in span{Av1 , Av2 ,
...
Consequently, B ′ =
{Av1 , Av2 ,
...
EXAMPLE 2
Solution
Let A be the matrix given by
⎤
1 1
A=⎣ 0 1 ⎦
1 0
⎡
Find the image of the unit circle under the linear transformation T : 3ޒ → 2ޒ
defined by T (v) = Av
...
The singular values of A are then σ1 = 3 and σ2 = 1
...
The image of C(t)
under T is given by
T (C(t)) = cos (t)Av1 + sin (t)Av2
By Theorem 17, B ′ =
1
1
σ1 Av1 , σ2 Av2
is a basis for the range of T
...
Observe that
(x ′ )2
x′ 2
√
+ (y ′ )2 = cos2 t + sin2 t = 1
+ (y ′ )2 =
3
3
which is an ellipse with the length of the semimajor axis equal to σ1 and length of
the semiminor axis equal to σ2 , as shown in Fig
...
Confirming Pages
396
Chapter 6 Inner Product Spaces
y
z
2
y
σ2
σ1
A
2
Ϫ2
Multiplication by
x
x
Ϫ2
x
y
Figure 1
For certain matrices, some of the singular values may be zero
...
...
Hence, the rank of A is equal to 1
...
Now, multiplying v1
and v2 by A gives
√
0
√5
and
Av2 =
Av1 =
0
3 5
Observe that Av1 spans the one dimensional column space of A
...
2
...
THEOREM 18
SVD Let A be an m × n matrix of rank r, with r nonzero singular values
σ1 , σ2 ,
...
Then there exists an m × n matrix , an m × m orthogonal matrix
Confirming Pages
6
...
6
...
, vn } of ޒn , consisting of eigenvectors of At A
...
, Avr } is an orthogonal basis for col(A)
...
, ur }
be the orthonormal basis for col(A), given by
1
1
ui =
Avi = Avi
for
i = 1,
...
, ur } to the orthonormal basis {u1 ,
...
We can
now define the orthogonal matrices V and U , using the vectors {v1 ,
...
, um }, respectively, as column vectors, so that
V = [ v1
v2
· · · vn ]
and
U = [ u1
Moreover, since Avi = σi ui , for i = 1,
...
0
⎢ 0 σ2
...
...
...
⎢
...
...
⎢
...
0
⎢
⎢
...
...
...
0
...
0
...
...
...
...
0
...
...
0
u2
· · · um ]
· · · σr ur
0
0
...
...
...
...
EXAMPLE 3
Solution
Find a singular value decomposition of the matrix
⎤
⎡
−1
1
1 ⎦
A = ⎣ −1
2 −2
A procedure for finding an SVD of A is included in the proof of Theorem 18
...
Step 1
...
The eigenvalues of the matrix
At A =
6 −6
−6
6
in decreasing order are given by λ1 = 12 and λ2 = 0
...
Find the singular values of A and define the matrix
...
In this case,
⎤
⎡ √
2 3 0
=⎣
0 0 ⎦
0 0
Confirming Pages
6
...
Define the matrix U
...
Therefore, the first column of U is
√ ⎤
⎡
1/√6
1
u1 = Av1 = ⎣ 1/√6 ⎦
σ1
−2/ 6
Next we extend the set {u1 } to an orthonormal basis for 3ޒby adding to it the
vectors
√ ⎤
⎡ √ ⎤
⎡
2/ 5
−1/√2
u2 = ⎣ √0 ⎦
and
u3 = ⎣ 1/ 2 ⎦
1/ 5
0
so that
√ ⎤
√
√
⎡
1/√6 2/ 5 −1/√2
U = ⎣ 1/√6
0
1/ 2 ⎦
√
0
−2/ 6 1/ 5
The singular value decomposition of A is then given by
√ ⎤⎡ √
√
√
⎤
⎡
√
√
1/√6 2/ 5 −1/ 2
2 3 0
√
−1/√2 1/√2
A = U V t = ⎣ 1/ √
0 ⎦
6
0
1/ 2 ⎦ ⎣ 0
√
1/ 2 1/ 2
0
0
0
−2/ 6 1/ 5
⎤
⎡
−1
1
1 ⎦
= ⎣ −1
2 −2
In Example 3, the process of finding a singular value decomposition of A was
complicated by the task of extending the set {u1 ,
...
Alternatively, we can use At A to find V and AAt to find U
...
After multiplying A on the
left by its transpose, we obtain
At A = V
t
U t U V t = V D1 V t
where D1 is an n × n diagonal matrix with diagonal entries the eigenvalues of At A
...
On the other hand,
AAt = U V t V
t
U t = U D2 U t
where D2 is an m × m diagonal matrix with diagonal entries the eigenvalues of AAt
and U is an orthogonal matrix that diagonalizes AAt
...
(See Exercise 22 of Sec
...
1
...
The matrices U and V found using this
Confirming Pages
400
Chapter 6 Inner Product Spaces
procedure are not unique
...
As a
result, finding an SVD of A may require changing the signs of certain columns of U
or V
...
EXAMPLE 4
Find a singular value decomposition of the matrix
1
3
A=
Solution
1
−3
First observe that
At A =
1
3
1 −3
By inspection we see that v1 =
1
√
2
1
3
1
−3
1
−1
−8
10
10
−8
=
is a unit eigenvector of At A with
corresponding eigenvalue λ1 = 18, and v2 =
1
√
2
1
1
is a unit eigenvector of
At A with corresponding eigenvalue λ2 = 2
...
Thus,
0 1
1 0
A singular value decomposition of A is then given by
⎡ 1
√
√
3 2 √0 ⎣ 2
0 1
A = U Vt =
1
1 0
√
2
0
2
−1
√
2
1
√
2
⎤
⎦=
1
3
1
−3
Confirming Pages
6
...
6
...
To develop this idea, let A be an m × n matrix of rank
r ≤ n and B = {v1 ,
...
First, from
the proof of Theorem 17 if σ1 , · · · , σr are the nonzero singular values of A, then
C′ =
1
1
Av1 ,
...
, ur }
is a basis for col(A)
...
, ur , ur+1 ,
...
We claim that
C ′′ = {ur+1 ,
...
To see this, observe that each
vector of C ′ is orthogonal to each vector of C ′′
...
6
...
, um } = col(A)⊥
...
6
...
, um } = N (At ), so that C ′′ = {ur+1 ,
...
We now turn our attention to the matrix V
...
Consequently, span{vr+1 ,
...
Now by Theorem 5 of Sec
...
2,
dim(N (A)) + dim(col(A)) = n
N(A)
v1
row(A)
A
col(A)
u2
N(At )
u1
Av1
u3
so that dim(N (A)) = n − r
...
, vn } is an orthogonal, and hence
linearly independent, set of n − r vectors in N (A), by Theorem 12, part (1), of
Sec
...
3, B ′′ is a basis for N (A)
...
, vn } is an orthonormal
basis for ޒn , each vector of B ′′ is orthogonal to every vector in B ′ = {v1 ,
...
Hence,
span{v1 ,
...
To illustrate the ideas of this discussion, consider the matrix A of Example 3 and
its SVD
...
3
...
As a preliminary step, suppose that a matrix A of rank r (with r nonzero
Confirming Pages
402
Chapter 6 Inner Product Spaces
singular values) has the SVD A = U V t
...
Consequently, the sum
i
of the first k terms of the last equation is a matrix of rank k ≤ r, which gives an
approximation to the matrix A
...
As an illustration of the utility of such an approximation, suppose that A is the
356 × 500 matrix, where each entry is a numeric value for a pixel, of the gray scale
image of the surface of Mars shown in Fig
...
A simple algorithm using the method
above for approximating the image stored in the matrix A is given by the following:
Figure 4
1
...
2
...
, k, with k ≤ r = rank(A)
...
The matrix (Av1 )vt + (Av2 )vt + · · · + (Avk )vt is an approximation of the origk
1
2
inal image
...
, vk of At A and the vectors Av1 ,
...
The images in Fig
...
Figure 5
The storage requirements for each of the images are given in Table 1
...
8 Application: Singular Value Decomposition
403
Table 1
Image
Storage Requirement
Percent of Original
Rank
Original
356 × 500 = 178, 000
100%
Approximation 1
2 × 500 = 1, 000
0
...
8
In Exercises 1–4, find the singular values for the
matrix
...
A =
−2 −2
1
1
2
...
A = ⎣ 2 −1 −1 ⎦
−2
1
1
⎡
⎤
⎡
1 −1 0
0 1 ⎦
4
...
5
...
A =
2
2
4 −1
7
...
A =
−2 1 −1
0 1
1
In Exercises 9 and 10, the condition number of a
matrix A is the ratio σ1 /σr , of the largest to the
smallest singular value
...
A linear
system is ill-conditioned when the condition number
is too large and called singular when the condition
number is infinite (the matrix is not invertible)
...
Let A =
1 1
1 1
...
a
...
Solve the linear system
Ax =
2
2
...
Find the condition number for A
...
Let b = ⎣ 3 ⎦
...
Let
b
...
⎤
−2
...
001
0
−0
...
01
0
−2
1
⎡
Solve the linear system Bx = b
...
Find the condition number for A
...
Let V be the inner product space 3ޒwith the
standard inner product and let
⎧⎡
⎤ ⎡ ⎤ ⎡ ⎤⎫
2 ⎬
1
⎨ 1
B = ⎣ 0 ⎦, ⎣ 0 ⎦, ⎣ 1 ⎦
⎭
⎩
0
0
1
a
...
3ޒ
3
b
...
ޒ
⎤ ⎡ ⎤⎫
1 ⎬
⎨ 1
c
...
Find projW v
...
6
...
)
2
...
a
...
b
...
c
...
d
...
e
...
⎡
−2
⎢ 0
f
...
⎤
⎥
⎥
⎦
3
...
⎤
⎡
a
a
...
c
b
...
⎤
W
x1
c
...
Find projW ⊥ v
...
Find projW ⊥ v
...
Define on P2 an inner product by
1
⟨p, q⟩ =
p(x)q(x) dx
−1
Let p(x) = x and q(x) = x 2 − x + 1
...
Find ⟨p, q⟩
...
Find the distance between p and q
...
Are p and q orthogonal? Explain
...
Find the cosine of the angle between p and q
...
Find projq p
...
Let W = span{p}
...
Confirming Pages
6
...
Let V be the inner product space C (0) [−π, π]
with inner product defined by
π
⟨f, g⟩ =
f (x)g(x) dx
6
...
, vn } be an orthonormal basis for
an inner product space V , and let v be a vector
in V
...
Find the coordinate ⎢
...
⎣
...
cn
b
...
, n
...
Let ⎧ ⎡
⎤
⎤
⎤⎫
⎡
⎡
1
1
1 ⎬
⎨
1
1
1
B = √2 ⎣ 1 ⎦, √2 ⎣ −1 ⎦, √6 ⎣ 1 ⎦
⎭
⎩
0
0
−2
be an orthonormal basis for , 3ޒwith the
standard inner product, and let
⎡ 1
1 ⎤
√ + √
2
3
⎢ 1
1 ⎥
v = ⎣ √ − √3 ⎦
...
Let {v1 ,
...
Show that
m
−π
Let W = span{1, cos x, sin x}
...
Verify that the set {1, cos x, sin x} is
orthogonal
...
Find an orthonormal basis for W
...
Find projW x 2
...
Find projW x 2
...
⎥ of v relative to B
...
⎦
...
Show that if B is an orthonormal basis for ޒn ,
with the standard inner product, and
⎡
⎤
c1
⎢ c2 ⎥
[v]B = ⎢
...
⎦
...
)
9
...
In this
exercise we will describe a process to write
A = QR, where Q is an m × n matrix whose
column vectors form an orthonormal basis for
col(A) and R is an n × n upper triangular matrix
that is invertible
...
Let B = {v1 , v2 , v3 } be the set of column
vectors of the matrix A
...
b
...
c
...
d
...
Define the
upper triangle matrix R for i = 1, 2, 3 by
rij =
0
vj · qi
if i > j
if i ≤ j, j = i,
...
Verify that A = QR
...
Let B = {v1 , v2 ,
...
, cn
arbitrary nonzero scalars
...
B1 = {c1 v1 , c2 v2 ,
...
How can the scalars
be chosen so that B1 is an orthonormal basis?
Confirming Pages
406
Chapter 6 Inner Product Spaces
Chapter 6: Chapter Test
In Exercises 1–40, determine whether the statement is
true or false
...
If u is orthogonal to both v1 and v2 , then u is
orthogonal span{v1 , v2 }
...
If W is a subspace of an inner product space V
and v ∈ V , then v − projW v ∈ W ⊥
...
If W is a subspace of an inner product space V ,
then W ∩ W ⊥ contains a nonzero vector
...
Not every orthogonal set in an inner product
space is linearly independent
...
11
...
12
...
3ޒ
⎡ ⎤
1
13
...
3
14
...
15
...
16
...
3ޒ
In Exercises 17–23, use the inner product defined on
P2 defined by
1
⟨p, q⟩ =
be vectors in 4ޒwith inner product the standard dot
product
...
|| v1 || = 30
17
...
The distance between the vectors v1 and v2 is
√
2 14
...
The polynomials p(x) = x and q(x) = x 2 − 1 are
orthogonal
...
The vector u =
√1
30
v1 is a unit vector
...
The polynomials p(x) = 1 and q(x) = x 2 − 1 are
orthogonal
...
The vectors v1 and v2 are orthogonal
...
The cosine of the√
angle between the vectors
4
v1 and v2 is − 15 10
...
projv1 v2 = ⎢
⎣ −16/15 ⎦
−12/15
In Exercises 11–16, let
⎤
⎡
1
v1 = ⎣ 0 ⎦
1
⎡
v2 = ⎣
⎤
2
v3 = ⎣ 4 ⎦
−2
⎡
⎤
−1
1 ⎦
1
20
...
3
21
...
22
...
23
...
24
...
25
...
26
...
8 Application: Singular Value Decomposition
27
...
If
1
⟨p, q⟩ =
then a basis for
W⊥
p(x)q(x) dx
0
is {x}
...
If {u1 ,
...
, vm } is a basis
for W ⊥ , then {u1 ,
...
, vm } is a basis
for V
...
If A is an n × n matrix whose column vectors
form an orthogonal set in ޒn with the standard
inner product, then col(A) = ޒn
...
In 2ޒwith the standard inner product, the
orthogonal complement of y = 2x is y = 1 x
...
In 3ޒwith the standard inner product, the
orthogonal complement of −3x + 3z = 0 is
⎧⎡
⎤⎫
⎨ −3 ⎬
span ⎣ 0 ⎦
...
Every finite dimensional inner product space has
an orthonormal basis
...
If
⎧⎡ ⎤ ⎡
⎤⎫
0 ⎬
⎨ 1
W = span ⎣ 2 ⎦, ⎣ 1 ⎦
⎭
⎩
−1
1
then a basis for W ⊥ is also a basis for the null
space of
⎤
⎡
1
0
1 ⎦
A=⎣ 2
1 −1
407
⎧⎡ ⎤ ⎡
⎤⎫
−1 ⎬
⎨ 1
W = span ⎣ 0 ⎦, ⎣ 1 ⎦
⎭
⎩
0
1
then dim(W ⊥ ) = 2
...
If
then
⎧⎡
⎤ ⎡ ⎤⎫
1 ⎬
⎨ 0
W = span ⎣ 1 ⎦, ⎣ 0 ⎦
⎭
⎩
1
1
W⊥
⎧⎡
⎤⎫
⎨ −1 ⎬
= span ⎣ −1 ⎦
⎭
⎩
1
36
...
37
...
38
...
39
...
40
...
Confirming Pages
Confirming Pages
APPENDIX
Preliminaries
ß
A
...
For example, we can consider the
collection of all even numbers, or the collection of all polynomials of degree 3
...
By this we mean that a clear process exists
for deciding whether an object is contained in the set
...
To indicate
that x is an element of a set S, we write x ∈ S
...
The color orange, however, is not one of the colors
of the rainbow and therefore is not an element of C
...
/
There are several ways to write a set
...
Another example is
S = {−3, −2, 0, 1, 4, 7}
If a pattern exists among its elements, a set can be described by specifying only a
few of them
...
, 36}
is the set of all even numbers between 2 and 36, inclusive
...
}
Special sets of numbers are often given special symbols
...
The set of natural numbers, denoted by ,ގis the set
}
...
,3 ,2 ,1 ,0 ,1− ,2− ,3− ,
...
Examples of irrational numbers are 2 and π
...
For example,
S = {x ∈ ≤ 1− | ޒx < 4}
is the set of all real numbers greater than or equal to −1 and less than 4
...
” In some cases
L is omitted if a universal set is implied or understood
...
Denote two sets by A and
B
...
When this happens,
we say that A is a subset of B and write A ⊆ B
...
However, A is not a subset
of C since 1 ∈ A but 1 ∈ C
...
For the sets of natural
/
numbers, integers, rational numbers, and real numbers, we have
ޒ⊆ޑ⊆ޚ⊆ގ
The set with no elements is called the empty set, or null set, and is denoted by φ
...
Two sets A and B are equal if they have the same elements
...
In this case we write A = B
...
The intersection of two sets A and B, denoted by
A ∩ B, is the set of all elements that are in both A and B, that is,
A ∩ B = {x | x ∈ A and x ∈ B}
The union of two sets A and B, denoted by A ∪ B, is the set of all elements that are
in A or B, that is,
A ∪ B = {x | x ∈ A or x ∈ B}
As an illustration, let A = {1, 3, 5} and B = {1, 2, 4}
...
The Venn diagrams for the intersection and union of two sets are shown in Fig
...
Confirming Pages
A
...
Find A ∩ B
and A ∪ B
...
/
/
The complement of the set A relative to the set B, denoted by B\A, consists of all
elements of B that are not elements of A
...
Then
B\A = [0, 1) ∪ (2, 5]
If A is taken from a known universal set, then the complement of A is denoted
by Ac
...
Then the complement of A relative to
the set of real numbers is
\ޒA = Ac = (−∞, 1) ∪ (2, ∞)
Another operation on sets is the Cartesian product
...
So
A × B = {(x, y) | x ∈ A and y ∈ B}
For example, if A = {1, 2} and B = {10, 20}, then
A × B = {(1, 10), (1, 20), (2, 10), (2, 20)}
This last set is a subset of the Euclidean plane, which can be written as the Cartesian
product of ޒwith itself, so that
({ = ޒ × ޒ = 2ޒx, y) | x, y ∈ }ޒ
Confirming Pages
412
Appendix A
Preliminaries
EXAMPLE 2
y
5
Solution
[−3, 2) × (−2, 1]
Ϫ5
5
x
Let A = [−3, 2) and B = (−2, 1]
...
Since A × B consists of all ordered pairs whose first component comes from A and
second from B, we have
−3 ≤ x < 2 and − 2 < y ≤ 1
The points that satisfy these two conditions lie in the rectangular region shown in
Fig
...
Ϫ5
Figure 2
EXAMPLE 3
Solution
Example 3 shows that operations on sets can be combined to produce results
similar to the arithmetic properties of real numbers
...
The Venn diagrams in Fig
...
The quantities inside the parentheses are
carried out first
...
To establish
the fact, we must show that the set on the left-hand side of the equation above is
a subset of the set on the right, and vice versa
...
This is equivalent to
the statement x ∈ A and (x ∈ B or x ∈ C), which in turn is also equivalent to
(x ∈ A and x ∈ B)
or
(x ∈ A and x ∈ C)
Hence, x ∈ (A ∩ B) ∪ (A ∩ C), and we have shown that
A ∩ (B ∪ C) ⊆ (A ∩ B) ∪ (A ∩ C)
On the other hand, let x ∈ (A ∩ B) ∪ (A ∩ C), which can also be written as
x ∈ (A ∩ B)
or
x ∈ (A ∩ C)
x∈B
or
x∈A
This gives
x∈A
and
and
In either case, x ∈ A and, in addition, x ∈ B or x ∈ C, so that
x ∈ A ∩ (B ∪ C)
x∈C
Confirming Pages
A
...
The verifications of the remaining properties are left as exercises
...
1
...
3
...
5
...
THEOREM 2
A ∩ A = A, A ∪ A = A
(Ac )c = A
A ∩ Ac = φ, A ∪ Ac = U
A ∩ B = B ∩ A, A ∪ B = B ∪ A
(A ∩ B) ∩ C = A ∩ (B ∩ C), (A ∪ B) ∪ C = A ∪ (B ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
DeMorgan’s Laws Let A, B, and C be sets
...
A\(B ∪ C) = (A\B) ∩ (A\C)
2
...
We begin by letting x ∈ A\(B ∪ C)
...
This is equivalent to the statement
/
x∈A
(x ∈ B and x ∈ C)
/
/
and
which is then equivalent to
x∈A
and
x∈B
/
and
x∈A
and
x∈C
/
This last pair of statements gives
x ∈ (A\B) ∩ (A\C)
so that
A\(B ∪ C) ⊆ (A\B) ∩ (A\C)
To show containment in the other direction, we let x ∈ (A\B) ∩ (A\C)
...
Exercise Set A
...
C × B
18
...
A × (B ∩ C)
20
...
1
...
A ∪ B
3
...
(A ∪ B)c
Verify that the statement holds
...
A\B
21
...
B\A
22
...
24
...
A\(B ∪ C) = (A\B) ∩ (A\C)
7
...
A\(B ∩ C) = (A\B) ∪ (A\C)
8
...
9
...
(Ac )c = A
10
...
The set A ∪ Ac is the universal set
...
A\C
29
...
(A ∪ B)c ∩ C
30
...
(A ∪ B)\C
31
...
B\(A ∩ C)
32
...
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
B = [1, 4]
C = [0, 2]
33
...
34
...
A × B
35
...
B × C
A\B = A ∩ B c
Confirming Pages
A
...
If A and B are sets, show that
c
(A ∪ B) ∩ A = B\A
37
...
If A and B are sets, show that
(A ∩ B) = A\(A\B)
ß
A
...
If A, B, and C are sets, show that
A × (B ∩ C) = (A × B) ∩ (A × C)
40
...
A
...
Sets act as nouns defining objects and functions as
verbs describing actions to be performed on the elements of a set
...
The functions that are
studied in calculus are defined on sets of real numbers
...
The following definition is
general enough for a wide variety of abstract settings
...
ޙ
Before continuing with a description of functions, we note that there are other
ways of associating the elements of two sets
...
ޘ
A function, then, is a relation that is well defined with a clear procedure that associates
a unique element of ޙwith each element of
...
A function f is also called a mapping from ޘto ޙand is written f :
...
The set ޘis called the domain of f and is denoted by
dom(f )
...
That is,
range(f ) = {f (x) | x ∈ dom(f )}
If A is a subset of the domain, then the image of A is defined by
f (A) = {f (x) | x ∈ A}
Using this notation, we have range(f ) = f (
...
The pictures shown in Fig
...
Confirming Pages
416
Appendix A
Preliminaries
f
g
x1
ޘ
y1
x2
ޘ
ޙ
y2
x3
y1
x2
y4
y3
x4
x1
ޙ
y2
y3
x3
y4
A function
Not a function
Figure 1
y
5
Ϫ5
5
x
The relation f , shown in Fig
...
ޙNotice that more
than one element in the domain of the function f can be associated with the same
element in the range
...
However, the relation
g, also shown in Fig
...
Notice in this example that f ( )ޘis not equal to ,ޙsince y4 is not in the range of
f
...
ޙ
The graph of a function f : ޙ →− ޘis a subset of the Cartesian product ޙ × ޘ
and is defined by
graph(f ) = {(x, y) | x ∈ ޘand y = f (x) ∈ range(f )}
For a function f : ޒ →− ޒthe graph is a subset of , 2ޒthe Cartesian plane
...
ޒFor the range, since the vertex of the parabola is (2, −1), then
range(f ) = [−1, ∞)
...
2
...
Notice that in this example
it is also the case that f (4) = 3, so {0, 4} is the set of all real numbers with image
equal to 3
...
This motivates the
next concept
...
That is,
f −1 (B) = {x ∈ | ޘf (x) ∈ B}
Figure 3
The set f −1 (B) is also called the set of preimages of the set B
...
The graph is shown in Fig
...
We see from the graph that
f −1 ([0, 1]) = [0, π]
and
f −1 ([−1, 0]) = [π, 2π]
Confirming Pages
A
...
Define the sets
A = [0, 3]
a
...
c
...
Solution
417
Compare
Compare
Compare
Compare
the
the
the
the
sets
sets
sets
sets
B = [1, 4]
C = [−1, 3]
D = [0, 3]
f (A ∩ B) and f (A) ∩ f (B)
...
f −1 (C ∩ D) and f −1 (C) ∩ f −1 (D)
...
a
...
2 that f (A ∩ B) = [−1, 0]
...
Hence, we have shown
that
f (A ∩ B) ⊆ f (A) ∩ f (B) with f (A ∩ B) ̸= f (A) ∩ f (B)
b
...
Also f (A) = [−1, 3] =
f (B), so that f (A) ∪ f (B) = [−1, 3]
...
Since C ∩ D = [0, 3], we have
f −1 (C ∩ D) = {x ∈ | ޒf (x) ∈ [0, 3]}
= {x ∈ ≤ 0 | ޒf (x) ≤ 3}
= {x ∈ ( ≤ 0 | ޒx − 2)2 − 1 ≤ 3}
We see from Fig
...
Since C ∪ D = [−1, 3], we have from the results in part (c)
f −1 (C ∪ D) = f −1 (C) ∪ f −1 (D)
Theorem 3 summarizes several results about images of sets and inverse images
of sets including the observations made in Example 1
...
ޙThen
1
...
3
...
5
...
f (A ∩ B) ⊆ f (A) ∩ f (B)
f (A ∪ B) = f (A) ∪ f (B)
f −1 (C ∩ D) = f −1 (C) ∩ f −1 (D)
f −1 (C ∪ D) = f −1 (C) ∪ f −1 (D)
A ⊆ f −1 (f (A))
f (f −1 (C)) ⊆ C
Proof (1) Let y ∈ f (A ∩ B)
...
This means that y ∈ f (A) and y ∈ f (B), and hence y ∈ f (A) ∩ f (B)
...
(3) To show that the sets are equal, we show that each set is a subset of the other
...
Therefore, x ∈ f −1 (C) and x ∈ f −1 (D), and we have
f −1 (C ∩ D) ⊆ f −1 (C) ∩ f −1 (D)
...
Then f (x) ∈ C and f (x) ∈ D, so that f (x) ∈ C ∩ D
...
(5) If x ∈ A, then f (x) ∈ f (A), and hence x ∈ f −1 (f (A))
...
The proofs of parts 2, 4, and 6 are left as exercises
...
Inverse Functions
An inverse function of a function f , when it exists, is a function that reverses the action
of f
...
For
example, if f (x) = 3x − 1 and g(x) = (x + 1)/3, then f (2) = 5 and g(5) = 2
...
For a function to have an inverse function, the inverse image for each element of the range of the function must be well defined
...
For example, the function f : ޒ →− ޒdefined by f (x) = x 2 cannot be reversed as a
function since the inverse image of the set {4} is the set {−2, 2}
...
A function that has an inverse is called invertible
...
This will justify the use of the definite article and the symbol f −1 when referring to
the inverse of the function f
...
2 Functions
419
property called one-to-one
...
1 is not one-to-one, since
both x3 and x4 are sent to the same element of
...
DEFINITION 2
One-to-One Function Let f : ޙ →− ޘbe a function
...
Alternatively, f is one-to-one if whenever f (x1 ) = f (x2 ), then x1 = x2
...
When this happens, f passes the horizontal line test and
is thus invertible
...
The inverse of f is denoted by f −1 with f −1 : range(f ) −→
...
We omit the
proof
...
The function f has
an inverse function if and only if f is one-to-one
...
Since the
graph, which is a straight line, satisfies the horizontal line test, the function is oneto-one and hence has an inverse function
...
We can solve for x in terms of y to obtain
y−1
x=
3
The inverse function is then written using the same independent variable, so that
x−1
3
It is also possible to show that a function has an inverse even when it is difficult to
find the inverse
...
By Theorem 4, to show that f is invertible, we show that f is one-to-one
...
We wish to show that f (x1 ) ̸= f (x2 )
...
To see how, suppose that (a, b) is a point on the graph of y = f (x)
...
Consequently, the point (b, a) is on the graph of
y = f −1 (x)
...
The graph of the function
and its inverse in Example 2 are shown in Fig
...
y
5
f (x) = x 3 + x
y = f −1 (x)
Ϫ5
5
x
Ϫ5
Figure 4
When f : ޙ →− ޘis a function such that the set of images is all of ,ޙthat is,
f ( ,ޙ = )ޘwe call the function onto
...
ޙ
The function f : ޙ →− ޘis called onto, or surjective, if
For example, the function of Example 2 is onto since the range of f is all of
...
4
...
Notice that the function f : ޒ →− ޒwith f (x) = x 2 is not onto since range (f ) =
[0, ∞)
...
So the function
f : )∞ ,0[ →− ޒdefined by f (x) = x 2 is onto
...
That is, the function f : [0, ∞) −→ [0, ∞) defined
by f (x) = x 2 is a bijection
...
Notice
also that a function has an inverse if and only if it is bijective
...
For example,
if f : 1ޙ →− 1ޘand g: 2ޙ →− 2ޘare real-valued functions of a real variable, then
the standard arithmetic operations on functions are defined by
(f + g)(x) = f (x) + g(x)
(f − g)(x) = f (x) − g(x)
(f g)(x) = f (x)g(x)
f
g
(x) =
f (x)
g(x)
Confirming Pages
A
...
Another method of
combining functions is through the composition of two functions
...
For example, if f (x) = x and g(x) = x 2 − x − 2, then f (g(3)) =
f (4) = 2 is the composition of f with g evaluated at the number 3 and is denoted
by (f ◦g)(3)
...
The composition f ◦g: A −→ C is defined by
(f ◦g)(x) = f (g(x))
The domain of the composition is dom(f ◦g) = {x ∈ dom(g) | g(x) ∈ dom(f )}
...
For example,
let f (x) = 2x − 1
...
Notice that
2x − 1 + 1
f (x) + 1
(f −1 ◦f )(x) = f −1 (f (x)) =
=
=x
2
2
and
x+1
(f ◦f −1 )(x) = f (f −1 (x)) = 2
−1=x+1−1=x
2
THEOREM 5
Suppose that f : ޙ →− ޘis a bijection
...
(f −1 ◦f )(x) = x for all x ∈ ޘ
2
...
To see this,
let f : ޙ →− ޘbe an invertible function and f −1 an inverse function
...
Let I ޘbe the identity function on ޘ
and I ޙthe identity function on
...
ޙIf y is in ,ޙthen
g(y) = g ◦I( ޙy) = g ◦(f ◦f −1 )(y)
= g(f (f −1 (y))) = (g ◦f )◦f −1 (y)
= I( ޘf −1 (y)) = f −1 (y)
Since this holds for all y in ,ޙthen g = f −1
...
This justifies the use of the symbol f −1 for the inverse of f , when
it exists
...
Then
1
...
(f −1 )−1 = f
THEOREM 7
Let A, B, and C be nonempty sets and f : B −→ C and g: A −→ B be functions
...
2
...
4
...
If
If
If
If
If
f and g are injections, then f ◦g is an injection
...
f and g are bijections, then f ◦g is a bijection
...
f ◦g is a surjection, then f is a surjection
...
Then by
the definition of composition, we have
f (g(x1 )) = f (g(x2 ))
Since f is an injection, g(x1 ) = g(x2 )
...
Therefore, f ◦g is an injection
...
Since f ◦g: A −→ C is a surjection, there is some a ∈ A such that
(f ◦g)(a) = c
...
But g(a) ∈ B, so there is an element of B
with image under f equal to c
...
The proofs of parts 2, 3, and 4 are left as exercises
...
If
f and g are bijections, then the function f ◦g has an inverse function and (f ◦g)−1 =
g −1 ◦f −1
...
Moreover, the function
g −1 ◦f −1 also maps C to A
...
Let c ∈ C
...
Next, since g is onto, there is an a ∈ A such that g(a) = b, which
is equivalent to a = g −1 (b)
...
We also have g −1 (f −1 (c)) = (g −1 ◦f −1 )(c) = a
...
Confirming Pages
A
...
2
15
...
Find
the inverse function of f
...
Explain why f is a function
...
Is f a one-to-one function? Explain
...
Is f an onto function? Specify range(f )
...
Let A = {1, 2, 4}
...
5
...
16
...
Show that the inverse function of f exists
...
Given a function f , define for each positive
integer n
f n (x) = (f ◦f ◦ · · · ◦f )(x)
where the composition is taken n − 1 times
...
18
...
Find f −1 (f ({1}))
...
Does f have an inverse function? Explain
...
Is it possible to define a function with domain ޘ
that is onto ?ޙExplain
...
Define a function g: ޙ →− ޘthat is one-to-one
...
Is it possible to define a function g: ޘ →− ޙthat
is onto? Explain
...
Let A = (−3, 5) and B = [0, 7)
...
Let C = [1, ∞) and D = [3, 5]
...
Let A = [−2, 0] and B = [0, 2]
...
2
14
...
If A = [0, 5) and B = [2, 7), verify that
g(A ∩ B) = g(A) ∩ g(B)
What property does g have that f does not?
f (x) =
2x
2 − 2x
if 0 ≤ x ≤ 1
2
if 1 < x ≤ 1
2
Sketch the graphs of y = f (x) and y = (f ◦f )(x)
...
Define a function f : ޒ → ޒby
f (x) = e2x−1
a
...
b
...
c
...
d
...
20
...
21
...
Show that f is one-to-one
...
Is f onto? Explain
...
If E denotes the set of even positive integers
and O the odd positive integers, find f −1 (E)
and f −1 (O)
...
Define a function f : ޚ → ޚby
f (n) =
n+1
n−3
if n is even
if n is odd
Let E denote the set of even integers and O the
set of odd integers
...
23
...
Let A = {(p, q) | p and q are odd}
...
b
...
Find f (B)
...
Find f −1 ({0})
...
Let E denote the set of even integers
...
e
...
Find
f −1 (O)
...
Show that f is not one-to-one
...
Show that f is onto
...
Let A be the set of all points that lie on the
line y = x + 1
...
In Exercises 25–27, f : ޙ → ޘis a function, A and B
are subsets of ,ޘand C and D are subsets of
...
25
...
f −1 (C ∪ D) = f −1 (C) ∪ f −1 (D)
27
...
Prove the statements
...
If f and g are surjections, then f ◦g is a
surjection
...
If f and g are bijections, then f ◦g is a bijection
...
If f ◦g is an injection, then g is an injection
...
If f : ޙ → ޘis a function and A and B are
subsets of ,ޘshow that
f (A)\f (B) ⊆ f (A\B)
24
...
Show that f is one-to-one
...
Is f onto? Justify your answer
...
3
32
...
A few of these, called axioms, are accepted as selfevident and do not require justification
...
A proof is the process of establishing the validity of a statement
...
The
first part, called the hypothesis, is a set of assumptions
...
It is customary to use the letter P to
denote the hypotheses (or hypothesis if there is only one) and the letter Q to denote
the conclusion
...
” The
converse of a theorem is symbolized by
Q
⇒ P
Revised Confirming Pages
A
...
” For example, let P be the statement
Mary lives in Iowa and Q the statement that Mary lives in the United States
...
But Q ⇒ P is not a theorem since, for example, if Mary is a
resident of California, then she is a resident of the United States but not a resident of
Iowa
...
In terms
of sets, if A is the set of residents of Iowa and B is the set of residents of the United
States, then the statement P is Mary is in A and Q is Mary is in B
...
It is also clear that if Mary is in B\A, then Mary is in B does
not imply that Mary is in A
...
In the example above, if Mary is
not a resident of the United States, then Mary is not a resident of Iowa
...
/
There are other statements in mathematics that require proof
...
A statement that is not yet
proven is called a conjecture
...
A single counterexample is enough to refute a false conjecture
...
In this section we briefly introduce three main types of proof
...
A
...
Direct Argument
In a direct argument, a sequence of logical steps links the hypotheses P to the
conclusion Q
...
EXAMPLE 1
Solution
Prove that if p and q are odd integers, then p + q is an even integer
...
Then there are integers m and n such that
p = 2m + 1
and
q = 2n + 1
Adding p and q gives
p + q = 2m + 1 + 2n + 1
= 2(m + n) + 2
= 2(m + n + 1)
Since p + q is a multiple of 2, it is an even integer
...
The notation ∼Q denotes the negation of the statement Q
...
In a contrapositive argument the hypothesis is ∼Q, and we proceed
with a direct argument to show that ∼P holds
...
In a direct argument we assume that p2 is even, so that we can write p2 = 2k for
some integer k
...
To use a contrapositive argument, we assume that p is not an even integer
...
Then there is an integer k such that
p = 2k + 1
...
Therefore, the original statement holds
...
For example, to prove that
the set of natural numbers ގis infinite, we would assume the set of natural numbers is
finite and argue that this leads to a contradiction
...
Since both P and ∼P cannot be true, we
have a contradiction
...
EXAMPLE 3
Solution
Prove that
√
2 is an irrational number
...
That is, we
Confirming Pages
A
...
We will arrive at a contradiction by
showing that if 2 = p/q, then p and q do have a common factor
...
Since p2 is an even integer, then by Example 2 so is p
...
Substituting 2k for p in the equation 2q 2 = p2
gives
so that
q 2 = 2k 2
2q 2 = p2 = (2k)2 = 4k 2
Hence, q is also an even integer
...
Quantifiers
Often statements in mathematics are quantified using the universal quantifier for
all, denoted by the symbol ∀, or by the existential quantifier there exists, denoted
by the symbol ∃
...
To prove that the statement is true, we have to verify that the
statement P (x) holds for every choice of x
...
To prove that a statement of the form
∃x, P (x)
holds requires finding at least one x such that P (x) holds
...
When we negate a statement involving quantifiers, ∼∃ becomes ∀ and ∼∀
becomes ∃
...
3
1
...
2
...
3
...
4
...
5
...
6
...
7
...
8
...
9
...
10
...
11
...
12
...
Prove
that if x is in the set S = {x ∈ ≤ 0 | ޒx ≤ 3},
then f (x) ≤ g(x)
...
Prove that if n is an integer and n2 is odd, then n
is odd
...
Prove that if n is an integer and n3 is even, then n
is even
...
Prove that if p and q are positive real numbers
√
such that pq ̸= (p + q)/2, then p ̸= q
...
Prove that if c is an odd integer, then the equation
n2 + n − c = 0 has no integer solution for n
...
Prove that if x is a nonnegative real number such
that x < ϵ, for every real number ϵ > 0, then x = 0
...
Prove that if x is a rational number and x + y is
an irrational number, then y is an irrational
number
...
Prove that 3 2 is irrational
...
Prove that if n in ,ގthen
n
n
>
n+1
n+2
21
...
Prove that if 7xy ≤ 3x 2 + 2y 2 , then
3x ≤ y
...
Define a function f : ޙ → ޘand sets A and B in
ޘthat is a counterexample to show the statement
If f (A) ⊆ f (B), then A ⊂ B
is false
...
Define a function f : ޙ → ޘand sets C and D in
ޙthat is a counterexample to show the statement
If f −1 (C) ⊆ f −1 (D), then C ⊂ D
is false
...
ޙProve
the statements
...
If A ⊆ B, then f (A) ⊆ f (B)
...
If C ⊆ D, then f −1 (C) ⊆ f −1 (D)
...
If f is an injection, then for all A and B
f (A ∩ B) = f (A) ∩ f (B)
27
...
If f is an injection, then for all A
f −1 (f (A)) = A
29
...
4 Mathematical Induction
ß
A
...
Some simple examples are the following three statements, the third being a
well-known puzzle, called the Tower of Hanoi puzzle
...
For every natural number n, the sum of the first n natural numbers is given by
n(n + 1)
2
2
...
3
...
This is
under the restriction that a disk can be placed on top of another disk only when
it has smaller diameter
...
If the statement is false, often a counterexample is found quickly,
allowing us to reject the statement
...
However, if n = 4, then 6(4) + 1 = 25, which is not a
prime number, and the statement is not true for all natural numbers n
...
Of course, to
establish the fact for all n requires a proof, which we postpone until Example 1
...
A solution for
n = 3 is given by the moves
D3 −→ P 3, D2 −→ P 2, D3 −→ P 2, D1 −→ P 3, D3 −→ P 1,
D2 −→ P 3, D1 −→ P 3
Table 1
1 + 2 + 3 + ··· + n
n(n+1)
2
1
(1)(2)
2
1+2=3
(2)(3)
2
1+2+3 =6
1 + 2 + 3 + 4 = 10
1 + 2 + 3 + 4 + 5 = 15
1 + 2 + 3 + 4 + 5 + 6 = 21
1 + 2 + 3 + 4 + 5 + 6 + 7 = 28
(3)(4)
2
(4)(5)
2
(5)(6)
2
(6)(7)
2
(7)(8)
2
=1
=3
=6
= 10
= 15
= 21
= 28
Confirming Pages
430
Appendix A
Preliminaries
where D1, D2, and D3 represent the three disks of decreasing diameters and P 1, P 2,
and P 3 represent the three pegs
...
Again, the evidence is leading toward the result being true, but we have not
given a satisfactory proof
...
How can we use the
result for three disks to argue that this result holds for four disks? The same sequence
of steps we gave for the solution of the three-disk problem can be used to move the
stack from P 1 to either P 2 or P 3
...
Since the bottom disk is the largest, P 1 can be used as before to move the top three
disks
...
Next, move the remaining (largest) disk on P 1 to P 3, which requires 1 move
...
The total number of moves is now
2(23 − 1) + 1 = 24 − 2 + 1 = 24 − 1 = 15
This approach contains the essentials of mathematical induction
...
The next step, called the
inductive hypothesis, provides a mechanism for advancing from one natural number
to the next
...
The inductive hypothesis is to assume that the result holds when there are n disks on
P 1
...
We did this for
n = 3
...
The proof of this statement, which we omit, is based on the axiomatic foundations of
the natural numbers
...
THEOREM 9
The Principle of Mathematical Induction
Let P be a statement that depends on the natural number n
...
P is true for n = 1
and
2
...
The principle of mathematical induction is also referred to as mathematical induction, or simply induction
...
If
the dominoes are set up so that whenever a domino falls its successor will fall (the
inductive hypothesis), then the entire row of dominoes will fall once the first domino
is toppled (base case)
...
4 Mathematical Induction
431
The principle of mathematical induction is used to prove a statement holds for
all natural numbers, or for all natural numbers beyond a fixed natural number
...
EXAMPLE 1
Prove that for every natural number n,
n
k=1
Solution
k = 1+ 2 + 3 + ···+ n =
n(n + 1)
2
To establish the base case when n = 1, notice that
(1)(2)
2
The inductive hypothesis is to assume that the statement is true for some fixed
natural number n
...
Therefore, by induction the statement holds for all natural numbers
...
In Table 2 we have verified that for n = 1, 2, 3, 4, and 5 the number 3n − 1 is
divisible by 2
...
Next, we
assume that the statement 3 n − 1 is divisible by 2 holds
...
Since 3n − 1 is
Confirming Pages
432
Appendix A
Preliminaries
divisible by 2, then there is a natural number q such that
Table 2
n
3n − 1
1
2
2
8
3
26
4
80
5
242
3n − 1 = 2q
which gives
3n = 2q + 1
Next, we rewrite the expression 3n+1 − 1 to include 3n in order to use the inductive
hypothesis
...
Recall that factorial notation is used to express the product of consecutive natural
numbers
...
...
20! = 2, 432, 902, 008, 176, 640, 000
For a natural number n, the definition of n factorial is the positive integer
n! = n(n − 1)(n − 2) · · · 3 · 2 · 1
We also define 0! = 1
...
Now assume
that the statement n! ≥ 2n−1 holds
...
Applying the inductive
hypothesis to n! gives the inequality
(n + 1)! ≥ (n + 1)2n−1
Confirming Pages
A
...
EXAMPLE 4
Solution
For any natural number n, find the sum of the odd natural numbers from 1 to
2n − 1
...
Table 3
n
2n − 1
1 + 3 + · · · + (2n − 1)
1
1
1
2
3
1+3 =4
3
5
1+3+5 =9
4
7
1 + 3 + 5 + 7 = 16
5
9
1 + 3 + 5 + 7 + 9 = 25
The data in Table 3 suggest that for each n ≥ 1,
1 + 3 + 5 + 7 + · · · + (2n − 1) = n2
Starting with the case for n = 1, we see that the left-hand side is 1 and the
expression on the right is 12 = 1
...
Next,
we assume that 1 + 3 + 5 + · · · + (2n − 1) = n2
...
Confirming Pages
434
Appendix A
Preliminaries
EXAMPLE 5
Let P1 , P2 ,
...
Verify that the number of line segments joining all pairs of points is
n2 − n
2
Solution
In Fig
...
The number of line segments
connecting pairs of points is 10 = (52 − 5)/2
...
1, the result is the graph
shown in Fig
...
Moreover, adding the one additional point requires adding five
additional line segments, one to connect the new point to each of the five original
points
...
P2
P3
P1
P5
P4
Figure 2
These observations lead to the following proof by induction
...
Also since
(12 − 1)/2 = 0, the statement holds for n = 1
...
If there is one
additional point, that is, n + 1 points, then n additional line segments are required
...
4 Mathematical Induction
435
n + 1 points is
n2 − n + 2n
n2 − n
+n=
2
2
n2 + 2n + 1 − 1 − n
=
2
(n + 1)2 − (n + 1)
=
2
Therefore, by induction the statement holds for all natural numbers
...
3 are the first eight rows of Pascal’s triangle
...
1
1
1
1
1
1
1
1
7
3
4
5
6
1
3
6
10
15
21
1
2
1
4
10
20
35
1
5
15
35
1
6
21
1
7
1
Figure 3
In Fig
...
, 7
...
(a + b)0
(a + b)1
(a + b)2
(a + b)3
(a + b)4
(a + b)5
(a + b)6
(a + b)7
1
a+b
a 2 + 2ab + b2
3 + 3a 2 b + 3ab2 + b3
a
a 4 + 4a 3 b + 6a 2 b2 + 4ab3 + b4
a 5 + 5a 4 b + 10a 3 b2 + 10a 2 b3 + 5ab4 + b5
a 6 + 6a 5 b + 15a 4 b2 + 20a 3 b3 + 15a 2 b4 + 6ab5 + b6
a 7 + 7a 6 b + 21a 5 b2 + 35a 4 b3 + 35a 3 b4 + 21a 2 b5 + 7ab6 + b7
Figure 4
The numbers in Pascal’s triangle or the coefficients of an expansion of the form
(a + b)n are called the binomial coefficients
...
3, is
Confirming Pages
436
Appendix A
Preliminaries
located in row 6 (starting with a row 0) and column 3 (starting with a column 0)
...
The next identity
is the equivalent statement about binomial coefficients
...
4 Mathematical Induction
THEOREM 10
Binomial Theorem If a and b are any numbers and n is a nonnegative integer,
then
n
n
n
an +
a n−1 b +
a n−2 b2
(a + b)n =
0
1
2
n
r
+ ···+
n
n−1
a n−r br + · · · +
n
n
abn−1 +
bn
Proof The proof is by induction on the exponent n
...
Next assume that the statement
n
0
(a + b)n =
n
1
an +
n
n−1
a n−1 b + · · · +
n
n
abn−1 +
bn
holds
...
This gives
(a + b)n+1 = (a + b)(a + b)n
n
0
an +
an +
n
1
= (a + b)
n
0
=a
n
0
+b
n
0
=
+
n
0
n
1
n
1
anb +
a n−1 b + · · · +
a n−1 b + · · · +
anb + · · · +
n
1
n
n−1
n
n−1
a n−1 b + · · · +
an +
a n+1 +
n
1
n
n−1
a n−1 b2 + · · · +
n
n
abn−1 +
n
n−1
n
n
bn
abn
n
n
abn +
bn
bn
n
n
abn−1 +
a 2 bn−1 +
n
n−1
n
n
abn−1 +
bn+1
Now, combine the terms with the same exponents on a and b to obtain
(a + b)n+1 =
n
0
a n+1 +
n
0
+ ··· +
+
n
1
n
n−1
anb +
+
n
n
n
1
n
2
+
abn +
n
n
a n−1 b2
bn+1
Confirming Pages
438
Appendix A
Preliminaries
Finally by repeated use of Proposition 1, we have
(a + b)n+1 =
n+1
0
a n+1 +
n+1
1
+ ···+
anb +
n+1
n
n+1
2
abn +
a n−1 b2
n+1
n+1
bn+1
Therefore, by induction the statement holds for all natural numbers
...
4
In Exercises 1–10, use mathematical induction to
show that the summation formula holds for all natural
numbers
...
12 + 22 + 32 + · · · + n2 =
n(n+1)(2n+1)
6
2
...
1 + 4 + 7 + · · · + (3n − 2) =
n(3n−1)
2
4
...
2 + 5 + 8 + · · · + (3n − 1) =
n(3n+1)
2
6
...
3 + 6 + 9 + · · · + 3n =
3n(n+1)
2
8
...
10
...
Find a formula for all natural numbers n for the
sum
2 + 4 + 6 + 8 + · · · + 2n
Verify your answer, using mathematical induction
...
Find a formula for all natural numbers n for the
sum
n
k=1
(4k − 3)
13
...
First show the inequality
holds for n = 5, and then proceed to the second
step when using mathematical induction
...
Show that for all natural numbers n ≥ 3, the
inequality n2 > 2n + 1 holds
...
15
...
16
...
Note
that x 2 − y 2 is divisible by x − y since
x 2 − y 2 = (x + y)(x − y)
...
Use mathematical induction to show that for a
real number r and all natural numbers n,
1 + r + r 2 + r 3 + · · · + r n−1 =
rn − 1
r −1
18
...
a
...
That is,
determine f1 + f2 , f1 + f2 + f3 , f1 + f2 +
f3 + f4 , and f1 + f2 + f3 + f4 + f5
...
Find a formula for the sum of the first n
Fibonacci numbers
...
Show that the formula found in part (b) holds
for all natural numbers
...
4 Mathematical Induction
19
...
be sets
...
Verify that if 0 ≤ r ≤ n, then
n
r
A ∩ (B1 ∪ B2 ∪ · · · ∪ Bn )
= (A ∩ B1 ) ∪ · · · ∪ (A ∩ Bn )
20
...
Verify that
n
r −1
+
n+1
r
n
r
=
n
k
= 2n
23
...
k=0
24
...
1
1
...
x1 = 2 − 3x4 , x2 = 1 − x4 , x3 = −1 − 2x4 , x4 ∈ ޒ
5
...
x = 1, y = 0
2t+4
9
...
x = 0, y = 1, z = 0
−1 − 5t , 6t + 1 , t
2
13
...
S =
17
...
21
...
25
...
29
...
5
3 t , −s
3−
−
4
3t
t ∈ޒ
t ∈ޒ
+ 3, s, t
2
33
...
y = − (x − 2)2 + 3; vertex: (2, 3)
37
...
(2, 3)
b
...
440
s, t ∈ ޒ
x = −2a + b, y = −3a + 2b
x = 2a + 6b − c, y = a + 3b, z = −2a − 7b + c
Consistent if a = −1
Consistent if b = −a
Consistent for all a, b, and c such that c − a − b = 0
Inconsistent if a = 2
Inconsistent for a ̸= 6
39
...
x
x
x
2x
x
3x
c
...
a
...
43
...
b
...
S
S
k
k
k
+
+
=
=
y
3y
2
−6
= {(3 − 2s − t , 2 + s − 2t , s, t ) | s, t ∈ }ޒ
= {(7 − 2s − 5t , s, −2 + s + 2t , t ) | s, t ∈ }ޒ
=3
= −3
̸= ±3
Section 1
...
−1
1 −3
⎡
⎤
2 0 −1 4
1 2 ⎦
3
...
⎡
2
1
2
7
...
0 1
⎡
⎤
1 0 0
31
...
11
...
15
...
19
...
23
...
27
...
1
0
0
1
−1
0
Confirming Pages
441
Answers to Odd-Numbered Exercises
35
...
39
...
43
...
x1 = 1 − 1 x4 , x2 = 1 − 1 x4 , x3 = 1 − 1 x4 , x4 ∈ ޒ
2
2
2
47
...
a
...
c
...
51
...
b
...
d
...
a = 1, b = 0, c = 1; x = −2, y = 2, z = 1
a + 2b − c = 0
a + 2b − c ̸= 0
Infinitely many solutions
a = 0, b = 0, c = 0; x = 4 , y = 1 , z = 1
5
5
Section 1
...
A + B =
0
6
3
...
(A − B) + C = ⎣ 0
1
⎡
−7
2A + B = ⎣ −3
2
−3
5
−2
3
10
2
−2
; BA =
−8
7
...
AB =
−9
−13
⎡
= A + (B + C )
4
7
⎤
4
−18 ⎦
6
−6
6
−7
5
11
...
A(B + C ) =
15
...
2At − B t = ⎣ −1
−3
−18
0
⎤
5
3 ⎦
−2
⎤
9
6 ⎦
10
⎤
9
6 ⎦
11
2
−7
−7
−5
−4
1
⎡
⎤
−1
7
8 ⎦
21
...
(A
−18 −22 −15
19
...
AB = AC =
27
...
A =
−1
0
1
0
31
...
A20 = ⎣ 0
0
1
0
−1
1
0
1
,
1
0
b
−1
,
−1
0
b
1
,
0
−1
1
0
,B =
0
1
0
−1
1
⎤
0
0 ⎦
1
−1
1
35
...
⎡
⎤
1
⎢ 0 ⎥
⎢
⎥
37
...
⎥, then Ax = 0 implies the first column of
⎣
...
0
⎡
⎤
0
⎢ 1 ⎥
⎢
⎥
A has all 0 entries
...
⎥ and so on, to
⎣
...
0
show that each column of A has all 0 entries
...
The only matrix is the 2 × 2 zero matrix
...
Since (AAt )t = (At )t At = AAt , the matrix AAt is
symmetric
...
43
...
Section 1
...
A−1 =
5 −3 1
3
...
⎡
⎤
3
1 −2
−1
3 ⎦
5
...
a
...
b
...
Let x1 ,
...
Then AB = 0
...
(AB)t = B t At = BA = AB
33
...
Now
(AB −1 )t = (B −1 )t At = (B t )−1 At = B −1 A = AB −1
...
If At = A−1 and B t = B −1 , then
(AB)t = B t At = B −1 A−1 = (AB)−1
...
a
...
The matrix is not invertible
...
A−1 = ⎢ 0
1 ⎥
0 −1
⎣
2 ⎦
0
0
0 −1
2
⎡
⎤
3
0
0 0
⎢ −6
3
0 0 ⎥
⎥
11
...
The matrix is not invertible
...
A−1 = ⎢
⎣ 1 −2 −1 1 ⎦
0 −1 −1 1
3
10
17
...
A−1
b
...
a
...
1
= (2I − A)
5
1
− 5 (−5I ) = I
...
If λ = −2, then the matrix is not invertible
...
(A + B)A−1 (A − B) =
=
=
=
ax1 + bx3
cx1 + dx3
so
B=
0
0
1
0
is invertible
...
If A is invertible, then the augmented matrix [A|I ] can
be row-reduced to [I |A−1 ]
...
Similarly, the inverse for an
invertible lower triangle matrix is also lower
triangular
...
From part (a), we have the two linear systems
25
...
a
...
a
...
⎡
⎤
λ
1
λ
− λ−1
− λ−1
λ−1
⎢
⎥
1
1
1
b
...
If A2 − 2A + 5I = 0, then A2 − 2A = −5I , so that
A
)−1
(AA−1 + BA−1 )(A − B)
(I + BA−1 )(A − B)
A − B + B − BA−1 B
A − BA−1 B
Similarly, (A − B)A−1 (A + B) = A − BA−1 B
...
c
...
Notice that if
in addition either a = 0 or c = 0, then the matrix is
not invertible
...
If a and c
are not zero, then these equations are inconsistent
and the matrix is not invertible
...
5
1
...
A = ⎣ −1
3
⎡
−1
b = ⎣ −1
3
⎡
4
5
...
9
...
⎧
⎨
⎩
−3
−1
−2
⎤
⎦
3
−3
−3
⎤
⎦
2x
2x
−
+
2x
3x
−
−
−
2y
y
y
+
+
23
...
−
+
−
−
1
27
...
From the fact that Au = Av, we have A(u − v) = 0
...
1
−1
31
...
x =
b
...
6
3
1
−1
+
−
⎤
1
1 ⎦
1
2
2
2
1
c
...
The determinant is the product of the terms on the
diagonal and equals 24
...
The determinant is the product of the terms on the
diagonal and equals −10
...
Since the determinant is 2, the matrix is invertible
...
Since the determinant is −6, the matrix is invertible
...
a–c
...
det ⎝⎣ 3 −1
2
0
1
10
⎡
S =
⎤⎞
−2
4 ⎦⎠ = 5
1
e
...
Then det(B ′ ) = −2 det(B) =
−10
...
2
−16
9
⎤
−11
4 ⎦
x=⎣
12
⎡
⎤
0
1⎢ 0 ⎥
⎥
x= ⎢
3⎣ 1 ⎦
−1
1
7
a
...
x =
8
5
The general solution is
17
...
=
=
⎡
⎤
⎡
⎤
x1
−3
⎢ x ⎥
0 ⎦, x = ⎢ 2 ⎥, and
⎣ x3 ⎦
−4
x4
−2
1
4
5x2
x2
2x1
3x1
⎡
5y
y
⎤
1
13
...
x = ⎢
⎣ −8 ⎦
7
19
...
11
...
15
...
19
...
23
...
27
...
f
...
The row operation
does not change the determinant, so
det(B ′′ ) = det(B ′ ) = −10
...
Since det(A) ̸= 0, the matrix A does have an
inverse
...
Since the determinant of the matrix is −5x 2 + 10x =
−5x (x − 2), the determinant is 0 if and only if x = 0 or
x = 2
...
y =
b2 −a2
b1 −a1 x
⎡
+
b
...
a
...
det(A) = 2
c
...
⎡
⎤
3
d
...
a
...
a
...
y
b
...
Since the determinant of the coefficient matrix is 0,
A is not invertible
...
x
5
Ϫ5
5
x
d
...
a
...
45
...
x =
Ϫ5
x
5
41
...
x2
0
0
1
4
y2
16
16
4
9
x
0
0
1
2
y
−4
4
−2
3
1
1
1
1
1
=136x 2 − 16y 2 − 328x + 256 = 0
49
...
x = − 103 , y =
25
=− , y =
28
10
103 , z
=
−1
−8
−1
−8
4
3
−3
4
=−
42
103
53
...
29
28
Confirming Pages
445
Answers to Odd-Numbered Exercises
Section 1
...
A = E1 E2 · · · E6
⎤
1 0 0
1
...
E = ⎣ 2 1 0 ⎦
0 0 1
⎡
⎤
1 2
1
4 ⎦
b
...
a
...
EA = ⎣ 3
−8
2
1
−2
5
...
I = E3 E2 E1 A
=
1
0
−3
1
1
13
...
A = LU = ⎣ 1
−1
0
1
10
1
−2
0
1
1
2
0
1
7
...
I = E5 E4 E3 E2 E1 A
⎡
1 0
E1 = ⎣ −2 1
0 0
⎡
1 −2
1
E3 = ⎣ 0
0
0
0
10
⎤
0
0 ⎦
1
⎤
0
0 ⎦
1
⎡
1
E5 = ⎣ 0
0
−1 −1 −1 −1 −1
b
...
a
...
• LU factorization:
−1 −1 −1
b
...
A = LU =
0
1
0
0
1
0
⎤
0
0 ⎦
1
⎤
11
0 ⎦
1
⎡
⎡
1
U =⎣ 0
0
−2
1
0
4
1
0
⎤
−3
2 ⎦
1
⎤
x1 + 4x2 − 3x3
⎦
x2 + 2x3
• y = Ux = ⎣
x3
⎡
⎤
0
• Solve Ly = ⎣ −3 ⎦: y1 = 0, y2 = −3, y3 = 1
1
• Solve U x = y: x1 = 23, x2 = −5, x3 = 1
⎤
0
0 ⎦
1
⎤
1 0 0
E4 = ⎣ 0 1 1 ⎦
0 0 1
⎡
⎤
1 0
0
0 ⎦
E6 = ⎣ 0 1
0 0 −1
1
E2 = ⎣ 0
0
⎡
19
...
• LU factorization:
⎡
1
1
2
−1
0
1
0
−1
0
0
1
0
1
⎢ 0
⎢
U =⎣
0
0
−2
1
0
0
3
2
1
0
⎢
L=⎢
⎣
⎡
⎤
0
0 ⎥
⎥
0 ⎦
1
⎤
1
2 ⎥
⎥
1 ⎦
1
Confirming Pages
446
Answers to Odd-Numbered Exercises
⎡
⎤
x1 − 2x2 + 3x3 + x4
⎢
⎥
x2 + 2x3 + 2x4
⎥
• y = Ux = ⎢
⎣
⎦
x3 + x4
x4
⎡
⎤
5
⎢ 6 ⎥
⎥
• Solve Ly = ⎢
⎣ 14 ⎦ :
−8
Section 1
...
x1 = 2, x2 = 9, x3 = 3, x4 = 9
3
...
Then x1 = x5 = 3, x2 = 1 x5 = 1, x3 =
3
1
3 x5 = 1, x4 = x5 = 3
...
Let x1 , x2 ,
...
y1 = 5, y2 = 1, y3 = 4, y4 = −2
⎤⎡
1
0
0 ⎦⎣ 2
0
0
1
−3
25
...
A = LU = ⎣ 1
1
0
1
1
⎤⎡
1
0
0 ⎦⎣ 0
1
0
−5
0
5
1
−4
1
⎤⎡
0
2
0 ⎦⎣ 0
1
0
−4
1
1
1
0
A−1 = U −1 L−1
⎡
⎤
1
1
−2 0 ⎡ 1
2
⎢
⎥
=⎢ 0
1 1 ⎥ ⎣ −1
3 ⎦
⎣
0
0
0 1
3
⎤
⎡
1 −1 0
2
⎢
1 ⎥
2
= ⎣ −1
3
3 ⎦
0
−1
3
0
c
d
0
−3
1
0
2
4
−5
1
4
1
1
0
1
3
⎦
0
1
⎤
−1
−1 ⎦
3
0
1
−1
⎤
0
0 ⎦
1
1
3
e
f
=
0
1
x3
⎤
29
...
A = PLU
⎡
0 1
=⎣ 1 0
0 0
300
x1
1
0
This gives the system of equations ad = 0, ae = 1,
bd = 1, be + cf = 0
...
But this is
incompatible with the third equation
...
If A is invertible, there are elementary matrices
E1 ,
...
Similarly, there are
elementary matrices D1 ,
...
Then A = Ek · · · E1 Dℓ · · · D1 B, so A
is row equivalent to B
...
As a sample solution, let x4 = 200, x6 = 300, x7 = 100; then x1 = 700, x2 = 500, x3 = 1000, x5 = 500.
x1 = 150 − x4, x2 = 50 − x4 − x5, x3 = 50 + x4 + x5
x1 = 1.2, x3 = 1.2
⎡
⎤
0
...
04 0
...
a
...
03 0
...
04 ⎦
0
...
3
0
...
The internal demand vector is
⎡
⎤ ⎡
⎤
300
22
A ⎣ 150 ⎦ = ⎣ 20 ⎦
...
⎡
⎤
1
...
06 0
...
(I − A)−1 ≈ ⎣ 0
...
04 0
...
05 0
...
13
d
...
02 0
...
03 1
...
05 0
...
2
= ⎣ 454
...
3
⎤⎡
⎤
0
...
05 ⎦ ⎣ 400 ⎦
1
...
a
...
⎩
3, 960, 100a
d
...
a =
1970b
1980b
1990b
+
+
+
c
c
c
=
=
=
80
250
690
1000
800
4I1
b
...
1200
⎧
⎨
⎩
I1
4I1
+
3I2
3I2
−
+
I2
3I2
3I2
+
5I3
+
I3
+
5I3
=
=
8
10
=
=
=
0
8
10
Solution: I1 ≈ 0
...
7, I3 ≈ 0
...
The model gives an estimate, in billions of dollars, for health care costs in 2010 at (27/20)(2010)² − (10,631/2)(2010) + 5,232,400 = 2380.
b
...
a
...
a
...
9
0
...
08
0
...
A2
1, 500, 000
600, 000
d
...
The transition matrix is
⎡
0
...
1
0
1, 314, 360
785, 640
21. The resulting linear system is
⎧ 4a −  b      −  d = 50
⎪ −a + 4b −  c      = 55
⎨    −  b + 4c −  d = 45
⎩ −a      −  c + 4d = 40
The solution is a ≈ 24.4, b ≈ 25.6, c ≈ 23.1, d ≈ 21.9.
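A system this small can be solved directly; a quick check with NumPy (assumed here, not part of the text) reproduces the rounded values above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the 4 x 4 system in Exercise 21
A = np.array([[ 4, -1,  0, -1],
              [-1,  4, -1,  0],
              [ 0, -1,  4, -1],
              [-1,  0, -1,  4]], dtype=float)
b = np.array([50, 55, 45, 40], dtype=float)

x = np.linalg.solve(A, b)
print(np.round(x, 1))  # [24.4 25.6 23.1 21.9] -> a, b, c, d
```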
a. det(A) = −8
The numbers of people in each category after 1 month are given by
A (20,000, 20,000, 10,000)ᵗ = (23,000, 15,000, 12,000)ᵗ
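The one-month update is just a matrix–vector product with the transition matrix. A sketch with a hypothetical 3 × 3 stochastic matrix (the entries below are illustrative, not the book's):

```python
import numpy as np

# Hypothetical transition matrix: column j tells how group j redistributes;
# each column sums to 1, so total population is preserved
T = np.array([[0.8, 0.1, 0.3],
              [0.1, 0.6, 0.1],
              [0.1, 0.3, 0.6]])
x0 = np.array([20_000, 20_000, 10_000])

x1 = T @ x0        # population in each category after one month
print(x1)          # one-step update
print(x1.sum())    # 50000: the total is unchanged
```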
c. …
d. …
e. A⁻¹ = (1/8) ⎡ −3  −8  −2   7 ⎤
              ⎢  5   8   6  −9 ⎥
              ⎢  5   0  −2  −1 ⎥
              ⎣ −4   0   0   4 ⎦
⎡
⎤
3
⎢ 1 ⎥
⎥
f
...
T
25. a. ⎡ a   b ⎤ ⎡ a   b ⎤ = ⎡ a² + bc     0     ⎤ = (a² + bc)I
       ⎣ c  −a ⎦ ⎣ c  −a ⎦   ⎣    0     a² + bc ⎦
c. By part (a), M² = kI for some k.
a
...
b
...
9. Bᵗ = (A + Aᵗ)ᵗ = Aᵗ + (Aᵗ)ᵗ = Aᵗ + A = B; Cᵗ = (A − Aᵗ)ᵗ = Aᵗ − (Aᵗ)ᵗ = Aᵗ − A = −C
27. a = 0, c = 0, b = 0; a = 0, c = 1, b ∈ ℝ; a = 1, c = 0, b ∈ ℝ; a = 1, b = 0, c = 1
29. T  31. T  33. T  35. T  37. F  39. T  41. T
24. F  43. F
45
...
1
1
...
5
...
Chapter Test: Chapter 1
1. F  3. T  5. T  7. T  9. T  11. T  13. T  15. T  17. T  19. T  21. T
9. All vectors of the form (…, 2a − 3b)ᵗ, a, b ∈ ℝ
11. v = 2e1 + 4e2 + e3
13. w = (2, 1, −1)ᵗ
17. …
Section 2
...
−2
−1
5
Solution: c1 = 7 , c2 = − 4
4
−2
−1
The vector
3
−2
is a combination of
and
−
−
=
=
c2
2c2
3
1
Solution: The linear system is inconsistent
...
Yes
1
−2
1
2
⎧
⎨ −4c1
4c1
21
...
+
−
−
c3
5c3
5c3 =
=
=
= − 238 , c3 =
121
−3
−3
4
3
121
The vector (−3, −3, 4)ᵗ is a combination of the three vectors.
c1 + c2 − c3 = 2
Solution: The linear system is inconsistent. The vector (−1, 0, 2)ᵗ cannot be written as a combination of the other vectors.
All 2 × 2 vectors
...
All vectors of the form
such that a ∈
...
All 3 × 3 vectors
...
Yes
⎡
−→
1
1
−→
3
−6
⎡
−2
⎣ 3
4
2
⎣ −2
0
9
...
c1
2c1
19
...
Yes
⎡
2
⎢ −3
⎢
⎣ 4
1
3
0
−3
−1
−1
3
1
6
−1
2
2
3
1
0
0
1
1
−3
2
0
0
; yes
0
1
; no
⎡
⎤
1
−3
10 ⎦ −→ ⎣ 0
10
0
0
1
0
⎤
2
1 ⎦
0
−2
0
−1
⎡
⎤
1
2
8 ⎦ −→ ⎣ 0
2
0
0
1
0
0
0
1
0
1
2
⎤
⎡
−1
1
1 ⎦ −→ ⎣ 0
5
0
0
1
0
0
1
0
⎤
0
0 ⎦
1
−1
−1
2
3
⎤
⎡
3
1
⎢
−17 ⎥
⎥ −→ ⎢ 0
⎣ 0
17 ⎦
7
0
0
1
0
0
0
0
1
0
⎤
3
−1 ⎥
⎥
2 ⎦
0
1
4
2
−4
2
3
−4
⎤
⎦
13. Infinitely many ways: c1 = 3 + 6c4, c2 = −2 − c4, c3 = 2 + 2c4, c4 ∈ ℝ
17. No
⎡
2
⎢ 2
⎢
⎣ −1
3
−2
3
1
4
3
−1
2
−2
−1
3
2
1
3
−1
2
2
⎡
⎤
1
−2
⎢ 0
4 ⎥
⎥ −→ ⎢
⎣ 0
4 ⎦
0
0
⎡
⎤
1
2
⎢
1 ⎥
⎥ −→ ⎢ 0
⎣ 0
−1 ⎦
2
0
0
1
0
0
0
1
0
0
0
0
1
0
0
0
1
0
⎤
−1
−1 ⎥
⎥
3 ⎦
0
⎤
0
0 ⎥
⎥
0 ⎦
1
21
...
Since
3
1
−
23. Not possible.
x³ − 2x + 1 = (1/2)(1 + x) + 2(−x) + 0(x² + 1) + (1/2)(2x³ − x + 1)
⎡
⎤
a
29
...
c
31. Since c1 ≠ 0, v1 = −(c2/c1)v2 − · · · − (cn/c1)vn.
Let v ∈ S1. … If v ∈ S2, then v = c1v1 + · · · + (c·ck)vk, so v ∈ S1.
37. Since the linear system is assumed to be consistent, it must have infinitely many solutions.
3
1
...
2
−3
= 1, the vectors are linearly
25
...
⎡
⎡
⎤
⎤
−1 2
−1 2
5
...
29
...
33
...
7
...
⎡
3
⎢ −1
⎢
9
...
= 0, the vectors are linearly
⎡
⎤
3
3
⎢ 0
−1 ⎥
⎥ −→ ⎢
⎣ 0
0 ⎦
1
0
vectors are linearly independent
...
13
...
1
15
...
Any set of vectors containing the zero vector is linearly
dependent
...
a
...
A3 = A1 + A2
21
...
a
...
b
...
Since
−4
4
−1
⎡
37
...
Linearly independent
Linearly dependent
If x = 0, then c1 = 0, and if x = 1, then c2 = 0.
Now letting x = 1 and x = −1, c1 = c2 = c3 = 0.
If a ≠ 0, then u = −(b/a)v.
Setting a linear combination of w1, w2, w3 to 0, we have 0 = c1w1 + c2w2 + c3w3 = c1v1 + (c1 + c2 + c3)v2 + (−c2 + c3)v3 if and only if c1 = 0, c1 + c2 + c3 = 0, and −c2 + c3 = 0, if and only if c1 = c2 = c3 = 0.
Consider c1v1 + c2v2 + c3v3 = 0, which is true if and only if c3v3 = −c1v1 − c2v2. … Therefore, c3 = 0.
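Independence claims like these can be spot-checked numerically: a set of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A small sketch (the vectors here are chosen only for illustration):

```python
import numpy as np

# Columns are the vectors under test; these particular vectors are
# illustrative, not taken from a specific exercise
V = np.column_stack([[1, 0, 1],
                     [1, 1, 0],
                     [0, 1, 1]])

rank = np.linalg.matrix_rank(V)
print(rank == V.shape[1])  # True -> the three vectors are linearly independent
```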
41. Since A1, …, An are linearly independent, if Ax = x1A1 + · · · + xnAn = 0, then x1 = x2 = · · · = xn = 0.
If ad − bc = 0, then the column vectors are linearly dependent.
So the vectors are linearly independent if and only if a ≠ ±1 and a ≠ 0.
a
...
b
...
d
...
7. b. Yes, since the determinant of A is nonzero.
Since the determinant of the coefficient matrix is nonzero, the matrix A is invertible, so the linear system has a unique solution.
x = 11/4, y = −17/4, z = 7/4, w = 1
⎡
⎡
⎡
⎤
⎤
⎤
⎤ ⎡
9
...
1
3
2
b1
x1 ⎣ 2 ⎦ + x2 ⎣ −1 ⎦ + x3 ⎣ 3 ⎦ = ⎣ b2 ⎦
b3
1
1
−1
b
...
c. Yes, since the determinant of A is nonzero, A⁻¹ exists and the linear system has a unique solution.
If a = 1, b = 1, c = 3, then the system is inconsistent and v = (1, 1, 3)ᵗ is not a linear combination of the vectors.
b=⎣
−2 ⎦
5
Chapter Test: Chapter 2
1. F  3. T  5. F  7. T  9. F  11. T  13. T  15. T  17. F  19. T  21. F  23. F  25. F  27. F  29. F  31. F  33. …
1
1
...
3
...
7
...
y
9
...
b
...
⎡
⎤ ⎡
⎤ ⎡
⎤
1+t
1 + 0t
1
c
...
Each of the 10 vector space axioms is satisfied.
Each of the 10 vector space axioms is satisfied.
Since (f + g)(0) = f(0) + g(0) = 1 + 1 = 2, V is not closed under addition and hence is not a vector space.
a
...
b
...
Section 3
...
The set S is a subspace.
The set S is not a subspace.
Since this vector is not in V, V is not a vector space.
Since V is not closed under vector addition, V is not a vector space.
Each of the 10 vector space axioms is satisfied with vector addition and scalar multiplication defined in this way.
No, V is not a vector space. … Then A + B is not invertible and hence not in V.
v=
15
...
19
...
a
...
b
...
⎡
1
0
0
1
, and the
⎤
1
23
...
The additive identity is 0 = ⎣ 2 ⎦
...
Then the additive inverse is
3 + 2a
⎡
⎤
1−a
−u = ⎣ 2 + a ⎦
...
/
5
...
2ޒIf u =
11
...
−1
3
2
−1
∈ S
...
Since
⎡
⎤
⎤ ⎡
⎤
y1
x1 + cy1
x1
⎣ x2 ⎦ + c ⎣ y2 ⎦ = ⎣ x2 + cy2 ⎦
x3
y3
x3 + cy3
⎡
and (x1 + cy1 ) + (x3 + cy3 ) = −2(c + 1) = 2 if and
only if c = −2, so S is not a subspace of
...
Since
⎡
⎡
⎤
⎤
s − 2t
x − 2y
⎣
⎦+c⎣
⎦
s
x
t +s
y +x
⎡
⎤
(s + cx ) − 2(t + cy)
⎦
s + cx
=⎣
(t + cy) + (s + cx )
is in S, then S is a subspace
...
No, S is not a subspace
...
Yes, S is a subspace
...
21
...
11
...
15
...
19
...
No, S is not a subspace
...
Since
⎡
1
⎣ 1
0
−1
−1
1
⎡
⎤
1
1
−1 ⎦ → ⎣ 0
1
0
−1
2
0
−1
1
0
−1
0
3
the vector v is in the span
...
Since
⎡
1
⎢ 1
⎢
⎣ 0
−1
0
1
2
1
⎤
⎡
−2
1
⎢
1 ⎥
⎥→⎢ 0
⎣ 0
6 ⎦
5
0
1
−1
−4
−3
0
1
0
0
1
−2
0
0
the vector v is in the span
...
⎩
⎭
2
3
Therefore, S is a subspace
...
Yes, the vectors are linearly independent
...
S ̸= 3ޒ
1
1
1
47
...
49
...
Since
A(B1 + cB2 ) = AB1 + cAB2
= B1 A + c(B2 A)
= (B1 + cB2 )A
29
...
⎧⎡
⎫
⎤
⎨ a
⎬
31
...
span(S ) =
a
b
a+b
3
2a−b
3
a, b ∈ ޒ
ax 2 + bx + c a − c = 0
⎧⎡
⎫
⎤
a
⎨
⎬
37
...
span(S ) = ⎣ b ⎦ a, b ∈ ޒ
⎩ b−2a
⎭
35
...
Yes, S is linearly independent
...
a
...
Yes, S is linearly independent
...
a
...
c
...
span(S ) = 3ޒ
No, S is linearly dependent
...
span(H ) = ; 3ޒH is linearly independent
...
a
...
No, S is linearly dependent
...
2x 2 + 3x + 5 = 2(1) − (x − 3) + 2(x 2 + 2x )
d
...
a–b Since
⎡
⎤
⎡
⎤
⎡
⎤
−s
−1
0
⎣ s − 5t ⎦ = s ⎣ 1 ⎦ + t ⎣ −5 ⎦
2s + 3t
2
3
453
then B1 + cB2 ∈ S and S is a subspace
...
3
1
...
3 = ) 3ޒ
3
...
5
...
7
...
2ޒ
9
...
3ޒ
11
...
Since dim (M2×2 ) = 4, then S is a basis
...
The set S is a linearly independent set of three vectors in ℝ³ and so is a basis.
The set S is linearly dependent and is therefore not a basis.
The set S is a linearly independent set of three vectors in P2, so S is a basis.
A basis for S is B = ⎣ −1 ⎦, ⎣ 1 ⎦ and
⎩
⎭
0
1
dim(S ) = 2
...
A basis for S is
B=
1
0
0
0
,
0
1
1
0
,
0
0
0
1
and dim(S ) = 3
...
A basis for S is B = x , x 2 and dim(S ) = 2
...
The set S is already a basis for ℝ³ since it is a linearly independent set of three vectors in ℝ³.
A basis for the span of S is given by
⎧⎡
⎤⎡
⎤⎡
⎤⎫
2
0
−1 ⎬
⎨
B = ⎣ −3 ⎦, ⎣ 2 ⎦, ⎣ −1 ⎦
...
span(S ) = ޒ
29
...
Observe that
⎩
⎭
0
2
4
span(S ) =
...
A basis for 3ޒcontaining S is
⎧⎡
⎤⎡
⎤⎡
⎤⎫
2
1
1 ⎬
⎨
B = ⎣ −1 ⎦, ⎣ 0 ⎦, ⎣ 0 ⎦
⎩
⎭
3
2
0
33
...
A basis for 3ޒcontaining S is
⎧⎡
⎤⎡
⎤⎡
⎤⎫
1
1 ⎬
⎨ −1
B = ⎣ 1 ⎦, ⎣ 1 ⎦, ⎣ 0 ⎦
⎩
⎭
3
1
0
37
...
dim(W ) = 2
Section 3
...
[v]B =
⎡
2
−1
B
⎡
⎤
⎤
1
1
= ⎣ 2 ⎦; [v]B2 = ⎣ 1 ⎦
−1
0
1
−4
B
[v]B2 = [I ]B2 [v]B1 =
1
⎡
3
B
⎢
15
...
[I ]B2 = ⎣ 1 0 0 ⎦
1
0 1 0
⎡
⎤
5
B2
[v]B2 = [I ]B1 [v]B1 = ⎣ 2 ⎦
3
⎡
⎡
⎤
⎤
−a − b + c
a
⎦
a +b
19
...
a
...
[v]B2 = [I ]B1 2 ⎦ = ⎣ 1 ⎦
3
3
1
2
4
2
c
...
; [v]B2 =
−1
1
1
1
13
...
a
...
[v]B = ⎣ −1 ⎦
3
⎡
⎤
5
5
...
[v]B = ⎢
⎣ −2 ⎦
4
9
...
[v]B1
⎡
B
B
1
0
1
2
=
3
4
1
4
=
6
4
4
4
B
=
5
8
B
=
8
8
d
...
Since v1 can be written as v1 = (−c2/c1)v2 + (−c3/c1)v3 + · · · + (−cn/c1)vn, then V = span{v2, v3, …, vn}.
a
...
To see this, consider
B
25
...
[I ]B2
1
⎡
−1
=⎣ 2
0
−1
2
−1
b
...
5
1
...
y1 = e 2x , y2 = e 3x
b
...
c
...
y(x ) = e x + 2xe x
7
...
yc (x ) = C1 e 3x + C2 e x
b
...
y(x ) =
⎤ ⎡
⎤
2
1
⎣ −3 ⎦ = ⎣ −3 ⎦
1
4
xe −2x
e −2x − 2xe −2x
=
c
...
u1 v2 − v1 u2
and
x
y
v1
v2
Chapter Test: Chapter 3
1. T  3. F  5. F  7. F
9. The matrix is not in S.
a
...
b
...
10
...
T
Review Exercises Chapter 3
1
...
a
...
b
...
1
,
1
α=
= e 5x > 0 for all x
...
α
, then
b
...
y(x ) = C1 e 2x + C2 e 3x
3
...
y1 = e −2x , y2 = xe −2x
e −2x
b
...
11. T  13. T  15. F  17. F  19. T  21. T  23. T  25. F  27. T  29. F  31. F  33. F
1
1
...
3
...
5
...
7
...
9
...
11
...
13
...
15
...
2
−2
17
...
T (u) =
; T (v) =
3
−2
b
...
Yes
0
0
19
...
T (u) =
; T (v) =
0
−1
b
...
T (u + v) =
c
...
21
...
T (−3 + x − x 2 ) = −1 − 4x + 4x 2
3
22
=
7
−11
a
...
The polynomial 2x 2 − 3x + 2 cannot be written
as a linear combination of x 2 , −3x , and −x 2 + 3x
...
Yes
...
A =
0 −1
−1
0
b
...
Observe
0
−1
that ⎤⎞
⎛⎡ these are the column vectors of A
...
The zero vector is the only vector in 3ޒsuch that
⎛⎡
⎤⎞
x
0
T ⎝⎣ y ⎦⎠ =
0
z
⎛⎡
⎤⎞ ⎡
⎤
1
7
b
...
T
27
...
31
...
0
−1
cT1 (v) + T1 (w)
cT2 (v) + T2 (w)
35
...
T (kA + C ) = (kA + C )B − B(kA + C )
= kAB − kBA + CB − BC
= kT (A) + T (C )
1
39
...
T (cf + g) =
cf (x ) + g(x ) dx
0
1
=
0
1
cf (x ) dx +
g(x ) dx
0
1
=c
0
1
f (x ) dx +
g(x ) dx
0
= cT ( f ) + T (g)
b
...
Since neither v nor w is the zero vector, if either
T (v) = 0 or T (w) = 0, then the conclusion holds
...
Since v and w are
linearly independent, then a0 v + b0 w ̸= 0 and since T
is linear, then T (a0 v + b0 w) = 0
...
Let T (v) = 0 for all v in
...
2
1
...
−5
, v is not in N (T )
...
Since T (p(x )) = 2x , p(x ) is not in N (T )
...
Since T (v) =
7
...
⎛⎡
⎤⎞
−1
9
...
1
11
...
13
...
15
...
0
17
...
⎣ 1 ⎦
⎩
⎭
1
⎧⎡
⎤⎡
⎨ 2
21
...
x , x
⎧⎡
⎤⎡
⎨ 1
25
...
⎣ 0 ⎦, ⎣
⎩
0
29
...
a
...
0
⎧⎡
⎤ ⎡
⎤⎫
0 ⎬
⎨ −2
b
...
Since dim(N (T )) + dim(R(T )) = dim( 3 = ) 3ޒand
dim(R(T )) = 2, then dim(N (T )) = 1
...
a
...
b
...
T ⎝⎣ y ⎦⎠ =
y
z
37
...
The range R(T ) is the subspace of Pn consisting of
all polynomials of degree n − 1 or less
...
dim(R(T )) = n
c
...
a
...
dim(N (T )) = 1
41
...
a
...
b
...
45
...
Section 4
...
T is one-to-one
...
T is one-to-one
...
T is one-to-one
...
T is onto
...
T is onto
...
13
...
17
...
21
...
a basis
a basis
a basis
a basis
a basis
Since det(A) = det
1
−2
then T is an isomorphism
...
A−1 = − 1
3
c
...
a
...
⎡
⎤
−1 −1 −1
−1 = ⎣
0
0
1 ⎦
b
...
A T
z
⎡
⎤⎡
⎤
−1 −1 −1
−2x + z
0
1 ⎦⎣ x − y − z ⎦
=⎣ 0
−1 −2 −2
y
⎡
⎤
x
=⎣ y ⎦
z
25
...
27
...
29
...
Since T (A) = 0 implies that A = 0,
T is one-to-one
...
Hence,
T is an isomorphism
...
Since T (kB + C ) = A(kB + C )A−1 = kABA−1 +
ACA−1 = kT (B) + T (C ), T is linear
...
If C is
Confirming Pages
458
Answers to Odd-Numbered Exercises
a matrix in Mn×n and B = A−1 CA, then
T (B) = T (A−1 CA) = A(A−1 CA)A−1 = C , so T is onto
...
⎛⎡
⎤⎞
33
...
Since
⎧⎡
⎨
V = ⎣
⎩
define T : V → 2ޒby
⎛⎡
x
⎦ x, y ∈ ޒ
y
⎭
x + 2y
⎤⎞
x
⎦⎠ =
y
x + 2y
T ⎝⎣
x
y
−2
6
T
−1
−2
=
−3
;
−3
T
−1
−2
=
−3
3
−2
6
−1
−2
=
b
...
a
...
a
...
T
37
...
3ޒThen a line L through
the origin can be given by
−2
−4
B′
L = {t v| t ∈ }ޒ
Now, let T : 3ޒ →− 3ޒbe an isomorphism
...
Also, by Theorem 8, T (v) is
nonzero
...
The proof for a
plane is similar with the plane being given by
P = {su + t v| s, t ∈ }ޒ
for two linearly independent vectors u and v in
...
a
...
a
...
T (x 2 − 3x + 3) = x 2 − 3x + 3;
T (x 2 − 3x + 3)
B′
′
= [T ]B [x 2 − 3x + 3]B
B
⎡
⎤ ⎡
⎤
1
3
B′ ⎣
1 ⎦ = ⎣ −3 ⎦
= [T ]B
1
1
a
b
, then T (A) =
c −a
⎡
⎤
0
0 0
a
...
If A =
2
1
=
9
−1
⎡
⎤
⎤ ⎡
⎤
1
1
3
b
...
T
B
= [T ]B
B
8
3
Section 4
...
a
...
T
T
1
−2
2
3
1
−2
0
6
=
B
−2
;
0
⎡
0
2c
−2b
0
...
a
...
[T ]B ′
1
=
9
c
...
[T ]B ′ =
B
1
3
5
−1
2
5
′
1
9
−2
5
5
1
′
1
9
⎡
22
−1
1
11
⎤
e
...
[T ]B ′ =
C
c
...
[S ]B ′ =
B
1
0
B′
[T ]B [S ]B ′
B
0
2
1
0
0
1
0
=⎣ 0
0
0
1
0
′
e
...
The function S ◦ T is the identity map; that is,
(S ◦ T )(ax + b) = ax + b so S reverses the action
of T
...
[T ]B =
1
0
0
−1
◦
T ]B = [S ]B [T ]B =
2
1
1
4
−1
10
⎤
3
3
1
27
...
[−3T + 2S ]B = ⎣ 2 −6 −6 ⎦
3 −3 −1
⎡
⎤
3
b
...
a
...
⎣ −5 ⎦
5
⎡
⎤
0 0 0 6
0
⎢ 0 0 0 0 24 ⎥
⎢
⎥
0 ⎥
31
...
a
...
[T ]B = ⎣ 0
C
b
...
22
−1
5
−1
23
...
[2T + S ]B = 2[T ]B + [S ]B =
25
...
[S
2
−1
1
11
′
21
...
19
...
[S ]B = ⎢
B
⎣ 0 1 0 ⎦
0 0 1
⎡
⎤
0 1 0 0
′
[D]B = ⎣ 0 0 2 0 ⎦
B
0 0 0 3
⎡
⎤
1 0 0
′
′
[D]B [S ]B = ⎣ 0 2 0 ⎦ = [T ]B
B
B
0 0 3
2
7
a
c
35
...
[T ]B = ⎢
⎢
⎢
⎢
⎣
1
0
0
...
...
...
0
0
0
0
1
1
...
...
5
1
...
...
0
0
0
...
...
...
0
0
0
1
−1
2
−1
[T ]B2 [v]B2 =
b
0
d −a
−b
0
0
0
...
...
...
...
...
...
2
3
1
2
⎤
0
b ⎥
⎥
−c ⎦
0
0
0
0
...
...
a
...
−1
0
+ (−9)
⎤
1
1
; [T ]B2 =
[T ]B1 [v]B1 =
1
1
1
1
[T ]B2 [v]B2 =
2
0
2
0
0
0
3
−2
1
2
−5
2
[T ]B2 = P −1 [T ]B1 P
=
5
...
[T ]B1
[T ]B2
b
...
P = [I ]B1 =
2
1
1
=
23
2
1
3
1
−2
−1
=
1
3
9
2
B
=⎣
=
1
1
9
...
[T ]B1 =
−1
6
−2
6
1
−2
−1
1
−1
2
B
P = [I ]B1 =
2
⎤
0
0 ⎦
1
0
0
0
=
1
2
⎡
=⎣
To show the results are the same, observe that
1
−1
1
3
−1
B
7
...
−7
=
⎤⎡
⎤ ⎡
⎤
0
3
1
0 ⎦⎣ 2 ⎦ = ⎣ 0 ⎦
1
−4
−2
To show that the results are the same, observe that
⎡
⎤
⎡
⎤
⎡
⎤ ⎡
⎤
1
−1
0
1
1 ⎣ 0 ⎦ + 0 ⎣ 1 ⎦ + (−2) ⎣ 0 ⎦ = ⎣ 0 ⎦
1
0
1
−1
To show the results are the same, observe that
1
1
−1
0
1
1
=⎣ 0
0
[T ]B2 [v]B2
T is
⎡
⎡
2
0
0
3
2
3
1
2
−1
1
[T ]B2 = P −1 [T ]B1 P
⎤⎡
⎤
⎡
⎤
0
1
1
0 ⎦⎣ 2 ⎦ = ⎣ 0 ⎦
1
−1
−1
=
1
−2
=
1
2
1
−1
1
1
1
−2
−1
1
−1
2
−1
1
⎡
0
=⎣ 0
0
⎡
0
=⎣ 0
0
15
...
[T ]−1 =
S
5
...
[T ]S =
b
...
17
...
Also since B and C are
similar, there is an invertible matrix Q such that
C = Q −1 BQ
...
19
...
Now, since A
and B are similar matrices, there exists an invertible
matrix P such that B = P −1 AP
...
[T ]−1
S
√
−√2/2
2/2
=
⎡ √
−1/2
√
3/2
0
3/2
7
...
⎣ 1/2
0
b
...
1
0
x
5
Ϫ5
⎡ √
3/2
c
...
a
...
b
...
Since A and B are similar matrices, there exists an
invertible matrix P such that B = P −1 AP
...
6
1
0
1
...
0 −1
x
5
0
0
9
...
3
0
b
...
[T ]S =
B
Ϫ10
10
Ϫ10
x
c
...
[T ]B = ⎣
7
−2
2
0
=
y
5
A
i
...
1
0
0
0
=
0
0
0
1
1
0
2
2
=
2
2
0
1
d
...
N (T ) = {0}
d
...
⎧⎡
⎤⎡
⎤⎫
0 ⎪
⎪ 1
⎪
⎪
⎨⎢
⎬
1 ⎥⎢ 1 ⎥
⎥, ⎢
⎥
e
...
No, T is not onto since dim(R(T )) = 2 and
⎡
⎤
a
⎢ b ⎥
⎥
dim(
...
a
...
⎡
⎤
x
⎢ x +y ⎥
x
⎥
b
...
⎧⎡
⎤⎡
⎤⎡
−1
⎪ 1
⎪
⎨⎢
0 ⎥⎢ 1 ⎥⎢
⎥⎢
⎥⎢
g
...
a
...
[S ]B =
c
...
0
1
⎥
⎥
⎦
1
3x
2
3x
⎤
B
+ 1y
3
− 1y
3
⎤
⎥
⎥
⎦
−x
y
−1
0
= [S
◦
T and T
0
1
T ]B
◦
S reflect a vector
1
0
1
0
5
...
[T ]B =
◦
x
y
=
0
; [T ]B =
−1
−1
0
⎥
⎥
⎦
⎤
x
⎢ x +y ⎥
⎥
=⎢
⎣ x − y ⎦
...
[T ]B =
B
⎡
⎤
1 0 0
7
...
[T ]B = ⎣ 0 0 1 ⎦
0 1 0
⎡
⎡
⎡ ⎡
⎤⎤
⎤
⎤
−1
−1
−1
b
...
N (T ) = ⎣ 0 ⎦
⎩
⎭
0
d
...
[T n ]B = ⎣ 0
0
⎤n
0
1 ⎦
0
0
0
1
9
...
Then
(T ◦ (I − T ))(v) = T ((I − T )(v)) = T (v − T (v))
= T (v) − T 2 (v) = I (v) = v
Chapter Test: Chapter 4
1. F  3. T  5. F  7. F  9. F  11. F  13. T  15. T  17. T  19. T  21. F  23. F  25. F  27. T  29. T  31. F  33. F  35. T  37. T  39. T
Chapter 5
Section 5.1
1. λ = 3
3. λ = 1
7. a. λ² + 5λ = 0
b
...
v1 =
1
−2
3
d
...
a
...
λ1 = 1
c
...
1
0
1
0
−2
1
1
0
=
1
0
= 1
11
...
(λ + 1)2 (λ − 1) = 0
b
...
v1 = ⎣ 0 ⎦, v2 = ⎣ 2 ⎦
0
2
⎡
⎤⎡
⎤ ⎡
d
...
a
...
λ1 = 2, λ2 = 1
⎡
⎡
⎤
⎤
1
−3
c
...
2 1
2
1
2
1
⎣ 0 2 −1 ⎦ ⎣ 0 ⎦ = ⎣ 0 ⎦ = 2 ⎣ 0 ⎦
0 1
0
0
0
0
⎡
⎤⎡
⎤ ⎡
⎤
⎡
⎤
2 1
2
−3
−3
−3
⎣ 0 2 −1 ⎦ ⎣ 1 ⎦ = ⎣ 1 ⎦ = 1 ⎣ 1 ⎦
0 1
0
1
1
1
15
...
(λ + 1)(λ − 2)(λ + 2)(λ − 4) = 0
b
...
v1 = ⎢
⎣ 0 ⎦, v2 = ⎣ 0 ⎦, v3 = ⎣
0
0
⎡
⎤
0
⎢ 0 ⎥
⎥
v4 = ⎢
⎣ 0 ⎦
1
⎤
0
0 ⎥
⎥,
1 ⎦
0
d
...
Let T
x
−y
=
and λ = −1 with corresponding eigenvectors
0
1
and
a
c
b
d
33
...
Hence, T
...
Observe that the coefficient of λ is −(a + d), which is equal to −tr(A).
19. Then the homogeneous equation Ax = 0 has a nontrivial solution x0. On the other hand, suppose that λ = 0 is an eigenvalue of A.
21. Then A²v = λAv, so A²v = λ²v. The other cases are similar.
Let A = … . Since v ≠ 0, then λ(λ − 1) = 0, so that either λ = 0 or λ = 1.
Let A be such that Aⁿ = 0 for some n, and let λ be an eigenvalue of A with corresponding eigenvector v, so that Av = λv. Continuing in this way, we see that Aⁿv = λⁿv. Since v ≠ 0, then λⁿ = 0, so that λ = 0.
If A is invertible, then det(AB − λI) = det(A⁻¹(AB − λI)A) = det(BA − λI).
27. …
29. Let C = B⁻¹AB. Then A(Bv) = λ(Bv).
… cannot be expressed by scalar multiplication, as this only performs a contraction or a dilation. In this case every vector in ℝ² is an eigenvector with corresponding eigenvalue equal to 1. In this case every vector in ℝ² is an eigenvector with eigenvalue equal to −1.
a
...
[T ]B ′
−1
−1
1
= ⎣ −1
−1
1
−1
0
⎡
1
⎤
0
0 ⎦
1
c
...
Hence,
the eigenvalues are the same
...
2
1
0
0
−3
0
=⎣ 0
0
0
−2
0
1
...
P
−1 AP
⎡
⎤
0
0 ⎦
1
5
...
7
...
9
...
11
...
13
...
15
...
17
...
−3
1
19
...
P = ⎣ 1
1
2
1
3
⎡
0
1
; P −1 AP =
⎤
0
1 ⎦
2
⎤
−1 0 0
P −1 AP = ⎣ 0 1 0 ⎦
0 0 0
⎡
⎤
2 0 0
23
...
P = ⎢
⎣ 0 0
1 1 ⎦
1 0
0 0
⎡
⎤
1 0 0 0
⎢ 0 1 0 0 ⎥
⎥
P −1 AP = ⎢
⎣ 0 0 0 0 ⎦
0 0 0 2
2
0
0
−1
27. If k = 1, then Aᵏ = A = PDP⁻¹ = PDᵏP⁻¹. Assume the result holds for k. Then
Aᵏ⁺¹ = (PDP⁻¹)ᵏ⁺¹ = (PDP⁻¹)ᵏ(PDP⁻¹) = (PDᵏP⁻¹)(PDP⁻¹) = (PDᵏ)(P⁻¹P)(DP⁻¹) = PDᵏ⁺¹P⁻¹
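The identity Aᵏ = PDᵏP⁻¹ is easy to confirm numerically; a minimal sketch (assuming NumPy) with an illustrative diagonalizable matrix built from a chosen P and D:

```python
import numpy as np

# Illustrative choice: construct A = P D P^{-1} from a diagonal D
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])
A = P @ D @ np.linalg.inv(P)

k = 5
lhs = np.linalg.matrix_power(A, k)                       # A^k directly
rhs = P @ np.diag([2.0**k, 3.0**k]) @ np.linalg.inv(P)   # P D^k P^{-1}
print(np.allclose(lhs, rhs))                             # True
```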
⎡
⎤
1
0 1
0
29
...
Since A is diagonalizable, there is an invertible P and diagonal D such that A = PDP⁻¹. Then D = P⁻¹QBQ⁻¹P = (Q⁻¹P)⁻¹B(Q⁻¹P).
33. … On the other hand, if A = λI, then A is a diagonal matrix.
a
...
[T ]B2 = ⎣ −1 −1 0 ⎦
0
0 0
c
...
d
...
0
37
...
Since there are only two
1
linearly independent eigenvectors, T is not
diagonalizable
...
Since A and B are matrix representations for the same linear operator, they are similar. The matrix A is diagonalizable if and only if D = P⁻¹AP for some invertible matrix P and diagonal matrix D. The proof of the converse is identical.
3
y1 (t ) = [y1 (0) + y2 (0)]e −t − y2 (0)e −2t
1
...
y1 (t ) = [2y1 (0) + y2 (0) + y3 (0)]e −t
5
...
y1 (t ) = e −t , y2 (t ) = −e −t
1
1
′
9
...
y1 (t ) = − 60 y1 + 120 y2 ,
′
y2 (t ) =
1
60 y1
−
1
⎤
0
...
4 0
...
T = ⎣ 0
...
4 0
...
1 0
...
7
⎡
⎤ ⎡
⎤
0
0
...
35 ⎦
T
0
0
...
33
T 10 ⎣ 1 ⎦ ≈ ⎣ 0
...
33
⎡
⎤
0
...
T = ⎣ 0
...
75 0 ⎦
0
0
...
⎡
⎤
0
...
25 0
...
25
⎢ 0
...
33 0
...
17 ⎥
⎥
7
...
T = ⎢
⎣ 0
...
25 0
...
25 ⎦
0
...
17 0
...
33
⎤
⎡
⎤ ⎡
1
0
...
16)n + 0
...
25
⎥
⎥ ⎢
b
...
5(0
...
25 ⎦
0
...
25
⎢ 0
...
⎣
0
...
25
9
...
limt→∞ y1 (t ) = 4, limt→∞ y2 (t ) = 8
The 12 lb of salt will be evenly distributed in a ratio
of 1:2 between the two tanks
...
85
0
...
a
...
T 10
c
...
7
0
...
35
0
...
37
0
...
a
...
08
0
...
y1 (t ) = 4 + 8e − 40 t , y2 (t ) = 8 − 8e − 40 t
Section 5
...
λ1 = a + b, λ2 = a − b
c
...
...
a
...
P =
0
a −b
1
...
F
3
...
No conclusion can be drawn from part (a) about the
diagonalizability of A
...
λ1 = 0: v1 = ⎢
v2 = ⎢
⎣ 0 ⎦
⎣ 1 ⎦
1
0
⎡
⎤
0
⎢ 1 ⎥
⎢
⎥
λ2 = 1: v3 = ⎣
0 ⎦
0
d
...
e
...
4. T  6. T  8. F  14. F  16. T  18. T  20. T  22. F  24. T  26. T  28. F  32. T  34. T  36. T
k=4
k=3
k = 2
...
F
31
...
T
29
...
T
11
...
T
39
...
a
...
Chapter Test: Chapter 5
40
...
1
k = −2
...
−2 < k < 2
⎡
1
⎢ 1
⎢
7
...
Let v = ⎢
...
...
5
3
...
26
⎤
1
7
...
Then
⎦
⎡
⎢
⎢
Av = ⎢
⎣
λ
λ
...
...
...
1
⎤
⎥
⎥
⎥
⎦
so λ is an eigenvalue of A corresponding to the
eigenvector v
...
Yes, since A and At have the same eigenvalues
...
√
5
√
11
...
√
22
3
⎡
⎤
1
3 ⎣
1 ⎦
15
...
c = 6
467
19
...
Since v3 = −v1 , the vectors v1 and v3 are in opposite
directions
...
w =
31
...
w =
3
2
3
1
c1 v1·v1 + c2 v1·v2 + · · · + cn v1·vn = 0
Since S is an orthogonal set of vectors, this equation reduces to c1‖v1‖² = 0, and since ‖v1‖ ≠ 0, then c1 = 0. Hence, S is linearly independent.
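The argument can be illustrated numerically: for an orthogonal set, the Gram matrix of pairwise dot products is diagonal, so the only combination summing to zero is the trivial one. A sketch with an illustrative orthogonal set in ℝ³ (these vectors are assumptions for the demo, not from an exercise):

```python
import numpy as np

# A pairwise-perpendicular set in R^3, chosen for illustration
v1 = np.array([1.0,  1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0,  0.0, 2.0])
S = np.column_stack([v1, v2, v3])

print(S.T @ S)                   # diagonal Gram matrix: off-diagonal dots are 0
print(np.linalg.matrix_rank(S))  # 3, so the set is linearly independent
```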
Since ‖u‖² = u·u,
‖u + v‖² + ‖u − v‖² = (u + v)·(u + v) + (u − v)·(u − v)
                    = u·u + 2u·v + v·v + u·u − 2u·v + v·v
                    = 2‖u‖² + 2‖v‖²
(Graph omitted.)
35
...
Consequently,
⎡
⎤
5
1⎣
2 ⎦
27
...
Thus,
⎡
u
w
v
29. Then there exist scalars c1, c2, …, cn such that u = c1u1 + c2u2 + · · · + cnun.
AᵗA = ⎡ ‖A1‖²    0    · · ·    0   ⎤
      ⎢   0      ⋱             ⋮   ⎥
      ⎣   0    · · ·    0    ‖An‖² ⎦
37. By Exercise 36, u·(Av) = (Aᵗu)·v, and by hypothesis u·(Av) = (Au)·v for all u and v in ℝⁿ. Thus (Aᵗu)·v = (Au)·v for all u and v in ℝⁿ. Hence Aᵗ = A, so A is symmetric. Then by Exercise 36, u·(Av) = (Aᵗu)·v = (Au)·v.
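The identity u·(Av) = (Aᵗu)·v, and its specialization when A is symmetric, can be checked directly; a quick sketch (assuming NumPy) with an illustrative symmetric matrix and random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])   # symmetric: A.T equals A
u, v = rng.standard_normal(3), rng.standard_normal(3)

print(np.isclose(u @ (A @ v), (A.T @ u) @ v))  # the Exercise 36 identity
print(np.isclose(u @ (A @ v), (A @ u) @ v))    # holds here since A is symmetric
```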
Section 6
...
Since ⟨u, u⟩ = 0 when u1 = 3u2 or u1 = u2 , V is not an
inner product space
...
Since ⟨u + v, w⟩ and ⟨u, v⟩ + ⟨v, w⟩ are not equal for all
u, v, and w, V is not an inner product space
...
Yes, V is an inner product space
...
Yes, V is an inner product space
...
Yes, V is an inner product space
...
sin x dx =
cos x dx
−π
−π
π
=
13
...
cos θ = − 168 105
15
...
∥ −3 + 3x −
1 2
2e
17
...
∥ x − e x ∥=
√
b
...
cos θ =
b
...
a
...
e x , e −x =
1
0
dx = 1
∥ x ∥=
1
0
x 2 dx =
d
...
∥ 1 ∥=
√
3
3
3
√
2 3
e
...
If f is an even function and g is an odd function, then
fg is an odd function
...
33
...
a
...
a
...
a
...
u − projv u =
3
21
...
∥ A − B ∥=
⎧⎡
⎫
⎤
⎨ x
⎬
⎣ y ⎦ 2x − 3y + z = 0
27
...
a
...
cos θ =
2x + 3y = 0
Section 6
...
u − projv u =
8
5
4
−5
1
·
2
v·(u − projv u) =
⎡
⎢
5
...
projv u = ⎣
−4
3
4
3
4
3
⎤
⎥
⎦
⎡
⎢
b
...
a
...
u − projv u = ⎣ −1 ⎦
0
⎡
⎤ ⎡
⎤
0
1
v·(u − projv u) = ⎣ 0 ⎦· ⎣ −1 ⎦ = 0
1
0
9
...
projq p = 5 x −
4
⎡
⎡
⎡
⎤
⎤
−1
−2
1
1 ⎢ −2 ⎥ 1 ⎢ 1 ⎥ 1 ⎢ 0
⎢
⎥, √ ⎢
⎥, √ ⎢
√
21
...
⎧
⎪
⎪
⎨
0
11
...
projq p = − 7 x 2 +
4
27
...
√
31
...
⎩ 2
⎭
6
3
−1
−1
1
then ||Ax||2 = x·x = ||x||2 so ||Ax|| = ||x||
...
By Exercise 32, Ax·Ay = x·y
...
Let
W = {v | v·ui = 0 for all i = 1, 2,
...
17
...
Since
17
12
9
17
(3x − 1) x 2 − x +
4
12
b
...
⎪ 3⎣ 1 ⎦
3 ⎣ −1
⎪
⎩
1
1
q, p − projq p
=
√
5
12
b
...
, n
...
√
√
3(x − 1), 3x − 1, 6 5(x 2 − x + 1 )
6
⎧
⎡
⎡
⎤
⎤⎫
1
2 ⎬
⎨ 1
1 ⎣
√ ⎣ 1 ⎦, √
−1 ⎦
19
...
xt At Ax = (Ax)t Ax = (Ax)·(Ax)
= ||Ax||2 ≥ 0
41
...
Since A is positive
definite and x is not the zero vector, then xt Ax > 0, so
λ > 0
...
4
1
...
W⊥
5
...
W
⎢
⎥,
⎪⎣ 1 ⎦ ⎣ 0 ⎦⎪
⎪
⎪
⎪
⎪
⎩
1 ⎭
0
⎧⎡ 1 ⎤ ⎫
⎪ −3 ⎪
⎨
⎬
⎢
⎥
9
...
⎢
,
⎪⎣ 1 ⎦ ⎣ 0 ⎦ ⎪
⎪
⎪
⎪
⎪
⎩
⎭
0
1
13
...
a
...
projW v =
−3
1
c
...
1
10
3
9
·
e
...
⎢
⎪⎣
⎪
⎩
− 52 x + 1
9
⎤⎫
1 ⎪
⎪
⎬
1 ⎥
⎥
1 ⎦⎪
⎪
⎭
1
21
...
An orthogonal basis is
⎧⎡
⎤⎡
⎤⎫
0 ⎬
⎨ 2
B = ⎣ 0 ⎦, ⎣ −1 ⎦
⎩
⎭
0
0
⎡
⎤
1
projW v = ⎣ 2 ⎦
0
⎡
⎤
0
19
...
Notice that the vectors v1 and v2 are not orthogonal
...
W ⊥ = span ⎣ −1 ⎦
⎩
⎭
1
⎡
⎤
2
1⎣
5 ⎦
b
...
u = v − projW v =
3
2
⎡
⎤
2
d
...
e
...
a
...
y =
W⊥
653,089
13,148 x
−
317,689,173
3287
5
...
27
...
Since
W1 ⊆ W2 , then ⟨w, u⟩ = 0 for all u ∈ W1
...
29
...
Let A =
d
f
e
g
a
b
and B =
a
b
⟨A, B⟩ = tr
b
c
d
f
ad + bf
bd + cf
= tr
b
c
...
This implies
A=
b
...
That is, A is skew-symmetric
...
y = 0
...
2780952
7. p2(x) = 2 sin x − sin 2x
p3(x) = 2 sin x − sin 2x + (2/3) sin 3x
p4(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x
p5(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x + (2/5) sin 5x
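These pn are the Fourier sine partial sums of f(x) = x on (−π, π), whose coefficients are bk = (1/π)∫ x sin kx dx = 2(−1)^(k+1)/k. A numerical sketch (assuming NumPy) recovering the coefficients 2, −1, 2/3, −1/2, 2/5:

```python
import numpy as np

# Fourier sine coefficients of f(x) = x on (-pi, pi):
# b_k = (1/pi) * integral of x*sin(kx) over (-pi, pi)
x = np.linspace(-np.pi, np.pi, 200_001)
for k in range(1, 6):
    b_k = np.trapz(x * np.sin(k * x), x) / np.pi
    print(k, round(b_k, 4))   # ~ 2, -1, 0.6667, -0.5, 0.4
```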
b
...
5
1
...
x =
5
2
0
⎡
⎢
b
...
a
...
⎧⎡
⎤⎫
0 ⎪
⎪
⎪
⎪
⎨⎢
⎬
−2 ⎥
⎥
= span ⎢
⎪⎣ 1 ⎦⎪
⎪
⎪
⎩
⎭
1
⎧⎡
⎤ ⎡
0
⎪ −1
⎪
⎨⎢
1 ⎥ ⎢ 0
⎥, ⎢
= span ⎢
⎪⎣ 2 ⎦ ⎣ −1
⎪
⎩
0
1
⎤⎫
⎪
⎪
⎥⎬
⎥
⎦⎪
⎪
⎭
dim (V3 ) + dim (V−3 ) + dim (V−1 )
=1+1+2=4
13
...
√
3/2
−1/2
Section 6
...
λ1 = 3, λ2 = −1
3
...
λ1 = −3 with eigenvector v1 =
−1
2
; λ2 = 2 with
2
...
1
⎡
⎤
1
7
...
Observe that v1 ·v2 = v1 ·v3 =
1
15
...
⎡
√
√2/2
⎣ − 2/2
0
eigenvector v2 =
v2 ·v3 = 0
...
V3 = span ⎣ 0 ⎦
⎩
⎭
1
⎧⎡
⎤ ⎡
⎤⎫
0 ⎬
⎨ −1
V−1 = span ⎣ 0 ⎦, ⎣ 1 ⎦
⎩
⎭
1
0
dim (V3 ) + dim (V−1 ) = 1 + 2 = 3
⎧⎡
⎤⎫
⎪ 3 ⎪
⎪
⎪
⎨⎢
⎬
1 ⎥
⎥
11
...
21
...
−1/2
√
3/2
⎤⎡ √
0
√2/2
0 ⎦ ⎣ 2/2
1
0
1
=⎣ 0
0
17
...
25
...
27
...
Since cos2 θ + sin2 θ = 1, then
− sin θ
cos θ
cos θ
sin θ
cos θ
− sin θ
sin θ
cos θ
=
1
0
0
1
29
...
Then
P(P t AP)P t = P(P t At P)P t , so A = At
...
a
...
The transpose of both sides of the equation Av = λv
gives vt At = λvt
...
Now, right multiplication of both
sides by v gives vt (−Av) = λvt v, so vt (−λv) =
2
2
λvt v
...
Section 6
...
30(y ′ )2
3
...
(x ′ )2
2
+
+
−
√
10x ′
(y ′ )2
(y ′ )2
4
7
...
[x y]
=1
x
y
− 16 = 0
7
...
⎡
⎤
a
ax1 +bx2 +cx3 ⎣
b ⎦
c
...
8
1. σ1 = 2√3, σ2 = 5, σ3 = 0
1
√
2
1
√
2
⎤⎡
⎤
x
a
⎣ y ⎦· ⎣ b ⎦ = ax + by + cz = 0
z
c
That is, W ⊥
b
...
a
...
7(x − 3)2 + 6 3(x − 3)(y − 2) + 13(y − 2)2 −
16 = 0
5
...
c
⎧⎡
⎤⎫
⎨ a ⎬
⊥ = span ⎣ b ⎦
b
...
a
...
⎣ 1 ⎦, ⎣ √
⎩
0
− 2/2
⎡
⎤
−2
c
...
a
...
∥ projW ⊥ v ∥= √1 2 22 3
2
a +b +c
1
√
2
1
√
2
0
2
0
0
⎡
0
⎢
1
⎣
0
9
...
x1 = 2, x2 = 0 b
...
σ1 /σ2 ≈ 6, 324, 555
Note: This gives the distance from the point (x1 , x2 , x3 )
to the plane
...
a
...
1
√1 , √
π
2π
cos x ,
π
−π cos x
1
√ sin x
π
sin x dx = 0
c
...
∥ projW x
3
7
...
a
...
B1 = ⎢
⎪⎣
⎪
⎪
⎪
⎩
⎧⎡
⎪
⎪
⎪⎢
⎪
⎨⎢
c
...
Q = ⎢
⎢
⎣
⎡
1
2
1
2
1
2
1
2
2
R=⎣ 0
0
⎡
⎤
1 ⎢
1 ⎥⎢
⎥, ⎢
1 ⎦⎢
⎣
1
⎤
1
2
1
−2
1
2
1
−2
1
2
1
2
1
2
1
2
1
2
−1
2
1
2
−1
2
⎤⎡
⎥⎢
⎥⎢
⎥, ⎢
⎥⎢
⎦⎣
1
2
1
−2
−1
1
0
⎡
⎥ −1
⎥⎢ 0
⎥, ⎢
⎥⎣ 1
⎦
0
⎤⎡
√
− 22
⎥⎢
⎥⎢
⎥, ⎢
⎥⎢
⎦⎣
⎤
⎥
0 ⎥
⎥
√ ⎥
2 ⎥
2 ⎦
0
1
2
1
−2
0
1
0
0
⎤
2
−2 ⎦
√
2
e
...
Chapter Test: Chapter 6
10. T  12. T  14. F  16. T  17. T  19. F  20. F  22. F  24. F  26. T  28. T  30. T  32. F  34. T  36. T  38. F  40. …
Section A.1
1. A × B = {(a, b) | a ∈ A, b ∈ B}; there are 9 × 9 = 81 ordered pairs in A × B.
A\B = {−4, 0, 1, 3, 5, 7}
7. A\B = (−11, 0)
11. (A ∪ B)\C = (−11, −9)
15. T  2. F  4. F
(Graphs omitted.)
19.
(A ∩ B) ∩ C = {5} = A ∩ (B ∩ C )
23
...
A\(B ∪ C ) = {3, 9, 11} = (A\B) ∩ (A\C )
Section A.2
1. Since for each first coordinate there is a unique second coordinate, f is a function.
3. Since there is no x such that f(x) = 14, the function is not onto.
5. Since f is not one-to-one, f does not have an inverse.
{(1, −2), (2, −1), (3, 3), (4, 5), (5, 9), (6, 11)}
11. f(A ∩ B) = f({0}) = {0}; f(A) ∩ f(B) = [0, 4] ∩ [0, 4] = [0, 4]. Therefore, f(A ∩ B) ⊂ f(A) ∩ f(B), but f(A ∩ B) ≠ f(A) ∩ f(B).
f⁻¹(x) = (x − b)/a
17. If n is even, then f⁽ⁿ⁾(x) = x.
a. Since the exponential function is always positive, f is not onto. Define g : (0, ∞) → ℝ by g(x) = e^(2x−1).
g⁻¹(x) = (1/2)(1 + ln x)
...
a
...
b
...
ގ
c
...
a
...
f (B) = {2k + 1 | k ∈ }ޚ
c
...
f −1 (E ) = {(m, n) | n is even}
e
...
Since f ((1, −2)) = 0 = f ((0, 0)), then f is not
one-to-one
...
If z ∈ ,ޚlet m = 0 and n = z , so that f (m, n) = z
...
3
1
...
3
...
√
3
2 x,
so the area
5
...
Then
c = bℓ = (ak )ℓ = (k ℓ)a, so a divides c
...
If n is odd, there is some k such that n = 2k + 1
...
9
...
11. Then m² + n² = 13, which is not divisible by 4.
13. Contrapositive: Suppose n is even, so there is some k such that n = 2k.
15. Then √(pq) = √(p²) = p = (p + q)/2.
17. Contrapositive: Suppose x > 0.
19. Then 2q³ = p³, so p³ is even and hence p is even.
21
...
There are two cases: either both
factors are greater than or equal to 0, or both are less
than or equal to 0
...
Therefore, 3x ≤ y
...
Define f : ޒ → ޒby f (x ) = x 2
...
Then f −1 (C ) = [−2, 2] = f −1 (D) but
D
...
If x ∈ f −1 (C ), then f (x ) ∈ C
...
Hence, x ∈ f −1 (D)
...
If y ∈ f (A\B), there is some x such that y = f (x ) with
x ∈ A and x ∈ B
...
Now suppose y ∈ f (A)\ f (B)
...
Since f is
one-to-one, this is the only preimage for y, so x ∈ A\B
...
29
...
A
...
Let y ∈ C
...
So
x ∈ f −1 (C ), and hence y = f (x ) ∈ f (f −1 (C ))
...
Section A.4
1. Base case: n = 1: 1² = 1(2)(3)/6.
Inductive hypothesis: Assume the summation formula holds for the natural number n.
3. Base case: n = 1: 1 = 1(3 − 1)/2.
Inductive hypothesis: Assume the summation formula holds for the natural number n.
5. Base case: n = 1: 2 = 1(4)/2.
Inductive hypothesis: Assume the summation formula holds for the natural number n.
7. Base case: n = 1: 3 = 3(2)/2.
Inductive hypothesis: Assume the summation formula holds for the natural number n.
9. Base case: n = 1: 2¹ = 2² − 2.
Inductive hypothesis: Assume the summation formula holds for the natural number n.
From the data in the table

n    2 + 4 + · · · + 2n
1    2 = 1(2)
2    6 = 2(3)
3    12 = 3(4)
4    20 = 4(5)
5    30 = 5(6)

we make the conjecture that 2 + 4 + 6 + · · · + (2n) = n(n + 1).
Base case: n = 1: 2 = 1(2).
Inductive hypothesis: Assume the summation formula holds for the natural number n.
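The inductive step itself was elided in the extraction; a standard completion (a reconstruction, not verbatim from the text) is

```latex
\underbrace{2 + 4 + \cdots + 2n}_{=\,n(n+1)\ \text{by hypothesis}} + \,2(n+1)
  \;=\; n(n+1) + 2(n+1) \;=\; (n+1)(n+2),
```

which is the conjectured formula with n replaced by n + 1.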
Base case: n = 5: 2⁵ = 32 > 25 = 5².
Inductive hypothesis: Assume 2ⁿ > n² holds for the natural number n. … But since 2n² − (n + 1)² = n² − 2n − 1 = (n − 1)² − 2 > 0 for all n ≥ 5, we have 2ⁿ⁺¹ > (n + 1)².
Base case: n = 1: 1² + 1 = 2, which is divisible by 2.
Consider (n + 1)² + (n + 1) = n² + n + 2n + 2. … Alternatively, observe that n² + n = n(n + 1), which is the product of consecutive integers and is therefore even.
17. Base case: n = 1: 1 = (r − 1)/(r − 1).
Inductive hypothesis: Assume the formula holds for the natural number n.
19. Base case: n = 2: A ∩ (B1 ∪ B2) = (A ∩ B1) ∪ (A ∩ B2), by Theorem 1 of Sec. 1.
Inductive hypothesis: Assume the formula holds for the natural number n.
21. C(n, r) = n!/(r!(n − r)!) = n!/((n − r)!(n − (n − r))!) = C(n, n − r)
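The symmetry can be confirmed for particular values with the Python standard library (a trivial check, not from the text):

```python
from math import comb

# Binomial symmetry: C(n, r) == C(n, n - r) for every r
n = 10
print(all(comb(n, r) == comb(n, n - r) for r in range(n + 1)))  # True
```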
23
...
See also
Matrix algebra
Angles, between vectors,
327–330
Arguments
contradiction, 426–427
contrapositive, 426
direct, 425
Associated quadratic form, 386
Associative property, 97, 98
Augmented matrix
for consistent linear systems, 22
explanation of, 15, 16
facts about, 23
as solution to linear systems, 16–17
Axioms, 424
B
Back substitution, 4
Balance law, 310
Balancing chemical equations, 79
Basis
change of, 177–181
explanation of, 149
facts about, 171
method for finding, 166–170
ordered, 174–176
orthogonal, 339–340
orthonormal, 342–352
standard, 159, 162, 163
for vector space, 159–164
Best-fit line, 322
Bijective functions, 420, 422
Bijective mapping, 226
Bilinear inner product, 333
Binary vectors, 127–128
Binomial coefficients, 435–436
Binomial theorem, 437–438
Brin, Sergey, 276
C
Cartesian product, of two sets, 411
Cauchy-Schwartz inequality, 326–327,
336, 337
Characteristic equation, 279
Characteristic polynomials, 279
Check matrix, 128
Chemical equation balancing
application, 1, 79
Circle, equation of, 385
Codewords, 127–129
Coefficient matrix, 15
Cofactors, of matrices, 56
Column rank, 221, 222
Column space, 152
Column vectors, 27
Commutative operation, 28
Commutative property, 97, 129
Commute, matrices that, 32, 33
Complement, of sets, 411
Complementary solution, 193
Complex numbers
conjugate of, 377
equality and, 377
imaginary part, 144
real part, 144
set of, 134
Components, of vectors, 27, 95
Composition, of functions,
420–422
Computer graphics applications
explanation of, 255
projection, 265–268
Computer graphics applications (continued)
reflection, 260, 261
reversing graphics operations, 261–262
rotation, 264–265
scaling and shearing, 256–259
translation, 262–264
types of, 199
Conclusions, 424
Condition number, 403
Conic sections
eigenvalues and, 388–390
explanation of, 61, 386
simplifying equations that
describe, 385
Conservation of mass law, 1
Contained sets, 410
Continuous signals, 93
Contraction, 257
Contradiction argument, 426–427
Contrapositive argument, 426
Contrapositive statement, 425, 426
Converse, of theorems, 424–425
Convex, set in ℝ² as, 272
Corollaries, 425
Cosine, of angle between vectors,
328, 337
Counter example, 425
Cramer’s rule, 62–64
D
Damping coefficient, 192
Data compression, 401–403
Data sets, least squares approximation
to find trends in, 371–373
Demand vectors, 83
DeMorgan’s laws, 413–414
Determinants
facts about, 64–65
to find equation of conic sections, 61
linear independence and, 118–119
method to find, 55–56
properties of, 57–62
to solve linear systems, 62–64
of square matrices, 56
of 3 x 3 matrix, 54–55
of triangular matrices, 56–57
of 2 x 2 matrix, 54, 55
Diagonalization
conditions for matrix, 291–292
eigenvalues and, 293
eigenvectors and, 282
examples of, 289–291, 293
explanation of, 287–289, 377
facts about, 297–298
linear operators and, 295–297
orthogonal, 379–382
similar matrices and, 293–294
of symmetric matrices, 377–383
symmetric matrices and, 293
systems of linear differential equations
and, 302–309
of transition matrices, 312–313
Diagonal matrix, 56
...
, 415, 416
vector space of real-valued,
133–134
Fundamental frequency,
93–94
Fundamental sets of solutions
superposition principle and, 188–189
theorem of, 190–191
Wronskian and, 189–191
G
Gaussian elimination
explanation of, 4
to solve linear systems, 6–11,
14, 15, 68
Gauss-Jordan elimination, 19
General solution, 3
Goodness of fit, measurement of, 366
Google, 276
Gram-Schmidt process
examples of, 349–352
explanation of, 344, 347–348, 394
geometric interpretation of,
348–349
Graphics operations in 2ޒ
reflection, 260, 261
reversing, 261–262
rotation, 264–265
scaling and shearing, 256–259
translation, 262–264
Graphs
of conic sections, 61
of functions, 416
H
Hamming, Richard, 127
Hamming’s code, 127, 129
Homogeneous coordinates, 262–264
Homogeneous linear systems,
49–51, 113
Horizontal line test, 419
Horizontal scaling, 256, 257
Horizontal shear, 258
Hypothesis
explanation of, 424
inductive, 430
I
Identity matrix, 39
Images
explanation of, 415, 418
inverse, 416, 418
Imaginary part, complex
numbers, 134
Inconsistent linear systems
explanation of, 2, 10
reduced matrix for, 21–22
Independence, linear
...
See One-to-one mapping
Injective functions, 419
Inner product
examples of, 334–336
explanation of, 333
that is not dot product, 335
Inner product spaces
diagonalization of symmetric matrices
and, 377–383
explanation of, 333–334
facts about, 340
least squares approximation and,
366–375
orthogonal complements and, 355–364
orthogonal sets and, 338–340
orthonormal bases and, 342–352
properties of norm in, 336–337
quadratic forms and, 385–391
singular value decomposition and,
392–403
subspaces of, 355
Input-output matrix, 83
Integers, set of, 409
Internal demand, 83
Intersection, of sets, 410, 411
Inverse functions
explanation of, 418–420
unique nature of, 421
Inverse images, 416, 418
Inverse of elementary matrix, 71–72
Inverse of square matrix
definition of, 40
explanation of, 40–45
facts about, 45
Inverse transformations, 230–231
Invertible functions, 418–420
Invertible matrix
elementary matrices and, 72
explanation of, 41, 54
inverse of product of, 44–45
square, 60–61
Isomorphisms
definition of, 229
explanation of, 226
inverse and, 230–231
linear transformations as,
229–231
one-to-one and onto mappings
and, 226–230
vector space, 232–233
K
Kepler, Johannes, 61
Kirchhoff’s laws, 88
L
Law of conservation of mass, 1
Leading variables, 10
Least squares approximation
background of, 366–368
to find trends in data sets, 371–373
Fourier polynomials and, 373–375
linear regression and, 371–373
Least squares solutions, 369–371
Lemmas, 425
Length, of vectors, 95
Leontief input-output model, 82
Linear codes, 129
Linear combinations
definition of, 102, 146
of elements of fundamental
set, 94
matrix multiplication and, 107
of vectors, 102–106, 146
Linear dependence
definition of, 111, 157
explanation of, 111, 157
of vectors, 112, 158
Linear equations, in n variables, 3
Linear independence
definition of, 111, 157
determinants and, 118–119
explanation of, 111–112
of vectors, 112–117, 158
Linear operators
diagonalizable, 295–297
eigenvalues and eigenvectors of, 283–284
explanation of, 202, 237
similarity and, 249–252
Linear regression, 368, 371–373
Linear systems
augmented matrices to solve, 16–17, 22
consistent, 2, 117
converted to equivalent triangular
systems, 6–7, 10, 14
Cramer’s rule to solve, 62–64
definition of, 3
discussion of, 3–4
elimination method to solve, 4–11,
14, 15
equivalent, 4, 5
explanation of, 2–3
facts about, 11–12
with four variables, 2, 8–9
homogeneous, 49–51, 113
ill-conditioned, 403
inconsistent, 2, 10, 21–22
linear independence and, 117–118
LU factorization to solve, 75–76
matrix form of, 48
nullity of matrices and, 222–223
in terms of geometric structure of
Euclidean space, 363–364
3 x, 3, 7–8
triangular form of, 4, 6, 10
with two variables, 2–3
vector form of, 106–107
vector form of solution to,
48–50
Linear systems applications
balancing chemical equations, 79
economic input-output models, 82–84
network flow, 79–81
nutrition, 81–82
Linear transformations
computer graphics and, 199,
255–268
definition of, 202, 235
explanation of, 200–202, 235–236
from geometric perspective,
203–204
inverse of, 230
as isomorphisms, 229–231
isomorphisms as, 226–233
matrices and, 202–203, 221–222,
235–245
null space and range and, 214–223
operations with, 209–210
similarity and, 249–253
Lower triangular matrix
examples of, 57
explanation of, 56, 73
LU factorization
facts about, 77
of matrices, 69, 72–75, 392
solving linear systems using, 75–76
M
Mapping
bijective, 226
explanation of, 200–201, 415
linear transformations and, 201–202,
205–207, 241–242
one-to-one, 226–230
onto, 226, 227
Markov chains
applications of, 310–314
explanation of, 275–276
Markov process, 310
Mathematical induction
base case, 430
binomial coefficients and binomial
theorem and, 435–438
examples of, 431–435
inductive hypothesis, 430
introduction to, 429–430
principle of, 430–431
Matrices
addition of, 27–29
augmented, 15–17, 22, 23
check, 128
coefficient, 15
condition number of, 403
definition of, 14
determinants of, 54–65
diagonal, 56
discussion of, 14–15
echelon form of, 17–21
elementary, 69–72
finding singular value decomposition
of, 398–402
identity, 39
input-output, 83
inverse of product of invertible, 44–45
inverse of square, 39–45
linear independence of, 114
linear transformations and, 202–203,
221–222, 235–245
LU factorization of, 69, 72–75
minors and cofactors of, 56
nullity of, 221–223
null space of, 152–153
orthogonal, 381–382
permutation, 76–77
positive definite, 354
positive semidefinite, 354
rank of, 222
scalar multiplication, 27
singular values of, 393–396
stochastic, 275, 311, 314
subspaces and, 362
symmetric, 36
that commute, 32, 33
transition, 177–182, 275, 276,
311–313
transpose of, 35–36
triangular, 15, 56–57, 283
vector spaces of, 130
Matrix addition, 27–29
Matrix algebra
addition and scalar multiplication,
27–29
explanation of, 26–27
facts about, 36–37
matrix multiplication, 29–35
symmetric matrix, 36
transpose of matrix, 35–36
Matrix equations, 48–51
Matrix form, of linear systems, 48
Matrix multiplication
definition of, 32
explanation of, 29–35, 210
linear combinations and, 107
linear transformations between finite
dimensional vector spaces and,
236–237
properties of, 35
to write linear systems in terms of
matrices and vectors, 48–51
Members, of sets, 409
Minors, of matrices, 56
Multiplication
...
See also Mathematical
induction,
set of, 409
statements involving,
429–434
Network flow application,
79–81
Newton, Isaac, 61
Nilpotent, 299
Noninvertible matrix, 41
Normal equation, least squares solution
to, 369–370
Nullity, of matrices, 221–223
Null sets, 410
Null space,
of linear transformations, 214–221
of matrices, 152–153, 221
Nutrition application, 81–82
O
P
One-parameter family, of
solutions, 9
One-to-one functions, 419
One-to-one mapping, 226–230
Onto functions, 420
Onto mapping, 226, 227
Ordered basis, 174–176
Ordinary differential equation, 185
Orthogonal basis
construction of, 347–348
of finite dimensional inner product space,
346–347
singular values and, 394
vectors that form, 355
Orthogonal complement
definition of, 357
examples of, 358–360
explanation of, 355–358
facts about, 364
inner product spaces and, 356
linear systems and, 363–364
matrices and, 362
projection theorem and, 361–362
subspaces and, 358
Orthogonal diagonalization, 379–382
Orthogonal matrix, 381–382
Orthogonal projection
explanation of, 343–345, 360, 362
Gram-Schmidt process and,
347, 348
Orthogonal sets
explanation of, 338
properties of, 338–340
Orthogonal vectors
explanation of, 328–329, 337
in inner product spaces, 338 (See also
Inner product spaces)
subspaces of inner product spaces and,
355–360
Orthonormal basis
Gram-Schmidt process and,
347–352
for inner product space, 345–347
ordered, 339–340, 342
orthogonal matrices and, 381
orthogonal projections and,
343–345
Orthonormal vectors, 338
Page, Larry, 276
Page range algorithm (Google),
276
Parabolas, general form of, 11
Parallelogram rule, 96
Parallel projection, 266
Parametric equations, 266
Pascal’s triangle, 435
Past plane, 301–302
Period, of wave, 93
Periodic motion, 93
Periodic signals, 93
Permutation matrix, 76–77
Phase portrait, 302–304, 306
Photosynthesis application, 1–2
Pitch, 199
Pivot, 18, 19
Pixels, 255
PLU factorization, 76–77
Polynomials
characteristic, 279
of degree n, 132
derivative of constant, 217
Fourier, 373–375
trigonometric, 373–374
use of Gram-Schmidt process on space
of, 350–351
vector space of, 132–133,
163, 334
zero, 132
Positive definite matrix, 354
Positive semidefinite matrix, 354
Predator-prey model, 300
Preimages, 416
Principle of mathematical induction
...
, 415, 416
Riemann integral, 334
Roll, 199
Rotation, 264–265
Rotation of axes, 385–390
Row echelon form
explanation of, 17, 19
reduced, 18–21
Row equivalent, 16
elementary matrices and, 72
Row operations, 16, 58
Row rank, of matrices, 222
Row vectors, 27
S
Scalar multiplication
explanation of, 27–29
linear transformations and, 209–210
of vectors, 95–99, 129, 161
Scalar product
of matrices, 27
of vectors, 96
Scalar projection, 343
Scaling, 96, 256–258
Scatterplots, 321
Second-order differential equations, with
constant coefficients, 186–188,
191–193
Sets,
empty, 410
explanation of, 409–410
null, 410
operations on, 410–414
orthogonal, 338–340
solution, 3
Shearing, 258–259
Signals, 93
Similar matrix
background of, 249–251
explanation of, 252, 253
Singular value decomposition (SVD)
data compression and,
401–403
explanation of, 392
four fundamental subspaces and, 401
method for, 398–400
theorem of, 396–398
Singular values, 392
definition of, 393
of m x n matrix, 393–396
Solutions, to linear systems with
n variables, 3
Solution set, 3
Span, of set of vectors, 146–152
Square matrix
determinant of, 56
inverse of, 39–45
invertibility of, 60
trace of, 142–143
Standard basis
explanation of, 162, 163
matrix representation relative to, 235–237
polynomials of, 163
Standard position, of vectors, 95
State vectors, Markov chains and, 311–312
Steady-state vectors
explanation of, 276
Markov chain and, 313–314
Stochastic matrix, 275, 311, 314
Subsets, 410, 412
Subspaces
closure criteria for, 144
definition of, 140
examples of, 142–143
explanation of, 140–142
facts about, 153
four fundamental, 401
of inner product spaces, 355–360,
362
null space and column space of matrix
and, 152–153
span of set of vectors and, 146–152
trivial, 142
of vector spaces, 140, 145, 146
Substitution
back, 4
forward, 68–69
Superposition principle, 188–189
Surjective functions, 420
Surjective mapping
...
See
Linear systems
T
Terminal point, vector, 95
Theorems
converse of, 424–425
explanation of, 424
487
Tower of Hanoi puzzle,
429–430
Trace, of square matrices,
142–143
Trajectories, 301–302
Transformation, 199–200
...
Abstract theory is essential to understanding how linear
algebra is applied
...
Applications have been carefully chosen to highlight the utility of linear algebra in
order to see the relevancy of the subject matter in other areas of science as well as in mathematics
...
End of chapter True/False
questions help students connect concepts and facts presented in the chapter
...
Students are introduced to the study of linear algebra in a sequential and thorough
manner through an engaging writing style gaining a clear understanding of the theory essential for
applying linear algebra to mathematics or other fields of science
...
www
...
com/defranza
ISBN 978-0-07-353235-6
MHID 0-07-353235-5
www
...
com