Lecture 15
where a, b, c are arbitrary constants
...
Example 15
...
Let V be the vector space of differentiable functions on the interval [a, b]
...
Describe the kernel of the linear mapping T : V → V defined as T(f(x)) = f(x) + f′(x).
...
A function f is in the kernel of T if T(f(x)) = 0, that is, if f(x) + f′(x) = 0
...
What functions f do you know of that satisfy f′(x) = −f(x)?
How about f(x) = e^(−x)? It is clear that f′(x) = −e^(−x) = −f(x) and thus f(x) = e^(−x) is in ker(T)
...
It turns out that the elements of ker(T) are of the form f(x) = Ce^(−x) for a constant C
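For readers who want to check this computationally, SymPy can solve the same equation; this is an optional aside, not part of the original notes:

import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")

# The kernel of T consists of the solutions of f(x) + f'(x) = 0.
solution = sp.dsolve(sp.Eq(f(x) + f(x).diff(x), 0), f(x))
print(solution)  # Eq(f(x), C1*exp(-x)), i.e. f(x) = C*e^(-x)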
...
2 Null space and Column space
In the previous section, we introduced the kernel and range of a general linear mapping
T : V → U
...
In this case, v is in the kernel of TA if and only if TA(v) = Av = 0
...
Because the case when T is a matrix mapping arises so frequently, we give a name to the set
of vectors v such that Av = 0
...
Definition 15.9: The null space of a matrix A ∈ Mm×n, denoted by Null(A), is the subset of Rn consisting of vectors v such that Av = 0
...
Using set notation:
Null(A) = {v ∈ Rn | Av = 0}
...
Because the kernel of a linear mapping is a subspace, we obtain the following.
...
Theorem 15.10: If A ∈ Mm×n, then Null(A) is a subspace of Rn.
...
By Theorem 15.10, if u and v are two solutions to the linear system Ax = 0, then αu + βv is also a solution:
A(αu + βv) = αAu + βAv = α · 0 + β · 0 = 0
...
Example 15.11.
...
Is W a subspace of V?
Solution
...
Hence, W = Null(A) and consequently W is a subspace
...
Therefore, one way to explicitly describe the null space
of A is to solve the system Ax = 0 and write the general solution in parametric vector form
...
Therefore, after performing back substitution, we will obtain vectors v1, ..., vd such that the general solution in parametric vector form is
x = t1 v1 + t2 v2 + · · · + td vd
where t1, ..., td are arbitrary numbers. In other words, the vectors v1, ..., vd form a spanning set for Null(A):
Null(A) = span{v1, ..., vd}
...
Example 15.12. Find a spanning set for Null(A) for a given matrix A. [The entries of A are only partially legible in this extract.]
Solution
...
Performing elementary row operations one obtains

A ∼ [ 1 −2 0 −1  3 ]
    [ 0  0 1  2 −2 ]
...
Letting x5 = t1 and x4 = t2, the second row gives
x3 = −2t2 + 2t1
...
Writing the general solution in parametric vector form we obtain

x = t1 (−3, 0, 2, 0, 1) + t2 (1, 0, −2, 1, 0) + t3 (2, 1, 0, 0, 0)

Therefore,

Null(A) = span{v1, v2, v3}

where v1 = (−3, 0, 2, 0, 1), v2 = (1, 0, −2, 1, 0), and v3 = (2, 1, 0, 0, 0).
You can verify that Av1 = Av2 = Av3 = 0
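This check is easy to carry out numerically. Since row-equivalent matrices have the same null space, it suffices to multiply the spanning vectors by the reduced matrix; a NumPy sketch (using the reduced matrix from above, since A itself is only partially legible in the extract):

import numpy as np

# Reduced (row-equivalent) form of A from Example 15.12.
R = np.array([[1, -2, 0, -1,  3],
              [0,  0, 1,  2, -2]])

v1 = np.array([-3, 0,  2, 0, 1])
v2 = np.array([ 1, 0, -2, 1, 0])
v3 = np.array([ 2, 1,  0, 0, 0])

for v in (v1, v2, v3):
    print(R @ v)  # each prints [0 0], so v1, v2, v3 lie in Null(A)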
...
Recall that a vector b in the co-domain Rm is in the range of TA if there exists some vector x in the domain Rn such that TA(x) = b
...
Now, if A has columns A = [v1 v2 · · · vn] and x = (x1, x2, ..., xn), then
Ax = x1 v1 + x2 v2 + · · · + xn vn
Thus, a vector b is in the range of A if it can be written as a linear combination of the columns v1, v2, ..., vn of A.
This motivates the
following definition
...
Definition 15.13: Let A ∈ Mm×n be a matrix with columns v1, v2, ..., vn. The column space of A, denoted by Col(A), is the span of the columns of A:
Col(A) = span{v1, v2, ..., vn}
By definition, Col(A) = Range(TA), and since Range(TA) is a subspace of Rm then so is Col(A). This proves the following.
Theorem 15.14: The column space of an m × n matrix is a subspace of Rm.
...
Example 15.15. Determine whether a given vector b is in Col(A). [The matrix A and the vector b are not legible in this extract.]
Solution
...
Hence, we must determine if Ax = b has a solution
...
Therefore, b is in
Col(A)
...
Lecture 16
1 Linear Independence
Roughly speaking, the concept of linear independence revolves around the idea of working with “efficient” spanning sets for a subspace.
...
With these vague statements
out of the way, we introduce the formal definition of what it means for a set of vectors to be
“efficient”
...
Definition 16.1: Let V be a vector space and let {v1, v2, ..., vp} be a set of vectors in V. Then {v1, v2, ..., vp} is linearly independent if the only scalars c1, c2, ..., cp that satisfy the equation
c1 v1 + c2 v2 + · · · + cp vp = 0
are the trivial scalars c1 = c2 = · · · = cp = 0. If {v1, ..., vp} is not linearly independent then we say that it is linearly dependent.
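For vectors in Rn this definition can be tested mechanically: form the matrix with the given vectors as columns; the set is linearly independent exactly when the rank of that matrix equals the number of vectors. A small NumPy sketch (the example vectors are hypothetical, not from the notes):

import numpy as np

def is_linearly_independent(*vectors):
    # Stack the vectors as columns; independence <=> rank == number of vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_linearly_independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True
print(is_linearly_independent([1, 2, 3], [2, 4, 6]))             # False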
...
If {v1, ..., vp} is linearly dependent, then there exist scalars c1, ..., cp, at least one of which is nonzero, such that
c1 v1 + c2 v2 + · · · + cp vp = 0    (⋆)
Suppose, for example, that {v1, v2, v3, v4} is linearly dependent. Then there are scalars c1, c2, c3, c4, not all of them zero, such that equation (⋆) holds.
...
Suppose, say, that c3 ≠ 0. Then
v3 = −(c1/c3) v1 − (c2/c3) v2 − (c4/c3) v4
...
It is in this sense that a set of linearly dependent vectors is redundant.
...
Theorem 16. A set of vectors {v1, v2, ..., vp}, with v1 ≠ 0, is linearly dependent if and only if some vj is a linear combination of the preceding vectors v1, ..., vj−1.
Example 16. Show that the following set of 2 × 2 matrices is linearly dependent:

A1 = [ 1  2 ]    A2 = [ −1  3 ]    A3 = [  5  0 ]
     [ 0 −1 ]         [  1  0 ]         [ −2 −3 ]
...
It is clear that A1 and A2 are linearly independent, i.e., A1 cannot be written as a scalar multiple of A2, and vice versa.
...
Similarly, since the (2, 2) entry of A2 is zero, the only way to get the −3 in the (2, 2) entry of A3 is to multiply A1 by 3.
...
Verify:

3A1 − 2A2 = [ 3  6 ] − [ −2  6 ] = [  5  0 ] = A3
            [ 0 −3 ]   [  2  0 ]   [ −2 −3 ]

Therefore, 3A1 − 2A2 − A3 = 0 and thus we have found scalars c1, c2, c3, not all zero, such that c1 A1 + c2 A2 + c3 A3 = 0.
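The dependence relation is also easy to verify numerically; a short NumPy check (assuming the matrices A1, A2, A3 as reconstructed above):

import numpy as np

A1 = np.array([[1, 2], [0, -1]])
A2 = np.array([[-1, 3], [1, 0]])
A3 = np.array([[5, 0], [-2, -3]])

# 3*A1 - 2*A2 - A3 should be the 2x2 zero matrix.
print(3 * A1 - 2 * A2 - A3)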
...
2 Bases
We now introduce the important concept of a basis. Given a set of vectors {v1, v2, ..., vp−1, vp} in V, we showed that W = span{v1, v2, ..., vp} is a subspace of V. If, say, vp is linearly dependent on v1, v2, ..., vp−1, then the smaller set {v1, v2, ..., vp−1} still spans all of W:
W = span{v1, v2, ..., vp−1}
If some other vector vj is linearly dependent on the remaining vectors, we can remove it as well without changing the span. We can continue removing vectors until we obtain a minimal set of vectors that are linearly independent and still span W.
...
Definition 16. A set of vectors B = {v1, ..., vk} in W is a basis for W if
(a) the set B spans all of W, that is, W = span{v1, ..., vk}, and
(b) the set B is linearly independent.
...
Indeed, if B = {v1, ..., vp} is a basis for W, then the smaller set B̃ = {v1, ..., vp−1} cannot be a basis for W. Why? If {v1, ..., vp} is a basis then it is linearly independent and therefore vp cannot be written as a linear combination of the others. In other words, vp is not in span{v1, ..., vp−1}, and therefore B̃ is not a basis for W because a basis must be a spanning set. On the other hand, if we take a basis B = {v1, ..., vp} for W and we add a new vector u from W, then B̃ = {v1, ..., vp, u} is not a basis for W. Why? We still have that span B̃ = W, but now B̃ is not linearly independent: because {v1, ..., vp} is a basis for W, the vector u can be written as a linear combination of {v1, ..., vp}.
...
Example 16. Show that the standard unit vectors form a basis for V = R3:
e1 = (1, 0, 0),  e2 = (0, 1, 0),  e3 = (0, 0, 1)
Solution
...
The set B = {e1 , e2 , e3 } is linearly independent
...
Therefore, by definition, B = {e1 , e2 , e3 }
is a basis for R3
...
Analogous arguments hold for the standard unit vectors {e1, e2, ..., en} in Rn.
Example 16. Is B = {v1, v2, v3} a basis for R3?
v1 = (2, 0, −4),  v2 = (−4, −2, 8),  v3 = (4, −6, −6)
Solution
...
One can show that rank(A) = 3, where A = [v1 v2 v3], hence B is linearly independent. Therefore, the columns of A span all of R3:
Col(A) = span{v1, v2, v3} = R3
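A numerical check of this example (assuming the vectors as printed above): three vectors form a basis for R3 exactly when the matrix having them as columns has rank 3.

import numpy as np

v1 = np.array([2, 0, -4])
v2 = np.array([-4, -2, 8])
v3 = np.array([4, -6, -6])

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 3, so B = {v1, v2, v3} is a basis for R3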
...
Example 16. In V = R4, consider vectors v1, v2, v3 and let W = span{v1, v2, v3}. [The entries of the vectors are not fully legible in this extract.] Is B = {v1, v2, v3} a basis for W?
Solution
...
Form the matrix A = [v1 v2 v3] and row reduce to obtain

A ∼ [ 1 0  1 ]
    [ 0 1 −1 ]
    [ 0 0  0 ]
    [ 0 0  0 ]

Hence, rank(A) = 2 and thus B is linearly dependent.
...
Therefore, B is
not a basis of W
...
...
Example 16. Recall that an n × n matrix A is skew-symmetric if A^T = −A. Find a basis for the set of 3 × 3 skew-symmetric matrices.
...
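A sketch of one standard solution (this is not necessarily the argument in the full notes): a matrix is skew-symmetric exactly when its diagonal entries are zero and the entries below the diagonal are the negatives of those above. Hence every 3 × 3 skew-symmetric matrix has the form

[  0  a  b ]
[ −a  0  c ]  =  a E1 + b E2 + c E3
[ −b −c  0 ]

where E1, E2, E3 are the matrices obtained by setting one of a, b, c equal to 1 and the others to 0. The set {E1, E2, E3} spans the space and is clearly linearly independent, so it is a basis, and the space of 3 × 3 skew-symmetric matrices has dimension 3.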
3 Dimension of a Vector Space
The following theorem will lead to the definition of the dimension of a vector space
...
Theorem 16.10: Let V be a vector space. Then any two bases of V contain the same number of vectors.
Proof: We will prove the theorem for the case that V = Rn. We have already seen that the standard unit vectors {e1, ..., en} form a basis of Rn, so it suffices to show that a basis of Rn cannot contain more or fewer than n vectors. Let {u1, ..., up} be nonzero vectors in Rn and suppose first that p > n. In Theorem 7, we proved that any set of vectors in Rn containing more than n vectors is automatically linearly dependent: the matrix A = [u1 u2 · · · up] has more columns than rows, so the RREF of A has at least one free variable. Therefore, the solution set of Ax = 0 contains non-trivial solutions. In Lecture 4, Theorem 4, we proved that a set of vectors {u1, ..., up} in Rn spans Rn if and only if the RREF of A has exactly r = n leading ones. Therefore, if p < n then {u1, u2, ..., up} cannot span Rn, because the RREF of A has at most p < n leading ones. Thus, in either case (p > n or p < n), the set {u1, u2, ..., up} is not a basis of Rn. Hence, any basis in Rn must contain n vectors.
...
This does not mean, however, that any set {v1, ..., vn} of nonzero vectors in Rn containing n vectors is automatically a basis for Rn. All that we can say is that a set of vectors in Rn containing fewer or more than n vectors is automatically not a basis for Rn.
...
By Theorem 16.10, any basis in Rn must have exactly n vectors. More generally, if {v1, ..., vn} is a basis for V, then any other basis for V must have exactly n vectors also.
...
Definition 16. Let V be a vector space. The dimension of V, denoted dim V, is the number of vectors in any basis of V.
...
There is one subtle issue we are sweeping under the rug: Does every vector space have a
basis? The answer is yes but we will not prove this result here
...
Consider now a set B = {v1, ..., vn} in Rn containing exactly n vectors. For B = {v1, ..., vn} to be a basis of Rn, the set B must be linearly independent and span B = Rn. When the number of vectors equals n, however, one of these two conditions implies the other. For example, say the vectors {v1, v2, ..., vn} are linearly independent. Then the n × n matrix A = [v1 v2 · · · vn] is invertible. Then A^(−1) exists and therefore Ax = b is always solvable: x = A^(−1) b. In other words, span{v1, ..., vn} = Rn.
...
Theorem 16. Let {v1, ..., vn} be vectors in Rn. If {v1, ..., vn} is linearly independent, then it is a basis for Rn. Similarly, if span{v1, v2, ..., vn} = Rn, then {v1, ..., vn} is a basis for Rn.
Example 16. Do the columns of the matrix A form a basis for R4?

A = [  2  3  3 −2 ]
    [  4  7  8 −6 ]
    [  0  0  1  0 ]
    [ −4 −6 −6  3 ]
Solution
...
Since we have n = 4 vectors in Rn , we
need only check that they are linearly independent
...
Therefore, the
vectors v1 , v2 , v3 , v4 form a basis for R4
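This too can be checked numerically (assuming the matrix as reconstructed above): the columns form a basis exactly when rank(A) = 4, equivalently det(A) ≠ 0.

import numpy as np

A = np.array([[ 2,  3,  3, -2],
              [ 4,  7,  8, -6],
              [ 0,  0,  1,  0],
              [-4, -6, -6,  3]])

print(np.linalg.matrix_rank(A))  # 4
print(np.linalg.det(A))          # nonzero (about -2), so the columns are a basis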
...
By definition, if B = {v1, ..., vk} is linearly independent and span{v1, ..., vk} = W, then B is a basis for W and in this case the dimension of W is k.
...
As an example, in V = R3 subspaces can be classified by dimension:
1. The only zero-dimensional subspace of R3 is the trivial subspace {0}.
2. The one-dimensional subspaces of R3 are lines through the origin. These are spanned by a single non-zero vector.
3. The two-dimensional subspaces in R3 are planes through the origin. These are spanned by two linearly independent vectors.
4. The only three-dimensional subspace of R3 is R3 itself. Any set {v1, v2, v3} in R3 that is linearly independent is a basis for R3.
...
Example 16.14. Find a basis for Null(A) and dim Null(A) if

A = [  2 −6 −3  1 ]
    [ −3  8  2 −3 ]
Solution
...
Row reducing, one obtains

A ∼ [ 1 0  6    5  ]
    [ 0 1 5/2  3/2 ]
The general solution to Ax = 0 in parametric vector form is

x = t (−5, −3/2, 0, 1) + s (−6, −5/2, 1, 0) = t v1 + s v2

By construction, the vectors

v1 = (−5, −3/2, 0, 1),   v2 = (−6, −5/2, 1, 0)

span Null(A) and they are linearly independent.
...
In general, the dimension of Null(A) is the number of free parameters in the solution set of the system Ax = 0, that is,
dim Null(A) = d = n − rank(A)
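SymPy reproduces both the spanning vectors and the dimension count; a short sketch using the matrix from the example above:

import sympy as sp

A = sp.Matrix([[ 2, -6, -3,  1],
               [-3,  8,  2, -3]])

basis = A.nullspace()          # column vectors spanning Null(A)
print(basis)
print(A.shape[1] - A.rank())   # dim Null(A) = n - rank(A) = 4 - 2 = 2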
Example 16. Find a basis for Col(A) and dim Col(A) if A = [v1 v2 v3 v4 v5]. [The entries of A are only partially legible in this extract; its first row is (1, 2, 3, −4, 8).]
...
By definition, the column space of A is the span of the columns of A, which we
denote by A = [v1 v2 v3 v4 v5 ]
...
For example,
first we determine if {v1 , v2 } is linearly independent
...
If {v1 , v2 } is not linearly independent then discard v2
and determine if {v1 , v3 } is linearly independent
...
Instead, we can use the fact that matrices that are row equivalent
induce the same solution set for the associated homogeneous system
...
It is easy to see that b2 = 2b1 and b4 = 2b1 − 2b3. Thus, because b1, b3, b5 are linearly independent columns of B = rref(A), the corresponding columns v1, v3, v5 of A are linearly independent.
...
This procedure works in general: to find a basis for Col(A), row reduce A ∼ B until you can determine which columns of B are linearly independent; the corresponding columns of A then form a basis for Col(A).
WARNING: Do not take the linearly independent columns of B as a basis for Col(A): row operations change the column space, so in general Col(B) ≠ Col(A). Use the corresponding columns of the original matrix A.
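SymPy automates this pivot-column procedure; a sketch with a small hypothetical matrix (not the matrix of this example, whose entries are partly lost in the extract):

import sympy as sp

A = sp.Matrix([[1, 2, 0],
               [2, 4, 1],
               [3, 6, 2]])   # hypothetical; note col2 = 2*col1

_, pivot_cols = A.rref()                 # indices of the pivot columns, here (0, 2)
basis = [A.col(j) for j in pivot_cols]   # columns of A itself, not of rref(A)
print(pivot_cols, basis)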
After this lecture you should know the following:
• what it means for a set to be linearly independent/dependent
• what a basis is (a spanning set that is linearly independent)
• what is the meaning of the dimension of a vector space
• how to determine if a given set in Rn is linearly independent
• how to find a basis for the null space and column space of a matrix A
Lecture 17
The Rank Theorem
...
Definition 17. The rank of a matrix A is the dimension of its column space, that is, rank(A) = dim Col(A). We will use rank(A) to denote the rank of A.
...
The range of a mapping is sometimes called the image
...
Definition 17. The nullity of a matrix A is the dimension of its null space, that is, nullity(A) = dim Null(A). We will use nullity(A) to denote the nullity of A.
...
The rank and nullity of a matrix are connected via the following fundamental theorem
known as the Rank Theorem
...
Theorem 17.3: (Rank Theorem) Let A be an m × n matrix.
...
Moreover, the following equation holds:
n = rank(A) + nullity(A)
...
A basis for the column space is obtained by computing rref(A), identifying the columns that contain a leading 1, and taking the corresponding columns of A.
...
Therefore, if r is the number
of leading 1’s then r = rank(A)
...
The number of free parameters in the solution set of Ax = 0 is d = n − r, and therefore a basis for Null(A) will contain d vectors, that is, nullity(A) = d. Hence rank(A) + nullity(A) = r + (n − r) = n.
...
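A quick sanity check of the Rank Theorem on a hypothetical matrix:

import sympy as sp

A = sp.Matrix([[1, -2, 2, 3, -6],
               [0, -1, -3, 1, 1],
               [0, 0, 1, 0, -1]])   # hypothetical 3x5 matrix

n = A.shape[1]
rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity == n)   # 3 2 True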
Example 17. Find the rank and nullity of the matrix A. [Only the first row of A, (1, −2, 2, 3, −6), is legible in this extract.]
Solution. Row reduce far enough to identify where the leading entries are. Applying 2R1 + R2 and further row operations:

A ∼ [ 1 −2  2 3 −6 ]
    [ 0 −1 −3 1  1 ]
    [ 0  0  1 0 −1 ]

There are r = 3 leading entries and therefore rank(A) = 3. By the Rank Theorem, nullity(A) = n − rank(A) = 5 − 3 = 2.
...
Example 17. Find the rank and nullity of the matrix

A = [  1 −3 −1 ]
    [ −1  4  2 ]
    [ −1  3  0 ]

Solution. Row reduce far enough to identify where the leading entries are. Applying R1 + R2 and R1 + R3:

A ∼ [ 1 −3 −1 ]
    [ 0  1  1 ]
    [ 0  0 −1 ]

There are r = 3 leading entries and therefore rank(A) = 3. By the Rank Theorem, nullity(A) = 3 − 3 = 0.
...
Another way to see that nullity(A) = 0 is as follows: since rank(A) = 3 = n, the RREF of A has a leading 1 in every column, so the system Ax = 0 has no free parameters and x = 0 is the only solution. Therefore, there is only one vector in Null(A), namely Null(A) = {0}.
...
Using the rank and nullity of a matrix, we now provide further characterizations of
invertible matrices
...
Theorem 17.6: Let A be an n × n matrix. The following statements are equivalent:
(i) ...
(ii) Col(A) = Rn
(iii) rank(A) = n
(iv) Null(A) = {0}
(v) nullity(A) = 0
(vi) A is an invertible matrix
...
Lecture 18
1 Coordinates
Recall that a basis of a vector space V is a set of vectors B = {v1, v2, ..., vn} in V such that
1. the set B spans all of V, that is, V = span(B), and
2. the set B is linearly independent.
Hence, if B is a basis for V, each vector x∗ ∈ V can be written as a linear combination of B:
x∗ = c1 v1 + c2 v2 + · · · + cn vn
...
Any vector x ∈ span(B) can be written in only one way as a linear combination of v1, ..., vn. In other words, for the x∗ above, there do not exist other scalars t1, ..., tn such that x∗ = t1 v1 + · · · + tn vn. To see this, suppose that we can write x∗ in two different ways using B:
x∗ = c1 v1 + c2 v2 + · · · + cn vn
x∗ = t1 v1 + t2 v2 + · · · + tn vn
Subtracting the second expression from the first, we obtain
0 = (c1 − t1) v1 + (c2 − t2) v2 + · · · + (cn − tn) vn
Since B = {v1, ..., vn} is linearly independent, the only linear combination of v1, ..., vn that gives the zero vector 0 is the trivial linear combination. Therefore, ci − ti = 0, that is, ci = ti, for each i = 1, 2, ..., n. Hence, relative to a fixed basis B = {v1, ..., vn}, the scalars c1, c2, ..., cn representing a given vector are unique.
Our preceding discussion on the unique representation property of vectors in a given basis
leads to the following definition
...
Definition 18.1: Let B = {v1, ..., vn} be a basis for V and let x ∈ V. The coordinates of x relative to the basis B are the unique scalars c1, c2, ..., cn such that
x = c1 v1 + c2 v2 + · · · + cn vn
In vector notation, the B-coordinates of x will be denoted by
[x]B = (c1, c2, ..., cn)
and we will call [x]B the coordinate vector of x relative to B.
...
If it is clear what basis we are working with, we will omit the subscript B and simply write
[x] for the coordinates of x relative to B
...
Example 18.2. Let B = {v1, v2}, where v1 = (1, 1) and v2 = (−1, 1). Find the coordinates of v relative to B. [The vector v is not legible in this extract.]
It is clear how the procedure of the previous example can be generalized
...
Let B = {v1, ..., vn} be a basis for Rn and let v be any vector in Rn. Form the matrix P = [v1 v2 · · · vn] whose columns are the basis vectors. Then the B-coordinates of v form the unique column vector [v]B solving the linear system
Px = v
that is, x = [v]B is the unique solution to Px = v
...
Because the columns v1, ..., vn of P are linearly independent, P is invertible and the solution to Px = v is
[v]B = P^(−1) v
...
In summary, to find
coordinates with respect to a basis B in Rn , we need to solve a square linear system
...
Example 18.3.
...
One can show that B is linearly independent and therefore a basis for
W = span{v1 , v2 }
...
Solution
...
Therefore, x is in W, and the B-coordinates of x are
[x]B = (2, 3)
Example 18
...
What are the coordinates of a given vector in the standard basis E = {e1, e2, e3}?
Solution
...
Example 18.5.
...
(i) Show that B = {1, t, t^2, t^3} is a basis for P3[t].
(ii) Find the coordinates of v(t) = 3 − t^2 − 7t^3 relative to B.
Solution
...
Indeed, any polynomial u(t) = c0 + c1 t + c2 t^2 + c3 t^3 is clearly a linear combination of 1, t, t^2, t^3, so B spans P3[t]. To verify linear independence, suppose that
c0 + c1 t + c2 t^2 + c3 t^3 = 0
Since the above equality must hold for all values of t, we conclude that c0 = c1 = c2 = c3 = 0.
...
(ii) In the basis B, the coordinates of v(t) = 3 − t^2 − 7t^3 are
[v(t)]B = (3, 0, −1, −7)
The basis B = {1, t, t^2, t^3} is called the standard basis in P3[t].
...
Example 18.6. Show that
B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
is a basis for M2×2.
...
Any matrix M = [m11 m12; m21 m22] can be written as a linear combination of the matrices in B:
M = m11 [1 0; 0 0] + m12 [0 1; 0 0] + m21 [0 0; 1 0] + m22 [0 0; 0 1]
If
c1 [1 0; 0 0] + c2 [0 1; 0 0] + c3 [0 0; 1 0] + c4 [0 0; 0 1] = [c1 c2; c3 c4] = [0 0; 0 0]
then clearly c1 = c2 = c3 = c4 = 0.
...
The coordinates of A = [3 0; −4 −1] relative to the basis
B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }
are
[A]B = (3, 0, −4, −1)
The basis B above is the standard basis of M2×2.
...
2 Coordinate Mappings
Let B = {v1, v2, ..., vn} be a basis of Rn and let P = [v1 v2 · · · vn] be the matrix whose columns are the basis vectors. If x ∈ Rn and [x]B are the B-coordinates of x relative to B then
x = P [x]B    (⋆)
...
For this reason, we call P the
change-of-coordinates matrix from the basis B to the standard basis in Rn
...
Multiplying equation (⋆) by P^(−1) we obtain
P^(−1) x = [x]B
...
Example 18. The columns of the matrix P form a basis B for R3:

P = [  1  3  3 ]
    [ −1 −4 −2 ]
    [  0  0 −1 ]
...
(b) Find the B-coordinates of v = (2, −1, 0)
...
The matrix P maps B-coordinates to standard coordinates in R3
...
One can verify that

P^(−1) = [  4  3  6 ]
         [ −1 −1 −1 ]
         [  0  0 −1 ]

Therefore, the B-coordinates of v are

[v]B = P^(−1) v = P^(−1) (2, −1, 0) = (5, −1, 0)
When V is an abstract vector space, e.g., Pn[t] or Mn×n, the notion of a coordinate mapping is similar to the case when V = Rn.
...
If B = {v1, ..., vn} is a basis for V, we define the coordinate mapping P : V → Rn relative to B as the mapping
P(v) = [v]B
...
Example 18.8. Let B be the standard basis of M2×2 (see Example 18.6). What is P : M2×2 → R4?
Solution. By definition of the coordinate mapping, P sends a matrix to its vector of B-coordinates:
P([a11 a12; a21 a22]) = (a11, a12, a21, a22)
Let T : V → W be a linear mapping. Then by definition of a linear mapping, T(v + u) = T(v) + T(u) and T(αv) = αT(v) for every v, u ∈ V and α ∈ R.
...
Let B = {v1, ..., vn} be a basis of V and let γ = {w1, w2, ..., wm} be a basis of W. Then for any v ∈ V there exist scalars c1, c2, ..., cn such that v = c1 v1 + c2 v2 + · · · + cn vn, that is, [v]B = (c1, ..., cn) are the coordinates of v in the basis B. By linearity of the mapping T we have
T(v) = T(c1 v1 + c2 v2 + · · · + cn vn)
= c1 T(v1) + c2 T(v2) + · · · + cn T(vn)
Now each vector T(vj) is in W, and therefore, because γ is a basis of W, there are scalars a1,j, a2,j, ..., am,j such that
T(vj) = a1,j w1 + a2,j w2 + · · · + am,j wm
that is, [T(vj)]γ = (a1,j, a2,j, ..., am,j). Substituting T(vj) = a1,j w1 + a2,j w2 + · · · + am,j wm for each j = 1, 2, ..., n into the expression for T(v) shows that the γ-coordinates of T(v) depend linearly on the B-coordinates of v: the matrix A whose j-th column is [T(vj)]γ satisfies [T(v)]γ = A [v]B.
Example 18. Consider the vector space V = P2[t] of polynomials of degree no more than two and let T : V → V be defined by
T(v(t)) = 4v′(t) − 2v(t)
It is straightforward to verify that T is a linear mapping. Let B = {v1, v2, v3}, where v1 = t − 1, v2 = 2t + 3, v3 = t^2 + 1.
(a) Verify that B is a basis of V.
(b) Find the B-coordinates of v(t) = −t^2 + 3t + 1.
(c) Find the matrix representation of T in the basis B.
...
(a) Suppose that there are scalars c1 , c2 , c3 such that
c1 v1 + c2 v2 + c3 v3 = 0
Then expanding and then collecting like terms we obtain
c3 t^2 + (c1 + 2c2) t + (−c1 + 3c2 + c3) = 0
Since the above holds for all t ∈ R we must have
c3 = 0,
c1 + 2c2 = 0,
−c1 + 3c2 + c3 = 0
Solving for c1 , c2 , c3 we obtain c1 = 0, c2 = 0, c3 = 0
...
This proves
by definition that B is linearly independent
...
(b) One checks that v(t) = −t^2 + 3t + 1 = v1 + v2 − v3. Hence,
[v]B = (1, 1, −1)
(c) The matrix representation A of T is
A = [ [T(v1)]B  [T(v2)]B  [T(v3)]B ]
Now we compute directly that
T(v1) = −2t + 6,  T(v2) = −4t + 2,  T(v3) = −2t^2 + 8t − 2
And then one computes that
[T(v1)]B = (−18/5, 4/5, 0),  [T(v2)]B = (−16/5, −2/5, 0),  [T(v3)]B = (24/5, 8/5, −2)
And therefore

A = [ −18/5 −16/5 24/5 ]
    [   4/5  −2/5  8/5 ]
    [     0     0   −2 ]
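The computation in part (c) can be automated with SymPy; a sketch assuming the basis B = {t − 1, 2t + 3, t^2 + 1} reconstructed above:

import sympy as sp

t = sp.symbols("t")
basis = [t - 1, 2*t + 3, t**2 + 1]
T = lambda p: 4*sp.diff(p, t) - 2*p

def coords(p):
    # Solve p = c1*v1 + c2*v2 + c3*v3 by matching coefficients of t.
    c = sp.symbols("c1:4")
    residual = p - sum(ci*vi for ci, vi in zip(c, basis))
    sol = sp.solve(sp.Poly(residual, t).all_coeffs(), c)
    return [sol[ci] for ci in c]

# Columns of the matrix representation are [T(vj)]_B.
A = sp.Matrix([coords(T(v)) for v in basis]).T
print(A)   # Matrix([[-18/5, -16/5, 24/5], [4/5, -2/5, 8/5], [0, 0, -2]])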
After this lecture you should know the following:
• what coordinates are (you need a basis)
• how to find coordinates relative to a basis
• the interpretation of the change-of-coordinates matrix as a mapping that transforms
one set of coordinates to another