Homogeneous and Nonhomogeneous Systems
You should check that p = (1, −1, 0) solves the linear system Ax = b, and that v = (2, 3, 1) solves the homogeneous system Ax = 0.
...
Solution.
...
Let x3 = t1 and x2 = t2.
...
Summary
The material in this lecture is so important that we will summarize the main results
...
. . . , vd satisfies Avi = 0
...
. . . , vd }, where p satisfies Ap = b and Avi = 0
...
...
• how to write the solution set of a nonhomogeneous system in parametric vector form (Theorem 5)
...
6.1 Linear independence
In Lecture 3, we defined the span of a set of vectors {v1, v2, . . . , vn}. If x ∈ span{v1, v2, . . . , vn} then by definition there exist scalars t1, t2, . . . , tn such that x = t1 v1 + t2 v2 + · · · + tn vn. A natural question that arises is whether or not there are multiple ways to express x as a linear combination of the vectors v1, v2, . . . , vn. For example, if v1 = (1, 2), v2 = (0, 1), v3 = (−1, −1), and x = (3, −1) then you can verify that x ∈ span{v1, v2, v3} and x can be written in infinitely many ways using v1, v2, v3.
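For instance (a quick numerical check, assuming Python with numpy is available; the two coefficient choices below are ours, found using the relation v1 − v2 + v3 = 0):

    import numpy as np

    v1, v2, v3 = np.array([1, 2]), np.array([0, 1]), np.array([-1, -1])
    x = np.array([3, -1])

    # two different coefficient choices that produce the same vector x
    print(3*v1 - 7*v2 + 0*v3)   # [ 3 -1]
    print(2*v1 - 6*v2 - 1*v3)   # [ 3 -1]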
The fact that x can be written in more than one way in terms of v1, v2, v3 suggests that there might be a redundancy in the set {v1, v2, v3}.
...
The preceding discussion motivates the following definition.
...
Definition 6.1: A set of vectors {v1, v2, . . . , vn} is said to be linearly dependent if at least one of the vectors can be written as a linear combination of the others, that is, if some vj lies in span{v1, . . . , vj−1, vj+1, . . . , vn}. If {v1, v2, . . . , vn} is not linearly dependent then we say that it is linearly independent.
...
Example: Consider the vectors
v1 = (1, 2, 3), v2 = (4, 5, 6), v3 = (2, 1, 0).
Show that they are linearly dependent.
...
By inspection, we have
2v1 + v3 = (2, 4, 6) + (2, 1, 0) = (4, 5, 6) = v2
Thus, v2 ∈ span{v1 , v3 } and therefore {v1 , v2 , v3 } is linearly dependent
...
Hence, because {v1, v2, v3} is a linearly dependent set, it is possible to write the zero vector 0 as a linear combination of v1, v2, v3 where not all the coefficients in the linear combination are zero: indeed, 2v1 − v2 + v3 = 0.
...
Theorem 6: The set of vectors {v1, v2, . . . , vn} is linearly independent if and only if 0 can be written in only one way as a linear combination of {v1, v2, . . . , vn}. In other words, if
t1 v1 + t2 v2 + · · · + tn vn = 0
then necessarily the coefficients t1, t2, . . . , tn are all zero.
Proof: If {v1, v2, . . . , vn} is linearly independent then every vector x ∈ span{v1, v2, . . . , vn} can be written in only one way as a linear combination of {v1, v2, . . . , vn}, and this applies to the particular case of the zero vector x = 0.
Conversely, suppose that 0 can be written in only one way as a linear combination of {v1, v2, . . . , vn}, namely with all coefficients equal to zero. Now take any x ∈ span{v1, v2, . . . , vn} and suppose that x can be written in two ways as a linear combination of {v1, v2, . . . , vn}:
r1 v1 + r2 v2 + · · · + rn vn = x
s1 v1 + s2 v2 + · · · + sn vn = x
Subtracting the second equation from the first gives
(r1 − s1)v1 + (r2 − s2)v2 + · · · + (rn − sn)vn = 0
The above equation is a linear combination of v1, v2, . . . , vn that gives the zero vector 0. But we are assuming that the only way to write 0 in terms of {v1, v2, . . . , vn} is with all coefficients equal to zero. Therefore, we must have r1 − s1 = 0, r2 − s2 = 0, . . . , rn − sn = 0, that is, r1 = s1, r2 = s2, . . . , rn = sn. Therefore, each x ∈ span{v1, v2, . . . , vn} can be written in only one way as a linear combination of {v1, v2, . . . , vn}, and thus {v1, v2, . . . , vn} is linearly independent.
Because of Theorem 6, an equivalent way to characterize the linear independence of {v1, v2, . . . , vn} is that the vector equation
x1 v1 + x2 v2 + · · · + xn vn = 0
has only the trivial solution, i.e., the solution x1 = x2 = · · · = xn = 0. On the other hand, if {v1, v2, . . . , vn} is linearly dependent, then there exist scalars x1, x2, . . . , xn, not all zero, such that x1 v1 + x2 v2 + · · · + xn vn = 0.
Hence, if we suppose for instance that xn ≠ 0 then we can write vn in terms of the vectors v1, . . . , vn−1:
vn = −(x1/xn) v1 − (x2/xn) v2 − · · · − (xn−1/xn) vn−1
In other words, vn ∈ span{v1, v2, . . . , vn−1}.
According to Theorem 6, the set {v1, v2, . . . , vn} is linearly independent if the equation
x1 v1 + x2 v2 + · · · + xn vn = 0
has only the trivial solution. Now, this vector equation is equivalent to the homogeneous linear system Ax = 0, where A = [v1 v2 · · · vn]. Therefore, the set {v1, v2, . . . , vn} is linearly independent if the homogeneous system Ax = 0 has only the trivial solution. But the homogeneous system Ax = 0 has only the trivial solution if there are no free parameters in its solution set.
Theorem 6: Let A = [v1 v2 · · · vn]. The set {v1, v2, . . . , vn} is linearly independent if and only if the rank of A is r = n, that is, if the number of leading entries r in the REF (or RREF) of A is exactly n.
...
Example 6.5: Are the vectors v1 = (0, 1, 5), v2 = (1, 2, 8), v3 = (4, −1, 0) linearly independent?
Solution: Let A be the matrix
A = [ v1 v2 v3 ] =
0  1   4
1  2  −1
5  8   0
Performing elementary row operations we obtain
A ∼
1  2  −1
0  1   4
0  0  13
Clearly, r = rank(A) = 3, which is equal to the number of vectors n = 3
...
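As a quick cross-check of this rank computation (a sketch assuming Python with numpy, using numpy.linalg.matrix_rank in place of row reduction):

    import numpy as np

    A = np.array([[0, 1,  4],
                  [1, 2, -1],
                  [5, 8,  0]])   # columns are v1, v2, v3

    # the rank equals the number of columns, so the columns are linearly independent
    print(np.linalg.matrix_rank(A))   # 3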
Example 6: Are the vectors below linearly independent?
v1 = (1, 2, 3), v2 = (4, 5, 6), v3 = (2, 1, 0)
Solution
...
Therefore, {v1, v2, v3} is linearly dependent.
...
The REF of A = [v1 v2 v3] is
A ∼
1   4   2
0  −3  −3
0   0   0
Since r = 2, the solution set of the linear system Ax = 0 has d = n − r = 1 free parameter
...
Choosing for instance t = 2 we obtain the solution
x = t (2, −1, 1) = (4, −2, 2)
and therefore 4v1 − 2v2 + 2v3 = 0.
And, for instance, v3 = −2v1 + v2, that is, v3 ∈ span{v1, v2}.
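A numerical version of this computation (a sketch assuming Python with numpy; the explicit solution below is the one found above with t = 2):

    import numpy as np

    A = np.array([[1, 4, 2],
                  [2, 5, 1],
                  [3, 6, 0]])   # columns are v1, v2, v3

    print(np.linalg.matrix_rank(A))   # 2, so the columns are linearly dependent
    x = np.array([4, -2, 2])          # solution of Ax = 0 corresponding to t = 2
    print(A @ x)                      # [0 0 0]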
...
Indeed, if v1 is non-zero then tv1 = 0 is true if and only if t = 0.
...
For example, if v2 = tv1 then tv1 − v2 = 0 is a non-trivial linear combination of v1, v2 giving the zero vector 0.
...
Any set of vectors {v1, v2, . . . , vp} containing the zero vector, say that vp = 0, is linearly dependent.
...
Theorem 6.7: Let {v1, v2, . . . , vp} be a set of vectors in Rn. If p > n, then v1, v2, . . . , vp are linearly dependent. Equivalently, if the vectors v1, v2, . . . , vp in Rn are linearly independent then p ≤ n.
Proof: Let A = [v1 v2 · · · vp], which is an n × p matrix. Since A has n rows, the maximum rank of A is n, that is r ≤ n. Since p > n ≥ r, the solution set of the homogeneous system Ax = 0 has d = p − r > 0 free parameters. Thus, the homogeneous system Ax = 0 has non-trivial solutions. Therefore, the set {v1, v2, . . . , vp} is linearly dependent.
Theorem 6.7 will be used when we discuss the notion of the dimension of a space. It says that any set {v1, v2, . . . , vp} in Rn consisting of more than n vectors is automatically linearly dependent.
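A quick numerical illustration of Theorem 6.7 (a sketch assuming Python with numpy; the random integer vectors are ours, not from the notes):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 6, size=(3, 4))   # four vectors in R3, stored as columns

    # the rank is at most 3 < 4, so Ax = 0 has non-trivial solutions
    # and the four columns are linearly dependent
    print(np.linalg.matrix_rank(A))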
...
Example 6.8: ...
Solution: These are five vectors in R4 and 5 > 4. Therefore, by Theorem 6.7, the set {v1, v2, . . . , v5} is linearly dependent. To find an explicit non-trivial linear combination of the vectors giving the zero vector, form the matrix A = [v1 v2 · · · v5] and row reduce it. Then
A ∼
1 0 0 0 −1
0 1 0 0  1
0 0 1 0  0
0 0 0 1 −2
One solution to the linear system Ax = 0 is x = (−1, 1, 0, −2, −1) and therefore
(−1)v1 + (1)v2 + (0)v3 + (−2)v4 + (−1)v5 = 0
Example 6: Suppose that the set {v1, v2, v3, v4} is linearly independent. Show that the set {v1, v2, v3} is also linearly independent.
Solution: Suppose then that there exist scalars x1, x2, x3 such that
x1 v1 + x2 v2 + x3 v3 = 0
Then it also holds that
x1 v1 + x2 v2 + x3 v3 + 0 · v4 = 0
But the set {v1, v2, v3, v4} is linearly independent, and therefore, it is necessary that x1, x2, x3 are all zero. Hence, {v1, v2, v3} is linearly independent.
...
The previous example can be generalized as follows: if {v1, v2, . . . , vp} is linearly independent, then any subset {v1, v2, . . . , vd} of it (with d ≤ p) is also linearly independent.
...
...
• the relationship between the linear independence of {v1, v2, . . . , vp} and the number of vectors p (Theorem 6.7)
Lecture 7
Introduction to Linear Mappings
7
...
The domain of T is Rn and the co-domain of T is Rm
...
In engineering or physics, the domain is sometimes called the input space and the
co-domain is called the output space
...
Definition 7: The range of T, denoted Range(T), is the set of all vectors b ∈ Rm for which there exists some x ∈ Rn with T(x) = b.
In other words, b is in the range of T if there is an input x in the domain of T that outputs
b = T(x)
...
For
example, consider the vector mapping T : R2 → R2 defined as
T(x) = (x1^2 sin(x2) − cos(x1^2 − 1), x1^2 + x2^2 + 1)
...
On the other hand, b = (−1, 2) is in the range of T because
T((1, 0)) = (1^2 sin(0) − cos(1^2 − 1), 1^2 + 0^2 + 1) = (−1, 2) = b
...
In Figure 7.1 we illustrate the domain, the co-domain, and the range of a mapping. A crucial idea is that the range of T may not equal the co-domain.
Figure 7.1: The domain, co-domain, and range of a mapping.
...
7.2 Linear mappings
For our purposes, vector mappings T : Rn → Rm can be organized into two categories: (1)
linear mappings and (2) nonlinear mappings
...
Definition 7.2: The vector mapping T : Rn → Rm is said to be linear if the following conditions hold:
• For any u, v ∈ Rn, it holds that T(u + v) = T(u) + T(v).
• For any u ∈ Rn and any scalar c, it holds that T(cu) = cT(u).
If T is not linear then it is said to be nonlinear.
...
To see this, previously we computed that T((1, 0)) = (−1, 2). If T were linear, then by Definition 7.2 the following must hold:
T((3, 0)) = T(3 · (1, 0)) = 3 T((1, 0)) = 3 · (−1, 2) = (−3, 6)
However, the second component of T((3, 0)) is 3^2 + 0^2 + 1 = 10 ≠ 6, and therefore T is not linear.
Example 7: Is the vector mapping T : R2 → R3 linear?
T((x1, x2)) = (2x1 − x2, x1 + x2, −x1 − 3x2)
Solution: We must verify that both conditions of Definition 7.2 hold. Let u = (u1, u2) and v = (v1, v2) be arbitrary vectors in R2.
We compute:
T(u + v) = T((u1 + v1, u2 + v2))
= (2(u1 + v1) − (u2 + v2), (u1 + v1) + (u2 + v2), −(u1 + v1) − 3(u2 + v2))
= (2u1 + 2v1 − u2 − v2, u1 + v1 + u2 + v2, −u1 − v1 − 3u2 − 3v2)
= (2u1 − u2 + 2v1 − v2, u1 + u2 + v1 + v2, −u1 − 3u2 − v1 − 3v2)
= (2u1 − u2, u1 + u2, −u1 − 3u2) + (2v1 − v2, v1 + v2, −v1 − 3v2)
= T(u) + T(v)
Therefore, for arbitrary u, v ∈ R2 , it holds that
T(u + v) = T(u) + T(v)
...
Now let c be an arbitrary scalar. Then:
T(cu) = T((cu1, cu2))
= (2(cu1) − (cu2), (cu1) + (cu2), −(cu1) − 3(cu2))
= (c(2u1 − u2), c(u1 + u2), c(−u1 − 3u2))
= c (2u1 − u2, u1 + u2, −u1 − 3u2)
= c T(u)
Therefore, both conditions of Definition 7.2 hold and T is a linear mapping.
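As a quick numerical sanity check of the two computations above (a sketch assuming Python with numpy; the test vectors are arbitrary choices of ours):

    import numpy as np

    def T(x):
        # the mapping T((x1, x2)) = (2x1 - x2, x1 + x2, -x1 - 3x2) from the example
        x1, x2 = x
        return np.array([2*x1 - x2, x1 + x2, -x1 - 3*x2])

    u = np.array([1.0, 2.0])
    v = np.array([-3.0, 0.5])
    c = 4.0

    print(np.allclose(T(u + v), T(u) + T(v)))   # True
    print(np.allclose(T(c*u), c*T(u)))          # True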
Example 7: Let α ≥ 0 and define the mapping T : Rn → Rn by the formula T(x) = αx.
...
In either case, show that T is a linear mapping.
Solution: Let u and v be arbitrary vectors in Rn. Then
T(u + v) = α(u + v) = αu + αv = T(u) + T(v)
This shows that condition (1) in Definition 7.2 holds. To show that the second condition holds, let c be any number. Then
T(cu) = α(cu) = c(αu) = cT(u)
Therefore, both conditions of Definition 7.2 hold and T is a linear mapping.
To see a particular example, consider the case α = 1/2 and n = 3. Then T((x1, x2, x3)) = ((1/2)x1, (1/2)x2, (1/2)x3).
7.3 Matrix mappings
We discussed that we could interpret A as a mapping that takes the input vector x ∈ Rn
and produces the output vector Ax ∈ Rm
...
Such a mapping T will be called a matrix mapping corresponding to A and, when convenient, we will use the notation TA to indicate that TA is associated to A.
...
Recall that for any u, v ∈ Rn and scalar c, matrix-vector multiplication satisfies the properties:
1. A(u + v) = Au + Av
2. A(cu) = cAu
...
Theorem 7: Let A be an m × n matrix and define the mapping T : Rn → Rm by T(x) = Ax. Then T is a linear mapping.
...
6
...
In Example 7
...
2
...
By Theorem 7
...
Let T : Rn → Rm be a vector mapping
...
In this case, we say that b is the image
of x under T or that x is mapped to b under T
...
However, if T(x) = Ax
is a matrix mapping, then it is clear that finding such a vector x is equivalent to solving the
matrix equation Ax = b
...
Theorem 7: Let T : Rn → Rm be the matrix mapping T(x) = Ax. Then b ∈ Rm is in the range of T if and only if the matrix equation Ax = b has a solution.
...
We proved that the output vector Ax is a linear combination of the columns of A, where the coefficients in the linear combination are the components of x. In other words, if A = [v1 v2 · · · vn] and x = (x1, x2, . . . , xn) then
Ax = x1 v1 + x2 v2 + · · · + xn vn
Therefore, the range of the matrix mapping T(x) = Ax is the span of the columns of A, that is, Range(T) = span{v1, v2, . . . , vn}.
...
Therefore, if v1 , v2 ,
...
Example 7: Let
A =
 1   3  −4
 1   5   2
−3  −7  −6
and b = (−2, 4, 12). Is b in the range of the matrix mapping T(x) = Ax?
Solution: From Theorem 7, b is in the range of T if and only if the matrix equation Ax = b has a solution. To solve the system Ax = b, row reduce the augmented matrix [A b]:
 1   3  −4  −2        1  3   −4  −2
 1   5   2   4   ∼    0  1    3   3
−3  −7  −6  12        0  0  −12   0
The system is consistent and the (unique) solution is x = (−11, 3, 0)
Therefore, b is in the range of T.
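As a numerical cross-check (a sketch assuming Python with numpy):

    import numpy as np

    A = np.array([[ 1,  3, -4],
                  [ 1,  5,  2],
                  [-3, -7, -6]], dtype=float)
    b = np.array([-2, 4, 12], dtype=float)

    x = np.linalg.solve(A, b)   # A is invertible here, so the solution is unique
    print(x)                    # close to [-11, 3, 0], so b is in the range of T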
...
If T : Rn → Rm is a linear mapping, then for any vectors v1, v2, . . . , vp and scalars c1, c2, . . . , cp, it holds that
T(c1 v1 + c2 v2 + · · · + cp vp) = c1 T(v1) + c2 T(v2) + · · · + cp T(vp)    (⋆)
Therefore, if all you know are the values T(v1), T(v2), . . . , T(vp), then you can compute T(x) for every x in span{v1, v2, . . . , vp}.
...
Example 7.9: ...
Find T(2u + 3v).
...
Because T is a linear mapping we have that
T(2u + 3v) = T(2u) + T(3v) = 2T(u) + 3T(v)
...
Therefore, T(2u + 3v) = (0, −2, 3).
Example 7.10: Let Tθ : R2 → R2 be the mapping that rotates each vector in R2 by an angle θ. Write down a formula for Tθ and show that Tθ is a linear mapping.
...
If v = (cos(α), sin(α)) then
Tθ (v) = (cos(α + θ), sin(α + θ))
...
If we scale v by any c > 0 then performing the same computation as above we obtain that
Tθ (cv) = cT(v)
...
cos(θ)
Thus, Tθ is a linear mapping
...
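A short numerical illustration of the rotation mapping (a sketch assuming Python with numpy and the standard rotation matrix with rows (cos θ, −sin θ) and (sin θ, cos θ)):

    import numpy as np

    def T_theta(theta, v):
        # rotate the vector v by an angle theta
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return R @ v

    theta, alpha = 0.7, 0.3
    v = np.array([np.cos(alpha), np.sin(alpha)])
    print(T_theta(theta, v))
    print(np.array([np.cos(alpha + theta), np.sin(alpha + theta)]))   # same vector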
Example 7.11: Let T : R3 → R3 be the mapping defined by T((x1, x2, x3)) = (x1, x2, 0). Show that T is a linear mapping and describe the range of T.
...
First notice that
T((x1, x2, x3)) = (x1, x2, 0) = A (x1, x2, x3), where A =
1 0 0
0 1 0
0 0 0
Therefore, T is a linear mapping because it is a matrix mapping.
...
2
...
For each b in the range of T, there are infinitely
many x’s such that T(x) = b
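For example (a minimal sketch assuming Python with numpy; the two input vectors are arbitrary choices of ours):

    import numpy as np

    A = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]])    # the projection matrix from this example

    x1 = np.array([2, 5, -1])
    x2 = np.array([2, 5,  7])    # same first two components, different third component
    print(A @ x1)                # [2 5 0]
    print(A @ x2)                # [2 5 0]  -- two different inputs with the same image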
...
Figure 7.2: Projection onto the (x1, x2) plane.
After this lecture you should know the following:
• what a vector mapping is
• what the range of a vector mapping is
• that the co-domain and range of a vector mapping are generally not the same
• what a linear mapping is and how to check when a given mapping is linear
• what a matrix mapping is and that they are linear mappings
• how to determine if a vector b is in the range of a matrix mapping
• the formula for a rotation in R2 by an angle θ
Lecture 8
Onto and One-to-One Mappings,
and the Matrix of a Linear Mapping
8.1 Onto mappings
For example, if TA (x) = Ax is a matrix mapping and b
is such that the equation Ax = b has no solutions then the range of T does not contain b
and thus the range is not the whole co-domain
...
Definition 8.1: A vector mapping T : Rn → Rm is said to be onto if for each b ∈ Rm there is at least one x ∈ Rn such that T(x) = b.
...
Therefore:
Theorem 8.2: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax. Then TA is onto if and only if the columns of A span all of Rm.
...
Combining this with Theorem 8.2, we obtain the following.
Theorem 8.3: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax, where A ∈ Rm×n. Then TA is onto if and only if r = rank(A) = m.
...
Example 8: Let T : R3 → R3 be the matrix mapping with corresponding matrix
A =
 1   2  −1
−3  −4   2
 5   2   3
Is TA onto?
Solution: Row reducing A, one finds that r = rank(A) = 3. The dimension of the co-domain is m = 3 and therefore TA is onto.
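As a quick numerical check of the rank (a sketch assuming Python with numpy):

    import numpy as np

    A = np.array([[ 1,  2, -1],
                  [-3, -4,  2],
                  [ 5,  2,  3]])

    print(np.linalg.matrix_rank(A))   # 3, equal to m, so TA is onto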
...
Example 8.5: Is the matrix mapping TA : R4 → R3 given by the matrix A below onto?
The rref(A) is
      1  2  −1  4         1  0  −1  0
A =  −1  4   1  8    ∼    0  1   0  2
      2  0  −2  0         0  0   0  0
Therefore, r = rank(A) = 2 < 3 = m, and thus TA is not onto.
Notice that v3 = −v1 and v4 = 2v2
...
Therefore, span{v1, v2, v3, v4} = span{v1, v2} ≠ R3.
...
Theorem 8: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax, where A ∈ Rm×n. If TA is onto then m ≤ n.
...
If TA is onto then the rref(A) has r = m leading 1's. Each leading 1 is located in a different column of A. The number of columns of A is n. Therefore, m = r ≤ n.
An equivalent way of stating Theorem 8 is the following corollary.
Corollary 8: If n < m then TA : Rn → Rm is not onto.
Intuitively, if the domain Rn is “smaller” than the co-domain Rm and TA : Rn → Rm is
linear then TA cannot be onto
...
Linearity plays a key role in this
...
This situation cannot happen when the mapping is linear
...
...
TA is not onto because the domain is R2 and the co-domain is R3.
...
Geometrically, two vectors in R3 span a 2D plane going
through the origin
...
8.2 One-to-one mappings
Indeed, if b ∈ Range(T) then there exists an x ∈ Rn such that T(x) = b.
...
That is, does there
exist a distinct y such that T(y) = b
...
Definition 8.9: A vector mapping T : Rn → Rm is said to be one-to-one if for each b ∈ Range(T) there exists only one x ∈ Rn such that T(x) = b.
...
To do this, we use the fact that if T : Rn → Rm is linear then
T(0) = 0
...
Theorem 8.10: Let T : Rn → Rm be a linear mapping. Then T is one-to-one if and only if T(x) = 0 implies that x = 0.
...
Therefore, by Theorem 8.10, TA is one-to-one if and only if the only solution to Ax = 0 is x = 0.
...
Theorem 8.11: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax, where A = [v1 v2 · · · vn]. The following statements are equivalent:
1. TA is one-to-one.
2. ...
3. The columns v1, v2, . . . , vn are linearly independent.
...
Example 8.12: Let TA : R4 → R3 be the matrix mapping TA(x) = Ax, where
...
Is TA one-to-one?
Solution: By Theorem 8.11, TA is one-to-one if and only if the columns of A are linearly independent.
The columns of A are four vectors in R3. From Lecture 6 (Theorem 6.7), we know then that the columns are not linearly independent. Therefore, TA is not one-to-one.
...
Alternatively, A will have rank at most r = 3 (why?)
...
Intuitively, because R4 is “larger” than R3 , the linear mapping TA will have to project R4
onto R3 and thus infinitely many vectors in R4 will be mapped to the same vector in R3
...
Example 8.13: ...
Solution:
By inspection, we see that the columns of A are linearly independent
...
Alternatively, one can compute that
rref(A) =
1 0
0 1
0 0
Therefore, r = rank(A) = 2, which is equal to the number of columns of A, and hence TA is one-to-one.
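The same rank test is easy to run numerically. A minimal sketch (assuming Python with numpy; the matrix below is a made-up 3 × 2 example with linearly independent columns, not the one from these notes):

    import numpy as np

    A = np.array([[1, 0],
                  [2, 1],
                  [0, 3]])   # hypothetical 3x2 matrix

    n = A.shape[1]
    r = np.linalg.matrix_rank(A)
    print(r == n)   # True, so the matrix mapping x -> Ax is one-to-one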