Title: LINEAR ALGEBRA
Description: This book has been written for the use of students of degree and Honours classes of Indian and international universities.

1 VECTOR SPACES AND SUBSPACES

What is a vector? Many are familiar with the concept of a vector as:
• something which has magnitude and direction;
• a description for quantities such as force, velocity and acceleration.
The properties of general vector spaces are based on the properties of Rn.


The set of all ordered n-tuples (a1, a2, ..., an) of real numbers is denoted by Rn.

When n = 1 each ordered n-tuple consists of one real number, and so R may be viewed as the set of real numbers. When n = 2 the set R2 of ordered pairs has the geometrical interpretation of describing all points and directed line segments in the Cartesian x−y plane.

In the study of 3-space, the symbol (a1, a2, a3) has two different geometric interpretations: it can be interpreted as a point, in which case a1, a2 and a3 are the coordinates, or it can be interpreted as a vector, in which case a1, a2 and a3 are the components. By analogy, an ordered n-tuple (a1, a2, ..., an) can be viewed as a "generalized point" or a "generalized vector" - the distinction is mathematically unimportant.

Definitions
• Two vectors u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) in Rn are called equal if
u1 = v1, u2 = v2, ..., un = vn
• The sum u + v is defined by
u + v = (u1 + v1, u2 + v2, ..., un + vn)
• Let k be any scalar, then the scalar multiple ku is defined by
ku = (ku1, ku2, ..., kun)
• The zero vector in Rn is denoted by 0 and is defined to be the vector
0 = (0, 0, ..., 0)
• The negative of u is denoted by −u and is defined by
−u = (−u1, −u2, ..., −un)
• The difference of vectors in Rn is defined by
v − u = v + (−u)
The most important arithmetic properties of addition and scalar multiplication of vectors in Rn are listed in the following theorem.

Theorem 1.1. If u = (u1, u2, ..., un), v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn) are vectors in Rn and k and l are scalars, then:
1. u + v = v + u
2. u + (v + w) = (u + v) + w
3. u + 0 = 0 + u = u
4. u + (−u) = 0
5. k(lu) = (kl)u
6. k(u + v) = ku + kv
7. (k + l)u = ku + lu
8. 1u = u
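These identities can be spot-checked numerically. The following sketch (using numpy, with arbitrarily chosen vectors and scalars that are not part of the original text) verifies each of the eight properties for one concrete choice of u, v, w, k and l.

```python
import numpy as np

# Illustrative values only; the properties hold for all vectors and scalars.
u = np.array([1.0, -2.0, 4.0])
v = np.array([0.5, 3.0, -1.0])
w = np.array([2.0, 2.0, 2.0])
k, l = 3.0, -2.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                # 1. commutativity
assert np.allclose(u + (v + w), (u + v) + w)    # 2. associativity
assert np.allclose(u + zero, u)                 # 3. additive identity
assert np.allclose(u + (-u), zero)              # 4. additive inverse
assert np.allclose(k * (l * u), (k * l) * u)    # 5.
assert np.allclose(k * (u + v), k * u + k * v)  # 6. distributivity
assert np.allclose((k + l) * u, k * u + l * u)  # 7.
assert np.allclose(1.0 * u, u)                  # 8. scalar identity
```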

Generalized Vector Spaces

The time has now come to generalize the concept of a vector. The generalization rests on a list of axioms. The axioms were chosen by abstracting the most important properties (theorem 1.1) of vectors in Rn; as a consequence, vectors in Rn automatically satisfy these axioms. The new types of vectors include, among other things, various kinds of matrices and functions.

Definition
• Let V be a nonempty set of objects on which two operations are defined: addition and scalar multiplication. Addition is a rule for associating with each pair of objects u and v in V an object u + v, and scalar multiplication is a rule for associating with each scalar k ∈ F and each object u in V an object ku such that the following axioms are satisfied.

1. If u, v ∈ V, then u + v ∈ V (closure under addition).
2. If k ∈ F and u ∈ V, then ku ∈ V (closure under scalar multiplication).
3. u + v = v + u
4. u + (v + w) = (u + v) + w
5. There is an object 0 in V, called the zero vector, such that u + 0 = 0 + u = u for all u in V.
6. For each u in V there is an object −u in V such that u + (−u) = 0.
7. k(lu) = (kl)u
8. k(u + v) = ku + kv
9. (k + l)u = ku + lu
10. 1u = u
A set V satisfying these ten axioms is called a vector space over the field F. Note also that we often restrict our attention to the case when F = R or C.
...
Examples. In each example we specify a nonempty set of objects V together with operations of addition and scalar multiplication; each can be checked against the ten axioms.
1. The set Rn with the standard operations of addition and scalar multiplication.
2. The set Mm×n (F) of m × n matrices with entries from F, under matrix addition and scalar multiplication.
3. The set of all real-valued functions defined on (−∞, ∞), with pointwise addition and scalar multiplication.
4. The set P (F) of all polynomials with coefficients in F.
5. The zero vector space {0}.

Some Properties of Vectors

It is important to realise that the following results hold for all vector spaces.

Theorem 1.2. If u, v, w ∈ V (a vector space) are such that u + w = v + w, then u = v.

Theorem 1.3. Let V be a vector space over the field F, u ∈ V, and k ∈ F. Then:
(a) 0u = 0
(b) k0 = 0
(c) (−1)u = −u
(d) If ku = 0, then k = 0 or u = 0.

Quiz

True or false?
(a) Every vector space contains a zero vector.
(c) In any vector space, au = bu implies a = b.


Subspaces

A subset of a vector space V may itself be a vector space under the operations defined on V; such a subset is called a subspace of V. This section will look closely at this important concept.

In general, all ten vector space axioms must be verified to show that a set W with addition and scalar multiplication forms a vector space. However, if W is a subset of a known vector space V, some axioms are inherited from V and need not be checked. For example, there is no need to check that u + v = v + u (axiom 3) for W because this holds for all vectors in V and consequently holds for all vectors in W; likewise axioms 4, 7, 8, 9 and 10 are inherited. Thus to show that W is a subspace of a vector space V (and hence that W is a vector space), only axioms 1, 2, 5 and 6 need to be verified.

Theorem 1.4. If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold:
(a) If u and v are vectors in W, then u + v is in W.
(b) If k is any scalar and u is any vector in W, then ku is in W.

Proof. If W is a subspace of V, then all the vector space axioms are satisfied; in particular, axioms 1 and 2 hold. But these are precisely conditions (a) and (b).

Conversely, assume conditions (a) and (b) hold. These are axioms 1 and 2, and axioms 3, 4, 7, 8, 9 and 10 are automatically satisfied by the vectors in W since they are satisfied by all vectors in V. It therefore remains only to verify axioms 5 and 6 for W. Let u be any vector in W. By condition (b), ku is in W for every scalar k. Setting k = 0, it follows from theorem 1.3 that 0u = 0 is in W, and setting k = −1, it follows that (−1)u = −u is in W.

Remarks
• Note that a consequence of (b) is that 0 is an element of W (set k = 0).
• A set W is said to be closed under addition if condition (a) holds and closed under scalar multiplication if condition (b) holds. Thus theorem 1.4 states that W is a subspace of V if and only if W is closed under addition and closed under scalar multiplication.
Examples
1. A plane through the origin of R3 forms a subspace of R3. To see this, let W be any plane through the origin and let u and v be any vectors in W. Then u + v must lie in W because it is the diagonal of the parallelogram determined by u and v, and ku must lie in W for any scalar k because ku lies on a line through u. Thus W is closed under addition and scalar multiplication.

2. A line through the origin of R3 is also a subspace of R3. It is evident geometrically that the sum of two vectors on this line also lies on the line and that a scalar multiple of a vector on the line is on the line as well.
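The closure conditions (a) and (b) can be checked numerically for a particular plane through the origin. The following sketch uses numpy and an arbitrarily chosen plane x + 2y − 3z = 0 (an assumed example, not from the text): a vector lies in the plane exactly when it is orthogonal to the plane's normal.

```python
import numpy as np

# W = {(x, y, z) : x + 2y - 3z = 0}, a plane through the origin of R^3.
normal = np.array([1.0, 2.0, -3.0])

def in_W(v, tol=1e-12):
    # v lies in W exactly when it is orthogonal to the normal vector
    return abs(np.dot(normal, v)) < tol

u = np.array([3.0, 0.0, 1.0])   # 3 + 0 - 3 = 0, so u is in W
v = np.array([1.0, 1.0, 1.0])   # 1 + 2 - 3 = 0, so v is in W
assert in_W(u) and in_W(v)
assert in_W(u + v)              # closed under addition
assert in_W(-2.5 * u)           # closed under scalar multiplication
assert in_W(np.zeros(3))        # contains the zero vector
```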

3. Let W be the set of all polynomials a0 + a1 x + · · · + an xn of degree at most n, where the coefficients a0, a1, ..., an belong to some field F. The set W is a subspace of P (F) (example 4 on page 5), and if F = R it is also a subspace of the vector space of all real-valued functions (discussed in example 3 on page 5). This vector space W is denoted Pn (F).

4. The transpose AT of an m × n matrix A is the n × m matrix obtained from A by interchanging rows and columns; a matrix A is called symmetric if AT = A. The set of all symmetric matrices in Mn×n (F) is a subspace of Mn×n (F).

5. The trace of an n × n matrix A, denoted tr(A), is the sum of the diagonal entries of A. The set of all matrices in Mn×n (F) with trace zero is a subspace of Mn×n (F).

Operations on Vector Spaces

Definitions
• The addition of two subsets U and V of a vector space is defined by:
U + V = {u + v|u ∈ U, v ∈ V }
• The intersection U ∩ V of two subsets U and V of a vector space is defined by:
U ∩ V = {w | w ∈ U and w ∈ V }
• A vector space W is called the direct sum of U and V , denoted U ⊕ V , if U and
V are subspaces of W with U ∩ V = {0} and U + V = W
...

Theorem 1.5. Any intersection or sum of subspaces of a vector space V is also a subspace of V.

Quiz

True or false?
(b) The empty set is a subspace of every vector space.
(d) The intersection of any two subsets of V is a subspace of V.

A system of m linear equations in n unknowns may be written in matrix form Ax = b, where A = [aij] is the m × n coefficient matrix, x = (x1, x2, ..., xn)T is the column vector of unknowns, and b = (b1, b2, ..., bm)T is the column vector of right hand sides.

Note: aij, bj ∈ R or C.
Gaussian elimination reduces a matrix by the following steps:

1. Find the leftmost non-zero column, say column j, and by interchanging rows if necessary bring a non-zero entry to the top of the column. (The pivot.)

2. Subtract multiples of row 1 from all other rows so all entries in column j below the top are then 0.

3. Cover top row; repeat 1 above on rest of rows, stopping when no non-zero rows remain.
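The three steps above can be sketched directly in code. This is an illustrative implementation (numpy, with a small example matrix chosen for demonstration), not an excerpt from the notes; it uses only row interchanges and subtraction of multiples of the pivot row, as the algorithm prescribes.

```python
import numpy as np

def row_echelon(A):
    """Reduce a copy of A to row echelon form using only row interchanges
    (step 1) and subtraction of multiples of the pivot row (step 2)."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        # step 1: find a non-zero entry in this column at or below `row`
        pivot = next((r for r in range(row, m) if abs(A[r, col]) > 1e-12), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]           # interchange rows
        for r in range(row + 1, m):                 # step 2: clear below pivot
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1                                    # step 3: cover the top row
        if row == m:
            break
    return A

U = row_echelon(np.array([[0.0, 2.0, 1.0],
                          [1.0, 1.0, 1.0],
                          [2.0, 2.0, 2.0]]))
# U is now in row echelon form; here its last row is entirely zero.
```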

Example

Use Gaussian elimination to solve:
x3 − x4 = 2
−9x1 − 2x2 + 6x3 − 12x4 = −7
3x1 + x2 − 2x3 + 4x4 = 2
2x3 = 6

Definition (row echelon form)

A matrix is in row echelon form (r.e.f.) if the leading (first non-zero) entry of each non-zero row lies strictly to the right of the leading entry of the row above it, and any all-zero rows are at the bottom.

The Gauss algorithm converts any matrix to one in row echelon form.

Elementary row operations

1. Interchange two rows.

2. Add a multiple of one row to another row.

3. Multiply a row by a non-zero constant.

The Gauss algorithm uses only 1 and 2.
To solve Ax = b, reduce the augmented matrix [A|b] to its r.e.f. Three cases can occur:

(1) A unique solution; back substitution in the r.e.f. gives each variable a single value, so the number of variables, n, equals the number of non-zero rows in the r.e.f.

(2) No solution; here some row of the r.e.f. is (0 0 · · · 0 | d) with d ≠ 0. We can't solve 0x1 + 0x2 + · · · + 0xn = d if d ≠ 0; it says 0 = d.

(3) Infinitely many solutions; here the number of non-zero rows of the r.e.f. is less than n, and the variables without pivots may take arbitrary values.

Note that a homogeneous system has b = 0, i.e., all zero RHS; such a system always has at least the trivial solution x = 0.

Examples

1. x1 + x2 − x3 = 0
   2x1 − x2 = 0
   4x1 + x2 − 2x3 = 0

2. x2 − 2x3 + 4x4 = 2
   2x2 − 3x3 + 7x4 = 6
   x3 − x4 = 2
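For the first homogeneous system, the solution set can be found numerically. The sketch below (numpy; an illustration, not part of the notes) reads a basis for the solution space off the singular value decomposition: the system has rank 2, so there is a one-parameter family of solutions, all multiples of (1, 2, 3).

```python
import numpy as np

# The first homogeneous system above, written as Ax = 0.
A = np.array([[1.0, 1.0, -1.0],
              [2.0, -1.0, 0.0],
              [4.0, 1.0, -2.0]])

_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]           # rows spanning the null space of A

assert rank == 2 and len(null_basis) == 1
# Every multiple of the basis vector solves the system; scaling it so the
# third component is 3 recovers the solution (1, 2, 3):
x = null_basis[0] * (3.0 / null_basis[0][2])
assert np.allclose(x, [1.0, 2.0, 3.0])
assert np.allclose(A @ x, 0.0)
```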
Different right hand sides

To solve Ax = bj for j = 1, ..., r with the same coefficient matrix A, form the augmented matrix [A | b1 · · · br] and find its r.e.f. [U | b′1 · · · b′r], where U is the r.e.f. corresponding to A. Then solve U x = b′j for each j = 1, ..., r, by back substitution.
To find the inverse of an n × n matrix A, the n systems Axj = ej (whose solutions xj are the columns of the inverse) are solved simultaneously. So we find the r.e.f. of [A|e1 · · · en], i.e., determine the r.e.f. [U |e′1 · · · e′n] corresponding to A. Once we have found the r.e.f., each column of the inverse follows by back substitution.

If the last row of U is all zeros, A has no inverse.

A matrix C satisfying AC = CA = I is called an inverse of A, written A−1. If such a matrix C exists, it is unique.

Example

Does
A =
1 −1  4
1  0 −2
2 −2 10
have an inverse? If so, find it.
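The answer can be checked with numpy rather than by hand elimination: the determinant of this matrix is 2, which is non-zero, so the inverse exists, and the computed inverse satisfies both defining equations. This is a verification sketch, not the elimination procedure described above.

```python
import numpy as np

# The matrix from the example above.
A = np.array([[1.0, -1.0, 4.0],
              [1.0, 0.0, -2.0],
              [2.0, -2.0, 10.0]])

det = np.linalg.det(A)
assert abs(det - 2.0) < 1e-9          # non-zero, so A is invertible
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(3))   # AC = I
assert np.allclose(A_inv @ A, np.eye(3))   # CA = I
```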
4
...
, vr if it can
be expressed in the form
w = k 1 v 1 + k2 v 2 + · · · + kr v r
where k1 , k2 ,
...

Example

13

1
...
Show that w =
(9, 2, 7) is a linear combination of u and v and that w0 = (4, −1, 8) is not a
linear combination of u and v
...
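Deciding whether w is a linear combination of u and v amounts to solving the linear system with u and v as columns. The sketch below assumes the values u = (1, 2, −1) and v = (6, 4, 2) (illustrative values, since the originals are partly elided in the extract above) and tests both candidate vectors.

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])   # assumed value for illustration
v = np.array([6.0, 4.0, 2.0])    # assumed value for illustration
M = np.column_stack([u, v])

def is_combination(w):
    # least-squares solve; the system is consistent iff the residual vanishes
    k, residual, *_ = np.linalg.lstsq(M, w, rcond=None)
    return np.allclose(M @ k, w), k

ok, k = is_combination(np.array([9.0, 2.0, 7.0]))
assert ok and np.allclose(k, [-3.0, 2.0])    # w = -3u + 2v

ok, _ = is_combination(np.array([4.0, -1.0, 8.0]))
assert not ok                                 # w' is not a combination of u, v
```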
If v1, v2, ..., vr are vectors in a vector space V, then generally some vectors in V may be linear combinations of v1, v2, ..., vr and others may not. The following theorem shows that if a set W is constructed consisting of all those vectors that are expressible as linear combinations of v1, v2, ..., vr, then W forms a subspace of V.

Theorem 1.6. If v1, v2, ..., vr are vectors in a vector space V, then:
(a) The set W of all linear combinations of v1, v2, ..., vr is a subspace of V.
(b) W is the smallest subspace of V that contains v1, v2, ..., vr, in the sense that W is contained in every other subspace of V that contains v1, v2, ..., vr.


Proof.
(a) To show that W is a subspace of V, it must be proven that it is closed under addition and scalar multiplication. If u and v are vectors in W, then
u = c1 v1 + c2 v2 + · · · + cr vr
and
v = k1 v1 + k2 v2 + · · · + kr vr
where c1, c2, ..., cr and k1, k2, ..., kr are scalars. Then
u + v = (c1 + k1)v1 + (c2 + k2)v2 + · · · + (cr + kr)vr
and, for any scalar k,
ku = (kc1)v1 + (kc2)v2 + · · · + (kcr)vr
Both u + v and ku are linear combinations of v1, v2, ..., vr and consequently lie in W. Thus W is closed under addition and scalar multiplication.

(b) Each vector vi is a linear combination of v1, v2, ..., vr (take ki = 1 and the remaining scalars 0), so W contains each of v1, v2, ..., vr. Any other subspace of V that contains v1, v2, ..., vr is closed under addition and scalar multiplication and therefore contains every linear combination of v1, v2, ..., vr; that is, it contains W.


Definitions
• If S = {v1, v2, ..., vr}, then the subspace W of all linear combinations of the vectors in S is called the space spanned by v1, v2, ..., vr, and it is said that the vectors v1, v2, ..., vr span W. To indicate that W is the space spanned by the vectors in the set S = {v1, v2, ..., vr}, one writes
W = span(S) or W = span{v1, v2, ..., vr}

Examples
1. The polynomials 1, x, x2, ..., xn span the vector space Pn defined previously since each polynomial p in Pn can be written as
p = a0 + a1 x + · · · + an xn
which is a linear combination of 1, x, x2, ..., xn. This can be denoted by writing
Pn = span{1, x, x2, ..., xn}

2. Spanning sets are not unique. For example, any two noncolinear vectors that lie in the x − y plane will span the x − y plane.


15

Theorem 1
...
Let S = {v1 , v2 ,
...
, wk } be two sets of
vectors in a vector space V
...

Proof
...

If
vi 6= a1 w1 + a2 w2 + · · · + an wn
for all possible a1 , a2 ,
...

Quiz

True or false?
(a) 0 is a linear combination of any non-empty set of vectors.

Linear Independence

In general, it is possible that there may be more than one way to express a vector in V as a linear combination of vectors in a spanning set. Spanning sets in which each such expression is unique play a fundamental role in the study of vector spaces.

Definition
• If S = {v1, v2, ..., vr} is a nonempty set of vectors, then the vector equation
k1 v1 + k2 v2 + · · · + kr vr = 0
has at least one solution, namely
k1 = 0, k2 = 0, ..., kr = 0
If this is the only solution, then S is called a linearly independent set. If there are other solutions, then S is called a linearly dependent set.
Examples
1. If v1 = (2, −1, 0, 3), v2 = (1, 2, 5, −1) and v3 = (7, −1, 5, 8), then the set of vectors S = {v1, v2, v3} is linearly dependent, since 3v1 + v2 − v3 = 0.

2. The polynomials
p1 = 1 − x, p2 = 5 + 3x − 2x2, p3 = 1 + 3x − x2
form a linearly dependent set in P2 since 3p1 − p2 + 2p3 = 0.

3. Consider the vectors i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1) in R3. In terms of components the vector equation
k1 i + k2 j + k3 k = 0
becomes
k1 (1, 0, 0) + k2 (0, 1, 0) + k3 (0, 0, 1) = (0, 0, 0)
or equivalently,
(k1, k2, k3) = (0, 0, 0)
Thus the set S = {i, j, k} is linearly independent.
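Linear independence of a finite set of vectors in Rn can be tested numerically: stack the vectors as the rows of a matrix and compare its rank with the number of vectors. The sketch below (numpy) checks examples 1 and 3 above.

```python
import numpy as np

# Example 1: a dependent set, since 3*v1 + v2 - v3 = 0.
v1 = np.array([2.0, -1.0, 0.0, 3.0])
v2 = np.array([1.0, 2.0, 5.0, -1.0])
v3 = np.array([7.0, -1.0, 5.0, 8.0])

A = np.vstack([v1, v2, v3])
# Rank 2 < 3 vectors, so the set is linearly dependent:
assert np.linalg.matrix_rank(A) == 2
assert np.allclose(3 * v1 + v2 - v3, 0.0)

# Example 3: the standard basis vectors i, j, k are independent
# (rank equals the number of vectors).
assert np.linalg.matrix_rank(np.eye(3)) == 3
```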

4
...

5
−4
0 5
6 −2 −7
−1 −3 2
0 0 0
The following two theorems follow quite simply from the definitions of linear independence and linear dependence.

Theorem 1.8. A set S with two or more vectors is:
(a) Linearly dependent if and only if at least one of the vectors in S is expressible as a linear combination of the other vectors in S.
(b) Linearly independent if and only if no vector in S is expressible as a linear combination of the other vectors in S.
Example
Recall that the vectors
v1 = (2, −1, 0, 3), v2 = (1, 2, 5, −1), v3 = (7, −1, 5, 8)
were linearly dependent because 3v1 + v2 − v3 = 0; equivalently v3 = 3v1 + v2, so v3 is expressible as a linear combination of the other vectors in the set.
Theorem 1.9.
(a) A finite set of vectors that contains the zero vector is linearly dependent.
(b) A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other.

2 BASIS AND DIMENSION

The intuitive geometric notion of dimension is familiar: a line is one-dimensional, a plane two-dimensional, space three-dimensional. This section will attempt to make this intuitive notion of dimension precise and extend it to general vector spaces.
2.1 Coordinate systems of General Vector Spaces
A line is thought of as 1-dimensional because every point on that line can be specified by 1 coordinate; a plane is 2-dimensional because each of its points is specified by 2 coordinates. What defines this coordinate system? The most common form of defining a coordinate system is the use of coordinate axes. But there is also a way of specifying the coordinate system with vectors. In the case of the x − y plane the x and y-axes are replaced by the well known unit vectors i and j respectively. The point P can then be specified by the vector OP, expressed as a linear combination of i and j.

Informally stated, vectors such as i and j that specify a coordinate system are called "basis vectors" for that system. The basis vectors need not be i and j: other choices will do, as long as linear combinations of the vectors chosen are capable of specifying all points in the plane. Different basis vectors however do change the coordinates of a point, as the following example demonstrates.
Example. Let the sets S, U and T be three sets of basis vectors for the plane, and let P be a fixed point. The coordinates of P relative to each set of basis vectors are:
S → (1, 2)
U → (1, 1)
T → (1, 1)
The following definition makes the preceding ideas more precise and enables the extension of a coordinate system to general vector spaces.
Definition
• If V is any vector space and S = {v1, v2, ..., vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold:
(a) S is linearly independent
(b) S spans V
A basis is the vector space generalization of a coordinate system in 2-space and 3-space.

Theorem 2.1. If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector v in V can be expressed as a linear combination of the vectors in S in exactly one way.

Proof. Since S spans V, every vector in V is expressible as a linear combination of the vectors in S. To see that there is only one way to express a vector as a linear combination of the vectors in S, suppose that some vector v can be written as
v = c1 v1 + c2 v2 + · · · + cn vn
and also as
v = k1 v1 + k2 v2 + · · · + kn vn
Subtracting the second equation from the first gives
0 = (c1 − k1)v1 + (c2 − k2)v2 + · · · + (cn − kn)vn
Since the right side of this equation is a linear combination of vectors in S, the linear independence of S implies that
(c1 − k1) = 0, (c2 − k2) = 0, ..., (cn − kn) = 0
that is,
c1 = k1, c2 = k2, ..., cn = kn
Thus the two expressions for v are the same.
Definitions
• If S = {v1, v2, ..., vn} is a basis for a vector space V, and
v = c1 v1 + c2 v2 + · · · + cn vn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, ..., cn are called the coordinates of v relative to the basis S. The vector (c1, c2, ..., cn) constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted by
[v]S = (c1, c2, ..., cn)
• If v = [v]S for every v, then S is called the standard basis.

Examples
1. It was shown earlier that if
i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)
then S = {i, j, k} is a linearly independent set in R3. Since S also spans R3, S is a basis for R3; it is in fact the standard basis for R3.
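Finding the coordinate vector [v]S relative to a basis S amounts to solving a linear system whose coefficient columns are the basis vectors. The sketch below uses an assumed non-standard basis of R3 (chosen purely for illustration) and checks that the computed coordinates reconstruct v, and that relative to the standard basis the coordinate vector is v itself.

```python
import numpy as np

# An assumed (non-standard) basis of R^3, for illustration only.
s1 = np.array([1.0, 1.0, 0.0])
s2 = np.array([0.0, 1.0, 1.0])
s3 = np.array([1.0, 0.0, 1.0])
P = np.column_stack([s1, s2, s3])   # columns are the basis vectors

v = np.array([3.0, 1.0, 2.0])
coords = np.linalg.solve(P, v)      # the coordinate vector [v]_S
assert np.allclose(coords[0] * s1 + coords[1] * s2 + coords[2] * s3, v)

# Relative to the standard basis {i, j, k}, the coordinate vector is v itself:
assert np.allclose(np.linalg.solve(np.eye(3), v), v)
```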


Definitions
• A nonzero vector space V is called finite-dimensional if it contains a finite set of vectors {v1, v2, ..., vn} that forms a basis. If no such set exists, V is called infinite-dimensional. In addition, the zero vector space is regarded as finite-dimensional.

Example
• The vector space of all real valued functions defined on (−∞, ∞) is infinite-dimensional.
Theorem 2.2. Let V be a finite-dimensional vector space and {v1, v2, ..., vn} any basis. Then:
(a) Every set with more than n vectors is linearly dependent.
(b) No set with fewer than n vectors spans V.

Proof.
(a) Let S′ = {w1, w2, ..., wm} be any set of m vectors in V, where m > n. Since S = {v1, v2, ..., vn} is a basis, each wj can be expressed as a linear combination of the vectors in S:
w1 = a11 v1 + a21 v2 + · · · + an1 vn
w2 = a12 v1 + a22 v2 + · · · + an2 vn
...
wm = a1m v1 + a2m v2 + · · · + anm vn
To show that S′ is linearly dependent, scalars k1, k2, ..., km must be found, not all zero, such that
k1 w1 + k2 w2 + · · · + km wm = 0
Combining the above two systems of equations gives
(k1 a11 + k2 a12 + · · · + km a1m)v1
+ (k1 a21 + k2 a22 + · · · + km a2m)v2
+ · · · + (k1 an1 + k2 an2 + · · · + km anm)vn = 0
By the linear independence of S, the problem reduces to finding k1, k2, ..., km, not all zero, that satisfy
a11 k1 + a12 k2 + · · · + a1m km = 0
a21 k1 + a22 k2 + · · · + a2m km = 0
...
an1 k1 + an2 k2 + · · · + anm km = 0
As the system is homogeneous and there are more unknowns than equations (m > n), there are infinitely many solutions; in particular there are non-trivial solutions in which k1, k2, ..., km are not all zero. Thus S′ is linearly dependent.

(b) Let S′ = {w1, w2, ..., wm} be any set of m vectors in V, where m < n. It remains to be shown that S′ does not span V. The proof is by contradiction: assuming S′ spans V leads to a contradiction of the linear independence of the basis S = {v1, v2, ..., vn}.

If S′ spans V, then every vector in V is a linear combination of the vectors in S′; in particular each basis vector vi is:
v1 = a11 w1 + a21 w2 + · · · + am1 wm
v2 = a12 w1 + a22 w2 + · · · + am2 wm
...
vn = a1n w1 + a2n w2 + · · · + amn wm
To obtain the contradiction it will be shown that there are scalars k1, k2, ..., kn, not all zero, such that
k1 v1 + k2 v2 + · · · + kn vn = 0
Observe the similarity of the above two systems to those given in the proof of (a): the same argument applies with the roles of the two sets interchanged. The problem again reduces to a homogeneous system with more unknowns than equations (n > m), which has non-trivial solutions in which k1, k2, ..., kn are not all zero. This contradicts the linear independence of S, so S′ cannot span V.

The last theorem essentially states the following. Let V be a vector space with a basis of n vectors, and let S′ be another set of vectors in V consisting of m vectors. If m is greater than n, S′ cannot form a basis for V because it is not linearly independent; if m is less than n, S′ cannot form a basis for V because it does not span V. Theorem 2.2 leads directly into one of the most important theorems in linear algebra.
Theorem 2.3. All bases for a finite-dimensional vector space have the same number of vectors.

And thus the concept of dimension is almost complete.

Definition
• The dimension of a finite-dimensional vector space V, denoted by dim(V ), is defined to be the number of vectors in a basis for V; the zero vector space is defined to have dimension zero.

Examples
1. Determine a basis (and hence the dimension) of the solution space of the homogeneous system:
2x1 + 2x2 − x3 + x5 = 0
−x1 − x2 + 2x3 − 3x4 + x5 = 0
x1 + x2 − 2x3 − x5 = 0
x3 + x4 + x5 = 0
Some further theorems concerning bases and dimension follow. In many ways these theorems form the building blocks of other results in linear algebra.
Theorem 2.4 (Plus/Minus Theorem). Let S be a nonempty set of vectors in a vector space V.

(a) If S is a linearly independent set, and if v is a vector in V outside of span(S), then the set S ∪ {v} obtained by inserting v into S is still linearly independent.

(b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S − {v} denotes the set obtained by removing v from S, then S and S − {v} span the same space: that is,
span(S) = span(S − {v})

A proof will not be included, but the theorem can be visualised in R3 as follows.

(a) Consider two linearly independent vectors in R3. These two vectors span a plane. If a vector not lying in this plane is included with the original two, the enlarged set of three vectors is still linearly independent.

(b) Consider three non-colinear vectors in a plane that form a set S. The three vectors span the plane. If any one of the vectors is removed from S to give S′ it is clear that S′ still spans the plane.

Theorem 2.5. If V is an n-dimensional vector space and if S is a set in V with exactly n vectors, then S is a basis for V if either S spans V or S is linearly independent.

Proof. Assume that S has exactly n vectors and spans V. To prove that S is a basis, it must be shown that S is a linearly independent set. But if this is not so, then some vector v in S is a linear combination of the remaining vectors. If this vector is removed from S, then it follows from theorem 2.4(b) that the remaining set of n − 1 vectors still spans V. But this is impossible, since it was established in theorem 2.2(b) that no set with fewer than n vectors can span an n-dimensional vector space. Thus S is linearly independent.

Assume S has exactly n vectors and is a linearly independent set. To prove that S is a basis, it must be shown that S spans V. But if this is not so, then there is some vector v in V that is not in span(S). If this vector is inserted into S, then it follows from theorem 2.4(a) that this set of n + 1 vectors is still linearly independent. But this is impossible, since it was established in theorem 2.2(a) that no set with more than n vectors in an n-dimensional vector space can be linearly independent. Thus S spans V.

Examples
• v1 = (−3, 8) and v2 = (1, 1) form a basis for R2 because R2 has dimension two and v1 and v2 are linearly independent.

Theorem 2.6. Let S be a finite set of vectors in a finite-dimensional vector space V.

(a) If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S.

(b) If S is a linearly independent set that is not already a basis for V, then S can be enlarged to a basis for V by inserting appropriate vectors into S.

Proof.
(a) The proof is constructive. Let vc1 be the first nonzero vector in the set S. Find the next vector in the list which is not a linear combination of vc1 and call it vc2. Find the next vector in the list which is not a linear combination of vc1 and vc2 and call it vc3. Continuing in this way removes every vector that is a linear combination of its predecessors, so by theorem 2.4(b) the resulting subset of S still spans V, and by construction it is linearly independent; it is therefore a basis for V.

(b) This proof is also constructive. Begin with the linearly independent set u1, u2, ..., ur, and let v1, v2, ..., vn be a basis for V. Now it is necessary and important that r < n. Form the list u1, ..., ur, v1, v2, ..., vn, which spans V, and reduce it as in (a); since the ui are linearly independent, none of them is removed, and the result is a basis for V containing u1, ..., ur.

Theorem 2.7. If W is a subspace of a finite-dimensional vector space V, then dim(W ) ≤ dim(V ); moreover, if dim(W ) = dim(V ), then W = V.

Proof. Let S = {w1, w2, ..., wm} be a basis for W. Either S is also a basis for V or it is not. If it is, then dim(W ) = dim(V ) = m. If it is not, then by theorem 2.6(b) the linearly independent set S can be enlarged to a basis for V, so dim(W ) < dim(V ). Thus dim(W ) ≤ dim(V ) in all cases. If dim(W ) = dim(V ) = m, then S is a set of m linearly independent vectors in the m-dimensional space V, so by theorem 2.5, S is a basis for V. Hence W = V.


Quiz

True or false?

(a) The zero vector space has no basis.

(c) Every vector space has a finite basis.

(e) If a vector space has a finite basis, then the number of vectors in every basis is the same.

(g) If S1 is a linearly independent set of vectors contained in span(S2), then S1 cannot contain more vectors than S2.

(h) Every subspace of a finite dimensional vector space is finite dimensional.

(j) If V is an n dimensional vector space, and if S is a subset of V with n vectors, then S is linearly independent if and only if S spans V.
3 INNER PRODUCT SPACES

In this section we introduce the idea of length through the structure of inner product spaces.

Definition
Let V be a vector space over F. An inner product on V is a function ⟨·, ·⟩ that assigns to each pair of vectors u, v in V a scalar ⟨u, v⟩, linear in its first argument, with ⟨u, v⟩ equal to the complex conjugate of ⟨v, u⟩, and with ⟨u, u⟩ ≥ 0, where ⟨u, u⟩ = 0 if and only if u = 0.

The main example is when V = Fn with the Euclidean inner product: if u = (u1, u2, ..., un) and v = (v1, v2, ..., vn), then
⟨u, v⟩ = u1 v̄1 + u2 v̄2 + · · · + un v̄n
(when F = R the conjugates may be omitted).

Definitions
• A vector space V over F endowed with a specific inner product is called an inner product space.

• Two vectors u and v in an inner product space are called orthogonal (with respect to ⟨·, ·⟩) if ⟨u, v⟩ = 0.

• The norm (or length, or magnitude) of a vector u is given by ‖u‖ = √⟨u, u⟩.

• If u and v are orthogonal vectors and both u and v have a magnitude of one (with respect to ⟨·, ·⟩), then u and v are said to be orthonormal.

• A set of vectors in which each pair of distinct vectors is orthogonal is called an orthogonal set. An orthogonal set in which each vector has a magnitude of one is called an orthonormal set.
Theorem 3.1. Let V be an inner product space and x, y, z ∈ V. Then:

(a) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩

(b) ⟨x, ky⟩ = k̄⟨x, y⟩ for any scalar k

(c) ⟨x, 0⟩ = ⟨0, x⟩ = 0

(d) ⟨x, x⟩ = 0 if and only if x = 0

(e) If ⟨x, y⟩ = ⟨x, z⟩ for all x ∈ V, then y = z.

Proof. (a)-(d) exercises.
(e) By parts (a) and (b), ⟨x, y − z⟩ = 0 for all x ∈ V. In particular this holds for x = y − z, so ⟨y − z, y − z⟩ = 0. By (d) this implies that y − z = 0, i.e. y = z.
The proof of the following result is extremely important, since it makes use of an algorithm, or method, for converting an arbitrary basis into an orthonormal basis.

Theorem 3.2. Every nonzero finite-dimensional inner product space V has an orthonormal basis.

Proof. Let V be any nonzero finite-dimensional inner product space and suppose that {u1, u2, ..., um} is any basis for V. It suffices to show that V has an orthogonal basis, since the vectors in the orthogonal basis can be normalized to produce an orthonormal basis for V. The following sequence of steps will produce an orthogonal basis {v1, v2, ..., vm} for V.
32

Step 1 Let v1 = u1.

Step 2 Obtain a vector v2 that is orthogonal to v1 by computing the component of u2 that is orthogonal to the space W1 spanned by v1. This can be done using the formula:
v2 = u2 − (⟨u2, v1⟩ / ⟨v1, v1⟩) v1
Of course, if v2 = 0, then v2 is not a basis vector. But this cannot happen, since it would then follow from the preceding formula that u2 is a scalar multiple of u1, contradicting the linear independence of the basis {u1, u2, ..., um}.

Step 3 To obtain a vector v3 that is orthogonal to both v1 and v2, compute the component of u3 orthogonal to the space W2 spanned by v1 and v2:
v3 = u3 − (⟨u3, v1⟩ / ⟨v1, v1⟩) v1 − (⟨u3, v2⟩ / ⟨v2, v2⟩) v2
As in step 2, the linear independence of {u1, u2, ..., um} ensures that v3 ≠ 0.

Step 4 To determine a vector v4 that is orthogonal to v1, v2 and v3, compute the component of u4 orthogonal to the space W3 spanned by v1, v2 and v3 using the formula
v4 = u4 − (⟨u4, v1⟩ / ⟨v1, v1⟩) v1 − (⟨u4, v2⟩ / ⟨v2, v2⟩) v2 − (⟨u4, v3⟩ / ⟨v3, v3⟩) v3

Continuing in this way, an orthogonal set of nonzero vectors {v1, v2, ..., vm} is produced after m steps. Since V is an m-dimensional vector space and every orthogonal set of nonzero vectors is linearly independent, the set {v1, v2, ..., vm} is an orthogonal basis for V.


This preceding step-by-step construction for converting an arbitrary basis into an orthogonal basis is called the Gram-Schmidt process.

Example
Consider the vector space R3 with the Euclidean inner product. Apply the Gram-Schmidt process to transform the basis vectors u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1) into an orthogonal basis {v1, v2, v3}.

Step 1
v1 = u1 = (1, 1, 1)

Step 2
v2 = u2 − (u2 · v1 / v1 · v1) v1
   = (0, 1, 1) − (2/3)(1, 1, 1)
   = (−2/3, 1/3, 1/3)

Step 3
v3 = u3 − (u3 · v1 / v1 · v1) v1 − (u3 · v2 / v2 · v2) v2
   = (0, 0, 1) − (1/3)(1, 1, 1) − ((1/3) / (2/3))(−2/3, 1/3, 1/3)
   = (0, −1/2, 1/2)

Thus,
v1 = (1, 1, 1), v2 = (−2/3, 1/3, 1/3), v3 = (0, −1/2, 1/2)
form an orthogonal basis for R3.
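The step-by-step construction above translates directly into code. The following sketch implements the projection formula from the proof for the Euclidean inner product on Rn and reproduces the worked example.

```python
import numpy as np

def gram_schmidt(vectors):
    """Convert a basis into an orthogonal basis by subtracting, from each u_k,
    its components along the previously constructed orthogonal vectors."""
    basis = []
    for u in vectors:
        v = u.astype(float).copy()
        for w in basis:
            v -= (np.dot(u, w) / np.dot(w, w)) * w   # remove component along w
        basis.append(v)
    return basis

u1, u2, u3 = np.array([1, 1, 1]), np.array([0, 1, 1]), np.array([0, 0, 1])
v1, v2, v3 = gram_schmidt([u1, u2, u3])

assert np.allclose(v1, [1, 1, 1])
assert np.allclose(v2, [-2/3, 1/3, 1/3])
assert np.allclose(v3, [0, -1/2, 1/2])
# The resulting vectors are pairwise orthogonal:
assert abs(np.dot(v1, v2)) < 1e-12 and abs(np.dot(v2, v3)) < 1e-12
```

Normalizing each vi (dividing by its norm) then yields an orthonormal basis, as in the proof of Theorem 3.2.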
Remarks. When the Gram-Schmidt process (with normalization) converts a basis {u1, u2, ..., un} into an orthonormal basis {q1, q2, ..., qn}, the following hold:
• {q1, q2, ..., qk} is an orthonormal basis for the space spanned by u1, ..., uk.
• qk is orthogonal to the space spanned by u1, u2, ..., uk−1.
The proofs are omitted but these facts should become evident after some thoughtful examination of the proof of Theorem 3.2.


Quiz

True or false?

• An inner product space must be over the field of real or complex numbers.

• If x, y and z are vectors in an inner product space such that ⟨x, y⟩ = ⟨x, z⟩, then y = z.


4 LINEAR TRANSFORMATIONS AND MATRICES

Definitions
• Let V, W be vector spaces over a field F. A function T : V → W is called a linear transformation if
T (u + v) = T (u) + T (v) and T (ku) = kT (u)
for all u, v ∈ V and all k ∈ F.

• Let A be an m × n matrix and let TA : Fn → Fm be the linear transformation defined by TA (x) = Ax for all x ∈ Fn; TA is called the matrix transformation associated with A.

4.1 Basic Properties of Linear Transformations

Theorem 4.1. Let T : V → W be a transformation. Then:
(a) If T is linear, then T (0) = 0.
(b) T is linear if and only if T (av + w) = aT (v) + T (w) for all v, w in V and a ∈ F.

Part (a) of the above theorem states that a linear transformation maps 0 into 0; this is useful for spotting transformations that are not linear. Part (b) is usually used to show that a transformation is linear.
Examples
1. TA is a linear transformation. Let u and v ∈ Fn and λ ∈ F; then
TA (λu + v) = A(λu + v) = λAu + Av = λTA (u) + TA (v)
and thus TA is a linear transformation.

2. If I is the n × n identity matrix, then for every vector x in Fn
TI (x) = Ix = x
so multiplication by I maps every vector in Fn into itself; TI is called the identity operator.

3. Let A and B be fixed n × n matrices and let V = Mn×n (F). For X ∈ V the matrix Y = AX − XB is also n × n, so Y = AX − XB defines a transformation T : V → V, which can be shown to be linear.

Geometric Transformations in R2

This section consists of various different transformations of the form TA that have a geometrical interpretation.

Examples of Geometric Transformations
• Operators on R2 and R3 that map each vector into its symmetric image about some line or plane are called reflection operators. There are three main reflections in R2. Considering the transformation from the coordinates (x, y) to (w1, w2), the properties of the operators are as follows.

Reflection about the y-axis: The equations for this transformation are
w1 = −x
w2 = y
The standard matrix for the transformation is
A =
−1 0
 0 1

Reflection about the x-axis: The equations for this transformation are
w1 = x
w2 = −y
The standard matrix for the transformation is
A =
1  0
0 −1

Reflection about the line y = x: The equations for this transformation are
w1 = y
w2 = x
The standard matrix for the transformation is
A =
0 1
1 0

Such operators are of the form TA and are thus linear.
• Operators on R2 that map each vector into its orthogonal projection on a line through the origin are called projection operators. These are summarised below.

1. Orthogonal projection on the x-axis: here w1 = x, w2 = 0, with standard matrix
A =
1 0
0 0
For example, if x = (1, 2) then TA (x) = Ax = (1, 0).

2. Orthogonal projection on the y-axis: here w1 = 0, w2 = y, with standard matrix
A =
0 0
0 1
For example, if x = (1, 2) then TA (x) = Ax = (0, 2).
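The reflection matrices above can be applied directly as matrix transformations. The sketch below (numpy; the sample point (1, 2) is chosen purely for illustration) applies each of the three reflections and checks the defining property that reflecting twice returns every vector to itself.

```python
import numpy as np

reflect_y_axis = np.array([[-1, 0], [0, 1]])   # about the y-axis
reflect_x_axis = np.array([[1, 0], [0, -1]])   # about the x-axis
reflect_diag   = np.array([[0, 1], [1, 0]])    # about the line y = x

x = np.array([1, 2])
assert np.array_equal(reflect_y_axis @ x, [-1, 2])
assert np.array_equal(reflect_x_axis @ x, [1, -2])
assert np.array_equal(reflect_diag @ x, [2, 1])

# Each reflection is its own inverse: applying it twice gives the identity.
for A in (reflect_y_axis, reflect_x_axis, reflect_diag):
    assert np.array_equal(A @ A, np.eye(2, dtype=int))
```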
• An operator that rotates each vector in R2 through a fixed angle θ is called a rotation operator on R2. There is only one rotation in R2, due to the generality of the formula. Considering the transformation from the coordinates (x, y) to (w1, w2), the properties of the operator are as follows.

Rotation through an angle θ: The equations for this transformation are
w1 = x cos θ − y sin θ
w2 = x sin θ + y cos θ
The standard matrix for the transformation is
A =
cos θ − sin θ
sin θ   cos θ

Such operators are of the form TA and are thus linear.
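The rotation matrix can be sketched and sanity-checked numerically: rotating (1, 0) through 90 degrees should give (0, 1), composing two rotations should add their angles, and a rotation should preserve lengths. (The angles and vectors below are illustrative choices.)

```python
import numpy as np

def rotation_matrix(theta):
    """Standard matrix of the rotation operator on R^2 through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

R = rotation_matrix(np.pi / 2)
assert np.allclose(R @ np.array([1.0, 0.0]), [0.0, 1.0])

# Rotations compose by adding angles: R(a) R(b) = R(a + b).
a, b = 0.3, 1.1
assert np.allclose(rotation_matrix(a) @ rotation_matrix(b), rotation_matrix(a + b))

# A rotation preserves lengths (its matrix is orthogonal):
v = np.array([3.0, -4.0])
assert np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v))
```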
• If k is a nonnegative scalar, the operator x → kx on R2 is called a contraction with factor k when 0 ≤ k ≤ 1 and a dilation with factor k when k ≥ 1. Considering the transformation from the coordinates (x, y) to (w1, w2), the properties of the operator are as follows.

Contraction with factor k on R2 (0 ≤ k ≤ 1): The equations for this transformation are
w1 = kx
w2 = ky
with standard matrix
A =
k 0
0 k

1. Let k = 1/2 and let x = (1, 2); then
TA (x) = Ax = (1/2, 1)

2. Dilation with factor k on R2 (k ≥ 1): the equations and standard matrix are the same, with k ≥ 1. Let k = 2 and let x = (1, 2); then
TA (x) = Ax = (2, 4)

Composition of Linear Transformations

Definition
• If T1 : U → V and T2 : V → W are linear transformations, the composition of T2 with T1, denoted T2 ◦ T1, is the function defined by
(T2 ◦ T1)(u) = T2 (T1 (u))
where u is a vector in U.

Remark: Observe that this definition requires the domain of T2 (which is V ) to contain the range of T1; this is essential for the formula T2 (T1 (u)) to make sense.

Theorem 4.2. If T1 : U → V and T2 : V → W are linear transformations, then (T2 ◦ T1) : U → W is also a linear transformation.

Proof. If u and v are vectors in U and s ∈ F, then it follows from the definition of a composite transformation and from the linearity of T1 and T2 that
(T2 ◦ T1)(su + v) = T2 (T1 (su + v))
= T2 (sT1 (u) + T1 (v))
= sT2 (T1 (u)) + T2 (T1 (v))
= s(T2 ◦ T1)(u) + (T2 ◦ T1)(v)
and thus the proof is complete.
Examples
1. Let A be an m × n matrix and B an n × p matrix; then AB is an m × p matrix. Consider the matrix transformations TA : Fn → Fm and TB : Fp → Fn. Then
(TA ◦ TB)(x) = TA (TB (x)) = A(Bx) = (AB)x = TAB (x)
where x ∈ Fp. Thus the composition of TA with TB is the matrix transformation TAB.
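The identity (TA ◦ TB)(x) = (AB)x can be checked numerically for arbitrary matrices of compatible sizes; the random shapes and seed below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # T_A : F^4 -> F^3
B = rng.standard_normal((4, 2))    # T_B : F^2 -> F^4
x = rng.standard_normal(2)

# Applying T_B then T_A agrees with the single transformation T_AB:
assert np.allclose(A @ (B @ x), (A @ B) @ x)
assert (A @ B).shape == (3, 2)     # T_AB : F^2 -> F^3
```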

2. Let {v1, v2} be a basis for a vector space V and let T : V → V be the linear operator with T (v1) = 2v1 + 3v2 and T (v2) = −7v1 + 8v2. Then
T (−v1 + 3v2) = −T (v1) + 3T (v2)
= −(2v1 + 3v2) + 3(−7v1 + 8v2)
= −23v1 + 21v2
Hence
(T ◦ T )(−v1 + 3v2) = T (−23v1 + 21v2)
= −23T (v1) + 21T (v2)
= −23(2v1 + 3v2) + 21(−7v1 + 8v2)
= −193v1 + 99v2

44

Kernel and Image

Definitions
• If T : V → W is a linear transformation, then the set of vectors in V that T maps into 0 is called the kernel of T. It is denoted by ker(T ):
ker(T ) = {v ∈ V | T (v) = 0}
• The set of all vectors in W that are images under T of at least one vector in V is called the image of T, denoted Im(T ). In mathematical notation:
Im(T ) = {w ∈ W | w = T (v) for some v ∈ V }

Examples
1. Let I : V → V be the identity operator. Since Iv = v for all vectors in V, every vector in V is the image of some vector (namely, itself); thus, Im(I) = V. Since the only vector that I maps into 0 is 0 itself, ker(I) = {0}.

2
...
The kernel
of T is the set of points that T maps into 0 = (0, 0, 0); these are the points on
the z-axis
...
But every point (x0 , y0 , 0) in the x − y
plane is the image under T of some point; in fact, it is the image of all points on
the vertical line that passes through (x0 , y0 , 0)
...

3
...
Since every vector in the x − y plane can be obtaine by
rotating some vector through the angle θ, one obtains Im(T ) = R2
...

45

4
...

This is no accident as the following theorem points out
...
4
...

(b) The range of T is a subspace of W
...


Proof.
(a) To show that ker(T ) is a subspace, it must be shown that it contains at least one vector and is closed under addition and scalar multiplication. By theorem 4.1, the vector 0 is in ker(T ), so this set contains at least one vector. Let v1 and v2 be vectors in ker(T ), and let k be any scalar. Then
T (v1 + v2) = T (v1) + T (v2) = 0 + 0 = 0
so that v1 + v2 is in ker(T ). Also,
T (kv1) = kT (v1) = k0 = 0
so that kv1 is in ker(T ).

(b) Since T (0) = 0, there is at least one vector in Im(T ). Let w1 and w2 be vectors in the range of T, and let k be any scalar. To prove this part it must be shown that w1 + w2 and kw1 are in the range of T; that is, vectors a and b must be found in V such that T (a) = w1 + w2 and T (b) = kw1. Since w1 and w2 are in the range of T, there are vectors a1 and a2 in V such that T (a1) = w1 and T (a2) = w2. Let a = a1 + a2 and b = ka1. Then
T (a) = T (a1 + a2) = T (a1) + T (a2) = w1 + w2
and
T (b) = T (ka1) = kT (a1) = kw1
which completes the proof.


Theorem 4.4. If T : U → V is a linear transformation and {u1, u2, ..., un} is a basis for U, then
Im(T ) = span(T (u1), T (u2), ..., T (un))

This theorem is best demonstrated by a simple example. Let A be an m × n matrix; then TA : Fn → Fm. Let {e1, e2, ..., en} be the standard basis for Fn. Then
Im(TA) = span(TA (e1), TA (e2), ..., TA (en))
= span(Ae1, Ae2, ..., Aen)
= span(col1 (A), col2 (A), ..., coln (A))
so the image of TA is the column space of A.

Rank and Nullity

Definitions
• If T : U → V is a linear transformation, then the dimension of the image of T is called the rank of T, written rank(T ), and the dimension of the kernel of T is called the nullity of T, written nullity(T ).

Example
• Let U be a vector space of dimension n, with basis {u1, u2, ..., un}.

Theorem 4.5. If T : U → V is a linear transformation from an n-dimensional vector space U to a vector space V, then
rank(T ) + nullity(T ) = dim(U ) = n
Proof. The proof splits into cases according to the nullity of T.

Case 1 Let U be the zero vector space. From theorem 4.1 it is known that T (0) = 0, so ker(T ) = {0} and Im(T ) = {0}. Hence
rank(T ) + nullity(T ) = 0 + 0 = 0 = dim(U )

Otherwise let U be nonzero with basis {u1, u2, ..., un}.

(a) Consider the case where ker(T ) = {0}. Let u ∈ ker(T ). As u ∈ U it can be expressed as
u = x1 u1 + x2 u2 + · · · + xn un    (1)
As u ∈ ker(T ) it can be stated that
0 = T (u) = x1 T (u1) + x2 T (u2) + · · · + xn T (un)    (2)
Due to the fact that ker(T ) = {0}, u = 0. By the linear independence of u1, ..., un it follows from equation (1) that x1, x2, ..., xn are all zero. Since by equation (2) every vanishing linear combination of T (u1), ..., T (un) arises from such a u, the vectors T (u1), T (u2), ..., T (un) are linearly independent. It is known from theorem 4.4 that Im(T ) = span(T (u1), T (u2), ..., T (un)). As T (u1), ..., T (un) are linearly independent, they form a basis for Im(T ), so rank(T ) = n. Therefore
rank(T ) + nullity(T ) = n + 0 = n = dim(U )

(b) Consider the case where ker(T ) = U. Then u1, u2, ..., un ∈ ker(T ), so T (u1) = T (u2) = · · · = T (un) = 0. By theorem 4.4, Im(T ) = span(T (u1), ..., T (un)) = {0}, so rank(T ) = 0. Therefore
rank(T ) + nullity(T ) = 0 + n = n = dim(U )
49

(c) Consider the case where 1 ≤ nullity(T ) < n. Let r = nullity(T ) and let u1, ..., ur be a basis for the kernel. Since {u1, ..., ur} form a linearly independent set, theorem 2.6(b) states that there are n − r vectors ur+1, ..., un such that {u1, ..., ur, ur+1, ..., un} is a basis for U. It shall be shown that the n − r vectors in S = {T (ur+1), ..., T (un)} form a basis for the image of T; the result rank(T ) + nullity(T ) = (n − r) + r = n = dim(U ) then follows.

First, S spans Im(T ). If b is any vector in Im(T ), then b = T (u) for some vector u in U. Writing u = c1 u1 + · · · + cr ur + cr+1 ur+1 + · · · + cn un and noting that u1, ..., ur lie in the kernel of T, so that T (u1) = · · · = T (ur) = 0, gives
b = T (u) = cr+1 T (ur+1) + · · · + cn T (un)

Finally, it shall be shown that S is a linearly independent set and consequently forms a basis for Im(T ). Suppose that
kr+1 T (ur+1) + · · · + kn T (un) = 0    (3)
Since T is linear, equation (3) can be rewritten as
T (kr+1 ur+1 + · · · + kn un) = 0
which says that kr+1 ur+1 + · · · + kn un is in the kernel of T. This vector is therefore a linear combination of the basis vectors {u1, ..., ur}, say
kr+1 ur+1 + · · · + kn un = k1 u1 + · · · + kr ur
Thus,
k1 u1 + · · · + kr ur − kr+1 ur+1 − · · · − kn un = 0
Since {u1, ..., un} is linearly independent, all of the k's are zero; in particular kr+1 = · · · = kn = 0, so S is linearly independent.


Example. Let T : R2 → R2 be the linear operator that rotates each vector in the x − y plane through an angle of θ. It was shown earlier that ker(T ) = {0} and Im(T ) = R2. Thus,
rank(T ) + nullity(T ) = 2 + 0 = 2 = dim(U )
which is consistent with the fact that the domain of T is two-dimensional.

Matrix of a Linear Transformation

In this section it shall be shown that if U and V are finite-dimensional vector spaces, then with a little ingenuity any linear transformation T : U → V can be regarded as a matrix transformation.

Definition
• Suppose that U is an n-dimensional vector space and V an m-dimensional vector space. Let β and γ be bases for U and V respectively; then for each x in U, the coordinate vector [x]β will be a vector in Fn, and the coordinate vector [T (x)]γ will be a vector in Fm. An m × n matrix A satisfying
A[x]β = [T (x)]γ for all x in U
is called the matrix of T relative to the bases β and γ, and is denoted [T ]γβ.

Theorem 4.7. Let β = {u1, u2, ..., un} and γ = {v1, v2, ..., vm} be bases for U and V respectively. If T : U → V is a linear transformation then
(a) the matrix of the transformation relative to bases β and γ always exists, and
(b) its jth column is the coordinate vector [T (uj)]γ.
Proof. Let β = {u1, u2, ..., un} be a basis for the n-dimensional space U and γ = {v1, v2, ..., vm} be a basis for the m-dimensional space V. We seek an m × n matrix A = [aij] such that
A[x]β = [T (x)]γ
holds for every x in U. In particular, this equation must hold for the basis vectors u1, u2, ..., un; that is,
A[u1]β = [T (u1)]γ, A[u2]β = [T (u2)]γ, ..., A[un]β = [T (un)]γ    (5)
But
[u1]β = (1, 0, ..., 0)T, [u2]β = (0, 1, ..., 0)T, ..., [un]β = (0, 0, ..., 1)T
so A[uj]β is simply the jth column of A:
A[uj]β = (a1j, a2j, ..., amj)T = colj (A)
Substituting these results into equation (5) yields
colj (A) = [T (uj)]γ, j = 1, 2, ..., n
Thus the matrix A = [T ]γβ exists, and its successive columns are the coordinate vectors of T (u1), T (u2), ..., T (un) with respect to the basis γ.

Examples
1. Let B be an m × n matrix and TB : Fn → Fm the corresponding matrix transformation. Let β = {e1, e2, ..., en} be the standard basis for Fn and γ = {e1, e2, ..., em} be the standard basis for Fm. Then
colj ([TB]γβ) = [TB (ej)]γ = Bej = colj (B)
so that, relative to the standard bases, [TB]γβ = B.

2. Let U have the basis β = {u1, u2, u3} and let V have the basis γ = {v1, v2}.
3. Let V = M2×2 (R) and let T : V → V be the linear transformation given by T (X) = BX − XB where X ∈ V and
B =
a b
c d
Let β = {E11, E12, E21, E22} be the standard basis for V, where
E11 =
1 0
0 0
E12 =
0 1
0 0
E21 =
0 0
1 0
E22 =
0 0
0 1
To find [T ]ββ it is necessary to do the following calculations:


 


a b
1 0
1 0
a b

−


T (E11 ) = BE11 − E11 B = 
c d
0 0
0 0
c d

=

0 −b
c

0


 = 0E11 + −bE12 + cE21 + 0E22


T (E12 ) = BE12 − E12 B = 

=

−c a − d
0

c

a b
c d




0 1
0 0





−

0 1
0 0





 = −cE11 + (a − d)E12 + 0E21 + cE22
55

a b
c d





T (E21 ) = BE21 − E21 B = 

=

b

0

T (E22 ) = BE22 − E22 B = 
0

0 0
1 0



−

0 0
1 0




a b
c d




 = bE11 + 0E12 + (d − a)E21 + −bE22


=

c d







d − a −b



a b



b

−c 0

a b
c d




0 0
0 1





−

0 0
0 1




a b
c d





 = 0E11 + bE12 + −cE21 + 0E22

Therefore it follows that




0 −c
b
0




 −b a − d
0
b 
β

[T ]β = 


 c
0
d − a −c 


0
c
−b
0
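As a check on the calculation above (a sketch, not part of the text), the code below builds the 4 × 4 matrix of X ↦ BX − XB relative to {E11, E12, E21, E22} for a numeric B and compares it with the formula just derived.

```python
def mat2_mul(P, Q):
    # Product of two 2x2 matrices stored as nested lists
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator_matrix(B):
    # Matrix of T(X) = BX - XB relative to the basis E11, E12, E21, E22
    cols = []
    for (r, c) in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # E11, E12, E21, E22
        E = [[0, 0], [0, 0]]
        E[r][c] = 1
        BE, EB = mat2_mul(B, E), mat2_mul(E, B)
        TX = [[BE[i][j] - EB[i][j] for j in range(2)] for i in range(2)]
        # Coordinates of TX relative to the basis, in the order E11, E12, E21, E22
        cols.append([TX[0][0], TX[0][1], TX[1][0], TX[1][1]])
    # The coordinate vectors are the COLUMNS of the matrix; transpose into rows
    return [[cols[j][i] for j in range(4)] for i in range(4)]

a, b, c, d = 1, 2, 3, 4
M = commutator_matrix([[a, b], [c, d]])
expected = [[0,  -c,    b,     0],
            [-b, a - d, 0,     b],
            [c,  0,     d - a, -c],
            [0,  c,     -b,    0]]
assert M == expected
```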
The following theorem follows directly from the definition of the matrix of a linear transformation.
Theorem 4.8. Let T : U → V be a linear transformation and let [T ]γβ be the matrix of T relative to the bases β and γ. Then, for every u ∈ U,
[T (u)]γ = [T ]γβ [u]β
Examples
1. Let U, V and the bases β = {u1, u2, u3}, γ = {v1, v2} be as in the examples above.
Let T be the linear transformation defined by
T (u1) = 2v1 + v2, T (u2) = v1 − v2, T (u3) = 2v2
Given that u = 3u1 − 2u2 + 7u3,

             3
    [u]β =  −2   and [T ]γβ = 2  1 0
             7                1 −1 2

Hence

    [T (u)]γ = [T ]γβ [u]β = 2  1 0    3    =  4
                             1 −1 2   −2      19
                                       7

Hence T (u) = 4v1 + 19v2.
2. Let T : V → W be a linear transformation, and let β = {v1, v2, v3} and γ = {w1, w2, w3} be bases for V and W respectively.
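The arithmetic in example 1 can be replayed numerically (a sketch, working purely in coordinates):

```python
def mat_vec(A, x):
    # Multiply matrix A (list of rows) by the column vector x
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

T_matrix = [[2, 1, 0],
            [1, -1, 2]]    # [T] relative to beta and gamma, from example 1
u_beta = [3, -2, 7]        # coordinates of u = 3u1 - 2u2 + 7u3
Tu_gamma = mat_vec(T_matrix, u_beta)
assert Tu_gamma == [4, 19]  # so T(u) = 4v1 + 19v2
```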

The following theorem gives a recipe for finding bases for ker(T ) and Im(T ) where possible.
Theorem 4.9. Let T : U → V be a linear transformation with matrix A = [T ]γβ relative to bases β = {v1, v2, . . . , vn} for U and γ = {w1, w2, . . . , wm} for V. Let s = nullity(A) and r = rank(A). Suppose that the vectors x1, x2, . . . , xs, where xj = (x1j, x2j, . . . , xnj)T , form a basis for N (A), and that the columns colc1(A), . . . , colcr(A) form a basis for C(A). Then
1. the vectors u1, u2, . . . , us defined by
uj = x1j v1 + x2j v2 + · · · + xnj vn
will be a basis for the kernel of T.
2. The vectors T (vc1), . . . , T (vcr) form a basis for the image of T.
If N (A) = {0}, then ker(T ) = {0}; if C(A) = {0}, then Im(T ) = {0}.
3. rank(T ) = rank(A) and nullity(T ) = nullity(A).
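The recipe in theorem 4.9 can be sketched in code (illustrative only): row-reduce A, read a null-space basis off the free columns, and take the pivot columns for the column space. Here T is identified with multiplication by A and both bases are standard.

```python
from fractions import Fraction

def rref(A):
    # Reduced row echelon form; returns (R, pivot_column_indices)
    R = [[Fraction(x) for x in row] for row in A]
    pivots, lead = [], 0
    for i in range(len(R)):
        while lead < len(R[0]) and all(R[r][lead] == 0 for r in range(i, len(R))):
            lead += 1
        if lead == len(R[0]):
            break
        j = next(r for r in range(i, len(R)) if R[r][lead] != 0)
        R[i], R[j] = R[j], R[i]
        R[i] = [x / R[i][lead] for x in R[i]]
        for r in range(len(R)):
            if r != i and R[r][lead] != 0:
                R[r] = [x - R[r][lead] * y for x, y in zip(R[r], R[i])]
        pivots.append(lead)
        lead += 1
    return R, pivots

def null_space_basis(A):
    # One basis vector per free column, by the usual back-substitution recipe
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for f in [j for j in range(n) if j not in pivots]:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -R[i][f]
        basis.append(v)
    return basis

A = [[1, 2, 3],
     [2, 4, 6]]                    # rank 1, nullity 2
R, pivots = rref(A)
assert pivots == [0]               # column 1 of A spans C(A), so T(v1) spans Im(T)
N = null_space_basis(A)
assert len(N) == 2                 # nullity(A) = 2 = nullity(T)
for v in N:                        # each basis vector really lies in N(A)
    assert all(sum(row[j] * v[j] for j in range(3)) == 0 for row in A)
```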

Quiz: True or false?
Let U and V be vector spaces of dimension n and m respectively over a field F, and let T be a linear transformation from U to V. Let β = {u1, u2, . . . , un} be a basis for U and let γ = {v1, v2, . . . , vm} be a basis for V. Let u ∈ U.
(a) For all a1, . . . , an ∈ F, T (a1u1 + · · · + anun) = a1T (u1) + · · · + anT (un).
(b) {T (u1), . . . , T (un)} is a basis for Im(T ).
(c) …
(d) rank(T ) + nullity(T ) = n.
(e) If α is a basis for ker(T ) and α ⊆ β, then β \ α is a basis for Im(T ).
(f) …
(g) [ui]β = ei.
(h) …
(i) [u]β depends on the order of β.
(j) …
(k) If m = n and ui = vi for all i, then [T ]γβ = I.

4.6 Invariant Subspaces

Definition. Let T : V → V be a linear operator. A subspace W of V is called T -invariant if
T (w) ∈ W, ∀w ∈ W.
Verify that the following are all T -invariant:
• {0}
• V
• ker(T )
• Im(T )
• Eλ = {v ∈ V : T (v) = λv}, the eigenspace spanned by the linearly independent eigenvectors of T corresponding to the eigenvalue λ.
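A small numerical illustration (not from the text): for a concrete operator on R2, an eigenvector stays on its own line under repeated application of T, so the eigenspace is T -invariant.

```python
def apply(A, v):
    # Apply the operator with matrix A to the vector v
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1],
     [0, 3]]          # illustrative operator on R^2 (not from the text)
v = [1, 1]            # eigenvector: A v = [3, 3] = 3 v
w = apply(A, v)
assert w == [3 * x for x in v]   # v lies in the eigenspace E_3
w2 = apply(A, w)                 # T(w) stays in span{v}: E_3 is T-invariant
assert w2 == [9 * x for x in v]
```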

4.7 The Vector Space of Linear Transformations

Definition. Let T1, T2 : V → W be linear transformations and let a be a scalar. Define aT1 + T2 by
(aT1 + T2)(v) = aT1(v) + T2(v), ∀v ∈ V.
Using the above definition, it is easily verified that if T1, T2 are linear transformations, then the linear combination aT1 + T2 is also a linear transformation. It follows that the set ℓ(V, W ) of all linear transformations from V to W is itself a vector space. In the case V = W we often write ℓ(V ).
This is leading up to the notion of associating the vector space ℓ(V, W ) with Mm×n in the case V and W are of dimension n and m respectively.
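In coordinates this is just matrix arithmetic: if T1 and T2 have matrices M1 and M2, then aT1 + T2 has matrix aM1 + M2. A quick sketch (the matrices are illustrative):

```python
def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

M1 = [[1, 0], [2, 1]]
M2 = [[0, 1], [1, 1]]
a = 3
# Matrix of aT1 + T2 is a*M1 + M2, entrywise
M = [[a * M1[i][j] + M2[i][j] for j in range(2)] for i in range(2)]
v = [5, -2]
lhs = apply(M, v)
rhs = [a * x + y for x, y in zip(apply(M1, v), apply(M2, v))]
assert lhs == rhs   # (aT1 + T2)(v) = aT1(v) + T2(v)
```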


4.8 Invertible Linear Transformations

Definition. Suppose that the linear transformation T : V → W is one-to-one and onto. Then for each w ∈ W there is a unique v ∈ V with T (v) = w, so the map T −1 : W → V defined by T −1(w) = v is well defined. We call T −1 the inverse of T.

• T −1 is linear.

• [T −1]βγ = ([T ]γβ)−1.

Definition. Vector spaces V and W are isomorphic if there is an invertible linear transformation (an isomorphism) T : V → W. We write V ≅ W to indicate that V is isomorphic to W.

The main result of this section is the following:
If V and W are finite dimensional vector spaces over the same field, then V ≅ W if and only if dim(V ) = dim(W ).

Examples
(a) …

(b) T : P3(F) → M2(F) via

    T (a + bx + cx2 + dx3) = a+b  b+c
                             c+d   d

This formalises our association of n-dimensional vector spaces with Fn, as I hinted at when we looked at standard bases.
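A sketch of example (b) in coordinates (the tuple (a, b, c, d) stands for a + bx + cx2 + dx3): the map is invertible because each coefficient can be recovered from the matrix entries, working from the bottom-right entry upward.

```python
def T(p):
    # p = (a, b, c, d) representing the polynomial a + b x + c x^2 + d x^3
    a, b, c, d = p
    return [[a + b, b + c],
            [c + d, d]]

def T_inv(M):
    # Recover the coefficients from the matrix entries, bottom-right first
    d = M[1][1]
    c = M[1][0] - d
    b = M[0][1] - c
    a = M[0][0] - b
    return (a, b, c, d)

p = (1, -2, 3, 5)
assert T_inv(T(p)) == p   # T is one-to-one and onto: an isomorphism
```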


4.9 Change of Basis

A vector space may have an infinite number of bases, but each basis contains the same number of vectors. The coordinate vector or coordinate matrix of a point changes with any change in the basis used. The following theorem describes exactly how the two coordinate vectors are related.

Theorem 4.10. If the basis for a vector space is changed from some old basis β = {u1, u2, . . . , un} to some new basis γ = {v1, v2, . . . , vn}, then the old coordinate vector [w]β is related to the new coordinate vector [w]γ of the same vector w by the equation
[w]γ = P [w]β
where the columns of P are the coordinate vectors of the old basis vectors relative to the new basis; that is, the column vectors of P are
[u1]γ , [u2]γ , . . . , [un]γ .
The matrix P is called the transition matrix (or change of basis matrix) from β to γ.

Proof. Suppose that V has an old basis β = {u1, u2, . . . , un} and a new basis γ = {v1, v2, . . . , vn}. Let w ∈ V, say
w = a1u1 + a2u2 + · · · + anun, so that [w]β = (a1, a2, . . . , an)T .
As γ is also a basis of V the elements of β can be expressed as follows
u1 = p11v1 + p21v2 + · · · + pn1vn
u2 = p12v1 + p22v2 + · · · + pn2vn
· · ·
un = p1nv1 + p2nv2 + · · · + pnnvn
Combining this system of equations with the above expression for w gives
w = (p11a1 + p12a2 + · · · + p1nan)v1
+ (p21a1 + p22a2 + · · · + p2nan)v2 + · · ·
+ (pn1a1 + pn2a2 + · · · + pnnan)vn
and thus it can be seen that

             p11a1 + p12a2 + · · · + p1nan
    [w]γ =   p21a1 + p22a2 + · · · + p2nan
             · · ·
             pn1a1 + pn2a2 + · · · + pnnan

which can be written as

             p11 p12 · · · p1n     a1
    [w]γ =   p21 p22 · · · p2n     a2
             · · ·                 · · ·
             pn1 pn2 · · · pnn     an

from which it can be seen that
[w]γ = P [w]β
where P 's columns are
[u1]γ , [u2]γ , . . . , [un]γ .

Examples
1. Consider the bases γ = {v1, v2} and β = {u1, u2} for R2, where
v1 = (1, 0); v2 = (0, 1); u1 = (1, 1); u2 = (2, 1)
(a) Find the transition matrix from β to γ.
First express u1 and u2 in terms of v1 and v2. By inspection:
u1 = v1 + v2
u2 = 2v1 + v2
so that

    [u1]γ = 1   and [u2]γ = 2
            1                1

Thus the transition matrix from β to γ is

    P = 1 2
        1 1

(b) Use the transition matrix to find [v]γ if

    [v]β = −3
            5

It is known from the above change of basis theorem that

    [v]γ = P [v]β = 1 2   −3  = 7
                    1 1    5    2

It is left for the student to show that −3u1 + 5u2 = 7v1 + 2v2 = (7, 2).
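The example can be checked mechanically (a sketch; vectors as Python lists):

```python
def mat_vec(P, x):
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(P))]

P = [[1, 2],
     [1, 1]]            # transition matrix from beta to gamma
v_beta = [-3, 5]
v_gamma = mat_vec(P, v_beta)
assert v_gamma == [7, 2]

# Cross-check directly: -3*u1 + 5*u2 = 7*v1 + 2*v2 = (7, 2)
u1, u2 = [1, 1], [2, 1]
direct = [-3 * a + 5 * b for a, b in zip(u1, u2)]
assert direct == [7, 2]
```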
4.10 Similar Matrices

The matrix of a linear operator T : V → V depends on the basis selected for V. One of the fundamental problems of linear algebra is to choose a basis for V that makes the matrix of T as simple as possible. This section is devoted to the study of this problem.
To demonstrate that certain bases produce a much simpler matrix of transformation than others, consider the linear operator T : R2 → R2 defined by

    T  x1  =    x1 + x2
       x2     −2x1 + 4x2

Standard bases do not necessarily produce the simplest matrices for linear operators. Taking β = {e1, e2} to be the standard basis for R2, by theorem 4.7 the matrix for T with respect to this basis is the standard matrix for T; that is,
[T ]ββ = [T (e1) | T (e2)]
From the definition of the linear transformation T,

    T (e1) =  1 , T (e2) = 1
             −2            4

so

    [T ]ββ =  1 1
             −2 4

In comparison, consider the basis γ = {u1, u2}, where

    u1 = 1 , u2 = 1
         1        2

By theorem 4.7 the columns of the matrix of T relative to γ are [T (u1)]γ and [T (u2)]γ. Since T (u1) = (2, 2)T = 2u1 and T (u2) = (3, 6)T = 3u2, this matrix is the diagonal matrix

    [T ]γγ = 2 0
             0 3

Much research has been devoted to determining the "simplest possible form" that can be obtained for the matrix of a linear operator T : V → V by choosing the basis appropriately. Before pursuing this idea further, it is necessary to grasp the theorem below.

Theorem 4.11. If β and γ are bases for a finite-dimensional vector space V, and if I : V → V is the identity operator, then [I]γβ is the transition matrix from β to γ.
Proof. Suppose that β = {u1, u2, . . . , un} and γ = {v1, v2, . . . , vn} are bases for V. Since I(uj) = uj for each j, the columns of [I]γβ are
[I(u1)]γ = [u1]γ , [I(u2)]γ = [u2]γ , . . . , [I(un)]γ = [un]γ
which are precisely the columns of the transition matrix from β to γ.

The ground work has been laid to consider the main problem in this section: if β and γ are two bases for a finite-dimensional vector space V, and if T : V → V is a linear operator, what relationship exists between [T ]ββ and [T ]γγ? Consider a vector v ∈ V. Writing T = I ◦ T ◦ I, where I is the identity operator, gives
[T ]γγ = [I]γβ [T ]ββ [I]βγ
All four vector spaces involved in the composition are the same (namely V ); however, the bases for the spaces vary. Since [I]βγ and [I]γβ are transition matrices which are inverses of one another, let P = [I]βγ; then P −1 = [I]γβ and hence it can be written that
[T ]γγ = P −1[T ]ββ P
This is all summarised in the following theorem.

Theorem 4.12. Let T : V → V be a linear operator on a finite-dimensional vector space V, and let β and γ be bases for V. Then

    [T ]γγ = P −1[T ]ββ P        (6)

where P is the change of basis matrix from γ to β.

Remark. When applying theorem 4.12, it is easy to forget whether P is the change of basis matrix from β to γ or the change of basis matrix from γ to β. In equation (6), P acts first on coordinate vectors relative to γ and must convert them into coordinate vectors relative to β before [T ]ββ is applied. Therefore, due to P 's positioning in the formula, it must be the change of basis matrix from γ to β.

Example. Let T : R2 → R2 be defined by

    T  x1  =    x1 + x2
       x2     −2x1 + 4x2

Find the matrix of T with respect to the standard basis β = {e1, e2} for R2, then use theorem 4.12 to find the matrix of T with respect to the basis γ = {u1, u2}, where u1 = (1, 1)T and u2 = (1, 2)T . As found above, [T ]ββ has columns T (e1) and T (e2), and the change of basis matrix from γ to β is P = [ [u1]β | [u2]β ]. By theorem 4.12 the matrix of T relative to the basis γ is

    [T ]γγ = P −1[T ]ββ P =  2 −1   1 1   1 1  = 2 0
                            −1  1  −2 4   1 2    0 3

which agrees with the previous result.

Definition
• If A and B are square matrices, it is said that B is similar to A if there is an invertible matrix P such that B = P −1AP.
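A final numeric check of the example (a sketch):

```python
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 1], [-2, 4]]        # [T] relative to the standard basis
P = [[1, 1], [1, 2]]         # change of basis matrix from gamma to beta
P_inv = [[2, -1], [-1, 1]]   # inverse of P (det P = 1)
assert matmul(P, P_inv) == [[1, 0], [0, 1]]

B = matmul(P_inv, matmul(A, P))
assert B == [[2, 0], [0, 3]]  # B is similar to A, and diagonal
```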