Lecture 22
Example 22
...
Find the eigenvalues of

A = [ −4  −6  −7 ]
    [  3   5   3 ]
    [  0   0   3 ]
...
Compute

A − λI = [ −4  −6  −7 ]     [ λ  0  0 ]
         [  3   5   3 ]  −  [ 0  λ  0 ]
         [  0   0   3 ]     [ 0  0  λ ]

       = [ −4 − λ    −6      −7   ]
         [    3    5 − λ      3   ]
         [    0      0      3 − λ ]
Then, expanding along the first column,

det(A − λI) = (−4 − λ)(5 − λ)(3 − λ) − 3(−6)(3 − λ)
            = (3 − λ)[(−4 − λ)(5 − λ) + 18]
            = (3 − λ)(λ² − λ − 2)
            = −λ³ + 4λ² − λ − 6

Factor the characteristic polynomial:

p(λ) = −λ³ + 4λ² − λ − 6 = −(λ − 2)(λ − 3)(λ + 1)
Therefore, the eigenvalues of A are λ1 = 2, λ2 = 3, and λ3 = −1.
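These eigenvalues are easy to confirm numerically; the following sketch (not part of the original notes) uses NumPy:

```python
import numpy as np

# The matrix A from the example above
A = np.array([[-4.0, -6.0, -7.0],
              [ 3.0,  5.0,  3.0],
              [ 0.0,  0.0,  3.0]])

# Eigenvalues are the roots of the characteristic polynomial
eigenvalues = sorted(np.linalg.eigvals(A).real)
print(eigenvalues)   # approximately [-1.0, 2.0, 3.0]
```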
...
Example 22
...
For each eigenvalue of A from Example 22
...
Solution
...
The Characteristic Polynomial
Therefore, v1 is an eigenvector of A with eigenvalue λ1
...
Finally, for λ3 = −1 we compute

A − λ3 I = [ −3  −6  −7 ]
           [  3   6   3 ]
           [  0   0   4 ]

and the null space of A − λ3 I is spanned by

v3 = [ −2 ]
     [  1 ]
     [  0 ]

and therefore v3 is an eigenvector of A with eigenvalue λ3
...
Therefore, the set β = {v1 , v2 , v3 }
is linearly independent (by Theorem 21
...
You can verify, for instance, that det([v1 v2 v3]) ≠ 0
...
6, the previous example has the following generalization
...
6: Suppose that A is an n × n matrix and has n distinct eigenvalues
λ1 , λ2 ,
...
Let vi be an eigenvector of A corresponding to λi
...
, vn }
is a basis for Rn
...
In forthcoming lectures, we will see that it is very convenient
to work with matrices A that have a set of eigenvectors that form a basis of Rn ; this is one of
the main motivations for studying eigenvalues and eigenvectors in the first place
...
For example, what if A does not have n distinct eigenvalues? In this case, does there exist a basis for Rn consisting of eigenvectors of A? In some cases, the answer is yes, as the next example demonstrates
...
7
...
A = [  2  0  0 ]
    [  4  2  2 ]
    [ −2  0  1 ]
Solution
...
Notice that although p(λ) is a
polynomial of degree n = 3, it has only two distinct roots and hence A has only two
distinct eigenvalues
...
For λ1 = 1 one finds that the eigenspace Null(A − λ1 I) is spanned by
v1 = [  0 ]
     [ −2 ]
     [  1 ]
and thus v1 is an eigenvector of A with eigenvalue λ1 = 1
...
A − 2I = [  0  0   0 ]
         [  4  0   2 ]
         [ −2  0  −1 ]
Therefore, rank(A − 2I) = 1 and thus by the Rank Theorem it follows that Null(A − 2I) is
a 2-dimensional eigenspace
...
Hence, for the repeated eigenvalue λ2 = 2 we were able to find two
linearly independent eigenvectors
...
In the previous Example 22
...
In general, if p(λ) is an
nth degree polynomial that can be completely factored into linear terms, then p(λ) can be
written in the form
p(λ) = (λ − λ1)^k1 (λ − λ2)^k2 · · · (λ − λp)^kp
where k1 , k2 ,
...
, λk
...
Motivated by this, we introduce
the following definition
...
8: Suppose that A ∈ Mn×n has characteristic polynomial p(λ) that can be
factored as
p(λ) = (λ − λ1)^k1 (λ − λ2)^k2 · · · (λ − λp)^kp
...
The dimension dim Null(A − λi I) of the eigenspace associated to λi is called the geometric multiplicity of
λi
...
Example 22
...
A 6 × 6 matrix A has characteristic polynomial
p(λ) = λ⁶ − 4λ⁵ − 12λ⁴
...
Solution
...
Their algebraic multiplicities are k1 = 4, k2 = 1, and k3 = 1, respectively
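Since p(λ) = λ⁴(λ² − 4λ − 12) = λ⁴(λ − 6)(λ + 2), the eigenvalues and their multiplicities can be read off from the roots. A quick numerical check (an illustrative sketch, not from the notes):

```python
import numpy as np

# Coefficients of p(λ) = λ^6 − 4λ^5 − 12λ^4, highest degree first
coeffs = [1, -4, -12, 0, 0, 0, 0]

roots = sorted(np.roots(coeffs).real)
print(roots)   # λ = 0 with multiplicity 4, plus λ = −2 and λ = 6
```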
...
In Example 22
...
For λ1 = 1, we found one linearly
independent eigenvector, and therefore λ1 has geometric multiplicity g1 = 1
...
However, as we will see in the next example, the geometric multiplicity gi is in general at most the algebraic multiplicity ki:

gi ≤ ki
Example 22
...
Find the eigenvalues of

A = [  2   4   3 ]
    [ −4  −6  −3 ]
    [  3   3   1 ]

and a basis for each eigenspace. For each eigenvalue of A, find its algebraic and geometric multiplicity
...
One computes
p(λ) = −λ³ − 3λ² + 4 = −(λ − 1)(λ + 2)²
and therefore the eigenvalues of A are λ1 = 1 and λ2 = −2
...
For λ1 = 1 we compute
A − I = [  1   4   3 ]
        [ −4  −7  −3 ]
        [  3   3   0 ]
and then one finds that
v1 = [  1 ]
     [ −1 ]
     [  1 ]
is a basis for the λ1 -eigenspace
...
For λ2 = −2 we compute

A − λ2 I = [  4   4   3 ]     [ 1  1  1 ]
           [ −4  −4  −3 ]  ∼  [ 0  0  1 ]
           [  3   3   3 ]     [ 0  0  0 ]
Therefore, since rank(A − λ2 I) = 2, the geometric multiplicity of λ2 = −2 is g2 = 1, which
is less than the algebraic multiplicity k2 = 2
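The rank computation above can be automated; here is a small sketch (the helper name is my own) that computes geometric multiplicities via the Rank Theorem:

```python
import numpy as np

A = np.array([[ 2.0,  4.0,  3.0],
              [-4.0, -6.0, -3.0],
              [ 3.0,  3.0,  1.0]])

def geometric_multiplicity(A, lam):
    # dim Null(A − λI) = n − rank(A − λI) by the Rank Theorem
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(geometric_multiplicity(A, 1.0))    # g1 = 1
print(geometric_multiplicity(A, -2.0))   # g2 = 1, although k2 = 2
```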
...
Therefore, it is not possible to construct a basis for R3 consisting of
eigenvectors of A
...
In the next lecture, we will elaborate on this situation further
...
11
...
0
0 −7
22
...
In mathematics, there are many cases where one is interested in classifying objects into categories or classes
...
For example, all
fruits are classified into categories: apples, pears, bananas, oranges, avocados, etc
...
We have spent a lot of time working with matrices and we have now reached a point in our
study where we would like to begin classifying matrices
...
Definition 22
...
We will say that A is similar to B
if there exists an invertible matrix P such that
A = PBP⁻¹
...
Hence, with Q = P⁻¹, we have that B = QAQ⁻¹ and thus B is similar to A
...
Matrices that are similar are clearly not necessarily equal
...
Here are a few reasons why
...
13: If A and B are similar matrices then the following are true:
(a) rank(A) = rank(B)
(b) det(A) = det(B)
(c) A and B have the same eigenvalues
Proof
...
If A and B are similar then A = PBP⁻¹ for some invertible matrix P
...
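The three claims of the theorem are easy to illustrate numerically; in this sketch, B and P are arbitrary illustrative choices of mine:

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])          # invertible, det(P) = 1
A = P @ B @ np.linalg.inv(P)        # A is similar to B

# (a) equal ranks, (b) equal determinants, (c) equal eigenvalues
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))   # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))         # True
print(np.allclose(sorted(np.linalg.eigvals(A).real),
                  sorted(np.linalg.eigvals(B).real)))         # True
```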
In the next lecture, we will see that if Rn has a basis of eigenvectors of A then A is similar
to a diagonal matrix
...
1
Eigenvalues of Triangular Matrices
Before discussing diagonalization, we first consider the eigenvalues of triangular matrices
...
1: Let A be a triangular matrix (either upper or lower)
...
Proof
...
Suppose then that A is a 3 × 3 upper triangular matrix:

A = [ a11  a12  a13 ]
    [  0   a22  a23 ]
    [  0    0   a33 ]

Then

A − λI = [ a11 − λ    a12       a13   ]
         [    0     a22 − λ     a23   ]
         [    0        0      a33 − λ ]
...
In other words, the eigenvalues of A are simply the diagonal entries of A
...
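In code, this means the eigenvalues of a triangular matrix require no computation at all; a quick check on an illustrative matrix of my own choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0, -2.0],
              [0.0, 3.0,  5.0],
              [0.0, 0.0, -1.0]])    # upper triangular

# The eigenvalues are exactly the diagonal entries 4, 3, −1
print(sorted(np.linalg.eigvals(A).real))
print(sorted(np.diag(A)))
```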
2
...
0 −4 0
3 0 7
(a) Find the characteristic polynomial and the eigenvalues of A
...
We now introduce a very special type of a triangular matrix, namely, a diagonal matrix
...
3: A matrix D whose off-diagonal entries are all zero is called a diagonal
matrix
...
0 0 −8
and here is a 5 × 5 diagonal matrix

D = [ 6   0    0    0   0 ]
    [ 0   0    0    0   0 ]
    [ 0   0  −7/2   0   0 ]
    [ 0   0    0    0   0 ]
    [ 0   0    0    0   0 ]
...
Moreover, the powers of a diagonal matrix are easy to compute: if D = diag(λ1, λ2, . . . , λn), then for any positive integer k,

D^k = diag(λ1^k, λ2^k, . . . , λn^k)
...
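This is easy to see in code; the diagonal matrix below is an illustrative choice:

```python
import numpy as np

D = np.diag([2.0, -1.0, 3.0])
k = 5

# D^k just raises each diagonal entry to the k-th power
Dk_fast = np.diag(np.diag(D) ** k)
Dk_slow = np.linalg.matrix_power(D, k)

print(np.diag(Dk_fast))               # 2^5, (−1)^5, 3^5
print(np.allclose(Dk_fast, Dk_slow))  # True
```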
2
Diagonalization
Recall that two matrices A and B are said to be similar if there exists an invertible matrix
P such that
A = PBP⁻¹
...
The problem of diagonalization is thus concerned with answering the
question of whether a given matrix is similar to a diagonal matrix
...
Lecture 23
Definition 23
...
In other words, if there exists an invertible P such that
A = PDP⁻¹
...
Suppose then that A is diagonalizable
...
4, there exists an invertible matrix P = [v1 v2 · · · vn] and a diagonal matrix

D = [ λ1  0   · · ·  0  ]
    [ 0   λ2  · · ·  0  ]
    [ ⋮    ⋮     ⋱    ⋮  ]
    [ 0   0   · · ·  λn ]
Multiplying both sides of the equation A = PDP⁻¹ on the right by the matrix P, we obtain

AP = PD
...
Therefore, since it holds that AP = PD,

[Av1 Av2 · · · Avn] = [λ1 v1 λ2 v2 · · · λn vn]
...
Thus, the columns v1 , v2 ,
...
In conclusion, if A is diagonalizable then Rn has a basis consisting of
eigenvectors of A
...
, vn } is a basis of Rn consisting of eigenvectors of A
...
, λn be the eigenvalues of A associated to v1 , v2 ,
...
Then P is invertible because {v1 , v2 ,
...
Let

D = [ λ1  0   · · ·  0  ]
    [ 0   λ2  · · ·  0  ]
    [ ⋮    ⋮     ⋱    ⋮  ]
    [ 0   0   · · ·  λn ]

Therefore, AP = [Av1 Av2 · · · Avn] = [λ1 v1 λ2 v2 · · · λn vn], and

PD = [v1 v2 · · · vn] [ λ1  0   · · ·  0  ]
                      [ 0   λ2  · · ·  0  ]
                      [ ⋮    ⋮     ⋱    ⋮  ]
                      [ 0   0   · · ·  λn ]  = [λ1 v1 λ2 v2 · · · λn vn]

so that AP = PD, and thus A = PDP⁻¹.
...
Thus, if Rn has a basis consisting of eigenvectors of A then A is diagonalizable
...
Theorem 23
...
, vn }
of Rn consisting of eigenvectors of A
...
5 is that the problem of diagonalization of a matrix A
is equivalent to finding a basis of Rn consisting of eigenvectors of A
...
23
...
Theorem 23
...
, λn
...
Proof
...
The set of eigenvectors {v1 , v2 ,
...
6)
...
, vn } is a basis of Rn consisting of eigenvectors of A and then by
Theorem 23
...
What if A does not have distinct eigenvalues? Can A still be diagonalizable? The
following theorem completely answers this question
...
7: A matrix A is diagonalizable if and only if the algebraic and geometric
multiplicities of each eigenvalue are equal
...
Let A be an n × n matrix and let λ1 , λ2 ,
...
Suppose that k1 , k2 ,
...
, gp are the geometric
multiplicities of the eigenvalues, respectively
...
, p
...
Therefore, there exist n linearly independent eigenvectors of A and consequently A is diagonalizable
...
Since the geometric multiplicity is at
most the algebraic multiplicity, the only way that g1 + g2 + · · · + gp = n is if gi = ki , i
...
,
that the geometric and algebraic multiplicities are equal
...
8
...
If yes, find a matrix P that diagonalizes
A
...
The characteristic polynomial of A is
p(λ) = det(A − λI) = −(λ − 2)(λ − 3)(λ + 1)
and therefore λ1 = 2, λ2 = 3, and λ3 = −1 are the eigenvalues of A
...
6 A is diagonalizable
...
Example 23
...
Determine if A is diagonalizable
...
A = [  2  0  0 ]
    [  4  2  2 ]
    [ −2  0  1 ]
Solution
...
An eigenvector corresponding to λ1 = 1 is
v1 = [  0 ]
     [ −2 ]
     [  1 ]
One finds that g2 = dim(Null(A − λ2 I)) = 2, and two linearly independent eigenvectors for λ2 are

v2 = [ −1 ]   and   v3 = [ 0 ]
     [  0 ]              [ 1 ]
     [  2 ]              [ 0 ]
Therefore, A is diagonalizable, and a matrix that diagonalizes A is
P = [v1 v2 v3] = [  0  −1  0 ]
                 [ −2   0  1 ]
                 [  1   2  0 ]
You can verify that

P [ λ1  0   0  ] P⁻¹ = A
  [ 0   λ2  0  ]
  [ 0   0   λ3 ]
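This verification is a one-liner numerically, with A, P, and D as in this example:

```python
import numpy as np

A = np.array([[ 2.0, 0.0, 0.0],
              [ 4.0, 2.0, 2.0],
              [-2.0, 0.0, 1.0]])
P = np.array([[ 0.0, -1.0, 0.0],
              [-2.0,  0.0, 1.0],
              [ 1.0,  2.0, 0.0]])   # columns are v1, v2, v3
D = np.diag([1.0, 2.0, 2.0])        # eigenvalues λ1 = 1 and λ2 = 2 (twice)

# Verify the diagonalization A = P D P⁻¹
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```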
Example 23
...
Determine if A is diagonalizable
...
A = [  2   4   3 ]
    [ −4  −6  −3 ]
    [  3   3   1 ]
Solution
...
For λ2 = −2 one computes

A − λ2 I ∼ [ 1  1  1 ]
           [ 0  0  1 ]
           [ 0  0  0 ]
We see that the dimension of the eigenspace of λ2 = −2 is g2 = 1, which is less than the
algebraic multiplicity k2 = 2
...
7 we can conclude that it is not
possible to construct a basis of eigenvectors of A, and therefore A is not diagonalizable
...
11
...
Show that if A is invertible then v is an eigenvector of A⁻¹ with corresponding eigenvalue λ⁻¹
...
12
...
Show that if v is an eigenvector of A with corresponding eigenvalue λ then v is also an eigenvector of B with corresponding eigenvalue λ
...
1
Symmetric Matrices
Recall that a square matrix A is said to be symmetric if Aᵀ = A
...
7
8 4
Symmetric matrices are ubiquitous in mathematics
...
, xn ) be a
function having continuous second order partial derivatives
...
∂²f/∂xi∂xj = ∂²f/∂xj∂xi
Therefore, the Hessian matrix of f is symmetric:

Hess(f) = [ ∂²f/∂x1∂x1   ∂²f/∂x1∂x2   · · ·   ∂²f/∂x1∂xn ]
          [ ∂²f/∂x2∂x1   ∂²f/∂x2∂x2   · · ·   ∂²f/∂x2∂xn ]
          [      ⋮             ⋮         ⋱          ⋮     ]
          [ ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   · · ·   ∂²f/∂xn∂xn ]
...
...
, an )
is a critical point of f, that is,

∂f/∂x1(P) = ∂f/∂x2(P) = · · · = ∂f/∂xn(P) = 0
then
(i) P is a local minimum point of f if the matrix Hess(f ) has all positive eigenvalues,
(ii) P is a local maximum point of f if the matrix Hess(f ) has all negative eigenvalues,
and
(iii) P is a saddle point of f if the matrix Hess(f) has both negative and positive eigenvalues
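As a small illustration (the function here is my own hypothetical example, not from the notes), consider f(x, y) = x² − y² at the critical point (0, 0):

```python
import numpy as np

# Hessian of f(x, y) = x^2 − y^2 at (0, 0): constant and symmetric
Hess = np.array([[2.0,  0.0],
                 [0.0, -2.0]])

w = np.linalg.eigvalsh(Hess)   # real eigenvalues, since Hess is symmetric
if (w > 0).all():
    kind = "local minimum"
elif (w < 0).all():
    kind = "local maximum"
else:
    kind = "saddle point"      # mixed positive and negative eigenvalues
print(kind)                    # saddle point
```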
...
For example, the matrix

A = [ 0  −1 ]
    [ 1   0 ]

has characteristic polynomial

p(λ) = λ² + 1

the roots of which are λ1 = √−1 = i and λ2 = −√−1 = −i
...
However, for
symmetric matrices we have the following
...
1: If A is a symmetric matrix then all of its eigenvalues are real numbers
...
24
...
, vk } are eigenvectors of a matrix A corresponding to
distinct eigenvalues λ1 , λ2 ,
...
, vk } is linearly independent (Theorem 21
...
For symmetric matrices we can say even more as the next theorem states
...
2: Let A be a symmetric matrix
...
Proof
...
Let λ1 ≠ λ2 be the eigenvalues associated to v1 and v2
...
Therefore, λ1 v1ᵀv2 = λ2 v1ᵀv2, which implies that (λ1 − λ2)v1ᵀv2 = 0
...
24
...
As it turns out, any symmetric A is diagonalizable and
moreover (and perhaps more importantly) there exists an orthogonal eigenvector matrix P
that diagonalizes A
...
Theorem 24
...
In fact, there is an
orthonormal basis of Rn of eigenvectors {v1 , v2 ,
...
In other words, the matrix P = [v1 v2 · · · vn] is orthogonal, PᵀP = I, and A = PDPᵀ
...
The punchline of Theorem 24
...
Moreover, we are
guaranteed to find an orthogonal matrix that diagonalizes a given symmetric matrix
...
4
...
A = [  1  0  −1 ]
    [  0  1   1 ]
    [ −1  1   2 ]
Solution
...
Eigenvectors of A associated to
λ1 , λ2 , λ3 are
u1 = [  1 ] ,   u2 = [ 1 ] ,   u3 = [ −1 ]
     [ −1 ]          [ 1 ]          [  1 ]
     [  1 ]          [ 0 ]          [  2 ]
...
2, the eigenvectors u1 , u2 , u3 form an orthogonal set:
u1ᵀu2 = 0,  u1ᵀu3 = 0,  u2ᵀu3 = 0
...
To that end, first compute u1ᵀu1 = 3, u2ᵀu2 = 2, and u3ᵀu3 = 6
...
Therefore, an orthogonal matrix that diagonalizes A is

P = [v1 v2 v3] = [  1/√3   1/√2   −1/√6 ]
                 [ −1/√3   1/√2    1/√6 ]
                 [  1/√3    0      2/√6 ]
You can easily verify that PᵀP = I, and that

A = P [ 0  0  0 ] Pᵀ
      [ 0  1  0 ]
      [ 0  0  3 ]
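NumPy's eigh routine, designed for symmetric matrices, reproduces this orthogonal diagonalization (A as in this example):

```python
import numpy as np

A = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 1.0,  1.0],
              [-1.0, 1.0,  2.0]])

# eigh returns real eigenvalues in ascending order and an orthogonal P
w, P = np.linalg.eigh(A)
print(w)                                        # approximately [0, 1, 3]
print(np.allclose(P.T @ P, np.eye(3)))          # True: P is orthogonal
print(np.allclose(P @ np.diag(w) @ P.T, A))     # True: A = P D Pᵀ
```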
Diagonalization of Symmetric Matrices
Example 24
...
Let A and B be n × n matrices
...
After this lecture you should know the following:
• a symmetric matrix is diagonalizable with an orthonormal set of eigenvectors
Lecture 25
The PageRank Algorithm
In this lecture, we will see how linear algebra is used in Google’s webpage ranking algorithm
used in everyday Google searches
...
1
Search Engine Retrieval Process
Search engines perform a two-stage process to retrieve search results1
...
g
...
After Stage 1, there is a large number of relevant pages
...
Or “homework
help” results in 49,400,000 pages (03/31/15)
...
The ranking is based on the hyperlinked, or networked, structure of the web and amounts to a popularity contest: if many
pages link to page Pi then Pi must be an important page and should therefore have a high
popularity score
...
g
...
teoma
...
At Stanford, doctoral students Sergey Brin
and Larry Page were busy working on a similar project which they had begun in 1995
...
Google is designed to crawl and index the
Web efficiently and produce much more satisfying search results than existing systems
...
stanford
...
”
1. A. N. Langville and C. D. Meyer, Google's PageRank and Beyond, Princeton University Press, 2006.
2. S. Brin and L. Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998.
Figure 25.1: A tiny web represented as a directed graph.
...
2
A Description of the PageRank Algorithm
In the PageRank algorithm, each inlink is viewed as a recommendation (or vote)
...
However, the
quality of the inlink (vote) is important
...
The PageRank of page i, denoted xi ,
is the sum of all the weighted PageRanks of all the pages pointing to i:
xi = Σ_{j→i} xj / |Nj|

where
(1) |Nj| is the number of outlinks from page j
(2) j → i means page j links to page i
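This sum can be computed by iteration, which is how it is done in practice. The sketch below builds H for a small hypothetical 4-page web of my own choosing (not the network in the figure) and iterates x_{k+1} = H x_k:

```python
import numpy as np

# Hypothetical outlink lists: page j -> pages it links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4

# Column j of H distributes weight 1/|N_j| to each page j links to
H = np.zeros((n, n))
for j, outlinks in links.items():
    for i in outlinks:
        H[i, j] = 1.0 / len(outlinks)

x = np.full(n, 1.0 / n)       # initial guess x0 = (1/4, 1/4, 1/4, 1/4)
for _ in range(100):
    x = H @ x                 # x_{k+1} = H x_k

print(np.round(x, 3))         # PageRanks; page 3 has no inlinks, so x3 -> 0
```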
Example 25
...
Find the PageRank of each page for the network in Figure 25
...
From the previous example, we see that the PageRank of each page can be found by
solving an eigenvalue/eigenvector problem
...
1 billion in 2006) and directly
solving the equations is not possible
...
One starts with an initial guess, say x0 = (1/4, 1/4, 1/4, 1/4)
...
In other words, we have a discrete dynamical system
xk+1 = Hxk
...
2
...
Starting with x0 = (1/5, 1/5, 1/5, 1/5, 1/5), we obtain that for k ≥ 39 the vectors xk = H^k x0 cycle between (0, 0, 0, 0.40, …) and (0, 0, 0, 0.28, …), and therefore the sequence does not converge
...
Figure 25.2: A web represented as a directed graph, together with its hyperlink matrix H.
3
...
g
...
Starting with x0 =
(1/5,
...
k→∞
Therefore, in this case the sequence x0 , x1 , x2 ,
...
Figure 25.3: A web represented as a directed graph, together with its hyperlink matrix H.
...
To deal with a dangling node, Brin and Page replaced
the associated zero-column with the vector (1/n)1 = (1/n, 1/n,
...
The justification for this
adjustment is that if a random surfer reaches a dangling node, the surfer will “teleport” to
any page in the web with equal probability
...
To deal with cycles, a surfer may abandon the hyperlink structure of the web by occasionally moving to a random page by typing its address in the browser
...
Hence, let 0 < α < 1 be
the proportion of time the random surfer uses the hyperlink structure
...
The matrix G goes by the name of the Google matrix, and it is reported that Google uses α = 0.85
...
The Google matrix G is now a primitive and
stochastic matrix
...
e
...
Primitive means that there exists k ≥ 1 such
that G^k has all positive entries (k = 1 in our case)
...
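A hedged sketch of this construction: the 2-page H below is a toy example of mine, and the blend G = αS + (1 − α)(1/n)11ᵀ, where S is H with dangling columns replaced by 1/n, is the standard adjustment described above:

```python
import numpy as np

def google_matrix(H, alpha=0.85):
    n = H.shape[0]
    S = H.copy()
    dangling = S.sum(axis=0) == 0
    S[:, dangling] = 1.0 / n                        # dangling-node fix
    return alpha * S + (1 - alpha) / n * np.ones((n, n))

H = np.array([[0.0, 0.0],
              [1.0, 0.0]])                          # page 2 is dangling
G = google_matrix(H)
print(np.allclose(G.sum(axis=0), 1.0))              # True: stochastic
print((G > 0).all())                                # True: primitive, k = 1
```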
Theorem 25
...
(ii) G∗ = [q q · · · q] where q is a probability vector
...
(iv) The vector q is the unique probability vector which is an eigenvector of G with
eigenvalue λ1 = 1
...
, λn have |λj | < 1
...
We will prove a special case4
...
If x = Gx and x has mixed signs, then for each i,

|xi| = | Σ_{j=1}^{n} Gij xj | < Σ_{j=1}^{n} Gij |xj|
...
Therefore, all the eigenvectors in the λ1 = 1 eigenspace have entries which are either all negative or all positive
...
This proves that there is a unique probability vector q such that
q = Gq
...
K. Bryan and T. Leise, The $25,000,000,000 Eigenvector: The Linear Algebra behind Google, SIAM Review, 2006
...
, λn be the eigenvalues of G
...
, n
...
, vn be the remaining
eigenvectors of G
...
From this we see that
lim_{k→∞} G^k q0 = q
...
3
Computation of the PageRank Vector
The Google matrix G is completely dense, which is computationally undesirable
...
A matrix-vector multiplication generally requires O(n²) computation (n ≈ 8,000,000,000 in 2006)
...
This means that
multiplication with H reduces to O(n) computation
...
e
...
Brin
and Page, and others, have confirmed that only 50-100 iterations are needed for a satisfactory
approximation of the PageRank vector q for the web
...
1
Discrete Dynamical Systems
Many interesting problems in engineering, science, and mathematics can be studied within
the framework of discrete dynamical systems
...
The state of the system (economic, ecologic, engineering, etc
...
The
relationship between the vector xk and the next vector xk+1 is what constitutes a model
...
1: A linear discrete dynamical system on Rn is an infinite sequence
{x0 , x1 , x2 ,
...
The vectors xk are called the states of the dynamical system and x0 is the initial condition
of the system
...
,
can be found by iterating the equation xk+1 = Axk
...
2
Population Model
Consider the dynamic system consisting of the population movement between a city and its
suburbs
...
x = [ c ]
    [ s ]
For simplicity, we assume that c + s = 1, i.e., c and s are population percentages of the total population
...
Suppose it is known that after each year 5% of the city's population moves to the suburbs and that 3% of the suburban population moves to the city
...
The equations

c1 = 0.95c0 + 0.03s0
s1 = 0.05c0 + 0.97s0

can be written in matrix form as

[ c1 ]   [ 0.95  0.03 ] [ c0 ]
[ s1 ] = [ 0.05  0.97 ] [ s0 ]

and similarly

[ c2 ]   [ 0.95  0.03 ] [ c1 ]
[ s2 ] = [ 0.05  0.97 ] [ s1 ]
Hence, the population movement is a linear dynamical system with matrix and state vector
"
#
" #
0
...
03
ck
A=
, xk =
...
05 0
...
70
x0 =
...
30
Then,
x1 = Ax0 = [ 0.95  0.03 ] [ 0.70 ] = [ 0.674 ]
           [ 0.05  0.97 ] [ 0.30 ]   [ 0.326 ]

and

x2 = Ax1 = [ 0.95  0.03 ] [ 0.674 ] ≈ [ 0.650 ]
           [ 0.05  0.97 ] [ 0.326 ]   [ 0.350 ]
In a similar fashion, one can compute that, up to 3 decimal places, the states approach

xk ≈ [ 0.375 ]
     [ 0.625 ]
We predict that in the year 2400, 38% of the total population will live in the city and 62%
in the suburbs
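This long-term prediction can be reproduced by simply iterating the system (a sketch, not from the notes):

```python
import numpy as np

A = np.array([[0.95, 0.03],
              [0.05, 0.97]])
x = np.array([0.70, 0.30])     # initial condition x0

for _ in range(400):           # iterate x_{k+1} = A x_k
    x = A @ x

print(np.round(x, 3))          # approaches [0.375 0.625]
```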
...
Lecture 26
Definition 26
...
An equilibrium
state for A is a vector q such that Aq = q
...
Thus, if the system
starts at the equilibrium q then it remains at q for all time
...
Therefore, q is an equilibrium for A if and only if q is in the nullspace of the matrix A − I:
q ∈ Null(A − I)
...
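Equivalently, an equilibrium is an eigenvector of A with eigenvalue 1; for the population matrix of the previous section this can be computed directly (a sketch, not from the notes):

```python
import numpy as np

A = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# An equilibrium q satisfies Aq = q, i.e. q ∈ Null(A − I):
# take the eigenvector of A for the eigenvalue closest to 1
w, V = np.linalg.eig(A)
q = V[:, np.argmin(np.abs(w - 1.0))]
q = q / q.sum()               # scale so the entries sum to 1

print(np.round(q, 3))         # [0.375 0.625]
print(np.allclose(A @ q, q))  # True
```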
3
...
A = [ 0.95  0.03 ]
    [ 0.05  0.97 ]
Does the initial condition of the population x0 change the long term behavior of the
discrete dynamical system? We will know the answer once we perform an eigenvalue analysis
on A (Lecture 22)
...
To see how the last equation was obtained, notice that

x1 = Ax0

and therefore

x2 = Ax1 = A(Ax0) = A²x0

and therefore

x3 = Ax2 = A(A²x0) = A³x0

etc
...
3
Stability of Discrete Dynamical Systems
We first formally define the notion of stability of a discrete dynamical system
...
4: Consider the discrete dynamical system xk+1 = Axk where A ∈ Rn×n
...
k→∞
k→∞
The following theorem characterizes when a discrete linear dynamical system is asymptotically stable
...
5: Let λ1 ,
...
If |λj | < 1 for all j = 1, 2,
...
Solution
...
Let {v1 ,
...
, λn respectively
...
, cn such that
x0 = c1 v1 + · · · + cn vn
...
A^k vi = λi^k vi

Then

xk = A^k x0
   = A^k (c1 v1 + · · · + cn vn)
   = c1 A^k v1 + · · · + cn A^k vn
   = c1 λ1^k v1 + · · · + cn λn^k vn
...
Therefore,

lim_{k→∞} xk = lim_{k→∞} (c1 λ1^k v1 + · · · + cn λn^k vn)
             = c1 lim_{k→∞} λ1^k v1 + · · · + cn lim_{k→∞} λn^k vn
             = 0·v1 + · · · + 0·vn
             = 0
...
As an example of an asymptotically stable dynamical system, consider a 2D system xk+1 = Axk whose matrix A has eigenvalues λ1 = −0.8 and λ2, both of modulus less than 1. Hence, by Theorem 26.5, for any initial condition x0 the sequence {x0 , x1 , x2 , …} converges to the origin in R2
...
In Figure 26.1, we plot four different state trajectories {x0 , x1 , x2 , …}. As expected, all trajectories converge to the origin.

Figure 26.1: A 2D asymptotically stable linear system
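The stability test of Theorem 26.5 is one line in code; the matrix below is an illustrative choice of mine with eigenvalues 0.8 and 0.5:

```python
import numpy as np

A = np.array([[0.8, 0.0],
              [0.3, 0.5]])

def is_asymptotically_stable(A):
    # Stable iff every eigenvalue satisfies |λ| < 1
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

print(is_asymptotically_stable(A))   # True

x = np.array([7.0, -7.0])            # an arbitrary initial condition
for _ in range(200):
    x = A @ x
print(np.allclose(x, 0.0))           # True: the trajectory decays to 0
```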
After this lecture you should know the following:
• what a dynamical system is
• how to find its equilibrium states
• how to determine if a discrete dynamical system has the origin as an asymptotically
stable equilibrium