Title: Linear Algebra - Inverses and Determinants
Description: These notes cover the concepts of Inverses and Determinants

Document Preview

Extracts from the notes appear below.


An Application: Linear Regression
This is a particular application of the concept of projections to statistics
...
We’ll see what we can make of it
...
We’ll start with finding a linear best fit: a
line that comes as close as possible to all of the data points
...
If we
were to try to solve for a SINGLE line a0 + a1 x that matched those points, the system would be
a0 + a1 (1) = 2

a0 + a1 (2) = 4

a0 + a1 (3) = 3

which forms the augmented system

            [ 1 1 | 2 ]                       [ 1 1 |  2 ]
    [D|y] = [ 1 2 | 4 ]  −→ Row Reduction −→  [ 0 1 |  2 ]
            [ 1 3 | 3 ]                       [ 0 0 | −3 ]

so no single solution
...
Instead, we pick the a = (a0 , a1 ) that makes the error as small as possible:

             [ 1 1 ]          [ 2 ]
    Da − y = [ 1 2 ] [ a0 ] − [ 4 ]
             [ 1 3 ] [ a1 ]   [ 3 ]
So, how do we do that? Look at the error’s expression
...
The first term is the product of a matrix and a column vector, so it is an element of the
column space of the matrix
...
So, what we’re looking for
is minimizing the distance between a subspace of R3 and a vector
...
So, Da will be equal to the projection of y
onto the column space of D
...
So,
Gram-Schmidt on the columns v1 = (1, 1, 1) and v2 = (1, 2, 3) of D:

    u1 = v1 = (1, 1, 1)
    u2 = v2 − ((v2 · u1)/(u1 · u1)) u1 = (1, 2, 3) − (6/3)(1, 1, 1) = (−1, 0, 1)

...
The coefficients of the projection of y onto this orthogonal basis are

    b1 = (y · u1)/(u1 · u1) = 9/3 = 3,    b2 = (y · u2)/(u2 · u2) = 1/2

so the projection is p = 3u1 + (1/2)u2 = (5/2, 3, 7/2)
Those coefficients, unfortunately, are not those for the actual system
...

−R1  0 1 1 
2
2
−R1
0 2 1
−2R2
0 0 0
1 3 7
2
1

This makes our best fit the line 2 + 1 x
...
We can use that
with the projection coefficients b1 = 3 and b2 = 1/2 for

    p = 3u1 + (1/2)u2 = 3v1 + (1/2)(v2 − 2v1)
      = 3v1 + (1/2)v2 − v1
      = 2v1 + (1/2)v2

recovering the same coefficients a0 = 2 and a1 = 1/2
...

Those errors are orthogonal to the columns of the data matrix
...
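As a numerical sanity check of the whole computation, here is a short sketch in Python (an illustration only, not part of the notes; it assumes NumPy, with variable names D, y, u1, u2 mirroring the text above):

    import numpy as np

    # Data matrix and observations for the points (1, 2), (2, 4), (3, 3)
    D = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    y = np.array([2.0, 4.0, 3.0])

    # Gram-Schmidt on the columns of D
    v1, v2 = D[:, 0], D[:, 1]
    u1 = v1
    u2 = v2 - ((v2 @ u1) / (u1 @ u1)) * u1   # (-1, 0, 1)

    # Project y onto Col(D) using the orthogonal basis
    b1 = (y @ u1) / (u1 @ u1)                # 3
    b2 = (y @ u2) / (u2 @ u2)                # 1/2
    p = b1 * u1 + b2 * u2                    # (5/2, 3, 7/2)

    # Solve D a = p; NumPy's least squares gives the same answer
    a = np.linalg.lstsq(D, y, rcond=None)[0]
    print(a)                                 # [2.  0.5] -> the line 2 + (1/2)x
    print(np.allclose(D @ a, p))             # True

    # The error Da - y is orthogonal to both columns of D
    e = D @ a - y
    print(e @ v1, e @ v2)                    # 0.0 0.0 (up to rounding)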



Matrix Inverses
Definition: The inverse of the matrix A, written A−1 , has the property
A A−1 = A−1 A = I
where I is the identity matrix
...
A matrix that
has an inverse is called ‘invertible,’ and those that do NOT have an inverse are called
‘singular.’
...
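For example (a standard illustration, not from the notes),

    A = [ 1 2 ]
        [ 2 4 ]

is singular: the second row of A is twice the first, so the second row of any product AB is twice its first row, and AB = I is impossible.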
Remember, as usual, that we don’t use random matrices
...

There are slightly weaker versions of inverses we will see very little of:
Definition: The Left Inverse of A is a matrix B such that BA = I
...

We won’t use them often, since for a square matrix they are the same thing:
Property: if A and B are n × n matrices and AB = I then B = A−1
...
Notice that A is not stated to be invertible
...

Indeed, multiplying out, A(BA − I) = (AB)A − A = IA − A = 0. Note that this, alone, does not prove that BA − I = 0; it does, however, prove that every
column of BA − I is orthogonal to every row of A:
Col(BA − I) ⊥ Row(A)
...
Col(AB) is a subset of Col(A):
Col(AB) = {(AB)x|x ∈ Rn } = {A(Bx)|x ∈ Rn } ⊆ {Ax|x ∈ Rn } = Col(A)
...
Since AB = I, we get Rn = Col(I) = Col(AB) ⊆ Col(A), so A has rank n. This means that the rows of A span Rn , meaning
that
(BA − I) ⊥ Rn =⇒ (BA − I) = 0,
meaning BA = I
...
Also, this takes advantage (slightly) of another property of invertible matrices:
you can almost treat them like non-zero numbers
...


Properties:
• A−1 exists implies (AT )−1 = (A−1 )T
...

• A−1 exists means (A−1 )−1 = A
...

• (cA)−1 = (1/c)A−1 for any scalar c ≠ 0
...

• I −1 = I
...

• If A is invertible, its inverse is unique
...
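A quick numerical spot-check of these properties (a sketch only, assuming NumPy; the matrix is the one inverted in the example below):

    import numpy as np

    A = np.array([[ 0.0, 2.0,  1.0],
                  [ 1.0, 0.0, -1.0],
                  [-1.0, 1.0,  1.0]])
    Ainv = np.linalg.inv(A)

    print(np.allclose(np.linalg.inv(A.T), Ainv.T))           # (A^T)^-1 = (A^-1)^T
    print(np.allclose(np.linalg.inv(Ainv), A))               # (A^-1)^-1 = A
    c = 5.0
    print(np.allclose(np.linalg.inv(c * A), Ainv / c))       # (cA)^-1 = (1/c)A^-1
    print(np.allclose(np.linalg.inv(np.eye(3)), np.eye(3)))  # I^-1 = I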


If A is invertible, then

    Ax = b   =⇒   x = A−1 b

Having an invertible A on the left hand side means you can get the solution that way (usually
not worth the effort, just solve it), so
• You ALWAYS get a solution (always consistent)
...

This is primarily of value when one has to calculate many solutions of the form Ax = b
...
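For instance (a sketch, assuming NumPy; for large systems one would factor A once, e.g. with scipy.linalg.lu_factor, rather than form A−1 explicitly):

    import numpy as np

    A = np.array([[ 0.0, 2.0,  1.0],
                  [ 1.0, 0.0, -1.0],
                  [-1.0, 1.0,  1.0]])
    Ainv = np.linalg.inv(A)              # computed once up front

    # ... then reused for as many right-hand sides as needed
    for b in ([1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [2.0, 4.0, 3.0]):
        x = Ainv @ np.array(b)
        print(x, np.allclose(A @ x, b))  # each solution exists and is unique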

Calculating Inverses: Not as hard as you may expect
...

For a 1 × 1, [a]−1 = [1/a], provided a ≠ 0
...

For a 2 × 2,

    [ a b ]−1                  [  d −b ]
    [ c d ]    = 1/(ad − bc)   [ −c  a ]

provided ad − bc ≠ 0
...
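A one-off check of the 2 × 2 formula (a sketch, assuming NumPy; a, b, c, d are arbitrary values with ad − bc ≠ 0):

    import numpy as np

    a, b, c, d = 1.0, 2.0, 3.0, 4.0
    A = np.array([[a, b], [c, d]])
    det = a * d - b * c                       # -2.0, nonzero, so A is invertible
    Ainv = np.array([[d, -b], [-c, a]]) / det
    print(np.allclose(A @ Ainv, np.eye(2)))   # True
    print(np.allclose(Ainv @ A, np.eye(2)))   # True (the other side comes free)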


Once you’re dealing with a bigger matrix than that, you have to row reduce
...



Example:

    [  0 2  1 | 1 0 0 ]
    [  1 0 −1 | 0 1 0 ]
    [ −1 1  1 | 0 0 1 ]

First step, put that leading one on top, then clear below it:

    R2        [ 1 0 −1 | 0 1 0 ]
    R1        [ 0 2  1 | 1 0 0 ]
    R3 + R1   [ 0 1  0 | 0 1 1 ]

Next, swap the second and third rows to get a leading one there, then clear below it:

              [ 1 0 −1 | 0  1  0 ]
    R3        [ 0 1  0 | 0  1  1 ]
    R2 − 2R3  [ 0 0  1 | 1 −2 −2 ]

Finally, clear above the pivots:

    R1 + R3   [ 1 0 0 | 1 −1 −2 ]
              [ 0 1 0 | 0  1  1 ]   = [ I | A−1 ]
              [ 0 0 1 | 1 −2 −2 ]

...

Only check one side, though: by the earlier property, AA−1 = I already forces A−1 A = I
...

Notice that, using the earlier property, this means that A (n × n) is invertible if and only
if we can reduce it down to I
...
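The same [A | I] procedure is easy to automate. Here is a minimal sketch (the helper gauss_jordan_inverse is hypothetical and assumes NumPy; in practice you would call np.linalg.inv):

    import numpy as np

    def gauss_jordan_inverse(A):
        """Invert A by row-reducing [A | I] to [I | A^-1]."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])
        for col in range(n):
            # Put a nonzero leading entry on top of the remaining rows
            pivot = col + np.argmax(np.abs(M[col:, col]))
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("matrix is singular")
            M[[col, pivot]] = M[[pivot, col]]   # swap rows
            M[col] /= M[col, col]               # scale to get a leading 1
            for r in range(n):                  # clear the rest of the column
                if r != col:
                    M[r] -= M[r, col] * M[col]
        return M[:, n:]

    A = np.array([[0, 2, 1], [1, 0, -1], [-1, 1, 1]])
    print(gauss_jordan_inverse(A))   # [[ 1 -1 -2], [ 0  1  1], [ 1 -2 -2]]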



Solving/Simplifying Matrix Equations
These come up every so often
...

Example: Solve for X (as much as possible) in
AXB T A − C + DA = 0

given A, B are invertible
...
First
step, get the constants onto the right:
AXB T A = C − DA
then multiply each side by A−1 on the right and left (remember, there’s a difference):

    XB T = A−1 (C − DA)A−1 = A−1 CA−1 − A−1 D

since A−1 DAA−1 = A−1 D. Then multiply by the inverse of B T on the right for

    X = A−1 CA−1 (B T )−1 − A−1 D(B T )−1 ,   where (B T )−1 = (B −1 )T
...
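A quick randomized check of this manipulation (a sketch, assuming NumPy; adding a multiple of I keeps the random A and B safely invertible):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # almost surely invertible
    B = rng.standard_normal((n, n)) + n * np.eye(n)
    C = rng.standard_normal((n, n))
    D = rng.standard_normal((n, n))

    Ainv = np.linalg.inv(A)
    BTinv = np.linalg.inv(B.T)                       # equals inv(B).T
    X = Ainv @ C @ Ainv @ BTinv - Ainv @ D @ BTinv

    # X should satisfy the original equation A X B^T A - C + D A = 0
    print(np.allclose(A @ X @ B.T @ A - C + D @ A, 0.0))   # True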
Determinants
For
a 1 × 1 matrix, the determinant is simply the single entry (not absolute-valued; it may be positive or negative)
...
It uses a concept called a Cofactor
...

That power of (−1) means that the signs follow this checkerboard pattern:

    [ + − + − ... ]
    [ − + − + ... ]
    [ + − + − ... ]
    [ ...         ]

This recursion actually works: since we know how to calculate the determinant for n = 1 or 2, we can therefore
do it for n = 3, then n = 4, etc
...


Example: Calculate

    | 1 −2 0 |
    | 3  1 4 |
    | 0 −2 2 |

...

If we had taken the middle column we’d have

    | 1 −2 0 |
    | 3  1 4 |  =  −(−2) | 3 4 |  + (1) | 1 0 |  − (−2) | 1 0 |
    | 0 −2 2 |           | 0 2 |        | 0 2 |         | 3 4 |

    = 2(6) + 2 + 8

    = 22
...
It’s generally easier to use the row/column with the most zeros in it
...
Here’s the
main one:
Theorem: det(AB) = det(A) det(B) for all n × n matrices A and B
...
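The cofactor expansion translates directly into a recursive routine (a sketch, assuming NumPy; it expands along the first row for simplicity, skipping zero entries as suggested above):

    import numpy as np

    def det_cofactor(A):
        """Determinant by cofactor expansion along the first row."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]                        # the 1 x 1 base case
        total = 0.0
        for j in range(n):
            if A[0, j] == 0:                      # zero entries contribute nothing
                continue
            minor = np.delete(A[1:], j, axis=1)   # drop row 0 and column j
            total += (-1) ** j * A[0, j] * det_cofactor(minor)
        return total

    A = np.array([[1, -2, 0],
                  [3,  1, 4],
                  [0, -2, 2]])
    print(det_cofactor(A))                        # 22.0

    # Spot-check the theorem: det(AB) = det(A) det(B)
    B = np.array([[2, 0, 1],
                  [1, 1, 0],
                  [0, 3, 1]])
    print(np.isclose(det_cofactor(A @ B),
                     det_cofactor(A) * det_cofactor(B)))   # True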
