Title: POLYLINEAR PART OF ADVANCED ALGEBRA
Polylinear part of advanced linear algebra
MAT 315
Oleg Viro

1...

Linear maps V → F have many names...
They are also called linear forms, linear functionals, dual vectors and covectors...

1... Dual vector space
Let V be a vector space over a field F... As we know, this set V∗ of linear maps V → F is a vector space over F...

1.3...

Define a map T∗ : W∗ → V∗ by the formula T∗(ϕ) = ϕ ◦ T...

1... Theorem...

Proof...

1... Theorem...

1... If T is an isomorphism then T∗ is an isomorphism, and (T∗)−1 = (T−1)∗...

1.4...

1.D Theorem... Then
• If T is injective, then T∗ : W∗ → V∗ is surjective...

1... Injective ⇐⇒ left invertible
Under the assumptions of 1...

Proof of Lemma 1...

=⇒ Assume that T is injective... Choose a basis (u1, . . . , up) of V... Then (T u1, . . . , T up) are linearly independent and can be extended to a basis of W... Define S : W → V by sending T ui to ui for i = 1, . . . , p and mapping the rest of the basis to 0... Hence S ◦ T = id...

⇐= Assume that S ◦ T = id... If T u = 0, then ST(u) = S0 = 0... Hence u = 0...
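A concrete sketch of the left-inverse construction (my own illustration, not from the notes): for an injective map given by a full-column-rank matrix A, the matrix S = (AᵀA)⁻¹Aᵀ is one explicit left inverse.

```python
import numpy as np

# An injective linear map T : F^2 -> F^3, given by a matrix A
# whose columns are linearly independent.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

# One explicit left inverse: S = (A^T A)^{-1} A^T.
# (A^T A is invertible exactly because the columns of A are independent.)
S = np.linalg.inv(A.T @ A) @ A.T

# S o T = id on F^2, as in the lemma.
print(np.allclose(S @ A, np.eye(2)))  # True
```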


1.F Lemma... Under the assumptions of 1.D,
T is surjective ⇐⇒ ∃ a linear map S : W → V such that T ◦ S = id...

Proof of Lemma 1.F...

=⇒ Choose a basis w = (w1, . . .)... Since T is surjective, T−1(wi) ≠ ∅ for each i... Choose vi ∈ T−1(wi)... There exists a unique linear map S : W → V such that S(wi) = vi for each i... Hence T ◦ S = id...

⇐= Since T ◦ S = id, u = T S(u) = T (S(u))... Hence W = range T and T is surjective...
Then by Lemma 1... By Theorem 1... Hence, by Lemma 1...

Assume T is surjective... By Lemma 1.F there exists a linear map S : W → V with T ◦ S = id... By 1.B, S∗ ◦ T∗ = (T ◦ S)∗ = id∗ = id... By 1.E, T∗ is injective...

Right and left invertibility of a morphism can be defined for morphisms of any category:
a morphism f is left invertible if there exists a morphism g such that g ◦ f = id;
a morphism f is right invertible if there exists a morphism g such that f ◦ g = id...
These are notions from set theory... 1.F and 1... This allows us to prove that surjectivity and injectivity are dual to each other, because right and left invertibility are dual to each other...

1.5... Indeed, any linear map maps 0, the only element of F0, to 0...
...

X

X

F = F
...
It is defined, due to its linearity, by the image of 1, and any
element of F may be the image of 1
...


1.G... It maps a list u = (u1, . . .)... (x1, . . . , xn) ↦ x1u1 + · · · + xnun... T ↦ (T(e1), . . . , T(en)), where e1, . . . , en is the standard basis of Fn...
1... Self-duality of the coordinate space: (Fn)∗ = Fn...
By 1.G, we have a bijection (Fn)∗ = L(Fn, F) → Fn... ϕ ↦ (ϕ(e1), . . . , ϕ(en)) ∈ Fn, which can be an arbitrary element of Fn...

The values ϕ1 = ϕ(e1), . . . , ϕn = ϕ(en) on the basis vectors e1, . . . , en can be considered as coordinates of ϕ in (Fn)∗... The basis e^1, . . . , e^n of (Fn)∗ corresponding to these coordinates is defined by the formulas e^j(x1, . . . , xn) = xj...

Indeed, for any ϕ ∈ (Fn)∗ and x = (x1, . . . , xn),

ϕ(x) = Σ xi ϕ(ei) = Σ ϕ(ei) e^i(x)  (summation over i = 1, . . . , n),  that is  ϕ = Σ ϕi e^i...

In particular,

e^j(ei) = 1, if i = j, and e^j(ei) = 0, if i ≠ j...

1.I Corollary...

Proof... Hence, V is isomorphic to (Fn)∗ by 1...

As we have just seen, (Fn)∗ is isomorphic to Fn...

A basis (v1, . . . , vn) of V defines an isomorphism T : V → Fn which maps the basis (v1, . . . , vn) to the standard basis (e1, . . . , en) of Fn... The dual map T∗ takes the dual basis (e^1, . . . , e^n) to some basis of V∗... The images T∗e^1, . . . , T∗e^n are denoted by v^1, . . . , v^n... The basis (v^1, . . . , v^n) is called dual to (v1, . . . , vn)... The basis (v^1, . . . , v^n) is related to (v1, . . . , vn)...

1... The second dual
The proof of Corollary 1.I... To construct an isomorphism V → V∗, we use an isomorphism between V and Fn... This dependence is not a defect of our presentation...

Contrary to this, the space (V∗)∗, which is dual to V∗, is isomorphic to V, as we will see in this section...
1.J... Canonical map to the second dual
Let V be a vector space over a field F... The map V → (V∗)∗ is defined by the formula u ↦ (V∗ → F : ϕ ↦ ϕ(u))...
Linearity of the map V∗ → F : ϕ ↦ ϕ(u) (that is, its belonging to (V∗)∗) follows immediately from the definition of linear operations in V∗: (aϕ + bψ)(u) = aϕ(u) + bψ(u)... This map is linear...

The construction of the map V → (V∗)∗ above does not involve any choice; it is natural and universal...
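A minimal sketch of this evaluation map in code (my illustration, not from the notes), with vectors of V = R³ modelled as tuples and covectors as Python functions:

```python
# Model vectors of V = R^3 as tuples and covectors as Python functions.
def ev(u):
    """The canonical map V -> (V*)*: a vector u goes to the
    functional 'evaluate the covector phi at u'."""
    return lambda phi: phi(u)

u = (1.0, 2.0, 3.0)
phi = lambda v: 2 * v[0] - v[2]   # a covector: a linear form on R^3
psi = lambda v: v[1]              # another covector

# ev(u) belongs to (V*)*: it is linear in the covector argument,
# by the definition of linear operations with covectors.
a, b = 5.0, -3.0
aphi_plus_bpsi = lambda v: a * phi(v) + b * psi(v)
print(ev(u)(aphi_plus_bpsi) == a * ev(u)(phi) + b * ev(u)(psi))  # True
```

No basis of R³ was used anywhere: the construction is choice-free, matching the text.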
...
1.K... If V is finite dimensional then the natural map V → (V∗)∗ is an isomorphism...

1.L Lemma... The natural map V → (V∗)∗ is injective...

Proof of Lemma 1.L... Let u be a non-zero vector of V. As a list of vectors which consists of a single non-zero vector, u is linearly independent... Let u, u1, . . . be a basis... The first covector of the dual basis takes value 1 on u... Hence u is not in the kernel of V → (V∗)∗... Thus, the map is injective...

Proof of Theorem 1.K... By Corollary 1.I, any finite dimensional vector space V is isomorphic to its dual V∗, which, in turn, is isomorphic to its dual (V∗)∗... Hence our map V → (V∗)∗, being an injective linear map between vector spaces of the same finite dimension, is an isomorphism...

In the proof of Lemma 1.L... However, Lemma 1.L... For any non-zero vector in any vector space one can find a linear functional which takes a non-zero value on this vector... Nonetheless, Theorem 1.K...


1... Bracket, bra and ket
We see that in the finite dimensional case a vector space and its dual have the same dimension and play symmetric rôles: the space dual to V∗ is identified with V... This suggests making the notation more symmetric... This defines a map

V∗ × V → F : (ϕ, u) ↦ ⟨ϕ|u⟩...

The first of these equalities is linearity of ϕ; the second is the definition of linear operations with covectors...

The definition of the dual linear map gets a new look under the bracket notation... In the bracket notation this equality looks as follows: ⟨T∗ϕ|u⟩ = ⟨ϕ|Tu⟩
...
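In coordinates this identity is easy to check (my illustration): if T has matrix A and covectors are written as coordinate vectors, T∗ acts by the transposed matrix, and ⟨T∗ϕ|u⟩ = ⟨ϕ|Tu⟩ becomes (Aᵀϕ)·u = ϕ·(Au).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # matrix of T : F^2 -> F^3
u = rng.standard_normal(2)        # a vector u of F^2
phi = rng.standard_normal(3)      # coordinates of a covector phi on F^3

# <T* phi | u> = <phi | T u>:
lhs = (A.T @ phi) @ u
rhs = phi @ (A @ u)
print(np.allclose(lhs, rhs))  # True
```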
DUAL SPACE

There is a tendency, especially among physicists, to identify an object with its action on other objects, provided that this action characterizes the acting object completely...

Dirac went further... According to Dirac, covectors should always be enclosed in the left-hand half of the bracket, like this: ⟨ϕ|, and called bra vectors, while vectors should be dressed in the right-hand half of the bracket, like that: |u⟩, and called ket vectors...

Let us formalize this structure... A map b : V × W → F is called a bilinear pairing if it is linear in each of the variables...


In the case of the canonical pairing V∗ × V → F the associated linear maps are the identities V∗ → V∗ and V → V...
A non-singular bilinear pairing b : V × W → F is essentially the canonical pairing W∗ × W → F: at least, replacing V by W∗ via the associated isomorphism V → W∗ turns b into the canonical pairing...
...
1.8...

1.M... Let (v1, . . . , vp) be a basis of a vector space V and (w1, . . . , wq) be a basis of a vector space W... Let A be the matrix of a linear map T : V → W with respect to the bases (v1, . . . , vp) and (w1, . . . , wq)... Then the matrix of T∗ : W∗ → V∗ with respect to the dual bases (v^1, . . . , v^p) and (w^1, . . . , w^q) is obtained from A by transposition of rows and columns...

Proof... Matrix A consists of scalars aij such that T vi = Σ_{k=1}^q aki wk... Let B be the matrix of T∗; this means that T∗w^j = Σ_{k=1}^p bkj v^k... On one hand,

⟨w^j | T vi⟩ = ⟨w^j | Σ_{k=1}^q aki wk⟩ = Σ_{k=1}^q aki ⟨w^j | wk⟩ = Σ_{k=1}^q aki δkj = aji...

On the other hand, ⟨w^j | T vi⟩ = ⟨T∗w^j | vi⟩ and

⟨T∗w^j | vi⟩ = ⟨Σ_{k=1}^p bkj v^k | vi⟩ = Σ_{k=1}^p bkj ⟨v^k | vi⟩ = Σ_{k=1}^p bkj δik = bij,

where (bik) is the matrix of the dual map... Hence bij = aji...


1... Rank of dual map
Recall that the rank of a linear map is the dimension of its range; the rank of T is denoted by rk T...

1.N... The ranks of linear maps dual to each other are equal...

Proof... Any linear map T : V → W is represented as a composition R ◦ S, V → range T → W, of the surjection S defined by T (that is, Su = T u for u ∈ V) and the inclusion R : range T → W... Hence rk S = rk R = dim range T = rk T...
By 1... Hence rk S∗ = rk R∗ = rk T∗ = dim (range T)∗... Hence their dimensions are equal...
1.O Corollary... The maximal number of linearly independent rows of a matrix is equal to the maximal number of linearly independent columns...

Proof... The maximal number of linearly independent rows of a matrix is equal to the rank of the dual linear map, by Theorem 1.M... By Theorem 1.N...
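Corollary 1.O is easy to confirm numerically (my illustration, not from the notes): a matrix and its transpose always have the same rank.

```python
import numpy as np

rng = np.random.default_rng(1)
# A 5x3 matrix of rank at most 2: a product of 5x2 and 2x3 factors.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))

# Row rank = column rank: rk A = rk A^T.
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))  # True
```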


1... Improving matrix notation
Traditionally, vectors of Fn in matrix notation are associated with matrix-columns...

and if we associate to u a matrix-column X and to a map T its matrix A, then AX is the matrix-column associated to T u... This cannot correspond to linear maps, because the dimension of the space of linear maps is greater than the dimension of the space of matrix-columns...

The choice of notation used above is commonly accepted, and we speculate on other possibilities not because we seriously consider changing a commonly accepted notation, but because we want to prepare the next twist of the notation's development...

(1) Covectors are associated to matrix-rows...

(3) Dual maps are represented in dual bases by the same matrix...

The first rule does not require a formal justification... Here it is: the value of the covector represented by the matrix-row (x1 x2 . . . xn) on the vector represented by the matrix-column with entries y1, y2, . . . , yn is their matrix product

x1 y1 + x2 y2 + · · · + xn yn

Consider now the last two rules... The dual map T∗ is defined by ⟨T∗x|y⟩ = ⟨x|T y⟩ for all x ∈ (Fq)∗ and y ∈ Fp... Denote the matrix-row (x1 x2 . . . xq) representing x by X... Then T y is represented by the matrix-column AY... Fix x... Denote the matrix-row representing T∗x by Z... Then ⟨T∗x|y⟩ = ZY and ⟨x|T y⟩ = X(AY) = (XA)Y... It holds true for all Y... It follows that Z = XA
...
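A quick numerical check of this convention (my illustration): with covectors as rows, the dual map needs no new matrix; the same A acts on the other side.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # matrix of T : F^3 -> F^4 (p = 3, q = 4)
Y = rng.standard_normal((3, 1))   # a vector of F^3 as a matrix-column
X = rng.standard_normal((1, 4))   # a covector on F^4 as a matrix-row

# The row representing T* x is Z = X A, since <T* x | y> = <x | T y>:
Z = X @ A
print(np.allclose(Z @ Y, X @ (A @ Y)))  # True
```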
1.11...

Mathematicians place indices on the right-hand side and below the main symbol... However, in situations when there are too many indices of different nature, these objections do not work... Until this point we used mainly lower indices... In fact, this exception is the first manifestation of a whole system, according to which about half of all indices should be upper...

The coordinates of vectors are to be equipped with upper indices, like this: (x^1, . . . , x^n)... Vectors in the basis dual to a basis v1, . . . , vn ∈ V carry upper indices: v^1, . . . , v^n ∈ V∗... The coordinates of covectors are numerated with low indices: x1 v^1 + x2 v^2 + · · · + xn v^n ∈ V∗... This is so usual that there is an agreement to skip the summation sign in such a situation (i.e., when in a formula an index appears twice, once as a lower and once as an upper index)... The range of summation is determined from the context...
...
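The Einstein summation convention is implemented directly by `numpy.einsum` (my illustration): a repeated index is summed over.

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # entries a^i_j of a linear map
x = np.array([5.0, 6.0])        # coordinates x^j of a vector

# y^i = a^i_j x^j  (summation over the repeated index j is understood)
y = np.einsum("ij,j->i", a, x)
print(y)  # [17. 39.]
```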

Recall that entries of the matrix of a linear map are involved in the following formulas: the image of the basis vector ej under the linear map with matrix (aij) is Σ_{i=1}^m aij ei, and the ith coordinate of the image of the vector (x^1, . . . , x^n) is Σ_j aij x^j... The first formula suggests raising the first index of the entry aij... In the second formula we have to raise, first, the index at x^j, as was stated above, and then raising the first index at the matrix entry would make it perfect: Σ_{j=1}^n a^i_j x^j...

Thus, in the matrices that we have met so far, the index numerating rows should be raised to the upper position, while the index numerating columns should be left in the lower position... Hence the formula of 1.10 turns into x_i a^i_j y^j, where double summation (both over i and j) is understood...

It could be preserved if one could use high-dimensional matrices...
Tensors

2... Polylinear maps
Let V1, . . . , Vn and W be vector spaces over a field F... A map

F : V1 × · · · × Vn → W : (v1, . . . , vn) ↦ F(v1, . . . , vn)

is said to be polylinear or multilinear if it is linear as a function of each of its arguments when the other arguments are fixed... That is,

F(v1, . . . , vi−1, x + y, vi+1, . . . , vn) = F(v1, . . . , vi−1, x, vi+1, . . . , vn) + F(v1, . . . , vi−1, y, vi+1, . . . , vn),
F(v1, . . . , vi−1, a vi, vi+1, . . . , vn) = a F(v1, . . . , vi−1, vi, vi+1, . . . , vn)

for i = 1, . . . , n, a ∈ F...

2... The set of all polylinear maps V1 × · · · × Vn → W is denoted by L(V1, . . . , Vn; W)
...
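The simplest case is a bilinear map (n = 2); the two conditions can be checked numerically for, say, F(v, w) = vᵀMw (my illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
F = lambda v, w: v @ M @ w        # a bilinear map F : R^3 x R^3 -> R

v, x, y, w = rng.standard_normal((4, 3))
a = 2.5

# Linearity in the first argument (the second one fixed):
ok1 = np.allclose(F(x + y, w), F(x, w) + F(y, w))
ok2 = np.allclose(F(a * v, w), a * F(v, w))
# Linearity in the second argument:
ok3 = np.allclose(F(v, x + y), F(v, x) + F(v, y))
print(ok1 and ok2 and ok3)  # True
```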


2... Tensor algebra of a vector space
Let V be a finite dimensional vector space over F... A polylinear form V × · · · × V × V∗ × · · · × V∗ → F with p copies of V and q copies of V∗ is called a tensor of type (p, q)... It is also said to be a mixed tensor, p times covariant and q times contravariant... As a subspace of L(V, . . . , V∗; F), Tens^q_p(V) is a vector space over the same ground field F as V... If p = 0 or q = 0, then we write Tens_p(V) or Tens^q(V)... Thus Tens_1(V) = V∗... Thus Tens^1(V) = (V∗)∗ = V...

• A tensor V × V∗ → F of type (1, 1) defines (and is defined by) a linear map V → (V∗)∗ = V, thus it is identified with an operator V → V...


2... Coordinates in the spaces of tensors
Let e1, . . . , en be a basis of V and e^1, . . . , e^n be the dual basis in V∗... A tensor T ∈ Tens^q_p(V) is defined by its values on lists of basis vectors

T^{j1,...,jq}_{i1,...,ip} = T(ei1, . . . , eip, e^{j1}, . . . , e^{jq})

These values are called coordinates of T... A tensor, as a polylinear function on vectors v1, . . . , vp and covectors u^1, . . . , u^q, is determined by them:

T(v1, . . . , vp, u^1, . . . , u^q) = T(v1^{i1} ei1, . . . , vp^{ip} eip, u^1_{j1} e^{j1}, . . . , u^q_{jq} e^{jq})
= T^{j1,...,jq}_{i1,...,ip} v1^{i1} . . . vp^{ip} u^1_{j1} . . . u^q_{jq}

The vector vk is presented here as vk = vk^i ei, with the number of the vector as a lower index and the coordinate index as an upper one...

The values T^{j1,...,jq}_{i1,...,ip} of a tensor T are its coordinates with respect to a basis e^{i1,...,ip}_{j1,...,jq} in Tens^q_p(V), in the sense that any tensor T ∈ Tens^q_p(V) is the linear combination of the tensors e^{i1,...,ip}_{j1,...,jq} with coefficients T^{j1,...,jq}_{i1,...,ip}... The tensor e^{i1,...,ip}_{j1,...,jq} is defined by the formula

(e^{i1,...,ip}_{j1,...,jq})(v1, . . . , vp, u^1, . . . , u^q) = v1^{i1} . . . vp^{ip} u^1_{j1} . . . u^q_{jq}


2... Change of basis
Under a change of basis in V, the new basis is expressed in terms of the old one according to the formula

ẽα = C^i_α ei = Σ_i C^i_α ei

and the old basis is expressed in terms of the new one by the formula

eα = C̃^i_α ẽi,

where C̃^i_α are the entries of the matrix inverse to the transition matrix (C^i_α), so that C̃^i_α C^α_j = δ^i_j and C^β_i C̃^i_α = δ^β_α... The coordinates of a tensor are transformed accordingly:

T̃^{β1,...,βq}_{α1,...,αp} = T^{j1,...,jq}_{i1,...,ip} C^{i1}_{α1} . . . C^{ip}_{αp} C̃^{β1}_{j1} . . . C̃^{βq}_{jq}

Indeed,

T̃^{β1,...,βq}_{α1,...,αp} = T(ẽα1, . . . , ẽαp, ẽ^{β1}, . . . , ẽ^{βq})
= T(C^{i1}_{α1} ei1, . . . , C^{ip}_{αp} eip, C̃^{β1}_{j1} e^{j1}, . . . , C̃^{βq}_{jq} e^{jq})
= C^{i1}_{α1} . . . C^{ip}_{αp} C̃^{β1}_{j1} . . . C̃^{βq}_{jq} T(ei1, . . . , eip, e^{j1}, . . . , e^{jq})
= C^{i1}_{α1} . . . C^{ip}_{αp} C̃^{β1}_{j1} . . . C̃^{βq}_{jq} T^{j1,...,jq}_{i1,...,ip}
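For a twice covariant tensor (p = 2, q = 0) the transformation rule T̃αβ = C^i_α C^j_β Tij is matrix congruence, easy to verify with `einsum` (my illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3))   # coordinates T_ij of a bilinear form
C = rng.standard_normal((3, 3))   # transition matrix: new e_a = C^i_a e_i

# New coordinates: T~_ab = C^i_a C^j_b T_ij  =  (C^T T C)_ab
T_new = np.einsum("ia,jb,ij->ab", C, C, T)
print(np.allclose(T_new, C.T @ T @ C))  # True
```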

2... Maps induced by a linear map
A linear map F : V → W defines induced linear maps only for Tens_k and Tens^k; for Tens^q_p with both p ≠ 0 and q ≠ 0, it does not induce any map...

The map Tens_k(F) : Tens_k(W) → Tens_k(V), which is induced by a linear map F : V → W, maps T : W^k → F to the composition T ◦ (F × · · · × F) : V^k → F... It is denoted also by F^∗... The star here indicates that the map is induced by F; its upper index position means that it acts in the direction that is opposite to the direction of F... If there is no danger of confusion, we will use the shorthand notation F_∗ for Tens^k(F)...

Both constructions respect compositions: for linear maps F : U → V and G : V → W, the maps induced by their composition G ◦ F : U → W are the appropriate compositions of the maps induced by F and G... The proofs of these statements are straightforward
...
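For k = 2 the induced map Tens_2(F) is the pullback of bilinear forms, B ↦ B(F·, F·); in matrices it sends B to AᵀBA, and respect for compositions, (G ◦ F)^∗ = F^∗ ◦ G^∗, can be checked directly (my illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 2))   # matrix of F : U -> V
B = rng.standard_normal((4, 3))   # matrix of G : V -> W
Q = rng.standard_normal((4, 4))   # a bilinear form on W

pullback = lambda M, T: M.T @ T @ M   # Tens_2 of a map with matrix M

# (G o F)^* Q  =  F^* (G^* Q)
lhs = pullback(B @ A, Q)
rhs = pullback(A, pullback(B, Q))
print(np.allclose(lhs, rhs))  # True
```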
2.6... Multiplication of tensors
The product of tensors T ∈ Tens^q_p(V) and S ∈ Tens^s_r(V) is the tensor T ⊗ S ∈ Tens^{q+s}_{p+r}(V) defined by

(T ⊗ S)(v1, . . . , vp, w1, . . . , wr, u^1, . . . , u^q, z^1, . . . , z^s) = T(v1, . . . , vp, u^1, . . . , u^q) S(w1, . . . , wr, z^1, . . . , z^s)

This multiplication is distributive with respect to addition of tensors...

Example...

Let us fix a basis e1, . . . , en of V... Then e^1, . . . , e^n is the dual basis in V∗... Consider e^i ⊗ e^j ∈ Tens_2(V)... Let us calculate its value on vectors v = v^k ek and w = w^m em:

e^i ⊗ e^j : (v, w) ↦ ⟨e^i|v⟩⟨e^j|w⟩ = ⟨e^i|v^k ek⟩⟨e^j|w^m em⟩ = v^k ⟨e^i|ek⟩ w^m ⟨e^j|em⟩ = v^k δ^i_k w^m δ^j_m = v^i w^j

In words: the tensor e^i ⊗ e^j evaluated on a pair of vectors v and w gives the product of the ith coordinate of v and the jth coordinate of w... Its coordinates with respect to e1, . . . , en are (e^i ⊗ e^j)_{p,q} = δ^i_p δ^j_q... This is one of the basis vectors in Tens_2(V)
...
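In coordinates e^i ⊗ e^j is the matrix with a single 1 in position (i, j), i.e. an outer product of basis vectors (my illustration):

```python
import numpy as np

n, i, j = 4, 1, 2
E = np.eye(n)

# Matrix of the bilinear form e^i (x) e^j: a single 1 in position (i, j).
Tij = np.outer(E[i], E[j])

rng = np.random.default_rng(6)
v, w = rng.standard_normal((2, n))

# Its value on (v, w) is the product of the i-th coordinate of v
# and the j-th coordinate of w.
value = v @ Tij @ w
print(np.allclose(value, v[i] * w[j]))  # True
```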

Although the basis vectors of Tens^{q+s}_{p+r}(V) belong to the range of the pairing Tens^q_p(V) × Tens^s_r(V) → Tens^{q+s}_{p+r}(V), the pairing is not surjective... A linear map whose range contains a basis would be surjective; the pairing, however, is bilinear rather than linear... Indeed,

dim (Tens^q_p(V) × Tens^s_r(V)) = n^{p+q} + n^{r+s}, while dim Tens^{q+s}_{p+r}(V) = n^{p+q+r+s},

so usually the dimension of the source space is less than the dimension of the target...

Those tensors which can be presented as a product of tensors are called decomposable...

There is a construction which for any two vector spaces V and W over F gives rise to a vector space V ⊗ W... The space V ⊗ V can be identified with Tens^2(V)...


3... Permutations
Denote the set {1, 2, 3, . . . , n} by Nn... The set of all permutations of the set Nn is denoted by Sn and called the symmetric group of degree n...

Permutations belonging to Sn can be presented by pictures of the following type... One should draw the arcs clearly, avoiding intersections of several arcs in one point and points of tangency...
SYMMETRIC AND SKEW-SYMMETRIC

A picture for the composition σ2 ◦ σ1 of permutations σ1 and σ2 can be obtained from the pictures for σ1 and σ2 by drawing them one over the other as follows...

3.A... Any permutation can be presented as a composition of transpositions... On a picture of an arbitrary permutation, arcs can be drawn in such a way that no two intersection points of the arcs are on the same horizontal line... This gives a desired decomposition of the permutation into a composition of permutations, each of which is presented by a picture with one intersection point...

The arcs which start at points i and j with i < j and finish at σ(i) and σ(j) must intersect if σ(i) > σ(j)... Namely, if σ(i) > σ(j), then the number of intersection points is odd; if σ(i) < σ(j), then it is even... The sign sign σ of a permutation σ is defined to be −1 if σ is odd and +1 if σ is even...
...
3...
In other words, T is symmetric if, for any permutation σ : {1, . . . , k} → {1, . . . , k} and any v1, . . . , vk ∈ V,

T(v1, . . . , vk) = T(vσ(1), . . . , vσ(k))

3... Symmetric polylinear forms V^k → F constitute a subspace of Tens_k(V)... This subspace is denoted by Sym_k(V)...
3...

Consider the multiples of 1 in a field F: 1, 1 + 1, 1 + 1 + 1, . . .

This sequence may be periodic, like in the field F2 = {0, 1} of two elements, where 2 · 1 = 1 + 1 = 0... If k · 1 ≠ 0 for any integer k > 0, then F is said to be a field of characteristic zero...

In a field of characteristic zero, it is possible to divide any element of the field by any positive integer... In particular, one can define the arithmetic mean of a1, . . . , ak ∈ F as (a1 + · · · + ak)/k... If the characteristic of the field is not zero and divides k, then an arithmetic mean of k elements of the field cannot be defined...
4
...
Then any
polylinear form can be symmetrized
...
, vk ) =

X 1
T (vσ(1) ,
...
It coincides with the
original form T if T was already symmetric
...
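For a bilinear form, symmetrization averages over the two orders of the arguments; in matrix terms it is T ↦ (T + Tᵀ)/2 (my illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((3, 3))   # matrix of a bilinear form T(v, w) = v^T T w

sym_T = (T + T.T) / 2             # average of T(v, w) and T(w, v)

v, w = rng.standard_normal((2, 3))
print(np.allclose(v @ sym_T @ w, w @ sym_T @ v))   # symmetric: True
print(np.allclose((sym_T + sym_T.T) / 2, sym_T))   # sym of sym is sym: True
```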


3...
A bilinear form T : V × V → F is said to be anti-symmetric or skew-symmetric if T(v, w) = −T(w, v) for any v, w ∈ V... A polylinear form is anti-symmetric if transposing any two of its arguments changes the sign of its value. In formula:

T(A, v, B, w, C) = −T(A, w, B, v, C),

where v, w ∈ V and A, B, C are lists of vectors (some of which may be empty)... Here T(A, v, B, w, C) stands for T(a1, . . . , ai, v, b1, . . . , bj, w, c1, . . . , cl) with A = a1, . . . , ai, B = b1, . . . , bj, C = c1, . . . , cl...
3.B Reformulations... Let T be a polylinear form. Then the following statements are equivalent:
(1) T is anti-symmetric...
(2) T vanishes on any list of vectors v1, . . . , vk in which two of the vectors are equal (say, vi = vj for some i ≠ j)...
(3) The value of T does not change when a multiple of one argument is added to another argument. In formula:
T(A, v, B, w, C) = T(A, v, B, w + av, C),
where v, w ∈ V, a ∈ F and A, B, C are some lists of vectors...
(4) T vanishes on any linearly dependent list of vectors...

Proof...
(1) =⇒ (2): Transpose the two equal arguments. By anti-symmetry, the value of T changes sign; on the other hand, the transposition does not change the list of arguments and hence does not change the value of T...
In formulas: T(A, v, B, w, C) = −T(A, w, B, v, C)...
Hence T(A, v, B, v, C) = 0...

(2) =⇒ (3):
T(A, v, B, w + av, C) = T(A, v, B, w, C) + a T(A, v, B, v, C)
= T(A, v, B, w, C) + a·0 = T(A, v, B, w, C)

(3) =⇒ (4): Let v1, . . . , vk be a linearly dependent list, say vj = Σ_{i≠j} ai vi... Subtracting ai vi from the jth argument for each i ≠ j, we get

T(v1, . . . , vj, . . . , vk) = T(v1, . . . , vj − Σ_{i≠j} ai vi, . . . , vk) = T(v1, . . . , 0, . . . , vk) = 0

(4) =⇒ (2): A list of vectors in which two elements are equal is linearly dependent...

For any permutation σ: T(vσ(1), . . . , vσ(n)) = sign σ T(v1, v2, . . . , vn)...

Denote the space of anti-symmetric polylinear forms V^k → F by the symbol Λ^k V... For k = 0 and 1 the conditions of 3.B hold true tautologically...

If dim V = 1, then Λ^2 V = 0... Any two vectors of a 1-dimensional space are linearly dependent, so by 3.B the value of an anti-symmetric form must be zero...


3... Anti-symmetrization
Assume that the ground field F has characteristic zero... There is a linear map alt : Tens_k(V) → Λ^k(V) defined by the formula

alt T(v1, . . . , vk) = Σ_{σ∈Sk} (sign σ / k!) T(vσ(1), . . . , vσ(k))

If T ∈ Tens_k(V) is anti-symmetric, then alt T = T
...
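The formula can be run directly with itertools (my illustration); for k = 2 it reduces to T ↦ (T − Tᵀ)/2 in matrix form:

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def alt(T, k):
    """alt T(v1, ..., vk) = sum over sigma of (sign sigma / k!) T(v_sigma(1), ...)."""
    perms = list(permutations(range(k)))   # len(perms) == k!
    return lambda *vs: sum(sign(p) * T(*(vs[i] for i in p))
                           for p in perms) / len(perms)

rng = np.random.default_rng(8)
M = rng.standard_normal((3, 3))
T = lambda v, w: v @ M @ w                 # an arbitrary bilinear form

A = alt(T, 2)
v, w = rng.standard_normal((2, 3))
# alt T is anti-symmetric, and equals the form of the matrix (M - M^T)/2:
print(np.allclose(A(v, w), -A(w, v)))                 # True
print(np.allclose(A(v, w), v @ ((M - M.T) / 2) @ w))  # True
```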


3... Exterior k-forms on a k-dimensional space
3... Theorem... dim Λ^k(F^k) = 1...

Proof... Hence dim Λ^k(F^k) ≥ 1... Let T ∈ Λ^k(F^k). It is defined by its coordinates in the space Tens_k(F^k)... Recall that the coordinates of a polylinear map are its values on sequences (ei1, ei2, . . . , eik)... If ip = iq for some p ≠ q, then the value of a skew-symmetric form is zero... Otherwise (i1, i2, . . . , ik) = (σ(1), σ(2), . . . , σ(k)) for some permutation σ... Hence

T(eσ(1), eσ(2), . . . , eσ(k)) = sign σ T(e1, . . . , ek),

so T is determined by the single value T(e1, . . . , ek) ∈ F...

An isomorphism Λ^k(F^k) → F is defined by T ↦ T(e1, e2, . . . , ek)...

3... If dim V = k > 0, then dim Λ^k(V) = 1...
Determinant
4... Determinant of an operator
Let V be a vector space of dimension n over a field F, and T : V → V be a linear map... The induced map T∗ : Λ^n(V) → Λ^n(V) is a multiplication by an element of F, since dim Λ^n(V) = 1. This element is called the determinant of T and is denoted by det T...

4.2...
4.A... In formula: det(T ◦ S) = det T det S...

4... det id = 1...

4.C... det(T −1) = 1/det T... In particular, det T ≠ 0...

4.D... If T : V → V is not invertible, then det T = 0...

Proof... Since T is not invertible, it is not surjective and dim range T < dim V... Hence Λ^n(range T) = 0, and T∗ : Λ^n(V) → Λ^n(V) is factored through the zero space...

The last two properties imply the following convenient criterion for non-invertibility of a linear map T : V → V:
4... A linear map T : V → V is not invertible ⇐⇒ det T = 0...
4.3...
4.F... Isomorphic operators have equal determinants...

Proof... Let S = L ◦ T ◦ L−1. Then det S = det(L ◦ T ◦ L−1) = det L det T det(L−1) = det T det L (det L)−1 = det T... In order to make the arguments legitimate, we have to identify V and W somehow...

Notice that commutative diagrams in that proof are not necessary...

Linear maps T, S, and L form a commutative diagram: L ◦ T = S ◦ L, where L : V → W... The maps T∗ and S∗ are multiplications by det T and det S, respectively
...
Passing to the maps induced on Λ^n, we get a commutative diagram of these isomorphisms... The linear map L∗ : F → F appears twice in this diagram... Say L∗ is multiplication by C... The commutativity of the diagram (1) means that multiplication by C det(S) coincides with the multiplication by det(T)C... Hence det S = det T...


4... Formula for determinant
4.G... Let T : Fn → Fn be a linear map with T ej = T^i_j ei... Then

det T = Σ_{σ∈Sn} sign σ T^{σ(1)}_1 . . . T^{σ(n)}_n

Proof... Let D ∈ Λ^n(Fn) be the form with D(e1, . . . , en) = 1... Then

det T · D(e1, . . . , en) = D(T e1, . . . , T en)
= D(T^{j1}_1 ej1, . . . , T^{jn}_n ejn)
= T^{j1}_1 . . . T^{jn}_n D(ej1, . . . , ejn)
= Σ_{σ∈Sn} T^{σ(1)}_1 . . . T^{σ(n)}_n D(eσ(1), . . . , eσ(n))
= Σ_{σ∈Sn} T^{σ(1)}_1 . . . T^{σ(n)}_n sign σ D(e1, . . . , en)
= Σ_{σ∈Sn} sign σ T^{σ(1)}_1 . . . T^{σ(n)}_n


...
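A direct, if inefficient, implementation of this formula (my illustration):

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_leibniz(T):
    """det T = sum over sigma of sign(sigma) * T[sigma(1),1] * ... * T[sigma(n),n]."""
    n = len(T)
    return sum(sign(s) * np.prod([T[s[j], j] for j in range(n)])
               for s in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
print(det_leibniz(A))  # 7.0, matching np.linalg.det(A)
```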
4.5...
For each λ ∈ F consider the determinant of the operator λI − T : V → V... By Theorem 4.G, det(λI − T) is a sum of products of entries of the matrix of λI − T... Each of the products contains dim V factors... Hence det(λI − T) is a polynomial in λ of degree dim V.

det(λI − T) is called the characteristic polynomial of T...
4.H... The characteristic polynomials of isomorphic operators coincide...

4... Recall that any operator isomorphic to T : V → V can be presented as L−1 ◦ T ◦ L, where L : W → V is an invertible linear map... It follows from 4.F that the values of the characteristic polynomials of T and S at each λ ∈ F are equal, because the operators λI − T and λI − L−1 ◦ T ◦ L are isomorphic for any value of λ...

These arguments suffice if the characteristic of the ground field F is zero, because two polynomials over such a field are equal iff they have the same values at each λ ∈ F... For example, the polynomials x² and x have the same values if F = Z/2 and, more generally, if the characteristic is 2... Nevertheless, 4.H holds true for operators over any field F...
Recall (see Theorem 5...) that λ is an eigenvalue of T if and only if λI − T is not invertible... Together with the criterion of non-invertibility from Section 4..., this gives:
4.I... λ ∈ F is an eigenvalue of T ⇐⇒ λ is a root of the characteristic polynomial of T...

Exercise...

4... Trace
Let V be a finite-dimensional vector space over a field F and T : V → V be an operator... In a basis v1, . . . , vn the operator T has matrix T^i_j... The trace of T is tr T = T^1_1 + T^2_2 + · · · + T^n_n, the sum of the diagonal entries of its matrix...

This definition requires a proof, because it involves a choice of basis, while in the name no basis is mentioned... The independence of the choice follows from 4.I and the following statement...
4.J...

4... 4.J together with the Exercise above can be summarized in the following formula:

det(λI − T) = λ^n − tr T λ^{n−1} + · · · + (−1)^n det T

Proof of 4.J... Expand det(λI − T) according to Theorem 4.G... The summands which contribute to the monomial of degree n − 1 correspond to permutations σ which leave n − 1 elements of {1, 2, . . . , n} fixed... Only one permutation has this property: σ = id... The corresponding summand is

(λ − T^1_1)(λ − T^2_2) . . . (λ − T^n_n)

and the coefficient of λ^{n−1} in it is −(T^1_1 + T^2_2 + · · · + T^n_n) = −tr T...

Thus, the trace tr T and the determinant det T are, up to sign, coefficients of the characteristic polynomial of T... The trace and determinant occupy the extreme positions and they have special properties distinguishing them from the other numerical invariants which come from the characteristic polynomial
...
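This relation between trace, determinant and the characteristic polynomial is easy to confirm numerically (my illustration; `np.poly` returns the coefficients of the characteristic polynomial of a matrix, highest degree first):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# det(lambda*I - A) = lambda^2 - tr(A) lambda + det(A) for n = 2.
coeffs = np.poly(A)
print(np.allclose(coeffs, [1.0, -np.trace(A), np.linalg.det(A)]))  # True
```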

The next theorem is a distinctive property of the trace...
4.K... tr(T ◦ S) = tr(S ◦ T)...

Proof... Choose a basis. Let T^i_j and S^i_j be the matrices of T and S in this basis... Let us find the traces:

tr(T S) = T^i_j S^j_i and tr(S T) = S^i_j T^j_i

These two numbers are equal because the summation indices are dummy: renaming them does not affect the sum
...
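A quick numerical confirmation (my illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
T, S = rng.standard_normal((2, 4, 4))

# tr(T S) = tr(S T), even though T S != S T in general.
print(np.allclose(np.trace(T @ S), np.trace(S @ T)))  # True
```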
Observe that the trace is neither multiplicative nor additive... For example, let M be the 2 × 2 matrix with rows (0 1) and (1 0)... Then

tr M = 0,  while  tr(M M) = tr I = 2
