Title: Econometrics using Matrix Algebra, a summary of key concepts
Description: Here's a neat little summary sheet(s) of all the concepts covered in the ES30027 module, namely Econometrics concepts with Matrix Algebra. Key topics include: Matrix Algebra Review, Probability Distributions, Hypothesis Testing, Heteroskedasticity, Instrumental Variables & Maximum Likelihood Estimation. Covers all the key concepts you need to remember at the last moment, great to review right before your exam!
Document Preview
Extracts from the notes are shown below.
ES30027 Egg Timer!
Columns: Topic | Sub-Topic | Summary of Key Points & Formulae | Key Takeaways!
Topic: Matrix Algebra Review
Sub-Topic: Vectors & Matrices; Matrix Operations; Transpose
• An $m \times n$ matrix → bold uppercase letters
...
• Two matrices are said to be conformable if they have the appropriate dimensions for a given operation
...
o Addition/Subtraction: same number of rows & columns
...
o Multiplication: no. of columns of the 1st matrix should equal no. of rows of the 2nd matrix
...
• Key properties & facts!
o $A + B = B + A$
o $(A + B) + C = A + (B + C)$
o $(AB)C = A(BC) = ABC$
o $A(B + C) = AB + AC$ and $(B + C)A = BA + CA$
o $AB$ not necessarily equal to $BA$
o $AB = 0$ possible even if $A \neq 0$ and $B \neq 0$
o $CD = CE$ possible even if $C \neq 0$ and $D \neq E$
• Transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $B$ such that $b_{ij} = a_{ji}$, $\forall\, i = 1, \ldots, n$ and $j = 1, \ldots, m$
• Key properties!
o $(A')' = A$
o $(A + B)' = A' + B'$
o $(AB)' = B'A'$
• Square Matrix: an $m \times n$ matrix where $m = n$
...
• Identity Matrix $I$: square matrix whose diagonal elements equal $1$ ($\forall\, i = 1, \ldots, n$) and whose off-diagonal elements ($i \neq j$) equal $0$
...
• Linear Dependence: A set of $m$-vectors $x_1, \ldots, x_n$ is linearly dependent if there exist numbers $\gamma_1, \ldots, \gamma_n$, not all zero, such that $\sum_i \gamma_i x_i = 0$
...
(can express one vector as a weighted sum of all other vectors)
Key Takeaways!
• Key properties of matrices: $A + B = B + A$; $(A + B) + C = A + (B + C)$; $(AB)C = A(BC) = ABC$; $A(B + C) = AB + AC$ and $(B + C)A = BA + CA$; $AB$ not necessarily equal to $BA$; $AB = 0$ possible even if $A \neq 0$ and $B \neq 0$; $CD = CE$ possible even if $C \neq 0$ and $D \neq E$
• Properties of Transposes: $(A')' = A$; $(A + B)' = A' + B'$; $(AB)' = B'A'$

Sub-Topic: Some Special Matrices; Rank; Inverse; Trace
• When considering linear dependencies of matrices, can either look at the rows/columns of vectors that make up the matrix, and see if they are linearly dependent, e.g.
$X = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \Rightarrow$ row vectors $x_1 = [1 \;\; 2]$ and $x_2 = [3 \;\; 4]$, or column vectors $\begin{pmatrix} 1 \\ 3 \end{pmatrix}$ and $\begin{pmatrix} 2 \\ 4 \end{pmatrix}$
...

Key Takeaways!
• Properties of Inverses:
o $(A^{-1})^{-1} = A$
o $(AB)^{-1} = B^{-1}A^{-1}$
o $(A')^{-1} = (A^{-1})'$
• Properties of Traces (where $A$, $B$ & $C$ are $n \times n$ matrices and $\kappa$ is a scalar):
o $tr(A + B) = tr(A) + tr(B)$
o $tr(\kappa A) = \kappa\, tr(A)$
o $tr(ABC) = tr(BCA) = tr(CAB)$
• Key results from matrix differentiation!
o $\dfrac{\partial a'b}{\partial b} = \dfrac{\partial b'a}{\partial b} = a$
• Rank (of a set of vectors): The maximum number of linearly independent vectors that can be chosen from the set
...
o For any matrix, rank of row vectors = rank of column vectors
...
o Singularity: An $m \times m$ matrix $A$ is singular if the rank of $A$ is less than $m$
...
• Key properties!
o $(A^{-1})^{-1} = A$
o $(AB)^{-1} = B^{-1}A^{-1}$
o $(A')^{-1} = (A^{-1})'$
• Inverse of a 2x2 matrix:
$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\Rightarrow\; A^{-1} = \dfrac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$
• Inverse of a diagonal matrix:
$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{pmatrix} \;\Rightarrow\; A^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{2} \end{pmatrix}$
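Not part of the original notes — a minimal NumPy sketch, using illustrative matrices, that numerically checks the transpose, inverse, and trace identities above:

```python
import numpy as np

# Illustrative matrices (not from the notes) to verify the identities above.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [2.0, 5.0]])
C = np.array([[2.0, 0.0], [1.0, 3.0]])

# Transpose property: (AB)' = B'A'
assert np.allclose((A @ B).T, B.T @ A.T)

# Inverse of a 2x2 matrix: A^{-1} = (1/(ad - bc)) * [[d, -b], [-c, a]]
a, b, c, d = A.ravel()
A_inv_formula = np.array([[d, -b], [-c, a]]) / (a * d - b * c)
assert np.allclose(A_inv_formula, np.linalg.inv(A))

# Inverse property: (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# Trace properties: tr(A + B) = tr(A) + tr(B), tr(ABC) = tr(BCA)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

print("All matrix identities verified numerically.")
```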
• Trace [tr()]: The sum of the elements on the principal diagonal of a square matrix
...
• Scalar w.r.t. column vector → $m \times n$ matrix
...
• Key results!
o $\dfrac{\partial Ab}{\partial b'} = A$ and $\dfrac{\partial b'A}{\partial b} = A$
o $\dfrac{\partial b'Ab}{\partial b} = (A + A')b$
o If $A$ is symmetric, $\dfrac{\partial b'Ab}{\partial b} = 2Ab$

Sub-Topic: Ordinary Least Squares with Matrices
• Convert a scalar relationship into a vector matrix relationship by stacking all the rows for different observations
...
• Multivariate true linear model: $y = X\beta + u$, where
$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}$, $X_i = [1 \;\; x_{2i} \;\; \ldots \;\; x_{ki}]$ (the $i$-th row of X), $\beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_k \end{pmatrix}$, $u' = [u_1 \;\; \ldots \;\; u_n]$
• Multivariate Linear Fitted Model: $\hat u \equiv y - X\hat\beta$
• Concept of Ordinary Least Squares: $\min \sum_i \hat u_i^2 = \min \hat u'\hat u$, given $\hat u'\hat u = \sum_i \hat u_i^2$
• $\hat\beta_{OLS} = (X'X)^{-1}X'y$ (know how to prove this too!)
• $X'X$ should be a positive definite matrix for $\hat\beta_{OLS}$ to work
...
Topic: Probability Distributions
Sub-Topic: Random Variables; Random Vectors

• A random vector x is a vector whose elements are themselves random variables
...
• Expected Value: $E(x) = \begin{pmatrix} E(x_1) \\ \vdots \\ E(x_n) \end{pmatrix}$
• Variance: A matrix containing the variances of each element of x along the principal diagonal, and the covariances between different elements of x off the diagonal
...
o X is a fixed, non-stochastic matrix with rank k
...
o u is a random vector with $E(u) = 0$ and $var(u) = E(uu') = \sigma^2 I$
o $var(u)$ is the error covariance matrix, which gives the covariance between any 2 elements in u
...
o Assume that all observations have zero covariance (no autocorrelation)
...
o Linearity: $\hat\beta = Cy$, where $C$ denotes a matrix
o Unbiasedness: $E(\hat\beta) = \beta = E(Cy)$
• Residuals: $\hat u \equiv y - X\hat\beta = (I - X(X'X)^{-1}X')u$
• Expected Sum of Squared Residuals: $E(\hat u'\hat u) = \sigma^2(n - k)$
$\Rightarrow s^2 = \dfrac{\hat u'\hat u}{n - k}$
• Since $s^2$ is an unbiased estimator, $E(s^2) = E\!\left(\dfrac{\hat u'\hat u}{n - k}\right) = \sigma^2$
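A minimal NumPy sketch of the OLS formulae above on simulated data (the data-generating values here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3                       # n observations, k parameters (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])          # illustrative values
u = rng.normal(scale=1.5, size=n)
y = X @ beta_true + u

# OLS estimate: beta_hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Residuals and unbiased variance estimator s^2 = u_hat'u_hat / (n - k)
u_hat = y - X @ beta_hat
s2 = (u_hat @ u_hat) / (n - k)

# Estimated variance matrix of beta_hat: s^2 (X'X)^{-1}
var_beta_hat = s2 * XtX_inv

print("beta_hat:", beta_hat)
print("s^2:", s2)
print("standard errors:", np.sqrt(np.diag(var_beta_hat)))
```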
• Normal Distribution, $N(\mu, \sigma^2)$
o $\mu$ is the mean, controls the centre point
...
Key Takeaways!
• Linearity: $\hat\beta = Cy$, where $C$ denotes a matrix
• Unbiasedness: $E(\hat\beta) = \beta = E(Cy)$
• Key relationships between distributions:
o If $z_i \sim N(0,1)$, $i = 1, \ldots, v$ and the $z_i$ are all independent, then $\sum_{i=1}^{v} z_i^2 \sim \chi^2(v)$
o If $z \sim N(0,1)$, $y \sim \chi^2(v)$ and $z$ & $y$ are independent, then $\dfrac{z}{\sqrt{y/v}} \sim t(v)$
• Key properties of joint PDFs:
o $f(x, y) \geq 0$, for all x & y
o $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, dy\, dx = 1$
o $\int_{a}^{b}\int_{c}^{d} f(x, y)\, dy\, dx = P(a \leq X \leq b,\; c \leq Y \leq d)$
• Student's t-Distribution, $t(v)$
o Has a single parameter
...
o As $v \to \infty$, $t(v) \to N(0, 1)$ [standard normal distribution]
• Key properties of multivariate normal distributions:
o An affine transformation (a type of geometric transformation that preserves collinearity) of a normal vector is a normal vector
...
• Chi-square Distribution, $\chi^2(v)$
o Defined only for positive values of x
...
o Right-skewed distribution
...
o Need one degree of freedom $v$
...
• Joint PDFs
...
r.v.'s move together
...
For 2 r.v.'s, their PDF gives the probability of X and Y simultaneously lying in given ranges
...
• Marginal PDFs: $f_X(x) = \int f(x, y)\, dy$ and $f_Y(y) = \int f(x, y)\, dx$
• Independence: X and Y are said to be independent iff $f(x, y) = f_X(x)\, f_Y(y)$
...
Not independent, in general
...
• If $x \sim N(\mu, \Sigma)$, then $Ax + b \sim N(A\mu + b,\; A\Sigma A')$
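A small simulation sketch (illustrative, not from the notes) checking the chi-square relationship and the affine-transformation result stated above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum of v squared independent N(0,1) variables ~ chi-square(v)
v, reps = 5, 200_000
z = rng.standard_normal((reps, v))
chi2_draws = (z ** 2).sum(axis=1)
print("mean (should be ~v):", chi2_draws.mean(), " variance (should be ~2v):", chi2_draws.var())

# Affine transformation of a normal vector: if x ~ N(mu, Sigma), then Ax + b ~ N(A mu + b, A Sigma A')
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0], [0.0, 3.0]])
b = np.array([0.5, 0.0])

x = rng.multivariate_normal(mu, Sigma, size=reps)
w = x @ A.T + b                       # each row is A x_i + b

print("sample mean      :", w.mean(axis=0))
print("theoretical mean :", A @ mu + b)
print("sample cov       :\n", np.cov(w, rowvar=False))
print("theoretical cov  :\n", A @ Sigma @ A.T)
```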
Sub-Topic: Distribution of OLS Estimates; Testing A Single Hypothesis
• If the covariance between 2 jointly normal random variables is 0, the random variables are independent, i.e., if the off-diagonal terms in $var(x)$ are 0, the elements of x are independent [useful for proving independence]
...
• $u \sim N(0, \sigma^2 I)$
• $u_i$ are independent r.v.'s; the variance matrix is a diagonal matrix [zero autocorrelation]
...
• If $u$ is multivariate normal, each of the $u_i$ will be univariate normal, where $\hat\beta$ is an affine transformation of $u$, and $x'Ax$ is a scalar
...
• The respecified statistic: replacing $\sigma^2$ with its estimator $s^2$,
$\dfrac{\hat\beta_j - \beta_j}{\sqrt{\sigma^2\left[(X'X)^{-1}\right]_{jj}}} \sim N(0,1) \;\Longrightarrow\; \dfrac{\hat\beta_j - \beta_j}{\sqrt{s^2\left[(X'X)^{-1}\right]_{jj}}} \sim t(n - k)$, which now follows a t-distribution and has $n - k$ degrees of freedom
...
1. ...
2. ...
3. ...
4. Decide on a significance level $\alpha$
...
5. If $|t| > t_{crit}$, reject $H_0$
...
• P-values: They capture the probabilities of observing OLS estimates that are greater in absolute value than the estimates actually obtained, under the assumption that the true parameter value is 0
...
Using a two-sided F-test!
• Here, you can jointly test a set of $q$ linear restrictions on the model parameters and express the restrictions as $R\beta = r$
...
• If $H_0: R\beta = r$ is true, $R\hat\beta - r \sim N\!\left(0,\; \sigma^2 R(X'X)^{-1}R'\right)$
• F-statistic (learn to derive this!): $F = \dfrac{(R\hat\beta - r)'\left[R(X'X)^{-1}R'\right]^{-1}(R\hat\beta - r)/q}{s^2} \sim F(q,\, n - k)$
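A NumPy/SciPy sketch of the single-hypothesis t-test and the joint F-test for $R\beta = r$, on simulated data with purely illustrative restrictions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k = 120, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
u_hat = y - X @ beta_hat
s2 = u_hat @ u_hat / (n - k)
se = np.sqrt(s2 * np.diag(XtX_inv))

# t-test of H0: beta_j = 0 for each coefficient
t_stats = beta_hat / se
p_values = 2 * (1 - stats.t.cdf(np.abs(t_stats), df=n - k))
print("t statistics:", t_stats, " p-values:", p_values)

# Joint F-test of q linear restrictions R beta = r
# (here: beta_1 = 1 and beta_2 = 0, chosen only for illustration)
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = np.array([1.0, 0.0])
q = R.shape[0]
diff = R @ beta_hat - r
F = (diff @ np.linalg.inv(R @ XtX_inv @ R.T) @ diff / q) / s2
p_F = 1 - stats.f.cdf(F, q, n - k)
print("F statistic:", F, " p-value:", p_F)
```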
Topic: Heteroskedasticity
Sub-Topic: OLS Under Heteroskedasticity; Generalised Least Squares ($\hat\beta_{GLS}$); Feasible Generalised Least Squares
• Heteroskedasticity: Occurs when $var(u)$ is diagonal, but has different terms along the main diagonal, i.e.
...
• Under heteroskedasticity, $E(\hat\beta_{OLS}) = \beta$ still holds (OLS remains unbiased)
...
• How do you choose the method?
o If we know the structure of the variance matrix, use feasible GLS
...
• Both provide unbiased estimates of $\beta$ and standard errors in large samples, but perform poorly in small samples
...
• By standardization, we can re-estimate a new variance matrix, and the transformed model would be homoskedastic
...
• OLS is a type of GLS estimator
...
• The special case of GLS under heteroskedasticity only is aka Weighted Least Squares ($\hat\beta_{WLS}$)
...
• Rather, we only need to know the entries of $\Omega$ up to a constant: $var(u) = \sigma^2\Omega$
• This is how GLS can be used to find the IV estimator, given that $var(Z'u) = \sigma^2(Z'Z)$
• Key question: Problem Set 3, Q1(d)
...
$\Omega = \begin{pmatrix} \sigma_A^2 I & 0 \\ 0 & \sigma_B^2 I \end{pmatrix}$
• How to perform feasible GLS?
1. ...
2. ...
$s_A^2 = \frac{1}{n_A - k}\sum_{i=1}^{n_A}\hat u_i^2$ and $s_B^2 = \frac{1}{n_B - k}\sum_{i=n_A+1}^{n}\hat u_i^2$ (the first observation from group B is $n_A + 1$!)
3. ...
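A minimal sketch of two-group feasible GLS along the lines of the steps above; the degrees-of-freedom correction ($n_g - k$) and the group layout are assumptions about what the notes intend:

```python
import numpy as np

rng = np.random.default_rng(3)
nA, nB, k = 80, 120, 2
n = nA + nB
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
# Group A has variance 1, group B has variance 9 (illustrative heteroskedasticity)
sigma = np.r_[np.ones(nA), 3.0 * np.ones(nB)]
y = X @ beta_true + sigma * rng.normal(size=n)

# Step 1: OLS to obtain residuals
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta_ols

# Step 2: estimate the group variances (the first observation of group B is nA + 1)
s2_A = (u_hat[:nA] @ u_hat[:nA]) / (nA - k)
s2_B = (u_hat[nA:] @ u_hat[nA:]) / (nB - k)

# Step 3: GLS with Omega_hat = diag(s2_A I, s2_B I)  (equivalently, weighted least squares)
omega_diag = np.r_[np.full(nA, s2_A), np.full(nB, s2_B)]
W = 1.0 / omega_diag                              # Omega^{-1} is diagonal
XtWX = X.T @ (W[:, None] * X)
beta_fgls = np.linalg.solve(XtWX, X.T @ (W * y))

print("OLS :", beta_ols)
print("FGLS:", beta_fgls)
print("estimated group variances:", s2_A, s2_B)
```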
Sub-Topic: White Standard Errors
$\hat\beta_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y$
• $\hat\beta_{FGLS}$ is not unbiased, but the bias disappears in large samples
...
• Given that $var(\hat\beta_{OLS}) = (X'X)^{-1}X'\Omega X(X'X)^{-1}$
...
• As the sample size tends to infinity, $X'\hat\Omega X$ tends to converge to $X'\Omega X$
...
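A sketch of the White (heteroskedasticity-robust) sandwich variance $(X'X)^{-1}X'\hat\Omega X(X'X)^{-1}$ with $\hat\Omega = \mathrm{diag}(\hat u_i^2)$; the HC0 variant shown here is an assumption, since the notes do not specify one:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 300, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.5])
# Error variance depends on the regressor -> heteroskedasticity
u = rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))
y = X @ beta_true + u

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
u_hat = y - X @ beta_hat

# White/HC0 sandwich: (X'X)^{-1} X' diag(u_hat^2) X (X'X)^{-1}
meat = X.T @ (u_hat[:, None] ** 2 * X)
var_white = XtX_inv @ meat @ XtX_inv

# Conventional (incorrect under heteroskedasticity) variance: s^2 (X'X)^{-1}
s2 = u_hat @ u_hat / (n - k)
var_conv = s2 * XtX_inv

print("White SEs       :", np.sqrt(np.diag(var_white)))
print("Conventional SEs:", np.sqrt(np.diag(var_conv)))
```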
Topic: Instrumental Variable Estimation
Sub-Topic: The Endogeneity Problem

• Even if X is random, OLS is still an unbiased estimator (as long as X is independent of u)
...
• If X is correlated with u (endogeneity), $E(\hat\beta_{OLS}) \neq \beta$: OLS is biased!
• What if you could increase the sample size? Even if there is bias in a finite sample, will this bias tend to 0 as sample size tends to infinity?
• Consistency: An estimator $\hat\theta$ is said to be consistent if it converges in probability to the true parameter value $\theta$, which requires that:
$\lim_{n \to \infty} P\!\left(|\hat\theta - \theta| \leq \varepsilon\right) = 1$, for all $\varepsilon > 0$
...
• What does convergence in probability require?
o The limit as $n \to \infty$ of the probability that the absolute difference between our estimator and the true value is less than some value $\varepsilon$ going to 1, for any positive $\varepsilon$
...
• An estimator is consistent if $plim(\hat\beta) = \beta$
...
• Using these rules (also need to calculate!): $plim\,\hat\beta_{OLS} = \beta + \left(plim\,\frac{X'X}{n}\right)^{-1}\left(plim\,\frac{X'u}{n}\right)$
• If endogeneity occurs, OLS is biased and $E(\hat\beta_{OLS}) \neq \beta$
...
• If $plim\,\frac{X'X}{n} = \Sigma_{XX}$ (finite and non-singular), X is well-behaved
...
• Key conditions for validity of instruments:
o Relevance
o Exogeneity
• For specification of IVs, $k$ is the number of variables and $l$ is the number of instruments
• If $plim\,\frac{X'u}{n} = 0$, $\hat\beta_{OLS}$ is consistent and will provide good estimates as long as we have a large sample
...
• If not (if equal to $\Sigma_{Xu} \neq 0$), then $plim(\hat\beta_{OLS}) \neq \beta$ and $\hat\beta_{OLS}$ is inconsistent even in large samples
...

Key Takeaways!
• The IV Estimates: $\hat\beta_{IV} = (Z'X)^{-1}Z'y$ (linearly dependent instruments must be ruled out!)
• $plim\,\hat\beta_{IV} = \beta$, hence $\hat\beta_{IV}$ is consistent!
• Key conditions for validity of instruments!
o Relevance: $plim\,\frac{Z'X}{n} = \Sigma_{ZX} \neq 0$: variables in Z are correlated with those in X
...
o Exogeneity: "Exogenous" variables are not correlated with u, and can be used as instruments for themselves
...
o Endogenous Variables: Variables correlated with u
...
• If $l > k$, over-identification
...
• If $l < k$, under-identification
...
• $var(Z'u) = \sigma^2 Z'Z$
• 2SLS Estimation: Need to perform OLS in 2 stages
...
1. Regress each endogenous variable $x_j \in X$ on Z and obtain predicted values
...
$\hat x_j = Z(Z'Z)^{-1}Z'x_j$ for $j = 1, \ldots, k$
$\hat x_j = x_j$ for all exogenous variables
...
2. Regress y on $\hat X$ and obtain the OLS slope estimate $\hat\beta_{2SLS} = (\hat X'\hat X)^{-1}\hat X'y$ [also need to prove that $\hat\beta_{2SLS} = \hat\beta_{IV}$]
• Need to use the GLS variance estimator $var(\hat\beta_{IV})$, since the estimated variance matrix of OLS is wrong
...
• Are instruments uncorrelated with u? Use the Sargan test
...
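An illustrative simulation (not from the notes) of the IV estimator and the equivalent 2SLS two-stage procedure in the just-identified case:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
z = rng.normal(size=n)                    # instrument: relevant and exogenous
v = rng.normal(size=n)
u = rng.normal(size=n) + 0.8 * v          # error correlated with x -> endogeneity
x = 0.7 * z + v                           # x correlated with the instrument
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + u

# OLS is inconsistent here
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# IV estimator (just identified): beta_IV = (Z'X)^{-1} Z'y
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

# 2SLS: stage 1 regresses X on Z to get X_hat, stage 2 regresses y on X_hat
X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta_2sls = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)

print("OLS  :", beta_ols)     # slope biased away from 2.0
print("IV   :", beta_iv)
print("2SLS :", beta_2sls)    # numerically equals the IV estimate here
```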
• Key Characteristics (applied example: instrumenting economic growth in a conflict regression):
o The IV must be correlated with economic growth, but not directly affect the probability of conflict
...
o Standard errors may change when using either approach, but IV estimation gives better standard errors
...
• Is the IV relevant?
o It should have a positive relationship with economic growth
...
o Switching to the IV from OLS regressions should not result in a loss of significance of the coefficient, and the coefficient should ideally be larger
...
o Step 2: Regress dependent variables on predicted values from Step 1
...
o But the coefficients shouldn't change a lot, and they should still be significant
...
• Use an F-test when regressing the dependent variable on the dummy variables
...
Topic: Maximum Likelihood Estimation

1. Write down the likelihood function: it takes y as given and tells the likelihood (probability) of obtaining some parameter value $\theta$
$L(\theta; y) = f(y; \theta)$
...
2. Calculate the log likelihood ($\ln L(\theta; y)$) by taking logs of the likelihood function
...
3. Set the derivative of the log likelihood function w.r.t. $\theta$ (the "score") equal to 0
...
4. Solve the score equations: these are the ML estimates, $\hat\theta_{ML}$
...
1. Write down the likelihood function
...
$L(\beta, \sigma^2; y) = f(u; \beta, \sigma^2)$
Since the individual $u_i$ are independent normal r.v.'s and the error terms have the same distribution,
$L(\beta, \sigma^2; y) = \prod_{i=1}^{n} f(u_i; \beta, \sigma^2)$
2. Take the log likelihood:
$\ln L(\beta, \sigma^2; y) = -\dfrac{n}{2}\ln(2\pi) - \dfrac{n}{2}\ln \sigma^2 - \dfrac{1}{2\sigma^2}\left(y'y - 2y'X\beta + \beta'X'X\beta\right)$

Key Takeaways!
• Key steps:
o Write down the likelihood function
...
o Set the derivative of the log likelihood function (the "score") to 0
...
3. Set the score equal to 0:
$\begin{pmatrix} \dfrac{\partial \ln L(\beta, \sigma^2; y|X)}{\partial \beta} \\[6pt] \dfrac{\partial \ln L(\beta, \sigma^2; y|X)}{\partial \sigma^2} \end{pmatrix} = \begin{pmatrix} -\dfrac{1}{\sigma^2}\left(-X'y + X'X\beta\right) \\[6pt] -\dfrac{n}{2\sigma^2} + \dfrac{(y - X\beta)'(y - X\beta)}{2\sigma^4} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$
4. Solve:
$\hat\beta_{ML} = (X'X)^{-1}X'y$
$\hat\sigma^2_{ML} = \dfrac{1}{n}\left(y - X\hat\beta\right)'\left(y - X\hat\beta\right)$
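A sketch checking the closed-form ML solution above against a numerical maximisation of the same log likelihood (SciPy minimiser applied to $-\ln L$; the data and starting values are purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, k = 150, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([1.0, -0.5]), 1.3
y = X @ beta_true + sigma_true * rng.normal(size=n)

def neg_loglik(params):
    """-ln L(beta, sigma^2; y) for the normal linear model."""
    beta, log_s2 = params[:k], params[k]          # optimise log(sigma^2) to keep it positive
    s2 = np.exp(log_s2)
    u = y - X @ beta
    return 0.5 * n * np.log(2 * np.pi) + 0.5 * n * np.log(s2) + (u @ u) / (2 * s2)

res = minimize(neg_loglik, x0=np.zeros(k + 1), method="BFGS")
beta_ml_num, s2_ml_num = res.x[:k], np.exp(res.x[k])

# Closed-form ML solution from the score equations above
beta_ml = np.linalg.solve(X.T @ X, X.T @ y)
s2_ml = ((y - X @ beta_ml) @ (y - X @ beta_ml)) / n

print("numerical  :", beta_ml_num, s2_ml_num)
print("closed form:", beta_ml, s2_ml)
```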
• $\hat\beta_{ML}$ is unbiased, but $\hat\sigma^2_{ML}$ is not
...
• Estimator variance disappears as sample size increases, i.e., $\lim_{n \to \infty} var(\hat\sigma^2_{ML}) = 0$
• $\hat\sigma^2_{ML}$ is consistent
...

Sub-Topic: Regression with Binary Dependent Variables
• Major drawbacks of using regular OLS estimates (the Linear Probability Model):
o OLS predictions of $y$ can be less than 0 or greater than 1, which are hard to interpret when the dependent values are binary variables
...
This problem may disappear in large samples
...
o $var(u_i)$ is a function of $x_i$, so the model exhibits heteroskedasticity
...
But can use White Standard Errors
...
Sub-Topic: Probit

• An ML alternative to LPM
...
• Define $y_i^*$, a latent variable (which is unobserved), such that $y_i^* = x_i\beta + u_i$
...
• Similarly, $P(y_i = 0) = 1 - \Phi\!\left(\frac{x_i\beta}{\sigma}\right)$
• These equations are non-linear in parameters, so OLS can't be used
...
• Order the data such that the 1st $m$ observations have $y_i = 0$ & the remaining $n - m$ observations have $y_i = 1$, i.e.
$L(\beta, \sigma; y|X) = \prod_{i=1}^{m}\left[1 - \Phi\!\left(\frac{x_i\beta}{\sigma}\right)\right]\prod_{i=m+1}^{n}\Phi\!\left(\frac{x_i\beta}{\sigma}\right)$
• Taking the log likelihood, which needs to be maximised numerically (since there is no closed-form expression for the standard normal cdf):
$\ln L(\beta, \sigma; y|X) = \sum_{i=1}^{n}\left[y_i \ln \Phi\!\left(\frac{x_i\beta}{\sigma}\right) + (1 - y_i)\ln\!\left(1 - \Phi\!\left(\frac{x_i\beta}{\sigma}\right)\right)\right]$
• Probit estimates $\hat\beta$ & $\hat\sigma$ obtained
...
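A minimal sketch of probit estimation by numerically maximising the log likelihood above, with the usual normalisation $\sigma = 1$ (an assumption on top of the notes' notation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 1.2])
y_star = X @ beta_true + rng.normal(size=n)   # latent variable
y = (y_star > 0).astype(float)                # observed binary outcome

def neg_loglik(beta):
    """-ln L(beta; y|X) = -sum[ y ln Phi(x b) + (1 - y) ln(1 - Phi(x b)) ]."""
    p = norm.cdf(X @ beta)
    p = np.clip(p, 1e-10, 1 - 1e-10)          # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
beta_probit = res.x
print("probit estimates:", beta_probit)        # should be close to beta_true
```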
Sub-Topic: Marginal Effects

• Helps to interpret $\hat\beta_j$, as a derivative indicating how much a one-unit increase in each X variable will increase the expected value of y by
...
• In probit, marginal effects: $\dfrac{\partial P(y = 1|x)}{\partial x_j} = \dfrac{\partial \Phi(x\beta)}{\partial x_j} = \phi(x\beta)\,\beta_j$
• Can be evaluated at any value of $x_i$; however, it is more common to use the mean values of each X variable, denoted $\bar{x}$, a row vector
...
Then the estimated marginal effect just uses the probit coefficients in place of the true model parameters
...
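A short sketch of the marginal effect at the means, $\phi(\bar{x}\hat\beta)\,\hat\beta_j$; the coefficients and means below are hypothetical placeholders (any fitted probit $\hat\beta$, e.g. from the probit sketch above, could be plugged in):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical probit coefficients and regressor means (x_bar includes the constant)
beta_hat = np.array([-0.3, 1.2, 0.5])       # placeholder fitted values
x_bar = np.array([1.0, 0.1, -0.2])          # placeholder sample means of the regressors

# Marginal effect of each regressor at the means: phi(x_bar beta_hat) * beta_hat_j
index = x_bar @ beta_hat
marginal_effects = norm.pdf(index) * beta_hat

print("phi(x_bar beta_hat):", norm.pdf(index))
print("marginal effects at means:", marginal_effects)
```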