Search for notes by fellow students, in your own course and all over the country.
Browse our notes for titles that look like what you need. You can preview any set of notes via a sample of its contents. Once you're happy these are the notes you're after, simply pop them into your shopping cart.
Title: Statistical simulation lecture notes
Description: There are chapters on: simulation from a known distribution, simulation techniques and methods (rejection sampling, importance sampling, etc.), random vectors and copulas, generating sample paths, Bayes, and the bootstrap. The notes are from Rutgers University in the USA.
Document Preview
Extracts from the notes are below; to see the PDF you'll receive, please use the links above.
Simulation Lecture Notes
Wanlin Juan
September 2019
1 Simulation From a Known Distribution
Assumptions
We know how to simulate from U(0,1)
...
x1, ..., xn ∼ U(0, 1), i.i.d.
...
rk, ...
The Monte Carlo average of the yi over M replications estimates E[y]; here M = 50000.
...
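The surviving fragments above describe a plain Monte Carlo average, so here is a minimal sketch of that idea. It assumes the goal is to estimate E[y] for some transformation y = h(x) of uniform draws; the particular h used here is illustrative only, not from the notes.

```python
# Monte Carlo averaging sketch: estimate E[y] with M = 50000 replications.
# h(x) = x**2 is an illustrative choice (E[x^2] = 1/3 for x ~ U(0,1)).
import numpy as np

rng = np.random.default_rng(0)

M = 50000                           # number of Monte Carlo replications
x = rng.uniform(0.0, 1.0, size=M)   # x_1, ..., x_M ~ U(0,1) i.i.d.
y = x**2                            # y_i = h(x_i)
print(y.mean())                     # Monte Carlo estimate of E[y], approx 0.3333
```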
Simulate from an exponential distribution x ∼ exp(λ)
Density function: f(x) = λ e^{−λx} 1{x>0}
Algorithm
1. ...
Compute x = −(1/λ) log(u)
Claim: x ∼ f(x)
Reason: ∀ t > 0,
P(x ≤ t) = P(−(1/λ) log(u) ≤ t)
= P(u ≥ e^{−λt})
= 1 − P(u < e^{−λt})
= 1 − e^{−λt}
= ∫_0^t λ e^{−λs} ds
= ∫_0^t f_exp(s) ds
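A minimal sketch of the transformation just proved: x = −(1/λ) log(u) with u ∼ U(0,1) has density λ e^{−λx}. The rate value is illustrative.

```python
# Inverse-transform sampler for the exponential distribution.
import numpy as np

rng = np.random.default_rng(1)

lam = 2.0                               # rate parameter (illustrative value)
u = rng.uniform(0.0, 1.0, size=100000)  # u ~ U(0,1)
x = -np.log(u) / lam                    # x = -(1/lam) log(u) ~ Exponential(lam)
print(x.mean())                         # should be close to 1/lam = 0.5
```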
B ...
Simulate u ∼ U(0, 1)
2. ...
g ...
Then x = −(1/λ) log(1 − t) = F⁻¹(t)
C ...
2. ...
Simulate u1, u2, ...
Compute x = u1 + ...
Y1, ..., Yn ∼ g(•),
√n (Ȳ − EY) / √(Var(Y)) → N(0, 1), as n → ∞
E(u) = 1/2
Var(u) = E(u²) − (E(u))² = ∫_0^1 u² du − (1/2)² = 1/12
Q: Is n = 12 large enough?
A: It is enough for u ∼ U(0, 1)
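A sketch of the CLT-based generator just discussed: a sum of n = 12 i.i.d. U(0,1) draws has mean 6 and variance 12 × (1/12) = 1, so subtracting 6 gives an approximately standard normal variable.

```python
# Approximate N(0,1) via the sum of 12 uniforms.
import numpy as np

rng = np.random.default_rng(2)

u = rng.uniform(0.0, 1.0, size=(100000, 12))  # 12 uniforms per draw
x = u.sum(axis=1) - 6.0                       # approximately N(0,1)
print(x.mean(), x.var())                      # approx 0 and 1
```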
...
1. Simulate u1 ∼ U(0, 1) and calculate θ = 2πu1 ∼ U(0, 2π)
2. Simulate u2 ∼ U(0, 1) and calculate r = √(−2 log u2)
3. Calculate x = r cos θ, y = r sin θ
Claim: x ∼ N(0, 1), y ∼ N(0, 1), and x is independent of y; that is,
(x, y)ᵀ ∼ N( (0, 0)ᵀ, [1, 0; 0, 1] )
Joint density function:
f(x, y) = φ(x)φ(y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x²+y²)/2}
Take the Box-Muller transformation:
r = √(x² + y²), cos θ = x / √(x² + y²)
Cartesian coordinates (x, y) ↔ polar coordinates (r, θ)
f(r, θ) = f(x, y) |J|, where J is the Jacobian matrix
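A minimal sketch of the Box-Muller algorithm above: θ = 2πu1 and r = √(−2 log u2) yield x = r cos θ and y = r sin θ, two independent standard normals.

```python
# Box-Muller transform: two uniforms -> two independent N(0,1) draws.
import numpy as np

rng = np.random.default_rng(3)

n = 100000
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)
theta = 2.0 * np.pi * u1            # angle ~ U(0, 2*pi)
r = np.sqrt(-2.0 * np.log(u2))      # radius: r^2 / 2 ~ Exponential(1)
x, y = r * np.cos(theta), r * np.sin(theta)
print(np.corrcoef(x, y)[0, 1])      # near 0: x and y are independent
```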
D1 ...
Call Algorithm for normal distributions to get x1, ...
Calculate y = x1² + ...
Simulate x1, ...
Calculate y = x1² + ...
Simulate from F-distribution
F-distribution: X = (Y1/d1) / (Y2/d2), where Y1 ∼ χ²_{d1}, Y2 ∼ χ²_{d2}, and Y1, Y2 are independent
...
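A sketch combining the two constructions above: chi-squared as a sum of squared standard normals, then F as a ratio of scaled chi-squareds. The degrees of freedom are illustrative.

```python
# Chi-squared via squared normals, then F = (Y1/d1) / (Y2/d2).
import numpy as np

rng = np.random.default_rng(4)

def chi2(d, size):
    """chi-squared with d degrees of freedom: x_1^2 + ... + x_d^2."""
    z = rng.standard_normal((size, d))
    return (z**2).sum(axis=1)

d1, d2, n = 3, 5, 100000
y1, y2 = chi2(d1, n), chi2(d2, n)    # independent chi-squared draws
x = (y1 / d1) / (y2 / d2)            # x ~ F(d1, d2)
print(x.mean())                      # approx d2/(d2-2) = 5/3 for d2 > 2
```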
Simulate from multivariate normal distribution
x = (x1, x2)ᵀ ∼ N( (µ1, µ2)ᵀ, [σ1², ρσ1σ2; ρσ1σ2, σ2²] )
More generally, x = (x1, x2, ..., xp)ᵀ ∼ N( (µ1, µ2, ..., µp)ᵀ, Σ_{p×p} )
Algorithm:
1. ...
Calculate x1 = µ1 + σ1 z1, x2 = µ2 + ρσ2 z1 + √(1 − ρ²) σ2 z2
Claim: x = (x1, x2)ᵀ ∼ N( (µ1, µ2)ᵀ, [σ1², ρσ1σ2; ρσ1σ2, σ2²] )
Why: linear combinations of normals are still normal.
Thus, we only need to check:
E(x1) = µ1 + σ1 E(z1) = µ1
E(x2) = µ2 + ρσ2 E(z1) + √(1 − ρ²) σ2 E(z2) = µ2
Var(x1) = σ1² Var(z1) = σ1²
Var(x2) = ρ²σ2² Var(z1) + (1 − ρ²)σ2² Var(z2) = σ2²
Cov(x1, x2) = Cov(σ1 z1, ρσ2 z1 + √(1 − ρ²) σ2 z2)
= Cov(σ1 z1, ρσ2 z1) + Cov(σ1 z1, √(1 − ρ²) σ2 z2)
= ρσ1σ2 Var(z1) + √(1 − ρ²) σ1 σ2 Cov(z1, z2)
= ρσ1σ2
Check:
A = [σ1, 0; ρσ2, √((1 − ρ²)σ2²)]
AAᵀ = [σ1, 0; ρσ2, √((1 − ρ²)σ2²)] [σ1, ρσ2; 0, √((1 − ρ²)σ2²)] = [σ1², ρσ1σ2; ρσ1σ2, σ2²] = Σ
More generally, suppose x = (x1, x2, ..., xp)ᵀ ∼ N(µ, Σ_{p×p}).
If we can find A_{p×p} such that AAᵀ = Σ_{p×p}, then set x = µ + A(z1, z2, ..., zp)ᵀ, where µ = (µ1, ..., µp)ᵀ.
Scalar version:
z ∼ N(0, 1)
x = c + az ∼ N(c, a²)
Call Algorithm to solve for AAᵀ = Σ_{p×p}
Algorithm:
1. Simulate z1, ..., zp ∼ N(0, 1), i.i.d.
2. Set x = µ + A(z1, ..., zp)ᵀ
Claim: x ∼ N(µ, Σ)
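A sketch of this general algorithm: factor Σ = AAᵀ (here via numpy's built-in Cholesky) and set x = µ + Az with z ∼ N(0, I). The mean and covariance values are illustrative.

```python
# Multivariate normal sampling via x = mu + A z with A A^T = Sigma.
import numpy as np

rng = np.random.default_rng(5)

mu = np.array([1.0, -2.0])                 # illustrative mean vector
Sigma = np.array([[2.0, 0.6],              # illustrative covariance matrix
                  [0.6, 1.0]])
A = np.linalg.cholesky(Sigma)              # lower-triangular A with A A^T = Sigma
z = rng.standard_normal((100000, 2))       # z ~ N(0, I), i.i.d. rows
x = mu + z @ A.T                           # x ~ N(mu, Sigma)
print(np.cov(x, rowvar=False))             # approx Sigma
```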
This leads to (see the sketch below):
for i = 1, 2, ...
For j = 1, 2, ..., p
Set vi = σij;
For k = 1, 2, ...
Simulate from double exponential distribution (Laplace Distribution)
x ∼ f_DE(x)
Density: f_DE(x) = (1/(2θ)) e^{−|x−µ|/θ}, for x ∈ (−∞, +∞)
CDF: F_DE(x) = ∫_{−∞}^{x} f_DE(t) dt = 1/2 + (1/2) sign(x − µ) · {1 − e^{−|x−µ|/θ}}
F_DE⁻¹(t) = µ − θ · sign(t − 1/2) · log{1 − 2|t − 1/2|}, for t ∈ (0, 1)
Algorithm 1: use the algorithm in Part B
...
Simulate u1, u2 ∼ U(0, 1)
2. ...
If u2 < 0...
Return x
Claim: x ∼ fDE (x; µ, θ)
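A sketch of the Part B inverse-transform route using the F_DE⁻¹ formula above; the location and scale values are illustrative.

```python
# Laplace (double exponential) sampling via the inverse CDF.
import numpy as np

rng = np.random.default_rng(6)

mu, theta = 0.0, 1.5                    # illustrative location and scale
t = rng.uniform(size=100000)            # t ~ U(0,1)
x = mu - theta * np.sign(t - 0.5) * np.log(1.0 - 2.0 * np.abs(t - 0.5))
print(x.mean(), x.var())                # approx mu and 2*theta^2 = 4.5
```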
G1
...
Simulate u ∼ U (0, 1)
2
...
Simulate from Binomial Distribution
Algorithm
1. Simulate u1, ..., un ∼ U(0, 1), i.i.d.
2. Compute x = 1{u1≤p} + ... + 1{un≤p}
Claim: x ∼ Binomial(n, p)
Why: x = Σ_{i=1}^{n} {independent Bernoulli(p) indicators}
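A direct sketch of this algorithm: sum n independent Bernoulli(p) indicators. The n and p values are illustrative.

```python
# Binomial(n, p) as a sum of uniform-based Bernoulli indicators.
import numpy as np

rng = np.random.default_rng(7)

n, p = 20, 0.3
u = rng.uniform(size=(100000, n))     # u_1, ..., u_n ~ U(0,1) per draw
x = (u <= p).sum(axis=1)              # x = 1{u_1<=p} + ... + 1{u_n<=p}
print(x.mean())                       # approx n*p = 6.0
```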
G3 ...
Multinomial(1, (p1, p2, ...))
Algorithm
1. ...
Set (x1, x2, x3) = (1{0<u≤p1}, 1{p1<u≤p1+p2}, 1{p1+p2<u≤1})
Claim: x ∼ Multinomial(1, (p1, p2, ...))
Simulate from Multinomial Distribution
Algorithm
1. Simulate u1, ..., un ∼ U(0, 1)
2. ... (x sums the n independent Multinomial(1, (p1, p2, ...)) indicator vectors)
H ...
P(x = k) = e^{−λ} λ^k / k!
Connections between Poisson(λ) and exp(λ):
Poisson x is the number of events in [0, 1] when the times between consecutive events are i.i.d. exponential(λ)
..., Tx, Tx+1 ∼ exp(λ) ⇒ x ∼ Poisson(λ)
Algorithm
1. ...
While (s ≤ 1)
Simulate u ∼ U(0, 1)
...
Return x = k − 1
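A sketch of this loop: accumulate i.i.d. Exponential(λ) interarrival times until they pass 1; the number of complete arrivals in [0, 1] is Poisson(λ). The rate is illustrative.

```python
# Poisson(lam) via exponential interarrival times.
import numpy as np

rng = np.random.default_rng(8)

def poisson_via_exp(lam):
    s, k = 0.0, 0
    while s <= 1.0:                        # While(s <= 1)
        u = rng.uniform()                  # Simulate u ~ U(0,1)
        s += -np.log(u) / lam              # add an Exponential(lam) gap
        k += 1
    return k - 1                           # Return x = k - 1

draws = [poisson_via_exp(3.0) for _ in range(20000)]
print(np.mean(draws))                      # approx lam = 3.0
```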
Alternatively, recall u ∼ U(0, 1) ⇒ X = F⁻¹(u) ∼ F(•)
Q: In the discrete case, how do we define F⁻¹(u)?
A: x = F⁻¹(u) = the smallest integer such that F(x) ≥ u
Algorithm (numerical)
1. Generate u ∼ U(0, 1)
2. ...
Return x
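A sketch of the numerical search this algorithm describes: scan the CDF until it first reaches u, then return that support point. The pmf used is illustrative.

```python
# Discrete inverse-CDF sampling: smallest x with F(x) >= u.
import numpy as np

rng = np.random.default_rng(9)

def discrete_inverse_cdf(pmf, u):
    """pmf lists probabilities over {0, 1, 2, ...}."""
    F = 0.0
    for x, px in enumerate(pmf):
        F += px
        if F >= u:
            return x
    return len(pmf) - 1                  # guard against rounding error

pmf = [0.2, 0.5, 0.3]                    # illustrative distribution on {0, 1, 2}
draws = [discrete_inverse_cdf(pmf, rng.uniform()) for _ in range(20000)]
print(np.bincount(draws) / 20000)        # approx [0.2, 0.5, 0.3]
```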
2 Simulation Techniques and Methods
A ...
Assumptions
1. ...
2. ...
Remark: C has to be more than 1, since ∫ f(x) dx = ∫ g(x) dx = 1
...
Algorithm
1. ...
If u ≤ f(z) / (C g(z)), set x = z (accept z), and then go to 3;
2. ...
3. ...
Claim: x ∼ f(x)
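A hedged sketch of this accept-reject loop. The target f (a Beta(2,2) density) and the proposal g = U(0,1) are illustrative choices, with C = 1.5 bounding f/g.

```python
# Rejection sampling: accept z ~ g with probability f(z) / (C g(z)).
import numpy as np

rng = np.random.default_rng(10)

def f(x):
    return 6.0 * x * (1.0 - x)            # Beta(2,2) density (illustrative)

C = 1.5                                   # C >= sup f(x)/g(x) with g = U(0,1)

def rejection_sample():
    while True:
        z = rng.uniform()                 # z ~ g
        u = rng.uniform()                 # u ~ U(0,1)
        if u <= f(z) / (C * 1.0):         # accept with prob f(z)/(C g(z))
            return z

draws = np.array([rejection_sample() for _ in range(20000)])
print(draws.mean())                       # approx 0.5 for Beta(2,2)
```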
Reason (intuitively)
Reason
a. ...
Accept xk+1 = z with the following probability:
α(xk, z) = min{1, [f(z)/f(xk)] / [g(z|xk)/g(xk|z)]}
Otherwise set xk+1 = xk
3. ...
Stop!
Claim: When N is large, x* ∼ f(x)
Reason: verify (*)
Special cases (Algorithms)
1. ...
Independent Chain M-H Algorithm: g(z|x) = g(z)
⇒ In M-H, change α(xk, z) = min{1, [f(z) g(xk)] / [f(xk) g(z)]}
(MCMC always depends on the previous one, even if x and z are independent
...
Initial value: x0 = 0
2. ...
Simulate z ∼ g(z) = f_t3(z)
...
Simulate u ∼ Bernoulli(α(xk, z)), where
α(xk, z) = min{1, [f_t3(z)(1 − sin(20z))1{|z|≤3} · f_t3(xk)] / [f_t3(xk)(1 − sin(20xk)) · f_t3(z)]} = min{1, [(1 − sin(20z))1{|z|≤3}] / [1 − sin(20xk)]}
Set xk+1 = z if u = 1, and xk+1 = xk if u = 0
3. ...
Pick {x5000, x5050, ...}
Claim: {x5000, x5050, ...} ∼ f(x) (approximately)
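A sketch of this independence-chain M-H example: the target is f(x) ∝ f_t3(x)(1 − sin(20x))1{|x|≤3} with proposal g = f_t3, so the t3 densities cancel in α exactly as shown above. The chain length and thinning follow the notes; the seed is arbitrary.

```python
# Independence-chain Metropolis-Hastings for the t3 / sin(20x) target.
import numpy as np

rng = np.random.default_rng(11)

def weight(x):
    # target / proposal up to a constant: (1 - sin(20x)) * 1{|x| <= 3}
    return (1.0 - np.sin(20.0 * x)) * (abs(x) <= 3.0)

N = 20000
x = np.zeros(N + 1)                        # initial value x0 = 0
for k in range(N):
    z = rng.standard_t(3)                  # z ~ g = f_t3
    alpha = min(1.0, weight(z) / weight(x[k]))
    x[k + 1] = z if rng.uniform() <= alpha else x[k]

sample = x[5000::50]                       # thinned draws {x5000, x5050, ...}
print(sample.mean())
```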
Pearson Correlation Coefficient
(Continuous x and y ...)
Population Version
ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y)) = [E(XY) − E(X)E(Y)] / √([E(X²) − (EX)²][E(Y²) − (EY)²])
Sample Version
ρ̂ = [(1/n) Σ_{i=1}^{n} Xi Yi − X̄ Ȳ] / (s1 s2),
where s1² = (1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)², s2² = (1/(n−1)) Σ_{i=1}^{n} (Yi − Ȳ)²
1
...
2
...
3
...
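A direct computation of the sample version above, hedged in one respect: s1 and s2 are read as the usual (n−1)-denominator sample standard deviations. The test data are illustrative.

```python
# Sample Pearson correlation per the formula above.
import numpy as np

def pearson_hat(x, y):
    n = len(x)
    s1 = np.sqrt(((x - x.mean())**2).sum() / (n - 1))
    s2 = np.sqrt(((y - y.mean())**2).sum() / (n - 1))
    return (np.mean(x * y) - x.mean() * y.mean()) / (s1 * s2)

rng = np.random.default_rng(12)
x = rng.standard_normal(5000)
y = 0.8 * x + 0.6 * rng.standard_normal(5000)   # true rho = 0.8
print(pearson_hat(x, y))                        # approx 0.8
```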
Spearman’s Correlation
(Good for ordinal data and skewed data ...)
Then
ρs(X, Y) = ρ(F1(X), F2(Y)) = [E[F1(X)F2(Y)] − E[F1(X)] E[F2(Y)]] / √((1/12) × (1/12))
= 12 E[F1(X)F2(Y)] − 3 = 12 ∫∫ F1(x) F2(y) f(x, y) dx dy − 3
Sample Version: sort the samples within the X's and the Y's separately ...
X1, ..., Xn ⇒ r1^(X), r2^(X), ...
Y1, ..., Yn ⇒ r1^(Y), r2^(Y), ...
ρˆs → ρs when n → ∞ (consistent estimator)
2
...
X and Y are not linearly related, but F1 (X) and F2 (Y ) are linearly related
...
Kendall’s τ
(Also related to ranks ...)
Population Version
Suppose (X, Y) ∼ F(X, Y) and (X′, Y′) ∼ F(X, Y) independently
τk = P{(X − X′)(Y − Y′) > 0} − P{(X − X′)(Y − Y′) < 0}
Concordance: (X − X′)(Y − Y′) > 0
Discordance: (X − X′)(Y − Y′) < 0
X and Y positively correlated ⇒ 0 < τk ≤ 1
X and Y negatively correlated ⇒ −1 ≤ τk < 0
X and Y independent ⇒ τk = 0
Sample Version
Suppose the dataset is (x1, y1), ...
For each pair, we say (xi, yi), (xj, yj) ...
τ̂k = (# of positive pairs)/C(n, 2) − (# of negative pairs)/C(n, 2)
= Σ_{i=1}^{n} Σ_{j=1}^{n} [1{(xi−xj)(yi−yj)>0} − 1{(xi−xj)(yi−yj)<0}] / (2 C(n, 2)), where C(n, 2) = n choose 2
Remark
1. ... When n is large, the computation can be a little slow (see the sketch below)
...
Further/Alternative expression:
τk = 4 ∫∫ F(X, Y) f(X, Y) dX dY − 1
4. ...
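The O(n²) double loop behind τ̂k above, which is exactly what gets slow for large n (library implementations use faster O(n log n) variants). The test data are illustrative.

```python
# Sample Kendall's tau via the double-sum formula above.
import numpy as np

def kendall_tau_hat(x, y):
    n = len(x)
    s = 0
    for i in range(n):
        d = (x[i] - x) * (y[i] - y)          # (x_i - x_j)(y_i - y_j) for all j
        s += (d > 0).sum() - (d < 0).sum()   # concordant minus discordant
    n_pairs = n * (n - 1) / 2                # C(n, 2)
    return s / (2 * n_pairs)                 # double sum counts each pair twice

rng = np.random.default_rng(13)
x = rng.standard_normal(2000)
y = x + rng.standard_normal(2000)
print(kendall_tau_hat(x, y))
```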
D ...
Independence ⇒ ρ = 0, ρs = 0, τk = 0
Let's start with a random vector of length 2 (the general case has length > 2) ...
It is a way to describe the dependence between X and Y ...
F1(x) is the marginal distribution of X, F2(y) is the marginal distribution of Y ...
Vice versa, since f1(x) = ∫ f(x, y) dy, f2(y) = ∫ f(x, y) dx ...
C(t, s) = ...
X, Y are independent ⇒ C(t, s) = ts ⇒ τk = 4 ∫∫ ts (t ds + s dt) − 1 = 0
More generally, for a random vector (x1, x2, ..., xn) ...
For 0 ≤ t1 ≤ 1, ..., 0 ≤ tn ≤ 1, C(t1, ..., tn) = F(F1⁻¹(t1), ..., Fn⁻¹(tn)), where F(x1, ...
Similar as before, F(x1, ..., xn) = C(F1(x1), ..., Fn(xn))
Again, the joint distribution F(x1, ..., xn) determines the copula C(t1, ..., tn) ...
E ...
(x1, ..., xn) such that xi ∼ Fi(x) for any distribution Fi(•), ∀ i = 1, 2, ...
For the multivariate normal distribution: see (1 ... 2 ... D) ...
Marginal distributions Fi(x), i = 1, 2, ...
2 ...
Algorithm
Call Cholesky Decomposition algorithm to get A, where AAᵀ = Σ
...
Simulate z1, ...
Set z* = (z1*, ..., zn*)ᵀ
3. For i = 1, ..., n, set ui = Φ(zi*/σi) ∼ U(0, 1), xi = Fi⁻¹(ui) ∼ Fi(x)
...
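A sketch of the Gaussian-copula algorithm above, using scipy.stats for Φ and the marginal quantiles. The Exponential(1) and U(0,1) marginals and the latent correlation are illustrative choices.

```python
# Gaussian copula: correlated normals -> uniforms -> arbitrary marginals.
import numpy as np
from scipy.stats import norm, expon, uniform   # Phi and the F_i^{-1} quantiles

rng = np.random.default_rng(14)

Sigma = np.array([[1.0, 0.7],
                  [0.7, 1.0]])                 # correlation of the latent normals
A = np.linalg.cholesky(Sigma)                  # A A^T = Sigma
z = rng.standard_normal((100000, 2)) @ A.T     # z* ~ N(0, Sigma)
u = norm.cdf(z)                                # u_i = Phi(z_i*/sigma_i); sigma_i = 1 here
x1 = expon.ppf(u[:, 0])                        # x1 = F1^{-1}(u1) ~ Exponential(1)
x2 = uniform.ppf(u[:, 1])                      # x2 = F2^{-1}(u2) ~ U(0,1)
print(np.corrcoef(x1, x2)[0, 1])               # dependence induced by the copula
```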
Data: an n × p matrix with entries xij (rows x11, x12, ..., x1p through xn1, ..., xnp)
..., x̃10 ...
Covariance matrix: Σ = Cov(x*), where xi* = Φ⁻¹(F(xi))
Then call the Gaussian copula algorithm to generate x̃1, ...
F ...
(x1, ..., xn) such that the marginal distribution xi ∼ Fi(x) ...
Algorithm
Call Cholesky Decomposition al...
... r(t) from taking negative values ...
For i = 1, ..., n, set r(ti) = r(ti−1) + a(b − r(ti−1))(ti − ti−1) + σ √(r(ti−1)) √(ti − ti−1) Zi, Zi ∼ N(0, 1)
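An Euler sketch of the mean-reverting square-root recursion above. The parameter values are illustrative, and the discretized path is clipped at 0 as a common safeguard (the √r term prevents negativity only in the exact continuous model).

```python
# Euler discretization of the square-root (mean-reverting) short-rate recursion.
import numpy as np

rng = np.random.default_rng(15)

a, b, sigma = 0.5, 0.04, 0.1          # illustrative speed, level, volatility
t = np.linspace(0.0, 5.0, 501)        # grid 0 = t0 < t1 < ... < tn
r = np.empty_like(t)
r[0] = 0.03                           # illustrative starting rate
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    z = rng.standard_normal()         # Z_i ~ N(0,1)
    r[i] = r[i - 1] + a * (b - r[i - 1]) * dt \
           + sigma * np.sqrt(max(r[i - 1], 0.0)) * np.sqrt(dt) * z
    r[i] = max(r[i], 0.0)             # clip: the Euler step can dip below 0
print(r[-1])
```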
G ...
N(t) is a counting process, and Y1, ...
In particular, suppose the random arrival times of the jumps are 0 < τ1 < ...
We often model (not always):
• τj − τj−1 ∼ exp(λ) ⇔ N(t) ∼ Poisson(λt)
• Yj ∼ lognormal, i.i.d.
Ito's representation of (*) (derivation omitted):
S(t) = S(0) e^{(µ−σ²/2)t + σW(t)} ∏_{j=1}^{N(t)} Yj
Algorithm to simulate S(t) for 0 = t0 < t1 < ...
For i = 1, ...
Y_{N(ti−1)+1}, ..., Y_{N(ti)} ∼ lognormal
Alternatively, we can simulate the jumps exactly ...
Within each (τk, τk+1), we can just use the regular way to simulate from GBM(µ, σ²) on the grid
...
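A sketch of the grid algorithm above: on each step, multiply a GBM factor by the product of the lognormal jumps whose Poisson arrivals fall in that step. All parameter values are illustrative.

```python
# Jump-diffusion path on a grid: GBM increment times lognormal jump factors.
import numpy as np

rng = np.random.default_rng(16)

mu, sigma, lam = 0.1, 0.2, 2.0        # illustrative drift, volatility, jump rate
m_y, s_y = 0.0, 0.1                   # illustrative lognormal jump parameters
t = np.linspace(0.0, 1.0, 251)
S = np.empty_like(t)
S[0] = 100.0
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    w = rng.standard_normal() * np.sqrt(dt)          # Brownian increment
    d = rng.poisson(lam * dt)                        # N(t_i) - N(t_{i-1}) jumps
    jumps = np.exp(rng.normal(m_y, s_y, size=d)).prod() if d else 1.0
    S[i] = S[i - 1] * np.exp((mu - sigma**2 / 2) * dt + sigma * w) * jumps
print(S[-1])
```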
The estimation uncertainty is quantified by "confidence", which is related to "repeated experiments" ...
To account for the uncertainty, a Bayesian approach treats the parameter as random ...
Bayesian Approach Steps
1. ...
2. ...
3. ...
The posterior distribution could be very complex, but we can simulate from the distribution ...
B ...
π(σ²) ∝ 1/σ²
Improper prior: approximated by a proper prior, e.g. ...
Normal mean µ: π(µ) ∝ c
Approximation: π(µ) ∼ N(0, 1000000)
But a uniform prior depends on the parameterization ...
Now if my interest is ψ = θ², the uniform prior ψ ∼ U(0, 1) is not equivalent to θ ∼ U(0, 1)