Title: Statistical simulation lecture notes
Description: Chapters cover simulation from a known distribution; simulation techniques and methods (rejection sampling, importance sampling, etc.); random vectors and copulas; generating sample paths; Bayes; and the bootstrap. From Rutgers University, USA.


Simulation Lecture Notes
Wanlin Juan
September 2019

1 Simulation From a Known Distribution

Assumptions

We know how to simulate from U(0,1)
...
, xn ∼ U(0, 1), i.i.d.
...
rk ,
...

(1/M) Σ_{i=1}^M yi ≈ E[y]
...
M = 50000
...
Simulate from an exponential distribution x ∼ exp(λ)
Density function: f(x) = λe^{−λx} · 1{x>0}

Algorithm
1
...
Compute x = −(1/λ) log(u)
Claim: x ∼ f(x)

Reason

∀t > 0,
P(x ≤ t) = P(−(1/λ) log(u) ≤ t)
= P(u ≥ e^{−λt})
= 1 − P(u < e^{−λt})
= 1 − e^{−λt}
= ∫_0^t λe^{−λs} ds
= ∫_0^t f_exp(s) ds

B
...
Simulate u ∼ U (0, 1)
2
...
g
...
Then x = −(1/λ) log(1 − t) = F^{−1}(t)

C
...
2
...
Simulate u1 , u2 ,
...
Compute x = u1 +
...
Yn ∼ g(•),

√n (Ȳ − EY) / √(Var(Y)) → N(0, 1), as n → ∞

E(u) = 1/2
Var(u) = E(u²) − (E(u))² = ∫_0^1 u² du − (1/2)² = 1/12

Q: Is n = 12 large enough?
A: It is for u ∼ U(0, 1): the sum of 12 uniforms has variance exactly 12 · (1/12) = 1, and its distribution is already very close to normal.
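The sum-of-12-uniforms recipe above can be sketched in Python (my own illustration, not code from the notes):

```python
import random

def approx_std_normal(rng):
    # Sum of 12 i.i.d. U(0,1) draws has mean 6 and variance 12 * (1/12) = 1,
    # so subtracting 6 gives an approximately N(0,1) variable by the CLT.
    return sum(rng.random() for _ in range(12)) - 6.0

rng = random.Random(0)
samples = [approx_std_normal(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The sample mean and variance should come out close to 0 and 1; the tails, however, are truncated at ±6, which is why this is only an approximation.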
...
Simulate u1 ∼ U (0, 1) and calculate θ = 2πu1 ∼ U (0, 2π)

2
...
Calculate x = r cos θ, y = r sin θ
Claim: x ∼ N(0, 1), y ∼ N(0, 1), x is independent of y, i.e.
(x, y)^T ∼ N( (0, 0)^T, I_2 )

Joint density function:
f(x, y) = φ(x)φ(y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x²+y²)/2}

Take the Box-Muller transformation,
r = √(x² + y²), cos θ = x / √(x² + y²)

Cartesian coordinate (x, y) ↔ Polar coordinate (r, θ)
f(r, θ) = f(x, y) |J|
(Jacobian Matrix)
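A minimal Python sketch of the Box-Muller algorithm above (function names are mine):

```python
import math
import random

def box_muller(rng):
    # theta = 2*pi*u1 ~ U(0, 2*pi); r = sqrt(-2 log u2), so r^2/2 ~ exp(1).
    theta = 2.0 * math.pi * rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - rng.random()))
    # x = r cos(theta), y = r sin(theta) are independent N(0,1) draws.
    return r * math.cos(theta), r * math.sin(theta)

rng = random.Random(1)
pairs = [box_muller(rng) for _ in range(50_000)]
xs = [p[0] for p in pairs]
mean_x = sum(xs) / len(xs)
var_x = sum((x - mean_x) ** 2 for x in xs) / len(xs)
```

Each call consumes two uniforms and returns two independent standard normals.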

D1
...
Call Algorithm for normal distributions to get x1,
...
Calculate y = x1² +
...
Simulate x1,
...
Calculate y = x1² +
...
Simulate from F-distribution
F-distribution: X = (Y1/d1) / (Y2/d2), where Y1 ∼ χ²_{d1}, Y2 ∼ χ²_{d2}, Y1 and Y2 are independent
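Assuming the elided steps build each χ² draw as a sum of squared standard normals, a hedged sketch of the F-distribution recipe:

```python
import random

def chi2(df, rng):
    # chi^2_df as a sum of df squared independent N(0,1) draws.
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

def f_draw(d1, d2, rng):
    # X = (Y1/d1) / (Y2/d2) with independent Y1 ~ chi^2_d1, Y2 ~ chi^2_d2.
    return (chi2(d1, rng) / d1) / (chi2(d2, rng) / d2)

rng = random.Random(2)
samples = [f_draw(5, 20, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)  # E[F(d1, d2)] = d2/(d2 - 2) = 20/18 here
```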
...
Simulate from multivariate normal distribution

x = (x1, x2)^T ∼ N( (μ1, μ2)^T, [[σ1², ρσ1σ2], [ρσ1σ2, σ2²]] )

More generally, x = (x1, x2, ..., xp)^T ∼ N( (μ1, μ2, ..., μp)^T, Σ_{p×p} )
Algorithm:
1
...
Calculate x1 = μ1 + σ1 z1, x2 = μ2 + ρσ2 z1 + √((1 − ρ²)σ2²) z2

Claim: x = (x1, x2)^T ∼ N( (μ1, μ2)^T, [[σ1², ρσ1σ2], [ρσ1σ2, σ2²]] )
Why: Linear combinations of normals ⇒ still normal
Thus, we only need to check
E(x1) = μ1 + σ1 E(z1) = μ1
E(x2) = μ2 + ρσ2 E(z1) + √((1 − ρ²)σ2²) E(z2) = μ2
Var(x1) = σ1² Var(z1) = σ1²
Var(x2) = ρ²σ2² Var(z1) + (1 − ρ²)σ2² Var(z2) = σ2²
Cov(x1, x2) = Cov(σ1 z1, ρσ2 z1 + √((1 − ρ²)σ2²) z2)
= Cov(σ1 z1, ρσ2 z1) + Cov(σ1 z1, √((1 − ρ²)σ2²) z2)
= ρσ1σ2 Var(z1) + σ1 √((1 − ρ²)σ2²) Cov(z1, z2)
= ρσ1σ2

Check
A A^T = [[σ1, 0], [ρσ2, √((1 − ρ²)σ2²)]] · [[σ1, ρσ2], [0, √((1 − ρ²)σ2²)]]
= [[σ1², ρσ1σ2], [ρσ1σ2, σ2²]] = Σ
More generally, suppose x = (x1, x2, ..., xp)^T ∼ N( (μ1, ..., μp)^T, Σ_{p×p} ).
Then, if we can find A such that A A^T = Σ_{p×p}, we can set
x = (μ1, ..., μp)^T + A (z1, ..., zp)^T.

Scalar version:
z ∼ N(0, 1)
x = c + az ∼ N(c, a²)

Call Algorithm to solve for A A^T = Σ_{p×p}
Algorithm:
1
...
, zp ∼ N(0, 1), i.i.d.
2
...
, zp)^T
Claim: x ∼ N(μ, Σ)

This leads to:
for i = 1, 2,
...

For j = 1, 2,
...
,p
Set vi = σij ;
For k = 1, 2,
...
Simulate from double exponential distribution (Laplace Distribution)
x ∼ f_DE(x)
Density: f_DE(x) = (1/(2θ)) e^{−|x−μ|/θ}, for x ∈ (−∞, +∞)
CDF: F_DE(x) = ∫_{−∞}^x f_DE(t) dt = 1/2 + (1/2) sign(x − μ) · {1 − e^{−|x−μ|/θ}}
F_DE^{−1}(t) = μ − θ · sign(t − 1/2) · log{1 − 2|t − 1/2|}, for t ∈ (0, 1)

Algorithm1: use algorithm in Part B
...
Simulate u1 , u2 ∼ U (0, 1)
2
...
If u2 < 0
...
Return x
Claim: x ∼ fDE (x; µ, θ)
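The inverse CDF F_DE^{−1} above can be applied directly to a uniform draw; a sketch with μ = 1, θ = 2 chosen purely for illustration:

```python
import math
import random

def laplace_inv_cdf(t, mu, theta):
    # F_DE^{-1}(t) = mu - theta * sign(t - 1/2) * log(1 - 2|t - 1/2|)
    sgn = 1.0 if t >= 0.5 else -1.0
    return mu - theta * sgn * math.log(1.0 - 2.0 * abs(t - 0.5))

rng = random.Random(4)
samples = [laplace_inv_cdf(rng.random(), 1.0, 2.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)                           # Laplace mean = mu
var = sum((s - mean) ** 2 for s in samples) / len(samples)   # 2 * theta^2 = 8
```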

G1
...
Simulate u ∼ U (0, 1)
2
...
Simulate from Binomial Distribution
Algorithm
1
...
un ∼ U(0, 1), i.i.d.
2
...
+ 1{un ≤ p}
Claim: x ∼ Binomial(n, p)
Why: x = Σ_{i=1}^n {Independent Bernoulli}
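The sum-of-indicators construction above, as a short sketch (my own code):

```python
import random

def binomial(n, p, rng):
    # Sum of n independent Bernoulli indicators 1{u_i <= p}.
    return sum(1 for _ in range(n) if rng.random() <= p)

rng = random.Random(5)
samples = [binomial(10, 0.3, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)  # should be near n*p = 3
```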

G3
...
))
Multinomial (1, (p1 , p2 ,
...


Algorithm
1
...
Set (x1, x2, x3) = (1{0<u≤p1}, 1{p1<u≤p1+p2}, 1{p1+p2<u≤1})
Claim: x ∼ Multinomial(1, (p1, p2, p3))
...
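A sketch of the Multinomial(1, ·) step (partitioning (0, 1] by the cell probabilities; the three-cell example is my own):

```python
import random

def multinomial1(probs, rng):
    # Partition (0, 1] into intervals of lengths p1, ..., pk; the cell
    # whose interval contains u gets the single count.
    u = rng.random()
    x = [0] * len(probs)
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if u <= cum:
            x[i] = 1
            return x
    x[-1] = 1  # guard against floating-point round-off
    return x

rng = random.Random(12)
counts = [0, 0, 0]
for _ in range(30_000):
    counts = [c + d for c, d in zip(counts, multinomial1([0.2, 0.3, 0.5], rng))]
freq = [c / 30_000 for c in counts]  # should be near (0.2, 0.3, 0.5)
```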
Simulate from Multinomial Distribution
Algorithm
1
...
, un ∼ U(0, 1)
2
...
))

H
...


e^{−λ} λ^k / k!

Connections between Poisson(λ) and exp(λ):
Poisson x is the number of events in [0, 1], when the times between consecutive events are i.i.d. exponential(λ)
...
, Tx, Tx+1 ∼ exp(λ) ⇒ x ∼ Poisson(λ)

Algorithm
1
...
While(s ≤ 1)

Simulate u ∼ U (0, 1)
...
Return x = k − 1
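The counting loop above can be sketched as follows (my own code; each exponential inter-arrival is drawn by the inverse-CDF method from Part B):

```python
import math
import random

def poisson(lam, rng):
    # Count events in [0, 1] when inter-arrival times are i.i.d. exp(lam);
    # each -log(1 - u)/lam is one exponential inter-arrival time.
    s, k = 0.0, 0
    while True:
        s += -math.log(1.0 - rng.random()) / lam
        if s > 1.0:
            return k
        k += 1

rng = random.Random(6)
samples = [poisson(4.0, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)  # should be near lambda = 4
```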
Alternatively, recall u ∼ U(0, 1) ⇒ X = F^{−1}(u) ∼ F(•)

Q: In the discrete case, how do we define F^{−1}(u)?
A: x = F^{−1}(u) = the smallest value x such that F(x) ≥ u

Algorithm(numerical)
1
...

Generate u ∼ U (0, 1)
2
...
Return x
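A sketch of the numerical inverse-CDF algorithm for a discrete distribution (the three-point example is my own):

```python
import random

def discrete_inv_cdf(u, values, probs):
    # Smallest value x (taken in increasing order) with F(x) >= u.
    cum = 0.0
    for x, p in zip(values, probs):
        cum += p
        if cum >= u:
            return x
    return values[-1]  # guard against floating-point round-off

rng = random.Random(7)
vals, probs = [0, 1, 2], [0.2, 0.5, 0.3]
samples = [discrete_inv_cdf(rng.random(), vals, probs) for _ in range(50_000)]
freq1 = samples.count(1) / len(samples)  # P(x = 1) = 0.5
```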

2 Simulation Techniques and Methods

A
...

Assumption
1
...

2
...


Remark: C has to be more than 1, since ∫ f(x) dx = ∫ g(x) dx = 1
...


Algorithm
1
...
If u ≤ f(z) / (C g(z)), set x = z (accept z), and then go to 3;
2
...
3
...

Claim: x ∼ f(x)

Reason(intuitively)

Reason(in math)
Define
Aj = {Accept z as x in the j − th loop}
= {Reject loop 1} ∩ {Reject loop 2} ∩
...

We can easily verify that
Σ_{j=1}^{+∞} P(Aj) = Σ_{j=1}^{+∞} (1 − 1/C)^{j−1} · (1/C) = 1
Algorithm converges!
Rejection rate at each loop is 1 − 1/C, if
...
P(x ≤ t) = ...
So we can use the Laplace distribution to generate the Normal distribution, but cannot use the Normal distribution to get the Laplace: the Normal tails decay faster than the Laplace tails, so f(x)/g(x) would be unbounded and no finite C exists
...
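A sketch of exactly this pairing, Normal target with Laplace helper (my own code; the constant C = sup f/g = √(2/π) e^{1/2} is attained at |x| = 1):

```python
import math
import random

def f(x):  # target: standard normal density
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def g(x):  # helper: standard Laplace density (mu = 0, theta = 1)
    return 0.5 * math.exp(-abs(x))

C = math.sqrt(2.0 / math.pi) * math.exp(0.5)  # sup f/g, attained at |x| = 1

def normal_by_rejection(rng):
    while True:
        t = rng.random()
        # Laplace draw via its inverse CDF (mu = 0, theta = 1)
        z = -math.copysign(1.0, t - 0.5) * math.log(1.0 - 2.0 * abs(t - 0.5))
        if rng.random() <= f(z) / (C * g(z)):  # accept with probability f/(Cg)
            return z

rng = random.Random(8)
samples = [normal_by_rejection(rng) for _ in range(50_000)]
var = sum(s * s for s in samples) / len(samples)  # should be near 1
```

The acceptance probability at each loop is 1/C ≈ 0.76, so roughly 1.3 Laplace draws are consumed per Normal draw.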
Importance Sampling Method
Goal

To simulate x ∼ f (x), which is difficult
...
We know f(x) (up to a constant)
...
g
...
There exists a candidate distribution (helper) g(z) that is close to f(z) and easy to simulate from
...


The assumptions are similar to those of rejection sampling, but weaker
...
g
...

Algorithm
1
...
, zN ∼ g(z)
2
...
zN according to the probability
P (x = zj |z1 ,
...
, zN ) = PN
g(zj )
j=1 wj

3
...
P(X ≤ t | z1, ..., zN) = Σ_{k=1}^N P(X = zk | z1, ..., zN) · 1{zk ≤ t}

Remark
For computational efficiency, try to find g(z) close to f(z), so that the wj's are not too close to 0 or 1
...

E[a(x)] = ∫ a(x) f(x) dx
= ∫ [a(x) f(x) / g(x)] g(x) dx
= E[ a(z) f(z) / g(z) ]
≈ (1/N) Σ_{i=1}^N a(zi) f(zi) / g(zi)
= (1/N) Σ_{i=1}^N a(zi) wi

C
...


Assumption We know how to simulate from f (x|y, z), f (y|x, z) and f (z|x, y)
...

Algorithm
1
...
For k=0, 1,
...
Stop after a large number N of iterations,
set (x, y, z) = (x(N ) , y (N ) , z (N ) ) and return (x,y,z)
Claim: When N is large, (x, y, z) ∼ f (x, y, z)
Remarks
1
...

2
...

Markov Chain:
(x(0), y(0))^T → (x(1), y(1))^T → (x(2), y(2))^T → ... → (x(N), y(N))^T
Junker from CMU)
Need to simulate
(x, y)^T ∼ f(x, y) = (2x + 3y + 2)/28, for 0 < x < 2, 0 < y < 2
...
We can use Rejection Sampling and Importance Sampling to simulate from f(x, y), using the uniform distribution as the helper
...

f(x|y) = f(x, y) / f(y) = f(x, y) / ∫ f(x, y) dx = [(2x + 3y + 2)/28] / [(3y + 4)/14] = (2x + 3y + 2)/(6y + 8)

f(y|x) = f(x, y) / f(x) = f(x, y) / ∫ f(x, y) dy = [(2x + 3y + 2)/28] / [(2x + 5)/14] = (2x + 3y + 2)/(4x + 10)

Suppose we know how to simulate from f (x|y) for a fixed y and f (y|x) for a fixed x
...
Set k = 0 and starting point (x(0) , y (0) ) = (1, 1)
2
...

x(k+1) ∼ f(x | y(k))
y(k+1) ∼ f(y | x(k+1))
3
...
5y 2 + 2(x + 1)y
2x + 3y + 2
dt =
F (y|x) =
f (t|x)dt ==
4x + 10
4x + 10
0
0
Solve u = F (y|x) for y
...
Simulate u ∼ U(0, 1)
2
...

F^{−1}(v|y) = √( v(6y + 8) + (1.5y + 1)² ) − (1.5y + 1)

1
...
Given y, set x = √( v(6y + 8) + (1.5y + 1)² ) − (1.5y + 1)
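Putting the two conditional draws together gives a Gibbs sampler for the example density f(x, y) = (2x + 3y + 2)/28 on (0, 2)² (a sketch; the inverse-CDF formulas come from solving the quadratics u = F(y|x) and v = F(x|y), and the burn-in length is my choice):

```python
import math
import random

def sample_x_given_y(v, y):
    # Invert F(x|y) = (x^2 + (3y+2)x) / (6y+8) on 0 < x < 2.
    b = 1.5 * y + 1.0
    return math.sqrt(v * (6.0 * y + 8.0) + b * b) - b

def sample_y_given_x(u, x):
    # Invert F(y|x) = (1.5y^2 + 2(x+1)y) / (4x+10) on 0 < y < 2.
    b = x + 1.0
    return (-2.0 * b + math.sqrt(4.0 * b * b + 6.0 * u * (4.0 * x + 10.0))) / 3.0

rng = random.Random(10)
x, y = 1.0, 1.0                      # starting point (x(0), y(0)) = (1, 1)
burn_in, n_keep = 1_000, 50_000
kept = []
for k in range(burn_in + n_keep):
    x = sample_x_given_y(rng.random(), y)  # x(k+1) ~ f(x | y(k))
    y = sample_y_given_x(rng.random(), x)  # y(k+1) ~ f(y | x(k+1))
    if k >= burn_in:
        kept.append((x, y))
mean_x = sum(p[0] for p in kept) / len(kept)  # E[x] = 92/84 ≈ 1.095
```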

D
...
−→ xk −→
...
, xk ,
...

Theorem 1
...
−→ xk −→
...
Key Condition: There exists a density function f(x) such that (*) holds
...
The chain is also π-irreducible (the set of possible values for xk does not change for all k = 0, 1, 2, ...)
The chain is aperiodic (not returning to the same value periodically)
...

Recall Gibbs,
x(k+1) ∼ f (x|y (k) , z (k) )
y (k+1) ∼ f (y|x(k+1) , z (k) )
z (k+1) ∼ f (z|x(k+1) , y (k+1) )
⇒p(x(k+1) , y (k+1) , z (k+1) |x(k) , y (k) , z (k) )
=f (x(k+1) |y (k) , z (k) )f (y (k+1) |x(k+1) , z (k) )f (z (k+1) |x(k+1) , y (k+1) )

∫∫∫ p(x(k+1), y(k+1), z(k+1) | x(k), y(k), z(k)) f(x(k), y(k), z(k)) dx(k) dy(k) dz(k)

= ∫∫∫ f(x(k+1)|y(k), z(k)) f(y(k+1)|x(k+1), z(k)) f(z(k+1)|x(k+1), y(k+1)) f(x(k), y(k), z(k)) dx(k) dy(k) dz(k)

= ∫∫ f(x(k+1)|y(k), z(k)) f(y(k+1)|x(k+1), z(k)) f(y(k), z(k)) dy(k) dz(k) · f(z(k+1)|x(k+1), y(k+1))

= ∫∫ f(x(k+1), y(k), z(k)) f(y(k+1)|x(k+1), z(k)) dy(k) dz(k) · f(z(k+1)|x(k+1), y(k+1))

= ∫ f(x(k+1), z(k)) f(y(k+1)|x(k+1), z(k)) dz(k) · f(z(k+1)|x(k+1), y(k+1))

= ∫ f(x(k+1), y(k+1), z(k)) dz(k) · f(z(k+1)|x(k+1), y(k+1))

= f(x(k+1), y(k+1)) f(z(k+1)|x(k+1), y(k+1))

= f(x(k+1), y(k+1), z(k+1))

This verifies (*)!

E
...

Goal
from
...

M-H Algorithm
1
...

2
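The preview cuts off here. As a hedged sketch of the Metropolis-Hastings idea this section begins to describe (a generic random-walk variant with a symmetric normal proposal, my own code and not necessarily the exact version in the full notes):

```python
import math
import random

def metropolis_hastings(log_f, x0, n_steps, step, rng):
    # Random-walk M-H: propose z = x + step * N(0,1); since the proposal is
    # symmetric, accept z with probability min(1, f(z)/f(x)), else stay at x.
    x, chain = x0, []
    for _ in range(n_steps):
        z = x + step * rng.gauss(0.0, 1.0)
        if math.log(1.0 - rng.random()) < log_f(z) - log_f(x):
            x = z
        chain.append(x)
    return chain

# Example target known only up to a constant: log f(x) = -x^2/2 (standard normal)
rng = random.Random(11)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 100_000, 2.0, rng)
var = sum(x * x for x in chain) / len(chain)  # should be near 1
```

Note that only the ratio f(z)/f(x) is needed, so f may be known up to a normalizing constant, which is the main appeal of M-H.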