Parallel Simulation
Mathematisches Institut, Goethe-Universität Frankfurt am Main
Advances in Financial Mathematics, Paris, January 7-10, 2014
Outline
1 Monte Carlo
2 Multilevel Monte Carlo
3 Adaptive Multilevel Monte Carlo
4 Parallel Multilevel Monte Carlo: Algorithm, Numerical Results
Option pricing

Model: Black-Scholes
\[ dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t) \]

Euler-Maruyama discretization:
\[ \hat S(t_{j+1}) = \hat S(t_j) + \mu \hat S(t_j)\,h + \sigma \hat S(t_j)\,z_j, \qquad z_j \sim N(0, h) \]

Martingale approach:
\[ V(S, 0) = e^{-rT}\,E[V(S, T)] \]

Monte Carlo simulation:
\[ \hat V(S, 0) = e^{-rT}\,\frac{1}{N} \sum_{i=1}^{N} \hat V\big(\{\hat S^{(i)}(t_1), \dots, \hat S^{(i)}(t_d)\}, T\big), \]
where \(\hat V\) is the discretized payoff, e.g. for a lookback option
\[ \hat V\big(\{\hat S(t_1), \dots, \hat S(t_d)\}, T\big) = \hat S(t_d) - \min_{1 \le j \le d} \hat S(t_j). \]
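As an illustration, the Euler-Maruyama Monte Carlo estimator above can be sketched in Python; this is a minimal sketch, with the function name and default parameters chosen here for illustration and \(\mu = r\) under the risk-neutral measure:

```python
import numpy as np

def lookback_mc(s0=1.0, r=0.05, sigma=0.2, T=1.0, d=64, n_paths=10_000, seed=0):
    """Euler-Maruyama Monte Carlo price of the lookback payoff
    S(t_d) - min_{1<=j<=d} S(t_j), with mu = r (risk-neutral measure)."""
    rng = np.random.default_rng(seed)
    h = T / d
    s = np.full(n_paths, s0)
    s_min = np.full(n_paths, np.inf)
    for _ in range(d):
        z = rng.normal(0.0, np.sqrt(h), size=n_paths)  # z_j ~ N(0, h)
        s = s + r * s * h + sigma * s * z              # Euler-Maruyama step
        s_min = np.minimum(s_min, s)                   # running discrete minimum
    return np.exp(-r * T) * (s - s_min).mean()
```

All N paths are advanced together as a NumPy vector, so the loop runs over the d time steps only.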
Discretization Error

Shortcut notation:
\[ Y = E[f(S)], \qquad \hat Y = \frac{1}{N} \sum_{i=1}^{N} f(\hat S^{(i)}) \]

Mean square error:
\[ \mathrm{MSE} = E\big[\,|Y - \hat Y|^2\,\big] = \big(E[\hat Y - Y]\big)^2 + \mathrm{Var}[\hat Y] = O\big(h^{2\alpha} + N^{-1}\big), \]
if the approximation \(f(\hat S)\) converges with weak order \(\alpha\):
\[ \max_{1 \le j \le d} \big| E[f(\hat S(t_j))] - E[f(S(t_j))] \big| \le c\, h^{\alpha}. \]
Error balancing: to achieve an RMSE of \(O(\varepsilon)\) it is necessary to select
\[ N = O(\varepsilon^{-2}) \quad\text{and}\quad h = O(\varepsilon^{1/\alpha}). \]
The corresponding cost \(C\) is then
\[ C = N\,d = O(\varepsilon^{-2 - 1/\alpha}). \]
Solving for \(\varepsilon\) yields the optimal rate of convergence
\[ \mathrm{RMSE} = O(\varepsilon) = O\big(C^{-\alpha/(2\alpha+1)}\big). \]
Optimal refinement rule: \(d \to 2d\), \(N \to 4^{\alpha} N\).
Figure: Asian option: RMSE, bias and standard deviation versus cost C; all observed rates 1/3.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, d_1 = 1, N_1 = 5, 100 repetitions.
Figure: RMSE of European, Asian and barrier options versus cost C.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, d_1 = 1, N_1 = 5, 100 repetitions.
Convergence rates

European option: α = 1
Asian option: α = 1
Barrier option: α = 1/2

α   | RMSE rate α/(2α+1)
1   | 1/3
1/2 | 1/4

Improvement: the discrete minimum correction for barrier options,
\[ \hat S_{\min} = \min_{j=1,\dots,d} \hat S(t_j) - k\,\sigma\sqrt{T/d} \quad\text{with}\quad k = 0.5826, \]
recovers α = 1 (Kou, 2003).
Figure: RMSE of European, Asian and barrier options, and of the barrier option with transformed minimum, versus cost C.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, d_1 = 1, N_1 = 5, 100 repetitions.
Multilevel Monte Carlo (Giles, 2008): simulate asset prices for different mesh widths h_1, ..., h_L in time.
- small mesh width: low discretization error, but large cost
- large mesh width: high discretization error, but small cost

Rewrite the payoff on the finest level L as a telescoping sum
\[ E[\hat P_L] = E[\hat P_0] + \sum_{l=1}^{L} E[\hat P_l - \hat P_{l-1}], \]
where \(\hat P_l\) is the approximation for mesh width \(h_l = M^{-l} T\).
Multilevel Monte Carlo

Computation: from the estimates of the expectations \(E[\hat P_l - \hat P_{l-1}]\),
\[ \hat Y_l = \frac{1}{N_l} \sum_{i=1}^{N_l} \big(\hat P_l^{(i)} - \hat P_{l-1}^{(i)}\big) \quad\text{for } l = 1, \dots, L, \]
and the estimate of \(E[\hat P_0]\),
\[ \hat Y_0 = \frac{1}{N_0} \sum_{i=1}^{N_0} \hat P_0^{(i)}, \]
one obtains the multilevel estimate of \(E[\hat P_L]\):
\[ \hat Y = \sum_{l=0}^{L} \hat Y_l, \]
with \(N_l\) the number of simulations on level \(l\).
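The estimator above can be sketched as follows. The key ingredient is that \(\hat P_l\) and \(\hat P_{l-1}\) inside one correction term are driven by the same Brownian increments (the coarse path uses the summed fine increments). This is a minimal sketch with fixed \(N_l = N\); all function names and default parameters are illustrative, not from the talk:

```python
import numpy as np

def euler_paths_two_levels(s0, r, sigma, T, M, l, n, rng):
    """n Euler paths on level l (M**l steps) plus, for l > 0, the coupled
    coarse paths on level l-1 built from the same Brownian increments."""
    nf = M ** l
    h = T / nf
    dw = rng.normal(0.0, np.sqrt(h), size=(n, nf))
    sf = np.empty((n, nf + 1)); sf[:, 0] = s0
    for j in range(nf):
        sf[:, j + 1] = sf[:, j] * (1.0 + r * h + sigma * dw[:, j])
    if l == 0:
        return sf, None
    nc = nf // M
    dwc = dw.reshape(n, nc, M).sum(axis=2)   # sum M fine increments per coarse step
    hc = T / nc
    sc = np.empty((n, nc + 1)); sc[:, 0] = s0
    for j in range(nc):
        sc[:, j + 1] = sc[:, j] * (1.0 + r * hc + sigma * dwc[:, j])
    return sf, sc

def mlmc_estimate(payoff, s0=1.0, r=0.05, sigma=0.2, T=1.0, M=2, L=4, N=2000, seed=0):
    """Multilevel estimate Y = sum_l Y_l of E[P_L], with N samples per level."""
    rng = np.random.default_rng(seed)
    y = 0.0
    for l in range(L + 1):
        sf, sc = euler_paths_two_levels(s0, r, sigma, T, M, l, N, rng)
        y += payoff(sf).mean() if l == 0 else (payoff(sf) - payoff(sc)).mean()
    return np.exp(-r * T) * y
```

For example, `mlmc_estimate(lambda s: np.maximum(s.mean(axis=1) - 1.0, 0.0))` prices a discretely monitored Asian call with strike 1.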
Multilevel conditions:

weak convergence: \(\big| E[\hat P_l - P] \big| \le c_1 h_l^{\alpha}\)
strong convergence: \(V[\hat Y_l] \le c_2 N_l^{-1} h_l^{\beta}\),

which corresponds to the strong order of the SDE discretization,
\[ \max_{1 \le j \le d} \Big( E\big| f(\hat S(t_j)) - f(S(t_j)) \big|^p \Big)^{1/p} \le c_3\, h^{\beta/2}. \]

Multilevel complexity (Giles, 2008):
\[ \mathrm{RMSE} = \begin{cases} O(C^{-1/2}) & \text{for } \beta \ge 1, \\ O\big(C^{-\alpha/(2\alpha+1-\beta)}\big) & \text{for } \beta < 1. \end{cases} \]
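The complexity theorem above gives the RMSE-versus-cost exponent directly from (α, β); a small helper (name illustrative) reproduces the expected rates listed in the summary table below:

```python
def mlmc_rmse_rate(alpha, beta):
    """RMSE-vs-cost exponent from the multilevel complexity theorem:
    RMSE = O(C**(-rate))."""
    return 0.5 if beta >= 1 else alpha / (2 * alpha + 1 - beta)
```

E.g. the barrier option with α = 1/2, β = 1/2 gives rate 1/3, and the digital option with α = 1, β = 1/2 gives rate 2/5.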
Figure: Weak error of Asian, lookback, barrier and digital options versus the number of time steps d (determining α).
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: Variance of the level estimators for Asian, lookback, barrier and digital options versus d (determining β).
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: RMSE of Asian, lookback, barrier and digital options versus cost C.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Summary

Option   | expected α, β, RMSE | computed α, β, RMSE
Asian    | 1,   1,   1/2       | 0.95, 1.76, 0.53
Lookback | 1/2, 1,   1/2       | 0.43, 0.83, 0.42
Barrier  | 1/2, 1/2, 1/3       | 0.42, 0.03, 0.13
Digital  | 1,   1/2, 2/5       | 0.87, 0.39, 0.37

RMSE rate = α/(2α + 1 − β) if β < 1 and RMSE rate = 1/2 if β ≥ 1.
Dimension-adaptive algorithm:
1 Set N_0 = N_1 = 100.
2 Determine V_l for l = 0, ..., L such that \(V := \sum_{l=0}^{L} V_l\) estimates the variance and \(B := (\hat Y_L / (M^{\alpha} - 1))^2\) estimates the squared bias.
3 If \(V + B < \varepsilon^2\), stop.
4 Else, if \(V > B\), determine the level l with the largest ratio of variance to work and double N_l; if \(B > V\), set L := L + 1 and start the new level with 100 samples.
5 Go to step 2.
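The adaptive loop above can be sketched as follows; this is an illustrative skeleton, not the talk's implementation, and `sampler(l, n, rng)` is a hypothetical callback returning n samples of \(\hat P_l - \hat P_{l-1}\) (of \(\hat P_0\) for l = 0):

```python
import numpy as np

def adaptive_mlmc(sampler, eps, M=2, alpha=1.0, n_init=100, seed=0):
    """Grow N_l or L until (variance estimate) + (bias^2 estimate) < eps^2."""
    rng = np.random.default_rng(seed)
    samples = [sampler(0, n_init, rng), sampler(1, n_init, rng)]   # step 1
    while True:                                                    # step 5
        v_l = np.array([s.var(ddof=1) / len(s) for s in samples])  # step 2
        v = v_l.sum()
        b = (samples[-1].mean() / (M ** alpha - 1)) ** 2           # squared bias
        if v + b < eps ** 2:                                       # step 3
            return sum(s.mean() for s in samples), len(samples) - 1
        if v > b:                                                  # step 4
            work = np.array([float(M) ** l for l in range(len(samples))])
            l = int(np.argmax(v_l / work))                         # variance / work
            samples[l] = np.concatenate([samples[l], sampler(l, len(samples[l]), rng)])
        else:
            samples.append(sampler(len(samples), n_init, rng))     # L := L + 1
```

With a toy sampler whose level corrections shrink like 2^(-l), the loop adds levels until the bias estimate drops below the tolerance and doubles samples on the levels dominating the variance.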
Figure: Dimension-adaptive MLMC: number of samples N(L) per level L = 0, ..., 8 at successive stages of the algorithm (N(L) growing from 100 up to 6400).
Figure: European option: RMSE of the standard and adaptive methods, with the corresponding error bounds, versus cost C.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: Asian option: RMSE, bias and standard deviation versus cost C; all observed rates 1/2.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: RMSE of Asian, lookback, barrier and digital options versus cost C.

RMSE rates:
Option   | MLMC | adapt. MLMC
Asian    | 0.53 | 0.54
Lookback | 0.42 | 0.39
Barrier  | 0.13 | 0.12
Digital  | 0.37 | 0.37

Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
MLMC is not optimal for barrier options. One idea: rewrite the payoff as a product of probabilities,
\[ P = e^{-rT}\,(S(t_N) - K)^+ \prod_{i=0}^{N-1} p_i \quad\text{with}\quad p_i = 1 - \exp\Big( -\frac{2\,(S_i - B)^+ (S_{i+1} - B)^+}{\sigma^2 S_i^2\, T/N} \Big). \]
This is also possible for double barrier options, but it becomes more complicated as the payoff involves more conditions.
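Each factor \(p_i\) is the Brownian-bridge probability that the path stays above a down barrier B between two grid points. A minimal sketch of the product over one path (the helper name is hypothetical):

```python
import numpy as np

def down_barrier_survival(path, B, sigma, h):
    """Product of p_i = 1 - exp(-2 (S_i - B)^+ (S_{i+1} - B)^+ / (sigma^2 S_i^2 h))
    over all grid intervals of one simulated path."""
    s_left, s_right = path[:-1], path[1:]
    a = np.maximum(s_left - B, 0.0) * np.maximum(s_right - B, 0.0)
    p = 1.0 - np.exp(-2.0 * a / (sigma ** 2 * s_left ** 2 * h))
    return p.prod()
```

A path that touches the barrier at a grid point yields a factor of exactly 0, while a path that stays well above the barrier yields a product close to 1.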
A more general idea: use an adaptive path discretization close to the barrier. Adaptively refine a time interval if the barrier-crossing probability is large,
\[ \Psi(S_{i-1}, S_i) := P\big( S_{i-1/2} < B \big) > w. \]
This adds no additional complexity for more complicated options.
How to construct the midpoints after refinement?

Version 1: Brownian bridge using the mean of the forward and backward Euler-Maruyama estimates,
\[ S_{i-1/2} = \frac{1}{2}\, S_{i-1}\big(1 + r(t_{i-1/2} - t_{i-1}) + \sigma(W_{i-1/2} - W_{i-1})\big) + \frac{1}{2}\, \frac{S_i}{1 + r(t_i - t_{i-1/2}) + \sigma(W_i - W_{i-1/2})}. \]

Version 2: Brownian bridge using an arithmetic Brownian motion,
\[ S_{i-1/2} = S_{i-1} + r(t_{i-1/2} - t_{i-1}) + \sigma(W_{i-1/2} - W_{i-1}). \]
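Both versions first need the Brownian midpoint \(W_{i-1/2}\), sampled from the Brownian bridge conditional on the two endpoints. A sketch (function names are illustrative; the backward Euler estimate in version 1 is rendered here as division by the one-step Euler factor, which is one plausible reading of the formula above):

```python
import numpy as np

def bridge_midpoint_w(t0, t1, w0, w1, rng):
    """Sample W((t0+t1)/2) given W(t0)=w0, W(t1)=w1 (Brownian bridge):
    conditional mean (w0+w1)/2, conditional variance (t1-t0)/4."""
    return 0.5 * (w0 + w1) + np.sqrt((t1 - t0) / 4.0) * rng.standard_normal()

def midpoint_v1(s0, s1, r, sigma, t0, t1, w0, w1, wm):
    """Version 1: mean of the forward and backward Euler-Maruyama estimates."""
    tm = 0.5 * (t0 + t1)
    forward = s0 * (1.0 + r * (tm - t0) + sigma * (wm - w0))
    backward = s1 / (1.0 + r * (t1 - tm) + sigma * (w1 - wm))
    return 0.5 * (forward + backward)

def midpoint_v2(s0, r, sigma, t0, t1, w0, wm):
    """Version 2: arithmetic-Brownian-motion interpolation."""
    return s0 + r * (0.5 * (t0 + t1) - t0) + sigma * (wm - w0)
```

With r = 0 and σ = 0, version 1 reduces to the plain average of the two endpoint values, as expected.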
Figure: Example: down-and-out call. A sample path S(t) with its driving Brownian motion W(t) and the barrier, shown over successive adaptive refinements of the discretization near the barrier.
Figure: Number of refinement steps for 10^4 paths of a barrier option with w = 1.28 and d = 1 (linear and logarithmic scale).
Time-adaptive multilevel estimator: again rewrite the payoff on the finest level L as a telescoping sum,
\[ E[\hat P_L^{w_L}] = E[\hat P_0^{w_0}] + \sum_{l=1}^{L} E[\hat P_l^{w_l} - \hat P_{l-1}^{w_{l-1}}], \]
where \(\hat P_l^{w_l}\) is the approximation for mesh width \(h_l = M^{-l} T\) and adaptive path-discretization parameter \(w_l\). Calculate the expectations on the right-hand side with the corresponding Monte Carlo estimators.
Figure: Weak error of the transformed-barrier and adaptively refined barrier methods versus d.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: Variance of the level estimators for the transformed-barrier and adaptively refined barrier methods versus d.
Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Figure: RMSE of the barrier-option methods versus cost C.

RMSE rates:
Method          | RMSE
Barrier         | 0.13
transf. Barrier | 0.29
adapt. refining | 0.5

Parameters: S_0 = 1, r = 0.05, σ = 0.2, K = 1, T = 1, N_0 = 100, 100 repetitions.
Parallel programming

Properties:
- a computer with more than one processor/core is needed
- linear speed-up in the number of processors is possible
- efficient for loops that perform the same recurring tasks
- very useful for Monte Carlo simulation

Example: parallel Monte Carlo with M processors,
\[ \hat V_j(S, 0) = e^{-rT}\,\frac{1}{N/M} \sum_{i=1}^{N/M} \hat V\big(\{\hat S^{(i,j)}(t_1), \dots, \hat S^{(i,j)}(t_d)\}, T\big), \]
and the aggregated estimator for the option price is
\[ \hat V(S, 0) = \frac{1}{M} \sum_{j=1}^{M} \hat V_j(S, 0). \]
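The partition-and-aggregate pattern above can be sketched as follows. The talk's implementation is in C++ with MPI; this is a thread-based Python illustration of the same idea, with hypothetical function names, a per-worker random stream, and a single exact GBM step instead of an Euler path to keep the sketch short:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def worker_estimate(j, n, s0=1.0, r=0.05, sigma=0.2, T=1.0, K=1.0):
    """Processor j: European-call estimate V_j from n independent paths."""
    rng = np.random.default_rng(j)  # independent stream per worker
    z = rng.normal(0.0, np.sqrt(T), size=n)
    sT = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * z)  # exact GBM step
    return np.exp(-r * T) * np.maximum(sT - K, 0.0).mean()

def parallel_price(N=40_000, M=4):
    """Aggregate V = (1/M) * sum_j V_j, each worker simulating N/M paths."""
    with ThreadPoolExecutor(max_workers=M) as ex:
        parts = list(ex.map(lambda j: worker_estimate(j, N // M), range(M)))
    return sum(parts) / M
```

In an MPI setting the final average would be a reduction across ranks instead of the thread-pool `map`.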
Idea: parallelize each sum of the MLMC estimator
\[ \hat Y = \sum_{l=0}^{L} \hat Y_l \quad\text{with}\quad \hat Y_l = \frac{1}{N_l} \sum_{i=1}^{N_l} \big(\hat P_l^{(i)} - \hat P_{l-1}^{(i)}\big). \]
Do not overload the memory:
- do not save the whole path of S, only the part necessary for the option value
- do not save all option values, but compute the expectation and variance recursively
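The recursive mean/variance computation mentioned in the last point is a standard streaming technique (Welford's online algorithm, shown here as one possible realization rather than the talk's exact code):

```python
class RunningStats:
    """Welford's online update: maintains mean and sample variance of the
    option values seen so far without storing them."""
    def __init__(self):
        self.n, self.mean, self._m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n            # recursive mean update
        self._m2 += delta * (x - self.mean)    # accumulates sum of squared deviations

    def variance(self):
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0
```

Each simulated option value is `push`ed and immediately discarded, so the memory footprint is O(1) per level and per processor.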
Algorithm:
1 Set L := 0 and N_{M,0} = 1000/M.
2 Determine the variances V_{k,l} for k = 1, ..., M and l = 0, ..., L on each processor, such that \(V_l = \big(\sum_{k=1}^{M} V_{k,l}\big)/M\) and \(V := \sum_{l=0}^{L} V_l\).
3 Define the optimal N_l, l = 0, ..., L, as in the standard algorithm; if N_l has increased, compute \(N_{M,l} = (N_l^{\mathrm{new}} - N_l^{\mathrm{old}})/M\) extra samples on each processor.
4 Stop if RMSE < ε and L ≥ 2.
5 Else set L := L + 1, N_{M,L} = 1000/M, and go to step 2.
Figure: Convergence rates in time for a European option using 1, 10 and 50 threads.
European option

ε        | 2·10^-3 | 1·10^-3 | 5·10^-4 | 1·10^-4 | 5·10^-5
10 cores | 8.1     | 8.6     | 9.3     | 10.0    | 10.0
50 cores | 13.4    | 16.5    | 24.3    | 45.4    | 48.8

Table: Speed-up factors for a European option with 10 and 50 cores compared to 1 core.

- linear speed-up in the number of cores if the program runs for more than 1 second
- code written in C++ using MPI
Conclusions

Summary:
- Monte Carlo: RMSE rate 1/3
- Multilevel Monte Carlo: RMSE rate 1/2 in the best case
- Adaptive MLMC: RMSE rate 1/2 also for barrier options
- Parallel MLMC: linear speed-up

Extensions:
- MLMC for Milstein and higher-order schemes (Giles, 2007)
- Multilevel Quasi-Monte Carlo (Giles, Waterhouse, 2009; G., Noll, 2012)
- Multilevel (adaptive) sparse grid integration (G., Heinz, 2012)