Monte Carlo Methods

Prof. Mike Giles
mike.giles@maths.ox.ac.uk
Oxford University Mathematical Institute

Lecture 1
Geometric Brownian Motion

In the case of Geometric Brownian Motion
$$dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t$$
the use of Itô calculus gives
$$d(\log S_t) = (r - \tfrac{1}{2}\sigma^2)\,dt + \sigma\,dW_t$$
which can be integrated to give
$$S_T = S_0 \exp\!\left( (r - \tfrac{1}{2}\sigma^2)\,T + \sigma\,W_T \right)$$
so we are able to directly simulate $S_T$ to perform Monte Carlo estimation for European options with a payoff $f(S_T)$.
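As a concrete illustration, here is a minimal NumPy sketch of this exact simulation of $S_T$ for a European call. The lecture does not prescribe any code; the function name, defaults and parameter values (taken from the later numerical demonstration) are illustrative assumptions.

```python
import numpy as np

def gbm_call_mc(S0=100.0, K=110.0, r=0.05, sigma=0.5, T=1.0, N=10**6, rng=None):
    # exact simulation of S_T for Geometric Brownian Motion:
    #   S_T = S_0 exp((r - sigma^2/2) T + sigma W_T),   W_T ~ N(0, T)
    rng = np.random.default_rng() if rng is None else rng
    WT = np.sqrt(T) * rng.standard_normal(N)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WT)

    # discounted European call payoff: Monte Carlo mean and standard error
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    return payoff.mean(), payoff.std() / np.sqrt(N)

# estimate, std_error = gbm_call_mc()
```

Because $S_T$ is sampled exactly, there is no timestep discretisation error here; the only error is the Monte Carlo sampling error.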
Euler-Maruyama path simulation

In more general cases, the scalar SDE
$$dS_t = a(S_t, t)\,dt + b(S_t, t)\,dW_t$$
can be approximated using the Euler-Maruyama discretisation
$$\hat{S}_{n+1} = \hat{S}_n + a(\hat{S}_n, t_n)\,h + b(\hat{S}_n, t_n)\,\Delta W_n$$
Here $h$ is the timestep, $\hat{S}_n$ is the approximation to $S_{nh}$, and the $\Delta W_n$ are i.i.d. $N(0,h)$ Brownian increments.
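A minimal sketch of this discretisation for a general scalar SDE; the drift and diffusion functions $a$, $b$ are supplied by the caller, and all names are illustrative rather than taken from any lecture code.

```python
import numpy as np

def euler_maruyama(a, b, S0, T, M, N, rng=None):
    """Simulate N paths of dS = a(S,t) dt + b(S,t) dW with M timesteps of size h = T/M.
    Returns an array of shape (M+1, N) holding the discrete paths S_hat_n."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.empty((M + 1, N))
    S[0] = S0
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)          # i.i.d. N(0, h) increments
        S[n + 1] = S[n] + a(S[n], n * h) * h + b(S[n], n * h) * dW
    return S

# e.g. Geometric Brownian Motion with r = 0.05, sigma = 0.5:
# paths = euler_maruyama(lambda s, t: 0.05 * s, lambda s, t: 0.5 * s, 100.0, 1.0, 64, 10**5)
```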
Euler-Maruyama method

For ODEs, the forward Euler method has O(h) accuracy, and other more accurate methods are usually preferred. However, SDEs are very much harder to approximate, so the Euler-Maruyama method is widely used in practice. The numerical analysis is also very difficult, and even the definition of accuracy is tricky.
Weak convergence

In finance applications, we are mostly concerned with weak errors: the error in the expected payoff due to using a finite timestep $h$. For a European payoff $f(S_T)$, the weak error is
$$E[f(S_T)] - E[f(\hat{S}_M)]$$
where $M = T/h$, and for a path-dependent option it is
$$E[f(S)] - E[\hat{f}(\hat{S})]$$
where $f(S)$ is a function of the entire path $S_t$, and $\hat{f}(\hat{S})$ is a corresponding approximation using the whole discrete path.
Weak convergence

Key theoretical result (Bally and Talay, 1995): if $p(s)$ is the p.d.f. for $S_T$ and $\hat{p}(s)$ is the p.d.f. for $\hat{S}_{T/h}$ computed using the Euler-Maruyama approximation, then under certain conditions on $a(S,t)$ and $b(S,t)$
$$p(s) - \hat{p}(s) = O(h)$$
and hence
$$E[f(S_T)] - E[f(\hat{S}_{T/h})] = O(h).$$
This holds even for digital options with discontinuous payoffs $f(S)$; earlier theory covered only European options such as put and call options with Lipschitz payoffs.
Weak convergence

Numerical demonstration: Geometric Brownian Motion with $r = 0.05$, $\sigma = 0.5$, $T = 1$,
$$dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t$$
European call: $S_0 = 100$, $K = 110$.

Plot shows weak error versus the analytic expectation when using $10^8$ paths, and also the Monte Carlo error (3 standard deviations).
Weak convergence

[Figure: weak convergence, comparison to the exact solution; log-log plot of weak error and Monte Carlo error against timestep h.]
Weak convergence

The previous plot showed the difference between the exact expectation and the numerical approximation. What if the exact solution is unknown?

Compare approximations with timesteps $h$ and $2h$. If
$$E[f(S_T)] - E[f(\hat{S}^{\,h}_{T/h})] \approx a\,h$$
then
$$E[f(S_T)] - E[f(\hat{S}^{\,2h}_{T/2h})] \approx 2\,a\,h$$
and so
$$E[f(\hat{S}^{\,h}_{T/h})] - E[f(\hat{S}^{\,2h}_{T/2h})] \approx a\,h.$$
Weak convergence

To minimise the number of paths that need to be simulated, we use the same driving Brownian path when doing the $2h$ and $h$ approximations, i.e. take the Brownian increments for the $h$ simulation and sum them in pairs to get the Brownian increments for the $2h$ simulation.

The variance is lower because the $h$ and $2h$ paths are close to each other (strong convergence).

(We won't cover this, but it forms the basis for the Multilevel Monte Carlo method (Giles, 2006).)
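A sketch of how this coupling might be implemented for the GBM European call used in the demonstration: generate the fine increments, sum them in pairs for the coarse path, and difference the two payoff estimates. The function name and defaults are illustrative assumptions; $M$ is assumed even.

```python
import numpy as np

def weak_error_estimate(S0=100.0, K=110.0, r=0.05, sigma=0.5, T=1.0,
                        M=64, N=10**5, rng=None):
    """Estimate E[f(S^h)] - E[f(S^2h)] for a European call under GBM,
    using the same Brownian increments for the h and 2h Euler-Maruyama paths."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M                       # M assumed even
    Sf = np.full(N, S0)             # fine path, timestep h
    Sc = np.full(N, S0)             # coarse path, timestep 2h
    for n in range(M // 2):
        dW1 = np.sqrt(h) * rng.standard_normal(N)
        dW2 = np.sqrt(h) * rng.standard_normal(N)
        Sf += r * Sf * h + sigma * Sf * dW1          # two fine steps
        Sf += r * Sf * h + sigma * Sf * dW2
        Sc += r * Sc * 2 * h + sigma * Sc * (dW1 + dW2)   # one coarse step, summed increments
    diff = np.exp(-r * T) * (np.maximum(Sf - K, 0.0) - np.maximum(Sc - K, 0.0))
    return diff.mean(), diff.std() / np.sqrt(N)
```

Because the two paths share the same Brownian increments, the difference of payoffs has a much smaller variance than either payoff alone.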
Weak convergence

[Figure: weak convergence, difference from the 2h approximation; log-log plot of weak error and Monte Carlo error against timestep h.]
Mean Square Error

Question: how do we choose
- the number of timesteps (to reduce the weak error)?
- the number of paths (to reduce the Monte Carlo sampling error)?

If the true option value is $V = E[f]$, the discrete approximation is $\hat{V} = E[\hat{f}]$, and the Monte Carlo estimate is
$$\hat{Y} = \frac{1}{N} \sum_{i=1}^{N} \hat{f}^{(i)}$$
then ...
Mean Square Error

... the Mean Square Error is
$$E\!\left[ \left(\hat{Y} - V\right)^2 \right] = V[\hat{Y}] + \left( E[\hat{Y}] - V \right)^2 = N^{-1}\,V[\hat{f}] + \left( E[\hat{f}] - E[f] \right)^2$$
- the first term is due to the variance of the estimator
- the second term is the square of the bias due to the weak error
Mean Square Error

If there are $M$ timesteps, the computational cost is proportional to $C = M N$ and the MSE is approximately
$$a\,N^{-1} + b\,M^{-2} = a\,N^{-1} + b\,C^{-2} N^2.$$
For a fixed computational cost, this is minimised when
$$N = \left( \frac{a\,C^2}{2b} \right)^{1/3}, \qquad M = \left( \frac{2\,b\,C}{a} \right)^{1/3},$$
and hence
$$a\,N^{-1} = \left( \frac{2\,a^2 b}{C^2} \right)^{1/3}, \qquad b\,M^{-2} = \left( \frac{a^2 b}{4\,C^2} \right)^{1/3},$$
so the MC term is twice as big as the bias term.
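A small sketch of this cost split. In practice the constants $a$ and $b$ would have to be estimated (for example from pilot runs); the values in the commented call are purely illustrative assumptions.

```python
def optimal_split(a, b, C):
    """Given MSE ~ a/N + b/M**2 and cost C = M*N, return the cost-minimising N, M."""
    N = (a * C**2 / (2.0 * b)) ** (1.0 / 3.0)
    M = (2.0 * b * C / a) ** (1.0 / 3.0)
    return N, M

# illustrative numbers only:
# N, M = optimal_split(a=1.0, b=1.0, C=10**6)
# at this optimum, a/N is exactly twice b/M**2
```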
Path-dependent Options

For European options, the Euler-Maruyama method has $O(h)$ weak convergence. However, for some path-dependent options it may give only $O(\sqrt{h})$ weak convergence, unless the numerical payoff is constructed carefully.
Barrier option

A down-and-out call option has discounted payoff
$$\exp(-rT)\,(S_T - K)^+\,1_{\min_t S_t > B}$$
i.e. it is like a standard call option except that it pays nothing if the minimum value drops below the barrier $B$.

The natural numerical discretisation of this is
$$\hat{f} = \exp(-rT)\,(\hat{S}_M - K)^+\,1_{\min_n \hat{S}_n > B}$$
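A sketch of this natural (naive) discretisation for GBM, the one that turns out to give only $O(\sqrt{h})$ weak convergence; the function name and defaults are illustrative assumptions.

```python
import numpy as np

def barrier_call_naive(S0=100.0, K=110.0, B=90.0, r=0.05, sigma=0.5,
                       T=1.0, M=64, N=10**5, rng=None):
    """Down-and-out call, knocked out only if a *discrete* value S_hat_n falls below B."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.full(N, S0)
    alive = np.ones(N, dtype=bool)           # indicator: min_n S_hat_n > B so far
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)
        S += r * S * h + sigma * S * dW      # Euler-Maruyama step
        alive &= (S > B)
    payoff = np.exp(-r * T) * np.maximum(S - K, 0.0) * alive
    return payoff.mean(), payoff.std() / np.sqrt(N)
```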
Barrier option

Numerical demonstration: Geometric Brownian Motion with $r = 0.05$, $\sigma = 0.5$, $T = 1$,
$$dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t$$
Down-and-out call: $S_0 = 100$, $K = 110$, $B = 90$.

Plots show the weak error versus the analytic expectation using $10^6$ paths, and the difference from the $2h$ approximation using $10^5$ paths. (We don't need as many paths as before because the weak errors are much larger in this case.)
Barrier option

[Figure: barrier weak convergence, comparison to the exact solution; log-log plot of weak error and Monte Carlo error against timestep h.]
Barrier option

[Figure: barrier weak convergence, difference from the 2h approximation; log-log plot of weak error and Monte Carlo error against timestep h.]
Lookback option

A floating-strike lookback call option has discounted payoff
$$\exp(-rT)\left( S_T - \min_{[0,T]} S_t \right)$$
The natural numerical discretisation of this is
$$\hat{f} = \exp(-rT)\left( \hat{S}_M - \min_n \hat{S}_n \right)$$
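The corresponding naive lookback discretisation differs from the barrier sketch above only in tracking the discrete minimum rather than a knock-out indicator. Again a sketch with illustrative names and defaults.

```python
import numpy as np

def lookback_call_naive(S0=100.0, r=0.05, sigma=0.5, T=1.0, M=64, N=10**5, rng=None):
    """Floating-strike lookback call using the minimum over the *discrete* path values."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.full(N, S0)
    Smin = S.copy()                          # running minimum of the discrete path
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)
        S += r * S * h + sigma * S * dW      # Euler-Maruyama step
        Smin = np.minimum(Smin, S)
    payoff = np.exp(-r * T) * (S - Smin)
    return payoff.mean(), payoff.std() / np.sqrt(N)
```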
Lookback option

[Figure: lookback weak convergence, comparison to the exact solution; log-log plot of weak error and Monte Carlo error against timestep h.]
Lookback option

[Figure: lookback weak convergence, difference from the 2h approximation; log-log plot of weak error and Monte Carlo error against timestep h.]
Brownian Bridge

To recover $O(h)$ weak convergence we first need some theory. Consider simple Brownian motion
$$dS_t = a\,dt + b\,dW_t$$
with constant $a$, $b$ and initial data $S_0 = 0$.

Question: given $S_T$, what is the conditional probability density for $S_{T/2}$?
Conditional probability

With discrete probabilities,
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$
Similarly, with probability density functions
$$p_1(x \mid y) = \frac{p_2(x, y)}{p_3(y)}$$
where
- $p_1(x \mid y)$ is the conditional p.d.f. for $x$, given $y$
- $p_2(x, y)$ is the joint probability density function for $x, y$
- $p_3(y)$ is the probability density function for $y$
Brownian bridge

In our case, $y \equiv S_T$, $x \equiv S_{T/2}$:
$$p_2(x, y) = \frac{1}{\sqrt{\pi T}\,b} \exp\!\left( -\frac{(x - aT/2)^2}{b^2 T} \right) \times \frac{1}{\sqrt{\pi T}\,b} \exp\!\left( -\frac{(y - x - aT/2)^2}{b^2 T} \right)$$
$$p_3(y) = \frac{1}{\sqrt{2\pi T}\,b} \exp\!\left( -\frac{(y - aT)^2}{2\,b^2 T} \right)$$
$$\Longrightarrow \quad p_1(x \mid y) = \frac{1}{\sqrt{\pi T/2}\,b} \exp\!\left( -\frac{(x - y/2)^2}{b^2 T/2} \right)$$
Hence, $x$ is Normally distributed with mean $y/2$ and variance $b^2 T / 4$.
Brownian bridge

Extending this to a particular timestep with endpoints $S_n$ and $S_{n+1}$: conditional on these, the mid-point is Normally distributed with mean $\tfrac{1}{2}(S_n + S_{n+1})$ and variance $b^2 h / 4$.

We can take a sample from this conditional p.d.f. and then repeat the process, recursively bisecting each interval to fill in more and more detail.

Note: the drift $a$ is irrelevant, given the two endpoints. Because of this, we will take $a = 0$ in the next bit of theory.
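A sketch of this recursive bisection for driftless Brownian motion: given the values at the ends of each interval, each midpoint is drawn from a Normal with mean equal to the average of the endpoints and variance $b^2 \cdot (\text{interval length})/4$. Names are illustrative.

```python
import numpy as np

def brownian_bridge_refine(times, values, b, rng=None):
    """Insert one conditionally-sampled midpoint in every interval:
    midpoint ~ N( (S_left + S_right)/2, b^2 * dt / 4 )."""
    rng = np.random.default_rng() if rng is None else rng
    new_times, new_values = [times[0]], [values[0]]
    for t0, t1, s0, s1 in zip(times[:-1], times[1:], values[:-1], values[1:]):
        mid = 0.5 * (s0 + s1) + 0.5 * b * np.sqrt(t1 - t0) * rng.standard_normal()
        new_times += [0.5 * (t0 + t1), t1]
        new_values += [mid, s1]
    return np.array(new_times), np.array(new_values)

# start from the two endpoints S_0 = 0 and S_T, then call repeatedly to fill in detail:
# t, s = brownian_bridge_refine([0.0, 1.0], [0.0, 0.3], b=1.0)
```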
Barrier crossing

Consider zero-drift Brownian motion with $S_0 > 0$. If the path $S_t$ hits a barrier at 0, it is equally likely thereafter to go up or down. Hence, by symmetry, for $s > 0$, the p.d.f. for paths with $S_T = s$ after hitting the barrier is equal to the p.d.f. for paths with $S_T = -s$.

Thus, for $S_T > 0$,
$$P(\text{hit barrier} \mid S_T) = \frac{\exp\!\left( -\dfrac{(S_T + S_0)^2}{2\,b^2 T} \right)}{\exp\!\left( -\dfrac{(S_T - S_0)^2}{2\,b^2 T} \right)} = \exp\!\left( -\frac{2\,S_T\,S_0}{b^2 T} \right)$$
Barrier crossing

For a timestep $[t_n, t_{n+1}]$ and a non-zero barrier $B$ this generalises to
$$P(\text{hit barrier} \mid S_n, S_{n+1} > B) = \exp\!\left( -\frac{2\,(S_{n+1} - B)(S_n - B)}{b^2 h} \right)$$
This can also be viewed as the cumulative probability $P(S_{\min} < B)$ where $S_{\min} = \min_{[t_n, t_{n+1}]} S_t$. Since this is uniformly distributed on $[0,1]$, we can equate it to a uniform $[0,1]$ random variable $U_n$ and solve to get
$$S_{\min} = \tfrac{1}{2}\left( S_{n+1} + S_n - \sqrt{(S_{n+1} - S_n)^2 - 2\,b^2 h \log U_n} \right)$$
Barrier crossing

For a barrier above, we have
$$P(\text{hit barrier} \mid S_n, S_{n+1} < B) = \exp\!\left( -\frac{2\,(B - S_{n+1})(B - S_n)}{b^2 h} \right)$$
and hence
$$S_{\max} = \tfrac{1}{2}\left( S_{n+1} + S_n + \sqrt{(S_{n+1} - S_n)^2 - 2\,b^2 h \log U_n} \right)$$
where $U_n$ is again a uniform $[0,1]$ random variable.
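The sampled minimum and maximum for one timestep, taken directly from these formulas; a sketch assuming constant volatility $b$ over the timestep, with illustrative names. (The two are sampled independently here, so each is correct on its own; the payoffs that follow only ever use one of them at a time.)

```python
import numpy as np

def timestep_min_max(Sn, Snp1, b, h, rng=None):
    """Sample the within-timestep minimum and maximum of a Brownian motion with
    constant volatility b, conditional on the endpoint values Sn and Snp1."""
    rng = np.random.default_rng() if rng is None else rng
    U1 = rng.uniform(size=np.shape(Sn))
    U2 = rng.uniform(size=np.shape(Sn))
    Smin = 0.5 * (Snp1 + Sn - np.sqrt((Snp1 - Sn)**2 - 2.0 * b**2 * h * np.log(U1)))
    Smax = 0.5 * (Snp1 + Sn + np.sqrt((Snp1 - Sn)**2 - 2.0 * b**2 * h * np.log(U2)))
    return Smin, Smax
```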
Barrier option

Returning now to the barrier option, how do we define the numerical payoff $\hat{f}(\hat{S})$?

First, calculate the $\hat{S}_n$ as usual using the Euler-Maruyama method.

Second, two alternatives:
- use the (approximate) probability of crossing the barrier directly
- sample (approximately) the minimum in each timestep
Barrier option

Alternative 1: treating the drift and volatility as being approximately constant within each timestep, the probability of having crossed the barrier within timestep $n$ is
$$\hat{P}_n = \exp\!\left( -\frac{2\,(\hat{S}_{n+1} - B)^+ (\hat{S}_n - B)^+}{b^2(\hat{S}_n, t_n)\,h} \right)$$
The probability at the end of not having crossed the barrier is $\prod_n (1 - \hat{P}_n)$, and so the payoff is
$$\hat{f}(\hat{S}) = \exp(-rT)\,(\hat{S}_M - K)^+ \prod_n (1 - \hat{P}_n).$$
I prefer this approach because it is differentiable, which is good for Greeks.
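A sketch of Alternative 1 for the GBM down-and-out call of the demonstration, accumulating the product of the per-timestep survival probabilities; names and defaults are illustrative assumptions.

```python
import numpy as np

def barrier_call_prob(S0=100.0, K=110.0, B=90.0, r=0.05, sigma=0.5,
                      T=1.0, M=64, N=10**5, rng=None):
    """Down-and-out call: multiply the payoff by prod_n (1 - P_n), where P_n is the
    approximate probability of crossing the barrier within timestep n."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.full(N, S0)
    survive = np.ones(N)                     # running product of (1 - P_n)
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)
        Snew = S + r * S * h + sigma * S * dW
        bn = sigma * S                       # b(S_hat_n, t_n) for GBM
        Pn = np.exp(-2.0 * np.maximum(Snew - B, 0.0) * np.maximum(S - B, 0.0)
                    / (bn**2 * h))
        survive *= (1.0 - Pn)
        S = Snew
    payoff = np.exp(-r * T) * np.maximum(S - K, 0.0) * survive
    return payoff.mean(), payoff.std() / np.sqrt(N)
```

Note that if either endpoint is already below the barrier, the positive parts make the exponent zero, so $P_n = 1$ and the path contributes nothing, as it should.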
Barrier option

Alternative 2: again treating the drift and volatility as being approximately constant within each timestep, define the minimum within timestep $n$ as
$$\hat{M}_n = \tfrac{1}{2}\left( \hat{S}_{n+1} + \hat{S}_n - \sqrt{(\hat{S}_{n+1} - \hat{S}_n)^2 - 2\,b^2(\hat{S}_n, t_n)\,h \log U_n} \right)$$
where the $U_n$ are i.i.d. uniform $[0,1]$ random variables. The payoff is then
$$\hat{f}(\hat{S}) = \exp(-rT)\,(\hat{S}_M - K)^+\,1_{\min_n \hat{M}_n > B}$$
With this approach one can stop the path calculation as soon as one $\hat{M}_n$ drops below $B$.
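A sketch of Alternative 2 for the same option, sampling the within-timestep minimum $\hat{M}_n$. For simplicity this vectorised sketch does not stop knocked-out paths early; names and defaults are illustrative.

```python
import numpy as np

def barrier_call_minsample(S0=100.0, K=110.0, B=90.0, r=0.05, sigma=0.5,
                           T=1.0, M=64, N=10**5, rng=None):
    """Down-and-out call: knocked out if any sampled within-timestep minimum M_n < B."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.full(N, S0)
    alive = np.ones(N, dtype=bool)
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)
        Snew = S + r * S * h + sigma * S * dW
        bn = sigma * S                                   # b(S_hat_n, t_n) for GBM
        U = rng.uniform(size=N)
        Mn = 0.5 * (Snew + S - np.sqrt((Snew - S)**2 - 2.0 * bn**2 * h * np.log(U)))
        alive &= (Mn > B)
        S = Snew
    payoff = np.exp(-r * T) * np.maximum(S - K, 0.0) * alive
    return payoff.mean(), payoff.std() / np.sqrt(N)
```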
Weak convergence

[Figure: barrier option with improved payoff, comparison to the exact solution; log-log plot of weak error and Monte Carlo error against timestep h.]
Weak convergence

[Figure: barrier option with improved payoff, h versus 2h comparison; log-log plot of weak error and Monte Carlo error against timestep h.]
Lookback option

This is treated in a similar way to Alternative 2 for the barrier option. We construct a minimum $\hat{M}_n$ within each timestep and then the payoff is
$$\hat{f}(\hat{S}) = \exp(-rT)\left( \hat{S}_M - \min_n \hat{M}_n \right)$$
This is differentiable, so good for Greeks, unlike Alternative 2 for the barrier option.
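A sketch of this improved lookback payoff, tracking the running minimum of the sampled within-timestep minima $\hat{M}_n$; names and defaults are illustrative assumptions.

```python
import numpy as np

def lookback_call_minsample(S0=100.0, r=0.05, sigma=0.5, T=1.0, M=64, N=10**5, rng=None):
    """Floating-strike lookback call using min_n of the sampled within-timestep minima."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / M
    S = np.full(N, S0)
    Smin = S.copy()
    for n in range(M):
        dW = np.sqrt(h) * rng.standard_normal(N)
        Snew = S + r * S * h + sigma * S * dW
        bn = sigma * S                                   # b(S_hat_n, t_n) for GBM
        U = rng.uniform(size=N)
        Mn = 0.5 * (Snew + S - np.sqrt((Snew - S)**2 - 2.0 * bn**2 * h * np.log(U)))
        Smin = np.minimum(Smin, Mn)
        S = Snew
    payoff = np.exp(-r * T) * (S - Smin)
    return payoff.mean(), payoff.std() / np.sqrt(N)
```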
Weak convergence

[Figure: lookback option with improved payoff, comparison to the true solution; log-log plot of weak error and Monte Carlo error against timestep h.]
Weak convergence

[Figure: lookback option with improved payoff, h versus 2h comparison; log-log plot of weak error and Monte Carlo error against timestep h.]
Final Words

- Euler-Maruyama gives $O(h)$ weak convergence for European options
- Mean Square Error analysis shows how to balance weak errors and Monte Carlo sampling errors
- the natural approximation of barrier and lookback options leads to poor $O(\sqrt{h})$ weak convergence, due to the $O(\sqrt{h})$ path variation within each timestep
- the improved treatment based on Brownian bridge theory approximates the behaviour within each timestep as simple Brownian motion with constant drift and volatility, and gives $O(h)$ weak convergence