Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias José E. Figueroa-López 1 1 Department of Statistics Purdue University Computational Finance Seminar Purdue University January 17th, 2013 (Joint work with Peter Tankov from Paris 7, France)
Outline
1. Motivation: Barrier Options
2. Small-time asymptotics for stopped Lévy bridges
   - Formulation of the Problem
   - The Main Result
3. Monte Carlo Methods for stopped Lévy processes
   - Bridge MC Simulation
   - Adaptive simulation with bias control
   - Numerical Illustration
4. Conclusions
Barrier options
Set-up: Market consisting of a money market account with constant interest rate r and a risky asset with price process {S_t}_{0 ≤ t ≤ T}.
European Barrier Options: Options whose payoff at maturity is triggered or cancelled when the stock price "hits" a certain domain of price values.
Up-and-in call: Given a "barrier" value B > S_0,
  X = (S_T − K)^+ if sup_{0 ≤ t ≤ T} S_t ≥ B, and X = 0 otherwise; that is, X = (S_T − K)^+ 1_{sup_{t ≤ T} S_t ≥ B}.
Up-and-out call: Given a "barrier" value B > S_0,
  X = (S_T − K)^+ (1 − 1_{sup_{t ≤ T} S_t ≥ B}) = (S_T − K)^+ 1_{sup_{t ≤ T} S_t < B}.
Down-and-out call: Given a "barrier" value A < S_0,
  X = (S_T − K)^+ (1 − 1_{inf_{t ≤ T} S_t ≤ A}) = (S_T − K)^+ 1_{inf_{t ≤ T} S_t > A}.
Barrier options. Cont...
Payoff of a general double-barrier "out-type" barrier option:
  X := f(S_T) 1_{S_t ∈ (A,B) for all t ∈ [0,T]}   (0 ≤ A < S_0 < B ≤ ∞)
    = F(X_T) 1_{X_t ∈ (a,b) for all t ∈ [0,T]}
    = F(X_T) 1_{τ > T},
where
  X_t = ln(S_t/S_0)  (log-return process),  F(x) = f(S_0 e^x),
  a = ln(A/S_0),  b = ln(B/S_0)  (a < 0 < b),
  τ := inf{t > 0 : X_t ∉ (a, b)}  (exit or hitting time).
Arbitrage-Free Pricing
1. (FTAP) Under arbitrage-freeness, the time-0 premium of a European option with payoff X is given by the expected discounted payoff under a risk-neutral measure Q:
   Π(X; T) = E_Q(e^{−rT} X) = E_Q(e^{−rT} F(X_T) 1_{τ > T}).
2. In the Black-Scholes model, where S_t := S_0 e^{μt + σW_t} and X_t = μt + σW_t under the real-world probability P, there exists a unique risk-neutral measure Q. Under Q,
   S_t := S_0 e^{μ_Q t + σW_t},  X_t = μ_Q t + σW_t,  μ_Q := r − σ²/2.
3. The barrier premium is then
   Π(X; T) = E_Q(e^{−rT} F(μ_Q T + σW_T) 1_{μ_Q t + σW_t ∈ (a,b) for all t ≤ T}).
4. There is no closed formula; one needs to rely on numerical methods.
Traditional (sequential) Monte Carlo (MC) Method
Algorithm:
1. Time discretization: 0 = t_0 < ... < t_n = T (e.g., uniform sampling t_i = iT/n);
2. Sample simulation: X_{t_1}, ..., X_{t_n};
3. Approximation of the exit time: τ_n := min{t_k : X_{t_k} ∉ (a, b)};
4. Evaluation of the approximate final discounted payoff:
   Y := e^{−rT} F(X_T) 1_{τ > T}  ≈  Ỹ := e^{−rT} F(X_{t_n}) 1_{τ_n > T}.
5. Repeat (1)-(4) to generate m independent copies of Ỹ: Ỹ_1, ..., Ỹ_m.
6. MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m Ỹ_i.
Error analysis: There are two errors: discretization and statistical. The former is due to the approximation τ_n ≈ τ; the latter is due to the approximation (1/m) Σ_{i=1}^m Ỹ_i ≈ E_Q(Y).
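The sequential scheme above can be sketched in a few lines for the Black-Scholes model, where the increments of X are Gaussian. This is an illustrative sketch (the parameter values and function name are my own, not from the talk); note that knock-out is detected only at grid points, which is precisely where the discretization bias comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_mc_up_and_out(S0, K, B, r, sigma, T, n, m):
    """Sequential MC for an up-and-out call under Black-Scholes.

    The option is declared knocked out only when a *grid point* of X
    crosses the barrier, which is the source of the discretization bias."""
    dt = T / n
    mu_q = r - 0.5 * sigma**2                      # risk-neutral drift of X
    # m x n matrix of i.i.d. Gaussian increments of the log-return process X
    dX = mu_q * dt + sigma * np.sqrt(dt) * rng.standard_normal((m, n))
    X = np.cumsum(dX, axis=1)                      # skeleton X_{t_1}, ..., X_{t_n}
    b = np.log(B / S0)                             # log-barrier
    alive = X.max(axis=1) < b                      # approximates {tau_n > T}
    payoff = np.maximum(S0 * np.exp(X[:, -1]) - K, 0.0) * alive
    return np.exp(-r * T) * payoff.mean()

est = sequential_mc_up_and_out(S0=100.0, K=100.0, B=120.0, r=0.05,
                               sigma=0.2, T=1.0, n=252, m=20000)
```

Because paths may cross the barrier between grid points and come back, this estimator systematically overstates the survival event, hence the bias discussed next.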
Traditional (sequential) Monte Carlo (MC) Method
Drawbacks of sequential MC for stopped processes:
1. Highly biased due to the possibility of exiting the interval (a, b) between sampling observations.
2. The discretization error is of order n^{−1/2} for diffusions (Asmussen, Glynn, and Pitman, 1995) and for diffusions with finite jump activity (Dia and Lamberton, 2007);
3. Unknown for general Lévy processes, but it is expected to be much higher for infinite jump activity Lévy processes.
Improved MC for Markov processes (Baldi, 1995)
1. Suppose one can compute the exit probability of the "bridge" process:
   p(x, y, t) := P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y).
2. By the Markov property, for any fixed times 0 = t_0 < ... < t_n = T,
   E[F(X_T) 1_{X_u ∈ (a,b), u ∈ [0,T]}]
   = E[F(X_T) Π_{i=0}^{n−1} 1_{X_u ∈ (a,b), u ∈ (t_i, t_{i+1}]}]
   = E[F(X_T) E[Π_{i=0}^{n−1} 1_{X_u ∈ (a,b), u ∈ (t_i, t_{i+1}]} | X_{t_1}, ..., X_{t_n}]]
   = E[F(X_T) Π_{i=0}^{n−1} E[1_{X_u ∈ (a,b), u ∈ (t_i, t_{i+1}]} | X_{t_i}, X_{t_{i+1}}]]
   = E[F(X_T) Π_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i))].
Improved MC Method. Cont...
Algorithm:
1. Simulation of a discrete skeleton of the process: {(t_1, X_{t_1}), ..., (t_n, X_{t_n})};
2. Compute the (discounted) conditional expected payoff given the skeleton:
   Ỹ := e^{−rT} F(X_{t_n}) Π_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
3. Repeat (1)-(2) to generate m independent copies: Ỹ_1, ..., Ỹ_m.
4. MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m Ỹ_i.
Advantage: There is no discretization error; the only error is statistical (of order m^{−1/2} by the CLT).
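For Black-Scholes the bridge exit probability is available in closed form: conditionally on its endpoints, X is a Brownian bridge, whose probability of crossing the upper level b in time dt from x to y (with x, y < b) is exp(−2(b − x)(b − y)/(σ² dt)). The following sketch (my own illustrative code, one-sided barrier only) weights each path by the product of bridge survival probabilities, as in Baldi's scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def bridge_mc_up_and_out(S0, K, B, r, sigma, T, n, m):
    """Baldi-style MC for an up-and-out call under Black-Scholes.

    Each path is weighted by prod_i (1 - p(X_{t_i}, X_{t_{i+1}}, dt)),
    where p is the Brownian-bridge crossing probability of the level b,
    so no discretization bias remains."""
    dt = T / n
    mu_q = r - 0.5 * sigma**2
    dX = mu_q * dt + sigma * np.sqrt(dt) * rng.standard_normal((m, n))
    X = np.hstack([np.zeros((m, 1)), np.cumsum(dX, axis=1)])
    b = np.log(B / S0)
    x0, x1 = X[:, :-1], X[:, 1:]                   # consecutive skeleton points
    inside = (x0 < b) & (x1 < b)
    # bridge crossing probability; clip keeps the discarded branch finite
    p_exit = np.where(inside,
                      np.exp(-2 * np.clip((b - x0) * (b - x1), 0, None)
                             / (sigma**2 * dt)),
                      1.0)                          # a grid point crossed: exit
    survive = np.prod(1.0 - p_exit, axis=1)
    payoff = np.maximum(S0 * np.exp(X[:, -1]) - K, 0.0) * survive
    return np.exp(-r * T) * payoff.mean()

est = bridge_mc_up_and_out(S0=100.0, K=100.0, B=120.0, r=0.05,
                           sigma=0.2, T=1.0, n=252, m=20000)
```

The extension of this idea beyond Black-Scholes requires the small-time approximation of p(x, y, t) developed in the rest of the talk.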
The Problem
Important question: How to find the exit probability p(x, y, t)?
- Closed form available for the Black-Scholes model X_t = μt + σW_t;
- Small-time approximation known for diffusions (Baldi, 1995): dX_t := μ(t, X_t)dt + σ(t, X_t)dW_t;
- Unknown approximation for processes with jumps.
Key problem: Given a domain (a, b) with a < 0 < b and initial and final points x, y ∈ (a, b), we want to characterize the small-time asymptotics of the exit probability for Lévy bridges:
  p(x, y, t) := P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y),
where X := (X_t)_{t ≥ 0} is a general "Lévy model".
Exponential Lévy Model
1. From Black-Scholes to an exponential Lévy model: In Black-Scholes, the log-return process X_t := ln(S_t/S_0) is a B.M. with drift, σW_t + μt:
   - X_0 = 0;
   - X has independent increments: X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}} are independent for any t_0 < ... < t_n;
   - X has stationary increments: the distribution of X_{t+s} − X_s is independent of s;
   - The paths of X are continuous;
   - The distribution of X_{t+s} − X_s is Gaussian with mean μt and variance σ²t.
Exponential Lévy Model
1. From Black-Scholes to an exponential Lévy model: The log-return process X_t := ln(S_t/S_0) is a Lévy process:
   - X_0 = 0;
   - X has independent increments: X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}} are independent for any t_0 < ... < t_n;
   - X has stationary increments: the distribution of X_{t+s} − X_s is independent of s;
   - The paths of X may have discontinuities, but only of the "first type" (jump type);
   - In principle, there is no restriction on the distribution of X_{t+s} − X_s.
2. In a Lévy model, the law of X is characterized by its Lévy triplet (σ, ν, μ) via the Lévy-Khintchine formula:
   E[e^{iu(X_{s+t} − X_s)}] = e^{t(iuμ − σ²u²/2 + ∫ [e^{iux} − 1 − iux 1_{|x|≤1}] ν(dx))}.
   - The Lévy measure ν governs the intensity of jumps;
   - σ is the volatility of the continuous component;
   - μ is related to a deterministic drift or expected rate of growth.
Important Lévy models
1. (σ, 0, μ) corresponds to Brownian motion with drift: X_t = σW_t + μt;
2. (σ, ν, μ) with ν(dx) = s(x)dx and ∫_R s(x)dx < ∞ corresponds to
   X_t = μt + σW_t + J_t,
   where J_t is a compound Poisson process with jump intensity λ := ∫ s(x)dx and jump density p(x) := s(x)/λ.
   - If p(x) is Normal, then X is known as the Merton model;
   - If p(x) is double exponential (Laplace distribution), then the model is called the Kou model.
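For concreteness, increments of such a finite-activity jump-diffusion are straightforward to sample: add a Gaussian part and a Poisson-distributed number of jumps. A minimal sketch for the Kou model (the function name and parameter values are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def kou_increments(mu, sigma, lam, p_up, eta_up, eta_dn, dt, size):
    """Sample increments of a Kou jump-diffusion X_t = mu*t + sigma*W_t + J_t.

    J is compound Poisson with intensity lam and double-exponential jumps:
    +Exp(eta_up) with probability p_up, -Exp(eta_dn) with probability 1 - p_up."""
    gauss = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(size)
    n_jumps = rng.poisson(lam * dt, size)          # number of jumps per increment
    jumps = np.zeros(size)
    for i, k in enumerate(n_jumps):
        if k:
            up = rng.random(k) < p_up
            mag = np.where(up, rng.exponential(1.0 / eta_up, k),
                           -rng.exponential(1.0 / eta_dn, k))
            jumps[i] = mag.sum()
    return gauss + jumps
```

The mean of an increment is dt·(μ + λ(p_up/η_up − (1 − p_up)/η_dn)), which gives a quick sanity check of the sampler.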
Short-time asymptotics of stopped Lévy bridges
1. Problem: Characterize the small-time asymptotics of the exit probability:
   p(x, y, t) := P(∃ u ∈ [s, s + t] : X_u ∉ (a, b) | X_s = x, X_{s+t} = y).
2. Note that
   P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y)
   = P(X_u ∉ (a − x, b − x) for some u ∈ [0, t] | X_0 = 0, X_t = y − x).
   So it suffices to study the small-time asymptotics of the exit probability
   P(X_u ∉ (a, b) for some u ∈ [0, t] | X_t = y, X_0 = 0) = P(τ ≤ t | X_t = y, X_0 = 0),
   where τ := inf{u ≥ 0 : X_u ∉ (a, b)}, y ∈ (a, b), a < 0 < b.
Related results
From now on, we suppose ν(dx) = s(x)dx for some smooth function s : R\{0} → (0, ∞). Then the following two assertions hold:
1. For any x > 0,
   P(X_t ≥ x) ~ t ∫_x^∞ s(w)dw,  (t → 0);
2. [Léandre (1987)] If f_t(x) is the probability density of X_t, then
   lim_{t→0} (1/t) f_t(x) = s(x),  (x ≠ 0).
The Main Result
Theorem. [F-L & Tankov (2012)] For a.e. y ∈ (a, b)\{0}, we have, as t → 0,
  P(τ ≤ t | X_t = y) = (t²/2) ∫_{(a,b)^c} s(v)s(y − v)/f_t(y) dv + O(t^{3/2})
                     = (t/2) ∫_{(a,b)^c} s(v)s(y − v)/s(y) dv + O(t^{3/2}).
In particular,
  p(x, y, t) = (t/2) ∫_{(a−x,b−x)^c} s(v)s(y − x − v)/s(y − x) dv + O(t^{3/2}).
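The leading term is explicitly computable whenever s and f_t are known. For the Cauchy process, s(x) = 1/(πx²) and f_t(y) = t/(π(t² + y²)), so the first expression can be evaluated by numerical integration. A sketch (my own illustration, using SciPy's quadrature; function names are not from the talk):

```python
import numpy as np
from scipy.integrate import quad

def s_cauchy(x):
    """Lévy density of the standard Cauchy process: s(x) = 1/(pi x^2)."""
    return 1.0 / (np.pi * x * x)

def f_cauchy(t, y):
    """Marginal density of the Cauchy process at time t."""
    return t / (np.pi * (t * t + y * y))

def p_tilde(y, t, a, b):
    """Leading term of P(tau <= t | X_t = y) for the Cauchy process:
    (t^2/2) * int_{(a,b)^c} s(v) s(y - v) dv / f_t(y)."""
    integrand = lambda v: s_cauchy(v) * s_cauchy(y - v)
    left, _ = quad(integrand, -np.inf, a)          # integral over (-inf, a]
    right, _ = quad(integrand, b, np.inf)          # integral over [b, inf)
    return 0.5 * t * t * (left + right) / f_cauchy(t, y)
```

Since f_t(y) ≈ t·s(y) for small t, this quantity shrinks roughly linearly in t, consistent with the second expression in the theorem.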
Intuition from the finite jump-intensity case
Consider a compound Poisson process X_t = Σ_{n=0}^{N_t} ξ_n with jump intensity λ = 1. Then, for small δ,
  P(τ ≤ t | X_t = y) ≈ P(∃ u ≤ t : X_u ∉ (a, b); X_t ∈ (y − δ, y + δ)) / P(X_t ∈ (y − δ, y + δ))
  ≈ (t²/2) P(ξ_1 ∉ (a, b), ξ_1 + ξ_2 ∈ (y − δ, y + δ)) / (f_t(y) 2δ)
  →_{δ→0} (t²/2) ∫_{(a,b)^c} s(x)s(y − x) dx / f_t(y).
Fundamental reason: If, during a small time interval, X exits the interval (a, b) and then comes back to a point y ∈ (a, b), this essentially happens with two large jumps: the first one takes the process out of (a, b), while a second jump brings it back. We show that this logic extends to a large class of infinite jump activity Lévy processes.
Illustration
[Figure. Left: Cauchy bridge when it crosses level b = 2 during [0, 1]; Right: Cauchy bridge when it crosses level b = 2 during [0, 0.1].]
Remarkable asymptotic identity:
  P(τ ≤ t | X_t = y) ≈ 2 P(X_{t/2} ∉ (a, b) | X_t = y).
Back to Baldi's sequential MC Method
Algorithm:
1. Generation of the sample X_{t_1}, ..., X_{t_n} (from the distribution of the increments Δ_i^n X := X_{t_i} − X_{t_{i−1}});
2. Compute the expected payoff conditional on the discrete skeleton:
   X̃ := F(X_{t_n}) Π_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
3. Repeat (1)-(2) to generate m copies of approximate payoffs: X̃_1, ..., X̃_m.
4. MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m X̃_i.
Proposed solution: Short-time approximation.
  p(x, y, t) := P(∃ u ∈ [0, t] : X_u ∉ (a − x, b − x) | X_0 = 0, X_t = y − x)
             ≈ (t²/2) ∫_{(a−x,b−x)^c} s(v)s(y − x − v)/f_t(y − x) dv =: p̃(x, y, t),  (x ≠ y).
Back to Baldi's sequential MC Method. Cont...
Equivalent small-time form of the proposed approximation (using f_t(y − x) ≈ t s(y − x)):
  p(x, y, t) ≈ (t/2) ∫_{(a−x,b−x)^c} s(v)s(y − x − v)/s(y − x) dv =: p̃(x, y, t),  (x ≠ y).
Controlling the bias
1. We propose to generate the approximated expected payoff:
   X̃ := F(X_T) 1_{τ > T}  ≈  X̂ := F(X_{t_n}) Π_{i=0}^{n−1} (1 − p̃(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
2. The bias is introduced via the error in the approximation p(x, y, t_{i+1} − t_i) ≈ p̃(x, y, t_{i+1} − t_i), which in principle improves as t_{i+1} − t_i gets small;
3. How to choose a suitable mesh size between sampling times?
4. If we have at hand an estimate e_p(x, y, t) of the approximation error,
   |p(x, y, t) − p̃(x, y, t)| ≤ e_p(x, y, t),
   one may control the bias by splitting the subinterval [t_i, t_{i+1}] into two if e_p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i) ≥ γ(t_{i+1} − t_i), for some desired tolerance γ > 0;
5. This is most suitable with adaptive simulation (i.e., sample more points only when and where needed) and bridge Monte Carlo.
Bridge Monte Carlo
1. Simulate the final value X_T from the marginal law f_t(·) of X_t with t = T;
2. Simulate intermediate points using the bridge law:
   f^br_t(· | s, x; u, y) = Law(X_t | X_s = x, X_u = y),  (s < t < u).
Concretely, to generate X_0, X_{T/4}, X_{T/2}, X_{3T/4}, X_T, we proceed as follows:
- Simulate X_T from f_T(·);
- Simulate X_{T/2} from f^br_{T/2}(· | 0, 0; T, X_T);
- Simulate X_{T/4} from f^br_{T/4}(· | 0, 0; T/2, X_{T/2});
- Simulate X_{3T/4} from f^br_{3T/4}(· | T/2, X_{T/2}; T, X_T).
Advantages:
- The trajectory can be adaptively refined only where and when necessary;
- Variance reduction methods are easy to design by replacing the density of X_T with an importance sampling distribution.
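The dyadic refinement above is easiest to see in the Brownian case, where the bridge law is Gaussian: X_{(s+u)/2} given X_s = x, X_u = y is N((x + y)/2, σ²(u − s)/4). A minimal sketch of the construction (my own illustration; for a general Lévy process the midpoint would instead be drawn from f^br):

```python
import numpy as np

rng = np.random.default_rng(3)

def brownian_dyadic_bridge(T, x0, xT, levels, sigma=1.0):
    """Dyadic bridge construction: start from (0, x0), (T, xT) and repeatedly
    insert midpoints sampled from the (Gaussian) bridge law."""
    times = [0.0, T]
    vals = [x0, xT]
    for _ in range(levels):
        new_t, new_v = [], []
        for s, u, x, y in zip(times[:-1], times[1:], vals[:-1], vals[1:]):
            mid_t = 0.5 * (s + u)
            # Brownian bridge midpoint: N((x + y)/2, sigma^2 (u - s)/4)
            mid_x = 0.5 * (x + y) + sigma * np.sqrt((u - s) / 4.0) * rng.standard_normal()
            new_t += [s, mid_t]
            new_v += [x, mid_x]
        new_t.append(times[-1])
        new_v.append(vals[-1])
        times, vals = new_t, new_v
    return np.array(times), np.array(vals)
```

After k levels the skeleton has 2^k + 1 points with the endpoints pinned at (0, x0) and (T, xT); in the adaptive scheme, only the intervals violating the error tolerance would be refined further.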
Simulation from the Lévy bridge law
1. The Lévy bridge density (the density of X_{s+t/2} given X_s = 0, X_{s+t} = y) is
   f^br_{t/2}(x | 0, 0; t, y) = f_{t/2}(x) f_{t/2}(y − x) / f_t(y).
2. In the case of unimodal marginal densities f_t, for all t > 0,
   f^br_{t/2}(x | 0, 0; t, y) ≤ [2 f_{t/2}(y/2)/f_t(y)] × (1/2)(f_{t/2}(x) + f_{t/2}(y − x)),
   where the first factor is the rejection rate and the second factor is the proposal density f̄(x).
3. This leads us to propose a new rejection-based method:
   - Simulate X̃ from f̄(x): X̃ = X̂ w.p. 1/2 and X̃ = y − X̂ w.p. 1/2, with X̂ ~ f_{t/2}(·).
   - Simulate U ~ Unif(0, 1), independent of X̃.
   - If f_{t/2}(X̃) f_{t/2}(y − X̃) ≥ U f_{t/2}(y/2) (f_{t/2}(X̃) + f_{t/2}(y − X̃)), then accept X̃; otherwise, reject and go back to the first step.
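The rejection scheme above can be implemented directly whenever f_t is available in closed form. A sketch for the Cauchy process, whose marginals f_t(x) = t/(π(t² + x²)) are symmetric and unimodal (my own illustration; the function names are not from the talk):

```python
import numpy as np

rng = np.random.default_rng(4)

def f_cauchy(t, x):
    """Marginal density of the Cauchy process at time t."""
    return t / (np.pi * (t * t + x * x))

def sample_cauchy_bridge_mid(t, y, max_iter=100000):
    """Rejection sampler for the midpoint X_{t/2} of a Cauchy bridge with
    X_0 = 0, X_t = y, following the mixture-proposal scheme above."""
    h = t / 2.0
    for _ in range(max_iter):
        xhat = h * rng.standard_cauchy()           # X-hat ~ f_{t/2}
        x = xhat if rng.random() < 0.5 else y - xhat   # mixture proposal
        u = rng.random()
        f1, f2 = f_cauchy(h, x), f_cauchy(h, y - x)
        # acceptance test: f1*f2 >= U * f_{t/2}(y/2) * (f1 + f2)
        if f1 * f2 >= u * f_cauchy(h, y / 2.0) * (f1 + f2):
            return x
    raise RuntimeError("rejection sampler failed to accept")
```

By the symmetry x ↔ y − x of the bridge density, the samples are centered at y/2, which gives a simple check of correctness.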
Bridge-based MC method with controlled bias
Algorithm. Given x and y, the algorithm simulates a discrete skeleton of a Lévy bridge X = {(T_i, X_{T_i})}_{i=0}^N on [0, T] with T_0 = 0, T_N = T, X_0 = x, X_T = y such that
  e_p(X_{T_(i)}, X_{T_(i+1)}, T_(i+1) − T_(i)) ≤ γ(T_(i+1) − T_(i)),    (1)
where 0 = T_(0) < ... < T_(N) are the "order statistics" of {T_0, ..., T_N}. It returns
  Ñ(X) := Π_{i=0}^{N−1} (1 − p̃(X_{T_(i)}, X_{T_(i+1)}, T_(i+1) − T_(i))).
FUNCTION N(parameters: x, y, T)
  IF x ∉ D OR y ∉ D THEN RETURN 0
  IF e_p(x, y, T) ≤ γT THEN
    RETURN 1 − p̃(x, y, T)
  ELSE
    Sample X̂ from the bridge distribution of X_{T/2} given X_T = y
    RETURN N(x, X̂, T/2) · N(X̂, y, T/2)
  END IF
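The recursive function above translates almost line for line into code once p̃, the error estimate e_p, and the bridge midpoint sampler are supplied as inputs. A minimal sketch (my own illustration): the generic recursion is instantiated with a toy error estimate e_p = t^{3/2} and, for self-containedness, with Brownian motion and a single upper barrier, where the exit probability is known exactly.

```python
import numpy as np

rng = np.random.default_rng(5)

def survival_weight(x, y, T, a, b, p_approx, err, sample_mid, gamma):
    """Recursive adaptive weight N(x, y, T): conditional survival factor
    prod(1 - p~) over an adaptively refined skeleton; an interval is split
    whenever the error estimate exceeds the tolerance gamma * dt."""
    if not (a < x < b) or not (a < y < b):
        return 0.0                                 # the skeleton itself exits
    if err(x, y, T) <= gamma * T:
        return 1.0 - p_approx(x, y, T)
    mid = sample_mid(x, y, T)                      # bridge midpoint X_{T/2}
    return (survival_weight(x, mid, T / 2, a, b, p_approx, err, sample_mid, gamma)
            * survival_weight(mid, y, T / 2, a, b, p_approx, err, sample_mid, gamma))

# Toy instantiation: Brownian motion (sigma = 1), upper barrier b = 1 only.
p_bm = lambda x, y, t: np.exp(-2 * (1.0 - x) * (1.0 - y) / t)   # exact exit prob.
err_bm = lambda x, y, t: t**1.5                    # toy error estimate, O(t^{3/2})
mid_bm = lambda x, y, t: 0.5 * (x + y) + np.sqrt(t / 4.0) * rng.standard_normal()

w = survival_weight(0.0, 0.2, 1.0, -np.inf, 1.0, p_bm, err_bm, mid_bm, gamma=0.5)
```

With these choices the recursion refines [0, T] down to subintervals of length 1/4 (where t^{3/2} ≤ γt) and returns a survival weight in [0, 1]; averaging such weights over independent bridges gives the bias-controlled estimator.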
Illustration
[Figure: A typical trajectory simulated by the adaptive algorithm (Cauchy process). The algorithm places more points at the parts of the trajectory which are close to the boundary.]
Some important issues
1. The ordered sampling times 0 = T_(0) < ... < T_(N) = T are random (and "non-anticipative") times. Is the decomposition
   E[F(X_T) 1_{τ > T}] = E[F(X_T) Π_{i=0}^{N−1} (1 − p(X_{T_(i)}, X_{T_(i+1)}, T_(i+1) − T_(i)))]
   still true?
2. Does the algorithm terminate in finite time? Note that we require
   e_p(X_{T_(i)}, X_{T_(i+1)}, T_(i+1) − T_(i)) ≤ γ(T_(i+1) − T_(i)),
   for all i = 0, ..., N − 1.
3. Does the algorithm attain the desired controlled bias?
Convergence of the algorithm and bias control
Theorem. [F-L & Tankov (2012)] Suppose that X satisfies one of the following conditions:
1. X does not hit points; that is, P(τ_{x} < ∞) = 0 for all x, where τ_{x} := inf{s > 0 : X_s = x} or, equivalently,
   ∫_R Re(1/(1 + ψ(u))) du = ∞;
2. X has finite variation (e.g., the Variance Gamma process).
Also, assume the approximation error satisfies
   lim_{t→0} sup_{x,y ∈ (a′,b′)} (1/t) e_p(x, y, t) = 0,  for all a < a′ < b′ < b.
Convergence of the algorithm and bias control. Cont...
Theorem. [F-L & Tankov (2012)] Then, for any T > 0, γ > 0, and F such that E|F(X_T)| < ∞, we have:
(i) The previous adaptive algorithm terminates in finite time a.s.
(ii) The random skeleton X = {(T_i, X_{T_i})}_{i=0}^N generated by the above algorithm satisfies
   |E[F(X_T) 1_{τ > T}] − E[F(X_T) Ñ(X)]| ≤ γ T E[|F(X_T)|].
Error estimate for self-decomposable Lévy processes
Theorem. [F-L & Tankov (2012)] Fix ε > 0 small enough and let
(i) λ_ε := ∫_{|x|≥ε} s(x)dx,  b_ε := b − ∫_{ε<|x|≤1} x ν(dx),  σ_ε² := σ² + ∫_{|x|≤ε} x² ν(dx);
(ii) a_ε := sup_{|x|>ε} s(x),  a′_ε := sup_{|x|>ε} |s′(x)|,  C(η, ε) := (e σ_ε² / (εη))^{η/ε};
(iii) α := b − a and Δ_y := (b − y) ∧ (y − a) > 0.
For t > 0 small enough, the approximation p̃(0, y, t) = (t²/2) ∫_{(a,b)^c} s(v)s(y − v)/f_t(y) dv satisfies
|p̃(0, y, t) − p(0, y, t)| ≤ (1/f_t(y)) { e^{−λ_ε t} C(Δ_y/4, ε) t^{Δ_y/(4ε)} (8 + 2a_ε t + a′_ε λ_ε t²)
  + 2 e^{−λ_ε t} a_ε C(α/2, ε) t^{1+α/(2ε)} (1 + tλ_ε) + (λ_ε² a_ε/2) t³
  + a_ε λ_ε^{−1} (1 − e^{−λ_ε t}[1 + λ_ε t + (λ_ε t)²/2])
  + e^{−λ_ε t} t² [2a_ε² + λ_ε a′_ε] (σ_ε t^{1/2} + (|b_ε|/2) t) }.
Numerical Example 1: Cauchy Process
[Figure: Computation of P(τ > 1) := P[sup_{0 ≤ s ≤ 1} X_s ≤ 10^{−2}] (a = −∞, b = 0.01, and F ≡ 1). Left: Values computed by the uniform discretization algorithm (UDA) and the adaptive algorithm (AA), as a function of the computational time (in sec.), for 10^6 paths, together with the true value and 5% confidence bounds. Different points on the graph correspond to different numbers of discretization times n for the UDA (from 256 to 16384) and different values of the tolerance parameter γ for the AA (from 9 to 9·10^{−3}). Right: Comparison of the discretization bias for the uniform discretization and the bias for the adaptive algorithm.]
Conclusions and extensions
Main results:
- Asymptotics for stopped Lévy bridges with explicitly computable error bounds;
- Bridge simulation method for general Lévy processes;
- Application to Monte Carlo simulation of stopped Lévy processes with controlled bias via an adaptive bridge Monte Carlo simulation method.
Extensions:
- Simulation of stopping times and overshoots;
- Multidimensional Lévy processes in confined domains;
- General Markov jump processes.
For Further Reading
- Figueroa-López & Tankov. Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias. Preprint, 2012. Available at arXiv and at www.stat.purdue.edu/~figueroa.
- Figueroa-López & Houdré. Small-time expansions for the transition distributions of Lévy processes. Stochastic Processes and Their Applications, 119:3862-3889, 2009.
- Léandre. Densité en temps petit d'un processus de sauts. Séminaire de Probabilités XXI, Lecture Notes in Math., J. Azéma, P.A. Meyer, and M. Yor (eds), 1987.