Analysis of Monte Carlo Calibration of Financial Market Models
Christoph Käbe, Universität Trier
Workshop on PDE Constrained Optimization of Certain and Uncertain Processes
June 03, 2009
Monte Carlo Calibration. Fundamentals

Focus will be on the calibration of European call options.

Definition 1 (European Call Option). A European call option is the right to buy a predetermined underlying (e.g. a stock) at a certain time $T$ (maturity) for a certain price $K$ (strike).

Definition 2 (Price of a Call Option). The price of a call option $C$ at $t = 0$ can be calculated through
$$C = e^{-rT}\,\mathbb{E}\big(\max(S_T - K, 0)\big)$$
where $r$ is the risk-free rate and $S_T$ the value of the underlying at future time $T$.
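As a concrete illustration of Definition 2, the following minimal sketch prices a European call by Monte Carlo, assuming a geometric Brownian motion for the underlying (the model choice, the function name, and all parameter values are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def mc_call_price(s0, k, t, r, sigma, n_paths, seed=0):
    """Monte Carlo estimate of C = e^{-rT} E[max(S_T - K, 0)],
    assuming geometric Brownian motion dS = r S dt + sigma S dW."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Exact GBM transition to maturity T (no time stepping needed here).
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(s_t - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

print(mc_call_price(s0=100.0, k=100.0, t=1.0, r=0.05, sigma=0.2, n_paths=100_000))
```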
Monte Carlo Calibration. Stochastic Differential Equation

$L$-dimensional system of stochastic differential equations (SDE):
$$dY_t(x) = a(x, Y_t(x))\,dt + b(x, Y_t(x))\,dW_t$$
where
- $x \in \mathbb{R}^P$: vector of parameters
- $Y_t = [S_t, Y_t^2, \ldots, Y_t^L]^T \in \mathbb{R}^L$: solution of the SDE
- $W_t = (W_t^1, \ldots, W_t^L) \in \mathbb{R}^L$: vector of Brownian motions
- $a : \mathbb{R}^P \times \mathbb{R}^L \to \mathbb{R}^L$, $b : \mathbb{R}^P \times \mathbb{R}^L \to \mathbb{R}^{L \times L}$

Componentwise:
$$dY_t^l = a^l(x, Y_t(x))\,dt + \sum_{\nu=1}^{L} b^{l,\nu}(x, Y_t(x))\,dW_t^\nu, \quad l = 1, \ldots, L.$$
Monte Carlo Calibration. Least Squares Problem

Continuous Optimization Problem (True Problem):
$$\min_{x \in X} f(x) := \sum_{i=1}^{I} \big( C_i(x) - C_i^{obs} \big)^2$$
where $C_i(x) = e^{-r T_i}\,\mathbb{E}\big(\max(S_{T_i}(x) - K_i, 0)\big)$,
subject to $dY_t(x) = a(x, Y_t(x))\,dt + b(x, Y_t(x))\,dW_t$, $Y_0 > 0$, with $X \subset \mathbb{R}^P$ convex and compact.

Discretized Optimization Problem (SAA Problem):
$$\min_{x \in X} f_{M,\Delta t,\epsilon}(x) := \sum_{i=1}^{I} \big( C^i_{M,\Delta t,\epsilon}(x) - C_i^{obs} \big)^2$$
where
$$C^i_{M,\Delta t,\epsilon}(x) := e^{-r T_i}\,\frac{1}{M} \sum_{m=1}^{M} \pi_\epsilon\big( s^m_{N_i,\epsilon}(x) - K_i \big),$$
subject to the Euler-Maruyama scheme
$$y^m_{n+1,\epsilon}(x) = y^m_{n,\epsilon}(x) + a_\epsilon(x, y^m_{n,\epsilon}(x))\,\Delta t_n + b_\epsilon(x, y^m_{n,\epsilon}(x))\,\Delta W^m_n.$$
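A minimal sketch of the SAA objective $f_{M,\Delta t,\epsilon}$ for a one-dimensional SDE and a single observed option; the smoothing `pi_eps` is a hypothetical choice, since the slides do not specify the concrete $\pi_\epsilon$:

```python
import numpy as np

def pi_eps(z, eps):
    # Hypothetical smooth approximation of max(z, 0); any C^1 smoothing works here.
    return 0.5 * (z + np.sqrt(z**2 + eps**2))

def saa_objective(x, c_obs, s0, k, t_mat, r, a, b, n_steps, n_paths, eps, seed=0):
    """f_{M,dt,eps}(x) for one option: squared misfit of the smoothed,
    Euler-Maruyama-discretized Monte Carlo price against the observed price."""
    rng = np.random.default_rng(seed)
    dt = t_mat / n_steps
    s = np.full(n_paths, s0)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        s = s + a(x, s) * dt + b(x, s) * dw   # Euler-Maruyama step
    c_model = np.exp(-r * t_mat) * pi_eps(s - k, eps).mean()
    return (c_model - c_obs)**2

# Example: Black-Scholes-type coefficients with x = volatility.
f = saa_objective(x=0.2, c_obs=10.45, s0=100.0, k=100.0, t_mat=1.0, r=0.05,
                  a=lambda x, s: 0.05 * s, b=lambda x, s: x * s,
                  n_steps=50, n_paths=20_000, eps=1e-3)
print(f)
```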
Monte Carlo Calibration. Smoothing Non-differentiabilities

Consider Heston's model:
$$dS_{t,\epsilon} = (r - \delta) S_{t,\epsilon}\,dt + \sqrt{v^+_{t,\epsilon}}\, S_{t,\epsilon}\,dW^1_t$$
$$dv_{t,\epsilon} = \kappa(\theta - v^+_{t,\epsilon})\,dt + \sigma \sqrt{v^+_{t,\epsilon}}\,\big(\rho\,dW^1_t + \sqrt{1 - \rho^2}\,dW^2_t\big)$$
The positive part $v^+ = \max(v, 0)$ is non-differentiable at $v = 0$ and is replaced by the smoothing $\pi_\epsilon$.
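The slides do not spell out the smoothing $\pi_\epsilon$; a typical hypothetical choice is a smooth approximation of $v^+ = \max(v, 0)$, for example:

```python
import numpy as np

def pi_eps(v, eps):
    """Hypothetical smoothing of v^+ = max(v, 0); pi_eps -> v^+ as eps -> 0."""
    return 0.5 * (v + np.sqrt(v**2 + eps**2))

v = np.linspace(-0.1, 0.1, 5)
print(np.maximum(v, 0.0))      # kinked positive part
print(pi_eps(v, 1e-2))         # smooth approximation, differentiable at v = 0
```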
Overview. Table of Contents

1. Monte Carlo Calibration
2. Convergence
   - Overview
   - Pathwise Uniqueness
   - Uniform Convergence
   - First Order Optimality Condition
3. Conclusions
Overview. Convergence

True Problem:
$$\min_{x \in X} f(x) := \sum_{i=1}^{I} \big( C_i(x) - C_i^{obs} \big)^2$$

SAA Problem:
$$\min_{x \in X} f_{M_k,\Delta t_k,\epsilon_k}(x) := \sum_{i=1}^{I} \big( C^i_{M_k,\Delta t_k,\epsilon_k}(x) - C_i^{obs} \big)^2$$

- Increase the number of simulations: $M_k \to \infty$
- Decrease the discretization step size: $\Delta t_k \to 0$
- Decrease the smoothing parameter: $\epsilon_k \to 0$

Let $x_k \in X$ be solutions of the SAA problems. Since $X$ is compact, there is a subsequence $x_{k_l} \to x^*$ with $x^* \in X$.

Question: Is $x^*$ a solution of the true problem?
Overview. Local Minima

$$\min_{x \in [-1,1]} f(x) := x^2 \qquad\qquad \min_{x \in [-1,1]} f_M(x) := x^2 - M^{-1} \sin(M x^2)$$

Local minima might lead to problems: a first-order method may terminate at a spurious stationary point of $f_M$ rather than at the unique minimizer of $f$.
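With the sign convention used above, the derivative is $f_M'(x) = 2x(1 - \cos(Mx^2))$, so besides the true minimizer $x = 0$ there are stationary points wherever $Mx^2$ is a multiple of $2\pi$, and their number grows with $M$. A quick sketch enumerating them on $(0, 1]$:

```python
import numpy as np

def stationary_points(m):
    """Points x > 0 with f_M'(x) = 2x(1 - cos(m x^2)) = 0, i.e. m x^2 = 2 pi j."""
    j = np.arange(1, int(m / (2 * np.pi)) + 1)
    return np.sqrt(2 * np.pi * j / m)      # all lie in (0, 1]

for m in (10, 100, 1000):
    print(m, len(stationary_points(m)))    # count grows linearly in m
```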
Overview. Literature Review

True Problem: $\min_{x \in X} h(x) := \mathbb{E}(H(x, \omega))$
SAA Problem: $\min_{x \in X} h_M(x) := \frac{1}{M} \sum_{m=1}^{M} H(x, \omega_m)$

- Shapiro (2000): convergence if $\min h_M(x)$ produces a global minimum
- Rubinstein & Shapiro (1993): convergence to a first-order critical point under the assumption that $H(x, \omega)$ is dominated integrable and continuous
- Bastin et al. (2006): additionally second-order convergence, even for stochastic constraints

Here: dependence on three error sources: Monte Carlo, discretization and smoothing!
Overview. Goal: First Order Optimality

Steps to be taken:
1. Pathwise uniqueness of the SDE
2. Uniform convergence of objectives and gradients:
$$\lim_{k \to \infty} \sup_{x \in X} \big| f_{M_k,\Delta t_k,\epsilon_k}(x) - f(x) \big| = 0, \qquad \lim_{k \to \infty} \sup_{x \in X} \big\| \nabla f_{M_k,\Delta t_k,\epsilon_k}(x) - \nabla f(x) \big\| = 0$$
3. First order optimality condition:
$$\nabla f(x^*)^T (x - x^*) \geq 0 \quad \forall x \in X$$
Pathwise Uniqueness. Pathwise Uniqueness under Lipschitz Continuity

Theorem 3 (Kloeden & Platen). Under the assumptions that
- there exists a constant $K_{Lip} > 0$ such that for all $t \in [0, T]$ and $y, z \in \mathbb{R}^L$
$$|a(t, y) - a(t, z)| + |b(t, y) - b(t, z)| \leq K_{Lip}\,|y - z|,$$
- there exists a constant $K_{Grow} > 0$ such that for all $t \in [0, T]$ and $y \in \mathbb{R}^L$
$$|a(t, y)| + |b(t, y)| \leq K_{Grow}\,(1 + |y|),$$
the stochastic differential equation
$$dY_t = a(t, Y_t)\,dt + b(t, Y_t)\,dW_t, \quad Y_0 \in (0, \infty),$$
has a pathwise unique strong solution $Y_t$ on $[0, T]$.
Pathwise Uniqueness. Problem: Lipschitz Continuity

Consider Heston's model:
$$dS_{t,\epsilon} = (r - \delta) S_{t,\epsilon}\,dt + \sqrt{\pi_\epsilon(v_{t,\epsilon})}\, S_{t,\epsilon}\,dW^1_t$$
$$dv_{t,\epsilon} = \kappa(\theta - \pi_\epsilon(v_{t,\epsilon}))\,dt + \sigma \sqrt{\pi_\epsilon(v_{t,\epsilon})}\,\big(\rho\,dW^1_t + \sqrt{1 - \rho^2}\,dW^2_t\big)$$
Lipschitz continuity holds only for $\epsilon > 0$: the smoothed square root degenerates as $\epsilon \to 0$, which is why a condition weaker than Lipschitz continuity is needed.
Pathwise Uniqueness. Yamada Condition

Theorem 4. Let
$$dY_{t,\epsilon} = a_\epsilon(t, Y_{t,\epsilon})\,dt + b_\epsilon(t, Y_{t,\epsilon})\,dW_t$$
with
$$a_\epsilon(t, Y_{t,\epsilon}) = \big(a^1_\epsilon(t, Y^1_{t,\epsilon}), \ldots, a^L_\epsilon(t, Y^L_{t,\epsilon})\big)^T, \qquad b_\epsilon(t, Y_{t,\epsilon}) = \mathrm{diag}\big(b^1_\epsilon(t, Y^1_{t,\epsilon}), \ldots, b^L_\epsilon(t, Y^L_{t,\epsilon})\big).$$
If there exists a positive increasing function $\beta : [0, \infty) \to [0, \infty)$ with
$$|b^i(t, x) - b^i(t, y)| \leq \beta(|x - y|) \quad \forall x, y \in \mathbb{R},\ i = 1, \ldots, L$$
and
$$\int_0^\delta \beta^{-2}(z)\,dz = \infty$$
for an arbitrarily small $\delta > 0$, and a positive increasing concave function $\alpha : [0, \infty) \to [0, \infty)$ such that
$$|a^i(t, x) - a^i(t, y)| \leq \alpha(|x - y|) \quad \forall x, y \in \mathbb{R},\ i = 1, \ldots, L$$
with
$$\int_0^\delta \alpha^{-1}(z)\,dz = \infty$$
for an arbitrarily small $\delta > 0$, then the SDE has a pathwise unique solution.

Proof: Yamada & Watanabe (1971).
Pathwise Uniqueness. Yamada Condition (3)

Reconsider Heston's model:
$$dS_{t,\epsilon} = (r - \delta) S_{t,\epsilon}\,dt + \sqrt{\pi_\epsilon(v_{t,\epsilon})}\, S_{t,\epsilon}\,dW^1_t$$
$$dv_{t,\epsilon} = \kappa(\theta - \pi_\epsilon(v_{t,\epsilon}))\,dt + \sigma \sqrt{\pi_\epsilon(v_{t,\epsilon})}\,\big(\rho\,dW^1_t + \sqrt{1 - \rho^2}\,dW^2_t\big)$$
The drift is Lipschitz continuous:
$$|a^i(t, x) - a^i(t, y)| \leq K_{Lip}\,|x - y| \quad \forall x, y \in \mathbb{R},\ i = 1, 2,$$
and the diffusion is Hölder continuous:
$$|b^i(t, x) - b^i(t, y)| \leq \sqrt{|x - y|} \quad \forall x, y \in \mathbb{R},\ i = 1, 2,$$
with
$$\int_0^\delta \frac{1}{K_{Lip}\, z}\,dz = \infty, \qquad \int_0^\delta \frac{1}{z}\,dz = \infty.$$
Hence $\alpha(z) = K_{Lip}\, z$ and $\beta(z) = \sqrt{z}$ satisfy the Yamada condition.
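The Hölder bound used above, $|\sqrt{x} - \sqrt{y}| \leq \sqrt{|x - y|}$ for $x, y \geq 0$, is easy to sanity-check numerically (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 5.0, size=(2, 1_000_000))
lhs = np.abs(np.sqrt(x) - np.sqrt(y))
rhs = np.sqrt(np.abs(x - y))
print(bool(np.all(lhs <= rhs + 1e-12)))  # True: the bound holds on all samples
```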
Pathwise Uniqueness. Problem: Independent Components Required

Heston's model:
$$dS_{t,\epsilon} = (r - \delta) S_{t,\epsilon}\,dt + \sqrt{\pi_\epsilon(v_{t,\epsilon})}\, S_{t,\epsilon}\,dW^1_t$$
$$dv_{t,\epsilon} = \kappa(\theta - \pi_\epsilon(v_{t,\epsilon}))\,dt + \sigma \sqrt{\pi_\epsilon(v_{t,\epsilon})}\,dW^2_t$$

Solution:
- The process $v_{t,\epsilon}$ has a pathwise unique solution following Yamada's theorem.
- Insert this unique solution into the process $S_{t,\epsilon}$.
- The process $S_{t,\epsilon}$ then has a pathwise unique solution following Yamada's theorem.

Hence the coupled system has a pathwise unique solution via Yamada's theorem; see the sketch below for the corresponding two-stage simulation order.
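A minimal sketch mirroring this two-stage argument in simulation: the variance path is generated first on its own, then plugged into the Euler recursion for $S$ (independent Brownian increments, hypothetical smoothing `pi_eps`, illustrative parameters):

```python
import numpy as np

def pi_eps(v, eps):
    # Hypothetical smooth approximation of max(v, 0).
    return 0.5 * (v + np.sqrt(v**2 + eps**2))

def heston_paths(s0, v0, t, n, r, delta, kappa, theta, sigma, eps, rng):
    """Simulate (S, v) with independent drivers W^1, W^2:
    first the autonomous v-path, then S driven by the frozen v-path."""
    dt = t / n
    dw1, dw2 = rng.standard_normal((2, n)) * np.sqrt(dt)
    v = np.empty(n + 1); v[0] = v0
    for i in range(n):                      # stage 1: variance process alone
        v[i + 1] = v[i] + kappa * (theta - pi_eps(v[i], eps)) * dt \
                        + sigma * np.sqrt(pi_eps(v[i], eps)) * dw2[i]
    s = np.empty(n + 1); s[0] = s0
    for i in range(n):                      # stage 2: S given the unique v-path
        s[i + 1] = s[i] + (r - delta) * s[i] * dt \
                        + np.sqrt(pi_eps(v[i], eps)) * s[i] * dw1[i]
    return s, v

s, v = heston_paths(100.0, 0.04, 1.0, 250, 0.05, 0.0, 2.0, 0.04, 0.3, 1e-3,
                    np.random.default_rng(0))
print(s[-1], v[-1])
```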
Uniform Convergence. Convergence of the Problem

Reconsider:
$$|f_{M,\Delta t,\epsilon}(x) - f(x)| \leq \underbrace{|f_{M,\Delta t,\epsilon}(x) - f_{\Delta t,\epsilon}(x)|}_{(1)} + \underbrace{|f_{\Delta t,\epsilon}(x) - f_\epsilon(x)|}_{(2)} + \underbrace{|f_\epsilon(x) - f(x)|}_{(3)}$$
with (1) the Monte Carlo error, (2) the discretization error and (3) the smoothing error.

Assumption: there exists a constant $K_{Grow} > 0$ such that for all $t \in [0, T]$ and $y \in \mathbb{R}^L$
$$|a_\epsilon(t, y)| + |b_\epsilon(t, y)| \leq K_{Grow}\,(1 + |y|).$$
Uniform Convergence. Convergence of the Smoothed and Discretized SDE

Theorem 5. Consider the SDE
$$dY_{t,\epsilon} = a_\epsilon(t, Y_{t,\epsilon})\,dt + b_\epsilon(t, Y_{t,\epsilon})\,dW_t$$
and the continuously interpolated process
$$y_{t,\epsilon} = Y_0 + \int_0^t a_\epsilon(x, y_{\tau(s),\epsilon})\,ds + \int_0^t b_\epsilon(x, y_{\tau(s),\epsilon})\,dW_s$$
where $\tau(s) = \tau_n$ for $s \in [\tau_n, \tau_{n+1})$ and $n = 0, \ldots, N - 1$. Assuming that the growth condition holds and the SDE has a pathwise unique solution, it holds that
$$\lim_{\Delta t \to 0} \sup_{x \in X} \mathbb{E}\big( |y_{T,\epsilon} - Y_{T,\epsilon}|^2 \big) = 0.$$
Proof: Kaneko & Nakao (1988).
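Theorem 5 can be illustrated empirically: for a geometric Brownian motion the exact solution is known in closed form, so the mean-square error of the Euler scheme at $T$ can be estimated for shrinking step sizes (a sketch with illustrative parameters; the error should shrink as $\Delta t \to 0$):

```python
import numpy as np

def euler_vs_exact_mse(n_steps, n_paths=20_000, s0=1.0, mu=0.05, sig=0.2, t=1.0, seed=0):
    """Mean-square error at T between the Euler scheme and the exact GBM solution
    driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    s = np.full(n_paths, s0)
    w_t = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        s = s + mu * s * dt + sig * s * dw      # Euler-Maruyama
        w_t += dw
    s_exact = s0 * np.exp((mu - 0.5 * sig**2) * t + sig * w_t)
    return np.mean((s - s_exact)**2)

for n in (10, 40, 160, 640):
    print(n, euler_vs_exact_mse(n))
```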
Uniform Convergence. Convergence of the Smoothed SDE

Theorem 6. Assume that the growth condition and pathwise uniqueness hold for a solution of
$$dY_t = a(t, Y_t)\,dt + b(t, Y_t)\,dW_t$$
and let $Y_{t,\epsilon}$ be a solution of
$$dY_{t,\epsilon} = a_\epsilon(t, Y_{t,\epsilon})\,dt + b_\epsilon(t, Y_{t,\epsilon})\,dW_t.$$
If $a_\epsilon$ and $b_\epsilon$ converge uniformly to $a$ and $b$ for $\epsilon \to 0$, i.e.
$$\lim_{\epsilon \to 0} \sup_{t \in [0,T]} \sup_{x \in X} \big( |a_\epsilon(t, x) - a(t, x)| + \|b_\epsilon(t, x) - b(t, x)\| \big) = 0,$$
where $\|\cdot\|$ is a matrix norm, it holds that
$$\lim_{\epsilon \to 0} \sup_{x \in X} \mathbb{E}\big( |Y_{t,\epsilon} - Y_t|^2 \big) = 0.$$
Proof: Kaneko & Nakao (1988).
Uniform Convergence. Dominated Integrability & Continuity

Lemma 7. Assume that the family $\{\pi(S_T(x, \omega) - K),\ x \in X\}$ is dominated by a $Q$-integrable function $P(\omega)$. Then there exist $\overline{\Delta t} > 0$ and $\bar{\epsilon} > 0$ such that $\{\pi_\epsilon(s_{N,\epsilon}(x, \omega) - K),\ x \in X\}$ is dominated by a $Q$-integrable function for all $\Delta t \in [0, \overline{\Delta t}]$ and $\epsilon \in [0, \bar{\epsilon}]$.

Lemma 8. If the functions $\pi(S_T(\cdot, \omega) - K)$ are continuous on $X$ for $Q$-almost every $\omega$, the functions $\pi_\epsilon(s_{N,\epsilon}(\cdot, \omega) - K)$ are continuous on $X$ for $0 < \Delta t < \infty$ and $0 < \epsilon < \infty$.
Uniform Convergence. Uniform Convergence

Theorem 9. Assume that the family $\{\pi(S_T(x, \omega) - K),\ x \in X\}$ is dominated by a $Q$-integrable function $P(\omega)$ and, furthermore, that the functions $\pi(S_T(\cdot, \omega) - K)$ are continuous on $X$ for $Q$-almost every $\omega$. If additionally $X$ is compact, then $f$ is continuous on $X$. Furthermore $f_{M,\Delta t,\epsilon}$ converges uniformly to $f$ on $X$, i.e. for given sequences $(M_k)_k \subset \mathbb{N}$, $(\Delta t_k)_k \subset \mathbb{R}_+$ and $(\epsilon_k)_k \subset \mathbb{R}_+$ satisfying $M_k \to \infty$, $\Delta t_k \to 0$, $\epsilon_k \to 0$ it holds that
$$\lim_{k \to \infty} \sup_{x \in X} \big| f_{M_k,\Delta t_k,\epsilon_k}(x) - f(x) \big| = 0.$$
Note that the same can be shown for the gradients!
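A sketch of how uniform convergence can be observed numerically: in a Black-Scholes toy setting the true objective is available in closed form, so $\sup_{x \in X} |f_M(x) - f(x)|$ can be approximated on a volatility grid (an illustrative setup, not the Heston calibration of the talk):

```python
import numpy as np
from scipy.stats import norm

S0, K, T, R, C_OBS = 100.0, 100.0, 1.0, 0.05, 10.45

def bs_call(sig):
    """Closed-form Black-Scholes call price."""
    d1 = (np.log(S0 / K) + (R + 0.5 * sig**2) * T) / (sig * np.sqrt(T))
    d2 = d1 - sig * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-R * T) * norm.cdf(d2)

def f_true(sig):
    return (bs_call(sig) - C_OBS)**2

def f_m(sig, m, seed=0):
    z = np.random.default_rng(seed).standard_normal(m)  # common random numbers
    s_t = S0 * np.exp((R - 0.5 * sig**2) * T + sig * np.sqrt(T) * z)
    c = np.exp(-R * T) * np.maximum(s_t - K, 0.0).mean()
    return (c - C_OBS)**2

grid = np.linspace(0.1, 0.5, 41)          # grid on X = [0.1, 0.5]
for m in (1_000, 10_000, 100_000):
    sup_err = max(abs(f_m(x, m) - f_true(x)) for x in grid)
    print(m, sup_err)                     # sup-error shrinks as M grows
```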
First Order Optimality Condition. First Order Optimality Condition

Theorem 10. Assume that the families $\{\pi(S_{T_i}(x, \omega) - K_i),\ x \in X\}$ and $\{\partial_{x_p} \pi(S_{T_i}(x, \omega) - K_i),\ x \in X\}$, $p = 1, \ldots, P$, $i = 1, \ldots, I$, are dominated by a $Q$-integrable function $P(\omega)$, that furthermore the functions $\pi(S_{T_i}(\cdot, \omega) - K_i)$ and $\partial_{x_p} \pi(S_{T_i}(\cdot, \omega) - K_i)$ are continuous on $X$ for $Q$-almost every $\omega$, and additionally that $X$ is compact. Further let $(M_k)_k \subset \mathbb{N}$, $(\Delta t_k)_k \subset \mathbb{R}_+$, $(\epsilon_k)_k \subset \mathbb{R}_+$ and $(\gamma_k)_k \subset \mathbb{R}_+$ with $M_k \to \infty$, $\Delta t_k \to 0$, $\epsilon_k \to 0$ and $\gamma_k \to 0$ be given sequences and assume that $(x_k)_k \subset X$ is a sequence of points satisfying
$$\nabla f_{M_k,\Delta t_k,\epsilon_k}(x_k)^T (x - x_k) \geq -\gamma_k \quad \forall x \in X.$$
Then every limit point $x^* \in X$ of $(x_k)_k$ almost surely satisfies the first order optimality condition
$$\nabla f(x^*)^T (x - x^*) \geq 0 \quad \forall x \in X.$$
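Theorem 10 suggests a practical scheme: solve a sequence of SAA problems while tightening all accuracy knobs simultaneously. A minimal sketch under illustrative assumptions (a toy one-parameter Black-Scholes-type objective stands in for the calibration problem, box constraints stand in for the convex compact set $X$, and the schedules for $M_k$, $\Delta t_k$, $\epsilon_k$, $\gamma_k$ are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

def f_saa(x, m, n_steps, eps, seed=1):
    """Toy SAA objective: calibrate the volatility x[0] to one observed price."""
    rng = np.random.default_rng(seed)          # fixed seed: deterministic per call
    dt = 1.0 / n_steps
    s = np.full(m, 100.0)
    for _ in range(n_steps):
        s = s + 0.05 * s * dt + x[0] * s * rng.standard_normal(m) * np.sqrt(dt)
    payoff = 0.5 * ((s - 100.0) + np.sqrt((s - 100.0)**2 + eps**2))  # smoothed max
    return (np.exp(-0.05) * payoff.mean() - 10.45)**2

def solve_sequence(x0, bounds, n_outer=4):
    """Solve SAA problems with M_k -> inf, dt_k -> 0, eps_k -> 0, gamma_k -> 0,
    warm-starting each solve from the previous approximate minimizer."""
    x = np.atleast_1d(x0)
    for k in range(n_outer):
        m_k = 1_000 * 4**k          # increase simulations
        n_steps_k = 10 * 2**k       # decrease step size dt_k = T / n_steps_k
        eps_k = 1e-1 / 4**k         # decrease smoothing
        gamma_k = 1e-2 / 10**k      # tighten the stationarity tolerance
        res = minimize(f_saa, x, args=(m_k, n_steps_k, eps_k),
                       method="L-BFGS-B", bounds=bounds,
                       options={"gtol": gamma_k})
        x = res.x
        print(k, m_k, n_steps_k, eps_k, float(res.fun), x)
    return x

x_star = solve_sequence(x0=[0.3], bounds=[(0.05, 1.0)])
```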
First Order Optimality Condition. Convergence: Graphical Illustration (sequence of three figure slides; figures not reproduced)
Conclusions

- Set up the calibration problem
- Discretized via Monte Carlo, Euler-Maruyama and smoothing
- Pathwise uniqueness for the resulting SDE under the Yamada condition
- Uniform convergence of the objectives under unrestrictive assumptions
- First order optimality condition satisfied for limit points $x^*$
Conclusions. Bibliography

- Bastin, F., Cirillo, C. and Toint, P.L.: Convergence Theory for Nonconvex Stochastic Programming with an Application to Mixed Logit, Mathematical Programming Series B, Vol. 108, 2006.
- Rubinstein, R.Y. and Shapiro, A.: Discrete Event Systems, John Wiley, 1993.
- Shapiro, A.: Stochastic Programming by Monte Carlo Simulation Methods, Stochastic Programming E-Print Series, 2000.
- Kaneko, H. and Nakao, S.: A Note on Approximation for Stochastic Differential Equations, Séminaire de Probabilités XXII, Lecture Notes in Mathematics, Vol. 1321, 1988.
- Yamada, T. and Watanabe, S.: On the Uniqueness of Solutions of Stochastic Differential Equations, Journal of Mathematics of Kyoto University, Vol. 11, 1971.
- Käbe, C., Maruhn, J. and Sachs, E.W.: Adjoint Based Monte Carlo Calibration of Financial Market Models, Finance and Stochastics (to appear).