Monte Carlo Methods for Uncertainty Quantification


Monte Carlo Methods for Uncertainty Quantification

Abdul-Lateef Haji-Ali
Based on slides by: Mike Giles
Mathematical Institute, University of Oxford

Contemporary Numerical Techniques

Lecture outline

Lecture 2: Variance reduction
- control variates
- Latin Hypercube
- randomised quasi-Monte Carlo

Control Variates

Suppose we want to estimate $E[f(X)]$, and there is another function $g(X)$ for which we know $E[g(X)]$. We can exploit this by averaging $N$ samples of the new estimator
$$\hat{f} = f - \lambda\,(g - E[g]).$$
This is again unbiased, since
$$E[\hat{f}] = E[f] - \lambda\, E\big[g - E[g]\big] = E[f].$$

Control Variates

For a single sample,
$$V\big[f - \lambda(g - E[g])\big] = V[f - \lambda g] = V[f] - 2\lambda\,\mathrm{Cov}[f,g] + \lambda^2\, V[g].$$
For an average of $N$ samples,
$$V\big[\overline{f - \lambda(g - E[g])}\big] = N^{-1}\big(V[f] - 2\lambda\,\mathrm{Cov}[f,g] + \lambda^2\, V[g]\big).$$
To minimise this, the optimal value for $\lambda$ is
$$\lambda = \frac{\mathrm{Cov}[f,g]}{V[g]}.$$

Control Variates

The resulting variance is
$$N^{-1}\, V[f]\left(1 - \frac{(\mathrm{Cov}[f,g])^2}{V[f]\,V[g]}\right) = N^{-1}\, V[f]\,\big(1 - \rho^2\big)$$
where $-1 \le \rho \le 1$ is the correlation between $f$ and $g$. The challenge is to choose a good $g$ which is well correlated with $f$; the covariance, and hence the optimal $\lambda$, can be estimated numerically.

For $\lambda = 1$, and assuming that $V[f] = V[g]$, the control variate method offers an advantage if $\rho > 1/2$, i.e. if $f$ and $g$ are sufficiently positively correlated. The variance reduction is $2(1-\rho)$ in this case.

Effect of optimising λ

[Figure: error reduction plotted against $\rho \in [0, 1]$ for the two cases $2(1-\rho)$ (taking $\lambda = 1$) and $1-\rho^2$ (optimal $\lambda$) — strong correlation is required.]

Example: Estimating log(2)

$$\log(2) = E\left[\frac{2}{3+X}\right] \quad\text{for } X \sim U(-1,1).$$
Consider the new random variable
$$\underbrace{\frac{2}{3+X}}_{f(X)} \;-\; \underbrace{\frac{2}{3}\left(1 - \frac{X}{3}\right)}_{g(X)} \;+\; \underbrace{\frac{2}{3}}_{E[g(X)]}$$
whose expectation is also $\log(2)$. For this choice, $\rho \approx 0.98$:
$$V\left[\frac{2}{3+X}\right] \approx 1.95 \times 10^{-2}$$
while
$$V\left[\frac{2}{3+X} - \frac{2}{3}\left(1 - \frac{X}{3}\right)\right] \approx 7 \times 10^{-4},$$
a variance reduction factor of about 27, leading to an error reduction factor of approximately 5.
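
A minimal sketch of this example in Python, also estimating the optimal $\lambda$ numerically as suggested above (the slide itself takes $\lambda = 1$; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
X = rng.uniform(-1.0, 1.0, N)

f = 2.0 / (3.0 + X)                 # target: E[f] = log(2)
g = (2.0 / 3.0) * (1.0 - X / 3.0)   # control variate with E[g] = 2/3 known exactly

C = np.cov(f, g)                    # 2x2 sample covariance matrix
lam = C[0, 1] / C[1, 1]             # estimated optimal lambda = Cov[f,g] / V[g]
cv = f - lam * (g - 2.0 / 3.0)      # (estimating lambda from the same samples
                                    #  introduces a small O(1/N) bias)

print("plain MC:", f.mean(), " with CV:", cv.mean(), " exact:", np.log(2))
print("variance reduction factor:", f.var() / cv.var())
```

With $\rho \approx 0.98$ this reproduces the variance reduction factor of roughly 27.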

Correlated variables

[Figure: $\frac{2}{3+x}$ and $\frac{2}{3}\left(1-\frac{x}{3}\right)$ plotted over $x \in [-1, 1]$; the two curves track each other closely.]

Reduced variance

[Figure: the combination $\frac{2}{3+x} - \frac{2}{3}\left(1-\frac{x}{3}\right) + \frac{2}{3}$ plotted over $x \in [-1, 1]$; it varies only between roughly 0.66 and 0.78.]

Latin Hypercube

The central idea is to achieve a more regular sampling of the unit hypercube $[0,1]^d$ when trying to estimate $\int_{[0,1]^d} f(U)\, dU$.

We start by considering a one-dimensional problem:
$$I = \int_0^1 f(U)\, dU.$$
Instead of taking $N$ samples drawn from the uniform distribution on $[0,1]$, break the interval into $N$ strata (sub-intervals) of width $1/N$ and take one sample from each, uniformly distributed within its stratum.

Stratified Sampling

For the $j$th stratum, if $f(U)$ is differentiable then
$$f(U) \approx f(U_j) + f'(U_j)\,(U - U_j)$$
where $U_j$ is the midpoint of the stratum, and hence
$$V[f(U) \mid U \in \text{stratum } j] \approx \big(f'(U_j)\big)^2\, V[U - U_j \mid U \in \text{stratum } j] = \frac{1}{12N^2}\big(f'(U_j)\big)^2$$
since the stratum has width $1/N$, so
$$V[U - U_j \mid U \in \text{stratum } j] = \int_{-1/(2N)}^{1/(2N)} U^2\, N\, dU = \frac{1}{12N^2}.$$

Stratified Sampling

Summing the variances over strata (they are independent) and dividing by $N^2$ (due to averaging), the variance of the average over all strata is
$$\frac{1}{12N^4} \sum_j \big(f'(U_j)\big)^2 \approx \frac{1}{12N^3} \int_0^1 \big(f'(U)\big)^2\, dU,$$
so the r.m.s. error is $O(N^{-3/2})$, provided $f'(U)$ is square integrable. This is much better than the usual $O(N^{-1/2})$ r.m.s. error, and shows how powerful stratified sampling can be.
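
A quick one-dimensional illustration in Python (a sketch with an arbitrary smooth test integrand; at the same $N$ the stratified error is dramatically smaller, reflecting the $O(N^{-3/2})$ rate):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: np.exp(u)                     # test integrand; exact integral is e - 1

N = 1_000
plain = f(rng.uniform(0.0, 1.0, N)).mean()  # standard Monte Carlo

# Stratified: one uniform sample in each of the N strata of width 1/N
U = (np.arange(N) + rng.uniform(0.0, 1.0, N)) / N
strat = f(U).mean()

exact = np.e - 1.0
print("plain MC error:  ", abs(plain - exact))
print("stratified error:", abs(strat - exact))
```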

Latin Hypercube

Latin Hypercube sampling generalises this idea to multiple dimensions: cut each dimension into $L$ strata and generate $L$ points, assigned randomly among the $L^d$ sub-cubes (an independent random permutation of strata in each dimension) so that each of the $L$ strata in every dimension contains precisely one point.

Latin Hypercube

This gives one set of $L$ points, with average
$$\bar{f} = L^{-1} \sum_{l=1}^{L} f(U^{(l)}).$$
Since each of the points $U^{(l)}$ is uniformly distributed over the hypercube,
$$E[\bar{f}] = E[f].$$
The fact that the points are not independently generated does not affect the expectation, only the (reduced) variance.

Latin Hypercube

We now take $M$ independently-generated sets of points, each giving an average $\bar{f}_m$. Averaging these,
$$M^{-1} \sum_{m=1}^{M} \bar{f}_m$$
gives an unbiased estimate for $E[f]$, and the empirical variance of the $\bar{f}_m$ gives a confidence interval in the usual way.
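
A minimal sketch of this scheme in Python with a hand-rolled construction (`scipy.stats.qmc.LatinHypercube` offers the same functionality; the test integrand is illustrative):

```python
import numpy as np

def latin_hypercube(L, d, rng):
    """L points in [0,1]^d: an independent random permutation of the
    L strata in each dimension, plus a uniform offset within each stratum."""
    strata = np.column_stack([rng.permutation(L) for _ in range(d)])
    return (strata + rng.uniform(0.0, 1.0, (L, d))) / L

rng = np.random.default_rng(1)
f = lambda x: x.sum(axis=1)   # a sum of 1D functions (see next slide); E[f] = d/2

L, d, M = 1_000, 5, 32        # M independently-generated sets
means = np.array([f(latin_hypercube(L, d, rng)).mean() for _ in range(M)])

print("estimate: ", means.mean())
print("std error:", means.std(ddof=1) / np.sqrt(M))
```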

Latin Hypercube

Note: in the special case in which the function $f(U)$ is a sum of one-dimensional functions,
$$f(U) = \sum_i f_i(U_i)$$
where $U_i$ is the $i$th component of $U$, Latin Hypercube sampling reduces to 1D stratified sampling in each dimension. In this case there is the potential for a very large variance reduction by using a large sample size $L$; the general case is much harder to analyse.

Quasi-Monte Carlo

Standard Monte Carlo approximates the high-dimensional hypercube integral
$$\int_{[0,1]^d} f(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} f(x^{(i)})$$
with points chosen randomly, giving
- r.m.s. error proportional to $N^{-1/2}$
- an unbiased estimator
- a confidence interval

Quasi-Monte Carlo

Standard quasi-Monte Carlo uses the same equal-weight estimator
$$\frac{1}{N} \sum_{i=1}^{N} f(x^{(i)})$$
but chooses the points systematically, so that
- the error is roughly proportional to $N^{-1}$
- the estimator is biased
- there is no confidence interval

(We'll fix the bias and get the confidence interval back later by adding in some randomisation!)

Quasi-Monte Carlo

The key is to use points which are fairly uniformly spread within the hypercube, not clustered anywhere. There is theory proving that, for certain point constructions and certain function classes,
$$\text{Error} < C\, \frac{(\log N)^d}{N}.$$
- For small dimension $d$ ($d < 10$?) this is much better than the $N^{-1/2}$ r.m.s. error of standard MC.
- For large dimension $d$, $(\log N)^d$ could be enormous, so it is not clear there is any benefit.

Sobol Sequences

Sobol sequences $x^{(i)}$ have the property that, for small dimensions $d < 40$, the subsequence $2^m \le i < 2^{m+1}$ has precisely $2^{m-k}$ points in each sub-cube formed by $k$ bisections of the original hypercube. For example:
- cutting it into halves in any dimension, each half has $2^{m-1}$ points
- cutting it into quarters in any dimension, each quarter has $2^{m-2}$ points
- cutting it into halves in one direction, then halves in another direction, each quarter has $2^{m-2}$ points
- etc.

The generation of these sequences is a bit complicated, but it is fast and plenty of software is available to do it. MATLAB has sobolset as part of the Statistics toolbox.
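
The lecture points to MATLAB's sobolset; as an alternative sketch, SciPy's scipy.stats.qmc module generates the same sequences in Python:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)
x = sampler.random_base2(m=8)      # the first 2^8 = 256 Sobol points in [0,1]^2

# Balance check: each half in either dimension holds exactly half the points
print((x[:, 0] < 0.5).sum(), (x[:, 1] < 0.5).sum())   # both print 128
```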

Sobol Sequences

[Figure: 256 Sobol points versus 256 pseudo-random points in two dimensions; the Sobol points are spread much more evenly, with no clusters or gaps.]

Randomised QMC

In the best cases, the QMC error is $O(N^{-1})$ instead of $O(N^{-1/2})$, but with bias and no confidence interval. To fix this, we introduce randomisation through a digital scrambling which maintains the special properties of the Sobol sequence.

For the $i$th point in the $m$th set of points, we define
$$x^{(i,m)} = x^{(i)} \oplus X^{(m)}$$
where $X^{(m)}$ is a uniformly-distributed random point in $[0,1)^d$, and the exclusive-or operation $\oplus$ is applied elementwise and bitwise, so that
$$0.1010011 \oplus 0.0110110 = 0.1100101.$$
MATLAB's sobolset supports this digital scrambling.

Randomised QMC

For each $m$, let
$$\bar{f}_m = \frac{1}{N} \sum_{i=1}^{N} f(x^{(i,m)}).$$
This is a random variable, and since $E[f(x^{(i,m)})] = E[f]$ it follows that $E[\bar{f}_m] = E[f]$.

By using multiple sets, we can estimate $V[\bar{f}_m]$ in the usual way and so get a confidence interval. More sets give a better variance estimate, but a poorer error; some people use as few as 10 sets, but I prefer 32.
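
A sketch of this randomised-QMC workflow with SciPy (whose Sobol scrambling is Owen-type rather than the plain digital shift described above, but the set-averaging logic is identical; the product integrand is an illustrative test function with known mean 1):

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)   # E[f] = 1 exactly
d, M = 5, 32                                           # 32 scrambled sets, as above

means = np.empty(M)
for m in range(M):
    x = qmc.Sobol(d, scramble=True, seed=m).random_base2(m=10)   # N = 2^10 points
    means[m] = f(x).mean()

est = means.mean()
half_width = 3.0 * means.std(ddof=1) / np.sqrt(M)      # ~99.7% confidence interval
print(f"estimate {est:.6f} +/- {half_width:.6f}  (exact value 1)")
```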

Finance Application

In the basket call option example, the asset simulation can be written as
$$S_i(T) = S_i(0) \exp\!\Big(\big(r - \tfrac{1}{2}\sigma_i^2\big)T + (L\,Y)_i\Big)$$
where $Y$ is a vector of 5 independent unit Normals, and
$$L\,L^T = \Sigma, \qquad \Sigma_{ij} = \sigma_i \sigma_j \rho_{ij}.$$
There are two standard ways of generating $L$:
- Cholesky factorisation (so $L$ is lower-triangular)
- PCA factorisation ($L = U\Lambda^{1/2}$, where $\Lambda$ is the diagonal matrix of eigenvalues and $U$ is the orthonormal matrix of eigenvectors)

Financial Application

5 underlying assets starting at $S_0 = 100$, with a call option on the arithmetic mean with strike $K = 100$. Geometric Brownian Motion model, $r = 0.05$, $T = 1$, volatility $\sigma = 0.2$, and covariance matrix
$$\Sigma = \sigma^2 \begin{pmatrix}
1 & 0.1 & 0.1 & 0.1 & 0.1 \\
0.1 & 1 & 0.1 & 0.1 & 0.1 \\
0.1 & 0.1 & 1 & 0.1 & 0.1 \\
0.1 & 0.1 & 0.1 & 1 & 0.1 \\
0.1 & 0.1 & 0.1 & 0.1 & 1
\end{pmatrix}$$
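
A sketch of this example in Python with scrambled Sobol points and either factorisation (assuming the $\sqrt{T}$ scaling is absorbed into $L$ via $LL^T = \Sigma T$, which coincides with the slide's formula since $T = 1$; function names are illustrative):

```python
import numpy as np
from scipy.stats import norm, qmc

# Parameters from the slide
d, S0, K, r, T, sigma = 5, 100.0, 100.0, 0.05, 1.0, 0.2
Sigma = sigma**2 * (0.9 * np.eye(d) + 0.1 * np.ones((d, d)))  # 1 on diag, 0.1 off

# Two factorisations with L @ L.T == Sigma * T
L_chol = np.linalg.cholesky(Sigma * T)                 # lower-triangular
lam, U = np.linalg.eigh(Sigma * T)
L_pca = U[:, ::-1] * np.sqrt(lam[::-1])                # largest eigenvalue first

def basket_call(L, x):
    """Discounted average payoff over uniforms x in [0,1]^d."""
    Y = norm.ppf(x)                                    # unit Normals
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + Y @ L.T)
    return np.exp(-r * T) * np.maximum(ST.mean(axis=1) - K, 0.0).mean()

x = qmc.Sobol(d, scramble=True, seed=0).random_base2(m=16)   # 2^16 points
print("Cholesky:", basket_call(L_chol, x), " PCA:", basket_call(L_pca, x))
```

With PCA, ordering the eigenvalues so the largest comes first concentrates most of the variance in the leading Sobol dimensions, which is why it pairs so well with QMC in the results below.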

Financial Application

Numerical results using $2^{20} \approx 10^6$ samples in total, comparing MC, Latin Hypercube and Sobol QMC, each with either Cholesky or PCA factorisation of $\Sigma$:

                      Cholesky              PCA
                    Val      Err Bnd     Val      Err Bnd
  Monte Carlo      7.0193    0.0239     7.0250    0.0239
  Latin Hypercube  7.0244    0.0081     7.0220    0.0015
  Sobol QMC        7.0228    0.0007     7.0228    0.0001

Final comments

- Control variates can sometimes be very useful, but it needs good insight to find a suitable control variate.
- Latin Hypercube achieves a more uniform spread of sampling points; it is particularly effective when the function can be almost decomposed into a sum of 1D functions.
- Quasi-Monte Carlo can give a much lower error than standard MC: $O(N^{-1})$ in the best cases, instead of $O(N^{-1/2})$.
- Randomised QMC is important to regain the confidence interval and eliminate the bias.