Ch4. Variance Reduction Techniques


Zhang Jin-Ting, Department of Statistics and Applied Probability. July 17, 2012.

Outline

This chapter aims to improve the Monte Carlo integration estimator by reducing its variance, using the following techniques:

Stratified Sampling
Importance Sampling
Control Variates Method
Antithetic Variates Method

The Integration Problem

Suppose we want to estimate an integral over some region, such as
$$I_A = \int_S k(x)\,dx,$$
where $S$ is a subset of $\mathbb{R}^d$, $x$ denotes a generic point of $\mathbb{R}^d$, and $k$ is a given real-valued function on $S$; or
$$I_B = \int_{\mathbb{R}^d} h(x)f(x)\,dx,$$
where $h$ is a real-valued function on $\mathbb{R}^d$ and $f$ is a given pdf on $\mathbb{R}^d$.

The Transformed Problem: Monte Carlo Integration

It is clear that $I_B$ can be written as an expectation: $I_B = E(h(X))$, where $X \sim f$. Also, extending the definition of $k$ to all of $\mathbb{R}^d$ by setting $k(x) = 0$ for every $x$ that is not in $S$,
$$I_A = \int_{\mathbb{R}^d} k(x)\,dx = \int_{\mathbb{R}^d} \frac{k(x)}{f(x)}\,f(x)\,dx = E\!\left[\frac{k(X)}{f(X)}\right]. \qquad (1)$$

Notice that $k(X)/f(X)$ is well-defined except where $f$ equals 0, which is a set of probability 0. This simple trick will be especially useful in the method known as Importance Sampling.

Simple Sampling

This leads to a natural Monte Carlo strategy for estimating, say, the value of $I_B$. If we can generate iid random variables $X_1, X_2, \ldots$ whose common pdf is $f$, then for every $n$,
$$\hat{I}_n = \frac{1}{n}\sum_{i=1}^n h(X_i)$$
is an unbiased estimator of $I_B$.

Moreover, the strong law of large numbers implies that $\hat{I}_n$ converges to $I_B$ with probability 1 as $n \to \infty$. This method for estimating $I_B$ is called simple sampling.
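To make the recipe concrete, here is a minimal sketch of simple sampling in Python. The choices $f = $ U[0,1] density and $h(x) = e^x$ are illustrative assumptions (not part of the chapter), picked so that the true value $I_B = e - 1$ is known and can be checked against the output.

```python
import numpy as np

# Minimal sketch of simple sampling. Illustrative assumptions: f is the
# U[0,1] density and h(x) = exp(x), so I_B = E[h(X)] = e - 1 exactly.
rng = np.random.default_rng(0)

def h(x):
    return np.exp(x)

n = 100_000
x = rng.uniform(0.0, 1.0, size=n)      # iid draws X_1, ..., X_n from f
vals = h(x)
I_hat = vals.mean()                    # the unbiased estimator I_n
se = vals.std(ddof=1) / np.sqrt(n)     # estimated standard error, cf. (2)

print(f"estimate = {I_hat:.5f}, true = {np.e - 1:.5f}, s.e. = {se:.5f}")
```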

The Variance Reduction Problem

The variance of the simple sampling estimator $\hat{I}_n$ of $I_B$ is
$$\mathrm{var}(\hat{I}_n) = \frac{\mathrm{var}(h(X))}{n} = \frac{1}{n}\left(\int_S h(x)^2 f(x)\,dx - I_B^2\right). \qquad (2)$$
The variance of the estimator determines the size of the confidence interval. The $n$ in the denominator is hard to avoid in Monte Carlo, but there are various ways to reduce the numerator.

The goal of this chapter is to explore alternative sampling schemes that achieve smaller variance for the same amount of computational effort.

Stratified Sampling

Step 1: Range Partition

Stratified sampling is a powerful and commonly used technique in population surveys, and it is also very useful in Monte Carlo computations. To evaluate $I_B$, stratified sampling partitions $S$ into several disjoint sets $S^{(1)}, \ldots, S^{(M)}$ (so that $S = \bigcup_{i=1}^M S^{(i)}$).

For $i = 1, \ldots, M$, let
$$a_i = \int_{S^{(i)}} f(x)\,dx = P(X \in S^{(i)}).$$
Observe that $a_1 + \cdots + a_M = 1$. Fix integers $n_1, \ldots, n_M$ such that $n_1 + \cdots + n_M = n$.

Step 2: Sub-sampling

For each $i$, generate $n_i$ samples $X^{(i)}_1, \ldots, X^{(i)}_{n_i}$ from $S^{(i)}$ having the conditional pdf
$$g(x) = \begin{cases} f(x)/a_i & \text{if } x \in S^{(i)}, \\ 0 & \text{otherwise.} \end{cases}$$

Let $T_i = \frac{1}{n_i}\sum_{j=1}^{n_i} h(X^{(i)}_j)$. Then
$$E(T_i) = \int_{S^{(i)}} h(x)\,\frac{f(x)}{a_i}\,dx = \frac{1}{a_i}\int_{S^{(i)}} h(x)f(x)\,dx = I_i/a_i,$$
defining $I_i = \int_{S^{(i)}} h(x)f(x)\,dx$.

Step 3: The Stratified Estimator

Observe that $I_1 + \cdots + I_M = I_B$. The stratified estimator is
$$T = \sum_{i=1}^M a_i T_i.$$
It is unbiased because
$$E(T) = \sum_{i=1}^M a_i E(T_i) = \sum_{i=1}^M a_i (I_i/a_i) = I_B.$$

The variance of $T$ is
$$\mathrm{var}(T) = \sum_{i=1}^M a_i^2\,\mathrm{var}(T_i), \quad \text{where } \mathrm{var}(T_i) = \frac{1}{n_i}\left(\int_{S^{(i)}} h(x)^2\,\frac{f(x)}{a_i}\,dx - \left(\frac{I_i}{a_i}\right)^2\right),$$
following from (2).

Theorem (Foundation of Stratified Sampling)

If $n_i = n a_i$ for $i = 1, \ldots, M$, then the stratified estimator has smaller variance than the simple estimator $\hat{I}_n$. In fact,
$$\mathrm{var}(\hat{I}_n) = \mathrm{var}(T) + \frac{1}{n}\sum_{i=1}^M a_i\left(\frac{I_i}{a_i} - I_B\right)^2.$$
The choice $n_i = n a_i$, called proportional allocation, gives a stratified estimator whose variance is no larger than that of the simple estimator.
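The following is a minimal sketch of the three steps with proportional allocation $n_i = na_i$, under the same illustrative assumptions as before ($f = $ U[0,1], $h(x) = e^x$) and with the strata taken, for simplicity, to be $M$ equal-width subintervals of $[0,1]$:

```python
import numpy as np

# Stratified sampling with proportional allocation n_i = n * a_i.
# Illustrative assumptions: f is U[0,1], h(x) = exp(x), and the strata
# S^(i) are M equal-width subintervals of [0,1], so a_i = 1/M.
rng = np.random.default_rng(0)

def h(x):
    return np.exp(x)

n, M = 100_000, 10
edges = np.linspace(0.0, 1.0, M + 1)
a = np.diff(edges)                     # a_i = P(X in S^(i))
n_i = np.round(n * a).astype(int)      # proportional allocation

T = 0.0
for i in range(M):
    # For f = U[0,1], the conditional pdf on S^(i) is uniform on S^(i).
    x = rng.uniform(edges[i], edges[i + 1], size=n_i[i])
    T += a[i] * h(x).mean()            # T = sum_i a_i * T_i

print(f"stratified estimate = {T:.5f}, true = {np.e - 1:.5f}")
```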

Importance Sampling

Properties of Importance Sampling

Importance sampling is a very powerful method that can improve Monte Carlo efficiency by orders of magnitude in some problems. But it requires caution: an inappropriate implementation can reduce efficiency by orders of magnitude!

The Basic Idea

The method works by sampling from an artificial probability distribution chosen by the user, and then reweighting the observations to get an unbiased estimate. The idea is based on the identity (1):
$$I_A = \int_{\mathbb{R}^d} k(x)\,dx = \int_{\mathbb{R}^d} \frac{k(x)}{f(x)}\,f(x)\,dx = E\!\left[\frac{k(X)}{f(X)}\right].$$

It implies that $I_A$ can be estimated by
$$\hat{J}_n = \frac{1}{n}\sum_{i=1}^n \frac{k(X_i)}{f(X_i)},$$
where the $X_i$'s are iid from $f$. We call $\hat{J}_n$ the importance sampling estimator based on $f$. The identity (1) implies that $\hat{J}_n$ is unbiased.

The Importance Sampling Procedure

Suppose now one is interested in evaluating $I_B = \int_{\mathbb{R}^d} h(x)f(x)\,dx$. The importance sampling procedure is as follows:

(a) Draw $X_1, \ldots, X_n$ from a trial density $g$.
(b) Calculate the importance weights $w_j = f(X_j)/g(X_j)$, for $j = 1, \ldots, n$.
(c) Approximate $I_B$ by
$$\hat{J}_{g,n} = \frac{\sum_{j=1}^n w_j h(X_j)}{\sum_{j=1}^n w_j}. \qquad (3)$$

Thus, in order to make the estimation error small, one wants to choose $g$ as close in shape to $h(x)f(x)$ as possible.

An Alternative Importance Sampling Procedure

A major advantage of using (3) instead of the unbiased estimate
$$\tilde{I}_B = \frac{1}{n}\sum_{j=1}^n w_j h(X_j)$$
is that with the former we need only know the ratio $f(X)/g(X)$ up to a multiplicative constant, whereas with the latter the ratio needs to be known exactly. Although it introduces a small bias, (3) often has a smaller mean squared error than the unbiased estimate $\tilde{I}_B$.
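The sketch below computes both estimators on the same draws. The setting is an illustrative assumption, not taken from the chapter: target $f = N(0,1)$ with $h(x) = x^2$ (so $I_B = E[X^2] = 1$) and trial density $g = N(0, 2^2)$.

```python
import numpy as np

# Importance sampling: self-normalized estimator (3) vs the unbiased
# weighted average. Illustrative assumptions: target f = N(0,1),
# h(x) = x^2 (so I_B = 1), trial density g = N(0, 2^2).
rng = np.random.default_rng(0)

def norm_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n = 100_000
x = rng.normal(0.0, 2.0, size=n)                      # (a) draw from g
w = norm_pdf(x, 0.0, 1.0) / norm_pdf(x, 0.0, 2.0)     # (b) w_j = f/g
h = x ** 2

J_tilde = np.mean(w * h)             # unbiased: needs f/g known exactly
J_sn = np.sum(w * h) / np.sum(w)     # (3): f/g needed only up to a constant

print(f"unbiased = {J_tilde:.4f}, self-normalized = {J_sn:.4f}, true = 1")
```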

Example 1

Let $h(x) = 4\sqrt{1 - x^2}$, $x \in [0, 1]$. Let us imagine that we do not know how to evaluate $I = \int_0^1 h(x)\,dx$ (which is $\pi$, of course).

Use Simple Sampling

The simple sampling estimate is
$$\hat{I}_n = \frac{1}{n}\sum_{i=1}^n 4\sqrt{1 - U_i^2},$$
where the $U_i$ are iid U[0,1] random variables. This is unbiased, with variance
$$\mathrm{var}(\hat{I}_n) = \frac{1}{n}\left(\int_0^1 h(x)^2\,dx - I^2\right) = \frac{1}{n}\left(\int_0^1 16(1 - x^2)\,dx - \pi^2\right) = \frac{0.797}{n}.$$

Use Inappropriate Importance Sampling

Consider the importance sampling estimate based on the pdf $g_b(x) = 2x$, $x \in [0, 1]$. It is easy to generate $Y_i \sim g_b$ (the cdf is $F(t) = t^2$, so we can set $Y_i = F^{-1}(U_i) = \sqrt{U_i}$, where $U_i \sim$ U[0, 1]). The importance sampling estimator is
$$\hat{J}^{(b)}_n = \frac{1}{n}\sum_{i=1}^n h(Y_i)/g_b(Y_i) = \frac{1}{n}\sum_{i=1}^n \frac{4\sqrt{1 - Y_i^2}}{2Y_i}.$$

$\hat{J}^{(b)}_n$ has mean $I$ and variance
$$\mathrm{var}(\hat{J}^{(b)}_n) = \frac{1}{n}\,\mathrm{var}\!\left(\frac{h(Y)}{g_b(Y)}\right) = \frac{1}{n}\int_0^1 \left(\frac{h(x)}{g_b(x)} - I\right)^2 g_b(x)\,dx = +\infty.$$
Hence the trial density $g_b(x) = 2x$ is very bad, and we need to try a different one.

Use Appropriate Importance Sampling

Let $g_c(x) = (4 - 2x)/3$, $x \in [0, 1]$. The importance sampling estimator is
$$\hat{J}^{(c)}_n = \frac{1}{n}\sum_{i=1}^n \frac{4\sqrt{1 - Y_i^2}}{(4 - 2Y_i)/3},$$
whose variance is
$$\mathrm{var}(\hat{J}^{(c)}_n) = \frac{1}{n}\,\mathrm{var}\!\left(\frac{h(Y)}{g_c(Y)}\right) = \frac{1}{n}\int_0^1 \left(\frac{h(x)}{g_c(x)} - I\right)^2 g_c(x)\,dx = \frac{1}{n}\left[\int_0^1 \frac{16(1 - x^2)}{(4 - 2x)/3}\,dx - \pi^2\right] = \frac{0.224}{n}.$$

Thus, the importance sampling estimator based on $g_c$ can achieve the same size confidence interval as the simple sampling estimate while using only about one third as many generated random variables.
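A sketch that reproduces the comparison numerically. The inverse cdf $F_c^{-1}(u) = 2 - \sqrt{4 - 3u}$ used to sample from $g_c$ is my own derivation, obtained by inverting $F_c(t) = (4t - t^2)/3$ on $[0,1]$:

```python
import numpy as np

# Example 1 numerically: h(x) = 4*sqrt(1 - x^2) on [0,1], I = pi.
# Compares simple sampling, the bad trial density g_b(x) = 2x, and the
# good trial density g_c(x) = (4 - 2x)/3.
rng = np.random.default_rng(0)
n = 1_000_000
u = rng.uniform(0.0, 1.0, size=n)

def h(x):
    return 4.0 * np.sqrt(1.0 - x ** 2)

est_simple = h(u)                        # sample variance ~ 0.797

y_b = np.sqrt(u)                         # Y ~ g_b via F^{-1}(u) = sqrt(u)
est_bad = h(y_b) / (2.0 * y_b)           # infinite variance: unstable

y_c = 2.0 - np.sqrt(4.0 - 3.0 * u)       # Y ~ g_c via inverting F_c(t) = (4t - t^2)/3
est_good = h(y_c) / ((4.0 - 2.0 * y_c) / 3.0)   # sample variance ~ 0.224

for name, e in [("simple", est_simple), ("bad IS", est_bad), ("good IS", est_good)]:
    print(f"{name:8s} mean = {e.mean():.5f}   sample var = {e.var():.3f}")
```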

Control Variates Method

The Main Idea

In this method, one uses a control variate $C$, which is correlated with the sample $X$, to produce a better estimate.

The Procedure

Suppose the estimation of $\mu = E(X)$ is of interest and $\mu_C = E(C)$ is known. Then we can construct Monte Carlo samples of the form
$$X(b) = X - b(C - \mu_C),$$
which have the same mean as $X$ but a new variance
$$\mathrm{var}(X(b)) = \mathrm{var}(X) - 2b\,\mathrm{Cov}(X, C) + b^2\,\mathrm{var}(C).$$

If the computation of $\mathrm{Cov}(X, C)$ and $\mathrm{var}(C)$ is easy, then we can let $b = \mathrm{Cov}(X, C)/\mathrm{var}(C)$, in which case
$$\mathrm{var}(X(b)) = (1 - \rho^2_{XC})\,\mathrm{var}(X) < \mathrm{var}(X), \quad \text{provided } \rho_{XC} \neq 0.$$

A Special Case

Another situation is when we know only that $E(C)$ is equal to $\mu$. Then we can form $X(b) = bX + (1 - b)C$. It is easy to show that if $C$ is correlated with $X$, we can always choose a proper $b$ so that $X(b)$ has a smaller variance than $X$.
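A minimal sketch of the procedure. The target $\mu = E[e^U]$ for $U \sim$ U[0,1] (true value $e - 1$) and the control $C = U$ with known mean $\mu_C = 1/2$ are illustrative assumptions; since $\mathrm{Cov}(X, C)$ is not assumed known in closed form, $b$ is estimated from the sample:

```python
import numpy as np

# Control variates. Illustrative assumptions: X = exp(U) with U ~ U[0,1]
# (so mu = e - 1), and control variate C = U with known mean mu_C = 1/2.
rng = np.random.default_rng(0)
n = 100_000
u = rng.uniform(0.0, 1.0, size=n)

x = np.exp(u)                          # samples of X
c = u                                  # samples of C, E(C) = 0.5

# Plug-in estimate of the optimal coefficient b = Cov(X, C) / var(C).
b = np.cov(x, c)[0, 1] / np.var(c)
x_cv = x - b * (c - 0.5)               # X(b) = X - b(C - mu_C)

print(f"plain   : mean = {x.mean():.5f}, var = {x.var():.5f}")
print(f"control : mean = {x_cv.mean():.5f}, var = {x_cv.var():.5f}")
```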

Antithetic Variates Method

The Main Idea

Suppose $U$ is a random number used in the production of a sample $X$ that follows a distribution with cdf $F$, that is, $X = F^{-1}(U)$; then $X' = F^{-1}(1 - U)$ also follows distribution $F$. More generally, if $g$ is a monotone function, then
$$[g(u_1) - g(u_2)][g(1 - u_1) - g(1 - u_2)] \le 0$$
for any $u_1, u_2 \in [0, 1]$.

For two independent uniform random variables $U_1$ and $U_2$, we have
$$E\{[g(U_1) - g(U_2)][g(1 - U_1) - g(1 - U_2)]\} = 2\,\mathrm{Cov}(X, X') \le 0,$$
where $X = g(U)$ and $X' = g(1 - U)$. Therefore $\mathrm{var}[(X + X')/2] \le \mathrm{var}(X)/2$, implying that using the pair $X$ and $X'$ is better than using two independent Monte Carlo draws for estimating $E(X)$.

Example 2

We return once more to the problem of estimating the integral $I = \int_0^1 4\sqrt{1 - x^2}\,dx$. Choose a large even value of $n$. As usual, the simple estimator and its variance are
$$\hat{I}_n = \frac{1}{n}\sum_{i=1}^n h(U_i), \qquad \mathrm{var}(\hat{I}_n) = 0.797/n.$$

The corresponding antithetic estimator and its variance are
$$\hat{I}^{AN}_n = \frac{1}{n}\sum_{i=1}^{n/2} \big(h(U_i) + h(1 - U_i)\big),$$
$$\mathrm{var}(\hat{I}^{AN}_n) = \frac{1}{n^2}\left\{\frac{n}{2}\big[\mathrm{var}(h(U_1)) + 2\,\mathrm{Cov}(h(U_1), h(1 - U_1)) + \mathrm{var}(h(1 - U_1))\big]\right\} = \frac{1}{n}\big[\mathrm{var}(h(U_1)) + \mathrm{Cov}(h(U_1), h(1 - U_1))\big] = 0.219/n.$$
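A sketch that checks the antithetic calculation numerically. Since $h(x) = 4\sqrt{1 - x^2}$ is monotone (decreasing) on $[0,1]$, $h(U)$ and $h(1 - U)$ are negatively correlated, which is where the variance reduction comes from:

```python
import numpy as np

# Example 2 numerically: antithetic variates for I = int_0^1 4*sqrt(1-x^2) dx = pi.
rng = np.random.default_rng(0)
n = 1_000_000                               # even, as required
u = rng.uniform(0.0, 1.0, size=n // 2)

def h(x):
    return 4.0 * np.sqrt(1.0 - x ** 2)

pair_avg = 0.5 * (h(u) + h(1.0 - u))        # average each antithetic pair
est = pair_avg.mean()                       # the antithetic estimator

# n * var of each estimator: ~0.797 for simple, ~0.219 for antithetic.
simple_nvar = h(rng.uniform(0.0, 1.0, size=n)).var()
anti_nvar = 2.0 * pair_avg.var()            # var(est) = var(pair)/(n/2)

print(f"antithetic estimate = {est:.5f}, true = {np.pi:.5f}")
print(f"n*var: simple = {simple_nvar:.3f}, antithetic = {anti_nvar:.3f}")
```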