Ch4. Variance Reduction Techniques
Zhang Jin-Ting
Department of Statistics and Applied Probability
July 17, 2012
Outline

This chapter aims to improve the Monte Carlo integration estimator by reducing its variance, using the following techniques:

- Stratified Sampling
- Importance Sampling
- Control Variates Method
- Antithetic Variates Method
The Integration Problem

Suppose we want to estimate an integral over some region, such as
$$I_A = \int_S k(x)\,dx,$$
where $S$ is a subset of $\mathbb{R}^d$, $x$ denotes a generic point of $\mathbb{R}^d$, and $k$ is a given real-valued function on $S$; or
$$I_B = \int_{\mathbb{R}^d} h(x) f(x)\,dx,$$
where $h$ is a real-valued function on $\mathbb{R}^d$ and $f$ is a given pdf on $\mathbb{R}^d$.
The Transformed Problem: Monte Carlo Integration

It is clear that $I_B$ can be written as an expectation: $I_B = E(h(X))$, where $X \sim f$. Also, if we extend the definition of $k$ to all of $\mathbb{R}^d$ by setting $k(x) = 0$ for every $x$ that is not in $S$, then
$$I_A = \int_{\mathbb{R}^d} k(x)\,dx = \int_{\mathbb{R}^d} \frac{k(x)}{f(x)} f(x)\,dx = E\!\left[\frac{k(X)}{f(X)}\right]. \qquad (1)$$
Notice that $k(X)/f(X)$ is well defined except where $f$ equals $0$, which is a set of probability $0$. This is a simple trick that will be especially useful in the method known as Importance Sampling.
Simple Sampling

This leads to a natural Monte Carlo strategy for estimating the value of $I_B$, say. If we can generate iid random variables $X_1, X_2, \ldots$ whose common pdf is $f$, then for every $n$,
$$\hat{I}_n = \frac{1}{n} \sum_{i=1}^{n} h(X_i)$$
is an unbiased estimator of $I_B$.
Moreover, the strong law of large numbers implies that $\hat{I}_n$ converges to $I_B$ with probability $1$ as $n \to \infty$. This method for estimating $I_B$ is called simple sampling.
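As a quick illustration (a sketch, not from the text), simple sampling takes only a few lines of Python; the integrand $e^x$ on $[0,1]$ is an assumed toy example with known answer $e - 1$:

```python
import math
import random

def simple_sampling(h, n, seed=0):
    """Estimate I_B = E[h(U)] = integral of h over [0,1] by averaging
    h over n iid U[0,1] draws (the unbiased estimator I_hat_n)."""
    rng = random.Random(seed)
    return sum(h(rng.random()) for _ in range(n)) / n

# Toy example: integral of e^x over [0,1] equals e - 1
estimate = simple_sampling(math.exp, 100_000)
```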
The Variance Reduction Problem

The variance of the simple sampling estimator $\hat{I}_n$ of $I_B$ is
$$\mathrm{var}(\hat{I}_n) = \frac{\mathrm{var}(h(X))}{n} = \frac{1}{n}\left(\int_S h(x)^2 f(x)\,dx - I_B^2\right). \qquad (2)$$
The variance of the estimator determines the size of the confidence interval. The $n$ in the denominator is hard to avoid in Monte Carlo, but there are various ways to reduce the numerator.
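To make the link between the variance (2) and the confidence interval concrete, here is a hedged sketch that estimates $\mathrm{var}(h(U))$ from the sample and forms the usual 95% interval; the integrand is the $4\sqrt{1-x^2}$ of Example 1 below:

```python
import math
import random

def mc_with_ci(h, n, seed=0):
    """Simple-sampling estimate of E[h(U)] with a 95% confidence interval;
    the half-width 1.96*sqrt(s2/n) mirrors var(I_hat_n) = var(h(X))/n."""
    rng = random.Random(seed)
    ys = [h(rng.random()) for _ in range(n)]
    mean = sum(ys) / n
    s2 = sum((y - mean) ** 2 for y in ys) / (n - 1)   # sample variance of h(U)
    half = 1.96 * math.sqrt(s2 / n)
    return mean, (mean - half, mean + half)

est, (lo, hi) = mc_with_ci(lambda x: 4 * math.sqrt(1 - x * x), 100_000)
```

Shrinking the numerator of (2) shrinks the interval for the same $n$, which is the point of the techniques that follow.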
The goal of this chapter is to explore alternative sampling schemes that achieve smaller variance for the same amount of computational effort.
Stratified Sampling

Step 1: Range Partition

Stratified sampling is a powerful and commonly used technique in population surveys and is also very useful in Monte Carlo computations. To evaluate $I_B$, stratified sampling partitions $S$ into several disjoint sets $S^{(1)}, \ldots, S^{(M)}$ (so that $S = \bigcup_{i=1}^{M} S^{(i)}$).
For $i = 1, \ldots, M$, let
$$a_i = \int_{S^{(i)}} f(x)\,dx = P(X \in S^{(i)}).$$
Observe that $a_1 + \cdots + a_M = 1$. Fix integers $n_1, \ldots, n_M$ such that $n_1 + \cdots + n_M = n$.
Step 2: Sub-sampling

For each $i$, generate $n_i$ samples $X^{(i)}_1, \ldots, X^{(i)}_{n_i}$ from $S^{(i)}$ having the conditional pdf
$$g(x) = \begin{cases} f(x)/a_i & \text{if } x \in S^{(i)}, \\ 0 & \text{otherwise.} \end{cases}$$
Let $T_i = \frac{1}{n_i} \sum_{j=1}^{n_i} h(X^{(i)}_j)$. Then
$$E(T_i) = \int_{S^{(i)}} h(x) \frac{f(x)}{a_i}\,dx = \frac{1}{a_i} \int_{S^{(i)}} h(x) f(x)\,dx = I_i / a_i,$$
where we define $I_i = \int_{S^{(i)}} h(x) f(x)\,dx$.
Step 3: The Stratified Estimator

Observe that $I_1 + \cdots + I_M = I_B$. The stratified estimator is
$$T = \sum_{i=1}^{M} a_i T_i.$$
It is unbiased because
$$E(T) = \sum_{i=1}^{M} a_i E(T_i) = \sum_{i=1}^{M} a_i I_i / a_i = I_B.$$
The variance of $T$ is
$$\mathrm{var}(T) = \sum_{i=1}^{M} a_i^2 \,\mathrm{var}(T_i),$$
where, following from (2),
$$\mathrm{var}(T_i) = \frac{1}{n_i}\left[\int_{S^{(i)}} h(x)^2 \frac{f(x)}{a_i}\,dx - \left(\frac{I_i}{a_i}\right)^2\right].$$
Theorem (The Foundation of Stratified Sampling)

If $n_i = n a_i$ for $i = 1, \ldots, M$, then the stratified estimator has smaller variance than the simple estimator $\hat{I}_n$. In fact,
$$\mathrm{var}(\hat{I}_n) = \mathrm{var}(T) + \frac{1}{n} \sum_{i=1}^{M} a_i \left(\frac{I_i}{a_i} - I_B\right)^2.$$
The choice $n_i = n a_i$, called proportional allocation, gives a stratified estimator with smaller variance than the simple estimator.
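The three steps above can be sketched in Python for $S = [0,1]$ with $M$ equal-width strata, so that $a_i = 1/M$ and proportional allocation gives $n_i = n/M$ (a toy setup assumed here, not from the text):

```python
import math
import random

def stratified_estimate(h, n, M, seed=0):
    """Stratified estimator T = sum_i a_i * T_i over M equal-width strata
    of [0,1]; a draw in stratum i is (i + U)/M with U ~ U[0,1], which has
    the conditional density f(x)/a_i = M on that stratum."""
    rng = random.Random(seed)
    n_i = n // M                      # proportional allocation: n_i = n * a_i
    T = 0.0
    for i in range(M):
        T_i = sum(h((i + rng.random()) / M) for _ in range(n_i)) / n_i
        T += T_i / M                  # a_i = 1/M
    return T

est = stratified_estimate(lambda x: 4 * math.sqrt(1 - x * x), 10_000, 100)
```

With 100 strata the within-stratum variation of $h$ is small, so the estimate is far tighter than simple sampling at the same $n$.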
Importance Sampling

Importance sampling is a very powerful method that can improve Monte Carlo efficiency by orders of magnitude in some problems. But it requires caution: an inappropriate implementation can reduce efficiency by orders of magnitude!
The Basic Idea

The method works by sampling from an artificial probability distribution chosen by the user, and then reweighting the observations to get an unbiased estimate. The idea is based on identity (1):
$$I_A = \int_{\mathbb{R}^d} k(x)\,dx = \int_{\mathbb{R}^d} \frac{k(x)}{f(x)} f(x)\,dx = E\!\left[\frac{k(X)}{f(X)}\right].$$
It implies that $I_A$ can be estimated by
$$\hat{J}_n = \frac{1}{n} \sum_{i=1}^{n} \frac{k(X_i)}{f(X_i)},$$
where the $X_i$'s are iid from $f$. We call $\hat{J}_n$ the importance sampling estimator based on $f$. Identity (1) implies that $\hat{J}_n$ is unbiased.
The Importance Sampling Procedure

Suppose now one is interested in evaluating $I_B = \int_{\mathbb{R}^d} h(x) f(x)\,dx$. The importance sampling procedure is as follows:

(a) Draw $X_1, \ldots, X_n$ from a trial density $g$.
(b) Calculate the importance weights $w_j = f(X_j)/g(X_j)$, for $j = 1, \ldots, n$.
(c) Approximate $I_B$ by
$$\hat{J}_{g,n} = \frac{\sum_{j=1}^{n} w_j h(X_j)}{\sum_{j=1}^{n} w_j}. \qquad (3)$$
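Steps (a)-(c) can be sketched directly. In this illustration the target $f$ (uniform), integrand $h(x) = 4\sqrt{1-x^2}$, and trial density $g(x) = (4-2x)/3$ are borrowed from Example 1 below, with $g$ sampled by inverting its cdf $G(t) = (4t - t^2)/3$:

```python
import math
import random

def self_normalized_is(h, f, g, sample_g, n, seed=0):
    """Self-normalized importance sampling estimator (3):
    draw X_j ~ g, form w_j = f(X_j)/g(X_j), return sum(w*h)/sum(w)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample_g(rng)
        w = f(x) / g(x)
        num += w * h(x)
        den += w
    return num / den

est = self_normalized_is(
    h=lambda x: 4 * math.sqrt(1 - x * x),
    f=lambda x: 1.0,                                           # target density: U[0,1]
    g=lambda x: (4 - 2 * x) / 3,                               # trial density
    sample_g=lambda rng: 2 - math.sqrt(4 - 3 * rng.random()),  # inverse cdf of g
    n=100_000,
)
```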
Thus, in order to make the estimation error small, one wants to choose $g$ as close in shape to $h(x) f(x)$ as possible.
An Alternative Importance Sampling Procedure

A major advantage of using (3) instead of the unbiased estimate
$$\tilde{I}_B = \frac{1}{n} \sum_{j=1}^{n} w_j h(X_j)$$
is that with the former we need only know the ratio $f(X)/g(X)$ up to a multiplicative constant, whereas with the latter the ratio must be known exactly. Although it introduces a small bias, (3) often has a smaller mean squared error than the unbiased estimate $\tilde{I}_B$.
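The advantage can be seen numerically. In this sketch the target density is supplied only up to an (assumed) unknown constant 5, which breaks the unbiased estimator $\tilde{I}_B$ but leaves the self-normalized estimator (3) untouched, because the constant cancels in the ratio:

```python
import math
import random

def both_is_estimates(h, f_unnorm, g, sample_g, n, seed=0):
    """Return (I_tilde, J_hat): the plain weighted average, which needs an
    exactly normalized f, and the self-normalized ratio (3), which does not."""
    rng = random.Random(seed)
    xs = [sample_g(rng) for _ in range(n)]
    ws = [f_unnorm(x) / g(x) for x in xs]
    wh = [w * h(x) for w, x in zip(ws, xs)]
    return sum(wh) / n, sum(wh) / sum(ws)

plain, self_norm = both_is_estimates(
    h=lambda x: 4 * math.sqrt(1 - x * x),
    f_unnorm=lambda x: 5.0,                   # U[0,1] density times an unknown constant
    g=lambda x: (4 - 2 * x) / 3,              # trial density from Example 1 below
    sample_g=lambda rng: 2 - math.sqrt(4 - 3 * rng.random()),
    n=100_000,
)
```

Here `plain` comes out near $5\pi$ (wrong by the unknown constant), while `self_norm` stays near $\pi$.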
Example 1

Let $h(x) = 4\sqrt{1 - x^2}$, $x \in [0, 1]$. Let us imagine that we do not know how to evaluate $I = \int_0^1 h(x)\,dx$ (which is $\pi$, of course).
Use Simple Sampling

The simple sampling estimate is
$$\hat{I}_n = \frac{1}{n} \sum_{i=1}^{n} 4\sqrt{1 - U_i^2},$$
where the $U_i$ are iid U[0,1] random variables. This is unbiased, with variance
$$\mathrm{var}(\hat{I}_n) = \frac{1}{n}\left(\int_0^1 h(x)^2\,dx - I^2\right) = \frac{1}{n}\left(\int_0^1 16(1 - x^2)\,dx - \pi^2\right) \approx \frac{0.797}{n}.$$
Use Inappropriate Importance Sampling

Consider the importance sampling estimate based on the pdf $g_b(x) = 2x$, $x \in [0, 1]$. It is easy to generate $Y_i \sim g_b$ (the cdf is $F(t) = t^2$, so we can set $Y_i = F^{-1}(U_i) = \sqrt{U_i}$, where $U_i \sim U[0, 1]$). The importance sampling estimator is
$$\hat{J}^{(b)}_n = \frac{1}{n} \sum_{i=1}^{n} h(Y_i)/g_b(Y_i) = \frac{1}{n} \sum_{i=1}^{n} \frac{4\sqrt{1 - Y_i^2}}{2 Y_i}.$$
The estimator $\hat{J}^{(b)}_n$ has mean $I$ and variance
$$\mathrm{var}(\hat{J}^{(b)}_n) = \frac{1}{n} \mathrm{var}\!\left(\frac{h(Y)}{g_b(Y)}\right) = \frac{1}{n} \int_0^1 \left(\frac{h(x)}{g_b(x)} - I\right)^2 g_b(x)\,dx = +\infty,$$
since $h(x)^2/g_b(x) = 8(1 - x^2)/x$ is not integrable near $0$. Hence the trial density $g_b(x) = 2x$ is very bad, and we need to try a different one.
Use Appropriate Importance Sampling

Let $g_c(x) = (4 - 2x)/3$, $x \in [0, 1]$. The importance sampling estimator is
$$\hat{J}^{(c)}_n = \frac{1}{n} \sum_{i=1}^{n} \frac{4\sqrt{1 - Y_i^2}}{(4 - 2Y_i)/3},$$
whose variance is
$$\mathrm{var}(\hat{J}^{(c)}_n) = \frac{1}{n} \mathrm{var}\!\left(\frac{h(Y)}{g_c(Y)}\right) = \frac{1}{n}\left[\int_0^1 \frac{16(1 - x^2)}{(4 - 2x)/3}\,dx - \pi^2\right] \approx \frac{0.224}{n}.$$
Thus, the importance sampling estimate of (c) can achieve the same size confidence interval as the simple sampling estimate of (a) while using only about one third as many generated random variables (since $0.224/0.797 \approx 0.28$).
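Example 1's claims can be checked empirically. This sketch draws $10^5$ samples for schemes (a) and (c) and compares the sample variances against the theoretical values $0.797$ and $0.224$:

```python
import math
import random

rng = random.Random(0)
h = lambda x: 4 * math.sqrt(1 - x * x)
n = 100_000

def mean_var(xs):
    """Sample mean and sample variance of a list of draws."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# (a) Simple sampling with U ~ U[0,1].
m_a, v_a = mean_var([h(rng.random()) for _ in range(n)])

# (c) Importance sampling with g_c(x) = (4 - 2x)/3, drawn by inverting
# its cdf G(t) = (4t - t^2)/3, i.e. Y = 2 - sqrt(4 - 3U).
ys = [2 - math.sqrt(4 - 3 * rng.random()) for _ in range(n)]
m_c, v_c = mean_var([h(y) / ((4 - 2 * y) / 3) for y in ys])
```

Both means land near $\pi$, while `v_a` comes out near $0.797$ and `v_c` near $0.224$, matching the ratio of roughly one third.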
Control Variates Method

The Main Idea

In this method, one uses a control variate $C$, which is correlated with the sample $X$, to produce a better estimate.

The Procedure

Suppose the estimation of $\mu = E(X)$ is of interest and $\mu_C = E(C)$ is known. Then we can construct Monte Carlo samples of the form
$$X(b) = X - b(C - \mu_C),$$
which have the same mean as $X$ but a new variance
$$\mathrm{var}(X(b)) = \mathrm{var}(X) - 2b\,\mathrm{Cov}(X, C) + b^2\,\mathrm{var}(C).$$
If the computation of $\mathrm{Cov}(X, C)$ and $\mathrm{var}(C)$ is easy, then we can let $b = \mathrm{Cov}(X, C)/\mathrm{var}(C)$, in which case
$$\mathrm{var}(X(b)) = (1 - \rho^2_{XC})\,\mathrm{var}(X) < \mathrm{var}(X).$$

A Special Case

Another situation is when we know only that $E(C)$ is equal to $\mu$. Then we can form $X(b) = bX + (1 - b)C$. It is easy to show that if $C$ is correlated with $X$, we can always choose a proper $b$ so that $X(b)$ has a smaller variance than $X$.
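A minimal sketch (a toy example assumed here, not from the text): estimate $\mu = E[e^U] = e - 1$ using the control variate $C = U$ with known $\mu_C = 1/2$, estimating $b = \mathrm{Cov}(X, C)/\mathrm{var}(C)$ from the same sample, a common practical shortcut:

```python
import math
import random

def control_variate_mean(n, seed=0):
    """Control-variate estimate of mu = E[e^U] with C = U, mu_C = 0.5:
    form X(b) = X - b*(C - mu_C) with b = Cov(X, C) / var(C)."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    xs = [math.exp(u) for u in us]
    mx, mc = sum(xs) / n, sum(us) / n
    cov = sum((x - mx) * (u - mc) for x, u in zip(xs, us)) / (n - 1)
    var_c = sum((u - mc) ** 2 for u in us) / (n - 1)
    b = cov / var_c
    return sum(x - b * (u - 0.5) for x, u in zip(xs, us)) / n

est = control_variate_mean(100_000)
```

Since $U$ and $e^U$ are highly correlated ($\rho \approx 0.99$), the adjusted variance $(1 - \rho^2)\,\mathrm{var}(X)$ is a small fraction of the unadjusted one.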
Antithetic Variates Method

The Main Idea

Suppose $U$ is a random number used in the production of a sample $X$ that follows a distribution with cdf $F$; that is, $X = F^{-1}(U)$. Then $X' = F^{-1}(1 - U)$ also follows distribution $F$. More generally, if $g$ is a monotone function, then
$$[g(u_1) - g(u_2)][g(1 - u_1) - g(1 - u_2)] \le 0$$
for any $u_1, u_2 \in [0, 1]$.
For two independent uniform random variables $U_1$ and $U_2$, we have
$$E\{[g(U_1) - g(U_2)][g(1 - U_1) - g(1 - U_2)]\} = 2\,\mathrm{Cov}(X, X') \le 0,$$
where $X = g(U)$ and $X' = g(1 - U)$. Therefore,
$$\mathrm{var}[(X + X')/2] \le \mathrm{var}(X)/2,$$
implying that using the pair $X$ and $X'$ is better than using two independent Monte Carlo draws for estimating $E(X)$.
Example 2

We return once more to the problem of estimating the integral $I = \int_0^1 4\sqrt{1 - x^2}\,dx$. Choose a large even value of $n$. As usual, our simple estimator and its variance are
$$\hat{I}_n = \frac{1}{n} \sum_{i=1}^{n} h(U_i), \qquad \mathrm{var}(\hat{I}_n) = 0.797/n.$$
Our corresponding antithetic estimator and its variance are
$$\hat{I}^{AN}_n = \frac{1}{n} \sum_{i=1}^{n/2} \left[h(U_i) + h(1 - U_i)\right],$$
$$\mathrm{var}(\hat{I}^{AN}_n) = \frac{1}{n^2}\left\{\frac{n}{2}\left[\mathrm{var}(h(U_1)) + 2\,\mathrm{Cov}(h(U_1), h(1 - U_1)) + \mathrm{var}(h(1 - U_1))\right]\right\}$$
$$= \frac{1}{n}\left[\mathrm{var}(h(U_1)) + \mathrm{Cov}(h(U_1), h(1 - U_1))\right] \approx 0.219/n.$$
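Example 2's antithetic estimator is straightforward to sketch: average $h$ over $n/2$ antithetic pairs $(U_i, 1 - U_i)$:

```python
import math
import random

def antithetic_estimate(h, n, seed=0):
    """Antithetic estimator: (1/n) * sum over n/2 pairs of h(U) + h(1-U)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += h(u) + h(1 - u)
    return total / n

est = antithetic_estimate(lambda x: 4 * math.sqrt(1 - x * x), 100_000)
```

Each pair reuses the same uniform twice, so $n$ evaluations of $h$ cost only $n/2$ random numbers, while the variance drops from $0.797/n$ to about $0.219/n$.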
More informationSection 7.1: Continuous Random Variables
Section 71: Continuous Random Variables Discrete-Event Simulation: A First Course c 2006 Pearson Ed, Inc 0-13-142917-5 Discrete-Event Simulation: A First Course Section 71: Continuous Random Variables
More informationProbability Theory and Simulation Methods. April 9th, Lecture 20: Special distributions
April 9th, 2018 Lecture 20: Special distributions Week 1 Chapter 1: Axioms of probability Week 2 Chapter 3: Conditional probability and independence Week 4 Chapters 4, 6: Random variables Week 9 Chapter
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2019 Last Time: Markov Chains We can use Markov chains for density estimation, d p(x) = p(x 1 ) p(x }{{}
More informationThe histogram should resemble the uniform density, the mean should be close to 0.5, and the standard deviation should be close to 1/ 12 =
Chapter 19 Monte Carlo Valuation Question 19.1 The histogram should resemble the uniform density, the mean should be close to.5, and the standard deviation should be close to 1/ 1 =.887. Question 19. The
More informationSection 0: Introduction and Review of Basic Concepts
Section 0: Introduction and Review of Basic Concepts Carlos M. Carvalho The University of Texas McCombs School of Business mccombs.utexas.edu/faculty/carlos.carvalho/teaching 1 Getting Started Syllabus
More informationImportance Sampling and Monte Carlo Simulations
Lab 9 Importance Sampling and Monte Carlo Simulations Lab Objective: Use importance sampling to reduce the error and variance of Monte Carlo Simulations. Introduction The traditional methods of Monte Carlo
More informationLecture 4: Return vs Risk: Mean-Variance Analysis
Lecture 4: Return vs Risk: Mean-Variance Analysis 4.1 Basics Given a cool of many different stocks, you want to decide, for each stock in the pool, whether you include it in your portfolio and (if yes)
More informationNormal Distribution. Notes. Normal Distribution. Standard Normal. Sums of Normal Random Variables. Normal. approximation of Binomial.
Lecture 21,22, 23 Text: A Course in Probability by Weiss 8.5 STAT 225 Introduction to Probability Models March 31, 2014 Standard Sums of Whitney Huang Purdue University 21,22, 23.1 Agenda 1 2 Standard
More informationStochastic Simulation
Stochastic Simulation APPM 7400 Lesson 5: Generating (Some) Continuous Random Variables September 12, 2018 esson 5: Generating (Some) Continuous Random Variables Stochastic Simulation September 12, 2018
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2018 Last Time: Markov Chains We can use Markov chains for density estimation, p(x) = p(x 1 ) }{{} d p(x
More informationThe Binomial and Geometric Distributions. Chapter 8
The Binomial and Geometric Distributions Chapter 8 8.1 The Binomial Distribution A binomial experiment is statistical experiment that has the following properties: The experiment consists of n repeated
More informationRandom Variables and Applications OPRE 6301
Random Variables and Applications OPRE 6301 Random Variables... As noted earlier, variability is omnipresent in the business world. To model variability probabilistically, we need the concept of a random
More information1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y ))
Correlation & Estimation - Class 7 January 28, 2014 Debdeep Pati Association between two variables 1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by Cov(X, Y ) = E(X E(X))(Y
More informationStatistical analysis and bootstrapping
Statistical analysis and bootstrapping p. 1/15 Statistical analysis and bootstrapping Michel Bierlaire michel.bierlaire@epfl.ch Transport and Mobility Laboratory Statistical analysis and bootstrapping
More informationIEOR E4703: Monte-Carlo Simulation
IEOR E4703: Monte-Carlo Simulation Simulating Stochastic Differential Equations Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com
More informationMonte Carlo Methods for Uncertainty Quantification
Monte Carlo Methods for Uncertainty Quantification Abdul-Lateef Haji-Ali Based on slides by: Mike Giles Mathematical Institute, University of Oxford Contemporary Numerical Techniques Haji-Ali (Oxford)
More informationThe Normal Distribution
The Normal Distribution The normal distribution plays a central role in probability theory and in statistics. It is often used as a model for the distribution of continuous random variables. Like all models,
More informationMachine Learning for Quantitative Finance
Machine Learning for Quantitative Finance Fast derivative pricing Sofie Reyners Joint work with Jan De Spiegeleer, Dilip Madan and Wim Schoutens Derivative pricing is time-consuming... Vanilla option pricing
More informationBusiness Statistics 41000: Probability 3
Business Statistics 41000: Probability 3 Drew D. Creal University of Chicago, Booth School of Business February 7 and 8, 2014 1 Class information Drew D. Creal Email: dcreal@chicagobooth.edu Office: 404
More informationLaw of Large Numbers, Central Limit Theorem
November 14, 2017 November 15 18 Ribet in Providence on AMS business. No SLC office hour tomorrow. Thursday s class conducted by Teddy Zhu. November 21 Class on hypothesis testing and p-values December
More informationFREDRIK BAJERS VEJ 7 G 9220 AALBORG ØST Tlf.: URL: Fax: Monte Carlo methods
INSTITUT FOR MATEMATISKE FAG AALBORG UNIVERSITET FREDRIK BAJERS VEJ 7 G 9220 AALBORG ØST Tlf.: 96 35 88 63 URL: www.math.auc.dk Fax: 98 15 81 29 E-mail: jm@math.aau.dk Monte Carlo methods Monte Carlo methods
More information6. Continous Distributions
6. Continous Distributions Chris Piech and Mehran Sahami May 17 So far, all random variables we have seen have been discrete. In all the cases we have seen in CS19 this meant that our RVs could only take
More informationM1 M1 A1 M1 A1 M1 A1 A1 A1 11 A1 2 B1 B1. B1 M1 Relative efficiency (y) = M1 A1 BEWARE PRINTED ANSWER. 5
Q L e σ π ( W μ e σ π ( W μ M M A Product form. Two Normal terms. Fully correct. (ii ln L const ( W ( W d ln L ( W + ( W dμ 0 σ W σ μ W σ W W ˆ μ σ Chec this is a maximum. d ln L E.g. < 0 dμ σ σ σ μ σ
More information2.1 Mathematical Basis: Risk-Neutral Pricing
Chapter Monte-Carlo Simulation.1 Mathematical Basis: Risk-Neutral Pricing Suppose that F T is the payoff at T for a European-type derivative f. Then the price at times t before T is given by f t = e r(t
More informationChapter 3 - Lecture 3 Expected Values of Discrete Random Va
Chapter 3 - Lecture 3 Expected Values of Discrete Random Variables October 5th, 2009 Properties of expected value Standard deviation Shortcut formula Properties of the variance Properties of expected value
More informationFinancial Risk Forecasting Chapter 9 Extreme Value Theory
Financial Risk Forecasting Chapter 9 Extreme Value Theory Jon Danielsson 2017 London School of Economics To accompany Financial Risk Forecasting www.financialriskforecasting.com Published by Wiley 2011
More informationIntroduction Dickey-Fuller Test Option Pricing Bootstrapping. Simulation Methods. Chapter 13 of Chris Brook s Book.
Simulation Methods Chapter 13 of Chris Brook s Book Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 April 26, 2017 Christopher
More informationRandom Variables and Probability Distributions
Chapter 3 Random Variables and Probability Distributions Chapter Three Random Variables and Probability Distributions 3. Introduction An event is defined as the possible outcome of an experiment. In engineering
More informationSampling and sampling distribution
Sampling and sampling distribution September 12, 2017 STAT 101 Class 5 Slide 1 Outline of Topics 1 Sampling 2 Sampling distribution of a mean 3 Sampling distribution of a proportion STAT 101 Class 5 Slide
More informationCS 237: Probability in Computing
CS 237: Probability in Computing Wayne Snyder Computer Science Department Boston University Lecture 12: Continuous Distributions Uniform Distribution Normal Distribution (motivation) Discrete vs Continuous
More informationMonte Carlo Methods in Option Pricing. UiO-STK4510 Autumn 2015
Monte Carlo Methods in Option Pricing UiO-STK4510 Autumn 015 The Basics of Monte Carlo Method Goal: Estimate the expectation θ = E[g(X)], where g is a measurable function and X is a random variable such
More informationValue at Risk Ch.12. PAK Study Manual
Value at Risk Ch.12 Related Learning Objectives 3a) Apply and construct risk metrics to quantify major types of risk exposure such as market risk, credit risk, liquidity risk, regulatory risk etc., and
More informationResults for option pricing
Results for option pricing [o,v,b]=optimal(rand(1,100000 Estimators = 0.4619 0.4617 0.4618 0.4613 0.4619 o = 0.46151 % best linear combination (true value=0.46150 v = 1.1183e-005 %variance per uniform
More informationLecture 8. The Binomial Distribution. Binomial Distribution. Binomial Distribution. Probability Distributions: Normal and Binomial
Lecture 8 The Binomial Distribution Probability Distributions: Normal and Binomial 1 2 Binomial Distribution >A binomial experiment possesses the following properties. The experiment consists of a fixed
More informationTHE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek
HE DISRIBUION OF LOAN PORFOLIO VALUE * Oldrich Alfons Vasicek he amount of capital necessary to support a portfolio of debt securities depends on the probability distribution of the portfolio loss. Consider
More informationPoint Estimation. Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage
6 Point Estimation Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage Point Estimation Statistical inference: directed toward conclusions about one or more parameters. We will use the generic
More informationWeek 7 Quantitative Analysis of Financial Markets Simulation Methods
Week 7 Quantitative Analysis of Financial Markets Simulation Methods Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November
More informationIEOR E4703: Monte-Carlo Simulation
IEOR E4703: Monte-Carlo Simulation Other Miscellaneous Topics and Applications of Monte-Carlo Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com
More informationProbability is the tool used for anticipating what the distribution of data should look like under a given model.
AP Statistics NAME: Exam Review: Strand 3: Anticipating Patterns Date: Block: III. Anticipating Patterns: Exploring random phenomena using probability and simulation (20%-30%) Probability is the tool used
More informationMonte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMSN50)
Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMSN50) Magnus Wiktorsson Centre for Mathematical Sciences Lund University, Sweden Lecture 5 Sequential Monte Carlo methods I January
More informationChapter 8: Sampling distributions of estimators Sections
Chapter 8: Sampling distributions of estimators Sections 8.1 Sampling distribution of a statistic 8.2 The Chi-square distributions 8.3 Joint Distribution of the sample mean and sample variance Skip: p.
More informationCentral Limit Theorem (cont d) 7/28/2006
Central Limit Theorem (cont d) 7/28/2006 Central Limit Theorem for Binomial Distributions Theorem. For the binomial distribution b(n, p, j) we have lim npq b(n, p, np + x npq ) = φ(x), n where φ(x) is
More informationMAFS Computational Methods for Pricing Structured Products
MAFS550 - Computational Methods for Pricing Structured Products Solution to Homework Two Course instructor: Prof YK Kwok 1 Expand f(x 0 ) and f(x 0 x) at x 0 into Taylor series, where f(x 0 ) = f(x 0 )
More informationDiscrete Random Variables
Discrete Random Variables In this chapter, we introduce a new concept that of a random variable or RV. A random variable is a model to help us describe the state of the world around us. Roughly, a RV can
More informationFor these techniques to be efficient, we need to use. M then we introduce techniques to reduce Var[Y ] while. Var[Y ]
SPRING 2008, CSC KTH - Numerical methods for SDEs, Szepessy, Tempone 261 Variance reduction a Idea: Since the Monte Carlo Error is E[Y ] 1 M M j=1 Var[Y ] Y (ωj) Cα M then we introduce techniques to reduce
More information5. In fact, any function of a random variable is also a random variable
Random Variables - Class 11 October 14, 2012 Debdeep Pati 1 Random variables 1.1 Expectation of a function of a random variable 1. Expectation of a function of a random variable 2. We know E(X) = x xp(x)
More informationSection 7.5 The Normal Distribution. Section 7.6 Application of the Normal Distribution
Section 7.6 Application of the Normal Distribution A random variable that may take on infinitely many values is called a continuous random variable. A continuous probability distribution is defined by
More information