124 MARTINGALES

5.4. Optional Sampling Theorem (OST). First I stated it a little vaguely:

Theorem 5.12. Suppose that
(1) T is a stopping time,
(2) M_n is a martingale with respect to the filtration F_n, and
(3) certain other conditions are satisfied.
Then E(M_T | F_0) = M_0.

The first thing I explained is that this statement is NOT TRUE for Monte Carlo. This is the gambling strategy in which you double your bet every time you lose. Suppose that you want to win $100. Then you go to a casino and you bet $100. If you lose, you bet $200. If you lose again, you bet $400, and so on. At the end you get $100, since the probability is zero that you lose every single time. In practice this does not work since you need an unlimited supply of money. But in mathematics we don't have that problem.

To make this a martingale you do the following. Let X_1, X_2, X_3, ... be i.i.d. Bernoulli random variables which are equal to ±1 with equal probability:

    X_i = +1 with probability 1/2,
    X_i = -1 with probability 1/2.

In other words, we are assuming each game is fair. Then E(X_i) = 0. Let

    M_n = X_1 + 2 X_2 + 4 X_3 + ... + 2^(n-1) X_n.

This is the amount of money you will have at the end of n rounds of play if you bet 1 on the first game, 2 on the second, 4 on the third, etc., and keep playing regardless of whether you win or lose. To see that this is a martingale we calculate:

    M_(n+1) = X_1 + 2 X_2 + ... + 2^(n-1) X_n + 2^n X_(n+1) = M_n + 2^n X_(n+1).

At time n we know the first n numbers but we don't know the last number. So

    E(M_(n+1) | F_n) = M_n + E(2^n X_(n+1)) = M_n + 2^n E(X_(n+1)) = M_n + 0 = M_n.

That is, the expected future value is the same as the known value on each day. So this is a martingale.
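The fairness of the doubling game is easy to check by simulation. Here is a short Python sketch (the function name and parameters are mine, not part of the notes) that estimates E(M_n); since each game is fair, the sample mean should be near 0:

```python
import random

def doubling_martingale(n, rng):
    """M_n = X_1 + 2*X_2 + ... + 2^(n-1)*X_n: net winnings after n fair
    games, betting 2^(k-1) on game k whether you won or lost before."""
    return sum(2 ** k * (1 if rng.random() < 0.5 else -1) for k in range(n))

rng = random.Random(0)
trials = 200_000
avg = sum(doubling_martingale(5, rng) for _ in range(trials)) / trials
print(avg)  # sample mean of M_5; E(M_5) = 0, so this should be near 0
```

The same average stays near 0 for any fixed n, which is exactly the martingale statement E(M_n) = E(M_0) = 0.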

MATH 56A SPRING 2008 STOCHASTIC PROCESSES

Let T = the first time you win. Then P(T < ∞) = 1. (The argument about random walk being null recurrent actually does not apply here. I will explain on Monday what that was about.) In the Monte Carlo case it is obvious that T < ∞ since P(T > n) = 1/2^n → 0. In any case, M_T = 1 since, at the moment you win, your net gain will be exactly 1. So

    E(M_T | F_0) = 1 ≠ M_0 = 0.

In other words, the Optional Sampling Theorem does not hold. We need to add a condition that excludes Monte Carlo: we cannot prove a theorem which is false, so we need some other condition in order to prove the OST. The simplest condition is boundedness:

Theorem 5.13 (OST1). The OST holds if T is bounded, i.e., if T ≤ B for some constant B.
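A quick simulation (my own sketch, not from the notes) illustrates both facts at once: stopping at the first win always yields M_T = 1, while truncating at a bound B (so that the stopping time T ∧ B is bounded, as in OST1) brings the average back to M_0 = 0:

```python
import random

def play(rng, bound=None):
    """Run the doubling strategy (bet 1, 2, 4, ...) until the first win,
    or until `bound` games have been played, whichever comes first."""
    m, bet, games = 0, 1, 0
    while True:
        games += 1
        if rng.random() < 0.5:        # win: net gain becomes exactly +1
            return m + bet
        m -= bet                      # loss
        if bound is not None and games >= bound:
            return m                  # stopped by the bound, deep in debt
        bet *= 2

rng = random.Random(1)
trials = 100_000
stopped_at_win = sum(play(rng) for _ in range(trials)) / trials
stopped_by_bound = sum(play(rng, bound=10) for _ in range(trials)) / trials
print(stopped_at_win)      # always exactly 1: E(M_T) = 1 != M_0 = 0
print(stopped_by_bound)    # near 0, as OST1 predicts for bounded T
```

The bounded version averages the frequent small win (+1) against the rare huge loss (−1023 when all 10 games are lost, probability 1/1024), and these cancel exactly in expectation.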

5.5. Integrability conditions. The OST says that E(M_T | F_0) = M_0 under certain conditions. These are integrability conditions which I want to explain (just the definitions).

Definition 5.14. Suppose that Y is a random variable. Then
(1) Y is integrable (L^1) if E(|Y|) < ∞, i.e., if the integral

    ∫ |y| f_Y(y) dy

converges.
(2) Y is square integrable (L^2, p. 217 in the book) if E(Y^2) < ∞.

If Y is not integrable then one of the tails must be fat, i.e., either the right tail or the left tail is infinite:

    ∫_K^∞ y f_Y(y) dy = ∞   or   ∫_(-∞)^(-K) |y| f_Y(y) dy = ∞.

If we cut off any finite piece, you still have infinity left. So the same will be true for any value of K.

5.5.1. A beautiful theorem. Here is a wonderful theorem which takes longer to state than to prove and which is related to what we just learned.

Theorem 5.15. Suppose that
(1) Y_n is F_n-measurable,
(2) T is a stopping time, and
(3) P(T < ∞) = 1.

Then

    M_n := E(Y_T | F_n)

is a martingale with respect to F_n.

Proof. By the tower property,

    E(M_(n+1) | F_n) = E(E(Y_T | F_(n+1)) | F_n) = E(Y_T | F_n) = M_n.

So M_n is a martingale.

Example 5.16. Let Y_n = f(X_n) be the payoff function, where X_n is the state at time n. Let T be the optimal stopping time, so Y_T = f(X_T) is the optimal payoff, and let v(x) be the value function. Then

    v(X_n) = E(f(X_T) | F_n) = E(Y_T | F_n).

As an example of the theorem we just proved, we have:

Corollary 5.17. M_n = v(X_n) is a martingale!

Question: Does v(X_n) satisfy the OST? In other words, is E(v(X_T) | F_0) = v(X_0)? Answer: Yes, because v(X_T) = f(X_T). (When you reach the state X_T you are supposed to stop and take the payoff.)

5.5.2. Uniform integrability.

Theorem 5.18 (2nd Optional Sampling Theorem). Suppose that M_0, M_1, M_2, ... is a martingale with respect to the filtration F_n. Suppose
(1) T is a stopping time,
(2) P(T < ∞) = 1,
(3) M_T is integrable: E(|M_T|) < ∞, and
(4) M_0, M_1, ... are uniformly integrable (defined below).
Then the OST holds, i.e., E(M_T | F_0) = M_0.

Note: The contrapositive is also true, i.e., if the OST fails then one of the conditions must fail. For example, in Monte Carlo, with X_i = ±1 with probability 1/2 each,

    M_n = X_1 + 2 X_2 + 2^2 X_3 + ... + 2^(n-1) X_n

is a martingale, and T = the smallest n so that X_n = 1.

This is a stopping time with P(T < ∞) = 1, and M_T = 1 is integrable. But the OST fails. So it must be that this martingale is not uniformly integrable.

Definition 5.19. Y_n is integrable if for every ε > 0 there is a K_n > 0 so that the K_n-tails have total area less than ε:

    ∫_(-∞)^(-K_n) |y| f_(Y_n)(y) dy + ∫_(K_n)^∞ y f_(Y_n)(y) dy < ε.

The Y_n are uniformly integrable if the cutoff points can be chosen the same for all Y_n: K_n = K.

If a sequence Y_n is not uniformly integrable then, as time goes on, you are very likely to end up in the tail. (No matter where you cut it, the tail has probability ε > 0. But you have an infinite sequence of random variables. If they are independent you are almost certain to end up in the tail.)

Finally, I asked: Why is Monte Carlo not uniformly integrable? It is not given by an integral. So, what does this mean?

5.5.3. Nonintegral meaning of uniform integrability. We need a new definition of "tail" which applies to any random variable Y_n, not just the continuous ones. For any δ > 0, define a δ-tail to be a set of values of Y_n with probability δ. Then uniform integrability implies that:

    for every ε > 0 there is a δ > 0 so that ∫_(δ-tail) |Y_n| < ε for all n.

(In the discrete case the integral means you add up the probability times |Y_n| for all points in the tail.)

In the case of Monte Carlo, regardless of δ, we can take n so that 1/2^n < δ. Then the event that X_1, X_2, ..., X_n are all −1 is in the δ-tail. It has probability 1/2^n. But |M_n| = 2^n − 1 on this event. So

    ∫_(δ-tail) |M_n| ≥ (2^n − 1) · (1/2^n),

which will not be < ε. So this sequence is not uniformly integrable. (This δ-tail condition is not exactly the same as uniform integrability. This will be explained at the end.)

5.5.4. Martingale convergence theorem. I just stated this theorem without much explanation. It has two important integrability conditions.

Theorem 5.20 (Martingale convergence theorem). Suppose that M_n is a martingale with respect to the filtration F_n. Then

(1) M_n converges to a random variable M_∞ if E(|M_n|) ≤ C for some constant C;
(2) E(M_n) → E(M_∞) if the M_n are uniformly integrable.

This ends what I said in class about martingales. What follows are some theoretical comments that I didn't have time to say. If we need them later I will go back and explain them. It helps to know that the second condition implies the first condition.

Lemma 5.21. If the Y_n are uniformly integrable then there is a finite C so that E(|Y_n|) ≤ C for all n.

In fact there is the following theorem relating uniform integrability, this boundedness condition, and the δ-tail interpretation.

Theorem 5.22. A sequence of real-valued random variables Y_n is uniformly integrable if and only if both of the following conditions hold:
(1) (uniform L^1-boundedness) there is a C < ∞ such that E(|Y_n|) ≤ C for all n;
(2) (δ-tail condition) for every ε > 0 there is a δ > 0 so that ∫_(δ-tail) |Y_n| < ε for all n.

5.5.5. Definition of uniform integrability. The book gives the following definition of uniform integrability. This wording is intended to apply to all cases of real-valued random variables.

Definition 5.23. A sequence of real-valued random variables Y_n is uniformly integrable iff for every ε > 0 there is a K > 0 so that

    E(|Y_n| · I(|Y_n| > K)) ≤ ε,

where I(|Y_n| > K) is the indicator function of the event |Y_n| > K, i.e., it is the function which is equal to 1 when |Y_n| > K and 0 elsewhere. Expectation values are given by integrals for continuous random variables and by sums for discrete random variables. So this is always defined.

Proof of Lemma 5.21. Another one-line proof:

    E(|Y_n|) = E(|Y_n| I(|Y_n| ≤ K)) + E(|Y_n| I(|Y_n| > K)) ≤ K + ε.
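The failure of uniform integrability for Monte Carlo can be checked exactly in a few lines of Python (a sketch of mine, not part of the notes; function names are made up): the all-losses event contributes (2^n − 1)/2^n to the δ-tail integral, and the quantity E(|M_n| I(|M_n| > K)) from Definition 5.23 is computed exactly for small n by enumerating all 2^n sign sequences:

```python
from itertools import product

def tail_contribution(n):
    """Contribution of the all-losses event {X_1 = ... = X_n = -1} to the
    delta-tail integral: probability 2^(-n) times |M_n| = 2^n - 1 there."""
    return 2 ** -n * (2 ** n - 1)

def ui_tail(n, K):
    """E(|M_n| * I(|M_n| > K)) for M_n = X_1 + 2*X_2 + ... + 2^(n-1)*X_n,
    computed exactly over all 2^n equally likely sign sequences."""
    total = 0
    for signs in product([1, -1], repeat=n):
        m = sum(2 ** k * s for k, s in enumerate(signs))
        if abs(m) > K:
            total += abs(m)
    return total / 2 ** n

# The delta-tail contribution tends to 1, so it is never below a small epsilon:
for n in (5, 10, 20):
    print(n, tail_contribution(n))

# For a fixed cutoff K the tail expectation grows with n rather than
# shrinking, so no single K works for every n: Definition 5.23 fails.
for n in (6, 10, 14):
    print(n, ui_tail(n, K=31))
```

Consistent with Lemma 5.21, E(|M_n|) itself is unbounded for this martingale, so uniform integrability was never possible.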