House-Hunting Without Second Moments
Thomas S. Ferguson, University of California, Los Angeles
Michael J. Klass, University of California, Berkeley

Abstract: In the house-hunting problem, i.i.d. random variables X_1, X_2, ... are observed sequentially at a cost of c > 0 per observation. The problem is to choose a stopping rule, N, to maximize E(X_N − Nc). If the X's have a finite second moment, the optimal stopping rule is N* = min{n : X_n > V}, where V satisfies E(X − V)^+ = c. The statement of the problem and its solution requires only the first moment of the X_n to be finite. Is a finite second moment really needed? In 1970, Herbert Robbins showed, assuming only a finite first moment, that the rule N* is optimal within the class of stopping rules, N, such that E(X_N − Nc) > −∞, but it is not clear that this restriction of the class of stopping rules is really required. In this paper it is shown that this restriction is needed, but that if the expectation is replaced by a generalized expectation, N* is optimal out of all stopping rules assuming only first moments.

AMS 2000 Subject Classification: 60G40; 90A80.
Key Words: Optimal Stopping; Selling an Asset; Job Search.

1. Introduction. The house-hunting problem, also called the problem of selling an asset or the job search problem, was introduced and solved almost simultaneously in papers by MacQueen and Miller (1960), Derman and Sacks (1960), Chow and Robbins (1961) and Sakaguchi (1961). This problem may be described as follows. Independent, identically distributed random variables, X_1, X_2, ..., with common distribution function, F(x), are observed sequentially at a cost of c per observation, where c > 0. We always assume that the expectation of the positive part of X is finite: EX^+ < ∞, where X has distribution F(x). You must take at least one observation. If you stop after n observations, you receive X_n as a payoff, so your net return is X_n − nc. If you never stop, your payoff is defined to be −∞, since X_n − nc → −∞ a.s. as n → ∞ when EX^+ < ∞.
The problem is to choose a stopping rule, N, to maximize E(X_N − Nc). It is customary to assume F(x) has a finite second moment, or more generally that E{(X^+)^2} < ∞. Under this assumption, the stopping rule

    N* = min{n : X_n > V}                                 (1)

maximizes E(X_N − Nc) among all stopping rules, where V satisfies

    ∫_V^∞ (x − V) dF(x) = c.                              (2)

Partially supported by NSF Grant DMS.
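To make rule (1) concrete, here is a small simulation sketch. The choice of F is an assumption purely for illustration: with X ~ Exp(1), which has all moments, equation (2) reduces to E(X − V)^+ = e^(−V) = c, so V = −log c in closed form.

```python
import math
import random

def optimal_threshold(c):
    # For X ~ Exp(1) (an assumed illustrative F), equation (2) reads
    # E(X - V)^+ = exp(-V) = c, so V = -log(c) in closed form.
    return -math.log(c)

def net_return(c, V, rng):
    # Rule (1): stop at the first n with X_n > V; the net return is X_n - n*c.
    n = 0
    while True:
        n += 1
        x = rng.expovariate(1.0)
        if x > V:
            return x - n * c

rng = random.Random(0)
c = 0.1
V = optimal_threshold(c)                      # V = -log(0.1), about 2.303
avg = sum(net_return(c, V, rng) for _ in range(100_000)) / 100_000
print(V, avg)                                 # the average net return is near V
```

The simulated average agrees with the fact, noted below, that V is also the optimal expected return.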
In addition, V = E(X_{N*} − N*c) is also the optimal expected return. The statement of the problem and its solution requires only the first moment of F(x) to be finite. In particular, the stopping rule (1) still gives expected return V. Yet the proofs of optimality of N* seem to require that the second moment of F(x) be finite. Is a finite second moment really needed?

By an elegant direct argument based on Wald's equation and using only the assumption that EX^+ < ∞, Robbins (1970) shows that the rule N* given by (1) with V given by (2) is still optimal. However, he uses a slightly different definition of optimality; namely, he defines N* to be optimal if it maximizes E(X_N − Nc) within the class of stopping rules, N, such that E(X_N − Nc) > −∞. This seems innocuous enough. Who would like to accept a random reward the expectation of whose negative part is −∞? The trouble is that this restriction also excludes payoffs the expectation of whose positive part is +∞. If E(X_N − Nc)^− = −∞ and E(X_N − Nc)^+ < ∞, then you don't need to exclude N: the rule N* is definitely better. So a slightly stronger definition of optimality would be to restrict consideration to stopping rules N such that E(X_N − Nc)^+ < ∞. This looks more questionable. Why should one exclude rules with infinite positive expectation? Is a finite second moment necessary for N* to be optimal out of all stopping rules?

2. Necessity of E{(X^+)^2} < ∞. Robbins' result certainly provides an extension of the optimality property of the rule N* that is valid even if E(X^+)^2 = ∞. However, there are difficulties of interpretation that arise because of the restriction to stopping rules that satisfy E(X_N − Nc)^+ < ∞. Restricting attention to such rules seems to say that any rule, N, with E|X_N − Nc| < ∞, no matter how bad, is better than a rule whose expected payoff does not exist because E(X_N − Nc)^− = −∞ and E(X_N − Nc)^+ = +∞. For what W do you prefer a gamble giving you a payoff of $W to a gamble giving you $Z, where Z is chosen from a standard Cauchy distribution?
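As a numerical aside before going on: Theorem 1 below turns on a divergence that is easy to see directly. The distribution is an assumed illustration, a Pareto tail P(X > t) = t^(−3/2) on [1, ∞), which has a finite mean but E(X^+)^2 = ∞; the series Σ_n n P(X > 2nc) that drives the appendix proof then has partial sums growing like √m.

```python
def tail(t, alpha=1.5):
    # Assumed illustration: Pareto tail on [1, infinity), P(X > t) = t**(-alpha).
    # 1 < alpha < 2 gives a finite mean but an infinite second moment.
    return 1.0 if t < 1.0 else t ** (-alpha)

def partial_sum(m, c=0.5):
    # Partial sums of sum_n n * P(X > 2nc); with alpha = 1.5 and c = 0.5 the
    # terms are n**(-1/2), so the sums grow like sqrt(m) without bound.
    return sum(n * tail(2.0 * c * n) for n in range(1, m + 1))

s_small, s_large = partial_sum(10**3), partial_sum(10**6)
print(s_small, s_large)   # the partial sums never settle down
```

A light second moment is exactly what keeps this series finite; once it is infinite, the appendix shows how to turn the divergence into an infinite expected positive part.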
Answering such questions seems to require an extension of standard utility theory. One important question that arises is whether or not there are some distributions of X with E(X^+)^2 = ∞ for which all stopping rules have E(X_N − Nc)^+ < ∞. Then, at least for some distributions with infinite second moment, one could say that N* is optimal among all stopping rules. It will be shown that for all distributions F with E(X^+)^2 = ∞, there exist stopping rules, N, such that E(X_N − Nc)^− = −∞ and E(X_N − Nc)^+ = +∞.

Theorem 1. If X, X_1, X_2, ... are i.i.d. with EX^+ < ∞ and E(X^+)^2 = ∞, then the stopping rule

    N = min{n : X_n ≥ 2nc}                                (3)

satisfies E(X_N − Nc)^+ = ∞.

Thus no new examples of optimality within the class of all stopping rules may be found using Robbins' result. Proofs are deferred to the appendix.

3. A Stronger Result. As it stands, Theorem 1 would not impress Robbins. Robbins requires that stopping rules stop with probability one. This contrasts with others
that allow stopping rules, N, such that P(N = ∞) > 0 (see for example the electronic text, Ferguson (2006)). This allows treatment of more general problems, for example bandit problems, but it requires specifying what the payoff will be if N = ∞. Generally, one may force the decision maker to use rules, N, that stop with probability one by choosing the payoff to be −∞ if N = ∞. However, in this problem, the restriction to stopping rules that stop with probability one is very reasonable. All Theorem 1 really says is that if E(X^+)^2 is infinite, a prophet can get an infinite expected return. He simply stops at N if there exists an n such that X_n ≥ 2nc, and stops at n = 1 otherwise. It seems that those of us without superpowers must be satisfied with V or risk not stopping and so receiving infinite loss.

Therefore, to satisfy Robbins, we need to answer the question: For what distributions of X with finite first moment and E(X^+)^2 = ∞ is it true that there exist stopping rules N that stop with probability one and for which E(X_N − Nc)^+ = ∞? The answer is contained in the following theorem.

Theorem 2. If X, X_1, X_2, ... are i.i.d. with EX^+ < ∞ and E(X^+)^2 = ∞, then there exists a stopping rule, N, with P(N < ∞) = 1 such that E(X_N − Nc)^+ = ∞, for all c < ∞.

Thus there are no distributions with infinite positive second moment for which we may dispense with Robbins' restriction to stopping rules, N, such that E(X_N − Nc)^+ < ∞. It is interesting to note that the stopping rule, N, of Theorem 2 does not depend on c. The proof is constructive. In addition, the stopping rule has the simple form, N = min{n : X_n > a_n}, for some sequence of constants, a_n.

4. Optimality of N* among all stopping rules. To extend Robbins' result to make it valid for all stopping rules, we must therefore find some way to compare two payoff distributions whose first moments don't exist. Certainly, if given the choice between two Cauchy distributions with the same interquartile range, we would prefer the one with the higher median.
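Returning to Theorem 2 for a moment: its constants a_n are built (in the appendix) from E_k = ∫_k^∞ P(X > y) dy and the partial sums Q_k = E_1 + ⋯ + E_k. The sketch below assumes, purely for illustration, the Pareto tail P(X > y) = y^(−3/2) on [1, ∞), for which E_k = 2/√k in closed form. It exhibits the two facts the construction needs: P(N > n) = 1/Q_n → 0, so the rule stops with probability one, while Σ_k E_k/Q_k grows without bound, which is what makes E(X_N − Nc)^+ infinite.

```python
import math

# Assumed illustration: Pareto(3/2) tail on [1, infinity), P(X > y) = y**(-1.5),
# so E_k = integral_k^infinity P(X > y) dy = 2/sqrt(k) in closed form.
def E(k):
    return 2.0 / math.sqrt(k)

M = 100_000
Q = 0.0            # running Q_k = E_1 + ... + E_k, increasing to infinity
series = 0.0       # running partial sum of E_k / Q_k, which diverges
checkpoints = {}
for k in range(1, M + 1):
    Q += E(k)
    series += E(k) / Q
    if k in (1_000, M):
        checkpoints[k] = (series, 1.0 / Q)

# 1/Q_n plays the role of P(N > n) for the constructed rule: it tends to 0
# even though the series of E_k/Q_k keeps growing without bound.
print(checkpoints)
```

The same computation works for any tail with finite mean and infinite second moment; only the closed form for E_k is special to the Pareto example.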
More generally, we would prefer F to G if F stochastically dominates G (i.e. if F(x) ≤ G(x) for all x). There are many ways to extend this idea further. One sufficient for the problem at hand is the following. We say that a lottery from a distribution G is preferred to a 0 payoff if, for i.i.d. Z_1, Z_2, ... from G, we have (1/n) Σ_{i=1}^n Z_i → γ a.s. for some 0 < γ ≤ ∞. For a distribution G with finite mean, this just means that a lottery from G is preferred to 0 if the mean of G is positive. For a distribution G whose mean does not exist, it can still happen that (1/n) Σ_{i=1}^n Z_i → +∞ a.s., in which case we prefer G to 0. This can happen if the mass on the positive axis dominates the mass on the negative axis, even though the expectations of the positive and negative parts are both infinite. Similarly, we prefer 0 to a lottery from G if, for i.i.d. Z_1, Z_2, ... from G, we have (1/n) Σ_{i=1}^n Z_i → −∞ a.s. More generally, we prefer a lottery from G_1 to a lottery from G_2 if, for
i.i.d. Y_1, Y_2, ... from G_1 and independent i.i.d. Z_1, Z_2, ... from G_2, we have (1/n) Σ_{i=1}^n (Y_i − Z_i) → γ a.s. for some 0 < γ ≤ ∞. Using this extension of the definition of preference between lotteries, one can show that Robbins' result is true without restricting the stopping rules one considers.

In the paper of Robbins and Samuels (1966), an extension of the definition of mathematical expectation is given which is useful in this context.

Definition. For a random variable X, we say that the extended expectation of X exists and is equal to μ, in symbols ÊX = μ, if

    (1/n) Σ_{i=1}^n X_i → μ a.s.                          (5)

when X_1, X_2, ... are i.i.d. with the same distribution as X.

If EX exists, then ÊX = EX, including the case where EX = ±∞. However, if EX does not exist, it still may happen that (1/n) Σ_{i=1}^n X_i converges almost surely to +∞ or −∞. Thus Ê is indeed an extension of the notion of expectation. Using this notion, we can state the optimality of the stopping rule (1) for the house-hunting problem.

Theorem 3. In the house-hunting problem with finite first moment, the stopping rule N* of (1) is optimal in the sense that for all stopping rules N, Ê(X_N − Nc) ≤ V.

In other words, if E(X^+) < ∞, then for any stopping rule, N, either X_N − Nc has finite expectation less than or equal to V, or its extended expectation is −∞.

5. A Near Counterexample. If the first moment of X barely exists, in the sense that EX^+ < ∞ and E{X^+ log^+(X^+)} = ∞, then there is a stopping rule that looks as if it might be a counterexample to Theorem 3.

Theorem 4. If EX^+ < ∞ and E{X^+ log^+(X^+)} = ∞, then there exists a stopping rule of the form N = min{n : X_n > a_n}, for some sequence a_n, such that P(N < ∞) = 1 and

    Σ_{n=1}^∞ E{(X_n − nc) I(N = n)} = Σ_{n=1}^∞ P(N > n−1) E{(X − nc) I(X > a_n)} = ∞     (6)

for all c > 0.

Note that, again, N does not depend on c. To see how curious this result is, examine the second sum in (6). The stopping rule N stops with probability 1, and when it stops at stage n, the conditional payoff given N = n
5 is simply E{(X nc)i(x>a n )}, a fairly large positive number, even though stopping may occur at negative values of X n nc. This has to be multiplied by the probability of reaching that stage which is P(N > n ) = n i= F (a i). The product of this and E{(X nc)i(x>a n )} is the summand of the second sum, and is a fairly small number. Nevertheless, if you add up all these small numbers, you get +. Doesn t that seem better than receiving V as the payoff? The catch is, of course, that this summation is not equal to E(X N Nc), which doesn t exist. This is an example where the expectation of the sum is not the sum of the expectations. Worse, the sum of the expectations is + while the generalized expectation of the sum is ; in other words, if you take a sample from the distribution of X N Nc, the average of the sample will tend to, a.s. APPENDIX Proof of Theorem. We show that the rule N =min{n : X n 2nc} gives E(X N Nc) + = when EX + < and E(X + ) 2 =. E(X N Nc) + = = E(X n nc)i(n = n) ncp(n = n) ncp(n >n )P(X n > 2nc) ncp(n = )P(X n > 2nc) =, since E(X + ) 2 = implies np(x n > 2nc) =, and P(N = ) =P(X n < 2nc for all n) = P(X n < 2nc) = ( P(X n 2nc)) exp{ P(X n > 2nc)} exp{ EX + /2c} > 0. (6) (7) Proof of Theorem 2. Without loss of generality, assume that X>0andthatX has a continuous distribution function. (Otherwise, only stop when X>0 and replace the distribution of X with the distribution of XU where U has a uniform(0,) distribution independent of X.) Since E(X + ) 2 =, wehave k= k P(X >y) dy =. (8) 5
6 Let E k = k P(X >y) dy and Q k = E + E E k, (9) for k>0. Then, k E k = and Q k.itisalsotruethat k= E k Q k =. (0) See, for example, Rudin (976), Problem (b) page 79. Let n be the smallest k such that Q k >. Let a n be defined by P(X a n )= Q n () and for k>n,leta k satisfy P(X a k )= Q k Q k. (2) Let N =min{k n : X k >a k }. Notice that for n>n. P(N >n)=p( n k=n {X k a k }) = n Q k = 0. Q n Q k Q n k=n + (3) Hence N stops with probability one. (This may also be seen using k>n + P(X >a k)= k>n + ( (Q k /Q k )) = k>n + (E k/q k )= ; sop(x n >a n i.o.) =.) Notice that for any <c<, P(X >a k ) P(X >ck) = In particular, Q k <ckfrom some point on. E k (c )kp(x >ck) = Q k P(X >ck) Q k P(X >ck) (c )k Q k. (4) Fix any c>. There exists n c such that Q k ck for all k n c. Therefore, E(X N Nc) + = k=n c ck P(X >y) dyp(n >k ) = E ck > E j =. Q ck 2c Q k=n c j=cn j c E ck Q k k=n c (5) 6
Proof of Theorem 3. We use the cute idea, cited in the paper of Robbins (1970) as due to David Burdick, entailed in the inequality

    X_n − nc = v + (X_n − v) − nc ≤ v + (X_n − v)^+ − nc
             ≤ v + Σ_{i=1}^n (X_i − v)^+ − nc
             = v + Σ_{i=1}^n ((X_i − v)^+ − c) = v + Σ_{i=1}^n W_i,     (16)

where W_i = (X_i − v)^+ − c. Now choose v to be any number greater than V. The W_i are i.i.d. with expectation EW_i < 0, since v > V. Let N be an arbitrary stopping rule. We show below that Ê Σ_{i=1}^N W_i < 0; this implies that Ê(X_N − Nc) ≤ v and, since v is an arbitrary number greater than V, that Ê(X_N − Nc) ≤ V.

We now use the idea of Blackwell (1946) in his proof of Wald's equation. Consider n stopping problems as follows. Let N_1 be the stopping rule N applied to the sequence W_1, W_2, ..., let N_2 be the stopping rule N applied to W_{N_1+1}, W_{N_1+2}, ..., etc., and let N_n be the stopping rule N applied to W_{N_1+⋯+N_{n−1}+1}, W_{N_1+⋯+N_{n−1}+2}, .... Let the returns for these problems be denoted by Z_1, ..., Z_n, where Z_j = W_{N_1+⋯+N_{j−1}+1} + ⋯ + W_{N_1+⋯+N_j}. Then the Z_j are independent with the same distribution as Σ_{i=1}^N W_i, and we have

    (Z_1 + ⋯ + Z_n)/n = [(W_1 + ⋯ + W_{N_1+⋯+N_n})/(N_1 + ⋯ + N_n)] · [(N_1 + ⋯ + N_n)/n].     (17)

From the strong law of large numbers, the first term on the right of (17), (W_1 + ⋯ + W_{N_1+⋯+N_n})/(N_1 + ⋯ + N_n), converges a.s. to EW_i < 0. The second term on the right of (17), (N_1 + ⋯ + N_n)/n, converges a.s. to EN, whether EN is finite or +∞. Therefore, the left side of (17) converges a.s. to EW_i · EN < 0. This shows that Ê Σ_{i=1}^N W_i < 0, as was to be shown.

Proof of Theorem 4. Again assume without loss of generality that X > 0 and that X has a strictly increasing continuous distribution function on (0, ∞). Let

    ϕ(b) = E(X | X > b) = E{X I(X > b)}/P(X > b).

Then ϕ(b) is increasing and continuous, and the inverse function, b(y) = ϕ^{−1}(y), is well-defined by

    ϕ(b(y)) = E(X | X > b(y)) = y                         (18)
for y ≥ E(X). Clearly, b(y) is increasing, lim_{y→∞} b(y) = ∞, and b(y) < y. Then

    Σ_{n=1}^∞ P(X > b(n)) = Σ_{n=1}^∞ (1/n) E{X I(X > b(n))}
                          = Σ_{n=1}^∞ (Σ_{j=1}^n 1/j) E{X I(b(n) < X ≤ b(n+1))}
                          ≥ Σ_{n=1}^∞ E{X log(ϕ(X)/2) I(b(n) < X ≤ b(n+1))}.       (19)

Therefore, using ϕ(y) > y,

    Σ_{n=1}^∞ P(X > b(n)) ≥ E{X log(X/2) I(X > b(1))} = ∞.        (20)

There exist constants, γ_k, increasing to infinity, such that

    Σ_{k=1}^∞ P(X > b(kγ_k)) = ∞.                         (21)

Let a_k = b(kγ_k). Notice that, as in the first line of (19),

    P(X > a_k) = E{X I(X > a_k)}/(kγ_k) = o(1/k).          (22)

There exists a k_1 such that

    P(X > a_k) < 1/(2k) for all k ≥ k_1.                   (23)

Let N = min{k ≥ k_1 : X_k > a_k}. Then from (21) and the Borel–Cantelli lemma, P(N < ∞) = 1. Now fix any c > 1 and choose k_c ≥ k_1 so that a_k > b(2ck) for all k ≥ k_c. Notice that for k ≥ k_1,

    P(N > k) = P(X_j ≤ a_j for k_1 ≤ j ≤ k) = ∏_{j=k_1}^k (1 − P(X > a_j))
             ≥ ∏_{j=k_1}^k (1 − 1/(2j)) > ∏_{j=k_1}^k (2j−2)/(2j) = (2k_1 − 2)/(2k).      (24)
Putting these inequalities together,

    Σ_{n=k_c}^∞ P(N > n−1) E{(X − cn) I(X > a_n)}
        = Σ_{n=k_c}^∞ P(N > n−1) E{(X − cn) I(X > b(nγ_n))}       (def. of a_n)
        = Σ_{n=k_c}^∞ P(N > n−1) [nγ_n − cn] P(X > b(nγ_n))       (from (18))
        ≥ (1/2) Σ_{n=k_c}^∞ (γ_n − c) P(X > b(nγ_n))              (from (24))
        = ∞                                                       (since γ_n → ∞ and (21)).

References

Blackwell, D. (1946). On an equation of Wald. Annals of Mathematical Statistics 17.

Chow, Y. S., and Robbins, H. (1961). A martingale system theorem and applications. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability 1.

Derman, C., and Sacks, J. (1960). Replacement of periodically inspected equipment (an optimal stopping rule). Naval Research Logistics Quarterly 7.

Ferguson, T. S. (2006). Optimal Stopping and Applications. Electronic text.

MacQueen, J., and Miller, R. G., Jr. (1960). Optimal persistence policies. Operations Research 8.

Robbins, H. (1970). Optimal stopping. American Mathematical Monthly 77.

Robbins, H., and Samuels, E. (1966). An extension of a lemma of Wald. Journal of Applied Probability 3.

Rudin, W. (1976). Principles of Mathematical Analysis, 3rd Edition. McGraw-Hill.

Sakaguchi, M. (1961). Dynamic programming of some sequential sampling design. Journal of Mathematical Analysis and Applications 2.
6.254 : Game Theory with Engineering Applications Lecture 3: Strategic Form Games - Solution Concepts Asu Ozdaglar MIT February 9, 2010 1 Introduction Outline Review Examples of Pure Strategy Nash Equilibria
More informationUNIFORM BOUNDS FOR BLACK SCHOLES IMPLIED VOLATILITY
UNIFORM BOUNDS FOR BLACK SCHOLES IMPLIED VOLATILITY MICHAEL R. TEHRANCHI UNIVERSITY OF CAMBRIDGE Abstract. The Black Scholes implied total variance function is defined by V BS (k, c) = v Φ ( k/ v + v/2
More informationApproximate Revenue Maximization with Multiple Items
Approximate Revenue Maximization with Multiple Items Nir Shabbat - 05305311 December 5, 2012 Introduction The paper I read is called Approximate Revenue Maximization with Multiple Items by Sergiu Hart
More informationLecture Notes 1
4.45 Lecture Notes Guido Lorenzoni Fall 2009 A portfolio problem To set the stage, consider a simple nite horizon problem. A risk averse agent can invest in two assets: riskless asset (bond) pays gross
More informationMarch 30, Why do economists (and increasingly, engineers and computer scientists) study auctions?
March 3, 215 Steven A. Matthews, A Technical Primer on Auction Theory I: Independent Private Values, Northwestern University CMSEMS Discussion Paper No. 196, May, 1995. This paper is posted on the course
More information6.231 DYNAMIC PROGRAMMING LECTURE 5 LECTURE OUTLINE
6.231 DYNAMIC PROGRAMMING LECTURE 5 LECTURE OUTLINE Stopping problems Scheduling problems Minimax Control 1 PURE STOPPING PROBLEMS Two possible controls: Stop (incur a one-time stopping cost, and move
More informationCompeting Mechanisms with Limited Commitment
Competing Mechanisms with Limited Commitment Suehyun Kwon CESIFO WORKING PAPER NO. 6280 CATEGORY 12: EMPIRICAL AND THEORETICAL METHODS DECEMBER 2016 An electronic version of the paper may be downloaded
More informationAre the Azéma-Yor processes truly remarkable?
Are the Azéma-Yor processes truly remarkable? Jan Obłój j.obloj@imperial.ac.uk based on joint works with L. Carraro, N. El Karoui, A. Meziou and M. Yor Swiss Probability Seminar, 5 Dec 2007 Are the Azéma-Yor
More informationProbability without Measure!
Probability without Measure! Mark Saroufim University of California San Diego msaroufi@cs.ucsd.edu February 18, 2014 Mark Saroufim (UCSD) It s only a Game! February 18, 2014 1 / 25 Overview 1 History of
More informationCS364A: Algorithmic Game Theory Lecture #14: Robust Price-of-Anarchy Bounds in Smooth Games
CS364A: Algorithmic Game Theory Lecture #14: Robust Price-of-Anarchy Bounds in Smooth Games Tim Roughgarden November 6, 013 1 Canonical POA Proofs In Lecture 1 we proved that the price of anarchy (POA)
More informationSupplementary Material to: Peer Effects, Teacher Incentives, and the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya
Supplementary Material to: Peer Effects, Teacher Incentives, and the Impact of Tracking: Evidence from a Randomized Evaluation in Kenya by Esther Duflo, Pascaline Dupas, and Michael Kremer This document
More informationif a < b 0 if a = b 4 b if a > b Alice has commissioned two economists to advise her on whether to accept the challenge.
THE COINFLIPPER S DILEMMA by Steven E. Landsburg University of Rochester. Alice s Dilemma. Bob has challenged Alice to a coin-flipping contest. If she accepts, they ll each flip a fair coin repeatedly
More informationProblems from 9th edition of Probability and Statistical Inference by Hogg, Tanis and Zimmerman:
Math 224 Fall 207 Homework 5 Drew Armstrong Problems from 9th edition of Probability and Statistical Inference by Hogg, Tanis and Zimmerman: Section 3., Exercises 3, 0. Section 3.3, Exercises 2, 3, 0,.
More informationProblem Set 3 Solutions
Problem Set 3 Solutions Ec 030 Feb 9, 205 Problem (3 points) Suppose that Tomasz is using the pessimistic criterion where the utility of a lottery is equal to the smallest prize it gives with a positive
More informationTopics in Contract Theory Lecture 5. Property Rights Theory. The key question we are staring from is: What are ownership/property rights?
Leonardo Felli 15 January, 2002 Topics in Contract Theory Lecture 5 Property Rights Theory The key question we are staring from is: What are ownership/property rights? For an answer we need to distinguish
More informationMulti-armed bandit problems
Multi-armed bandit problems Stochastic Decision Theory (2WB12) Arnoud den Boer 13 March 2013 Set-up 13 and 14 March: Lectures. 20 and 21 March: Paper presentations (Four groups, 45 min per group). Before
More informationAmerican options and early exercise
Chapter 3 American options and early exercise American options are contracts that may be exercised early, prior to expiry. These options are contrasted with European options for which exercise is only
More informationDynamic Pricing with Varying Cost
Dynamic Pricing with Varying Cost L. Jeff Hong College of Business City University of Hong Kong Joint work with Ying Zhong and Guangwu Liu Outline 1 Introduction 2 Problem Formulation 3 Pricing Policy
More informationCharacterization of the Optimum
ECO 317 Economics of Uncertainty Fall Term 2009 Notes for lectures 5. Portfolio Allocation with One Riskless, One Risky Asset Characterization of the Optimum Consider a risk-averse, expected-utility-maximizing
More informationRational Infinitely-Lived Asset Prices Must be Non-Stationary
Rational Infinitely-Lived Asset Prices Must be Non-Stationary By Richard Roll Allstate Professor of Finance The Anderson School at UCLA Los Angeles, CA 90095-1481 310-825-6118 rroll@anderson.ucla.edu November
More informationChapter 7. Sampling Distributions and the Central Limit Theorem
Chapter 7. Sampling Distributions and the Central Limit Theorem 1 Introduction 2 Sampling Distributions related to the normal distribution 3 The central limit theorem 4 The normal approximation to binomial
More informationThe Game-Theoretic Framework for Probability
11th IPMU International Conference The Game-Theoretic Framework for Probability Glenn Shafer July 5, 2006 Part I. A new mathematical foundation for probability theory. Game theory replaces measure theory.
More informationThe Real Numbers. Here we show one way to explicitly construct the real numbers R. First we need a definition.
The Real Numbers Here we show one way to explicitly construct the real numbers R. First we need a definition. Definitions/Notation: A sequence of rational numbers is a funtion f : N Q. Rather than write
More informationAre the Azéma-Yor processes truly remarkable?
Are the Azéma-Yor processes truly remarkable? Jan Obłój j.obloj@imperial.ac.uk based on joint works with L. Carraro, N. El Karoui, A. Meziou and M. Yor Welsh Probability Seminar, 17 Jan 28 Are the Azéma-Yor
More informationDepartment of Mathematics. Mathematics of Financial Derivatives
Department of Mathematics MA408 Mathematics of Financial Derivatives Thursday 15th January, 2009 2pm 4pm Duration: 2 hours Attempt THREE questions MA408 Page 1 of 5 1. (a) Suppose 0 < E 1 < E 3 and E 2
More informationHow do Variance Swaps Shape the Smile?
How do Variance Swaps Shape the Smile? A Summary of Arbitrage Restrictions and Smile Asymptotics Vimal Raval Imperial College London & UBS Investment Bank www2.imperial.ac.uk/ vr402 Joint Work with Mark
More informationAn algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits
JMLR: Workshop and Conference Proceedings vol 49:1 5, 2016 An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits Peter Auer Chair for Information Technology Montanuniversitaet
More informationInformation aggregation for timing decision making.
MPRA Munich Personal RePEc Archive Information aggregation for timing decision making. Esteban Colla De-Robertis Universidad Panamericana - Campus México, Escuela de Ciencias Económicas y Empresariales
More informationCorrections to the Second Edition of Modeling and Analysis of Stochastic Systems
Corrections to the Second Edition of Modeling and Analysis of Stochastic Systems Vidyadhar Kulkarni November, 200 Send additional corrections to the author at his email address vkulkarn@email.unc.edu.
More informationGoal Problems in Gambling Theory*
Goal Problems in Gambling Theory* Theodore P. Hill Center for Applied Probability and School of Mathematics Georgia Institute of Technology Atlanta, GA 30332-0160 Abstract A short introduction to goal
More informationMechanism Design and Auctions
Mechanism Design and Auctions Game Theory Algorithmic Game Theory 1 TOC Mechanism Design Basics Myerson s Lemma Revenue-Maximizing Auctions Near-Optimal Auctions Multi-Parameter Mechanism Design and the
More informationApplying Risk Theory to Game Theory Tristan Barnett. Abstract
Applying Risk Theory to Game Theory Tristan Barnett Abstract The Minimax Theorem is the most recognized theorem for determining strategies in a two person zerosum game. Other common strategies exist such
More informationMonte-Carlo Planning: Introduction and Bandit Basics. Alan Fern
Monte-Carlo Planning: Introduction and Bandit Basics Alan Fern 1 Large Worlds We have considered basic model-based planning algorithms Model-based planning: assumes MDP model is available Methods we learned
More information4 Reinforcement Learning Basic Algorithms
Learning in Complex Systems Spring 2011 Lecture Notes Nahum Shimkin 4 Reinforcement Learning Basic Algorithms 4.1 Introduction RL methods essentially deal with the solution of (optimal) control problems
More informationarxiv: v2 [q-fin.pr] 23 Nov 2017
VALUATION OF EQUITY WARRANTS FOR UNCERTAIN FINANCIAL MARKET FOAD SHOKROLLAHI arxiv:17118356v2 [q-finpr] 23 Nov 217 Department of Mathematics and Statistics, University of Vaasa, PO Box 7, FIN-6511 Vaasa,
More informationOptimal Stopping Rules of Discrete-Time Callable Financial Commodities with Two Stopping Boundaries
The Ninth International Symposium on Operations Research Its Applications (ISORA 10) Chengdu-Jiuzhaigou, China, August 19 23, 2010 Copyright 2010 ORSC & APORC, pp. 215 224 Optimal Stopping Rules of Discrete-Time
More informationModes of Convergence
Moes of Convergence Electrical Engineering 126 (UC Berkeley Spring 2018 There is only one sense in which a sequence of real numbers (a n n N is sai to converge to a limit. Namely, a n a if for every ε
More information