Markov Chains (Part 2)
- Magnus Barton
1 Markov Chains (Part 2): More Examples and Chapman-Kolmogorov Equations
2 A Stock Price Stochastic Process

Consider a stock whose price either goes up or down every day. Let X_t be a random variable that is 0 if the stock price goes up on day t and 1 if the stock price goes down on day t. The probability that the stock price goes up tomorrow, given that it goes up today, is 0.7. If the stock goes down today, the probability that it goes up tomorrow is 0.5.

Does the stochastic process X_t possess the Markovian property? What is the one-step transition probability matrix?
3 A Stock Price Stochastic Process

Yes: X_t is Markovian, because the distribution of tomorrow's state depends only on today's state. The one-step transition probabilities are:

  Stock behavior today | P(up tomorrow) | P(down tomorrow)
  Up                   | 0.7            | 0.3
  Down                 | 0.5            | 0.5

so the one-step transition probability matrix is

  P = [ 0.7  0.3 ]
      [ 0.5  0.5 ]
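The answer above can be sketched in code (a minimal sketch; the coding 0 = up, 1 = down follows the slides):

```python
import random

# One-step transition matrix for the stock chain: state 0 = up, 1 = down.
# Row i is the distribution of tomorrow's state given today's state i.
P = [[0.7, 0.3],   # up today   -> P(up), P(down) tomorrow
     [0.5, 0.5]]   # down today -> P(up), P(down) tomorrow

for row in P:
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a probability distribution

# Simulate the chain: tomorrow depends only on today (the Markov property).
random.seed(0)
state, ups = 0, 0
for _ in range(10_000):
    state = 0 if random.random() < P[state][0] else 1
    ups += (state == 0)
# Long-run fraction of up days is near 0.5 / (0.3 + 0.5) = 0.625.
assert 0.55 < ups / 10_000 < 0.70
```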
4 A Stock Price Stochastic Process

Now suppose the probability that the stock goes up or down tomorrow depends on the stock's behavior both today and yesterday:

  Stock behavior yesterday | Stock behavior today | P(up tomorrow)
  Up                       | Up                   | 0.9
  Down                     | Up                   | 0.6
  Up                       | Down                 | 0.5
  Down                     | Down                 | 0.3

Intuitively, can we define a stochastic process X_t that possesses the Markovian property?
5 A Stock Price Stochastic Process

We can expand the state space to include a little bit of history and create a Markov chain. Let X_t be a random variable with four states:

  (0,0) if the stock price went up on day t-1 and up on day t,
  (1,0) if the stock price went down on day t-1 and up on day t,
  (0,1) if the stock price went up on day t-1 and down on day t,
  (1,1) if the stock price went down on day t-1 and down on day t.

Intuitively, the stochastic process X_t now satisfies the Markovian property.
6 A Stock Price Stochastic Process

Now the one-step transition probability matrix is 4x4. Rows index the state at (t-1, t); columns index the state at (t, t+1):

                      (0,0)     (1,0)      (0,1)      (1,1)
                      (up,up)   (down,up)  (up,down)  (down,down)
  (0,0) (up,up)        0.9       0          0.1        0
  (1,0) (down,up)      0.6       0          0.4        0
  (0,1) (up,down)      0         0.5        0          0.5
  (1,1) (down,down)    0         0.3        0          0.7

(From state (i,j) the chain can only move to a state of the form (j,k), so two entries in each row are zero.)
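The 4x4 matrix above can be generated mechanically from the table on slide 4 (a sketch; the dictionary encoding of that table is ours):

```python
# Build the expanded "two-day history" chain from the table of
# P(up tomorrow | yesterday, today); 0 = up, 1 = down.
p_up = {(0, 0): 0.9, (1, 0): 0.6, (0, 1): 0.5, (1, 1): 0.3}

states = [(0, 0), (1, 0), (0, 1), (1, 1)]  # (yesterday, today)
P = [[0.0] * 4 for _ in range(4)]
for i, (yesterday, today) in enumerate(states):
    up = p_up[(yesterday, today)]
    # Tomorrow's state is (today, tomorrow): yesterday's value is forgotten.
    P[i][states.index((today, 0))] = up        # stock goes up
    P[i][states.index((today, 1))] = 1 - up    # stock goes down

for row in P:
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a distribution
```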
7 Multi-step Transition Probabilities

So far we have focused only on the one-step transition probabilities p_ij. But these do not directly answer some interesting questions. For example: if it is sunny today, what is the probability that it will be sunny the day after tomorrow? If the stock went down today, what is the probability that it will go down three days later?

These are called multi-step (or n-step) transition probabilities. In particular, we want to find P(X_{t+n} = j | X_t = i), which is denoted p_ij^(n). The Chapman-Kolmogorov (C-K) equations give a formula for calculating n-step transition probabilities.
8 n-step Transition Probabilities

If the one-step transition probabilities are stationary, then the n-step transition probabilities can be written

  P(X_{t+n} = j | X_t = i) = P(X_n = j | X_0 = i) = p_ij^(n)   for all t

[Slide shows a timeline from t to t+n: the process starts in state i at time t and is in state j at time t+n.]
9 Inventory Example: n-step Transition Probabilities

p_12^(3) = conditional probability that, starting with one camera, there will be two cameras after three weeks.

There are four ways that could happen. [Slide shows the four possible three-week sample paths from state 1 to state 2.]
10 Two-step Transition Probabilities for the Weather Example

Intuition: to go from state 0 to state 0 in two steps, we can either go from 0 to 0 in one step and then from 0 to 0 in one step, OR go from 0 to 1 in one step and then from 1 to 0 in one step. Therefore,

  p_00^(2) = P(X_2 = 0 | X_0 = 0) = p_00 p_00 + p_01 p_10

In short,

  p_00^(2) = sum_{k=0}^{1} p_0k p_k0

You just wrote down your first Chapman-Kolmogorov equation using intuition. Now use the same intuition to write down the other two-step transition probabilities p_01^(2), p_10^(2), p_11^(2). These four two-step transition probabilities can be arranged in a matrix P^(2) called the two-step transition matrix.
11 Two-step Transition Probabilities for the Weather Example

  P^(2) = [ p_00^(2)  p_01^(2) ] = [ p_00 p_00 + p_01 p_10   p_00 p_01 + p_01 p_11 ]
          [ p_10^(2)  p_11^(2) ]   [ p_10 p_00 + p_11 p_10   p_10 p_01 + p_11 p_11 ]

Interpretation: p_01^(2) is the probability that the weather the day after tomorrow will be rainy if the weather today is sunny.

An interesting observation: the two-step transition matrix is the square of the one-step transition matrix. That is, P^(2) = P^2. Why? Recall the matrix product, write down P^2, and confirm that it equals P^(2) above.
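For the weather chain this can be checked numerically. The one-step matrix below is the one used on slide 17 (0 = sunny, 1 = rainy); the entry-by-entry C-K sums agree with the matrix square:

```python
# Weather example: state 0 = sunny, state 1 = rainy.
p = [[0.5, 0.5],
     [0.2, 0.8]]

# Two-step probabilities from the Chapman-Kolmogorov sums...
p2 = [[sum(p[i][k] * p[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

# ...agree with squaring the matrix by hand, e.g. the (0,0) entry:
assert abs(p2[0][0] - (p[0][0] * p[0][0] + p[0][1] * p[1][0])) < 1e-12
assert abs(p2[0][0] - 0.35) < 1e-9  # sunny -> sunny in two days
```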
12 Two-step Transition Probabilities for General Markov Chains

For a general Markov chain with states 0, 1, ..., M, a two-step transition from i to j means going from i to some state k in one step and then from k to j in one step. Therefore the two-step transition probability matrix is P^(2) = P^2, where

  P = [ p_00  p_01  ...  p_0M ]
      [ p_10  p_11  ...  p_1M ]
      [  ...             ...  ]
      [ p_M0  p_M1  ...  p_MM ]

with

  p_ij^(2) = sum_{k=0}^{M} p_ik p_kj
13 n-step Transition Probabilities for General Markov Chains

For a general Markov chain with states 0, 1, ..., M, an n-step transition from i to j means the process goes from i to j in n time steps. Let m be a non-negative integer not bigger than n. The Chapman-Kolmogorov equation is

  p_ij^(n) = sum_{k=0}^{M} p_ik^(m) p_kj^(n-m)

Interpretation: if the process goes from state i to state j in n steps, then it must go from state i to some state k in m (fewer than n) steps and then from k to j in the remaining n-m steps. In matrix notation, P^(n) = P^(m) P^(n-m). This implies that the n-step transition matrix is the nth power of the one-step transition matrix. (Why? Substitute m = 1 and see what happens.)
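The matrix identity P^(n) = P^(m) P^(n-m) is easy to verify numerically; here is a sketch using the weather chain's one-step matrix (any stochastic matrix would do):

```python
import numpy as np

# One-step matrix of the weather example (0 = sunny, 1 = rainy).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

n, m = 5, 2
lhs = np.linalg.matrix_power(P, n)   # P^(5)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n - m)
assert np.allclose(lhs, rhs)         # Chapman-Kolmogorov: P^(5) = P^(2) P^(3)
assert np.allclose(lhs.sum(axis=1), 1.0)  # P^(5) is still a stochastic matrix
```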
14 Chapman-Kolmogorov Equations

  p_ij^(n) = sum_{k=0}^{M} p_ik^(m) p_kj^(n-m)   for all i, j, n and 0 <= m <= n

Consider the case m = 1:

  p_ij^(n) = sum_{k=0}^{M} p_ik p_kj^(n-1),   i.e.  P^(n) = P P^(n-1)

[Slide shows a timeline: the process moves from i at time 0 to some intermediate state k at time 1, then to j at time n.]
15 Chapman-Kolmogorov Equations

The p_ij^(n) are the elements of the n-step transition matrix P^(n). Note, though, that

  P^(n) = P P^(n-1) = P P P^(n-2) = ... = P P ... P = P^n
16 How to Use the C-K Equations

To answer the question "what is the probability that, starting in state i, the Markov chain will be in state j after n steps?":

  1. Write down the one-step transition probability matrix.
  2. Use your calculator to compute the nth power of this one-step transition probability matrix.
  3. Read off the (i,j)th entry of this nth-power matrix.
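Applied to the question from slide 7 ("if it is sunny today, what is the probability it will be sunny the day after tomorrow?"), the recipe takes three lines with the weather chain:

```python
import numpy as np

# Step 1: one-step transition matrix (0 = sunny, 1 = rainy).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
# Step 2: n-th power, here n = 2.
P2 = np.linalg.matrix_power(P, 2)
# Step 3: read off the (i, j)th entry, here i = j = 0 (sunny).
answer = P2[0, 0]
assert abs(answer - 0.35) < 1e-9
```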
17 Weather Example: n-step Transitions

Two-step transition probability matrix (states ordered Sunny, Rainy):

  P^(2) = P^2 = [ 0.5  0.5 ]^2  =  [ 0.35  0.65 ]
                [ 0.2  0.8 ]       [ 0.26  0.74 ]
18 Inventory Example: n-step Transitions

Two-step transition probability matrix (states = 0, 1, 2, 3 cameras):

  P^(2) = P^2 = [ 0.080  0.184  0.368  0.368 ]^2     [ 0.249  0.286  0.300  0.165 ]
                [ 0.632  0.368  0      0     ]    =  [ 0.283  0.252  0.233  0.233 ]
                [ 0.264  0.368  0.368  0     ]       [ 0.351  0.319  0.233  0.097 ]
                [ 0.080  0.184  0.368  0.368 ]       [ 0.249  0.286  0.300  0.165 ]

Note: even though p_12 = 0, p_12^(2) > 0.
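The two-step matrix can be reproduced by squaring the one-step matrix. In the sketch below the one-step entries are the standard values of this camera-inventory example (an assumption on our part; as a check, the first column of their square matches the values 0.249, 0.283, 0.351, 0.249 used on slide 21):

```python
import numpy as np

# One-step matrix of the camera inventory example (states = 0, 1, 2, 3 cameras).
# These entries are the example's standard values, assumed here.
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

P2 = P @ P  # two-step matrix P^(2) = P^2

assert P[1, 2] == 0.0            # impossible to go 1 -> 2 in one week...
assert P2[1, 2] > 0.0            # ...but possible in two weeks
assert round(P2[0, 0], 3) == 0.249
```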
19 Inventory Example: n-step Transitions

p_13^(2) = probability that the inventory goes from 1 camera to 3 cameras in two weeks = 0.233 (note: even though p_13 = 0).

Question: assuming the store starts with 3 cameras, find the probability that there will be 0 cameras in 2 weeks: p_30^(2) = 0.249.
20 (Unconditional) Probability of Being in State j at Time n

The transition probability p_ij^(n) is a conditional probability, P(X_n = j | X_0 = i). How do we un-condition the probabilities? That is, how do we find the unconditional probability of being in state j at time n, P(X_n = j)? The probabilities P(X_0 = i) define the initial state distribution, and

  P(X_n = j) = sum_{i=0}^{M} P(X_n = j | X_0 = i) P(X_0 = i) = sum_{i=0}^{M} p_ij^(n) P(X_0 = i)
21 Inventory Example: Unconditional Probabilities

If initial conditions were unknown, we might assume it's equally likely to be in any initial state: P(X_0 = 0) = P(X_0 = 1) = P(X_0 = 2) = P(X_0 = 3) = 1/4. Then what is the probability that we order (any) cameras in two weeks?

  P(order in 2 weeks) = P(in state 0 at time 2)
    = P(X_0 = 0) p_00^(2) + P(X_0 = 1) p_10^(2) + P(X_0 = 2) p_20^(2) + P(X_0 = 3) p_30^(2)
    = (1/4)(0.249) + (1/4)(0.283) + (1/4)(0.351) + (1/4)(0.249)
    = 0.283
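The same computation in matrix form: the unconditional distribution of X_n is the initial row vector times the n-step matrix (again assuming the camera-inventory example's standard one-step matrix):

```python
import numpy as np

# Camera inventory one-step matrix (standard values, assumed here).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
q0 = np.array([0.25, 0.25, 0.25, 0.25])  # uniform initial distribution

q2 = q0 @ np.linalg.matrix_power(P, 2)   # distribution of X_2
assert abs(q2[0] - 0.283) < 0.001        # P(order in 2 weeks)
assert abs(q2.sum() - 1.0) < 1e-12       # still a probability distribution
```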
22 Steady-State Probabilities

As n gets large, what happens? What is the probability of being in any state? (For example, in the inventory example, what happens as more and more weeks go by?) Consider the 8-step transition probability matrix for the inventory example:

  P^(8) = P^8 ≈ [ 0.286  0.285  0.263  0.166 ]
                [ 0.286  0.285  0.263  0.166 ]
                [ 0.286  0.285  0.263  0.166 ]
                [ 0.286  0.285  0.263  0.166 ]

All four rows are (nearly) identical: after 8 weeks, the probability of being in each state hardly depends on the starting state.
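The flattening of the rows is easy to confirm numerically (same assumed one-step matrix for the camera inventory example):

```python
import numpy as np

# Camera inventory one-step matrix (standard values, assumed here).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

P8 = np.linalg.matrix_power(P, 8)
# Every row of P^8 is (almost) the same: the chain has essentially
# forgotten its starting state after 8 weeks.
assert np.abs(P8 - P8[0]).max() < 0.005
assert abs(P8[0, 0] - 0.286) < 0.001
```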
23 Steady-State Probabilities

In the long run (e.g., after 8 or more weeks), the probability of being in state j is

  lim_{n -> infinity} p_ij^(n) = pi_j

These probabilities are called the steady-state probabilities. Another interpretation is that pi_j is the fraction of time the process is in state j (in the long run). This limit exists for any irreducible ergodic Markov chain. (Next, we will define these terms, then return to steady-state probabilities.)
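As a preview: the steady-state vector need not be found by taking large powers; it solves the linear system pi P = pi together with sum(pi) = 1. A sketch (same assumed inventory matrix as above):

```python
import numpy as np

# Camera inventory one-step matrix (standard values, assumed here).
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])

n = P.shape[0]
# Stack the stationarity equations (P^T - I) pi = 0 with the
# normalization row sum(pi) = 1, then solve by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

assert np.allclose(pi, pi @ P)      # stationary: pi P = pi
assert abs(pi.sum() - 1.0) < 1e-9   # a probability distribution
# pi agrees with any row of P^8, as slide 22 suggests.
assert np.allclose(pi, np.linalg.matrix_power(P, 8)[0], atol=0.005)
```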
More informationElif Özge Özdamar T Reinforcement Learning - Theory and Applications February 14, 2006
On the convergence of Q-learning Elif Özge Özdamar elif.ozdamar@helsinki.fi T-61.6020 Reinforcement Learning - Theory and Applications February 14, 2006 the covergence of stochastic iterative algorithms
More informationIt is a measure to compare bonds (among other things).
It is a measure to compare bonds (among other things). It provides an estimate of the volatility or the sensitivity of the market value of a bond to changes in interest rates. There are two very closely
More informationADVANCED MACROECONOMIC TECHNIQUES NOTE 7b
316-406 ADVANCED MACROECONOMIC TECHNIQUES NOTE 7b Chris Edmond hcpedmond@unimelb.edu.aui Aiyagari s model Arguably the most popular example of a simple incomplete markets model is due to Rao Aiyagari (1994,
More informationChapter 8 Statistical Intervals for a Single Sample
Chapter 8 Statistical Intervals for a Single Sample Part 1: Confidence intervals (CI) for population mean µ Section 8-1: CI for µ when σ 2 known & drawing from normal distribution Section 8-1.2: Sample
More information1 Dynamic programming
1 Dynamic programming A country has just discovered a natural resource which yields an income per period R measured in terms of traded goods. The cost of exploitation is negligible. The government wants
More informationEconomics 8106 Macroeconomic Theory Recitation 2
Economics 8106 Macroeconomic Theory Recitation 2 Conor Ryan November 8st, 2016 Outline: Sequential Trading with Arrow Securities Lucas Tree Asset Pricing Model The Equity Premium Puzzle 1 Sequential Trading
More informationLong run equilibria in an asymmetric oligopoly
Economic Theory 14, 705 715 (1999) Long run equilibria in an asymmetric oligopoly Yasuhito Tanaka Faculty of Law, Chuo University, 742-1, Higashinakano, Hachioji, Tokyo, 192-03, JAPAN (e-mail: yasuhito@tamacc.chuo-u.ac.jp)
More informationSYSM 6304 Risk and Decision Analysis Lecture 2: Fitting Distributions to Data
SYSM 6304 Risk and Decision Analysis Lecture 2: Fitting Distributions to Data M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu September 5, 2015
More informationStat 260/CS Learning in Sequential Decision Problems. Peter Bartlett
Stat 260/CS 294-102. Learning in Sequential Decision Problems. Peter Bartlett 1. Gittins Index: Discounted, Bayesian (hence Markov arms). Reduces to stopping problem for each arm. Interpretation as (scaled)
More informationPractice Problems on Term Structure
Practice Problems on Term Structure 1- The yield curve and expectations hypothesis (30 points) Assume that the policy of the Fed is given by the Taylor rule that we studied in class, that is i t = 1.5
More informationAdvanced Macroeconomics 5. Rational Expectations and Asset Prices
Advanced Macroeconomics 5. Rational Expectations and Asset Prices Karl Whelan School of Economics, UCD Spring 2015 Karl Whelan (UCD) Asset Prices Spring 2015 1 / 43 A New Topic We are now going to switch
More information1 Asset Pricing: Replicating portfolios
Alberto Bisin Corporate Finance: Lecture Notes Class 1: Valuation updated November 17th, 2002 1 Asset Pricing: Replicating portfolios Consider an economy with two states of nature {s 1, s 2 } and with
More information