Mengdi Wang. July 3rd, 2012. Laboratory for Information and Decision Systems, M.I.T.
1 Practice. July 3rd, 2012. Laboratory for Information and Decision Systems, M.I.T.
3 Infinite-Horizon DP
Minimize over policies π = {µ_0, µ_1, ...} the objective cost function
J_π(x_0) = lim_{N→∞} E_{w_k, k=0,1,...} { Σ_{k=0}^{N−1} α^k g(x_k, µ_k(x_k), w_k) }
How to do DP:
- Approximation: parameterize policies/cost vectors, aggregation, etc.
- Simulation: use simulation-generated trajectories {x_k} to calculate DP quantities, without knowing the system.
4 Markovian Decision Process
Assume the system is an n-state (controlled) Markov chain. Change to Markov chain notation:
- States i = 1, ..., n (instead of x)
- Transition probabilities p_{i_k i_{k+1}}(u_k) [instead of x_{k+1} = f(x_k, u_k, w_k)]
- Cost per stage g(i, u, j) [instead of g(x_k, u_k, w_k)]
Cost of a policy π = {µ_0, µ_1, ...}:
J_π(i) = lim_{N→∞} E { Σ_{k=0}^{N−1} α^k g(i_k, µ_k(i_k), i_{k+1}) | i_0 = i }
5 MDP Continued
The optimal cost vector J* satisfies the Bellman equation: for all i,
J*(i) = min_{u∈U(i)} Σ_{j=1}^n p_ij(u) ( g(i, u, j) + α J*(j) ),
or in matrix form,
J* = min_{µ: {1,...,n}→U} { g_µ + α P_µ J* }.
Shorthand notation for the DP mappings:
(T J)(i) = min_{u∈U(i)} Σ_{j=1}^n p_ij(u) ( g(i, u, j) + α J(j) ), i = 1, ..., n,
(T_µ J)(i) = Σ_{j=1}^n p_ij(µ(i)) ( g(i, µ(i), j) + α J(j) ), i = 1, ..., n.
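These two mappings translate almost line for line into code. Below is a minimal Python/NumPy sketch (not from the slides), under an assumed representation: P[u] is the n × n transition matrix under control u, g[u] the matching n × n stage-cost matrix, alpha the discount factor, and mu an array giving µ(i) for each state i.

```python
# Sketch of the DP mappings T and T_mu for a finite MDP (assumed data layout).
import numpy as np

def T_mu(J, mu, P, g, alpha):
    # (T_mu J)(i) = sum_j p_ij(mu(i)) * ( g(i, mu(i), j) + alpha * J(j) )
    return np.array([P[mu[i]][i] @ (g[mu[i]][i] + alpha * J)
                     for i in range(len(J))])

def T(J, P, g, alpha, controls):
    # (T J)(i) = min over u of sum_j p_ij(u) * ( g(i, u, j) + alpha * J(j) )
    return np.array([min(P[u][i] @ (g[u][i] + alpha * J) for u in controls)
                     for i in range(len(J))])
```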
6 Approximation Architecture
Approximation in Policy Space: parameterize the set of policies µ using a vector r, and then optimize over r.
Approximation in Value Space: approximate J* and J_µ from a family of functions parameterized by r, e.g., a linear approximation J ≈ Φr, J(i) ≈ φ(i)′r.
7 DP Algorithms: A Roadmap
Approximate PI (*): implement the two steps of PI in an approximate sense:
- Policy evaluation: solve J_µt = T_µt J_µt by approximation/simulation.
  - Direct approach (*), e.g., simulation-based least squares.
  - Indirect approach: solve the projected equation Φr = Π T_µt(Φr) by TD/LSTD/LSPE.
- Policy improvement: T_µt+1 J_µt = T J_µt, using the approximate cost vector/Q-factors.
Approximation of J* and Q*: solve J = TJ or Q = FQ directly by simulation, e.g., Q-learning, Bellman error minimization, the LP approach.
9 Call Options
A call option gives the buyer of the option the right to buy the underlying asset at a fixed price (the strike price K). The buyer pays a price for this right. At or before expiration:
- If the value of the underlying asset S > strike price K, the buyer exercises and makes the difference S − K.
- If S < K, the buyer does not exercise.
10 Valuing American Call Options
Variables: strike price K; time till expiration T; price of the underlying asset S; volatility, dividends, etc.
Valuing American options requires the solution of an optimal stopping problem:
Option Price = E[ S(t*) − K | option eventually exercised ],
where t* is the optimal exercising time. If the option writers do not solve for t* correctly, the option buyers will have an arbitrage opportunity to exploit the option writers.
11 Infinite-Horizon DP Formulation
Assume that:
- Dynamics of the underlying asset: S_{t+1} = f(S_t, w_t)
- State: S_t, the price of the underlying asset
- Control: u_t ∈ {Exercise, Hold}
- Transition cost: g_t(Hold) = 0, g_t(Exercise) = S_t − K
- The option never expires, and there exists a discount factor α ∈ (0, 1).
Bellman equation: let J(S) be the option price when the current stock price is S; then
J(S_t) = max{ S_t − K, α E[ J(S_{t+1}) ] }.
12 Binomial Model
For simplicity, consider a model with a finite number of states:
S_{t+1} = min{u, u S_t} with probability p; max{d, d S_t} with probability 1 − p.
The Bellman equation is J = TJ, where
(TJ)(S) = max{ S − K, α [ p J(min{u, uS}) + (1 − p) J(max{d, dS}) ] }.
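As a reference point for the exercises, here is a minimal value-iteration sketch for this model. It assumes (these are illustrative choices, not the slides' setup) a recombining price grid S(i) = S0 · u^(i − n//2), with an up move shifting the state index by +1 and a down move by −1, capped at the grid boundaries, which matches the ±1 index transitions used on slide 19.

```python
# Value iteration J <- TJ on a capped recombining grid (assumed discretization).
import numpy as np

n, u, p, alpha, K, S0 = 101, 1.02, 0.5, 0.99, 1.0, 1.0   # illustrative parameters
S = S0 * u ** np.arange(-(n // 2), n // 2 + 1)    # price at each state index
up = np.minimum(np.arange(n) + 1, n - 1)          # capped up move
dn = np.maximum(np.arange(n) - 1, 0)              # floored down move

def bellman_T(J):
    hold = alpha * (p * J[up] + (1 - p) * J[dn])  # continuation value
    return np.maximum(S - K, hold)                # exercise vs. hold

J = np.zeros(n)
for _ in range(5000):                             # iterate to a fixed point of T
    J_next = bellman_T(J)
    if np.max(np.abs(J_next - J)) < 1e-10:
        break
    J = J_next
```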
13 Features
We will approximate the option prices J*, J_µ using two sets of features, each consisting of 3 features/basis functions:
- Simple polynomial: L_0(S) = 1, L_1(S) = S, L_2(S) = S².
- Laguerre polynomial: L_0(S) = exp(−S), L_1(S) = exp(−S)(1 − S), L_2(S) = exp(−S)(1 − S + S²/2).
The basis matrix Φ is an n × 3 matrix.
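A short sketch of the two n × 3 basis matrices, applied to the price vector S from the previous sketch:

```python
# The two feature sets of slide 13 as basis-matrix constructors.
import numpy as np

def phi_poly(S):
    # Simple polynomial basis: 1, S, S^2
    return np.column_stack([np.ones_like(S), S, S ** 2])

def phi_laguerre(S):
    # Laguerre basis: e^{-S}, e^{-S}(1 - S), e^{-S}(1 - S + S^2/2)
    e = np.exp(-S)
    return np.column_stack([e, e * (1 - S), e * (1 - S + S ** 2 / 2)])
```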
14 Exercise 1.A (Direct Approach): Policy Evaluation
Use the direct least squares approach
min_r (1/2) Σ_{k=0}^{N−1} ( φ(i_k)′ r − Σ_{t=k}^{N−1} α^{t−k} g(i_t, µ(i_t), i_{t+1}) )²
to evaluate the profits of a specified exercising strategy.
- Construct a simulator that generates trajectories of {i_k}.
- Plot the approximate cost vector as a function of the stock price.
15 Exercise 1 Continued: Formula of the Solution
J_µ ≈ Φ r_µ, where
r_µ = ( Σ_{k=0}^{N−1} φ(i_k) φ(i_k)′ )^{−1} ( Σ_{k=0}^{N−1} φ(i_k) Σ_{t=k}^{N−1} α^{t−k} g(i_t, µ(i_t), i_{t+1}) ) = A^{−1} b,
where
A = Σ_{k=0}^{N−1} φ(i_k) φ(i_k)′,
b = Σ_{k=0}^{N−1} α^{t_µ(k)−k} ( S(t_µ(k)) − K ) φ(i_k),
and t_µ(k) is the first time the Exercise control is triggered under policy µ after time k.
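One possible sketch of Exercise 1.A, reusing n, p, alpha, K, S, and phi_laguerre from the sketches above. The threshold policy, trajectory length, and single-trajectory estimate are illustrative assumptions; in practice one would average over many simulated trajectories.

```python
# Simulate a trajectory under mu, then fit r_mu = A^{-1} b as on slide 15.
import numpy as np

rng = np.random.default_rng(0)
mu = (S >= 1.2 * K).astype(int)          # illustrative threshold policy (1 = EXERCISE)

def simulate(mu, i0, N):
    # One trajectory under mu; returns visited states and discounted reward-to-go.
    traj, i = [], i0
    for _ in range(N):
        traj.append(i)
        if mu[i] == 1:                   # EXERCISE triggers: stop the trajectory
            break
        i = min(i + 1, n - 1) if rng.random() < p else max(i - 1, 0)
    traj = np.array(traj)
    G = np.zeros(len(traj))
    if mu[traj[-1]] == 1:                # reward-to-go: alpha^{t_mu(k)-k} (S(t_mu(k)) - K)
        t_ex = len(traj) - 1
        G = alpha ** (t_ex - np.arange(len(traj))) * (S[traj[t_ex]] - K)
    return traj, G

states, G = simulate(mu, i0=n // 2, N=10_000)
Phi = phi_laguerre(S)                    # or phi_poly(S)
A = Phi[states].T @ Phi[states]          # A = sum_k phi(i_k) phi(i_k)'
b = Phi[states].T @ G                    # b = sum_k alpha^{t_mu(k)-k} (S(t_mu(k)) - K) phi(i_k)
r_mu, *_ = np.linalg.lstsq(A, b, rcond=None)   # J_mu ~ Phi r_mu
```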
16 Results: Option Prices [figure: approximate option prices plotted against stock price]
17 Exercise 1.B (Optional)
Suppose that holding the option always incurs a per-stage cost g(i, j). Modify the program of Exercise 1.A to price the American call option.
Exercise 1.C (Optional)
Use an indirect approach to price an American call option. Choose any one of the three algorithms: TD/LSTD/LSPE.
18 Exercise 2: Use PI to Evaluate the Option
Use approximate PI to price an American call option. The program should be a function of S_0, T, p, u, d, K.
Suggestions:
- Start with a randomly generated policy µ_0: {1, ..., n} → {HOLD, EXERCISE}.
- Use approximate policy evaluation (Exercise 1) to evaluate J_µt and Q_µt for a given policy µ_t.
- Plot the trajectories of µ_t.
19 Policy Iteration for Option Pricing
Algorithm (starts with any µ_0):
Policy evaluation: evaluate J_µt ≈ Φ r_µt by approximate policy evaluation, using the program of Exercise 1 to compute r_µt. Then evaluate the Q-values: for example, for i_t ∈ [2, n−1],
Q_µt(i_t) = α E[ J_µt(i_{t+1}) ] ≈ α E[ J̃_µt(i_{t+1}) ] = α ( p J̃_µt(i_t + 1) + (1 − p) J̃_µt(i_t − 1) ),
where J̃_µ(i) = φ(i)′ r_µ.
Policy improvement:
µ_{t+1}(i) = HOLD if S(i) − K ≤ Q̃_µt(i); EXERCISE otherwise.
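One way this loop might look in code, reusing simulate, phi_laguerre, and the grid parameters from the previous sketches. The iteration cap and single-trajectory evaluation per iteration are assumptions, not the slides' specification.

```python
# Approximate PI: least-squares evaluation, then greedy improvement via Q-values.
import numpy as np

Phi = phi_laguerre(S)
idx = np.arange(n)
up, dn = np.minimum(idx + 1, n - 1), np.maximum(idx - 1, 0)
mu = rng.integers(0, 2, size=n)                   # random initial policy mu_0
for t in range(20):
    states, G = simulate(mu, i0=n // 2, N=10_000)
    A = Phi[states].T @ Phi[states]
    b = Phi[states].T @ G
    r, *_ = np.linalg.lstsq(A, b, rcond=None)     # policy evaluation: J_mu_t ~ Phi r
    J_tilde = Phi @ r
    Q = alpha * (p * J_tilde[up] + (1 - p) * J_tilde[dn])   # Q-values via the known p
    mu_new = (S - K > Q).astype(int)              # improvement: EXERCISE where S - K > Q
    if np.array_equal(mu_new, mu):                # policies have converged
        break
    mu = mu_new
```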
20 Results: Option Prices [figure: approximate option prices plotted against stock price]
21 Convergence of Exercising Policies [figure: policies µ_t over the number of policy iterations; blue: exercise, red: hold]
22 Exercise 3: Online PI for Q-Factors
Modify the program of Exercise 2 so that the policy improvement step uses approximate evaluation of Q-factors (instead of exact Q-values calculated using the known p). For each state i, calculate
Q(i) = E[ α J̃(i_{k+1}) | i_k = i ]
by averaging the samples obtained from the trajectory:
Q̃(i) ≈ ( Σ_{k=0}^{N} 1(i_k = i) α J̃(i_{k+1}) ) / ( Σ_{k=0}^{N} 1(i_k = i) ),
where J̃(i_{k+1}) = φ(i_{k+1})′ r.
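A sketch of this sample average, where states is a simulated state trajectory and J_tilde = Phi @ r is the current approximate cost vector from the previous sketches. States never visited get a NaN estimate and would need a fallback in practice.

```python
# Sample-average Q-factor estimate from a single trajectory.
import numpy as np

def q_from_samples(states, J_tilde, alpha, n):
    num = np.zeros(n)                    # accumulates alpha * J_tilde(i_{k+1}) per state
    cnt = np.zeros(n)                    # visit counts per state
    for k in range(len(states) - 1):
        i, j = states[k], states[k + 1]
        num[i] += alpha * J_tilde[j]
        cnt[i] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / cnt                 # Q(i); NaN where a state was never visited
```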
23 The End. Thank you very much! Any question is welcome :-)