Sequential Decision Making


1 Sequential Decision Making Dynamic programming Christos Dimitrakakis Intelligent Autonomous Systems, IvI, University of Amsterdam, The Netherlands March 18, 2008

2 Introduction Some examples Dynamic programming Summary

3 The purpose of this lecture Basic concepts Refresh memory. Present the MDP setting. Define optimality. Categorize planning tasks Algorithms Introduce basic planning algorithms. Promote intuition about their relationships. Discuss their applicability. Ultimate goal A firm foundation in reasoning and planning under uncertainty.

4 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

5 Preliminaries Variables Environment µ ∈ M. States s_t ∈ S. Actions a_t ∈ A. A reward r_t ∈ R. A policy π ∈ P. Notation Probabilities P(x | y, z), also written z(x | y). Expectations E(x | y, z). Sometimes P(a_t = a | ·) will be used for clarity, i.e. π_t(a | s) = P(a_t = a | s_t = s, π_t).

6-8 Markov decision processes The setting We are in some dynamic environment µ, where at each time step t we observe states s_t ∈ S, actions a_t ∈ A and a reward r_t ∈ R. The Markov property for transitions, and three alternative reward models (the last two are special cases of the first):
P(s_{t+1} | s_t, a_t, s_{t-1}, a_{t-1}, ..., µ) = P(s_{t+1} | s_t, a_t, µ) (1)
p(r_{t+1} | s_{t+1}, s_t, a_t, s_{t-1}, a_{t-1}, ..., µ) = p(r_{t+1} | s_{t+1}, s_t, a_t, µ) (2)
p(r_{t+1} | s_{t+1}, s_t, a_t, s_{t-1}, a_{t-1}, ..., µ) = p(r_{t+1} | s_t, a_t, µ) (3)
p(r_{t+1} | s_{t+1}, s_t, a_t, s_{t-1}, a_{t-1}, ..., µ) = p(r_{t+1} | s_{t+1}, µ) (4)
In (2) the reward may depend on the whole transition, in (3) only on the current state-action pair, and in (4) only on the next state.
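As a concrete, entirely illustrative companion to this setting, the following Python sketch represents a finite MDP µ numerically as a transition tensor P[s, a, s'] and an expected-reward array R[s, a]; the function name and sizes are assumptions, not part of the lecture.

import numpy as np

def make_random_mdp(n_states=4, n_actions=2, seed=0):
    # Illustrative finite MDP: P[s, a, s'] are transition probabilities,
    # R[s, a] are expected immediate rewards (reward model (3)).
    rng = np.random.default_rng(seed)
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=2, keepdims=True)      # each P[s, a, :] is a distribution over next states
    R = rng.standard_normal((n_states, n_actions))
    return P, R

P, R = make_random_mdp()
assert np.allclose(P.sum(axis=2), 1.0)     # every row is a proper probability distribution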

9 Markov decision processes Controlling the environment We wish to control the environment according to some (for now undefined) optimality criterion. The agent The agent is fully defined by its policy π. This induces a probability distribution on actions and states.
P(a_t | s_t, a_{t-1}, s_{t-1}, a_{t-2}, ..., π, µ) = P(a_t | s_t, π) (5)

10-11 Markov decision processes The induced Markov chain Together with the policy π and the model µ, we induce a Markov chain on states.
P(s_{t+1} | s_t, π, µ) = Σ_{a ∈ A} P(s_{t+1} | a_t = a, s_t, π, µ) P(a_t = a | s_t, π) (6a)
P(s_{t+k} | s_t, π, µ) = Σ_{s'} P(s_{t+k} | s_{t+k-1} = s', π, µ) P(s_{t+k-1} = s' | s_t, π, µ) (6b)
Note: lim_{k→∞} P(s_{t+k} = s | s_t, π, µ) is the stationary distribution.
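A minimal sketch of equation (6), assuming the same P[s, a, s'] representation as above and a made-up uniform policy: averaging the kernel over the policy gives the induced chain, and iterating it gives the k-step distribution, which approaches the stationary distribution when it exists.

import numpy as np

def induced_chain(P, pi):
    # P[s, a, s'] : MDP transitions, pi[s, a] : policy probabilities.
    # Returns P_pi[s, s'] = sum_a pi[s, a] P[s, a, s'], as in (6a).
    return np.einsum('sa,sax->sx', pi, P)

def k_step_distribution(P_pi, start, k):
    # Distribution P(s_{t+k} = . | s_t = start) under the induced chain, as in (6b).
    d = np.zeros(P_pi.shape[0]); d[start] = 1.0
    for _ in range(k):
        d = d @ P_pi
    return d

# Example with a uniform policy on a random 4-state, 2-action MDP (as above).
rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
pi = np.full((4, 2), 0.5)
P_pi = induced_chain(P, pi)
print(k_step_distribution(P_pi, start=0, k=50))   # approaches the stationary distribution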

12 Planning The goal in reinforcement learning To maximise a function of future rewards. Finite horizon We are only interested in rewards up to a fixed point in time. Infinite horizon We are interested in all rewards.

13 Value functions The return / utility The agent's goal is to maximize the return (too many R's, so we switch to U), for example the utility given a policy π and an MDP µ:
U^π_{t,µ} ≡ E(U | π, µ) = E[ Σ_{k=1}^{T} γ^k r_{t+k} | π, µ ] (7)
= Σ_{k=1}^{T} γ^k Σ_{i ∈ S} E[r_{t+k} | s_{t+k} = i, µ] P(s_{t+k} = i | π, µ) (8)
This can in principle be calculated from (6).
The value functions
V^π_t(s) ≡ Σ_{a ∈ A} U^π_{t,µ}(s, a) π(a | s) (9)
Q^π_t(s, a) ≡ U^π_{t,µ}(s, a) (10)
Special case: as T → ∞, V^π_t(s) = V^π(s).
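A tiny sketch of the return in (7) for one fixed reward sequence (the numbers are arbitrary); it simply accumulates γ^k r_{t+k}.

def discounted_return(rewards, gamma):
    # U_t = sum_{k=1..T} gamma^k r_{t+k}, where rewards = [r_{t+1}, ..., r_{t+T}].
    return sum(gamma ** (k + 1) * r for k, r in enumerate(rewards))

print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 0.9*1 + 0.81*0 + 0.729*2 = 2.358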

14 Bellman equation An optimal policy An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. The recursion
V^π_t(s) = g(t) E[r_{t+1} | s_t = s, π] + Σ_{k=2}^{T} g(t+k) E[r_{t+k} | s_t = s, π, µ] (11)
= g(t) E[r_{t+1} | s_t = s, π] + Σ_{i ∈ S} V^π_{t+1}(i) µ(s_{t+1} = i | s_t = s, π). (12)
The current stage's value is just the next reward plus the next stage's value. See also the Hamilton-Jacobi-Bellman equation in optimal control.

15 Greedy policies The 1-step greedy policy The 1-step-greedy policy with respect to a given value function can be expressed as
π(a | s) = 1 if a = argmax_{a'} Q(s, a'), and 0 otherwise. (13)
The optimal policy The 1-step-greedy policy with respect to the optimal value function is optimal. Naive solution Evaluate all policies, select π* such that V^{π*}(s) ≥ V^π(s) for all s ∈ S and all π. Clever solutions Directly estimate V*. Iteratively improve π.
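A sketch of the 1-step greedy policy (13) read off a Q table; the Q values below are made up for illustration.

import numpy as np

def greedy_policy(Q):
    # Q[s, a] -> deterministic policy pi[s] = argmax_a Q[s, a], as in (13).
    return np.argmax(Q, axis=1)

Q = np.array([[0.1, 0.5],     # state 0: action 1 is greedy
              [0.7, 0.2]])    # state 1: action 0 is greedy
print(greedy_policy(Q))       # -> [1 0]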

16 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

17 Problem types Planning with... Finite vs Infinite horizon Discounted vs Undiscounted rewards Certain vs Uncertain knowledge Expected vs worst-case utility functions Environments Deterministic Stochastic Episodic Continuing Observable Hidden state Statistical Adversarial

18 Deterministic shortest-path problems Properties γ = 1, T → ∞. r_t = -1 unless s_t = X, in which case r_t = 0. µ(s_{t+1} = X | s_t = X) = 1. A = {North, South, East, West}. Transitions are deterministic and walls block movement. What is the shortest path to the destination X from any point?

19 Stochastic shortest path problem, with a pit Properties γ = 1, T → ∞. r_t = -1, but r_t = 0 at X and -100 at O, where the episode ends. µ(s_{t+1} = X | s_t = X) = 1. A = {North, South, East, West}. Each move goes in a random direction with probability θ. Walls block. For what value of θ is it better to take the dangerous shortcut past the pit O? (If we want to take risk into account explicitly, we must modify the agent's utility function.)

20 Continuing stochastic MDPs Inventory management There are K storage locations. Location i can store n_i items. At each time-step there is a probability φ_i that a client tries to buy an item from location i, with Σ_i φ_i ≤ 1. If there is an item available, you gain reward 1. Action 1: order u units of stock, paying c(u). Action 2: move u units of stock from location i to location j, at a cost ψ_ij(u). An easy special case K = 1. There is one type of item only. Orders are placed and received every n timesteps.

21-24 Inventory management An easy special case K = 1. Deliveries happen once every m timesteps. Each time-step a client arrives with probability φ. Properties The state is the number of items we have: S = {0, 1, ..., n}. The action set is A = {0, 1, ..., n}, since we can order from nothing up to n items. The transition probabilities are P(s' | s, a) = (m choose d) φ^d (1 - φ)^{m-d}, where d = s + a - s' is the number of items sold, for s + a ≤ n.
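A sketch of this transition model, assuming the binomial demand formula above; the boundary case s' = 0 (where unmet demand would absorb the remaining probability mass) is deliberately left out, as on the slide.

from math import comb

def transition_prob(s, a, s_next, m, phi):
    # P(s' | s, a) for the single-location inventory problem:
    # over the m steps between deliveries the number of sales d = s + a - s'
    # is Binomial(m, phi).
    d = s + a - s_next
    if d < 0 or d > m:
        return 0.0
    return comb(m, d) * phi ** d * (1 - phi) ** (m - d)

# Example: 2 items in stock, order 1, 5 steps until the next delivery, phi = 0.3.
probs = [transition_prob(2, 1, s_next, m=5, phi=0.3) for s_next in range(0, 4)]
print(probs)   # the s' = 0 entry would also absorb the "demand exceeds stock" mass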

25 Episodic, finite, infinite? Shortest path problems can be seen as: Episodic tasks with infinite horizon, reward -1 everywhere, but 0 in the absorbing state. Continuing tasks with reward 0 everywhere, but > 0 in the goal state, γ ∈ (0, 1), and the state reset after the goal. The two views are equivalent if the optimal policy is the same.

26 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

27 Introduction Why dynamic programming? Programming means finding a solution, cf. linear programming. Dynamic because we find solutions to dynamical problems. Direct relation to control theory.

28 The shortest-path problem revisited Properties γ = 1, T → ∞. r_t = -1 unless s_t = X, in which case r_t = 0. The length of the shortest path from s equals the negative of the value of the optimal policy, also called the cost-to-go. Remember Dijkstra's algorithm?

29 Backwards induction I [Diagram: a layered graph of states s^i_{T-2}, s^i_{T-1} leading to the terminal state s_T.] If we know the value of the last state, we can calculate the values of its predecessors. The value of s^i_{T-1} is the reward obtained by moving from s^i_{T-1} to s_T, plus the value of s_T.

30 Backwards induction II [Diagram: a small graph with states A, B, C, D and edge rewards w, x, y, z; for example, the value of D is max{w + y, z + x + w}.] All w, x, y, z < 0, and the reward e < 0 for staying in the same state, apart from A.

31 Backwards induction III Backwards induction in deterministic environments
Input µ, S_T. Initialise V_T(s) for all s ∈ S_T.
for n = T-1, T-2, ..., t do
  for s ∈ S_n do
    a*_n(s) = argmax_a E(r | s_{s,a}, s, µ) + V_{n+1}(s_{s,a})
    V_n(s) = E(r | s_{s,a*_n(s)}, s, µ) + V_{n+1}(s_{s,a*_n(s)})
  end for
end for
Notes s_{s,a} is the state that occurs if we take a in s. Because we always know the optimal choice at the last step, we can find the optimal policy directly!

32-33 Backwards induction III Backwards induction in deterministic environments, rewritten as a sum over successor states
Input µ, S_T. Initialise V_T(s) for all s ∈ S_T.
for n = T-1, T-2, ..., t do
  for s ∈ S_n do
    a*_n(s) = argmax_a Σ_{s' ∈ S_{n+1}} µ(s' | s, a) [E(r | s', s, µ) + V_{n+1}(s')]
    V_n(s) = Σ_{s' ∈ S_{n+1}} µ(s' | s, a*_n(s)) [E(r | s', s, µ) + V_{n+1}(s')]
  end for
end for
Notes In a deterministic environment, µ(s' | s, a) is an indicator function, but nothing apparently stops it from being a distribution. So, what happens in stochastic environments?

34 Backwards induction IV: Stochastic problems [Diagram: from state B, actions a_0 and a_1 lead stochastically to A or B.] Almost as before, but the next state depends stochastically on the action, i.e. on µ(s_{t+1} = A | s_t = B, a_t = a). The backup operators
V^π_n(s) = Σ_{s'} µ(s' | s, π) [E(r | s', s) + V^π_{n+1}(s')] (14)
V_n(s) = max_a Σ_{s'} µ(s' | s, a) [E(r | s', s) + V_{n+1}(s')] (15)

35 Backwards induction V Policy evaluation with backwards induction
Input π, µ, S_T. Initialise V_T(s) for all s ∈ S_T.
for n = T-1, T-2, ..., t do
  for s ∈ S_n do
    V^π_n(s) = Σ_{s' ∈ S_{n+1}} µ(s' | s, π) [E(r | s', s, µ) + V^π_{n+1}(s')]
  end for
end for
Notes µ(s' | s, π) = Σ_a µ(s' | s, a) π(a | s). Finite horizon problems only, or approximations to finite horizon (i.e. lookahead in game trees). Hey, it works for stochastic problems too! (By marginalizing over states.) Can be used with estimates of the value function.

36 Backwards induction V Finding the optimal policy with backwards induction
Input µ, S_T. Initialise V_T(s) for all s ∈ S_T.
for n = T-1, T-2, ..., t do
  for s ∈ S_n do
    a*_n(s) = argmax_a Σ_{s' ∈ S_{n+1}} µ(s' | s, a) [E(r | s', s, µ) + V_{n+1}(s')]
    V_n(s) = Σ_{s' ∈ S_{n+1}} µ(s' | s, a*_n(s)) [E(r | s', s, µ) + V_{n+1}(s')]
  end for
end for
Notes Finite horizon problems only, or approximations to finite horizon (i.e. lookahead in game trees). Hey, it works for stochastic problems too! (By marginalizing over states.) Because we always know the optimal choice at the last step, we can find the optimal policy directly! Can be used with estimates of the value function.
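A sketch of finite-horizon backwards induction for the stochastic case (slides 35-36), assuming the P[s, a, s'] / R[s, a] arrays from the earlier sketches, a stationary state set (S_n = S at every stage) and reward model (3); rewards that depend on the next state would only change how R is computed.

import numpy as np

def backwards_induction(P, R, T):
    # Finite-horizon optimal values and policy by backwards induction.
    # P[s, a, s'] : transition probabilities, R[s, a] : expected rewards, T : horizon.
    n_states, n_actions, _ = P.shape
    V = np.zeros((T + 1, n_states))          # V[T] is the terminal value (here 0)
    policy = np.zeros((T, n_states), dtype=int)
    for n in range(T - 1, -1, -1):           # n = T-1, ..., 0
        Q = R + P @ V[n + 1]                 # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[n+1, s']
        policy[n] = Q.argmax(axis=1)         # optimal choice at stage n
        V[n] = Q.max(axis=1)
    return V, policy

# Example on a random MDP with horizon 10.
rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.standard_normal((4, 2))
V, policy = backwards_induction(P, R, T=10)
print(V[0], policy[0])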

37 Infinite horizon What happens when the horizon is infinite in stochastic shortest path problems? Episodic tasks still terminate with probability one for proper policies. Assumption: there exists at least one proper policy. Assumption: Every improper policy has negatively infinite value for at least one state.

38 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

39 Policy improvement Why evaluate a policy? Because we can always generate a better policy given the value function of any policy!
Theorem (Policy improvement) Let π ∈ P be some policy. If π'(a | s) = 1 for a = argmax_{a'} Q^π(s, a') and 0 otherwise, then V^{π'}(s) ≥ V^π(s) for all s ∈ S.

40 Policy improvement theorem Theorem (Policy improvement) Let π ∈ P be some policy. If π'(a | s) = 1 for a = argmax_{a'} Q^π(s, a') and 0 otherwise, then V^{π'}(s) ≥ V^π(s) for all s ∈ S.
Proof. Let π_k be the policy which executes π' for k steps and then reverts to π. Then π = π_0, π' = lim_{k→∞} π_k, and we have
V^π(s_t) = Σ_{a_t} π(a_t | s_t) Q^π(s_t, a_t) ≤ max_{a_t} Q^π(s_t, a_t) = max_{a_t} [E(r_{t+1} | s_t, a_t) + Σ_{s_{t+1}} µ(s_{t+1} | s_t, a_t) V^π(s_{t+1})] = V^{π_1}(s_t).
Similarly, we show that V^{π_{k+1}}(s) ≥ V^{π_k}(s) for all s. Then V^π ≤ V^{π_1}(s) ≤ ... ≤ V^{π_k}(s) ≤ V^{π_{k+1}}(s) ≤ ..., and so V^{π'}(s) = lim_{k→∞} V^{π_k}(s) ≥ V^π(s).

41 Iterative policy evaluation Policy Evaluation
Input π, µ and V̂_0. n = 0.
repeat
  n = n + 1
  for s ∈ S do
    V̂_n(s) = Σ_{a ∈ A} π(a | s) Σ_{s' ∈ S} µ(s' | s, a) [E(r | s', µ) + γ V̂_{n-1}(s')]
  end for
until ||V̂_n - V̂_{n-1}|| < θ
Notes Arbitrary initialization. V^π, V̂_n ∈ R^{|S|} and lim_{n→∞} V̂_n = V^π, if the limit exists. Can be done in-place as well.
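A sketch of this loop, again assuming the P[s, a, s'] / R[s, a] arrays and a policy matrix pi[s, a]; θ is the stopping threshold.

import numpy as np

def policy_evaluation(P, R, pi, gamma, theta=1e-8):
    # Iterate V <- sum_a pi(a|s) sum_s' P(s'|s,a) [R(s,a) + gamma V(s')]
    # until the largest change is below theta.
    V = np.zeros(P.shape[0])                     # arbitrary initialization
    while True:
        Q = R + gamma * (P @ V)                  # Q[s, a]
        V_new = (pi * Q).sum(axis=1)             # average over the policy
        if np.max(np.abs(V_new - V)) < theta:
            return V_new
        V = V_new

# Example: evaluate the uniform policy on a random MDP.
rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.standard_normal((4, 2))
print(policy_evaluation(P, R, pi=np.full((4, 2), 0.5), gamma=0.9))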

42-46 Policy evaluation example I [Figures: the value function of the random policy after an increasing number of iterations, and the greedy policy with respect to that value function.]

47-50 Policy evaluation example II [Figures: a second example of random policy evaluation.]

51 Value iteration Value Iteration
Input µ. V̂_0(s) = 0 for all s ∈ S. n = 0.
repeat
  n = n + 1
  for s ∈ S do
    V̂_n(s) = max_{a ∈ A} Σ_{s' ∈ S} µ(s' | s, a) [E(r | s', µ) + γ V̂_{n-1}(s')]
  end for
until ||V̂_n - V̂_{n-1}|| < θ
Notes No reason to assume a fixed policy; convergence still holds: lim_{n→∞} V̂_n = V*. Equivalent to backwards induction as the horizon → ∞, because lim_{T→∞} V^π_t(s) = V^π(s) for all t.
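A value-iteration sketch under the same assumed representation: the same backup as in the policy-evaluation sketch, but with a max over actions instead of an average under π.

import numpy as np

def value_iteration(P, R, gamma, theta=1e-8):
    # Iterate V <- max_a sum_s' P(s'|s,a) [R(s,a) + gamma V(s')].
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < theta:
            return V_new, Q.argmax(axis=1)       # near-optimal values and a greedy policy
        V = V_new

rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.standard_normal((4, 2))
V_star, pi_star = value_iteration(P, R, gamma=0.9)
print(V_star, pi_star)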

52-55 Value iteration example [Figures: the value function after 0, 1, 10 and 100 iterations.]

56 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

57 Policy iteration I Policy Iteration
Input π, µ.
repeat
  Evaluate V^π.
  π ← π', where π'(s) = argmax_a Q^π(s, a)
until max_a Q^π(s, a) = V^π(s) for all s
Theorem (Policy iteration) The policy iteration algorithm generates an improving sequence of proper policies, i.e. V^{π_{k+1}}(s) ≥ V^{π_k}(s) for all k > 0, s ∈ S, and terminates with an optimal policy, i.e. lim_{k→∞} V^{π_k} = V*.
Remark (Policy iteration termination) If π_k is not optimal, then there exists s ∈ S with V^{π_{k+1}}(s) > V^{π_k}(s). Conversely, if no such s exists, π_k is optimal and we terminate.
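A policy-iteration sketch that alternates exact evaluation (solving the linear system (I - γ P_π) V = R_π mentioned on the next slide) with greedy improvement; array shapes as in the earlier sketches.

import numpy as np

def policy_iteration(P, R, gamma):
    # Alternate exact policy evaluation and greedy improvement until the policy is stable.
    n_states, n_actions, _ = P.shape
    policy = np.zeros(n_states, dtype=int)       # start with an arbitrary deterministic policy
    while True:
        # Exact evaluation: solve (I - gamma P_pi) V = R_pi.
        P_pi = P[np.arange(n_states), policy]    # [s, s'] under the current policy
        R_pi = R[np.arange(n_states), policy]    # [s]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Greedy improvement.
        Q = R + gamma * (P @ V)
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

rng = np.random.default_rng(0)
P = rng.random((4, 2, 4)); P /= P.sum(axis=2, keepdims=True)
R = rng.standard_normal((4, 2))
print(policy_iteration(P, R, gamma=0.9))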

58 Policy iteration II The evaluation step It can be done exactly by solving the linear equations (proper policy iteration). We can use a limited number n of policy evaluation iterations (modified policy iteration), which can be initialised from the last evaluation. If we use just n = 1, the method is identical to value iteration. If we use n → ∞, we have proper policy iteration. Other methods Asynchronous policy iteration. Multistage lookahead policy iteration. See [1], section 2.2 for more details. See [3], Chapters 4, 5 and 6 for detailed theory.

59 Preliminaries Markov decision processes Value functions and optimality Introduction Some examples Shortest-path problems Continuing problems Episodic, finite, infinite? Dynamic programming Introduction Backwards induction Iterative Methods Policy evaluation Value iteration Policy iteration Summary Lessons learnt Learning from reinforcement... Bibliography

60 Lessons learnt Planning with a known model Find the optimal policy given model and objective. Bellman recursion is the basis of dynamic programming. Easy to solve for finite-horizon problems or episodic tasks. Stochasticity does not make the problem significantly harder. Infinite-horizon continuing problems harder, but tractable. Things to think about Would iterative methods be better than backwards induction? How does it depend on the problem? Does the discount factor have any effect? How can backwards induction be applied to iterative problems and vice-versa?

61 Learning from reinforcement... Bandit problems γ ∈ [0, 1], T > 0. |S| = 1. Rewards are random with expectation E[r_t | a_t, µ]. If µ is known, the problem is trivial: a* = argmax_a E[r_t | a_t = a, µ], for all t and γ. If µ is unknown, it can be intractable. This is the simplest case of learning from reinforcement.

62 Further reading
[1] Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[2] Morris H. DeGroot. Optimal Statistical Decisions. John Wiley & Sons, 1970. Republished in 2004.
[3] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New Jersey, US, 1994, 2005.
[4] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
