Non-Deterministic Search

Non-Deterministic Search: MDPs

Non-Deterministic Search How do you plan (search) when your actions might fail? In the general case, how do you plan when your actions have multiple possible outcomes?

Example: Grid World The agent lives in a grid; walls block the agent's path. The agent's actions do not always go as planned: 80% of the time, the action North takes the agent North (if there is no wall there); 10% of the time, North takes the agent West; 10% East. If there is a wall in the direction the agent would have been taken, the agent stays put. There is a small living reward each step, and big rewards come at the end. Goal: maximize the sum of rewards.
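To make the 80/10/10 motion model concrete, here is a minimal Python sketch of the noisy transition just described. The names (ACTIONS, noisy_outcomes, step) and the coordinate convention are illustrative assumptions, not part of the slides.

import random

# Grid directions as (dx, dy) offsets; "North" is +y here (an arbitrary convention).
ACTIONS = {"North": (0, 1), "South": (0, -1), "East": (1, 0), "West": (-1, 0)}
LEFT_OF = {"North": "West", "South": "East", "East": "North", "West": "South"}
RIGHT_OF = {v: k for k, v in LEFT_OF.items()}   # inverse map: right-hand neighbour

def noisy_outcomes(action):
    # Intended direction 80% of the time, each perpendicular direction 10%.
    return [(action, 0.8), (LEFT_OF[action], 0.1), (RIGHT_OF[action], 0.1)]

def step(pos, action, walls):
    # Sample a successor position; if the sampled direction hits a wall, stay put.
    directions, probs = zip(*noisy_outcomes(action))
    chosen = random.choices(directions, weights=probs)[0]
    dx, dy = ACTIONS[chosen]
    nxt = (pos[0] + dx, pos[1] + dy)
    return pos if nxt in walls else nxt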

Action Results In the deterministic Grid World each action has a single outcome; in the stochastic Grid World each action splits into outcomes with probabilities 0.8, 0.1, and 0.1. The stochastic search tree looks like an expectimax tree.

Markov Decision Processes An MDP is defined by: a set of states s ∈ S; a set of actions a ∈ A; a transition function T(s, a, s'), the probability that a from s leads to s', i.e., P(s' | s, a), also called the model; a reward function R(s, a, s'), sometimes just R(s) or R(s'); a start state (or distribution); and maybe a terminal state. MDPs are a family of non-deterministic search problems. One way to solve them is with expectimax search, but we'll have a new tool soon.
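As a rough illustration of how these ingredients can be held in code, here is one possible container; the class name MDP, the dictionary encoding of T and R, and the helper methods are assumptions made for this sketch, not anything prescribed by the slides.

from dataclasses import dataclass, field
from typing import Dict, Hashable, List, Set, Tuple

State = Hashable
Action = Hashable

@dataclass
class MDP:
    # A minimal, illustrative MDP container.
    states: List[State]
    actions: Dict[State, List[Action]]                        # legal actions per state
    T: Dict[Tuple[State, Action], List[Tuple[State, float]]]  # (s, a) -> [(s', prob), ...]
    R: Dict[Tuple[State, Action, State], float]               # (s, a, s') -> reward
    start: State
    gamma: float = 0.9
    terminals: Set[State] = field(default_factory=set)

    def transitions(self, s: State, a: Action) -> List[Tuple[State, float]]:
        return self.T.get((s, a), [])

    def reward(self, s: State, a: Action, s2: State) -> float:
        return self.R.get((s, a, s2), 0.0)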

What is Markov about MDPs? Andrey Markov (1856-1922). "Markov" generally means that given the present state, the future and the past are independent. For Markov decision processes, "Markov" means:
P(S_{t+1} = s' | S_t = s_t, A_t = a_t, S_{t-1} = s_{t-1}, A_{t-1} = a_{t-1}, ..., S_0 = s_0) = P(S_{t+1} = s' | S_t = s_t, A_t = a_t)

Solving MDPs In deterministic single-agent search problems, we want an optimal plan, or sequence of actions, from the start to a goal. In an MDP, we want an optimal policy π*: S → A. A policy π specifies an action for each state; an optimal policy maximizes expected utility if followed. It defines a reflex agent (if the policy is precomputed as a look-up table). Example: the optimal policy when R(s, a, s') = -0.03 for all non-terminal states.

Example: Optimal Policies The optimal behaviour changes as a function of the reward.

MDP Example: High-Low game Rules: Three card types: 2, 3, 4. Infinite deck, twice as many 2's. Start with a 3 showing. After each card, you guess whether the next card will be high or low, and a new card is flipped. If you're right, you win the points shown on the new card. If it ties, you redo. If you're wrong, the game ends. How is this different from the chance games in previous lectures? #1: you get rewards as you go. #2: you might play forever! You can patch expectimax to deal with #1, but not #2.

High-Low as an MDP States: 2, 3, 4, done. Actions: High, Low. Model T(s, a, s'): P(s' = 4 | 4, Low) = ¼; P(s' = 3 | 4, Low) = ¼; P(s' = 2 | 4, Low) = ½; P(s' = done | 4, Low) = 0; P(s' = 4 | 4, High) = ¼; P(s' = 3 | 4, High) = 0; P(s' = 2 | 4, High) = 0; P(s' = done | 4, High) = ¾ (the entries for states 2 and 3 are analogous). Rewards R(s, a, s'): the number shown on s' if s ≠ s'; 0 otherwise. Start state: 3.
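For concreteness, a small sketch that reproduces the transition numbers above from the card distribution (half 2's, a quarter each of 3's and 4's). The function name high_low_transitions and the use of fractions are my own choices, not from the slides.

from fractions import Fraction

CARD_PROB = {2: Fraction(1, 2), 3: Fraction(1, 4), 4: Fraction(1, 4)}  # twice as many 2's

def high_low_transitions(s, guess):
    # Return {s': P(s' | s, guess)} for s in {2, 3, 4} and guess in {'High', 'Low'}.
    out = {}
    for card, p in CARD_PROB.items():
        if card == s:                              # tie: redo, stay in the same state
            s_next = s
        elif (card > s) == (guess == 'High'):      # guessed correctly: move to the new card
            s_next = card
        else:                                      # guessed wrong: game over
            s_next = 'done'
        out[s_next] = out.get(s_next, Fraction(0)) + p
    return out

# Sanity check against the numbers on the slide:
assert high_low_transitions(4, 'Low') == {2: Fraction(1, 2), 3: Fraction(1, 4), 4: Fraction(1, 4)}
assert high_low_transitions(4, 'High') == {'done': Fraction(3, 4), 4: Fraction(1, 4)}

The reward can then be read off the transition, matching the slide: R(s, a, s') is the number on s' when s' differs from s and is not done, and 0 otherwise.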

High-Low: Outcome Tree

MDP Search Trees Each MDP state gives an expectimax-like search tree.

Utilities of Sequences What utility does a sequence of rewards have? Formally, we generally assume stationary preferences: if [r, r_1, r_2, ...] is preferred to [r, r'_1, r'_2, ...], then [r_1, r_2, ...] is preferred to [r'_1, r'_2, ...]. Theorem: there are only two ways to define stationary utilities. Additive utility: U([r_0, r_1, r_2, ...]) = r_0 + r_1 + r_2 + ... Discounted utility: U([r_0, r_1, r_2, ...]) = r_0 + γ r_1 + γ² r_2 + ...

Infinite Utilities?! Problem: infinite state sequences have infinite rewards. Solutions: Finite horizon: terminate episodes after a fixed T steps (e.g., a lifetime); this gives non-stationary policies (π depends on the time left). Absorbing state: guarantee that for every policy, a terminal state will eventually be reached (like "done" for High-Low). Discounting: use 0 < γ < 1, so the discounted sum Σ_t γ^t r_t stays bounded. A smaller γ means a smaller horizon, i.e. a shorter-term focus.

Discounting Typically we discount rewards by γ < 1 each time step: sooner rewards have higher utility than later rewards. Discounting also helps the algorithms converge. Example: with a discount of 0.5, U([1, 2, 3]) = 1*1 + 0.5*2 + 0.25*3 = 2.75 and U([3, 2, 1]) = 1*3 + 0.5*2 + 0.25*1 = 4.25, so U([1, 2, 3]) < U([3, 2, 1]).
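A one-liner makes the arithmetic above easy to check; the function name is illustrative.

def discounted_utility(rewards, gamma):
    # U([r0, r1, r2, ...]) = r0 + gamma*r1 + gamma^2*r2 + ...
    return sum(r * gamma ** t for t, r in enumerate(rewards))

print(discounted_utility([1, 2, 3], 0.5))   # 1 + 0.5*2 + 0.25*3 = 2.75
print(discounted_utility([3, 2, 1], 0.5))   # 3 + 0.5*2 + 0.25*1 = 4.25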

Recap: Defining MDPs Markov decision processes (a generalization of state-space search): states S; actions A; transitions P(s' | s, a) (or T(s, a, s')); rewards R(s, a, s') (and discount γ); start state s_0. MDP quantities so far: Policy = choice of action for each state. Utility (or return) = expectimax value of a state.

Solving MDPs Solving an MDP comes down to a few key quantities. MDP quantities: Policy = map from states to actions. Episode = one run of an MDP. Utility (or return) = sum of discounted rewards. Values = expected future utility from a state. Q-values = expected future utility from a q-state. The fundamental operation is to compute the values (optimal expectimax utilities) of states. Why? Optimal values define optimal policies!

Optimal Utilities The value of a state s: V*(s) = expected utility starting in s and acting optimally. The value of a q-state (s, a): Q*(s, a) = expected utility starting out having taken action a from state s and (thereafter) acting optimally. The optimal policy: π*(s) = the optimal action from state s.

The Bellman Equations The definition of optimal utility leads to a simple one-step lookahead relationship amongst optimal utility values: optimal rewards = maximize over the first action and then follow the optimal policy. Formally:
V*(s) = max_a Q*(s, a)
Q*(s, a) = Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]
so V*(s) = max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]

Why Not Search Trees? Why not solve with expectimax? Problems: this tree is usually infinite (why?); the same states appear over and over (why?); we would search once per state (why?). Idea: value iteration. Compute optimal values for all states all at once using successive approximations. It will be a bottom-up dynamic program. Do all planning offline; no replanning needed!

Value Estimates Calculate estimates V_k*(s). This is not the optimal value of s! It is the optimal value considering only the next k time steps (k rewards), i.e., what you'd get with depth-k expectimax. As k → ∞, it approaches the optimal value. Almost a solution: recursion (i.e. expectimax). Correct solution: dynamic programming.

Value Iteration Algorithm Idea: Start with V_0*(s) = 0 for all s ∈ S, which we know is right (why?). Given V_i*, calculate the values for all states for depth i+1:
V_{i+1}(s) ← max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V_i(s') ]
Throw out the old vector V_i* and repeat until convergence. This is called a value update or Bellman update. Theorem: value iteration will converge to the unique optimal values. Basic idea: the approximations get refined towards the optimal values. The policy may converge long before the values do.
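A compact sketch of the algorithm just described, assuming the model is given as callables T(s, a) returning (s', probability) pairs and R(s, a, s') returning rewards; the names and the stopping tolerance are assumptions for this sketch.

def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    # actions(s) -> list of legal actions (empty for terminal states).
    V = {s: 0.0 for s in states}                 # V_0(s) = 0 for all s
    while True:
        V_new = {}
        for s in states:
            acts = actions(s)
            if not acts:                         # terminal state: value stays 0
                V_new[s] = 0.0
                continue
            # Bellman update: max over actions of the expected one-step lookahead value.
            V_new[s] = max(
                sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
                for a in acts
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new                                # throw out the old vector V_i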

Example: Bellman Updates (the max is attained by a = right; other actions are not shown)

Example: Value Iteration Information propagates outward from the terminal states, and eventually all states have correct value estimates.

Practice: Computing Actions Which action should we choose from state s? Given the optimal values V* of states:
π*(s) = argmax_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]
Given the optimal q-values Q*:
π*(s) = argmax_a Q*(s, a)
Lesson: actions are easier to select from Q's! From state values alone we cannot read off the optimal action for each state (we still need a one-step lookahead through the model), but with Q-values we simply take the action with the highest Q-value.
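The same contrast in code form, under the assumed model interface used above (these helper names are hypothetical): extracting a policy from V* needs the model for a one-step lookahead, while extracting it from Q* is a plain argmax.

def greedy_policy_from_values(states, actions, T, R, V, gamma=0.9):
    # pi*(s) = argmax_a sum_s' T(s,a,s') [ R(s,a,s') + gamma V(s') ]  (needs the model)
    def q(s, a):
        return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
    return {s: max(actions(s), key=lambda a: q(s, a)) for s in states if actions(s)}

def greedy_policy_from_q(states, actions, Q):
    # pi*(s) = argmax_a Q(s, a): no model or lookahead needed.
    return {s: max(actions(s), key=lambda a: Q[(s, a)]) for s in states if actions(s)}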

Utilities for a Fixed Policy Another basic operation: compute the utility of a state s under a fixed (generally non-optimal) policy. Define the utility of a state s under a fixed policy π: V^π(s) = expected total discounted rewards (return) starting in s and following π. Recursive relation (one-step lookahead / Bellman equation):
V^π(s) = Σ_s' T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]
There is no max over actions (for s the action is already chosen by the fixed policy), so the value-iteration-style updates for a fixed policy are simpler and faster.

Policy Evaluation How do we calculate the V's for a fixed policy? Idea one: turn the recursive equations into updates (similar to the optimal value iteration algorithm). Idea two: it's just a linear system; solve it with Matlab (or whatever).
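Both ideas, sketched under the same assumed model interface as before (with numpy standing in for Matlab); terminal states are assumed to self-loop with zero reward so that policy[s] is defined everywhere.

import numpy as np

def evaluate_policy_iteratively(states, policy, T, R, gamma=0.9, tol=1e-8):
    # Idea one: repeat the fixed-policy Bellman update until the values stop changing.
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: sum(p * (R(s, policy[s], s2) + gamma * V[s2])
                        for s2, p in T(s, policy[s])) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

def evaluate_policy_exactly(states, policy, T, R, gamma=0.9):
    # Idea two: solve the linear system (I - gamma * P_pi) V = R_pi directly.
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    P = np.zeros((n, n))
    r = np.zeros(n)
    for s in states:
        for s2, p in T(s, policy[s]):
            P[idx[s], idx[s2]] += p
            r[idx[s]] += p * R(s, policy[s], s2)
    V = np.linalg.solve(np.eye(n) - gamma * P, r)
    return {s: V[idx[s]] for s in states}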

Policy Iteration An alternative approach to optimal values: start from some (not necessarily optimal) policy. Step 1, policy evaluation: calculate the utilities of the fixed policy (not the optimal utilities!) until convergence. Step 2, policy improvement: update the policy using a one-step look-ahead with the resulting converged (but not optimal!) utilities as future values. Repeat the steps until the policy converges. This is policy iteration. It's still optimal! It can converge faster under some conditions.

Policy Iteration: Details Policy evaluation: with the current policy fixed, find the values with simplified Bellman updates:
V^π_{k+1}(s) ← Σ_s' T(s, π(s), s') [ R(s, π(s), s') + γ V^π_k(s') ]
Iterate until the values converge. Policy improvement: with the utilities fixed, find the best action according to a one-step look-ahead:
π_new(s) = argmax_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V^π(s') ]
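Putting the two steps together, a minimal policy iteration sketch under the same assumed model interface; the initial policy and the evaluation tolerance are arbitrary choices for illustration.

def policy_iteration(states, actions, T, R, gamma=0.9, eval_tol=1e-8):
    # Start from an arbitrary policy (first legal action in each non-terminal state).
    policy = {s: actions(s)[0] for s in states if actions(s)}

    def q(s, a, V):
        return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))

    while True:
        # Step 1: policy evaluation (simplified Bellman updates, no max over actions).
        V = {s: 0.0 for s in states}
        while True:
            V_new = {s: (q(s, policy[s], V) if s in policy else 0.0) for s in states}
            if max(abs(V_new[s] - V[s]) for s in states) < eval_tol:
                V = V_new
                break
            V = V_new
        # Step 2: policy improvement (one-step look-ahead with the converged values).
        new_policy = {s: max(actions(s), key=lambda a: q(s, a, V)) for s in policy}
        if new_policy == policy:          # policy converged: stop
            return policy, V
        policy = new_policy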

Comparison Both VI and PI compute the same thing (the optimal values for all states). In value iteration: every pass (or "backup") updates both the utilities (explicitly, based on the current utilities) and the policy (implicitly, based on the current utilities); tracking the policy isn't necessary, since we take the max. In policy iteration: several passes update the utilities with the policy fixed; after the policy is evaluated, a new policy is chosen. Both are dynamic programs for solving MDPs.

Asynchronous Value Iteration In standard value iteration, we update the value of every state in each iteration. Actually, any sequence of Bellman updates will converge if every state is visited infinitely often (i.e., we do not need to update all states in each iteration). In fact, we can update the policy as seldom or as often as we like, and we will still converge. Idea: update the states whose values we expect to change: if |V_{i+1}(s) - V_i(s)| is large, then update the predecessors of s.
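One way this idea is often realized is a worklist of "stale" states; the sketch below is illustrative rather than the exact scheme from the slides, and it assumes a helper predecessors(s) that returns the states that can transition into s.

def asynchronous_value_iteration(states, actions, T, R, predecessors,
                                 gamma=0.9, tol=1e-6, max_updates=100000):
    def backup(s, V):
        acts = actions(s)
        if not acts:                         # terminal state
            return 0.0
        return max(sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
                   for a in acts)

    V = {s: 0.0 for s in states}
    queue = list(states)                     # start by touching every state once
    updates = 0
    while queue and updates < max_updates:
        s = queue.pop()
        new_v = backup(s, V)                 # in-place (Gauss-Seidel style) update
        change = abs(new_v - V[s])
        V[s] = new_v
        updates += 1
        if change > tol:                     # large change: its predecessors are now stale
            queue.extend(predecessors(s))
    return V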

Types of Search Problems Deterministic search: single agent, goal, cost per action; minimize cost. Games: multiagent, utility at the end, no cost per action; minimax (optimal adversarial opponent), expectimax and expectiminimax (the opponent's (agent or environment) action is not known). Non-deterministic search (MDPs): probabilities on action outcomes, instant rewards, no utility at the end, may take forever.