CS 188: Artificial Intelligence, Fall 2010. Lecture 8: MEU / Utilities (Expectimax Search Trees, Expectimax Quantities, Expectimax Pseudocode, Expectimax Pruning)


CS 188: Artificial Intelligence, Fall 2010
Lecture 8: MEU / Utilities
9/21/2010
Dan Klein, UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Expectimax Search Trees
What if we don't know what the result of an action will be? E.g.:
- In solitaire, the next card is unknown
- In minesweeper, the mine locations are unknown
- In Pacman, the ghosts act randomly
We can do expectimax search to maximize the average score:
- Max nodes behave as in minimax search
- Chance nodes are like min nodes, except the outcome is uncertain: calculate expected utilities, i.e., take the average (expectation) of the children's values
[Figure: expectimax tree with a max root, chance children, and leaf values 10, 10, 4, 59, 100, 7]
Later, we'll learn how to formalize the underlying problem as a Markov Decision Process. [DEMO: minvsexp]

Expectimax Pseudocode

    def value(s):
        if s is a max node:
            return maxvalue(s)
        if s is an exp node:
            return expvalue(s)
        if s is a terminal node:
            return evaluation(s)

    def maxvalue(s):
        values = [value(s') for s' in successors(s)]
        return max(values)

    def expvalue(s):
        values = [value(s') for s' in successors(s)]
        weights = [probability(s, s') for s' in successors(s)]
        return expectation(values, weights)

Expectimax Quantities
[Figure: example expectimax tree with leaf values 3, 12, 9, 2, 4, 6, 15, 6, 0]

Expectimax Pruning?
[Figure: the same example tree, leaf values 3, 12, 9, 2, 4, 6, 15, 6, 0]

Expectimax Search
Chance nodes:
- Chance nodes are like min nodes, except the outcome is uncertain
- Calculate expected utilities: chance nodes average successor values (weighted)
- Each chance node has a probability distribution over its outcomes (called a model)
- For now, assume we're given the model
Utilities for terminal states:
- Static evaluation functions give us limited-depth search (1 search ply)
- The evaluations are an estimate of the true expectimax value (which would require a lot of work to compute)
[Figure: one-ply search with leaf estimates 400, 300, 492, 362]
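To make the pseudocode concrete, here is a minimal runnable Python sketch. The Node class and the example tree are illustrative additions, not part of the slides; the leaf values echo the example tree pictured above.

    # A runnable sketch of the expectimax pseudocode above. The Node class
    # and the example tree are illustrative, not from the slides.
    class Node:
        def __init__(self, kind, children=None, probs=None, value=None):
            self.kind = kind              # 'max', 'exp', or 'terminal'
            self.children = children or []
            self.probs = probs or []      # outcome probabilities for 'exp' nodes
            self.value = value            # evaluation for 'terminal' nodes

    def expectimax(s):
        if s.kind == 'terminal':
            return s.value
        if s.kind == 'max':
            return max(expectimax(c) for c in s.children)
        # 'exp' node: probability-weighted average of successor values
        return sum(p * expectimax(c) for p, c in zip(s.probs, s.children))

    # A max root over two fair coin-flip chance nodes, using leaf values
    # from the example tree above (3, 12, 9, 2):
    left = Node('exp', [Node('terminal', value=3), Node('terminal', value=12)], probs=[0.5, 0.5])
    right = Node('exp', [Node('terminal', value=9), Node('terminal', value=2)], probs=[0.5, 0.5])
    print(expectimax(Node('max', [left, right])))  # 7.5 = max(7.5, 5.5)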

Expectimax for Pacman
Notice that we've gotten away from thinking that the ghosts are trying to minimize Pacman's score. Instead, they are now part of the environment, and Pacman has a belief (distribution) over how they will act.
Quiz: Can we see minimax as a special case of expectimax?
Quiz: What would Pacman's computation look like if we assumed that the ghosts were doing 1-ply minimax and taking the result 80% of the time, otherwise moving randomly? (See the sketch after these slides.)
If you take this further, you end up calculating belief distributions over your opponents' belief distributions over your belief distributions, etc. This can get unmanageable very quickly!

Expectimax for Pacman (Results)
Results from playing 5 games:

                          Minimax Pacman    Expectimax Pacman
        Minimizing Ghost  493               -303 (won 1/5)
        Random Ghost      483               503

Pacman used depth-4 search with an evaluation function that avoids trouble; the ghost used depth-2 search with an evaluation function that seeks Pacman. [demo: world assumptions]

Expectimax Utilities
For minimax, the scale of the terminal function doesn't matter: we just want better states to have higher evaluations (get the ordering right). We call this insensitivity to monotonic transformations.
For expectimax, we need the magnitudes to be meaningful.
[Figure: two trees with leaf utilities 0, 40, 20, 30 and their squares 0, 1600, 400, 900; the monotonic transformation x² can change the expectimax decision]

Maximum Expected Utility
Why should we average utilities? Why not minimax?
Principle of maximum expected utility: a rational agent should choose the action which maximizes its expected utility, given its knowledge.
Questions:
- Where do utilities come from?
- How do we know such utilities even exist?
- Why are we taking expectations of utilities (not, e.g., minimax)?
- What if our behavior can't be described by utilities?

Utilities
Utilities are functions from outcomes (states of the world) to real numbers that describe an agent's preferences.
Where do utilities come from?
- In a game, they may be simple (+1/-1)
- Utilities summarize the agent's goals
- Theorem: any rational preferences can be summarized as a utility function
We hard-wire utilities and let behaviors emerge:
- Why don't we let agents pick utilities?
- Why don't we prescribe behaviors?

Utilities: Uncertain Outcomes
[Figure: decision tree for "Going to airport from home" with uncertain outcomes "Get Single (Whew)" and "Get Double (Oops)"]
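As a sketch of the second quiz above: the 80%-minimax / 20%-random ghost model can be folded directly into the chance-node weights. All helper names here (ghost_actions, minimax_action, result, value) are hypothetical stand-ins for the search code. Note that setting eps = 0 puts all probability mass on the minimizing action and recovers minimax as a special case of expectimax, which answers the first quiz as well.

    # Hypothetical sketch: a ghost that plays its 1-ply minimax move 80% of
    # the time and a uniformly random legal move the remaining 20%.
    def ghost_action_probs(state, eps=0.2):
        actions = ghost_actions(state)     # legal ghost moves (hypothetical helper)
        best = minimax_action(state)       # ghost's 1-ply minimax choice (hypothetical)
        # (1 - eps) mass on the minimax move, eps spread uniformly over all moves
        return {a: (1 - eps) * (a == best) + eps / len(actions) for a in actions}

    def expvalue(state):
        probs = ghost_action_probs(state)
        return sum(p * value(result(state, a)) for a, p in probs.items())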

Preferences
An agent chooses among:
- Prizes: A, B, etc.
- Lotteries: situations with uncertain prizes
Notation: A ≻ B means the agent prefers A to B; A ~ B means the agent is indifferent; a lottery L = [p, A; (1-p), B] yields A with probability p and B with probability 1-p.

Rational Preferences
We want some constraints on preferences before we call them rational, for example transitivity:
    (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C)
An agent with intransitive preferences can be induced to give away all of its money:
- If B ≻ C, then an agent with C would pay (say) 1 cent to get B
- If A ≻ B, then an agent with B would pay (say) 1 cent to get A
- If C ≻ A, then an agent with A would pay (say) 1 cent to get C

Rational Preferences
Preferences of a rational agent must obey constraints: the axioms of rationality.
Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that:
    U(A) ≥ U(B)  ⇔  A ≽ B
    U([p₁, S₁; ... ; pₙ, Sₙ]) = Σᵢ pᵢ U(Sᵢ)

MEU Principle
Theorem: rational preferences imply behavior describable as maximization of expected utility.
Maximum expected utility (MEU) principle: choose the action that maximizes expected utility.
Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities. E.g., a lookup table for perfect tic-tac-toe, or a reflex vacuum cleaner.

Utility Scales
- Normalized utilities: u+ = 1.0, u- = 0.0
- Micromorts: one-millionth chance of death; useful for paying to reduce product risks, etc.
- QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk
- Note: behavior is invariant under positive linear transformation (see the sketch after these slides)

Human Utilities
Utilities map states to real numbers. Which numbers?
Standard approach to assessment of human utilities:
- Compare a state A to a standard lottery L_p between the best possible prize u+ with probability p and the worst possible catastrophe u- with probability 1-p
- Adjust the lottery probability p until A ~ L_p
- The resulting p is a utility in [0, 1]
With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes.
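A small sketch of the invariance note above, using two illustrative lotteries built from the leaf values on the Expectimax Utilities slide (0, 40, 20, 30): a positive linear transformation leaves the expectimax choice unchanged, while a merely monotonic one (squaring) can flip it.

    # Illustrative lotteries over the leaf utilities 0, 40, 20, 30 from the
    # Expectimax Utilities slide; each entry is (probability, utility).
    lotteries = {'A': [(0.5, 0), (0.5, 40)], 'B': [(0.5, 20), (0.5, 30)]}

    def best_choice(transform):
        eu = {name: sum(p * transform(u) for p, u in outcomes)
              for name, outcomes in lotteries.items()}
        return max(eu, key=eu.get)

    print(best_choice(lambda u: u))            # 'B': EU(A) = 20 < EU(B) = 25
    print(best_choice(lambda u: 2 * u + 100))  # 'B': positive linear, same choice
    print(best_choice(lambda u: u ** 2))       # 'A': squaring flips it (800 vs 650)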

Money
Money does not behave as a utility function, but we can talk about the utility of having money (or being in debt).
Given a lottery L = [p, $X; (1-p), $Y]:
- The expected monetary value is EMV(L) = p·X + (1-p)·Y
- The utility is U(L) = p·U($X) + (1-p)·U($Y)
- Typically, U(L) < U(EMV(L)): why? In this sense, people are risk-averse
- When deep in debt, we are risk-prone
Utility curve: for what probability p am I indifferent between a sure outcome x and a lottery [p, $M; (1-p), $0], M large?

Example: Insurance
Consider the lottery [0.5, $1000; 0.5, $0]:
- What is its expected monetary value? ($500)
- What is its certainty equivalent? (The monetary value acceptable in lieu of the lottery: $400 for most people)
- The difference of $100 is the insurance premium
There's an insurance industry because people will pay to reduce their risk. If everyone were risk-neutral, no insurance would be needed!

Example: Insurance (continued)
Because people ascribe different utilities to different amounts of money, insurance agreements can increase both parties' expected utility. (This example is checked in code after these slides.)
You own a car. Your lottery: L_Y = [0.8, $0; 0.2, -$200], i.e., a 20% chance of crashing. Your utilities:

        Amount    Your utility U_Y
        $0        0
        -$50      -150
        -$200     -1000

You do not want -$200! U_Y(L_Y) = 0.8·U_Y($0) + 0.2·U_Y(-$200) = -200, while U_Y(-$50) = -150, so you prefer paying a $50 premium to holding the lottery.
The insurance company buys the risk: L_I = [0.8, $50; 0.2, -$150], i.e., $50 revenue plus your L_Y. The insurer is risk-neutral, U(L) = U(EMV(L)), so U_I(L_I) = U(0.8·50 + 0.2·(-150)) = U($10) > U($0).

Example: Human Rationality?
Famous example of Allais (1953):
- A: [0.8, $4k; 0.2, $0]
- B: [1.0, $3k; 0.0, $0]
- C: [0.2, $4k; 0.8, $0]
- D: [0.25, $3k; 0.75, $0]
Most people prefer B ≻ A and C ≻ D. But if U($0) = 0, then B ≻ A implies U($3k) > 0.8·U($4k), while C ≻ D implies 0.8·U($4k) > U($3k): a contradiction.

Non-Zero-Sum Utilities
Similar to minimax:
- Terminals have utility tuples
- Node values are also utility tuples
- Each player maximizes its own utility
- Can give rise to cooperation and competition dynamically
[Figure: three-player game tree with terminal utility tuples (1,6,6), (7,1,2), (6,1,2), (7,2,1), (5,1,7), (1,5,2), (7,7,1), (5,2,5)]
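The insurance arithmetic above can be checked with a few lines of Python; the utility table values are the ones from the slide, and the expected() helper is an illustrative addition.

    # Checking the insurance example with the slide's utility table.
    U_Y = {0: 0, -50: -150, -200: -1000}   # your (risk-averse) utilities

    def expected(lottery, f=lambda x: x):
        return sum(p * f(x) for p, x in lottery)

    L_Y = [(0.8, 0), (0.2, -200)]          # your risk: 20% chance of a $200 crash
    print(expected(L_Y))                    # EMV(L_Y) = -40
    print(expected(L_Y, U_Y.get))           # U_Y(L_Y) = -200
    print(U_Y[-50])                         # U_Y(-$50) = -150 > -200: buy insurance

    L_I = [(0.8, 50), (0.2, -150)]         # insurer: $50 premium plus your lottery
    print(expected(L_I))                    # EMV(L_I) = +10 > 0: insurer gains too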

Mixed Layer Types
E.g., backgammon. Expectiminimax:
- The environment is an extra "player" that moves after each agent
- Chance nodes take expectations; otherwise it is like minimax

    ExpectiMinimax-Value(state):
        if state is terminal: return Utility(state)
        if state is a max node: return the maximum of ExpectiMinimax-Value over successors
        if state is a min node: return the minimum of ExpectiMinimax-Value over successors
        if state is a chance node: return the probability-weighted average of ExpectiMinimax-Value over successors

Stochastic Two-Player
- Dice rolls increase the branching factor b: 21 possible rolls with 2 dice
- Backgammon has about 20 legal moves
- Depth 2 = 20 × (21 × 20)³ = 1.2 × 10⁹ search nodes
- As depth increases, the probability of reaching a given search node shrinks:
  - So the usefulness of search is diminished
  - So limiting depth is less damaging
  - But pruning is trickier
- TD-Gammon uses depth-2 search + a very good evaluation function + reinforcement learning: world-champion level play
- The 1st AI world champion in any game!
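A minimal expectiminimax sketch in the same style as the earlier expectimax code; it assumes the illustrative Node class from that sketch, extended with a 'min' node kind.

    # Expectiminimax, reusing the illustrative Node class from the earlier
    # expectimax sketch (with 'min' as an additional node kind).
    def expectiminimax(s):
        if s.kind == 'terminal':
            return s.value
        vals = [expectiminimax(c) for c in s.children]
        if s.kind == 'max':
            return max(vals)
        if s.kind == 'min':
            return min(vals)
        # chance node (e.g., a dice roll): probability-weighted average
        return sum(p * v for p, v in zip(s.probs, vals))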