Expectimax and other Games


Expectimax and other Games 2018/01/30 Chapter 5 in R&N 3rd

Announcements:
- Slides for this lecture are here: http://www.public.asu.edu/~yzhan442/teaching/cse471/lectures/games.pdf
- Project 2 released, due Feb 15
- Homework 2 will be released, due Feb 9
- Don't forget about Quiz 4, due Feb 6 before class, to be released
- Poll 1 on Piazza, voluntary and anonymous, to be released

Slides are largely based on information from http://ai.berkeley.edu and Russell

Last time: games, adversarial search, evaluation functions, alpha-beta pruning. Required reading (red means it will be on your exams): R&N, Chapter 5

Outline for today: Expectimax; Expectiminimax; General games; Expected Maximum Utility. Required reading (red means it will be on your exams): R&N, Chapter 5

Worst case vs. average case (max/min tree over leaf values 10, 10, 9, 100). Idea: uncertain outcomes are controlled by chance, not an adversary!

Expectimax search

Why wouldn't we know what the result of an action will be?
- Explicit randomness: rolling dice
- Unpredictable opponents: the ghosts respond randomly
- Actions can fail: when moving a robot, wheels might slip

Values should now reflect average-case (expectimax) outcomes, not worst-case (minimax) outcomes.

Expectimax search: compute the average score under optimal play
- Max nodes as in minimax search
- Chance nodes are like min nodes, but the outcome is uncertain
- Calculate their expected utilities, i.e. take the weighted average (expectation) of the children

Demo: Minimax vs. Expectimax


Expectimax pseudocode:

def value(state):
    if the state is a terminal state: return the state's utility
    if the next agent is MAX: return max-value(state)
    if the next agent is EXP: return exp-value(state)

def max-value(state):
    initialize v = -∞
    for each successor of state:
        v = max(v, value(successor))
    return v

def exp-value(state):
    initialize v = 0
    for each successor of state:
        p = probability(successor)
        v += p * value(successor)
    return v
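The pseudocode above can be sketched as runnable Python. The tree encoding (plain numbers for terminals, `("max", [children])` and `("exp", [(probability, child), ...])` tuples for internal nodes) is illustrative, not from the original slides:

```python
import math

# Runnable sketch of the expectimax pseudocode above. A node is either a
# plain number (terminal utility), a ("max", [children]) tuple, or an
# ("exp", [(probability, child), ...]) tuple.

def value(state):
    if isinstance(state, (int, float)):    # terminal state: return its utility
        return state
    kind, children = state
    if kind == "max":
        return max_value(children)
    if kind == "exp":
        return exp_value(children)
    raise ValueError("unknown node type: %r" % kind)

def max_value(children):
    v = -math.inf
    for successor in children:
        v = max(v, value(successor))
    return v

def exp_value(children):
    v = 0.0
    for p, successor in children:          # p = probability(successor)
        v += p * value(successor)
    return v

# Chance node with probabilities 1/2, 1/3, 1/6 over utilities 8, 24, -12:
# expected value (1/2)(8) + (1/3)(24) + (1/6)(-12) = 10
print(value(("exp", [(1/2, 8), (1/3, 24), (1/6, -12)])))
```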

Expectimax: evaluating a chance node with exp-value. With child probabilities 1/2, 1/3, 1/6 and child values 8, 24, -12:

v = (1/2)(8) + (1/3)(24) + (1/6)(-12) = 4 + 8 - 2 = 10

Expectimax example (leaf values: 3, 12, 9, 2, 4, 6, 15, 6, 0)

Expectimax pruning? (leaf values: 3, 12, 9, 2)

Depth-limited expectimax: the depth-limited values in the tree (e.g. 400 vs. 300, 492 vs. 362) are estimates of the true expectimax value (which would require a lot of work to compute).

Expectimax vs. minimax: optimism vs. pessimism
- Dangerous optimism: assuming chance when the world is adversarial
- Dangerous pessimism: assuming the worst case when it's not likely

Expectimax vs. minimax: optimism vs. pessimism. Setup: Minimax Pacman and Expectimax Pacman, each against an Adversarial Ghost and a Random Ghost. Pacman used depth 4 search with an eval function that avoids trouble; the ghost used depth 2 search with an eval function that seeks Pacman.

Expectimax vs. minimax: optimism vs. pessimism. Results:
- Minimax Pacman vs. either ghost: lower score, but always wins
- Expectimax Pacman vs. Adversarial Ghost: disaster!
- Expectimax Pacman vs. Random Ghost: expected to achieve the highest average score

Pacman used depth 4 search with an eval function that avoids trouble; the ghost used depth 2 search with an eval function that seeks Pacman.

Expectation

The expected value of a function of a random variable is the average, weighted by the probability distribution over outcomes.

Example: how long to get to the airport?
- 20 min with probability 0.25
- 30 min with probability 0.50
- 60 min with probability 0.25

Expected time: 20 × 0.25 + 30 × 0.50 + 60 × 0.25 = 35 min
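The airport example above is a one-liner in code: weight each outcome by its probability and sum.

```python
# Expected travel time from the slide's example: each outcome (in minutes)
# weighted by its probability.
times = [20, 30, 60]
probs = [0.25, 0.50, 0.25]

expected_time = sum(p * t for p, t in zip(probs, times))
print(expected_time)  # 35.0
```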

Expectation: but what probability distribution should we put over the outcomes (leaf values 3, 12, 9, 2, 4, 6, 15, 6, 0)?

Probabilities in expectimax

Aren't we essentially assuming that our opponent is flipping a coin? In expectimax search, we have a probabilistic model of how the opponent (or environment) will behave in any state:
- The model could be a simple uniform distribution (roll a die)
- We have a chance node for any outcome out of our control: opponent or environment
- The model might say that adversarial actions are likely!
- The model could be sophisticated and require a great deal of computation AND statistical analysis

Probabilities in expectimax

Let's say you know that your opponent is actually running a depth 2 minimax, using the result 80% of the time, and moving randomly otherwise. Question: how do we solve this problem? Answer: expectimax!
- To figure out EACH chance node's probabilities, you have to run a simulation of your opponent
- This kind of thing gets very slow very quickly
- Even worse if you have to simulate your opponent simulating you... except for minimax, which has the nice property that it all collapses into one game tree

Probabilities in expectimax

Let's say you know that your opponent is actually running a depth 2 minimax, using the result 80% of the time, and moving randomly otherwise. Question: how do we solve this problem? Answer: expectimax! Issues:
1. We have to assume the opponent's knowledge about us!
2. The opponent model is difficult to come up with and may change over time
3. There is much more computational overhead on our side; it may not be feasible

Outline for today: Expectimax; Expectiminimax; General games; Expected Maximum Utility. Required reading (red means it will be on your exams): R&N, Chapter 5

Expectiminimax


Outline for today: Expectimax; Expectiminimax; General games; Expected Maximum Utility. Required reading (red means it will be on your exams): R&N, Chapter 5

General games

What if the game is not zero-sum? Generalization of minimax:
- Terminals have utility tuples
- Node values are also utility tuples
- Each player maximizes its own component
- Can give rise to cooperation and competition dynamically

Example terminal tuples from the tree: (1,6,6), (7,1,2), (6,1,2), (7,2,1), (5,1,7), (1,5,2), (7,7,1), (5,2,5)
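The tuple-valued generalization above can be sketched in a few lines. The tree encoding (an internal node is `(player_index, [children])`, a terminal is a utility tuple) and the particular tree shape are assumptions for illustration, reusing some of the slide's utility tuples:

```python
# Sketch of minimax generalized to non-zero-sum, multi-player games:
# terminals hold one utility per player, and the player to move picks the
# child whose value tuple maximizes that player's own component.

def tuple_value(node):
    # Internal nodes are (player_index, [children]); terminals are utility tuples.
    if isinstance(node[1], list):
        player, children = node
        return max((tuple_value(c) for c in children),
                   key=lambda utils: utils[player])  # maximize own component
    return node

# Three players (0, 1, 2); player 0 moves at the root, player 1 below.
tree = (0, [(1, [(1, 6, 6), (7, 1, 2)]),
            (1, [(7, 2, 1), (5, 1, 7)])])
print(tuple_value(tree))  # (7, 2, 1)
```

Player 1 picks (1,6,6) on the left (6 > 1 in its own component) and (7,2,1) on the right (2 > 1), then player 0 picks (7,2,1) at the root (7 > 1).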

Outline for today: Expectimax; Expectiminimax; General games; Expected Maximum Utility. Required reading (red means it will be on your exams): R&N, Chapter 5

Maximum expected utilities

Why should we average utilities? Why not minimax?

Principle of maximum expected utility: a rational agent should choose the action that maximizes its expected utility, given its knowledge.

Questions:
- Where do utilities come from?
- How do we know such utilities even exist?
- How do we know that averaging even makes sense?
- What if our behavior (preferences) can't be described by utilities?

Utilities example: getting ice cream (actions: Get Single, Get Double; chance outcomes: Oops, Whew!)

Utilities

What utilities to use?

Consider leaf values 0, 40, 20, 30, and apply x → x² to get 0, 1600, 400, 900.

For worst-case minimax reasoning, the scale of the terminal function doesn't matter: we just want better states to have higher evaluations (get the ordering right). We call this insensitivity to monotonic transformations. For average-case expectimax reasoning, we need the magnitudes to be meaningful.
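The point above can be checked numerically with the slide's own leaf values. The grouping of the leaves into two branches is an assumption for illustration:

```python
# Squaring is a monotonic transformation: it preserves the worst-case
# (minimax) ordering between branches, but can flip which branch has the
# higher average, which is what expectimax cares about.
left, right = [0, 40], [20, 30]

def avg(xs):
    return sum(xs) / len(xs)

def sq(xs):
    return [x * x for x in xs]

# Worst case: the right branch is better both before and after squaring.
print(min(left) < min(right), min(sq(left)) < min(sq(right)))  # True True

# Average case: right is better before squaring (20 vs 25),
# but left is better after (800 vs 650) -- the preference flips.
print(avg(left) < avg(right), avg(sq(left)) < avg(sq(right)))  # True False
```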

Utilities

Utilities are functions from outcomes (states of the world) to real numbers that describe an agent's preferences.

Where do utilities come from?
- In a game, they may be simple (+1/-1)
- Utilities summarize the agent's goals
- Theorem: any rational preferences can be summarized as a utility function

We hard-wire utilities and let behaviors emerge. Why don't we let agents pick utilities? Why don't we prescribe behaviors?

Preferences

An agent must have preferences among:
- Prizes: A, B, etc.
- Lotteries: situations with uncertain prizes, e.g. L = [p, A; (1-p), B]

Notation: preference is written A ≻ B; indifference is written A ~ B

Rational preferences

We want some constraints on preferences before we call them rational, such as the axiom of transitivity: (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C).

For example, an agent with intransitive preferences can be induced to give away all of its money:
- If B ≻ C, then an agent with C would pay (say) 1 cent to get B
- If A ≻ B, then an agent with B would pay (say) 1 cent to get A
- If C ≻ A, then an agent with A would pay (say) 1 cent to get C

Rational preferences: the axioms of rationality. Theorem: rational preferences imply behavior describable as maximization of expected utility.

MEU principle

Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that

U(A) ≥ U(B) ⟺ A ≽ B
U([p1, S1; ... ; pn, Sn]) = Σ_i p_i U(S_i)

i.e. the values assigned by U preserve preferences over both prizes and lotteries!

Maximum expected utility (MEU) principle: choose the action that maximizes expected utility. Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities. E.g., a lookup table for perfect tic-tac-toe, or a reflex vacuum cleaner.

Utility scales. Note: behavior is invariant under positive linear transformations of the utility function, U'(x) = aU(x) + b with a > 0. Normalized utilities: u+ = 1.0, u- = 0.0.

Human utilities

Human utilities

Utilities map states to real numbers. Which numbers? Standard approach to assessment (elicitation) of human utilities:
- Compare a prize A to a standard lottery L_p between the best possible prize u+ (with probability p) and the worst possible catastrophe u- (with probability 1-p)
- Adjust the lottery probability p until indifference: A ~ L_p
- The resulting p is a utility in [0, 1]

Example: "pay $30" compared against a lottery with probability 0.999999 of "no change" and 0.000001 of "instant death".

Money

Money does not behave as a utility function, but we can talk about the utility of having money (or being in debt). Given a lottery L = [p, $X; (1-p), $Y]:
- The expected monetary value is EMV(L) = p·X + (1-p)·Y
- The expected utility is U(L) = p·U($X) + (1-p)·U($Y)
- Typically, U(L) < U(EMV(L)); in this sense, people are risk-averse
- When deep in debt, people are risk-prone
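A quick numeric sketch of the risk-aversion claim above, assuming a concave utility U($x) = sqrt(x). The square-root form is an illustration, not from the slides; any concave utility gives the same qualitative result:

```python
import math

# Lottery from the next slide's insurance example: [0.5, $1000; 0.5, $0].
p, X, Y = 0.5, 1000, 0
U = math.sqrt                      # assumed concave (risk-averse) utility

emv = p * X + (1 - p) * Y          # expected monetary value EMV(L)
eu = p * U(X) + (1 - p) * U(Y)     # expected utility U(L)

print(emv)                         # 500.0
print(eu < U(emv))                 # True: U(L) < U(EMV(L)), i.e. risk-averse
print(round(eu ** 2, 6))           # ~250: certainty equivalent under sqrt utility
```

Under this utility the agent would accept a sure $250 in place of the lottery, even though the lottery is worth $500 on average; the gap is what makes insurance premiums possible.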

Insurance

Consider the lottery [0.5, $1000; 0.5, $0].
- What is its expected monetary value? ($500)
- What is its certainty equivalent, the monetary value acceptable in lieu of the lottery? About $400 for most people
- The difference of $100 is the insurance premium

There's an insurance industry because people will pay to reduce their risk. If everyone were risk-neutral, no insurance would be needed! It's win-win: you'd rather have the $400, and the insurance company would rather have the lottery (their utility curve is flat, and they have many lotteries).

Human rationality

Famous example of Allais (1953):
A: [0.8, $4k; 0.2, $0]    B: [1.0, $3k; 0.0, $0]
C: [0.2, $4k; 0.8, $0]    D: [0.25, $3k; 0.75, $0]

Most people prefer B ≻ A and C ≻ D. But then:
B ≻ A ⇒ U($3k) > 0.8 U($4k)
C ≻ D ⇒ 0.8 U($4k) > U($3k)

These two inequalities contradict each other, so such preferences cannot be represented by any utility function.
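The contradiction above can be verified mechanically: no assignment of utilities to $3k and $4k satisfies both implied inequalities. The helper function and the brute-force search are illustrative:

```python
# The Allais preferences are inconsistent with expected utility:
# preferring B to A requires U($3k) > 0.8*U($4k), while preferring
# C to D requires 0.8*U($4k) > U($3k). At most one can hold.
def allais_consistent(u3k, u4k):
    b_over_a = u3k > 0.8 * u4k          # implied by B > A
    c_over_d = 0.2 * u4k > 0.25 * u3k   # implied by C > D (i.e. 0.8*u4k > u3k)
    return b_over_a and c_over_d

# No pair of candidate utilities satisfies both.
print(any(allais_consistent(u3, u4)
          for u3 in range(1, 100)
          for u4 in range(1, 100)))  # False
```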

Outline for today: Expectimax; Expectiminimax; General games; Expected Maximum Utility. Required reading (red means it will be on your exams): R&N, Chapter 5