CS 188: Artificial Intelligence, Spring 2010
Lecture 8: MEU / Utilities (2/11/2010)
Pieter Abbeel, UC Berkeley (many slides over the course adapted from Dan Klein)

Announcements
- W2 is due today (lecture or drop box)
- P2 is out and due on 2/18

Expectimax Search Trees
What if we don't know what the result of an action will be? E.g.:
- In solitaire, the next card is unknown
- In minesweeper, the mine locations are unknown
- In Pacman, the ghosts act randomly
In such cases we can do expectimax search:
- Chance nodes are like min nodes, except the outcome is uncertain, so we calculate expected utilities
- Max nodes work as in minimax search
- Chance nodes take the average (expectation) of the values of their children
[Figure: an expectimax tree with a max node over chance nodes; leaf values 10, 4, 5, 7]
Later, we'll learn how to formalize the underlying problem as a Markov Decision Process.

Maximum Expected Utility
Why should we average utilities? Why not minimax?
Principle of maximum expected utility: an agent should choose the action which maximizes its expected utility, given its knowledge.
- A general principle for decision making
- Often taken as the definition of rationality
- We'll see this idea over and over in this course!
Let's decompress this definition: probability, expectation, utility.
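
As a minimal sketch of the MEU principle (all names here are illustrative, not from the lecture), the snippet below picks the action with the highest expected utility, assuming we are given a distribution over outcomes for each action:

    # Sketch of MEU action selection. Each action maps to a distribution
    # over outcomes, and each outcome has a utility.
    def expected_utility(outcome_dist, utility):
        # Expected utility: sum of P(outcome) * U(outcome).
        return sum(p * utility[o] for o, p in outcome_dist.items())

    def meu_action(actions, utility):
        # Choose the action that maximizes expected utility.
        return max(actions, key=lambda a: expected_utility(actions[a], utility))

    # Hypothetical example: two ways to get to the airport.
    utility = {"early": 10, "on_time": 5, "late": -20}
    actions = {
        "freeway": {"early": 0.5, "late": 0.5},          # EU = 0.5*10 + 0.5*(-20) = -5
        "surface_streets": {"on_time": 1.0},             # EU = 5
    }
    print(meu_action(actions, utility))  # surface_streets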

Reminder: Probabilities
- A random variable represents an event whose outcome is unknown
- A probability distribution is an assignment of weights to outcomes
Example: traffic on the freeway?
- Random variable: T = amount of traffic
- Outcomes: T in {none, light, heavy}
- Distribution: P(T=none) = 0.25, P(T=light) = 0.55, P(T=heavy) = 0.20
Some laws of probability (more later):
- Probabilities are always non-negative
- Probabilities over all possible outcomes sum to one
As we get more evidence, probabilities may change:
- P(T=heavy) = 0.20, but P(T=heavy | Hour=8am) = 0.60
- We'll talk about methods for reasoning about and updating probabilities later

What are Probabilities?
Objectivist / frequentist answer:
- Averages over repeated experiments
- E.g. empirically estimating P(rain) from historical observation
- Assertion about how future experiments will go (in the limit)
- New evidence changes the reference class
- Makes one think of inherently random events, like rolling dice
Subjectivist / Bayesian answer:
- Degrees of belief about unobserved variables
- E.g. an agent's belief that it's raining, given the temperature
- E.g. Pacman's belief that the ghost will turn left, given the state
- Often learn probabilities from past experiences (more later)
- New evidence updates beliefs (more later)
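
As a small illustration (my own code, not from the slides), here is the traffic distribution as a Python dict, with checks for the two laws above; the non-heavy probabilities in the 8am conditional are invented for the example:

    # The traffic distribution from the slide, plus the two laws of
    # probability. Only P(T=heavy | 8am) = 0.6 comes from the slide;
    # the other 8am numbers are made up for illustration.
    P_T = {"none": 0.25, "light": 0.55, "heavy": 0.20}
    P_T_given_8am = {"none": 0.10, "light": 0.30, "heavy": 0.60}

    for dist in (P_T, P_T_given_8am):
        assert all(p >= 0 for p in dist.values())    # non-negativity
        assert abs(sum(dist.values()) - 1.0) < 1e-9  # outcomes sum to one

    print(P_T["heavy"], P_T_given_8am["heavy"])      # 0.2 -> 0.6 after evidence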

Uncertainty Everywhere
Not just for games of chance!
- I'm sick: will I sneeze this minute?
- Email contains "FREE!": is it spam?
- Tooth hurts: do I have a cavity?
- Is 60 min enough to get to the airport?
- Robot rotated its wheel three times: how far did it advance?
- Safe to cross the street? (Look both ways!)
Sources of uncertainty in random variables:
- Inherently random processes (dice, etc.)
- Insufficient or weak evidence
- Ignorance of underlying processes
- Unmodeled variables
- The world's just noisy: it doesn't behave according to plan!

Reminder: Expectations
We can define a function f(X) of a random variable X. The expected value of a function is its average value, weighted by the probability distribution over inputs.
Example: How long to get to the airport?
- Length of driving time as a function of traffic: L(none) = 20, L(light) = 30, L(heavy) = 60
- What is my expected driving time? Notation: E[L(T)]
- Suppose P(T) = {none: 0.25, light: 0.5, heavy: 0.25}
- E[L(T)] = L(none) * P(none) + L(light) * P(light) + L(heavy) * P(heavy)
- E[L(T)] = (20 * 0.25) + (30 * 0.5) + (60 * 0.25) = 35
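
A minimal sketch of this computation in Python (function and variable names are mine, not the lecture's):

    # Expected driving time E[L(T)] from the slide above.
    def expectation(values, weights):
        # Weighted average: sum of value * probability.
        return sum(v * w for v, w in zip(values, weights))

    L = {"none": 20, "light": 30, "heavy": 60}
    P_T = {"none": 0.25, "light": 0.5, "heavy": 0.25}

    outcomes = list(L)
    print(expectation([L[t] for t in outcomes],
                      [P_T[t] for t in outcomes]))  # 35.0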

Utilities
Utilities are functions from outcomes (states of the world) to real numbers that describe an agent's preferences.
Where do utilities come from?
- In a game, they may be simple (+1 / -1)
- Utilities summarize the agent's goals
- Theorem: any set of preferences between outcomes can be summarized as a utility function (provided the preferences meet certain conditions)
In general, we hard-wire utilities and let actions emerge (why don't we let agents decide their own utilities?). More on utilities soon.

Expectimax Search
In expectimax search, we have a probabilistic model of how the opponent (or environment) will behave in any state:
- The model could be a simple uniform distribution (roll a die)
- The model could be sophisticated and require a great deal of computation
- We have a node for every outcome out of our control: opponent or environment
- The model might say that adversarial actions are likely!
For now, assume that for any state we magically have a distribution that assigns probabilities to opponent actions / environment outcomes.
Having a probabilistic belief about an agent's action does not mean that agent is flipping any coins!

Expectimax Search: Chance Nodes
- Chance nodes are like min nodes, except the outcome is uncertain, so we calculate expected utilities
- Chance nodes average successor values (weighted)
- Each chance node has a probability distribution over its outcomes (called a model); for now, assume we're given the model
- Utilities for terminal states
- Static evaluation functions give us limited-depth search
[Figure: one search ply of an expectimax tree; depth-limit estimates 400, 300, 492, 362 approximate the true expectimax value, which would require a lot of work to compute]

Expectimax Pseudocode

    def value(s):
        if s is a max node: return maxvalue(s)
        if s is an exp node: return expvalue(s)
        if s is a terminal node: return evaluation(s)

    def maxvalue(s):
        values = [value(s') for s' in successors(s)]
        return max(values)

    def expvalue(s):
        values = [value(s') for s' in successors(s)]
        weights = [probability(s, s') for s' in successors(s)]
        return expectation(values, weights)
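
The pseudocode above is not executable as written; below is one way it might look as runnable Python, under a node encoding of my own: ("max", children), ("exp", [(probability, child), ...]), or a bare number for a terminal evaluation:

    # A runnable sketch of the expectimax pseudocode. The node encoding
    # is an assumption made for this example, not from the slides.
    def value(s):
        if isinstance(s, (int, float)):  # terminal: return its evaluation
            return s
        kind, children = s
        if kind == "max":
            return max(value(c) for c in children)
        if kind == "exp":                # weighted average of successors
            return sum(p * value(c) for p, c in children)
        raise ValueError(f"unknown node type: {kind}")

    # The max-over-chance-nodes tree from the earlier slide (leaves
    # 10, 4, 5, 7); uniform outcome probabilities are my assumption.
    tree = ("max", [
        ("exp", [(0.5, 10), (0.5, 4)]),  # expected value 7
        ("exp", [(0.5, 5), (0.5, 7)]),   # expected value 6
    ])
    print(value(tree))  # 7.0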

Expectimax Evaluation
Evaluation functions quickly return an estimate for a node's true value (which value, expectimax or minimax?).
- For minimax, the evaluation function's scale doesn't matter: we just want better states to have higher evaluations (get the ordering right). We call this insensitivity to monotonic transformations.
- For expectimax, we need magnitudes to be meaningful.
Example: leaf evaluations 0, 40, 20, 30 become 0, 1600, 400, 900 under the monotonic transformation x^2, which preserves the ordering but can change which action has the highest expected value.

Mixed Layer Types
E.g. backgammon. Expectiminimax:
- The environment is an extra player that moves after each agent
- Chance nodes take expectations; otherwise like minimax

ExpectiMinimax-Value(state):
- the maximum of the successor values, if state is a max node
- the minimum of the successor values, if state is a min node
- the probability-weighted average of the successor values, if state is a chance node
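
A small sketch using the slide's numbers (the uniform two-leaf chance nodes are my own layout), showing that squaring the leaves flips the expectimax decision even though their order is preserved:

    # Magnitudes matter for expectimax: a monotonic transformation
    # (squaring) preserves the leaf ordering but flips the decision.
    def exp_value(leaves, probs):
        return sum(p * v for p, v in zip(probs, leaves))

    left, right = [0, 40], [20, 30]   # two chance nodes, uniform outcomes
    probs = [0.5, 0.5]

    print(exp_value(left, probs), exp_value(right, probs))
    # 20.0 25.0 -> pick right
    sq_left = [v**2 for v in left]    # [0, 1600]
    sq_right = [v**2 for v in right]  # [400, 900]
    print(exp_value(sq_left, probs), exp_value(sq_right, probs))
    # 800.0 650.0 -> pick left

For minimax the same transformation changes nothing: min(0, 40) = 0 vs. min(20, 30) = 20 picks the right node before and after squaring.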

Stochastic Two-Player Games
Dice rolls increase the branching factor b:
- 21 possible rolls with 2 dice
- Backgammon has about 20 legal moves per position
- Depth 4: 20 x (21 x 20)^3, about 1.5 x 10^9 nodes
As depth increases, the probability of reaching a given node shrinks:
- So the value of lookahead is diminished
- So limiting depth is less damaging
- But pruning is less possible
TDGammon uses depth-2 search + a very good evaluation function + reinforcement learning: world-champion level play.
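
A one-line check of that branching-factor arithmetic:

    # Nodes at depth 4 with alternating move and (roll, move) layers.
    print(20 * (21 * 20) ** 3)  # 1481760000, about 1.5e9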

Maximum Expected Utility
Principle of maximum expected utility: a rational agent should choose the action which maximizes its expected utility, given its knowledge.
Questions:
- Where do utilities come from?
- How do we know such utilities even exist?
- Why are we taking expectations of utilities (not, e.g., minimaxing them)?
- What if our behavior can't be described by utilities?

Utilities: Unknown Outcomes
Going to the airport from home:
- Take the freeway: clear, 10 min (arrive early); traffic, 50 min (arrive late)
- Take surface streets: clear, 20 min (arrive on time)

Preferences
An agent chooses among:
- Prizes: A, B, etc.
- Lotteries: situations with uncertain prizes, written L = [p, A; (1-p), B]
Notation: A > B means the agent prefers A to B; A ~ B means the agent is indifferent between them.

Rational Preferences
We want some constraints on preferences before we call them rational, such as transitivity:
(A > B) and (B > C) implies (A > C)
For example: an agent with intransitive preferences can be induced to give away all of its money:
- If B > C, then an agent with C would pay (say) 1 cent to get B
- If A > B, then an agent with B would pay (say) 1 cent to get A
- If C > A, then an agent with A would pay (say) 1 cent to get C
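
A small simulation (entirely my own construction) of the money pump above: an agent with the cyclic preferences A > B > C > A pays one cent per trade, cycles through the prizes, and ends up broke with nothing gained:

    # The "money pump" against intransitive preferences A > B > C > A.
    prefers = {"C": "B", "B": "A", "A": "C"}  # held prize -> prize it pays 1 cent for

    holding, cents, trades = "C", 100, 0
    while cents > 0:
        holding = prefers[holding]  # "trade up" per the cyclic preference
        cents -= 1                  # pay 1 cent each time
        trades += 1
    print(trades, holding, cents)   # 100 B 0: all money gone, still cycling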

Rational Preferences
Preferences of a rational agent must obey constraints: the axioms of rationality (orderability, transitivity, continuity, substitutability, monotonicity, decomposability).
Theorem: rational preferences imply behavior describable as maximization of expected utility.

MEU Principle
Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that:
- U(A) >= U(B) if and only if the agent (weakly) prefers A to B
- U([p1, S1; ... ; pn, Sn]) = sum_i pi U(Si)
Maximum expected utility (MEU) principle: choose the action that maximizes expected utility.
Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities. E.g., a lookup table for perfect tic-tac-toe.

Utility Scales
- Normalized utilities: u+ = 1.0, u- = 0.0
- Micromorts: one-millionth chance of death; useful for paying to reduce product risks, etc.
- QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk
Note: behavior is invariant under positive linear transformation of the utility function. With deterministic prizes only (no lottery choices), only an ordinal utility can be determined, i.e., a total order on prizes.

Human Utilities
Utilities map states to real numbers. Which numbers?
Standard approach to assessment of human utilities:
- Compare a state A to a standard lottery L_p between the best possible prize u+ (with probability p) and the worst possible catastrophe u- (with probability 1-p)
- Adjust the lottery probability p until A ~ L_p
- The resulting p is a utility in [0, 1]
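
A sketch (my own framing) of why the assessment works: with U(u+) = 1 and U(u-) = 0, the expected utility of L_p is exactly p, so the indifference point recovers U(A). The bisection below simulates the adjust-until-indifferent loop against a hypothetical respondent whose true utility for A is 0.7:

    # Standard-lottery assessment as bisection. The "respondent" is
    # simulated by a hidden true utility U(A) = 0.7; since
    # EU(L_p) = p*1 + (1-p)*0 = p, indifference occurs at p = U(A).
    TRUE_U_A = 0.7

    def prefers_lottery(p):
        return p > TRUE_U_A  # takes L_p only if its EU beats A

    lo, hi = 0.0, 1.0
    for _ in range(30):      # binary search on p
        p = (lo + hi) / 2
        if prefers_lottery(p):
            hi = p           # lottery too attractive: lower p
        else:
            lo = p           # A still preferred: raise p
    print(round((lo + hi) / 2, 6))  # ~0.7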

Money
Money does not behave as a utility function, but we can talk about the utility of having money (or being in debt).
Given a lottery L = [p, $X; (1-p), $Y]:
- The expected monetary value is EMV(L) = p*X + (1-p)*Y
- U(L) = p*U($X) + (1-p)*U($Y)
- Typically, U(L) < U(EMV(L)): why?
In this sense, people are risk-averse. When deep in debt, we are risk-prone.
Utility curve: for what probability p am I indifferent between:
- some sure outcome x, and
- a lottery [p, $M; (1-p), $0], with M large?

Example: Insurance
Consider the lottery [0.5, $1000; 0.5, $0]:
- What is its expected monetary value? ($500)
- What is its certainty equivalent, i.e., the monetary value acceptable in lieu of the lottery? (About $400 for most people)
- The difference of $100 is the insurance premium
There's an insurance industry because people will pay to reduce their risk. If everyone were risk-neutral, no insurance would be needed!
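
A sketch with sqrt as a stand-in concave utility (the slide's $400 certainty equivalent is an empirical claim about people, not derived from this particular curve), showing U(L) < U(EMV(L)) and computing this utility's certainty equivalent:

    # Risk aversion under a concave utility; sqrt is an arbitrary choice.
    from math import sqrt

    p, X, Y = 0.5, 1000, 0
    U = sqrt

    EMV = p * X + (1 - p) * Y        # 500.0
    U_L = p * U(X) + (1 - p) * U(Y)  # about 15.81
    print(U_L < U(EMV))              # True: risk-averse

    # Certainty equivalent: the sure amount with the same utility as L.
    CE = U_L ** 2                    # inverse of sqrt
    print(EMV, round(CE, 2))         # 500.0 250.0; premium = EMV - CE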

Example: Insurance
Because people ascribe different utilities to different amounts of money, insurance agreements can increase both parties' expected utility.
You own a car. Your lottery: L_Y = [0.8, $0; 0.2, -$200], i.e., a 20% chance of crashing. You do not want -$200!

    Amount    Your utility U_Y
    $0        0
    -$50      -150
    -$200     -1000

U_Y(L_Y) = 0.8 * U_Y($0) + 0.2 * U_Y(-$200) = -200
U_Y(-$50) = -150, so you would rather pay a sure $50 premium than face the lottery.
The insurance company buys the risk: L_I = [0.8, $50; 0.2, -$150], i.e., $50 of revenue plus your L_Y.
The insurer is risk-neutral: U(L) = U(EMV(L)), so U_I(L_I) = U(0.8*50 + 0.2*(-150)) = U($10) > U($0).
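
A sketch (names mine) checking that, per the table above, both parties gain in expected utility from the deal:

    # Both sides of the insurance agreement gain, using the table above.
    U_you = {0: 0, -50: -150, -200: -1000}

    def eu(lottery, U):
        # lottery: list of (probability, monetary outcome) pairs
        return sum(p * U(x) for p, x in lottery)

    no_insurance = eu([(0.8, 0), (0.2, -200)], lambda x: U_you[x])  # -200.0
    with_insurance = U_you[-50]                                     # -150
    print(no_insurance, with_insurance)  # you prefer insuring

    # Risk-neutral insurer: utility = expected monetary value.
    insurer = eu([(0.8, 50), (0.2, -150)], lambda x: x)
    print(insurer)                       # 10.0 > 0: insurer gains too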

Example: Human Rationality?
Famous example of Allais (1953):
- A: [0.8, $4k; 0.2, $0]
- B: [1.0, $3k; 0.0, $0]
- C: [0.2, $4k; 0.8, $0]
- D: [0.25, $3k; 0.75, $0]
Most people prefer B > A and C > D. But if U($0) = 0, then:
- B > A implies U($3k) > 0.8 U($4k)
- C > D implies 0.2 U($4k) > 0.25 U($3k), i.e., 0.8 U($4k) > U($3k)
These two conclusions contradict each other, so the typical pair of preferences is inconsistent with expected utility maximization.
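
A sketch (utility values chosen arbitrarily) making the contradiction concrete: fixing U($0) = 0 and U($4k) = 1, no value of U($3k) makes both popular preferences maximize expected utility:

    # The Allais preferences can't both be expected-utility-maximizing.
    def eu(lottery, U):
        return sum(p * U[x] for p, x in lottery)

    A = [(0.8, "4k"), (0.2, "0")]
    B = [(1.0, "3k")]
    C = [(0.2, "4k"), (0.8, "0")]
    D = [(0.25, "3k"), (0.75, "0")]

    for u3k in [i / 100 for i in range(101)]:   # scan U($3k) candidates
        U = {"0": 0.0, "3k": u3k, "4k": 1.0}
        if eu(B, U) > eu(A, U) and eu(C, U) > eu(D, U):
            print("consistent at U($3k) =", u3k)
            break
    else:
        print("no U($3k) in [0, 1] satisfies both preferences")  # always prints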