CS325 Artificial Intelligence Chs. 10, 11 Planning


1 CS325 Artificial Intelligence, Chs. 10, 11: Planning. Cengiz Günay, Emory Univ., Spring 2013.

2-3 Planning. Planning is coming up with a solution for an agent; it is at the heart of AI.

4 Entry/Exit Surveys. Exit survey, Knowledge Representation and Inference: What knowledge representation do you think our brains have? What knowledge-base topic or domain do you wish you had? Entry survey, Planning (0.25 points of final grade): What previous class topic would count as a planning algorithm? Where do you think you have used an automated planning system?

5-6 Graph Search Is a Form of Planning. [Figure: map of Romania with road distances between cities.] A search algorithm such as A* is a planning method, but it only works for an observable, deterministic environment. What if you cannot plan all the way?

7-8 Blind Walking. Need to alternate planning and execution.

9-10 How Does the Brain Do It? The frontal lobes are biggest in humans and are dedicated to planning, reasoning, and other higher-order skills.

11-12 So When Do You Need Planning? With stochastic environments, multi-agent situations, partially observable, unknown, or hierarchical problems. In these cases you may need to plan in belief states.

13-15 Sensorless World. [Figure: belief-state diagram of the sensorless vacuum world under Left (L), Right (R), and Suck (S) actions.] Example plan: start by cleaning the current place, then go to the left wall and clean.
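
The coercion idea above can be made concrete in a few lines of code. The following is a minimal Python sketch I am adding (not part of the original slides), assuming the standard two-square vacuum world; the state encoding and names such as apply_action and predict are my own.

    from itertools import product

    # Each state: (robot position, dirt in A, dirt in B); position is 'A' (left) or 'B' (right).
    ALL_STATES = set(product(['A', 'B'], [True, False], [True, False]))

    def apply_action(state, action):
        """Deterministic two-square vacuum world with actions L, R, S (suck)."""
        pos, dirt_a, dirt_b = state
        if action == 'L':
            return ('A', dirt_a, dirt_b)        # moving left from A just bumps the wall
        if action == 'R':
            return ('B', dirt_a, dirt_b)
        if action == 'S':
            return (pos, False, dirt_b) if pos == 'A' else (pos, dirt_a, False)
        raise ValueError(action)

    def predict(belief, action):
        """Sensorless belief-state update: image of the belief set under the action."""
        return {apply_action(s, action) for s in belief}

    belief = set(ALL_STATES)                    # sensorless start: any of the 8 states
    for a in ['L', 'S', 'R', 'S']:              # go left, suck, go right, suck
        belief = predict(belief, a)
    print(belief)                               # a single state: ('B', False, False), all clean

Even with no sensing at all, four actions collapse the eight possible states into one; this is the kind of coercion the belief-state diagram illustrates.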

16-17 Partially Observable Environment. Deterministic, but the agent senses only the local dirt and its own location. Belief states get smaller with actions.

18-21 Stochastic (Random) Environment. Still only local sensing, and the robot has slippery wheels, so it may not move. Belief states get larger with actions. Can it reach the all-clean world? There is no finite solution!

22-25 Infinite Sequences? Finite sequences don't work, e.g., [R, S, R, S]. Make a finite plan with branches; executing it gives an infinite sequence! It can be written as: [S, while in A: R, S].
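
To see why the branching plan terminates even though no fixed-length sequence works, here is a small simulation I am adding (not from the slides). It assumes the slippery-wheels vacuum robot: the Right action succeeds only with some probability, so the plan [S, while in A: R, S] simply retries until the move actually happens. The slip probability and function name are illustrative assumptions.

    import random

    def execute_loop_plan(slip_prob=0.5, seed=None):
        """Run [S, while in A: R, S] in a slippery two-square vacuum world."""
        rng = random.Random(seed)
        pos = 'A'
        dirt = {'A': True, 'B': True}            # start in A with both squares dirty
        dirt[pos] = False                        # S: suck the current square
        tries = 0
        while pos == 'A':                        # while still in A, keep trying to go right
            tries += 1
            if rng.random() > slip_prob:         # wheels may slip; the move sometimes fails
                pos = 'B'
        dirt[pos] = False                        # S: suck the other square
        return tries, dirt

    print(execute_loop_plan())                   # a few retries, then both squares are clean

The loop is unbounded but finishes with probability 1; this is the sense in which the plan is an unbounded, rather than bounded, solution.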

26-28 Searching For a Plan. Like searching for paths, find a plan that reaches the goal. Quiz: which condition on the plan tree guarantees an unbounded solution, that some leaf is a goal or that every leaf is a goal? Which guarantees a bounded solution, no branches or no loops?

29-30 Mathematical Notation. Deterministic: s' = Result(s, a), where s is a state and a is an action; a plan [A, S, F] reaches the goal if Result(Result(A, A→S), S→F) ∈ Goals. Non-deterministic: predict-update cycle over belief states b, i.e., b' = Update(Predict(b, a), o), where o is an observation.
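
The b' = Update(Predict(b, a), o) cycle is straightforward to write over belief states represented as sets of world states. The sketch below is my own Python rendering (not from the slides); results(s, a) and percept(s) stand in for whatever transition and sensor models the problem supplies, and the example at the end assumes the slippery, locally-sensing vacuum world.

    def predict(belief, action, results):
        """Predict: every state reachable from some state in the belief via the action."""
        return {s2 for s in belief for s2 in results(s, action)}

    def update(belief, observation, percept):
        """Update: keep only the states consistent with the observation."""
        return {s for s in belief if percept(s) == observation}

    def step(belief, action, observation, results, percept):
        """One cycle of b' = Update(Predict(b, a), o)."""
        return update(predict(belief, action, results), observation, percept)

    # Example: slippery two-square vacuum world, sensing only location and local dirt.
    def results(state, action):
        pos, dirt_a, dirt_b = state
        if action == 'R':                        # slippery wheels: move right or stay put
            return {('B', dirt_a, dirt_b), state}
        if action == 'S':
            return {('A', False, dirt_b)} if pos == 'A' else {('B', dirt_a, False)}
        return {state}

    def percept(state):
        pos, dirt_a, dirt_b = state
        return (pos, dirt_a if pos == 'A' else dirt_b)

    b = {('A', False, True), ('A', False, False)}
    print(step(b, 'R', ('B', True), results, percept))   # {('B', False, True)}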

31-32 Predict-Update Cycle. Example: babies randomly dirtying floors. Problems: the solution gets large; it is simpler to describe the world state with variables.

33-35 Classical Planning. State space: k boolean variables, so its size is 2^k. A world state is a complete assignment; a belief state can be a complete assignment, a partial assignment, or an arbitrary formula. Actions are described by an action schema.

36-38 Action Schemas. An action schema has three parts: (1) Action, (2) Precondition, (3) Effect. Example: Action(Fly(p, from, to), Pre: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to), Eff: ¬At(p, from) ∧ At(p, to)). Is it FOL?
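
A ground action schema is easy to represent and apply in code. The sketch below is my own Python illustration (not from the slides), roughly in the STRIPS/PDDL spirit: literals are plain strings, the precondition is a set that must be contained in the state, and the effect is split into add and delete lists. The names here (Action, applicable, apply_ground) are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        precond: frozenset       # literals that must hold in the state
        add: frozenset           # positive effects
        delete: frozenset        # negated effects

    def applicable(state, action):
        return action.precond <= state

    def apply_ground(state, action):
        assert applicable(state, action), f"precondition of {action.name} not satisfied"
        return (state - action.delete) | action.add

    # Ground instance of Fly(p, from, to) with p = P1, from = SFO, to = JFK.
    fly = Action(
        name="Fly(P1, SFO, JFK)",
        precond=frozenset({"At(P1, SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"}),
        add=frozenset({"At(P1, JFK)"}),
        delete=frozenset({"At(P1, SFO)"}),
    )

    state = frozenset({"At(P1, SFO)", "Plane(P1)", "Airport(SFO)", "Airport(JFK)"})
    print(apply_ground(state, fly))   # At(P1, SFO) is removed, At(P1, JFK) is added

One way to read the slide's question: this representation is deliberately more restricted than full FOL (no quantifiers or arbitrary formulas in the effects), which is part of what keeps planning algorithms efficient.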

39 Air Cargo Example.

40 Progression/Forward Search. Like regular search.
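
Progression planning really is just graph search over world states, with the applicable ground actions as edges. Below is a breadth-first sketch I am adding (not from the slides); the Act tuple and function names are my own, using the same literals-as-strings encoding as the action-schema sketch above.

    from collections import deque, namedtuple

    Act = namedtuple("Act", "name precond add delete")   # ground action as sets of literals

    def forward_search(initial, goal, actions):
        """Breadth-first progression planning over frozenset-of-literals states."""
        start = frozenset(initial)
        frontier = deque([(start, [])])
        explored = {start}
        while frontier:
            state, plan = frontier.popleft()
            if frozenset(goal) <= state:                  # every goal literal is satisfied
                return plan
            for a in actions:
                if frozenset(a.precond) <= state:         # the action is applicable
                    nxt = (state - frozenset(a.delete)) | frozenset(a.add)
                    if nxt not in explored:
                        explored.add(nxt)
                        frontier.append((nxt, plan + [a.name]))
        return None                                       # the goal is unreachable

    fly = Act("Fly(P1, SFO, JFK)", {"At(P1, SFO)"}, {"At(P1, JFK)"}, {"At(P1, SFO)"})
    print(forward_search({"At(P1, SFO)"}, {"At(P1, JFK)"}, [fly]))   # ['Fly(P1, SFO, JFK)']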

41 Regression/Backward Search. Start with the goal; many variables remain unknown.

42 When to Choose Forward vs. Backward? Remember how to choose between breadth-first and depth-first search. There are fewer options when starting from the goal. Can you think of other examples?

43-44 Searching the Plan Space. How would one use genetic algorithms?

45-48 Current Trend: Forward Search with Heuristics. Tiles example: Action(Slide(t, a, b), Pre: On(t, a) ∧ Tile(t) ∧ Blank(b) ∧ Adj(a, b), Eff: On(t, b) ∧ Blank(a) ∧ ¬On(t, a) ∧ ¬Blank(b)). How do we get heuristics? Remember the Romania routes. Choose approximate (relaxed) problems and use their solutions during the search, e.g., by deleting terms from the schema; this can also be done programmatically.
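
"Deleting terms from the schema programmatically" can be as simple as rewriting every action into a relaxed copy and measuring how hard the relaxed problem is; that measure is then used as h(s) in forward search. The sketch below is my own illustration (not from the slides) of one common relaxation, dropping the delete lists; it reuses the Act tuple shape from the progression-search sketch, and the cargo actions at the end are illustrative.

    from collections import namedtuple

    Act = namedtuple("Act", "name precond add delete")

    def ignore_delete_lists(actions):
        """Relax the problem: no action ever removes a literal."""
        return [a._replace(delete=frozenset()) for a in actions]

    def relaxed_layers(state, goal, actions):
        """Heuristic value: forward-chaining layers needed in the relaxed problem."""
        relaxed = ignore_delete_lists(actions)
        reached, layers = frozenset(state), 0
        while not frozenset(goal) <= reached:
            new = reached
            for a in relaxed:
                if frozenset(a.precond) <= reached:
                    new = new | frozenset(a.add)
            if new == reached:
                return float("inf")              # unreachable even in the relaxed problem
            reached, layers = new, layers + 1
        return layers

    load = Act("Load(C1, P1, SFO)", {"At(C1, SFO)", "At(P1, SFO)"},
               {"In(C1, P1)"}, {"At(C1, SFO)"})
    fly = Act("Fly(P1, SFO, JFK)", {"At(P1, SFO)"}, {"At(P1, JFK)"}, {"At(P1, SFO)"})
    print(relaxed_layers({"At(C1, SFO)", "At(P1, SFO)"},
                         {"In(C1, P1)", "At(P1, JFK)"}, [load, fly]))   # 1

The relaxed count (1 here) does not exceed the real plan length (2 here: Load, then Fly), so it can serve as an optimistic estimate to guide the forward search.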

49-50 One Last Bit: Situation Calculus. Uses First-Order Logic (FOL) for planning. Actions are objects, e.g., Fly(p, x, y). Situations are objects, e.g., s' = Result(s, a). Fluents change from one situation to the next, e.g., At(p, x, s). Possible actions are described by the predicate Poss(a, s), written as precondition axioms of the form Pre(s) ⇒ Poss(a, s). Plane example: Plane(p, s) ∧ Airport(x, s) ∧ Airport(y, s) ∧ At(p, x, s) ⇒ Poss(Fly(p, x, y), s).

51-53 Situation Calculus: Effects. Effects are given by successor-state axioms: ∀a, s Poss(a, s) ⇒ (a fluent holds afterwards ⇔ the action caused it ∨ (it already held ∧ the action didn't undo it)). Cargo example: Poss(a, s) ⇒ (In(c, p, Result(s, a)) ⇔ (a = Load(c, p, x) ∨ (In(c, p, s) ∧ a ≠ Unload(c, p, x)))). This lets you use the full power of FOL and existing theorem provers; the problem is that it is slow.
