DECISION ANALYSIS
(Hillier & Lieberman, Introduction to Operations Research, 8th edition)

Introduction

Decisions often must be made in uncertain environments. Examples:
- A manufacturer introducing a new product in the marketplace.
- A government contractor bidding on a new contract.
- An oil company deciding whether to drill for oil in a particular location.
These are the types of decisions that decision analysis is designed to address, whether they are made with or without experimentation.

Prototype example

The Goferbroke Company owns a tract of land that may contain oil. A contracted geologist reports that the chance of oil is 1 in 4. Another oil company offers $90,000 for the land. The cost of drilling is $100,000; if oil is found, the revenue is $800,000, for a profit of $700,000.

                                Payoff
Alternative            Oil           Dry
Drill for oil          $700,000     -$100,000
Sell the land           $90,000       $90,000
Chance of status        1 in 4        3 in 4

Decision making without experimentation

The decision maker chooses one of the alternative actions, and nature chooses one of the possible states of nature. Each combination of an action and a state of nature results in a payoff, which forms one entry of a payoff table. The payoff table is used to find an optimal action for the decision maker according to an appropriate criterion. The probabilities for the states of nature provided by the prior distribution are called prior probabilities.

Payoff table for the Goferbroke Co. problem (payoffs in $1,000s):

                        State of nature
Alternative             Oil      Dry
1. Drill for oil        700     -100
2. Sell the land         90       90
Prior probability      0.25     0.75

Maximin payoff criterion

This is a game against nature. For each decision alternative, find the minimum payoff over all states of nature; then choose the alternative whose minimum payoff is largest. This gives the best guaranteed payoff and reflects a pessimistic viewpoint.

                        State of nature
Alternative             Oil      Dry    Minimum
1. Drill for oil        700     -100     -100
2. Sell the land         90       90       90   <- maximin value
Prior probability      0.25     0.75
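The maximin rule above can be sketched in Python (the dictionary layout and function name are illustrative, not from the text):

```python
# Payoff table for the Goferbroke Co. problem (payoffs in $1,000s).
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}

def maximin(payoffs):
    """Pick the alternative whose worst-case payoff is largest."""
    return max(payoffs, key=lambda alt: min(payoffs[alt].values()))

print(maximin(payoffs))  # "Sell the land": its minimum of 90 beats -100
```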

Maximum likelihood criterion

Identify the most likely state of nature; for that state, choose the decision alternative with the maximum payoff. Because it looks only at the most likely state, this criterion ignores important information.

                        State of nature
Alternative             Oil      Dry
1. Drill for oil        700     -100
2. Sell the land         90       90   <- maximum in the Dry column
Prior probability      0.25     0.75 (most likely state)

Bayes decision rule

Using the prior probabilities, calculate the expected payoff for each decision alternative, and choose the action with the maximum expected payoff. For the prototype example:
E[Payoff(drill)] = 0.25(700) + 0.75(-100) = 100.
E[Payoff(sell)] = 0.25(90) + 0.75(90) = 90.
This rule incorporates all available information (payoffs and prior probabilities). But what happens when the probabilities are inaccurate?
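Both criteria can be checked numerically; a minimal sketch (names are illustrative):

```python
payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}

# Maximum likelihood criterion: best payoff in the most likely state.
most_likely = max(prior, key=prior.get)  # "Dry"
ml_choice = max(payoffs, key=lambda alt: payoffs[alt][most_likely])

# Bayes decision rule: maximize the expected payoff under the prior.
def expected_payoff(alt):
    return sum(prior[s] * payoffs[alt][s] for s in prior)

bayes_choice = max(payoffs, key=expected_payoff)
print(ml_choice)     # "Sell the land"
print(bayes_choice)  # "Drill for oil" (expected payoff 100 vs. 90)
```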

Sensitivity analysis with Bayes decision rule

Prior probabilities can be questionable. Suppose the true probability of having oil lies between 0.15 and 0.35 (so the probability of dry land lies between 0.65 and 0.85). Let p = prior probability of oil. The expected payoff from drilling, for any p, is
E[Payoff(drill)] = 700p - 100(1 - p) = 800p - 100.
The crossover point, where the optimal decision switches from one alternative to the other, is where
E[Payoff(drill)] = E[Payoff(sell)]:  800p - 100 = 90, so p = 0.2375.
Since 0.2375 lies well inside the range [0.15, 0.35], the decision is very sensitive to p!
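The crossover computation can be verified directly (function names are mine):

```python
def e_drill(p):
    # E[Payoff(drill)] = 700p - 100(1 - p) = 800p - 100
    return 700 * p - 100 * (1 - p)

E_SELL = 90  # selling pays 90 regardless of p

# Crossover: 800p - 100 = 90  ->  p = 190/800
p_star = 190 / 800
print(p_star)  # 0.2375: drill only if P(oil) exceeds this value
```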

Decision making with experimentation

Improved probability estimates obtained from experimentation are called posterior probabilities. Example: a detailed seismic survey costs $30,000 and yields one of two findings:
USS: unfavorable seismic soundings; oil is fairly unlikely.
FSS: favorable seismic soundings; oil is fairly likely.
Based on past experience, the following conditional probabilities are given:
P(USS | State = Oil) = 0.4;  P(FSS | State = Oil) = 1 - 0.4 = 0.6
P(USS | State = Dry) = 0.8;  P(FSS | State = Dry) = 1 - 0.8 = 0.2

Posterior probabilities

Let n = number of possible states, and P(State = state i) = prior probability that the true state is state i. Let Finding denote the result of the experimentation (a random variable), with finding j one of its possible values. Then P(State = state i | Finding = finding j) is the posterior probability that the true state of nature is state i, given Finding = finding j. Given P(State = state i) and P(Finding = finding j | State = state i), how do we obtain P(State = state i | Finding = finding j)?

From probability theory, Bayes' theorem gives:

P(State = state i | Finding = finding j)
  = P(Finding = finding j | State = state i) P(State = state i)
    / sum over k = 1, ..., n of P(Finding = finding j | State = state k) P(State = state k)

Bayes' theorem in the prototype example. If the seismic survey is unfavorable (USS):
P(State = Oil | Finding = USS) = 0.4(0.25) / [0.4(0.25) + 0.8(0.75)] = 1/7,
P(State = Dry | Finding = USS) = 1 - 1/7 = 6/7.
If the seismic survey is favorable (FSS):
P(State = Oil | Finding = FSS) = 0.6(0.25) / [0.6(0.25) + 0.2(0.75)] = 1/2,
P(State = Dry | Finding = FSS) = 1 - 1/2 = 1/2.
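The posterior computation above is a direct application of Bayes' theorem; a minimal sketch (data structures are mine):

```python
prior = {"Oil": 0.25, "Dry": 0.75}
# P(finding | state), from past experience with seismic surveys.
likelihood = {
    "USS": {"Oil": 0.4, "Dry": 0.8},
    "FSS": {"Oil": 0.6, "Dry": 0.2},
}

def posterior(finding):
    """Bayes' theorem: P(state | finding) for every state."""
    joint = {s: likelihood[finding][s] * prior[s] for s in prior}
    total = sum(joint.values())  # = P(finding)
    return {s: joint[s] / total for s in prior}

print(posterior("USS"))  # Oil: 1/7, Dry: 6/7
print(posterior("FSS"))  # Oil: 1/2, Dry: 1/2
```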

Probability tree diagram (see figure).

Expected payoffs

Expected payoffs can be found by applying Bayes' decision rule again, with the posterior probabilities replacing the prior probabilities (and the $30,000 survey cost subtracted). If the finding is USS:
E[Payoff(drill | Finding = USS)] = (1/7)(700) + (6/7)(-100) - 30 = -15.7,
E[Payoff(sell | Finding = USS)] = (1/7)(90) + (6/7)(90) - 30 = 60.
If the finding is FSS:
E[Payoff(drill | Finding = FSS)] = (1/2)(700) + (1/2)(-100) - 30 = 270,
E[Payoff(sell | Finding = FSS)] = (1/2)(90) + (1/2)(90) - 30 = 60.
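The posterior expected payoffs above can be reproduced with a short sketch (the posterior values are the ones derived via Bayes' theorem; names are mine):

```python
payoffs = {"Drill for oil": {"Oil": 700, "Dry": -100},
           "Sell the land": {"Oil": 90, "Dry": 90}}
SURVEY_COST = 30  # $30,000, expressed in $1,000s

# Posterior probabilities from Bayes' theorem.
post = {"USS": {"Oil": 1/7, "Dry": 6/7},
        "FSS": {"Oil": 1/2, "Dry": 1/2}}

def expected_payoff(alt, finding):
    p = post[finding]
    return sum(p[s] * payoffs[alt][s] for s in p) - SURVEY_COST

print(round(expected_payoff("Drill for oil", "USS"), 1))  # -15.7
print(round(expected_payoff("Sell the land", "USS"), 1))  # 60.0
print(round(expected_payoff("Drill for oil", "FSS"), 1))  # 270.0
print(round(expected_payoff("Sell the land", "FSS"), 1))  # 60.0
```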

Optimal policy

Using Bayes' decision rule, the optimal policy for maximizing the payoff is:

Finding from     Optimal         Expected payoff          Expected payoff
seismic survey   alternative     excluding survey cost    including survey cost
USS              Sell the land    90                        60
FSS              Drill for oil   300                       270

Is it worth spending $30,000 to conduct the experimentation?

Value of experimentation

Before performing an experiment, determine its potential value. Two methods:
1. Expected value of perfect information (EVPI): assumes all uncertainty is removed, and therefore provides an upper bound on the potential value of the experiment.
2. Expected value of experimentation: the actual expected increase in payoff, not just an upper bound.

Expected value of perfect information

                        State of nature
Alternative             Oil      Dry
1. Drill for oil        700     -100
2. Sell the land         90       90
Maximum payoff          700       90
Prior probability      0.25     0.75

Expected payoff with perfect information = 0.25(700) + 0.75(90) = 242.5.
The expected value of perfect information (EVPI) is:
EVPI = expected payoff with perfect information - expected payoff without experimentation.
Example: EVPI = 242.5 - 100 = 142.5. Since this exceeds 30, the experiment may be worthwhile.

Expected value of experimentation

This requires the expected payoff with experimentation:
Expected payoff with experimentation = sum over j of P(Finding = finding j) E[payoff | Finding = finding j].
Example: from the probability tree diagram, P(USS) = 0.7 and P(FSS) = 0.3. The expected payoffs (excluding the cost of the survey) were obtained for the optimal policy:
E[Payoff | Finding = USS] = 90,  E[Payoff | Finding = FSS] = 300.
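Both value-of-experimentation measures follow directly from these quantities; a sketch (names are illustrative):

```python
payoffs = {"Drill for oil": {"Oil": 700, "Dry": -100},
           "Sell the land": {"Oil": 90, "Dry": 90}}
prior = {"Oil": 0.25, "Dry": 0.75}

# Best expected payoff without experimentation (Bayes decision rule).
ep_no_info = max(sum(prior[s] * payoffs[a][s] for s in prior)
                 for a in payoffs)                      # 100

# Perfect information: pick the best alternative in each state.
ep_perfect = sum(prior[s] * max(payoffs[a][s] for a in payoffs)
                 for s in prior)                        # 242.5
evpi = ep_perfect - ep_no_info                          # 142.5

# Experimentation: P(finding) and the optimal expected payoff given
# each finding (excluding the survey cost), as given in the text.
p_finding = {"USS": 0.7, "FSS": 0.3}
best_given = {"USS": 90, "FSS": 300}
ep_exp = sum(p_finding[f] * best_given[f] for f in p_finding)
eve = ep_exp - ep_no_info  # expected value of experimentation
print(evpi, round(eve, 1))  # 142.5 and 53.0; both exceed the cost of 30
```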

Expected value of experimentation

So the expected payoff with experimentation is
Expected payoff with experimentation = 0.7(90) + 0.3(300) = 153.
The expected value of experimentation (EVE) is:
EVE = expected payoff with experimentation - expected payoff without experimentation.
Example: EVE = 153 - 100 = 53. As 53 exceeds the survey cost of 30, the seismic survey should be done.

Decision trees

The prototype example involves a sequence of two questions:
1. Should a seismic survey be conducted before an action is chosen?
2. Which action (drill for oil or sell the land) should be chosen?
These questions correspond to a tree structure. Junction points are nodes, and lines are branches. A decision node, represented by a square, indicates that a decision must be made at that point. An event node, represented by a circle, indicates that a random event occurs at that point.

Decision tree for the prototype example (figure). Decision tree annotated with the probability and cash flow of each branch (figure).

Performing the analysis

1. Start at the right side of the decision tree and move left one column at a time. For each column, perform step 2 or step 3, depending on whether its nodes are event nodes or decision nodes.
2. For each event node, calculate its expected payoff by multiplying the expected payoff of each branch by the probability of that branch and summing these products.
3. For each decision node, compare the expected payoffs of its branches and choose the alternative with the largest expected payoff. Record the choice by inserting a double dash in each rejected branch.

Decision tree with the analysis (figure).

Optimal policy for the prototype example

The decision tree yields the following policy:
1. Do the seismic survey.
2. If the result is unfavorable, sell the land.
3. If the result is favorable, drill for oil.
4. The expected payoff (including the cost of the seismic survey) is 123 ($123,000).

This is the same result as obtained in the analysis with experimentation. For any decision tree, this backward induction procedure always leads to the optimal policy.

Utility theory

You are offered the choice of:
1. Accepting a 50:50 chance of winning $100,000 or nothing;
2. Receiving $40,000 with certainty.
What do you choose? A company may be unwilling to invest a large sum of money in a new product, even when the expected profit is substantial, if there is a risk of losing its investment and thereby going bankrupt. People buy insurance even though it is a poor investment from the viewpoint of expected payoff.

Utility theory

Utility functions u(M) for money M: usually there is a decreasing marginal utility for money, meaning the individual is risk-averse.

Utility function for money

It is also possible to exhibit a mixture of these kinds of behavior (risk-averse, risk-seeking, risk-neutral). An individual's attitude toward risk may differ when dealing with personal finances and when making decisions on behalf of an organization. When a utility function for money is incorporated into a decision analysis approach, the utility function must be constructed to fit the preferences and values of the decision maker involved. (The decision maker can be either a single individual or a group of people.)

Utility theory

Fundamental property: the decision maker's utility function for money is such that the decision maker is indifferent between two alternatives that have the same expected utility.

Example. Offer: an opportunity to obtain either $100,000 (utility = 4) with probability p, or nothing (utility = 0) with probability 1 - p. Thus E(utility) = 4p. The decision maker is indifferent between, for example:
- The offer with p = 0.25 (E(utility) = 1) and definitely obtaining $10,000 (utility = 1).
- The offer with p = 0.75 (E(utility) = 3) and definitely obtaining $60,000 (utility = 3).

Role of utility theory

If a utility function is used to measure the worth of the possible monetary outcomes, Bayes' decision rule replaces the monetary payoffs by the corresponding utilities. The optimal action is then the one that maximizes the expected utility. Note that utility functions need not be monetary. Example: a doctor's decision alternatives in treating a patient involve the future health of the patient.
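The indifference pairs above can be checked with a tiny sketch (function name and defaults are mine):

```python
# Utilities from the example: u($100,000) = 4, u($0) = 0.
def expected_utility(p, u_win=4, u_lose=0):
    """Expected utility of winning with probability p, else nothing."""
    return p * u_win + (1 - p) * u_lose

print(expected_utility(0.25))  # 1.0, the same as u($10,000) = 1
print(expected_utility(0.75))  # 3.0, the same as u($60,000) = 3
```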

Applying utility theory to the example

The Goferbroke Co. does not have much capital, so a loss of $100,000 would be quite serious. The complete utility function can be constructed from the following values (in $1,000s):

Monetary payoff    Utility
-130               -150
-100               -105
  60                 60
  90                 90
 670                580
 700                600

Utility function for money of the Goferbroke Co. (figure).

Estimating u(M)

A popular form is the exponential utility function:
u(M) = R(1 - e^(-M/R)),
where R is the decision maker's risk tolerance. This form describes a risk-averse individual. For the prototype example, fitting u(670) = 580 gives R = 2250, while fitting u(-130) = -150 gives R = 465. Note that a single utility function must use one value of R throughout, so this exponential form cannot match both values at once.

Decision trees with a utility function

The solution procedure is exactly the same as before, except that utilities replace the monetary payoffs. Thus, the value computed at each fork of the tree is now the expected utility rather than the expected monetary payoff. Optimal decisions selected by Bayes' decision rule maximize the expected utility for the overall problem.
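The exponential form and the two fitted values of R can be checked numerically (function name is mine):

```python
import math

def u(M, R):
    """Exponential utility u(M) = R * (1 - exp(-M/R)); R = risk tolerance."""
    return R * (1 - math.exp(-M / R))

# Each R value fits one point of the example's utility table;
# no single R reproduces both at once.
print(round(u(670, R=2250), 1))   # close to the target u(670) = 580
print(round(u(-130, R=465), 1))   # close to the target u(-130) = -150
```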

Decision tree using the utility function (figure): the tree values differ, but the optimal policy is the same.