DECISION ANALYSIS (Hillier & Lieberman, Introduction to Operations Research, 8th edition)
Introduction

Decisions often must be made in uncertain environments. Examples:
- A manufacturer introducing a new product into the marketplace.
- A government contractor bidding on a new contract.
- An oil company deciding whether to drill for oil in a particular location.

Decision analysis provides a framework for decision making in the face of great uncertainty: making rational decisions when the outcomes are uncertain, with or without experimentation.

Optimization and Decision 2009 530
Prototype example

The Goferbroke Company owns a tract of land that may contain oil. A contracted geologist reports that the chance of oil is 1 in 4. Another oil company offers 90.000 for the land. The cost of drilling is 100.000. If oil is found, expected revenue is 800.000, so expected profit is 700.000.

                      Status of land (payoff)
Alternative           Oil          Dry
Drill for oil         700.000      -100.000
Sell the land         90.000       90.000
Chance of status      1 in 4       3 in 4
Decision making without experimentation

Analogy to game theory:
- Players: the decision maker (player 1) and nature (player 2).
- Available strategies for the two players: the alternative actions and the possible states of nature, respectively.
- Each combination of strategies results in some payoff to player 1 (the decision maker).
But are both players still rational?
Decision analysis framework
1. The decision maker needs to choose one of the alternative actions.
2. Nature then chooses one of the possible states of nature.
3. Each combination of an action and a state of nature results in a payoff, one of the entries in a payoff table.
4. The probabilities for the states of nature provided by the prior distribution are the prior probabilities.
5. The payoff table should be used to find an optimal action for the decision maker according to an appropriate criterion.
Payoff table for the Goferbroke Co. problem (payoffs in thousands)

                        State of nature
Alternative             Oil      Dry
1. Drill for oil        700      -100
2. Sell the land        90       90
Prior probability       0.25     0.75
Maximin payoff criterion

Maximin payoff criterion: for each possible action, find the minimum payoff over all states of nature. Next, find the maximum of these minimum payoffs. Choose the action whose minimum payoff gives this maximum. It gives the best guarantee of payoff: a pessimistic viewpoint.

                        State of nature
Alternative             Oil      Dry      Minimum
1. Drill for oil        700      -100     -100
2. Sell the land        90       90       90   <- maximin value
Prior probability       0.25     0.75
Maximum likelihood criterion

Maximum likelihood criterion: identify the most likely state of nature (the one with the largest prior probability). For this state of nature, find the action with the maximum payoff, and choose that action. Focusing only on the most likely state ignores important information: low-probability states with big payoffs.

                        State of nature
Alternative             Oil      Dry (most likely)
1. Drill for oil        700      -100
2. Sell the land        90       90   <- maximum in this column
Prior probability       0.25     0.75
Bayes decision rule

Bayes decision rule: using the prior probabilities, calculate the expected value of the payoff for each possible decision alternative. Choose the decision alternative with the maximum expected payoff. For the prototype example:
E[Payoff(drill)] = 0.25(700) + 0.75(-100) = 100.
E[Payoff(sell)] = 0.25(90) + 0.75(90) = 90.
The rule incorporates all available information (payoffs and prior probabilities). But what if the probabilities are wrong?
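The Bayes decision rule above can be sketched in a few lines of code. This is a minimal illustration of the Goferbroke calculation (payoffs in thousands); the dictionary layout and function names are illustrative, not from the slides.

```python
# Bayes decision rule for the Goferbroke example (payoffs in thousands).
prior = {"Oil": 0.25, "Dry": 0.75}
payoff = {
    "Drill": {"Oil": 700, "Dry": -100},
    "Sell":  {"Oil": 90,  "Dry": 90},
}

def expected_payoff(action, probs):
    """Expected payoff of an action under the given state probabilities."""
    return sum(probs[state] * payoff[action][state] for state in probs)

# Choose the alternative with the maximum expected payoff.
best = max(payoff, key=lambda a: expected_payoff(a, prior))
print(expected_payoff("Drill", prior))  # 100.0
print(expected_payoff("Sell", prior))   # 90.0
print(best)                             # Drill
```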
Sensitivity analysis with Bayes decision rule

The prior probabilities can be questionable. Suppose the true probability of having oil lies between 0.15 and 0.35 (so the probability of dry land lies between 0.85 and 0.65). Let p = prior probability of oil. The expected payoff from drilling for any p is
E[Payoff(drill)] = 700p - 100(1 - p) = 800p - 100.
In the figure, the crossover point is where the decision changes from one alternative to the other:
E[Payoff(drill)] = E[Payoff(sell)]  =>  800p - 100 = 90  =>  p = 0.2375.
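The crossover computation above can be checked numerically. A small sketch (function names are illustrative; values in thousands):

```python
# Sensitivity analysis: find the crossover prior probability p of oil at
# which drilling and selling have equal expected payoff.
def e_drill(p):
    return 700 * p - 100 * (1 - p)   # simplifies to 800p - 100

def e_sell(p):
    return 90.0                      # constant, independent of p

# Solve 800p - 100 = 90 analytically:
crossover = (90 + 100) / 800
print(crossover)  # 0.2375

# At the crossover the two alternatives tie; above it, drilling wins.
assert abs(e_drill(crossover) - e_sell(crossover)) < 1e-9
assert e_drill(0.35) > e_sell(0.35)
assert e_drill(0.15) < e_sell(0.15)
```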
Expected payoff for each alternative as p changes

The decision is very sensitive to p! For more than two variables, sensitivity analysis can be carried out with the Excel add-in SensIt.
Decision making with experimentation

Improved probability estimates are called posterior probabilities. Example: a detailed seismic survey costs 30.000.
USS: unfavorable seismic soundings; oil is fairly unlikely.
FSS: favorable seismic soundings; oil is fairly likely.
Based on past experience, the following probabilities are given:
P(USS | State = Oil) = 0.4;  P(FSS | State = Oil) = 1 - 0.4 = 0.6.
P(USS | State = Dry) = 0.8;  P(FSS | State = Dry) = 1 - 0.8 = 0.2.
Posterior probabilities

n = number of possible states of nature.
P(State = state i) = prior probability that the true state of nature is state i, for i = 1, 2, ..., n.
Finding = finding from experimentation (a random variable).
Finding j = one possible value of the finding.
P(State = state i | Finding = finding j) = posterior probability that the true state of nature is state i, given that Finding = finding j, for i = 1, 2, ..., n.
Given P(State = state i) and P(Finding = finding j | State = state i) for i = 1, 2, ..., n, what is P(State = state i | Finding = finding j)?
Posterior probabilities

From probability theory, Bayes' theorem gives:

P(State = state i | Finding = finding j) =
    P(Finding = finding j | State = state i) P(State = state i)
    / [ sum over k = 1, ..., n of P(Finding = finding j | State = state k) P(State = state k) ]
Bayes' theorem in the prototype example

If the seismic survey is unfavorable (USS, j = 1):
P(State = Oil | Finding = USS) = 0.4(0.25) / [0.4(0.25) + 0.8(0.75)] = 1/7,
P(State = Dry | Finding = USS) = 1 - 1/7 = 6/7.
If the seismic survey is favorable (FSS, j = 2):
P(State = Oil | Finding = FSS) = 0.6(0.25) / [0.6(0.25) + 0.2(0.75)] = 1/2,
P(State = Dry | Finding = FSS) = 1 - 1/2 = 1/2.
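The posterior calculations above follow directly from Bayes' theorem. A self-contained sketch using exact rational arithmetic (the dictionary names are illustrative):

```python
from fractions import Fraction

# Posterior probabilities for the seismic-survey example via Bayes' theorem.
prior = {"Oil": Fraction(1, 4), "Dry": Fraction(3, 4)}
# Likelihoods P(finding | state), from past experience:
likelihood = {
    "USS": {"Oil": Fraction(2, 5), "Dry": Fraction(4, 5)},  # 0.4, 0.8
    "FSS": {"Oil": Fraction(3, 5), "Dry": Fraction(1, 5)},  # 0.6, 0.2
}

def posterior(finding):
    """P(state | finding) for every state, by Bayes' theorem."""
    total = sum(likelihood[finding][s] * prior[s] for s in prior)
    return {s: likelihood[finding][s] * prior[s] / total for s in prior}

print(posterior("USS"))  # Oil: 1/7, Dry: 6/7
print(posterior("FSS"))  # Oil: 1/2, Dry: 1/2
```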
Probability tree diagram
Expected payoffs

Expected payoffs can again be found using Bayes decision rule for the prototype example, with posterior probabilities replacing the prior probabilities.
Expected payoffs if the finding is USS:
E[Payoff(drill) | Finding = USS] = (1/7)(700) + (6/7)(-100) - 30 = -15.7,
E[Payoff(sell) | Finding = USS] = (1/7)(90) + (6/7)(90) - 30 = 60.
Expected payoffs if the finding is FSS:
E[Payoff(drill) | Finding = FSS] = (1/2)(700) + (1/2)(-100) - 30 = 270,
E[Payoff(sell) | Finding = FSS] = (1/2)(90) + (1/2)(90) - 30 = 60.
Optimal policy

Using Bayes decision rule, the optimal policy for maximizing the expected payoff is:

Finding from      Optimal         Expected payoff          Expected payoff
seismic survey    alternative     excluding survey cost    including survey cost
USS               Sell the land   90                       60
FSS               Drill for oil   300                      270

Is it worth spending 30.000 to conduct the experimentation?
Value of experimentation

Before performing an experiment, determine its potential value. Two complementary methods:
1. Expected value of perfect information: it is assumed that the experiment will remove all uncertainty. This provides an upper bound on the potential value of the experiment.
2. Expected value of experimentation: the actual improvement in expected payoff.
Expected value of perfect information

                        State of nature
Alternative             Oil      Dry
1. Drill for oil        700      -100
2. Sell the land        90       90
Maximum payoff          700      90
Prior probability       0.25     0.75

Expected payoff with perfect information = 0.25(700) + 0.75(90) = 242.5.
The expected value of perfect information (EVPI) is:
EVPI = expected payoff with perfect information - expected payoff without experimentation.
Example: EVPI = 242.5 - 100 = 142.5. This value exceeds 30, the cost of the survey.
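The EVPI calculation above can be written out directly. A minimal sketch (values in thousands; names are illustrative):

```python
# Expected value of perfect information (EVPI) for the Goferbroke example.
prior = {"Oil": 0.25, "Dry": 0.75}
payoff = {"Drill": {"Oil": 700, "Dry": -100},
          "Sell":  {"Oil": 90,  "Dry": 90}}

# With perfect information, pick the best action separately in each state:
ep_perfect = sum(prior[s] * max(payoff[a][s] for a in payoff) for s in prior)

# Without experimentation, Bayes' rule picks one action up front:
ep_no_exp = max(sum(prior[s] * payoff[a][s] for s in prior) for a in payoff)

evpi = ep_perfect - ep_no_exp
print(ep_perfect, ep_no_exp, evpi)  # 242.5 100.0 142.5
```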
Expected value of experimentation

This requires the expected payoff with experimentation:
Expected payoff with experimentation = sum over j of P(Finding = finding j) E[payoff | Finding = finding j].
Example: see the probability tree diagram, where P(USS) = 0.7 and P(FSS) = 0.3. The expected payoffs (excluding the cost of the survey) were obtained in the optimal policy:
E[Payoff | Finding = USS] = 90,  E[Payoff | Finding = FSS] = 300.
Expected value of experimentation

So, the expected payoff with experimentation is
0.7(90) + 0.3(300) = 153.
The expected value of experimentation (EVE) is:
EVE = expected payoff with experimentation - expected payoff without experimentation.
Example: EVE = 153 - 100 = 53. As 53 exceeds 30, the seismic survey should be done.
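The EVE comparison above is a one-line weighted sum. A short sketch (values in thousands; variable names are illustrative):

```python
# Expected value of experimentation (EVE) for the seismic-survey example.
p_finding = {"USS": 0.7, "FSS": 0.3}
# Optimal expected payoff given each finding, excluding the survey cost
# (from the optimal-policy table):
ep_given = {"USS": 90, "FSS": 300}

ep_with_exp = sum(p_finding[f] * ep_given[f] for f in p_finding)
ep_no_exp = 100          # expected payoff without experimentation
survey_cost = 30

eve = ep_with_exp - ep_no_exp
print(eve > survey_cost)  # True -> the seismic survey is worthwhile
```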
Decision trees

The prototype example has a sequence of two decisions:
- Should a seismic survey be conducted before an action is chosen?
- Which action (drill for oil or sell the land) should be chosen?
These questions have a corresponding decision tree. Junction points are nodes, and lines are branches. A decision node, represented by a square, indicates that a decision needs to be made at that point. An event node, represented by a circle, indicates that a random event occurs at that point.
Decision tree for prototype example
Decision tree with probabilities (each branch is labeled with its probability and cash flow)
Performing the analysis
1. Start at the right side of the decision tree and move left one column at a time. For each column, perform step 2 or step 3, depending on whether the nodes are event or decision nodes.
2. For each event node, calculate its expected payoff by multiplying the expected payoff of each branch by the probability of that branch and summing these products. Record the value next to the node in bold.
3. For each decision node, compare the expected payoffs of its branches, and choose the alternative with the largest expected payoff. Record the choice by inserting a double dash in each rejected branch.
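The steps above are a backward-induction (rollback) procedure, which can be sketched as a small recursion. The node representation below is an illustrative assumption, not from the slides; it is shown on the no-survey subtree of the prototype example.

```python
# Backward induction over a decision tree, following the three steps above.
def rollback(node):
    """Return the optimal expected payoff of a node by backward induction."""
    kind = node["kind"]
    if kind == "payoff":      # leaf: terminal payoff
        return node["value"]
    if kind == "event":       # step 2: probability-weighted sum of branches
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":    # step 3: pick the branch with largest payoff
        return max(rollback(child) for child in node["branches"])

# No-survey subtree of the prototype example (payoffs in thousands):
tree = {"kind": "decision", "branches": [
    {"kind": "event", "branches": [            # drill for oil
        (0.25, {"kind": "payoff", "value": 700}),
        (0.75, {"kind": "payoff", "value": -100})]},
    {"kind": "payoff", "value": 90}]}          # sell the land

print(rollback(tree))  # 100.0
```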
Decision tree with analysis
Optimal policy for the prototype example

The decision tree results in the following decisions:
1. Do the seismic survey.
2. If the result is unfavorable, sell the land.
3. If the result is favorable, drill for oil.
4. The expected payoff (including the cost of the seismic survey) is 123 (123.000).
This is the same result as obtained with experimentation. For any decision tree, this backward induction procedure always leads to an optimal policy.
Utility theory

You are offered the choice of:
1. Accepting a 50:50 chance of winning $100.000 or nothing;
2. Receiving $40.000 with certainty.
What do you choose?
Utility function for money

Utility functions u(M) for money M: usually there is a decreasing marginal utility for money (the individual is risk-averse).
Utility function for money

It is also possible to exhibit a mixture of these kinds of behavior (risk-averse, risk-seeking, risk-neutral). An individual's attitude toward risk may be different when dealing with personal finances than when making decisions on behalf of an organization. When a utility function for money is incorporated into a decision analysis approach to a problem, this utility function must be constructed to fit the preferences and values of the decision maker involved. (The decision maker can be either a single individual or a group of people.)
Utility theory

Fundamental property: the decision maker's utility function for money has the property that the decision maker is indifferent between two alternative courses of action if they have the same expected utility.
Example. Offer: an opportunity to obtain either $100.000 (utility = 4) with probability p or nothing (utility = 0) with probability 1 - p. Thus, E(utility) = 4p. The decision maker is indifferent between, e.g.:
- The offer with p = 0.25 (E(utility) = 1) or definitely obtaining $10.000 (utility = 1).
- The offer with p = 0.75 (E(utility) = 3) or definitely obtaining $60.000 (utility = 3).
Role of utility theory

When the decision maker's utility function for money is used to measure the relative worth of the various possible monetary outcomes, Bayes decision rule replaces monetary payoffs by the corresponding utilities. Thus, the optimal action is the one that maximizes the expected utility. Note that utility functions need not be monetary. Example: a doctor's decision whether to treat a patient involves the future health of the patient.
Applying utility theory to the example

The Goferbroke Co. does not have much capital, so a loss of 100.000 would be quite serious. The scale of the utility function is irrelevant: only the relative values of the utilities matter. The complete utility function can be found using the following values:

Monetary payoff     Utility
-130                -150
-100                -105
60                  60
90                  90
670                 580
700                 600
Utility function for Goferbroke Co.
Estimating u(M)

A popular form is the exponential utility function:
u(M) = R (1 - e^(-M/R)),
where R is the decision maker's risk tolerance. It is designed to fit a risk-averse individual. For the prototype example, R = 2250 fits u(670) = 580 and u(700) = 600, and R = 465 fits u(-130) = -150. However, it is not possible to have different values of R within the same utility function.
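The exponential utility function above is easy to evaluate numerically. A small sketch checking the R values quoted on the slide; note that the fits are approximate (e.g. u(670) with R = 2250 comes out near, not exactly at, 580):

```python
import math

# Exponential utility: u(M) = R * (1 - exp(-M/R)), R = risk tolerance.
def u(M, R):
    return R * (1.0 - math.exp(-M / R))

# R = 2250 approximately reproduces the slide values u(670) ~ 580
# and u(700) ~ 600:
print(u(670, 2250), u(700, 2250))
# R = 465 approximately reproduces u(-130) ~ -150:
print(u(-130, 465))
# A single R cannot fit both ends, which is the limitation the slide notes.
```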
Decision tree with utility function

The solution is exactly the same as before, except that utilities are substituted for monetary payoffs. Thus, the value obtained to evaluate each fork of the tree is now the expected utility rather than the expected monetary payoff. Optimal decisions selected by Bayes decision rule maximize the expected utility for the overall problem.
Decision tree using utility function

Different decision tree but same optimal policy.