Decision Analysis

Max, min, minimax, maximin, maximax, minimin. All good cat names!

Introduction
- Models provide insight and understanding.
- We make decisions.
- Decision making is difficult because of:
  - future uncertainty
  - conflicting values
  - conflicting objectives

Job Counseling
Company A
- New industry, volatile
- Low starting salary, but could grow rapidly
- Signing bonus
- Options
- Located near family and friends
Company B
- Financially established firm
- Committed to its employees
- Higher starting salary
- Slower advancement opportunity
- Distant location, offering few sporting or cultural activities
Which job would you recommend the student take?
Good Decisions? Good Outcomes?
- A structured approach can help, but no approach can guarantee good outcomes.
- Good decisions sometimes result in negative outcomes (unintended side effects).

Characteristics of Decision Problems
- Alternatives: the different courses of action.
  - Work for company A
  - Work for company B
  - Reject both offers, stay at home, keep looking, ...
- Criteria: factors that are important and are influenced by the alternatives.
  - Salary
  - Career potential
  - Location
- States of nature: future events not under the decision maker's control.
  - Company A grows
  - Company A goes bust
  - etc.

An Example: American Inns
- O'Hare Airport in Chicago, Illinois, is the busiest airport and has been expanded numerous times to handle increasing air traffic; plans are to double capacity (2003).
- Residential and commercial development around the airport restricts adding runways to handle future demand without substantial payments.
- Plans are being developed to build another airport outside the city limits, serving the southwest part of Chicago, Champaign-Urbana, and Indiana.
An Example: American Inns (cont'd)
- Two possible locations for the new airport are known, but the final decision is not expected for some time.
- Venture capitalists with the American Inns hotel chain intend to build a conference facility close to the new airport.
- Land values at the two possible sites are rising as investors speculate that property values will increase greatly.

Decision Alternatives
1. Buy parcel A: PP = $12 million, CE = $35 million, NPV = $48 million, SV = $6 million
2. Buy parcel B: PP = $10 million, CE = $30 million, NPV = $46 million, SV = $6 million
3. Buy both parcels and develop the one where the airport is located
4. Do nothing

Possible States of Nature
1. New airport is built at location A
2. New airport is built at location B
(Assume the parcel where the airport is not located can be sold for 90% of its purchase price. Later we consider this as an option!)
Building the Payoff Matrix
Enumerate the payoff of each alternative under each possible state of nature (in $ millions):

Payoff Matrix
Airport built at:  Parcel A   Parcel B   Parcels A & B   Do nothing
Location A           $7.00     -$9.00          -$2.00        $0.00
Location B         -$10.80     $12.00           $1.20        $0.00

Decision Rules
- If the future state of nature (the airport location) were known, the decision would be easy.
- When it is not known, a variety of deterministic decision rules can be applied:
  - Maximax
  - Maximin
  - Minimax regret
- No decision rule is always best, and each has weaknesses.

Maximax Decision Rule
1. Identify the maximum payoff for each alternative.
2. Choose the alternative with the largest maximum payoff.

Airport built at:  Parcel A   Parcel B   Parcels A & B   Do nothing
Location A           $7.00     -$9.00          -$2.00        $0.00
Location B         -$10.80     $12.00           $1.20        $0.00
Row maximum          $7.00     $12.00 <<         $1.20        $0.00

Maximax choice: buy parcel B (largest row maximum, $12.00).
Maximax Decision Rule Weakness
Consider the following payoff matrix, where B is a guaranteed payoff:

              State of Nature
Decision        1        2      MAX
A              40     -100       40  <-- maximum
B              30       30       30

Maximax chooses A despite its possible loss of 100.

Maximin Decision Rule
1. Identify the minimum payoff for each alternative.
2. Choose the alternative with the largest minimum payoff.

Airport built at:  Parcel A   Parcel B   Parcels A & B   Do nothing
Location A           $7.00     -$9.00          -$2.00        $0.00
Location B         -$10.80     $12.00           $1.20        $0.00
Row minimum        -$10.80     -$9.00          -$2.00        $0.00 <<

Maximin choice: do nothing (largest row minimum, $0.00).

Maximin Decision Rule Weakness
Consider the following payoff matrix, where A pays off far more in state 1 at the cost of only a slightly worse worst case:

              State of Nature
Decision        1        2      MIN
A             100       25       25
B              30       30       30  <-- maximum
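The maximax and maximin rules can be sketched in a few lines of Python, using the American Inns payoff matrix from the slides (the dictionary encoding is just one convenient choice):

```python
# Maximax and maximin on the American Inns payoff matrix ($ millions).
# Keys = alternatives; values = payoffs under (airport at A, airport at B).
payoffs = {
    "Buy A":      [7.00, -10.80],
    "Buy B":      [-9.00, 12.00],
    "Buy A & B":  [-2.00, 1.20],
    "Do nothing": [0.00, 0.00],
}

# Maximax: pick the alternative whose best-case payoff is largest.
maximax = max(payoffs, key=lambda a: max(payoffs[a]))

# Maximin: pick the alternative whose worst-case payoff is largest.
maximin = max(payoffs, key=lambda a: min(payoffs[a]))

print(maximax, maximin)  # Buy B Do nothing
```

This reproduces the two choices shown in the slides: maximax selects parcel B, while the conservative maximin rule selects doing nothing.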
Minimax Regret Decision Rule
1. Compute the regret for each alternative under each state of nature: regret = (best payoff available in that state) - (payoff of the alternative).
2. Identify the maximum possible regret for each alternative.
3. Choose the alternative with the smallest maximum regret.

Regret Matrix
Airport built at:  Parcel A   Parcel B   Parcels A & B   Do nothing
Location A           $0.00     $16.00           $9.00        $7.00
Location B          $22.80      $0.00          $10.80       $12.00

Minimax Regret Rule Anomalies
Consider the following payoff matrix:

              State of Nature
Decision        1        2
A              10        6
B               5       10

The regret matrix is:

              State of Nature
Decision        1        2      MAX
A               0        4        4  <-- minimum
B               5        0        5

Note that we prefer A to B. Now let's add an alternative...
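The regret computation can be sketched the same way; this reproduces the regret matrix above and the minimax regret choice:

```python
# Minimax regret: regret = (best payoff in that state) - (payoff received).
alternatives = ["Buy A", "Buy B", "Buy A & B", "Do nothing"]
payoffs = [[7.00, -10.80], [-9.00, 12.00], [-2.00, 1.20], [0.00, 0.00]]
n_states = 2

# Best achievable payoff under each state of nature (column maxima).
col_best = [max(row[j] for row in payoffs) for j in range(n_states)]

# Regret matrix, then the worst-case regret of each alternative.
regret = [[col_best[j] - row[j] for j in range(n_states)] for row in payoffs]
worst_regret = [max(r) for r in regret]

choice = alternatives[worst_regret.index(min(worst_regret))]
print(choice)  # Buy A & B (maximum regret 10.8, smallest of the four)
```

Note that minimax regret picks buying both parcels, a third answer different from both maximax and maximin, illustrating that the deterministic rules can disagree.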
Adding an Alternative
Consider the following payoff matrix:

              State of Nature
Decision        1        2
A              10        6
B               5       10
C               3       11

The regret matrix is now:

              State of Nature
Decision        1        2      MAX
A               0        5        5  <-- minimum
B               5        1        5  <-- minimum
C               7        0        7

Do we still prefer A to B? They are now tied. Does adding the (irrelevant) alternative C cause the problem?

Probabilistic Methods
- Assume the states of nature can be assigned probabilities representing their likelihood of occurrence.
- If a decision problem occurs more than once, these probabilities can often be estimated from historical data.
- Other decision problems are one-time decisions for which historical data don't exist or are hard to find.
- Probabilities can be assigned subjectively, based on interviews with one or more domain experts (e.g., the Delphi method).
- Highly structured interviewing techniques exist for eliciting probability estimates that are reasonably accurate and free of the unconscious biases that may affect an expert's opinions.
- Here we focus on techniques that can be used once appropriate probability estimates have been obtained.

Expected Monetary Value
Select the alternative with the largest expected monetary value (EMV):

  EMV_i = Σ_j r_ij * p_j

where r_ij = the payoff for alternative i under the jth state of nature, and p_j = the probability of the jth state of nature.

EMV_i is the average payoff received if one had to solve the same decision problem numerous times and always selected alternative i.

Note that the probabilities must sum to 1:  Σ_{j=1}^{n} p_j = 1.
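A minimal sketch of the EMV rule, applied to the American Inns payoffs used in the decision tree later in the deck, with P(airport at A) = 0.4:

```python
# EMV_i = sum_j r_ij * p_j for the American Inns tree payoffs ($ millions).
p = [0.4, 0.6]                      # P(airport at A), P(airport at B)
payoffs = {
    "Buy A":       [13, -12],
    "Buy B":       [-8, 11],
    "Buy A & B":   [5, -1],
    "Buy nothing": [0, 0],
}

def emv(row, probs):
    """Expected monetary value of one alternative."""
    return sum(r * q for r, q in zip(row, probs))

emvs = {alt: emv(row, p) for alt, row in payoffs.items()}
best = max(emvs, key=emvs.get)
print(best, round(emvs[best], 2))   # Buy B 3.4
```

Buying parcel B has the largest EMV, $3.4 million, matching the rolled-back decision tree.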
EMV Caution
The EMV rule should be used with caution for one-time decision problems. Consider the following payoff matrix:

              State of Nature
Decision         1          2        EMV
A           15,000     -5,000      5,000  <-- maximum
B            5,000      4,000      4,500
Probability    0.5        0.5

A maximizes EMV but carries a 50% chance of losing $5,000; for a one-time decision, many people would prefer B.

Expected Regret
Select the alternative with the smallest expected regret, or expected opportunity loss (EOL):

  EOL_i = Σ_j g_ij * p_j

where g_ij = the regret for alternative i under the jth state of nature, and p_j = the probability of the jth state of nature.

The decision with the largest EMV will also have the smallest EOL, so the two rules are equivalent.

Expected Value of Perfect Information
Suppose we could hire a consultant who could predict the future with 100% accuracy. With such perfect information, American Inns' average payoff would be:

  EV with PI = 0.4 * $13 + 0.6 * $11 = $11.8 (in millions)

Without perfect information, the maximum EMV was $3.4 million. The expected value of perfect information is therefore:

  EV of PI = $11.8 - $3.4 = $8.4 (in millions)

In general, EV of PI = EV with PI - maximum EMV. It will always be the case that EV of PI = minimum EOL.
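A sketch verifying, on the same American Inns numbers, that the minimum EOL equals the EVPI:

```python
# Expected regret (EOL) and expected value of perfect information (EVPI)
# for the American Inns tree payoffs, P(A) = 0.4, P(B) = 0.6 ($ millions).
p = [0.4, 0.6]
payoffs = [[13, -12], [-8, 11], [5, -1], [0, 0]]

# Best achievable payoff in each state (13 if airport at A, 11 if at B).
col_best = [max(row[j] for row in payoffs) for j in range(2)]

# EOL_i = sum_j (col_best_j - r_ij) * p_j
eol = [sum((col_best[j] - row[j]) * p[j] for j in range(2)) for row in payoffs]

ev_with_pi = sum(col_best[j] * p[j] for j in range(2))                  # 11.8
max_emv = max(sum(row[j] * p[j] for j in range(2)) for row in payoffs)  # 3.4
evpi = ev_with_pi - max_emv

print(round(min(eol), 2), round(evpi, 2))   # 8.4 8.4
```

Both quantities come out to $8.4 million, confirming the identity stated on the slide.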
Decision Tree for American Inns
Payoffs in $ millions (net of land cost, shown as "-cost" on each purchase branch); p = P(airport built at location A):

Buy A (-cost A)
  Airport at A (p):     13
  Airport at B (1 - p): -12
Buy B (-cost B)
  Airport at A (p):     -8
  Airport at B (1 - p): 11
Buy A & B (-cost A & B)
  Airport at A (p):     5
  Airport at B (1 - p): -1
Buy nothing
  Airport at A (p):     0
  Airport at B (1 - p): 0

Rolling Back a Decision Tree
With p = 0.4, replace each chance node by its expected value and choose the best alternative:

  Buy A:       EMV = 0.4*13 + 0.6*(-12) = -2.0
  Buy B:       EMV = 0.4*(-8) + 0.6*11  =  3.4  <-- best
  Buy A & B:   EMV = 0.4*5 + 0.6*(-1)   =  1.4
  Buy nothing: EMV = 0.0

Multi-stage Decision Problems
- Many problems involve a series of decisions.
- Example: Should you go out to dinner tonight? If so,
  - How much will you spend?
  - Where will you go?
  - How will you get there?
- Multi-stage decisions can be analyzed using decision trees.
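The rollback procedure generalizes to multi-stage trees; a small recursive sketch (the nested data layout is an assumption made for illustration):

```python
# Rolling back a decision tree: chance nodes take expected values, decision
# nodes take the best child. A node is a number (terminal payoff), a list of
# (probability, node) pairs (chance node), or a dict label -> node (decision).
def rollback(node):
    if isinstance(node, dict):             # decision node: pick best branch
        return max(rollback(child) for child in node.values())
    if isinstance(node, list):             # chance node: expected value
        return sum(p * rollback(child) for p, child in node)
    return node                            # terminal payoff

# The American Inns tree with p = 0.4.
tree = {
    "Buy A":       [(0.4, 13), (0.6, -12)],
    "Buy B":       [(0.4, -8), (0.6, 11)],
    "Buy A & B":   [(0.4, 5), (0.6, -1)],
    "Buy nothing": [(0.4, 0), (0.6, 0)],
}
print(round(rollback(tree), 2))   # 3.4 -> buying parcel B is optimal
```

Because the function recurses, the same code handles trees where decisions follow chance outcomes, as in the multi-stage problems that come next.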
Multi-Stage Decision Example: COMED
- COMED is considering whether to apply for an $85,000 OSHA research grant for using wireless communications technology to enhance safety in the coal industry.
- COMED would spend approximately $5,000 preparing the proposal and estimates a 50-50 chance of actually receiving the grant.
- If awarded the grant, COMED would select one of three communications technologies: microwave, cellular, or infrared.

COMED (continued)
COMED would need to acquire new equipment, depending on which technology is used. Equipment costs are estimated as:

Technology   Equipment Cost
Microwave        $4,000
Cellular         $5,000
Infrared         $4,000

COMED knows it will also be necessary to spend money on R&D, but no one knows exactly what the R&D costs will be.

COMED (continued)
Estimates of best-case and worst-case R&D costs and their probabilities:

             Best Case           Worst Case
Technology   Cost      Prob.     Cost      Prob.
Microwave    $30,000   0.4       $60,000   0.6
Cellular     $40,000   0.8       $70,000   0.2
Infrared     $40,000   0.9       $80,000   0.1

COMED needs to synthesize all of these factors to decide whether to submit a grant proposal to OSHA.
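A sketch that rolls the COMED problem back to a single EMV for submitting the proposal, using only the figures above:

```python
# Rolling back the COMED decision: all figures in dollars.
GRANT, PROPOSAL = 85_000, 5_000
P_GRANT = 0.5

# (equipment cost, [(prob, R&D cost), ...]) per technology.
technologies = {
    "Microwave": (4_000, [(0.4, 30_000), (0.6, 60_000)]),
    "Cellular":  (5_000, [(0.8, 40_000), (0.2, 70_000)]),
    "Infrared":  (4_000, [(0.9, 40_000), (0.1, 80_000)]),
}

# EMV of each technology if the grant is received:
# grant - proposal cost - equipment - R&D cost, averaged over R&D outcomes.
tech_emv = {
    name: sum(p * (GRANT - PROPOSAL - equip - rd) for p, rd in branches)
    for name, (equip, branches) in technologies.items()
}
best_tech = max(tech_emv, key=tech_emv.get)

# Submitting: 50% chance of the grant (then pick the best technology),
# 50% chance of losing only the proposal cost.
emv_submit = P_GRANT * tech_emv[best_tech] + (1 - P_GRANT) * (-PROPOSAL)
print(best_tech, round(emv_submit))   # Infrared 13500
```

Infrared is the best technology choice, and submitting the proposal has an EMV of $13,500, the figure broken down in the risk profile slide.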
Analyzing Risk in a Decision Tree
- How sensitive is the decision in the COMED problem to changes in the probability estimates?
- Solver can be used to determine the smallest probability of receiving the grant at which COMED should still be willing to submit the proposal.

Risk Profiles
A risk profile summarizes the make-up of an EMV. The $13,500 EMV for COMED was created as follows:

Event                             Probability          Payoff
Receive grant, low R&D costs      0.5 * 0.9 = 0.45     $36,000
Receive grant, high R&D costs     0.5 * 0.1 = 0.05     -$4,000
Don't receive grant               0.5                  -$5,000
EMV                                                    $13,500

Using Sample Information
- One can often obtain information about the possible outcomes of decisions before the decisions are made.
- Such sample information allows one to refine the probability estimates associated with each outcome.
Example: New Wind Generators
- New Wind Generators (NWG) must determine whether to build a large or a small plant for a new machine under development.
- Constructing the large plant costs $25 million; the small plant costs $15 million.
- NWG believes there is a 70% chance that demand for the new machine will be high and a 30% chance that it will be low.
- The payoffs (in millions of dollars, before construction costs) are summarized below:

              Demand
Factory Size  High    Low
Large         $175    $95
Small         $125    $105

Including Sample Information
- Before making a decision, suppose NWG conducts an industrial survey at zero cost.
- The survey can indicate favorable or unfavorable attitudes toward the new machine. Assume:
    P(favorable response) = 0.67
    P(unfavorable response) = 0.33
- A favorable response should increase NWG's belief that demand will be high. Assume:
    P(high demand | favorable response) = 0.9
    P(low demand | favorable response) = 0.1

Including Sample Information (cont'd)
- An unfavorable survey response should increase NWG's belief that demand will be low. Assume:
    P(low demand | unfavorable response) = 0.7
    P(high demand | unfavorable response) = 0.3
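The value of the survey can be computed by comparing expected values with and without it; a sketch using the probabilities assumed above (payoffs taken as gross of construction cost, which is then subtracted):

```python
# NWG expected values with and without the survey (in $ millions).
payoff = {"Large": {"H": 175, "L": 95}, "Small": {"H": 125, "L": 105}}
cost = {"Large": 25, "Small": 15}

def best_emv(p_high):
    """Best net EMV over the two plant sizes, given P(high demand)."""
    return max(
        p_high * payoff[s]["H"] + (1 - p_high) * payoff[s]["L"] - cost[s]
        for s in payoff
    )

ev_without = best_emv(0.70)                               # build large: 126
# With the survey, act on the posterior after each possible response.
ev_with = 0.67 * best_emv(0.90) + 0.33 * best_emv(0.30)   # 126.82
evsi = ev_with - ev_without

print(round(ev_without, 2), round(ev_with, 2), round(evsi, 2))  # 126.0 126.82 0.82
```

With a favorable response the posterior favors the large plant (net EMV $142M); with an unfavorable one the small plant is better ($96M vs. $94M). The survey is worth $0.82 million, the figure derived on the next slides.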
Expected Value of Sample Information
How much should NWG be willing to pay to conduct the survey? Find the difference between the expected values with and without the sample information:

  Expected Value of Sample Information
    = Expected Value with Sample Information - Expected Value without Sample Information

In this example, EV of Sample Info = $126.82 - $126 = $0.82 million.

Computing Conditional Probabilities
Conditional probabilities are computed from a joint probability table:

                       High Demand   Low Demand   Total
Favorable Response        0.600        0.067      0.667
Unfavorable Response      0.100        0.233      0.333
Total                     0.700        0.300      1.000

Joint probabilities:    P(F ∩ H) = 0.600, P(F ∩ L) = 0.067, P(U ∩ H) = 0.100, P(U ∩ L) = 0.233
Marginal probabilities: P(F) = 0.667, P(U) = 0.333, P(H) = 0.700, P(L) = 0.300

Computing Conditional Probabilities (cont'd)
In general, P(A | B) = P(A ∩ B) / P(B). From the table above:

  P(L | U) = P(L ∩ U) / P(U) = 0.233 / 0.333 = 0.70
  P(H | U) = P(H ∩ U) / P(U) = 0.100 / 0.333 = 0.30
  P(L | F) = P(L ∩ F) / P(F) = 0.067 / 0.667 = 0.10
  P(H | F) = P(H ∩ F) / P(F) = 0.600 / 0.667 = 0.90
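The table-to-conditional computation can be sketched directly from the joint probabilities:

```python
# Conditional probabilities from the NWG joint probability table:
# P(A | B) = P(A and B) / P(B).
joint = {                       # P(response, demand)
    ("F", "H"): 0.600, ("F", "L"): 0.067,
    ("U", "H"): 0.100, ("U", "L"): 0.233,
}

def marginal_response(r):
    """Marginal probability of a survey response (row total)."""
    return sum(p for (resp, _), p in joint.items() if resp == r)

def cond_demand(d, r):
    """P(demand d | response r) = P(r and d) / P(r)."""
    return joint[(r, d)] / marginal_response(r)

print(round(cond_demand("H", "F"), 2))  # 0.9
print(round(cond_demand("L", "U"), 2))  # 0.7
```

These recover exactly the posterior probabilities assumed earlier for the NWG survey.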
Bayes's Theorem
Bayes's theorem provides another way to compute a conditional probability:

  P(A | B) = P(B | A) P(A) / [ P(B | A) P(A) + P(B | A') P(A') ]

where A' denotes the complement of A. Example:

  P(H | F) = P(F | H) P(H) / [ P(F | H) P(H) + P(F | L) P(L) ]
           = (0.857)(0.70) / [ (0.857)(0.70) + (0.223)(0.30) ]
           = 0.90

Utility Theory
The decision with the highest EMV is not always the most desired or most preferred alternative. Consider the following payoff table:

              State of Nature
Decision          1          2        EMV
A           150,000    -30,000     60,000  <-- maximum
B            70,000     40,000     55,000
Probability     0.5        0.5

- Decision makers have different attitudes toward risk: some might prefer alternative A, others alternative B.
- Utility theory incorporates risk preferences into the decision process.

Common Utility Functions
[Chart: utility (0.00 to 1.00) vs. payoff for risk-averse (concave), risk-neutral (linear), and risk-seeking (convex) utility functions.]
Constructing Utility Functions
- Assign a utility of 0 to the worst payoff and 1 to the best. For the previous example, U(-$30,000) = 0 and U($150,000) = 1.
- To find the utility associated with a $70,000 payoff, identify the probability p at which the decision maker is indifferent between:
    Alternative 1: Receive $70,000 with certainty.
    Alternative 2: Receive $150,000 with probability p and lose $30,000 with probability (1 - p).

Constructing Utility Functions (cont'd)
If the decision maker is indifferent when p = 0.8:

  U($70,000) = U($150,000)*0.8 + U(-$30,000)*0.2 = 1*0.8 + 0*0.2 = 0.8

When p = 0.8, the expected value of Alternative 2 is:

  $150,000*0.8 + (-$30,000)*0.2 = $114,000

This decision maker is risk averse: willing to accept $70,000 with certainty rather than a risky prospect with an expected value of $114,000.

Constructing Utility Functions (cont'd)
If we repeat this process with different certain amounts in Alternative 1, the decision maker's utility function emerges (e.g., if U($40,000) = 0.65).
[Chart: utility (utiles) vs. payoff from $0 to $160,000, tracing a concave, risk-averse utility curve.]
Comments
- Certainty equivalent: the certain amount that is equivalent, to the decision maker, to a situation involving risk. (E.g., $70,000 was equivalent to Alternative 2 with p = 0.8.)
- Risk premium: the EMV the decision maker is willing to give up to avoid a risky decision. (E.g., risk premium = $114,000 - $70,000 = $44,000.)

Using Utilities
Replace the monetary values in the payoff table with utilities. Consider the following utility table from the earlier example:

              State of Nature   Expected
Decision          1        2     Utility
A                 1        0       0.500
B               0.8     0.65       0.725  <-- maximum
Probability     0.5      0.5

Decision B provides the greatest expected utility even though the payoff table indicated that it had the smaller EMV.

Exponential Utility Function
The exponential function is often used to model classic risk-averse behavior:

  U(x) = 1 - e^(-x/R)

where R is a risk-tolerance parameter.
[Chart: U(x) vs. x for R = 100, 200, 300; larger R gives a flatter, less risk-averse curve.]
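As a sketch of how the exponential utility function can reverse the EMV ranking in the example above (R = $100,000 is an assumed, illustrative risk tolerance, not a value from the slides):

```python
import math

# Exponential utility U(x) = 1 - exp(-x/R) applied to the A-vs-B payoff
# table. R = 100,000 is an assumed risk tolerance for illustration only.
def u(x, R=100_000):
    return 1.0 - math.exp(-x / R)

p = [0.5, 0.5]
payoffs = {"A": [150_000, -30_000], "B": [70_000, 40_000]}

# Expected utility of each alternative under the assumed utility function.
eu = {alt: sum(u(x) * q for x, q in zip(row, p)) for alt, row in payoffs.items()}
best = max(eu, key=eu.get)
print(best)   # B: expected utility reverses the EMV ranking
```

With this degree of risk aversion, B's expected utility exceeds A's even though A has the larger EMV, mirroring the utility-table result above.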
Multicriteria Decision Making
Decision problems often involve two or more conflicting criteria or objectives:
- Investing: risk vs. return
- Choosing among job offers: salary, location, career potential, etc.
- Selecting a camcorder: price, warranty, zoom, weight, lighting, etc.
- Choosing among job applicants: education, experience, personality, etc.
We consider two techniques for these types of problems:
- Multicriteria scoring model
- Analytic Hierarchy Process (AHP)

Multicriteria Scoring Model
- Score (or rate) each alternative on each criterion.
- Assign weights to the criteria reflecting their relative importance.
- For each alternative j, compute a weighted average score:

    Score_j = Σ_i w_i * s_ij

  where w_i = the weight for criterion i and s_ij = the score for alternative j on criterion i.

Analytic Hierarchy Process (AHP)
AHP provides a structured approach for determining the scores and weights used in a multicriteria scoring model. Consider the following example:
- A company wants to purchase a new information system.
- Three systems are being considered (X, Y, and Z).
- Three criteria are relevant: price, user support, and ease of use.
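A sketch of the scoring model for the information-system example; all weights and scores below are made-up illustrative numbers, not values from the slides:

```python
# Weighted-average scoring: Score_j = sum_i w_i * s_ij, where the weights
# w_i sum to 1 and s_ij is alternative j's score on criterion i.
# All numeric values are hypothetical, for illustration only.
weights = {"Price": 0.5, "User support": 0.3, "Ease of use": 0.2}
scores = {                      # criterion -> {alternative: score in [0, 1]}
    "Price":        {"X": 0.9, "Y": 0.6, "Z": 0.3},
    "User support": {"X": 0.4, "Y": 0.8, "Z": 0.7},
    "Ease of use":  {"X": 0.5, "Y": 0.7, "Z": 0.9},
}

totals = {
    alt: sum(weights[c] * scores[c][alt] for c in weights)
    for alt in ["X", "Y", "Z"]
}
print(max(totals, key=totals.get))   # the highest weighted total wins
```

With these illustrative numbers, Y edges out X (0.68 vs. 0.67) because its strength on user support outweighs X's price advantage; changing the weights can flip the answer, which is why AHP's structured weighting matters.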
Pairwise Comparisons
The first step is to create a pairwise comparison matrix for the alternatives on each criterion, using the following scale:

Value   Preference
1       Equally preferred
2       Equally to moderately preferred
3       Moderately preferred
4       Moderately to strongly preferred
5       Strongly preferred
6       Strongly to very strongly preferred
7       Very strongly preferred
8       Very strongly to extremely preferred
9       Extremely preferred

Pairwise Comparisons (cont'd)
Let P_ij = the extent to which alternative i is preferred to alternative j on a given criterion. Assume P_ji = 1/P_ij.

Normalization & Scoring
To normalize a pairwise comparison matrix:
1. Compute the sum of each column.
2. Divide each entry in the matrix by its column sum.
The score s_j for each alternative is the average of row j in the normalized comparison matrix.
Consistency
Check that the decision maker was consistent in the comparisons. The consistency measure for alternative i is:

  C_i = (Σ_j P_ij * s_j) / s_i

where P_ij = the pairwise comparison of alternative i to alternative j, and s_j = the score for alternative j. If the decision maker were perfectly consistent, each C_i would equal n, the number of alternatives in the problem.

Consistency (cont'd)
Typically, some inconsistency exists. It is not considered a problem provided the consistency ratio (CR) is not more than 10%:

  CR = CI / RI <= 0.10,  where CI = ((Σ_i C_i)/n - n) / (n - 1)

n = the number of alternatives, and RI is the random index:

  n    2      3      4      5      6      7      8
  RI   0.00   0.58   0.90   1.12   1.24   1.32   1.41

Obtaining the Remaining Scores & Weights
- This process is repeated to obtain scores on the other criteria, as well as the criterion weights.
- The scores and weights are then used as inputs to a multicriteria scoring model in the usual way.
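The normalization, scoring, and consistency steps can be sketched end-to-end; the 3x3 pairwise comparison matrix below is made up for illustration:

```python
# AHP scoring and consistency check for one criterion, for systems X, Y, Z.
# The comparison matrix is hypothetical; note the reciprocals P[j][i] = 1/P[i][j].
P = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
n = len(P)

# 1) Normalize: divide each entry by its column sum; 2) score = row average.
col_sum = [sum(P[i][j] for i in range(n)) for j in range(n)]
norm = [[P[i][j] / col_sum[j] for j in range(n)] for i in range(n)]
s = [sum(row) / n for row in norm]

# Consistency: C_i = (sum_j P_ij * s_j) / s_i, each equal to n if perfectly
# consistent. CI = (mean(C_i) - n) / (n - 1), CR = CI / RI.
C = [sum(P[i][j] * s[j] for j in range(n)) / s[i] for i in range(n)]
CI = (sum(C) / n - n) / (n - 1)
RI = {2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12}[n]
CR = CI / RI

print([round(x, 3) for x in s], round(CR, 4))
```

For this matrix the scores come out roughly (0.648, 0.230, 0.122) and CR is well under 0.10, so the (hypothetical) judgments would be accepted as consistent.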
DA Matrix

Questions?