ESD.71 Engineering Systems Analysis for Design


Assignment 4 Solution
November 18, 2003

15.1 Money Bags

Call Bag A the bag with $640 and Bag B the one with $280. Also, denote the probabilities:

P(A) = 0.5         the probability of choosing Bag A with no information
P(10 | A) = 0.6    the probability of a $10 bill being drawn out of Bag A
P(10 | B) = 0.2    the probability of a $10 bill being drawn out of Bag B
P(1 | A) = 0.4     the probability of a $1 bill being drawn out of Bag A
P(1 | B) = 0.8     the probability of a $1 bill being drawn out of Bag B

The total probability of drawing a $10 bill is then

P(10) = P(A) P(10 | A) + P(B) P(10 | B) = 0.4

Question A

By direct application of Bayes' theorem,

P(A | 10) = P(A) P(10 | A) / P(10) = (0.5)(0.6) / 0.4 = 0.75

Question B

Suppose you pick up a bag and draw one $10 bill, replace it, and subsequently draw two $1 bills (replacing them each time). You want to know the probability that the bag you picked up is Bag A. The easiest way to solve this is using likelihood ratios:

CLR(10) = P(10 | A) / P(10 | B) = 3
CLR(1) = P(1 | A) / P(1 | B) = 0.5
LR_0 = P(A) / P(B) = 1

LR_3 = LR_0 · CLR(10)^1 · CLR(1)^2 = 0.75

P(A | {10, 1, 1}) = LR_3 / (1 + LR_3) ≈ 0.43

Since P(A | {10, 1, 1}) < 0.5, you should pick the other bag for a higher probability that it is Bag A.
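The likelihood-ratio bookkeeping generalizes to any sequence of draws; a short Python sketch of the same computation (all probabilities are taken from the problem statement):

```python
# Posterior probability of Bag A after a sequence of draws, via likelihood ratios.
P_10_A, P_10_B = 0.6, 0.2          # P(10 | A), P(10 | B)
P_1_A, P_1_B = 0.4, 0.8            # P(1 | A),  P(1 | B)

CLR_10 = P_10_A / P_10_B           # conditional likelihood ratio for a $10 bill: 3
CLR_1 = P_1_A / P_1_B              # conditional likelihood ratio for a $1 bill: 0.5
LR_0 = 0.5 / 0.5                   # prior odds P(A)/P(B): 1

def posterior_A(n_tens, n_ones):
    """P(A | n_tens $10 draws and n_ones $1 draws, all with replacement)."""
    lr = LR_0 * CLR_10 ** n_tens * CLR_1 ** n_ones
    return lr / (1 + lr)

print(round(posterior_A(1, 0), 2))   # Question A: 0.75
print(round(posterior_A(1, 2), 2))   # Question B: 0.43 -> pick the other bag
print(round(posterior_A(2, 2), 2))   # Question C: 0.69 -> keep this bag
```

Because draws are independent given the bag, each draw simply multiplies the running odds by its conditional likelihood ratio.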

The same result obtains from successive applications of Bayes' theorem. Using the notation above,

P(A | {10, 1}) = P(A) P({10, 1} | A) / P({10, 1}) = 0.6

where

P({10, 1} | A) = (0.6)(0.4) = 0.24

and

P({10, 1}) = P({10, 1} | A) P(A) + P({10, 1} | B) P(B) = 0.2

Similarly,

P(A | {10, 1, 1}) = P(A) P({10, 1, 1} | A) / P({10, 1, 1}) ≈ 0.43

where

P({10, 1, 1} | A) = (0.6)(0.4)^2 = 0.096

and

P({10, 1, 1}) = P({10, 1, 1} | A) P(A) + P({10, 1, 1} | B) P(B) = 0.112

Question C

Using likelihood ratios,

LR_4 = LR_0 · CLR(10)^2 · CLR(1)^2 = 2.25

P(A | {10, 1, 1, 10}) = LR_4 / (1 + LR_4) ≈ 0.69

and since P(A | {10, 1, 1, 10}) > 0.5, you should pick this bag for a higher probability that it is Bag A.

16.2 Money Bags, take 2

Here we have two wallets to choose from. Call them Wallet 1000 and Wallet 320, with the following contents (number of bills):

              $100 bills   $20 bills
Wallet 1000       10           0
Wallet 320         3           1

Figure 1 shows the corresponding decision tree. This construction assumes that you may take the cash, open a random wallet, or choose to do the test. In the last case, you pick a wallet randomly and draw a bill from it; then you may take the wallet you drew the bill from or keep your remaining cash. After you draw a bill, you cannot choose to take the other wallet. The probabilities were calculated as shown below.

Figure 1: Decision tree for 16.2

P(1000) = 0.5                                          probability you randomly pick Wallet 1000
P(320) = 0.5                                           probability you randomly pick Wallet 320
P(20) = P(1000)·0 + P(320)·0.25 = 0.125                probability that you pull a $20 bill
P(100) = P(1000)·1 + P(320)·0.75 = 0.875               probability that you pull a $100 bill
P(320 | 20) = 1.0                                      posterior probability
P(1000 | 20) = 0                                       posterior probability
P(320 | 100) = P(320) P(100 | 320) / P(100) = 3/7      posterior probability
P(1000 | 100) = P(1000) P(100 | 1000) / P(100) = 4/7   posterior probability

The optimal policy is to open a random wallet without taking the test, for an expected value of $660.

17.1 Money Bags, take 3

Interpret EVPI as the expected value of perfect information before you decide to take the test or randomly open a wallet. That is, you pick a wallet just as in Problem 15.1, therefore limiting yourself to that wallet or the cash [1]. EVPI will be the expected value of Monty telling you what is in the wallet you picked, versus your optimal policy without the opportunity of such a test [2]. The former expected value is EV_1 = (0.5)(600) + (0.5)(1000) = 800, corresponding to a half-chance of having picked the $1000 wallet and a half-chance of having picked the $320 wallet, in which case you keep the cash.

[1] If you assume that you can choose the other wallet once Monty tells you what's in the one you picked, the problem becomes trivial.
[2] Obviously, you only need to consider taking the cash and picking at random as alternatives, since EVPI > EVSI always.

The expected value of the optimal policy without the opportunity of the test is EV_2 = (1.0)(660) = 660, and their difference gives EVPI = 800 − 660 = 140.

The EVSI is calculated by comparing the EV of the best strategy given the pull-one-bill test versus the best strategy without the opportunity of the test. Thus,

EVSI = (0.125)(600) + (0.875)(708) − 660 = 34.5

Notice that I have not included the cost of the test ($100) in the calculation; the purpose of this calculation is to find the maximum cost we should be willing to pay for the test, so the cost should not be included. In this case, 34.5 < 100, and the test should not be taken.

Alternative interpretation & solution

During office hours, some students showed me an alternative interpretation of the problem. I am presenting it here because I think it demonstrates some subtleties of the process. The difference is about when perfect information becomes available (Figure 2). According to this scenario, you have the same three alternative decisions available at the outset: open a wallet randomly, take the (imperfect) test, or take the cash. The imperfect test (drawing a bill) still costs $100. If you do the test and pull out a $20, you know you should just take the cash; but if you draw a $100, you then have the choice of asking Monty what is in the wallet, and this will be perfect information.

Rolling back the tree reveals that the optimal policy after you draw a $100 is to ask Monty, for an expected value of $700, versus $608 if that perfect test were not available. Therefore, EVPI for this interpretation of the problem is 700 − 608 = 92. This time, I included the $100 cost of the imperfect test. Why? Because it refers to a different test, which happens upstream; EVPI, as calculated here, refers to the remainder of the tree after the node at which the perfect test becomes available.
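The 16.2 posteriors and the EVPI/EVSI comparison for the first interpretation can be reproduced in a few lines of Python. (Note the exact sample-information value is $35.0; the $34.5 in the text comes from rounding the posterior expected value to $708 before probability-weighting.)

```python
# Problem 16.2 / 17.1 numbers. Wallet 1000 holds ten $100 bills; Wallet 320
# holds three $100 bills and one $20 bill; the cash on hand is $600.
cash = 600.0
P_100 = 0.5 * 1.0 + 0.5 * 0.75           # 0.875: probability of drawing a $100
P_20 = 1 - P_100                         # 0.125: probability of drawing a $20
P_1000_given_100 = 0.5 * 1.0 / P_100     # 4/7: posterior after drawing a $100

EV_no_test = 0.5 * 1000 + 0.5 * 320      # 660: open a wallet at random

# Perfect information: Monty reveals the picked wallet; keep the cash if it
# turns out to be Wallet 320.
EV_perfect = 0.5 * 1000 + 0.5 * cash     # 800
EVPI = EV_perfect - EV_no_test           # 140

# Sample information: draw one bill. A $20 means take the cash; a $100 means
# keep the wallet, now worth (4/7)(1000) + (3/7)(320), about 708.6.
EV_after_100 = P_1000_given_100 * 1000 + (1 - P_1000_given_100) * 320
EVSI = P_20 * cash + P_100 * EV_after_100 - EV_no_test   # 35.0 exactly
```

Since EVSI is well below the $100 cost of drawing a bill, the conclusion is unchanged: the test is not worth taking.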
19.3 Utility manipulation III

The utility of a particular outcome x is given by

U(x) = √((100 + x) / 200)

We are looking for the probability p that makes the utility of the lottery (28, p; −68) equal to the utility of the certainty equivalent −50. Equivalently,

p U(28) + (1 − p) U(−68) = U(−50)

p = [U(−50) − U(−68)] / [U(28) − U(−68)] = 0.25

19.8 Workstations

Question A

The interview implies U(4000) = (1/2) U(6000) + (1/2) U(1000). Normalizing utility so that U(1000) = 1 and U(6000) = 0 yields U(4000) = 0.5, making $4000 Lee's certainty equivalent for this lottery. The expected value would be x̄ = (1/2)(6000) + (1/2)(1000) = 3500. The difference of $500 shows Lee's risk aversion (see Figure 3).
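A quick numerical check of 19.3; the printed utility formula is ambiguous, so the square-root form below should be treated as an assumption, chosen because it reproduces the stated answer p = 0.25:

```python
from math import sqrt

def U(x):
    # Assumed form of the utility function; consistent with the stated
    # certainty equivalent of -50 giving p = 0.25.
    return sqrt((100 + x) / 200)

# Indifference condition: p U(28) + (1 - p) U(-68) = U(-50)
p = (U(-50) - U(-68)) / (U(28) - U(-68))
print(round(p, 4))   # 0.25
```

The three utilities come out to U(−50) = 0.5, U(−68) = 0.4, and U(28) = 0.8, so p = 0.1/0.4 = 0.25. A linear utility would instead give p = 0.1875, which is one way to see that the function must be concave.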

Figure 2: Decision tree for 16.2, alternative interpretation

Figure 3: Lee's utility function for cost

Figure 4: Lee's utility function for speed

Question B

With regard to speed, follow the same procedure, noting that

U(10) = 0.2 U(24) + 0.8 U(4)
U(18) = 0.75 U(24) + 0.25 U(10)

Now normalize so that U(24) = 1 and U(4) = 0. Substituting leads to

U(10) = (0.2)(1) + (0.8)(0) = 0.2
U(18) = (0.75)(1) + (0.25)(0.2) = 0.8

Lee's utility function for speed is shown in Figure 4.

Optimal Investment Plan

Question 1

Running the drypress model for each growth scenario gives the results in Table 1. Denote the alternative plans Plan A, Plan B+, and Plan B, where Plan B+ denotes Plan B with expansion. For Plan B+ it was assumed that the decision to expand is made in year 3 and the second plant becomes operational in year 4. This is not terribly important, and you could obtain a correct solution (although different from this one) if you had assumed that the second plant becomes operational in year 3. It is also assumed that production is divided equally between the two plants (if both exist and total demand is between 5 and 10 million parts per year) [3].

[3] If you look into this closer, you will find that it is not on the expansion path; i.e., it is optimal to operate one plant at full capacity. Either approach is acceptable.

Table 1: Calculated NPV ($M) for each plan

              Low     Medium    High
Plan A       -3.37    18.88    24.60
Plan B+      -3.66    18.53    22.57
Plan B        1.40    12.49    13.51

Question 2

If all decisions are made in year 0, there are essentially three plans, the ones listed above. The corresponding decision tree is shown in Figure 5. According to this, the one-plant strategy (Plan A) is preferred, for an expected value of $13.37M.

Question 3

Of course, there is no reason for anybody to consider building a smaller plant (thus suffering higher production costs per unit) unless they can decide whether to expand it once more information becomes available. The decision tree for this case is shown in Figure 6. If the decision to expand Plan B is made in year 3, then the optimal policy is to build plant B with the possibility of expanding it in year 3 if demand growth is sufficiently high, for an expected value of $14.17M.

The second tree is enough for computing the expected value of a perfect test that predicts demand in year 0. This is done by probability-weighting the highest-NPV alternative for each demand growth scenario:

EVPI = (1/3) max[A_L, B+_L, B_L] + (1/3) max[A_M, B+_M, B_M] + (1/3) max[A_H, B+_H, B_H] − 14.17
     = 14.96 − 14.17 = 0.79

Surely, a clearer way to obtain the same result is to build another tree that reflects the sequence in which information becomes available (Figure 7).

The take-aways from this exercise are the following:

1. We can avoid the downside risks and take advantage of the upside uncertainty by delaying part of the investment.
2. The value of perfect information is an upper bound on the expected value of sample information. EVPI thus helps us decide whether extra information is worth exploring, given its cost.
3. If there is flexibility in the investment, e.g., in the form of sequential decisions, it is worth modeling and valuing explicitly.
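The expected values and the EVPI can be reproduced directly from Table 1; a short Python sketch (equal 1/3 scenario probabilities, NPVs in $M):

```python
# NPVs from Table 1, indexed by plan and demand-growth scenario.
npv = {
    "A":  {"low": -3.37, "med": 18.88, "high": 24.60},
    "B+": {"low": -3.66, "med": 18.53, "high": 22.57},
    "B":  {"low":  1.40, "med": 12.49, "high": 13.51},
}
p = 1 / 3
scenarios = ("low", "med", "high")

# Question 2: all decisions in year 0 -- pick the plan with the best expected NPV.
ev_fixed = {plan: sum(p * row[s] for s in scenarios) for plan, row in npv.items()}
best_plan = max(ev_fixed, key=ev_fixed.get)        # "A", EV ~ 13.37

# Question 3: build B, then expand in year 3 only in scenarios where it pays off.
ev_flex = sum(p * max(npv["B"][s], npv["B+"][s]) for s in scenarios)   # ~14.17

# Perfect information: best plan per scenario, probability-weighted.
ev_perfect = sum(p * max(npv[plan][s] for plan in npv) for s in scenarios)
EVPI = ev_perfect - ev_flex                        # ~14.96 - 14.17 = 0.79
```

Note that the flexible plan beats every fixed plan precisely because the expansion decision is deferred until the growth scenario is (partly) revealed.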
The NPV computed in Exercise 1 using the average growth rate is, as expected, different from the average NPV computed over the three growth scenarios (remember that E[V(x̃)] < V(E[x̃]) if V is a concave function of the random variable x̃). Sequential decision-making reduces the risk of the project. In theory, this reduction of risk could be incorporated in the choice of discount rate. However, this never happens in practice, mainly because the corresponding reduction in the discount rate is very hard to compute.
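The concavity point is Jensen's inequality, and it can be illustrated with any concave value function; a toy sketch (the scenario values and the square-root value function here are illustrative, not the drypress model's):

```python
from math import sqrt
from statistics import mean

# Three equally likely outcomes of a random variable x.
x = [1.0, 4.0, 16.0]

avg_of_values = mean(sqrt(v) for v in x)   # E[V(x)] = (1 + 2 + 4)/3
value_of_avg = sqrt(mean(x))               # V(E[x]) = sqrt(7)

# Jensen's inequality for concave V: E[V(x)] < V(E[x]).
assert avg_of_values < value_of_avg
```

This is exactly why valuing a project at the average growth rate overstates the expected value computed scenario by scenario when value is concave in growth.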

Figure 5: Capacity expansion with decisions made in Year 0

Figure 6: Capacity expansion with decisions made in Year 0 and Year 3

Figure 7: Capacity expansion, expected value of perfect information