ESD.71 Engineering Systems Analysis for Design
Assignment 4 Solution
November 18, 2003

15.1 Money Bags

Call Bag A the bag with $640 and Bag B the one with $280. Also, denote the probabilities:

P(A) = 0.5                                    the probability of choosing bag A with no information
P(10|A) = 0.6                                 the probability of drawing a $10 bill from bag A
P(10|B) = 0.2                                 the probability of drawing a $10 bill from bag B
P(1|A) = 0.4                                  the probability of drawing a $1 bill from bag A
P(1|B) = 0.8                                  the probability of drawing a $1 bill from bag B
P(10) = P(A)P(10|A) + P(B)P(10|B) = 0.4       the total probability of drawing a $10 bill

Question A

By direct application of Bayes' theorem,

P(A|10) = P(A)P(10|A)/P(10) = (0.5)(0.6)/0.4 = 0.75

Question B

Suppose you pick up a bag and draw one $10 bill, replace it, and subsequently draw two $1 bills (replacing them each time). You want to know the probability that the bag you picked up is Bag A. The easiest way to solve this is with likelihood ratios:

CLR(10) = P(10|A)/P(10|B) = 3
CLR(1)  = P(1|A)/P(1|B)   = 0.5
LR_0    = P(A)/P(B)       = 1

LR_3 = LR_0 · CLR(10)^1 · CLR(1)^2 = 0.75

P(A|{10,1,1}) = LR_3/(1 + LR_3) = 0.43

Since P(A|{10,1,1}) < 0.5, you should pick the other bag for a higher probability that it is Bag A.
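As a numerical check (not part of the original solution), the Bayes/likelihood-ratio arithmetic above can be verified with a few lines of Python. The function name and parameters are illustrative; the probabilities are taken directly from the problem.

```python
# Sketch: posterior probability of Bag A after a sequence of draws
# (with replacement), using prior odds times the likelihood ratio.

def posterior_A(draws, p_A=0.5, p10_A=0.6, p10_B=0.2):
    """P(Bag A | observed draws); each draw is 10 or 1."""
    like_A = like_B = 1.0
    for d in draws:
        # Draws are independent, so likelihoods multiply.
        like_A *= p10_A if d == 10 else (1 - p10_A)
        like_B *= p10_B if d == 10 else (1 - p10_B)
    lr = (p_A / (1 - p_A)) * (like_A / like_B)   # prior odds x likelihood ratio
    return lr / (1 + lr)

print(round(posterior_A([10]), 4))              # 0.75   (Question A)
print(round(posterior_A([10, 1, 1]), 4))        # 0.4286 (Question B)
print(round(posterior_A([10, 1, 1, 10]), 4))    # 0.6923 (Question C)
```

The same routine reproduces the successive-Bayes results, since the odds form of Bayes' theorem is algebraically identical to the direct computation.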
The same result obtains from successive applications of Bayes' theorem. Using the notation above,

P(A|{10,1}) = P(A)P({10,1}|A)/P({10,1}) = (0.5)(0.24)/0.2 = 0.6

where

P({10,1}|A) = (0.6)(0.4) = 0.24

and

P({10,1}) = P({10,1}|A)P(A) + P({10,1}|B)P(B) = 0.2

Similarly,

P(A|{10,1,1}) = P(A)P({10,1,1}|A)/P({10,1,1}) = (0.5)(0.096)/0.112 = 0.43

where

P({10,1,1}|A) = (0.6)(0.4)^2 = 0.096

and

P({10,1,1}) = P({10,1,1}|A)P(A) + P({10,1,1}|B)P(B) = 0.112

Question C

Using likelihood ratios,

LR_4 = LR_0 · CLR(10)^2 · CLR(1)^2 = 2.25

P(A|{10,1,1,10}) = LR_4/(1 + LR_4) = 0.69

and since P(A|{10,1,1,10}) > 0.5, you should pick this bag for a higher probability that it is Bag A.

16.2 Money Bags, take 2

Here we have two wallets to choose from. Call them Wallet 1000 and Wallet 320, according to the bills they contain:

              $100 bills   $20 bills
Wallet 1000       10           0
Wallet 320         3           1

Figure 1 shows the corresponding decision tree. This construction assumes that you may take the cash, open a random wallet, or choose to do the test. In the last case, you pick a wallet randomly and draw a bill from it. Then you may take the wallet you drew a bill from or keep your remaining cash. After you draw a bill, you cannot choose to take the other wallet. The probabilities were calculated as shown:
Figure 1: Decision tree for 16.2

P(1000) = 0.5                                     probability you randomly pick Wallet 1000
P(320) = 0.5                                      probability you randomly pick Wallet 320
P(20) = P(1000)·0 + P(320)·0.25 = 0.125           probability that you pull a $20
P(100) = P(1000)·1 + P(320)·0.75 = 0.875          probability that you pull a $100
P(320|20) = 1.0                                   posterior probability
P(1000|20) = 0                                    posterior probability
P(320|100) = P(320)P(100|320)/P(100) ≈ 0.43       posterior probability
P(1000|100) = P(1000)P(100|1000)/P(100) ≈ 0.57    posterior probability

The optimal policy is to open a random wallet without taking the test, for an expected value of $660.

17.1 Money Bags, take 3

Interpret EVPI as the expected value of perfect information before you decide to take the test or randomly open a wallet. That is, you pick a wallet just as in Problem 15.1 (therefore limiting yourself to that wallet or the cash).¹ EVPI will be the expected value of Monty telling you what is in the wallet you picked, versus your optimal policy without the opportunity of such a test.² The former expected value will be EV_1 = (0.5)(600) + (0.5)(1000) = 800, corresponding to a half-chance of having picked the $1000 wallet and a half-chance of having picked the $320 wallet, in which case you keep the cash ($600).

¹ If you assume that you can choose the other wallet once Monty tells you what's in the one you picked, the problem becomes trivial.
² Obviously, you only need to consider taking the cash and picking at random as alternatives, since EVPI > EVSI always.
The expected value of the optimal policy without the opportunity of the test is EV_2 = (1.0)(660) = 660, and their difference gives EVPI = 800 − 660 = 140.

The EVSI is calculated by comparing the EV of the best strategy given the "pull one bill" test versus the best strategy without the opportunity of the test. Thus,

EVSI = (0.125)(600) + (0.875)(708) − 660 = 34.5

Notice that I have not included the cost of the test ($100) in the calculation; the purpose of this calculation is to find the maximum cost we should be willing to pay for the test, so it should not be included. In this case, 34.5 < 100 and the test should not be taken.

Alternative interpretation & solution

During office hours, some showed me an alternative interpretation of the problem. I am presenting it here because I think it demonstrates some subtleties of the process. The difference is about when perfect information becomes available (Figure 2). According to this scenario, you have the same three alternative decisions available at the outset: to open the wallet randomly, to take the (imperfect) test, or to take the cash. The imperfect test (drawing a bill) still costs $100. If you do the test and pull out a $20, you know you should just take the cash, but if you draw a $100, you then have the choice of asking Monty what is in the wallet, and this will be perfect information.

Rolling back the tree reveals that the optimal policy after you draw a $100 is to ask Monty, for an expected value of $700 versus $608 if that perfect test was not available. Therefore, EVPI for this interpretation of the problem is 700 − 608 = 92. This time, I included the cost of $100 for the imperfect test. Why? Because it refers to a different test, which happens upstream. EVPI, as calculated here, refers to the remainder of the tree after the node where the perfect test becomes available.
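The roll-back of the first interpretation can be reproduced numerically. The sketch below (not part of the original solution) encodes the wallet contents and draw probabilities from the text; note that at full precision the posterior branch value is $708.57, so the exact EVSI is 35.0, and the 34.5 above reflects rounding that branch to $708.

```python
# Sketch: rolling back the 16.2/17.1 decision trees numerically.
# All dollar amounts and probabilities are taken from the text.

CASH = 600.0
WALLETS = {"1000": 1000.0, "320": 320.0}
P_PRIOR = {"1000": 0.5, "320": 0.5}
# P(draw a $100 | wallet): Wallet 1000 holds only $100 bills; Wallet 320 holds 3 of 4.
P_100 = {"1000": 1.0, "320": 0.75}

# No test: open a random wallet
ev_open = sum(P_PRIOR[w] * WALLETS[w] for w in WALLETS)                # 660

# EVPI (first interpretation): Monty reveals the picked wallet's contents
ev_perfect = sum(P_PRIOR[w] * max(WALLETS[w], CASH) for w in WALLETS)  # 800
evpi = ev_perfect - ev_open                                            # 140

# EVSI: draw one bill, then take that wallet or keep the cash
p_draw100 = sum(P_PRIOR[w] * P_100[w] for w in WALLETS)                # 0.875
p_draw20 = 1.0 - p_draw100                                             # 0.125
post_100 = {w: P_PRIOR[w] * P_100[w] / p_draw100 for w in WALLETS}     # Bayes
ev_after_100 = max(sum(post_100[w] * WALLETS[w] for w in WALLETS), CASH)
ev_after_20 = max(WALLETS["320"], CASH)          # a $20 reveals Wallet 320
ev_test = p_draw20 * ev_after_20 + p_draw100 * ev_after_100
evsi = ev_test - ev_open                         # 35.0 at full precision
```

Since EVSI < $100, the roll-back confirms that the bill-drawing test is not worth its cost.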
19.3 Utility manipulation III

The utility of a particular x is given by

U(x) = sqrt[(100 + x)/200]

We are looking for the probability that makes the utility of the lottery (28, p; −68) equal to the utility of the certainty equivalent, −50. Equivalently,

pU(28) + (1 − p)U(−68) = U(−50)

p = [U(−50) − U(−68)] / [U(28) − U(−68)] = (0.5 − 0.4)/(0.8 − 0.4) = 0.25

19.8 Workstations

Question A

The interview implies U(4000) = ½U(6000) + ½U(1000). Normalizing utility so that U(1000) = 1 and U(6000) = 0 yields U(4000) = 0.5; that is, $4000 is Lee's certainty equivalent for such a lottery. The expected value would be x̄ = ½(6000) + ½(1000) = 3500. The difference of $500 shows Lee's risk aversion (see Figure 3).
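Both calculations can be checked in a few lines (not part of the original solution; the square-root utility is the form reconstructed above, which is the one consistent with p = 0.25):

```python
# Sketch: verify the indifference probability in 19.3 and the
# risk premium in 19.8 Question A.
import math

def U(x):
    # Utility function from 19.3
    return math.sqrt((100 + x) / 200)

# 19.3: solve pU(28) + (1-p)U(-68) = U(-50) for p
p = (U(-50) - U(-68)) / (U(28) - U(-68))
print(round(p, 4))      # 0.25

# 19.8A: the interview's 50/50 lottery has expected cost 3500,
# while Lee's certainty equivalent is 4000.
ev = 0.5 * 6000 + 0.5 * 1000
print(4000 - ev)        # 500.0, Lee's risk premium on cost
```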
Figure 2: Decision tree for 16.2, alternative interpretation

Figure 3: Lee's utility function for cost
Figure 4: Lee's utility function for speed

Question B

With regard to speed, follow the same procedure, noting that

U(10) = 0.2U(24) + 0.8U(4)
U(18) = 0.75U(24) + 0.25U(10)

Now normalize so that U(24) = 1 and U(4) = 0. Substituting leads to

U(10) = (0.2)(1) + (0.8)(0) = 0.2
U(18) = (0.75)(1) + (0.25)(0.2) = 0.8

Lee's utility function for speed is shown in Figure 4.

Optimal Investment Plan

Question 1

Running the drypress model for each growth scenario gives the results in Table 1. Denote the alternative plans Plan A, Plan B+, and Plan B, where Plan B+ denotes Plan B with expansion. For Plan B+ it was assumed that the decision to expand is made in year 3 and the second plant becomes operational in year 4. This is not terribly important, and you could obtain a correct solution (although different from this one) if you had assumed that the second plant becomes operational in year 3. Also, it is assumed that production is equally divided between the two plants (if they both exist and if total demand is between 5 and 10 million parts per year).³

³ If you look into this closer, you will find that it is not on the expansion path; i.e., it is optimal to operate one plant at full capacity. Either approach is acceptable.
Table 1: Calculated NPV ($M) from each plan

Growth rate:    Low     Medium    High
Plan A         -3.37    18.88     24.60
Plan B+        -3.66    18.53     22.57
Plan B          1.40    12.49     13.51

Question 2

If all decisions are made in year 0, there are essentially three plans, the ones mentioned above. The corresponding decision tree is shown in Figure 5. According to this, the one-plant strategy (Plan A) is preferred, for an expected value of $13.37M.

Question 3

Of course, there is no reason for anybody to consider building a smaller plant (thus suffering higher production costs per unit) unless they can decide whether or not to expand it when more information becomes available. The decision tree for this case is shown in Figure 6. If the decision to expand Plan B is made in year 3, then the optimal policy is to build plant B with the possibility of expanding it in year 3 if demand is sufficiently high, for an expected value of $14.17M.

The second tree is enough for computing the expected value of a perfect test that predicts demand in year 0. This is done by probability-weighting the highest-NPV alternative for each demand growth, as shown below:

EVPI = (1/3)max[A_L, B+_L, B_L] + (1/3)max[A_M, B+_M, B_M] + (1/3)max[A_H, B+_H, B_H] − 14.17
     = 14.96 − 14.17 = 0.79

Surely, a clearer way to obtain the same result is to build another tree which reflects the sequence in which information becomes available (Figure 7). The take-aways from this exercise are the following:

1. We can avoid the downside risks and take advantage of the upside uncertainty by delaying part of the investment.
2. By the value of perfect information, we know the upper bound of the expected value of sample information. EVPI thus helps us decide if extra information is worth exploring given its cost.
3. If there exists flexibility in the investment, e.g., in the form of sequential decisions, it is worth modeling and valuing explicitly.
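The expected values in Questions 2 and 3 follow directly from Table 1 with equal scenario weights. The sketch below (not part of the original solution) recomputes them; the scenario labels and dictionary layout are illustrative.

```python
# Sketch: expected values and EVPI from the NPVs in Table 1 ($M),
# with equal (1/3) probability on each growth scenario.
npv = {
    "A":  {"low": -3.37, "med": 18.88, "high": 24.60},
    "B+": {"low": -3.66, "med": 18.53, "high": 22.57},
    "B":  {"low":  1.40, "med": 12.49, "high": 13.51},
}
scenarios = ["low", "med", "high"]

# Question 2: commit to a single plan in year 0
ev = {plan: sum(npv[plan][s] for s in scenarios) / 3 for plan in npv}
best_fixed = max(ev, key=ev.get)          # Plan A, EV ~ 13.37

# Question 3: flexible strategy = build B, expand only when demand warrants.
# Per the text, expansion pays off in the medium and high scenarios.
ev_flex = (npv["B"]["low"] + npv["B+"]["med"] + npv["B+"]["high"]) / 3  # ~14.17

# EVPI: pick the best plan in each scenario, then probability-weight
ev_perfect = sum(max(npv[p][s] for p in npv) for s in scenarios) / 3    # ~14.96
evpi = ev_perfect - ev_flex                                             # ~0.79
```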
The NPV computed in Exercise 1 using the average growth rate is, as expected, different from the average NPV computed from the three growth scenarios (remember E[V(x̃)] < V(E[x̃]) if V is a concave function of the random variable x̃). Sequential decision-making reduces the risk of the project. In theory, this reduction of risk can be incorporated in the choice of discount rate. However, this never happens in practice, mainly because the corresponding reduction in the discount rate is very hard to compute.
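The concavity point can be illustrated with a quick simulation (illustrative only, not from the solution; sqrt stands in for any concave valuation function):

```python
# Sketch: Jensen's inequality for a concave V, here V(x) = sqrt(x),
# showing E[V(x)] < V(E[x]) for a non-degenerate distribution.
import math
import random

random.seed(0)
xs = [random.uniform(1.0, 100.0) for _ in range(100_000)]
mean_of_v = sum(math.sqrt(x) for x in xs) / len(xs)   # E[V(x)]
v_of_mean = math.sqrt(sum(xs) / len(xs))              # V(E[x])
print(mean_of_v < v_of_mean)                          # True
```

This is exactly why valuing a project at the average growth rate overstates the scenario-averaged NPV.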
Figure 5: Capacity expansion with decisions made in Year 0
Figure 6: Capacity expansion with decisions made in Year 0 and Year 3

Figure 7: Capacity expansion: expected value of perfect information