FW544: Sensitivity analysis and estimating the value of information


During the previous laboratories, we learned how to build influence diagrams for estimating the outcomes of management actions and how to calculate posterior probabilities using prior probabilities and new information. Here we integrate both approaches to evaluate the sensitivity of management decisions to various sources of uncertainty and to quantify the value of information.

SENSITIVITY ANALYSIS
Sensitivity analysis is used to identify the components that have the greatest effect on the expected value of the decision and, more importantly, the components that have the greatest effect on which decision alternative is estimated to be the best. As we discussed in lecture, there are many methods for evaluating the sensitivity of decision models and no single method is best. Thus, practitioners should always evaluate the sensitivity of decision models using multiple approaches. In lecture, we introduced four types of sensitivity analysis that are commonly used in natural resource decision modeling: one-way and two-way sensitivity analysis, response profile sensitivity analysis, and indifference curves. In this lab, we will learn how to conduct each analysis. We illustrate sensitivity analysis using an amphibian conservation decision model: an influence diagram of a hypothetical management decision for an isolated metapopulation of a rare amphibian species. Small, isolated wetlands are the only suitable habitat for the species and are treated as the habitat patches of the metapopulation. The fundamental management objective is to maximize the persistence of the metapopulation, and the utility is the probability of persistence. However, managers have a fixed budget and can only implement various combinations of two management options: improve connectivity among wetlands by removing migration barriers, or increase the number of wetlands through restoration activities. Open Wetland_decision_model.neta and examine.

ONE-WAY SENSITIVITY ANALYSIS
To evaluate the sensitivity of the influence diagram model using one-way sensitivity analysis, the values of each model component except the decision and utility are systematically varied from minimum to maximum. For example, here the values of the node current wetland colonization probability are varied from its minimum to its maximum. Across the range of values for current wetland colonization probability, the smallest expected value of the optimal decision is 1.43, when the probability is set to the range 0 to 0.2, and the largest is 88.65, when the probability is set to the range 0.8 to 1. In a one-way sensitivity analysis, the process is repeated for each model component (except the decision and utility nodes), and the minimum and maximum expected values of the optimal decision are recorded, as shown below.

Model component | Optimal decision at minimum | Minimum expected value | Optimal decision at maximum | Maximum expected value
Current number of wetlands | | 3.8 | improve connectivity | 31.6
Current wetland colonization probability | | 1.4 | | 88.6
Wetland colonization probability | | 0.2 | | 90.6
Number of wetlands | improve connectivity | 0.7 | improve connectivity | 31.8
Wetland persistence probability | | 3.8 | | 81.2

Here we also recorded the identity of the optimal decision. Notice that for several model components the optimal decision is the same regardless of the value of the model component; in other words, the decision maker would always choose the same alternative. Recall from lecture that this is known as stochastic dominance. The results of a one-way sensitivity analysis are displayed graphically in a tornado diagram created from the values obtained during the sensitivity analysis. Open Sensitivity analysis results.xlsx and examine. A tornado diagram is a two-dimensional plot with the expected values for the optimal decision plotted along the x-axis and the model components on the y-axis. The sensitivity of the expected value to changes in each model component is represented by a horizontal bar that spans the range of expected values observed during the sensitivity analysis. A tornado diagram can be created in Excel using the horizontal bar chart option shown in the One-way sensitivity tab. The same plot can be created in R using the data in one.way.sensitivity.csv and the R program one.way.sensitivity.plot.r. Open the script and examine. Try reading the file into R and running the code. Does it look any different from the Excel version?
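For reference, a minimal R sketch of a tornado-style plot is shown here. The data frame is built directly from the minimum and maximum expected values in the table above; the object names and plotting approach are illustrative and may differ from what one.way.sensitivity.plot.r does.

# Minimal tornado-diagram sketch using the one-way sensitivity results above.
# Object names are illustrative; the course script may differ.
sens <- data.frame(
  component = c("Current number of wetlands",
                "Current wetland colonization probability",
                "Wetland colonization probability",
                "Number of wetlands",
                "Wetland persistence probability"),
  min_ev = c(3.8, 1.4, 0.2, 0.7, 3.8),
  max_ev = c(31.6, 88.6, 90.6, 31.8, 81.2)
)

# Order components by swing (max - min) so the widest bar plots on top
sens$swing <- sens$max_ev - sens$min_ev
sens <- sens[order(sens$swing), ]

# Draw horizontal bars spanning the min to max expected value for each component
op <- par(mar = c(5, 14, 2, 2))            # wide left margin for the component labels
barplot(height = sens$swing, names.arg = sens$component,
        horiz = TRUE, las = 1, offset = sens$min_ev,
        xlim = c(0, 100), xlab = "Expected value of the optimal decision",
        main = "Tornado diagram (one-way sensitivity)")
par(op)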

TWO-WAY SENSITIVITY ANALYSIS
In a two-way sensitivity analysis, two model components are varied simultaneously. For example, here the values of the node current wetland colonization probability are varied from minimum to maximum while the value of current number of wetlands is fixed at 5. The value of current number of wetlands is then fixed at 6, and the process is repeated until you have the expected value of the optimal decision for each combination of the two components, as shown in the Two-way sensitivity tab of the Excel workbook. A two-way sensitivity analysis can be displayed as a contour plot of the expected value of the optimal decision for each combination of values of the two model components. The contour plot can be created in R using the comma separated file two.way.sensitivity.data.csv and the R script two.way.sensitivity.plot.r. Open the R script and examine. Here we need to create a matrix of the expected values and two vectors of current wetland colonization probability and current number of wetlands in ascending order. Note that we use the midpoint value of each range in the current wetland colonization probability node. Try reading the file into R and running the script. Which component is the model most sensitive to? Do the two components interact, or are their effects additive?
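A minimal sketch of the kind of contour plot the script produces is shown here. The expected values in the matrix are made-up placeholders used only to show the plotting mechanics; the real values come from two.way.sensitivity.data.csv.

# Sketch of a two-way sensitivity contour plot.
# The expected values below are placeholders, not results from the model.
colonization <- c(0.1, 0.3, 0.5, 0.7, 0.9)    # midpoints of the probability ranges
n.wetlands   <- 5:10                          # current number of wetlands

# Matrix of expected values of the optimal decision:
# rows follow colonization probability, columns follow number of wetlands
ev <- outer(colonization, n.wetlands, function(p, n) 100 * p * n / (n + 5))

filled.contour(x = colonization, y = n.wetlands, z = ev,
               xlab = "Current wetland colonization probability (midpoint)",
               ylab = "Current number of wetlands",
               main = "Expected value of the optimal decision")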

RESPONSE PROFILE SENSITIVITY ANALYSIS
Response profile sensitivity analysis is used to evaluate the sensitivity of the decision itself to the model components. As in one-way and two-way sensitivity analysis, one or two model components are varied to evaluate the change in the expected value. Unlike one-way sensitivity analysis, the expected value for each decision, not just the optimal decision, is recorded. These values are then plotted to examine how the optimal decision changes over the range of values for the model component. The one-way sensitivity analysis of the amphibian conservation decision model in the table above indicated that the expected value was most sensitive to the future and current wetland colonization components, but the same decision was stochastically dominant across those components. That is, the decision did not change across the range of values, so a response profile sensitivity analysis is not necessary because the decision does not change. In contrast, the decision changed across the values for the current number of wetlands component. To create a response profile for current number of wetlands, the value of the component is varied from 5 to 15 (i.e., its range) and the expected value is recorded for each decision and plotted. After completing the task, we have the values for each decision in the Profile sensitivity tab of the Excel file. The response profile plot is created by plotting the expected values for each decision, as shown in the Excel file. The plot can also be created using the comma separated file profile.sensitivity.data.csv and the R script response.profile.sensitivity.r. Open the data and the script and examine. Try running the program. Notice that the optimal decision, i.e., the one with the greatest utility, remains the same until a certain point. What is that point? What is the value of current number of wetlands when the optimal decision changes? What does that mean?
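A minimal sketch of the kind of response profile plot the script produces is shown here. The decision labels and expected values are placeholders, not output from the model; the real values come from profile.sensitivity.data.csv.

# Sketch of a response profile plot: expected value of each decision
# across the range of current number of wetlands. Values are placeholders.
n.wetlands    <- 5:15
ev.decision.A <- seq(4, 30, length.out = length(n.wetlands))    # hypothetical
ev.decision.B <- seq(12, 26, length.out = length(n.wetlands))   # hypothetical

matplot(n.wetlands, cbind(ev.decision.A, ev.decision.B), type = "l",
        lty = 1:2, col = c("black", "grey40"),
        xlab = "Current number of wetlands", ylab = "Expected value",
        main = "Response profile sensitivity analysis")
legend("topleft", legend = c("Decision A", "Decision B"),
       lty = 1:2, col = c("black", "grey40"))
# The crossing point of the two lines is where the optimal decision changes.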

INDIFFERENCE CURVES
Previously, we discussed various methods for combining the outcomes for multiple objectives into a single value. The weighting and scoring of multiple objectives can be problematic, especially if the objectives are on very different scales (e.g., dollars vs. number of fish). Indifference curves are used to evaluate tradeoffs when considering multiple objectives. The basic idea is to evaluate alternative weighting schemes to find the point at which the values of the decision alternatives are equal and the decision-maker is indifferent to the decision (i.e., no single decision is optimal). To illustrate, the table below contains the set of values for the temporary wetland management decision, Wetland_decision_model.neta. Open the model in Netica. Notice that the utility is a function of duckling density and the management action, two things on very different scales. Open the table for the utility node. The utility was created by combining ranks of the states of each node. The management decision is ranked with do nothing as 3 because it is the least expensive and least risky decision, and burn as 1 because it is the most costly and riskiest action. Similarly, dabbling duck density is valued (ranked) from 1 at the lowest density to 6 at the greatest density.

Decision | Value (M) | Duckling density (no./ha) | Value (D) | Combined value (M + D)
Do nothing | 3 | 0-5 | 1 | 4
Do nothing | 3 | 5-10 | 2 | 5
Do nothing | 3 | 10-15 | 3 | 6
Do nothing | 3 | 15-20 | 4 | 7
Do nothing | 3 | 20-25 | 5 | 8
Do nothing | 3 | 25-30 | 6 | 9
Mow | 2 | 0-5 | 1 | 3
Mow | 2 | 5-10 | 2 | 4
Mow | 2 | 10-15 | 3 | 5
Mow | 2 | 15-20 | 4 | 6
Mow | 2 | 20-25 | 5 | 7
Mow | 2 | 25-30 | 6 | 8
Burn | 1 | 0-5 | 1 | 2
Burn | 1 | 5-10 | 2 | 3
Burn | 1 | 10-15 | 3 | 4
Burn | 1 | 15-20 | 4 | 5
Burn | 1 | 20-25 | 5 | 6
Burn | 1 | 25-30 | 6 | 7

The combined objective value was calculated as an additive function of the two separate objective values. If the objectives are given equal weight (importance), we simply add the two values together, as shown above. To evaluate tradeoffs in the weighting of the objectives, we create alternative combinations by weighting either the decision rank or the dabbling duck density rank more heavily than the other and evaluating how the new scores for each objective affect decision-making. For example, under an equal weighting scheme, the value of the do nothing decision combined with the outcome 0-5 ducklings/ha is 3 + 1 = 4. If the duckling density rank is given twice the weight of the decision rank, the value of that same decision and outcome combination is 3 + 2*1 = 5. The Indifference curve tab of the Excel workbook has the utilities for weighting schemes from 1 to 5. These values can be copied from the worksheet and pasted into the utility table in Netica. To create indifference curves, alternative sets of combined utilities are created by systematically increasing the weight of a single objective (e.g., the duck density ranks in the table above) and recording the expected value for each decision. These values are plotted to identify the weight at which the decision maker is indifferent between the optimal decisions. The indifference curves can be created using Excel, or with indifference.curve.data.csv and the R script indifference.curve.r. Run the script and examine the plot. For the temporary wetland management decision, the expected value is greatest for the do nothing alternative when the weight of the dabbling duck density is equal to that of the cost and risk of the management decision. The point where the decision-maker would change the decision from do nothing to mow corresponds to a weight of approximately 1.6. This suggests that deciding to mow requires the decision maker to value the dabbling duck density outcome 1.6 times more than the cost and risk of mowing.
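As a check on the Excel worksheet, here is a minimal R sketch of how the weighted utility tables could be generated; the object names are illustrative, and the weights match the 1 to 5 schemes described above.

# Sketch: generate combined utilities for a range of duck-density weights.
# These tables correspond to what is pasted into the Netica utility node.
decision <- rep(c("Do nothing", "Mow", "Burn"), each = 6)
m.value  <- rep(c(3, 2, 1), each = 6)       # rank of each management action
d.value  <- rep(1:6, times = 3)             # rank of each duckling density class

weights   <- 1:5                            # weight applied to the density rank
utilities <- sapply(weights, function(w) m.value + w * d.value)
colnames(utilities) <- paste0("weight_", weights)
cbind(data.frame(decision, m.value, d.value), utilities)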

VALUE OF INFORMATION

EXPECTED VALUE OF PERFECT INFORMATION
The expected value of perfect information (EVPI) is the increase in the expected value of a decision if the 'true' value of a model component (or the relationship among components) were known with certainty. Thus, it can be used, in part, to identify and rank potential variables for monitoring or additional data collection efforts. To illustrate the calculation of EVPI, consider the simple conservation decision model in EVPI.neta. Open the Netica file and examine. Here the decision to conduct a management action affects the degree of environmental disturbance, which in turn affects the future status of an animal population. The future population status is also affected by the current status of the population, which takes two states, present and absent. The value of the decision depends on the management action and the future population status. When the probability of current species presence is 50%, the optimal decision is low intensity with an expected value of 9.65. If we had information from a space age sensor that told us the species is present with 100% probability, the optimal decision would be none with a value of 18.25. If that same sensor told us the species is absent with 100% probability, the optimal decision would be high intensity with a value of 5.45. Unfortunately, we don't have the space age sensor, so the probabilities of current species presence and absence in the model are 50%-50%, based on, say, an occupancy model. These are our best estimates of the probability that the species is present, so we use them to weight the expected values of the optimal decisions under perfect information as follows: 18.25*0.5 + 5.45*0.5 = 11.85. This is the expected value of the decision if the population status were known with certainty. EVPI is calculated as the difference between the expected value with and without knowing the current population status: 11.85 - 9.65 = 2.20.
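A minimal R sketch of this calculation, using the expected values reported above, is:

# EVPI for the two-state (present/absent) example
prior      <- c(present = 0.5, absent = 0.5)      # current belief about species presence
ev.perfect <- c(present = 18.25, absent = 5.45)   # EV of the optimal decision under each known state
ev.no.info <- 9.65                                # EV of the optimal decision without new information

evpi <- sum(prior * ev.perfect) - ev.no.info
evpi                                              # 11.85 - 9.65 = 2.20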

The preceding was a simple example with two discrete states. Let's assume that rather than presence and absence, we had the density of the species, normally distributed with mean = 10 and sd = 2.25, and that abundance is discretized into 3 states: 0 to 8, 8 to 11, and 11 to 20 animals per unit area. The probabilities of each state can be calculated using the following R code:

> mu <- 10
> sd <- 2.25
> s1 <- pnorm(8, mu, sd) - pnorm(0, mu, sd)
> s2 <- pnorm(11, mu, sd) - pnorm(8, mu, sd)
> s3 <- pnorm(20, mu, sd) - pnorm(11, mu, sd)
> probs <- c(s1, s2, s3)
> probs
[1] 0.1870270 0.4846080 0.3283562

Assume that the expected values of the optimal decisions, given abundance is known with 100% probability to be in states 1 through 3 (i.e., we used the space age sensor), are 10, 15, and 21, respectively. The weighted expected value when we have perfect information is (using R again):

> s1*10 + s2*15 + s3*21
[1] 16.03487

Using an expected value of the optimal decision of 10 when the true population size is unknown, the expected value of perfect information is 16.03 - 10 = 6.03.
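The same calculation can be written in a couple of R lines using the probabilities computed above:

> ev.perfect <- c(10, 15, 21)   # EV of the optimal decision in each abundance state
> ev.no.info <- 10              # EV of the optimal decision when abundance is unknown
> sum(probs * ev.perfect) - ev.no.info
[1] 6.03487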

EXPECTED VALUE OF IMPERFECT INFORMATION
Although the EVPI can be useful as a first approximation, it is usually not realistic to expect information to be perfect. Accounting for error in measurements or models requires estimating the expected value of imperfect information (EVII). EVII is considerably more complicated to estimate than EVPI: it requires an estimate of the expected outcome and the use of Bayes rule, which we learned in the previous lesson, to calculate probabilities and expected values.

We illustrate calculating EVII using the previous conservation decision model. Open EVII.neta and examine. Here the decision-maker is interested in evaluating the value of conducting sample surveys in the potential management area to detect the species. One source of imperfection in sample data that would affect the value of conducting surveys is incomplete detection. We first need an estimate of the probability of detecting the species, given it was present. Assume that 10 samples are needed to detect the species with an 80% probability, given the species is present in the area. The probability of detecting the species if it is not present is, of course, zero. Assuming that there is a 50% chance that the species is present, the probability of detecting the species with 10 samples is 0.8*0.5 = 0.4 and the probability of not detecting it is 1.0 - 0.4 = 0.6. This new component is added to the model and is displayed graphically in the influence diagram with current species presence pointing into a sampling result node. If the species is detected, we assume it is present. If the species is not detected, the species may be absent, or it may be present but was missed during sampling. To incorporate the possibility of falsely concluding the species is absent, we use Bayes rule to estimate the probability the species is present given it was not detected, known as the posterior probability of presence. The posterior probability of presence requires four estimates:

# prior probability of presence
> prior.pres <- 0.5
# prior probability of absence
> prior.abs <- 1 - prior.pres
# probability of detection given presence
> p.detect.pres <- 0.8
# probability of not detecting the species given absence
> p.not_detect.abs <- 1
# sample outcome: the species is not detected
> like <- dbinom(0, 1, p.detect.pres)
> post <- like*prior.pres/(like*prior.pres + p.not_detect.abs*prior.abs)
> post
[1] 0.1666667

The probability of absence given the species was not detected is the complement, 1 - 0.1667 = 0.8333. These values are then entered into the model as the conditional probabilities of species presence, given the sampling result not detected. Graphically, this is depicted in the influence diagram by reversing the arrow between current species presence and sampling results. The next step is to estimate the value of the optimal decision for each sampling result using the new model. Open the influence diagram with the sampling results node and examine the expected value of the decision when each sampling result is known with 100% certainty. For the sampling result detected, the optimal decision is none with a value of 18.25, the same value as when presence was set to 100%. When the sampling result is not detected, the optimal decision is the high intensity management action with a value of 6.83. Notice that this differs from the expected value of perfect information example, where we assumed absence with 100% probability. This is because there is a 16.7% chance that the species occurs given it was not detected, which changes the value of the optimal decision. In the final step of estimating EVII, the expected values are again weighted by the probability of obtaining each sampling result and summed: 18.25*0.40 + 6.83*0.60 = 11.40. This is the expected value of the decision if sampling is conducted. Subtracting the expected value without sampling gives 11.40 - 9.65 = 1.75 as the EVII. Notice that the EVII of 1.75 is lower than the EVPI of 2.20. This should always be the case; the value of imperfect information should always be smaller than that of perfect information. Of course, this value would be discounted by the actual cost of collecting samples. Assuming that the probability of detecting the species is related to the number of samples, the decision maker could examine how the EVII varies with sample size to determine the optimal level of sampling effort.
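Putting the pieces together, a minimal R sketch of the EVII calculation, using the expected values reported from the Netica model above, is:

# EVII sketch: combine detection probabilities with the model's expected values
p.detect     <- 0.8 * 0.5          # P(detected) = P(detect | present) * P(present)
p.not.detect <- 1 - p.detect       # P(not detected)

ev.detected     <- 18.25           # EV of optimal decision when the species is detected
ev.not.detected <- 6.83            # EV of optimal decision when it is not detected (from Netica)
ev.no.sampling  <- 9.65            # EV of optimal decision with no survey information

ev.sampling <- ev.detected * p.detect + ev.not.detected * p.not.detect
evii <- ev.sampling - ev.no.sampling
evii                               # approximately 1.75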