Decision Theory. Course notes 2008/2009. © L.C. van der Gaag, S. Renooij, P. de Waal.


Decision Theory
course notes 2008/2009
© L.C. van der Gaag, S. Renooij, P. de Waal
Master Applied Computing Science, UU ICS

Course details
Lecturer: dr. S. Renooij (silja@cs.uu.nl)
Prerequisite: probability theory
Literature: textbook; study manual (online)
Examination: written exam (closed book) and practical assignment
Deadlines: March 2: describing the choice of decision problem; April 6: written report (on paper)

Decision Theory: course overview
A precise and systematic study of the formal and abstract properties of decision-making scenarios. This course treats models and decision-analysis techniques for optimal decision making with and without uncertainty or risk.
Subjects: decision criteria; decision trees; utility theory; multi-attribute decision making; analyses of uncertainties.

Overview of treated techniques for rational decision making:
- 1 attribute, certainty: -
- 1 attribute, uncertainty: maximin, maximax, minimax regret, dominance (deterministic)
- 1 attribute, risk: Bayes criterion, dominance (probabilistic), utility theory (UT)
- n ≥ 2 attributes, certainty: Analytic Hierarchy Process (AHP)
- n ≥ 2 attributes, risk: multi-attribute utility theory (MAUT)

The risk versus uncertainty distinction is often used to distinguish between theories which:
- use the (subjective or objective) assignment of mathematical probabilities (risk);
- do not make probability assignments (uncertainty).

Introduction

The Straatnieuws problem
Arie is a Straatnieuws seller:
- each morning, he purchases a number of copies at 1.00 euro each;
- throughout the day, he sells as many copies as he can, typically between 16 and 20, at 1.50 euro each;
- at the end of the day, he discards the remaining copies.
How many copies should he purchase tomorrow morning?

The aneurysm problem
Maria presents with an intracranial aneurysm. The attending physician can decide to perform a preventive surgical procedure:
- if left untreated, the aneurysm may rupture in time: Maria may die or may be seriously disabled;
- the surgical procedure has associated immediate risks: Maria may die during the surgery or may be seriously disabled.
What should the physician do?

The Amsterdam Airport problem
The Dutch government has to decide upon a site for future airport facilities. The objectives are:
- to minimise the costs to the government;
- to raise the capacity of airport facilities in the Netherlands;
- to improve the safety of airport traffic;
- to minimise environmental damage;
- to minimise noise levels;
- to minimise displacement of inhabitants;
- ...

Hard decision problems
Many real-life decision problems are hard because:
- they involve inherent uncertainty;
- they involve complex patterns of preference;
- they involve multiple objectives with trade-offs;
- they involve a large number of issues.
Decision analysis provides effective methods and techniques for structuring and solving hard decision problems.

The decision-analysis cycle
identification, structuring, quantification, solution, sensitivity analysis; if further analysis is needed, iterate, otherwise implementation.

Identification
- What is the problem to solve (decision context), and what are the alternatives?
- What are the objectives, and how are they measured (attributes)?
- Which uncertain events are relevant, and what are their outcomes?
- What are the possible consequences of decisions and uncertain events, and which do you prefer?
Note that the relevancy of the uncertain events and the consequences depends on your objectives.

Elements of decision problems: variables
A simple decision problem includes two types of variable:
- a decision variable D with values, or decisions, d_1,...,d_n, n ≥ 2;
- a chance variable C with values, or states, c_1,...,c_m, m ≥ 2.
Example: reconsider the decision context of the Straatnieuws problem:
- the decision variable is the number of copies to buy: D = {0, 1, ..., 16, 17, 18, 19, 20, ..., 50};
- the chance variable is tomorrow's demand for copies: C = {16, 17, 18, 19, 20}.

Elements of decision problems: probabilities
A simple decision problem includes, for each decision d_i, a probability distribution Pr_{d_i}(C) = Pr(C | d_i) over all states of C, given that the decision d_i has been implemented.
Example: reconsider the Straatnieuws problem. We assume that:
- the likelihood of any state is independent of the decision taken;
- the likelihood of the various states is uniformly distributed:
  Pr(16) = Pr(17) = Pr(18) = Pr(19) = Pr(20) = 0.2

Elements of decision problems: rewards
A simple decision problem includes a reward function r: D × C → IR that assigns a reward to any decision d_i and any state c_j, i = 1,...,n, j = 1,...,m.
Example: reconsider the Straatnieuws problem. The reward function is:

    r(d_i, c_j) = 1.5 · c_j − d_i   if c_j < d_i
                  0.5 · d_i         otherwise

(if demand c_j is at least the number d_i of copies bought, all d_i copies are sold at a profit of 0.50 euro each; otherwise only c_j copies are sold against a purchase cost of d_i euro.)
The associated reward matrix lists r(d_i, c_j) for every combination of decision and state.

Organising the elements
The elements of a decision problem are often organised in a decision tree.
Example: reconsider the aneurysm problem. The decision tree organises the elements of the problem as follows:
- decision: surgery or no surgery;
- surgery: p(deceased) = 0.02, p(disabled) = 0.06, p(cured) = 0.92;
- no surgery: p(rupture) = 0.29, p(no rupture) = 0.71; upon rupture: p(deceased) = 0.55, p(disabled) = 0.15, p(cured) = 0.30.
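As a cross-check, the Straatnieuws rewards (copies bought at 1.00 euro, sold at 1.50 euro, leftovers discarded) can be tabulated in a few lines of Python. This is an illustrative sketch, not part of the original notes:

```python
# Straatnieuws reward matrix: buy d copies at 1.00 euro, sell min(d, demand)
# copies at 1.50 euro, discard the rest. Demand ranges over the states 16..20.
def reward(d, c):
    if c < d:
        # demand below the number bought: sell c copies, the rest is a loss
        return 1.5 * c - d
    # all d copies are sold: a profit of 0.50 euro per copy
    return 0.5 * d

decisions = range(16, 21)
states = range(16, 21)
matrix = {d: [reward(d, c) for c in states] for d in decisions}

for d in decisions:
    print(d, matrix[d])
```

Running the sketch prints one row of the reward matrix per decision; for instance, buying 16 copies yields 8.00 euro in every state.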

Case study: the Mexico City Airport problem

The Mexico City Airport problem
It is 1971. The Mexican government has to formulate a strategy for developing major airport facilities for the Mexico City metropolitan area:
- expand the current Texcoco airport, built in 1928;
- develop a new airport at a different location (Zumpango);
- a (temporary?) combination of the previous two options.
The decision problem phrased: over the next 30 years, how should airport facilities be developed to assure adequate service for the region during the period from now to the year 2000? In particular: what kind of aircraft activity (International, Domestic, Military, or General) should be operating at which site, and when?
(1971 SOP study, 50 man-days consultancy; for details see Keeney & Raiffa 1993, Chapter 8)

M.C. Airport: some background
[Map: Mexico City and its Texcoco airport, the Zumpango site, the expressway and the lake; scale: 10 miles.]
[Figures: population of the metropolitan area in 1930, at the time of the study, and projected for 2000, together with the numbers of passengers a year.]

M.C. Airport: government objectives
For the decision problem at hand the following objectives are identified:
1. minimise the total construction and maintenance costs;
2. provide adequate capacity to meet air-traffic demands;
3. minimise access time to airports;
4. maximise safety of airport traffic;
5. minimise social disruption (displacement) caused by the provision of new facilities;
6. minimise effects of noise pollution due to air traffic.
Note that the objectives concern different interest groups:
- 1 and 2 affect the government;
- 2, 3 and 4 affect the passengers;
- 4, 5 and 6 affect the residents.

M.C. Airport: design choices
Analysing a large-scale decision problem involving long-term planning is hard. The type of analysis depends on its purpose:
- dynamic: designed only to indicate what action to take right now; subsequent actions will depend on the unfolding of future critical events;
- static: designed to identify strategies, which currently seem effective, for development over a period of time.
For different reasons the static approach is adopted. In addition, some simplifying assumptions with respect to time are made:
- the 30-year period is divided into 3 decades;
- at any one time a category of activity can operate at only one site;
- changes are allowed in 1975, 1985 and 1995.

M.C. Airport: decision variables
The decision problem at hand includes one decision variable: a variable D with alternatives d_75 d_85 d_95, where each d_j assigns each category of aircraft activity from {I, D, M, G} to a site from {T, Z}, for the subsequent time periods. The total number of alternatives is (2^4)^3 = 4096, but only about 100 are sensible.
It seems more insightful to model the problem using three subsequent decision variables:
- a decision variable D_75 with alternatives d_i^75 for site and aircraft activity in the period 1975-1985;
- a decision variable D_85 with alternatives d_i^85 for site and aircraft activity in the period 1985-1995;
- a decision variable D_95 with alternatives d_i^95 for site and aircraft activity in 1995 and after.
Each d_i^j is again an assignment of the four activity categories to the two sites and thus has 2^4 = 16 alternatives.

M.C. Airport: chance variables
Different uncertain events were identified as influencing the objectives. These are captured by the following six chance variables:
- C_1 = the net present value of the total costs in millions of pesos (discount rate of 12%);
- C_2 = the maximum number of aircraft operations possible (capacity);
- C_3 = the average two-way access time, weighted by the expected number of travellers from each zone in Mexico City;
- C_4 = the average number of people killed or seriously injured per aircraft accident over a 30-year period (safety);
- C_5 = the number of people displaced by airport development;
- C_6 = the average number of people annually subjected to a noise level of 90 CNR or more.
Assuming that all chance variables are statistically independent, the problem can be modelled with a single chance variable C = (C_1, C_2, C_3, C_4, C_5, C_6).

M.C. Airport: the decision tree
[Decision tree: a root node D_75 with branches such as Z-IDMG, T-IDMG and T-GM,Z-ID; each branch leads to a node D_85 (e.g. T-M,Z-IDG or T-GM,Z-ID), then to a node D_95 (e.g. Z-IDMG or T-M,Z-IDG), and finally to a chance node C with probabilities Pr(c) over outcome vectors (x_1, x_2, x_3, x_4, x_5, x_6).]

Decision criteria (not in the textbook)

Dominance
Consider a simple decision problem with the decision variable D = {d_1,...,d_n}, n ≥ 2, the chance variable C = {c_1,...,c_m}, m ≥ 2, and the reward function r(D,C). A decision d_i can dominate a decision d_j; this dominance is said to be
- deterministic, if the property can be established from the reward function only;
- probabilistic (or stochastic), if the property is established from the distribution of rewards.
Note: the textbook (Clemen) uses different definitions!

Deterministic dominance
Consider a simple decision problem with the decision variable D = {d_1,...,d_n}, n ≥ 2, the chance variable C = {c_1,...,c_m}, m ≥ 2, and the reward function r(D,C). A decision d_i is said to deterministically dominate d_j if
- r(d_i, c_k) ≥ r(d_j, c_k) for all c_k ∈ C, and
- there is a c_l ∈ C with r(d_i, c_l) > r(d_j, c_l).
A decision is deterministically admissible if it is not dominated by any other decision.

Deterministic dominance: an example (I)
Reconsider the Straatnieuws problem with its reward matrix. Now suppose that Arie purchases 12 copies: his reward is then 6.00 euro regardless of the demand, whereas buying 16 copies yields 8.00 euro in every state. The decision to buy 12 copies is deterministically dominated by the decision to buy 16 copies and is deterministically inadmissible.
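The two conditions of the definition translate directly into code. The following Python sketch (illustrative, not part of the original notes; it reuses the Straatnieuws reward function) checks deterministic dominance between two purchase decisions:

```python
# Deterministic dominance: d_i dominates d_j iff r(d_i, c) >= r(d_j, c) for
# every state c, with strict inequality for at least one state.
def reward(d, c):
    return 1.5 * c - d if c < d else 0.5 * d

STATES = range(16, 21)  # tomorrow's demand: 16..20 copies

def dominates(di, dj):
    pairs = [(reward(di, c), reward(dj, c)) for c in STATES]
    return all(a >= b for a, b in pairs) and any(a > b for a, b in pairs)

print(dominates(16, 12))  # True: buying 16 dominates buying 12
print(dominates(20, 21))  # True: buying 20 dominates buying 21
print(dominates(16, 17))  # False: neither of 16 and 17 dominates the other
```

Note that 16 and 17 are incomparable: 16 is better when demand is 16, and 17 is better otherwise, so both are admissible.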

Deterministic dominance: an example (II)
Reconsider the Straatnieuws problem with its reward matrix. Likewise, Arie does not consider purchasing 21 (or more) copies: the decision to buy 21 copies is deterministically dominated by the decision to buy 20 copies.

Deterministic dominance: an example (III)
Suppose that in the Straatnieuws problem we have a reward matrix in which all alternatives are equivalent. Then obviously none of the decisions deterministically dominates any of the others.

Three simple decision criteria
Consider a simple decision problem with the decision variable D = {d_1,...,d_n}, n ≥ 2, the chance variable C = {c_1,...,c_m}, m ≥ 2, and the reward function r(D,C):
- the maximin criterion selects a decision d_i that maximises the minimal reward over all outcomes;
- the maximax criterion selects a decision d_i that maximises the maximal reward over all outcomes;
- the minimax regret criterion selects a d_i that minimises the maximal regret over all outcomes, where the regret of d_i for c_j is defined by

    regret(d_i, c_j) = (max_k r(d_k, c_j)) − r(d_i, c_j)

The maximin criterion
The maximin criterion selects a decision d_i that maximises min_j r(d_i, c_j).
Example: reconsider Arie's Straatnieuws problem with its reward matrix and the minimum of each row. The maximum of the minimal rewards is 8.00 euro: Arie decides to purchase 16 copies.

The maximax criterion
The maximax criterion selects a decision d_i that maximises max_j r(d_i, c_j).
Example: reconsider Arie's Straatnieuws problem with its reward matrix and the maximum of each row. The maximum of the maximal rewards is 10.00 euro: Arie decides to purchase 20 copies.

The minimax regret criterion
The minimax regret criterion selects a decision d_i that minimises max_j regret(d_i, c_j), where the regret matrix is given by

    regret(d_i, c_j) = (max_k r(d_k, c_j)) − r(d_i, c_j)

Example: reconsider Arie's Straatnieuws problem. Taking the maximum of each column of the reward matrix yields the regret matrix.

The minimax regret criterion continued
Example: reconsider the regret matrix of the Straatnieuws problem and the maximum of each of its rows. The minimum of the maximal regrets is 1.50 euro: Arie decides to purchase 17 copies.
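The three criteria can be reproduced on the Straatnieuws reward matrix. The Python sketch below (illustrative, not part of the original notes; decisions restricted to 16..20) recovers the selections from the examples: 16 copies for maximin, 20 for maximax and 17 for minimax regret:

```python
# The three probability-free criteria applied to the Straatnieuws rewards.
def reward(d, c):
    return 1.5 * c - d if c < d else 0.5 * d

decisions = list(range(16, 21))
states = list(range(16, 21))
r = {d: {c: reward(d, c) for c in states} for d in decisions}

# maximin: maximise the worst-case reward
maximin = max(decisions, key=lambda d: min(r[d].values()))

# maximax: maximise the best-case reward
maximax = max(decisions, key=lambda d: max(r[d].values()))

# minimax regret: regret(d, c) = max_k r(k, c) - r(d, c)
col_max = {c: max(r[d][c] for d in decisions) for c in states}
max_regret = {d: max(col_max[c] - r[d][c] for c in states) for d in decisions}
minimax_regret = min(decisions, key=lambda d: max_regret[d])

print(maximin, maximax, minimax_regret)  # prints: 16 20 17
```

The maximal regret of buying 17 copies is 1.50 euro, matching the example.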

A note on the three criteria
Reconsider the Straatnieuws problem, this time with
Pr(16) = 0.96, Pr(17) = 0.01, Pr(18) = 0.01, Pr(19) = 0.01, Pr(20) = 0.01.
The maximax criterion still opts for purchasing 20 copies: the three criteria take no account of the probabilities of the states.

The Bayes criterion
Consider a simple decision problem with
- the decision variable D = {d_1,...,d_n}, n ≥ 2;
- the chance variable C = {c_1,...,c_m}, m ≥ 2;
- the probability distributions Pr(C | d_i), d_i ∈ D;
- the reward function r(D, C).
The Bayes criterion selects a decision d_i that maximises the expected reward

    IE(r(d_i, C)) = Σ_j Pr(c_j | d_i) · r(d_i, c_j)

The Bayes criterion: an example
Reconsider Arie's Straatnieuws problem with
Pr(16) = 0.1, Pr(17) = 0.2, Pr(18) = 0.3, Pr(19) = 0.2, Pr(20) = 0.2.
The expected rewards are
IE(r(16, C)) = 8.00, IE(r(17, C)) = 8.35, IE(r(18, C)) = 8.40, IE(r(19, C)) = 8.00, IE(r(20, C)) = 7.30.
Arie decides to purchase 18 copies.

A distribution of rewards
Consider a simple decision problem as before. Let R be a reward variable with values R = {r(d_i, c_j) | d_i ∈ D, c_j ∈ C}. The distribution Pr(R | d_i) over the reward variable R, induced by a decision d_i and defined by

    Pr(r | d_i) = Σ_{c_j : r(d_i, c_j) = r} Pr(c_j | d_i)

for each r ∈ R, is termed the distribution of rewards for d_i (also: reward profile, or risk profile).
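The expected rewards in the Bayes-criterion example are easily verified. An illustrative Python sketch (not part of the original notes):

```python
# Bayes criterion: maximise the expected reward under Pr(C); in the
# Straatnieuws problem the demand distribution is independent of the decision.
def reward(d, c):
    return 1.5 * c - d if c < d else 0.5 * d

pr = {16: 0.1, 17: 0.2, 18: 0.3, 19: 0.2, 20: 0.2}  # demand distribution

def expected_reward(d):
    return sum(p * reward(d, c) for c, p in pr.items())

for d in range(16, 21):
    print(d, round(expected_reward(d), 2))

best = max(range(16, 21), key=expected_reward)
print("best:", best)  # prints: best: 18, matching the notes
```

The loop reproduces the five expected rewards 8.00, 8.35, 8.40, 8.00 and 7.30.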

A distribution of rewards: an example
Reconsider the Straatnieuws problem with
Pr(16) = 0.2, Pr(17) = 0.3, Pr(18) = 0.1, Pr(19) = 0.2, Pr(20) = 0.2.
The decision to purchase 19 copies induces a distribution of rewards and an associated cumulative distribution. [Plots: probability against reward, and cumulative probability against reward.]

Probabilistic dominance
For each decision d_i of a simple decision problem, let Pr(R | d_i) be the distribution of rewards for d_i and let F(R | d_i) be the associated cumulative distribution of rewards. A decision d_i is said to probabilistically dominate d_j if
- F(r_k | d_i) ≤ F(r_k | d_j) for all r_k ∈ R, and
- there is an r_l ∈ R with F(r_l | d_i) < F(r_l | d_j).
A decision is probabilistically admissible if it is not dominated by any other decision. [Plot: the cumulative distribution for d_i lies below that for d_j.]

Probabilistic dominance: an example
Reconsider the, simplified, aneurysm problem with
- no surgery: p(deceased) = 0.05, p(disabled) = 0.25, p(cured) = 0.70;
- surgery: p(deceased) = 0.01, p(disabled) = 0.04, p(cured) = 0.95.
Comparing the cumulative distributions of rewards shows that the decision to abstain from surgery is probabilistically dominated by the decision to perform the surgical procedure.

Probabilistic dominance and expected reward
For the same simplified aneurysm problem: the decision to abstain from surgery is probabilistically dominated by the decision to perform the surgical procedure, and therefore has a lower expected reward.
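A distribution of rewards and the dominance test on cumulative distributions can both be sketched in Python. The sketch below is illustrative, not part of the original notes; it reuses the Straatnieuws rewards, since the aneurysm example leaves the reward scale for deceased, disabled and cured unspecified:

```python
# Distribution of rewards for a decision, and a probabilistic-dominance check:
# d_i dominates d_j if its cumulative distribution is everywhere at most that
# of d_j, and somewhere strictly below it.
def reward(d, c):
    return 1.5 * c - d if c < d else 0.5 * d

pr = {16: 0.2, 17: 0.3, 18: 0.1, 19: 0.2, 20: 0.2}

def reward_distribution(d):
    dist = {}
    for c, p in pr.items():
        r = reward(d, c)
        dist[r] = dist.get(r, 0.0) + p  # states with equal reward pool their mass
    return dist

def cdf(dist, points):
    return [sum(p for r, p in dist.items() if r <= x) for x in points]

def prob_dominates(di, dj):
    fi, fj = reward_distribution(di), reward_distribution(dj)
    points = sorted(set(fi) | set(fj))
    Fi, Fj = cdf(fi, points), cdf(fj, points)
    leq = all(a <= b + 1e-12 for a, b in zip(Fi, Fj))
    lt = any(a < b - 1e-12 for a, b in zip(Fi, Fj))
    return leq and lt

# buying 19 copies: rewards 5.0, 6.5, 8.0 and 9.5 (demands 19 and 20 coincide)
print(sorted(reward_distribution(19).items()))
print(prob_dominates(16, 12))  # True: a sure 8.00 dominates a sure 6.00
```

Note that the two demand states 19 and 20 both yield the reward 9.5 for the decision 19, so their probability mass is pooled, as in the definition.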

Deterministic dominance (Clemen)
Consider a simple decision problem with the decision variable D = {d_1,...,d_n}, n ≥ 2, the chance variable C = {c_1,...,c_m}, m ≥ 2, and the reward function r(D,C). For each decision d_i, let Pr(R | d_i) be the distribution of rewards for d_i and let F(R | d_i) be the associated cumulative distribution of rewards. A decision d_i is said to deterministically dominate d_j if there exists an r ∈ R such that
- F(r | d_j) = 1, and
- F(r | d_i) = Pr(r | d_i).
Note that the above amounts to

    min_k r(d_i, c_k) ≥ max_k r(d_j, c_k)

Deterministic dominance (Clemen): an example (I)
Reconsider the Straatnieuws problem with its reward matrix. Recall that if Arie purchases 12 copies, then his reward will be 6.00 euro, regardless of the demand. The decision to buy 12 copies is also deterministically dominated by the decision to buy 16 copies according to the textbook:

    min_k r(16, c_k) = 8.00 ≥ max_k r(12, c_k) = 6.00

Deterministic dominance (Clemen): an example (II)
Reconsider the Straatnieuws problem with its reward matrix. Should Arie consider purchasing 21 (or more) copies? The decision to buy 21 copies is not deterministically dominated by the decision to buy 20 copies according to the textbook:

    min_k r(20, c_k) = 4.00 < max_k r(21, c_k) = 9.00

Deterministic dominance (Clemen): an example (III)
Suppose that in the Straatnieuws problem we have a reward matrix for which, for all pairs of alternatives d_i and d_j,

    min_k r(d_i, c_k) ≥ max_k r(d_j, c_k)

Then all alternatives deterministically dominate all others, according to the textbook definition.
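Clemen's condition reduces to a single min/max comparison, as the note above states. An illustrative Python sketch (not part of the original notes) reproduces both textbook examples:

```python
# Clemen's deterministic dominance: the worst outcome of d_i is at least as
# good as the best outcome of d_j.
def reward(d, c):
    return 1.5 * c - d if c < d else 0.5 * d

STATES = range(16, 21)  # tomorrow's demand: 16..20 copies

def clemen_dominates(di, dj):
    worst_i = min(reward(di, c) for c in STATES)
    best_j = max(reward(dj, c) for c in STATES)
    return worst_i >= best_j

print(clemen_dominates(16, 12))  # True:  min 8.00 >= max 6.00
print(clemen_dominates(20, 21))  # False: min 4.00 <  max 9.00
```

The second call shows the divergence from the course definition: under Clemen's notion, buying 20 copies does not dominate buying 21.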

Decision trees

The marketing problem
Colaco has some assets and considers putting a new soda on the national market. It has a choice of:
- marketing the new soda;
- not marketing the new soda.
It can also decide to first perform a local survey to gain additional information. Then, the first decision is whether or not to perform a local survey; the second decision is whether or not to market. What should Colaco do?

Sequential decisions
In general, a decision problem may involve taking a sequence of decisions over time:
- any decision may influence a subsequent one;
- in between the decisions, observations may become available that provide additional information.
The decision maker then has to consider the entire planning horizon (first decision, second decision, ..., last decision) before taking the first decision.

Chance trees
A chance tree is a tree-like structure that organises the chance variables of a decision problem, along with their probabilities:
- each (chance) node represents a chance variable;
- each edge emanating from a chance node is associated with a value of the represented variable;
- the label of such an edge captures the represented value and the associated probability.
Example: [chance tree with p(local success) = 0.60 and p(local failure) = 0.40]

Scenarios in a chance tree
A scenario is a path from the root of a tree to an endpoint. A chance tree describes one or more scenarios. The probability of a scenario equals the product of the probabilities associated with the values along the scenario.
Example: reconsider the marketing problem with
Pr(l) = 0.60, Pr(¬l) = 0.40; Pr(n | l) = 0.85, Pr(¬n | l) = 0.15; Pr(n | ¬l) = 0.10, Pr(¬n | ¬l) = 0.90.
The scenario probability Pr(ln) equals Pr(ln) = Pr(l) · Pr(n | l) = 0.51.

Inverting a chance tree
Consider a chance tree with the statistical variables B and C: its edges carry Pr(b), Pr(¬b), Pr(c | b), Pr(¬c | b), Pr(c | ¬b) and Pr(¬c | ¬b), and its endpoints the scenario probabilities Pr(bc), Pr(b¬c), Pr(¬bc) and Pr(¬b¬c). Inverting the tree is equivalent to applying Bayes' theorem: the inverted tree carries Pr(c), Pr(¬c), Pr(b | c), Pr(¬b | c), Pr(b | ¬c) and Pr(¬b | ¬c), with the same scenario probabilities at its endpoints.

The marketing problem revisited
Consider the chance tree with the chance variables L and N:
Pr(l) = 0.60, Pr(¬l) = 0.40; Pr(n | l) = 0.85, Pr(¬n | l) = 0.15; Pr(n | ¬l) = 0.10, Pr(¬n | ¬l) = 0.90;
scenario probabilities: Pr(ln) = 0.51, Pr(l¬n) = 0.09, Pr(¬ln) = 0.04, Pr(¬l¬n) = 0.36.

The marketing problem continued
To invert the tree, first its structure is inverted, giving an incomplete chance tree with the unknown probabilities Pr(n), Pr(¬n), Pr(l | n), Pr(¬l | n), Pr(l | ¬n) and Pr(¬l | ¬n). Then, the scenario probabilities are associated with the appropriate endpoints: Pr(ln) = 0.51, Pr(¬ln) = 0.04, Pr(l¬n) = 0.09, Pr(¬l¬n) = 0.36.
The prior probabilities of the values of N are computed from the scenario probabilities using the basic rule of marginalisation:
Pr(n) = Pr(ln) + Pr(¬ln) = 0.55
Pr(¬n) = Pr(l¬n) + Pr(¬l¬n) = 0.45

The marketing problem continued
Consider the incomplete chance tree with Pr(n) = 0.55 and Pr(¬n) = 0.45. The conditional probability Pr(l | n) is computed from Pr(n) · Pr(l | n) = Pr(ln), so

    Pr(l | n) = Pr(ln) / Pr(n) = 0.51 / 0.55 = 0.93

The inverted chance tree thus is:
- Pr(n) = 0.55, with Pr(l | n) = 0.93 and Pr(¬l | n) = 0.07, yielding Pr(ln) = 0.51 and Pr(¬ln) = 0.04;
- Pr(¬n) = 0.45, with Pr(l | ¬n) = 0.2 and Pr(¬l | ¬n) = 0.8, yielding Pr(l¬n) = 0.09 and Pr(¬l¬n) = 0.36.

Decision trees
A decision tree is a tree-like structure that organises the elements of a decision problem:
- each decision node represents a decision variable;
- each edge emanating from a decision node is associated with a value of the represented variable;
- the label of such an edge captures the represented decision.

Scenarios in a decision tree
(Recall: a scenario is a path from the root of a tree to an endpoint.) A decision tree describes various scenarios:
- each endpoint represents a consequence of the preceding scenario;
- the label of an endpoint captures the reward of the preceding scenario, in euro.
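The inversion of the L and N chance tree can be reproduced numerically. The Python sketch below (illustrative, not part of the original notes) performs exactly the marginalisation and Bayes steps from the preceding slides:

```python
# Inverting the L -> N chance tree of the marketing problem: compute the
# scenario (joint) probabilities, marginalise to Pr(N), then condition to
# obtain Pr(L | N).
pr_l = 0.60                              # Pr(local success)
pr_n_given_l = {True: 0.85, False: 0.10}  # Pr(national success | L)

joint = {}
for l in (True, False):
    for n in (True, False):
        p_l = pr_l if l else 1 - pr_l
        p_n = pr_n_given_l[l] if n else 1 - pr_n_given_l[l]
        joint[(l, n)] = p_l * p_n  # scenario probability, e.g. Pr(ln) = 0.51

# marginalisation: Pr(n) = Pr(ln) + Pr(~ln)
pr_n = {n: joint[(True, n)] + joint[(False, n)] for n in (True, False)}
# Bayes: Pr(l | n) = Pr(ln) / Pr(n)
pr_l_given_n = {n: joint[(True, n)] / pr_n[n] for n in (True, False)}

print(round(joint[(True, True)], 2))   # prints: 0.51
print(round(pr_n[True], 2))            # prints: 0.55
print(round(pr_l_given_n[True], 2))    # prints: 0.93
print(round(pr_l_given_n[False], 2))   # prints: 0.2
```

The four printed values match Pr(ln), Pr(n), Pr(l | n) and Pr(l | ¬n) in the inverted tree above.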

Time
A decision tree reflects, from left to right, the course of time within its scenarios:
- the value of a chance node before a decision node should be known with certainty before the decision is taken;
- the value of a chance node after a decision node need not be known before the decision is taken.

The marketing problem revisited
Reconsider Colaco's marketing problem and its elements:
- the decision variables are: the decision S whether (s) or not (¬s) to perform a survey, and the decision M whether (m) or not (¬m) to market the new soda;
- the chance variables are: the variable L capturing local success (l) or failure (¬l) of the soda, and the variable N capturing national success (n) or failure (¬n) of the soda;
- the probability distribution Pr(LN) has Pr(l) = 0.60, Pr(¬l) = 0.40; Pr(n | l) = 0.85, Pr(¬n | l) = 0.15; Pr(n | ¬l) = 0.10, Pr(¬n | ¬l) = 0.90;
- given the assets, the costs of the survey, the gain from marketing and the loss from marketing (amounts in euro), the rewards r(SM, LN) associated with the consequences of the different scenarios follow, for the combinations r(sm, ln), r(¬sm, ln), r(sm, l¬n), r(¬sm, l¬n), r(s¬m, ln), r(¬s¬m, ln), and so on.
[Decision tree: survey or no survey; after the survey, local success (p = 0.60) or local failure (p = 0.40), followed by the decision to market or not, with p(national success) = 0.85 after local success and 0.10 after local failure; without the survey, the decision to market or not, with p(national success) = 0.55 and p(national failure) = 0.45.]

Strategies
A strategy is a prescription for choosing decisions upon traversing a decision tree from the root to an endpoint.
Example: the decision tree for Colaco's marketing problem organises six strategies:
1. perform a local survey; if it is successful, then market, else do not market;
2. perform a local survey; if it is successful, then do not market, else do not market either;
3. perform a local survey; if it is successful, then market, else market also;
4. perform a local survey; if it is successful, then do not market, else market;
5. do not perform a local survey and market right away;
6. do not perform a local survey and do not market.

Reduced decision trees
A decision tree is in reduced form if the root of the tree is the only decision node. Any decision tree has an equivalent tree in reduced form.
Example: the reduced decision tree for Colaco's marketing problem organises the six strategies explicitly. [E.g. strategy 1: local success (p = 0.60) followed by national success (p = 0.85) or failure (p = 0.15), and local failure (p = 0.40); strategy 5: national success (p = 0.55) or failure (p = 0.45).]

Evaluating decision trees: solving the marketing problem
Reconsider the fifth strategy for the marketing problem: do not perform a local survey and market immediately. Bayes criterion computes the expected reward of this strategy as the mixture, with probabilities p = 0.55 and p = 0.45, of the rewards for national success and national failure.

Solving the marketing problem cntd.
Now reconsider the first strategy for the marketing problem: perform a local survey; if it is successful, then market, else do not market. [Tree: local success (p = 0.60) followed by national success (p = 0.85) or failure (p = 0.15); local failure (p = 0.40).]
- if the survey shows local failure, then the (expected) reward is a single fixed amount;
- if the survey shows local success, then the expected reward is the mixture, with probabilities 0.85 and 0.15, of the two marketing outcomes;
- the expected reward of the strategy weighs these two amounts with the probabilities 0.60 and 0.40.

Folding back: part 1
Consider the following part of a decision tree: a strategy s is followed by a chance node for C_1, with probabilities Pr(c_1) and Pr(¬c_1), and each of its branches by a chance node for C_2, with probabilities Pr(c_2 | c_1), Pr(¬c_2 | c_1), Pr(c_2 | ¬c_1) and Pr(¬c_2 | ¬c_1); the endpoints carry the rewards r(s, c_1c_2), r(s, c_1¬c_2), r(s, ¬c_1c_2) and r(s, ¬c_1¬c_2). Then, by the distributive law, we have that

    Pr(c_1c_2) · r(s, c_1c_2) + Pr(c_1¬c_2) · r(s, c_1¬c_2)
      + Pr(¬c_1c_2) · r(s, ¬c_1c_2) + Pr(¬c_1¬c_2) · r(s, ¬c_1¬c_2)
    = Pr(c_1) · [Pr(c_2 | c_1) · r(s, c_1c_2) + Pr(¬c_2 | c_1) · r(s, c_1¬c_2)]
      + Pr(¬c_1) · [Pr(c_2 | ¬c_1) · r(s, ¬c_1c_2) + Pr(¬c_2 | ¬c_1) · r(s, ¬c_1¬c_2)]

The property generalises to a sequence of chance variables of arbitrary length.

Solving the marketing problem cntd.
Now compare the first and second strategies of the marketing problem [reduced trees with local success (p = 0.60), national success (p = 0.85) or failure (p = 0.15), and local failure (p = 0.40)]. Strategy 1 has a higher expected reward than strategy 2 and is therefore preferred.

Folding back: part 2
Consider a part of a decision tree in which a chance node for C_1 is followed, on each branch, by a decision node with the alternatives d and ¬d, each in turn followed by a chance node for C_2; the endpoints carry the rewards r(d, c_1c_2), ..., r(¬d, ¬c_1¬c_2). Then, for the alternative d,

    IE(r(d, C_1C_2))
    = Pr(c_1) · Pr(c_2 | c_1d) · r(d, c_1c_2) + ... + Pr(¬c_1) · Pr(¬c_2 | ¬c_1d) · r(d, ¬c_1¬c_2)
    = Pr(c_1) · [Pr(c_2 | c_1d) · r(d, c_1c_2) + Pr(¬c_2 | c_1d) · r(d, c_1¬c_2)]
      + Pr(¬c_1) · [Pr(c_2 | ¬c_1d) · r(d, ¬c_1c_2) + Pr(¬c_2 | ¬c_1d) · r(d, ¬c_1¬c_2)]

and similarly

    IE(r(¬d, C_1C_2))
    = Pr(c_1) · [Pr(c_2 | c_1¬d) · r(¬d, c_1c_2) + Pr(¬c_2 | c_1¬d) · r(¬d, c_1¬c_2)]
      + Pr(¬c_1) · [Pr(c_2 | ¬c_1¬d) · r(¬d, ¬c_1c_2) + Pr(¬c_2 | ¬c_1¬d) · r(¬d, ¬c_1¬c_2)]

Solving the marketing problem cntd.
Reconsider the part of Colaco's decision tree following the survey: local success has p = 0.60 and local failure p = 0.40; after marketing, the probability of national success is 0.85 given local success and 0.10 given local failure.
- if the survey shows local success, then the best decision is to market;
- if the survey shows local failure, then the best decision is not to market.

Fold-back analysis
The following procedure implements Bayes criterion for computing an optimal strategy from a decision tree. From the endpoints to the root, do:
- for each chance node C with values c_1,...,c_m, m ≥ 2, compute the expected reward

      IEr(C) = Σ_{i=1,...,m} Pr(c_i | π) · IEr(L_i)

  where L_1,...,L_m are the successors of C and π is the preceding scenario;
- for each decision node D with values d_1,...,d_n, n ≥ 2, compute the maximum expected reward

      IEr(D) = max_{i=1,...,n} IEr(C_i)

  where C_1,...,C_n are the successors of D.
[The procedure is illustrated on Colaco's full decision tree, with p(local success) = 0.60, p(local failure) = 0.40, and p(national success) = 0.85, 0.10 and 0.55 after local success, after local failure and without a survey, respectively.]
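The fold-back procedure can be sketched as a short recursion: chance nodes return an expected reward, decision nodes the maximum over their alternatives. The Python sketch below (illustrative, not part of the original notes) applies it to the simplified aneurysm problem; the utilities 0, 0.4 and 1 for deceased, disabled and cured are assumptions made only for this illustration:

```python
# Recursive fold-back analysis: leaves return their reward, chance nodes the
# expected reward over their branches, decision nodes the maximum over theirs.
def fold_back(node):
    kind = node["kind"]
    if kind == "leaf":
        return node["reward"]
    if kind == "chance":
        return sum(p * fold_back(child) for p, child in node["branches"])
    if kind == "decision":
        return max(fold_back(child) for _, child in node["branches"])
    raise ValueError(kind)

def outcome(p_dec, p_dis, p_cur):
    # hypothetical utilities: deceased 0.0, disabled 0.4, cured 1.0
    return {"kind": "chance", "branches": [
        (p_dec, {"kind": "leaf", "reward": 0.0}),
        (p_dis, {"kind": "leaf", "reward": 0.4}),
        (p_cur, {"kind": "leaf", "reward": 1.0}),
    ]}

# the simplified aneurysm problem from the probabilistic-dominance example
tree = {"kind": "decision", "branches": [
    ("no surgery", outcome(0.05, 0.25, 0.70)),
    ("surgery",    outcome(0.01, 0.04, 0.95)),
]}

print(fold_back(tree))  # surgery wins: 0.01*0 + 0.04*0.4 + 0.95*1 = 0.966
```

Under these assumed utilities the no-surgery branch folds back to 0.8 and the surgery branch to 0.966, so the decision node selects surgery, consistent with the dominance result.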

Decision Analysis. Introduction. Job Counseling

Decision Analysis. Introduction. Job Counseling Decision Analysis Max, min, minimax, maximin, maximax, minimin All good cat names! 1 Introduction Models provide insight and understanding We make decisions Decision making is difficult because: future

More information

Decision Making. DKSharma

Decision Making. DKSharma Decision Making DKSharma Decision making Learning Objectives: To make the students understand the concepts of Decision making Decision making environment; Decision making under certainty; Decision making

More information

UNIT 5 DECISION MAKING

UNIT 5 DECISION MAKING UNIT 5 DECISION MAKING This unit: UNDER UNCERTAINTY Discusses the techniques to deal with uncertainties 1 INTRODUCTION Few decisions in construction industry are made with certainty. Need to look at: The

More information

Chapter 18 Student Lecture Notes 18-1

Chapter 18 Student Lecture Notes 18-1 Chapter 18 Student Lecture Notes 18-1 Business Statistics: A Decision-Making Approach 6 th Edition Chapter 18 Introduction to Decision Analysis 5 Prentice-Hall, Inc. Chap 18-1 Chapter Goals After completing

More information

Decision making in the presence of uncertainty

Decision making in the presence of uncertainty CS 2750 Foundations of AI Lecture 20 Decision making in the presence of uncertainty Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Decision-making in the presence of uncertainty Computing the probability

More information

Module 15 July 28, 2014

Module 15 July 28, 2014 Module 15 July 28, 2014 General Approach to Decision Making Many Uses: Capacity Planning Product/Service Design Equipment Selection Location Planning Others Typically Used for Decisions Characterized by

More information

16 MAKING SIMPLE DECISIONS

16 MAKING SIMPLE DECISIONS 247 16 MAKING SIMPLE DECISIONS Let us associate each state S with a numeric utility U(S), which expresses the desirability of the state A nondeterministic action A will have possible outcome states Result

More information

16 MAKING SIMPLE DECISIONS

16 MAKING SIMPLE DECISIONS 253 16 MAKING SIMPLE DECISIONS Let us associate each state S with a numeric utility U(S), which expresses the desirability of the state A nondeterministic action a will have possible outcome states Result(a)

More information

Resource Allocation and Decision Analysis (ECON 8010) Spring 2014 Foundations of Decision Analysis

Resource Allocation and Decision Analysis (ECON 8010) Spring 2014 Foundations of Decision Analysis Resource Allocation and Decision Analysis (ECON 800) Spring 04 Foundations of Decision Analysis Reading: Decision Analysis (ECON 800 Coursepak, Page 5) Definitions and Concepts: Decision Analysis a logical

More information

Decision Making. D.K.Sharma

Decision Making. D.K.Sharma Decision Making D.K.Sharma 1 Decision making Learning Objectives: To make the students understand the concepts of Decision making Decision making environment; Decision making under certainty; Decision

More information

Subject : Computer Science. Paper: Machine Learning. Module: Decision Theory and Bayesian Decision Theory. Module No: CS/ML/10.

Subject : Computer Science. Paper: Machine Learning. Module: Decision Theory and Bayesian Decision Theory. Module No: CS/ML/10. e-pg Pathshala Subject : Computer Science Paper: Machine Learning Module: Decision Theory and Bayesian Decision Theory Module No: CS/ML/0 Quadrant I e-text Welcome to the e-pg Pathshala Lecture Series

More information

Introduction to Decision Making. CS 486/686: Introduction to Artificial Intelligence

Introduction to Decision Making. CS 486/686: Introduction to Artificial Intelligence Introduction to Decision Making CS 486/686: Introduction to Artificial Intelligence 1 Outline Utility Theory Decision Trees 2 Decision Making Under Uncertainty I give a robot a planning problem: I want

More information

What do Coin Tosses and Decision Making under Uncertainty, have in common?

What do Coin Tosses and Decision Making under Uncertainty, have in common? What do Coin Tosses and Decision Making under Uncertainty, have in common? J. Rene van Dorp (GW) Presentation EMSE 1001 October 27, 2017 Presented by: J. Rene van Dorp 10/26/2017 1 About René van Dorp

More information

Chapter 13 Decision Analysis

Chapter 13 Decision Analysis Problem Formulation Chapter 13 Decision Analysis Decision Making without Probabilities Decision Making with Probabilities Risk Analysis and Sensitivity Analysis Decision Analysis with Sample Information

More information

Energy and public Policies

Energy and public Policies Energy and public Policies Decision making under uncertainty Contents of class #1 Page 1 1. Decision Criteria a. Dominated decisions b. Maxmin Criterion c. Maximax Criterion d. Minimax Regret Criterion

More information

Decision Analysis under Uncertainty. Christopher Grigoriou Executive MBA/HEC Lausanne

Decision Analysis under Uncertainty. Christopher Grigoriou Executive MBA/HEC Lausanne Decision Analysis under Uncertainty Christopher Grigoriou Executive MBA/HEC Lausanne 2007-2008 2008 Introduction Examples of decision making under uncertainty in the business world; => Trade-off between

More information

The Course So Far. Decision Making in Deterministic Domains. Decision Making in Uncertain Domains. Next: Decision Making in Uncertain Domains

The Course So Far. Decision Making in Deterministic Domains. Decision Making in Uncertain Domains. Next: Decision Making in Uncertain Domains The Course So Far Decision Making in Deterministic Domains search planning Decision Making in Uncertain Domains Uncertainty: adversarial Minimax Next: Decision Making in Uncertain Domains Uncertainty:

More information

Martingale Pricing Theory in Discrete-Time and Discrete-Space Models

Martingale Pricing Theory in Discrete-Time and Discrete-Space Models IEOR E4707: Foundations of Financial Engineering c 206 by Martin Haugh Martingale Pricing Theory in Discrete-Time and Discrete-Space Models These notes develop the theory of martingale pricing in a discrete-time,

More information

Q1. [?? pts] Search Traces

Q1. [?? pts] Search Traces CS 188 Spring 2010 Introduction to Artificial Intelligence Midterm Exam Solutions Q1. [?? pts] Search Traces Each of the trees (G1 through G5) was generated by searching the graph (below, left) with a

More information

Lecture 17: More on Markov Decision Processes. Reinforcement learning

Lecture 17: More on Markov Decision Processes. Reinforcement learning Lecture 17: More on Markov Decision Processes. Reinforcement learning Learning a model: maximum likelihood Learning a value function directly Monte Carlo Temporal-difference (TD) learning COMP-424, Lecture

More information

Decision Making Models

Decision Making Models Decision Making Models Prof. Yongwon Seo (seoyw@cau.ac.kr) College of Business Administration, CAU Decision Theory Decision theory problems are characterized by the following: A list of alternatives. A

More information

SCHOOL OF BUSINESS, ECONOMICS AND MANAGEMENT. BF360 Operations Research

SCHOOL OF BUSINESS, ECONOMICS AND MANAGEMENT. BF360 Operations Research SCHOOL OF BUSINESS, ECONOMICS AND MANAGEMENT BF360 Operations Research Unit 5 Moses Mwale e-mail: moses.mwale@ictar.ac.zm BF360 Operations Research Contents Unit 5: Decision Analysis 3 5.1 Components

More information

Decision Analysis CHAPTER LEARNING OBJECTIVES CHAPTER OUTLINE. After completing this chapter, students will be able to:

Decision Analysis CHAPTER LEARNING OBJECTIVES CHAPTER OUTLINE. After completing this chapter, students will be able to: CHAPTER 3 Decision Analysis LEARNING OBJECTIVES After completing this chapter, students will be able to: 1. List the steps of the decision-making process. 2. Describe the types of decision-making environments.

More information

Markov Decision Processes: Making Decision in the Presence of Uncertainty. (some of) R&N R&N

Markov Decision Processes: Making Decision in the Presence of Uncertainty. (some of) R&N R&N Markov Decision Processes: Making Decision in the Presence of Uncertainty (some of) R&N 16.1-16.6 R&N 17.1-17.4 Different Aspects of Machine Learning Supervised learning Classification - concept learning

More information

Handout 4: Deterministic Systems and the Shortest Path Problem

Handout 4: Deterministic Systems and the Shortest Path Problem SEEM 3470: Dynamic Optimization and Applications 2013 14 Second Term Handout 4: Deterministic Systems and the Shortest Path Problem Instructor: Shiqian Ma January 27, 2014 Suggested Reading: Bertsekas

More information

GAME THEORY: DYNAMIC. MICROECONOMICS Principles and Analysis Frank Cowell. Frank Cowell: Dynamic Game Theory

GAME THEORY: DYNAMIC. MICROECONOMICS Principles and Analysis Frank Cowell. Frank Cowell: Dynamic Game Theory Prerequisites Almost essential Game Theory: Strategy and Equilibrium GAME THEORY: DYNAMIC MICROECONOMICS Principles and Analysis Frank Cowell April 2018 1 Overview Game Theory: Dynamic Mapping the temporal

More information

Decision making under uncertainty

Decision making under uncertainty Decision making under uncertainty 1 Outline 1. Components of decision making 2. Criteria for decision making 3. Utility theory 4. Decision trees 5. Posterior probabilities using Bayes rule 6. The Monty

More information

Almost essential MICROECONOMICS

Almost essential MICROECONOMICS Prerequisites Almost essential Games: Mixed Strategies GAMES: UNCERTAINTY MICROECONOMICS Principles and Analysis Frank Cowell April 2018 1 Overview Games: Uncertainty Basic structure Introduction to the

More information

Decision Making Supplement A

Decision Making Supplement A Decision Making Supplement A Break-Even Analysis Break-even analysis is used to compare processes by finding the volume at which two different processes have equal total costs. Break-even point is the

More information

Johan Oscar Ong, ST, MT

Johan Oscar Ong, ST, MT Decision Analysis Johan Oscar Ong, ST, MT Analytical Decision Making Can Help Managers to: Gain deeper insight into the nature of business relationships Find better ways to assess values in such relationships;

More information

Full file at CHAPTER 3 Decision Analysis

Full file at   CHAPTER 3 Decision Analysis CHAPTER 3 Decision Analysis TRUE/FALSE 3.1 Expected Monetary Value (EMV) is the average or expected monetary outcome of a decision if it can be repeated a large number of times. 3.2 Expected Monetary Value

More information

Agenda. Lecture 2. Decision Analysis. Key Characteristics. Terminology. Structuring Decision Problems

Agenda. Lecture 2. Decision Analysis. Key Characteristics. Terminology. Structuring Decision Problems Agenda Lecture 2 Theory >Introduction to Making > Making Without Probabilities > Making With Probabilities >Expected Value of Perfect Information >Next Class 1 2 Analysis >Techniques used to make decisions

More information

Engineering Risk Benefit Analysis

Engineering Risk Benefit Analysis Engineering Risk Benefit Analysis 1.155, 2.943, 3.577, 6.938, 10.816, 13.621, 16.862, 22.82, ES.72, ES.721 A 1. The Multistage ecision Model George E. Apostolakis Massachusetts Institute of Technology

More information

MGS 3100 Business Analysis. Chapter 8 Decision Analysis II. Construct tdecision i Tree. Example: Newsboy. Decision Tree

MGS 3100 Business Analysis. Chapter 8 Decision Analysis II. Construct tdecision i Tree. Example: Newsboy. Decision Tree MGS 3100 Business Analysis Chapter 8 Decision Analysis II Decision Tree An Alternative e (Graphical) Way to Represent and Solve Decision Problems Under Risk Particularly l Useful lfor Sequential Decisions

More information

Making Choices. Making Choices CHAPTER FALL ENCE 627 Decision Analysis for Engineering. Making Hard Decision. Third Edition

Making Choices. Making Choices CHAPTER FALL ENCE 627 Decision Analysis for Engineering. Making Hard Decision. Third Edition CHAPTER Duxbury Thomson Learning Making Hard Decision Making Choices Third Edition A. J. Clark School of Engineering Department of Civil and Environmental Engineering 4b FALL 23 By Dr. Ibrahim. Assakkaf

More information

CS188 Spring 2012 Section 4: Games

CS188 Spring 2012 Section 4: Games CS188 Spring 2012 Section 4: Games 1 Minimax Search In this problem, we will explore adversarial search. Consider the zero-sum game tree shown below. Trapezoids that point up, such as at the root, represent

More information

MBF1413 Quantitative Methods

MBF1413 Quantitative Methods MBF1413 Quantitative Methods Prepared by Dr Khairul Anuar 4: Decision Analysis Part 1 www.notes638.wordpress.com 1. Problem Formulation a. Influence Diagrams b. Payoffs c. Decision Trees Content 2. Decision

More information

Decision Making. BUS 735: Business Decision Making and Research. exercises. Assess what we have learned. 2 Decision Making Without Probabilities

Decision Making. BUS 735: Business Decision Making and Research. exercises. Assess what we have learned. 2 Decision Making Without Probabilities Making BUS 735: Business Making and Research 1 1.1 Goals and Agenda Goals and Agenda Learning Objective Learn how to make decisions with uncertainty, without using probabilities. Practice what we learn.

More information

Non-Deterministic Search

Non-Deterministic Search Non-Deterministic Search MDP s 1 Non-Deterministic Search How do you plan (search) when your actions might fail? In general case, how do you plan, when the actions have multiple possible outcomes? 2 Example:

More information

TIm 206 Lecture notes Decision Analysis

TIm 206 Lecture notes Decision Analysis TIm 206 Lecture notes Decision Analysis Instructor: Kevin Ross 2005 Scribes: Geoff Ryder, Chris George, Lewis N 2010 Scribe: Aaron Michelony 1 Decision Analysis: A Framework for Rational Decision- Making

More information

CSEP 573: Artificial Intelligence

CSEP 573: Artificial Intelligence CSEP 573: Artificial Intelligence Markov Decision Processes (MDP)! Ali Farhadi Many slides over the course adapted from Luke Zettlemoyer, Dan Klein, Pieter Abbeel, Stuart Russell or Andrew Moore 1 Outline

More information

Learning Objectives = = where X i is the i t h outcome of a decision, p i is the probability of the i t h

Learning Objectives = = where X i is the i t h outcome of a decision, p i is the probability of the i t h Learning Objectives After reading Chapter 15 and working the problems for Chapter 15 in the textbook and in this Workbook, you should be able to: Distinguish between decision making under uncertainty and

More information

The Course So Far. Atomic agent: uninformed, informed, local Specific KR languages

The Course So Far. Atomic agent: uninformed, informed, local Specific KR languages The Course So Far Traditional AI: Deterministic single agent domains Atomic agent: uninformed, informed, local Specific KR languages Constraint Satisfaction Logic and Satisfiability STRIPS for Classical

More information

stake and attain maximum profitability. Therefore, it s judicious to employ the best practices in

stake and attain maximum profitability. Therefore, it s judicious to employ the best practices in 1 2 Success or failure of any undertaking mainly lies with the decisions made in every step of the undertaking. When it comes to business the main goal would be to maximize shareholders stake and attain

More information

Decision Making. BUS 735: Business Decision Making and Research. Learn how to conduct regression analysis with a dummy independent variable.

Decision Making. BUS 735: Business Decision Making and Research. Learn how to conduct regression analysis with a dummy independent variable. Making BUS 735: Business Making and Research 1 Goals of this section Specific goals: Learn how to conduct regression analysis with a dummy independent variable. Learning objectives: LO5: Be able to use

More information

Handout 8: Introduction to Stochastic Dynamic Programming. 2 Examples of Stochastic Dynamic Programming Problems

Handout 8: Introduction to Stochastic Dynamic Programming. 2 Examples of Stochastic Dynamic Programming Problems SEEM 3470: Dynamic Optimization and Applications 2013 14 Second Term Handout 8: Introduction to Stochastic Dynamic Programming Instructor: Shiqian Ma March 10, 2014 Suggested Reading: Chapter 1 of Bertsekas,

More information

Finding Equilibria in Games of No Chance

Finding Equilibria in Games of No Chance Finding Equilibria in Games of No Chance Kristoffer Arnsfelt Hansen, Peter Bro Miltersen, and Troels Bjerre Sørensen Department of Computer Science, University of Aarhus, Denmark {arnsfelt,bromille,trold}@daimi.au.dk

More information

TUTORIAL KIT OMEGA SEMESTER PROGRAMME: BANKING AND FINANCE

TUTORIAL KIT OMEGA SEMESTER PROGRAMME: BANKING AND FINANCE TUTORIAL KIT OMEGA SEMESTER PROGRAMME: BANKING AND FINANCE COURSE: BFN 425 QUANTITATIVE TECHNIQUE FOR FINANCIAL DECISIONS i DISCLAIMER The contents of this document are intended for practice and leaning

More information

V. Lesser CS683 F2004

V. Lesser CS683 F2004 The value of information Lecture 15: Uncertainty - 6 Example 1: You consider buying a program to manage your finances that costs $100. There is a prior probability of 0.7 that the program is suitable in

More information

DECISION ANALYSIS: INTRODUCTION. Métodos Cuantitativos M. En C. Eduardo Bustos Farias 1

DECISION ANALYSIS: INTRODUCTION. Métodos Cuantitativos M. En C. Eduardo Bustos Farias 1 DECISION ANALYSIS: INTRODUCTION Cuantitativos M. En C. Eduardo Bustos Farias 1 Agenda Decision analysis in general Structuring decision problems Decision making under uncertainty - without probability

More information

Sequential Decision Making

Sequential Decision Making Sequential Decision Making Dynamic programming Christos Dimitrakakis Intelligent Autonomous Systems, IvI, University of Amsterdam, The Netherlands March 18, 2008 Introduction Some examples Dynamic programming

More information

Chapter 2 supplement. Decision Analysis

Chapter 2 supplement. Decision Analysis Chapter 2 supplement At the operational level hundreds of decisions are made in order to achieve local outcomes that contribute to the achievement of the company's overall strategic goal. These local outcomes

More information

Dynamic Programming: An overview. 1 Preliminaries: The basic principle underlying dynamic programming

Dynamic Programming: An overview. 1 Preliminaries: The basic principle underlying dynamic programming Dynamic Programming: An overview These notes summarize some key properties of the Dynamic Programming principle to optimize a function or cost that depends on an interval or stages. This plays a key role

More information

Optimal Dam Management

Optimal Dam Management Optimal Dam Management Michel De Lara et Vincent Leclère July 3, 2012 Contents 1 Problem statement 1 1.1 Dam dynamics.................................. 2 1.2 Intertemporal payoff criterion..........................

More information

Causes of Poor Decisions

Causes of Poor Decisions Lecture 7: Decision Analysis Decision process Decision tree analysis The Decision Process Specify objectives and the criteria for making a choice Develop alternatives Analyze and compare alternatives Select

More information

MORE DATA OR BETTER DATA? A Statistical Decision Problem. Jeff Dominitz Resolution Economics. and. Charles F. Manski Northwestern University

MORE DATA OR BETTER DATA? A Statistical Decision Problem. Jeff Dominitz Resolution Economics. and. Charles F. Manski Northwestern University MORE DATA OR BETTER DATA? A Statistical Decision Problem Jeff Dominitz Resolution Economics and Charles F. Manski Northwestern University Review of Economic Studies, 2017 Summary When designing data collection,

More information

Binomial Option Pricing

Binomial Option Pricing Binomial Option Pricing The wonderful Cox Ross Rubinstein model Nico van der Wijst 1 D. van der Wijst Finance for science and technology students 1 Introduction 2 3 4 2 D. van der Wijst Finance for science

More information

Known unknowns and unknown unknowns: uncertainty from the decision-makers perspective. Neil Hawkins Oxford Outcomes

Known unknowns and unknown unknowns: uncertainty from the decision-makers perspective. Neil Hawkins Oxford Outcomes Known unknowns and unknown unknowns: uncertainty from the decision-makers perspective Neil Hawkins Oxford Outcomes Outline Uncertainty Decision making under uncertainty Role of sensitivity analysis Fundamental

More information

A B C D E F 1 PAYOFF TABLE 2. States of Nature

A B C D E F 1 PAYOFF TABLE 2. States of Nature Chapter Decision Analysis Problem Formulation Decision Making without Probabilities Decision Making with Probabilities Risk Analysis and Sensitivity Analysis Decision Analysis with Sample Information Computing

More information

1.The 6 steps of the decision process are:

1.The 6 steps of the decision process are: 1.The 6 steps of the decision process are: a. Clearly define the problem Discussion and the factors that Questions influence it. b. Develop specific and measurable objectives. c. Develop a model. d. Evaluate

More information

A Taxonomy of Decision Models

A Taxonomy of Decision Models Decision Trees and Influence Diagrams Prof. Carlos Bana e Costa Lecture topics: Decision trees and influence diagrams Value of information and control A case study: Drilling for oil References: Clemen,

More information

Chapter 3. Decision Analysis. Learning Objectives

Chapter 3. Decision Analysis. Learning Objectives Chapter 3 Decision Analysis To accompany Quantitative Analysis for Management, Eleventh Edition, by Render, Stair, and Hanna Power Point slides created by Brian Peterson Learning Objectives After completing

More information

6.231 DYNAMIC PROGRAMMING LECTURE 3 LECTURE OUTLINE

6.231 DYNAMIC PROGRAMMING LECTURE 3 LECTURE OUTLINE 6.21 DYNAMIC PROGRAMMING LECTURE LECTURE OUTLINE Deterministic finite-state DP problems Backward shortest path algorithm Forward shortest path algorithm Shortest path examples Alternative shortest path

More information

Economic decision analysis: Concepts and applications

Economic decision analysis: Concepts and applications Economic decision analysis: Concepts and applications Jeffrey M. Keisler Stockholm, 23 May 2016 My background and this work Education in DA and Economics Government and industry consulting Portfolio DA

More information

Chapter 12. Decision Analysis

Chapter 12. Decision Analysis Page 1 of 80 Chapter 12. Decision Analysis [Page 514] [Page 515] In the previous chapters dealing with linear programming, models were formulated and solved in order to aid the manager in making a decision.

More information

Comparison of Decision-making under Uncertainty Investment Strategies with the Money Market

Comparison of Decision-making under Uncertainty Investment Strategies with the Money Market IBIMA Publishing Journal of Financial Studies and Research http://www.ibimapublishing.com/journals/jfsr/jfsr.html Vol. 2011 (2011), Article ID 373376, 16 pages DOI: 10.5171/2011.373376 Comparison of Decision-making

More information

IX. Decision Theory. A. Basic Definitions

IX. Decision Theory. A. Basic Definitions IX. Decision Theory Techniques used to find optimal solutions in situations where a decision maker is faced with several alternatives (Actions) and an uncertain or risk-filled future (Events or States

More information

Complex Decisions. Sequential Decision Making

Complex Decisions. Sequential Decision Making Sequential Decision Making Outline Sequential decision problems Value iteration Policy iteration POMDPs (basic concepts) Slides partially based on the Book "Reinforcement Learning: an introduction" by

More information

Decision Analysis Models

Decision Analysis Models Decision Analysis Models 1 Outline Decision Analysis Models Decision Making Under Ignorance and Risk Expected Value of Perfect Information Decision Trees Incorporating New Information Expected Value of

More information

Corporate Finance, Module 21: Option Valuation. Practice Problems. (The attached PDF file has better formatting.) Updated: July 7, 2005

Corporate Finance, Module 21: Option Valuation. Practice Problems. (The attached PDF file has better formatting.) Updated: July 7, 2005 Corporate Finance, Module 21: Option Valuation Practice Problems (The attached PDF file has better formatting.) Updated: July 7, 2005 {This posting has more information than is needed for the corporate

More information

Prioritisation Methodology

Prioritisation Methodology Prioritisation Methodology March 2014 PRIORITISATION METHODOLOGY Table of contents 1 Introduction... 5 2 The Projects Prioritisation Process... 7 3 The Methodological Assumptions... 8 3.1 Background...

More information

Decision making in the presence of uncertainty

Decision making in the presence of uncertainty Lecture 19 Decision making in the presence of uncertainty Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Decision-making in the presence of uncertainty Many real-world problems require to choose

More information

Using the Maximin Principle

Using the Maximin Principle Using the Maximin Principle Under the maximin principle, it is easy to see that Rose should choose a, making her worst-case payoff 0. Colin s similar rationality as a player induces him to play (under

More information

Framework and Methods for Infrastructure Management. Samer Madanat UC Berkeley NAS Infrastructure Management Conference, September 2005

Framework and Methods for Infrastructure Management. Samer Madanat UC Berkeley NAS Infrastructure Management Conference, September 2005 Framework and Methods for Infrastructure Management Samer Madanat UC Berkeley NAS Infrastructure Management Conference, September 2005 Outline 1. Background: Infrastructure Management 2. Flowchart for

More information

Lecture 12: Introduction to reasoning under uncertainty. Actions and Consequences

Lecture 12: Introduction to reasoning under uncertainty. Actions and Consequences Lecture 12: Introduction to reasoning under uncertainty Preferences Utility functions Maximizing expected utility Value of information Bandit problems and the exploration-exploitation trade-off COMP-424,

More information

BSc (Hons) Software Engineering BSc (Hons) Computer Science with Network Security

BSc (Hons) Software Engineering BSc (Hons) Computer Science with Network Security BSc (Hons) Software Engineering BSc (Hons) Computer Science with Network Security Cohorts BCNS/ 06 / Full Time & BSE/ 06 / Full Time Resit Examinations for 2008-2009 / Semester 1 Examinations for 2008-2009

More information

When one firm considers changing its price or output level, it must make assumptions about the reactions of its rivals.

When one firm considers changing its price or output level, it must make assumptions about the reactions of its rivals. Chapter 3 Oligopoly Oligopoly is an industry where there are relatively few sellers. The product may be standardized (steel) or differentiated (automobiles). The firms have a high degree of interdependence.

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2017 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2017 These notes have been used and commented on before. If you can still spot any errors or have any suggestions for improvement, please

More information

56:171 Operations Research Midterm Examination Solutions PART ONE

56:171 Operations Research Midterm Examination Solutions PART ONE 56:171 Operations Research Midterm Examination Solutions Fall 1997 Answer both questions of Part One, and 4 (out of 5) problems from Part Two. Possible Part One: 1. True/False 15 2. Sensitivity analysis

More information

Dr. Abdallah Abdallah Fall Term 2014

Dr. Abdallah Abdallah Fall Term 2014 Quantitative Analysis Dr. Abdallah Abdallah Fall Term 2014 1 Decision analysis Fundamentals of decision theory models Ch. 3 2 Decision theory Decision theory is an analytic and systemic way to tackle problems

More information

Chapter 21. Dynamic Programming CONTENTS 21.1 A SHORTEST-ROUTE PROBLEM 21.2 DYNAMIC PROGRAMMING NOTATION

Chapter 21. Dynamic Programming CONTENTS 21.1 A SHORTEST-ROUTE PROBLEM 21.2 DYNAMIC PROGRAMMING NOTATION Chapter 21 Dynamic Programming CONTENTS 21.1 A SHORTEST-ROUTE PROBLEM 21.2 DYNAMIC PROGRAMMING NOTATION 21.3 THE KNAPSACK PROBLEM 21.4 A PRODUCTION AND INVENTORY CONTROL PROBLEM 23_ch21_ptg01_Web.indd

More information

CSE 473: Artificial Intelligence

CSE 473: Artificial Intelligence CSE 473: Artificial Intelligence Markov Decision Processes (MDPs) Luke Zettlemoyer Many slides over the course adapted from Dan Klein, Stuart Russell or Andrew Moore 1 Announcements PS2 online now Due

More information

Committees and rent-seeking effort under probabilistic voting

Committees and rent-seeking effort under probabilistic voting Public Choice 112: 345 350, 2002. 2002 Kluwer Academic Publishers. Printed in the Netherlands. 345 Committees and rent-seeking effort under probabilistic voting J. ATSU AMEGASHIE Department of Economics,

More information

Next Year s Demand -Alternatives- Low High Do nothing Expand Subcontract 40 70

Next Year s Demand -Alternatives- Low High Do nothing Expand Subcontract 40 70 Lesson 04 Decision Making Solutions Solved Problem #1: see text book Solved Problem #2: see textbook Solved Problem #3: see textbook Solved Problem #6: (costs) see textbook #1: A small building contractor

More information

INCORPORATING RISK IN A DECISION SUPPORT SYSTEM FOR PROJECT ANALYSIS AND EVALUATION a

INCORPORATING RISK IN A DECISION SUPPORT SYSTEM FOR PROJECT ANALYSIS AND EVALUATION a INCORPORATING RISK IN A DECISION SUPPORT SYSTEM FOR PROJECT ANALYSIS AND EVALUATION a Pedro C. Godinho, Faculty of Economics of the University of Coimbra and INESC João Paulo Costa, Faculty of Economics

More information

Externality and Corrective Measures

Externality and Corrective Measures Externality and Corrective Measures Ram Singh Microeconomic Theory Lecture 20 Ram Singh: (DSE) Market Failure Lecture 20 1 / 25 Questions Question What is an externality? What corrective measures are available

More information

Decision making in the presence of uncertainty

Decision making in the presence of uncertainty CS 271 Foundations of AI Lecture 21 Decision making in the presence of uncertainty Milos Hauskrecht milos@cs.pitt.edu 5329 Sennott Square Decision-making in the presence of uncertainty Many real-world

More information

TDT4171 Artificial Intelligence Methods

TDT4171 Artificial Intelligence Methods TDT47 Artificial Intelligence Methods Lecture 7 Making Complex Decisions Norwegian University of Science and Technology Helge Langseth IT-VEST 0 helgel@idi.ntnu.no TDT47 Artificial Intelligence Methods

More information

The exam is closed book, closed calculator, and closed notes except your one-page crib sheet.

The exam is closed book, closed calculator, and closed notes except your one-page crib sheet. CS 188 Spring 2015 Introduction to Artificial Intelligence Midterm 1 You have approximately 2 hours and 50 minutes. The exam is closed book, closed calculator, and closed notes except your one-page crib

More information

Learning Objectives 6/2/18. Some keys from yesterday

Learning Objectives 6/2/18. Some keys from yesterday Valuation and pricing (November 5, 2013) Lecture 12 Decisions Risk & Uncertainty Olivier J. de Jong, LL.M., MM., MBA, CFD, CFFA, AA www.centime.biz Some keys from yesterday Learning Objectives v Explain

More information

Auction Theory: Some Basics

Auction Theory: Some Basics Auction Theory: Some Basics Arunava Sen Indian Statistical Institute, New Delhi ICRIER Conference on Telecom, March 7, 2014 Outline Outline Single Good Problem Outline Single Good Problem First Price Auction

More information

MS-E2114 Investment Science Lecture 4: Applied interest rate analysis

MS-E2114 Investment Science Lecture 4: Applied interest rate analysis MS-E2114 Investment Science Lecture 4: Applied interest rate analysis A. Salo, T. Seeve Systems Analysis Laboratory Department of System Analysis and Mathematics Aalto University, School of Science Overview

More information

343H: Honors AI. Lecture 7: Expectimax Search 2/6/2014. Kristen Grauman UT-Austin. Slides courtesy of Dan Klein, UC-Berkeley Unless otherwise noted

343H: Honors AI. Lecture 7: Expectimax Search 2/6/2014. Kristen Grauman UT-Austin. Slides courtesy of Dan Klein, UC-Berkeley Unless otherwise noted 343H: Honors AI Lecture 7: Expectimax Search 2/6/2014 Kristen Grauman UT-Austin Slides courtesy of Dan Klein, UC-Berkeley Unless otherwise noted 1 Announcements PS1 is out, due in 2 weeks Last time Adversarial

More information

The Diversification of Employee Stock Options

The Diversification of Employee Stock Options The Diversification of Employee Stock Options David M. Stein Managing Director and Chief Investment Officer Parametric Portfolio Associates Seattle Andrew F. Siegel Professor of Finance and Management

More information

Applying Risk Theory to Game Theory Tristan Barnett. Abstract

Applying Risk Theory to Game Theory Tristan Barnett. Abstract Applying Risk Theory to Game Theory Tristan Barnett Abstract The Minimax Theorem is the most recognized theorem for determining strategies in a two person zerosum game. Other common strategies exist such

More information

DECISION MAKING. Decision making under conditions of uncertainty

DECISION MAKING. Decision making under conditions of uncertainty DECISION MAKING Decision making under conditions of uncertainty Set of States of nature: S 1,..., S j,..., S n Set of decision alternatives: d 1,...,d i,...,d m The outcome of the decision C ij depends

More information

TR : Knowledge-Based Rational Decisions and Nash Paths

TR : Knowledge-Based Rational Decisions and Nash Paths City University of New York (CUNY) CUNY Academic Works Computer Science Technical Reports Graduate Center 2009 TR-2009015: Knowledge-Based Rational Decisions and Nash Paths Sergei Artemov Follow this and

More information

Consumption and Portfolio Choice under Uncertainty

Consumption and Portfolio Choice under Uncertainty Chapter 8 Consumption and Portfolio Choice under Uncertainty In this chapter we examine dynamic models of consumer choice under uncertainty. We continue, as in the Ramsey model, to take the decision of

More information

Phil 321: Week 2. Decisions under ignorance

Phil 321: Week 2. Decisions under ignorance Phil 321: Week 2 Decisions under ignorance Decisions under Ignorance 1) Decision under risk: The agent can assign probabilities (conditional or unconditional) to each state. 2) Decision under ignorance:

More information

FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015.

FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015. FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015.) Hints for Problem Set 2 1. Consider a zero-sum game, where

More information