Stochastic Manufacturing & Service Systems. Discrete-time Markov Chain
ISYE 33 B, Fall, Week #7, September 9 - October 3
Stochastic Manufacturing & Service Systems
Xinchang Wang
H. Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology

Discrete-time Markov Chain

Introduction

Newsvendor-type models, including the profit-maximization and cost-minimization variants, are developed specifically for controlling the inventory of perishable products, for which leftovers earn only a marginal (or negative) salvage value. An immediate question is: what about non-perishable products, such as cars, steel, and furniture? If you make order/production plans month by month, you do not want to simply get rid of the leftovers at the end of a month; the leftovers are still valuable. They will be put on the market for the next month just like brand-new products, so we can use them to fill the next month's demand. It is often the case that we also need to order new products, which, together with the leftovers, fill that demand. We therefore need a new method to model this problem.

Notice that a newsvendor problem involves only one period, so the order/production decision (e.g., order or not) is pretty much a one-time deal. A non-perishable product, by contrast, can stay on the market for more than one period until it is sold. We may have leftovers in every period, and at the end of each period we must make an order/production decision: order or not, and if we order, how many? With multiple periods to take care of, we are interested in coming up with the best order policy, the one that makes our business most profitable over the long run.

Here is another question: in what sense can we judge a decision policy to be the best? In other words, which policy or policies should be thought of as the best? We need an evaluation criterion. The criterion is the long-run average profit.
If an order policy gives us the maximum long-run average profit, that policy is the one we are seeking, and we call it the best order policy. Let Π be the set of possible order policies, and let R(π) denote the long-run average profit under policy π ∈ Π. We are looking for some policy π* such that

    R(π*) ≥ R(π)  for all other π ∈ Π.    (1)

As you can see, the long-run average is only defined after we specify a policy to be used; i.e., we can only make sense of R(π) after we know π. How do we solve (1)? An intuitive idea is: for each possible policy π, compute R(π), and pick the π that gives us the biggest R(π). For example:

(a) Policy π1 gives some long-run average profit R(π1).
(b) Policy π2 gives R(π2). (c) Policy π3 gives R(π3). We then choose whichever of the three policies achieves the largest value.

1. Long-run Average Profit

Consider a business selling non-perishable products. Our objective is to define what the long-run average profit is under a policy π. We suppress π for ease of notation. Start with a bunch of symbols:

(a) n: the period index, n = 1, 2, 3, .... A period can be a day, week, month, or year (light years too?).
(b) X_n: the number of products you have on hand at the end of period n = 1, 2, ...; X_n is also called the inventory level at the end of period n. X_0 is the initial inventory level before you start doing business and is assumed to be known.
(c) D_n: the demand for period n = 1, 2, .... {D_n}_{n≥1} is assumed to be an i.i.d. sequence of random variables (rv's).

First, I want to argue that {X_n}_{n≥0} is a sequence of rv's, dependent on the policy π. Consider the following story, given in steps.

Step 1. You have x products at the end of period n, i.e., X_n = x.
Step 2. You decide not to order.
Step 3. Then how many products do you have on hand at the beginning of period n+1? Exactly x!
Step 4. Now, the inventory level X_{n+1} at the end of period n+1 depends on two things: (1) the x products on hand and (2) the demand D_{n+1}. You can verify that X_{n+1} = (x - D_{n+1})^+, which is exactly the number of leftovers we calculated in the newsvendor problem.
Step 5. A standard result tells us: for any rv X and any (strictly speaking, measurable) function g, g(X) is an rv too. Thus X_{n+1} = (x - D_{n+1})^+ is an rv, because D_{n+1} is an rv.
Step 6. We can conclude that {X_n}_{n≥0} is a sequence of rv's.
Step 7. It follows from Step 2 that X_{n+1} depends on whether we make the order or not. Yes, it depends on the order policy π.

Remark 1.1. {X_n}_{n≥0} depends on the policy π and becomes well defined only after π is given.
Remark 1.2. {X_n}_{n≥0} is a sequence of rv's. We have a name for such a sequence: a stochastic process.

Definition 1.1.
A sequence of random variables with a discrete set of indices is said to be a discrete-time stochastic process.
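The recursion from Step 4 is easy to sketch in code. A minimal illustration (the function name and sample numbers below are mine, not from the notes): under a never-order policy, each period's ending inventory is the positive part of what was on hand minus that period's demand.

```python
def inventory_path(x0, demands):
    """Ending inventory levels X_1, X_2, ... under a never-order policy,
    via the recursion X_{n+1} = (X_n - D_{n+1})^+ (unmet demand is lost)."""
    path, x = [], x0
    for d in demands:
        x = max(x - d, 0)  # the positive-part operator (.)^+
        path.append(x)
    return path

# Starting from X_0 = 5 with demands 2, 1, 4:
print(inventory_path(5, [2, 1, 4]))  # [3, 2, 0]
```

Feeding in random demands instead of a fixed list gives one sample path of the stochastic process {X_n}.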
Second, let us compute the profit for each period n. Since we need a way to define the long-run average, we have to start with the profit earned in each period.

Question: What is the profit for period n? Let R_n denote the profit for period n. At the end of the last period, n-1, assume that we have X_{n-1} = x leftovers and our policy π tells us not to order (for ease of illustration). We keep the x units into the beginning of period n. The demand in period n is D_n.

(a) What we can sell in period n: min(x, D_n).
(b) The profit we get: R_n = p min(x, D_n), where p is the unit selling price.

Remember that the leftovers will be put on the market for the next period and become part of the inventory at the beginning of that period, so no salvage value contributes to the profit for this period. You see, we have R_n for period n; do this for each period. Now we are ready to define the long-run average profit R:

    R = lim_{n→∞} (R_1 + R_2 + ... + R_n) / n.

One last thing: we can remove the assumption that X_{n-1} = x; i.e., we can allow the inventory level to be random (and it is random) and still get the above expression for the long-run average profit. In that case, R_n = p min(X_{n-1}, D_n).

Now, as you can observe, the long-run average profit R is the average of the profits over all periods in the long run, and, more importantly, it depends on X_0, X_1, ..., X_n, ....

Remark 1.3. The long-run average profit R depends on {X_n}_{n≥0}.

Go back to our aim: to solve (1)! It follows from Remarks 1.1 and 1.3 that we have the following chain:

    a policy π --(a): understand--> {X_0, X_1, ..., X_n, ...} --(b): compute--> R.

Thus we have two steps toward our goal. A month of lectures will be devoted just to step (a): understanding X_n given a policy π. But I will come back to step (b) and solve (1) after we understand X_n.

1.1. A Popular (s, S) Policy

A popular type of inventory control policy is as follows. Remember that X_n is the inventory level at the end of period n. Say: if X_n > s, do not order; if X_n ≤ s, order up to S items.
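This ordering rule can be written as a one-line function. A sketch (the particular values s = 1 and S = 4 below are placeholders of mine for illustration):

```python
def order_quantity(x, s, S):
    """(s, S) rule: if end-of-period inventory x is at or below the reorder
    point s, order enough to bring the level up to S; otherwise order nothing."""
    return S - x if x <= s else 0

# Illustrative values: reorder point s = 1, order-up-to level S = 4
print(order_quantity(1, s=1, S=4))  # 3 -> at/below s, order up to S
print(order_quantity(3, s=1, S=4))  # 0 -> above s, no order
```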
This type of policy is called an (s, S) policy. In general, we can define it as:

1. If X_n ≤ s, order S - X_n items.
2. Otherwise, do not order.

This is a very popular policy; virtually every company has some version of it. We restrict our attention to this type of policy, the (s, S) policies. In other words, we will work to understand X_n under an (s, S) policy.
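Once we can simulate the inventory process, the compute-R(π)-for-each-π idea behind (1) can be tried directly on (s, S) policies. The sketch below is mine, with assumed price, cost, demand distribution, and candidate (s, S) pairs: it estimates each candidate's long-run average profit from a long simulated run and keeps the best.

```python
import random

def avg_profit(s, S, demand_pmf, price, cost, periods, seed=0):
    """Estimate the long-run average profit (R_1 + ... + R_n)/n of an
    (s, S) policy by simulating `periods` periods."""
    rng = random.Random(seed)
    d_vals, d_probs = zip(*sorted(demand_pmf.items()))
    x, total = S, 0.0  # start fully stocked
    for _ in range(periods):
        order = S - x if x <= s else 0      # (s, S) decision
        x += order
        d = rng.choices(d_vals, weights=d_probs)[0]
        sold = min(x, d)                    # sales are capped by stock
        total += price * sold - cost * order
        x -= sold                           # leftovers carry over
    return total / periods

demand = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}  # assumed uniform demand
candidates = [(0, 2), (1, 4), (2, 5)]
best = max(candidates,
           key=lambda p: avg_profit(*p, demand, price=10, cost=4, periods=50_000))
print(best)
```

Brute-force comparison like this only works when the candidate set is small; the point of the coming lectures is to understand {X_n} well enough to evaluate policies analytically.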
2. An Inventory Model for Non-perishable Products

Example 2.1 (Inventory Model for a Non-Perishable Item). D_n is the demand in the nth period. Note that inventory left at the end of a week can be used to satisfy the demand in the following week. We assume {D_n} is an i.i.d. sequence with distribution

    d           0    1    2    3
    P(D_n = d)  1/4  1/4  1/4  1/4

How do you analyze the performance of an (s, S) policy? Every Friday at 5 pm, say, we decide how much to order for the following week, so that the ordered items arrive (immediately, by assumption) the following Monday morning.

2.1. Matrix Representation of a Bunch of Transition Probabilities

We begin the analysis by building a table of transition probabilities under the i.i.d. demand distribution above. In the real world, demand is usually not i.i.d.; there can be seasonality. But in some businesses, such as Walmart selling items as cheaply as possible, demand can reasonably be modeled as i.i.d.

Let X_n be the inventory level at the end of week n. Note that the values X_n can take lie in {0, 1, 2, 3, 4} (why? we never order above S = 4, and demand can drive the level down to 0).

Question 2.1. What is the value of P(X_{n+1} = 3 | X_n = i) for a state i with s < i < 3? Since i > s, we do not order, so week n+1 starts with i ≤ 2 units and the inventory can only go down: the probability is 0!

Question 2.2. How about the same probability when i ≤ s? We order up to S = 4 over the weekend, so week n+1 starts with 4 units, and

    P(X_{n+1} = 3 | X_n = i) = P(D_{n+1} = 1).

Note that we order according to our inventory policy over the nth weekend, so for the level to end at 3, the demand in week n+1 must be exactly 1.

A matrix is a good way to collect all these probabilities in a neat form; call it P. Some properties that I want you to notice:

(a) Row index: the state from which the system transitions.
(b) Column index: the state to which the system transitions.
(c) P = (p_ij), i, j ∈ S, satisfies p_ij ∈ [0, 1].
(d) Each row sums to unity.
(e) Each column need not sum to unity.
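The table-building step can be automated. In the sketch below, S = 4 and the uniform demand match the example above, while the reorder point s = 1 is my assumption for illustration: for each current state i, the week starts with S units if i ≤ s and with i units otherwise, and the next state is the positive part of the starting level minus the demand.

```python
def transition_matrix(s, S, demand_pmf):
    """Build P = (p_ij) for the end-of-week inventory chain under an (s, S)
    policy: p_ij = P(X_{n+1} = j | X_n = i)."""
    P = [[0.0] * (S + 1) for _ in range(S + 1)]
    for i in range(S + 1):
        start = S if i <= s else i          # order over the weekend if i <= s
        for d, prob in demand_pmf.items():
            j = max(start - d, 0)           # next end-of-week level
            P[i][j] += prob                 # demands d >= start pool at j = 0
    return P

demand = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
P = transition_matrix(1, 4, demand)
print(P[0][3])  # from state 0 we restock to 4, so this is P(D = 1) = 0.25
```

Checking that every row of the returned matrix sums to 1 is a quick way to catch modeling mistakes (property (d) above).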
3. Formal Definition of a Discrete-time Markov Chain (DTMC)

Let us formally define a discrete-time Markov chain. A discrete-time Markov chain has the following elements.

(a) State space S, e.g., S = {0, 1, 2, 3, 4}. You will see that S does not have to be finite. A state represents a possible value that X_n can take; the state space is the set of all such values.
(b) Transition probability matrix P = (p_ij) such that p_ij ≥ 0 and sum_{j ∈ S} p_ij = 1 for every i. This is the matrix you just saw above. (Each row sums to 1, but a column does not have to.)
(c) Initial state (distribution): how much inventory you are given at the starting point, i.e., the information about X_0.

Definition 3.1 (Discrete-Time Markov Chain). A discrete-time stochastic process X = {X_n : n = 0, 1, 2, ...} is said to be a DTMC on state space S with transition matrix P if, for each n and all i_0, i_1, ..., i_{n-1}, i, j ∈ S,

    P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i) = p_ij.    (2)

The most important part of this definition is (2). At this point, let us recall the definition of conditional probability:

    P(A | B) = P(A ∩ B) / P(B) = P(A, B) / P(B).

Property (2) is called the Markov property. In plain English, it says that once you know today's state, tomorrow's state has nothing to do with past information. No matter how you reached the current state, your tomorrow depends only on the current state. In mathematical notation,

    P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i) = P(X_{n+1} = j | X_n = i).

1. Past states: X_0 = i_0, X_1 = i_1, ..., X_{n-1} = i_{n-1}
2. Current state: X_n = i
3. Future state: X_{n+1} = j
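A DTMC given by (S, P, X_0) is straightforward to simulate, which is a handy way to sanity-check a transition matrix: at each step, the next state is drawn from the current state's row of P, and, per the Markov property (2), nothing else about the history matters. The helper below is a sketch of mine, with states labeled 0, 1, ..., len(P)-1.

```python
import random

def simulate_dtmc(P, x0, steps, seed=0):
    """Sample a DTMC path X_0, X_1, ..., X_steps: the next state is drawn
    from the current state's row of P, independent of earlier history."""
    rng = random.Random(seed)
    states = range(len(P))
    path = [x0]
    for _ in range(steps):
        path.append(rng.choices(states, weights=P[path[-1]])[0])
    return path

# A two-state chain that flips state deterministically each step
P = [[0.0, 1.0],
     [1.0, 0.0]]
print(simulate_dtmc(P, 0, 4))  # [0, 1, 0, 1, 0]
```

Running this on the inventory matrix from Section 2.1 produces sample paths of the end-of-week inventory level.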
This is an important notice from Prudential about your prescription drug coverage and Medicare. If you are not eligible for Medicare benefits, this notice does not apply to you and you do not need to take
More informationTDT4171 Artificial Intelligence Methods
TDT47 Artificial Intelligence Methods Lecture 7 Making Complex Decisions Norwegian University of Science and Technology Helge Langseth IT-VEST 0 helgel@idi.ntnu.no TDT47 Artificial Intelligence Methods
More information2c Tax Incidence : General Equilibrium
2c Tax Incidence : General Equilibrium Partial equilibrium tax incidence misses out on a lot of important aspects of economic activity. Among those aspects : markets are interrelated, so that prices of
More informationClass 12. Daniel B. Rowe, Ph.D. Department of Mathematics, Statistics, and Computer Science. Marquette University MATH 1700
Class 12 Daniel B. Rowe, Ph.D. Department of Mathematics, Statistics, and Computer Science Copyright 2017 by D.B. Rowe 1 Agenda: Recap Chapter 6.1-6.2 Lecture Chapter 6.3-6.5 Problem Solving Session. 2
More information4 Martingales in Discrete-Time
4 Martingales in Discrete-Time Suppose that (Ω, F, P is a probability space. Definition 4.1. A sequence F = {F n, n = 0, 1,...} is called a filtration if each F n is a sub-σ-algebra of F, and F n F n+1
More informationIntroduction to Dynamic Programming
Introduction to Dynamic Programming http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html Acknowledgement: this slides is based on Prof. Mengdi Wang s and Prof. Dimitri Bertsekas lecture notes Outline 2/65 1
More informationThe Normal Distribution
Will Monroe CS 09 The Normal Distribution Lecture Notes # July 9, 207 Based on a chapter by Chris Piech The single most important random variable type is the normal a.k.a. Gaussian) random variable, parametrized
More informationFeb. 20th, Recursive, Stochastic Growth Model
Feb 20th, 2007 1 Recursive, Stochastic Growth Model In previous sections, we discussed random shocks, stochastic processes and histories Now we will introduce those concepts into the growth model and analyze
More informationDo all of Part One (1 pt. each), one from Part Two (15 pts.), and four from Part Three (15 pts. each) <><><><><> PART ONE <><><><><>
56:171 Operations Research Final Exam - December 13, 1989 Instructor: D.L. Bricker Do all of Part One (1 pt. each), one from Part Two (15 pts.), and four from
More informationStat 476 Life Contingencies II. Profit Testing
Stat 476 Life Contingencies II Profit Testing Profit Testing Profit testing is commonly done by actuaries in life insurance companies. It s useful for a number of reasons: Setting premium rates or testing
More informationOutline Brownian Process Continuity of Sample Paths Differentiability of Sample Paths Simulating Sample Paths Hitting times and Maximum
Normal Distribution and Brownian Process Page 1 Outline Brownian Process Continuity of Sample Paths Differentiability of Sample Paths Simulating Sample Paths Hitting times and Maximum Searching for a Continuous-time
More information6.262: Discrete Stochastic Processes 3/2/11. Lecture 9: Markov rewards and dynamic prog.
6.262: Discrete Stochastic Processes 3/2/11 Lecture 9: Marov rewards and dynamic prog. Outline: Review plus of eigenvalues and eigenvectors Rewards for Marov chains Expected first-passage-times Aggregate
More informationMA 1125 Lecture 12 - Mean and Standard Deviation for the Binomial Distribution. Objectives: Mean and standard deviation for the binomial distribution.
MA 5 Lecture - Mean and Standard Deviation for the Binomial Distribution Friday, September 9, 07 Objectives: Mean and standard deviation for the binomial distribution.. Mean and Standard Deviation of the
More informationArbitrage Pricing. What is an Equivalent Martingale Measure, and why should a bookie care? Department of Mathematics University of Texas at Austin
Arbitrage Pricing What is an Equivalent Martingale Measure, and why should a bookie care? Department of Mathematics University of Texas at Austin March 27, 2010 Introduction What is Mathematical Finance?
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 8 The portfolio selection problem The portfolio
More information