IEOR E4004: Introduction to OR: Deterministic Models

1 Dynamic Programming

Following is a summary of the problems we discussed in class. (We do not include the discussion of the container problem or the cannibals-and-missionaries problem, because those were mostly philosophical discussions.)

1.1 Matrix Multiplication

We wish to compute the matrix product $A_1 A_2 \cdots A_{n-1} A_n$, where $A_i$ has $a_i$ rows and $a_{i+1}$ columns. How can we compute this product while minimizing the number of multiplications used?

To study this problem using dynamic programming, we let $m_{i,j}$ be the optimal number of multiplications needed to compute the product $A_i A_{i+1} \cdots A_j$, for $i < j$. The DP recursion is easily seen to be

$m_{i,i+1} = a_i a_{i+1} a_{i+2}, \qquad m_{i,j} = \min_{k:\, i \le k < j} \{ m_{i,k} + m_{k+1,j} + a_i a_{k+1} a_{j+1} \}, \quad j > i+1,$

where $m_{i,i}$ is taken to be $0$. Here $k$ is the split point of the last multiplication: the product $A_i \cdots A_k$ (an $a_i \times a_{k+1}$ matrix) is multiplied by $A_{k+1} \cdots A_j$ (an $a_{k+1} \times a_{j+1}$ matrix), at a cost of $a_i a_{k+1} a_{j+1}$ multiplications. Alternatively, one can say:

$m_{i,i} = 0, \qquad m_{i,j} = \min_{k:\, i \le k < j} \{ m_{i,k} + m_{k+1,j} + a_i a_{k+1} a_{j+1} \}, \quad j > i.$

We first find $m_{i,j}$ for all $i \le j$ for which $j - i$ is $0$, then for all pairs for which $j - i$ is $1$, etc. Observe that the DP recursion is such that if $j - i = d$, then $m_{i,j}$ depends only on the values $m_{i',j'}$ for those pairs $(i', j')$ for which $j' - i' < d$. (Thus our way of computing the $m_{i,j}$ values works.) We are of course interested in $m_{1,n}$. The algorithm runs in $O(n^3)$ time: we need to fill a table with $O(n^2)$ entries, each of which requires $O(n)$ time to fill.

1.2 Knapsack problem

We are given a knapsack with capacity $W$. We have $n$ items labeled $1, 2, \ldots, n$; item $i$ has size $s_i$ and value $v_i$. Any subset of items with total size at most $W$ can be packed into the knapsack. Our goal is to pack into the knapsack a subset of maximum value, among all subsets whose total size is at most $W$. Assume all the data are non-negative integers. How does one identify a maximum-value subset?
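The table-filling procedure of Section 1.1 can be sketched in code. The following is a minimal bottom-up version; the function name and the example dimensions in the usage note are our own illustrative choices, not from the notes.

```python
def matrix_chain_cost(a):
    """Minimum number of scalar multiplications to compute A_1 A_2 ... A_n,
    where A_i has a[i-1] rows and a[i] columns (so len(a) == n + 1)."""
    n = len(a) - 1  # number of matrices
    # m[i][j] = optimal cost of the product A_i ... A_j (1-indexed); m[i][i] = 0.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    # Fill by increasing difference d = j - i, exactly as in the notes.
    for d in range(1, n):
        for i in range(1, n - d + 1):
            j = i + d
            # Try every split point k: (A_i ... A_k)(A_{k+1} ... A_j).
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + a[i - 1] * a[k] * a[j]
                for k in range(i, j)
            )
    return m[1][n]
```

For instance, with dimension vector $(a_1, a_2, a_3, a_4) = (10, 30, 5, 60)$ (three matrices), `matrix_chain_cost([10, 30, 5, 60])` returns `4500`, achieved by computing $(A_1 A_2) A_3$.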

We let $f(j, w)$ be the maximum value one can achieve in the knapsack problem with items $\{1, 2, \ldots, j\}$ and knapsack capacity $w$. We are interested in determining $f(n, W)$. It is easy to see that

$f(j+1, w) = \max\{\, f(j, w),\; v_{j+1} + f(j, w - s_{j+1}) \,\}.$

(The second term exists only if $w \ge s_{j+1}$; otherwise it is taken to be zero.) To justify this, consider the following argument. If we are given a knapsack problem with $j+1$ items and knapsack capacity $w$, an optimal packing will either exclude the last item or include it. In the former case, we are left with a knapsack problem involving the first $j$ items and knapsack capacity $w$; in the latter case, we are left with a knapsack problem involving the first $j$ items and knapsack capacity $w - s_{j+1}$, but we also pick up an additional value $v_{j+1}$.

We can now solve the recursion bottom-up: clearly $f(1, w)$ is trivial to determine for all $0 \le w \le W$; using this and the recursion described above, we can determine $f(2, w)$ for all $0 \le w \le W$, etc. There are $n(W+1)$ entries to find, each of which can be determined in $O(1)$ time; so the complexity of the DP algorithm we just described is $O(nW)$.

1.3 Shortest path problem

Given a graph $G = (V, E)$ with distances $c_{ij} \ge 0$ on the edges, find a shortest path from a given node $s \in V$ to all other nodes in $V$. Let $c_{i,i} = 0$ for all $i \in V$. We also let $c_{i,j} = \infty$ for all $(i, j) \notin E$, so that we may now assume $c_{i,j}$ is defined for every $i, j \in V$. We considered two versions of the shortest path problem: the one-to-many version, in which we need to find a shortest path from $s$ to every other node in $V$, and the many-to-many version, in which we need to find a shortest path from every node to every other node.

One-to-many. Let $f_k(j)$ be the length of a shortest path from $s$ to $j$ using at most $k$ edges, for $k = 1, 2, \ldots, n-1$. We are interested in determining $f_{n-1}(v)$ for each $v \in V \setminus \{s\}$. The DP recursion is:

$f_{k+1}(j) = \min_{i \in V} \{ f_k(i) + c_{ij} \}.$

The justification is as follows: consider any path that takes you from $s$ to $j$ using at most $k+1$ edges. Let $i$ be the node that this path visits just before it reaches $j$; that is, let $(i, j)$ be the last edge of this path. In that case, the length of the best such path is $f_k(i) + c_{ij}$. Since we do not know what $i$ is for the optimal path from $s$ to $j$ using at most $k+1$ edges, we try all possible values of $i$. We can compute the $f_k(\cdot)$ starting from $f_1(\cdot)$ (which is trivial: $f_1(j) = c_{sj}$), then using the recursion to find $f_2(\cdot)$, then $f_3(\cdot)$, etc. Since a shortest path from $s$ to any node $j$ need not use more than $n-1$ edges, it is enough to find $f_{n-1}(j)$ for all $j \in V \setminus \{s\}$. (What is the complexity of this procedure?)

Many-to-many. One could solve the many-to-many problem by solving $n$ independent one-to-many problems, one for each node $v \in V$ as a source. Here is another way. Let $g_k(i, j)$ be the length of a shortest path from $i$ to $j$ using only a subset of $\{1, 2, \ldots, k\}$ as intermediate nodes. We maintain this quantity for all pairs $i, j \in V$. The DP recursion is:

$g_{k+1}(i, j) = \min\{\, g_k(i, j),\; g_k(i, k+1) + g_k(k+1, j) \,\}.$

The justification is as follows: the optimal path from $i$ to $j$ using only a subset of $\{1, 2, \ldots, k, k+1\}$ as intermediate nodes may or may not use node $k+1$ as an intermediate node. If the path does not use $k+1$ as an intermediate node, then it uses only a subset of $\{1, 2, \ldots, k\}$ as intermediate nodes, and its length is $g_k(i, j)$ (by definition). If the path does use $k+1$ as an intermediate node, then its total cost can be decomposed into two parts: the part that takes you from $i$ to $k+1$ and the part that takes you from $k+1$ to $j$. Each of these subpaths can use only a subset of $\{1, 2, \ldots, k\}$ as intermediate nodes, so their costs are $g_k(i, k+1)$ and $g_k(k+1, j)$, respectively.

Again, $g_1(\cdot, \cdot)$ is trivial to compute; starting from this we can compute $g_2(\cdot, \cdot)$ using the DP recursion, then $g_3(\cdot, \cdot)$, etc. The solution to the problem is given by $g_n(i, j)$ for all $i, j \in V$. The complexity of this procedure can be estimated as follows: there are $O(n^3)$ entries to be determined ($n$ possible values of $k$, and $O(n)$ possible values of $i$ and $j$ each); each entry can be found by looking up three entries already stored in the table, so each can be found in $O(1)$ time.

1.4 Traveling salesman problem

In the traveling salesman problem, we are again given a complete directed graph with node set $V$, with $c_{ij}$ representing the cost of going from node $i$ to node $j$, for all $i, j \in V$, $i \ne j$. The goal is to determine a minimum-cost tour starting at node 1, visiting each node exactly once, and returning to node 1. To solve this problem by dynamic programming, we maintain $f(S, k)$ for each $S \subseteq V \setminus \{1\}$ and each $k \in S$. The interpretation of $f(S, k)$ is that it is the cost of an optimal path that starts at node 1, visits each node in $S$ exactly once, and ends at node $k$. The DP recursion is easily seen to be:

$f(S, k) = \min_{j \in S \setminus \{k\}} \{ f(S \setminus \{k\}, j) + c_{jk} \}.$

That this recursion is correct follows from a simple observation: any path that starts at 1, ends at $k$, and visits each node in $S$ exactly once must have a last edge that goes from some node $j$ to node $k$; in that case, the length of this path is precisely $f(S \setminus \{k\}, j) + c_{jk}$. Since we do not know the identity of $j$, we try all possibilities. Once again we start with all subsets $S$ with one element, in which case determining $f(\cdot, \cdot)$ is trivial ($f(\{k\}, k) = c_{1k}$); we then use this to determine $f(\cdot, \cdot)$ on all subsets $S$ with two elements, then all subsets with three elements, etc. The optimal tour length can be determined by finding

$\min_k \{ f(\{2, 3, \ldots, n\}, k) + c_{k1} \}.$

This is so because any optimal tour visits some node last (before returning to node 1); if this node is $k$, then the tour length is precisely $f(\{2, 3, \ldots, n\}, k) + c_{k1}$. Again, since we do not know $k$, we try all the
$(n-1)$ possibilities. The complexity of the DP procedure here is easy to determine: the number of states is $O(n 2^n)$, and each $f(\cdot, \cdot)$ value can be determined in $O(n)$ time, so the overall complexity is $O(n^2 2^n)$. While this is expensive, it is substantially better than naively enumerating all tours (in that case we would have to list $(n-1)!$ tours).

1.5 A scheduling problem

As our last problem we turn to a very simple single-machine scheduling problem. We have a single machine and $n$ jobs, with job $i$ needing processing time $p_i$ on the machine. The completion time of job $i$, denoted $C_i$, is the epoch at which job $i$ completes its processing on the machine. To illustrate the definitions, suppose $n = 3$, $p_1 = 1$, $p_2 = 4$, and $p_3 = 2$. If the jobs are processed in the order $123$, then $C_1 = 1$, $C_2 = 5$, $C_3 = 7$; if the jobs are processed in the order $231$, then $C_2 = 4$, $C_3 = 6$, $C_1 = 7$.

Let $f_i(x)$ be the cost incurred by job $i$ if its completion time is $x$. Suppose $f_i(x)$ is nondecreasing in $x$ for each $i$. (Note that different jobs could have different cost functions.) For each sequence $\sigma$ of the jobs, let

$g(\sigma) = \max_j \{ f_j(C_j^\sigma) \},$

where $C_j^\sigma$ is the completion time of job $j$ in the schedule $\sigma$. The goal is to find a $\sigma$ that minimizes $g(\sigma)$.

To illustrate, suppose $f_1(x) = 5x^2$, $f_2(x) = 3x$, and $f_3(x) = x + 3$. Then for the sequence $(123)$, the cost of job 1 is $f_1(1) = 5$, the cost of job 2 is $f_2(5) = 15$, and the cost of job 3 is $f_3(7) = 10$; thus $g(123) = \max\{5, 15, 10\} = 15$. For the sequence $(231)$, the costs of jobs 1, 2, 3 are 245, 12, and 9 respectively, which implies $g(231) = 245$.

To determine the best sequence, we could try the remaining four sequences, find $g(\cdot)$ for each of them, and pick the best. This is doable for a problem with 3 jobs because there are only 6 sequences to try. In general, for $n$ jobs there are $n!$ sequences to try, so this becomes computationally expensive very quickly. Can we do better using dynamic programming?

Observe that some job has to be scheduled last. This job finishes at time $P := \sum_{k=1}^n p_k$, so if job $j$ is scheduled last, then its cost is $f_j(P)$, regardless of how the remaining jobs are sequenced. Let $\sigma$ be any sequence in which $j$ appears as the last job. Then $\sigma = (\sigma', j)$, where $\sigma'$ is some ordering of the jobs $1, 2, \ldots, j-1, j+1, \ldots, n$. It should be clear that

$g(\sigma) = \max\{ g(\sigma'), f_j(P) \}.$

This is because the job with the largest cost in $\sigma$ is either some job other than $j$ (in which case its cost is $g(\sigma')$) or job $j$ itself (in which case its cost is $f_j(P)$). Now, let

$j^* \in \arg\min_j f_j(P).$

In other words, among all jobs $j$, let $j^*$ be a job that minimizes $f_j(P)$. Consider the schedule obtained by scheduling $j^*$ last; we are then left with a problem involving $n-1$ jobs, to which we apply the same rule recursively. We claim that this schedule is optimal.

Let $g^*(S)$ be the optimal cost of the scheduling problem when only the jobs in $S$ are to be scheduled. Then it is clear that:

$g^*(S) \ge g^*(S \setminus \{j\}) \quad \text{for any } j \in S, \qquad (1)$

$g^*(S) \ge \min_{j \in S} f_j\Big( \sum_{k \in S} p_k \Big). \qquad (2)$

The first inequality is true because if we have one job less to schedule, we can only do better in an optimal solution, as all costs are nondecreasing in the completion times; the second inequality is true because some job in $S$ must have completion time $\sum_{k \in S} p_k$, so its cost must be at least $\min_{j \in S} f_j(\sum_{k \in S} p_k)$.

We are now ready to prove the optimality of our scheduling rule. If the number of jobs is 1, the rule is trivially optimal. Suppose the rule is optimal whenever there are fewer than $n$ jobs, and consider an instance with $n$ jobs, $S = \{1, 2, \ldots, n\}$. Let $j^*$ be scheduled last, and let $\sigma'$ be the schedule computed by the rule on the remaining instance. By induction, $\sigma'$ is an optimal schedule for the problem in which all jobs are present except job $j^*$. Also,

$g(\sigma', j^*) = \max\{ g(\sigma'), f_{j^*}(P) \}.$

But each term on the right-hand side is at most the optimal cost: $g(\sigma') = g^*(S \setminus \{j^*\}) \le g^*(S)$ by (1), and $f_{j^*}(P) = \min_{j \in S} f_j(P) \le g^*(S)$ by (2). Hence $g(\sigma', j^*) \le g^*(S)$, so the schedule $(\sigma', j^*)$ is optimal. This completes the proof.
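The scheduling rule of Section 1.5 (repeatedly schedule last a remaining job that minimizes $f_j$ evaluated at the total remaining processing time) can be sketched as follows. This is a minimal illustrative implementation; the function name is our own choice, and the rule itself is a special case of what is often called Lawler's algorithm.

```python
def min_max_cost_schedule(p, f):
    """Sequence n jobs on one machine to minimize max_j f_j(C_j).

    p: list of processing times; f: list of cost functions (one per job,
    each nondecreasing in the completion time).  Returns (order, cost),
    where order lists 0-based job indices in processing order.
    """
    remaining = list(range(len(p)))
    total = sum(p)  # completion time of whichever job is scheduled last
    order = []
    cost = 0
    while remaining:
        # Among the remaining jobs, schedule last one minimizing f_j(total).
        j = min(remaining, key=lambda i: f[i](total))
        cost = max(cost, f[j](total))  # job j's completion time is `total`
        order.append(j)
        remaining.remove(j)
        total -= p[j]  # the remaining jobs all finish by this earlier epoch
    order.reverse()  # the sequence was built back to front
    return order, cost
```

On the example from the notes ($p = (1, 4, 2)$, $f_1(x) = 5x^2$, $f_2(x) = 3x$, $f_3(x) = x + 3$), this returns the order $123$ (as 0-based indices, `[0, 1, 2]`) with cost $15$, matching $g(123) = 15$.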