Com S 611, Spring Semester 2015
Advanced Topics on Distributed and Concurrent Algorithms

Lecture 5: Tuesday, January 27, 2015
Instructor: Soma Chaudhuri
Scribe: Nik Kinkel

1 Introduction

This lecture covers additional properties of Peterson's Algorithm, as well as a procedure that extends Peterson's Algorithm to an arbitrary number of processors. We also introduce the concept of configuration similarity, both with respect to a single processor and with respect to a set of processors, along with some properties of configuration similarity. Specifically, we prove:

1. Peterson's Algorithm satisfies the No Starvation property (Theorem 1).

2. A generalized Peterson's Algorithm satisfies the Mutual Exclusion property (Theorem 3).

3. A generalized Peterson's Algorithm satisfies the No Starvation property (Theorem 4).

4. Two configurations that are similar with respect to a processor p or a processor set P remain similar after applying a p-only or P-only schedule, respectively (Lemma 5).

2 Peterson's Algorithm

2.1 Review

Algorithm 1 below restates Peterson's Algorithm for side = 0.

Algorithm 1 Peterson's Algorithm, Side 0
1: procedure Peterson_0
2:   Entry: want_0 := 0
3:   wait until want_1 = 0 or priority = 0
4:   want_0 := 1
5:   if priority = 1 then
6:     if want_1 = 1 then
7:       goto 2
8:   else wait until want_1 = 0
9:   Critical Section
10:  Exit: priority := 1
11:  want_0 := 0
12:  Remainder
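The listing translates almost line for line into running code. Below is a minimal executable sketch, not from the lecture: it is written in Python on the assumption that CPython's global interpreter lock makes these shared reads and writes effectively sequentially consistent, which is what the algorithm requires. The names enter, exit_section, worker, and counter are ours, and both sides run the symmetric generalization of the side-0 listing.

import threading

# Shared registers from Algorithm 1. Under CPython's GIL, loads and stores of
# these values behave sequentially consistently, as the algorithm assumes.
want = [0, 0]
priority = 0

def enter(side: int) -> None:
    """Entry section of Peterson's Algorithm, generalized to either side."""
    other = 1 - side
    while True:
        want[side] = 0                                    # line 2
        while not (want[other] == 0 or priority == side):
            pass                                          # line 3: busy-wait
        want[side] = 1                                    # line 4
        if priority == other:                             # line 5
            if want[other] == 1:                          # line 6
                continue                                  # line 7: goto 2
        else:
            while want[other] != 0:                       # line 8: busy-wait
                pass
        break                                             # proceed to line 9

def exit_section(side: int) -> None:
    """Exit section (lines 10-11, generalized to either side)."""
    global priority
    priority = 1 - side
    want[side] = 0

counter = 0          # shared state touched only inside the Critical Section

def worker(side: int) -> None:
    global counter
    for _ in range(5_000):
        enter(side)
        counter += 1                                      # line 9: Critical Section
        exit_section(side)

threads = [threading.Thread(target=worker, args=(s,)) for s in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# If mutual exclusion held, no increment was lost.
assert counter == 10_000, counter
print("counter =", counter)

A passing run is consistent with the Mutual Exclusion and No Deadlock properties proved in Lecture 4, though of course it is no substitute for the proofs.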

2.2 The No Starvation Property

Theorem 1 Peterson's Algorithm provides the No Starvation property.

Proof: Consider any execution and two processors, P_0 and P_1. Assume, for the sake of contradiction and without loss of generality, that P_0 is starved. Then, starting at some time t, P_0 is stuck forever in the Entry section. There are two possibilities to consider:

Case 1: P_1 does not enter the Critical Section after some time t' > t. Then, starting at t', no processor enters the Critical Section, while P_0 is forever in the Entry section. Therefore, at all times after t' there is some processor in the Entry section but no processor in the Critical Section. This violates the No Deadlock property of Peterson's Algorithm (Lecture 4, Theorem 4), and thus is a contradiction.

Case 2: P_1 enters the Critical Section infinitely often after time t. Consider a time s > t at which P_1 sets priority := 0 in its Exit section (the side-1 analogue of line 10). Since P_0 does not enter the Critical Section after time t, P_0 must be stuck in one of the following ways:

A. P_0 loops in line 3

B. P_0 cycles through lines 2-7

C. P_0 loops in line 8

At all times s' > s we have priority = 0, since only P_0's Exit section sets priority := 1 and P_0 never exits. So P_0 must pass through both lines 3 and 5 and thus cannot be stuck in case A or case B. Therefore P_0 must loop in line 8, having set want_0 = 1. Consider the next time s'' > s at which P_1 enters the Entry section. P_1 sets want_1 := 0 and then loops in line 3 of its side-1 code, since want_0 = 1 and priority = 0. Therefore, as P_1 loops from time s'' on, P_0 must eventually take its next step: it reads want_1 = 0 in line 8 and enters the Critical Section, contradicting the original assumption that P_0 is starved.

Either case leads to a contradiction, and thus Peterson's Algorithm provides the No Starvation property.

2.3 A Bounded Mutual Exclusion Algorithm for n Processors

We now seek to generalize Peterson's Algorithm to an n-processor algorithm. We construct an n-processor algorithm by repeated application of Peterson's Algorithm in a tournament tree: a binary tree with n leaves, in which each internal node represents an instance of Peterson's Algorithm and each leaf represents a processor. See Figure 1.

Figure 1: A tournament tree for 8 processors: internal nodes N_1, ..., N_7, with processors P_0, ..., P_7 at the leaves.

Let n = 2^k, giving a complete binary tree with 2^{k+1} - 1 nodes and 2^k leaves, numbered in heap order: the root is node 1, and node v has children 2v and 2v + 1. Processor P_i, at leaf 2^k + i, executes Peterson's Algorithm at node 2^{k-1} + floor(i/2). We call this new procedure NODE.

Algorithm 2 NODE
1: procedure NODE(v: int, side: {0, 1})
2:   Entry: want^v_side := 0
3:   wait until want^v_{1-side} = 0 or priority^v = side
4:   want^v_side := 1
5:   if priority^v = 1 - side then
6:     if want^v_{1-side} = 1 then
7:       goto 2
8:   else wait until want^v_{1-side} = 0
9:   if v = 1 then
10:    Critical Section
11:  else NODE(floor(v/2), v mod 2)
12: Exit: want^v_side := 0
13:   priority^v := 1 - side
14: Remainder

Remarks: Informally, each processor (leaf node) runs Peterson's Algorithm against its sibling. Whoever wins advances up the tournament tree to its parent node, say node u, and runs Peterson's Algorithm again with the winner that has advanced to u's sibling. This procedure continues until a processor wins at the root node, at which point it can finally enter the Critical Section. When the winning processor exits the Critical Section, each waiting processor resumes advancing up the tree using the same procedure. Note that it is fine if the tournament tree is not complete (i.e., if we have an odd number of processors); in this case, the processor with no sibling simply wins its node immediately and advances.
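The recursion in Algorithm 2 also translates directly into code. The following Python sketch is ours, not the lecture's (the names node and processor_entry are assumptions): it keeps one (want_0, want_1, priority) triple per internal node and recurses toward the root, mirroring lines 2-13 of the listing.

K = 2                         # tree height; n = 2**K processors
N = 2 ** K
# Registers for internal nodes 1 .. N-1 (index 0 unused): 3(n - 1) registers total.
want = [[0, 0] for _ in range(N)]
priority = [0] * N

counter = 0                   # shared state touched only in the Critical Section

def node(v: int, side: int) -> None:
    """Algorithm 2: run Peterson's Algorithm at node v, then recurse toward the root."""
    global counter
    other = 1 - side
    while True:
        want[v][side] = 0                                     # line 2
        while not (want[v][other] == 0 or priority[v] == side):
            pass                                              # line 3: busy-wait
        want[v][side] = 1                                     # line 4
        if priority[v] == other:                              # line 5
            if want[v][other] == 1:                           # line 6
                continue                                      # line 7: goto 2
        else:
            while want[v][other] != 0:                        # line 8: busy-wait
                pass
        break
    if v == 1:                                                # line 9
        counter += 1                                          # line 10: Critical Section
    else:
        node(v // 2, v % 2)                                   # line 11: advance up the tree
    want[v][side] = 0                                         # line 12: Exit
    priority[v] = other                                       # line 13

def processor_entry(i: int) -> None:
    """P_i starts Peterson's Algorithm at node 2^(k-1) + floor(i/2) with side i mod 2."""
    node(N // 2 + i // 2, i % 2)

Note how the recursive call on line 11 places the Critical Section inside the innermost (root) invocation, so that unwinding the recursion releases the nodes from the root back down to the processor's starting node.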

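Continuing the sketch above (this fragment assumes the definitions just given, plus the hypothetical names worker and ROUNDS), a simple stress test runs all n processors concurrently and checks that no Critical Section increment is lost. A passing run is consistent with the Mutual Exclusion property proved as Theorem 3 below, though again no substitute for the proof.

import threading

ROUNDS = 500

def worker(i: int) -> None:
    for _ in range(ROUNDS):
        processor_entry(i)    # each call performs one Critical Section entry

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == N * ROUNDS, counter
print(f"{counter} mutually exclusive entries using {3 * (N - 1)} shared registers")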
The NODE algorithm requires 3(n - 1) shared variables, where n is the number of processors: each of the n - 1 internal nodes has its own want_0, want_1, and priority registers. NODE also has better time complexity than Lamport's Bakery Algorithm in the case of no contention. In Lamport's Bakery Algorithm the cost is O(n), where n is the number of processors, since the NUM and CHOOSE entries of every processor P_i must be scanned. The NODE algorithm is O(log n), since a processor runs one instance of Peterson's Algorithm at each of the log n nodes on the path from its leaf to the root.

Definition 1 A processor p is said to be in node v's processor set if it has taken step 9 of v's instance of NODE but has not yet taken step 12.

Lemma 2 For any node v of the tournament tree in the NODE algorithm, at most one processor in v's processor set is in v's Critical Section at any time t.

Proof: We prove Lemma 2 by induction on the height of node v.

Base Case: Consider a node v whose children are leaves. Then the only processors that access v are its two children, and clearly each has a different side value. So the algorithm run at v is exactly two-processor Peterson's Algorithm. Since Peterson's Algorithm satisfies the Mutual Exclusion property (Lecture 4, Theorem 3), at most one processor is in v's Critical Section at any given time.

Inductive Hypothesis: Assume Lemma 2 holds for all nodes of height k. That is, assume that at most one processor is in v's Critical Section for any node v of height k at any time t.

Inductive Step: Consider a node v at height k + 1 with nodes u_l and u_r as its left and right children, respectively. By our Inductive Hypothesis, at most one processor is in node u_l's Critical Section and at most one processor is in node u_r's Critical Section at any time t. Therefore, at most two processors run v's instance of Peterson's Algorithm at any given time, and they do so with different side values. Since Peterson's Algorithm satisfies the Mutual Exclusion property (Lecture 4, Theorem 3), at most one processor is ever in node v's Critical Section at any time.

Theorem 3 The n-processor NODE algorithm satisfies the Mutual Exclusion property.

Proof: We prove Theorem 3 by applying Lemma 2 to the root node of the tournament tree constructed in the NODE algorithm. Every processor that is in the Critical Section has taken step 9 but not step 12 of the root's instance of NODE, and so is in the root node's processor set. Therefore, by Lemma 2, at most one processor in the tree is in the Critical Section at any time t. Thus, the Mutual Exclusion property holds.

Theorem 4 The n-processor NODE algorithm satisfies the No Starvation property.

Proof: Suppose, for the sake of contradiction, that some processor P_i is starved. Let u be a node such that P_i enters the Critical Section of node u at time t but never enters the Critical Section of node v, the parent of u, after time t. Let u' be the other child of node v. By Lemma 2, there is at most one processor in the Critical Section of each of nodes u and u'.

Let P_j be the processor in the Critical Section of node u', if one is active. Since P_i and P_j have different side values, they run Peterson's Algorithm against each other at node v. Since node v provides the No Starvation property in the two-processor case by Theorem 1, processor P_i must, at some time, be able to enter the Critical Section of node v. This contradicts our original assumption that P_i is starved; therefore, the NODE algorithm satisfies the No Starvation property.

3 A Lower Bound on the Number of Read/Write Variables

Definition 2 Let C_1 and C_2 be two configurations, and let p be a processor. C_1 and C_2 are similar with respect to p if

1. p is in the same state in both C_1 and C_2, and

2. the values of all shared registers are the same in both C_1 and C_2.

Similarity between C_1 and C_2 with respect to processor p is denoted C_1 ~_p C_2. Informally, similarity means that p cannot tell the difference between C_1 and C_2.

Definition 3 Let C_1 and C_2 be two configurations, and let P be a set of processors. C_1 ~_P C_2 if, for all processors p_i in P, C_1 ~_{p_i} C_2.

Definition 4 Let σ be a schedule and p be a processor. σ is p-only if it contains only events by processor p.

Definition 5 Let σ be a schedule and P be a set of processors. σ is P-only if it contains only events by processors in P.

Definition 6 Let σ be a schedule and p be a processor. σ is p-free if it contains no events by processor p.

Definition 7 Let σ be a schedule and P be a set of processors. σ is P-free if it contains no events by any processor p in P.
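Definitions 4-7 are simple predicates over schedules, and it can help to see them stated executably. In the following sketch (the event encoding and function names are our own modeling choices, not the lecture's), a schedule is a sequence of events, each tagged with the id of the processor taking the step.

from typing import Iterable, Set, Tuple

Event = Tuple[str, str]        # (processor id, operation), e.g. ("p0", "read r1")

def is_p_only(sigma: Iterable[Event], p: str) -> bool:
    """Definition 4: sigma contains only events by processor p."""
    return all(pid == p for pid, _ in sigma)

def is_P_only(sigma: Iterable[Event], P: Set[str]) -> bool:
    """Definition 5: sigma contains only events by processors in P."""
    return all(pid in P for pid, _ in sigma)

def is_p_free(sigma: Iterable[Event], p: str) -> bool:
    """Definition 6: sigma contains no events by processor p."""
    return all(pid != p for pid, _ in sigma)

def is_P_free(sigma: Iterable[Event], P: Set[str]) -> bool:
    """Definition 7: sigma contains no events by any processor in P."""
    return all(pid not in P for pid, _ in sigma)

sigma = [("p0", "read r1"), ("p1", "write r1"), ("p0", "read r2")]
assert is_P_only(sigma, {"p0", "p1"}) and is_p_free(sigma, "p2")
assert not is_p_only(sigma, "p0")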

Lemma 5 Let C_1 and C_2 be configurations, and let P be a set of processors. If C_1 ~_P C_2 and σ is a P-only schedule, then σ(C_1) ~_P σ(C_2).

Proof: We prove Lemma 5 by induction on the length of the schedule σ.

Base Case: Let C_1 and C_2 be configurations and let P be a set of processors such that C_1 ~_P C_2. Let σ be a schedule with |σ| = 0. Then clearly σ(C_1) ~_P σ(C_2), since σ contains no processor events.

Inductive Hypothesis: Assume that σ'(C_1) ~_P σ'(C_2) for an arbitrary P-only schedule σ'.

Inductive Step: Let e be an event by some processor in P such that e is applicable to σ'(C_1) and σ'(C_2). Let p be the processor in P whose event is e, and let σ'' = σ'e. By our Inductive Hypothesis and the definition of similarity, σ'(C_1) and σ'(C_2) have the same shared register values and the same state for all processors in P. We prove that σ''(C_1) ~_P σ''(C_2). We must consider two cases: e is a READ operation, or e is a WRITE operation.

READ Case: p reads the same register in both σ'(C_1) and σ'(C_2). Since σ'(C_1) and σ'(C_2) have the same register values, p reads the same value in both executions, so p's state is the same in both σ''(C_1) and σ''(C_2). Additionally, a READ does not change any register, so the register values are the same in both σ''(C_1) and σ''(C_2).

WRITE Case: σ'(C_1) ~_P σ'(C_2) by our Inductive Hypothesis, so p writes the same value v_1 to the same register r in both σ'(C_1) and σ'(C_2). Since σ'(C_1) and σ'(C_2) have the same shared register values and p has the same state in both before v_1 is written to register r, and since p takes the same step and the algorithm is deterministic, σ''(C_1) and σ''(C_2) have the same shared register values and p has the same state in both after v_1 is written to r. Clearly, no other processor in P changes state, and no other shared register is modified.

Thus, if σ'(C_1) ~_P σ'(C_2), then σ''(C_1) ~_P σ''(C_2), where σ'' = σ'e and e is an event by a processor in P.

Remark: Informally, Lemma 5 says that given a set of processors P, a P-only schedule σ, and two configurations C_1 and C_2 that are similar for P, the configurations remain similar for P (and hence for every processor p in P) after any prefix of σ is applied, and indeed after the application of any number of P-only schedules.

Definition 8 A configuration C is quiescent if all processors are in the Remainder section.
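To make the similarity machinery concrete, here is a small self-contained toy model in Python. It is entirely our construction (the lecture treats configurations and transition functions abstractly; the specific state-update rules below are arbitrary deterministic choices). A configuration carries per-processor local states and shared registers, events are deterministic READ/WRITE steps, and the loop checks that similarity for P is preserved after every prefix of a P-only schedule, as Lemma 5 guarantees.

from dataclasses import dataclass
from typing import Dict, Set, Tuple

@dataclass
class Config:
    states: Dict[str, int]            # local state per processor
    registers: Dict[str, int]         # shared register values

def step(c: Config, event: Tuple[str, str, str]) -> Config:
    """Apply one event (pid, op, reg), where op is 'read' or 'write'.
    Deterministic: a read folds the value read into p's local state; a write
    stores a value that depends only on p's local state."""
    pid, op, reg = event
    states, regs = dict(c.states), dict(c.registers)
    if op == "read":
        states[pid] = states[pid] * 31 + regs[reg]
    else:  # write
        regs[reg] = states[pid] % 97
    return Config(states, regs)

def similar(c1: Config, c2: Config, P: Set[str]) -> bool:
    """C1 ~_P C2: same state for every p in P, and same shared registers."""
    return (all(c1.states[p] == c2.states[p] for p in P)
            and c1.registers == c2.registers)

P = {"p0", "p1"}
c1 = Config({"p0": 1, "p1": 2, "p2": 7}, {"r": 0})
c2 = Config({"p0": 1, "p1": 2, "p2": 99}, {"r": 0})   # differs only at p2, not in P
assert similar(c1, c2, P)

sigma = [("p0", "write", "r"), ("p1", "read", "r"), ("p0", "read", "r")]  # P-only
for e in sigma:
    c1, c2 = step(c1, e), step(c2, e)
    assert similar(c1, c2, P)     # similarity is preserved after every prefix
print("Lemma 5 holds on this toy run")

The run mirrors the proof's two cases: each read returns the same value in both configurations because the registers agree, and each write stores the same value because p's state agrees, so the configurations can never diverge on anything P observes.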