Harvard School of Engineering and Applied Sciences
CS 152: Programming Languages

Lecture 3
Tuesday, February 2, 2016

1 Inductive proofs, continued

Last lecture we considered inductively defined sets, and saw how the principle of mathematical induction (i.e., induction on the natural numbers) could be generalized to induction on other inductively defined sets.

1.1 Inductive reasoning principle

The inductive reasoning principle for natural numbers can be stated as follows.

    P(0) holds.
    For all natural numbers n, if P(n) holds then P(n + 1) holds.
    Then: for all natural numbers k, P(k) holds.

This inductive reasoning principle gives us a technique to prove that a property holds for all natural numbers, which is an infinite set.

Why is the inductive reasoning principle for natural numbers sound? That is, why does it work? One intuition is that for any natural number k you choose, k is either zero, or the result of applying the successor operation a finite number of times to zero. That is, we have a finite proof tree that k is a natural number, using the inference rules given in Example 2 of Lecture 2. The leaf of this proof tree is that 0 ∈ N, and we know that P(0) holds. Moreover, since we have "for all natural numbers n, if P(n) holds then P(n + 1) holds", and we have P(0), we also have P(1). Since we have P(1), we also have P(2), and so on. That is, for each node of the proof tree, we show that the property holds of that node. Eventually we reach the root of the tree, that k ∈ N, and we have P(k).

For every inductively defined set, we have a corresponding inductive reasoning principle. The template for this inductive reasoning principle, for an inductively defined set A, is as follows.

    Base cases: for each axiom

        a ∈ A

    show that P(a) holds.

    Inductive cases: for each inference rule

        a_1 ∈ A   ...   a_n ∈ A
        ------------------------
                 a ∈ A

    show that if P(a_1) and ... and P(a_n) hold, then P(a) holds.

    Then we may conclude: for all a ∈ A, P(a) holds.
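To make the correspondence between an inductively defined set and its reasoning principle concrete, here is a minimal OCaml sketch (the names are illustrative and not part of these notes): the natural numbers as an inductive datatype, together with a recursion scheme whose two arguments correspond to the base case and the inductive case of the principle above.

    (* The two constructors mirror the inference rules for N:
       the axiom 0 ∈ N and the rule "if n ∈ N then succ(n) ∈ N". *)
    type nat = Zero | Succ of nat

    (* Structural recursion over nat: [base] plays the role of the proof
       (or value) for Zero, and [step] turns the result for n into the
       result for Succ n, just as the inductive case turns P(n) into
       P(n + 1). *)
    let rec nat_ind (base : 'a) (step : 'a -> 'a) (k : nat) : 'a =
      match k with
      | Zero -> base
      | Succ n -> step (nat_ind base step n)

    (* Example: doubling a natural number by recursion on its structure. *)
    let double (k : nat) : nat =
      nat_ind Zero (fun d -> Succ (Succ d)) k

Structural recursion and structural induction are two sides of the same idea: nat_ind uses the result for n to build the result for Succ n, exactly as the inductive case of the reasoning principle uses P(n) to establish P(n + 1).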

The intuition for why this inductive reasoning principle works is the same as the intuition for why mathematical induction works, i.e., for why the inductive reasoning principle for natural numbers works.

Let's consider a specific inductively defined set, and consider the inductive reasoning principle for that set: the set of arithmetic expressions AExp, inductively defined by the grammar

    e ::= x | n | e1 + e2 | e1 × e2 | x := e1 ; e2

Here is the inductive reasoning principle for the set AExp.

    For all variables x, P(x) holds.
    For all integers n, P(n) holds.
    For all e1 ∈ AExp and e2 ∈ AExp, if P(e1) and P(e2) hold, then P(e1 + e2) holds.
    For all e1 ∈ AExp and e2 ∈ AExp, if P(e1) and P(e2) hold, then P(e1 × e2) holds.
    For all variables x and e1 ∈ AExp and e2 ∈ AExp, if P(e1) and P(e2) hold, then P(x := e1 ; e2) holds.
    Then: for all e ∈ AExp, P(e) holds.

Here is the inductive reasoning principle for the small-step relation on arithmetic expressions, i.e., for the set →.

    VAR: For all variables x, stores σ, and integers n such that σ(x) = n, P(⟨x, σ⟩ → ⟨n, σ⟩) holds.

    ADD: For all integers n, m, p such that p = n + m, and stores σ, P(⟨n + m, σ⟩ → ⟨p, σ⟩) holds.

    MUL: For all integers n, m, p such that p = n × m, and stores σ, P(⟨n × m, σ⟩ → ⟨p, σ⟩) holds.

    ASG: For all variables x, integers n, expressions e ∈ AExp, and stores σ, P(⟨x := n; e, σ⟩ → ⟨e, σ[x ↦ n]⟩) holds.

    LADD: For all expressions e1, e2, e1' ∈ AExp and stores σ and σ', if P(⟨e1, σ⟩ → ⟨e1', σ'⟩) holds, then P(⟨e1 + e2, σ⟩ → ⟨e1' + e2, σ'⟩) holds.

    RADD: For all integers n, expressions e2, e2' ∈ AExp and stores σ and σ', if P(⟨e2, σ⟩ → ⟨e2', σ'⟩) holds, then P(⟨n + e2, σ⟩ → ⟨n + e2', σ'⟩) holds.

    LMUL: For all expressions e1, e2, e1' ∈ AExp and stores σ and σ', if P(⟨e1, σ⟩ → ⟨e1', σ'⟩) holds, then P(⟨e1 × e2, σ⟩ → ⟨e1' × e2, σ'⟩) holds.

    RMUL: For all integers n, expressions e2, e2' ∈ AExp and stores σ and σ', if P(⟨e2, σ⟩ → ⟨e2', σ'⟩) holds, then P(⟨n × e2, σ⟩ → ⟨n × e2', σ'⟩) holds.

    ASG1: For all variables x, expressions e1, e2, e1' ∈ AExp and stores σ and σ', if P(⟨e1, σ⟩ → ⟨e1', σ'⟩) holds, then P(⟨x := e1 ; e2, σ⟩ → ⟨x := e1' ; e2, σ'⟩) holds.

    Then: for all ⟨e, σ⟩ → ⟨e', σ'⟩, P(⟨e, σ⟩ → ⟨e', σ'⟩) holds.

Note that there is one case for each inference rule: 4 axioms (VAR, ADD, MUL, and ASG) and 5 inductive rules (LADD, RADD, LMUL, RMUL, ASG1).

The inductive reasoning principles give us a technique for showing that a property holds of every element in an inductively defined set. Let's consider some examples. Make sure you understand how the appropriate inductive reasoning principle is being used in each of these examples.
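As a concrete companion to these rules, here is a minimal OCaml sketch (the type and function names are ours, not part of the notes) of the expression language together with a one-step function implementing the small-step relation: step e s returns Some (e', s') when ⟨e, s⟩ → ⟨e', s'⟩, and None when e is an integer constant. Stores are modeled as association lists; as with the VAR rule, looking up an unbound variable is simply undefined (List.assoc raises an exception).

    (* Abstract syntax for e ::= x | n | e1 + e2 | e1 × e2 | x := e1 ; e2 *)
    type exp =
      | Var of string                 (* x            *)
      | Int of int                    (* n            *)
      | Add of exp * exp              (* e1 + e2      *)
      | Mul of exp * exp              (* e1 × e2      *)
      | Asg of string * exp * exp     (* x := e1 ; e2 *)

    type store = (string * int) list

    let update (s : store) (x : string) (n : int) : store = (x, n) :: s

    (* One small step; each branch corresponds to one inference rule. *)
    let rec step (e : exp) (s : store) : (exp * store) option =
      match e with
      | Int _ -> None                                       (* integers are final *)
      | Var x -> Some (Int (List.assoc x s), s)             (* VAR  *)
      | Add (Int n, Int m) -> Some (Int (n + m), s)         (* ADD  *)
      | Add (Int n, e2) ->                                  (* RADD *)
          Option.map (fun (e2', s') -> (Add (Int n, e2'), s')) (step e2 s)
      | Add (e1, e2) ->                                     (* LADD *)
          Option.map (fun (e1', s') -> (Add (e1', e2), s')) (step e1 s)
      | Mul (Int n, Int m) -> Some (Int (n * m), s)         (* MUL  *)
      | Mul (Int n, e2) ->                                  (* RMUL *)
          Option.map (fun (e2', s') -> (Mul (Int n, e2'), s')) (step e2 s)
      | Mul (e1, e2) ->                                     (* LMUL *)
          Option.map (fun (e1', s') -> (Mul (e1', e2), s')) (step e1 s)
      | Asg (x, Int n, e2) -> Some (e2, update s x n)       (* ASG  *)
      | Asg (x, e1, e2) ->                                  (* ASG1 *)
          Option.map (fun (e1', s') -> (Asg (x, e1', e2), s')) (step e1 s)

Because OCaml tries the branches in order, the RADD and ASG1 branches only fire when the earlier, more specific branches do not, which matches the side conditions of the rules. The exhaustiveness of the match mirrors the progress property proved in the next section: every configuration whose expression is not an integer constant can take a step.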

1.2 Example: Proving progress

Let's consider the progress property defined above, and repeated here:

Progress: For each store σ and expression e that is not an integer, there exists a possible transition for ⟨e, σ⟩:

    ∀e ∈ Exp. ∀σ ∈ Store. either e ∈ Int or ∃e', σ'. ⟨e, σ⟩ → ⟨e', σ'⟩

Let's rephrase this property as: for all expressions e, P(e) holds, where:

    P(e) = ∀σ. (e ∈ Int) ∨ (∃e', σ'. ⟨e, σ⟩ → ⟨e', σ'⟩)

The idea is to build a proof that follows the inductive structure in the grammar of expressions: e ::= x | n | e1 + e2 | e1 × e2 | x := e1 ; e2. This is called structural induction on the expressions e. We must examine each case in the grammar and show that P(e) holds for that case. Since the grammar productions e = e1 + e2, e = e1 × e2, and e = x := e1 ; e2 are inductive definitions of expressions, they are inductive steps in the proof; the other two cases, e = x and e = n, are the basis of the induction. The proof goes as follows.

We will show by structural induction that for all expressions e we have

    P(e) = ∀σ. (e ∈ Int) ∨ (∃e', σ'. ⟨e, σ⟩ → ⟨e', σ'⟩).

Consider the possible cases for e.

Case e = x. By the VAR axiom, we can evaluate ⟨x, σ⟩ in any store: ⟨x, σ⟩ → ⟨n, σ⟩, where n = σ(x). So e' = n is a witness that there exists e' such that ⟨x, σ⟩ → ⟨e', σ⟩, and P(x) holds.

Case e = n. Then e ∈ Int, so P(n) trivially holds.

Case e = e1 + e2. This is an inductive step. The inductive hypothesis is that P holds for the subexpressions e1 and e2. We need to show that P holds for e. In other words, we want to show that P(e1) and P(e2) imply P(e). Let's expand these properties. We know that the following hold:

    P(e1) = ∀σ. (e1 ∈ Int) ∨ (∃e', σ'. ⟨e1, σ⟩ → ⟨e', σ'⟩)
    P(e2) = ∀σ. (e2 ∈ Int) ∨ (∃e', σ'. ⟨e2, σ⟩ → ⟨e', σ'⟩)

and we want to show:

    P(e) = ∀σ. (e ∈ Int) ∨ (∃e', σ'. ⟨e, σ⟩ → ⟨e', σ'⟩)

We must inspect several subcases.

First, if both e1 and e2 are integer constants, say e1 = n1 and e2 = n2, then by rule ADD we know that the transition ⟨n1 + n2, σ⟩ → ⟨n, σ⟩ is valid, where n is the sum of n1 and n2. Hence P(e) = P(n1 + n2) holds (with witness e' = n).

Second, if e1 is not an integer constant, then by the inductive hypothesis P(e1) we know that ⟨e1, σ⟩ → ⟨e', σ'⟩ for some e' and σ'. We can use rule LADD to conclude ⟨e1 + e2, σ⟩ → ⟨e' + e2, σ'⟩, so P(e) = P(e1 + e2) holds.

Third, if e1 is an integer constant, say e1 = n1, but e2 is not, then by the inductive hypothesis P(e2) we know that ⟨e2, σ⟩ → ⟨e', σ'⟩ for some e' and σ'. We can use rule RADD to conclude ⟨n1 + e2, σ⟩ → ⟨n1 + e', σ'⟩, so P(e) = P(n1 + e2) holds.

Case e = e1 × e2 and case e = x := e1 ; e2. These are also inductive cases, and their proofs are similar to the previous case. [Note that if you were writing this proof out for a homework, you should write these cases out in full.]

1.3 A recipe for inductive proofs

In this class, you will be asked to write inductive proofs. Until you are used to doing them, inductive proofs can be difficult. Here is a recipe that you should follow when writing inductive proofs. Note that this recipe was followed above.

1. State what you are inducting over. In the example above, we are doing structural induction on the expressions e.

2. State the property P that you are proving by induction. (Sometimes, as in the proof above, the property P will be essentially identical to the theorem/lemma/property that you are proving; other times the property we prove by induction will need to be stronger than the theorem/lemma/property you are proving in order to get the different cases to go through.)

3. Make sure you know the inductive reasoning principle for the set you are inducting on.

4. Go through each case. For each case, don't be afraid to be verbose, spelling out explicitly how the meta-variables in an inference rule are instantiated in this case.

1.4 Example: the store changes incrementally

Let's see another example of an inductive proof, this time doing an induction on the derivation of the small-step operational semantics relation. The property we will prove is that for all expressions e and stores σ, if ⟨e, σ⟩ → ⟨e', σ'⟩ then either σ' = σ or there is some variable x and integer n such that σ' = σ[x ↦ n]. That is, in one small step, either the new store is identical to the old store, or it is the result of updating a single program variable.

Theorem 1. For all expressions e and stores σ, if ⟨e, σ⟩ → ⟨e', σ'⟩ then either σ' = σ or there is some variable x and integer n such that σ' = σ[x ↦ n].

Proof of Theorem 1. We proceed by induction on the derivation of ⟨e, σ⟩ → ⟨e', σ'⟩. Suppose we have e, σ, e', and σ' such that ⟨e, σ⟩ → ⟨e', σ'⟩. The property P that we will prove of e, σ, e', and σ', which we will write as P(⟨e, σ⟩ → ⟨e', σ'⟩), is that either σ' = σ or there is some variable x and integer n such that σ' = σ[x ↦ n]:

    P(⟨e, σ⟩ → ⟨e', σ'⟩)  ≜  σ' = σ ∨ (∃x ∈ Var, n ∈ Int. σ' = σ[x ↦ n]).

Consider the cases for the derivation of ⟨e, σ⟩ → ⟨e', σ'⟩.

Case ADD. This is an axiom. Here, e ≡ n + m and e' = p, where p is the sum of m and n, and σ' = σ. The result holds immediately.

Case LADD. This is an inductive case. Here, e ≡ e1 + e2 and e' ≡ e1' + e2 and ⟨e1, σ⟩ → ⟨e1', σ'⟩. By the inductive hypothesis, applied to ⟨e1, σ⟩ → ⟨e1', σ'⟩, we have that either σ' = σ or there is some variable x and integer n such that σ' = σ[x ↦ n], as required.

Case ASG. This is an axiom. Here e ≡ x := n; e2 and e' ≡ e2 and σ' = σ[x ↦ n]. The result holds immediately.

We leave the other cases (VAR, RADD, LMUL, RMUL, MUL, and ASG1) as exercises for the reader. Seriously, try them. Make sure you can do them. Go on, you're reading these notes, you may as well try the exercise.
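Induction on derivations can also be made concrete in code. Here is a minimal OCaml sketch (illustrative only, not part of the notes) in which derivations of ⟨e, σ⟩ → ⟨e', σ'⟩ are represented as a datatype, so that "induction on the derivation" becomes ordinary structural recursion over that datatype; the constructors carry only the data needed for Theorem 1.

    (* One constructor per inference rule; each inductive rule carries the
       derivation of its premise. For brevity the constructors record only
       what Theorem 1 needs (the updated variable and value, for ASG). *)
    type derivation =
      | DVar                          (* ⟨x, σ⟩ → ⟨n, σ⟩                   *)
      | DAdd                          (* ⟨n + m, σ⟩ → ⟨p, σ⟩               *)
      | DMul                          (* ⟨n × m, σ⟩ → ⟨p, σ⟩               *)
      | DAsg of string * int          (* ⟨x := n; e2, σ⟩ → ⟨e2, σ[x ↦ n]⟩  *)
      | DLAdd of derivation           (* LADD, premise for e1              *)
      | DRAdd of derivation           (* RADD, premise for e2              *)
      | DLMul of derivation           (* LMUL, premise for e1              *)
      | DRMul of derivation           (* RMUL, premise for e2              *)
      | DAsg1 of derivation           (* ASG1, premise for e1              *)

    (* The conclusion of Theorem 1: after one step the store is either
       unchanged or updated at a single variable. *)
    type delta = Unchanged | Updated of string * int

    let rec store_delta (d : derivation) : delta =
      match d with
      | DVar | DAdd | DMul -> Unchanged            (* axioms with σ' = σ       *)
      | DAsg (x, n) -> Updated (x, n)              (* axiom ASG: σ' = σ[x ↦ n] *)
      | DLAdd d' | DRAdd d' | DLMul d' | DRMul d' | DAsg1 d' ->
          store_delta d'                           (* the inductive hypothesis *)

The three groups of patterns in store_delta correspond to the three kinds of cases in the proof: axioms that leave the store unchanged, the ASG axiom that updates exactly one variable, and the inductive rules, where the result follows from the inductive hypothesis applied to the subderivation.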

2 Large-step semantics

So far we have defined the small-step evaluation relation → ⊆ Config × Config for our simple language of arithmetic expressions, and used its transitive and reflexive closure →* to describe the execution of multiple steps of evaluation. In particular, if ⟨e, σ⟩ is some start configuration, and ⟨n, σ'⟩ is a final configuration, the evaluation ⟨e, σ⟩ →* ⟨n, σ'⟩ shows that by executing expression e starting with the store σ, we get the result n and the final store σ'.

Large-step semantics is an alternative way to specify the operational semantics of a language. Large-step semantics directly gives the final result. We'll use the same configurations as before, but define a large-step evaluation relation

    ⇓ ⊆ Config × FinalConfig

where Config = Exp × Store and FinalConfig = Int × Store ⊆ Config. We write ⟨e, σ⟩ ⇓ ⟨n, σ'⟩ to mean that (⟨e, σ⟩, ⟨n, σ'⟩) ∈ ⇓. In other words, configuration ⟨e, σ⟩ evaluates in one big step directly to final configuration ⟨n, σ'⟩.

In general, a big-step semantics takes a configuration to an answer. For our language of arithmetic expressions, answers are a subset of configurations, but this is not always true in general.

The large-step semantics boils down to defining the relation ⇓. We use inference rules to inductively define the relation, similar to how we specified the small-step operational semantics.

    INT-LRG
    ----------------
    ⟨n, σ⟩ ⇓ ⟨n, σ⟩

    VAR-LRG
    ----------------   where σ(x) = n
    ⟨x, σ⟩ ⇓ ⟨n, σ⟩

    ADD-LRG
    ⟨e1, σ⟩ ⇓ ⟨n1, σ'⟩    ⟨e2, σ'⟩ ⇓ ⟨n2, σ''⟩
    --------------------------------------------   where n is the sum of n1 and n2
    ⟨e1 + e2, σ⟩ ⇓ ⟨n, σ''⟩

    MUL-LRG
    ⟨e1, σ⟩ ⇓ ⟨n1, σ'⟩    ⟨e2, σ'⟩ ⇓ ⟨n2, σ''⟩
    --------------------------------------------   where n is the product of n1 and n2
    ⟨e1 × e2, σ⟩ ⇓ ⟨n, σ''⟩

    ASG-LRG
    ⟨e1, σ⟩ ⇓ ⟨n1, σ'⟩    ⟨e2, σ'[x ↦ n1]⟩ ⇓ ⟨n2, σ''⟩
    -----------------------------------------------------
    ⟨x := e1 ; e2, σ⟩ ⇓ ⟨n2, σ''⟩
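Read as a recursive function, the large-step rules are essentially an evaluator. Here is a minimal OCaml sketch (illustrative, not part of the notes; the types repeat those of the earlier small-step sketch so the block stands alone) in which eval e s returns the final configuration ⟨n, s'⟩ with ⟨e, s⟩ ⇓ ⟨n, s'⟩.

    type exp =
      | Var of string                 (* x            *)
      | Int of int                    (* n            *)
      | Add of exp * exp              (* e1 + e2      *)
      | Mul of exp * exp              (* e1 × e2      *)
      | Asg of string * exp * exp     (* x := e1 ; e2 *)

    type store = (string * int) list

    (* One branch per large-step inference rule. *)
    let rec eval (e : exp) (s : store) : int * store =
      match e with
      | Int n -> (n, s)                             (* INT-LRG *)
      | Var x -> (List.assoc x s, s)                (* VAR-LRG *)
      | Add (e1, e2) ->                             (* ADD-LRG *)
          let (n1, s')  = eval e1 s  in
          let (n2, s'') = eval e2 s' in
          (n1 + n2, s'')
      | Mul (e1, e2) ->                             (* MUL-LRG *)
          let (n1, s')  = eval e1 s  in
          let (n2, s'') = eval e2 s' in
          (n1 * n2, s'')
      | Asg (x, e1, e2) ->                          (* ASG-LRG *)
          let (n1, s') = eval e1 s in
          eval e2 ((x, n1) :: s')

    (* For example, evaluating  x := 2 ; x + 3  in the empty store yields
       (5, [("x", 2)]). *)
    let example = eval (Asg ("x", Int 2, Add (Var "x", Int 3))) []

Note how the store produced by evaluating e1 is threaded into the evaluation of e2, exactly as σ' is threaded through the premises of ADD-LRG, MUL-LRG, and ASG-LRG.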