Abstract stack machines for LL and LR parsing

1 Abstract stack machines for LL and LR parsing Hayo Thielecke August 13, 2015

2 Contents Introduction Background and preliminaries Parsing machines LL machine LL(1) machine LR machine

3 Parsing and (non-)deterministic stack machines We decompose the parsing problem into two parts: What the parser does: a stack machine, possibly nondeterministic. The what is very simple and elegant and has not changed in 50 years. How the parser knows which step to take, making the machine deterministic. The how can be very complex, e.g. LALR(1) item or LL(*) constructions; there is still ongoing research, even controversy. There are tools for computing the how, e.g. yacc and ANTLR. You need to understand some of the theory to really use such tools, e.g. what does it mean when yacc complains about a reduce/reduce conflict?

4 Exercise Consider the language of matching round and square brackets. For example, these strings are in the language: [()] and [()]()()([]) but this one is not: [(]) How would you write a program that recognizes this language, so that a string is accepted if and only if all the brackets match? It is not terribly hard. But are you sure your solution is correct? You will see how the LL and LR machines solve this problem, and are correct by construction.

5 Parsing stack and function call stack A useful analogy: a grammar rule A → B C is like a function definition void A() { B(); C(); } Function calls also use a stack. ANTLR uses the function call stack as its parsing stack; yacc maintains its own parsing stack.

6 Grammars: formal definition A context-free grammar consists of: some terminal symbols a, b, ..., +, ), ...; some non-terminal symbols A, B, S, ...; a distinguished non-terminal start symbol S; some rules of the form A → X1 ... Xn where n ≥ 0, A is a non-terminal, and the Xi are symbols.

7 Notation: Greek letters Mathematicians and computer scientists are inordinately fond of Greek letters. α alpha, β beta, γ gamma, ε epsilon, σ sigma.

8 Notational conventions for grammars We will use Greek letters α, β, γ, σ, ..., to stand for strings of symbols that may contain both terminals and non-terminals. In particular, ε is used for the empty string (of length 0). We will write A, B, ... for non-terminals. We will write S for the start symbol. Terminal symbols are usually written as lower case letters a, b, c, ... These conventions are handy once you get used to them and are found in most books, e.g. the Dragon Book.

9 Derivations If A → α is a rule, we can replace A by α for any strings β and γ on the left and right: β A γ ⇒ β α γ This is one derivation step. A string w consisting only of terminal symbols is generated by the grammar if there is a sequence of derivation steps leading to it from the start symbol S: S ⇒* w

10 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D

11 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D ⇒ [ D ] D

12 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D ⇒ [ D ] D ⇒ [ ] D

13 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D ⇒ [ D ] D ⇒ [ ] D ⇒ [ ] [ D ] D

14 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D ⇒ [ D ] D ⇒ [ ] D ⇒ [ ] [ D ] D ⇒ [ ] [ ] D

15 An example derivation Consider this grammar: D → [ D ] D (1) D → ( D ) D (2) D → ε (3) There is a unique leftmost derivation for each string in the language. For example, we derive [ ] [ ] as follows: D ⇒ [ D ] D ⇒ [ ] D ⇒ [ ] [ D ] D ⇒ [ ] [ ] D ⇒ [ ] [ ]

16 Lexer and parser The raw input is processed by the lexer before the parser sees it. For instance, a keyword like while or an identifier like count counts as a single symbol for the purpose of parsing. Lexers can be automagically generated (just like parsers by parser generators). Example: lex turns regular expressions into a deterministic finite automaton. We'll skip this phase of the compiler. Parsing is much harder and has more interesting ideas.

17 Parsing stack machines The states of the machines are of the form ⟨σ, w⟩ where σ is the stack, a string of symbols which may include non-terminals, and w is the remaining input, a string of input symbols; no non-terminal symbols may appear in the input. Transitions or steps are of the form ⟨σ1, w1⟩ → ⟨σ2, w2⟩, taking the old state ⟨σ1, w1⟩ to the new state ⟨σ2, w2⟩: pushing or popping the stack changes σ1 to σ2; consuming input changes w1 to w2.

18 LL vs LR idea There are two main classes of parsers: LL and LR. Both use a parsing stack, but in different ways. LL: the stack contains a prediction of what the parser expects to see in the input. LR: the stack contains a reduction of what the parser has already seen in the input. Which is more powerful, LL or LR?

19 LL vs LR idea There are two main classes of parsers: LL and LR. Both use a parsing stack, but in different ways. LL: the stack contains a prediction of what the parser expects to see in the input. LR: the stack contains a reduction of what the parser has already seen in the input. Which is more powerful, LL or LR? LL: Never make predictions, especially about the future. LR: Benefit of hindsight. Theoretically, LR is much more powerful than LL. But LL is much easier to understand.

20 Abstract and less abstract machines You could easily implement these parsing stack machines when they are deterministic. In OCaml, Haskell, Agda: state = two lists of symbols; transitions by pattern matching. In C: state = stack pointer + input pointer; yacc does this, plus an LALR(1) automaton.

21 Deterministic and nondeterministic machines The machine is deterministic if for every state ⟨σ1, w1⟩, there is at most one state ⟨σ2, w2⟩ such that ⟨σ1, w1⟩ → ⟨σ2, w2⟩. In theory, one uses non-deterministic parsers (see PDAs in Models of Computation). In compilers, we want deterministic parsers for efficiency (linear time). Some real parsers (ANTLR) tolerate some non-determinism and so do some backtracking.

22 Parser generators and the LL and LR machines The LL and LR machines: encapsulate the main ideas (stack = prediction vs reduction); can be used for abstract reasoning, like partial correctness; cannot be used off the shelf, since they are nondeterministic. A parser generator: computes information that makes these machines deterministic; does not work on all grammars, since some grammars are not suitable for some (or all) deterministic parsing techniques; produces errors such as reduce/reduce conflicts, so we may redesign our grammar to make the parser generator work.

23 Examples of parser generators and their parsing principles ANTLR: LL(k) for any k, also called LL(*). Menhir for OCaml: LR(1). yacc/bison, ocamlyacc: LALR(1). The numbers refer to the amount of lookahead. LALR(1) is a version of LR(1) that uses less main memory, which was useful in the 1970s. Now, not so much.

24 LL parsing stack machine Assume a fixed context-free grammar. We construct the LL machine for that grammar. The top of the stack is on the left.
⟨A σ, w⟩ → ⟨α σ, w⟩ if there is a rule A → α
⟨a σ, a w⟩ →match ⟨σ, w⟩
⟨S, w⟩ is the initial state for input w
⟨ε, ε⟩ is the accepting state

25 Accepting a given input in the LL machine Definition: An input string w is accepted if and only if there is a sequence of machine steps leading to the accepting state: ⟨S, w⟩ →* ⟨ε, ε⟩ Theorem: an input string is accepted if and only if it can be derived by the grammar. More precisely: LL machine run = leftmost derivation in the grammar. Stretch exercise: prove this in Agda.

26 LL example Consider this grammar S → L b L → a L L → ε Show how the LL machine for this grammar can accept the input a a a b.

27 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩

28 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩

29 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩

30 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩ →match ⟨L b, a b⟩

31 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩ →match ⟨L b, a b⟩ → ⟨a L b, a b⟩

32 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩ →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ →match ⟨L b, b⟩

33 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩ →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ →match ⟨L b, b⟩ → ⟨b, b⟩

34 LL machine run example S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨a L b, a a b⟩ →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ →match ⟨L b, b⟩ → ⟨b, b⟩ →match ⟨ε, ε⟩


36 LL machine run example: what should not happen S → L b, L → a L, L → ε ⟨S, a a b⟩

37 LL machine run example: what should not happen S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩

38 LL machine run example: what should not happen S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨b, a a b⟩

39 LL machine run example: what should not happen S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ → ⟨b, a a b⟩ When it makes a bad nondeterministic choice, the LL machine gets stuck.

40 Making the LL machine deterministic Idea: we use one symbol of lookahead to guide the moves. This gives the LL(1) machine. Formally: the FIRST and FOLLOW construction. Can be done by hand, though it is tedious. The construction does not work for all grammars! Real-world: ANTLR does a more powerful version, LL(k) for any k.

41 FIRST and FOLLOW We define FIRST, FOLLOW and nullable: A terminal symbol b is in FIRST(α) if there exists a β such that α ⇒* b β, that is, b is the first symbol in something derivable from α. A terminal symbol b is in FOLLOW(X) if there exist α and β such that S ⇒* α X b β, that is, b follows X in some derivation. X is nullable if X ⇒* ε, that is, we can derive the empty string from it.

42 FIRST and FOLLOW examples Consider S → L b L → a L L → ε Then a ∈ FIRST(L), b ∈ FOLLOW(L), and L is nullable.

43 LL(1) machine This is a predictive parser with 1 symbol of lookahead.
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
⟨a σ, a w⟩ →match ⟨σ, w⟩
⟨S, w⟩ is the initial state for input w
⟨ε, ε⟩ is the accepting state

44 Parsing errors in the LL(1) machine Suppose the LL(1) machine reaches a state of the form ⟨a σ, b w⟩ where a ≠ b. Then the machine can report an error, like expecting a but found b in the input instead. Similarly, if it reaches a state of the form ⟨ε, w⟩ where w ≠ ε, the machine can report unexpected input w at the end.

45 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩

46 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L)

47 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L)

48 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L) →match ⟨L b, a b⟩

49 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L) →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ as a ∈ FIRST(L)

50 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L) →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ as a ∈ FIRST(L) →match ⟨L b, b⟩

51 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L) →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ as a ∈ FIRST(L) →match ⟨L b, b⟩ → ⟨b, b⟩ as b ∈ FOLLOW(L)

52 LL(1) machine run example Just like the LL machine, but now deterministic S → L b, L → a L, L → ε ⟨S, a a b⟩ → ⟨L b, a a b⟩ as a ∈ FIRST(L) → ⟨a L b, a a b⟩ as a ∈ FIRST(L) →match ⟨L b, a b⟩ → ⟨a L b, a b⟩ as a ∈ FIRST(L) →match ⟨L b, b⟩ → ⟨b, b⟩ as b ∈ FOLLOW(L) →match ⟨ε, ε⟩


54 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)

55 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
For some grammars, there may be:

56 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
For some grammars, there may be: FIRST/FIRST conflicts

57 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
For some grammars, there may be: FIRST/FIRST conflicts: A → α1 and A → α2 with FIRST(α1) ∩ FIRST(α2) ≠ ∅

58 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
For some grammars, there may be: FIRST/FIRST conflicts: A → α1 and A → α2 with FIRST(α1) ∩ FIRST(α2) ≠ ∅; FIRST/FOLLOW conflicts

59 Is the LL(1) machine deterministic?
⟨A σ, b w⟩ → ⟨α σ, b w⟩ if there is a rule A → α and b ∈ FIRST(α)
⟨A σ, b w⟩ → ⟨σ, b w⟩ if there is a rule A → ε and b ∈ FOLLOW(A)
For some grammars, there may be: FIRST/FIRST conflicts: A → α1 and A → α2 with FIRST(α1) ∩ FIRST(α2) ≠ ∅; FIRST/FOLLOW conflicts: A → α with FIRST(α) ∩ FOLLOW(A) ≠ ∅. NB: a FIRST/FIRST conflict does not mean that the grammar is ambiguous. Ambiguous means different parse trees for the same string.

60 Computing FIRST and FOLLOW
for each symbol X, nullable[X] is initialised to false
for each symbol X, follow[X] is initialised to the empty set
for each terminal symbol a, first[a] is initialised to {a}
for each non-terminal symbol A, first[A] is initialised to the empty set
repeat
  for each production X → Y1 ... Yk
    if all the Yi are nullable then set nullable[X] to true
    for each i from 1 to k, and j from i + 1 to k
      if Y1, ..., Yi−1 are all nullable then add all symbols in first[Yi] to first[X]
      if Yi+1, ..., Yk are all nullable then add all symbols in follow[X] to follow[Yi]
      if Yi+1, ..., Yj−1 are all nullable then add all symbols in first[Yj] to follow[Yi]
until first, follow and nullable did not change in this iteration

61 LL(1) machine exercise Consider the grammar D → [ D ] D (1) D → ( D ) D (2) D → ε (3) Implement the LL(1) machine for this grammar in a language of your choice, preferably C. Bonus for writing the shortest possible implementation in C.

62 LR machine Assume a fixed context-free grammar. We construct the LR machine for it. The top of the stack is on the right.
⟨σ, a w⟩ →shift ⟨σ a, w⟩
⟨σ α, w⟩ →reduce ⟨σ A, w⟩ if there is a rule A → α
⟨ε, w⟩ is the initial state for input w
⟨S, ε⟩ is the accepting state

63 Accepting a given input in the LR machine Definition: An input string w is accepted if and only if there is a sequence of machine steps leading to the accepting state: ⟨ε, w⟩ →* ⟨S, ε⟩ Theorem: an input string is accepted if and only if it can be derived by the grammar. More precisely: LR machine run = rightmost derivation in the grammar (read in reverse). Stretch exercise: prove this in Agda.

64 LL vs LR example Consider this grammar S → A B A → c B → d Show how the LL and LR machine can accept the input c d.

65 Is the LR machine deterministic?
⟨σ, a w⟩ →shift ⟨σ a, w⟩
⟨σ α, w⟩ →reduce ⟨σ A, w⟩ if there is a rule A → α

66 Is the LR machine deterministic?
⟨σ, a w⟩ →shift ⟨σ a, w⟩
⟨σ α, w⟩ →reduce ⟨σ A, w⟩ if there is a rule A → α
For some grammars, there may be: shift/reduce conflicts and reduce/reduce conflicts.

67 LL vs LR in more detail
LL:
⟨A σ, w⟩ → ⟨α σ, w⟩ if there is a rule A → α
⟨a σ, a w⟩ →match ⟨σ, w⟩
LR:
⟨σ, a w⟩ →shift ⟨σ a, w⟩
⟨σ α, w⟩ →reduce ⟨σ A, w⟩ if there is a rule A → α

68 LL vs LR in more detail
LL:
⟨A σ, w⟩ → ⟨α σ, w⟩ if there is a rule A → α
⟨a σ, a w⟩ →match ⟨σ, w⟩
LR:
⟨σ, a w⟩ →shift ⟨σ a, w⟩
⟨σ α, w⟩ →reduce ⟨σ A, w⟩ if there is a rule A → α
The LR machine can make its choices after it has seen the right-hand side of a rule.

69 LL vs LR example Here is a simple grammar: S → A S → B A → a b B → a c One symbol of lookahead is not enough for the LL machine. An LR machine can look at the top of its stack and base its choice on that.

70 LR machine run example S → A, S → B, A → a b, B → a c ⟨ε, a b⟩

71 LR machine run example S → A, S → B, A → a b, B → a c ⟨ε, a b⟩ →shift ⟨a, b⟩

72 LR machine run example S → A, S → B, A → a b, B → a c ⟨ε, a b⟩ →shift ⟨a, b⟩ →shift ⟨a b, ε⟩

73 LR machine run example S → A, S → B, A → a b, B → a c ⟨ε, a b⟩ →shift ⟨a, b⟩ →shift ⟨a b, ε⟩ →reduce ⟨A, ε⟩

74 LR machine run example S → A, S → B, A → a b, B → a c ⟨ε, a b⟩ →shift ⟨a, b⟩ →shift ⟨a b, ε⟩ →reduce ⟨A, ε⟩ →reduce ⟨S, ε⟩


76 Experimenting with ANTLR and Menhir errors Construct some grammar rules that are not LL(k) for any k, and feed them to ANTLR; or not LR(1), and feed them to Menhir; and observe the error messages. The error messages are allegedly human-readable. It helps if you understand LL and LR.

77 How to make the LR machine deterministic Construction of LR items. Much more complex than the FIRST/FOLLOW construction. Even more complex: LALR(1) items, to consume less memory. You really want a tool to compute it for you; real world: yacc performs the LALR(1) construction. Generations of CS students had to simulate the LALR(1) automaton in the exam. Hardcore: compute LALR(1) items by hand in the exam. Not fun: it does not fit on a sheet; if you make a mistake it never stops. But I don't think that teaches you anything.

78 Problem: ambiguous grammars A grammar is ambiguous if there is a string that has more than one parse tree. Standard example: E → E - E E → 1 One such string is 1 - 1 - 1. It could mean (1-1)-1 or 1-(1-1) depending on how you parse it. Ambiguous grammars are a problem for parsing, as we do not know which tree is intended. Note: do not confuse ambiguous with FIRST/FIRST conflict.

79 Left recursion In fact, this grammar also has a FIRST/FIRST conflict: E → E - E E → 1 1 is in FIRST of both rules, so predictive parser construction fails. Standard solution: left recursion elimination.

80 Left recursion elimination example E → E - E E → 1 We observe that E derives: 1 followed by 0 or more occurrences of - 1. Idea: E → 1 F F → - 1 F F → ε This refactored grammar also eliminates the ambiguity. Yay.

81 FIRST/FIRST conflict This grammar has a FIRST/FIRST conflict, but no left recursion and no ambiguity: A → a b A → a c


THE UNIVERSITY OF TEXAS AT AUSTIN Department of Information, Risk, and Operations Management THE UNIVERSITY OF TEXAS AT AUSTIN Department of Information, Risk, and Operations Management BA 386T Tom Shively PROBABILITY CONCEPTS AND NORMAL DISTRIBUTIONS The fundamental idea underlying any statistical

More information

Essays on Some Combinatorial Optimization Problems with Interval Data

Essays on Some Combinatorial Optimization Problems with Interval Data Essays on Some Combinatorial Optimization Problems with Interval Data a thesis submitted to the department of industrial engineering and the institute of engineering and sciences of bilkent university

More information

Rational Behaviour and Strategy Construction in Infinite Multiplayer Games

Rational Behaviour and Strategy Construction in Infinite Multiplayer Games Rational Behaviour and Strategy Construction in Infinite Multiplayer Games Michael Ummels ummels@logic.rwth-aachen.de FSTTCS 2006 Michael Ummels Rational Behaviour and Strategy Construction 1 / 15 Infinite

More information

PhD Qualifier Examination

PhD Qualifier Examination PhD Qualifier Examination Department of Agricultural Economics May 29, 2015 Instructions This exam consists of six questions. You must answer all questions. If you need an assumption to complete a question,

More information

COMP417 Introduction to Robotics and Intelligent Systems. Reinforcement Learning - 2

COMP417 Introduction to Robotics and Intelligent Systems. Reinforcement Learning - 2 COMP417 Introduction to Robotics and Intelligent Systems Reinforcement Learning - 2 Speaker: Sandeep Manjanna Acklowledgement: These slides use material from Pieter Abbeel s, Dan Klein s and John Schulman

More information

Notes on the EM Algorithm Michael Collins, September 24th 2005

Notes on the EM Algorithm Michael Collins, September 24th 2005 Notes on the EM Algorithm Michael Collins, September 24th 2005 1 Hidden Markov Models A hidden Markov model (N, Σ, Θ) consists of the following elements: N is a positive integer specifying the number of

More information

Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions R. Verrall A. Estimation of Policy Liabilities

Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions R. Verrall A. Estimation of Policy Liabilities Obtaining Predictive Distributions for Reserves Which Incorporate Expert Opinions R. Verrall A. Estimation of Policy Liabilities LEARNING OBJECTIVES 5. Describe the various sources of risk and uncertainty

More information

CS473-Algorithms I. Lecture 12. Amortized Analysis. Cevdet Aykanat - Bilkent University Computer Engineering Department

CS473-Algorithms I. Lecture 12. Amortized Analysis. Cevdet Aykanat - Bilkent University Computer Engineering Department CS473-Algorithms I Lecture 12 Amortized Analysis 1 Amortized Analysis Key point: The time required to perform a sequence of data structure operations is averaged over all operations performed Amortized

More information

Subgame Perfect Cooperation in an Extensive Game

Subgame Perfect Cooperation in an Extensive Game Subgame Perfect Cooperation in an Extensive Game Parkash Chander * and Myrna Wooders May 1, 2011 Abstract We propose a new concept of core for games in extensive form and label it the γ-core of an extensive

More information

A Consistent Semantics of Self-Adjusting Computation

A Consistent Semantics of Self-Adjusting Computation A Consistent Semantics of Self-Adjusting Computation Umut A. Acar 1 Matthias Blume 1 Jacob Donham 2 December 2006 CMU-CS-06-168 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213

More information

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Lecture 3 Tuesday, January 30, 2018 1 Inductive sets Induction is an important concept in the theory of programming language.

More information

CS792 Notes Henkin Models, Soundness and Completeness

CS792 Notes Henkin Models, Soundness and Completeness CS792 Notes Henkin Models, Soundness and Completeness Arranged by Alexandra Stefan March 24, 2005 These notes are a summary of chapters 4.5.1-4.5.5 from [1]. 1 Review indexed family of sets: A s, where

More information

a 13 Notes on Hidden Markov Models Michael I. Jordan University of California at Berkeley Hidden Markov Models The model

a 13 Notes on Hidden Markov Models Michael I. Jordan University of California at Berkeley Hidden Markov Models The model Notes on Hidden Markov Models Michael I. Jordan University of California at Berkeley Hidden Markov Models This is a lightly edited version of a chapter in a book being written by Jordan. Since this is

More information

Algorithmic Game Theory and Applications. Lecture 11: Games of Perfect Information

Algorithmic Game Theory and Applications. Lecture 11: Games of Perfect Information Algorithmic Game Theory and Applications Lecture 11: Games of Perfect Information Kousha Etessami finite games of perfect information Recall, a perfect information (PI) game has only 1 node per information

More information

Multiple regression - a brief introduction

Multiple regression - a brief introduction Multiple regression - a brief introduction Multiple regression is an extension to regular (simple) regression. Instead of one X, we now have several. Suppose, for example, that you are trying to predict

More information

Descriptive Statistics (Devore Chapter One)

Descriptive Statistics (Devore Chapter One) Descriptive Statistics (Devore Chapter One) 1016-345-01 Probability and Statistics for Engineers Winter 2010-2011 Contents 0 Perspective 1 1 Pictorial and Tabular Descriptions of Data 2 1.1 Stem-and-Leaf

More information

THE TRAVELING SALESMAN PROBLEM FOR MOVING POINTS ON A LINE

THE TRAVELING SALESMAN PROBLEM FOR MOVING POINTS ON A LINE THE TRAVELING SALESMAN PROBLEM FOR MOVING POINTS ON A LINE GÜNTER ROTE Abstract. A salesperson wants to visit each of n objects that move on a line at given constant speeds in the shortest possible time,

More information

SMT and POR beat Counter Abstraction

SMT and POR beat Counter Abstraction SMT and POR beat Counter Abstraction Parameterized Model Checking of Threshold-Based Distributed Algorithms Igor Konnov Helmut Veith Josef Widder Alpine Verification Meeting May 4-6, 2015 Igor Konnov 2/64

More information

Markov Decision Process

Markov Decision Process Markov Decision Process Human-aware Robotics 2018/02/13 Chapter 17.3 in R&N 3rd Ø Announcement: q Slides for this lecture are here: http://www.public.asu.edu/~yzhan442/teaching/cse471/lectures/mdp-ii.pdf

More information

Lecture Quantitative Finance Spring Term 2015

Lecture Quantitative Finance Spring Term 2015 implied Lecture Quantitative Finance Spring Term 2015 : May 7, 2015 1 / 28 implied 1 implied 2 / 28 Motivation and setup implied the goal of this chapter is to treat the implied which requires an algorithm

More information

CS 188: Artificial Intelligence. Outline

CS 188: Artificial Intelligence. Outline C 188: Artificial Intelligence Markov Decision Processes (MDPs) Pieter Abbeel UC Berkeley ome slides adapted from Dan Klein 1 Outline Markov Decision Processes (MDPs) Formalism Value iteration In essence

More information

The Traveling Salesman Problem. Time Complexity under Nondeterminism. A Nondeterministic Algorithm for tsp (d)

The Traveling Salesman Problem. Time Complexity under Nondeterminism. A Nondeterministic Algorithm for tsp (d) The Traveling Salesman Problem We are given n cities 1, 2,..., n and integer distances d ij between any two cities i and j. Assume d ij = d ji for convenience. The traveling salesman problem (tsp) asks

More information

A Spreadsheet-Literate Non-Statistician s Guide to the Beta-Geometric Model

A Spreadsheet-Literate Non-Statistician s Guide to the Beta-Geometric Model A Spreadsheet-Literate Non-Statistician s Guide to the Beta-Geometric Model Peter S Fader wwwpetefadercom Bruce G S Hardie wwwbrucehardiecom December 2014 1 Introduction The beta-geometric (BG) distribution

More information

Structured Payoff Scripting in QuantLib

Structured Payoff Scripting in QuantLib Structured Payoff Scripting in QuantLib Dr Sebastian Schlenkrich Dusseldorf, November 30, 2017 d-fine d-fine All rights All rights reserved reserved 0 Why do we want a payoff scripting language? Let s

More information

Handout 4: Deterministic Systems and the Shortest Path Problem

Handout 4: Deterministic Systems and the Shortest Path Problem SEEM 3470: Dynamic Optimization and Applications 2013 14 Second Term Handout 4: Deterministic Systems and the Shortest Path Problem Instructor: Shiqian Ma January 27, 2014 Suggested Reading: Bertsekas

More information

Node betweenness centrality: the definition.

Node betweenness centrality: the definition. Brandes algorithm These notes supplement the notes and slides for Task 11. They do not add any new material, but may be helpful in understanding the Brandes algorithm for calculating node betweenness centrality.

More information

4 Martingales in Discrete-Time

4 Martingales in Discrete-Time 4 Martingales in Discrete-Time Suppose that (Ω, F, P is a probability space. Definition 4.1. A sequence F = {F n, n = 0, 1,...} is called a filtration if each F n is a sub-σ-algebra of F, and F n F n+1

More information

Real Options. Katharina Lewellen Finance Theory II April 28, 2003

Real Options. Katharina Lewellen Finance Theory II April 28, 2003 Real Options Katharina Lewellen Finance Theory II April 28, 2003 Real options Managers have many options to adapt and revise decisions in response to unexpected developments. Such flexibility is clearly

More information

Numerical Descriptive Measures. Measures of Center: Mean and Median

Numerical Descriptive Measures. Measures of Center: Mean and Median Steve Sawin Statistics Numerical Descriptive Measures Having seen the shape of a distribution by looking at the histogram, the two most obvious questions to ask about the specific distribution is where

More information

Sy D. Friedman. August 28, 2001

Sy D. Friedman. August 28, 2001 0 # and Inner Models Sy D. Friedman August 28, 2001 In this paper we examine the cardinal structure of inner models that satisfy GCH but do not contain 0 #. We show, assuming that 0 # exists, that such

More information

Solutions to the problems in the supplement are found at the end of the supplement

Solutions to the problems in the supplement are found at the end of the supplement www.liontutors.com FIN 301 Exam 2 Chapter 12 Supplement Solutions to the problems in the supplement are found at the end of the supplement Chapter 12 The Capital Asset Pricing Model Risk and Return Higher

More information

MA 1125 Lecture 12 - Mean and Standard Deviation for the Binomial Distribution. Objectives: Mean and standard deviation for the binomial distribution.

MA 1125 Lecture 12 - Mean and Standard Deviation for the Binomial Distribution. Objectives: Mean and standard deviation for the binomial distribution. MA 5 Lecture - Mean and Standard Deviation for the Binomial Distribution Friday, September 9, 07 Objectives: Mean and standard deviation for the binomial distribution.. Mean and Standard Deviation of the

More information

Ex 1) Suppose a license plate can have any three letters followed by any four digits.

Ex 1) Suppose a license plate can have any three letters followed by any four digits. AFM Notes, Unit 1 Probability Name 1-1 FPC and Permutations Date Period ------------------------------------------------------------------------------------------------------- The Fundamental Principle

More information

STAT 157 HW1 Solutions

STAT 157 HW1 Solutions STAT 157 HW1 Solutions http://www.stat.ucla.edu/~dinov/courses_students.dir/10/spring/stats157.dir/ Problem 1. 1.a: (6 points) Determine the Relative Frequency and the Cumulative Relative Frequency (fill

More information

Sample Size for Assessing Agreement between Two Methods of Measurement by Bland Altman Method

Sample Size for Assessing Agreement between Two Methods of Measurement by Bland Altman Method Meng-Jie Lu 1 / Wei-Hua Zhong 1 / Yu-Xiu Liu 1 / Hua-Zhang Miao 1 / Yong-Chang Li 1 / Mu-Huo Ji 2 Sample Size for Assessing Agreement between Two Methods of Measurement by Bland Altman Method Abstract:

More information

CS 343: Artificial Intelligence

CS 343: Artificial Intelligence CS 343: Artificial Intelligence Markov Decision Processes II Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC

More information

Notes for Section: Week 7

Notes for Section: Week 7 Economics 160 Professor Steven Tadelis Stanford University Spring Quarter, 004 Notes for Section: Week 7 Notes prepared by Paul Riskind (pnr@stanford.edu). spot errors or have questions about these notes.

More information

4 Reinforcement Learning Basic Algorithms

4 Reinforcement Learning Basic Algorithms Learning in Complex Systems Spring 2011 Lecture Notes Nahum Shimkin 4 Reinforcement Learning Basic Algorithms 4.1 Introduction RL methods essentially deal with the solution of (optimal) control problems

More information

Full abstraction for multi-language systems ML plus linear types

Full abstraction for multi-language systems ML plus linear types Full abstraction for multi-language systems ML plus linear types Gabriel Scherer, Amal Ahmed, Max New Northeastern University, Boston May 5, 2017 1 1 Full Abstraction for Multi-Language Systems: Introduction

More information

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages

Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Harvard School of Engineering and Applied Sciences CS 152: Programming Languages Lecture 3 Tuesday, February 2, 2016 1 Inductive proofs, continued Last lecture we considered inductively defined sets, and

More information

Use partial derivatives just found, evaluate at a = 0: This slope of small hyperbola must equal slope of CML:

Use partial derivatives just found, evaluate at a = 0: This slope of small hyperbola must equal slope of CML: Derivation of CAPM formula, contd. Use the formula: dµ σ dσ a = µ a µ dµ dσ = a σ. Use partial derivatives just found, evaluate at a = 0: Plug in and find: dµ dσ σ = σ jm σm 2. a a=0 σ M = a=0 a µ j µ

More information

Optimization for Chemical Engineers, 4G3. Written midterm, 23 February 2015

Optimization for Chemical Engineers, 4G3. Written midterm, 23 February 2015 Optimization for Chemical Engineers, 4G3 Written midterm, 23 February 2015 Kevin Dunn, kevin.dunn@mcmaster.ca McMaster University Note: No papers, other than this test and the answer booklet are allowed

More information

Probability and Stochastics for finance-ii Prof. Joydeep Dutta Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur

Probability and Stochastics for finance-ii Prof. Joydeep Dutta Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur Probability and Stochastics for finance-ii Prof. Joydeep Dutta Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur Lecture - 07 Mean-Variance Portfolio Optimization (Part-II)

More information

ACCT323, Cost Analysis & Control H Guy Williams, 2005

ACCT323, Cost Analysis & Control H Guy Williams, 2005 Cost allocation methods are an interesting group of exercise. We will see different cuts. Basically the problem we have is very similar to the problem we have with overhead. We can figure out the direct

More information

Implementing Risk Appetite for Variable Annuities

Implementing Risk Appetite for Variable Annuities Implementing Risk Appetite for Variable Annuities Nick Jacobi, FSA, CERA Presented at the: 2011 Enterprise Risk Management Symposium Society of Actuaries March 14-16, 2011 Copyright 2011 by the Society

More information

Proof Techniques for Operational Semantics

Proof Techniques for Operational Semantics Proof Techniques for Operational Semantics Wei Hu Memorial Lecture I will give a completely optional bonus survey lecture: A Recent History of PL in Context It will discuss what has been hot in various

More information

On the computational complexity of spiking neural P systems

On the computational complexity of spiking neural P systems On the computational complexity of spiking neural P systems Turlough Neary Boole Centre for Research in Informatics, University College Cork, Ireland. tneary@cs.may.ie Abstract. It is shown that there

More information

So we turn now to many-to-one matching with money, which is generally seen as a model of firms hiring workers

So we turn now to many-to-one matching with money, which is generally seen as a model of firms hiring workers Econ 805 Advanced Micro Theory I Dan Quint Fall 2009 Lecture 20 November 13 2008 So far, we ve considered matching markets in settings where there is no money you can t necessarily pay someone to marry

More information

1 Chapter 1 Extra Questions and Answers

1 Chapter 1 Extra Questions and Answers 1 Chapter 1 Extra Questions and s Question 1. What does GDP stand for? Write down and then define (that is, explain) the four major expenditure components of GDP. GDP stands for Gross Domestic Product.

More information

1 Online Problem Examples

1 Online Problem Examples Comp 260: Advanced Algorithms Tufts University, Spring 2018 Prof. Lenore Cowen Scribe: Isaiah Mindich Lecture 9: Online Algorithms All of the algorithms we have studied so far operate on the assumption

More information

Decision Theory: Value Iteration

Decision Theory: Value Iteration Decision Theory: Value Iteration CPSC 322 Decision Theory 4 Textbook 9.5 Decision Theory: Value Iteration CPSC 322 Decision Theory 4, Slide 1 Lecture Overview 1 Recap 2 Policies 3 Value Iteration Decision

More information

Richardson Extrapolation Techniques for the Pricing of American-style Options

Richardson Extrapolation Techniques for the Pricing of American-style Options Richardson Extrapolation Techniques for the Pricing of American-style Options June 1, 2005 Abstract Richardson Extrapolation Techniques for the Pricing of American-style Options In this paper we re-examine

More information

5.1 Personal Probability

5.1 Personal Probability 5. Probability Value Page 1 5.1 Personal Probability Although we think probability is something that is confined to math class, in the form of personal probability it is something we use to make decisions

More information

Generalising the weak compactness of ω

Generalising the weak compactness of ω Generalising the weak compactness of ω Andrew Brooke-Taylor Generalised Baire Spaces Masterclass Royal Netherlands Academy of Arts and Sciences 22 August 2018 Andrew Brooke-Taylor Generalising the weak

More information

Game theory and applications: Lecture 1

Game theory and applications: Lecture 1 Game theory and applications: Lecture 1 Adam Szeidl September 20, 2018 Outline for today 1 Some applications of game theory 2 Games in strategic form 3 Dominance 4 Nash equilibrium 1 / 8 1. Some applications

More information