Pakes (1986): Patents as Options: Some Estimates of the Value of Holding European Patent Stocks

Spring 2009

(These notes rely on Matthew Shum's lecture notes.)

Main question: How much are patents worth? Answering this question is important because it helps inform the debate about optimal patent length and design. For example, are patents good tools for rewarding innovation?

Let $Q_a$ denote the value of a patent at age $a$. The goal of the paper is to estimate $Q_a$ using data on patent renewals: $Q_a$ is inferred from the patent renewal process via a structural model of optimal renewal behavior.

1 Behavioral Model

- Treat the patent renewal system as exogenous; the paper looks only at the European system.
- For $a = 1, \ldots, L$, a patent can be renewed by paying the fee $c_a$.

Timing:

- At age $a = 1$ the patent holder obtains period revenue $r_1$ from the patent.
- He decides whether or not to renew. If he renews, he pays $c_1$ and proceeds to age $a = 2$.

- If he does not renew, he loses the patent and gets 0.
- At age $a = 2$ the patent holder obtains period revenue $r_2$ from the patent.
- He decides whether or not to renew. If he renews, he pays $c_2$ and proceeds to age $a = 3$.
- And so on.

Let $V_a$ denote the value of a patent at age $a$:

$$V_a \equiv \max_{t \in [a, L]} E \sum_{a'=1}^{L-a} \beta^{a'} R(a + a') \qquad (1)$$

where

$$R(a) = \begin{cases} r_a - c_a & \text{if } t \geq a \text{ (when you hold onto the patent)} \\ 0 & \text{if } t < a \text{ (after you allow the patent to expire)} \end{cases} \qquad (2)$$

and $t$ is the age at which the agent allows the patent to expire. Hence $R(a)$ denotes the profit from the patent during the $a$-th year.

The sequence $R(1), R(2), \ldots$ is a controlled stochastic process: it is inherently random, but also affected by the agent's actions (i.e. renewing the patent). This type of problem is called an optimal stopping problem. Unlike Rust's bus engine paper, this is not a regenerative optimal stopping problem.

Since the maximal age $L$ is finite, this is a finite-horizon problem. Most dynamic problems are either (a) infinite-horizon, stationary problems or (b) finite-horizon, non-stationary problems. Stationarity means that the value functions and decision rules are time-invariant functions of the state variables; time matters only through the values of the state variables (e.g. mileage in Rust's bus engine paper).

The state variable in this paper is $r_a$, the single-period revenue.
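To make equations (1)-(2) concrete, here is a toy Python sketch that computes the realized discounted payoff of a single revenue path for a given expiration age $t$. The function name and the numbers in the example are purely illustrative assumptions, not anything from the paper.

```python
import numpy as np

def discounted_patent_payoff(r, c, a, t, beta):
    """Realized discounted payoff in eq. (1) for one revenue path.

    r, c : revenues r_1,...,r_L and fees c_1,...,c_L (index 0 is age 1)
    a    : current age; t : age at which the patent is allowed to expire
    R(a') = r_{a'} - c_{a'} while the patent is held (t >= a'), else 0 (eq. 2).
    """
    total = 0.0
    for ap in range(1, len(r) - a + 1):          # a' = 1, ..., L - a
        age = a + ap
        R = (r[age - 1] - c[age - 1]) if t >= age else 0.0
        total += beta ** ap * R
    return total

# Toy example with made-up numbers: payoff of holding to t = 3, starting at a = 1.
print(discounted_patent_payoff(r=[5, 4, 3, 2], c=[1, 1, 1, 1], a=1, t=3, beta=0.95))
```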

Finite-horizon problems are solved via backward recursion: start with the last period of the problem and work backwards. The value function is

$$V_a(r_a) = \max \left\{ 0, \; r_a + \beta E[V_{a+1}(r_{a+1}) \mid \Omega_a] - c_a \right\} \qquad (3)$$

where the value of a patent is $Q_a = r_a + \beta E[V_{a+1}(r_{a+1}) \mid \Omega_a]$, and you choose to renew if $Q_a > c_a$. Here $\Omega_a$ is the history of revenue up to age $a$, i.e. $\{r_1, r_2, \ldots, r_a\}$, and the expectation is over $r_{a+1} \mid \Omega_a$.

The sequence of conditional distributions $G_a \equiv F(r_{a+1} \mid \Omega_a)$, $a = 1, 2, \ldots, L$, is an important component of the model specification. Pakes assumes

$$r_{a+1} = \begin{cases} 0 & \text{with prob. } \exp(-\theta r_a) \\ \max\{\delta r_a, z\} & \text{with prob. } 1 - \exp(-\theta r_a) \end{cases} \qquad (4)$$

where the density of $z$ is $q_a(z) = \frac{1}{\sigma_a} \exp(-(\gamma + z)/\sigma_a)$ with $\sigma_a = \phi^{a-1} \sigma$, $a = 1, 2, \ldots, L-1$, and $\{\delta, \theta, \gamma, \phi, \sigma\}$ are the important structural parameters of the model.

Pakes explains the motivation behind this stochastic evolution of $r_a$:

1. The firm learns about the patent over time (continuing to spend money on development).
2. It may learn the patent is worthless: it gets 0.
3. It may learn nothing, in which case the expectation is $\delta r_a$ with $\delta < 1$; revenue is lower because others are innovating.
4. It may learn the patent is more valuable: $z$.

The agent's maximization problem: is the value of the patent, $Q_a = r_a + {}$ option value, greater than the cost of renewing the patent, $c_a$? This yields threshold values of $r_a$, denoted $\bar r_a$, above which the agent renews (see Figure 1).
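As a minimal sketch, here is how one might simulate the assumed revenue transition (4). Reading the density $q_a$ as a shifted exponential with support $z \geq -\gamma$, and all parameter values below, are assumptions of the sketch, not estimates from the paper.

```python
import numpy as np

def draw_next_revenue(r_a, a, delta, theta, gamma, phi, sigma, rng):
    """One draw of r_{a+1} given r_a under the assumed transition (eq. 4).

    With prob. exp(-theta * r_a) the patent becomes worthless (r_{a+1} = 0);
    otherwise r_{a+1} = max(delta * r_a, z), where z has density
    q_a(z) = (1/sigma_a) * exp(-(gamma + z)/sigma_a) with sigma_a = phi**(a-1) * sigma.
    Treating z + gamma as exponential with scale sigma_a is an assumption here.
    """
    if rng.random() < np.exp(-theta * r_a):
        return 0.0
    sigma_a = phi ** (a - 1) * sigma
    z = rng.exponential(sigma_a) - gamma        # shifted exponential draw
    return max(delta * r_a, z)

# Hypothetical parameter values, for illustration only.
rng = np.random.default_rng(0)
path = [5.0]                                    # assumed initial revenue r_1
for a in range(1, 20):
    path.append(draw_next_revenue(path[-1], a, delta=0.9, theta=0.1,
                                  gamma=1.0, phi=1.1, sigma=2.0, rng=rng))
```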

The cutoff points are due to assumptions ensuring that $Q_a$ is increasing in $r_a$, so that $Q_a$ and $c_a$ cross only once. The specification also ensures that $\bar r_a < \bar r_{a+1} < \bar r_{a+2} < \ldots < \bar r_{L-1}$.

2 Implementation

The paper uses aggregate data. For cohort $j$ (the year in which patents are granted), we observe the sequence $n(a, j)$, $a = f_j, f_j + 1, \ldots, l_j - 1, l_j$: the number of cohort-$j$ patents which are not renewed at age $a$.

- $f_j$ is the first age at which a renewal fee is observed for cohort $j$ (e.g. in the UK the first renewal fee is required 6 years after the patent is filed).
- $l_j$ is the last age at which a renewal fee is observed for cohort $j$.

So we have both left and right censoring: we do not know what happened in the years before $f_j$ (we only see $\sum_{a=1}^{f_j} n(a, j)$), and we do not know what happened in the years after $l_j$.

Note the model is fully parametric (although it incorporates a flexible specification). The likelihood of the aggregate data is derived using $\mathrm{Prob}(t_{ij} = a)$, the probability that an individual patent $i$ from cohort $j$ is renewed up to age $a$:

$$\mathrm{Prob}(t_{ij} = a) = \mathrm{Prob}(r_a < \bar r_a, \; r_{a-1} > \bar r_{a-1}, \ldots, r_1 > \bar r_1) \qquad (5)$$
$$= \int_0^{\bar r_a} \int_{\bar r_{a-1}}^{\infty} \cdots \int_{\bar r_1}^{\infty} f(r_a, \ldots, r_1) \, dr_1 \ldots dr_a \qquad (6)$$
$$\equiv \pi(a; c_j) \qquad (7)$$

where $f$ is the joint density of revenues and $c_j$ is the fee schedule in place for patents in cohort $j$.

Similarly,

$$\mathrm{Prob}(t_{ij} \leq f_j) = \sum_{a=1}^{f_j} \pi(a; c_j) \equiv A \qquad (8)$$

$$\mathrm{Prob}(t_{ij} \geq l_j) = 1 - \sum_{a=1}^{l_j - 1} \pi(a; c_j) \equiv B \qquad (9)$$

The log-likelihood function for the aggregate data, letting $\omega$ denote the vector of parameters, is

$$\ell(\{n(a, j)\}, \omega) = \sum_{j=1}^{J} \sum_{a=f_j}^{l_j} n(a, j) \log \tilde\pi(a; c_j) \qquad (10)$$

where

$$\tilde\pi(a; c_j) = \begin{cases} \pi(a; c_j) & \text{if } f_j < a < l_j \\ A & \text{if } a \leq f_j \\ B & \text{if } a \geq l_j \end{cases}$$

3 Estimation Summary

Use a nested algorithm:

1. Inner loop: at the current parameter values $\hat\omega$, solve the dynamic problem and obtain the sequence of thresholds $\bar r_1, \ldots, \bar r_{L-1}$.
2. Outer loop: given the revenue cutoff values, evaluate the log-likelihood function. This is a complicated integral, evaluated by simulation.
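To make the outer-loop evaluation concrete, here is a minimal sketch of assembling one cohort's contribution to the log-likelihood (10) from model-implied probabilities $\pi(a; c_j)$ and the aggregate counts $n(a, j)$. The function name, the array/dict layout, and the exact treatment of the censoring terms $A$ and $B$ follow the reconstruction above and are assumptions of the sketch.

```python
import numpy as np

def cohort_log_likelihood(pi, n, f_j, l_j):
    """One cohort's contribution to the log-likelihood in eq. (10).

    pi  : array with pi[a-1] = pi(a; c_j) for a = 1, ..., L-1 (model-implied)
    n   : dict mapping observed age a (f_j <= a <= l_j) to the count n(a, j)
    Ages a <= f_j use the left-censoring term A, ages a >= l_j use the
    right-censoring term B, as in the reconstruction of eqs. (8)-(9).
    """
    A = pi[:f_j].sum()                 # Prob(t <= f_j)
    B = 1.0 - pi[:l_j - 1].sum()       # Prob(t >= l_j)
    ll = 0.0
    for a, count in n.items():
        if a <= f_j:
            p = A
        elif a >= l_j:
            p = B
        else:
            p = pi[a - 1]
        ll += count * np.log(max(p, 1e-12))   # guard against log(0)
    return ll
```

Summing this contribution over cohorts $j = 1, \ldots, J$ gives the objective that is maximized over $\omega$.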

4 Computational Details

4.1 Inner Loop

Solve for $\bar r_1, \ldots, \bar r_{L-1}$ by numerical backward induction. There are many ways to do this. The brute-force (and simplest) way is discretization. Assume that at each age $a$, returns take values in $[0, \bar R]$. Consider a grid of $M$ points over this interval. We compute the value functions $V_1(r; \omega), \ldots, V_{L-1}(r; \omega)$ only at these $M$ points; for values between grid points, we approximate the value function via interpolation. Specifically:

Start with the final period $L$:

$$V_L(r_L; \omega) = r_L \qquad (11)$$

for all $r_L$, because the patent still earns profits at age $L$ but there is no chance of renewal after age $L$ (it is off patent, and we assume others jump in).

Go to period $L - 1$:

$$V_{L-1}(r_{L-1}; \omega) = \max \left\{ 0, \; r_{L-1} + \beta E_{r_L \mid r_{L-1}} V_L(r_L; \omega) - c_{L-1} \right\} \qquad (12)$$
$$= \max \left\{ 0, \; r_{L-1} + \beta E_{r_L \mid r_{L-1}} r_L - c_{L-1} \right\} \qquad (13)$$

where $E_{r_L \mid r_{L-1}} r_L$ is evaluated using the assumed parametric structure discussed earlier. So for each $r_{L-1}$ on the $M$ grid points, you calculate $V_{L-1}$. Note that in this process you also uncover the threshold value $\bar r_{L-1}$ (or determine that it lies between two grid points).

Now go to period $L - 2$:

$$V_{L-2}(r_{L-2}; \omega) = \max \left\{ 0, \; r_{L-2} + \beta E_{r_{L-1} \mid r_{L-2}} V_{L-1}(r_{L-1}; \omega) - c_{L-2} \right\} \qquad (14)$$

For values of $V_{L-1}(r_{L-1}; \omega)$ between the grid points, approximate the value by interpolation (e.g. a straight line). Repeat this procedure for all periods. This yields all of the value functions and threshold values. A small code sketch of the procedure follows.
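The sketch below implements this backward induction on a discretized revenue grid, assuming the conditional expectation $E[V_{a+1} \mid r_a]$ is approximated by Monte Carlo over transition draws (quadrature against the transition density would work equally well). The function names and numerical choices are illustrative, not the paper's code.

```python
import numpy as np

def solve_thresholds(L, c, beta, grid, draw_next, n_draws=200, seed=0):
    """Backward induction for V_1,...,V_L on a revenue grid (eqs. 11-14).

    c         : renewal fees c_1, ..., c_{L-1}
    grid      : M grid points for revenues in [0, R_bar], increasing
    draw_next : function (r_a, a, rng) -> one draw of r_{a+1} | r_a
    Returns the value functions on the grid and the thresholds rbar_a.
    """
    rng = np.random.default_rng(seed)
    grid = np.asarray(grid, dtype=float)
    M = len(grid)
    V = np.zeros((L + 1, M))
    V[L] = grid                                   # V_L(r) = r, eq. (11)
    rbar = np.full(L, np.nan)
    for a in range(L - 1, 0, -1):                 # ages L-1, ..., 1
        EV = np.empty(M)
        for i, r in enumerate(grid):
            draws = [draw_next(r, a, rng) for _ in range(n_draws)]
            # interpolate V_{a+1} between grid points (straight line)
            EV[i] = np.interp(draws, grid, V[a + 1]).mean()
        Q = grid + beta * EV                      # patent value Q_a
        V[a] = np.maximum(0.0, Q - c[a - 1])      # renew iff Q_a > c_a
        renew = Q > c[a - 1]
        # first grid point at which renewal is preferred (threshold rbar_a)
        rbar[a - 1] = grid[renew][0] if renew.any() else np.inf
    return V, rbar
```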

4.2 Outer Loop

With the cutoff rules (i.e. the optimal decision rules), we can simulate the likelihood function; a small code sketch follows the steps below. For $s = 1, \ldots, S$ (where $S$ is the number of simulation draws):

1. Draw a sequence of returns $r_1^s, r_2^s, \ldots, r_L^s$ according to the assumed parametric specification: start by drawing $r_1^s$, then draw $r_2^s$ given $r_1^s$, etc.
2. Given this sequence, find the drop-out age $t^s$, which equals the first $a$ at which $r_a^s < \bar r_a(\hat\omega)$.
3. Then, for all $a = 1, \ldots, L-1$, approximate
$$\pi(a; c) \approx \frac{1}{S} \sum_{s=1}^{S} 1(t^s = a).$$
4. Finally, perform simulated maximum likelihood using the above approximation for $\pi(a; c)$.
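Here is a matching sketch of the outer-loop frequency simulator for $\pi(a; c)$, using the thresholds from the inner loop. The initial draw of $r_1^s$ is left as a user-supplied function, since its distribution is not reconstructed in these notes; all names are illustrative.

```python
import numpy as np

def simulate_pi(S, L, rbar, draw_r1, draw_next, seed=0):
    """Frequency simulation of pi(a; c) for a = 1, ..., L-1.

    rbar      : thresholds rbar_1, ..., rbar_{L-1} from the inner loop
    draw_r1   : function (rng) -> initial revenue r_1 (distribution assumed)
    draw_next : function (r_a, a, rng) -> one draw of r_{a+1} | r_a
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(L - 1)
    for _ in range(S):
        r = draw_r1(rng)
        t_s = L                          # default: no drop-out in ages 1..L-1
        for a in range(1, L):            # ages 1, ..., L-1
            if r < rbar[a - 1]:          # first age with r_a below threshold
                t_s = a
                break
            r = draw_next(r, a, rng)
        if t_s < L:
            counts[t_s - 1] += 1
    return counts / S                    # pi(a; c) ~ (1/S) * sum_s 1(t^s = a)
```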