Lecture 4: Graduate Industrial Organization. Characteristic Space, Product Level Data, and Price Indices.


Ariel Pakes
September 21, 2015

Estimation from Product Level Data: The Simple Cases.

Data: {(s^o_j, p_j, x_j)}_{j=1}^J, possibly also observed across markets or over time. This is the prevalent type of data.

Estimation from the Demand Equation: Vertical Model.

Historical Note. This is the model estimated by Bresnahan (1987, JIE) on the auto industry (and it underlies much of the problem set). He starts with a puzzle: there was an apparent decrease in automobile prices in 1955 relative to both of its surrounding years (1954 and 1956), despite the fact that 1955 was a boom year in the economy. Bresnahan asks whether changes in the nature of competition could explain this apparent paradox. In particular, one hypothesis is that there had been a collusive regime, and it could not survive the 1955 boom. We will see later in the course that there are a group of models (largely associated with Rotemberg and Saloner, 1986, AER) that

predict that it will be hard to maintain a collusive agreement during a boom, because the incentives to cheat are higher during those periods (particularly if the boom is likely to be short-lived, so that subsequent years will not be as robust and will limit possible punishments). So Bresnahan's paper is part of the literature on testing the nature of competition (see above). As in that literature, Bresnahan posits different models for price equilibria and asks which comes closer to fitting the data in each of the different years. He obtains his data from Automotive News and Ward's, and uses an estimation procedure similar, but not identical, to the subset of the procedures about to be discussed that use both the demand and the pricing equations in estimation. Bresnahan concludes that, as conjectured, collusive pricing better fits the data in 1954 and 1956, but a Nash in prices assumption fits the 1955 data better.

Example 1: Vertical Model.

We now go through an estimation procedure for the vertical model based only on the demand equation. By the vertical model we mean

    U_{i,j} = δ_j − ν p_j,

where δ_j is the quality of good j and ν differs across individuals. Note that everybody agrees on which products have more quality. To obtain the shares predicted by the model for different values of the parameters:

- Order goods by increasing p. The ordering must also be increasing in δ_j if good j is to have positive demand (a good with a higher price and a lower δ than another good will never be purchased).
- Choose the outside good 0 if 0 > max_j (−ν p_j + δ_j). Given the ordering of goods, you should be able to show that this implies: choose 0 ⟺ ν < δ_1/p_1.

So A_0 = {ν : ν < δ_1/p_1}. If ν is log normal, i.e. ν = exp[σv + µ] with v standard normal, then

    choose 0 ⟺ exp[σv + µ] < δ_1/p_1 ⟺ v < ψ_0(θ),

where ψ_0 ≡ σ^{−1}[ln(δ_1/p_1) − µ]. I.e. our model has s_0 = F(ψ_0(θ)), with F the standard normal distribution function.

Similarly, choose good 1 if and only if 0 < −νp_1 + δ_1 and −νp_1 + δ_1 > −νp_2 + δ_2, or

    ψ_0(θ) < v < ψ_1(θ), where ψ_1(θ) = ψ_1(δ_1, δ_2, p_1, p_2, σ, µ).

So s_1(θ) = F(ψ_1(θ)) − F(ψ_0(θ)), and more generally, for j = 1, ..., J,

    s_j(θ) = F(ψ_j(θ)) − F(ψ_{j−1}(θ)), θ ≡ (δ_1, ..., δ_J, µ, σ).

If you draw yourself a picture of the distribution of ν, the ψ_j represent cutoffs and the shares are the integrals of the density between those cutoffs.

Note that for this solution we require conditions. In particular,

    U_{i,j} < U_{i,j+1} ⟹ U_{i,j} < U_{i,k} for all k > j + 1

(and analogously for the reverse inequality, and this for all i) ⟺ both δ_j is increasing in j and (δ_{j+1} − δ_j)/(p_{j+1} − p_j) is increasing in j. If one of the goods is out of order in this quantity, the

model implies that it will never be purchased (the proof takes a bit of work and is not integral to the rest of the course; it is a reasonable exercise for those of you who want to make sure you can play with the details of such models; the result follows from the linearity in δ and in price).

This gives us the model's predictions conditional on (δ, p). We now need the DGP, or data generating process, for the shares. We simply assume we observe the choices of a random sample of size n = Σ_{j=0}^J n_j. Each individual chooses one cell from a finite number of cells; choices are mutually exclusive and exhaustive. This gives a multinomial distribution of outcomes, with likelihood

    L ∝ Π_j s_j(θ)^{n_j},

so choose θ to

    max_θ Σ_j s^o_j log[s_j(θ)] ≈ min_θ Σ_j (s^o_j − s_j(θ))² / s_j(θ).

The latter is called minimum χ² or, if we put the observed shares in the denominator, modified minimum χ². The ≈ sign denotes equivalence up to first order asymptotics.

Details of Estimation. The limit distribution for the shares is

    s^o − s(θ) ≈ N(0, n^{−1}(diag[s] − ss′)).

As n → ∞ the variance goes to zero, so your prediction better fit well.

Shares are more precisely estimated the lower is s; this explains the weights in the minimum χ² objective. If there is a zero s you had better get a zero share.

Example 2: Pure Logit.

    U_{i,j} = δ_j − p_j + ε_{i,j},

with the {ε_{i,j}}_{j=1}^J extreme value (or "double exponential") and mutually independent. The ε_{i,j} are a source of horizontal differentiation: δ_j − p_j is the mean utility, and ε_{i,j} is the deviation from that mean. This yields a closed form for the shares:

    s_j(θ) = exp[δ_j − p_j] / (1 + Σ_q exp[δ_q − p_q]),

    s_0(θ) = 1 / (1 + Σ_q exp[δ_q − p_q]).

Now build up the likelihood with this model and the same DGP.

Identification. Recall that θ = (δ_1, ..., δ_J, µ, σ). There are J + 2 parameters, but only J independent data elements to fit, since s_0 = 1 − Σ_{j=1}^J s_j. You are underidentified without more structure. The typical solution is δ_j = Σ_k x_{k,j} β_k; now estimate (β, θ). You should be sure you know how to do it.

Problems with Estimates from Simple Models. There is one of these for each of the two models, and another which is common to both.
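Before turning to these problems, here is a minimal Python sketch of the share predictions of the two simple models (not part of the lecture; the parameter values are made up for illustration, and the cutoff formulas follow the text):

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def vertical_shares(delta, p, mu, sigma):
    # Vertical model with nu = exp(sigma*v + mu), v standard normal.
    # Goods must be ordered by increasing price and quality, with the
    # quality/price increments satisfying the ordering condition in the text.
    J = len(delta)
    cuts = [(math.log(delta[0] / p[0]) - mu) / sigma]       # outside good vs good 1
    for j in range(J - 1):                                  # good j vs good j+1
        ratio = (delta[j + 1] - delta[j]) / (p[j + 1] - p[j])
        cuts.append((math.log(ratio) - mu) / sigma)
    F = [norm_cdf(c) for c in cuts] + [1.0]                 # F(psi_J) = 1
    return F[0], [F[j + 1] - F[j] for j in range(J)]        # (s_0, [s_1 .. s_J])

def logit_shares(delta, p):
    # Pure logit: s_j = exp(delta_j - p_j) / (1 + sum_q exp(delta_q - p_q))
    num = [math.exp(d - pj) for d, pj in zip(delta, p)]
    denom = 1.0 + sum(num)
    return 1.0 / denom, [v / denom for v in num]

s0_v, s_v = vertical_shares(delta=[1.0, 3.0], p=[2.0, 3.0], mu=0.0, sigma=1.0)
s0_l, s_l = logit_shares(delta=[1.0, 3.0], p=[2.0, 3.0])
```

In both cases the outside share and the inside shares sum to one; in the vertical model each share is the mass of the ν (equivalently v) density falling between two adjacent cutoffs.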

1. Vertical model.
- Cross price elasticities are nonzero only with neighboring goods, with neighbors defined by prices (picture).
- Own price elasticities are often not smaller for high priced goods, even though this is intuitive and we think it is true in most data sets (higher income purchasers care less about price, and higher priced goods generally have higher markups to justify the sunk costs required to develop them).

2. Logit model. The IIA problem: the distribution of a consumer's preferences over products other than the product it bought does not depend on the product it bought.
- The own price derivative is ∂s/∂p = −s(1 − s), which depends only on shares. Two goods with the same share must have the same markup in a single product firm Nash in prices equilibrium.
- Cross price derivatives are s_j s_k. Two goods with the same shares have the same cross price elasticities with any other good.

No data will ever change these implications of the two models. If your estimates do not satisfy them, there is a programming error.

3. Overfitting and Simultaneity. Recall that the variance of the share goes down like 1/n. Hence if n is large you will overfit; i.e. the data will tell you the model is wrong. Basically this is telling you there is another source of error other than sampling

error. This problem is typically striking in consumer products; it is often not so obvious in producer products.

Problem 3: Overfitting and Simultaneity. (We return to the first two problems below.) One possible source of error is unobserved or unmeasured characteristics (or simply characteristics that the researcher has but does not use in estimation¹). The first explicit treatment of this is in Berry (1994, RAND), though Bresnahan does have a discussion of the need for disturbances in his thesis. This leads to

    δ_j = Σ_k x_{j,k} β_k − α p_j + ξ_j,  ∀j.

Don't try to estimate the {ξ_j} (otherwise we would be back to the too-many-parameters problem); rather, assume they are random draws from some distribution, and use the properties of that distribution to estimate β (just as you do in any other estimation problem that allows for disturbances). A traditional assumption, for example, would be that ξ is mean independent of the x's, though not of price (since consumers know about ξ, firms are likely to know it also, in which case it is in their interest to account for it when setting price). You should think about why this may not be a good assumption (characteristics are determined by prior choices...), though its origins date back to the early demand systems, and it is still often used because price varies so much more over time than other characteristics.

¹This goes back to the distinction between producer and consumer goods. Consumer goods typically have many characteristics that at least some consumers care about (think about cars). If one were to use all of them in estimation, we would be back to the too-many-parameters problem. Typically what we do is choose the most important ones and hope that the impacts of the rest are captured by the disturbance term introduced below. Again, for producer goods this is typically less of a problem, and whether we add the disturbance does not seem to impact heavily on the results.

Estimation Strategy with Unobserved Product Characteristics.

The problem with estimation: we cannot use instruments directly, because the shares are a non-linear function of the disturbance.

Strategy. Assume n is very large, so s^o_j = s_j(ξ, ...; θ_0). For each θ there is only one ξ that makes the predicted shares equal to the observed shares (a system of J equations in J unknowns). We invert the demand model to find ξ as a function of the parameter vector; precisely how we do this depends on the functional form of the demand model. Once we have ξ(θ) we have the disturbance in linear form, and we can proceed with instruments just as we do in standard estimation problems. I.e. we make assumptions about the properties of the true disturbance ξ(θ_0), and find the value of θ which makes the sample analogues of those properties as close to true as possible (covariances with observed x's, ...).

Example: The Vertical Model. Note that

    s^o_0 = F(ψ_0(θ, ξ)) ⟹ y_0 ≡ F^{−1}(s^o_0) = ψ_0(θ, ξ),

where the o superscript refers to observed data; note that y_0 can be calculated from knowledge of s_0. Similarly,

    s^o_1 = F(ψ_1(·)) − F(ψ_0(·)) ⟹ y_1 ≡ F^{−1}(s^o_0 + s^o_1) = ψ_1(θ, ξ), ...

so that {y_j} = {ψ_j(θ, δ_j)}. Once we have this sequence you should be able to calculate the ξ_j as a function of (y, x, p; β, µ, σ), say ξ_j(y, x, p; β, µ, σ). Since y

can be calculated from s, we usually write this as the vector ξ(s, x, p; θ), where θ = (β, µ, σ).

Example: The Logit. The functional form of the inversion is particularly simple for the logit, since

    ln[s_j] − ln[s_0] = δ_j(θ) ≡ x_j β − α p_j + ξ_j.

So

    ξ_j(s, x, p; θ) = ln[s_j] − ln[s_0] − x_j β + α p_j.

Estimation Given ξ_j(·). With either model we recover the {ξ_j(s, x, p; θ)}. Next we want to choose our moment restrictions. This is where the traditional simultaneity problem in demand estimation comes back to us. Since consumers know the ξ_j, presumably the firms do too. Then in whatever equilibrium you have, p_j = p(x_j, ξ_j, x_{−j}, ξ_{−j}). Since p is a function of ξ, we cannot use moment conditions which interact functions of p with ξ(θ). This is just the simultaneity problem we saw in the first lecture, and it implies that we need an instrument for p. Note that if E[ξ(θ_0)x] ≠ 0 we need instruments for the x's also. Notice also that typically ξ is a function of the characteristics of all goods, as the market share of one good depends on the characteristics of all goods. This function will get more complicated as we use more realistic demand functions.

Here are some assumptions used for identification.

- The traditional assumption is E[ξ_j | x, w] = 0. Note that x contains the vector of characteristics (other than price) of all products, not just product j; w are cost side factors, and again all of them are relevant. This is traditional both because it is the analogue of what is done in standard demand and supply estimation and because it is what was done in BLP. These assumptions leave you with a lot of potential instruments. They are, however, quite strong: think of the fact that over some longer horizon both the (x, ξ) are choice variables, at least up to error, so one would need assumptions which generated choices that are orthogonal to one another. On the other hand, and this is the reason the assumption is used frequently, p can respond immediately to market conditions whereas x cannot. So p will reflect the relative qualities in a given year, which will change as competing products enter and exit.

- Use the panel dimension of the problem and assume ξ_{j,t} = ρ ξ_{j,t−1} + u_{j,t} with E[u_{j,t} | x_1, ..., x_J] = 0. This requires observing products over time, and makes one worry about selection problems when products drop out. However it has been used quite effectively in a number of articles.

- Use multiple markets, or multiple income groups within a market, and assume ξ_{j,r} = ξ_j + u_{j,r} with appropriate assumptions on u_{j,r} (note that if there are interregional or intergroup differences there is something generating them, and you have to assume it is orthogonal to your instruments).

In principle any one of these is fine provided the assumptions that underlie it are fine... i.e. what you have to do is think about those

assumptions in light of what is generating the important unobservable characteristics in your example.

These estimation procedures all assume that the market size is very large: so large that the sampling error in s, i.e. the difference between the observed shares and those predicted by the model given ξ, is too small to worry about. Note that sometimes you do not have actual shares, but rather the shares from a random sample of consumers, and those shares may not be based on such large sample sizes. Then one has to worry about sampling error explicitly, and since the sampling error is inside a non-linear function it must be treated carefully (especially for observations where shares are near zero). For an explicit treatment of this problem, see Berry, Linton, and Pakes (2004, RESTUD).

This concludes the discussion of overfitting and simultaneity (our "Problem 3").

Questions.

1. Note that once one of these assumptions is made, there is still the question of which instruments to use. Under the traditional assumption that E[ξ | x] = 0, which variables do you think would make appropriate instruments for the vertical model?

2. Assume that, in addition to the vertical demand model, each firm owns only one product, that mc = wγ + ω with E[ω | x, w] = 0, and that the equilibrium is Bertrand, i.e. Nash in prices. Can you derive the pricing equations? Can you use them also in estimation?
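The logit inversion described above is easy to verify numerically. The sketch below (illustrative values, not from the lecture; α and β are treated as known) generates shares from a known ξ and recovers it exactly via ξ_j = ln s_j − ln s_0 − x_j β + αp_j:

```python
import math

def logit_invert(s, s0, x, p, beta, alpha):
    # Berry inversion for the logit: xi_j = ln s_j - ln s_0 - x_j'beta + alpha*p_j
    xi = []
    for sj, xj, pj in zip(s, x, p):
        mean_x = sum(b * xk for b, xk in zip(beta, xj))
        xi.append(math.log(sj) - math.log(s0) - mean_x + alpha * pj)
    return xi

# generate shares from a known xi, then invert
x = [[1.0, 2.0], [1.0, 0.5]]
p = [1.0, 2.0]
beta, alpha = [0.5, 0.3], 1.0
xi_true = [0.2, -0.1]
delta = [sum(b * xk for b, xk in zip(beta, xj)) - alpha * pj + e
         for xj, pj, e in zip(x, p, xi_true)]
expd = [math.exp(d) for d in delta]
denom = 1.0 + sum(expd)
s = [v / denom for v in expd]
s0 = 1.0 / denom
xi_hat = logit_invert(s, s0, x, p, beta, alpha)
```

With market-size shares (no sampling error) the recovery is exact up to machine precision, which is the sense in which the inversion gives the disturbance "in linear form".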

Digression: Strategic Substitutes or Complements? Differentiated Product Nash in Prices Environments.

Before moving on to more realistic differentiated product models that overcome the first two ("substitution") problems discussed above (one for the logit and one for the vertical model), it is worthwhile pausing to consider whether prices are strategic complements or strategic substitutes in the simple differentiated product environments used in most of the theoretical literature. This will parallel our discussion of whether quantities are strategic complements or substitutes in the homogeneous goods model.

Recall from the second lecture that if I assume single product firms and write the profit function as

    π¹(p_1, p_2; ·) = D¹(p_1, p_2; ·)[p_1 − mc_1],

where for simplicity I have assumed that marginal costs are constant, then prices will be strategic complements or strategic substitutes according as

    ∂²π¹(·)/∂p_1 ∂p_2 ≡ π¹_{1,2}(·)

is greater than or less than zero. The Nash in prices assumption for single product firms gives us the first order condition

    ∂π¹(·)/∂p_1 = [∂D¹(·)/∂p_1][p_1 − mc_1] + D¹(·) = 0,

so

    p_1 − mc_1 = −D¹(·) / [∂D¹(·)/∂p_1].

Straightforward differentiation gives us the needed cross partial as

    π¹_{1,2}(·) = D¹_{1,2}(·)[p_1 − mc_1] + D¹_2(·),

with p_1 − mc_1 as above.

Several points are worth noting before proceeding. Once we specify the form of the demand system and the current price vector, we will know the result. The last term will be positive in virtually all differentiated product models (in these models goods are substitutes for one another); it just says that as the price of good two goes up, more consumers purchase good one. Thus, provided the first term is not too negative, prices will be strategic complements in the differentiated product model. Moreover, whether the first term is negative or not depends on what might seem to be technical details of the demand function. So there is a predilection to think that the second term dominates. This stands in rather strong contrast to homogeneous product quantity games where, recall, the controls (quantities) are strategic substitutes.

The question is whether this is true. Note: the toy models used in the theory literature all generate strategic complementarity, and the discussion below shows this. It then shows that simple generalizations of the toy models, generalizations that we would want to make before going to actual data, imply that prices are not always strategic complements.

Intuition. It is true that as the price of good two goes up we have a larger demand for good one, and this will generally make the derivative of good one's demand larger, which explains the term ∂D¹(·)/∂p_2. However, as we increase p_2 you will also change the type of consumer

purchasing good one. In particular, as you increase p_2 a very particular type of consumer moves to good one: consumers that cared about price (that is why they shifted away from good two). Faced with more price elastic consumers, it may well be in good one's interest to lower its price. How important the two effects are, and hence which one dominates, is a matter for the data to decide. For the auto example below, it works out that about half the pairs of prices are strategic substitutes and half are strategic complements.

Demand in Product Space. The simplest demand system, and one that is often used in theoretical discussions of differentiated product demand systems in product space, is

    D¹(·) = α_0 − α_1 p_1 + α_2 p_2.

Clearly then ∂D¹(·)/∂p_2 = α_2, which is always greater than zero (independent of (p_1, p_2)), and D¹_{1,2} ≡ 0. Consequently prices are strategic complements in this model, confirming the earlier intuition.

Vertical Model with Uniform Tastes. I.e. the model is U_{i,j} = ν δ_j − p_j, in which case a repetition of the derivation above (with a small change in the formulation so I don't get signs wrong this time) gives us: choose good j if

    (p_j − p_{j−1})/(δ_j − δ_{j−1}) < ν < (p_{j+1} − p_j)/(δ_{j+1} − δ_j),

and the ratio (p_j − p_{j−1})/(δ_j − δ_{j−1}) must be increasing in j for all goods to have positive demand. The uniform distribution on ν

implies that D^j(·) is proportional to

    D^j(·) = (p_{j+1} − p_j)/(δ_{j+1} − δ_j) − (p_j − p_{j−1})/(δ_j − δ_{j−1}).

As noted, the demand for good j does not change if we move the prices of goods other than (j+1, j, j−1), so non-neighboring goods are neither complements nor substitutes by assumption.

Let us now consider an increase in the price of good j+1. We have

    ∂D^j(·)/∂p_j = −[1/(δ_{j+1} − δ_j) + 1/(δ_j − δ_{j−1})]

and D^j_{j,j+1}(·) ≡ 0. Since D^j_{j+1}(·) > 0, prices are strategic complements in this model also.

Logit Demand. I.e. the model is U_{i,j} = δ_j − α p_j + ε_{i,j} with the ε i.i.d. double exponential. We have already derived D¹(·), ∂D¹(·)/∂p_1, and ∂D¹(·)/∂p_2. You should also be able to show that

    D¹_{1,2}(·) = −α² s_1 s_2 (1 − 2s_1),

so that

    π¹_{1,2}(·) = α s_1² s_2 (1 − 2s_1),

which, unless one of the goods has an enormously large share of the market, will also be positive (typically over fifty percent of consumers buy the outside good). So prices are strategic complements in this model also.
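The logit case can be checked numerically. This sketch (illustrative parameter values, not from the lecture) imposes the single product Nash markup p_1 − mc_1 = 1/(α(1 − s_1)) implied by the first order condition, builds π¹_{1,2} = D¹_{1,2}[p_1 − mc_1] + D¹_2 from finite difference derivatives of the shares, and confirms the positive sign (strategic complements) when neither good has a huge share:

```python
import math

def share(p, delta, alpha, j):
    # logit market shares with an outside good
    expd = [math.exp(d - alpha * pi) for d, pi in zip(delta, p)]
    return expd[j] / (1.0 + sum(expd))

delta, alpha = [1.0, 0.5], 1.0
p = [1.2, 0.9]
h = 1e-4

def s1_at(p1, p2):
    return share([p1, p2], delta, alpha, 0)

s1 = s1_at(p[0], p[1])
s2 = share(p, delta, alpha, 1)
markup = 1.0 / (alpha * (1.0 - s1))   # p_1 - mc_1 from the single-product Nash FOC

# D^1_2 and D^1_{1,2} by central finite differences
D1_2 = (s1_at(p[0], p[1] + h) - s1_at(p[0], p[1] - h)) / (2 * h)
D1_12 = (s1_at(p[0] + h, p[1] + h) - s1_at(p[0] + h, p[1] - h)
         - s1_at(p[0] - h, p[1] + h) + s1_at(p[0] - h, p[1] - h)) / (4 * h * h)

pi_12 = D1_12 * markup + D1_2         # sign: complements (+) vs substitutes (-)
```

The cross price derivative D¹_2 also matches its closed form α s_1 s_2 up to finite difference error.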

Generalizations. Running through these models you can easily see how the theory literature started taking it for granted that prices are strategic complements. Of course it is easy, and somewhat instructive, to show that this need not be the case. Go back to the vertical model, but this time do away with the assumption of a uniform distribution of tastes. Write r_{j+1} ≡ (p_{j+1} − p_j)/(δ_{j+1} − δ_j) and r_j ≡ (p_j − p_{j−1})/(δ_j − δ_{j−1}) for the two cutoffs. Then the demand for good j is proportional to

    F(r_{j+1}) − F(r_j),

where F(·) is the distribution of ν (and f(·) its density). The derivative of demand with respect to own price is

    −f(r_{j+1}) · 1/(δ_{j+1} − δ_j) − f(r_j) · 1/(δ_j − δ_{j−1}),

so the (j, j+1) cross partial of demand is

    −f′(r_{j+1}) · [1/(δ_{j+1} − δ_j)]²,

which can be positive or negative; indeed, for a unimodal distribution it will be negative until the mode and positive thereafter. Whether (j, j+1) are complements or substitutes then depends on the sign of

    −f′(r_{j+1}) · [1/(δ_{j+1} − δ_j)]² · [F(r_{j+1}) − F(r_j)] / [f(r_{j+1})/(δ_{j+1} − δ_j) + f(r_j)/(δ_j − δ_{j−1})] + f(r_{j+1}) · 1/(δ_{j+1} − δ_j),

which will be negative if f(·) is small and f′(·) is sufficiently positive. If we are in a unimodal distribution, the conditions needed for

the strategic substitutes case will tend to occur for low quality products (before the mode).

Generalizing Demand to Allow for Realistic Substitution Patterns: BLP (1995) and Fellow Travellers.

BLP is a micro model of demand, a generalization of the logit that allows for unobserved product characteristics, that aggregates up explicitly to obtain product level demand. The fact that it starts with a micro model allows the same framework to be used to structure demand analysis that involves micro data, strata samples, or product level data, or any combination of them. So we begin with the micro model underlying the framework. We have

    U_{ij} = Σ_k x_{jk} β_{ik} + ξ_j + ε_{ij},

with

    β_{ik} = λ_k + β^o_k′ z_i + β^u_k′ ν_i,

where

- the x_{jk} and ξ_j are, respectively, observed and unobserved product characteristics;
- the z_i and ν_i are vectors of observed and unobserved consumer attributes (what is observed in one study need not be observed in another);
- the vectors β^o_k and β^u_k determine the impacts of observed (o) and unobserved (u) consumer characteristics on the utility from characteristic k;

- the λ_k provide the impact of product characteristic k on the product specific constant term in the utility function;
- the ε_{ij} represent idiosyncratic individual preferences for the different goods (preferences that are independent of the product and individual attributes we account for).

Substitute the second equation into the first to produce

    U_{ij} = δ_j + Σ_{kr} x_{jk} z_{ir} β^o_{rk} + Σ_{kl} x_{jk} ν_{il} β^u_{kl} + ε_{ij},

where δ_j = Σ_k x_{jk} λ_k + ξ_j.

The general model has two types of interaction terms between product characteristics and consumer characteristics:

- interactions between observed consumer characteristics (the z_i) and product characteristics (the term Σ_{kr} x_{jk} z_{ir} β^o_{rk}), and
- interactions between unobserved consumer characteristics (the ν_i) and product characteristics (the term Σ_{kl} x_{jk} ν_{il} β^u_{kl}).

Note. It is these interactions which generate reasonable own and cross price elasticities (i.e. they kill the IIA problem). Now if we increase the price of a car, very specific consumers leave that car: consumers who had chosen that car, and who hence like characteristic tuples similar to the tuples of that car. Consequently they will substitute to other cars with similar characteristics.

Now a given price change will force different numbers of consumers away from different cars. Cars with high prices tend to be cars that were preferred by consumers who do not care that much about price, and hence will not respond as much to a given price change. This implies that own price effects will be lower (markups in a Nash pricing equilibrium will tend to be higher) for higher priced cars.

Some points to keep in mind.

- We will not get reasonable elasticities unless we include the characteristics people care about (all the elasticities are filtered through the characteristics).
- Price is a characteristic that people care about, and we almost always include it (except when consumers do not pay for it, as in hospital demand; and this may be an error, since doctors help determine hospital choice and they typically do face cost incentives).
- When we only have product level data (i.e. we do not have any form of micro data, data that matches consumers to the choices they make), then the best we have is information on the distribution of the z_i (say from the CPS), and there is a sense in which all consumer attributes are unobserved.

Note. Even when we do have micro data it is important to know whether the observed consumer attributes are sufficiently rich to pick up all sources of the differences in tastes for characteristics. They generally will not be. If not, you still must add the unobservables, or you will end up with slight variants of the two problems discussed above.
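A small simulation makes the point concrete (all numbers are made up for illustration; a single normally distributed random coefficient on one characteristic). Goods 1 and 2 are close in characteristic space and good 3 is not, so when the price of good 1 rises, good 2 should gain proportionally more than good 3; in a pure logit the proportional gains would be identical:

```python
import math, random

def rc_shares(delta, x, p, sigma, nus):
    # Random-coefficients logit with utility delta_j + sigma*nu*x_j - p_j,
    # aggregated over fixed simulation draws nus (common random numbers).
    J = len(delta)
    s = [0.0] * J
    for nu in nus:
        util = [delta[j] + sigma * nu * x[j] - p[j] for j in range(J)]
        denom = 1.0 + sum(math.exp(u) for u in util)
        for j in range(J):
            s[j] += math.exp(util[j]) / denom
    return [sj / len(nus) for sj in s]

rng = random.Random(0)
nus = [rng.gauss(0.0, 1.0) for _ in range(5000)]
delta = [0.0, 0.0, 0.0]
x = [1.0, 1.1, 3.0]          # goods 1 and 2 are close in characteristic space
sigma = 1.0

s_base = rc_shares(delta, x, [1.0, 1.0, 1.0], sigma, nus)
s_high = rc_shares(delta, x, [1.5, 1.0, 1.0], sigma, nus)   # raise p_1

gain_similar = s_high[1] / s_base[1]   # proportional gain of the similar good
gain_distant = s_high[2] / s_base[2]   # proportional gain of the distant good
```

The consumers leaving good 1 are those whose ν made them like x near 1, and within their taste type they reallocate mostly to good 2; that is exactly the substitution-to-similar-products pattern described above.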

Steps in Estimation: Product Level Data.

Begin with product level data. There may be known distributions of consumer attributes, but no match between an individual's characteristics and the product the individual bought (i.e. some of the dimensions of the density f(·) below may be known).

Step I. Find aggregate shares conditional on (δ, β):

    s_j(θ, δ) = ∫ { exp[δ_j + Σ_{kl} x_{jk} ν_{il} β_{kl}] / (1 + Σ_q exp[δ_q + Σ_{kl} x_{qk} ν_{il} β_{kl}]) } f(ν) dν.

The integral is intractable, so use a simulation estimator for the aggregation, as explained earlier in these notes: use ns simulation draws on ν, i.e.

    ŝ^{ns}_j(θ, δ) = (1/ns) Σ_r exp[δ_j + Σ_{kl} x_{jk} ν_{ilr} β_{kl}] / (1 + Σ_q exp[δ_q + Σ_{kl} x_{qk} ν_{ilr} β_{kl}]).

Note the following:

- We still use the fact that we can integrate out over the ε exactly. This is one reason to maintain the logit assumption: we gain precision in the simulation at low cost.
- If you know the distribution of the consumer characteristics from, say, the CPS, you can take random draws from the CPS (BLP use this for income; micro BLP uses it for a lot more).
- By introducing simulation you are introducing an error. It is true that the error goes away if you use enough simulation draws, but you might want to keep track of precisely what is "enough", or alternatively of the impact of the simulation on your estimators.

- There are more efficient simulators than the simplistic simulator used here, and you might want to look up importance sampling simulators if you run into simulation problems. There are also other ways of obtaining the integral that are sometimes used ("magic numbers" like Halton sequences).

Step II. Recover ξ(β, λ) from the shares. We need to solve the system s^o_j = ŝ^{ns}_j(θ, δ) for j = 1, ..., J. BLP show that iterating on the system of equations

    δ^k_j(β) = δ^{k−1}_j(β) + ln[s^o_j] − ln[ŝ^{ns}_j(β, δ^{k−1})]

leads to the unique solution (the system of equations is a contraction mapping with modulus less than one, and hence converges geometrically to the unique fixed point, which can be found by this iterative procedure).² Note that there are now alternative ways of solving these equations ("Improving the Numerical Performance of Static and Dynamic Aggregate Discrete Choice Random Coefficients Demand Estimation", J.-P. Dubé, J. Fox, and C.-L. Su, Econometrica, 2012). Any way of doing this is fine. They use an approach called MPEC with a solver called KNITRO, which I will come back to discuss.

The solution to these equations is a function of (θ, s^o, P^{ns}), say δ(θ, s^o, P^{ns}), where P^{ns} denotes the simulation draws, and we know that

    ξ_j(θ, s^o, P^{ns}) = δ_j(θ, s^o, P^{ns}) − Σ_k x_{jk} λ_k.

²The second reason for using the logit assumption instead of the pure characteristics model is that, though these equations still have a unique solution without the logit errors, the mapping no longer necessarily has modulus less than one. A mapping whose modulus is not less than one need not converge, and in my experience it often does not converge.
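The contraction can be demonstrated on a toy version of the model (one scalar random coefficient, illustrative values; not BLP's actual specification). Shares generated at a known δ are inverted back to it, using the same simulation draws P^{ns} on both sides and a tight tolerance:

```python
import math, random

def sim_shares(delta, x, beta_u, nus):
    # simulated BLP-style shares for a single scalar random coefficient nu on x
    J = len(delta)
    s = [0.0] * J
    for nu in nus:
        util = [delta[j] + beta_u * x[j] * nu for j in range(J)]
        denom = 1.0 + sum(math.exp(u) for u in util)
        for j in range(J):
            s[j] += math.exp(util[j]) / denom
    return [sj / len(nus) for sj in s]

def blp_contraction(s_obs, x, beta_u, nus, tol=1e-12, max_iter=5000):
    # iterate delta^k = delta^{k-1} + ln s^o - ln s_hat(delta^{k-1})
    delta = [0.0] * len(s_obs)
    for _ in range(max_iter):
        s_hat = sim_shares(delta, x, beta_u, nus)
        new = [d + math.log(so) - math.log(sh)
               for d, so, sh in zip(delta, s_obs, s_hat)]
        if max(abs(a - b) for a, b in zip(new, delta)) < tol:
            return new
        delta = new
    return delta

# round trip: shares generated at a known delta are inverted back to it
rng = random.Random(0)
nus = [rng.gauss(0.0, 1.0) for _ in range(200)]
x = [1.0, 2.0]
delta_true = [0.4, -0.3]
s_obs = sim_shares(delta_true, x, 0.5, nus)
delta_hat = blp_contraction(s_obs, x, 0.5, nus)
```

Because the same draws are used to generate and to invert the shares, the fixed point recovers δ essentially exactly; with real data the simulation error discussed above would remain.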

Notes.

- There is an analytic form for the λ parameters conditional on the β parameters; i.e. if we knew β, the solution for λ would be easy to get. This will help in IV estimation: given β we can get λ with the standard instrumental variable formula, so the nonlinear search is only over β (we can "concentrate out" the λ).
- It is the contraction mapping step that differs in the pure characteristics model. Though there is a unique solution to the equation which determines ξ from the shares, the equation above is no longer a contraction mapping with modulus less than one, and hence the iteration does not necessarily converge. The usefulness of the large equation solvers in this context is that, at least in principle, they can be used to solve the analogous equation in the pure characteristics model. However, I do not know of anyone who has found an easy way to do that yet.

Step III. Interact ξ_j(θ, s^o, P^{ns}) with functions of (x, w), and find the value of θ that makes the sample moments as close as possible to zero. I.e. minimize a norm of G_{J,n,ns}(θ), where

    G_{J,n,ns}(θ) = Σ_j ξ_j(θ, s^o, P^{ns}) f_j(x, w)

and f_j(x, w) is a sufficiently rich function of the product and cost characteristics that are orthogonal to ξ. Here "sufficiently rich" refers to our identification assumption (so the vector of functions must have dimension at least as large as that of the parameter vector).
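The moment step can be sketched as follows (a hypothetical helper, not from BLP's code; the weight matrix defaults to the identity, whereas in practice one would use an efficient weight matrix and concentrate out λ as described in the notes):

```python
def gmm_objective(xi, instruments, weight=None):
    # sample moments G_l = (1/J) * sum_j xi_j * f_jl, then the quadratic form G'WG
    J = len(xi)
    L = len(instruments[0])
    G = [sum(xi[j] * instruments[j][l] for j in range(J)) / J for l in range(L)]
    if weight is None:                      # identity weight matrix
        return sum(g * g for g in G)
    return sum(G[a] * weight[a][b] * G[b] for a in range(L) for b in range(L))

# a disturbance orthogonal in-sample to the instruments gives a zero objective
obj_zero = gmm_objective([1.0, -1.0], [[1.0], [1.0]])
obj_pos = gmm_objective([1.0, 1.0], [[1.0], [1.0]])
```

In a full estimation routine this objective would be evaluated at the ξ_j(θ, s^o, P^{ns}) recovered in Step II, and minimized over θ.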

Notes.

- One needs instruments for the non-linear parameters (those that determine the impact of the variance in consumer tastes across the population) as well as for price. What makes sense here depends on the problem, but if there are geographic markets one might think of the distributions of consumer characteristics across those markets.
- If we go across markets, it is not clear whether you want to allow ξ_j to have a market subscript. Recall that this is a characteristic which is meant to control for all unobserved characteristics that determine market demand; the preferences for some of them may vary across markets.
- The Hausman instruments (often referred to in the literature and in your problem set) are prices in some other market. The logic here is that, if you do not have cost data, the prices in the other market might be reflective of cost differences. To use them you have to be convinced that (i) there are no omitted variables that affect prices in both markets (a national advertising campaign that you cannot control for would be one such variable), and (ii) the cost unobservable that presumably goes into prices does not have a common component across markets that is correlated with ξ.
- Any other equation solver can be substituted for the last two steps. One of them is MPEC (mathematical programming with equilibrium constraints), which I briefly review below.

There are some details to keep in mind which will help you with computing and with analyzing variances. I turn to these now.

The Limit Distribution of Parameter Estimates.

This can be obtained in a similar way as for any GMM estimator. With one cross section of observations it is

    J^{−1} (Γ′Γ)^{−1} Γ′ V_0 Γ (Γ′Γ)^{−1},

where

- Γ is the derivative of the expectation of the moments with respect to the parameters, and
- V_0 is the variance-covariance matrix of those moments, evaluated at the true parameter value.

V_0 has (at least) two orthogonal sources of randomness:

- randomness generated by the random draws on ξ;
- variance generated by the simulation draws;
- and, in samples based on a small set of consumers, also randomness in the sampling process.

Berry, Linton, and Pakes (2004, RESTUD) derive these distributions and show that the last two components are likely to be very large if market shares are small, so that is when you have to be particularly careful with the simulation procedure and with sampling error. An outline of the derivation will be given at the end of the next lecture. If there is more than one market you just sum this over markets, and the variance term adjusts accordingly.

Note: Obtaining the Correct Estimates. It has been shown that the care you take in programming is important for BLP type problems. Typically you have to worry about the following:

- use a tight tolerance in the inversion (10^{−12});
- use proper code;
- be careful with the optimizers you use (it is easy to end up at a local optimum), and start from different starting values.

Note: Digression: MPEC. The Dubé, Fox, and Su article advocates the use of the MPEC algorithm instead of the nested fixed point.

- The basic idea: minimize the GMM objective function subject to the equilibrium constraints (i.e., that the predicted shares equal the observed shares).
- This avoids the need to perform the inversion at each and every iteration of the search; performing the inversion for values of the parameters far from the truth can be quite costly.
- The problem to solve can be quite large, but efficient optimizers (e.g., Knitro) can solve it effectively.
- DFS report significant speed improvements for BLP, but have not managed an efficient way to do the pure characteristics model.
- MPEC takes time to learn, and most BLP problems are pretty quick with the algorithm above. So you should probably only go to MPEC if the problem is sufficiently computer intensive.

Formally:

min_{θ,ξ} ξ'ZWZ'ξ subject to σ(ξ; x, p, F_ns, θ) = S.

Note:

- the min is over both θ and ξ: a much higher dimensional search;
- ξ is a vector of parameters, and unlike before it is not a function of θ;
- we avoid the need for an inversion: the equilibrium constraint only holds at the optimum;
- in principle, this should yield the same solution as the nested fixed point.

There are many bells and whistles that I will skip.

Lessons from Estimation: Market Level Data.

Often we find that there is not enough information in product level demand data to estimate the entire distribution of preferences with sufficient precision. The extent of this problem depends on the type of data. If there is a national market we typically have data over time periods, and when there are local markets the data is most often cross-sectional (though sometimes there is a panel dimension). Thus the researcher is trying to estimate the whole distribution of preferences from one set of aggregate choice probabilities per market or per period, depending on the type of data. Other than functional form, the variance that is used to estimate the parameters of the model comes from
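As an illustration of the MPEC formulation (not DFS's actual implementation, which uses a large-scale solver like Knitro), here is a toy version in which δ_j = x_j θ + ξ_j, shares are plain logit, and a general-purpose SQP solver searches over (θ, ξ) subject to the share constraints; all names and the data-generating choices are assumptions for the sketch:

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPEC: min over (theta, xi) of (Z'xi)' W (Z'xi)
# subject to the logit shares of delta = x*theta + xi matching S_obs.
x = np.array([1.0, 2.0, 3.0])            # one characteristic, 3 products
Z = np.column_stack([np.ones(3), x])     # instruments
W = np.eye(Z.shape[1])

def shares(delta):
    e = np.exp(delta)
    return e / (1.0 + e.sum())

theta_true = 0.5
S_obs = shares(theta_true * x)           # data generated with xi = 0

def objective(v):                        # v = (theta, xi_1, ..., xi_3)
    g = Z.T @ v[1:]
    return g @ W @ g

def constraint(v):                       # sigma(xi; x, theta) - S = 0
    return shares(v[0] * x + v[1:]) - S_obs

sol = minimize(objective, x0=np.zeros(4), method="SLSQP",
               constraints=[{"type": "eq", "fun": constraint}])
```

Note the search is over four unknowns rather than one: the constraint, not an inner inversion, ties ξ to θ.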

- differences in choice sets across markets or time periods (hopefully allowing you to sweep out preferences for given characteristics), and
- differences in observable purchaser characteristics across markets or over time (usually due to demographics) over a fixed choice set (hopefully allowing you to sweep out the interaction between demographic characteristics and choice sets).

It is easy to see how precision problems can arise. In the national market case (as in the original BLP, 1995, auto example), the distribution of purchaser characteristics rarely changes much over the time frame available for the analysis. As a result, one is relying heavily on variance in the choice set, which, depending on the important characteristics of the product, is often not large (though one can think of products where it is, at least in certain characteristics, e.g. computers). In the cross-sectional case there is often not much variance in the choice set, though this does depend on the type of market being studied. The choice sets for markets for services are typically local and do often differ quite a bit with market characteristics.

When there is variance in choice sets across markets one must consider whether the source of that variance causes a selection problem in estimation. In the terminology of the last lecture, we must consider whether ξ_{k,j}, where k is a product and j is a market, is not a draw from a random distribution but is a draw from a distribution which depends on market characteristics. If so, there should be an attempt to control for it; i.e. model the distribution of ξ as a function of the relevant market characteristics.

As was shown above, there is often quite a bit of variance in demographic characteristics across markets, but there are often sources of variance that are important for the particular choice which we do not have data on. We typically do have data on income, family size, etc., but we will not have data on other features, like household activities or holdings of related products, which are often important to the choice. For example, in auto choice it probably matters what type of second car the family has, and whether the household has a child on a sports team where some fraction of the team must be driven to games periodically. These unobserved attributes will go into the variance of the random coefficients, and if their distribution differs by market we will have to parameterize them and add to the set of parameters that need to be estimated.

Solutions to the Precision Problem. If there is a precision problem, to get around it we have to add information. Fundamentally there are two ways of doing this: add data, or add assumptions which allow you to bring to bear more information from existing data.

Add Data. There are at least two forms of this. One is to add markets (Nevo, 2001), and this just makes the discussion we have already had on sources of identification more relevant. The other is to add micro data. This will be the topic of my next lecture, as there are different types of micro data and additional estimation issues arise when dealing with each type.

What makes the incorporation of micro level data relatively easy for this class of models is that the model we just developed for market level data is a micro model (we just aggregated it up), so the model used for the micro data will be perfectly consistent with the models just developed for market level data. Though micro data sets, data sets that match individuals to the choices those individuals make, are increasingly available, they are not as available as market level data. Indeed, typically if micro data is available, so is aggregate data, and the models and estimation techniques we develop and use for micro models will be models that incorporate information on both micro and market level data.

For now it suffices to say that the additional micro data sometimes available can come in different forms, typical among them:

- There are publicly available purchaser samples that are too small to use in estimating a rich micro model but large enough to give fairly precise estimates of covariances between household and product characteristics which an adequate aggregate model should replicate. A good example of this is the Consumer Expenditure Survey (or CEX), and we will return to this when we look at Petrin's work. See also Aviv's discussion of Goldberg's paper.
- There are privately generated surveys that tell you something about the average characteristics of consumers who buy different products. These are often generated by for-profit marketing firms. As a result, it can be costly to access current data, but these companies will often cut deals on old data if they are not currently selling it.

- True micro data which matches households to the products they chose. This is still fairly rare in Industrial Organization but is becoming more widely available in public finance applications (e.g. schooling choices). This type of data is particularly useful when it contains not only the product purchased but second and other ordered choices. We come back to it when we discuss the CAMIP data and MicroBLP, and when we discuss the Homescan data which is used in Katz's study (which I will turn to in the next lecture), and much of the other work on purchases at food outlets. It is a panel, which does not quite give us ordered choice but has some similar features (see the micro lecture).

Adding Assumptions. There are a number of ways of doing this, but probably the most prevalent is to add a pricing assumption. This was done in BLP and in much of the literature preceding them. The assumption is typically that the equilibrium in the product market is Nash in prices, sometimes called Bertrand. To implement it we assume a cost function and add the equation for the Nash pricing equilibrium to the demand equation to form a system of equations which is estimated jointly (this is reminiscent of old style demand and supply analysis, with the pricing equation replacing the supply function). Since there is a pricing equation for each product, there is a sense in which the pricing assumption has doubled the number of observations on the left-hand side variable (though, as we shall see, we expect the errors in the two equations to be correlated). Of course the use of the cost function does typically require estimating more parameters, but just as we did for the demand function, we typically reduce the cost function to be

a function of characteristics (possibly interacted with factor prices) and an unobservable productivity parameter. So the number of parameters added is a lot less than the number of products, and we get a pricing equation for each product.

The assumption of a Nash equilibrium in prices can be hard to swallow, as there are many markets where it would seem to be a priori unreasonable. Examples include all markets where either demand or costs are dynamic in the sense that a change in current quantity changes future demand or cost functions. We expect this to be true in durable, experience or network goods, as well as in goods where there is learning by doing or important costs of adjusting quantities supplied. What is surprising is how well the Nash pricing assumptions fit in characteristic based models, even in markets where one or more of these issues is relevant.

There is, however, an empirical sense in which this should have been expected. It has been known for some time that if we fit prices to product characteristics we get quite good R²s; it was this fact that generated the hedonics literature (see the discussion and illustration below). Since marginal costs are on the right-hand side of the Nash pricing equation, and marginal costs are usually allowed to be a function of characteristics (perhaps in conjunction with factor prices), this already ensures that the pricing function will have a reasonable fit. The other term on the right-hand side of the pricing equation is the markup. The markup from the Nash pricing assumption has properties that are so intuitive that they are likely to also be properties of more complex pricing models. In particular, the Nash pricing model implies:

- Products in a heavily populated part of the characteristic space will have high price elasticities; if their prices are increased, there are lots of products which are close in characteristic space to switch to, and consumers' utility is only a function of the characteristics of the product. High elasticities mean low markups (see below).
- Products with high prices are sold to less price sensitive (or high income) consumers and hence will have lower elasticities and higher markups.
- If a firm is multi-product, the markup it charges on any one of its products is likely to be higher than it would have been had the firm been a single product firm (when a multi-product firm increases a price, some of the consumers that leave the good go to another good the firm owns, so the firm does not lose all the markup those consumers generated on the first good).

The fact that the markup term is a function of price elasticities is also what endows the pricing equation with highly relevant information on the structure of the demand system.

Note. Though the pricing equation seems to fit very well in the cross-section, it does not fit as well for changes over time. This has generated many literatures (exchange rate pass-through in trade, sticky prices in macro, ...) and reflects the fact that the smaller differences that usually occur over time must be thought of, at least in part, as responses in explicitly dynamic models.

Bertrand Equilibrium in a Differentiated Product Market (the Nash in Prices Solution with Multiproduct Firms).

Since most of the markets we study involve multiproduct firms, we investigate price-setting equilibria in differentiated product markets with multiproduct firms. The standard assumption is that the price setter maximizes total firm profits. Though I comment below on alternatives, we begin with this assumption. Keep in mind that when we use this assumption in estimation, we are actually adding two assumptions to our demand model: an assumption on the nature of equilibrium and an assumption on the form of the cost function.

The Cost Function. A frequently used specification for marginal cost is

ln[mc_j] = Σ_r w_{r,j} γ_r + γ_q q_j + ω_j,

where the {w_{r,j}} typically include product characteristics (the {x_{k,j}}) and factor prices (often interacted with product characteristics). The ω_j are unobserved determinants of costs, and q_j picks up the possible impacts of non-constant returns to scale. Keep in mind that the ξ_j are not included in this equation, so if it is costly to produce these unobserved characteristics we would expect ξ_j to be positively correlated with ω_j. Of course ω_j also includes the impact of productivity differences across firms. The ω_j are typically assumed to be i.i.d. draws from a population distribution which is independent of the {w_{r,j}} and has a mean of 0. Notice that then the distribution of marginal cost depends on only R

parameters, and we have J observations on price (hopefully J >> R), so we are adding degrees of freedom. Also keep in mind that q_j is endogenous, in the sense that it is a function of both ξ_j and ω_j; both unobserved cost and unobserved product components impact demand.

This function is often also assumed (at least formally incorrectly) to pick up other aspects of the environment that we cannot deal with in a static model. For example, if some of the inputs are imported, we should be able to evaluate changes in the costs of those products due to exchange rate changes directly. Typically, though, these exchange rate changes are not passed through to consumers in full (this is the exchange rate pass-through literature in trade). Instead of specifying the formal price setting reason for not passing through the exchange rate fluctuation in full, much of the literature just puts the exchange rate in the cost function, estimating a pass-through rate.

The Pricing Equation. For simplicity I am going to assume constant costs (or γ_q = 0). Since the equilibrium is Nash in prices for a profit maximizing firm, if J_f denotes the set of products the firm markets, its price choices must

max_p π_f(·) = Σ_{j∈J_f} (p_j − mc_j) M s_j(·).

Then (dividing through by M) the f.o.c. for each of the #J_f prices of products owned by the firm is

s_j(·) + Σ_{r∈J_f} (p_r − mc_r) ∂s_r(·)/∂p_j = 0,

and for these to be equilibrium prices second order conditions must also hold. Notice that for a single product firm the f.o.c. reduces to

p_j = mc_j + s_j(·)/(−∂s_j(·)/∂p_j),

which is the familiar formula for price: cost plus a markup which equals one over the (semi-)elasticity of demand. When there are two products owned by firm f, say goods 1 and 2, this becomes

p_{1,f} = mc_{1,f} + s_{1,f}(·)/(−∂s_{1,f}(·)/∂p_{1,f}) + [∂s_{2,f}(·)/∂p_{1,f}]/(−∂s_{1,f}(·)/∂p_{1,f}) × [p_{2,f} − mc_{2,f}].

This illustrates the intuition noted earlier. Since we are dealing with a differentiated product market, goods are substitutes and the last term is positive. So if we held all other product prices fixed for the comparison, prices will be higher for multi-product firms. The extent to which they will be higher depends on the cross price derivative of good 2's share with respect to the price of good 1, and on the markup on good 2 (and there is similar reasoning for good 2). Of course, if prices are really equilibrium prices, once the price of either good 1 or good 2 changes, then the prices of the other goods in the market will also adjust (as they will no longer satisfy their first order conditions), and we could not get the full effect of going from a single to a two-product firm without calculating a new equilibrium. As a further caveat to thinking about counterfactuals in this way, note that if prices are set in a simultaneous move game, then there may be more than one equilibrium price vector: more
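Since the two-product formula is just a rearrangement of the first order conditions, it can be checked numerically. A sketch under an assumed logit demand (all parameter values are illustrative): solve the two f.o.c.s of a two-product firm facing an outside good, then confirm the displayed expression reproduces p_1:

```python
import numpy as np
from scipy.optimize import fsolve

alpha = 2.0                               # price coefficient (assumed)
delta = np.array([1.0, 1.0])              # mean utilities
mc = np.array([0.5, 0.5])                 # marginal costs

def s(p):
    # Logit shares with an outside good.
    e = np.exp(delta - alpha * p)
    return e / (1.0 + e.sum())

def foc(p):
    # s_j + sum_r (p_r - mc_r) ds_r/dp_j = 0, both goods owned by one firm.
    sh = s(p)
    ds = alpha * (np.outer(sh, sh) - np.diag(sh))   # ds[r, j] = ds_r/dp_j
    return sh + ds.T @ (p - mc)

p_star = fsolve(foc, mc + 0.5)            # equilibrium prices
sh = s(p_star)
d11 = -alpha * sh[0] * (1 - sh[0])        # ds_1/dp_1
d21 = alpha * sh[0] * sh[1]               # ds_2/dp_1
rhs = mc[0] + sh[0] / (-d11) + (d21 / (-d11)) * (p_star[1] - mc[1])
```

At the solution, `rhs` equals `p_star[0]`, and both prices exceed marginal cost.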

than one vector of prices that satisfies all the first (and second) order conditions. All we will rely on in estimation is the first order conditions, so the potential for multiplicity does not affect the properties of the estimators we introduce. However the calculation of counterfactual prices, the prices that would hold were we to change the institutional structure or break up a multiproduct firm, does depend on the equilibrium selection mechanism, and without further information (like an equilibrium selection procedure) or functional form restrictions (say, ones that ensure a unique equilibrium) they cannot be calculated.

Other Pricing Assumptions. This is not the only pricing assumption that is possible. One might want to see if a division of a firm that handled a subset of its products was pricing in a way that maximized the profits from that subset rather than from the products of the whole firm, or if two firms were coordinating their pricing decisions and trying to maximize the sum of their profits. Each of these (and other) assumptions would deliver a different pricing equation, and the new equations could replace this equation in the estimation algorithm I am about to introduce. Similarly one could try to set up an algorithm which tested which pricing assumption was a better description of reality. For an example of this, see Nevo (Econometrica, 2001).

Details. Define the J × J matrix Δ whose (i, j) element is −∂s_j(·)/∂p_i if both i ∈ J_f and j ∈ J_f for some firm f,

and whose (i, j) element is zero if the two goods do not belong to the same firm. Then we can write the vector of f.o.c.s in matrix notation as

s − Δ(p − mc) = 0 ⟹ p − Δ^{-1}s = mc.

Turning to estimation, now in addition to the demand-side system of J equations for ξ we have the pricing-side equations for ω, which can be written as

ln(p − Δ^{-1}s) − w'γ = ω(θ).

Recall that once we isolated ξ, the assumption that ξ was orthogonal to the instruments allowed us to form a moment which was mean zero at the true parameter value. Now we can do the same thing with ω. I.e., if z is an instrument then

E z_j ω_j(θ) = 0 at θ = θ_0.

How much have we complicated the estimation procedure? Before, every time we wanted to evaluate a θ we had to simulate demand and do the contraction mapping for that θ. Now, after simulating demand, we also have to calculate the markups for that θ. Now we have twice as many moment restrictions to minimize, and this will often increase computing time somewhat.

Some Final Notes on Estimating the Pricing Equation. When we estimate the pricing equation with the demand system we
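The matrix version is convenient computationally: given an ownership structure and the share derivatives, the markups come from one linear solve. A sketch assuming logit share derivatives (the ownership matrix and all names are illustrative, not from any particular package):

```python
import numpy as np

def implied_marginal_costs(p, shares_fn, ownership, alpha):
    """Recover mc = p - Delta^{-1} s from the Nash-in-prices f.o.c.s.

    ownership[i, j] = 1 if goods i and j share an owner, else 0.
    Demand is logit, so ds_r/dp_j = alpha*s_r*s_j for r != j and
    -alpha*s_j*(1 - s_j) for r = j; Delta's (i, j) element is
    -ds_j/dp_i for jointly owned goods, zero otherwise.
    """
    s = shares_fn(p)
    ds = alpha * (np.outer(s, s) - np.diag(s))   # ds[r, j] = ds_r/dp_j
    Delta = -ds.T * ownership
    return p - np.linalg.solve(Delta, s)
```

In estimation this is the markup calculation performed at each candidate θ; changing the ownership matrix is how the alternative pricing assumptions above would be implemented.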


More information

Unobserved Heterogeneity Revisited

Unobserved Heterogeneity Revisited Unobserved Heterogeneity Revisited Robert A. Miller Dynamic Discrete Choice March 2018 Miller (Dynamic Discrete Choice) cemmap 7 March 2018 1 / 24 Distributional Assumptions about the Unobserved Variables

More information

On Existence of Equilibria. Bayesian Allocation-Mechanisms

On Existence of Equilibria. Bayesian Allocation-Mechanisms On Existence of Equilibria in Bayesian Allocation Mechanisms Northwestern University April 23, 2014 Bayesian Allocation Mechanisms In allocation mechanisms, agents choose messages. The messages determine

More information

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 29

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 29 Chapter 5 Univariate time-series analysis () Chapter 5 Univariate time-series analysis 1 / 29 Time-Series Time-series is a sequence fx 1, x 2,..., x T g or fx t g, t = 1,..., T, where t is an index denoting

More information

Asymmetric Information: Walrasian Equilibria, and Rational Expectations Equilibria

Asymmetric Information: Walrasian Equilibria, and Rational Expectations Equilibria Asymmetric Information: Walrasian Equilibria and Rational Expectations Equilibria 1 Basic Setup Two periods: 0 and 1 One riskless asset with interest rate r One risky asset which pays a normally distributed

More information

Econ 101A Final exam May 14, 2013.

Econ 101A Final exam May 14, 2013. Econ 101A Final exam May 14, 2013. Do not turn the page until instructed to. Do not forget to write Problems 1 in the first Blue Book and Problems 2, 3 and 4 in the second Blue Book. 1 Econ 101A Final

More information

FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015.

FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015. FDPE Microeconomics 3 Spring 2017 Pauli Murto TA: Tsz-Ning Wong (These solution hints are based on Julia Salmi s solution hints for Spring 2015.) Hints for Problem Set 2 1. Consider a zero-sum game, where

More information

INTERTEMPORAL ASSET ALLOCATION: THEORY

INTERTEMPORAL ASSET ALLOCATION: THEORY INTERTEMPORAL ASSET ALLOCATION: THEORY Multi-Period Model The agent acts as a price-taker in asset markets and then chooses today s consumption and asset shares to maximise lifetime utility. This multi-period

More information

Economic stability through narrow measures of inflation

Economic stability through narrow measures of inflation Economic stability through narrow measures of inflation Andrew Keinsley Weber State University Version 5.02 May 1, 2017 Abstract Under the assumption that different measures of inflation draw on the same

More information

CHOICE THEORY, UTILITY FUNCTIONS AND RISK AVERSION

CHOICE THEORY, UTILITY FUNCTIONS AND RISK AVERSION CHOICE THEORY, UTILITY FUNCTIONS AND RISK AVERSION Szabolcs Sebestyén szabolcs.sebestyen@iscte.pt Master in Finance INVESTMENTS Sebestyén (ISCTE-IUL) Choice Theory Investments 1 / 65 Outline 1 An Introduction

More information

Lecture 17: More on Markov Decision Processes. Reinforcement learning

Lecture 17: More on Markov Decision Processes. Reinforcement learning Lecture 17: More on Markov Decision Processes. Reinforcement learning Learning a model: maximum likelihood Learning a value function directly Monte Carlo Temporal-difference (TD) learning COMP-424, Lecture

More information

Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India October 2012

Game Theory. Lecture Notes By Y. Narahari. Department of Computer Science and Automation Indian Institute of Science Bangalore, India October 2012 Game Theory Lecture Notes By Y. Narahari Department of Computer Science and Automation Indian Institute of Science Bangalore, India October 22 COOPERATIVE GAME THEORY Correlated Strategies and Correlated

More information

ESTIMATING THE EFFECTS OF TAX REFORM IN DIFFERENTIATED PRODUCT OLIGOPOLISTIC MARKETS. Chaim Fershtman, Neil Gandal and Sarit Markovich

ESTIMATING THE EFFECTS OF TAX REFORM IN DIFFERENTIATED PRODUCT OLIGOPOLISTIC MARKETS. Chaim Fershtman, Neil Gandal and Sarit Markovich No. 2107 ESTIMATING THE EFFECTS OF TAX REFORM IN DIFFERENTIATED PRODUCT OLIGOPOLISTIC MARKETS Chaim Fershtman, Neil Gandal and Sarit Markovich INDUSTRIAL ORGANIZATION AND PUBLIC POLICY ISSN 0265-8003 ESTIMATING

More information

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics. Ph. D. Preliminary Examination: Macroeconomics Fall, 2009

STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics. Ph. D. Preliminary Examination: Macroeconomics Fall, 2009 STATE UNIVERSITY OF NEW YORK AT ALBANY Department of Economics Ph. D. Preliminary Examination: Macroeconomics Fall, 2009 Instructions: Read the questions carefully and make sure to show your work. You

More information

Exercises Solutions: Oligopoly

Exercises Solutions: Oligopoly Exercises Solutions: Oligopoly Exercise - Quantity competition 1 Take firm 1 s perspective Total revenue is R(q 1 = (4 q 1 q q 1 and, hence, marginal revenue is MR 1 (q 1 = 4 q 1 q Marginal cost is MC

More information

Simple Notes on the ISLM Model (The Mundell-Fleming Model)

Simple Notes on the ISLM Model (The Mundell-Fleming Model) Simple Notes on the ISLM Model (The Mundell-Fleming Model) This is a model that describes the dynamics of economies in the short run. It has million of critiques, and rightfully so. However, even though

More information

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty George Photiou Lincoln College University of Oxford A dissertation submitted in partial fulfilment for

More information

In terms of covariance the Markowitz portfolio optimisation problem is:

In terms of covariance the Markowitz portfolio optimisation problem is: Markowitz portfolio optimisation Solver To use Solver to solve the quadratic program associated with tracing out the efficient frontier (unconstrained efficient frontier UEF) in Markowitz portfolio optimisation

More information

Problem Set 4 Answers

Problem Set 4 Answers Business 3594 John H. Cochrane Problem Set 4 Answers ) a) In the end, we re looking for ( ) ( ) + This suggests writing the portfolio as an investment in the riskless asset, then investing in the risky

More information

LECTURE NOTES 10 ARIEL M. VIALE

LECTURE NOTES 10 ARIEL M. VIALE LECTURE NOTES 10 ARIEL M VIALE 1 Behavioral Asset Pricing 11 Prospect theory based asset pricing model Barberis, Huang, and Santos (2001) assume a Lucas pure-exchange economy with three types of assets:

More information

Financial Liberalization and Neighbor Coordination

Financial Liberalization and Neighbor Coordination Financial Liberalization and Neighbor Coordination Arvind Magesan and Jordi Mondria January 31, 2011 Abstract In this paper we study the economic and strategic incentives for a country to financially liberalize

More information

Sublinear Time Algorithms Oct 19, Lecture 1

Sublinear Time Algorithms Oct 19, Lecture 1 0368.416701 Sublinear Time Algorithms Oct 19, 2009 Lecturer: Ronitt Rubinfeld Lecture 1 Scribe: Daniel Shahaf 1 Sublinear-time algorithms: motivation Twenty years ago, there was practically no investigation

More information

Problem set 1 Answers: 0 ( )= [ 0 ( +1 )] = [ ( +1 )]

Problem set 1 Answers: 0 ( )= [ 0 ( +1 )] = [ ( +1 )] Problem set 1 Answers: 1. (a) The first order conditions are with 1+ 1so 0 ( ) [ 0 ( +1 )] [( +1 )] ( +1 ) Consumption follows a random walk. This is approximately true in many nonlinear models. Now we

More information

Chapter 19: Compensating and Equivalent Variations

Chapter 19: Compensating and Equivalent Variations Chapter 19: Compensating and Equivalent Variations 19.1: Introduction This chapter is interesting and important. It also helps to answer a question you may well have been asking ever since we studied quasi-linear

More information

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017

Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017 Ph.D. Preliminary Examination MICROECONOMIC THEORY Applied Economics Graduate Program June 2017 The time limit for this exam is four hours. The exam has four sections. Each section includes two questions.

More information

January 26,

January 26, January 26, 2015 Exercise 9 7.c.1, 7.d.1, 7.d.2, 8.b.1, 8.b.2, 8.b.3, 8.b.4,8.b.5, 8.d.1, 8.d.2 Example 10 There are two divisions of a firm (1 and 2) that would benefit from a research project conducted

More information

Notes on Intertemporal Optimization

Notes on Intertemporal Optimization Notes on Intertemporal Optimization Econ 204A - Henning Bohn * Most of modern macroeconomics involves models of agents that optimize over time. he basic ideas and tools are the same as in microeconomics,

More information

Graduate Macro Theory II: The Basics of Financial Constraints

Graduate Macro Theory II: The Basics of Financial Constraints Graduate Macro Theory II: The Basics of Financial Constraints Eric Sims University of Notre Dame Spring Introduction The recent Great Recession has highlighted the potential importance of financial market

More information

Lecture 11: Bandits with Knapsacks

Lecture 11: Bandits with Knapsacks CMSC 858G: Bandits, Experts and Games 11/14/16 Lecture 11: Bandits with Knapsacks Instructor: Alex Slivkins Scribed by: Mahsa Derakhshan 1 Motivating Example: Dynamic Pricing The basic version of the dynamic

More information

Econ 101A Final exam Mo 18 May, 2009.

Econ 101A Final exam Mo 18 May, 2009. Econ 101A Final exam Mo 18 May, 2009. Do not turn the page until instructed to. Do not forget to write Problems 1 and 2 in the first Blue Book and Problems 3 and 4 in the second Blue Book. 1 Econ 101A

More information

1 Two Period Exchange Economy

1 Two Period Exchange Economy University of British Columbia Department of Economics, Macroeconomics (Econ 502) Prof. Amartya Lahiri Handout # 2 1 Two Period Exchange Economy We shall start our exploration of dynamic economies with

More information

Yao s Minimax Principle

Yao s Minimax Principle Complexity of algorithms The complexity of an algorithm is usually measured with respect to the size of the input, where size may for example refer to the length of a binary word describing the input,

More information

Quantitative Risk Management

Quantitative Risk Management Quantitative Risk Management Asset Allocation and Risk Management Martin B. Haugh Department of Industrial Engineering and Operations Research Columbia University Outline Review of Mean-Variance Analysis

More information

Symmetric Game. In animal behaviour a typical realization involves two parents balancing their individual investment in the common

Symmetric Game. In animal behaviour a typical realization involves two parents balancing their individual investment in the common Symmetric Game Consider the following -person game. Each player has a strategy which is a number x (0 x 1), thought of as the player s contribution to the common good. The net payoff to a player playing

More information

1 Answers to the Sept 08 macro prelim - Long Questions

1 Answers to the Sept 08 macro prelim - Long Questions Answers to the Sept 08 macro prelim - Long Questions. Suppose that a representative consumer receives an endowment of a non-storable consumption good. The endowment evolves exogenously according to ln

More information

A Model of a Vehicle Currency with Fixed Costs of Trading

A Model of a Vehicle Currency with Fixed Costs of Trading A Model of a Vehicle Currency with Fixed Costs of Trading Michael B. Devereux and Shouyong Shi 1 March 7, 2005 The international financial system is very far from the ideal symmetric mechanism that is

More information

PhD Qualifier Examination

PhD Qualifier Examination PhD Qualifier Examination Department of Agricultural Economics May 29, 2014 Instructions This exam consists of six questions. You must answer all questions. If you need an assumption to complete a question,

More information

The Costs of Environmental Regulation in a Concentrated Industry

The Costs of Environmental Regulation in a Concentrated Industry The Costs of Environmental Regulation in a Concentrated Industry Stephen P. Ryan MIT Department of Economics Research Motivation Question: How do we measure the costs of a regulation in an oligopolistic

More information

Idiosyncratic risk, insurance, and aggregate consumption dynamics: a likelihood perspective

Idiosyncratic risk, insurance, and aggregate consumption dynamics: a likelihood perspective Idiosyncratic risk, insurance, and aggregate consumption dynamics: a likelihood perspective Alisdair McKay Boston University June 2013 Microeconomic evidence on insurance - Consumption responds to idiosyncratic

More information

Game Theory Fall 2003

Game Theory Fall 2003 Game Theory Fall 2003 Problem Set 5 [1] Consider an infinitely repeated game with a finite number of actions for each player and a common discount factor δ. Prove that if δ is close enough to zero then

More information

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets

Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Unraveling versus Unraveling: A Memo on Competitive Equilibriums and Trade in Insurance Markets Nathaniel Hendren October, 2013 Abstract Both Akerlof (1970) and Rothschild and Stiglitz (1976) show that

More information

Strategies and Nash Equilibrium. A Whirlwind Tour of Game Theory

Strategies and Nash Equilibrium. A Whirlwind Tour of Game Theory Strategies and Nash Equilibrium A Whirlwind Tour of Game Theory (Mostly from Fudenberg & Tirole) Players choose actions, receive rewards based on their own actions and those of the other players. Example,

More information

Increasing Returns and Economic Geography

Increasing Returns and Economic Geography Increasing Returns and Economic Geography Department of Economics HKUST April 25, 2018 Increasing Returns and Economic Geography 1 / 31 Introduction: From Krugman (1979) to Krugman (1991) The award of

More information

1.1 Interest rates Time value of money

1.1 Interest rates Time value of money Lecture 1 Pre- Derivatives Basics Stocks and bonds are referred to as underlying basic assets in financial markets. Nowadays, more and more derivatives are constructed and traded whose payoffs depend on

More information

Week 2 Quantitative Analysis of Financial Markets Hypothesis Testing and Confidence Intervals

Week 2 Quantitative Analysis of Financial Markets Hypothesis Testing and Confidence Intervals Week 2 Quantitative Analysis of Financial Markets Hypothesis Testing and Confidence Intervals Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg :

More information

Class Notes on Chaney (2008)

Class Notes on Chaney (2008) Class Notes on Chaney (2008) (With Krugman and Melitz along the Way) Econ 840-T.Holmes Model of Chaney AER (2008) As a first step, let s write down the elements of the Chaney model. asymmetric countries

More information

Optimization Prof. A. Goswami Department of Mathematics Indian Institute of Technology, Kharagpur. Lecture - 18 PERT

Optimization Prof. A. Goswami Department of Mathematics Indian Institute of Technology, Kharagpur. Lecture - 18 PERT Optimization Prof. A. Goswami Department of Mathematics Indian Institute of Technology, Kharagpur Lecture - 18 PERT (Refer Slide Time: 00:56) In the last class we completed the C P M critical path analysis

More information

Efficiency and Herd Behavior in a Signalling Market. Jeffrey Gao

Efficiency and Herd Behavior in a Signalling Market. Jeffrey Gao Efficiency and Herd Behavior in a Signalling Market Jeffrey Gao ABSTRACT This paper extends a model of herd behavior developed by Bikhchandani and Sharma (000) to establish conditions for varying levels

More information

Optimization Models for Quantitative Asset Management 1

Optimization Models for Quantitative Asset Management 1 Optimization Models for Quantitative Asset Management 1 Reha H. Tütüncü Goldman Sachs Asset Management Quantitative Equity Joint work with D. Jeria, GS Fields Industrial Optimization Seminar November 13,

More information

UCLA Department of Economics Ph.D. Preliminary Exam Industrial Organization Field Exam (Spring 2010) Use SEPARATE booklets to answer each question

UCLA Department of Economics Ph.D. Preliminary Exam Industrial Organization Field Exam (Spring 2010) Use SEPARATE booklets to answer each question Wednesday, June 23 2010 Instructions: UCLA Department of Economics Ph.D. Preliminary Exam Industrial Organization Field Exam (Spring 2010) You have 4 hours for the exam. Answer any 5 out 6 questions. All

More information

Lecture 5 Leadership and Reputation

Lecture 5 Leadership and Reputation Lecture 5 Leadership and Reputation Reputations arise in situations where there is an element of repetition, and also where coordination between players is possible. One definition of leadership is that

More information

3 Arbitrage pricing theory in discrete time.

3 Arbitrage pricing theory in discrete time. 3 Arbitrage pricing theory in discrete time. Orientation. In the examples studied in Chapter 1, we worked with a single period model and Gaussian returns; in this Chapter, we shall drop these assumptions

More information

Course information FN3142 Quantitative finance

Course information FN3142 Quantitative finance Course information 015 16 FN314 Quantitative finance This course is aimed at students interested in obtaining a thorough grounding in market finance and related empirical methods. Prerequisite If taken

More information

Module 3: Factor Models

Module 3: Factor Models Module 3: Factor Models (BUSFIN 4221 - Investments) Andrei S. Gonçalves 1 1 Finance Department The Ohio State University Fall 2016 1 Module 1 - The Demand for Capital 2 Module 1 - The Supply of Capital

More information