Implementing an Agent-Based General Equilibrium Model
Pure Exchange General Equilibrium

- We shall take N dividend processes δ_n(t) as exogenous, with a distribution which is known to all agents.
- There is a large number of agents with differing utilities who trade claims to these cash flows.
- The relevant prices are the interest rate, r(t), and a price-of-risk vector, θ(t).
- Markets clear: in aggregate, all cash flows are consumed and all claims are held.
Bayesian Equilibrium

- Agents know the possible types of other investors in the market but do not know the wealth of each type.
- Each agent knows there may be many others of his own type, so knowing his own wealth does not help him infer the distribution of wealth across types.
- All agents start out with homogeneous beliefs about this distribution.
- Observing the equilibrium r(t) and θ(t) provides the information for updating this distribution.
Stochastic Discount Factors

- Recall that the stochastic discount factor, H(t), gives the time-t price of any asset which pays cash flow ξ at T as
  \[ p(t) = \frac{1}{H(t)} E_t\left[\xi H(T)\right] \]
- Its dynamics are
  \[ \frac{dH(t)}{H(t)} = -r(t)\,dt - \theta(t)^\top dW(t) \]
- What we are looking for is the dynamics of H(t) in terms of the observables.
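As a sanity check on these dynamics, here is a minimal Monte Carlo sketch (not part of the model code; all names and parameter values are illustrative) that simulates H(t) under constant r and θ and recovers the price of a riskless payoff:

```python
import numpy as np

def simulate_H(H0, r, theta, T, n_steps, n_paths, rng):
    """Simulate terminal values of the SDF under dH/H = -r dt - theta' dW,
    using an exact scheme in log space (r and theta held constant here)."""
    dt = T / n_steps
    theta = np.atleast_1d(np.asarray(theta, float))
    log_H = np.full(n_paths, np.log(H0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, theta.size))
        # d log H = (-r - 0.5 |theta|^2) dt - theta' dW  (Ito correction)
        log_H += (-r - 0.5 * theta @ theta) * dt - dW @ theta
    return np.exp(log_H)

# Price of a riskless unit payoff at T: E[1 * H(T)] / H(0), which should
# be close to exp(-r T) for constant r.
rng = np.random.default_rng(0)
H_T = simulate_H(1.0, 0.05, [0.3], 1.0, 50, 100_000, rng)
price = H_T.mean() / 1.0
```

The log-space scheme keeps H(t) strictly positive, which the SDF must be.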
A Simple Example

- Suppose that all agents have log utility over consumption but different rates of time preference. There is only one risky asset, and agents of type k own proportion w_k of it.
- Each type solves
  \[ \max_{c_k(t)} \; E\left[\int_0^T e^{-\rho_k t} \log(c_k(t))\,dt\right] \]
  subject to
  \[ E\left[\int_0^T H(t)\,c_k(t)\,dt\right] = E\left[\int_0^T H(t)\,w_k\,\delta(t)\,dt\right] \]
Solution to the Agents' Problems

- The solution to each agent's problem is given by
  \[ c_k(t) = \frac{1}{H(t)}\, e^{-\rho_k t}\, \gamma_k\, x_k, \qquad \gamma_k = \frac{\rho_k}{1 - e^{-\rho_k T}} \]
  where x_k is the total starting wealth of agents of type k.
- By starting wealth we mean the value of the endowment stream of this type.
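A direct transcription of this consumption rule (illustrative names, not the authors' code). A useful property to verify numerically is that H(t)c_k(t) = e^{-ρ_k t} γ_k x_k is deterministic, so the lifetime budget ∫_0^T H(t)c_k(t) dt = x_k holds path by path, whatever the SDF path:

```python
import numpy as np

def gamma(rho_k, T):
    """gamma_k = rho_k / (1 - exp(-rho_k T))."""
    return rho_k / (1.0 - np.exp(-rho_k * T))

def consumption(t, H_t, rho_k, x_k, T):
    """Type-k log investor's optimal consumption:
    c_k(t) = e^{-rho_k t} gamma_k x_k / H(t)."""
    return np.exp(-rho_k * t) * gamma(rho_k, T) * x_k / H_t

# Check the budget constraint on a grid; H(t) c_k(t) does not depend on H,
# so any positive SDF path gives the same lifetime spend.
T, rho_k, x_k = 10.0, 0.05, 2.0
ts = np.linspace(0.0, T, 20_001)
H = np.exp(0.03 * ts)                     # an arbitrary positive SDF path
spend = H * consumption(ts, H, rho_k, x_k, T)
dt = ts[1] - ts[0]
total = dt * (0.5 * spend[0] + spend[1:-1].sum() + 0.5 * spend[-1])
```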
Aggregation

- Since in equilibrium all cash flows are consumed, we must have that
  \[ \delta(t) = \frac{1}{H(t)} \sum_k e^{-\rho_k t}\, \gamma_k\, x_k \]
- Investors know that this must hold, but they do not know the x_k.
- Their beliefs must nevertheless be consistent with this market-clearing condition:
  \[ \delta(t) = \frac{1}{H(t)} \sum_k e^{-\rho_k t}\, \gamma_k\, E_t[x_k] \]
Updating

- Let Y_k(t) = E_t[x_k] be the common belief of agents at time t about the starting wealth of type k.
- This must be a non-negative martingale, so it follows that it is an exponential martingale:
  \[ \frac{dY_k(t)}{Y_k(t)} = v_k(t)^\top dW(t) \]
- This extra uncertainty must be reflected in the stochastic discount factor dynamics.
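A small simulation sketch of this belief process (constant v_k for simplicity; everything here is illustrative). Simulating in log space keeps Y_k strictly positive, and the sample mean should stay near Y_k(0), reflecting the martingale property:

```python
import numpy as np

def simulate_beliefs(y0, v, T, n_steps, n_paths, rng):
    """Simulate dY/Y = v' dW exactly in log space; Y is an exponential martingale."""
    dt = T / n_steps
    v = np.atleast_1d(np.asarray(v, float))
    log_y = np.full(n_paths, np.log(y0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, v.size))
        # d log Y = -0.5 |v|^2 dt + v' dW   (Ito correction)
        log_y += -0.5 * (v @ v) * dt + dW @ v
    return np.exp(log_y)

rng = np.random.default_rng(1)
Y_T = simulate_beliefs(1.0, [0.2], 1.0, 50, 100_000, rng)
```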
Equilibrium

- Applying the Itô formula to the market-clearing condition and matching coefficients, we obtain
  \[ r(t) = \frac{\sum_{k=1}^K e^{-\rho_k t}\gamma_k Y_k(t)\,\rho_k}{\sum_{k=1}^K e^{-\rho_k t}\gamma_k Y_k(t)} + \mu_\delta(t) - \theta(t)^\top \theta(t) \]
  \[ \theta(t) = \sigma_\delta(t) - \frac{\sum_{k=1}^K e^{-\rho_k t}\gamma_k Y_k(t)\, v_k(t)}{\sum_{k=1}^K e^{-\rho_k t}\gamma_k Y_k(t)} \]
- If all the ρ_k were the same and the v_k were zero, then we would have the classical result.
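The two restrictions translate directly into code as belief-weighted averages. This sketch follows the slide's formulas (the −θ'θ drift term is my reading of the garbled equation; treat it, and all inputs below, as assumptions for illustration):

```python
import numpy as np

def equilibrium_prices(t, rho, gamma_k, Y, v, mu_delta, sigma_delta):
    """Equilibrium r(t) and theta(t) from belief-weighted averages.
    rho, gamma_k, Y: shape (K,); v: shape (K, d); sigma_delta: shape (d,)."""
    w = np.exp(-rho * t) * gamma_k * Y
    w = w / w.sum()                      # consumption-share weights
    theta = sigma_delta - w @ v          # sigma_delta minus weighted avg of v_k
    r = w @ rho + mu_delta - theta @ theta
    return r, theta

# Homogeneous case: identical rho_k and zero v_k recover the classical
# log-investor result r = rho + mu_delta - |sigma_delta|^2, theta = sigma_delta.
rho = np.array([0.05, 0.05])
gamma_k = np.array([1.0, 1.0])
Y = np.array([3.0, 1.0])
v = np.zeros((2, 1))
sigma_delta = np.array([0.2])
r, theta = equilibrium_prices(0.5, rho, gamma_k, Y, v, 0.03, sigma_delta)
```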
Why is this Interesting?

- Notice that even with log-normal endowment shocks we have stochastic interest rates and risk prices in this model.
- Empirical research suggests that a large portion of the variation in prices that we observe is due to time-varying risk prices.
- But the source of the change in risk prices has been hard to identify.
- Here we find that at least part of it is due to aggregate uncertainty about market structure.
The Goal

- We want to be able to show how shocks propagate from one asset class to another, e.g. from subprime CMOs to the general stock market.
- This means we need more than one risky asset.
- We expect that the mechanism is that losses in one asset class change the wealth of one type of investor disproportionately, which causes large changes in risk pricing.
- So investor types must be quite different from each other. We may even need to constrain some investors.
Other Utility Functions

- The model just presented is the only one that can be solved analytically.
- Non-log investors are necessary for realism; consider the HARA class.
- We can solve for the optimal consumption and investment behavior of HARA investors, but only with a certain class of distributions for r(t) and θ(t).
- So we must assume that even with updating we always stay within this class.
Agents with Metacognition

- In computer science, intelligent agents are able to choose from a set of actions based on observed data.
- In this work the agents are smarter than that, in that they know their beliefs might be wrong and can adjust.
- Agents also observe the results of their actions (the equilibrium r(t) and θ(t)) and determine whether these are consistent with their observations and beliefs (introspection).
- If not, then they update their beliefs before taking new actions (reflection).
Gaussian State Vector

- Agents know that the endowment growth rate is linear in a state vector Y which satisfies
  \[ dY(t) = K\left(\Theta - Y(t)\right)dt + \Sigma\, dW(t) \]
- They believe that
  \[ r(t) = d_0 + d_1^\top Y(t) + Y(t)^\top d_2\, Y(t) \quad \text{and} \quad \theta(t) = \theta_0 + \theta_1 Y(t) \]
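The state dynamics are a multivariate Ornstein–Uhlenbeck process, which is straightforward to simulate with an Euler–Maruyama scheme (an illustrative sketch, not the model code):

```python
import numpy as np

def simulate_state(Y0, K, Theta, Sigma, T, n_steps, rng):
    """Euler-Maruyama for dY = K(Theta - Y) dt + Sigma dW; returns the path."""
    dt = T / n_steps
    Y = np.asarray(Y0, float).copy()
    d_w = Sigma.shape[1]
    path = [Y.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=d_w)
        Y = Y + K @ (Theta - Y) * dt + Sigma @ dW
        path.append(Y.copy())
    return np.array(path)

# With the noise switched off, the state should relax toward Theta:
K = np.diag([1.0, 2.0])
Theta = np.array([0.5, -0.5])
Sigma = np.zeros((2, 2))
rng = np.random.default_rng(2)
path = simulate_state(np.array([5.0, 5.0]), K, Theta, Sigma,
                      T=10.0, n_steps=10_000, rng=rng)
```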
Updating

- Investors are assumed to see Y(t) each period. The equilibrium r(t) and θ(t) are also observed.
- The parameters of the functions that relate r(t) and θ(t) to Y(t) are updated each period to reconcile these observations.
- Ideally we would like this to take place during the market-clearing process, but the computational burden is high.
Consumption Choice

- HARA investors' optimal consumption is
  \[ c_k(t) = \frac{X_k(t)}{G_k(t,T)} \]
  where X_k(t) is current wealth and
  \[ G_k(t,T) = \int_t^T e^{-\rho_k \beta_k (s-t)} F_k(t,s)\,ds \]
- Here β_k is a risk-tolerance parameter and F_k is a function of Y and t which solves a parabolic PDE.
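The integral defining G_k is a one-dimensional quadrature over the remaining horizon. A trapezoid-rule sketch, with F_k passed in as a callable (its functional form is the subject of the next slide, so all names here are illustrative):

```python
import numpy as np

def G(t, T, rho_k, beta_k, F_k, n=4001):
    """G_k(t,T) = int_t^T exp(-rho_k*beta_k*(s-t)) F_k(t,s) ds, trapezoid rule."""
    s = np.linspace(t, T, n)
    y = np.exp(-rho_k * beta_k * (s - t)) * F_k(t, s)
    ds = s[1] - s[0]
    return ds * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# With F_k = 1 the integral has the closed form (1 - e^{-rho*beta*(T-t)})/(rho*beta),
# which gives a quick accuracy check on the quadrature:
rho_k, beta_k, t, T = 0.05, 2.0, 0.0, 10.0
val = G(t, T, rho_k, beta_k, lambda t, s: np.ones_like(s))
exact = (1.0 - np.exp(-rho_k * beta_k * (T - t))) / (rho_k * beta_k)
```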
Exponential Quadratic Forms

- The Gaussian state vector and the assumed forms of μ_δ, r(t), and θ(t) guarantee that this PDE has a solution of the form
  \[ F_k(t,T) = \exp\left( C(\tau) + D(\tau)^\top Y(t) + \frac{1}{2}\, Y(t)^\top Q(\tau)\, Y(t) \right) \]
  where τ = T − t.
- The functions C, D, and Q satisfy a particular set of ODEs.
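Evaluating the candidate solution is then just a quadratic form in the exponent. A minimal helper (C, D, and Q would come from integrating the ODEs, which are not spelled out here, so the coefficients below are placeholders):

```python
import numpy as np

def F_exp_quad(C, D, Q, Y):
    """Exponential-quadratic form F = exp(C + D'Y + 0.5 * Y'QY)."""
    Y = np.asarray(Y, float)
    return np.exp(C + D @ Y + 0.5 * Y @ Q @ Y)

# Placeholder coefficients, just to exercise the formula:
C, D = 0.1, np.array([1.0, -1.0])
Q = np.array([[2.0, 0.0], [0.0, 0.0]])
val = F_exp_quad(C, D, Q, np.array([1.0, 1.0]))
```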
Investment Decision

- A HARA investor chooses investments according to
  \[ \pi_k(t) = \beta_k X_k(t) \left(\sigma(t)^\top\right)^{-1} \theta(t) + X_k(t) \left(\sigma(t)^\top\right)^{-1} g_k(t,T) \]
  where g_k(t,T) is the volatility of G_k(t,T).
- But to know σ(t) we need to be able to compute the prices of the risky assets. This is another set of PDEs.
Market Clearing

- The market-clearing r(t) and θ(t) are determined by numerical search.
- The observed Y(t) and the updated parameters determine agent demands for consumption and investment.
- So at each iteration in finding market clearing we need to solve a set of ODEs and do several numerical integrations.
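The search itself can be organized as root-finding on excess demand. A one-dimensional toy sketch (the real model clears a vector (r, θ) with much costlier demand evaluations; everything here is illustrative), assuming excess demand is monotone in r and bracketed by the initial interval:

```python
import numpy as np

def excess_demand(r, supply, demands):
    """Aggregate demand minus supply; demands is a list of callables c_k(r)."""
    return sum(c(r) for c in demands) - supply

def clear_market(supply, demands, lo=-1.0, hi=1.0, tol=1e-12):
    """Bisection for the r that clears the market, assuming a bracketed,
    monotone excess-demand function on [lo, hi]."""
    f_lo = excess_demand(lo, supply, demands)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = excess_demand(mid, supply, demands)
        if f_lo * f_mid <= 0.0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid    # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Toy demands c_k(r) = a_k * exp(-r); the clearing rate is recoverable exactly:
a = [1.0, 2.0]
r_true = 0.3
supply = sum(a) * np.exp(-r_true)
demands = [lambda r, ak=ak: ak * np.exp(-r) for ak in a]
r_star = clear_market(supply, demands)
```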
Open Questions

- We do not yet know how to incorporate the extra variation caused by updating into H(t).
- We can make guesses about its magnitude and incorporate those guesses.
- But then we may have to run the model long enough to calibrate these guesses to reality.
- Lots of CPU crunching ahead of us.