Taylor and Mishkin on Rules versus Discretion in Fed Monetary Policy

The most debated topic in the conduct of monetary policy in recent times is the rules versus discretion controversy. Central bankers and policy makers have always been preoccupied with the question of what the optimum quantity of money in the economy is, and at what price. Since the objectives of monetary policy are to promote economic growth and employment and to ensure price stability, central bankers have been creating money, regulating the money supply and adjusting interest rates toward these goals. But in conducting these monetary operations, central bankers needed a definite quantitative indicator to aim at and pursue. After his monumental study A Monetary History of the United States, 1867-1960, the monetarist hero Milton Friedman advocated that instead of short-term adjustments to the money supply or the rate of interest made in light of past growth and inflation figures, the central bank should follow a policy of fixed money supply growth and let interest rates adjust in the market. Monetarism gained prominence in the late 1960s and the 1970s, and persisted in different forms into the 1980s. The Federal Reserve and the Bank of England led in applying the monetarist rule of targeting monetary aggregates. The 1970s, moreover, witnessed extraordinary global economic disturbance, with the two sharpest increases in crude oil prices, in 1973 and 1979, catapulting the global economy into unprecedented cost-push inflation. Central banks the world over had to pursue accommodating monetary expansion to avoid another great depression. One has to reckon with the fact that in the post-oil-crisis world, Friedman's rule ran the grave risk of pushing the recessionary economy to the brink of depression. High M2 growth began in 1971-72, when it reached a record high of 12.5%, up from 4% in 1970. It dropped to 6% in 1974 but went back to 12.5% in 1976-77. Paul Volcker brought M2 growth down to around 8% during 1979-81 by abandoning the Fed funds rate as the operating target, adopting a bank reserve growth target instead, and allowing the rate of interest to rise to a record high of 19%, primarily to tame inflation, which had reached 15%. By 1983 inflation had been brought down to 5%, and the money supply again grew at 12.5% in 1984. Since then M2 growth has come down steadily, to a record low of about 1% in 1992-93.
The Great Inflation, which began in the 1960s and lasted into the 1980s, and the monetary policy mistakes behind it led to great disappointment among monetary economists, central bankers and policy makers. Discretionary monetary policy had not given positive results, and the search was on for a more reliable rule or guide for monetary policy. In 1993 John Taylor, Professor of Economics at Stanford University, brought out a paper outlining a rule for setting the Fed funds rate as a weighted average of the output gap and the inflation gap. The rule was based on the U.S. experience in the late 1980s and early 1990s. It suggested that the federal funds rate (r) should normatively be set by a simple equation: r = p + 0.5y + 0.5(p - 2) + 2, where y represents the percent deviation of real GDP from trend and p represents the rate of inflation over the previous four quarters. With inflation on its assumed target of 2 percent and real GDP growing on its trend path of roughly 2 percent per year (so that y = 0), the real ex post interest rate (r - p) would also equal 2. The Federal Reserve has used the Taylor formula, as the Taylor rule, for guidance in its FOMC meetings since 1995. From 1995 until 2003 the Fed funds rate nearly matched the Taylor-rule rate.
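As a quick sanity check of the formula, the rule can be computed in a few lines of Python. The 2 percent inflation target and 2 percent equilibrium real rate are the values assumed in the 1993 paper; the function name and inputs here are illustrative only.

```python
def taylor_rule(inflation, output_gap):
    """Taylor (1993): r = p + 0.5*y + 0.5*(p - 2) + 2, where p is
    inflation over the previous four quarters (percent) and y is the
    percent deviation of real GDP from trend (percent)."""
    return inflation + 0.5 * output_gap + 0.5 * (inflation - 2.0) + 2.0

# On target (p = 2, y = 0): r = 4, so the real ex post rate r - p = 2.
print(taylor_rule(2.0, 0.0))   # 4.0
# Inflation a point above target with a 1% positive output gap:
print(taylor_rule(3.0, 1.0))   # 6.0
```

Note how each percentage point of excess inflation raises the prescribed nominal rate by 1.5 points (the direct term plus the 0.5 gap weight), so the real rate rises too, which is what makes the rule stabilizing.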
At the recent conference on the subject, titled Are Rules Made To Be Broken?: Discretion and Monetary Policy, at the Federal Reserve Bank of Boston, Eric Rosengren, President and CEO of the Boston Fed, introduced the subject with a brief history of the Taylor rule and recent developments in theory, empirical analysis and innovation in monetary policy. John Taylor presented his arguments for an instrument-based rule to guide monetary policy and dealt with the criticism of the rule-based approach. He made the case for rule-based monetary policy on several grounds: it addresses the time-inconsistency problem, and it offers simplicity, less outside pressure and uncertainty, greater transparency and accountability, and use as a benchmark. He further stated that an instrument-based rule helps in adopting a systematic, consistent and predictable strategy of monetary conduct, and does not preclude discretion in times of abnormal, unexpected and extreme shocks. How far has the actual rate deviated from the Taylor rule? Figure 1 below compares the actual Fed funds rate with the Taylor-rule rate. It shows that the actual rate overshot the Taylor rate from 1994 until 2000, while it undershot it from 2001 until the crisis in 2008. Taylor believes that deviation from the rule enabled easier risk taking, contributing to financial instability and the crisis; undoubtedly, ineffective banking supervision and macroprudential policy, and flaws in credit ratings, were also factors.
Since the crisis, the economy and the financial system have moved into a totally different and extreme scenario, and the original Taylor rule had to be substantially modified for meaningful comparison. Bernanke presented modified Taylor-rule estimates using core PCE inflation and a weight of 1.0 on the output gap (Figure 2 below). With the modified Taylor rule, Bernanke pointed out, the deviation of the actual rate from the rule had been very minor, and the two curves almost matched except after the crisis. Taylor discussed several new ideas on rule-based and targeting policies, including inflation targeting, nominal GDP targeting, and modifications of the weights and other parameters of the original Taylor rule. Considerable empirical and econometric testing of the Taylor rule, of modified and alternative rules, and of target-based approaches has shed more light on Friedman's goal of the optimum quantity of money.
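To illustrate the difference the modification makes, a generalized version of the rule can let the output-gap weight vary; the 1.0 weight is the one attributed to Bernanke above (whose version is fed core PCE inflation rather than the GDP deflator), and the numbers below are hypothetical inputs, not estimates from either author.

```python
def taylor_rule(inflation, output_gap, gap_weight=0.5):
    """Generalized Taylor rule: gap_weight=0.5 gives the 1993 original,
    gap_weight=1.0 the modified variant discussed above."""
    return inflation + gap_weight * output_gap + 0.5 * (inflation - 2.0) + 2.0

# In a deep recession (output gap -4%, inflation 1.5%) the two variants
# diverge sharply -- the modified rule even prescribes a negative rate:
print(taylor_rule(1.5, -4.0))                  # 1.25
print(taylor_rule(1.5, -4.0, gap_weight=1.0))  # -0.75
```

The negative prescription in the second case is one way to see why, once the funds rate hit the zero lower bound after 2008, comparisons of actual policy against any rule required this kind of modification.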
Frederic Mishkin, Professor of Economics at Columbia University and former member of the Board of Governors of the Federal Reserve, examined the issue of rules versus discretion in monetary policy and discussed the Taylor rule and alternative guidance for Fed policy and the conduct of monetary aggregates and instruments. The Taylor rule marked a revolution in the conduct of monetary policy in the 1990s. It gave better and continuing guidance compared with the earlier reliance on period-to-period past economic data and total discretion in monitoring the money supply and the rate of interest. Mishkin feels that rule-based policy carries a smaller chance of policy mistakes than discretionary policy. Rule-based policy, according to him, also overcomes the time-inconsistency problem and the temptation to deviate from the optimal path to exploit the short-term tradeoff between unemployment and inflation. The case for discretion rests on the uncertainty of the financial and economic environment. Mishkin feels that some discretion in monetary policy is desirable because a rule-based approach requires a reliable macroeconomic model and a stable economic structure, cannot foresee every contingency, and does not permit judgment calls. The Taylor rule also needs reliable estimates of the natural rate of interest, the natural rate of unemployment and the Phillips curve. Weighing the pros and cons of both instrument rules and pure discretion, Mishkin proposed the middle path he developed with former Fed Chairman Ben Bernanke, called constrained discretion. He discussed the adoption of other target rules or nominal anchors, such as inflation targeting, price-level or nominal GDP targeting, or a target criterion involving a tradeoff between output gaps and inflation gaps. However, these approaches provide no automatic instrument rule and again leave considerable discretion. The central banks of the developed nations are facing low growth, low inflation and deflationary tendencies. Economic growth has been difficult to come by despite fiscal stimulus on the one hand and QE (quantitative easing), ZLB (zero lower bound) and negative interest rate policies on the other. The prevalence of strong deflationary forces has kept central banks from hitting their 2% inflation targets, forcing the ECB, the Bank of England, the Bank of Japan and others to continue with QE and ZLB policies. The US is the only economy that has succeeded in achieving 2%+ growth and near full employment, but it too has yet to hit the 2% inflation target. Discussing the problems of monetary conduct in emerging market economies, which combine high growth and inflation rates with susceptibility to large and frequent shocks, Jose De Gregorio, former Governor of the Central Bank of Chile and former Minister in the Chilean government, outlined the significance of both rules and discretion in those economies.
From Adam Smith and Friedman to Taylor, the quest for the optimum quantity of money and its equilibrium price continues. The question also comes down to having the right estimates of the equilibrium real rate of interest, the NAIRU (non-accelerating inflation rate of unemployment), the growth potential of the economy, and an index or rate that truly captures inflation. In a world of information and data, accuracy is of crucial significance for the right outcome. Behind the numbers are real economic phenomena unfolding day in and day out. Capturing them correctly is the great task in achieving optimality in policy and letting the economy reach the equilibrium the policy targets. Failure to achieve the target signals the need for fine-tuning of policy, or perhaps a change of policy.