Archive for the Category NGDP targeting

 
 

Basil Halperin on the logic behind NGDP targeting

James Alexander directed me to a recent post by Basil Halperin, which is one of the best blog posts that I have read in years.  (I was actually sent this material before Christmas, but it sort of fell between the cracks.)

Basil starts off discussing a program for distributing excess food production from manufacturers to food banks.

The problem was one of distributed versus centralized knowledge. While Feeding America had very good knowledge of poverty rates around the country, and thus could measure need in different areas, it was not as good at dealing with idiosyncratic local issues.

Food banks in Idaho don’t need a truckload of potatoes, for example, and Feeding America might fail to take this into account. Or maybe the Chicago regional food bank just this week received a large direct donation of peanut butter from a local food drive, and then Feeding America comes along and says that it has two tons of peanut butter that it is sending to Chicago.

To an economist, this problem screams of the Hayekian knowledge problem. Even a benevolent central planner will be hard-pressed to efficiently allocate resources in a society since it is simply too difficult for a centralized system to collect information on all local variation in needs, preferences, and abilities.

One option would simply be to arbitrarily distribute the food according to some sort of central planning criterion.  But there is a better way:

This knowledge problem leads to option two: market capitalism. Unlike poorly informed central planners, the decentralized price system – i.e., the free market – can (often but not always) do an extremely good job of aggregating local information to efficiently allocate scarce resources. This result is known as the First Welfare Theorem.

Such a system was created for Feeding America with the help of four Chicago Booth economists in 2005. Instead of centralized allocation, food banks were given fake money – with needier food banks being given more – and allowed to bid for different types of food in online auctions. Prices are thus determined by supply and demand. . . .

By all accounts, the system has worked brilliantly. Food banks are happier with their allocations; donations have gone up as donors have more confidence that their donations will actually be used. Chalk one up for economic theory.

Basil points out that while that solves one problem, there is still the issue of determining “monetary policy”, i.e., how much fake money should be distributed each day.

Here’s the problem for Feeding America when thinking about optimal monetary policy. Feeding America wants to ensure that changes in prices are informative for food banks when they bid. In the words of one of the Booth economists who helped design the system:

“Suppose I am a small food bank; I really want a truckload of cereal. I haven’t bid on cereal for, like, a year and a half, so I’m not really sure I should be paying for it. But what you can do on the website, you basically click a link and when you click that link it says: This is what the history of prices is for cereal over the last 5 years. And what we wanted to do is set up a system whereby by observing that history of prices, it gave you a reasonable instinct for what you should be bidding.”

That is, food banks face information frictions: individual food banks are not completely aware of economic conditions and only occasionally update their knowledge of the state of the world. This is because obtaining such information is time-consuming and costly.

Relating this to our question of optimal monetary policy for the food bank economy: How should the fake money supply be set, taking into consideration this friction?

Obviously, if Feeding America were to randomly double the supply of (fake) money, then all prices would double, and this would be confusing for food banks. A food bank might go online to bid for peanut butter, see that the price has doubled, and mistakenly think that demand specifically for peanut butter has surged.

This “monetary misperception” would distort decision making: the food bank wants peanut butter, but might bid for a cheaper good like chicken noodle soup, thinking that peanut butter is really scarce at the moment.

Clearly, random variation in the money supply is not a good idea. More generally, how should Feeding America set the money supply?

One natural idea is to copy what real-world central banks do: target inflation.

Basil then explains why NGDP targeting is likely to be superior to inflation targeting, using a Lucas-type monetary misperceptions model.

III. Monetary misperceptions
I demonstrate the following argument rigorously in a formal mathematical model in a paper, “Monetary Misperceptions: Optimal Monetary Policy under Incomplete Information,” using a microfounded Lucas Islands model. The intuition for why inflation targeting is problematic is as follows.

Suppose the total quantity of all donations doubles.

You’re a food bank and go to bid on cheerios, and find that there are twice as many boxes of cheerios available today as yesterday. You’re going to want to bid at a price something like half as much as yesterday.

Every other food bank looking at every other item will have the same thought. Aggregate inflation thus would be something like -50%, as all prices would drop by half.

As a result, under inflation targeting, the money supply would simultaneously have to double to keep inflation at zero. But this would be confusing: Seeing the quantity of cheerios double but the price remain the same, you won’t be able to tell if the price has remained the same because
(a) The central bank has doubled the money supply
or
(b) Demand specifically for cheerios has jumped up quite a bit

It’s a signal extraction problem, and rationally you’re going to put some weight on both of these possibilities. However, only the first possibility actually occurred.

This problem leads to all sorts of monetary misperceptions, as money supply growth creates confusions, hence the title of my paper.

Inflation targeting, in this case, is very suboptimal. Price level variation provides useful information to agents.

IV. Optimal monetary policy
As I work out formally in the paper, optimal policy is instead something close to a nominal income (NGDP) target. Under log utility, it is exactly a nominal income target. (I’ve written about nominal income targeting before more critically here.)

. . .  Feeding America, by the way, does not target constant inflation. They instead target “zero inflation for a given good if demand and supply conditions are unchanged.” This alternative is a move in the direction of a nominal income target.

V. Real-world macroeconomic implications
I want to claim that the information frictions facing food banks also apply to the real economy, and as a result, the Federal Reserve and other central banks should consider adopting a nominal income target. Let me tell a story to illustrate the point.

Consider the owner of an isolated bakery. Suppose one day, all of the customers seen by the baker spend twice as much money as the customers from the day before.

The baker has two options. She can interpret this increased demand as customers having come to appreciate the superior quality of her baked goods, and thus increase her production to match the new demand. Alternatively, she could interpret this increased spending as evidence that there is simply more money in the economy as a whole, and that she should merely increase her prices proportionally to account for inflation.

Economic agents confounding these two effects is the source of economic booms and busts, according to this model. This is exactly analogous to the problem faced by food banks trying to decide how much to bid at auction.

To the extent that these frictions are quantitatively important in the real world, central banks like the Fed and ECB should consider moving away from their inflation targeting regimes and toward something like a nominal income target, as Feeding America has.

The paper he links to contains a rigorous mathematical model that shows the advantages of NGDP targeting. He doesn’t claim NGDP targeting is always optimal, but any paper that did would actually be less persuasive, as it would mean the model was explicitly constructed to generate that result. Instead the result flows naturally from the Lucas-style archipelago model, where each trader is on their own little island, observing local demand conditions before aggregate (NGDP) conditions. This is the sort of approach I used in my first NGDP futures targeting paper, where futures markets aggregated all of this local demand (i.e. velocity) information. However, Basil’s paper is light years ahead of where I was in 1989.
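To make the signal-extraction logic concrete, here is a minimal numerical sketch of my own (it is not Basil’s code, and it is not the model in his paper): each island trader observes only a local price change, which mixes an economy-wide monetary shock with a local relative-demand shock, and the best the trader can do is split the observed change in proportion to the two variances.

```python
import numpy as np

rng = np.random.default_rng(0)

def signal_extraction_demo(sigma_m, sigma_z, n=100_000):
    """Toy Lucas-islands signal extraction.

    Each trader sees only its local (log) price change p = m + z, where m is
    an aggregate monetary shock and z is a local relative-demand shock.  The
    optimal (Bayesian) estimate of z given p puts weight
    sigma_z^2 / (sigma_z^2 + sigma_m^2) on the observed price change.
    """
    m = rng.normal(0.0, sigma_m, n)      # aggregate (money/velocity) shocks
    z = rng.normal(0.0, sigma_z, n)      # local relative-demand shocks
    p = m + z                            # the only thing the trader observes
    weight = sigma_z**2 / (sigma_z**2 + sigma_m**2)
    z_hat = weight * p                   # inferred local demand shock
    misperception = np.mean((z_hat - z) ** 2)
    return weight, misperception

for sigma_m in (0.0, 0.5, 1.0, 2.0):
    w, mse = signal_extraction_demo(sigma_m, sigma_z=1.0)
    print(f"money-shock sd {sigma_m:.1f}: weight on price signal {w:.2f}, "
          f"mean squared misperception {mse:.3f}")
```

The noisier the monetary environment, the less weight traders can rationally put on prices as signals of local scarcity, which is precisely the Hayekian information the price system is supposed to transmit.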

I can’t recommend him highly enough.  I’m told he recently got a BA from Chicago, which suggests he may be another Soltas, Wang or Rognlie, one of those people who makes a mark at a very young age.  He seems to combine George Selgin-type economic intuition (even citing a lovely Selgin metaphor at the end of his post) with the sort of highly technical skills required in modern macroeconomics.

Commenters often ask (taunt?) me with the question, “Where is the rigorous model for market monetarism?”  I don’t believe any single model can incorporate all of the insights from any half-decent school of thought, but Basil’s model certainly provides the sort of rigorous explanation of NGDP targeting that people seem to demand.

Basil has lots of other excellent posts, and over the next few weeks and months I will have more posts responding to some of the points he makes (which, to his credit, include criticism of NGDP targeting–he’s no ideologue).

Better to undershoot a 3% target than overshoot a 2% target

My views on monetary policy are pretty similar to Ryan Avent’s, but I’m going to quibble slightly with this:

At the same time, a period of inflation above the Fed’s 2% target would give the central bank more headroom to raise its benchmark interest rate. The higher the level of long-run nominal rates, the less likely rates are to fall back to zero the next time trouble strikes.

Back in 2009, 2010, or 2011, it would have made sense to try to overshoot the 2% inflation target.  But not today, with unemployment at 4.6%.  If we pushed inflation above 2% when the economy is strong, then to keep inflation averaging 2% we’d have to shoot for under 2% inflation when the economy is weak.  We’d be more likely to fall back to zero next time.

Indeed this is basically what went wrong in 2008.  Inflation exceeded 2% during the housing boom.  Thus when the Fed needed to move aggressively to ease policy in 2008, they held back in fear of inflation (which ran above 2% during 2008).  It would make more sense to shoot for below 2% inflation during booms, and above 2% inflation during recessions.

Ryan does have some forceful arguments for higher inflation, but I think they’d be more effective if couched in terms of a change in the inflation target.  Thus instead of calling for an overshoot of the 2% target by 1/2%, it would make more sense to call for undershooting a new and higher 3% inflation target, by 1/2%.  Increasing the inflation target to 3% would indeed give the Fed more room to cut rates in the next recession, in a way that overshooting the 2% target would not.  In addition, undershooting the target during a boom is more consistent with the spirit of NGDP targeting, which Ryan has previously endorsed.
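Ryan’s headroom point can be restated in one line with the Fisher equation (my gloss, not his):

$$ i \approx r + \pi^{e} $$

If the long-run real rate $r$ is roughly given, then a credible move from a 2% to a 3% inflation target raises expected inflation, and hence the average level of nominal rates, by about a percentage point, which is genuine extra room to cut in the next recession. A temporary overshoot of a 2% target does not raise long-run $\pi^{e}$, and so buys essentially no extra headroom.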

This all might seem like a meaningless quibble: “What difference does it make what we do to the target, as long as we get 2.5% inflation?”  In the very short run it may make no difference.  But if you don’t make monetary policy decisions in the context of a clearly defined policy regime, then the economy is likely to be less stable, especially when we reach a turning point in the business cycle.

As always, this 3% inflation target is not my preferred option (I prefer NGDPLT). I’m just trying to illustrate what I think is the most fruitful approach for people with current views on policy that might be described as “dovish.”

Binder and Rodrigue on NGDP targeting

Carola Binder and Alex Rodrigue have a very nice new paper out on monetary policy rules for the Center on Budget and Policy Priorities.  Their paper suggests that either NGDP targeting or total wage targeting is likely to produce the best employment outcomes:

[Screenshot: the paper’s table ranking monetary policy rules.]

I’d quibble a bit with the rankings; for instance, I view the Taylor Rule as much superior to the gold standard, at least at positive interest rates.  But I do agree about the advantages of NGDP and wage targeting.  They discuss two types of wage targeting:

Nominal wage targeting can refer to targeting the wage rate (the price of labor) or targeting the quantity of wages paid (total nominal labor compensation, or the average hourly wage times the total number of hours worked). The former can be thought of as a special type of inflation targeting, since wages themselves are a price and wage growth is a type of inflation. Inflation-targeting central banks choose which specific price index to use for their inflation target; nominal wage targeting entails choosing a price index with 100 percent weight on wages. Mankiw and Reis (2003) find that “a central bank that wants to achieve maximum stability of economic activity should use a price index that gives substantial weight to the level of nominal wages.”

I tend to favor targeting either total wage payments or the expected future level of average hourly wages, and they hold exactly the same view:

Nominal wage targeting has never been attempted, and its implementation could entail several challenges. First, there is no single wage rate. Policymakers would need to choose whether to target mean or median wages or some other measure. Second, nominal wages tend to respond to monetary policy with a lag. It may thus be preferable to target either expected future wages or total nominal labor compensation, which reacts more quickly.

In a slump, total wage payments fall faster than average wages per hour (due to wage stickiness).  So if you are not using a futures market approach, then aggregate wages may give a clearer signal.  However, on theoretical grounds average hourly wages are slightly better, and hence are to be preferred if the lag problem can be addressed with a futures market for average hourly wages.
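A stylized numerical example (my own made-up numbers, purely to show the mechanics) of why the aggregate wage bill is the faster-moving signal:

```python
# Stylized slump, not calibrated to any data: hourly wages are sticky while
# hours worked adjust quickly, so total compensation (wage x hours) falls
# far faster than the average wage per hour.
avg_wage_growth = 0.005    # hourly wages nearly flat (sticky)
hours_growth = -0.05       # hours worked fall sharply

total_comp_growth = (1 + avg_wage_growth) * (1 + hours_growth) - 1
print(f"average hourly wage growth: {avg_wage_growth:+.1%}")   # +0.5%
print(f"total compensation growth:  {total_comp_growth:+.1%}") # about -4.5%
```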

Speaking of futures markets, they are skeptical:

Since NGDP responds slowly to monetary policy, Sumner proposes a futures contract approach that would allow monetary policy to respond to expected future NGDP instead of current NGDP.[80] The Fed would set up a futures market in which participants would bet as to whether the future NGDP growth rate would exceed or fall short of the Fed’s target. The Fed would then adjust the monetary base, just as it does today, according to the bets. So, if traders on this NGDP prediction market thought nominal growth would exceed the Fed’s target, the Fed would reduce the base, and vice versa.[81]

This approach is based on the notion that the market is an efficient forecaster, but it could be problematic for a number of reasons.  For instance, the futures market could be subject to manipulation by large speculators,[82] or trading volume could be too low. More broadly, the futures-market approach would drastically limit the Fed’s discretion; the Fed would play a passive role. We think it would be more effective for the Fed to commit to pursuing the NGDP target in the medium run, taking into account the Fed’s own forecasts of future NGDP in its policy decisions.

Not surprisingly, this is one area where I do not agree.  But before explaining why, let me point out that I would strongly support their (Svenssonian) suggestion of targeting the central bank’s own internal medium-term NGDP forecast as a second-best policy, as long as it was part of a level targeting regime.

Now for my response:

1. The lack of discretion could be viewed as a feature, not a bug.  If you want to preserve some discretion, however, my “guardrails” approach can be employed. Indeed, even Bill Woolsey’s index futures convertibility approach allows for discretion, if the central bank sees one big speculator trying to manipulate the market.  (Keep in mind that all trades are with the central bank as the counter-party, so they’d know if someone were trying to manipulate the market.)  And of course manipulation would be almost impossible under the guardrails approach, where the central bank would promise to go short on 5% NGDP contracts, and long on 3% NGDP contracts (a toy sketch of this incentive structure follows point 2 below).  And finally, the same manipulation possibilities apply to a gold standard and/or Bretton Woods regime.  But if you search the literature on these regimes, you will discover almost nothing on “market manipulation”, at least when rates actually are fixed and stable.  (Selling a currency before devaluation doesn’t count, as no one expects the central bank would default on NGDP futures.)  I think it’s a needless worry.

2.  Low trading volume is not a problem; indeed the system does not require any trading at all.  Here’s an analogy.  A gold standard would work fine as long as people were free to convert currency into gold at a fixed price, regardless of whether any such trading actually occurred.  It would simply mean that monetary policy is on target.  And if you still are concerned about trading, the central bank can always create trading by paying a high enough interest rate on margin accounts.
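Here is a toy sketch of the incentive structure behind the guardrails idea mentioned in point 1. The 3% and 5% strikes come from the description above; the linear payoff form and the numbers are my own illustrative assumptions, not a precise specification of the proposal.

```python
def expected_profit(forecast, strike, side, notional=1.0):
    """Expected payoff on a stylized NGDP futures contract that settles at
    (actual NGDP growth - strike) * notional.  A 'long' position profits when
    growth comes in above the strike, a 'short' position when it comes in below."""
    sign = 1.0 if side == "long" else -1.0
    return sign * (forecast - strike) * notional

# Guardrails: the central bank always takes the short side of 5% contracts and
# the long side of 3% contracts, so traders may go long at 5% or short at 3%.
for forecast in (0.02, 0.04, 0.06):
    long_at_5 = expected_profit(forecast, 0.05, "long")    # bet that growth > 5%
    short_at_3 = expected_profit(forecast, 0.03, "short")  # bet that growth < 3%
    print(f"trader forecast {forecast:.0%}: "
          f"long-at-5% EV {long_at_5:+.3f}, short-at-3% EV {short_at_3:+.3f}")
```

Only forecasts outside the 3%-5% band make either trade profitable in expectation, so heavy one-sided volume at a guardrail is itself a signal that expected NGDP growth has drifted off target, and the central bank, as counter-party to every trade, sees it immediately.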

Even if NGDP futures markets are not to be used to set the policy instrument, there is NO EXCUSE for the failure of central banks to set up NGDP prediction markets, and subsidize trading.  This would provide essential high frequency data on NGDP expectations after important monetary policy events, and hence would be invaluable to monetary researchers.  Their failure to do so is gross dereliction of duty, which future generations will look back on in disbelief.  I would have loved to have such a market in the second half of 2008, exposing all their foolish decisions.

HT:  Dilip

An unruly debate over policy rules

George Selgin has a new piece criticizing Narayana Kocherlakota on policy rules. Here’s the intro:

I was just about to treat myself to a little R&R last Friday when — wouldn’t you know it? — I received an email message from the Brookings Institution’s Hutchins Center. The message alerted me to a new Brookings Paper by former Minneapolis Fed President Narayana Kocherlakota. The paper’s thesis, according to Hutchins Center Director David Wessel’s summary, is that the Fed “was — and still is — trapped by adherence to rules.”

Having recently presided over a joint Mercatus-Cato conference on “Monetary Rules for a Post-Crisis World” in which every participant, whether favoring rules or not, took for granted that the Fed is a discretionary monetary authority if there ever was one, I naturally wondered how Professor Kocherlakota could claim otherwise. I also wondered whether the sponsors and supporters of the Fed Oversight Reform and Modernization (FORM) Act realize that they’ve been tilting at windmills, since the measure they’ve proposed would only require the FOMC to do what Kocherlakota says it’s been doing all along.

So, instead of making haste to my favorite watering hole, I spent my late Friday afternoon reading,“Rules versus Discretion: A Reconsideration.”  And a remarkable read it is, for it consists of nothing less than an attempt to champion the Fed’s command of unlimited discretionary powers by referring to its past misuse of what everyone has long assumed to be those very powers!

To pull off this seemingly impossible feat, Kocherlakota must show that, despite what others may think, the FOMC’s past mistakes, including those committed during and since the recent crisis, have been due, not to the mistaken actions of a discretionary FOMC, but to that body’s ironclad commitment to monetary rules, and to the Taylor Rule especially.

The post is much longer and provides a detailed rebuttal of Kocherlakota’s claims. Those who read George know that he is a formidable debater, and I end up much more sympathetic to his view than to Kocherlakota’s view of discretion.  But it’s also important to be as generous to the other side as possible, so that their views don’t seem too outlandish.  Here are two senses in which it might make sense to criticize the Fed for an excessively rules-based approach.

1. One criticism would be that the Fed was too tied to inflation targeting in 2008, when it would have been appropriate to also look at other variables.  In that case, George and I would argue for something closer to an NGDP target.  But of course that’s not discretion, it’s a different rule.

2.  Kocherlakota’s main objection seems to be that the Fed put too much weight on Taylor Rule-type thinking. One can certainly cite periods where money was (in retrospect) too tight, especially during and after 2008.  And it’s very possible that the excessive tightness was partly related to Taylor Rule-type thinking.  Thus I would not disagree with the claim that, at any given point of time, the Taylor Rule formula might lead to poor policy choices.

But in the end I don’t find his claim to be all that convincing.  Selgin demonstrates fairly convincingly that while the Fed may have occasionally paid lip service to Taylor Rule-type thinking, it was in no sense following a specific instrument rule. John Taylor also makes this argument, and indeed criticized policy for being too expansionary during the housing boom.  I’m going to try to better explain this distinction by reviewing the so-called “monetarist experiment” of 1979-82.

The conventional wisdom is that the Fed adopted money supply targeting in 1979, ran the policy for three years, and then abandoned it in 1982 due to unstable velocity.  As always in macroeconomics, the conventional view is wrong.  Just as wrong as the view that fiscal stimulus contributed to the high inflation of the 1960s, or that oil shocks played a big role in the high inflation of the 1970s, or that Volcker had a tight money policy during his first 18 months as Fed chair. Here’s what actually happened:

1.  The Fed said it would start targeting the money supply, but it did not do so.

2.  The Fed had a stop–go policy in 1979-81, and then a contractionary policy in 1981-82.

3.  Velocity fell sharply in 1982, just as the monetarist model predicts.

The three-year “monetarist experiment” saw inflation fall from double digits to about 4%.  That would be expected to reduce velocity.  Friedman’s 4% money supply rule is supposed to work by keeping inflation expectations stable, so that velocity will be more stable.  But inflation expectations were not kept stable during 1979-82, so the policy was never really tested long enough to see if it works.

Having said that, I’d rather not give the 4% money growth rule a “fair test”, as that would take decades, and would probably result in the policy failing for other reasons.  But 1979-82 told us essentially nothing about the long-run effect of money supply targeting.  It wasn’t even tried.
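For readers who want the velocity mechanism spelled out, it is just the equation of exchange plus the fact that money demand rises when nominal rates fall:

$$ MV \equiv PY \quad\Longrightarrow\quad \Delta \ln V \approx \Delta \ln (PY) - \Delta \ln M $$

Velocity moves with nominal interest rates. The 1979-82 disinflation pushed expected inflation and nominal rates down sharply, raising the demand for money balances and pulling $V$ down, exactly as a monetarist model predicts. None of that tells us what a steady 4% money growth rule would have delivered, because no such rule was actually followed.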

In a similar way, the Fed set interest rates far below Taylor Rule levels during the housing boom.  So even if a momentary switch to Taylor Rule policy did occur during the housing bust, it’s not really a fair test of the policy.  It’d be like driving rapidly toward the edge of a cliff, ripping off the steering wheel and handing it to the passenger, and then saying “OK, now you drive”.

Having said that, I also don’t favor the Taylor Rule.  And even John Taylor is open to modifications based on new information as to the actual level of things like the equilibrium interest rate.  When I’ve heard him talk, he has called on the Fed to adopt some sort of clear, transparent procedure for policy, so that when it makes a move, it will be something the markets could have fully anticipated based on publicly available macro/financial data. In that case, I agree, which is why I am more sympathetic than many other economists to what the House is trying to do with its proposed Fed policy rule legislation.

If the Fed won’t adopt my futures targeting approach, then I’d at least like them to make their actual decision-making transparent.  Show us the model of the economy, and the reaction function.  If the model later has to be tweaked because of new developments in macro, that’s OK.  Keep us abreast of how it’s being tweaked.  If FOMC members use multiple models then show us each model, and set the fed funds target at the median vote.  However they make decisions, Fed announcements should never be a surprise.

I’d also like to make a point about “rules” in the sense of policy goals, rather than instrument rules.  People often seem to assume that NGDPLT is a “rule” in the same sense that 2% inflation targeting is a rule.  Not so; NGDPLT is more rule-like than current Fed policy, in two important ways:

1. Any sort of NGDP targeting tells us exactly where the Fed wants the macro economy to be in the future, whereas 2% inflation targeting is much more vague. That’s because the dual mandate doesn’t tell us how much weight the Fed puts on inflation and how much they put on employment.  Even worse, the Fed often predicts procyclical inflation, which would seem to violate the dual mandate.  So it’s not at all clear what they are trying to do.  Should the Fed try to push inflation a bit above 2% during periods of low unemployment, so that it averages 2% over the entire cycle?  I don’t think so, but I have no idea what the Fed thinks, as they don’t tell us.  With NGDP targeting it’s clear—aim for countercyclical inflation.  (A back-of-the-envelope illustration of this point, and of the next, follows point 2 below.)

2.  Going from NGDP growth rate targeting to NGDPLT also makes policy much more rules-based.  Under NGDPLT, I have a very good sense of where the Fed wants NGDP to be 10 years from now.  That helps us make more intelligent decisions on things like determining the proper yield on 10-year corporate bonds. If they miss the target on a given year, I know that they will aim to get back onto the old trend line.  In contrast, the Fed currently doesn’t have any clear policy when they miss the target.  Are they completely letting bygones be bygones, or do they aim to recover a part of the lost ground?  I have no idea.  It seems to vary from one business cycle to the next.  In 2003-06 they regained ground lost in the 2001 recession, but in 2009-15 they did not do so.
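To put rough numbers on both points, here is a back-of-the-envelope sketch assuming a 4% NGDP target path; the figures are illustrative and do not come from the Fed.

```python
# (1) Growth-rate targeting implies countercyclical inflation:
#     NGDP growth = inflation + real growth (approximately), so with a 4%
#     target the implied inflation rate rises whenever real growth falls.
ngdp_target_growth = 0.04
for real_growth in (0.03, 0.01, -0.01):
    implied_inflation = ngdp_target_growth - real_growth
    print(f"real growth {real_growth:+.0%} -> implied inflation {implied_inflation:+.0%}")

# (2) Level targeting pins down the future path and implies catch-up growth:
base = 100.0
target_next_year = base * (1 + ngdp_target_growth)   # where NGDP "should" be
actual_next_year = base * (1 + 0.02)                  # suppose the Fed undershoots
catchup = target_next_year * (1 + ngdp_target_growth) / actual_next_year - 1
print(f"growth needed in year two to regain the target path: {catchup:.1%}")
```

Under growth-rate targeting the Fed would simply aim for 4% again after the miss; under level targeting it aims for roughly 6% until the old trend line is regained, which is what lets you say where NGDP will be ten years from now.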

I certainly also favor a policy rule for the instrument setting, but even if the instrument setting were 100% discretionary, I would view NGDPLT as an order of magnitude more rule-like than a “flexible” 2% inflation target, which also takes employment into account.

NGDP Advisers

There’s a new post by James Alexander, Benjamin Cole, Justin Irving, and Marcus Nunes describing their consulting firm, called NGDP-Advisers:

After a six-year run, during which Historinhas helped spread the Market Monetarist approach, this blog will undergo a metamorphosis, becoming NGDP-Advisers. The blog will continue but be augmented by new products that will be available via subscription.

.  .  .

At NGDP Advisers, we hope not only to continue our examination of the global economy, but also to recognize realities and advise accordingly. We’ll yell from the cliff tops ‘what should be’, but we’ll also help you get ready for what ‘will be’.

Please join us at ngdp-advisers.com, the best is yet to come. The Historinhas blog will stay up but dormant, and recent and all future posts will be freely available here ngdp-advisers.com/blog/ 

This is good to see.  Ideas are taken more seriously when they move beyond academia and out into the marketplace.  I’ve added them to my blog roll and look forward to reading what they have to say.  If I’m not mistaken, the four participants reside on four different continents, so I expect an international perspective.

Speaking of NGDP, I found a new SSRN working paper (by Jonathan Benchimol and André Fourçans), with the following abstract:

Since the beginning of the financial crisis, a lively debate has emerged regarding which monetary policy rule the Fed (and other central banks) should follow, if any. To clarify this debate, several questions must be answered. Which monetary policy rule fits best the historical data? Which monetary policy rule best minimizes economic uncertainty and the Fed’s loss function? Which rule is best in terms of household welfare? Among the different rules, are NGDP growth or level targeting rules a good option, and when? Do they perform better than Taylor-type rules? To answer these questions, we use Bayesian estimations to test the Smets and Wouters (2007) model under nine different monetary policy rules with US data from 1955 to 2015 and over three different sub-periods. We find that when considering only the central bank’s loss function, the estimates generally indicate the superiority of NGDP level targeting rules, whatever the period. However, if other criteria are considered, the central bank’s objectives are not consistently met by a single rule for all periods.

I was pleasantly surprised by their findings, as traditional loss function criteria are biased against NGDP targeting, by assuming that inflation instability is what matters, whereas it’s actually NGDP growth instability that is the problem.
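To see why the standard criterion is stacked against NGDP targeting, note that the typical central bank loss function is something like the quadratic form on the left below (standard textbook notation; the weights vary by paper), whereas the market monetarist claim is that welfare losses track something closer to the term on the right:

$$ L = \operatorname{Var}(\pi) + \lambda \operatorname{Var}(y) \qquad \text{vs.} \qquad L' = \operatorname{Var}(\Delta n) $$

A criterion built around the variance of inflation mechanically penalizes a regime that deliberately lets inflation move countercyclically, even when that regime delivers a more stable path for nominal income.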