Archive for September 2016

 
 

An unruly debate over policy rules

George Selgin has a new piece criticizing Narayana Kocherlakota on policy rules. Here’s the intro:

I was just about to treat myself to a little R&R last Friday when — wouldn’t you know it? — I received an email message from the Brookings Institution’s Hutchins Center. The message alerted me to a new Brookings Paper by former Minneapolis Fed President Narayana Kocherlakota. The paper’s thesis, according to Hutchins Center Director David Wessel’s summary, is that the Fed “was — and still is — trapped by adherence to rules.”

Having recently presided over a joint Mercatus-Cato conference on “Monetary Rules for a Post-Crisis World” in which every participant, whether favoring rules or not, took for granted that the Fed is a discretionary monetary authority if there ever was one, I naturally wondered how Professor Kocherlakota could claim otherwise. I also wondered whether the sponsors and supporters of the Fed Oversight Reform and Modernization (FORM) Act realize that they’ve been tilting at windmills, since the measure they’ve proposed would only require the FOMC to do what Kocherlakota says it’s been doing all along.

So, instead of making haste to my favorite watering hole, I spent my late Friday afternoon reading, “Rules versus Discretion: A Reconsideration.”  And a remarkable read it is, for it consists of nothing less than an attempt to champion the Fed’s command of unlimited discretionary powers by referring to its past misuse of what everyone has long assumed to be those very powers!

To pull off this seemingly impossible feat, Kocherlakota must show that, despite what others may think, the FOMC’s past mistakes, including those committed during and since the recent crisis, have been due, not to the mistaken actions of a discretionary FOMC, but to that body’s ironclad commitment to monetary rules, and to the Taylor Rule especially.

The post is much longer and provides a detailed rebuttal of Kocherlakota’s claims. Those who read George know that he is a formidable debater, and I end up much more sympathetic to his view than to Kocherlakota’s view of discretion.  But it’s also important to be as generous to the other side as possible, so that their views don’t seem too outlandish.  Here are two senses in which it might make sense to criticize the Fed for an excessively rules-based approach.

1. One criticism would be that the Fed was too tied to inflation targeting in 2008, when it would have been appropriate to also look at other variables.  In that case, George and I would argue for something closer to an NGDP target.  But of course that’s not discretion, it’s a different rule.

2.  Kocherlakota’s main objection seems to be that the Fed put too much weight on Taylor Rule-type thinking. One can certainly cite periods where money was (in retrospect) too tight, especially during and after 2008.  And it’s very possible that the excessive tightness was partly related to Taylor Rule-type thinking.  Thus I would not disagree with the claim that, at any given point in time, the Taylor Rule formula might lead to poor policy choices.
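For reference, here is a minimal sketch of the original Taylor (1993) formula, just to pin down what “Taylor Rule-type thinking” refers to. The 2% equilibrium real rate, 2% inflation target, and 0.5 coefficients are the textbook values; actual proposals, including Taylor’s own later variants, use different inputs.

```python
# A minimal sketch of the original Taylor (1993) rule, using the textbook
# coefficients.  This is only a benchmark for the discussion above, not a
# claim about what the Fed (or Kocherlakota) actually uses.

def taylor_rule_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Recommended policy rate, with all arguments in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Example: 2% inflation with a -4% output gap implies a 2% funds rate,
# while the same inflation with a zero gap implies 4%.
print(taylor_rule_rate(inflation=2.0, output_gap=-4.0))  # 2.0
print(taylor_rule_rate(inflation=2.0, output_gap=0.0))   # 4.0
```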

But in the end I don’t find his claim to be all that convincing.  Selgin demonstrates fairly convincingly that while the Fed may have occasionally paid lip service to Taylor Rule-type thinking, it was in no sense following a specific instrument rule. John Taylor also makes this argument, and indeed criticized policy for being too expansionary during the housing boom.  I’m going to try to better explain this distinction by reviewing the so-called “monetarist experiment” of 1979-82.

The conventional wisdom is that the Fed adopted money supply targeting in 1979, ran the policy for three years, and then abandoned it in 1982 due to unstable velocity.  As always in macroeconomics, the conventional view is wrong.  Just as wrong as the view that fiscal stimulus contributed to the high inflation of the 1960s, or that oil shocks played a big role in the high inflation of the 1970s, or that Volcker had a tight money policy during his first 18 months as Fed chair. Here’s what actually happened:

1.  The Fed said it would start targeting the money supply, but it did not do so.

2.  The Fed had a stop–go policy in 1979-81, and then a contractionary policy in 1981-82.

3.  Velocity fell sharply in 1982, just as the monetarist model predicts.

The three-year “monetarist experiment” saw inflation fall from double digits to about 4%.  That would be expected to reduce velocity.  Friedman’s 4% money supply rule is supposed to work by keeping inflation expectations stable, so that velocity will be more stable.  But inflation expectations were not kept stable during 1979-82, so the policy was never really tested long enough to see if it works.  Having said that, I’d rather not give the 4% money growth rule a “fair test”, as that would take decades, and would probably result in the policy failing for other reasons.  But 1979-82 told us essentially nothing about the long-run effect of money supply targeting.  It wasn’t even tried.
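For readers who want the arithmetic, here is a sketch of the equation-of-exchange logic behind that velocity point. The money supply and NGDP figures are invented for illustration; the only claim in the text is that velocity fell sharply in 1982.

```python
# Equation-of-exchange arithmetic (M * V = NGDP) behind the velocity point.
# All level numbers are placeholders, not actual 1982 data.

def velocity(ngdp, money_supply):
    """V = NGDP / M, how many times a dollar turns over per year."""
    return ngdp / money_supply

m = 100.0                                      # money supply index (placeholder)
print(velocity(ngdp=700.0, money_supply=m))    # V = 7.0

# If V drops 10% (to 6.3) while M grows 4% as Friedman's rule dictates,
# NGDP falls instead of growing at the intended steady rate:
print(104.0 * 6.3)                             # about 655, roughly 6.4% below 700
```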

In a similar way, the Fed set interest rates far below Taylor Rule levels during the housing boom.  So even if a momentary switch to Taylor Rule policy did occur during the housing bust, it’s not really a fair test of the policy.  It’d be like driving rapidly toward the edge of a cliff, ripping off the steering wheel and handing it to the passenger, and then saying “OK, now you drive”.

Having said that, I also don’t favor the Taylor Rule.  And even John Taylor is open to modifications based on new information as to the actual level of things like the equilibrium interest rate.  When I’ve heard him talk, he’s actually calling on the Fed to adopt some sort of clear, transparent procedure for policy, so that when they make a move, it will be something the markets could have fully anticipated based on publicly available macro/financial data. In that case, I agree, which is why I am more sympathetic than many other economists to what the House is trying to do with its proposed Fed policy rule legislation.

If the Fed won’t adopt my futures targeting approach, then I’d at least like them to make their actual decision-making transparent.  Show us the model of the economy, and the reaction function.  If the model later has to be tweaked because of new developments in macro, that’s OK.  Keep us abreast of how it’s being tweaked.  If FOMC members use multiple models then show us each model, and set the fed funds target at the median vote.  However they make decisions, Fed announcements should never be a surprise.
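To illustrate the sort of mechanical, publishable decision rule I have in mind, here is a minimal sketch. The model names and rate recommendations are invented; the point is only that a median-of-models procedure is fully predictable from published inputs.

```python
# Hypothetical illustration of the "publish each model, set the target at the
# median" idea described above.  Model names and numbers are invented.
from statistics import median

model_recommendations = {
    "member_A_model": 0.50,   # each model's recommended fed funds target, percent
    "member_B_model": 0.75,
    "member_C_model": 0.25,
    "member_D_model": 0.75,
    "member_E_model": 0.50,
}

target = median(model_recommendations.values())
print(f"Fed funds target: {target:.2f}%")   # 0.50%
```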

I’d also like to make a point about “rules” in the sense of policy goals, rather than instrument rules.  People often seem to assume that NGDPLT is a “rule” in the same sense that 2% inflation targeting is a rule.  Not so; NGDPLT is more rule-like than current Fed policy, in two important ways:

1. Any sort of NGDP targeting tells us exactly where the Fed wants the macro economy to be in the future, whereas 2% inflation targeting is much more vague. That’s because the dual mandate doesn’t tell us how much weight the Fed puts on inflation and how much they put on employment.  Even worse, the Fed often predicts procyclical inflation, which would seem to violate the dual mandate.  So it’s not at all clear what they are trying to do.  Should the Fed try to push inflation a bit above 2% during periods of low unemployment, so that it averages 2% over the entire cycle?  I don’t think so, but I have no idea what the Fed thinks, as they don’t tell us.  With NGDP targeting it’s clear—aim for countercyclical inflation.

2.  Going from NGDP growth rate targeting to NGDPLT also makes policy much more rules-based.  Under NGDPLT, I have a very good sense of where the Fed wants NGDP to be 10 years from now.  That helps us make more intelligent decisions on things like determining the proper yield on 10-year corporate bonds. If they miss the target in a given year, I know that they will aim to get back onto the old trend line.  In contrast, the Fed currently doesn’t have any clear policy for when they miss the target.  Are they completely letting bygones be bygones, or do they aim to recover a part of the lost ground?  I have no idea.  It seems to vary from one business cycle to the next.  In 2003-06 they regained ground lost in the 2001 recession, but in 2009-15 they did not do so.
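To make both of these points concrete, here is a sketch with made-up numbers, assuming a 4% NGDP level target; the target rate and the size of the miss are chosen purely for illustration.

```python
# Illustrative numbers only: how a 4% NGDP level target pins down both the
# inflation response to a real shock and the catch-up after a miss.

ngdp_growth_target = 4.0          # percent per year (assumed, for illustration)

# 1. Countercyclical inflation: with NGDP growth held at 4%, weaker real
#    growth mechanically means higher inflation, and vice versa.
for rgdp_growth in (3.0, 1.0):
    implied_inflation = ngdp_growth_target - rgdp_growth
    print(f"RGDP growth {rgdp_growth}% -> inflation {implied_inflation}%")

# 2. Level targeting catch-up: if NGDP comes in 2% below the target path,
#    next year's target growth rises so as to return to the old trend line.
path_level = 104.0                # where NGDP "should" be after year one (index)
actual_level = 102.0              # a 2% miss
next_year_path = path_level * 1.04
catch_up_growth = (next_year_path / actual_level - 1) * 100
print(f"Required NGDP growth next year: {catch_up_growth:.1f}%")  # about 6%
```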

I certainly also favor a policy rule for the instrument setting, but even if the instrument setting were 100% discretionary, I would view NGDPLT as an order of magnitude more rule-like than a “flexible” 2% inflation target, which also takes employment into account.

Utility in the 21st century

Utility is one of those things we can’t measure, but we know it when we see it. The 20th century was about becoming “developed” (in North America, Europe, Australia and parts of East Asia). What will the 21st century be about? After all, going from one to two refrigerators is a much smaller jump than going from zero to one.

One possibility is drugs.  Will this 25-year-old Malaysian woman end up being the most influential person in the 21st century?  (The Norman Borlaug of the 21st century.)

[Photo of Shu Lam]

She is all of 25 and may have already made one of the most significant discoveries of our time.

Scientists in Australia this week took a quantum leap in the war on superbugs, developing a chain of star-shaped polymer molecules that can destroy antibiotic-resistant bacteria without hurting healthy cells. And the star of the show is 25-year-old Shu Lam, a Malaysian-Chinese PhD candidate at the University of Melbourne, who has developed the polymer chain in the course of her thesis research in antimicrobials and superbugs.

A polymer is a large molecule composed of several similar subunits bonded together. Polymers can be used to attack superbugs physically, unlike antibiotics, which attempt to kill these bugs chemically and kill nearby healthy cells in the process.

“I’ve spent the past three and a half years researching polymers and looking at how they can be used to kill antibiotic resistant bacteria,” or superbugs, she told This Week in Asia, adding the star-shaped polymers work by tearing into the surface membrane of the bacteria, triggering the cell to kill itself. . . .

Her group is also examining the use of polymers as a drug carrier for cancer patients as well as the treatment of other diseases.

A key project at the moment is the synthetic transplant of cornea in the eye, which involves the use of polymers grown from the patient’s own cells in the lab to replace the damaged cornea.

The operation has already been tested multiple times successfully on sheep, and Qiao hopes to begin the first human trials in Melbourne within two years, working with the Melbourne Eye and Ear Hospital.

And from a bit more authoritative source (Science Daily):

The study, published today in Nature Microbiology, holds promise for a new treatment method against antibiotic-resistant bacteria (commonly known as superbugs).

The star-shaped structures are short chains of proteins called ‘peptide polymers’ and were created by a team from the Melbourne School of Engineering.

The team included Professor Greg Qiao and PhD candidate Shu Lam, from the Department of Chemical and Biomolecular Engineering, as well as Associate Professor Neil O’Brien-Simpson and Professor Eric Reynolds from the Faculty of Medicine, Dentistry and Health Sciences and Bio21 Institute.

Professor Qiao said that currently the only treatment for infections caused by bacteria is antibiotics. However, over time bacteria mutate to protect themselves against antibiotics, making treatment no longer effective. These mutated bacteria are known as ‘superbugs’.

“It is estimated that the rise of superbugs will cause up to ten million deaths a year by 2050. In addition, there have only been one or two new antibiotics developed in the last 30 years,” he said.

OK, I know that preliminary research often looks promising in medicine, and that “more research is needed”—blah, blah, blah.

But even so, here’s a question for you infrastructure enthusiasts.  Which would do more for global utility in 2050, if you had to pick just one:

1.  Having the government spend $80 billion on a single (sort of) high-speed rail line from LA to the Bay Area?

2.  Funding 80 different research labs all over the world that are studying treatments for superbugs, to the tune of $1 billion each.

Another question:  Does it make sense to go after China for “stealing” the idea behind an 80-year-old Snow White film, based on a 200-year-old European folktale, while constantly bashing the drug companies for high prices?

1946

Over at Econlog I did a post discussing the austerity of 1946.  The Federal deficit swung from over 20% of GDP during fiscal 1945 (mid-1944 to mid-1945) to an outright surplus in fiscal 1947.  Policy doesn’t get much more austere than that! Even worse, the austerity was a reduction in government output, which Keynesians view as the most potent part of the fiscal mix.  I pointed out that employment did fine, with the unemployment rate fluctuating between 3% and 5% during 1946, 1947 and 1948, even as Keynesian economists had predicted a rise in unemployment to 25% or even 35%—i.e. worse than the low point of the Great Depression.  That’s a pretty big miss in your forecast, and it made me wonder about the validity of the model they used.

One commenter pointed out that RGDP fell by over 12% between 1945 and 1946, and that lots of women left the labor force after WWII.  So does a shrinking labor force explain the disconnect between unemployment and GDP?  As far as I can tell it does not, which surprised even me.  But the data is patchy, so please offer suggestions as to how I could do better.

Let’s start with hours worked per week, the data that is most supportive of the Keynesian view:

[Chart: average weekly hours worked, 1945-46]

Weekly hours worked dropped about 5% between 1945 and 1946. Does that help explain the huge drop in GDP?  Not as much as you’d think. Here’s the civilian labor force:

[Chart: civilian labor force, 1945-46]

So the labor force grew by close to 9%, which means that even with the 5% drop in weekly hours, labor input measured in worker-hours probably grew.  Indeed, once you add in the roughly 3-percentage-point jump in the unemployment rate, it appears as if the total number of hours worked was little changed between 1945 and 1946 (9% – 5% – 3%).  Which is really weird given that RGDP fell by 22% from the 1945Q1 peak to the 1947Q1 trough, a decline closer to the 36% decline during the Great Contraction than to the 3% fall during the Great Recession.
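Here is the back-of-the-envelope arithmetic behind that parenthetical, using the rounded figures cited above.

```python
# Rough accounting behind the (9% - 5% - 3%) parenthetical, using the
# approximate 1945-to-1946 changes cited in the text (all figures rounded).

labor_force_growth = 0.09     # civilian labor force up ~9%
weekly_hours_change = -0.05   # average weekly hours down ~5%
employment_rate_1945 = 0.99   # ~1% unemployment in 1945
employment_rate_1946 = 0.96   # ~4% unemployment in 1946 (a ~3-point rise)

# Index of total civilian hours worked (1945 = 1.0)
hours_index_1946 = (1 + labor_force_growth) \
    * (employment_rate_1946 / employment_rate_1945) \
    * (1 + weekly_hours_change)

print(f"Total hours in 1946 relative to 1945: {hours_index_1946:.3f}")  # roughly 1.00
```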

That’s all accounting, which is interesting, but it doesn’t really tell us what caused the employment miracle.  I’d like to point to NGDP, which did grow very rapidly between 1946 and 1948, but even that doesn’t quite help, as it fell by about 10% between early 1945 and early 1946.

Here’s why I think that the NGDP (musical chairs) model did not work this time. Let’s go back to the hours worked, and think about why they were roughly unchanged.  You had two big sets of factors pushing hard in opposite directions. Hours worked were pushed up by 10 million soldiers suddenly entering the civilian workforce.  In the offsetting direction were three factors: a smaller number of workers (mostly women) leaving the workforce, unemployment rising from 1% to 4%, and average weekly hours falling by about 5%.  All that netted out to roughly zero change in hours worked.

So why did RGDP fall so sharply?  Keep in mind that while those soldiers were fighting WWII, their pay was a part of GDP. They helped make the “G” part of GDP rise to extraordinary levels in the early 1940s.  But when the war ended, that military pay stopped.  Many then got jobs in the civilian economy.  Now they were counted as part of hours worked. (Soldiers aren’t counted as workers.)  That artificially depressed productivity.

It’s also worth noting that real hourly wages fell by nearly 10% between February 1945 and November 1946:

[Chart: real hourly wages in manufacturing, February 1945 to November 1946]

This data only applies to manufacturing workers. But keep in mind that the 1940s was the peak period of unionism, so I’d guess service workers did even worse.  So my theory is that the sudden drop in NGDP in 1946 was an artifact of the end of massive military spending, and that the strong growth in NGDP during 1946-48, which reflected high inflation, helped to stabilize the labor market.  When the inflation ended in 1949, real wages rose and we had a brief recession.  By 1950, the economy was recovering, even before the Korean War broke out in late June.
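Here is a minimal sketch of the sticky-wage arithmetic behind that story. The nominal wage and CPI figures are invented, chosen only to reproduce a real wage decline of roughly the size shown in the chart above.

```python
# Sticky-wage arithmetic: if prices rise faster than nominal wages, the real
# wage falls and labor gets cheaper to employ.  The growth figures below are
# invented; the text's actual claim is only that real manufacturing wages fell
# nearly 10% between February 1945 and November 1946.

nominal_wage_growth = 0.05    # assumed cumulative nominal wage growth
cpi_inflation = 0.16          # assumed cumulative inflation over the period

real_wage_change = (1 + nominal_wage_growth) / (1 + cpi_inflation) - 1
print(f"Real wage change: {real_wage_change:.1%}")   # about -9.5%
```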

Obviously 1946 was an unusual year, and it’s hard to draw any policy lessons.  At Econlog, I pointed out that the high inflation occurred without any “concrete steppes” by the Fed; T-bill yields stayed at 0.38% during 1945-47 and the monetary base was pretty flat.  Some of the inflation represented the removal of price controls, but I suspect some of it was purely (demand-side) monetary—a rise in velocity as fears of a post-war recession faded.

This era shows that you can have a lot of “reallocation” and a lot of austerity, without necessarily seeing a big rise in unemployment.  And if you are going to make excuses for the Keynesian model, you also have to recognize that most Keynesians got it spectacularly wrong at the time.  Keynesians often make a big deal of Milton Friedman’s false prediction that inflation would rise sharply after 1982, but tend to ignore another monetarist (William Barnett, pp. 22-23) who correctly forecast that it would not rise.  OK, then the same standards should apply to the flawed Keynesian predictions of 1946.

Tyler Cowen used to argue that 2009 showed that we weren’t as rich as we thought we were.  I think 1946 and 2013 (another failed Keynesian prediction) show that we aren’t as smart as we thought we were.

Update:  David Henderson has some more observations on this period.

 

The Fed begins to see the light

With each Fed statement, they inch a bit closer to market monetarism.  The newest statement lowered the forecasts for the future level of interest rates (the so-called dot plot) by about 50 basis points.  That’s still too high, but no longer as out of touch as they were a year or two ago.  Kudos to Kocherlakota and Bullard for seeing the light before the rest of the FOMC.  The long-term trend RGDP growth estimate was lowered again, this time from 2.0% to 1.8%.  That’s still too high, but it’s getting closer to my estimate of 1.2%.  (The actual growth rate over the past decade has been 1.28%, but I believe the trend is still slowing.)
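For what it’s worth, these trend figures are just compound annual growth rates. Here is the arithmetic, with placeholder GDP levels chosen to produce roughly the 1.28% figure, not actual BEA data.

```python
# Compound annual growth rate arithmetic behind trend-RGDP estimates like the
# 1.28% figure cited above.  The level numbers are placeholders, not real data.

rgdp_start = 100.0            # real GDP index ten years ago (placeholder)
rgdp_end = 113.6              # real GDP index today (placeholder)
years = 10

cagr = (rgdp_end / rgdp_start) ** (1 / years) - 1
print(f"Trend RGDP growth: {cagr:.2%}")   # about 1.28%
```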

Question:  Has any school of thought been more accurate than market monetarism (over the past 8 years) regarding these issues:

1.  QE is expansionary.

2.  Negative IOR is expansionary.

3.  Forward guidance can be expansionary.

4.  Low rates are here to stay.

5.  Inflation won’t be a problem.

6.  NGDP and RGDP growth will be slower than the Fed expected.

7.  Sweden erred in not following Svensson’s advice.

8.  Trichet screwed up in 2011.

9.  Monetary policy would offset fiscal austerity in 2013.

10.  Cross-sectional evidence for fiscal stimulus vanishes when confined to countries with independent monetary policy.

11.  Denmark would not be forced to revalue their currency upwards.

12.  Ending extended unemployment insurance in 2014 would accelerate job growth by about 1/2 million.

13.  Abenomics would increase Japan’s inflation rate.

14.  Switzerland would make their zero bound problem worse by revaluing the franc.

15. “Austerity” would not stop the UK unemployment rate from falling to full employment.

Why the BOJ policy move (mostly) lacked credibility

The BOJ’s recent decision is likely to end up being far more important than anything the Fed does or does not do today.  But as of now it raises more questions than answers:

1.  The BOJ announced it would cap 10-year bond yields at 0%, and also that it would attempt to overshoot its 2% inflation target.

2.  The BOJ did not announce lower IOR or more QE.

Today’s market reaction is hard for me to gauge.  The initial reaction was clearly positive, as stocks rose nearly 2%, and the yen fell by almost 1%.  Later, however, the yen more than regained the ground it lost.  That doesn’t mean the BOJ action had no impact, just that whatever impact it had was at most slightly more than markets anticipated.

The overshoot promise could be viewed as either Krugman’s “promise to be irresponsible”, or as a baby step toward level targeting.  Martin Sandbu gives a third interpretation—a signal that the target really is symmetrical, despite all the talk about a de facto 2% inflation ceiling in many countries.  There’s no reason that all three interpretations could not be a little bit true—after all, central bank policy is made by committees.

I vaguely recall Bernanke suggesting something like a long term interest rate peg—can anyone confirm?  I have mixed feelings about this idea.  On the one hand, I like moving monetary policy away from a QE/negative IOR approach, and toward a price peg approach.  But I view interest rates as almost the worst price to peg, for standard NeoFisherian reasons.  Are low long-term rates easy money, or a sign that money remains tight?  That’s not at all clear.

Although I am disappointed by the specific steps taken today, these actions do make me more optimistic in one sense.  The BOJ has shown that it’s still willing to experiment, and that it still wants to raise inflation.  Here’s an analogy.  When the Fed first engaged in forward guidance, they did so in a very ineffective manner—low rates for X number of years.  This was criticized as being rather ambiguous—in much the same way the BOJ’s 10-year bond yield cap is ambiguous.  So the next step in forward guidance was to make the interest rate commitment conditional on the economy, a major improvement.  Perhaps the BOJ’s next step will be to switch to a price level target, in order to make the size of the inflation overshoot more concrete.  Or maybe instead of capping 10-year bond yields, they’ll peg something more unambiguous, such as the yen against a basket of currencies.  If that’s too controversial (and it probably is) then peg the yen against a CPI futures contract.

The danger is that this specific move won’t work, and the backlash will prevent the BOJ from moving further down the road in the future.  Maybe that’s why the yen reversed course a few hours later.

PS.  It just occurred to me that the 10-year bond yield cap could be viewed as a sort of commitment to enact a policy expected to lead the yen to appreciate by at least 1.68%/year against the dollar (for standard interest parity reasons).  That’s a non-NeoFisherian way of explaining why I’m skeptical.  In the long run, this bond yield cap means that Japanese inflation is likely to be 1.68% lower than US inflation.  They really needed to do the opposite of what the Swiss did early last year—they should have sharply depreciated the yen, and simultaneously raised interest rates.
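Here is the arithmetic, taking the 1.68% figure to be the US 10-year Treasury yield at the time (my assumption; the post doesn’t say so explicitly).

```python
# Interest-parity arithmetic behind the 1.68% claim.  The US yield below is an
# assumption (presumably the 10-year Treasury yield when this was written);
# the JGB yield is the BOJ's announced 0% cap.

us_10y_yield = 0.0168     # assumed US 10-year Treasury yield
jgb_10y_cap = 0.0000      # BOJ's 10-year JGB yield cap

# Uncovered interest parity: the low-yield currency is expected to appreciate
# by roughly the yield gap each year.
expected_yen_appreciation = us_10y_yield - jgb_10y_cap
print(f"Expected yen appreciation: {expected_yen_appreciation:.2%} per year")

# In the long run, relative purchasing power parity turns that nominal
# appreciation into an inflation differential.
implied_inflation_gap = jgb_10y_cap - us_10y_yield
print(f"Japanese inflation minus US inflation: {implied_inflation_gap:.2%}")
```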

PPS.  Kudos to Paul Krugman.  Ideas that start out seeming very “ivory tower”, such as promising to be irresponsible, can end up being enacted, at least in part.  Unfortunately, Krugman first proposed that idea under the (quite reasonable) assumption that liquidity traps would be temporary.  The markets don’t seem to believe that any longer, at least with respect to Japan.

Update:  Kgaard added this comment:

Scott — My understanding is that at the post-announcement press conference Kuroda said he would not be upping the amount of monthly bond purchases, and this is what turned the yen. Seems to me that what you wrote a couple days ago is entirely relevant here: If they SAY they want 2% CPI but then take actions consistent with 0% CPI, then what they really are targeting is 0% CPI until further notice, and investors will respond accordingly. Hence stronger JPY …

Update #2:  from commenter Mikio:

Kuroda did not say he will not expand purchases. On the contrary. He repeated again that the BOJ is ready to expand both the purchases as well as cut the IOR. But obviously they did not think they need to act now.

I think the jury is out there about this move. It’s non-progress if you look at the yen, it’s marginally positive if you look at stock market.

Update#3:  HL added the following:

Kgaard and Mikio are both right / wrong

After the longest delay for the statement release, markets learned at 01:18 pm Tokyo time, the following:

No change in policy balance rate at -0.1%
Monetary base expansion until inflation stable above 2%
JGB purchases in line with the current pace
MB/NGDP rate to hit 100% in 1 year (currently 80%)
Yield curve control introduced
Average maturity target scrapped
10 year JGB yield target around 0%
Forward guidance enhanced
Inflation overshooting commitment
Purchases to fluctuate to achieve curve control
Continue easing until inflation stably above 2%
No mention of timeframe
Comprehensive review on policy
NIRP helpful for decline in funding rates
NIRP didn’t seem to change banks’ willingness to lend
NIRP’s impact on yield curve, however, a bit problematic

Then during the preso (03:30~04:42), Bloomberg headlines
03:39 pm “No change in commitment to achieve 2% ASAP”
03:44 pm “Cutting minus rates further is still an option”
03:55 pm “Don’t think BOJ is coming close to limits”
03:55 pm “We just strengthened our framework”
04:12 pm “Expect inflation to hit 2% during FY 2017”
04:22 pm “amount of bond buying changes with the economy”
04:29 pm “BOJ won’t have JPY 80 trillion for JGB buying as fixed”
04:40 pm “New framework isn’t tapering”

So immediately after the statement release, markets had reasons to believe that some guidance on quantity part could be provided by Kuroda (continuing the purchase, etc). Then during the preso, Kuroda made that guidance more ambiguous. But the fact is: there is nothing in these comments to suggest that the BOJ will start tapering soon.

USDJPY had its strongest period between the statement release and the start of the preso Q&A. Then it was on a gradual decline even before Kuroda started making comments related to the quantity dimension. Eventually it settled around 101.70 before declining sharply at the start of New York session…