Why should we live in a low rate world?

The Economist can be very good on monetary policy.  For instance, they’ve endorsed NGDP targeting.  And then there are other times.  Check out the subtitle of their new cover story on living in a low rate world:

Central banks have been doing their best to pep up demand. Now they need help

Actually, they have not been doing their best, and it’s not even debatable:

1.   The Fed raised rates last December, and just a week ago indicated that it is likely to raise rates again later this year.  Is that doing your best to inflate?

2.  The ECB and the BOJ have mostly disappointed markets this year, offering up one announcement after another that was less expansionary than markets expected.

So no, they are not doing their best.  If at some point they do in fact do their best, and still come up short, then by all means give them help.

And what should that “help” look like?  Simple: give them more policy tools, i.e. a higher target, or the right to buy more kinds of assets.  Whatever help they need.

And then there is this:

To live safely in a low-rate world, it is time to move beyond a reliance on central banks. Structural reforms to increase underlying growth rates have a vital role. But their effects materialise only slowly and economies need succour now. The most urgent priority is to enlist fiscal policy. The main tool for fighting recessions has to shift from central banks to governments.

Actually, the Japanese have already shown that enlisting fiscal policy does not help.  In fact, Japanese NGDP growth has picked up a bit since 2013, despite the fact that fiscal policy has become tighter.  Instead of resigning ourselves to a low rate world, why not have central banks create a higher rate world, by raising their NGDP/inflation target?  And tell the banks to actually hit their targets.  A low rate world is a choice, not some inevitable fate sent down to us by the gods.

To their credit, they realize that infrastructure spending cannot stabilize a modern economy:

But infrastructure spending is not the best way to prop up weak demand. Ambitious capital projects cannot be turned on and off to fine-tune the economy. They are a nightmare to plan, take ages to deliver and risk becoming bogged down in politics. To be effective as a countercyclical tool, fiscal policy must mimic the best features of modern-day monetary policy, whereby independent central banks can act immediately to loosen or tighten as circumstances require.

But then they suggest something even less effective:

Politicians will not—and should not—hand over big budget decisions to technocrats. Yet there are ways to make fiscal policy less politicised and more responsive. Independent fiscal councils, like Britain’s Office for Budget Responsibility, can help depoliticise public-spending decisions, but they do nothing to speed up fiscal action. For that, more automaticity is needed, binding some spending to changes in the economic cycle. The duration and generosity of unemployment benefits could be linked to the overall joblessness rate in the economy, for example.

Actually, a number of studies show that extended unemployment benefits make unemployment even higher. When President Bush made unemployment benefits more generous during the 2008 recession, Brad DeLong correctly predicted that it would push unemployment 50 basis points higher by Election Day. Another example occurred in 2014, when we saw job creation accelerate by about 700,000 (from 2.3 million in 2013 to over 3 million in 2014), after the extended benefits were eliminated.  Exactly the opposite of what Keynesians like Paul Krugman expected.

I have a better idea: have the BoE adopt a more expansionary monetary policy.  Its governor will warn that this would push inflation above target. OK, but make up your mind—do you want more demand, or not?

Should Hillary claim that she also opposed the Iraq War?

The Iraq War was a debacle.  Hillary supported it.  So wouldn’t it be in her interest to start claiming she opposed it?  Forget the morality of the idea; would it work as a campaign strategy?

Many people probably find the suggestion to be absurd.  Almost ludicrous.  But why?

They’d say she supported the war.  There is a paper trail showing her support.  She could never get away with claiming she opposed it.  Maybe so, but Trump also supported the Iraq War, and did so publicly. Nonetheless, Trump does in fact claim that he opposed the Iraq War, and reporters do let him get away with it.  So why can’t Hillary?  Why the double standard?

Hillary’s core supporters include lots of smart/idealistic people like Paul Krugman, who would be outraged by Hillary lying about her support for the war. Yes, they let her shade the truth on murky personal questions like emails, but they’d be outraged by a bald-faced lie on a key policy issue.  She is seen as a competent manager of government (wrongly in my view).  In contrast, Trump is never seriously seen as someone who would actually govern the country. He’s running as a sort of troll, a way for voters to show their contempt for the establishment. The American Brexit. The truth value of his claims about the Iraq War has no importance, nor does that of his claims about the “40%” unemployment rate, or indeed anything else.  It makes no difference whether or not he favors a higher minimum wage, or infrastructure, or paying off the national debt in 8 years, or anything else.  He’s a troll, and all that matters is that he annoys the (global) establishment.

I’m a member of the global establishment, if you define the term loosely enough, and so I’m annoyed.  Not because his policies would hurt me; indeed, they’d massively help my career.  Every day after January 20th would be like Christmas, as I got to watch the looks on alt-right and supply-sider faces as they found out the truth about Trump. The blogosphere would go crazy.

Rather I’m annoyed because we have a track record here, and Trump people don’t seem to know it. Throughout history, there have been lots of right-wing demagogues who engaged in the big lie.  Today we see Duterte in the Philippines, Orban in Hungary, Putin in Russia, Erdogan in Turkey, etc.  A few years back we had Berlusconi in Italy.  In earlier decades we saw many others.

It never turns out well.  That’s what Trump supporters don’t get: it never turns out well.

Maybe Trump will be a first, but I doubt it.  (Of course, left-wing demagogues such as Chavez are usually even worse.)

PS.  I can already anticipate what commenters will say, but I know that deep down you guys don’t believe it.  Deep down even the most die-hard Trump supporter would be shocked if Hillary suddenly started claiming that she opposed the Iraq War.  So don’t waste your time denying it.  I don’t believe you.

PPS.  I would LOVE to see Hillary deny that she supported the Iraq War, in tonight’s debate.  As a prank, a way of making a point.  Trump would take the bait and insist she was lying.  But she could reply, “How is that different from your lies about opposing the war?”  After the debate the news media would be all over the issue, and would end up claiming that both sides were lying.  It would be the big story—man bites dog.  Then Hillary could say she was “just being sarcastic”, which is the get-out-of-jail-free card Trump always uses when he is caught saying something idiotic (something he didn’t realize at the time was idiotic).  She could say she was just trying to make a point—how ridiculous Trump’s lies are.

She won’t take my advice.  And I suppose she shouldn’t.  But it’s a nice daydream.

PPPS.  You want double standards?  Remember when Trump bragged about his sexual prowess in a GOP debate? Try to imagine a female candidate doing something comparable. It makes my head explode. I guess I have a double standard too.  (Any comments on the PPPS will be deleted.)

An unruly debate over policy rules

George Selgin has a new piece criticizing Narayana Kocherlakota on policy rules. Here’s the intro:

I was just about to treat myself to a little R&R last Friday when — wouldn’t you know it? — I received an email message from the Brookings Institution’s Hutchins Center. The message alerted me to a new Brookings Paper by former Minneapolis Fed President Narayana Kocherlakota. The paper’s thesis, according to Hutchins Center Director David Wessel’s summary, is that the Fed “was — and still is — trapped by adherence to rules.”

Having recently presided over a joint Mercatus-Cato conference on “Monetary Rules for a Post-Crisis World” in which every participant, whether favoring rules or not, took for granted that the Fed is a discretionary monetary authority if there ever was one, I naturally wondered how Professor Kocherlakota could claim otherwise. I also wondered whether the sponsors and supporters of the Fed Oversight Reform and Modernization (FORM) Act realize that they’ve been tilting at windmills, since the measure they’ve proposed would only require the FOMC to do what Kocherlakota says it’s been doing all along.

So, instead of making haste to my favorite watering hole, I spent my late Friday afternoon reading, “Rules versus Discretion: A Reconsideration.”  And a remarkable read it is, for it consists of nothing less than an attempt to champion the Fed’s command of unlimited discretionary powers by referring to its past misuse of what everyone has long assumed to be those very powers!

To pull off this seemingly impossible feat, Kocherlakota must show that, despite what others may think, the FOMC’s past mistakes, including those committed during and since the recent crisis, have been due, not to the mistaken actions of a discretionary FOMC, but to that body’s ironclad commitment to monetary rules, and to the Taylor Rule especially.

The post is much longer and provides a detailed rebuttal of Kocherlakota’s claims. Those who read George know that he is a formidable debater, and I end up much more sympathetic to his view than to Kocherlakota’s view of discretion.  But it’s also important to be as generous to the other side as possible, so that their views don’t seem too outlandish.  Here are two senses in which it might make sense to criticize the Fed for an excessively rules-based approach.

1. One criticism would be that the Fed was too tied to inflation targeting in 2008, when it would have been appropriate to also look at other variables.  In that case, George and I would argue for something closer to an NGDP target.  But of course that’s not discretion, it’s a different rule.

2.  Kocherlakota’s main objection seems to be that the Fed put too much weight on Taylor Rule-type thinking. One can certainly cite periods where money was (in retrospect) too tight, especially during and after 2008.  And it’s very possible that the excessive tightness was partly related to Taylor Rule-type thinking.  Thus I would not disagree with the claim that, at any given point in time, the Taylor Rule formula might lead to poor policy choices.
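For readers who want the benchmark being argued over, here is the textbook Taylor (1993) rule, in its original parameterization (my statement of the standard formula, not Kocherlakota’s or Selgin’s version):

```latex
% Taylor (1993) rule, with a 2% equilibrium real rate and a 2% inflation target
i_t = \pi_t + r^{*} + 0.5\,(\pi_t - \pi^{*}) + 0.5\,(y_t - y^{*}_t),
\qquad r^{*} = \pi^{*} = 2
```

Here $i_t$ is the recommended fed funds rate, $\pi_t$ is inflation over the previous year, and $y_t - y^{*}_t$ is the output gap in percent.  The argument below is about whether the Fed ever actually tied its hands to anything like this formula.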

But in the end I don’t find his claim to be all that convincing.  Selgin demonstrates fairly convincingly that while Fed officials may have occasionally paid lip service to Taylor Rule-type thinking, they were in no sense following a specific instrument rule. John Taylor also makes this argument, and indeed criticized policy for being too expansionary during the housing boom.  I’m going to try to better explain this distinction by reviewing the so-called “monetarist experiment” of 1979-82.

The conventional wisdom is that the Fed adopted money supply targeting in 1979, ran the policy for three years, and then abandoned it in 1982 due to unstable velocity.  As always in macroeconomics, the conventional view is wrong.  Just as wrong as the view that fiscal stimulus contributed to the high inflation of the 1960s, or that oil shocks played a big role in the high inflation of the 1970s, or that Volcker had a tight money policy during his first 18 months as Fed chair. Here’s what actually happened:

1.  The Fed said it would start targeting the money supply, but it did not do so.

2.  The Fed had a stop–go policy in 1979-81, and then a contractionary policy in 1981-82.

3.  Velocity fell sharply in 1982, just as the monetarist model predicts.

The three-year “monetarist experiment” saw inflation fall from double digits to about 4%.  That would be expected to reduce velocity.  Friedman’s 4% money supply rule is supposed to work by keeping inflation expectations stable, so that velocity will be more stable.  But inflation expectations were not kept stable during 1979-82, so the policy was never really tested long enough to see if it works. Having said that, I’d rather not give the 4% money growth rule a “fair test”, as that would take decades, and would probably result in the policy failing for other reasons.  But 1979-82 told us essentially nothing about the long run effect of money supply targeting.  It wasn’t even tried.
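To make the mechanism explicit in standard quantity-theory notation (my gloss, not anything Friedman wrote about this episode):

```latex
% Equation of exchange; velocity is defined residually.
M_t V_t = P_t Y_t
\quad\Longrightarrow\quad
V_t \equiv \frac{P_t Y_t}{M_t}
```

When expected inflation and nominal interest rates fall, money demand rises and $V_t$ falls, which is what happened in 1982.  A fixed $k$-percent path for $M_t$ only delivers stable nominal growth once expectations, and hence velocity, have settled down.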

In a similar way, the Fed set interest rates far below Taylor Rule levels during the housing boom.  So even if a momentary switch to Taylor Rule policy did occur during the housing bust, it’s not really a fair test of the policy.  It’d be like driving rapidly toward the edge of a cliff, ripping off the steering wheel and handing it to the passenger, and then saying “OK, now you drive”.

Having said that, I also don’t favor the Taylor Rule.  And even John Taylor is open to modifications based on new information as to the actual level of things like the equilibrium interest rate.  When I’ve heard him talk, he’s actually been calling on the Fed to adopt some sort of clear, transparent procedure for policy, so that when they make a move, it will be something the markets could have fully anticipated based on publicly available macro/financial data. On that point I agree, which is why I am more sympathetic than many other economists to what the House is trying to do with its proposed Fed policy rule legislation.

If the Fed won’t adopt my futures targeting approach, then I’d at least like them to make their actual decision-making transparent.  Show us the model of the economy, and the reaction function.  If the model later has to be tweaked because of new developments in macro, that’s OK.  Keep us abreast of how it’s being tweaked.  If FOMC members use multiple models then show us each model, and set the fed funds target at the median vote.  However they make decisions, Fed announcements should never be a surprise.

I’d also like to make a point about “rules” in the sense of policy goals, rather than instrument rules.  People often seem to assume that NGDPLT is a “rule” in the same sense that 2% inflation targeting is a rule.  Not so; NGDPLT is more rule-like than current Fed policy, in two important ways:

1. Any sort of NGDP targeting tells us exactly where the Fed wants the macro economy to be in the future, whereas 2% inflation targeting is much more vague. That’s because the dual mandate doesn’t tell us how much weight the Fed puts on inflation and how much they put on employment.  Even worse, the Fed often predicts procyclical inflation, which would seem to violate the dual mandate.  So it’s not at all clear what they are trying to do.  Should the Fed try to push inflation a bit above 2% during periods of low unemployment, so that it averages 2% over the entire cycle?  I don’t think so, but I have no idea what the Fed thinks, as they don’t tell us.  With NGDP targeting it’s clear—aim for countercyclical inflation.

2.  Going from NGDP growth rate targeting to NGDPLT also makes policy much more rules-based.  Under NGDPLT, I have a very good sense of where the Fed wants NGDP to be 10 years from now.  That helps us make more intelligent decisions on things like determining the proper yield on 10-year corporate bonds. If they miss the target in a given year, I know that they will aim to get back onto the old trend line.  In contrast, the Fed currently doesn’t have any clear policy when they miss the target.  Are they completely letting bygones be bygones, or do they aim to recover a part of the lost ground?  I have no idea.  It seems to vary from one business cycle to the next.  In 2003-06 they regained ground lost in the 2001 recession, but in 2009-15 they did not do so.
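Here is a minimal numerical sketch of that distinction, with made-up numbers (the 5% NGDP target and the one-time 4% miss are my assumptions, purely for illustration):

```python
# Illustrative only: compare NGDP growth-rate targeting with NGDP level
# targeting (NGDPLT) after a one-time 4% shortfall in year 3.
# The 5% target and the size of the miss are hypothetical numbers.

TARGET_GROWTH = 0.05          # 5% NGDP growth target
YEARS = 10
SHOCK_YEAR, SHOCK = 3, -0.04  # one-time 4% miss

# Pre-announced level path: 5% growth from a base of 100, no matter what.
level_path = [100 * (1 + TARGET_GROWTH) ** t for t in range(YEARS)]

# Growth-rate targeting: the miss is never made up ("bygones are bygones").
growth_targeting = [100.0]
for t in range(1, YEARS):
    g = TARGET_GROWTH + (SHOCK if t == SHOCK_YEAR else 0.0)
    growth_targeting.append(growth_targeting[-1] * (1 + g))

# Level targeting: after the miss, return to the original path
# (here, fully within one year, to keep the example simple).
level_targeting = [100.0]
for t in range(1, YEARS):
    if t == SHOCK_YEAR:
        level_targeting.append(level_targeting[-1] * (1 + TARGET_GROWTH + SHOCK))
    else:
        level_targeting.append(level_path[t])

print(f"Year 9 NGDP under growth-rate targeting: {growth_targeting[-1]:.1f}")
print(f"Year 9 NGDP under level targeting:       {level_targeting[-1]:.1f}")
print(f"Pre-announced level path in year 9:      {level_path[-1]:.1f}")
```

Under level targeting, the expected NGDP level a decade out barely moves when the central bank misses in one year; under growth-rate targeting, every miss permanently shifts the path.  And since NGDP growth is roughly inflation plus real growth, holding the path fixed automatically calls for countercyclical inflation, which is the point made in item 1.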

I certainly also favor a policy rule for the instrument setting, but even if the instrument setting were 100% discretionary, I would view NGDPLT as an order of magnitude more rule-like than a “flexible” 2% inflation target, which also takes employment into account.

Utility in the 21st century

Utility is one of those things we can’t measure, but we know it when we see it. The 20th century was about becoming “developed” (in North America, Europe, Australia and parts of East Asia). What will the 21st century be about? After all, going from one to two refrigerators is a much smaller jump than going from zero to one.

One possibility is drugs.  Will this 25-year-old Malaysian woman end up being the most influential person in the 21st century?  (The Norman Borlaug of the 21st century.)

[Photo: Shu Lam]

She is all of 25 and may have already made one of the most significant discoveries of our time.

Scientists in Australia this week took a quantum leap in the war on superbugs, developing a chain of star-shaped polymer molecules that can destroy antibiotic-resistant bacteria without hurting healthy cells. And the star of the show is 25-year-old Shu Lam, a Malaysian-Chinese PhD candidate at the University of Melbourne, who has developed the polymer chain in the course of her thesis research in antimicrobials and superbugs.

A polymer is a large molecule composed of several similar subunits bonded together. Polymers can be used to attack superbugs physically, unlike antibiotics that attempt to kill these bugs chemically and killing nearby healthy cells in the process.

“I’ve spent the past three and a half years researching polymers and looking at how they can be used to kill antibiotic resistant bacteria,” or superbugs, she told This Week in Asia, adding the star-shaped polymers work by tearing into the surface membrane of the bacteria, triggering the cell to kill itself. . . .

Her group is also examining the use of polymers as a drug carrier for cancer patients as well as the treatment of other diseases.

A key project at the moment is the synthetic transplant of cornea in the eye, which involves the use of polymers grown from the patient’s own cells in the lab to replace the damaged cornea.

The operation has already been tested multiple times successfully on sheep, and Qiao hopes to begin the first human trials in Melbourne within two years, working with the Melbourne Eye and Ear Hospital.

And from a bit more authoritative source (Science Daily):

The study, published today in Nature Microbiology, holds promise for a new treatment method against antibiotic-resistant bacteria (commonly known as superbugs).

The star-shaped structures, are short chains of proteins called ‘peptide polymers’, and were created by a team from the Melbourne School of Engineering.

The team included Professor Greg Qiao and PhD candidate Shu Lam, from the Department of Chemical and Biomolecular Engineering, as well as Associate Professor Neil O’Brien-Simpson and Professor Eric Reynolds from the Faculty of Medicine, Dentistry and Health Sciences and Bio21 Institute.

Professor Qiao said that currently the only treatment for infections caused by bacteria is antibiotics. However, over time bacteria mutate to protect themselves against antibiotics, making treatment no longer effective. These mutated bacteria are known as ‘superbugs’.

“It is estimated that the rise of superbugs will cause up to ten million deaths a year by 2050. In addition, there have only been one or two new antibiotics developed in the last 30 years,” he said.

OK, I know that preliminary research often looks promising in medicine, and that “more research is needed”—blah, blah, blah.

But even so, here’s a question for you infrastructure enthusiasts.  Which would do more for global utility in 2050, if you had to pick just one:

1.  Having the government spend $80 billion on a single (sort of) high-speed rail line from LA to the Bay Area?

2.  Funding 80 different research labs all over the world that are studying treatments for superbugs, to the tune of $1 billion each.

Another question:  Does it make sense to go after China for “stealing” the idea behind an 80-year-old Snow White film, based on a 200-year-old European folktale, while constantly bashing the drug companies for high prices?

1946

Over at Econlog I did a post discussing the austerity of 1946.  The Federal deficit swung from over 20% of GDP during fiscal 1945 (mid-1944 to mid-1945) to an outright surplus in fiscal 1947.  Policy doesn’t get much more austere than that! Even worse, the austerity was a reduction in government output, which Keynesians view as the most potent part of the fiscal mix.  I pointed out that employment did fine, with the unemployment rate fluctuating between 3% and 5% during 1946, 1947 and 1948, even as Keynesian economists had predicted a rise in unemployment to 25% or even 35%—i.e. worse than the low point of the Great Depression.  That’s a pretty big miss in your forecast, and made me wonder about the validity of the model they used.

One commenter pointed out that RGDP fell by over 12% between 1945 and 1946, and that lots of women left the labor force after WWII.  So does a shrinking labor force explain the disconnect between unemployment and GDP?  As far as I can tell it does not, which surprised even me.  But the data is patchy, so please offer suggestions as to how I could do better.

Let’s start with hours worked per week, the data that is most supportive of the Keynesian view:

[Chart: average weekly hours worked]

Weekly hours worked dropped about 5% between 1945 and 1946. Does that help explain the huge drop in GDP?  Not as much as you’d think. Here’s the civilian labor force:

[Chart: civilian labor force]

So the labor force grew by close to 9%, indicating that the labor force measured in worker hours probably grew.  Indeed if you add in the 3 percentage point jump in the unemployment rate, it appears as if the total number of hours worked was little changed between 1945 and 1946 (9% – 5% – 3%).  Which is really weird given that RGDP fell by 22% from the 1945Q1 peak to the 1947Q1 trough, a decline closer to the 36% decline during the Great Contraction than to the 3% fall during the Great Recession.
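Here’s the back-of-the-envelope arithmetic behind that claim, using the approximate figures above; these are my illustrative numbers, not an official BLS series:

```python
# Rough accounting check for 1945 -> 1946 total civilian hours worked,
# using the approximate figures cited in the post (not official data).

labor_force_growth = 0.09      # civilian labor force up roughly 9%
weekly_hours_change = -0.05    # average weekly hours down roughly 5%
u_1945, u_1946 = 0.01, 0.04    # unemployment rate roughly 1% -> 4%

# Share of the labor force actually employed fell from 99% to 96%.
employment_rate_change = (1 - u_1946) / (1 - u_1945) - 1   # about -3%

total_hours_change = (1 + labor_force_growth) \
    * (1 + weekly_hours_change) \
    * (1 + employment_rate_change) - 1

print(f"Change in total civilian hours worked: {total_hours_change:+.1%}")
# Prints roughly +0.4%: essentially no change, even though measured RGDP
# fell by double digits over the same period.
```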

That’s all accounting, which is interesting, but it doesn’t really tell us what caused the employment miracle.  I’d like to point to NGDP, which did grow very rapidly between 1946 and 1948, but even that doesn’t quite help, as it fell by about 10% between early 1945 and early 1946.

Here’s why I think that the NGDP (musical chairs) model did not work this time. Let’s go back to the hours worked, and think about why they were roughly unchanged.  You had big factors pushing hard in opposite directions. Hours worked were pushed up by 10 million soldiers suddenly entering the civilian workforce.  In the offsetting direction were three factors: a smaller number of workers (mostly women) leaving the workforce, unemployment rising from 1% to 4%, and average weekly hours falling by about 5%.  All that netted out to roughly zero change in hours worked.

So why did RGDP fall so sharply?  Keep in mind that while those soldiers were fighting WWII, their pay was a part of GDP. They helped make the “G” part of GDP rise to extraordinary levels in the early 1940s.  But when the war ended, that military pay stopped.  Many veterans then got jobs in the civilian economy, where they were counted as part of hours worked. (Soldiers aren’t counted in the civilian employment data.)  That artificially depressed measured productivity.

It’s also worth noting that real hourly wages fell by nearly 10% between February 1945 and November 1946:

[Chart: real hourly wages, manufacturing]

This data only applies to manufacturing workers. But keep in mind that the 1940s was the peak period of unionism, so I’d guess service workers did even worse.  So my theory is that the sudden drop in NGDP in 1946 was an artifact of the end of massive military spending, and the strong growth in NGDP during 1946-48, which reflected high inflation, helped to stabilize the labor market.  When the inflation ended in 1949, real wages rose and we had a brief recession.  By 1950, the economy was recovering, even before the Korean War broke out in late June.

Obviously 1946 was an unusual year, and it’s hard to draw any policy lessons.  At Econlog, I pointed out that the high inflation occurred without any “concrete steppes” by the Fed; T-bill yields stayed at 0.38% during 1945-47 and the monetary base was pretty flat.  Some of the inflation represented the removal of price controls, but I suspect some of it was purely (demand-side) monetary—a rise in velocity as fears of a post-war recession faded.

This era shows that you can have a lot of “reallocation” and a lot of austerity, without necessarily seeing a big rise in unemployment.  And if you are going to make excuses for the Keynesian model, you also have to recognize that most Keynesians got it spectacularly wrong at the time.  Keynesians often make a big deal of Milton Friedman’s false prediction that inflation would rise sharply after 1982, but tend to ignore another monetarist (William Barnett, pp. 22-23) who correctly forecast that it would not rise.  OK, then the same standards should apply to the flawed Keynesian predictions of 1946.

Tyler Cowen used to argue that 2009 showed that we weren’t as rich as we thought we were.  I think 1946 and 2013 (another failed Keynesian prediction) show that we aren’t as smart as we thought we were.

Update:  David Henderson has some more observations on this period.