Archive for September 2013

 
 

Is unemployment overdetermined?

There’s been a lot of recent discussion of long term unemployment.  I would caution against putting too much weight on any single factor.  For instance, the drop in the labor force participation rate is partly due to boomers retiring, young people staying in school, and the unemployed going on disability.  But none of those factors plays a dominant role.

Another mistake is that people sometimes forget that unemployment can be “overdetermined,” i.e. that multiple factors are each powerful enough to explain most unemployment.  I hate to pick on Kevin Erdmann, because his post is actually one of the best I’ve read on the subject:

Upon further reflection, I think the approx. .75% of age-related unemployment, anomalous to the recent recession, which I found in the demographics section, is probably closely related to the .7% excess unemployment that I found in the later section on EUI related to excess unemployment duration above 26 weeks.  So, in total, of the approx. 5% in cyclical unemployment that we saw at the peak, I am attributing approx. .5% to age demographics and at least .75% to EUI.  This leaves 3.75% attributable to other factors, although EUI would likely be responsible for some of the remaining 3.75% in ways that I haven’t been able to isolate here.

Those numbers sound plausible to me; however, be careful when partitioning the effects of various factors.  To see why, let’s go back to my “musical chairs model” of recessions.  Suppose 100 people play the game of musical chairs, but there are only 95 chairs to sit in.  When the music stops, 5 people will end up sitting on the floor.

Each time the music stops, a different 5 people will be sitting on the floor.  However it is possible that some people will end up missing out 2 or even 3 times in a row, particularly if there are differences in quickness and agility among the players.  By analogy, the 5% frictional unemployment we observe will represent different people each year, although less skilled workers are more likely to show up multiple times.

Now assume two things happen at once.  The number of chairs is reduced from 95 to 90, and the people who end up sitting on the floor each time have heavy lead weights attached to their feet.  They will be less agile, and thus far more likely to end up on the floor multiple times in a row.  However by construction the lead weights have no impact on the total number of people who sit on the floor.  If there are 90 chairs, then 10 people sit on the floor.  End of story.
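
Here’s a minimal simulation sketch of the musical-chairs story, using the numbers above (the “lead weights” are modeled as an arbitrary 5x penalty on grabbing a chair, purely for illustration):

```python
import random

def musical_chairs(n_people=100, n_chairs=90, n_rounds=50, weighted=(), seed=0):
    """Each round, (n_people - n_chairs) players end up on the floor.
    Players in 'weighted' carry lead weights (think extended UI) and are
    five times as likely to miss a chair -- an arbitrary illustrative penalty."""
    rng = random.Random(seed)
    weighted = set(weighted)
    times_on_floor = [0] * n_people
    for _ in range(n_rounds):
        odds = [5.0 if i in weighted else 1.0 for i in range(n_people)]
        losers = set()
        while len(losers) < n_people - n_chairs:
            losers.add(rng.choices(range(n_people), weights=odds)[0])
        for i in losers:
            times_on_floor[i] += 1
    return times_on_floor

no_weights = musical_chairs()
with_weights = musical_chairs(weighted=range(10))

# Total floor-sitting is pinned down by the number of chairs, weights or not:
print(sum(no_weights), sum(with_weights))        # both 500 (50 rounds x 10 losers)
# ...but the weighted players absorb a much larger share of it:
print(sum(with_weights[:10]) / sum(with_weights))
```

By construction the totals match; the weights only change who ends up on the floor, and how often.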

This is essentially Paul Krugman’s argument for why extended UI benefits don’t raise unemployment in a recession.  The reduction in chairs from 95 to 90 is analogous to the reduction in the employment rate from 95% to 90% when NGDP falls very sharply and nominal wages are sticky.  The lead weights are like the disincentive effects of extended UI, which make it less likely that those workers will find new jobs.  (A better analogy might be if those sitting on the floor have soft cushions affixed to their rear ends, making sitting on the floor less painful.  So they don’t try as hard as the other players.)

I’ve challenged Krugman’s argument in the past, but there is clearly a grain of truth in this sort of reasoning.  Just because a study shows that the unemployed suddenly consist disproportionately of the long term unemployed, as compared to past recessions, doesn’t really prove that the factors leading to long term unemployment have any effect on the overall unemployment rate.  They might, but that fact would have to be established in some other way.  Unemployment might be overdetermined, i.e. without extended UI there would be just as many unemployed, albeit a different mix of people and durations, and for a different reason.

The problem with my simple example is that extended UI benefits probably do reduce the number of chairs, by making nominal wages more sticky than if an army of unemployed workers were desperate to find new jobs.  Disability benefits and boomer retirement also have that effect, BTW.  And if wages are more sticky then nominal GDP shocks have bigger effects on employment than otherwise.

In the end I find the 0.50% and 0.75% figures to be plausible, but they don’t necessarily imply that the remaining 3.75% is an upper limit to the part of excess unemployment that is due to effects of deficient demand.  One problem is overdetermination, as I’ve just shown.  The other is endogeneity.  The extended UI program was itself caused by the demand shock, and would probably be eliminated if demand returned to normal.

Still, it’s a useful exercise to partition the various factors as well as we can, if only because Congress needs accurate policy counterfactuals when deciding whether or not to extend the extended UI benefits.  So I applaud Erdmann’s thoughtful post.

PS.  In the final paragraph I am assuming that the Dems in Congress are Matt Yglesias-types and the GOP is composed of a bunch of Tyler Cowens.  Both groups want to maximize aggregate utility, but disagree slightly on the size of disincentive effects.  I think that’s a reasonable description of Congress these days, isn’t it?

PPS.  I may have been unfair to Erdmann, as the final sentence I quote suggests that he understands the overdetermination issue.

PPPS.  Here’s my favorite part of the Erdmann post:

Here’s the kind of reminder that makes Scott Sumner slap his forehead: The EUI was instituted in June 2008 when the unemployment rate was 5.6%.  At the time, the Fed Funds rate was still at 2%.

HT:  Tyler Cowen

Nostalgia for a black and white world

I recently quoted Karl Knausgaard saying that knowledge drains the world of meaning. I haven’t been able to get that idea out of my head. Just yesterday I read the following in a New York Review of Books review of Leopardi’s diary:

Giacomo saw before him a life without physical love or financial independence. Studying was the one thing he knew how to do, but the knowledge so gained only revealed to him that knowledge does not help us to live; on the contrary it corrodes those happy errors, or illusions as he came to call them, that give life meaning, shifting energy to the mental and rational and away from the physical and instinctive, where, in complicity with illusion, happiness lies.

.  .  .

In a rare, brief, personal entry, Giacomo writes:

I was frightened to find myself in the midst of nothingness, a nothing myself. I felt as if I were suffocating, thinking and feeling that all is nothing, solid nothing.

In such circumstances

it could be said that there will never be heroic, generous, and sublime action, or high thoughts and feelings, that are anything other than real and genuine illusions, and whose price must fall as the empire of reason increases.

When I was a young adult I lived in a world of black and white, good guys and bad guys. When I read bad things about the people I disagreed with, I believed them. When I read excuses for bad behavior of those I agreed with, I believed those excuses. I used to care a lot who won Presidential elections.  Only later did I realize that there were no heroes (and not many villains). That both parties were pro-war on drugs, pro-war on terror, and just pro-war in general. That presidents from both parties expanded government (Johnson, Nixon, Bush I, Bush II and Obama) and presidents from both parties reduced government (Reagan, Clinton).

The party mouthpieces on both sides want their people to live in a black and white world, and thus they have an incentive to dramatize non-existent differences, or differences in areas that policy was never likely to address (guns, abortion, etc.) The only other differences are tribal—which groups does government policy favor. If you are a utilitarian economist those tribal differences aren’t particularly interesting.

I used to think I was superior to all those other bloggers who still lived in a black and white world, who credulously believed everything bad they heard about the other side, and excused the lapses in ethics on their own side.  (I won’t name names, but you must know who I’m thinking of.)

But after reading Knausgaard and Leopardi, I now wonder whether I’m the fool. Too rational for my own good, living in a drab grey world that lacks energy and enthusiasm. Watching political events play out as dispassionately as a scientist looking at an ant colony.

I finished the book review with the depressing thought that I will now have to read Leopardi’s diary.  And it’s 2502 pages long.  These days I only get meaning by reading other intellectuals who valiantly struggle against the apparent meaninglessness of life.

PS.  Tyler Cowen linked to a John Gray review of the same book.  Here’s an example of why I’ve never liked Gray:

Faced with emptiness, modern humanity has taken refuge in schemes of world improvement, which all too often – as in the savage revolutions of the 20th century and the no less savage humanitarian warfare of the 21st – involve mass slaughter.

Yes, the “humanitarian” interventions in Bosnia, Somalia, Libya and Haiti were “no less” barbaric than the crimes of Hitler, Stalin, Pol Pot and Mao.  Maybe if I read people like Gray more often I’d get so angry I could return to the black and white world.

The Bush/Obama years

The new Fraser/Cato Economic Freedom Index is out, and the US continues to slide lower:

Global economic freedom increased modestly in this year’s report, though it remains below its peak level of 6.92 in 2007. After a global average drop between 2007 and 2009, the average score rose to 6.87 in 2011, the most recent year for which data is available. In this year’s index, Hong Kong retains the highest rating for economic freedom, 8.97 out of 10. The rest of this year’s top scores are Singapore, 8.73; New Zealand, 8.49; Switzerland, 8.30; United Arab Emirates, 8.07; Mauritius, 8.01; Finland, 7.98; Bahrain, 7.93; Canada, 7.93; and Australia, 7.88.

The United States, long considered the standard bearer for economic freedom among large industrial nations, has experienced a substantial decline in economic freedom during the past decade. From 1980 to 2000, the United States was generally rated the third freest economy in the world, ranking behind only Hong Kong and Singapore. After increasing steadily during the period from 1980 to 2000, the chain linked EFW rating of the United States fell from 8.65 in 2000 to 8.21 in 2005 and 7.74 in 2011. The chain-linked ranking of the United States has fallen precipitously from second in 2000 to eighth in 2005 to 19th in 2011 (unadjusted rating of 17th).

The rankings (and scores) of other large economies in this year’s index are the United Kingdom, 12th (7.85); Germany, 19th (7.68); Japan, 33rd (7.50); France, 40th (7.38); Italy, 83rd (6.85); Mexico, 94th (6.64); Russia, 101st (6.55); Brazil, 102nd (6.51); India, 111th (6.34); and China, 123rd (6.22).

I’m getting nostalgic for the Reagan/Clinton years.

On another topic, the radical Republicans have taken over North Carolina.  With almost complete control of the state government, they have enacted an agenda that is the worst nightmare of liberals:

The big hope: The new economic activity will compensate for the estimated $2.4 billion revenue loss over the next five years as a result of the reforms.

But the overhaul — which represents a scaled back version of earlier proposals — has been heavily criticized by many, mostly liberals. They contend its tax cuts will disproportionately benefit the rich and the revenue loss will cut into government services.

Starting in 2014, the individual income tax rate will be 5.8%, and then it will fall to 5.75% in 2015. Those rates are down from the 6%, 7% and 7.75% rates currently in effect.

The standard deduction, meanwhile, will more than double — to $7,500 for singles, from $3,000; and to $15,000 for married couples filing jointly, from $6,000.
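
As a rough sketch of what those numbers mean for a single filer, here is the arithmetic with the 2015 rate and the new standard deduction from the quote (the income levels are hypothetical, and other deductions and credits are ignored):

```python
# North Carolina income tax under the quoted 2015 reform parameters:
# a flat 5.75% rate and a $7,500 standard deduction for single filers.
# Income levels are hypothetical; other deductions and credits are ignored.
RATE = 0.0575
STD_DEDUCTION = 7_500

for income in (20_000, 50_000, 100_000, 250_000):
    tax = RATE * max(income - STD_DEDUCTION, 0)
    print(f"${income:>7,}  tax ${tax:>9,.2f}  effective rate {tax / income:.2%}")
```

The effective rate climbs from roughly 3.6% toward the 5.75% statutory rate as income rises, which is how a flat rate plus a large standard deduction ends up mildly progressive.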

That’s right, an income tax almost identical to that of Massachusetts, except more progressive.  That’s what passes for “far right” these days. Why not even lower income tax rates?  Because they were unable to do meaningful sales tax reform:

The state’s biggest opportunity for more revenue would be to apply its sales tax to services, Groseclose said. But the new reforms still leave most of them exempt.

Indeed, he noted, only about 25 or 30 services are subject to tax — such as dry cleaning. But another 165 to 175 could be — such as CPA services.

Let me guess, CPAs in North Carolina vote Republican.

PS.  I notice that both the Fraser/Cato and Heritage surveys have Denmark scoring higher than the US in “economic freedom.”  How do you think Jim DeMint and the gang at Heritage would react if the House GOP suggested replacing the US economic model with the Danish model?  It’s kind of comical that Heritage keeps putting out a survey developed in the days when they were still a reasonable right wing organization, not the loony Tea Party outfit they have become.  Maybe this blog post will cause them to shut it down.  That would be too bad, as it’s slightly less inaccurate than the Cato survey, which has Spain nearly tied with the Netherlands.

Evan Soltas on optimal control theory

Evan Soltas has a very interesting new post suggesting how the Fed could better achieve its targets:

How would it do that, exactly? Here is where I come in with my recommendations. What the Fed has tried to do with forward guidance over the last year is plot the possible paths of the policy rate conditional on economic scenarios, but the way they’ve been doing it is kludgey when you think about it. We’ve been talking about asset purchases and signaling future policy — and signaling is imperfect! — so much that we’ve forgotten that there is a way to do this more directly.

My recommendation is that the Fed should target federal funds rate futures, eurodollar futures, overnight indexed swap (OIS) rates, or an appropriate proxy for the expectation of future interbank lending rates. (Whatever the specific contract form, I’ll call them fed funds futures going forward.) The reason why it ought to do this: There’s nothing clearer than just talking about the actual course of policy directly.

Here’s how this would work. The FOMC concludes its next full meeting in mid-December. In the summary of economic projections, the Fed should report what fed funds futures prices it believes to be warranted given the midpoint of the central-tendency projection over some five-year curve. It should also report the fed funds futures prices warranted conditional upon upward or downward deviations in economic data from the midpoint of the central-tendency projection. In other words: If we get ugly or nice surprises, how does the Fed plan on changing plans?
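
To make the mechanics concrete, here is a stylized sketch of the kind of conditional projection being described (every number below is invented for illustration; none comes from the post):

```python
# Hypothetical conditional rate-path table: for each economic scenario, the
# fed funds futures prices the Fed would announce as warranted. All figures
# are made up for illustration.
conditional_paths = {
    "central tendency":  {2014: 0.25, 2015: 0.75, 2016: 1.75, 2017: 2.75, 2018: 3.50},
    "upside surprise":   {2014: 0.25, 2015: 1.25, 2016: 2.50, 2017: 3.50, 2018: 4.00},
    "downside surprise": {2014: 0.25, 2015: 0.25, 2016: 0.75, 2017: 1.50, 2018: 2.25},
}

for scenario, path in conditional_paths.items():
    row = "  ".join(f"{year}: {rate:.2f}%" for year, rate in path.items())
    print(f"{scenario:>18}  {row}")
```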

Before commenting, let me say that my math is somewhere between rusty and non-existent, so I may use the wrong terminology.  My main concern with this proposal is the difficulty of estimating the optimal path of the fed funds rate.  The proposal seems to assume that the lower the expected future fed funds rate, the more expansionary the policy.  But that can’t always be true, as the interest rate path that would produce Zimbabwean hyperinflation is likely higher than the current path, at least for 2015 and 2016.  So the relationship between expected 2016 interest rates and expected 2016 NGDP may not be monotonic, and thus there could be multiple equilibria.
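
To see the worry in the simplest terms, write the expected future nominal rate as the expected real (Wicksellian) rate plus expected inflation: i = r + expected inflation. The numbers here are purely illustrative, but an expected 2016 funds rate of 4% could reflect a roughly normal recovery (r of 2%, expected inflation of 2%), while an expected rate of 25% could reflect wildly expansionary, near-hyperinflationary policy (r of 0%, expected inflation of 25%). And because r itself rises with expected NGDP growth, a single targeted rate path of, say, 2% could be consistent with a weak recovery (policy roughly neutral relative to a low Wicksellian rate) or with a boom (policy very easy relative to a much higher one). A lower expected rate path is therefore not always the more expansionary setting.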

I will concede that the relationship may be monotonic over the range relevant to current policymaking (although that’s less certain than most would assume), but nonetheless this thought experiment shows the difficulty of estimating the appropriate path of rates; the Wicksellian equilibrium rate changes as the expected growth rate of NGDP changes.  So while Evan’s plan might well be a big improvement over current policy, it is still not optimal.

In contrast, there is no multiple equilibria problem with NGDP futures targeting. Under that sort of regime the market for NGDP futures becomes the de facto FOMC.  This is the step that almost all elite macroeconomists (with the exception of John Cochrane) refuse to take.  There’s an endless search for the optimal path of the policy instrument, but very little soul-searching as to whether policymakers should even be in the business of forecasting the relationship between various instrument settings and future expected aggregate demand.  In other areas of economics we’d almost automatically view that as something that markets do better.  Why not in macro?

PS.  Here’s another way of making the same point.  Evan discusses the case where the market doesn’t find the future path to be credible.  But what he overlooks is that the lack of credibility can come from two sources: either a lack of trust that the Fed will persevere in its attempts to hit its nominal targets, or else markets may have confidence in those targets, but disagree as to whether the central bank’s estimate of the appropriate target path is actually the correct one. I believe that the second problem is dominant in Britain right now. Markets believe Carney is committed to faster NGDP growth; they simply don’t think he can hold rates low for as long as he claims, even if he hits the (implicit) NGDP target.

HT:  Saturos

Recent links

I’m still catching up on a host of great posts.  Let’s start with Yichuan Wang, who explains why rumors of tapering had such a big impact on various emerging markets.  It was not the direct effect (which would obviously be tiny), but rather that it threw off their monetary policy because they foolishly focused on the exchange rate:

Here’s a link to my 3rd Quartz article on how much of the emerging market sell-off was about monetary policy failures in the emerging markets themselves. In particular, by trying to maintain exchange rate policies, central banks in these countries overexpose themselves to foreign economic conditions. The highly positive response to the recent delay of taper serves as further evidence that many of these emerging economies need better ways of insulating themselves from foreign monetary shocks. Much of the work draws on blog posts from Lars Christensen. His examples comparing monetary policy in Australia and South Africa versus policy in Brazil and Indonesia were particularly helpful.

David Beckworth looks at a bunch of natural experiments on the efficacy of monetary policy at the zero bound:

The first quasi-natural experiment has been happening over the course of this year. It is based on the observation that monetary policy is being tried to varying degrees among the three largest economies in the world. Specifically, monetary policy in Japan has been more aggressive than in the United States which, in turn, has had more aggressive monetary policy than the Eurozone. These economies also have short-term interest rates near zero percent. This makes for a great experiment on the efficacy of monetary policy at the ZLB.

So what have these monetary policy differences yielded? The chart below answers the question in terms of real GDP growth through the first half of 2013:

[Chart: real GDP growth in Japan, the United States, and the Eurozone through the first half of 2013.]

The outcome seems very clear: when really tried, monetary policy can be very effective at the ZLB. Now fiscal policy is at work too, but for this period the main policy change in Japan has been monetary policy. And according to the IMF Fiscal Monitor, the tightening of fiscal policy over 2013 has been sharper in the United States than in the Eurozone. Yichuan Wang illustrates this latter point nicely in this figure. So that leaves the variation in real GDP growth being closely tied to the variation in monetary policy. Chalk one up for the efficacy of monetary policy at the ZLB.

However I’d caution readers that the BOJ has still not done enough to hit a 2% inflation target.  They have had limited success, but need to do more.  They should also switch to a 3% NGDP growth target, as inflation is the wrong target.

Ryan Avent has an excellent post that is hard to excerpt.

I would be shocked if the public had any real sense of what QE is. QE is confusing. And any expectations-based strategy that relied on the public understanding precisely what QE is, or what nominal output is, or for that matter what the federal funds rate is, would be entirely doomed. But I don’t think that’s how this stuff works. I also don’t think inflation targeting works by getting everyone to expect 2% inflation and raise prices or demand pay-rises accordingly. Consumers basically never expect 2% inflation, and consumers and producers alike seem to ask for as much as they think they can get based on their observations about what other people can get. That’s one reason why I think the argument that “no one knows what NGDP is” is not a strong criticism of NGDP targeting, though I also think that it would be daft for a central bank to say it was targeting NGDP rather than just, say, national income.

I think most people operate using pretty simple heuristics. They have a feeling for what it feels like to be in a boom or a bust or something between. They have a sense for when inflation, in the economics sense of the term, is eroding their real incomes. They also have in mind something called inflation which basically means energy costs. My general feeling is that over the past 20 years (and in contrast to the two decades before that) most people have not much distinguished between real and nominal, because there has been no point to doing so. Complaints about “inflation” in this period virtually always boil down to complaints about unpleasant shifts in relative prices: more expensive gas and housing, mostly.

My sense is that what the Fed should do is target the trend path for a nominal variable that minimises the consumer experience “weak job market”. I think a nominal GDP level target accomplishes that. And once the Fed adopts that target the system will work as it does around any target. The Fed message will be intermediated by financial markets. Consumers will to some extent take their cues directly from financial markets and will to some extent take their cues from the reaction by sophisticated businesses to the reaction in financial markets.

And in an even more recent post, there is this gem (in reply to a Krugman post):

In his initial post Mr Krugman writes:

One answer could be a higher inflation target, so that the real interest rate can go more negative. I’m for it! But you do have to wonder how effective that low real interest rate can be if we’re simultaneously limiting leverage.

But if you create higher inflation you don’t need low real interest rates to solve the demand problem; it’s already solved! Maybe this is the confusion that keeps the economy in its rut. Markets are looking to the Fed, saying “which equilibrium, boss?”. And the Fed is saying that it would prefer the adequate-demand equilibrium but priority one is keeping a lid on inflation. And markets are saying “well I guess we have our answer”.

However his final paragraph is slightly off course:

Or to be succinct about things, there is no a priori reason to think that generating adequate demand requires rising indebtedness. But when an inflation-averse central bank is trying to generate adequate demand when the zero lower bound is a binding or near-binding constraint (as it was in 2002-3 and is now) it just might.

The first sentence in that paragraph is where he should have ended the post.  If the central bank is inflation averse, even more debt won’t help.  I’d add that the Fed could produce a robust recovery with 2% inflation. The problem today is that inflation is below 2%, and is likely to stay there.

As for Paul Krugman’s comment, I have no idea what he is talking about.  He seems to be steadily regressing from new Keynesianism to a crude version of 1930s Keynesianism.  Keynes also thought that higher inflation targets were not a solution to the zero rate trap.  But Krugman should know better.

PS.  Justin Wolfers also has a very good column, discussing the fall in the PCE deflator in Q2.

HT:  Stan Greer, and lots of other commenters.