In a not-so-recent post, Tyler Cowen made the following argument:
Matt Yglesias suggests the notion is implausible, but I am surprised to read those words. Keep in mind, we have had a recovery in output, but not in employment. That means a smaller number of laborers are working, but we are producing as much as before. As a simple first cut, how should we measure the marginal product of those now laid-off workers? I would start with the number zero. If a restored level of output wouldn’t count as evidence for the zero marginal product hypothesis, what would? If I ran a business, fired ten people, and output didn’t go down, might I start by asking whether those people produced anything useful?
It is true that the ceteris are not paribus, but the observed changes if anything favor the hypothesis of zero marginal product. There has been no major technological breakthrough in the meantime. If anything, there has been bad monetary policy and a dose of regulatory uncertainty. And yet again we can produce just as much without those workers. Think of “labor hoarding” yet without…the hoarding.
I don’t normally comment on old posts, but I was asked what I thought of this idea, which still seems to be attracting attention. My initial reaction is skepticism. Why wouldn’t companies just lay off more workers? But Tyler Cowen refers to the labor hoarding hypothesis, which might be able to explain that seemingly irrational behavior.
So I think we need to see how the theory matches the data. This post by Stephen Gordon shows US employment in 2010:3 falling about 5% below its 2008:1 peak, while output seems to have declined only about 0.7%. This is what Cowen finds puzzling.
But I don’t see any puzzle at all. If employment hadn’t changed, I’d expect US output to grow at about 2% a year, which is the trend rate of productivity growth. Because we are looking at a two-and-a-half-year period, you’d expect output to grow roughly 5% with stable employment. Now assume that employment actually fell 5%. If the workers who lost jobs were similar to those who remained employed, I’d expect output to be flat over that 2.5-year period. Because output fell slightly, it seems the workers who lost jobs were slightly more productive than those who remained employed.
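The back-of-envelope arithmetic above can be checked in a few lines. The 2% trend, the 2.5-year window, and the 5% and 0.7% declines are figures from this post; the compounding convention is a simplifying choice on my part:

```python
# Back-of-envelope check of the productivity-trend argument.
trend_productivity = 0.02      # assumed trend productivity growth per year
years = 2.5                    # 2008:1 through 2010:3
employment_change = -0.05      # employment about 5% below its peak
actual_output_change = -0.007  # output about 0.7% below its peak

# With stable employment, output should have grown roughly 5%:
expected_growth = (1 + trend_productivity) ** years - 1

# If the laid-off workers had average productivity, a 5% employment
# drop would leave output roughly flat:
counterfactual = (1 + expected_growth) * (1 + employment_change) - 1

# Output actually fell slightly more than that, which taken literally
# makes the laid-off workers look a bit *more* productive than average:
gap = actual_output_change - counterfactual

print(f"expected growth, stable employment:  {expected_growth:+.1%}")
print(f"counterfactual with 5% fewer workers: {counterfactual:+.1%}")
print(f"actual minus counterfactual:          {gap:+.1%}")
```

The point of the exercise is just that a flat output path over 2.5 years is what you would predict with ordinary productivity growth and a 5% smaller workforce of average workers, with no need to assume zero marginal product.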
Do I believe this? No; for several reasons I think they were less skilled than those who remained employed. Trend labor productivity growth (i.e., what we would have seen at full employment) probably slowed over the most recent 2.5 years, as investment in new capital declined. Measured productivity continued to rise briskly, partly because technological progress continues in good times and bad, and partly because those workers still employed are somewhat higher-skilled, or perhaps are trying harder for fear of losing their own jobs. So Tyler is probably right that the workers who lost their jobs have a lower-than-average marginal product. I just don’t see why zero is the natural starting point for considering the issue, as you only get that number by making some fairly extreme assumptions about technological progress coming to a screeching halt after 2008:1.
I’m certainly open to alternative views here. My baseline assumption is not consistent with Okun’s Law, for instance. And the three most recent recessions have seen slower recoveries in employment than earlier recessions, although I think people often underestimate how much of that difference reflects the fact that monetary stimulus has been much weaker during those recoveries than after a recession like 1981-82.
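To make the tension with Okun’s Law concrete, here is a rough sketch. The Okun coefficient of about 2 (an output gap of roughly 2% per percentage point of unemployment) is a common textbook estimate, not a figure from this post:

```python
# Rough comparison of the productivity baseline with Okun's Law.
# The coefficient of ~2 is an assumed textbook value.
okun_coefficient = 2.0
output_shortfall = 0.05   # output about 5% below its trend path

# Okun's Law would pair that shortfall with a smaller rise in
# unemployment:
okun_unemployment_rise = output_shortfall / okun_coefficient  # 2.5 points

# The productivity baseline instead pairs it with a full 5% fall in
# employment, i.e. roughly one-for-one:
baseline_employment_fall = 0.05

print(f"Okun's Law prediction: {okun_unemployment_rise:.1%} rise in unemployment")
print(f"productivity baseline: {baseline_employment_fall:.1%} fall in employment")
```

In other words, Okun’s Law predicts only about half the employment response that the simple productivity-trend accounting implies, which is one reason to treat that baseline as a first cut rather than a settled estimate.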