The mystery of mini-recessions: The dog that didn’t bark

[If you haven’t already, read the previous post first.]

In this post I plan to define mini-recessions and then discuss why they are so mysterious and why they might offer the key to macroeconomics.  Then I’ll offer an explanation for the mystery.

To understand mini-recessions we first need to understand the monthly unemployment data collected by the Bureau of Labor Statistics.  This data is based on large surveys of households.  It seems relatively “smooth,” rising and falling with the business cycle.  Month-to-month changes, however, often show movements that seem “too large” by 0.1% to 0.3%, relative to the other underlying macro data available (including the more accurate payroll survey.)  So let’s assume that once in a while the reported unemployment rate is about 0.3% below the actual rate.  And once in a great while this is followed soon after by a reported rate that is about 0.3% above the actual rate.  Then if the actual rate didn’t change during that period, the reported rate would rise by about 0.6%.
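(For the quantitatively inclined, here’s a minimal Monte Carlo sketch of that arithmetic.  The uniform ±0.3 point noise band and the 600-month sample are illustrative assumptions, not anything from BLS methodology.)

```python
import random

# Sketch: reported rate = true rate + survey noise of up to +/-0.3 points
# (an assumed uniform distribution). Even with the true rate frozen,
# month-to-month moves in the reported rate approach 0.6 points.
random.seed(0)
true_rate = 5.0
reported = [true_rate + random.uniform(-0.3, 0.3) for _ in range(600)]  # ~50 years

max_jump = max(abs(b - a) for a, b in zip(reported, reported[1:]))
print(f"Largest month-to-month move from noise alone: {max_jump:.2f} points")
```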

I searched the postwar data, which starts in 1948 and covers 11 recessions.  During expansions I found only 12 occasions where the unemployment rate rose by more than 0.6%.  In 11 cases the terminal date was during a recession.  In other words, if you see the unemployment rate rise by more than 0.6%, you can be pretty sure we are entering a recession.  The exception was during 1959, when unemployment rose by 0.8% during the nationwide steel strike, and then fell right back down a few months later.  That’s not called a recession (and shouldn’t be, in my view.)  Oddly, unemployment had risen by exactly 0.6% above the Bush expansion low point by December 2007 (when the current recession began) and by 0.7% by March 2008, and yet many economists didn’t predict a recession until mid-2008, or even later.
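(If you want to replicate the search, here’s a rough sketch.  It assumes a FRED-style UNRATE.csv file with DATE and UNRATE columns, and it uses a running low as a crude stand-in for the expansion low, so it won’t exactly reproduce the count above.)

```python
import csv
from datetime import date

def load_unrate(path="UNRATE.csv"):
    """Read (month, rate) pairs from a FRED-style CSV with DATE/UNRATE columns."""
    with open(path) as f:
        return [(date.fromisoformat(row["DATE"]), float(row["UNRATE"]))
                for row in csv.DictReader(f)]

def rises_over(series, threshold=0.6):
    """Flag the first month of each episode where the rate exceeds the
    running low by more than `threshold` percentage points."""
    episodes, running_low = [], float("inf")
    for month, rate in series:
        running_low = min(running_low, rate)
        if rate - running_low > threshold:
            episodes.append((month, rate, running_low))
            running_low = rate  # reset so one episode isn't flagged repeatedly
    return episodes

for month, rate, low in rises_over(load_unrate()):
    print(f"{month}: {rate:.1f}% vs. expansion low of {low:.1f}%")
```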

What’s my point?  That fluctuations in U of up to 0.6% are generally noise, and don’t necessarily indicate any significant movement in the business cycle.  But anything more almost certainly represents a recession.

Now here’s one of the most striking facts about US business cycles.  When the unemployment rate does rise by more than 0.6%, it keeps going up and up and up.  With the exception of the 1959 steel strike, there are no mini-recessions in the US.  The smallest recession occurred in 1980, when the unemployment rate rose 2.2% above the Carter expansion lows.  That’s a huge gap: nothing between 0.6% and 2.2%.

It’s often said that nature abhors a vacuum.  I’d add that nature abhors a huge donut hole in the distribution of “shocks.”  Suppose there were lots of earthquakes of magnitude zero to six.  And occasional earthquakes of magnitude 7 or more.  And even fewer earthquakes of magnitude 8 or more.  But nothing between 6 and 7.  Wouldn’t that be very odd?  I can’t imagine any geological theory capable of explaining such a gap.  We normally expect shocks to become less and less common as we move to larger scales.  And to some extent that’s true of recessions.  The Great Depression is unique in US history.  Big recessions like 1893, 1982, and 2009 (roughly 10% unemployment) are rarer than common recessions.  So why no mini-recessions?  (Or almost none, if we are going to count the 1959 downturn as a mini-recession.)
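(To see just how odd the donut hole is, here’s a toy illustration.  It draws shock sizes from a smooth heavy-tailed distribution; the Pareto shape and scale are arbitrary stand-ins for “bigger shocks are rarer,” not estimates of anything.)

```python
import random

# Draw 10,000 shock sizes (in unemployment-rate points) from a smooth,
# declining distribution, then count how many land in the empty 0.6-2.2
# band versus above 2.2. Any smooth declining distribution puts far more
# mass inside the band than above it, which is what makes the gap so odd.
random.seed(0)
shocks = [0.3 * random.paretovariate(1.5) for _ in range(10_000)]

in_hole = sum(0.6 < s <= 2.2 for s in shocks)
above = sum(s > 2.2 for s in shocks)
print(f"Shocks in the 0.6-2.2 'donut hole': {in_hole}, above 2.2: {above}")
```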

I don’t see why other macroeconomists are not obsessed with this issue.  Why isn’t there a Journal of Mini-Recessions?  I suppose some smart-alec commenter will say: “because there’s nothing to study, stupid.”  But he’d be missing the point; just like the dog that didn’t bark in that Sherlock Holmes tale, the lack of mini-recessions may provide the explanation of the business cycle.

(Or is there such a field, and I just don’t know about it?)

Let’s start with real theories of the cycle.  I can’t imagine any plausible real theory that wouldn’t predict lots more mini-recessions than actual recessions.  After all, aren’t modest-sized real shocks (i.e. those capable of raising the unemployment rate by 1.0% to 2.0%) much more common than really big real shocks, capable of raising unemployment by more than 2.0%?

Of course one could say the same thing about nominal shocks: wouldn’t you expect modest-sized nominal shocks to be more common than big nominal shocks?  Yes, but I still think the lack of mini-recessions points to nominal shocks (or monetary policy) as the culprit.

Let’s try to construct a “just-so story” to explain why recessions are always fairly big, and then look for evidence to support the story.  Suppose you have the following conditions:

1.  There is a data lag of a couple months.

2.  There is a recognition lag of a few months.  This is the time between when the data comes in and the Fed recognizes that a new trend is developing.

3.  When the Fed does recognize problems, it reacts in a “responsible and deliberative fashion”; it doesn’t change policy drastically, in a move that might appear panicky.

4.  The Fed targets nominal interest rates.

5.  When the Wicksellian equilibrium rate is above the market rate, the economy expands at trend or above.

6.  The Fed can’t directly observe the Wicksellian equilibrium rate, and tends to gradually nudge rates higher as the economy approaches full employment, and seems in danger of overheating.  Or as inflation rises above target, and appears in danger of affecting inflation expectations.

7.  At some point the target rate is nudged above the Wicksellian equilibrium rate, but the Fed doesn’t know this initially.

8.  When the economy slips into recession, the Wicksellian equilibrium rate falls fairly rapidly.  Even after the Fed begins cutting rates, the market rate will be above the equilibrium rate for several months.  Hence monetary policy stays “contractionary” for several months after the Fed realizes a recession may be developing.

Put all that together and you get contractions that could easily last for 9 months to a year, even if Fed policy is attempting to be countercyclical.  BTW, you could tell a similar story with money supply targeting, as velocity tends to fall of its own accord during contractions.
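(Here’s a toy simulation of that story.  Every number in it, from the lags to the 50-basis-point cuts to the path and floor of the Wicksellian rate, is an illustrative assumption rather than an estimate; treat it as a sketch of the mechanism, not a calibrated model.)

```python
DATA_LAG = 2          # months before the data arrives (condition 1)
RECOGNITION_LAG = 3   # months before the Fed sees the new trend (condition 2)
CUT_PER_MONTH = 0.5   # "responsible and deliberative" cuts, in points (condition 3)

fed_rate = 6.0        # nominal target, already nudged above equilibrium (condition 7)
wicksell = 5.5        # unobserved Wicksellian equilibrium rate (condition 6)

months_contractionary = 0
for month in range(1, 37):
    # Condition 8: in the downturn the equilibrium rate falls fast,
    # bottoming out at an assumed floor of 2%.
    wicksell = max(wicksell - 0.5, 2.0)

    # The Fed only starts cutting after the data and recognition lags.
    if month > DATA_LAG + RECOGNITION_LAG:
        fed_rate -= CUT_PER_MONTH

    if fed_rate > wicksell:   # market rate above equilibrium: contractionary
        months_contractionary += 1
    else:
        break

print(f"Policy stays 'contractionary' for ~{months_contractionary} months")
```

Under these made-up numbers the market rate stays above the equilibrium rate for about a year, which is the flavor of the argument: no single condition is extreme, but the lags and the gradualism compound.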

OK, but is there any evidence for my just-so story?  Maybe a bit.  Let’s start with the peculiar 1980 recession, the mildest in the postwar period.  I recall in mid-1980 thinking that Carter was toast, and that the recession would be as bad as 1974-75.  The Fed had raised rates to about 15% in late 1979.  Unemployment soared in the spring of 1980.  But then it suddenly stopped rising, leaving the recession the mildest on record.  What explains this turnabout?  There are two possible answers.  First, this was one of the few periods where the Fed wasn’t targeting interest rates.  But I don’t think that tells the whole story.  Second, and more important, the Fed was willing to do an extraordinary about-face in policy, and ignore interest rates.

We know from the Fed minutes that they generally don’t like to suddenly reverse course; it makes them look bad.  It makes the previous decision look foolish.  So during recessions they cut rates gradually, in a “responsible and deliberative fashion.”  But not in 1980.  The 3-month T-bill yield plunged from 15.2% in March to 7.07% in June.  That’s more than 800 basis points in 3 months!  And that immediately ended the recession.  The unemployment rate had ended 1979 at 6.0%, and then soared to 7.8% in July 1980.  But that was it; the rate immediately started falling, as the Fed stimulus (which pushed nominal interest rates well below the inflation rate) caused NGDP to soar at an annual rate of 19% in late 1980 and early 1981.

So that explains why the 1980 recession was so short.  But what of the longer than average recessions like 1982 and 2009?  In mid-1981 Volcker realized that the previous “tight money” policy had failed and more draconian medicine was needed.  So the Fed tightened policy and kept it tight even after it was clear we were in recession.  Volcker kept it tight until inflation fell to about 4%.  So the 1982 recession was longer and deeper than normal because it was a rare case where the Fed wanted a longer and deeper recession.

In 2009 the Fed would have preferred a milder recession, but they weren’t able to use their preferred interest rate instrument to spur the economy.  So the “liquidity trap” can explain this one.  But most contractions last about 9 to 12 months, which I think fits my just-so story pretty well.  In some cases, like 1974, the first part of the recession is arguably not a recession at all, as unemployment rose only a tiny amount.  Rather it was a sluggish economy produced by the oil shocks and price controls.  The severe phase of the recession (when unemployment soared) came in late 1974 and early 1975, and was pretty short.

To summarize, I can’t even imagine a non-monetary theory that explains the lack of mini-recessions.  RBC, RIP.  Bye bye to blaming ObamaCare.  So much for the sub-prime bubble.  But I can imagine a plausible theory of how inertial central banks that target nominal rates and observe the macroeconomy with a lag might occasionally produce short contractions, typically 9 to 12 months.  I recall that in both the 1991 and 2001 recessions it wasn’t until about 6 months in that the consensus of economists even forecast a recession.  A few months later the contraction was over.  And I believe this theory can also account for the occasional recession that is slightly shorter or longer.

Also note that it’s a post-war US theory only.  I’d expect mini-recessions in small, less diversified economies, and perhaps even in the US prior to WWII, when we had a different monetary regime.  Unfortunately we lack comprehensive monthly unemployment data from before WWII.

PS.  Grad students who are interested might want to compare this theory to the Romer and Romer narrative of Fed decisions.  There might be some overlap.

PPS.  The two occasions where the U-rate rose by exactly 0.6% with no accompanying recession were 1957 and 1960.  In both cases we were officially in recession within 2 months.  So maybe 0.5% is the limit of randomness.

PPPS.  I discussed one short recession (6 months) and three long ones (16, 18, 18 months) in this post.  The other 8 post-WWII recessions were all 8 to 11 months long.  There’s probably a reason for that, and I’d guess it has something to do with monetary policy.

Real shocks/nominal shocks

This is part one of two posts on business cycles.  Both posts will examine one of the greatest mysteries in all of economics:  Why no mini-recessions?

I’ll get into mini-recessions in the next post, but first I’d like to examine another issue: why are US recessions always accompanied by nominal shocks?  And I’ll consider that issue by first examining recent events in Japan.  Here is the unemployment rate since July 2010:

A few issues:

1.  The graph is slightly off: the last two months are September/October, not October/November as it might appear.

2.  The figures from March to August do not include the regions devastated by the quake of March 2011.

Nevertheless, for two reasons I believe the evidence strongly suggests the quake did not significantly impact Japan’s unemployment rate.  First, when the devastated regions were added to the total in September, the national unemployment rate actually fell from 4.3% to 4.1%.

And second, it was widely believed that the quake would cause lots of unemployment in Japan’s industrial heartland (Osaka to Tokyo), as supply lines were disrupted.  But it clearly did not.

Keep in mind this was a mind-bogglingly large real shock.  The death toll was more than 10 times larger than Hurricane Katrina’s, and Japan is a much smaller country than the US.  The devastation was enormous.  Even today most nuclear plants throughout Japan are shut down (only 11 of 60 are operating?), and electric power is rationed in some places.  If this real shock didn’t affect the unemployment rate, what kind of real shock would?  Industrial production did drop sharply, but quickly recovered.  I’m not arguing that real shocks don’t affect output, I’m arguing they don’t affect jobs (very much.)

Some might argue that Japan is different, that Japanese firms don’t lay off workers.  OK, but let’s see what happens when Japan is hit by a demand shock:

It sure looks like Japanese firms do lay off workers, at least when demand falls.  The unemployment rate rose from 3.8% in October 2008 to 5.6% in July 2009.  That’s a big jump by Japanese standards.

And I’d argue the same for the US.  Almost all of the big jumps in unemployment in US history are due to demand shocks.  I only know of one clear exception in the post-war period: 1959, when a steel strike caused the unemployment rate to rise by 0.8%, and then fall sharply.  And guess what, 1959 also happens to be the only mini-recession in modern American history (that I could find.)  One real shock and one mini-recession.  Coincidence?  I don’t think so, but I’ll examine the mini-recession issue in the next post.

PS.  In fairness, there is one ambiguous case in the US; 1974.  NGDP growth did slow significantly during 1974 (compared to 1973.)  But NGDP growth was still fairly high, so it might be a stretch to call 1974 a nominal shock.  In my view the gradual removal of price controls during 1974 distorts the data, and the negative nominal shock in the latter part of 1974 was much bigger than it looks from reported NGDP data.  Others may disagree.  Thus 1974 remains a possible example of a real shock boosting unemployment sharply.

PPS.  The huge (20%) wage shock of July 1933 sharply reduced industrial output.  But it didn’t seem to affect employment, as it was implemented along with a rule that reduced the workweek from 48 hours to 40 hours, leaving weekly wages unchanged.

Brad DeLong finds “coherence” in the Greenwald-Stiglitz Depression model

The following isn’t Brad DeLong’s entire summary of Greenwald-Stiglitz, but it gets at the central assumption:

However, even though I do not fully buy it I do think I understand the argument. And I do not think it is incoherent.

As I understand the Greenwald-Stiglitz hypothesis–about the Great Depression as applied to agriculture and about today as applied to manufacturing–it goes like this:

  1. Rapid technological progress in a very large economic sector (agriculture then, manufacturing now) leads to oversupply and steep declines in the sector’s prices. Poorer producers have less income. They come under pressure to cut back their spending. Others–consumers–are now richer because they are paying less for their food (or their manufactures), but their propensity to spend is lower than that of the stressed farmers or ex-manufacturing workers.
  2. Moreover, the oversupply of agricultural commodities (or manufactured goods) means that only an idiot would invest at their normal pace in those sectors. To the shortfall in consumption spending is added a shortfall in investment spending as well.
  3. Thus we have systematic pressures pushing spending down below economy-wide income. These aren’t going to go away until the declining sector (agriculture then, manufacturing now) is no longer large enough to be macroeconomically significant.
  4. Macroeconomic balance requires that the economy generate offsetting pressures pushing spending up. What might they be?

First let’s translate this into a monetarist framework, and then we can examine what’s wrong.

Even the Keynesian model requires a big drop in NGDP (below trend) to get a demand-side recession.  So how does the G-S hypothesis do that?  The Keynesians would argue that if the central bank held the money supply fixed, these shocks would cause a fall in velocity.  That’s not an unreasonable assumption, as DeLong is talking about a situation where the desire to save rises relative to the desire to invest.  That does reduce interest rates.  Lower interest rates mean a lower opportunity cost of holding base money, and thus lower velocity.  So far so good.
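(In equation-of-exchange terms the translation is just arithmetic; here’s a trivial sketch with made-up numbers.)

```python
# Equation of exchange: M * V = PY (NGDP). Hold the money supply fixed and
# let velocity fall as rates approach zero; NGDP falls one-for-one with V.
M = 1_000                        # monetary base, held fixed by assumption
V_before, V_after = 10.0, 8.5    # velocity falls as rates drop

ngdp_before, ngdp_after = M * V_before, M * V_after
print(f"NGDP falls {100 * (1 - ngdp_after / ngdp_before):.0f}% below its prior path")
```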

My complaints lie elsewhere.  On empirical grounds almost every single step in this argument is wildly implausible.  And that’s not hyperbole; I don’t mean one step, I mean every single step:

1.  I see no evidence that technological change in the farm sector that pushed farmers toward the city would boost aggregate saving rates or reduce the propensity to invest.  This process was going on continually from the late 1800s, until it began slowing in recent decades because there were so few farmers left.  Market economies are really good at adapting to these slow and predictable types of creative destruction.  Perhaps there was less investment in the farm sector, but there was more investment in the urban sector.  And was there really less investment in the farm sector?  Farmers were moving to the cities in the 1920s precisely because farming was becoming more mechanized, as machines were replacing workers.  So I’m not sure the 100-year trend of farmers moving to the cities depressed real interest rates at all.

2.  But let’s suppose I’m wrong.  Business cycles aren’t caused by 100 year trends, they are caused by “shocks.”  Where is the shock?  If each year 2% of farmers move to the city, how does that cause a sudden demand shock?  The economy was booming in the 1920s despite the gradual decline in farming.  Now you could argue that something else was propping up the economy, like a thriving manufacturing sector, and when the bottom fell out of manufacturing then the economy slumped.  But in that case why not blame the collapse of manufacturing?

3.  Stiglitz claims things got much worse in farming in the early 1930s, citing evidence that incomes fell by between 1/3 and 2/3 (which doesn’t seem like very precise data to build a theory around.)  But total national income fell by roughly 1/2, right smack dab in the middle of the Stiglitz estimates.  So what’s so special about farming?

4.  Let’s say everything I said was wrong.  Let’s say DeLong is right that the decline in farming during the 1920s gradually led to a saving/investment imbalance, which eventually got so bad it triggered the Great Depression.  How would this show up in the data?  We would see falling real interest rates.  When they got close to zero the real demand for base money would soar, triggering a sharp fall in NGDP (unless offset by lots of money printing.)  And something like that did happen in the early 1930s.  But the problem is that it didn’t happen in the 1920s.  After the 1920-21 deflation, prices were pretty stable for the rest of the 1920s.  So nominal rates should give us a rough estimate for real rates.  (I’d add that expected inflation rates were usually near zero when the dollar was pegged to gold.)  During the 1920s short-term nominal interest rates fluctuated in the 3.5% to 6.0% range.  Those are strikingly high risk-free real rates by modern standards.  Even worse for Stiglitz, they were trending upward in the latter part of the 1920s.  Thus there’s not a shred of evidence that the migration away from farms during the 1920s had the sort of macro implications that are necessary for the Stiglitz model to be plausible.

I suppose some would try to resurrect the model by pointing to all sorts of “ripple effects.”  How problems in agriculture spilled over into other sectors.  The problem with this approach is that it proves too much.  There has only been one Great Depression in US history.  (In real terms the 1870s and 1890s weren’t even close.)  If capitalism is so unstable that a problem in one area causes ripples which eventually culminate in a Great Depression, then one might as well argue the Depression was caused by my grandfather sneezing.  His sneeze passed a cold to several other people, and voila, via the “butterfly effect” we eventually get the collapse of the world economy and the rise of the Nazis.  I happen to believe that any useful model has to be more than “coherent” in a logical sense; it also has to be empirically plausible.

Suppose someone walked up to you in mid-1929 and said a depression was on the way for reasons outlined by Stiglitz.  What would you think?  What data would support that conclusion?  Did the economy seem to be having trouble accommodating farmers gradually moving to the city?  No.  Was there a savings glut?  No.  Was the real interest rate trending downward?  No.  Were there a host of exciting new technological developments that would lead one to be very excited about the future of American manufacturing?  Yes.  You’d ask Stiglitz why we should believe his model.  What pre-1929 facts was it able to explain?  As of 1929 I don’t see it explaining anything.  Of course we did have a Depression, which is exactly what you’d expect if:

1.  The Fed, BOE, and BOF all tightened in late 1929, sharply raising the world gold reserve ratio over the next 12 months.

2.  Then falling interest rates and bank failures increased the demand for base money after October 1930.

3.  Then international monetary collapse and more bank failures led to more demand for both cash and gold after mid-1931.

4.  Then FDR raised nominal wages by 20% overnight in July 1933, aborting a promising recovery in industrial production.

5.  Then in 1937 the Fed doubled reserve requirements and sterilized gold, slowing the economy.

6.  Then when the economy slowed a dollar panic (fear of devaluation) led to lots of gold hoarding, sharply depressing commodity prices world-wide in late 1937.

That would be a theory with explanatory power.  Something Stiglitz’s theory lacks.

Of course I haven’t even discussed the much deeper problems with the Stiglitz worldview.  Because he insists on a “real” theory of AD, he has no explanation for movements in NGDP.  I’d guess this is where DeLong would part company with Stiglitz.  In DeLong’s worldview there is a trend rate of inflation high enough to prevent liquidity traps, and hence (unless the Fed crashes the monetary base) high enough to prevent collapses of NGDP.  Not in Stiglitz’s world.  Although he often uses the language of Keynesianism (aggregate demand, etc.) his model is in some ways even more primitive, like the early progressive models that Keynes pushed aside during the 1930s.  Recall that Keynes saw the problem as the failure of our monetary system.  As Nick Rowe likes to say, the problem isn’t saving, it’s money hoarding.

PS.  It seems to me that DeLong mildly scolds Nick Rowe for roughly the same reason that Nick Rowe scolds Bryan Caplan.  I agree with Caplan and Nick Rowe.  That is, Nick Rowe the victim, not Nick Rowe the villain.

🙂

The Great Depression of 1963-73

Those of us born in the mid-50s can still recall the Great Depression of 1963-73.  The trigger was obviously the Kennedy assassination.  A wave of sympathy led the federal government to go on an orgy of spending, taxes, regulation, inflation, and price controls.  Medicare and Medicaid were passed in 1965.  The War on Poverty was launched.  Affirmative action was imposed on business.  Inflation soared, pushing people into higher tax brackets.  Then LBJ raised income tax rates in 1968.  OSHA began imposing burdensome regulations on business.  Ditto for the EPA.  The US left Bretton Woods, causing enormous uncertainty over monetary policy.  By 1971 there were wage and price controls on the entire economy.  What a god-awful mess!

Of course the period from 1963-73 was actually one of the greatest boom periods in all of human history.  The question is why?

One answer is Adam Smith’s famous maxim: “There is a great deal of ruin in a nation.”

Or perhaps monetary policy drove NGDP up at a fast and accelerating rate.  And it is monetary policy, not structural factors, that explains the business cycle.

Karl Smith recently expressed similar sentiments:

I don’t think Krugman is doing this, but it is easy to get too caught up in thinking the macroeconomy is an extension of personal finance. Having bought a house you couldn’t afford seems like a really bad situation to be in, and if everyone is in that situation then it seems like that ought to be really bad for the economy.

However, keep always in the front of your mind that a recession is not simply a series of unfortunate events.  A recession is when the economy produces less. For example,  the AIDS epidemic in Botswana is a horrible event for millions of people that uprooted lives and destroyed families and promises to leave a generation of orphans.

However, Botswana’s GDP growth didn’t turn negative until Lehman Brothers went under.

That a Global Financial Crisis could do what rampant death and disease could not, is an important indicator of the nature of recession.

A recession isn’t when bad things happen, whether that’s losing your house to foreclosure or your parents to AIDS. A recession is when the economy produces less.

Somehow you have to make a link between the bad thing happening and the economy producing less. I maintain that that link almost always runs through the supply of money and credit.

Still think “the problem” is the cost imposed on business by Obamacare?  Obamacare is “a problem,” as are many of the policies from 1963-73.  In the long run they may be more important than the business cycle.  But let’s not confuse policies that reduce the efficiency of the economy with those that create business cycles.

Which state had the most bank failures during 2008-10?

No, it’s not the centers of sub-prime madness like Arizona or Nevada.  Nor is it big states like California or Florida.  It’s Georgia.  And Illinois is second.  Check out the graph in this link:

There is a good reason why most bank failures in 2009 did not occur in the sub-prime states; sub-prime loans were not the main problem.  Indeed mortgages of all types were not the main problem.  What was?  According to McNewspaper USA Today it was construction loans, often for commercial real estate:

The biggest bank killer around isn’t some exotic derivative investment concocted by Wall Street’s financial alchemists. It’s the plain old construction loan, Main Street banks’ bread and butter for decades.

Deutsche Bank has called them “without doubt, the riskiest commercial real estate loan product.” The Congressional Oversight Panel, a financial watchdog, has warned that construction loans have deteriorated faster and inflicted bigger losses on banks than any other real estate loans.

That’s right, everything we were told about the financial crisis in 2009 (and which I also believed for a while) is wrong.  It’s a commercial RE crisis, not a mortgage crisis.   You might argue that it was housing loans that triggered the liquidity crisis of late 2008.  Yes, but the crash of late 2008 was caused by the Fed’s failure to do level targeting once rates hit zero.  The main public policy issue with bank failures is the cost to taxpayers, not the impact on the business cycle.

In earlier posts I argued that the commercial real estate market does not appear to have been a bubble.  It held up very well in late 2006 and 2007, even as residential housing was falling almost continuously.  Only when NGDP growth slowed in 2008 did commercial real estate begin a significant decline.  No big surprise there; commercial real estate is extremely sensitive to falls in NGDP produced by excessively tight money.  The same problem hit commercial RE in the 1930s, when NGDP fell by half.  There are stories of the Empire State Building being mostly empty after it opened in 1931.

Why were all those bad commercial real estate loans made?  After all, shouldn’t banks take into account the risk of recession?  Well, nobody could have expected NGDP to suddenly fall 8% below trend.  But even so, there clearly is a problem here.  Indeed it appears that our current banking crisis, which was initially thought to be very different from the 1980s S&L fiasco, was almost an exact replay of that earlier crash.  Initially we were told that the big banks were the problem this time: it was all about “Too-Big-to-Fail.”  But they have been quietly repaying their TARP loans.  Even the worst banking fiasco in nearly 80 years will not result in taxpayer money being permanently transferred to big banks.  Even if you include the AIG bailout as an implicit bailout of the big banks, the small banks are still the main problem.  Our government insurance company lets smaller banks run wild, just as in the 1980s (i.e. before the so-called “regulatory reforms” that were supposed to fix the S&L problem.)  The cost to FDIC of all these smaller bank failures in places like Georgia will be many times larger than the net cost of AIG plus the banking part of the TARP bailout.  And let’s not even talk about the cost of bailing out the GSEs.

It’s not about big banks and it’s not about derivatives:

It did not end well. Construction loans started blowing up when the real estate market collapsed and the economy tumbled into recession. The 10 biggest banks, facing problems of their own with subprime mortgages, were largely immune to the deterioration in construction loans, which accounted for just 2% of their assets in 2007, according to the Federal Reserve. By contrast, construction loans accounted for more than 10% of assets at banks that didn’t rank in the top 1,000. “What’s causing the problem is Main Street America, the construction loan made by the bank down the street,” says Bill Bartmann, who owns a debt advisory firm. “They built, and nobody came.”

Making matters worse: Community banks never sold the construction loans to investors the way banks unload auto loans and residential mortgages. “Most construction loans are so unique, so different, so non-homogenous, that you can’t securitize them,” Bartmann says. “They were kept on the books of the banks that originated them.” And there, many of them started to turn rotten.

Here’s an example of what banks did in Georgia:

Rollo Ingram witnessed one spectacular flameout up close. He was chief financial officer at Atlanta’s RockBridge Commercial Bank, which opened in 2006, backed by other members of the city’s business elite.

RockBridge told banking regulators it planned to specialize in business lending. It didn’t, plunging instead into real estate and construction loans. The bank told regulators in 2006 that construction loans would account for 5% of its portfolio. By the end of 2007, they accounted for 42%. Business loans, which were supposed to make up 50% of RockBridge’s lending, came to just 28%, according to an after-the-fact autopsy by Federal Deposit Insurance Corp.’s inspector general.

Nor did RockBridge recruit veteran loan officers with enough experience to safely assemble its risky portfolio, the inspector general concluded. “They hired younger, less-experienced ones, and didn’t hire enough of them,” Ingram says. He says he was forced out in 2008 when he complained about the risky direction the bank was taking.

Those on the left complained the banking crisis resulted from “laissez-faire,” forgetting that the federal government effectively nationalized most of the liabilities of the banking system in 1934.  That’s right; when you deposit $100 in your bank account you are actually lending the money to Uncle Sam, who re-lends it at the same rate to the bank.  FDIC is effectively a government institution, and the fees on banks are effectively taxes, which of course are passed on to the public.  The government didn’t seem to care that wildcat banks in Georgia colluded with property speculators and ran wild with government loans made at risk-free rates.

But some on the right were arguably even worse, not paying enough attention to this problem and constantly harping on the need for “deregulation,” aka the doctrine of “business should be free of regulations that inhibit their ability to loot the Treasury.”

I’m amazed that after the S&L fiasco of the 1980s our government wasn’t able to figure out the problem.  Or maybe they do understand the problem, but are in the pocket of property developers.

And of course now if someone proposed a crackdown, there’d be complaints about how it would “starve the economy of capital, and slow the recovery.”  Just one more side-effect that results when hawks at the Fed prevent an adequate recovery in NGDP.

Update:  I just saw an example of the “blame it all on laissez-faire” meme discussed in Arnold Kling’s blog.  And Barry Eichengreen isn’t even very left wing.