Archive for the Category Methodology


What information should we consume?

This is my second “Ted talk”.  Ted asked:

How do you fight against selection bias as you consume information about the world?

One answer is to read “everything”, as does Tyler Cowen.

But you may not have the time, in which case I’d focus on a “diverse” set of reading material. This means much more than avoiding ideological bias (although that’s important too.)

1.  Read material on both sides of the ideological spectrum, indeed on many different sides.  I subscribe to three magazines, which represent three different ideological perspectives.  (NYR of Books, The Economist, Reason.)  I also spend a lot of time reading the NYT, WSJ, FT, WaPo, National Review, Bloomberg, South China Morning Post, Yahoo and lots of other outlets—mostly online.  Don’t let your ideological bias affect how you view a news outlet.

2.  Avoid geographic bias.  It’s almost inevitable that you’ll be biased toward your own country (I’m no exception), but push back against that bias.  Try to read lots of news about other countries.  Don’t focus on the countries that the news media considers important; focus on what’s actually important.  For instance, a few decades ago I decided to stop reading about the Israeli/Palestinian conflict.  I’d had enough.  It’s not that the conflict is not important; it is.  Rather it’s not as important as the news media (on both sides of the issue) assumes it to be.  There’s no objective reason why you would want to pay more attention to the Palestinians than to the plight of Muslim minorities in western Burma, northwest China, or some other region. (One exception is if you are Palestinian or Jewish, or in the case of Northern Ireland, if you are Irish.)

3.  Read about a wide range of topics.  I just read a book about psychedelics, and I now realize that prior to reading the book I knew almost nothing about the subject. That was because I had little interest in the topic, and had never really paid attention.  While reading Michael Pollan’s book I found some interesting material on a wide range of topics, such as mental health, meditation, drug laws, consciousness, the culture of Silicon Valley, etc.  Indeed the book might even be of some interest to a person who has absolutely no interest in LSD.  The book’s main flaw is that it focuses too much on the US (see previous point.)  My next book (on the Great Recession) has the same problem.

4.  Most non-economists assume that economics is the field that studies the economy.  In fact, it would be more accurate to describe economics as a certain way of thinking about the world.  (Think of the joke, “Economics is about how people make choices; sociology is about how people don’t have any choice.”)  If you are an economist you should occasionally look at other social sciences, so that you can examine alternative ways of thinking about problems.

5.  Read lots of fiction.  One of our biases is to put too much weight on our own life experience, and not enough on the life experience of others, especially people from different cultures.  Reading fiction helps us to overcome that bias.  Good films are also helpful, especially when they are not too political.  Books or films with obvious messages are likely to have oversimplified the issue.  (The film “Three Identical Strangers” is a recent example.)  Films where the message is less obvious (say, The Death of Lazarescu) are often the ones with more important implications.  It’s become a cliché that fiction is often truer than non-fiction.

6.  Read extremely smart bloggers, not people you agree with.  Read people who annoy you.  Paul Krugman has a way of writing that many conservatives and libertarians find to be quite annoying. But he’s still a very bright intellectual who often has interesting things to say.  Ditto for Brad DeLong. I often come across commenters who say, “I don’t see why everyone thinks X is such a genius.”  If you don’t understand why everyone thinks X is such a genius, then it’s likely the problem is with you, not X.

7.  Try to double-check both sides of the story.  If the liberal media describes some conservative outrage, see what the conservative media says about the same event before forming an opinion.  Vice versa if the conservative media describes some liberal outrage.  If necessary, check the moderate media, defined as outlets that frequently criticize both sides.  Also check data sources.  One of my comparative advantages is that I know the data better than most other people. I often read posts by people who are smarter than me, and immediately notice that they are citing implausible data.  Either they made a mistake or their data source was unreliable.  For instance, almost all of the media stories on the richest people who ever lived are based on completely false data.

8.  On the other hand, I don’t know if it’s worthwhile for most people to read as many data sources as I do.  I have an unusually good memory for data and an unusually bad memory for names and other forms of verbal information.  So I’m not typical.

9.  You should occasionally change your media outlets.  After a while you’ll have gotten most of the insights you can expect from any given source, so try a different outlet.  Yes, that means TheMoneyIllusion long ago reached the point of diminishing returns.  (I don’t do very well following this advice—indeed I probably should have shifted from reading blogs to Twitter, but I’m lazy.)

10.  Try to avoid TV news, except perhaps to get a sense of the zeitgeist.  If you consume too much TV then you become a part of the zeitgeist, i.e., a part of the problem.

11.  Travel is another good source of information.  If you travel to China and speak with the people you meet, it might give you a very different view of the country than what you get reading about China in the US media.  I know it did for me.  Travel makes you realize that countries are very complex, not the sort of cartoonish vision you get from the mainstream media.

12.  If you are a macroeconomist then read the pre-war macroeconomists, such as Keynes, Fisher, Cassel and Hawtrey.  Learn about time series data over the past 100 years, not just since WWII.  Read Keynesians, monetarists, and other perspectives as well.

13.  Podcast interviews can provide a perspective that one might not get by simply reading some material written by the interviewee.

14.  When you read articles about social science research, treat the findings as an interesting hypothesis, not settled science.  Much of it does not replicate.

15.  Talk to average people, especially when you travel.  And remember, there are no average people.  Frame questions carefully.  Thus don’t ask if people like Trump, ask what they like and don’t like about him.

PS.  I’m actually not very well read in literature, philosophy, history, etc.  So do as I say, not as I do.

A new NGDP prediction market

I finally limped home from a long trip, with a bad cold.  The trip actually started well, as I taught a few classes at Alternative Money University from July 15-18, at the Cato Institute in DC.  Thanks to George Selgin, Lydia Mashburn and the other people at Cato for organizing an outstanding program.  And a special thanks to the students, who restored my faltering faith in humanity.  A number of them seemed very interested in pursuing market monetarist research ideas.  Given that there were students from schools like MIT, Chicago and Stanford, that bodes well for the future.  AMU was probably the most personally rewarding experience I’ve had since I started blogging, far more so than the flurry of publicity I received back around 2012.

Basil Halperin was one of the students at AMU, and he recently did a blog post describing a new NGDP futures market at Augur:

[A]n NGDP futures market is now live on the Augur blockchain. The specific contract is simply a binary option: will the growth rate in NGDP from 2018Q1 to 2019Q1 be greater than 4.5%?

The current price/probability implied by this contract can be viewed on the Augur aggregator website (just search “NGDP”), or the permalink is here.

. . .

For those unfamiliar, Augur is a new cryptocurrency project – it launched just last Thursday – built on the Ethereum platform that allows holders of its currency, “REP”, to create prediction markets. To speculate on such markets, an investor must use the Ethereum cryptocurrency (ETH).

The platform is decentralized: for everyone who wants to bet that NGDP growth will exceed 4.5%, there must be a counterparty who takes the other side of the bet. That is, the creators of Augur are not acting as market makers for the contract. The price of the contract will move to equilibrate supply and demand in a decentralized market: if the price is 0.7 ETH, that indicates that the market gives a 70% (risk-neutral) probability that NGDP will exceed 4.5%.

Read the entire post; it is quite interesting.
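The price-to-probability logic in the quoted passage is simple enough to sketch in a few lines. (Illustrative only; the function names are my own, not part of Augur’s actual interface.)

```python
def implied_probability(price: float) -> float:
    """For a binary contract paying 1 ETH if NGDP growth exceeds 4.5%
    (and 0 otherwise), the price in ETH *is* the market's risk-neutral
    probability of the event."""
    assert 0.0 <= price <= 1.0
    return price

def expected_profit(price: float, your_prob: float) -> float:
    """Expected payoff minus cost, for a trader who thinks the true
    probability is your_prob; positive means the bet looks attractive."""
    return your_prob * 1.0 - price

# A price of 0.7 ETH implies a 70% risk-neutral probability.
p = implied_probability(0.7)

# A trader who believes the true probability is 80% expects roughly
# 0.1 ETH of profit per contract, so she buys; a bearish trader takes
# the other side. Prices move until supply and demand balance.
edge = expected_profit(0.7, 0.8)
```

This is also why no market maker is needed: every long position is matched by a short, and the clearing price aggregates the traders’ beliefs.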

BTW, about 18 months ago I did a few posts (here and here) discussing Basil’s research on NGDP targeting.  He has a bright future.

I spent this past weekend going back and forth between an old folks home and an IHOP in Arizona.  Because the AC at the IHOP was giving me a fever and chills, I dressed up with three layers of shirts and long pants before walking in 105 degree heat to the restaurant (fortunately just a block away.) With nothing else to do, I read Eliezer Yudkowsky’s excellent book Inadequate Equilibria.

It’s hard to summarize the book in a single sentence, but here are a few themes:

1. Whereas financial markets are highly efficient, many of our other institutions are poorly designed—inadequate equilibria.

2.  While it’s generally unwise to believe that one can beat the stock market, we are often too modest in deferring to existing institutions, or conventional wisdom on a given issue.

3.  We need to rely on both theory and data.  Be a hedgehog and a fox.

The book explores when we should be willing to believe that we have an idea that others have overlooked.  It might be a public policy idea, a start-up company idea, a new medical treatment, or a new academic theory.

One example cited by Yudkowsky is treatments for Seasonal Affective Disorder (SAD), which afflicts millions of people.  He experimented with stringing up 130 LED lights in his home as a way of helping his wife, and it seemed to be sort of successful.  Then he discussed all the reasons why the market might not be expected to produce this treatment.

While reading the book, I could not stop thinking about NGDP prediction markets.  In the past I’ve argued that the failure of the government to set up and subsidize such a market is an example of near criminal negligence.  In Yudkowsky I’ve found a kindred spirit—someone who is outraged by things that the vast majority of people couldn’t care less about, like the ingredients that go into hospital formula for infants, or the fact that many doctors are too lazy to wash their hands as often as they should.

I’ve talked to many economists, and I have yet to hear a single plausible excuse for the lack of a federally subsidized NGDP prediction market.  Not one.  People just sort of shrug, because the cause doesn’t pull on our heartstrings like those kids separated from their parents on the border.

Yudkowsky is similarly exasperated.  Where others see nice shiny hospitals that “help people”, Eliezer sees a monstrous medical industrial complex that needlessly destroys lives.  And he sees these sorts of failures all over the place:

If you truly perceived the world through the eyes of a conventional cynical economist, then the horrors, the abominations, the low-hanging fruits you saw unpicked would annihilate your very soul.

Of course all this refers to the “normal” state of affairs in America, pre-Trump.

In any case, I highly recommend the book. It’s one of those books where the question of whether the author is “right” or “wrong” is almost beside the point; what’s interesting is how Yudkowsky approaches questions.

Here’s a review by Robin Hanson, another by Scott Alexander, and another by Scott Aaronson.  If I were made king of the world, I’d probably just turn it over to those four bloggers.  I’m not sure what they’d do, but I’m pretty sure that none of them would put their ego ahead of the well-being of billions of people.

Define “cause”

Tyler Cowen links to a couple of studies looking at the contribution of sectoral shocks to the business cycle.  Here’s one example, from a paper by Enghin Atalay:

Next, I examine whether the choice of elasticities has implications for individual historical episodes. Figure 4 presents historical decompositions for two choices of εM. In both panels, εD = εQ = 1. In panel A, I set εM = 1; and, in panel B, εM = 0.1. With relatively high elasticities of substitution across inputs, each and every recession between 1960 and the present day is explained almost exclusively by the common shocks. The sole partial exception is the relatively mild 2001 recession. In 2001 and 2002, Non-Electrical Machinery, Instruments, F.I.R.E. (Finance, Insurance, and Real Estate), and Electric/Gas Utilities together accounted for GDP growth rates that were 2.0 percentage points below trend.

Table 3, along with panel B of Figure 4, presents historical decompositions, now allowing for complementarities across intermediate inputs. Here, industry-specific shocks are a primary driver, accounting for a larger fraction of most, but certainly not all of, recent recessions and booms. According to the model-inferred productivity shocks, the 1974–1975 and, especially, the early 1980s recessions were driven to a large extent by common shocks. At the same time, the late 1990s expansion and the 2008–2009 recession are each more closely linked with industry-specific events. Instruments (essentially computer and electronic products) and F.I.R.E. had an outsize role in the 1996–2000 expansion, while wholesale/retail, construction, motor vehicles, and F.I.R.E. appear to have had a large role in the most recent recession.

Let’s think about this using an analogy.  Suppose you study the causes of cycles in house collapses.  Assume a community where 90% of houses have solid foundations, and 10% have rotten wood foundations.  Also assume that during floods the rate of house collapses rises from 7 per week to 450 per week.  A cross sectional study shows that 425 of the 450 collapsed houses during a flood had rotten foundations, while 25 had solid foundations.  This despite the fact that only 10% of overall homes had rotten foundations.

How much of the “cycle” in house collapses is “caused” by floods, and how much is caused by rotten foundations?  Show work.
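To “show work,” here is the arithmetic of the analogy, using exactly the numbers in the paragraph above:

```python
# Weekly house collapses in the flood analogy.
normal_total = 7       # collapses per week in normal times
flood_total = 450      # collapses per week during a flood
flood_rotten = 425     # flood-week collapses with rotten foundations
flood_solid = 25       # flood-week collapses with solid foundations
share_rotten = 0.10    # only 10% of all houses have rotten foundations

# The "cycle" is the excess of flood-week collapses over normal weeks.
cyclical_excess = flood_total - normal_total   # 443 extra collapses

# A cross-sectional study attributes most collapses to rotten foundations:
# 425/450 of flood-week collapses come from just 10% of the housing stock.
rotten_share_of_flood_collapses = flood_rotten / flood_total   # about 0.944

# Yet in the counterfactual with no flood, collapses stay near 7 per week,
# so essentially the entire *cycle* is "caused" by the flood, even though
# the cross-section points at foundations.
```

The same tension runs through the sectoral-shock decompositions: which industries fall hardest in a recession (the cross-section) is a different question from what would have happened under a counterfactual with stable NGDP (the cycle).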

The important question is: “How big would the business cycle be in a counterfactual where the Fed successfully stabilized NGDP growth?” I say “fairly small”.

Another question that is actually much less important, but seems more important to most people is: “How much of the instability in NGDP is due to monetary policy mistakes triggered by sectoral shocks, such as a decline in the natural rate of interest that the Fed overlooked, which was itself caused by a housing slump?”

When you read impressive looking empirical studies in top journals, do not assume that the authors are asking the right question.

Fiscal multiplier studies—it’s far worse than I thought

I was stunned to see a recent paper on fiscal multipliers use a 90% confidence interval, which seemed far too lenient.  After all, economics and many other sciences suffer from problems such as data mining, publication bias, and inability to replicate findings.  I’d like to see the standard statistical significance cut-off point raised from 95% to something stronger, maybe 98%.  When I did this recent post I wondered if I was making some elementary error, as econometrics is not my strong suit.

It turns out the problem is even worse than I assumed.  Indeed Ryan Murphy recently published a study of fiscal multiplier research (in Econ Journal Watch), and found that many studies use 68%!!

In recent decades, vector autoregression, especially structural vector autoregression, has been used to study the size of the government spending multiplier (Blanchard and Perotti 2002; Fatás and Mihov 2001; Mountford and Uhlig 2009). Such methods are used in a significant proportion of empirical research designed to estimate the multiplier (see Ramey 2011a). Despite being published in respected journals and cited by prominent members of the profession, much of this literature does not use the conventional standard of statistical significance that economists are accustomed to in empirical research.

Results in the literature on the fiscal multiplier are typically communicated using a graph of the estimated impulse-response functions. For instance, the effect of government spending on output may be reported by reproducing a graph of an impulse-response function of a one-unit (generally, one percentage point or one standard error) change in government spending. The graph would show the percent change in output over time following the change in government spending. To report statistical significance, authors of these studies may then draw confidence bands around the impulse response function. Ostensibly, if zero lies outside the confidence band, it is statistically distinguishable from zero. But very frequently in this literature the confidence bands correspond to only one standard error. In other words, instead of representing what corresponds to rejecting the null hypothesis at a 90% level or 95% level, the confidence bands correspond to rejecting the null hypothesis at a 68% level. By conventional standards, this confidence band is insufficient for hypothesis testing. Not every useful empirical study must achieve significance at the 95% level to be considered meaningful, of course, but a pattern of studies which do not use and reach the conventional benchmark is a cause for attention and perhaps concern. Statistical significance is not the only standard by which we should judge empirical research (Ziliak and McCloskey 2008). It is, however, a useful standard, and still an important one. Here I examine papers in the fiscal multiplier literature which apply vector autoregression methods. Sixteen of the thirty-one papers identified use narrow, one-standard-error confidence bands to the exclusion of confidence bands corresponding to the conventional standard of 90% or 95% confidence. This practice will often not be clear to the reader of a paper unless its text is read rather carefully.

I can’t even fathom what people are thinking when they use 68%.  It seems like something you’d see in The Onion, and yet apparently this stuff gets published.  Can someone help me here, what am I missing?
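The gap between one-standard-error bands and conventional confidence levels follows directly from the normal distribution. A quick standard-library sketch (assuming normally distributed errors, as the impulse-response bands implicitly do):

```python
import math

def two_sided_coverage(k: float) -> float:
    """Probability mass within +/- k standard errors of the mean,
    under a normal distribution: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2.0))

# +/- 1.0  SE covers about 68.3% -- the bands Murphy found in 16 of 31 papers
# +/- 1.645 SE covers about 90%  -- a conventional band
# +/- 1.96  SE covers about 95%  -- the usual benchmark
for k in (1.0, 1.645, 1.96):
    print(f"+/-{k} SE band -> {two_sided_coverage(k):.1%} confidence")
```

Put differently: a one-standard-error band that excludes zero corresponds to roughly a 32% chance of that result under the null, which is nowhere near the usual publication threshold.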

Lateral thinking

Over at Econlog a few weeks ago I did a post entitled “How I Think.”  I was reminded of that post when reading commenter theories on the poor health outcomes of American whites aged 45-54.  In the post, I said that when evaluating Alan Greenspan’s performance, you don’t want to focus on Greenspan; you want to focus on how other central bankers did during this period (about as well.)  When thinking about China’s growth prospects you don’t want to focus on everything you know about China (too complicated), but rather look at other East Asian countries.  And when deciding whether the gold inflows to Spain (1500-1650) were a resource curse that hurt long-term growth, you want to look at other Mediterranean regions that did not receive big gold inflows.

Here’s the graph:

[Graph: death rates for 45-54 year-olds, US whites and US Hispanics compared with other rich countries]

People were providing answers that seemed to miss the big picture. The graph shows two surprising results: the poor performance of middle-aged white health after 2000, and the excellent performance of Hispanic health after 2000.  In America, Hispanics tend to be disproportionately low income/working class, the group that has been hit hardest by recent economic trends. They were also especially likely (before ObamaCare) to lack health insurance.  And yet their health is significantly better than the health of French and German citizens, in the same age group.

I have no theories at all—I don’t even know if the data are accurate.  But if I were going to come up with a theory, I sure as hell would make sure it explained the sudden and massive divergence in White/Hispanic health outcomes.  If it didn’t, I’d have zero confidence that my theory was correct.

Paul Krugman has promised us an explanation in a future post.  Let’s see what he comes up with.

PS.  I don’t know about you, but to me that graph undercuts some of the recent anti-immigration hysteria.  If Hispanics are actually so inferior, so likely to degrade our precious “Anglo” civilization, how come they have such superior health outcomes?  In rich countries like America, don’t poor health outcomes often reflect poor lifestyle choices?  Just asking.

Update:  Commenter Mike Scally linked to an Andrew Gelman post that says the US data is biased about 5% upward due to compositional effects (the 45-54 age group is getting slightly older, due to boomers passing through).  So instead of rising slightly, US white (middle aged) death rates at a given age have been essentially flat. Of course other death rates fell about 30%, so there’s still a pretty big mystery.
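Gelman’s compositional point can be illustrated with a toy calculation: hold age-specific death rates fixed, shift the age mix within the 45-54 bin toward the older end (boomers aging through), and the crude rate for the bin still rises. (The rates and weights below are made up purely for illustration; they are not the actual US data.)

```python
# Hypothetical age-specific death rates (deaths per 1,000), held constant
# over time -- i.e., no real change in mortality at any given age.
rates = {45: 3.0, 50: 4.0, 54: 5.0}

# Hypothetical age mix within the 45-54 bin, before and after the
# boomer cohort shifts the bin's average age upward.
weights_1999 = {45: 0.40, 50: 0.35, 54: 0.25}   # younger mix
weights_2013 = {45: 0.25, 50: 0.35, 54: 0.40}   # older mix

def crude_rate(rates, weights):
    """Crude (unadjusted) death rate for the bin: a weighted average
    of age-specific rates, weighted by the age composition."""
    return sum(rates[a] * weights[a] for a in rates)

r_1999 = crude_rate(rates, weights_1999)   # 3.85 per 1,000
r_2013 = crude_rate(rates, weights_2013)   # 4.15 per 1,000

# The crude rate rises even though every age-specific rate is unchanged;
# age-standardizing (fixing the weights) removes this artifact.
```

That is the roughly-5%-upward bias in a nutshell: the apparent rise in the aggregate 45-54 rate partly reflects the bin getting older, not people at a given age dying more.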