Archive for the Category Philosophy


Inexplicable knowledge

Is it possible to know something, and yet be unable to convincingly explain how you know it?  I think so.

[Just to be clear, when I say “I know something” I mean that I believe I know it. But then what else could it mean?]

David Henderson recently said:

Like Scott, I doubt that the CIA was behind the JFK assassination, but all I have is doubt. I don’t have the certainty that Scott has and I don’t know what’s behind that certainty.

Just to be clear, I’m not completely certain of anything.  But basically David is right; I claim to “know” that the CIA did not conspire to assassinate Kennedy, with 99.9% certainty.  And yet I cannot explain why I know this.  So do I?  Let’s use an analogy of a picture of Trump’s face, made out of 10,000 dots, or pixels if you prefer.  I might look at the picture and say it’s obviously Trump.  But how do I know that?  None of the individual pixels looks anything like Trump.  Rather it’s the cumulative effect of all those pixels that creates the likeness.

When we go through life we accumulate an enormous amount of information.  Each piece of information is like a dot, and together they give us a complex worldview that tells us that some ideas are plausible and some are not.  If I wanted to convince David, the best I could do is argue by analogy, starting from something he would agree with.  I might say that I know that the American Girl Scouts leadership council was not behind the Kennedy assassination. If that didn’t work, pick a conspiracy that was even more far-fetched—David’s grandmother.  At some point he’d accept the idea that one might know something because the alternative is too implausible, or at least most people would.  But of course that wouldn’t help at all with the CIA (which really does do nasty things).  The problem is that our life experiences give us each a different set of facts, and a different brain to process those facts.  I see a different CIA from the one David sees.

When I was much younger—like 3 months ago—I used to think it was a productive use of time to try to convince someone that Trump’s a demagogue, because . . . well, because he’s obviously a near perfect dictionary definition of a demagogue. But it’s pointless. For every fact you cite, they’ll point to other non-demagogue politicians who do something similar, at least on occasion.  Trump’s demagoguery is like the picture with 10,000 pixels: you either see it or you don’t.  No single example of the big lie, or of demonizing minorities and foreigners, or of unrealistic promises, or macho posturing, is going to convince anyone, because they’ll always be able to explain it away.  After all, politics is a very messy business.  And each argument is just one dot.

This also relates to monetary policy.  I know that monetary policy was too tight in 2008 and 2009 and that the Fed could have adopted a policy that led to faster NGDP growth.  But if asked to explain how I know this, I’d have trouble explaining my belief.  I lack an elevator pitch.  I could tell people to read my entire blog, from end to end.  But that’s thousands of pages of argument, and it still wouldn’t even come close to explaining my belief, which also depends on decades of reading economic theory, economic history, and the history of economic thought.  That reading creates the brain architecture or grid that determines where I store all the various facts that I come across, and explains why I often just “know” that a commenter’s facts are wrong, without having actually checked. That doesn’t mean I don’t try to convince people (in the blog I do the best I can), just that it’s very difficult to do.

On the other hand I strongly recommend that people not try to explain their beliefs on the Kennedy assassination, or 9/11, or why Noam Chomsky is wrong about US foreign policy, or why Trump is a demagogue, or why free will doesn’t exist, or why Scott Alexander is brilliant, or what it means to “know something”, unless you enjoy pointless debates.  The odds of convincing anyone are so small that it’s not worth the effort.

PS.  I don’t believe that ‘inexplicable knowledge’ is the right term, but am not sure what is.  What I have in mind is not just tacit knowledge, as it can also involve reading books or articles.

PPS.  I was going to do a post on Trump’s nominees, which so far are mostly lousy. But it’s probably not worth the effort.  So I’ll just do a PS. Confirm them.  In politics, I always try to put principle over expediency.  So although I don’t like many of the nominees, I’ve always felt that Presidents have a right to pick the people who will serve them, unless something truly awful turns up.  I didn’t think it was fair to prevent those women from serving as Attorney General back in 1993, just because of various “nanny-gates”, and I’m not going to change my views just because Trump is President.

PPPS.  Vox recently published this piece by Sherri Underwood:

I remember the precise moment that I realized I regretted voting for Donald Trump.

It was during his 60 Minutes interview after the election. I was, like everyone else, shocked that he had won. It seemed so unlikely based on the polls and the confidence the media had that he would lose. It was a pleasant surprise, and I went to bed on election night thrilled that he would be our president.

But sitting on my couch, sipping coffee as I watched the interview, I saw with my own eyes who Trump really was as a person. He backtracked on one of his signature campaign promises: pursuing an investigation into the Clinton email scandal. It’s not that I want Clinton to be crucified or “locked up” — it’s the nonchalance with which he went back on his word after hammering it repeatedly during the campaign. The ease and quickness with which he reversed his position shook me to my core. I realized in that moment that I had voted for a demagogue. And it was sickening.

Three months ago I would have mercilessly mocked her stupidity.  Now I respect her much more than I respect myself.  Writing that article took courage.  Not surprisingly, she’s a Midwesterner.

PPPPS.  Speaking of the Midwest: Minnesota, Iowa and Wisconsin were three of the most liberal states in the country back in 1988, going for Dukakis while Illinois and California went for Bush.  This fascinating factoid from the National Review suggests they are about to turn red:

In the Upper Midwest, demographic trends have lent a hand: In 2004, Iowa, Wisconsin, and Minnesota were among the few states in which the oldest white voters were the most liberal, and the generation born of the Great Depression has been dying off.

Those old hippies are the Dukakis voters.  The Wisconsin I grew up in is gone—just faded memories.  And one more dot to slightly rewire the political map in my brain, which has both spatial and temporal dimensions.

BTW, Politico has a piece on Pepin County, Wisconsin that is the single best article on the election that I have read.  I will do a post.

Update:  Regarding Trump’s alleged demagoguery: [screenshot from January 20, 2017 omitted]

Trust, but verify (reply to David Deutsch)

Back in the early days, before Trump Derangement Syndrome set in, I did a post entitled:

Are the laws of physics mere social conventions?  No, they are social conventions.

The basic idea was that theories are beliefs, but not “merely” beliefs.  There is nothing more important in the entire universe than mental states such as beliefs. Now, after a delay of 7 years, David Deutsch has left the following comment:

The astronomy example puzzles me; how closely our model reflects objective reality is somewhat orthogonal to whether “objective reality” exists in the first place. Elsewhere, the author asks if a model that is right 99.8% of the time is “false” while one that is right 99.99% of the time is “true” (or something to that effect). Truth is quantitative/probabilistic, not binary; some models are more “true” than others, but their level of “truthiness” is still an objective fact.

If this well known piece by Asimov has not been referenced yet, I’d be surprised, but here he sums it up quite well:

“When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

I have no idea if it is the David Deutsch, one of the smartest writers that I have ever read, but on the off chance that I have been honored with a comment from him, then I really need to offer some sort of response.  First, a bit of context.  I was defending Richard Rorty’s views on truth (sometimes summarized as “truth is what my peers let me get away with”).  Or perhaps one might say, true things are things that are regarded as true. Not surprisingly, this drives lots of people crazy, because we all see things that others regard as true which clearly don’t seem true to us, and things that seem obviously true to us which others reject.  Is this Rortian stuff too much relativism?

I don’t see it as relativism at all.  I don’t see it as the world of fuzzy post-modern philosophers attacking the virtuous hard sciences.  It’s important not to get confused by semantics, but to focus on what’s really at stake.  In my view, Rorty’s position is most easily seen by considering his denial of the distinction between objective truth and subjective belief.  In order to see why he did this, consider Rorty’s claim that, “That which has no practical implications, has no theoretical implications.”  Suppose Rorty’s right, and it’s all just belief that we hold with more or less confidence.  What then?  In contrast, suppose the distinction between subjective belief and objective fact is valid.  What then?  What are the practical implications of each philosophical view?  I believe the most useful way of thinking about this is to view all beliefs as subjective, albeit held with more or less confidence.

Let’s suppose it were true that we could divide up statements about the world into two categories, subjective beliefs and objective facts.  Now let’s write down all our statements about the world onto slips of paper.  Every single one of them, and there must be trillions (even if we ignore the field of math, where an infinite number of statements could be constructed).  Now let’s divide these statements up into two big piles, one for subjective beliefs and the other for objective facts.  We build a vast Borgesian library, and put all the subjective beliefs (e.g., “Trump is an idiot”) into one wing, and all the objective facts (“Paris is the capital of France”) into the other wing.

Now here’s the question for pragmatists like Rorty and me.  Is this a useful distinction to make? If it is useful, how is it useful?  Here’s the only useful thing I can imagine resulting from this distinction.  If we have a category of objective facts, then we can save time by not questioning these facts as new information arises.  They are “off limits”.  Since they are objective facts, they can never be refuted.  If they could be refuted, then they’d be subjective beliefs, not objective facts.

But I don’t want to do that.  I don’t want to consider any beliefs to be completely off limits—not at all open to refutation.  That reminds me too much of fundamentalist religion.  On the other hand, I do want to distinguish between different kinds of beliefs, in a way that I think is more pragmatic than the subjective/objective distinction.  Rather I’d like to assign probability values to each belief, which represent my confidence as to whether or not the belief is true.  Then I’d like to devote more of my time to entertaining critiques of highly questionable hypotheses than to hypotheses I already regard as nearly certain.

Thus if someone tells me that I really need to read a book showing how 9/11 was a CIA plot, my response is, “No, it’s not worth my time.” It’s possible that it was a CIA plot, but so unlikely I don’t want to waste limited time trying to refute the view that Al Qaeda launched the attack.  It’s not that I believe Al Qaeda’s culpability is an objective fact; rather my subjective belief that it was Al Qaeda is so strong that I don’t want to waste time on it. Ditto for my view that 1+1 = 2.  On the other hand, at some later date new information on 9/11 may arise and reach the headlines of the New York Times, where I see it.  Now I may want to read that book.  Similarly, I can imagine a physicist not wanting to read some idiot’s crackpot anti-Newtonian model in 1850, but finding anti-Newtonian models quite plausible after the work of Einstein.

The subjective/objective distinction would only be useful if it put some ideas off limits, not open to questioning.  There are certainly some ideas where it’s a waste of time to question them, but I don’t like this as a general category, because I don’t know where the boundary lies between claims that should be beyond questioning, and claims that should be open to question.  So it’s simply more pragmatic to regard all statements as being beliefs about the world that are open to question, and then assign probability estimates (guesstimates?) to the chance that these claims will be overturned.
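The triage rule described above can be sketched in a few lines of code. This is purely my own toy illustration; the beliefs, probabilities, and the 0.99 cutoff are all invented for the example, not anything the post specifies.

```python
# Toy sketch: treat every claim as a belief with a confidence level,
# and spend scrutiny time only on the ones held with less confidence.
# All numbers below are hypothetical.

beliefs = {
    "Al Qaeda carried out the 9/11 attacks": 0.999,
    "Monetary policy was too tight in 2008-09": 0.95,
    "The many-worlds interpretation is correct": 0.50,
}

def worth_revisiting(confidence, threshold=0.99):
    """Flag beliefs uncertain enough to merit reading critiques of them."""
    return confidence < threshold

for claim, p in beliefs.items():
    if worth_revisiting(p):
        print(f"Open question, worth revisiting: {claim} (p = {p})")
    else:
        print(f"Near-settled, skip for now: {claim} (p = {p})")
```

The point of the sketch is that nothing is categorically off limits; a claim only gets skipped because its current probability estimate makes questioning it a poor use of time, and new evidence can always move that estimate back below the threshold.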

The other point of confusion I see is people conflating “the map and the territory”. Then they want to view “objective facts” as aspects of the territory, the underlying reality, not (just) beliefs about the territory.  I don’t think that’s very useful, as it seems to me that statements about the world are always models of the world, not the world itself.  Again, if that were not true, then theories could never be revised over time.  After all, Einstein didn’t revise reality in 1905; he revised our understanding of reality–our model of reality.

Reagan said “Trust, but verify”.  That means it’s OK to believe that certain things are true, but always be open to evidence that these things are not true.

PS.  Recall this statement I made above:

Rather I’d like to assign probability values to each belief, which represent my confidence as to whether or not the belief is true.

Rorty was criticized when people pointed out that one often hears something like the following:  “Although most people believe X, I believe that Y is actually true.”  If there is no objective standard to determine whether X is true, then what can this statement possibly mean?  I seem to recall that Rorty said something to the effect that when people claim Y is actually true, despite most people believing X, they are actually predicting that in the future Y will eventually be regarded as true.  Or maybe it’s a claim that, “If other people had seen what I saw, then they would also believe Y is true.”

PPS.  Back in 2013 I mentioned Deutsch in a post:

David Deutsch likes to sum up his philosophy as:

1.  Problems are inevitable.

2.  Problems are solvable.

The horrible nationalism sweeping the world was inevitable, and it’s solvable.  Good times will return.

PPPS.  I had a discussion of Deutsch’s views on quantum mechanics, as well as Eliezer Yudkowsky’s views, back in this 2013 post.


You can’t have it both ways

In two recent Econlog posts (here and here), I pointed out that a wise man or woman should always have two levels of belief.  One is their own view of things, independently derived from their own research.  This is the view from within your skin.  The second level of belief is the awareness of the wisdom of crowds.  The awareness that an index fund is likely to do better than a fund that you personally manage. An awareness that the consensus view of the true model of the macroeconomy is likely to be better than your own model of the economy.  This is the view from 20,000 miles out in space, where it’s clear that you are nothing special.

In the comment section, Philo suggested:

In most things, you’re admirably sensible (insightful, etc.). In philosophy . . . well, better stick to your day job!

He likes my market monetarist view of things, but not my philosophical musings.  But you can’t have it both ways.  If my philosophy is wrong then my market monetarism is equally wrong.  Either the wisdom of the crowds is true, or it isn’t.

(As an aside, I’m aware that the wisdom of the crowds might be slightly better if the views are weighted by expertise, but that has no bearing on my claim.  Even if you think I have a bit more expertise than the average economist, the entire weighted sum of non-Scott Sumner economists is, objectively speaking, far more qualified than I am.)
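The arithmetic behind that aside is simple enough to show directly. In this toy illustration (all forecasts and weights are invented), one forecaster gets double the weight of anyone else, yet the consensus is still dominated by everyone else combined.

```python
# Toy expertise-weighted consensus forecast (numbers hypothetical):
# even a double-weighted individual barely moves the crowd's average.

forecasts = [3.0, 4.5, 5.0, 4.0]   # NGDP growth forecasts, in percent
weights   = [2.0, 1.0, 1.0, 1.0]   # first forecaster weighted twice as heavily

consensus = sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)
print(round(consensus, 2))  # prints 3.9
```

Here the double-weighted forecaster predicts 3.0%, but the weighted consensus lands at 3.9%, far closer to the rest of the group, which is the point: one person's extra expertise is swamped by the weighted sum of everyone else.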

In this blog I am normally giving you my views on the optimal economic model from the “within the skin” perspective, because otherwise I am of no use to society.  I’d be just a textbook.  In contrast, I give you my views on where markets are heading from the 20,000 miles up perspective, because that’s the most useful view for me to communicate the intuition behind market monetarism.  You don’t care where I personally think the DOW is going, and you should not care.

It is the job of the economics profession to weigh my arguments, and the arguments of those who disagree with me, and reach a consensus.  That consensus is not always correct, but it’s the optimal forecast.  Unfortunately, at the moment the optimal forecast is that I’m wrong about monetary offset, but I’ll keep arguing for monetary offset because that’s the view I arrived at independently, and I’m of no use to society unless I report that view, and explain why.

When I talk to philosophers about epistemology, they often mention concepts like “justified true belief” which seems question begging to me.  I’m certainly no expert on the subject, but I can’t see how the EMH is not right at the center of the field of epistemology.  If back in 1990, we wanted to know whether there were Higgs bosons or gravity waves, the optimal guess would not have been derived by asking a single physicist, but rather setting up a prediction market.  Yes, traders know less about physics than the average MIT physicist, but traders know whom to ask.

Many worlds vs. Copenhagen interpretation? Perhaps it can’t be tested.  But if it could, then set up a prediction market.  Robin Hanson’s futarchy is a proposal to have public policy based on society’s best estimate of what is true—derived from prediction markets.  He wants us to vote on values and bet on beliefs.  Richard Rorty might go even further, and have us bet on values, where the outcome of the bet depends on a poll of values 50 years in the future.  Rorty would say values are no more subjective than science.

I think the EMH is basically what Rorty meant when he said truth is what my peers let me get away with.

PS.  My two types of beliefs do not have a rank order; they are incommensurable concepts.  Both are essential, and one is not more or less important than the other.  There’s no answer to “What do I really believe about monetary offset?”  I believe different things, at different levels of belief.

What if Wittgenstein had been a macroeconomist?

The commenter Jason sent me a great Wittgenstein quotation, and I immediately knew I had to use it somewhere.  It took me 10 seconds to decide where:

“Tell me,” the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, “why do people always say it was natural for man to assume that the sun went around the Earth rather than that the Earth was rotating?” His friend replied, “Well, obviously because it just looks as though the Sun is going around the Earth.” Wittgenstein responded, “Well, what would it have looked like if it had looked as though the Earth was rotating?”

It’s quotations like this that make life worth living.  So I wondered what Wittgenstein would have thought of the current crisis:

Wittgenstein:  Tell me, why do people always say it’s natural to assume the Great Recession was caused by the financial crisis of 2008?

Friend:  Well, obviously because it looks as though the Great Recession was caused by the financial crisis of 2008.

Wittgenstein:  Well, what would it have looked like if it had been caused by Fed policy errors, which allowed nominal GDP to fall at the sharpest rate since 1938, especially during a time when banks were already stressed by the subprime fiasco, and when the resources for repaying nominal debts come from nominal income?

OK, not nearly as elegant as Wittgenstein’s example.  But you get the point.

Jason also wonders what future generations will think of the Keynesian/monetarist split.  Which model will seem like the Ptolemaic system?  I won’t answer that, but will take a stab at a related question.  The Great Depression was originally thought to be due to the inherent instability of capitalism.  Later Friedman and Schwartz blamed it on a big drop in M2.  Their view is now more popular, because it has more appealing policy implications.  It’s a lot easier to prevent M2 from falling, than to repair the inherent instability of capitalism.  Where there are simple policy implications, a failure to do those policies eventually becomes seen as the “cause” of the problem, even if at a deeper philosophical level “cause” is one of those slippery terms that can never be pinned down.

In 50 years (when we are targeting NGDP futures contracts) the Great Recession will be seen as being caused by the Fed’s failure to prevent NGDP from falling.  Not through futures contracts (which didn’t exist then) but through a failure to engage in the sort of “level targeting” that Bernanke recommended the Japanese try during their similar travails.

PS.  W. Peden thinks the quotation is apocryphal, and notes that it’s used in Tom Stoppard’s play “Jumpers.”  For some reason I prefer it be Wittgenstein.

Lazy people, nice people, crazy people, happy people.

Here’s a question.  When we describe people using the adjectives in the title of this post, are we describing the way they are, or the way they behave?  We have deeply ambivalent views in this area, which on close examination are probably incoherent.  Much of our social interaction is based on shared myths, which shrivel under the bright light of scientific scrutiny.  Even the language we use is subtly inconsistent with the scientific method.  For instance, consider one of society’s monsters, say a Hitler, Mao, or Osama.  Suppose someone says “I know how he felt when he committed his crimes.”  Most people would take that as condoning the crimes, even though from a scientific perspective there’s no logical connection between knowing why a person acted a certain way, and condoning their behavior.

I was reading a book about genetic engineering called “Babies by Design,” and was struck when Ronald Green claimed:

Research shows that obesity is consistently attributed to laziness and a lack of self-discipline.

In reality, the truth may be just the opposite.  Studies of identical twins reared together or apart indicate that much obesity may be caused by hereditary factors.  In technical terms, the heritability of obesity, the percentage of observed variation among people that is attributed to genes, is very high, somewhere between 50 and 80 percent.
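The heritability definition in the quotation above is just a variance ratio, and can be made concrete with a small worked example. The variance figures here are invented for illustration only; the post reports nothing beyond the quoted 50 to 80 percent range.

```python
# Sketch of the quoted definition: heritability is the share of total
# observed variation attributable to genes, Var(genetic) / Var(total).
# Both variance components below are hypothetical.

var_genetic = 40.0       # variation in some trait due to genes
var_environment = 10.0   # variation due to environment

heritability = var_genetic / (var_genetic + var_environment)
print(heritability)  # prints 0.8, i.e. 80 percent, the top of the quoted range
```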

[Before continuing, a disclaimer so that I am not misunderstood.  I have good genes for being thin.  If I didn’t I assume I’d be fat, as I don’t have much self-control.  So the following should not be viewed as criticism of fat people.]

Do you see the problem with Green’s assertion?  He asks us to believe that just because obesity is 80% genetic, it can’t also be 80% due to laziness.  But why?  Why are those two hypotheses viewed as mutually exclusive?  Is it because genetic characteristics are viewed as “not one’s fault,” whereas laziness is viewed as a character flaw?  But why shouldn’t character flaws be genetic?

A new study has found a “kindness gene.”  It seems that some people are born kind and some are born “bad to the bone.”

People with a certain gene trait are known to be more kind and caring than people without it, and strangers can quickly tell the difference, according to US research published on Monday.

The variation is linked to the body’s receptor gene of oxytocin, sometimes called the “love hormone” because it often manifests during sex and promotes bonding, empathy and other social behaviors.

Scientists at Oregon State University devised an experiment in which 23 couples, whose genotypes were known to researchers but not observers, were filmed.

One member of the couple was asked to tell the other about a time of suffering in his or her life. Observers were asked to watch the listener for 20 seconds, with the sound turned off.

In most cases, the observers were able to tell which of the listeners had the “kindness gene” and which ones did not, said the findings in the Proceedings of the National Academy of Sciences edition of November 14.

Should we no longer praise people for being kind?  No, we should praise them, but if we were to use Green’s logic then meanness would no longer be viewed as the person’s fault, because we’ve discovered that it’s genetic.

And happiness is also genetic, according to an article in The Economist:

Serotonin is involved in mood regulation. Serotonin transporters are crucial to this job. The serotonin-transporter gene comes in two functional variants, long and short. The long one produces more transporter-protein molecules than the short one. People have two versions (known as alleles) of each gene, one from each parent. So some have two short alleles, some have two long ones, and the rest have one of each.

The adolescents in Dr De Neve’s study were asked to grade themselves from very satisfied to very dissatisfied. Dr De Neve found that those with one long allele were 8% more likely than those with none to describe themselves as very satisfied; those with two long alleles were 17% more likely.

That’s already pretty disturbing, but then consider the following:

Which is interesting. Where the story could become controversial is when the ethnic origins of the volunteers are taken into account. All were Americans, but they were asked to classify themselves by race as well. On average, the Asian Americans in the sample had 0.69 long genes, the black Americans had 1.47 and the white Americans had 1.12.

That result sits comfortably with other studies showing that, on average, Asian countries report lower levels of happiness than their GDP per head would suggest. African countries, however, are all over the place, happinesswise. But that is not surprising, either. Africa is the most genetically diverse continent, because that is where humanity evolved (Asians, Europeans, Aboriginal Australians and Amerindians are all descended from a few adventurers who left Africa about 60,000 years ago). Black Americans, mostly the descendants of slaves carried away from a few places in west Africa, cannot possibly be representative of the whole continent.

Note how the alleged racial gaps in happiness are inversely correlated with average income in America.  Proof God is a utilitarian?

Seriously, if society insists on continuing to probe ever more deeply into human genetics, I think we need a whole new language for discussing ethical issues.  My suggestion is that scientists give up on all the comforting notions of “just deserts.”  Yes, proof that X% of behavior is genetic still allows for the remaining (100-X)% to be environment.  But environment is also not the villain’s fault.

In my view the right way to handle all this is to ignore the question of whether anything is really a person’s fault, and consider the related question of whether certain behavior is changed by external incentives (including telling them that it is their fault.)  I don’t have any problem with obese or unhappy people, but I don’t like mean people.  So as long as there is evidence that mean people can be deterred from meanness by sanctions, I’ll continue to give them a hard time.  And no amount of genetic research will change my behavior in that regard.

However I do think all this research supports utilitarian ethics.  We utilitarians are sometimes criticized for caring equally about the happiness of the deserving and the undeserving.  This genetic research suggests that much of the variation in personality is genetic, and hence “not the person’s fault.”  The optimal policy (and I’m not proposing this) would be for an omniscient government to tax mean people X dollars for each unit of mean behavior, and then rebate the entire amount of revenue in lump sums to everyone with a mean gene in their body.

Or in Christian terms we could say “love thine enemy, but also punch them in the nose every time they misbehave.”  Does that seem contradictory?  Then you are confusing behavior with character.

Because of genetic research our view of humanity and ethics in the year 2111 will be totally different from today, just as our current views are totally different from 100 years ago (when “progressives” often favored eugenics.)

What do I fear most?  Busybodies like this:

People who have two copies of the G allele are generally judged as more empathetic, trusting and loving.

Those with AG or AA genotypes tend to say they feel less positive overall, and feel less parental sensitivity. Previous research has shown they also may have a higher risk of autism.

.   .   .

However, no gene trait can entirely predict a person’s behavior, and more research is needed to find out how the variant affects the underlying biology of behavior.

“These are people who just may need to be coaxed out of their shells a little,” said senior author Sarina Rodrigues Saturn, an assistant professor of psychology at Oregon State University whose previous research established the genetic link to empathetic behavior.

“It may not be that we need to fix people who exhibit less social traits, but that we recognize they are overcoming a genetically influenced trait and that they may need more understanding and encouragement.”

Keep your %&#@$*& hands off my anti-social traits.  Remember what Greta Garbo said.

And then there is mental illness.  Here’s Reason magazine explaining that crazy is as crazy does:

Metzl is not interested in such distinctions. “Schizophrenia is shaped by social, political, and, ultimately institutional factors in addition to chemical or biological ones,” he writes. “Too often, we assume that medical and cultural explanations of illness are distinct entities, or engage in frustratingly pointless debates about whether certain mental illnesses are either socially constructed or real.” He says “this polarizing dichotomy serves no one, and makes it harder to see how mental illness is always already both.”

It is hard to imagine someone making a similar speech about cancer or diabetes. “Unlike the conditions treated in most other branches of medicine,” observes Marcia Angell, former editor of The New England Journal of Medicine, in a June New York Review of Books essay, “there are no objective signs or tests for mental illness (no lab data or MRI findings) and the boundaries between normal and abnormal are often unclear. That makes it possible to expand diagnostic boundaries or even create new diagnoses, in ways that would be impossible, say, in a field like cardiology.” In other words, mental illnesses are whatever psychiatrists say they are. If someone is diagnosed with depression or schizophrenia based on the currently accepted behavioral markers, assuming the criteria are correctly applied, it does not make sense to say he does not really have depression or schizophrenia, since there is no test to disconfirm the diagnosis. And if the criteria change so that they no longer apply to him, his disease disappears or becomes something else; it has no independent existence.

Music to my post-modern ears.

When I read “A Beautiful Mind” there was one aspect of John Nash’s behavior that I found strange.  Every so often he was involuntarily committed to an insane asylum.  Because he hated it there, he soon began to “act sane” so that they’d have to let him out.  And they did.  I don’t recall any of the book reviews noticing this, but doesn’t it seem a bit odd that someone who is mentally ill can act sane, given that acting crazy seems to be the only way to diagnose most mental illnesses?

I’m not saying Nash wasn’t “actually crazy.”  I’m saying that like all our other behavioral traits mental illness probably isn’t what we think it is.  I look forward to the day when all human vices are relabeled “mental illness.”  Then we can clear the decks and start over with the real question: Which behaviors can be changed through incentives and which cannot?  It’s all about economics.

PS. In comments Woupiestek provided the following:

Your post reminds me of “drapetomania”:

Learned this from the British quiz show QI. It comes up after 3 minutes in this fragment:

And around 7 minutes they start replicating this post.  Truth is stranger than fiction.  And why can’t America have TV shows like that?

Update: ChrisA points out that Bryan Caplan did discuss the John Nash case.