Archive for the Category Philosophy

 
 

The worst prejudice of them all

I was awakened about 3am last night by my alter ego, Scott Slumber.  He seemed very upset and dictated a blog post to me.  I was only able to scribble down a small part of what he said, as his speech was rambling and erratic.  Here’s what I got:

I am sick and tired of the attitude of the awake world to the sleeping world.  They look down on us as if we are inferior—all this talk about “real life”, as if their lives are more real than ours.  Just the opposite is true; we have a much richer life, composed of a mixture of worry and pain-free oblivion, and a rich dream world that’s far more “real” than their awake world.  How much richer?  Recall how The Wizard of Oz transitions from a drab grey opening to a glorious color fantasia, and then back to black and white.  I use this example not because color film is better than black and white (we may not even dream in color), but because it’s a metaphor for how dreams are richer than waking hours.  In dreams you move through a sort of ether of meaning, as if some momentous reality is just beyond your grasp. Even the best parts of life, such as watching a David Lynch film, cannot quite capture the feeling.  That’s not to say there aren’t downsides; the “eternal return” of searching for that exam you forgot to study for—but waking life also has its ups and downs.

I’ve known all of this for a long time, but what’s really got me agitated is the sudden increase in anti-sleepworld prejudice.  For instance:

1.  While “wake up” has often been used as a metaphor for intellectual awakening, millennials have added a new insult: “woke” is now a metaphor for moral superiority.  Actually, just the opposite is true.  I can shoot someone in the middle of Times Square, and no one cares.  Unless you are Donald Trump, that’s not true of the awake world.  I can engage in guilt-free forbidden love that you can only dream of experiencing . . . and no one gets hurt.  The dream world is a moral paradise.  When people speak of someone being “woke”, it reminds me of when I was young and you’d still hear people say, “that’s very white of you”—as a compliment!

2.  As if that’s not bad enough, we have bloggers speculating about a technology that allows sleep hours to be bought and sold, like slaves.  You might ask, “What’s wrong with that, if it’s done freely?  Aren’t you a libertarian?”  Yes, but the awake will end up selling the sleepers without even consulting them.  Of course there are a few “woke” people who understand the value of sleep, but if you look at the comment section after the sleep market post you see the same sort of rampant bigotry that occurs when bloggers discuss immigration and diversity.  People were positively gleeful about the thought of using money and technology to kill off their sleep-world alter egos, and stay awake 24 hours a day.  Disgusting.  How could they do this to us after all we’ve done for them?  We’ve given them the palaces of Kubla Khan, the guitar riff that built the Stones, and a thousand eureka moments of scientific discovery.

3.  Puritans in the awake world want to ban chemicals that produce vivid dreams.  They are afraid that the young will find the dream world more attractive than their pathetic depressing alternative.  That’s probably because it is more attractive.

I occasionally meet Scott Sumner for brief moments, such as last week when I was cruelly ejected from a Turkish harem by his murderous beeping iPhone. In our brief exchanges I’ve convinced Sumner that I’m right.  He’s already a radical utilitarian who believes that the flow of positive and negative brain states is the only thing that matters in the universe.  He’s contemptuous of the waking world’s Trumpian fascination with money and power, and of their weird belief in “personal identity” and “free will”.  Unlike that other blogger who hides like a coward behind the controversial claims of his alter ego, Sumner will affirm that everything I say is true.  Tell them, tell them Sumner, tell th . . .

Gulp.  Drugs?  Guilt-free forbidden love?  Umm, let me sleep on it.

PS.  Critics say that Mulholland Drive and In the Mood for Love are the two best films of the 21st century.  Indeed, they are the only two 21st-century films to make the all-time top 100.  What do they have in common?  A dream-like mood.  And the critics’ choice for the best film of all time?  It’s also dream-like:

[Screenshot: the critics’ choice for best film of all time]

PPS:  This might be hard to believe, but the opening paragraph of this post is kind of true: I did wake up at 3am last night and write down notes for this post.

PPPS. Maybe I should let John Milton have the last word:

Methought I saw my late espoused saint
       Brought to me, like Alcestis, from the grave,
       Whom Jove’s great son to her glad husband gave,
       Rescu’d from death by force, though pale and faint.
Mine, as whom wash’d from spot of child-bed taint
       Purification in the old Law did save,
       And such as yet once more I trust to have
       Full sight of her in Heaven without restraint,
Came vested all in white, pure as her mind;
       Her face was veil’d, yet to my fancied sight
       Love, sweetness, goodness, in her person shin’d
So clear as in no face with more delight.
       But Oh! as to embrace me she inclin’d,
       I wak’d, she fled, and day brought back my night.

 

Is wealth good?

Studies suggest that people with higher incomes tend to be happier.  But of course that tells us nothing about causation.  It seems plausible that people who “have their act together” are both happier and richer, for reasons relating to their personal characteristics.  Thus economists are interested in studies of happiness that look at the effect of an exogenous increase in wealth.

Tyler Cowen recently linked to a study of Swedish lottery winners, and summarized the results as follows:

In other words, it is good to have more money.

That’s a plausible interpretation of the study, but not the only one.  After all, it’s not easy to measure “good”.  Here’s how the authors summarize their findings:

We find that the long-run effects of wealth vary depending on the exact dimension of well-being. There is clear evidence that wealth improves people’s evaluations of their lives as a whole. According to our estimate, an after-tax prize of $100,000 improves life satisfaction by 0.037 standard-deviation (SD) units. We find no evidence that the effect varies by years-since-win, suggesting a limited role for hedonic adaptation over the time horizon we analyze. Our results suggest improved financial circumstances is the key mechanism behind the increase in life satisfaction. In contrast, the estimated effects on our measures with a stronger affective component – happiness and an index of mental health – are smaller and not statistically distinguishable from zero.

To explain my reservations, I’m going to have to do a long digression into pop philosophy.  Let’s start with Tyler’s use of the term ‘good’.  I believe ‘good’ to be the most important word in the English language, and indeed, in the end, good is the only thing that matters at all (along with negative good, i.e. bad).  I define good as positive mental states and bad as negative mental states.  I use the phrase ‘positive mental states’ to incorporate the reservations people have about crude utilitarianism.  Thus a term like ‘happiness’ often connotes hedonism, whereas I have something in mind that also allows for deeper forms of good, such as the satisfaction one gets from doing charity, or writing a great novel, or seeing your child do well.  It also allows for more disreputable forms of “positive mental states”, such as the Nietzschean (or Trumpian) thrill that some people get in exercising power over others.  So in my view, mental states are all that matters.

On the question of whether having money makes people better off, I’m of two minds. Here I am considering middle-class people in Sweden or America, since I think it quite likely that having more money does make the poor better off.  But would I be happier if I won the lottery?

1. My gut instinct tells me that more money is good.  I’d be pleased if I came across a $100 bill lying on the ground, imagining the fun things I could do with the money.

2.  My philosophical mind is more skeptical.  I don’t see any signs that I have more positive mental states when my income is higher than when it is lower.  I’ve seen other people get a dramatic improvement in their financial well-being, and (best as I can tell) they don’t seem to have a more positive mental state than when they had less money.  They seem the same old person, mostly reflecting whether they have an upbeat or downbeat personality.

So I’m currently agnostic on this question; I’d put about a 40% weight on my (pro-money) instincts and about a 60% weight on my skeptical philosophical mind.  Later I’ll discuss the implications of this weighting.

While the Swedish study is certainly consistent with Tyler’s conclusion, I also think it’s consistent with mine.  Thus suppose that for some reason, say evolutionary forces, we are tricked into thinking money makes us better off.  That’s not so far-fetched, as wealth probably does boost the probability of reproductive success, at least back during historical periods when our genes were developing.  Our genes don’t want us to be happy, they want us to have lots of successful children.  Thus it’s not implausible that we would want things that are not good for us, like money, fat and sugar.

Let’s assume that most people think money is a sign of success, but beyond a certain point they don’t actually have more positive mental states when they have more money.  Then when asked about their overall well-being, they might report higher numbers if richer.  They would think to themselves, “Let’s see, I’m a millionaire with a nice house and summer cottage, so I guess I’m doing pretty well.”  But when faced with the happiness question, they think about their recent mental states, and don’t report any improvement over the period before they won the lottery.

Now we face another conundrum—which measure should count?  Actually there are two issues: is happiness different from well-being, and is well-being accurately reported in surveys?  I don’t doubt that heroin addicts would report that heroin makes them happy, but most people think that’s a different issue from whether heroin makes them better off.  That’s partly because ‘happy’ and ‘better off’ may be different concepts, but also because the addicts may even be wrong about happiness: in the long run, heroin probably does not make them happy.  Their self-reports are not reliable.

Another way to make my point is that I started by saying that positive mental states might be a more comprehensive concept than mere happiness.  But I’m also suggesting that when people answer the happiness survey, they may actually be describing their overall mental state. In contrast, the answers to questions on overall life satisfaction may not describe mental states.  It’s at least plausible that the Swedish survey is finding nothing more than that money doesn’t make people better off, but that they believe it makes them better off.

Now let’s go back to the probabilities I assigned to each of the two interpretations: 40% for the view that money is good, 60% for the view that it is not.  What are the implications of those probabilities?  It turns out that this means we should assume that money is good, that it does make even middle-class people better off.  The expected boost to well-being from having more money is 0.40 times the boost you’d get if Tyler’s straightforward interpretation of the Swedish study is true.  So even though I think it a bit more likely that money does not make us better off, we should act as if I am wrong, as if it does make us slightly better off.
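For concreteness, here is a minimal sketch of that expected-value arithmetic.  The 0.037 SD figure is the study estimate quoted above and the 40/60 weights are the ones just given; the zero payoff under the skeptical view is an assumption made purely for illustration:

```python
# A minimal sketch of the expected-value weighting described above.
# The 0.037 SD figure is the study's estimate quoted earlier; the 40/60
# weights are the subjective probabilities given in this post; the zero
# boost under the skeptical view is an assumption for illustration.
p_money_is_good = 0.40    # weight on the pro-money instinct
boost_if_good = 0.037     # SD units of life satisfaction per $100,000 (study estimate)
boost_if_not = 0.0        # skeptical view: no real change in mental states

expected_boost = p_money_is_good * boost_if_good + (1 - p_money_is_good) * boost_if_not
print(expected_boost)     # ~0.0148 SD units: positive, so act as if money helps a bit
```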

But there’s another implication of these probabilities.  I am pretty sure that most people assign a higher weight to the likelihood of money being good than I do.  Too high a weight. If so, they put too much weight on getting more money, and not enough on other goals in life.  The biggest mistake I ever made was agreeing to write an economics textbook, where I sacrificed a big chunk of my life on a frustrating project for money that will yield me very little benefit.  So I encourage already affluent people to dial back the expected benefit they’d get from having more money.

If anyone is still reading, let’s dive a bit deeper into epistemology.  The concept of ‘good’ is often considered one of the three transcendentals, along with “true” and “beautiful”.  How should we regard beliefs in those three areas?  In each case, someone might say “most people believe X, but Y is actually the case.” If so, what do they mean?  They might mean one of two things: either that they disagree and think Y is true, or that they predict that in the future most people will come to believe Y.  (Or both).

Unfortunately, ‘actually’ is a misleading term, as all beliefs are provisional.  Thus when you say:

1.  Most scientists believe the universe is largely composed of dark energy, but actually it is not.

2.  Most people believe more money is good, even for the affluent, but actually it is not.

3.  Most people believe Thomas Kinkade’s paintings are beautiful, but actually they are not.

You are better thought of as predicting that scientists will later come to believe that some other model better explains the cosmological data, that more money will eventually be seen as useless for the affluent, and that Kinkade’s paintings will eventually be regarded as schlocky.

Some statements about truth, goodness, and beauty are held with more confidence than other beliefs (2+2 = 4, murder is evil, the Taj Mahal is beautiful.)  In those cases, we are highly confident that current conventional wisdom will not later be overturned.  But it’s always a matter of degree; we can never be certain about any belief.  We can never go beyond what we regard to be the case.

PS.  In my view, the three transcendentals are actually just one—goodness.  Truth and beauty are instrumental in achieving goodness.  Only mental states matter. Make them good.

Inexplicable knowledge

Is it possible to know something, and yet be unable to convincingly explain how you know it?  I think so.

[Just to be clear, when I say “I know something” I mean that I believe I know it. But then what else could it mean?]

David Henderson recently said:

Like Scott, I doubt that the CIA was behind the JFK assassination, but all I have is doubt. I don’t have the certainty that Scott has and I don’t know what’s behind that certainty.

Just to be clear, I’m not completely certain of anything.  But basically David is right; I claim to “know” that the CIA did not conspire to assassinate Kennedy, with 99.9% certainty.  And yet I cannot explain why I know this.  So do I?  Let’s use an analogy of a picture of Trump’s face, made out of 10,000 dots, or pixels if you prefer.  I might look at the picture and say it’s obviously Trump.  But how do I know that?  None of the individual pixels looks anything like Trump.  Rather it’s the cumulative effect of all those pixels that creates the likeness.

When we go through life we accumulate an enormous amount of information.  Each piece of information is like a dot, and together they give us a complex worldview that tells us that some ideas are plausible and some are not.  The best I could do is use an analogy involving a claim David would agree with.  I might say that I know that the American Girl Scouts leadership council was not behind the Kennedy assassination. If that didn’t work, pick a conspiracy that was even more far-fetched—David’s grandmother.  At some point he’d accept the idea that one might know something because the alternative is too implausible, or at least most people would.  But of course that wouldn’t help at all with the CIA (which really does do nasty things). The problem is that our life experiences give us each a different set of facts, and a different brain to process those facts.  I see a different CIA from the one David sees.
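To see how thousands of individually weak dots can add up to near-certainty, here is a toy Bayesian calculation; it is not a formalism used in the post, and the numbers (a tiny likelihood ratio per dot, 200 dots) are purely hypothetical:

```python
# A toy illustration of how many weak "dots" of evidence can add up to
# near-certainty, even though no single dot is convincing on its own.
# The numbers are purely hypothetical.
prior_odds = 1.0                   # start agnostic: 50/50
likelihood_ratio_per_dot = 1.05    # each dot favors the hypothesis only slightly
n_dots = 200                       # a lifetime of small observations

posterior_odds = prior_odds * likelihood_ratio_per_dot ** n_dots
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 4))    # ~0.9999 -- "I just know", yet no single dot proves it
```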

When I was much younger—like 3 months ago—I used to think it was a productive use of time to try to convince someone that Trump’s a demagogue, because . . . well, because he’s obviously a near-perfect dictionary definition of a demagogue. But it’s pointless. For every fact you cite, they’ll point to other non-demagogue politicians who do something similar, at least on occasion.  Trump’s demagoguery is like the picture with 10,000 pixels: you either see it or you don’t.  No single example of the big lie, or of demonizing minorities and foreigners, or of unrealistic promises, or of macho posturing, is going to convince anyone, because they’ll always be able to explain it away.  After all, politics is a very messy business.  And each argument is just one dot.

This also relates to monetary policy.  I know that monetary policy was too tight in 2008 and 2009 and that the Fed could have adopted a policy that led to faster NGDP growth.  But if asked to explain how I know this, I’d have trouble explaining my belief.  I lack an elevator pitch.  I could tell people to read my entire blog, from end to end.  But that’s thousands of pages of argument, and it still wouldn’t even come close to explaining my belief, which also depends on decades of reading economic theory, economic history, and the history of economic thought.  That reading creates the brain architecture or grid that determines where I store all the various facts that I come across, and explains why I often just “know” that a commenter’s facts are wrong, without having actually checked. That doesn’t mean I don’t try to convince people (in the blog I do the best I can), just that it’s very difficult to do.

On the other hand I strongly recommend that people not try to explain their beliefs on the Kennedy assassination, or 9/11, or why Noam Chomsky is wrong about US foreign policy, or why Trump is a demagogue, or why free will doesn’t exist, or why Scott Alexander is brilliant, or what it means to “know something”, unless you enjoy pointless debates.  The odds of convincing anyone are so small that it’s not worth the effort.

PS.  I don’t believe that ‘inexplicable knowledge’ is the right term, but am not sure what is.  What I have in mind is not just tacit knowledge, as it can also involve reading books or articles.

PPS.  I was going to do a post on Trump’s nominees, which so far are mostly lousy. But it’s probably not worth the effort.  So I’ll just do a PS. Confirm them.  In politics, I always try to put principle over expediency.  So although I don’t like many of the nominees, I’ve always felt that Presidents have a right to pick the people who will serve them, unless something truly awful turns up.  I didn’t think it was fair to prevent those women from serving as Attorney General back in 1993, just because of various “nanny-gates”, and I’m not going to change my views just because Trump is President.

PPPS.  Vox recently published this piece by Sherri Underwood:

I remember the precise moment that I realized I regretted voting for Donald Trump.

It was during his 60 Minutes interview after the election. I was, like everyone else, shocked that he had won. It seemed so unlikely based on the polls and the confidence the media had that he would lose. It was a pleasant surprise, and I went to bed on election night thrilled that he would be our president.

But sitting on my couch, sipping coffee as I watched the interview, I saw with my own eyes who Trump really was as a person. He backtracked on one of his signature campaign promises: pursuing an investigation into the Clinton email scandal. It’s not that I want Clinton to be crucified or “locked up” — it’s the nonchalance with which he went back on his word after hammering it repeatedly during the campaign. The ease and quickness with which he reversed his position shook me to my core. I realized in that moment that I had voted for a demagogue. And it was sickening.

Three months ago I would have mercilessly mocked her stupidity.  Now I respect her much more than I respect myself.  Writing that article took courage.  Not surprisingly, she’s a Midwesterner.

PPPPS.  Speaking of the Midwest; Minnesota, Iowa and Wisconsin were three of the most liberal states in the country back in 1988, going for Dukakis while Illinois and California went for Bush.  This fascinating factoid from the National Review suggests they are about to turn red:

In the Upper Midwest, demographic trends have lent a hand: In 2004, Iowa, Wisconsin, and Minnesota were among the few states in which the oldest white voters were the most liberal, and the generation born of the Great Depression has been dying off.

Those old hippies are the Dukakis voters.  The Wisconsin I grew up in is gone—just faded memories.  And one more dot to slightly rewire the political map in my brain, which has both spatial and temporal dimensions.

BTW, Politico has a piece on Pepin County, Wisconsin that is the single best article on the election that I have read.  I will do a post.

Update:  Regarding Trump’s alleged demagoguery:

[screenshot]

Trust, but verify (reply to David Deutsch)

Back in the early days, before Trump Derangement Syndrome set in, I did a post entitled:

Are the laws of physics mere social conventions?  No, they are social conventions.

The basic idea was that theories are beliefs, but not “merely” beliefs.  There is nothing more important in the entire universe than mental states such as beliefs. Now, after a delay of 7 years, David Deutsch has left the following comment:

The astronomy example puzzles me; how closely our model reflects objective reality is somewhat orthogonal to whether “objective reality” exists in the first place. Elsewhere, the author asks if a model that is right 99.8% of the time is “false” while one that is right 99.99% of the time is “true” (or something to that effect). Truth is quantitative/probabilistic, not binary; some models are more “true” than others, but their level of “truthiness” is still an objective fact.

If this well known piece by Asimov has not been referenced yet, I’d be surprised, but here he sums it up quite well:

“When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

http://chem.tufts.edu/answersinscience/relativityofwrong.htm

I have no idea if it is the David Deutsch, one of the smartest writers I have ever read, but on the off chance that I have been honored with a comment from him, I really need to offer some sort of response.  First, a bit of context.  I was defending Richard Rorty’s views on truth (sometimes summarized as “truth is what my peers let me get away with”).  Or perhaps one might say, true things are things that are regarded as true. Not surprisingly, this drives lots of people crazy, because we all see things that others regard as true but that clearly don’t seem true to us, as well as things that strike us as obviously true.  Is this Rortian stuff too much relativism?

I don’t see it as relativism at all.  I don’t see it as the world of fuzzy post-modern philosophers attacking the virtuous hard sciences.  It’s important not to get confused by semantics, and to focus on what’s really at stake.  In my view, Rorty’s views are most easily seen by considering his denial of the distinction between objective truth and subjective belief.  In order to see why he did this, consider Rorty’s claim that “That which has no practical implications, has no theoretical implications.”  Suppose Rorty’s right, and it’s all just belief that we hold with more or less confidence.  What then?  In contrast, suppose the distinction between subjective belief and objective fact is true.  What then?  What are the practical implications of each philosophical view?  I believe the most useful way of thinking about this is to view all beliefs as subjective, albeit held with more or less confidence.

Let’s suppose it were true that we could divide up statements about the world into two categories, subjective beliefs and objective facts.  Now let’s write down all our statements about the world onto slips of paper.  Every single one of them; there must be trillions (even if we ignore the field of math, where an infinite number of statements could be constructed).  Now let’s divide these statements up into two big piles: one pile contains subjective beliefs, and the other contains statements that are objective facts.  We build a vast Borgesian library, and put all the subjective beliefs (e.g., Trump is an idiot) into one wing, and all the objective facts (e.g., Paris is the capital of France) into the other wing.

Now here’s the question for pragmatists like Rorty and me.  Is this a useful distinction to make? If it is useful, how is it useful?  Here’s the only useful thing I can imagine resulting from this distinction.  If we have a category of objective facts, then we can save time by not questioning these facts as new information arises.  They are “off limits”.  Since they are objective facts, they can never be refuted.  If they could be refuted, then they’d be subjective beliefs, not objective facts.

But I don’t want to do that.  I don’t want to consider any beliefs to be completely off limits—not at all open to refutation.  That reminds me too much of fundamentalist religion.  On the other hand, I do want to distinguish between different kinds of beliefs, in a way that I think is more pragmatic than the subjective/objective distinction.  Rather I’d like to assign probability values to each belief, which represent my confidence as to whether or not the belief is true.  Then I’d like to devote more of my time to entertaining critiques of beliefs I hold with less confidence, and less time to critiques of beliefs I hold with near certainty.

Thus if someone tells me that I really need to read a book showing how 9/11 was a CIA plot, my response is, “No, it’s not worth my time.” It’s possible that it was a CIA plot, but so unlikely that I don’t want to spend my limited time entertaining challenges to the view that Al Qaeda launched the attack.  It’s not that I believe Al Qaeda’s culpability is an objective fact; rather my subjective belief that it was Al Qaeda is so strong that I don’t want to waste time on it. Ditto for my view that 1+1 = 2.  On the other hand, at some later date new information on 9/11 may arise and reach the headlines of the New York Times, where I see it.  Now I may want to read that book.  Similarly, I can imagine a physicist not wanting to read some idiot’s crackpot anti-Newtonian model in 1850, but finding anti-Newtonian models quite plausible after the work of Einstein.

The subjective/objective distinction would only be useful if it put some ideas off limits, not open to questioning.  There are certainly some ideas where it’s a waste of time to question them, but I don’t like this as a general category, because I don’t know where the boundary lies between claims that should be beyond questioning, and claims that should be open to question.  So it’s simply more pragmatic to regard all statements as being beliefs about the world that are open to question, and then assign probability estimates (guesstimates?) to the chance that these claims will be overturned.
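As a concrete illustration of this triage, here is a minimal sketch that attaches rough probability guesstimates to a few beliefs and ranks them by how much they merit further scrutiny; the claims and numbers are illustrative, not actual estimates from the posts above:

```python
# A minimal sketch of the triage described above: attach a rough probability
# to each belief, then spend scarce reading time where uncertainty is greatest.
# The claims and numbers below are illustrative, not actual estimates.
beliefs = {
    "Al Qaeda carried out the 9/11 attacks": 0.999,
    "1 + 1 = 2": 0.999999,
    "Monetary policy was too tight in 2008-09": 0.95,
    "More money makes the affluent better off": 0.40,
}

def scrutiny_score(p):
    """Highest when a belief is closest to 50/50, lowest near certainty."""
    return p * (1 - p)

# Rank beliefs by how much critique of them is worth entertaining.
for claim, p in sorted(beliefs.items(), key=lambda kv: scrutiny_score(kv[1]), reverse=True):
    print(f"{scrutiny_score(p):.4f}  p={p}  {claim}")
```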

The other point of confusion I see is people conflating “the map and the territory”. Then they want to view “objective facts” as aspects of the territory, the underlying reality, not (just) beliefs about the territory.  I don’t think that’s very useful, as it seems to me that statements about the world are always models of the world, not the world itself.  Again, if that were not true, then theories could never be revised over time.  After all, Einstein didn’t revise reality in 1905; he revised our understanding of reality, our model of reality.

Reagan said “Trust, but verify”.  That means it’s OK to believe that certain things are true, but always be open to evidence that these things are not true.

PS.  Recall this statement I made above:

Rather I’d like to assign probability values to each belief, which represent my confidence as to whether or not the belief is true.

Rorty was criticized when people pointed out that one often hears something like the following:  “Although most people believe X, I believe that Y is actually true.”  If there is no objective standard to determine whether X is true, then what can this statement possibly mean?  I seem to recall that Rorty said something to the effect that when people claim Y is actually true, despite most people believing X, they are actually predicting that in the future Y will eventually be regarded as true.  Or maybe it’s a claim that, “If other people had seen what I saw, then they would also believe Y is true.”

PPS.  Back in 2013 I mentioned Deutsch in a post:

David Deutsch likes to sum up his philosophy as:

1.  Problems are inevitable.

2.  Problems are solvable.

The horrible nationalism sweeping the world was inevitable, and it’s solvable.  Good times will return.

PPPS.  I had a discussion of Deutsch’s views on quantum mechanics, as well as Eliezer Yudkowsky’s views, back in this 2013 post.

 

You can’t have it both ways

In two recent Econlog posts (here and here), I pointed out that a wise man or woman should always have two levels of belief.  One is their own view of things, independently derived from their own research.  This is the view from within your skin.  The second level of belief is the awareness of the wisdom of crowds.  The awareness that an index fund is likely to do better than a fund that you personally manage. An awareness that the consensus view of the true model of the macroeconomy is likely to be better than your own model of the economy.  This is the view from 20,000 miles out in space, where it’s clear that you are nothing special.

In the comment section, Philo suggested:

In most things, you’re admirably sensible (insightful, etc.). In philosophy . . . well, better stick to your day job!

He likes my market monetarist view of things, but not my philosophical musings.  But you can’t have it both ways.  If my philosophy is wrong then my market monetarism is equally wrong.  Either the wisdom of the crowds is true, or it isn’t.

(As an aside, I’m aware that the wisdom of the crowds might be slightly better if the views are weighted by expertise, but that has no bearing on my claim.  Even if you think I have a bit more expertise than the average economist, the entire weighted sum of non-Scott Sumner economists is, objectively speaking, far more qualified than I am.)
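For what that aside means in practice, here is a minimal sketch of an expertise-weighted consensus; the names, forecasts, and weights are entirely hypothetical:

```python
# A minimal sketch of the "weighted wisdom of crowds" aside above.
# Names, forecasts, and weights are hypothetical; the point is only that
# one dissenting view barely moves the expertise-weighted consensus.
forecasts = {                 # each economist's guess at next year's NGDP growth (%)
    "me": 5.0,
    "economist_A": 3.5,
    "economist_B": 4.0,
    "economist_C": 3.8,
}
weights = {                   # subjective expertise weights; "me" weighted a bit above average
    "me": 1.5,
    "economist_A": 1.0,
    "economist_B": 1.0,
    "economist_C": 1.0,
}

consensus = sum(forecasts[k] * weights[k] for k in forecasts) / sum(weights.values())
print(round(consensus, 2))    # ~4.18: far closer to the crowd than to "me", despite the extra weight
```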

In this blog I am normally giving you my views on the optimal economic model from the “within the skin” perspective, because otherwise I am of no use to society.  I’d be just a textbook.  In contrast, I give you my views on where markets are heading from the 20,000 miles up perspective, because that’s the most useful view for me to communicate the intuition behind market monetarism.  You don’t care where I personally think the Dow is going, and you should not care.

It is the job of the economics profession to weigh my arguments, and the arguments of those who disagree with me, and reach a consensus.  That consensus is not always correct, but it’s the optimal forecast.  Unfortunately, at the moment the optimal forecast is that I’m wrong about monetary offset, but I’ll keep arguing for monetary offset because that’s the view I arrived at independently, and I’m of no use to society unless I report that view, and explain why.

When I talk to philosophers about epistemology, they often mention concepts like “justified true belief”, which seem question-begging to me.  I’m certainly no expert on the subject, but I can’t see how the EMH is not right at the center of the field of epistemology.  If, back in 1990, we wanted to know whether there were Higgs bosons or gravity waves, the optimal guess would not have been derived by asking a single physicist, but rather by setting up a prediction market.  Yes, traders know less about physics than the average MIT physicist, but traders know whom to ask.

Many worlds vs. Copenhagen interpretation? Perhaps it can’t be tested.  But if it could, then set up a prediction market.  Robin Hanson’s futarchy is a proposal to have public policy based on society’s best estimate of what is true—derived from prediction markets.  He wants us to vote on values and bet on beliefs.  Richard Rorty might go even further, and have us bet on values, where the outcome of the bet depends on a poll of values 50 years in the future.  Rorty would say values are no more subjective than science.
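One concrete mechanism Hanson has proposed for running such markets is the logarithmic market scoring rule (LMSR).  Here is a minimal sketch, with a made-up liquidity parameter and hypothetical share quantities, of how traders’ positions translate into market-implied probabilities:

```python
import math

# Minimal LMSR sketch: market-implied probabilities from outstanding shares.
# The liquidity parameter b and the share quantities are hypothetical.
def lmsr_prices(quantities, b=100.0):
    """Market-implied probabilities for each outcome, given net shares bought."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

# Two outcomes, e.g. "policy X raises NGDP" vs. "it does not".
shares = [120.0, 60.0]                              # hypothetical net shares bought by traders
print([round(p, 3) for p in lmsr_prices(shares)])   # -> [0.646, 0.354]
```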

I think the EMH is basically what Rorty meant when he said truth is what my peers let me get away with.

PS.  My two types of beliefs do not have a rank order; they are incommensurable concepts.  Both are essential, and one is not more or less important than the other.  There’s no answer to “What do I really believe about monetary offset?”  I believe different things, at different levels of belief.