Noah: “But notice that Bayesian beliefs depend on your priors”

It may be so, but your priors may be wrong, and the more confident you were in those mistaken priors, the harder you should punish yourself. I had this discussion with Scott Sumner and, god, it is frustrating.

OK, let's take an example: you have three theories that may explain some phenomenon, and you need to be more than 50% confident in a theory before you publicly subscribe to it. Imagine your priors are as follows: 51% for theory A, 48.99999999999% for theory B, and 0.00000000001% for an odd theory C that you deem almost impossible.

Now new evidence comes in that makes you update your beliefs, as in the following two scenarios:

1) New evidence supporting theory B appears, so that the posterior probabilities are: A – 49%; B – 50.99999999999%; C – 0.00000000001% (the probabilities have to sum to 100%).

Since it is now theory B that has passed the threshold, you publicly announce that, effective now, you are "completely" changing your opinion and support theory B. From a Bayesian point of view, however, this is not a complete change of opinion: you merely shifted about two percentage points from A to B, a relative change of roughly 4%. That is not a large change. You made a mistake, but not a large one.

2) New evidence supporting theory C appears. Now it looks like this: A – 51%; B – 46.99999999999%; C – 2.00000000001%.

Now your public "opinion" did not change at all. No bold statements; you still support theory A, right? But this is HUGE, really HUGE, from a Bayesian point of view. You just promoted something from "really, really incredibly unlikely" to merely "unlikely", a jump of eleven orders of magnitude (from 0.00000000001% to about 2%). That is an enormous change on your Bayesian scale, and you should really berate yourself for having had such terrible priors.

So being Bayesian is something other than what you think. The most useful concept here is information entropy, from information theory. It lets you measure how "surprising" a change in probability was, i.e. how much new information you gained from the evidence. The point is that the lower the probability of an event, the more information its occurrence carries. Promoting a low probability by orders of magnitude means a great deal of information was gained. By this measure, scenario 2 has a much larger impact than scenario 1.
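The comparison can be made concrete with the Kullback-Leibler divergence between posterior and prior, which measures (in bits) how much information an update carried. A minimal sketch using the scenario numbers from the comment, with the tiny probabilities adjusted so each distribution sums exactly to 1:

```python
from math import log2

# Prior from the example: A = 51%, B = ~49%, C = 0.00000000001% (1e-13)
prior = [0.51, 0.49 - 1e-13, 1e-13]

# Scenario 1: evidence favors B; roughly two points shift from A to B
post1 = [0.49, 0.51 - 1e-13, 1e-13]

# Scenario 2: evidence favors C; C jumps eleven orders of magnitude
post2 = [0.51, 0.47 - 1e-13, 0.02 + 1e-13]

def kl_bits(q, p):
    """Kullback-Leibler divergence D(q || p) in bits: the information
    gained when beliefs move from distribution p to distribution q."""
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

kl1 = kl_bits(post1, prior)
kl2 = kl_bits(post2, prior)
print(f"Scenario 1: {kl1:.4f} bits")  # a tiny fraction of a bit
print(f"Scenario 2: {kl2:.4f} bits")  # hundreds of times larger
```

Scenario 1, despite flipping the public "opinion", carries almost no information; scenario 2's update dwarfs it, because almost all the divergence comes from the term where C climbs from 1e-13 to 0.02.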

The lower the probability of a phenomenon, the higher the information value it has if it happens. From this point of view, misstating probabilities has huge implications, and vice versa: how big a mistake somebody made, in the Bayesian sense, can be measured by this. Therefore you cannot escape quantifying the magnitude of this sin.
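This "information value" of a single event is its surprisal, defined as minus the log of its probability. A small sketch with the probabilities from the example:

```python
from math import log2

def surprisal_bits(p):
    """Self-information of an event with probability p, in bits."""
    return -log2(p)

# Theory C turning out true under the original prior of 0.00000000001%:
print(surprisal_bits(1e-13))  # roughly 43 bits of surprise
# Under the updated 2% estimate:
print(surprisal_bits(0.02))   # roughly 5.6 bits
# Theory A or B turning out true, at roughly even odds:
print(surprisal_bits(0.49))   # roughly 1 bit
```

Assigning 1e-13 to something that then happens costs about 43 bits of surprise versus about 5.6 bits at 2%, which is one way to quantify how bad the original prior was.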

And no, this is not "binary" speaking. Misstating probabilities by orders of magnitude is a huge sin in Bayesian thinking. It may not have the same practical implications in everyday reasoning, I give you that (for instance, the chance of a heart attack, or of going deaf from a loud sound, etc.). I was not talking about those things.

PS: thank you for the discussion; I actually have a better grasp of what I wanted to say after trying to explain it. At this point I surrender, as I do not think I can express myself more clearly than I have in the several examples I tried to give. Maybe you assumed I was attacking you even though I was not. Thank you anyway.

"No, all I am disputing is that a huge relative shift in probabilities has to be recognized as such. To use your example, it would be as if your previous assumption was that Gov. Christie weighs 2,380 pounds, but now you assume he weighs 238 pounds, and then say 'Yeah, he is still fat, so not much of a change in my opinion. No practical impact or anything.'"

That’s an absolutely horrible analogy. Adding 2000 pounds is a huge deal, you get a heart attack. Adding 970/billion in risk is so small it doesn’t change my behavior at all.

You said:

"If you are a true Bayesian you do not hold a binary view of outcomes like 'fat' and 'not fat'."

You are contradicting yourself. I said "no big deal" and you said "big deal." That's binary. If it's not binary, then you should let me say "no big deal."

This whole line of posts about Bayesianism was spurred by Scott's notion that some relatively strong evidence "does not change his opinions that much". I beg to differ on this one from a Bayesian point of view, which is all my recent posts are about.

http://krugman.blogs.nytimes.com/2013/12/01/inflationistas-at-bayes/

I love the digression about Bayesian analysis and enjoy the education, but for purposes of this conversation, the right question is probably “what did Scott understand Krugman to mean?”

It's always hard to figure out what Krugman means, beyond the obvious "Krugman means that Krugman is much smarter and better informed than Cochrane." But he seems to mean that instead of saying "I still think inflation is a significant danger," Cochrane should have said:

"I still think inflation is a significant danger, and although the past four years have caused me to revise my estimate of the danger level by a trivial amount, it doesn't change my overall point."

or maybe:

"In 2009, I thought that we had a chance of increasing inflation that followed the following function over time, f(x), and I thought f(x) had probability y of being true. Because f(x) produces fairly low rates in the early years, the lack of inflation in the first four years is not greatly concerning, but compared with the alternative hypotheses, when you do the math, it has lowered my probability estimate to y − z."

or maybe:

“The last four years are not a good test of my theory because of other stuff.”

And I am not disputing that this is common practice for all of us. But since this post is named "Is Noah Smith Bayesian?", I thought it relevant to comment that being Bayesian is different. "If you are a true Bayesian you do not hold a binary view of outcomes like 'fat' and 'not fat'." Or "I write a blog supporting an idea or I don't." Doing such things is not Bayesian; it is what all people do all the time.

Bayesians constantly update based on evidence. You try to be well calibrated and simultaneously try to discriminate better, to get the best possible Bayesian score.
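The thread never defines "Bayesian score"; one standard proper scoring rule that rewards both calibration and discrimination is the Brier score (named here as an illustrative choice, not necessarily what the commenter had in mind). A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; 0 means a perfectly confident, perfectly right forecaster."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated but vague forecaster (always 50%) vs. a sharper one,
# over the same four hypothetical events:
vague = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])   # 0.25
sharp = brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0])   # 0.025
print(vague, sharp)
```

Both forecasters are calibrated, but the sharper one discriminates better between events that happen and events that don't, and so earns the better (lower) score.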
