First of all, I am not the one who needs convincing. With millions of people buying lottery tickets, someone is bound to win even though the chance of any particular person winning is vanishingly small, so it is meaningless to ask why this particular person ended up winning the lottery.

Secondly, I agree Bayesian reasoning can be useful, but only in cases where the process/experiment is repeatable, such as using Bayesian models to predict how Covid could spread or to determine which Covid patients are most at risk.

More troublingly, it seems those people are trying to settle, based on Bayesian reasoning alone, whether Covid originated from a lab leak.

By right, Bayesian reasoning should serve only as guidance for concentrating one’s resources (say, on the Covid patients your model shows to be most at risk, or on which part of the sea to search for a missing airplane).

]]>“By inverting the conditional probabilities, Bayes’ Rule enables us to make probabilistic statements about the cause, given the observed effect.”

Yes, and I cannot even imagine any other rational way of doing things.

]]>Non-Bayesian Reasoning

Your friend wins the lottery, and you think, “Wow, the probability of winning is 1 in 1 million. They must have cheated!”

Bayesian Reasoning

Let’s apply Bayes’ theorem:

Assign a prior. Cheating the lottery is hard, and it’s very rare, so let’s assign a small prior probability (before the win): P(cheating) = 0.0000001 (a 1-in-10-million chance of cheating), P(not cheating) = 0.9999999.

Simplified likelihood function:

Likelihood of winning given cheating: P(winning|cheating) = 0.5 (assuming cheating does not guarantee a win but increases the odds from 1 in a million to 50%).

Likelihood of winning given not cheating: P(winning|not cheating) = 0.000001 (a 1-in-1-million chance of winning fairly).

Using Bayes’ theorem, we update our probabilities:

P(cheating|winning) = P(winning|cheating) * P(cheating) / [P(winning|cheating) * P(cheating) + P(winning|not cheating) * P(not cheating)]

= 0.5 * 0.0000001 / [0.5 * 0.0000001 + 0.000001 * 0.9999999]

≈ 0.048

P(not cheating|winning) = 1 – P(cheating|winning) ≈ 0.952

Going through the exercise of quantifying things based on educated guesses (prior knowledge and likelihoods) yields a more defensible probability: only about a 4.8% chance your friend cheated, and about a 95.2% chance they won fairly.
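As a quick check, the update above can be sketched in a few lines of Python. The priors and likelihoods are illustrative assumptions, not facts about any real lottery, and the loop shows how strongly the answer depends on the prior you choose:

```python
# P(cheating | winning) via Bayes' rule. All inputs are illustrative
# assumptions, not facts about any real lottery.
def posterior(prior_cheat, p_win_given_cheat, p_win_given_fair):
    numerator = p_win_given_cheat * prior_cheat
    denominator = numerator + p_win_given_fair * (1 - prior_cheat)
    return numerator / denominator

# The same evidence (a win) yields very different posteriors
# depending on the prior probability of cheating:
for prior in (0.0001, 0.0000001):
    print(f"prior={prior}: P(cheating|winning) = {posterior(prior, 0.5, 0.000001):.3f}")
```

With a 1-in-10-million prior the posterior is only a few percent, while a 1-in-10,000 prior pushes it near certainty of cheating, which is exactly why being explicit about the prior matters.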

Bayesian reasoning helps us:

Update our probabilities based on new evidence (the win)

Incorporate prior knowledge (the rarity of cheating)

Provide a more nuanced answer than a simple “they must have cheated”

The process of going step by step using educated guesses forces us to account for things like the difficulty of cheating and makes everything transparent.

Your friend wins the lottery, and you think, “Wow, the probability of winning is 1 in 1 million. They must have cheated!” The first approach gives us nothing transparent about how you arrived at that conclusion. With the second, we can see how you quantified the difficulty of cheating, the odds of winning with and without cheating, and how you combined those pieces to produce your posterior probability.

This is why the process is not useless.

]]>A person I know has just won the lottery; if he’s not cheating, the chance of him winning is just one in a million, hence he must be cheating.

There are trillions of planets out there, yet life somehow develops and thrives on Earth; the probability that this happened by chance is really small, so it must be the work of a higher being.

Bottom line: no amount of Bayesian reasoning can get you anywhere closer to the truth.

]]>Bayes’ Rule is a mathematical formula for inverting conditional probabilities, allowing us to update our beliefs about the probability of a cause (or hypothesis) given its effect (or observed data). It’s a way to reverse the direction of inference, moving from:

P(effect | cause) → P(cause | effect)

In other words, Bayes’ Rule helps us answer questions like:

Given that I have a positive test result (effect), what’s the probability that I have the disease (cause)?

Or

Given that the streets are wet (effect), what’s the probability that it rained (cause)?

By inverting the conditional probabilities, Bayes’ Rule enables us to make probabilistic statements about the cause, given the observed effect.
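The disease-test question is the classic instance of this inversion. A minimal sketch, with a hypothetical prevalence, sensitivity, and false-positive rate chosen purely for illustration:

```python
# P(disease | positive test) via Bayes' rule, with hypothetical numbers:
# 1-in-1,000 prevalence, 99% sensitivity, 5% false-positive rate.
prevalence = 0.001
sensitivity = 0.99       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

# Total probability of a positive test, then invert:
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease = sensitivity * prevalence / p_positive
print(f"P(disease | positive) = {p_disease:.3f}")
```

Even with a fairly accurate test, the posterior comes out around 2%, because the prior (the prevalence) is so low.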

]]>All probabilistic reasoning is Bayesian, whether people realize it or not. The only difference is how transparent people are with their priors.

]]>Priors are similar to prejudice… in that both are a measure of existing beliefs, or the existing state of understanding, before accounting for new information. Both are initial beliefs, both influence perception, and both, when strong, can be resistant to change. But priors are transparent, derived from data, and flexible to updates, and so are in a sense neutral.
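The “flexible to updates” point can be made concrete: yesterday’s posterior becomes today’s prior as evidence accumulates. The coin-flip setup and all numbers below are hypothetical, chosen only to illustrate the mechanics:

```python
# Two hypotheses about a coin: biased (P(heads) = 0.9) vs fair (P(heads) = 0.5).
def update(prior_biased, flip):
    """Return P(biased) after observing one flip ('H' or 'T')."""
    p_flip_biased = 0.9 if flip == "H" else 0.1
    p_flip_fair = 0.5
    numerator = p_flip_biased * prior_biased
    return numerator / (numerator + p_flip_fair * (1 - prior_biased))

belief = 0.01  # assumed initial belief that the coin is biased
for flip in "HHHHH":  # five heads in a row
    belief = update(belief, flip)
print(f"P(biased) after 5 heads = {belief:.3f}")
```

The belief moves from 1% to roughly 16% after five heads; a stronger starting prior against bias would resist the same evidence even more, which is the “strong priors” point above.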

When someone says, well I used Bayesian reasoning to arrive at such and such… it’s a lot like saying, well I used math to estimate this. Yeah sure, now I need to see your math… lol
