The frequentist approach is built on the notion that the population parameters of a model are unknown but fixed quantities that can be estimated by collecting enough data. The Bayesian paradigm, in contrast, treats parameters not as fixed quantities but as random variables with their own distributions. It incorporates a degree of subjectivity by allowing a prior distribution to be specified for each parameter based on the researcher's own belief. The prior, combined with the likelihood of the observed data, yields the posterior distribution, which is used to make inferences about the parameter.
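In symbols, with $\theta$ denoting the parameter and $y$ the observed data, this update is Bayes' theorem:

$$ p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)} $$

Here $p(\theta)$ is the prior, $p(y \mid \theta)$ is the likelihood, and $p(y)$ is a normalizing constant, so the posterior is proportional to prior times likelihood.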

While Bayesian inference is influenced to some extent by the choice of prior, with a large enough sample size the observed data tends to dominate the posterior distribution, so a poor choice of prior does not render the analysis useless. By producing a full distribution rather than just a point estimate for a parameter, the Bayesian paradigm makes it possible to quantify uncertainty from the distributional properties of the posterior, something a frequentist approach does not provide. The essence of the Bayesian approach is to conduct inference on the posterior distribution obtained by updating a prior belief with the observed data.

## Explained using an Example

Imagine you have a bag of colored balls, but you don’t know the proportion of red and blue balls. Classical statistics would say that the proportion of red balls is fixed, even though you don’t know what it is. In frequentist terms, probability describes long-run frequency: saying “the probability of drawing a red ball is 30%” means that, over many repeated draws, about 30% of the balls you pick would be red.

Now, let’s bring in Bayesian thinking. Before you start picking balls, you might have some prior belief about the proportion of red balls based on your experience or intuition. This is your “prior distribution.” Let’s say you believe there’s an equal chance of the proportion being anywhere between 20% and 50%. As you pick balls and see their colors, you update your belief using Bayes’ theorem. So, after picking a few red balls, you might say, “Given the data I’ve seen so far, I now believe there’s a 40% chance that the next ball will be red.”
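This updating process can be sketched numerically with a grid approximation. The prior below is uniform between 20% and 50% as in the example; the observed draws (3 red out of 5) are assumed data for illustration, not from the article:

```python
import numpy as np

# Grid of candidate values for the proportion of red balls,
# with a uniform prior restricted to [0.2, 0.5].
theta = np.linspace(0.01, 0.99, 99)
prior = np.where((theta >= 0.2) & (theta <= 0.5), 1.0, 0.0)
prior /= prior.sum()

# Hypothetical data: 3 red balls observed in 5 draws.
reds, draws = 3, 5
likelihood = theta**reds * (1 - theta)**(draws - reds)

# Bayes' theorem: posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

# Posterior predictive probability that the next ball drawn is red:
# average the candidate proportions, weighted by their posterior mass.
p_next_red = float(np.sum(theta * posterior))
print(f"P(next ball is red) = {p_next_red:.3f}")
```

Because the likelihood favors values near 0.6 while the prior rules out anything above 0.5, the posterior piles up toward the upper end of the prior's range, and the predictive probability lands above the prior mean of 0.35.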

In classical statistics, the probability is about the data. In Bayesian statistics, the probability is about your belief in different possible values for the proportion of red balls, and it gets updated as you see more data.

It’s like classical statistics is predicting the outcome of a coin toss based on past coin tosses, while Bayesian statistics is updating your belief about the fairness of the coin as you observe each toss.
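The coin analogy has a particularly tidy Bayesian form: with a Beta prior on the probability of heads, each observed toss just increments a count. The uniform Beta(1, 1) prior and the toss sequence below are assumptions for illustration:

```python
# Conjugate Beta-Binomial update for a coin's probability of heads.
# Beta(1, 1) is a uniform prior: no initial opinion about fairness.
alpha, beta = 1.0, 1.0

tosses = ["H", "T", "H", "H", "T"]  # hypothetical observed sequence
for t in tosses:
    if t == "H":
        alpha += 1  # one more observed head
    else:
        beta += 1   # one more observed tail
    mean = alpha / (alpha + beta)  # posterior mean after this toss
    print(f"after {t}: P(heads) estimate = {mean:.3f}")
```

After each toss the belief shifts slightly toward the observed outcome; after 3 heads and 2 tails the posterior is Beta(4, 3), with mean 4/7 ≈ 0.571.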

## Video Explanation

The embedded playlist below contains two videos:

- In the first video, Prof. Trefor Bazett provides a brilliant explanation of the difference between the classical, frequentist, and Bayesian approaches using a deck of cards.
- The second video, from Ox Educ, explains the difference between the frequentist and Bayesian approaches using two examples: a doctor diagnosing a patient, and a ship searching for a submarine.

*Contributions: AIML Research Team and Edupuganti Aaditya*
