Machine Learning Resources

How does Bayesian Statistics differ from the classical paradigm?


Classical (frequentist) statistics is built on the notion that the population parameters of a model are unknown but fixed quantities that can be estimated by collecting enough data. The Bayesian paradigm instead treats parameters not as fixed quantities but as random variables with their own distributions. It incorporates a degree of subjectivity by allowing the researcher to specify a prior distribution for each parameter based on their own beliefs. Combining the prior with the likelihood of the observed data yields the posterior distribution, which is used to make inferences about the parameter.
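As a concrete sketch of the prior-to-posterior update, here is a hypothetical Beta-Binomial example (not from the original text): a Beta prior on a coin's heads probability is conjugate to the Binomial likelihood, so the posterior has a closed form.

```python
# Hypothetical Beta-Binomial example: with a conjugate prior, the
# posterior distribution is available in closed form.

def posterior_beta(alpha, beta, heads, tails):
    """Update a Beta(alpha, beta) prior after observing coin flips.

    Because the Beta prior is conjugate to the Binomial likelihood,
    the posterior is simply Beta(alpha + heads, beta + tails).
    """
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution: alpha / (alpha + beta).
    return alpha / (alpha + beta)

# Prior belief: the coin is roughly fair -> Beta(2, 2), prior mean 0.5.
# Observe 7 heads and 3 tails.
a, b = posterior_beta(2, 2, heads=7, tails=3)
print(a, b)                        # -> 9 5 (posterior is Beta(9, 5))
print(round(beta_mean(a, b), 3))  # -> 0.643 (posterior mean 9/14)
```

The posterior mean (about 0.64) sits between the prior mean (0.5) and the observed frequency (0.7), illustrating how the prior and the data are blended.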

While Bayesian inference is influenced to some extent by the choice of prior, with a large enough sample the observed data tends to dominate the posterior distribution, so a poor choice of prior does not render the analysis useless. By producing a full distribution rather than just a point estimate for a parameter, the Bayesian paradigm makes it possible to quantify uncertainty directly from the distributional properties of the posterior, something a frequentist approach does not provide. The essence of the Bayesian approach is to conduct inference on the posterior distribution, updating a prior belief in light of the observed data.
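The claim that the data eventually dominate the prior can be sketched numerically (again a hypothetical Beta-Binomial setup, not from the original text): two sharply different priors, updated on the same large sample, yield similar posterior means.

```python
# Hypothetical illustration: with enough data, the likelihood dominates
# the posterior regardless of the prior (Beta-Binomial setup assumed).

def posterior_mean(alpha, beta, heads, tails):
    # Posterior of a Beta(alpha, beta) prior after Binomial data is
    # Beta(alpha + heads, beta + tails); return its mean.
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 700, 300  # large sample: 70% heads observed

skeptical = posterior_mean(1, 99, heads, tails)   # prior mean 0.01
optimistic = posterior_mean(99, 1, heads, tails)  # prior mean 0.99

print(round(skeptical, 3), round(optimistic, 3))  # -> 0.637 0.726
# Both posterior means land near the observed frequency of 0.7,
# even though the priors started at opposite extremes.
```

With only 10 flips instead of 1000, the same two priors would still pull the posterior means far apart, which is why prior sensitivity matters most in small samples.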
