

What is Chebyshev’s Theorem and its implications?


While the Empirical Rule describes the proportion of observations that fall within 1, 2, or 3 standard deviations of the mean of a normal distribution, Chebyshev’s Theorem provides a more general criterion for such bounds, one that applies to distributions beyond just the normal.

It states that the proportion of observations falling more than k standard deviations (for k > 1) from the mean of a distribution is at most 1/k², meaning that at least 1 − 1/k² of the observations should fall within k standard deviations of the mean.
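
As a quick illustration (a sketch of mine, not part of the original answer), the Python snippet below checks the bound empirically on an exponential distribution; the choice of distribution and sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from a heavily skewed, clearly non-normal distribution
# to show that Chebyshev's bound still holds.
x = rng.exponential(scale=1.0, size=100_000)
mu, sigma = x.mean(), x.std()

for k in (2, 3, 4):
    observed = np.mean(np.abs(x - mu) <= k * sigma)  # proportion within k std devs
    bound = 1 - 1 / k**2                             # Chebyshev's lower bound
    print(f"k={k}: observed {observed:.3f} >= guaranteed {bound:.3f}")
```

The observed proportions will always sit at or above the 1 − 1/k² guarantee, even though this distribution is far from normal.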

Whereas the Empirical Rule states that about 95% of observations fall within 2 standard deviations of the mean, Chebyshev’s Theorem guarantees only at least 75%; for 3 standard deviations, the corresponding figures are about 99.7% and at least 89% (more precisely, 1 − 1/9 ≈ 88.9%), respectively.
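
For reference, here is a small sketch (mine, not from the original answer) that reproduces both sets of figures; the Empirical Rule percentages are the standard approximate values for a normal distribution.

```python
# Approximate Empirical Rule proportions for a normal distribution,
# compared against Chebyshev's distribution-free lower bound 1 - 1/k^2.
empirical_rule = {2: 0.95, 3: 0.997}

for k in (2, 3):
    chebyshev = 1 - 1 / k**2
    print(f"within {k} std devs: Empirical Rule ~{empirical_rule[k]:.1%}, "
          f"Chebyshev guarantees at least {chebyshev:.1%}")
```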

While Chebyshev’s bound on the proportion of observations within k standard deviations of the mean is weaker than the Empirical Rule’s figures, having such a guarantee for a wide class of distributions beyond just the Gaussian makes it a very useful theorem in statistics.
