
Machine Learning Resources

What is Normalization? 


Normalization, also known as the ‘Unit-Length Scaler’, is a ‘Feature Scaler’ that can be used when preprocessing numerical data as we prepare our ‘Training Data’. The purpose, as with most preprocessing techniques, is to manipulate the data into a better format, ready for the predictive modeling we intend to use it for.

As we know, Features that have grossly different scales and magnitudes can adversely affect some predictive models, resulting in poor predictive capability. This is especially true of algorithms that rely on the Euclidean distance between Data Points, such as k-Nearest Neighbours or k-Means.
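To see why scale matters for distance-based algorithms, here is a minimal NumPy sketch with two hypothetical features on very different scales (income in the tens of thousands, age in the tens). The feature values are made up for illustration:

```python
import numpy as np

# Two data points with features [income, age] on wildly different scales.
a = np.array([50_000.0, 25.0])
b = np.array([52_000.0, 65.0])

# Euclidean distance between the points.
dist = np.linalg.norm(a - b)

# Contribution of the income gap alone.
income_only = abs(a[0] - b[0])

# The income gap (2000) completely dominates the age gap (40),
# so the distance is almost unchanged if we ignore age entirely.
print(dist, income_only)
```

The 40-year age difference barely moves the distance at all, which is exactly the problem Normalization is meant to fix.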

To prevent this, we can use a technique known as Normalization, which removes the influence of differences in scale by re-scaling each ‘Feature Vector’ to a unit length of 1. In practice, this means the Feature in question is re-scaled so that its values sit between 0 and 1. This changes the absolute distances between data points but maintains the relative distances, which makes it useful when meaningful zero values exist within the data, such as in a ‘Sparse Array’.
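The re-scaling described above can be sketched in a few lines of NumPy. The helper name `min_max_normalize` is ours, chosen for illustration; scikit-learn's `MinMaxScaler` performs the same transform in practice:

```python
import numpy as np

def min_max_normalize(x):
    """Re-scale a 1-D feature so its values span the range [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

# A toy feature with values on an arbitrary scale.
feature = np.array([10.0, 20.0, 30.0, 50.0])
scaled = min_max_normalize(feature)

# The minimum maps to 0, the maximum to 1, and the gaps between
# points keep their relative proportions (0, 0.25, 0.5, 1).
print(scaled)
```

Note that the relative spacing is preserved: the gap between 10 and 20 is still half the gap between 30 and 50 after scaling.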

Normalization removes the problems associated with wildly different Feature scales and magnitudes, but it comes at the cost of sensitivity to outliers within the data. If outliers are present, we can end up producing ‘Training Data’ whose points are bunched towards 0 or 1, because the relative distance to the Outlier Data is maintained.
