Another technique we may wish to use when preparing our ‘Training Data’ is ‘**MinMax Normalization**’.

Like the majority of Feature Scaling techniques, this is a transformation applied to Numerical Features. Depending upon your particular use case, it may be required to ensure your data is in a format suitable for the algorithms you have selected.

The method is regarded as one of the more basic preprocessing techniques. It involves rescaling each Feature from its raw range so that the values fit within the bounds [0…1], if all values in your data are positive, or within [-1…1], if negative values appear in your data.
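The rescaling can be sketched with the standard min-max formula, x′ = (x − min) / (max − min). A minimal illustration, with invented feature values (not from any real dataset):

```python
def min_max_normalize(values):
    """Rescale values to [0, 1] using x' = (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

# Hypothetical raw feature: ages in years.
ages = [18, 25, 40, 60]
print(min_max_normalize(ages))
```

The smallest raw value maps to 0, the largest to 1, and everything else falls proportionally in between.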

When we compare the two techniques ‘MinMax Normalization’ and ‘Z-Score Standardization’, the similarities are obvious: both are Feature Scaling methods for numerical data, intended to help you create good predictive models.

If we consider the differences using only ‘ideal’ raw data, they are less significant than if we use ‘real’ data containing outliers. When outliers are present, Z-Score Standardization will usually produce data of better quality than MinMax Normalization. This is because MinMax’s strict scaling places the ‘outlier’ data points at the min/max values and bundles the ‘inlier’ data points into a narrow band between them. However, it is this same strict scaling that allows Features *without* outliers to be compared directly on a common range, resulting in better-quality data.
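The bundling effect is easy to demonstrate. A small sketch with invented data, where one extreme value dominates the min-max range (the z-score here uses the population standard deviation):

```python
# Four 'inlier' values plus one large outlier (illustrative data only).
data = [10, 11, 12, 13, 1000]

# MinMax: the outlier is pinned at 1.0, so the inliers are squeezed
# into a tiny sliver of the [0, 1] range near 0.
lo, hi = min(data), max(data)
minmax = [(x - lo) / (hi - lo) for x in data]

# Z-Score: (x - mean) / std, with no fixed bounds.
mean = sum(data) / len(data)
std = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
zscore = [(x - mean) / std for x in data]

print("minmax inliers:", minmax[:4])   # span well under 1% of the range
print("zscore inliers:", zscore[:4])
```

Here the four inliers occupy less than 0.5% of the min-max output range, so downstream algorithms see them as nearly identical; the z-scored values have no hard bounds, so the outlier distorts them less severely.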

Unfortunately, there is no simple rule for which technique to use and when; it depends upon the data in question and your ambitions in terms of modelling. In general, however, if your data contains outliers, anomalies or novel values, you must consider the effect these will have if you are using MinMax Normalization.