
Machine Learning Resources

What is Max Absolute Scaler? How does it compare with Min-Max normalization? Why might scaling to [-1, 1] be better than scaling to [0, 1]?


Within the options for Feature Scaling, ‘Max Absolute Scaler’ is another technique open to us as we preprocess our training data.

Like the majority of Feature Scaling techniques, it is a transformation applied to numerical features. Depending on your particular use case, it may be required to ensure your data is in a format suitable for the algorithms you have selected.

‘Max Absolute Scaler’ can be considered a close relation of Min Max Scaler, and acts in a similar manner. The main difference lies in the mapping each one applies: Min Max Scaler subtracts the feature's minimum and divides by its range, mapping values into [0, 1], whereas Max Absolute Scaler simply divides each value by the feature's maximum absolute value, mapping values into [-1, 1]. Because it involves no shifting, Max Absolute Scaler preserves the sign of every value and leaves zero entries at exactly zero, which makes it well suited to data containing negative values and to sparse data. However, because both techniques scale strictly by the extremes of the data, Max Absolute Scaler shares the same drawback as Min Max Scaler when it comes to outliers: in some use cases, for example Fraudulent Transaction Detection, where your data possesses outliers, anomalies, or novel values, a single extreme value can compress the remaining values into a narrow band.
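As a quick illustration of the difference described above, scikit-learn provides both scalers in `sklearn.preprocessing`; a minimal sketch on a single toy feature (the values here are illustrative only):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler, MinMaxScaler

# One feature with negative values, a zero, and positive values.
X = np.array([[-4.0], [-2.0], [0.0], [1.0], [2.0]])

# Min Max Scaler shifts and rescales into [0, 1]; the zero moves,
# and the sign of the original values is lost.
print(MinMaxScaler().fit_transform(X).ravel())

# Max Absolute Scaler only divides by max(|x|) = 4, so signs are
# preserved and the zero entry stays exactly zero.
print(MaxAbsScaler().fit_transform(X).ravel())
```

Note that `MaxAbsScaler` never shifts the data, which is also why scikit-learn recommends it for sparse matrices: dividing by a constant keeps zero entries zero, so sparsity is preserved.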

This is also why scaling to [-1, 1] can be better than scaling to [0, 1]: Min Max Scaler can be used with negative values, but its mapping of the minimum to 0 and the maximum to 1 shifts the data, discarding both the signs of the original values and any sparsity. By scaling with Max Absolute Scaler, the data keeps its original centre and signs; only its scale changes.
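The two formulas themselves are simple enough to write out directly; a minimal NumPy sketch of the mappings discussed above (the sample values are made up for illustration):

```python
import numpy as np

# A feature containing negative values, a zero, and positive values.
x = np.array([-4.0, -2.0, 0.0, 1.0, 2.0])

# Min-max scaling: (x - min) / (max - min) maps into [0, 1].
# The subtraction of the minimum shifts every value, so the zero
# entry no longer maps to zero and signs are not preserved.
min_max = (x - x.min()) / (x.max() - x.min())

# Max-abs scaling: x / max(|x|) maps into [-1, 1].
# There is no shift, so signs are kept and zeros stay exactly zero.
max_abs = x / np.abs(x).max()

print(min_max)  # [0.     0.3333 0.6667 0.8333 1.    ]
print(max_abs)  # [-1.   -0.5   0.    0.25  0.5  ]
```

Both formulas divide by a quantity determined entirely by the extremes of the data, which is the shared outlier sensitivity noted earlier.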
