### What is Feature Standardization (or Z-Score Normalization), and why is it needed?

Feature Standardization (Z-Score Normalization) is a pre-processing technique for numerical features: each value has the feature's mean subtracted and is then divided by the feature's standard deviation, leaving every feature with a mean of 0 and a standard deviation of 1. It is needed because many algorithms, such as gradient-descent-based models, SVMs, k-nearest neighbours and PCA, behave poorly when features sit on very different scales.
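The transformation described above can be sketched in a few lines of NumPy (a minimal illustration; the helper name `standardize` is ours, not a library function):

```python
import numpy as np

def standardize(X):
    """Z-score each column: subtract the mean, divide by the standard deviation."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
Z = standardize(X)
# Each column of Z now has mean 0 and standard deviation 1,
# regardless of the very different original scales.
```

Note that the same mean and standard deviation computed on the training data should later be reused to transform any test data.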


Two basic pre-processing techniques, applicable to numerical features, are ‘Centering’ and ‘Scaling’: centering subtracts the feature’s mean so the values are shifted around zero, while scaling divides by a measure of spread (such as the standard deviation) so all features share a comparable range. Standardization is simply centering followed by scaling.
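The two operations can be shown separately, and then combined, on a single feature (a minimal NumPy sketch):

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0])

centered = x - x.mean()                    # centering: the mean becomes 0
scaled = x / x.std()                       # scaling: the spread becomes 1
standardized = (x - x.mean()) / x.std()    # both together = the z-score
```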

Normalization, also known as the ‘Unit-Length Scaler’, is a feature-scaling technique that rescales each sample (row) rather than each feature (column): the sample’s feature vector is divided by its length, typically the L2 (Euclidean) norm, so every sample ends up with unit length.
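A minimal sketch of row-wise unit-length scaling (the helper name `normalize_rows` is ours):

```python
import numpy as np

def normalize_rows(X):
    """Scale each sample (row) so its vector has unit Euclidean (L2) length."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / norms

X = np.array([[3.0, 4.0],
              [1.0, 0.0]])
U = normalize_rows(X)
# The row [3, 4] has length 5, so it becomes [0.6, 0.8].
```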

Another technique that we may wish to use when preparing our training data is ‘MinMax Normalization’, which rescales each feature into a fixed range, usually [0, 1], using the feature’s observed minimum and maximum.
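Min-max scaling can be sketched as follows (assuming no feature is constant, since a constant column would divide by zero):

```python
import numpy as np

def minmax_scale(X):
    """Rescale each column into the [0, 1] range via (x - min) / (max - min)."""
    mn = X.min(axis=0)
    mx = X.max(axis=0)
    return (X - mn) / (mx - mn)

X = np.array([[1.0,  50.0],
              [2.0, 100.0],
              [3.0, 150.0]])
M = minmax_scale(X)
# Each column is mapped to [0, 0.5, 1].
```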

The ‘Max Absolute Scaler’ is another option for preprocessing training data: it divides each feature by its maximum absolute value, mapping the values into the [-1, 1] range without shifting the data, which also preserves any sparsity (zeros stay zeros).
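A minimal sketch of max-absolute scaling (the helper name `maxabs_scale` is ours):

```python
import numpy as np

def maxabs_scale(X):
    """Divide each column by its maximum absolute value, mapping into [-1, 1]."""
    return X / np.abs(X).max(axis=0)

X = np.array([[-2.0,  1.0],
              [ 4.0, -0.5]])
A = maxabs_scale(X)
# Column 0 is divided by 4 -> [-0.5, 1.0]; column 1 by 1 -> [1.0, -0.5].
# Unlike min-max scaling, no shift is applied, so zeros would remain zeros.
```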

Because the mean and standard deviation are both sensitive to extreme values, outliers can distort standardization. If outliers are detected and it is deemed appropriate to exclude them, they can be removed from the data before performing standardization.
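One simple way to do this is to drop points far from the mean before computing the final z-scores. The sketch below uses a deviation cut-off `k` (in standard deviations); the helper name and the choice of `k` are ours, and the cut-off is a judgement call rather than a fixed rule:

```python
import numpy as np

def drop_outliers_then_standardize(x, k=2.0):
    """Remove values more than k standard deviations from the mean,
    then z-score the remaining values."""
    mean, std = x.mean(), x.std()
    kept = x[np.abs(x - mean) <= k * std]
    return (kept - kept.mean()) / kept.std()

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0])
z = drop_outliers_then_standardize(x, k=2.0)
# The value 100 is excluded; the remaining five values are standardized.
```

Note that a single extreme outlier also inflates the standard deviation used for the cut-off itself, which is why a fairly tight `k` is used here; a robust alternative would base the cut-off on the median and interquartile range instead.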
