

How does a decision tree create splits from continuous features?


The continuous feature is first sorted in ascending order, and the midpoint between each pair of adjacent distinct values is taken as a candidate threshold. For each candidate threshold, the decision tree algorithm sends observations at or below the threshold to one side of the split and observations above it to the other, then evaluates the chosen impurity measure (Entropy, Gini, etc.) on the resulting child nodes. It ultimately selects the threshold that yields the lowest weighted impurity among all candidate splits for that feature. This threshold search is effectively a form of discretization; binning continuous attributes in a similar way is also a useful feature engineering technique and sometimes improves model performance.
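
As a minimal sketch of this procedure, assuming a binary classification target and Gini impurity (the function and variable names below are illustrative, not taken from any particular library):

```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_continuous(x, y):
    """Find the threshold on a continuous feature x that minimizes
    the weighted Gini impurity of the two resulting child nodes."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]

    # Candidate thresholds: midpoints between adjacent distinct values
    distinct = np.unique(x_sorted)
    midpoints = (distinct[:-1] + distinct[1:]) / 2.0

    best_threshold, best_impurity = None, np.inf
    n = len(y_sorted)
    for t in midpoints:
        left = y_sorted[x_sorted <= t]
        right = y_sorted[x_sorted > t]
        # Weighted average of the child-node impurities
        impurity = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        if impurity < best_impurity:
            best_threshold, best_impurity = t, impurity
    return best_threshold, best_impurity

# Example: find the best split on a single continuous feature
x = np.array([2.3, 1.1, 3.8, 0.5, 2.9, 4.2])
y = np.array([1, 0, 1, 0, 1, 1])
threshold, impurity = best_split_continuous(x, y)
print(f"Best threshold: {threshold:.2f}, weighted Gini: {impurity:.3f}")
```

In a full tree implementation, this search would be repeated for every feature at every node, and the feature/threshold pair with the lowest impurity would define that node's split.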
