How does a decision tree create splits from continuous features?

The continuous feature's values are first sorted in ascending order, and the midpoint between each pair of adjacent distinct values is taken as a candidate threshold. For each candidate, the algorithm partitions the observations into two groups (those below the threshold and those at or above it) and evaluates the chosen impurity measure (entropy, Gini, etc.) on the resulting children, typically weighted by the number of observations in each child. It ultimately selects the threshold that yields the lowest weighted impurity among all candidate splits for that feature. The same idea of discretization is also a useful feature engineering technique for creating binned versions of continuous attributes, and it can sometimes improve model performance.
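Here's a minimal sketch of that procedure, assuming a binary-ish classification setting with integer class labels and Gini impurity as the criterion (the function names `gini` and `best_split` are just illustrative, not from any particular library):

```python
import numpy as np

def gini(y):
    """Gini impurity of an array of integer class labels."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y) / len(y)  # class proportions
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Find the threshold on continuous feature x that minimizes
    the size-weighted Gini impurity of the two child nodes."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    best_threshold, best_impurity = None, np.inf
    # Candidate thresholds: midpoints between adjacent distinct values
    for i in range(1, len(x_sorted)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue  # identical values can't be separated
        threshold = (x_sorted[i - 1] + x_sorted[i]) / 2.0
        left, right = y_sorted[:i], y_sorted[i:]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if weighted < best_impurity:
            best_threshold, best_impurity = threshold, weighted
    return best_threshold, best_impurity
```

For example, `best_split(np.array([1.0, 2.0, 3.0, 10.0]), np.array([0, 0, 1, 1]))` evaluates the midpoints 1.5, 2.5, and 6.5 and picks 2.5, since splitting there separates the classes perfectly (weighted impurity 0). Real implementations add refinements, such as only considering midpoints where the class label changes, but the core search is the same.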