Advantages:
- Ability to learn non-linear decision boundaries
- Reduces prediction variance compared to a single decision tree, because averaging many decorrelated trees cancels out much of their individual overfitting
- Minimal data preprocessing required: Random Forest is robust to outliers, does not require feature scaling, and can handle both numeric and categorical features
- Trees can be trained in parallel, since each tree is built independently of the others, which speeds up training (see the sketch after this list)
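A minimal sketch of the last two points, assuming scikit-learn and a synthetic regression dataset (the dataset parameters and `n_estimators=200` are illustrative choices, not prescriptions): the features are fed in unscaled, and `n_jobs=-1` trains the independent trees in parallel. The single-tree baseline is included to show the variance reduction from averaging.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic data for illustration; no feature scaling is applied,
# since trees split on raw thresholds.
X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: one fully grown tree, which tends to overfit (high variance).
tree = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# n_jobs=-1 builds the 200 independent trees in parallel on all CPU cores.
forest = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

# The forest typically scores noticeably better on held-out data,
# reflecting the variance reduction from averaging many trees.
print(f"single tree R^2: {tree.score(X_test, y_test):.3f}")
print(f"forest R^2:      {forest.score(X_test, y_test):.3f}")
```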
Disadvantages:
- Usually less accurate than a well-tuned boosting algorithm
- Less interpretable than a single decision tree, since a prediction is an aggregate over many trees and cannot be explained by a single diagram. However, variable importances can still be extracted, as shown in the sketch below.
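A minimal sketch of extracting variable importances, again assuming scikit-learn and synthetic data (the classification task and hyperparameters here are illustrative): the fitted forest exposes one impurity-based importance score per feature, which partially recovers interpretability even though no single tree explains a prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data where only 3 of the 10 features are informative.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances: one score per feature, summing to 1.
importances = forest.feature_importances_
for i in np.argsort(importances)[::-1][:3]:
    print(f"feature {i}: importance {importances[i]:.3f}")
```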