In what situations is it more advisable to use GBM instead of Random Forest, and vice versa?

On structured (tabular) datasets, a well-tuned GBM tends to outperform a Random Forest. However, when many features carry high predictive importance, Random Forest's random feature subsetting at each split decorrelates the trees and may further improve prediction. When only a few features dominate in importance, GBM's sequential fitting to residuals will likely yield a larger reduction in bias. Assuming the computational budget allows, it is usually advisable to try several algorithms and even consider ensembling them for a final prediction.
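The advice above can be sketched with scikit-learn (an assumption; any gradient-boosting or forest implementation would do): fit both learners on a synthetic structured dataset, compare cross-validated accuracy, and combine them with a simple soft-voting ensemble. All parameter values here are illustrative, not tuned.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    VotingClassifier,
)
from sklearn.model_selection import cross_val_score

# Synthetic tabular data: 20 features, only 5 of which are informative,
# i.e. a setting where a few features dominate in importance.
X, y = make_classification(
    n_samples=1000, n_features=20, n_informative=5, random_state=0
)

# GBM: shallow trees fit sequentially to reduce bias.
gbm = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0
)

# Random Forest: deep trees on random feature subsets to reduce variance.
rf = RandomForestClassifier(
    n_estimators=200, max_features="sqrt", random_state=0
)

# A simple ensemble of the two, averaging predicted probabilities.
ensemble = VotingClassifier([("gbm", gbm), ("rf", rf)], voting="soft")

gbm_score = cross_val_score(gbm, X, y, cv=5).mean()
rf_score = cross_val_score(rf, X, y, cv=5).mean()
ens_score = cross_val_score(ensemble, X, y, cv=5).mean()

print(f"GBM:      {gbm_score:.3f}")
print(f"RF:       {rf_score:.3f}")
print(f"Ensemble: {ens_score:.3f}")
```

Which model wins depends on the data; the point is that comparing them (and their ensemble) under cross-validation is cheap relative to the cost of committing to the wrong one.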