Dropout refers to randomly deactivating hidden units during training so that a smaller subnetwork is trained on each forward pass.
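A minimal sketch of this idea in NumPy, using "inverted" dropout (the common variant that rescales the surviving units at training time); the `rate` value and toy activations are illustrative assumptions, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5):
    # Zero each unit independently with probability `rate`,
    # then rescale survivors by 1/(1 - rate) so the expected
    # activation magnitude is unchanged (inverted dropout).
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

h = np.ones(10)          # toy hidden-layer activations
h_dropped = dropout(h)   # roughly half the units are zeroed
```

At inference time no units are dropped and no rescaling is needed, precisely because the rescaling was already applied during training.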
Overfitting can be mitigated by collecting more training data, applying regularization, or otherwise reducing model complexity.
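As one concrete example of regularization, here is a sketch using scikit-learn's `Ridge` (L2-penalized linear regression); the synthetic data and the `alpha` values are assumptions chosen only to show that a stronger penalty shrinks the coefficients:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 20))
y = X[:, 0] + 0.1 * rng.normal(size=100)  # only feature 0 matters

# alpha controls the strength of the L2 penalty:
# larger alpha pulls coefficients toward zero, reducing variance.
weak = Ridge(alpha=0.01).fit(X, y)
strong = Ridge(alpha=100.0).fit(X, y)
```

The heavily penalized model sacrifices some training-set fit in exchange for coefficients that generalize better, which is exactly the overfitting trade-off being described.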
A model that is underfit will produce poor evaluation metrics even on the training data, such as a high RMSE or misclassification rate.
Overfitting occurs when a model performs noticeably worse on data that was not used in training than on the data it was explicitly fit to.
Tuning the number of trees together with the learning rate is a good way to ensure you are building a model of appropriate complexity.
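A sketch of this tuning with scikit-learn's `GridSearchCV` over a gradient-boosted regressor; the synthetic dataset and the grid values are illustrative assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5,
                       noise=10.0, random_state=0)

# Trees and learning rate interact: a smaller learning rate
# typically needs more trees to reach the same training fit,
# so they should be tuned jointly rather than one at a time.
param_grid = {
    "n_estimators": [50, 100, 200],
    "learning_rate": [0.01, 0.1, 0.3],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
```

Cross-validated search like this selects the combination that generalizes best, rather than the one that merely fits the training data most closely.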