Pros:
- Through a simple transformation (the logit, or log-odds, link), much of the interpretability and intuition of linear regression is preserved
- Efficient to implement and requires little tuning of parameters
- Predicted probabilities are usually well calibrated, which is not true of many other machine learning algorithms. This means that, for example, a predicted probability of 0.10 can be read directly as a 10% probability of success for that observation (see the calibration sketch after this list).
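To make the calibration point concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (both are illustrative choices, not part of the original text). It fits a logistic regression, bins the predicted probabilities, and compares each bin's mean prediction to the observed fraction of positives; for a well-calibrated model the two should roughly match.

```python
# Sketch: checking calibration of logistic regression predictions.
# Assumes scikit-learn; the dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.calibration import calibration_curve

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # predicted probability of class 1

# Bin the predictions and compare the mean predicted probability in each bin
# to the observed fraction of positives; well-calibrated bins lie near y = x.
frac_pos, mean_pred = calibration_curve(y_test, proba, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"mean predicted {p:.2f} -> observed fraction {f:.2f}")
```

If the printed pairs track each other closely, the "0.10 means 10%" reading holds; a model whose bins drift far from the diagonal would need recalibration before its scores could be interpreted that way.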
Cons:
- Does not automatically learn a non-linear decision boundary; any non-linear relationship must be specified explicitly by transforming the original features and adding the resulting terms to the model (see the sketch after this list)
- Does not perform as well as more flexible machine learning algorithms when modeling complex, high-dimensional relationships
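The following is a minimal sketch, assuming scikit-learn and a synthetic "concentric circles" dataset (both illustrative assumptions), of what explicitly adding non-linear terms looks like in practice: expanding the inputs with polynomial features lets a linear boundary in the expanded space act as a curved boundary in the original space.

```python
# Sketch: handling a non-linear boundary by adding explicit polynomial terms.
# Assumes scikit-learn; the dataset is synthetic and purely illustrative.
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.4, random_state=0)

# Plain logistic regression cannot separate concentric circles...
linear = LogisticRegression().fit(X, y)

# ...but adding squared and interaction terms makes the boundary learnable.
nonlinear = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    LogisticRegression(),
).fit(X, y)

print("original features:  ", linear.score(X, y))
print("polynomial features:", nonlinear.score(X, y))
```

The key point is that the modeler, not the algorithm, chooses which transformed terms to include, in contrast to methods such as tree ensembles or neural networks that learn non-linear structure on their own.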