

How does hinge loss differ from logistic loss?


Hinge loss, defined as max(0, 1 − y·f(x)) for labels y ∈ {−1, +1}, penalizes misclassifications in proportion to their severity: the cost increases linearly as the decision function output f(x) moves further onto the wrong side of the margin, and it is exactly zero for points classified correctly with a margin of at least 1. This property is one of the reasons SVM performs well on many data sets, as it drives the hyperplane toward the maximum-margin separator rather than merely one that fits the training labels. Hinge loss is convex but non-differentiable at the hinge point y·f(x) = 1, so optimizers must rely on subgradients there. Logistic loss (cross-entropy), log(1 + exp(−y·f(x))), is smooth everywhere and does not suffer from this problem; it also allows for the computation of predicted probabilities rather than just class labels, which is why it is the loss underlying logistic regression. In practice, SVM is usually preferred when the decision boundary is non-linear or many variable transformations would otherwise be required, since kernels handle those cases efficiently, whereas for a simpler problem where direct probability estimates are desired, logistic regression is often the better choice.
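
To make the comparison concrete, here is a minimal sketch in Python/NumPy that evaluates both losses over a range of decision function outputs for a positive example. The function names (hinge_loss, logistic_loss, predicted_probability) are illustrative, not from any particular library:

```python
import numpy as np

def hinge_loss(y, f):
    """Hinge loss: max(0, 1 - y*f). Zero beyond the margin, linear inside it."""
    return np.maximum(0.0, 1.0 - y * f)

def logistic_loss(y, f):
    """Logistic (cross-entropy) loss: log(1 + exp(-y*f)). Smooth and always positive."""
    # np.logaddexp(0, x) computes log(1 + exp(x)) without overflow
    return np.logaddexp(0.0, -y * f)

def predicted_probability(f):
    """Sigmoid of the decision value: the probability estimate logistic loss supports."""
    return 1.0 / (1.0 + np.exp(-f))

# Compare the two losses for a positive example (y = +1)
# as the decision function output f(x) varies.
f = np.linspace(-3, 3, 7)
y = 1.0
for fi, h, l in zip(f, hinge_loss(y, f), logistic_loss(y, f)):
    print(f"f(x) = {fi:+.1f}  hinge = {h:.3f}  logistic = {l:.3f}  P(y=+1) = {predicted_probability(fi):.3f}")
```

Running this shows the key difference: hinge loss is exactly 0 once f(x) ≥ 1 (the point is outside the margin), while logistic loss keeps shrinking toward zero but never reaches it, and the sigmoid column illustrates how the logistic formulation also yields a probability for each prediction.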

For less common loss functions used in classification, see: https://en.wikipedia.org/wiki/Loss_functions_for_classification

