What are some of the pros/cons of SVM?
Pros: Relatively fast computational time, thanks to the kernel trick
Cons: Performance of the algorithm is sensitive to the choice of kernel
While SVM is used most often in classification scenarios, it can be extended to regression (support vector regression, SVR) by allowing the user to specify a maximum margin of error, epsilon, tolerated for any observation.
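That "maximum margin of error" corresponds to the epsilon-insensitive loss used by SVR. A minimal NumPy sketch (the epsilon value and data here are illustrative, not from the notes):

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    # Errors smaller than epsilon (the allowed margin of error) cost
    # nothing; larger errors are penalized linearly beyond that margin.
    return np.maximum(0.0, np.abs(y_true - y_pred) - epsilon)

y_true = np.array([1.0, 2.0])
y_pred = np.array([1.05, 2.5])
print(epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1))  # [0.  0.4]
```

The first prediction is within epsilon of the target, so it incurs zero loss; the second exceeds the tolerance by 0.4.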
Hinge loss penalizes misclassifications more heavily the further off they are, since the cost increases linearly as the decision function output moves further away from the actual label on the wrong side of the margin.
The hinge loss function uses the distance between observations and the hyperplane that separates classes in the SVM algorithm in order to quantify the magnitude of error.
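The hinge loss described above can be sketched in NumPy (labels and decision values are illustrative):

```python
import numpy as np

def hinge_loss(y_true, decision):
    # Hinge loss: zero for points correctly classified beyond the margin,
    # growing linearly with distance on the wrong side of the margin.
    # y_true holds labels in {-1, +1}; decision holds raw SVM
    # decision-function outputs (signed distances to the hyperplane).
    return np.maximum(0.0, 1.0 - y_true * decision)

y = np.array([1, 1, -1, -1])
f = np.array([2.0, 0.5, -3.0, 1.0])
print(hinge_loss(y, f))  # [0.  0.5 0.  2. ]
```

A confident, correct prediction (first and third points) costs nothing; a point inside the margin or on the wrong side is penalized linearly.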
C (regularization parameter), Kernel Function, Gamma (RBF kernel)
Common choices for kernels include: Linear, Polynomial, Radial Basis, Sigmoid
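The hyperparameters and kernel choices above are commonly tuned together; a hedged sketch using scikit-learn's GridSearchCV (assumed installed; the toy dataset and grid values are illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Toy, well-separated two-class data for illustration only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [2, 2], [2, 3], [3, 2], [3, 3]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Search over the main SVM hyperparameters: C, kernel, and gamma.
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10],
                "kernel": ["linear", "rbf"],
                "gamma": ["scale", 0.1]},
    cv=2,
)
grid.fit(X, y)
print(grid.best_params_)
```

Since performance is sensitive to the kernel, cross-validating over kernel and its associated parameters is the usual safeguard.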
The kernel trick allows SVM to form a decision boundary in a higher-dimensional feature space without ever explicitly computing the coordinates in that space.
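The trick can be demonstrated with a degree-2 polynomial kernel in 2-D: the kernel value equals the inner product under an explicit 6-dimensional feature map, but is computed without ever building that map (the points here are illustrative):

```python
import numpy as np

def poly_kernel(x, z):
    # Degree-2 polynomial kernel: an inner product in a
    # higher-dimensional space, computed directly in the input space.
    return (x @ z + 1.0) ** 2

def explicit_features(x):
    # The 6-D feature map this kernel implicitly corresponds to
    # for 2-D inputs (x1, x2).
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])
# Both evaluate to the same number; the kernel skips the explicit mapping.
print(poly_kernel(x, z), explicit_features(x) @ explicit_features(z))
```

This is why SVM can separate classes that are not linearly separable in the original feature space: the boundary is linear in the implicit higher-dimensional space.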
While soft margin classification relaxes the requirement of a hyperplane that must perfectly distinguish between the classes, a separate issue arises when there is no way to define such a hyperplane in the original feature space.
While the maximum margin classifier is optimal in theory, in practice, observations cannot be perfectly separated in most classification problems.
SVM is a classification algorithm that seeks a decision boundary maximizing the margin, i.e., the distance between the boundary and the nearest points of each class.
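A minimal sketch of that idea using scikit-learn's SVC (assumed installed; the toy dataset is illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable clusters for illustration.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

# A linear kernel fits a maximum-margin hyperplane; C controls how
# strictly margin violations are penalized.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)
print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))  # [0 1]
```

The fitted `clf.support_vectors_` attribute exposes the points nearest the boundary, which alone determine the hyperplane.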