In the case described above, where a transformation is needed before a separating hyperplane between the classes exists, the kernel trick allows an SVM to form a decision boundary in a higher-dimensional space without actually computing the transformation of the original data. It does this by using a kernel function, which returns the similarity between two observations, equal to the dot product of their images in the transformed space, computed directly from the original features. This allows the SVM to maintain its computational efficiency even when the implied feature space has many dimensions.
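The idea can be sketched with scikit-learn (an assumption, since the text names no library): on concentric-circle data that no straight line can separate, a linear SVM works in the original space while an RBF-kernel SVM uses the kernel as its similarity measure and separates the classes without ever computing transformed coordinates.

```python
# Sketch of the kernel trick; scikit-learn and this data set are illustrative
# assumptions, not part of the original text.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear SVM searches for a hyperplane in the original feature space.
linear_svm = SVC(kernel="linear").fit(X, y)

# An RBF-kernel SVM uses k(x, z) = exp(-gamma * ||x - z||^2) as the
# similarity between observations, implicitly separating the classes in a
# higher-dimensional space without computing the transformation.
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

print(linear_svm.score(X, y))  # poor: no separating line exists
print(rbf_svm.score(X, y))     # high: kernel handles the nonlinearity
```

The RBF kernel is only one choice; polynomial and other kernels follow the same pattern of replacing explicit transformation with a similarity computed in the original space.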