Kernel PCA extends regular PCA to situations where a linear transformation cannot adequately capture the variability in the data. It works analogously to a kernelized SVM: a problem that is not linearly separable in the original feature space is mapped into a higher-dimensional space where it becomes (approximately) linearly separable. Like the SVM, it relies on the kernel trick, computing inner products via a kernel function so that the high-dimensional mapping is never carried out explicitly. Common kernel choices include the polynomial, Gaussian (RBF), and sigmoid kernels.
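As a minimal sketch of the idea, the following uses scikit-learn's `KernelPCA` on two concentric circles, a classic dataset that linear PCA cannot untangle; the `gamma` value here is illustrative, not tuned.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA, PCA

# Two concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA only rotates the data, so the circles stay entangled.
X_linear = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF (Gaussian) kernel; gamma=10 is an illustrative choice.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

print(X_linear.shape, X_kpca.shape)
```

After the implicit RBF mapping, the leading kernel principal components tend to spread the inner and outer circles apart, which is exactly the nonlinear structure linear PCA misses.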
