Kernel PCA extends standard PCA to situations where a linear transformation cannot adequately capture the variability in the data. It works analogously to a kernelized SVM: a problem that is not linearly separable in the original feature space becomes separable in the transformed space. Like SVM, it relies on the kernel trick, so it never has to compute coordinates in the higher-dimensional space explicitly. Common kernel choices include the polynomial, Gaussian (RBF), and sigmoid kernels.
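As a minimal sketch of the idea, the snippet below (assuming scikit-learn; the dataset, `gamma` value, and component count are illustrative choices, not from the original text) contrasts linear PCA with RBF-kernel PCA on concentric circles, a classic case where no linear projection separates the classes:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Linear PCA just rotates the data; the circular structure remains.
X_lin = PCA(n_components=2).fit_transform(X)

# Kernel PCA with a Gaussian (RBF) kernel; gamma sets the kernel width.
# The kernel trick means the high-dimensional map is never computed directly.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

print(X_lin.shape, X_kpca.shape)
```

With a suitable `gamma`, the first kernel-PCA component tends to separate the inner and outer circle, whereas the linear projection cannot.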