The curse of dimensionality refers to the problems that arise when modeling a dataset with a large number of features. The risk of overfitting grows with the number of input features, largely because the data becomes increasingly sparse in high-dimensional space. It also becomes harder to meaningfully interpret the relationships a machine learning algorithm has learned when the feature count is large. Feature selection and dimensionality reduction are the most common remedies for making sense of high-dimensional data.
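As a minimal sketch of dimensionality reduction, the snippet below implements PCA via the singular value decomposition and projects a 50-feature dataset down to 2 components. The function name `pca_reduce` and the synthetic data are illustrative, not from the original text; in practice a library implementation (e.g. scikit-learn's `PCA`) would typically be used.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components using SVD."""
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance explained
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # 100 samples, 50 features
X_reduced = pca_reduce(X, n_components=2)
print(X_reduced.shape)                  # (100, 2)
```

Reducing to a handful of components both mitigates sparsity and makes the learned structure easier to visualize and interpret.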