For clustering, any of the standard clustering evaluation metrics (Silhouette Score, Dunn Index, Rand Index, etc.) is appropriate; note that the Silhouette Score and Dunn Index are internal measures, while the Rand Index is an external measure that requires ground-truth labels.
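As a concrete illustration of one of these, the sketch below computes the Silhouette Score by hand for a tiny one-dimensional clustering; the data points and cluster assignments are made up for the example.

```python
# Illustrative sketch: Silhouette Score computed by hand for a tiny 1-D
# clustering (points and labels below are made up for the example).

def silhouette_score(points, labels):
    def dist(a, b):
        return abs(a - b)

    scores = []
    for i, (p, lab) in enumerate(zip(points, labels)):
        # a(i): mean distance to the other points in the same cluster
        same = [dist(p, q) for j, (q, l) in enumerate(zip(points, labels))
                if l == lab and j != i]
        a = sum(same) / len(same)
        # b(i): smallest mean distance to the points of any other cluster
        b = min(
            sum(dist(p, q) for q, l in zip(points, labels) if l == other)
            / sum(1 for l in labels if l == other)
            for other in set(labels) if other != lab
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.2, 1.1, 8.0, 8.3, 7.9]
labels = [0, 0, 0, 1, 1, 1]
# Tight, well-separated clusters give a score close to +1
print(round(silhouette_score(points, labels), 3))
```

A score near +1 means observations sit well inside their own cluster; a score near 0 means clusters overlap, and negative values suggest points may be assigned to the wrong cluster.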
Since there are no labels associated with the observations in unsupervised learning, there is no direct error metric that can be applied.
The ROC curve is produced by plotting the False Positive Rate (FPR) on the x-axis and the True Positive Rate (TPR) on the y-axis at every possible classification threshold (decision rule).
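The sketch below traces the (FPR, TPR) points of an ROC curve by sweeping the decision threshold over a model's predicted scores; the labels and scores are made up for the example.

```python
# Illustrative sketch: tracing ROC points by sweeping the decision
# threshold over predicted scores (labels and scores are made up).

def roc_points(y_true, scores):
    pts = []
    P = sum(y_true)            # number of actual positives
    N = len(y_true) - P        # number of actual negatives
    # One decision rule per candidate threshold, from strictest to loosest
    for thr in sorted(set(scores), reverse=True):
        preds = [1 if s >= thr else 0 for s in scores]
        tp = sum(p and t for p, t in zip(preds, y_true))
        fp = sum(p and not t for p, t in zip(preds, y_true))
        pts.append((fp / N, tp / P))   # (FPR, TPR)
    return pts

y_true = [0, 0, 1, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
for fpr, tpr in roc_points(y_true, scores):
    print(fpr, tpr)
```

Lowering the threshold only adds positive predictions, so both FPR and TPR rise monotonically from the strictest rule toward (1, 1), which is what gives the ROC curve its characteristic staircase shape.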
Recall measures the proportion of actual positive-class observations that were correctly classified: TP / (TP + FN).
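A minimal sketch of that formula, using made-up confusion-matrix counts:

```python
# Illustrative sketch: recall from confusion-matrix counts (counts are made up).
tp, fn = 40, 10                 # actual positives: correctly vs. incorrectly classified
recall = tp / (tp + fn)
print(recall)  # 0.8
```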
Misclassification Rate measures the overall proportion of observations that were incorrectly classified; it equals 1 − Accuracy.
Accuracy is the most straightforward evaluation metric for a classification problem; it simply measures the overall proportion of observations that were correctly classified.
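The two metrics above can be sketched together from a 2x2 confusion matrix; the counts below are made up for the example.

```python
# Illustrative sketch: accuracy and misclassification rate from a 2x2
# confusion matrix (the counts are made up for the example).
tp, fp, fn, tn = 40, 5, 10, 45
total = tp + fp + fn + tn

accuracy = (tp + tn) / total            # proportion correctly classified
misclassification = (fp + fn) / total   # proportion incorrectly classified

print(accuracy, misclassification)  # 0.85 0.15
```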
The global F-test is the highest-level model significance measure: it tests whether at least one predictor in the model has a statistically significant relationship with the response.
Common regression evaluation metrics: global F-test, R-squared, MSE, MAE, RMSE, and information criteria (AIC, BIC).
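Several of the error-based metrics in that list can be computed by hand from observed and predicted values; the numbers below are made up for the example.

```python
# Illustrative sketch: MSE, MAE, RMSE, and R-squared computed by hand
# (observed and predicted values are made up for the example).
import math

y = [3.0, 5.0, 7.0, 9.0]        # observed values
y_hat = [2.5, 5.5, 6.5, 9.5]    # predicted values
n = len(y)

residuals = [yi - yh for yi, yh in zip(y, y_hat)]
mse = sum(r ** 2 for r in residuals) / n        # mean squared error
mae = sum(abs(r) for r in residuals) / n        # mean absolute error
rmse = math.sqrt(mse)                           # RMSE: back on the scale of y

y_bar = sum(y) / n
ss_tot = sum((yi - y_bar) ** 2 for yi in y)
r_squared = 1 - sum(r ** 2 for r in residuals) / ss_tot

print(mse, mae, rmse, round(r_squared, 4))
```

RMSE is often preferred over MSE for reporting because it is in the same units as the response variable; AIC and BIC additionally penalize model complexity and so are better suited to comparing models with different numbers of predictors.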