What is Precision?
Precision measures the proportion of observations that the algorithm predicts to be positive that actually have positive labels.
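For instance, precision can be sketched directly from confusion-matrix counts; the true positive (TP) and false positive (FP) numbers below are purely hypothetical.

```python
# Hypothetical counts from a binary classifier's predictions
tp = 40  # predicted positive, actually positive
fp = 10  # predicted positive, actually negative

precision = tp / (tp + fp)  # proportion of predicted positives that are correct
print(precision)  # 0.8
```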
Recall measures the proportion of observations that actually belong to the positive class that were correctly classified as positive.
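A similar sketch for recall, again with hypothetical counts of true positives (TP) and false negatives (FN):

```python
# Hypothetical counts
tp = 40  # predicted positive, actually positive
fn = 20  # predicted negative, actually positive

recall = tp / (tp + fn)  # proportion of actual positives that were found
print(recall)  # 0.666...
```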
The F1 Score is the harmonic mean of precision and recall.
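To illustrate, here is a minimal sketch combining the hypothetical precision and recall values from above:

```python
# Hypothetical precision and recall values
precision = 0.8
recall = 2 / 3

f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(f1, 3))  # 0.727
```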
Accuracy is the most straightforward evaluation metric for a classification problem, and it simply measures the overall proportion of observations that were correctly classified.
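A small sketch of accuracy computed from hypothetical confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts
tp, tn, fp, fn = 40, 30, 10, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct predictions over all predictions
print(accuracy)  # 0.7
```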
Misclassification Rate measures the overall proportion of observations that were incorrectly classified; it is simply one minus accuracy.
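The same hypothetical counts give the misclassification rate as the complement of accuracy:

```python
# Hypothetical confusion-matrix counts
tp, tn, fp, fn = 40, 30, 10, 20

misclassification_rate = (fp + fn) / (tp + tn + fp + fn)  # equals 1 - accuracy
print(misclassification_rate)  # 0.3
```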
False Positive Rate measures the proportion of actual negative observations that were predicted to be positive.
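A sketch of the False Positive Rate from hypothetical counts of false positives (FP) and true negatives (TN):

```python
# Hypothetical counts for the negative class
fp = 10  # actually negative, predicted positive
tn = 30  # actually negative, predicted negative

fpr = fp / (fp + tn)  # share of actual negatives flagged as positive
print(fpr)  # 0.25
```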
Specificity is the analog of recall for the negative class: the proportion of actual negative observations that were correctly classified as negative.
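And specificity from the same hypothetical counts:

```python
# Hypothetical counts for the negative class
tn = 30  # actually negative, predicted negative
fp = 10  # actually negative, predicted positive

specificity = tn / (tn + fp)  # equals 1 - FPR
print(specificity)  # 0.75
```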
The ROC curve is produced by plotting the False Positive Rate (FPR) on the x-axis and the True Positive Rate (TPR) on the y-axis for every possible decision threshold.
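As an illustration, the sketch below uses scikit-learn's roc_curve on made-up labels and predicted probabilities; the data are purely illustrative.

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up true labels and predicted probabilities for the positive class
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]

# FPR/TPR pairs for each decision threshold induced by the scores
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(fpr, tpr)

print(roc_auc_score(y_true, y_score))  # area under the ROC curve
```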
One of the most useful tools for evaluating the performance of any classification algorithm is the confusion matrix.
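A minimal sketch of building a confusion matrix with scikit-learn, using made-up labels and predictions:

```python
from sklearn.metrics import confusion_matrix

# Made-up true labels and hard predictions
y_true = [0, 1, 0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_true, y_pred)
tn, fp, fn, tp = cm.ravel()  # row-major order for binary labels: TN, FP, FN, TP
print(cm)
print(tn, fp, fn, tp)
```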