Precision and Recall

(Notes from my Udacity Machine Learning Nanodegree course)

Recall:

True Positive / (True Positive + False Negative)

Out of all the items that are truly positive, how many were correctly classified as positive? Or simply: how many of the positive items were ‘recalled’ from the dataset?
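
As a quick sanity check, here’s a small sketch using scikit-learn (my own example, not from the course material):

```python
# Recall on a toy binary-classification example.
from sklearn.metrics import recall_score

y_true = [1, 1, 1, 1, 0, 0]   # 4 truly positive items
y_pred = [1, 1, 1, 0, 1, 1]   # classifier's predictions

# TP = 3, FN = 1  ->  recall = 3 / (3 + 1) = 0.75
print(recall_score(y_true, y_pred))  # 0.75
```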

Precision:

True Positive / (True Positive + False Positive)

Out of all the items labeled as positive, how many truly belong to the positive class?
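
Using the same toy example as above (again, just my own sketch):

```python
# Precision on the same toy example.
from sklearn.metrics import precision_score

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1]   # 5 items labeled positive

# TP = 3, FP = 2  ->  precision = 3 / (3 + 2) = 0.6
print(precision_score(y_true, y_pred))  # 0.6
```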

F1 Score

Now that you’ve seen precision and recall, another metric you might consider using is the F1 score. The F1 score combines precision and recall relative to a specific positive class.

The F1 score is the harmonic mean of precision and recall, so it reaches its best value at 1 and its worst at 0:

F1 = 2 * (precision * recall) / (precision + recall)
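
Plugging in the toy example from above (precision = 0.6, recall = 0.75), here’s a quick sketch of the same calculation with scikit-learn:

```python
# F1 score on the same toy example.
from sklearn.metrics import f1_score

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 1]

# 2 * (0.6 * 0.75) / (0.6 + 0.75) = 0.9 / 1.35 ≈ 0.667
print(f1_score(y_true, y_pred))  # ~0.667
```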