12.7 Model Performance

  • Say we decide that a predicted probability of 0.22828 is our optimal cutoff value to predict depression using this model. (Note: this is a GOOD place to use all the decimals.)
  • We can use this probability to classify each row into groups.
    • The assigned class values must match the data type and factor levels of the true values.
    • They also have to be in the same order, so the 0 group needs to come first.
    • I want this matrix to show up like the one on Wikipedia, so I’m leveraging the forcats package to reverse my factor level ordering.
  • We can calculate a confusion matrix using the similarly named function from the caret package.
# Classify each row: predicted probability at or above the cutoff --> 1 (depressed)
plot.mpp$pred.class2 <- ifelse(plot.mpp$pred.prob < 0.22828, 0, 1)
# Label the classes, then reverse the level order so "Depressed" is the first level
plot.mpp$pred.class2 <- factor(plot.mpp$pred.class2,
                               labels = c("Not Depressed", "Depressed")) %>%
                        forcats::fct_rev()

confusionMatrix(plot.mpp$pred.class2, forcats::fct_rev(plot.mpp$truth), positive="Depressed")
## Confusion Matrix and Statistics
## 
##                Reference
## Prediction      Depressed Not Depressed
##   Depressed            25            52
##   Not Depressed        25           192
##                                           
##                Accuracy : 0.7381          
##                  95% CI : (0.6839, 0.7874)
##     No Information Rate : 0.8299          
##     P-Value [Acc > NIR] : 0.999973        
##                                           
##                   Kappa : 0.2362          
##                                           
##  Mcnemar's Test P-Value : 0.003047        
##                                           
##             Sensitivity : 0.50000         
##             Specificity : 0.78689         
##          Pos Pred Value : 0.32468         
##          Neg Pred Value : 0.88479         
##              Prevalence : 0.17007         
##          Detection Rate : 0.08503         
##    Detection Prevalence : 0.26190         
##       Balanced Accuracy : 0.64344         
##                                           
##        'Positive' Class : Depressed       
## 
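The pieces of this output can also be extracted programmatically. As a minimal sketch (assuming we save the result in an object named cm, a name not used above), the returned object stores the cell counts and statistics as named components:

cm <- confusionMatrix(plot.mpp$pred.class2,
                      forcats::fct_rev(plot.mpp$truth),
                      positive = "Depressed")
cm$table                         # the 2x2 table of cell counts
cm$overall["Accuracy"]           # 0.7381
cm$byClass["Sensitivity"]        # 0.5000
cm$byClass["Balanced Accuracy"]  # 0.6434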
  • 192 people were correctly predicted to not be depressed (True Negative, \(n_{11}\))
  • 52 people were incorrectly predicted to be depressed (False Positive, \(n_{21}\))
  • 25 people were incorrectly predicted to not be depressed (False Negative, \(n_{12}\))
  • 25 people were correctly predicted to be depressed (True Positive, \(n_{22}\))
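These four counts can be verified directly from the data. A quick sketch using base R’s table() on the plot.mpp columns created above (rows are predictions, columns are the truth):

tab <- table(Prediction = plot.mpp$pred.class2,
             Reference  = forcats::fct_rev(plot.mpp$truth))
tab["Depressed", "Depressed"]          # 25 true positives
tab["Depressed", "Not Depressed"]      # 52 false positives
tab["Not Depressed", "Depressed"]      # 25 false negatives
tab["Not Depressed", "Not Depressed"]  # 192 true negatives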

Other terminology:

  • Sensitivity/Recall/True positive rate: P(predicted positive | truly positive) = 25/(25+25) = .50
  • Specificity/True negative rate: P(predicted negative | truly negative) = 192/(52+192) = .7869
  • Precision/Positive predictive value: P(truly positive | predicted positive) = 25/(25+52) = .3247
  • Accuracy: (TP + TN)/Total = (25 + 192)/(25+52+25+192) = .7381
  • Balanced Accuracy: \(\frac{1}{2}\left(\frac{n_{11}}{n_{\cdot 1}} + \frac{n_{22}}{n_{\cdot 2}}\right)\) = (.7869 + .50)/2 = .6434. This adjusts for class size imbalance (like in this example).
  • F1 score: the harmonic mean of precision and recall, ranging from 0 (bad) to 1 (good): \(2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}\) = 2*(.3247*.50)/(.3247+.50) = .3937. (Both metrics are recomputed by hand in the sketch below.)
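As a sanity check, all of these statistics can be recomputed by hand from the four cell counts; this short sketch reproduces the values reported by caret:

# Cell counts from the confusion matrix above
TP <- 25; FP <- 52; FN <- 25; TN <- 192

sensitivity <- TP / (TP + FN)                                   # 0.5000
specificity <- TN / (TN + FP)                                   # 0.7869
precision   <- TP / (TP + FP)                                   # 0.3247
accuracy    <- (TP + TN) / (TP + FP + FN + TN)                  # 0.7381
balanced    <- (sensitivity + specificity) / 2                  # 0.6434
f1          <- 2 * precision * sensitivity / (precision + sensitivity)  # 0.3937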