11.1 Calculating predictions

So what if you want the model's predicted probability of the event for every individual in the data set? There's no way I'm doing that calculation by hand for each person.

Using the main effects model from above, stored in the object mvmodel, we can call the predict() command to generate a vector of predictions for each row used in the model.

Any row with missing data on any variable used in the model will NOT get a predicted value.
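Below is a minimal sketch of this step, assuming mvmodel is a logistic regression fit with glm(..., family = binomial). The type = "response" argument asks predict() for probabilities on the 0–1 scale rather than log-odds.

```r
# Minimal sketch: mvmodel is assumed to be a glm() fit with family = binomial.
# type = "response" returns predicted probabilities (0-1 scale), not log-odds.
model.pred.prob <- predict(mvmodel, type = "response")
head(model.pred.prob)
```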

The mvmodel object contains a lot of information. I recommend you look at str(mvmodel) on your own time as it's too much to print out here. The important piece for this section is that the data used to fit the model (all complete-case records) are stored in the object.
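As a quick sketch (again assuming mvmodel is a glm object), those pieces can be pulled out directly:

```r
# Sketch: parts of the fitted glm object used in this section
str(mvmodel$model)   # the complete-case data the model was fit on
length(mvmodel$y)    # the observed 0/1 outcome for those same records
```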

Calling length(model.pred.prob) gives us 294, which matches the number of complete-case records used to build the model. This is the same length as mvmodel$y, so we can bind them together in a data frame (useful for plotting later).
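A sketch of that step, using a hypothetical data frame name pred.df for illustration:

```r
# Sketch: bind the observed outcome and the predicted probability together.
# pred.df is a hypothetical name used here for later plotting.
length(model.pred.prob)   # should match length(mvmodel$y); 294 here
pred.df <- data.frame(truth     = mvmodel$y,
                      pred.prob = model.pred.prob)
```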

The object model.pred.prob is a vector of individual predicted probabilities of the outcome (being depressed). To classify individual \(i\) as depressed or not, we draw a binary value \(x_{i} \in \{0, 1\}\) with probability \(p_{i}\) using the rbinom() function with size = 1.
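A sketch of that draw, storing the simulated classes in a hypothetical vector pred.class (the seed value is arbitrary, just to make the random draw reproducible):

```r
# Sketch: classify person i as depressed (1) with probability p_i
set.seed(12345)                              # arbitrary, for reproducibility
pred.class <- rbinom(n    = length(model.pred.prob),
                     size = 1,
                     prob = model.pred.prob)
```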

Applying class labels and creating a cross table of predicted vs truth:
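A sketch of those two steps, building on the hypothetical pred.class vector from above:

```r
# Sketch: attach readable labels, then cross-tabulate predicted vs. observed
pred.lab  <- factor(pred.class, labels = c("Not Depressed", "Depressed"))
truth.lab <- factor(mvmodel$y,  labels = c("Not Depressed", "Depressed"))
table(pred.lab, truth.lab)
```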

The model correctly identified 195 individuals as not depressed and 15 as depressed. The model got it wrong 49 + 25 = 74 times.

Is this good? What if death were the event?

11.1.0.1 Distribution of Predictions

Another important thing to examine is how well the model discriminates between the two groups in terms of their predicted probabilities. Let's look at a plot:
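One way to draw such a plot (a sketch, assuming ggplot2 and the hypothetical pred.df data frame from earlier) is to overlay the densities of the predicted probabilities for the two observed groups:

```r
# Sketch: distribution of predicted probabilities by observed outcome
library(ggplot2)
ggplot(pred.df, aes(x = pred.prob, fill = factor(truth))) +
  geom_density(alpha = 0.5) +
  labs(x = "Predicted probability of being depressed",
       fill = "Observed outcome")
```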

  • What do you notice in this plot?
  • What can you infer?