10.5 Count outcome data

Let's consider modeling the distribution of the number of occurrences of a rare event in a specified period of time - e.g., the number of thunderstorms in a year.

• If we assume:
• the rate ($$\mu$$) is fixed over time, and
• successive occurrences are independent of each other,

then we can use the Poisson distribution.

$P(Y=y) = e^{-\mu}\frac{\mu^{y}}{y!}$

• A distinctive feature of the Poisson distribution is that its mean, $$\mu$$, is also its variance.
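As a quick sketch (the value of $$\mu$$ here is arbitrary), we can verify the pmf formula above against R's built-in dpois(), and check the mean-equals-variance property by simulation:

```r
# Compare the pmf formula above with R's dpois()
mu  <- 3
y   <- 0:10
pmf <- exp(-mu) * mu^y / factorial(y)
all.equal(pmf, dpois(y, lambda = mu))   # TRUE

# Sample mean and variance of Poisson draws are both close to mu
set.seed(1)
draws <- rpois(1e5, lambda = mu)
mean(draws); var(draws)                 # both approximately 3
```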

10.5.0.1 Poisson Regression

Poisson regression is just another GLM - we use the natural log ($$ln$$) as the link function. This lets us model the log rate using a linear combination of covariates.

$ln(\mu) = \mathbf{X}\beta$

Then the expected rate of events per unit of time is:

$\mu = e^{\mathbf{X}\beta}$
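For instance, with hypothetical coefficients (an intercept of 0.5 and a slope of 0.2 for a single covariate x1, chosen only for illustration), the linear predictor converts to a rate like this:

```r
# Hypothetical fitted coefficients, for illustration only
beta <- c("(Intercept)" = 0.5, "x1" = 0.2)
x1   <- 2

log_mu <- beta[["(Intercept)"]] + beta[["x1"]] * x1  # linear predictor: ln(mu)
mu     <- exp(log_mu)                                # expected events per unit time
mu                                                   # exp(0.9), about 2.46
```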

This model assumes that the time of “exposure” is identical for each record, e.g.:

• Number of cigarettes per month
• Number of epileptic seizures per week
• Number of people with lung cancer in four cities

If this is not the case (and it often isn't), then the model needs to include an offset.

• e.g. observing each patient for epileptic seizures for a different number of days
• accounting for different sizes or structures of populations of interest (e.g. different cities with lung cancer)

What actually gets fit in glm is a model of the expected counts $$\lambda = \mu T$$, rather than rates, with an offset for the time period $$T$$.

• If all time periods are the same, then $$ln(T)$$ is constant and gets absorbed into the intercept, thus dropping out of the model.

$ln(\lambda) = \mathbf{X}\beta + ln(T)$

While this offset is added to the regression model as if it were another variable, it's not quite the same: the regression coefficient for the $$ln(T)$$ term is fixed at 1.

The generic call for fitting a Poisson model using glm is:

glm(y ~ x1 + x2 + offset(log(T)), family='poisson')

or, alternatively, with the offset supplied as an argument:

glm(y ~ x1 + x2, offset = log(T),  family='poisson')
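As a sketch with simulated data (the variable names and true coefficient values here are all hypothetical), we can check that glm with an offset recovers the rate parameters when exposure times vary across records:

```r
# Simulate counts whose exposure time T differs across records
set.seed(42)
n  <- 2000
x1 <- rnorm(n)
T  <- runif(n, 1, 10)               # observation time per record
mu <- exp(0.3 + 0.5 * x1) * T       # expected count = rate * exposure
y  <- rpois(n, mu)

fit <- glm(y ~ x1, offset = log(T), family = "poisson")
coef(fit)                           # estimates close to the true (0.3, 0.5)
```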

The $$\beta$$ regression coefficients are interpreted as differences in the log rate (or as log rate-ratios). So, just as with logistic regression, we often back-transform the coefficients by exponentiating before interpreting: $$e^{\beta}$$ is a rate-ratio.

• The exponentiated intercept is not a ratio, but the baseline rate when all covariates are 0.
• For other covariates, the exponentiated coefficient is the relative change in the rate per unit change in the covariate, e.g.:
• one year older
• males vs. females

Also, as with logistic regression, since the outcome was transformed, the standard errors are not useful or interpretable on the rate-ratio scale as is. To calculate a confidence interval for a rate ratio:

1. calculate the CI for $$\beta$$, then
2. exponentiate each endpoint.
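A minimal sketch of those two steps, using a hypothetical coefficient estimate and standard error (note that confint() on a glm fit uses profile-likelihood intervals, which differ slightly from this Wald construction):

```r
# Hypothetical coefficient estimate and standard error
beta_hat <- 0.0443
se       <- 0.006

# 1. Wald 95% CI on the log-rate scale
ci_log <- beta_hat + c(-1, 1) * qnorm(0.975) * se

# 2. Exponentiate the endpoints to get a CI for the rate ratio
exp(ci_log)
```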

10.5.0.2 Example: Modeling counts from the Add Health Wave IV data set

better example forthcoming

Let's model the number of siblings a person has, based on their age at Wave 1 (2008).

Visualize

hist(addhealth$nsib, xlab="Number of siblings", ylab="Count", main="", axes=FALSE, ylim=c(0,3000))
axis(1); axis(2, las=2); box()

nsib.model <- glm(nsib ~ agew1 + female, data=addhealth, family="poisson")
pander(summary(nsib.model))
             Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)  0.2647     0.1014       2.611     0.009019
agew1        0.0443     0.005989     7.397     1.39e-13
female       0.0969     0.01909      5.076     3.851e-07

(Dispersion parameter for poisson family taken to be 1)

Null deviance: 6411 on 3917 degrees of freedom
Residual deviance: 6335 on 3915 degrees of freedom
betas <- cbind(coef(nsib.model), confint(nsib.model))
kable(exp(betas), digits=3)
             exp(coef)   2.5 %   97.5 %
(Intercept)  1.303       1.068   1.589
agew1        1.045       1.033   1.058
female       1.102       1.061   1.144