General Advice (PMA6 9.9)
- Model selection is not a hard science.
- Some criteria have “rules of thumb” that can guide your exploration (such as a difference in AIC < 2 suggesting the models are essentially equivalent)
- Use common sense: A sub-optimal subset may make more sense than the optimal one
- p-values: When you compare two nested models, the difference in a criterion often has a known distribution.
- E.g., in the Wald (partial) F test, the scaled difference in RSS between the two models has an F distribution.
- All criteria should be used as guides.
- Perform multiple methods of variable selection, find the commonalities.
- Let science and the purpose of your model be your ultimate guide
- If the purpose of the model is explanation/interpretation, err on the side of parsimony (a smaller model) rather than being overly complex.
- If the purpose is prediction, then as long as you’re not overfitting the model (as checked using cross-validation techniques), use as much information as possible.
- Automated versions of variable selection processes should not be used blindly.
- “… perhaps the most serious source of error lies in letting statistical procedures make decisions for you.”…“Don’t be too quick to turn on the computer. Bypassing the brain to compute by reflex is a sure recipe for disaster.” Good and Hardin, Common Errors in Statistics (and How to Avoid Them), p. 3, p. 152
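The AIC rule of thumb above can be sketched with a small numeric example. This is a minimal illustration, not a procedure from PMA6: the data are synthetic, and the `aic` helper computes the Gaussian-likelihood AIC for an OLS fit up to an additive constant.

```python
import numpy as np

# Synthetic data: only the first predictor actually matters.
rng = np.random.default_rng(42)
n = 200
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X_full @ np.array([1.0, 2.0, 0.0, 0.0]) + rng.normal(size=n)

def aic(X, y):
    """AIC for an OLS fit, up to an additive constant: n*log(RSS/n) + 2k."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * k

aic_full = aic(X_full, y)          # all 3 predictors
aic_small = aic(X_full[:, :2], y)  # intercept + first predictor only
delta = abs(aic_full - aic_small)
# Rule of thumb: if delta < 2, the models are essentially indistinguishable,
# so prefer the smaller (more parsimonious) one.
```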
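The partial F test mentioned above compares the RSS of two nested models. Below is a sketch of the computation with synthetic data; the variable names are illustrative assumptions, and the resulting F statistic would be compared against an F(df_diff, df_resid) critical value (or converted to a p-value).

```python
import numpy as np

# Synthetic data: x3 is an irrelevant predictor we consider dropping.
rng = np.random.default_rng(0)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from an OLS fit via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones(n)
X_small = np.column_stack([ones, x1, x2])      # reduced model
X_big = np.column_stack([ones, x1, x2, x3])    # full model (adds x3)

rss_small, rss_big = rss(X_small, y), rss(X_big, y)
df_diff = X_big.shape[1] - X_small.shape[1]    # extra parameters in full model
df_resid = n - X_big.shape[1]

# Partial F: (drop in RSS per extra parameter) / (full-model residual MSE).
# Under H0 (extra coefficients are zero) this follows F(df_diff, df_resid).
F = ((rss_small - rss_big) / df_diff) / (rss_big / df_resid)
```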
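The overfitting check via cross-validation can also be sketched. This is a hand-rolled k-fold split on synthetic data, assuming a plain OLS fit; it is one simple way to do the check, not the only one. If the larger model's cross-validated error is no better than the smaller model's, the extra predictors are not earning their keep.

```python
import numpy as np

# Synthetic data: only the first of three predictors carries signal.
rng = np.random.default_rng(1)
n = 120
X = rng.normal(size=(n, 3))
y = 1.0 + X @ np.array([2.0, 0.0, 0.0]) + rng.normal(size=n)

def cv_mse(X, y, k=5):
    """k-fold cross-validated mean squared error for an OLS fit."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errs.append(np.mean((y[test] - Xte @ beta) ** 2))
    return float(np.mean(errs))

mse_small = cv_mse(X[:, :1], y)  # 1-predictor model
mse_full = cv_mse(X, y)          # all 3 predictors
# Similar CV errors would suggest the extra predictors add little for prediction.
```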