Taming the Beast of multiple model estimation with model quality control
Model specification is essential for the quantitative analysis of empirical data. Still, model quality is rarely assessed during statistical analyses across disciplines of empirical science. For example, in cognitive neuroscience, general linear models (GLMs) are routinely applied to functional magnetic resonance imaging (fMRI) data without systematic model quality control. At best, this leads to mismodelling (underfitting or overfitting) of the measured fMRI data, which increases the false negative rate of statistical analyses. At worst, it leads to multiple GLM specification and estimation (motivating p-hacking), which increases the false positive rate of these analyses. In this talk, I will recount the tale of Hydra, the many-headed Beast of multiple model estimation, and how it can be tamed by chopping off its heads using model selection or tangling up its necks using model averaging. These methods of model quality control may help guide a way out of the reproducibility crisis that has recently been diagnosed in psychology and cognitive neuroscience.
Jul 10, 2017 | 04:00 PM