"How to detect, avoid & eliminate confounds in MVPA, neuroimaging, and other experimental studies using the same analysis approach (SAA), illustrated on X>=7 reasons for so-far ominous below chance accuracies"
Classical design principles (e.g. randomization) and control analyses (e.g. on behavioural errors, reaction time, age) are routinely applied in neuroimaging studies. However, it has not been tested whether this practice remains valid for studies using newer analysis methods such as cross-validation or decoding, both of which are routinely employed in multivariate pattern analysis (MVPA). In this talk I show that, counterintuitively, this standard practice can lead to the exact opposite of what it should achieve: classical design principles can induce confounds instead of controlling them, and standard control analyses can give a false sense of certainty that a confound has been successfully controlled even when it has not. These "design-analysis interactions" can cause systematic positive or negative biases (such as ominous significant below-chance decoding accuracies), potentially yielding false positive or, equally problematic, false negative findings. Interestingly, many of these problems do not depend on whether multivariate or univariate analysis techniques are used, but are instead caused by designs that are not suitable for cross-validation, incorrect statistics, or false interpretations of decoding results.

I will illustrate the problem on X>=7 practical examples that illuminate different potential causes of below-chance decoding accuracies, and use them to motivate a simple remedy that serves to detect, avoid, and eliminate a large class of potential confounds. These best practices can, and should, be employed before, during, and/or after data collection. They include performing matching statistical analyses on (i) imaging data, (ii) design variables, and (iii) control data. Fortunately, the approach has another great plus: it simply saves time. First, because setting up analyses during design creation is much faster and easier than at a later stage (e.g. because variable names are still fresh in mind).
And second, running these analyses occasionally ensures that all the design goals are indeed achieved (similar to unit testing in modern programming paradigms). Thus, this "Same Analysis Approach" not only helps to produce better science by detecting, avoiding, and eliminating confounds; it also saves time, money, and frustration. Although our main demonstration is for neuroimaging experiments, similar arguments apply to other fields such as machine learning or genetics.
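The core idea of applying the same analysis to design and control variables can be sketched in code. The following is a minimal, hypothetical illustration (not the speaker's actual implementation): the identical cross-validated decoding analysis that would be run on imaging data is run on a control variable, here simulated reaction times, to check whether the condition labels can be decoded from the confound alone. All variable names, the nearest-centroid classifier, and the simulated numbers are illustrative assumptions.

```python
# Hypothetical sketch of the "Same Analysis Approach" (SAA):
# run the same cross-validated decoding analysis on a design/control
# variable (here: simulated reaction times) instead of imaging data.
# If condition labels are decodable from the confound alone, the
# design is confounded. All names and numbers are illustrative.
import random

random.seed(0)

# Two conditions measured in 8 runs, one trial average per run and
# condition. Condition 1 is simulated ~50 ms slower: a confound.
n_runs = 8
labels, rts = [], []
for run in range(n_runs):
    for cond in (0, 1):
        labels.append(cond)
        rts.append(random.gauss(500 + 50 * cond, 20))

def nearest_centroid_cv(x, y, n_folds):
    """Leave-one-fold-out CV with a 1-D nearest-centroid classifier."""
    fold_size = len(x) // n_folds
    correct = 0
    for f in range(n_folds):
        test_idx = range(f * fold_size, (f + 1) * fold_size)
        train_idx = [i for i in range(len(x)) if i not in test_idx]
        # Class means estimated on training folds only.
        means = {}
        for c in (0, 1):
            vals = [x[i] for i in train_idx if y[i] == c]
            means[c] = sum(vals) / len(vals)
        # Predict each test sample by its nearest class mean.
        for i in test_idx:
            pred = min(means, key=lambda c: abs(x[i] - means[c]))
            correct += pred == y[i]
    return correct / len(x)

acc = nearest_centroid_cv(rts, labels, n_folds=n_runs)
print(f"decoding accuracy from reaction times alone: {acc:.2f}")
# An accuracy clearly above the 0.5 chance level flags reaction time
# as a potential confound that must be addressed in the design.
```

The same function would then be applied unchanged to the imaging data and to the other design variables, which is precisely what makes the checks "matching" analyses.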
Jun 13, 2016 | 04:00 PM - 06:00 PM