
Helen Steingroever

Oct 27, 2014 | 04:00 PM - 06:00 PM

“Reinforcement-Learning Models for the Iowa Gambling Task: What Can We Learn from the Parameters and How Can We Identify a Good Model?”


Reinforcement-learning (RL) models are often used to decompose performance on the Iowa gambling task into its constituent psychological processes. After briefly reviewing the Iowa gambling task and three RL models commonly used in this context (the expectancy valence (EV) model, the prospect valence learning (PVL) model, and their hybrid, the PVL-Delta model), the first part of my talk focuses on the validity of conclusions drawn from model parameters.

I present two methods that can be used to assess absolute model performance: the first method (a post hoc fit method) assesses whether a model provides an acceptable fit to an observed choice pattern; the second method (a simulation method) assesses whether the parameters obtained from model fitting can be used to generate the observed choice pattern. I show that all models provide an acceptable fit to two data sets; however, when the model parameters are used to generate choices, only the PVL-Delta model captures the qualitative patterns in the data. Thus, a model’s ability to fit a particular choice pattern does not guarantee that the model can also generate that same choice pattern.

The second part of my talk focuses on methods that can be used to compare different RL models. I will present results from parameter space partitioning studies and two Bayes factor analyses: one based on importance sampling and one based on the product space method.
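The simulation method described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the speaker’s implementation: it generates choices from a delta-rule learner with a prospect utility function and a softmax choice rule, following one common formulation of the PVL-Delta model. The function names, parameter values, and the toy payoff generators standing in for the four decks are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def prospect_utility(x, A, w):
    """Prospect utility: gains raised to power A, losses weighted by w.
    (One common formulation; parameters here are hypothetical.)"""
    return x ** A if x >= 0 else -w * (abs(x) ** A)

def simulate_pvl_delta(payoffs, A=0.5, w=1.5, a=0.2, c=1.0, n_trials=100):
    """Generate a choice sequence from fixed PVL-Delta-style parameters."""
    n_decks = len(payoffs)
    ev = np.zeros(n_decks)                 # expected valence per deck
    theta = 3.0 ** c - 1.0                 # response consistency (sensitivity)
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        p = np.exp(theta * (ev - ev.max()))  # softmax (shifted for stability)
        p /= p.sum()
        k = rng.choice(n_decks, p=p)       # sample a deck
        x = payoffs[k](rng)                # draw a net payoff from that deck
        u = prospect_utility(x, A, w)
        ev[k] += a * (u - ev[k])           # delta-rule update of chosen deck
        choices[t] = k
    return choices

# Toy payoff generators loosely mimicking the four IGT decks (A, B, C, D):
decks = [
    lambda r: 100 - (250 if r.random() < 0.5 else 0),   # "bad" deck
    lambda r: 100 - (1250 if r.random() < 0.1 else 0),  # "bad" deck
    lambda r: 50 - (50 if r.random() < 0.5 else 0),     # "good" deck
    lambda r: 50 - (250 if r.random() < 0.1 else 0),    # "good" deck
]

simulated = simulate_pvl_delta(decks, n_trials=100)
print(np.bincount(simulated, minlength=4) / len(simulated))  # deck proportions
```

The simulation method then compares summaries of such generated sequences (for example, the proportion of choices from each deck) against the observed choice pattern; if the fitted parameters cannot reproduce the observed pattern, the fit alone is not informative.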
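For contrast, the post hoc fit method evaluates the observed choices directly. The sketch below, again a hypothetical illustration using the same formulation and parameter values as above, accumulates the one-step-ahead log-probability of each observed choice while updating expectancies with the payoffs the participant actually experienced. In practice such a score is compared against a baseline model; that comparison is omitted here.

```python
import numpy as np

def log_likelihood(choices, payoffs, A=0.5, w=1.5, a=0.2, c=1.0):
    """One-step-ahead log-likelihood of an observed IGT choice sequence
    under fixed, hypothetical PVL-Delta-style parameters."""
    ev = np.zeros(4)                       # expected valence per deck
    theta = 3.0 ** c - 1.0                 # response consistency
    ll = 0.0
    for k, x in zip(choices, payoffs):     # observed deck k, net payoff x
        p = np.exp(theta * (ev - ev.max()))
        p /= p.sum()                       # softmax choice probabilities
        ll += np.log(p[k])                 # probability of the observed choice
        u = x ** A if x >= 0 else -w * abs(x) ** A  # prospect utility
        ev[k] += a * (u - ev[k])           # update with the observed payoff
    return ll
```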