The role of task states and sequential replay in reinforcement learning
Reinforcement learning (RL) theory has proven to be a powerful framework for studying learning from feedback, both computationally and neuroscientifically. Despite much progress, two key challenges remain: first, how are rich experiences mapped onto the abstract states with which values are associated, and second, how can transitions between states be efficiently computed? In my talk I will provide evidence that the orbitofrontal cortex and the hippocampus may be involved in state inference and the sequential replay of state transitions, respectively, ensuring that current and past experiences are encoded in ways that enable reinforcement learning elsewhere in the brain.
Time & Location
Jun 04, 2018 | 04:00 PM
J 32 / 102