Beyond similarity: Interpretable dimensions underlying representations in minds, brains, and artificial intelligence
Representational similarity has played an increasingly important role in the study of the human mind and brain, both as a method for comparing representations between models and data and as a means of understanding these representations in a data-driven fashion. In this talk, I would like to argue that now is the time to move beyond studying fixed similarities and instead focus on the dimensions underlying these similarities, which form the foundation of our mental and neural representations. I will first highlight methodological work combining the merits of multivariate decoding with those of representational similarity analysis, yielding the novel method of voxel-reweighted representational similarity analysis. Then, moving beyond the study of representational similarity, I will highlight recent efforts to unravel core representational dimensions from human behavior, brain activity, and neural networks. Together, this work sketches a pathway toward a more fundamental understanding of representations in humans.