

Our mission and approach

The mission of the lab is to determine how the brain gives rise to the rich variety of visual cognitive phenomena (recognition, imagery, etc.) observed in humans. To this end, we ask how visual information is processed in the brain and how such processes interact with other cognitive functions such as attention, language, and memory.

We have a specific focus on the cognitive function of visual object recognition: how does the brain translate the constant flow of photons hitting the retina into a conscious percept of the world consisting of meaningful entities – objects?

How we pursue the lab mission depends fundamentally on the most important aspect of the lab: its members. What we do and how we do it is a function of the creativity and ideas of current lab members – everyone individually and together, through a collaborative combination of forces. Together with a wide network of external collaborators, this creates a highly dynamic structure of diverse experimental approaches that forms a whole, which is greater than the sum of its parts.

In our diversity, we are united by a methodological leitmotif: to generate new pieces of evidence for a new theory of vision, we integrate results from multiple methodological approaches. We combine methods from across the disciplines of neuroscience (different brain imaging modalities), psychology (behavior), and computer science (deep neural networks). We further rely on multivariate analysis methods such as classification, encoding models, and representational similarity analysis (RSA).
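As an illustration of the RSA logic, here is a minimal sketch on toy data (all array shapes and values are invented for the example and do not reflect any actual lab data set): a representational dissimilarity matrix (RDM) is computed for simulated brain patterns and for a simulated model, and the two geometries are compared with a Spearman correlation.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (conditions x features),
    returned as a condensed vector of the upper triangle."""
    c = np.corrcoef(patterns)
    iu = np.triu_indices_from(c, k=1)
    return 1.0 - c[iu]

def spearman(a, b):
    """Spearman correlation: Pearson correlation of rank-transformed data."""
    ranks = lambda x: x.argsort().argsort()
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

rng = np.random.default_rng(0)
brain = rng.normal(size=(20, 100))                       # toy: 20 conditions x 100 voxels
model = brain + rng.normal(scale=0.3, size=brain.shape)  # toy model sharing the same geometry

# RSA compares representational geometries (RDMs), not raw activity patterns,
# which is what makes brain data and model features directly comparable.
rho = spearman(rdm(brain), rdm(model))
print(f"Spearman RDM correlation: {rho:.2f}")
```

Because the comparison happens at the level of RDMs, the same recipe works for any pair of measurement spaces (fMRI voxels, MEG/EEG sensors, DNN units, behavioral judgments).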

Please find below a snapshot of some of the projects we are currently working on, ordered by larger topics (state: early 2022).

Topic 1: Directions and contents of visual information flow (imagery, 7T)

Visual perception is orchestrated by information flow in two cardinal directions: externally generated information is first processed in a feedforward manner, while internally generated interpretations, expectations or prior knowledge flow in the feedback direction. A third mode of functioning is recurrence, whereby information is repeatedly processed at the same processing stage.

While the existence of feedforward, feedback and recurrent processing is indisputable, distinguishing the different information flow directions is not easy, as they strongly overlap in both space and time at the level at which standard techniques operate.

It is thus unclear what the exact contents and representations being processed in each information flow are, what their respective functions are in enabling visual cognition, and how those different aspects relate to behavior.

We take a multipronged strategy, tackling these open questions with three different experimental approaches.

Approach 1.1: Visual mental imagery

Mental imagery is the ability to conjure up a vivid internal experience from memory that stands in for the percept of the stimulus. Visually imagined contents subjectively mimic perceived contents, suggesting that imagery and perception share common neural mechanisms (Cichy et al., 2012). Unlike perception, imagery lacks feedforward information flow from the stimulus, suggesting that neural representations during imagery emerge through feedback information flow. Thus, by relating visual imagery to perception we can gain insight into the mechanisms and contents of feedback information flow.

For example, we recently found that visual imagery and perception share neural representations of visual information in the alpha frequency band (Xie et al., 2020).

Lab members (current): Siying, Carolina, Tony

Approach 1.2: Resolving brain responses in cortical layers

Previous neuroanatomical and functional studies have demonstrated that feedforward and feedback signals are segregated across cortical layers. Feedforward signals terminate in the middle layer, while feedback signals terminate in superficial and deep layers. Thus, by resolving brain activity across layers, we aim to distinguish feedforward- and feedback-related signals that together enable visual perception.

For example, we have recently shown that perceived and mentally rotated contents are differentially represented across cortical depths in human V1 (Iamshchinina et al., 2021).

Our methodological focus is on high-resolution fMRI, but in collaboration we also investigate layer-specific electrophysiological signals in non-human primates.

Lab members (current): Maya, Tony, Nicolas

Lab members (former): Polina

Approach 1.3: Challenging the visual system (masking, clutter, etc.)

While some visual recognition problems might be solved through feedforward information flow alone, when the visual recognition task or conditions become more challenging (as a result of uncertainty or ambiguity), additional feedback processing is required (Kietzmann et al., 2019).

We thus investigate how information processing changes as a function of the difficulty of a perceptual task.

For example, we have recently shown that representing objects and their locations involves recurrent processing when objects are presented in cluttered rather than blank backgrounds (Graumann et al., 2022, Nat Hum Behav).

Lab members (current): Monika, Pablo, Siying, Maya

Topic 2: Plasticity of the visual system

Our visual capabilities and representations are to a large degree the result of experience with the world. We are neither born with ready-made visual representations, nor are they fully stable in adulthood. We investigate this plasticity by looking at different types of brain reorganization: 1) developmental changes during infancy, 2) healthy aging, and 3) the lack of input to visual cortex in congenital blindness.

Approach 2.1: Infant neuroscience

To understand how visual representations come about in humans, we investigate them when they are formed: during infancy. This research is done in close collaboration with researchers in the field of infant studies, including the labs of Steffi Hoehl, Moritz Koester, Richard Aslin, Laurie Bayet and Charlotte Grosse Wiesmann.

Our goals are i) characterizing visual object representations in infants based on brain imaging data, ii) comparing them to representations in other age groups (e.g., adults), and iii) determining the developmental trajectory of particular visual capabilities such as object permanence.

For example, we have recently shown how infant representations differ in timing, visual features, and oscillatory basis from adult representations (Xie et al., 2022).

Lab members (current): Siying

Lab members (former), now collaborator: Moritz

Approach 2.2: Blindness

Congenital blindness, in which visual input to the brain is missing from birth, is an extreme case that allows us to investigate the plasticity of visual cortex.

We thus investigate what visual cortex does in people with congenital blindness, focusing on its potential role in Braille reading and language understanding.

Lab members (current): Monika, Marleen

Approach 2.3: Life span plasticity of visual representations

Previous research has shown that visual processing capabilities and mechanisms are far from static; they change continuously across the life span. However, exactly what changes remains unknown. We use multivariate pattern analysis methods and the combination of multiple imaging modalities to unravel where in the brain and when during visual processing changes happen, and to determine what exactly changes in terms of computations.

This work is carried out in two collaborative strands: together with Douglas Garrett at the Max Planck Institute for Human Development in Berlin, we investigate and contrast basic visual object processing in healthy younger and older adults. Together with Emrah Düzel at the DZNE Magdeburg, in the context of the CRC 1436 (Neural Resources of Cognition), we investigate whether mnemonic discrimination training affects visual representations differentially in younger and older adults.

Lab members (current): Marleen, Boyan, Panagiotis

Topic 3: Modelling visual object recognition

Knowing when in the brain something happens, or where it happens, does not by itself tell us what is happening there. What visual information is processed, and how is it processed exactly? One way to address these questions is by modeling: if we can build artificial models that in relevant aspects behave like the brain, we would make important steps forward in understanding the algorithmic details of visual processing. In addition, the behavior of such models might inspire new hypotheses about visual information processing which can subsequently be empirically tested.

For modeling the brain we mostly use deep neural networks, connectionist models whose architecture was originally inspired by the visual ventral stream (Cichy et al., 2019).

Approach 3.1: The Algonauts project

The Algonauts Project is a free and open challenge to predict brain activity during visual perception (website). We provide brain data and invite participants to determine how their preferred computational model compares to other models in a benchmark of predicting this data. Open challenges have played an important role in accelerating research fields such as robotics and computer vision. We believe that open challenges such as the Algonauts Project have the potential to benefit cognitive neuroscience similarly.
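To make concrete what "predicting brain activity" means in such a benchmark, here is a minimal linearizing-encoding sketch on purely synthetic data (all shapes and numbers are invented; this is not the actual Algonauts data or evaluation code): model features are mapped to voxel responses with ridge regression, and predictions are scored by their correlation with held-out responses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 200, 50, 30, 10

# Toy "model features" (e.g., DNN activations) and simulated voxel responses
X = rng.normal(size=(n_train + n_test, n_feat))
W = rng.normal(size=(n_feat, n_vox))                     # hidden true mapping
Y = X @ W + rng.normal(size=(n_train + n_test, n_vox))   # responses = signal + noise

X_tr, X_te = X[:n_train], X[n_train:]
Y_tr, Y_te = Y[:n_train], Y[n_train:]

# Ridge regression in closed form: learn a linear map from features to voxels
lam = 1.0
W_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ Y_tr)
Y_pred = X_te @ W_hat

def pearson_per_column(a, b):
    """Pearson correlation between matching columns of two matrices."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

# Score: correlation between predicted and held-out responses, per voxel
scores = pearson_per_column(Y_pred, Y_te)
print(f"mean encoding accuracy: {scores.mean():.2f}")
```

In a challenge setting, submissions differ in the features X they supply, while the mapping and the held-out evaluation stay fixed, so scores are comparable across models.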

Lab members (current): Alessandro, Kshitij, Monika

Lab members (former): Polina

Approach 3.2: Model development and testing

This approach is taken in close collaboration with researchers that focus on modeling such as Gemma Roig, Tim Kietzmann and Klaus Obermayer. Our collaborators develop new and innovative computational models that we use to predict and explain human brain activity and behavior. This close interaction at the border of artificial and biological intelligence research leads to fast-paced progress and mutual inspiration.

For example, we have recently used DNNs trained on different tasks to map out the functionality of different regions in visual cortex (Dwivedi et al., 2021). Further, we have collected a large and rich EEG data set that for the first time allowed end-to-end training of a DNN that takes in pixel values and outputs brain activity (Gifford et al., 2022).

Lab members (current): Zejin, Manshan, Alessandro, Kshitij

Topic 4: Linking brain to behavior

For a biological organism, perception is not a goal in itself. Instead, perception serves to guide behavior, allowing the organism to make choices that are adaptive and beneficial. How does the transformation from a representation of the world to choices about it work? Which aspects of the brain activity that we can observe as experimenters during visual cognition does the brain actually use for cognition (Cichy et al., 2019) and for decision-making? We use advanced analysis methods to investigate how the brain activity discovered by our research relates to human choice behavior.
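One common way such a brain-behavior link is tested is the "distance-to-bound" logic: trials whose neural pattern lies further from a classifier's decision boundary should be categorized faster. The sketch below uses entirely synthetic data in which this relationship is built in, so it only demonstrates the shape of the analysis, not an empirical result; all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_class, n_feat = 100, 20

# Toy neural patterns for two stimulus categories (e.g., animate vs. inanimate)
a = rng.normal(loc=+0.5, size=(n_per_class, n_feat))
b = rng.normal(loc=-0.5, size=(n_per_class, n_feat))
X = np.vstack([a, b])
y = np.r_[np.ones(n_per_class), -np.ones(n_per_class)]

# Linear classifier via least squares: w maps patterns to category labels
w, *_ = np.linalg.lstsq(X, y, rcond=None)
decision = X @ w  # signed distance from the decision boundary, per trial

# Simulated reaction times: harder (near-boundary) trials are made slower
rt = 600 - 80 * np.abs(decision) + rng.normal(scale=20, size=len(decision))

# Distance-to-bound prediction: boundary distance correlates negatively with RT
r = np.corrcoef(np.abs(decision), rt)[0, 1]
print(f"correlation between boundary distance and RT: {r:.2f}")
```

With real data, the decision values would come from a classifier trained on brain recordings and the reaction times from the participants' behavior; a negative correlation would then link the measured representation to choice.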

Lab members (current): Agnessa, Johannes, Pablo, Chun-Hui

Topic 5: Statistical regularities and natural vision

In this line of research, we investigate how the regular and predictable structure of our everyday environments facilitates the ways in which we perceive and represent complex visual scenes. We are particularly interested in how typical information distributions across visual space impact object recognition and scene representation, as well as the role of cortical feedback in the predictive processing of such statistical regularities.

This line of research was established by Daniel Kaiser, who was a postdoc in the lab from 2017 to 2019 and is now Professor for Neural Computation at Justus-Liebig-University Gießen. Daniel continues to lead this line of research, co-supervising the PhD projects of Lixiang Chen and Gongting Wang.

Research in this area has three key emphases: First, we study how the distribution of objects across space and within typical multi-object configurations impacts fundamental processes of perception and visual representation (see here for a review). Second, we investigate how the part-whole structure of rich and complex natural scenes facilitates their representations across the visual brain (see here for a review). Third, we use state-of-the-art multivariate analyses of multimodal neural recordings to resolve how cortical feedback processes allow the brain to dynamically predict visual inputs using experience-based priors about the typical structure of scenes.

Lab members (current): Lixiang, Gongting, Greta

Lab members (former), now collaborator: Daniel