Jiawei Li, PhD
Neural Cognition of Visual Dynamics
Alexander von Humboldt Fellow
Room JK 25/222f
I will join the lab in November 2022 as a postdoctoral researcher. Before that, I spent over ten years at the School of Social Science, Tsinghua University, Beijing, China, where I obtained my B.Sc. and Ph.D.
My research focuses on the temporal dynamics of neural activity underlying cognition. My doctoral work centered on the neural dynamics of continuous naturalistic speech processing; my Ph.D. dissertation is titled ‘The neural mechanism of attention modulation of distinct features in naturalistic speech’. In my research I use EEG recordings, multivariate analysis methods (temporal response function, TRF; canonical correlation analysis, CCA), and natural language processing (NLP) methods.
Besides neuroscience, I love sports and music. I was the second runner-up in the national college fencing competition in China, and I hope to run my first marathon in Berlin.
Please see my website for further information.
General research interests
When you were a child, your caregivers may have read you stories before you went to bed. The adult was reading the story, and you as the child were listening. Although the sensory input differed between you and the caregiver (visual vs. auditory), you and the storyteller would have arrived at a similar comprehension of the story. This suggests that our brains can extract similar meaning from different modalities.
This is fascinating: how does our brain translate information received through different modalities (visual vs. auditory) into similar comprehension, and what are the neural mechanisms behind that? This question forms the core of my scientific interest. To answer it, I use EEG recordings, with their high temporal resolution, as well as encoding models and deep neural networks.
My research is supported by a two-year fellowship from the Alexander von Humboldt Foundation under the Henriette Herz funding scheme.
I am planning to conduct two research projects during my stay.
In project 1, we will investigate the temporal dynamics of neural representations that encode semantic information from linguistic material and generalize across input modalities. To do this, we will invite participants to read or listen to the same story while we record their scalp EEG. We will then fit different encoding models (based on sensory features and semantic features) to the data to reveal the common representations.
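As a rough illustration of the encoding-model approach, the sketch below fits a temporal response function (TRF) by ridge regression, mapping a time-lagged stimulus feature to a simulated EEG channel. This is a minimal, hypothetical example (all names and parameters are illustrative, using only NumPy), not the actual analysis pipeline of the project.

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Build a time-lagged design matrix from a 1-D stimulus feature."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    return X

def fit_trf(stim, eeg, n_lags=32, alpha=1.0):
    """Estimate TRF weights with ridge regression:
    w = (X'X + alpha*I)^(-1) X'y."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Toy example: recover a known response kernel from simulated data.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)                 # stimulus feature time series
kernel = np.exp(-np.arange(32) / 8.0)            # ground-truth neural response
eeg = np.convolve(stim, kernel)[:2000] + 0.1 * rng.standard_normal(2000)
w = fit_trf(stim, eeg)                           # estimated TRF, close to kernel
```

In the same spirit, separate sensory and semantic feature sets can each serve as the stimulus representation, and comparing their predictive accuracy across reading and listening indicates which representations are shared across modalities.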
In project 2, we will investigate when and where semantic information beyond the linguistic domain emerges. To this end, we will examine the spatiotemporal dynamics with which neural representations encoding information from linguistic and visual material emerge in common.