Multisensory Perception

Overview

The overall aim of our research is to understand how the human brain combines expectations and sensory information to communicate.
Successfully communicating with other people is an essential skill in everyday life. Unravelling how the human brain derives meaning from acoustic speech signals and recognizes a communication partner from their face is therefore an important scientific endeavour.
Speech recognition depends both on the clarity of the acoustic input and on what we expect to hear. In noisy listening conditions, for example, listeners presented with identical speech input can differ in their perception of what was said. Similarly, in face recognition, brain responses to faces depend on expectations and do not simply reflect the presented facial features.
These findings for speech and face recognition are compatible with the more general view that perception is an active process in which incoming sensory information is interpreted with respect to expectations. The neural mechanisms supporting this integration of sensory signals and expectations, however, remain to be identified, and conflicting theoretical and computational models have been proposed for how, when, and where expectations and new sensory information are combined. Our group investigates how the human brain combines expectations and sensory information to communicate, and how individuals differ in their use of expectations.

Visit us at: http://predcommlab.eu/

  • Research topics
    • Multisensory perception
    • Speech recognition
    • Face recognition
    • Predictive processing

  • Funding
    • Marie Curie IF fellowship, 2017–2019
    • Research grant for young investigators, University Medical Center Hamburg-Eppendorf, 2020
    • Emmy Noether Group, starting 2020

  • New positions will open in 2020. Please contact Helen Blank if you are interested in joining the group as a Master's student, PhD student, or postdoc.