Amphithéâtre Guillaume Budé, Site Marcelin Berthelot
Open to all

Abstract

This third lecture focused on the mechanisms by which a listener's perception of a sound is modulated by feedback from the efferent auditory system onto the ascending auditory pathways, once the sensory information (auditory, visual, tactile, or other) has been processed. This modulation can occur consciously, during attentive listening, but it is also largely unconscious, acting as a high-level filter that the auditory system applies continuously and that depends crucially on multisensory integration, some aspects of which were covered in the previous lecture. The question of where and how this integration takes place has led researchers to examine the impact of the disparity and reliability of the parameters that characterize the different sensory signals. How are disparate auditory and visual parameters taken into account and processed by the brain to form a single audio-visual object? This question naturally led us to consider the role of statistical inference in multisensory integration. The first part of the lecture was therefore devoted to introducing this notion through its most natural model, Bayesian inference.
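The reliability-weighted combination of sensory cues mentioned above can be sketched with the standard textbook model of Bayesian cue fusion: two noisy Gaussian estimates of the same quantity (say, an auditory and a visual estimate of a source's position) are merged by inverse-variance weighting. This is a generic illustration under a flat-prior assumption, not material from the lecture itself; the function name and the numbers are hypothetical.

```python
# Sketch of Bayesian (maximum-likelihood, flat-prior) fusion of two noisy
# Gaussian cues -- e.g. auditory and visual estimates of a source location.
# All values below are illustrative, not taken from the lecture.

def fuse_cues(x_a, var_a, x_v, var_v):
    """Combine two Gaussian estimates by inverse-variance weighting."""
    w_a = 1.0 / var_a          # reliability (precision) of the auditory cue
    w_v = 1.0 / var_v          # reliability (precision) of the visual cue
    x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)  # fused estimate
    var_hat = 1.0 / (w_a + w_v)                    # fused uncertainty
    return x_hat, var_hat

# Example: vision is four times more reliable here, so the fused estimate
# is pulled toward the visual cue (the "ventriloquist" pattern).
x_hat, var_hat = fuse_cues(x_a=10.0, var_a=4.0, x_v=0.0, var_v=1.0)
print(x_hat, var_hat)  # -> 2.0 0.8
```

Note that the fused variance is always smaller than either cue's variance on its own, which is why integrating the two signals yields a more precise percept than either modality alone.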