Hearing through our AERS - auditory scene analysis and deviance detection

old_uid: 10725
title: Hearing through our AERS - auditory scene analysis and deviance detection
start_date: 2012/01/23
schedule: 11h-12h30
online: no
location_info: 2nd floor
details: Invited by the Parole team
summary: In everyday situations, multiple sound sources are active in the environment. Typically, there is no unique solution to recovering the sound sources from the mixture of sounds arriving at the ears. To constrain the solution, the brain exploits known properties of the acoustic environment. However, even with these "rules of perception" (Gestalt principles), alternative descriptions can be formed for any non-trivial sequence of sounds. Indeed, for some stimulus configurations, auditory perception switches back and forth between alternative sound organizations, revealing a system in which two or more possible explanations of the auditory input co-exist and continuously vie for dominance. I propose that the representation of a sound organization in the brain is a coalition of auditory regularity representations producing compatible predictions for the continuation of the sound input. Competition between alternative sound organizations relies on comparing the regularity representations in terms of how reliably they predict incoming sounds and how much of the total variance of the acoustic input they jointly explain. Results obtained in perceptual studies using the auditory streaming paradigm will be interpreted in support of the hypothesis that regularity representations underlie auditory stream segregation. We shall then argue that the same regularity representations are also involved in the deviance-detection process reflected by the mismatch negativity (MMN) event-related potential (ERP). Finally, based on the hypothesized link between auditory scene analysis and deviance detection, we shall propose a functional model of sound organization and discuss how it can be implemented in a computational model.
oncancel: Added 25/11
responsibles: Rämä