old_uid | 1398 |
---|
title | Deciphering the architecture of the spoken word recognition system |
---|
start_date | 2006/06/09 |
---|
schedule | 11h-12h |
---|
online | no |
---|
location_info | amphi Charves |
---|
details | A second seminar (14h-15h) is scheduled for the same day |
---|
summary | Most current models of spoken word recognition assume that there are both lexical and sublexical levels of representation for words. The most common view is that speech is initially coded as sets of phonetic features, with some intermediate recoding (e.g., phonemes) before it is mapped onto lexical representations. There is a longstanding debate about whether the information flow through such an architecture is entirely bottom-up, or whether there is also top-down communication from the lexical level to the phonemic codes.
The selective adaptation procedure offers a particularly effective way to address this debate, because it provides a test that relies on the consequences of top-down lexical effects, rather than on a direct subjective report. Three sets of experiments use this approach to decipher the word recognition system's architecture. One set uses lexically based phonemic restoration to generate the adapting sounds, and a second set uses a similar approach based on the "Ganong" effect. The third set extends this approach to audiovisual lexical adaptation, combining the technique with a "McGurk" effect manipulation. Collectively, the studies clarify how visual and auditory lexical information is processed by language users. |
---|
responsibles | NC |
---|