Language processing in deaf signers: from phonological processing to semantic access

old_uid: 17090
title: Language processing in deaf signers: from phonological processing to semantic access
start_date: 2019/03/18
schedule: 10h-12h
online: no
location_info: room 406
summary: Sign languages (SLs) are natural human languages that occur in the visual modality and exhibit the same level of grammatical complexity as spoken languages (SPLs). Here I present a series of studies aimed at investigating to what extent SL processing is amodal, and hence sustained by the same mechanisms and brain networks as the spoken modality, and at identifying which aspects of processing are modality-specific. First, I will show functional transcranial Doppler sonography (fTCD) findings indicating that, like speech, SL is produced using a left-lateralised brain network. However, a stronger left lateralisation for sign than for speech generation was indicative of modality differences, perhaps due to the use of proprioceptive and spatial information in the encoding of signs. I will then review a series of event-related potential (ERP) studies addressing the question of whether deaf signers automatically process the phonological information of signs and written words. Results from these studies revealed that deaf signers automatically use phonological information to recognise visually presented signs, although not all phonological parameters were accessed equally. Further dissimilarities between SL and speech processing arose when studying the interplay between the semantic and phonological features of signs, which may be more closely related in SLs than in SPLs. Finally, I will turn to written word recognition to address the question of whether deaf signers automatically use phonological codes from words during lexical access. I will present recent data showing that, although present, early phonological processing does not contribute to reading ability in deaf signers in the same way that it does in hearing signers.
responsibles: Bogliotti, Parisse