Visual attention matters during word recognition: A Bayesian modeling approach

old_uid: 18837
title: Visual attention matters during word recognition: A Bayesian modeling approach
start_date: 2021/03/12
schedule: 10h
online: no
details: For the Zoom connection details, see the page: https://team.inria.fr/biovision/seminar-of-julien-diard-on-a-bayesian-word-recognition-model-with-attention-interference-and-dynamics/
summary: It is striking that visual attention, the process by which attentional resources are allocated in the visual field so as to locally enhance visual perception, is a pervasive component of models of eye movements in reading, yet is seldom considered in models of isolated word recognition. We describe BRAID, a new Bayesian word Recognition model with Attention, Interference and Dynamics. Like most of its predecessors, BRAID incorporates three knowledge layers (sensory, perceptual, and orthographic) together with a lexical membership submodel. Its originality resides in also including three mechanisms that modulate letter identification within strings: an acuity gradient, lateral interference, and visual attention. We show that BRAID accounts not only for benchmark effects, such as word frequency, neighborhood frequency, context familiarity, and transposed-letter priming effects, but also for more challenging behavioral effects, such as the optimal viewing position effect, the word length effect in lexical decision, and the interaction of crowding and frequency effects in word recognition. We show that visual attention modulates these latter effects, mimicking patterns reported in impaired readers.
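The summary mentions two of the mechanisms that modulate letter identification within strings: an acuity gradient centered on the fixated position and a visual attention distribution over letter positions. As a purely illustrative sketch (not the actual BRAID model, whose equations are not given here), one common way to formalize these ideas is an exponential acuity falloff with eccentricity combined with a Gaussian attention profile, normalized so attention acts as a limited resource; all function names and parameter values below are hypothetical:

```python
import math

def acuity_gradient(pos: int, fixation: int, decay: float = 0.2) -> float:
    """Visual acuity falls off with eccentricity from the fixated letter
    (exponential falloff is an illustrative assumption)."""
    return math.exp(-decay * abs(pos - fixation))

def attention_profile(pos: int, focus: int, sd: float = 1.5) -> float:
    """Gaussian allocation of attentional resources over letter positions
    (the focus and dispersion sd are free parameters in this sketch)."""
    return math.exp(-((pos - focus) ** 2) / (2 * sd ** 2))

def letter_identification_rates(word_len: int, fixation: int,
                                focus: int, sd: float = 1.5) -> list[float]:
    """Per-position identification rate: acuity modulated by attention,
    normalized to sum to 1, so attention is a limited resource that
    can be concentrated or spread over the string."""
    raw = [acuity_gradient(p, fixation) * attention_profile(p, focus, sd)
           for p in range(word_len)]
    total = sum(raw)
    return [r / total for r in raw]

# Example: a 7-letter word fixated at its optimal viewing position
# (slightly left of center), with attention focused at the same spot.
rates = letter_identification_rates(word_len=7, fixation=2, focus=2)
```

In such a sketch, widening the attention dispersion `sd` spreads identification resources across the string, while narrowing it concentrates them near the focus, which is one way to mimic the attention-related modulations the summary attributes to impaired readers.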
responsibles: Lafont