The contribution of dynamic visual cues to audiovisual speech perception

Title: The contribution of dynamic visual cues to audiovisual speech perception
Publication Type: Journal Article
Year of Publication: 2015
Authors: Jaekl P, Pesquita A, Alsius A, Munhall K, Soto-Faraco S
Journal: Neuropsychologia
Volume: 75
Pagination: 402–410
Date Published: 08/2015
ISSN: 0028-3932
Keywords: Audiovisual enhancement, Biological motion, Configural, Speech-in-noise, Visual form, Visual motion, Visual pathways
Abstract

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or rendered with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and luminance-contrast conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech.

URL: http://www.sciencedirect.com/science/article/pii/S0028393215300713
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2015.06.025