Cross-modal prediction in speech perception

Title: Cross-modal prediction in speech perception
Publication Type: Thesis
Year of Publication: 2013
Authors: Sánchez-García C, Soto-Faraco S
Academic Department: Department of Experimental and Health Sciences
Number of Pages: 194
Date Published: 11/2013
University: Pompeu Fabra
Thesis Type: PhD
Keywords: Audiovisual speech, Event-related potentials, Multisensory integration, Phonology-based prediction, Predictive coding, Speech perception
Abstract

The present dissertation addresses the predictive mechanisms operating online during audiovisual speech perception. The idea that prediction mechanisms operate during speech perception at several linguistic levels (e.g., syntactic, semantic, phonological) has received increasing support in the recent literature. Yet most evidence concerns prediction phenomena within a single sensory modality, i.e., visual or auditory. In this thesis, I explore whether online prediction during speech perception can occur across sensory modalities. The results of this work provide evidence that visual articulatory information can be used to predict the subsequent auditory input during speech processing. In addition, evidence for cross-modal prediction was observed only in the observer's native language, not in unfamiliar languages. This led to the conclusion that well-established phonological representations are paramount for online cross-modal prediction to take place. The last study of this thesis, using ERPs, revealed that visual articulatory information can have an influence beyond phonological stages. In particular, the visual saliency of word onsets influences the stage of lexical selection, interacting with semantic processes during sentence comprehension. By demonstrating the existence of online cross-modal predictive mechanisms based on articulatory visual information, our results shed new light on how multisensory cues are used to speed up speech processing.

URL: http://www.tdx.cat/handle/10803/293266