Paper Detail

Paper: RS-1B.2
Session: Late Breaking Research 1B
Location: Late-Breaking Research Area
Session Time: Thursday, September 6, 18:45 - 20:45
Presentation Time: Thursday, September 6, 18:45 - 20:45
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: The Contribution of Facial Motion to Voice Recognition
Authors: Noa Simhi, Galit Yovel, Tel Aviv University, Israel
Abstract: Visual and auditory processing are linked in face perception (e.g., the McGurk effect). Studies on face and voice integration have primarily focused on speech perception. However, recent theories on the role of dynamic information in person recognition predict that motion may also play a role in voice identity recognition. To examine this, we conducted an EEG experiment in which participants learned to recognize the identities of 4 voices without ever seeing the corresponding faces. At test, voices were presented with dynamic faces, with static faces, or alone, and participants had to indicate the speaker's identity. Behavioral findings show that voice recognition is better when voices are presented with dynamic, but not static, faces. Based on these findings, we predicted that the relative contribution of the face to the multimodal face-voice representation would be smaller when voices are presented with static than with dynamic faces. To assess this prediction, we modeled the EEG responses to the multimodal dynamic and static stimuli based on the responses to the faces and voices alone. Both the face and the voice contributed to the representation of dynamic speaking faces, whereas the representation of static speaking faces was based primarily on the voices. These results indicate that dynamic facial information contributes to voice recognition, even when recognizing voices of previously unseen faces.
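
To illustrate the kind of modeling described in the abstract, the minimal sketch below estimates how strongly the unimodal (face-alone, voice-alone) EEG responses contribute to the multimodal response, assuming a simple linear weighting; the abstract does not specify the authors' exact model, and all variable names, dimensions, and data here are hypothetical toy values.

```python
import numpy as np

# Toy sketch (not the authors' method): fit the multimodal EEG response as a
# weighted sum of the unimodal responses and compare the fitted weights.
# Shapes: (n_channels, n_timepoints), e.g. condition-averaged ERPs.

rng = np.random.default_rng(0)
n_channels, n_times = 64, 500  # illustrative dimensions

erp_face = rng.standard_normal((n_channels, n_times))   # face-alone response
erp_voice = rng.standard_normal((n_channels, n_times))  # voice-alone response
erp_multi = 0.6 * erp_face + 0.9 * erp_voice            # multimodal response (synthetic)

# Solve erp_multi ~ w_face * erp_face + w_voice * erp_voice by least squares,
# treating every channel x timepoint sample as one observation.
X = np.column_stack([erp_face.ravel(), erp_voice.ravel()])
y = erp_multi.ravel()
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
w_face, w_voice = weights

print(f"face contribution:  {w_face:.2f}")   # larger for dynamic-face trials
print(f"voice contribution: {w_voice:.2f}")  # dominant for static-face trials
```

Under this reading, the abstract's result would correspond to a substantial w_face for dynamic speaking faces but a w_face near zero, with a dominant w_voice, for static speaking faces.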