Session Perception VII: Audio-Visual Effects
Perception VII-1
THE EFFECT OF INCONGRUENT VISUAL CUES ON THE HEARD QUALITY OF FRONT VOWELS
Hartmut Traunmüller, Dept. of Linguistics, University of Stockholm; Niklas Öhrström, Dept. of Linguistics, University of Stockholm
Swedish nonsense syllables, distinguished solely by their vowels [i], [y] or [e], were presented to phonetically sophisticated subjects auditorily, visually and in cross-dubbed audiovisual form with incongruent cues to openness, roundedness or both. Acoustic [y] dubbed onto optic [i] or [e] was heard as a retracted [i], while acoustic [i] or [e] dubbed onto optic [y] was perceived as rounded and slightly fronted. This confirms that the more reliable information carries the greater weight and that intermodal integration occurs at the level of phonetically informative properties, prior to any categorization.
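The cross-dubbing described above amounts to replacing the audio track of one recorded syllable with that of another before presentation. A minimal sketch of how such incongruent stimuli could be assembled, assuming ffmpeg is installed; the file names and the choice of tool are illustrative, not taken from the paper:

import subprocess

def cross_dub(optic_src: str, acoustic_src: str, out_path: str) -> None:
    # Keep the video (optic) stream of one recording and replace its audio
    # (acoustic) stream with that of another, e.g. optic [i] + acoustic [y].
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", optic_src,        # supplies the visual cue
         "-i", acoustic_src,     # supplies the acoustic cue
         "-map", "0:v", "-map", "1:a",
         "-c:v", "copy",         # leave the video stream untouched
         "-shortest",            # trim to the shorter of the two streams
         out_path],
        check=True)

# Hypothetical example: acoustic [y] dubbed onto optic [i]
cross_dub("optic_i.mp4", "acoustic_y.wav", "optic_i_acoustic_y.mp4")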
Perception VII-2
Auditory-Visual Integration in the Perception of Age in Speech
Sascha Fagel, Berlin University of Technology
Five speakers of different ages uttering one sentence were recorded audiovisually. Stimuli were created in which the auditory and visual information were either coherent (from the same speaker) or incoherent (the audio track of one speaker combined with the video track of another). The subjects’ task was to rate the speaker’s age based either on the whole speaking person, on the voice alone while ignoring the face, or on the face alone while ignoring the voice. Results reveal that, in all three tasks, subjects integrate both modalities when available. It could additionally be shown that (a) this effect is stronger when the visual information is to be ignored, (b) with coherent stimuli subjects rely more on the visual information, and (c) the robustness of the visual modality exceeds that of the auditory modality. Overall, the results point to vision as the leading modality in the perception of age in audiovisual speech.
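To make the coherent/incoherent design concrete: with five recorded speakers, the incoherent stimuli pair the audio track of one speaker with the video track of another. A sketch of how the full set of combinations could be generated, again using ffmpeg; the speaker labels and file names are hypothetical, not the authors' material:

import itertools
import subprocess

SPEAKERS = ["s1", "s2", "s3", "s4", "s5"]   # placeholder labels for the five speakers

for face_spk, voice_spk in itertools.product(SPEAKERS, repeat=2):
    kind = "coherent" if face_spk == voice_spk else "incoherent"
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", f"{face_spk}.mp4",    # video track: the face
         "-i", f"{voice_spk}.mp4",   # audio track: the voice
         "-map", "0:v", "-map", "1:a",
         "-c:v", "copy", "-shortest",
         f"{kind}_face-{face_spk}_voice-{voice_spk}.mp4"],
        check=True)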
Perception VII-3
A PERCEPTUAL DESYNCHRONIZATION STUDY OF MANUAL AND FACIAL INFORMATION IN FRENCH CUED SPEECH
Emilie Troille, Département ICP-Parole et Cognition de GIPSA-Lab; Marie-Agnès Cathiard, Département ICP-Parole et Cognition de GIPSA-Lab; Christian Abry, Département ICP-Parole et Cognition de GIPSA-Lab
French Cued Speech, adapted from American Cued Speech, disambiguates lipreading with a manual code of hand keys that allows deaf perceivers to identify phonemes more accurately. Using movement tracking of the manual and facial actions coproduced in CS, Attina et al. evidenced a significant anticipation of the hand over the lips. In this study we tested the natural temporal integration of this bimodal hand-face communication system, using a desynchronization paradigm to evaluate the robustness of CS to temporal decoherence. Our results, obtained with 17 deaf subjects, demonstrate that hand gestures can be delayed relative to the lips without consequences for perception, as long as the delay does not push the hand outside the visible articulatory phase of the consonant constriction state. Perceptual coherence, or the recomposition of coherence (recoherence), depends crucially on the compatibility of hand and mouth states, i.e. on the timing patterns evidenced in the preceding production studies.
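The compatibility condition in the last sentence can be read as a simple timing check: a delayed hand gesture remains perceptually acceptable only while it still falls within the visible constriction phase of the consonant. A hypothetical illustration of that condition; the function and the timing values are not from the paper:

def hand_still_compatible(hand_onset_ms: float, delay_ms: float,
                          constriction_start_ms: float,
                          constriction_end_ms: float) -> bool:
    # True if the artificially delayed hand cue still lands inside the visible
    # articulatory phase of the consonant constriction.
    delayed_onset = hand_onset_ms + delay_ms
    return constriction_start_ms <= delayed_onset <= constriction_end_ms

# Illustrative numbers only: hand onset at 120 ms, constriction visible 150-300 ms
print(hand_still_compatible(120.0, 100.0, 150.0, 300.0))   # True: hand stays in phase
print(hand_still_compatible(120.0, 250.0, 150.0, 300.0))   # False: delay pushes hand too late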