Face, Speech, and Acoustics  


Abstract

Kevin G. Munhall (Queen's University, Kingston)

Decomposing visual speech: Segmental and prosodic aspects of audiovisual speech perception

In this talk I will summarize a series of studies on the spatial and temporal characteristics of visible speech, carried out with my colleagues at Queen's University and ATR Laboratories. In these studies we manipulated the spatial frequency content of images directly, by digital filtering, and indirectly, by controlling gaze location. The results suggest that visual speech information is coarsely coded, with relatively low spatial and temporal detail being sufficient for perceptual accuracy. In a separate series of studies we explored the motion characteristics of visible speech. Using animation techniques, we can separately manipulate the facial motion patterns that convey segmental and prosodic structure. Our neurological studies indicate that these aspects of visual speech perception can be independently impaired. However, in other perceptual experiments we have also demonstrated interactions between visual prosody and segmental processing.


Last modified: Sat Nov 16 16:45:16 CET 2002