Do you hear who is talking? Speaking rate normalization in multiple talker conversations

Speech is highly variable, yet listeners evidently have little difficulty recognizing the same words whether they are spoken by different talkers or at a fast or slow rate. A central question in phonetics and speech processing is how listeners cope with this variation so effortlessly. Solving this puzzle matters not only for understanding how speech processing functions in humans (including pathological speech) but also for improving verbal interaction with machines. From a theoretical perspective, the project seeks to resolve an apparent conflict in the literature about the nature of human word recognition. Specifically, it will address the disputed role of episodic memory in speech processing: do listeners remember detailed speaker-specific information, such as speaking rate, in order to 'tune in' to a speaker's speech, or is speaking rate 'normalized' before words are recognized in a speaker-independent fashion? The project will disentangle these opposing accounts by going beyond simple laboratory experiments and investigating word recognition in multiple-speaker conversations. This allows a further factor to be taken into account: the dynamics of dialogue. If interlocutors converge on their speaking rates in the course of a dialogue, it may not be necessary for listeners to track each speaker's speech characteristics in order to recognize optimally what is being said; rather, a speaker-independent processing mechanism may be advantageous. To further establish the conditions under which listeners use speaker-specific versus speaker-independent processing strategies, the results for speaking rate will be compared with those for another acoustic cue: voice pitch. The aim of the project is to explore listeners' strategies for dealing with multiple speakers and thereby disentangle conflicting theoretical accounts of speech processing: how important is it to hear who is talking?