Korin Richmond, CSTR Edinburgh

Exploiting Articulation in Speech Technology

Mainstream speech technology is concerned largely with speech in the acoustic domain. This is natural, since the acoustic domain is where the speech signal exists in transmission between human speakers, and acoustic signals are convenient to record and generate. However, an acoustic speech signal is generated in humans by the physical articulatory system, and representing speech in terms of its underlying articulation offers an alternative to the acoustic representations normally dealt with in speech technology.

An articulatory representation of speech has certain properties which may be exploited in modelling speech. For example, speech articulators move relatively slowly and their movements are continuous: the mouth cannot 'jump' instantaneously from one configuration to a completely different one. Exploiting such knowledge of speech production could improve speech processing methods by providing useful constraints. Several potential applications have been proposed, including low bit-rate speech coding, speech analysis and synthesis, automatic speech recognition, and animating talking heads.

In this talk, I will begin by introducing the use of articulatory data in speech technology generally. I will then present a few examples from my own work, drawn from two areas. First, I will summarise various ways of exploiting articulatory information for speech synthesis. Second, I will describe a method for performing the acoustic-articulatory inversion mapping, whereby for a given acoustic speech signal we aim to estimate the underlying sequence of articulatory configurations which produced it.
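To make the continuity constraint above concrete, here is a minimal sketch (not part of the talk itself) of how slow, continuous articulator movement can be imposed on a noisy frame-by-frame trajectory estimate by low-pass filtering it. The frame rate, cutoff frequency, and EMA-style channel layout are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FRAME_RATE = 100.0  # assumed analysis frame rate in Hz (hypothetical)
CUTOFF_HZ = 15.0    # articulator movement is slow; little energy above ~15 Hz

def smooth_trajectory(traj, frame_rate=FRAME_RATE, cutoff_hz=CUTOFF_HZ):
    """Enforce slow, continuous articulator movement by low-pass
    filtering a noisy frame-by-frame trajectory estimate.

    traj: array of shape (n_frames, n_channels), e.g. x/y coordinates
          of EMA coils on the tongue, lips and jaw (assumed layout).
    """
    # 4th-order Butterworth low-pass; filtfilt is zero-phase, so the
    # smoothed movements stay time-aligned with the acoustics.
    b, a = butter(4, cutoff_hz, btype="low", fs=frame_rate)
    return filtfilt(b, a, traj, axis=0)

# Toy usage: a smooth 'true' movement corrupted by frame-wise noise.
t = np.arange(200) / FRAME_RATE
true_traj = np.sin(2 * np.pi * 2.0 * t)[:, None]     # 2 Hz movement
noisy = true_traj + 0.3 * np.random.randn(len(t), 1)  # jumpy estimate
smoothed = smooth_trajectory(noisy)
print(np.mean((smoothed - true_traj) ** 2) < np.mean((noisy - true_traj) ** 2))
```

The same idea can serve as a post-processing constraint on any frame-wise estimator: whatever produces the raw trajectory, physically implausible frame-to-frame jumps are filtered out.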
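The inversion mapping itself has often been approached as supervised regression on a corpus of parallel acoustic and articulatory recordings. The sketch below assumes such paired data (acoustic feature frames alongside simultaneously measured articulator positions) and uses a small neural network as the regressor; the data shapes, context window, and network configuration are illustrative assumptions rather than the specific method presented in the talk.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def add_context(frames, left=2, right=2):
    """Stack each acoustic frame with its neighbours: a single frame
    underdetermines the articulatory configuration, so a context
    window helps disambiguate the mapping."""
    padded = np.pad(frames, ((left, right), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)]
                      for i in range(left + right + 1)])

# Hypothetical parallel data: per-frame acoustic features (e.g. MFCCs)
# and simultaneously recorded articulator positions (e.g. EMA coils).
rng = np.random.default_rng(0)
acoustics = rng.standard_normal((1000, 13))     # 1000 frames x 13 MFCCs
articulation = rng.standard_normal((1000, 14))  # 7 coils x (x, y)

X = add_context(acoustics)  # (1000, 65) windowed acoustic input
net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
net.fit(X[:800], articulation[:800])            # train on parallel data

est_traj = net.predict(X[800:])  # frame-wise articulatory estimate
# In practice the raw estimate would then be smoothed, e.g. with the
# continuity constraint sketched above.
print(est_traj.shape)            # (200, 14)
```

Windowing the acoustic input is one common way of coping with the one-to-many nature of inversion: quite different articulatory configurations can produce near-identical short-time spectra, so surrounding frames carry useful disambiguating evidence.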