Seminar: Preferential early attribution in segmental perception and its consequences for phonology. (CLaS-CCD Research Colloquium Series)
| Start Date | 5 Jun 2018, 2:00 pm |
| End Date | 5 Jun 2018, 3:00 pm |
Speaker: Associate Professor Amanda Rysling, Department of Linguistics, University of California, Santa Cruz, USA.
Recognising the speech we hear as the sounds of the languages we speak requires solving a parsing problem: mapping from the acoustic input we receive to the sounds and words we recognise as our language. The way that listeners do this impacts the phonologies of the world's languages. Most work on segmental perception has focused on how listeners successfully disentangle the effects of segmental coarticulation to solve the parsing problem. An assumption of this literature is that listeners attribute the acoustic products of articulation to the sounds whose articulation created those products. As a result, listeners judge two successive phones to be maximally distinct from each other in clear listening conditions. Few studies (Fujimura, Macchi, & Streeter, 1978; Kingston & Shinya, 2003; Repp, 1983) have examined cases in which listeners seem to systematically "mis-parse" (Ohala, 1981, et seq.), hearing two sounds in a row as similar to each other, and apparently failing to disentangle the blend of their production.

I present the results of a series of speech sound categorisation studies in which listeners were faced with ambiguity about the identity of the first of two successive phones. In these contexts, listeners productively heard the first sound as spectrally similar to the second sound, in a manner suggesting that they construe the transitions between the two as evidence about the identity of the first. Listeners seem to default to construing the acoustic properties of the input as evidence about the phone they have already begun processing, rather than positing a new phone. Moreover, they do so until they encounter acoustics that are clearly inconsistent with that first phone. These effects go unaccounted for in the two prominent models of speech perception. Given parallels between these effects and several known domain-general effects in perceptual processing, I argue that this default is likely a consequence of the structure of the human auditory system.
If this physiological basis is correct, then we can account for the predominance of regressive place of articulation assimilation in the world’s languages by appealing to a perceptual predisposition rather than a grammatical one.