Prof Niels Schiller Seminar
Date: Tuesday, 16 October 2018, 11.00am - 12noon
Venue: The Australian Hearing Hub, Level 3, Room 3.610, Macquarie University
Speaker: Professor Niels O. Schiller, Leiden University Centre for Linguistics (LUCL) & Leiden Institute for Brain and Cognition (LIBC), Leiden University
Host: Dr Mike Proctor
Topic: Morphological processing in speech production: The case of compounding
This talk is about how we plan and produce speech. More specifically: how do we put together words and sentences, and what are the linguistic units that need to be activated and retrieved from long-term memory? Words can consist of smaller meaningful elements called “morphemes”, e.g. the English compound dishwasher, which consists of dish (meaning: ‘dirty dishes’) and washer (derived from ‘to wash’; meaning: ‘to clean’). How do we represent words like dishwasher in our memory – as one holistic entity, or do we (also) store the morphemes dish, wash, and the suffix -er separately?
The present series of studies investigated morphological priming, as well as its time course and neural correlates, in overt speech production using a long-lag priming paradigm. Behavioural (reaction time), event-related potential (ERP), and neuroimaging (fMRI) data were collected in separate sessions. Recently, we extended this research to multilingual participants. I will report on five different studies that paint a remarkably coherent picture and argue for a separate level of morphological processing in language production planning.
Professor Niels Schiller is Scientific Director of the Leiden University Centre for Linguistics and Professor of Psycho- and Neurolinguistics at Leiden University. His research areas are psycho- and neurolinguistics, in particular syntactic, morphological, and phonological processes in language production and reading aloud. He is also interested in articulatory-motor processes during speech production, language processing in neurologically impaired patients, and forensic phonetics. Professor Schiller is furthermore involved in applied research, such as SpeechView, a pair of video glasses connected to a microphone that records the speech of an interlocutor and transmits it wirelessly to a computing unit running speech-to-text software. The converted text is displayed in the video glasses, so that the person wearing them can read what the interlocutor has said. For people with sudden deafness or hearing impairment, this system significantly improves the intelligibility of speech and can potentially increase their quality of life.