Perception in Action
Perception in Action Research Group
The Perception in Action Program is headed by Dr Matthew Finkbeiner, Associate Professor Anina Rich, and Associate Professor Mark Williams at the Department of Cognitive Science, Macquarie University. The program investigates how the human brain processes information and uses it to act on the world. To investigate this, we employ a variety of behavioural, psychophysiological and neuroimaging techniques.
The Perception in Action Research Group also has links with the ARC Centre of Excellence in Cognition and its Disorders (CCD), and in particular the CCD's Perception in Action Program.
Perception in Action Program Social Event
Back Row : Mark Williams, Jade Jackson, Geoffrey Gonzalez, Nicolas Bullot, Brenda OCampo, Paul Sowman, Regine Zopf, Matthew Finkbeiner, Kiley Seymour, Alex Wilson & Anina Rich.
Front Row : Soheil Asfar, Marina Butko, Genevieve Quek & Erika Contini.
Current Research Participation Opportunities
Body Perception in Eating Disorders
About the study
Our research investigates body perception in eating disorders. We are trying to understand why body perception can be distorted, for example why the body can be perceived as bigger than it actually is. The aim of our research is to understand the mechanisms that underlie changes in how one’s own body is perceived, and to develop new interventions for treating these distortions.
For our research, we are looking for people who currently have an eating disorder, as well as people who have had an eating disorder in the past and have recovered.
How to participate
For more information please contact:
Dr Regine Zopf
Department of Cognitive Science, Macquarie University Email: Regine.Zopf@mq.edu.au
Phone: 9850 2956
Body representations: Perception and Action
Processing body information is central to perceiving ourselves and others, and to interacting successfully with our environment. Our basic research aim is to shed light on the role of visual information in body perception. For example, what role does looking at a hand play in body perception, and how does visual information integrate with information from other senses such as touch? We study how these processes inform central body representations and the distinction between one’s own body and actions versus other objects, bodies or actions in the environment.
Face Processing & Attention
Of all the objects we encounter, faces are perhaps the most socially and biologically relevant. Accordingly, these stimuli hold a unique status within the human visual system, eliciting activation in specific brain regions and capturing our attention in complex visual scenes (Theeuwes et al.).
One unique characteristic of faces is their capacity to be processed by the brain and affect human behaviour in the near-absence of attention (cf. Reddy, Wilken, & Koch, 2004; Finkbeiner & Palermo, 2009).
Our research in this area aims to clarify the role attention plays in face processing, and how this role varies under different conditions.
Gaze following: Automatic or top-down?
Eye gaze following is vital for smooth social interactions, as it can indicate sources of threat, underlying attitudes, points of interest and so on. It is therefore not surprising that infants begin to follow another person's gaze from four months of age. Given the early development of this ability, many researchers have concluded that gaze following is automatic (effortless and unavoidable). Our research suggests otherwise. We present participants with masked (subliminal) averted eye-gaze cues in a central location and then ask them to respond to a peripheral target. If gaze following is effortless and unavoidable, participants should shift their attention in the direction of the averted eye-gaze cue. While our participants demonstrate a clear ability to process masked gaze cues in tasks that tap learned stimulus-response mappings, we find that the same gaze cues do not produce shifts of spatial attention unless they are highly visible. These results suggest that gaze following is not as effortless and unavoidable as previously believed.
Making sense of the world: How does the brain process task-relevant information?
In everyday life, we are constantly bombarded with information from our senses. To make use of the information we receive, we must filter out irrelevant information and integrate the relevant information with our memories, goals, and the tasks at hand. Non-invasive neuroimaging techniques (e.g., functional Magnetic Resonance Imaging - fMRI; magnetoencephalography - MEG) afford a unique opportunity to investigate how the human brain sorts sensory information. New brain imaging methodologies allow us to investigate not only which brain areas respond to particular tasks but also what information is coded in different brain regions. This project aims to extend current fMRI analysis techniques, so that we can examine more precisely how information is represented in the brain, using more fine-grained units of analysis (patterns across voxels). A second aim is to develop similar methods for analysing MEG data. A third aim is to implement these techniques in new experimental studies that address fundamental questions about how sensory information is represented and integrated in the human brain. More specifically, we propose to investigate the cognitive processes that occur in the frontoparietal brain regions, which have proven to be critical for the modulation and cognitive control of information processed in the visual cortex.
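The core idea behind pattern-based analyses like these is that a condition can be decoded from the distributed pattern of activity across many voxels, even when no single voxel distinguishes the conditions reliably. The following is a minimal illustrative sketch only, on synthetic data, using a simple nearest-centroid classifier with leave-one-out cross-validation as a stand-in for the more sophisticated classifiers (e.g., linear SVMs in scikit-learn or nilearn) used in real fMRI decoding; all names and numbers here are invented for illustration.

```python
# Toy multi-voxel pattern analysis (MVPA) sketch: decode which of two
# conditions produced a "voxel" activity pattern, using a nearest-centroid
# classifier with leave-one-out cross-validation. All data are synthetic.
import math
import random

random.seed(0)
N_VOXELS = 50
N_TRIALS = 20  # trials per condition

def make_pattern(mean_shift):
    # Each condition has a distinct mean pattern across voxels,
    # plus independent trial-by-trial Gaussian noise.
    return [mean_shift * ((v % 2) * 2 - 1) + random.gauss(0, 1.0)
            for v in range(N_VOXELS)]

# Conditions 0 and 1 differ in how activity is distributed across voxels.
trials = ([(make_pattern(0.5), 0) for _ in range(N_TRIALS)] +
          [(make_pattern(-0.5), 1) for _ in range(N_TRIALS)])

def centroid(patterns):
    # Mean pattern across a set of trials (one value per voxel).
    return [sum(p[v] for p in patterns) / len(patterns)
            for v in range(N_VOXELS)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

correct = 0
for i, (test_pattern, test_label) in enumerate(trials):
    # Leave-one-out: fit centroids on all other trials, test on this one.
    train = [t for j, t in enumerate(trials) if j != i]
    c0 = centroid([p for p, lab in train if lab == 0])
    c1 = centroid([p for p, lab in train if lab == 1])
    predicted = 0 if distance(test_pattern, c0) < distance(test_pattern, c1) else 1
    correct += (predicted == test_label)

accuracy = correct / len(trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

Cross-validated accuracy reliably above the 0.5 chance level is the evidence that the voxel pattern carries information about the condition; real analyses apply the same logic to fMRI or MEG data.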
Multiple object tracking
In daily life, our visual system is bombarded with information - some of which must be ignored in order for us to achieve the task at hand. Furthermore, to navigate in a dynamic world, we must often track multiple objects moving simultaneously about the environment. This research examines the types of events that cause the most distraction when we are performing tasks involving motion. Specifically, we are interested in understanding how distracting events affect one's ability to track moving objects. Using the multiple object tracking (MOT) paradigm, we are building on classical experiments that have used stationary displays, to examine attention capture in a more real-world scenario.
Seeing clearly: Examining the consequences of glaucoma for the human brain
Glaucoma is a progressive optic neuropathy characterised by a specific pattern of optic disc damage and ganglion cell loss. If untreated it leads to blindness, and it remains one of the three major causes of blind registrations in Australia. Despite considerable deficits on objective testing, glaucoma patients are often unaware of their scotoma (blind spot). In the early stages of the disease, the cortex somehow 'fills in' the gaps, so patients do not see black regions associated with their loss of vision. One possibility is that adjacent striate or higher cortical areas recruit unused neurons corresponding to the scotoma, a form of neural plasticity. The broad aim of this project is to investigate the way the brain adapts to changes in visual input due to a scotoma. The ability of the brain to 'fill in' missing visual information has been well documented using 'artificial scotomas' and the physiological blind spot. This study will compare brain adaptations resulting from glaucomatous scotomas to those of the physiological blind spot, using functional MRI to examine plasticity in the brain after retinal damage, together with behavioural studies examining the consequences of such plasticity. The findings of this research will help us understand the glaucoma process and how visual loss might be detected at an earlier stage of the disease.
Synaesthesia
Synaesthesia provides a unique opportunity to explore how we perceive the world. By looking at the way the synaesthetes' unusual experiences arise, we can find out more about how the brain processes incoming information from the senses and puts together our conscious experience of the world. Synaesthesia may also provide insights into the role of learning and experience in our perception.
The brain that adapts itself: Flexible processing in an ever-changing world
How do humans - distinguished above all other animals by the diversity and flexibility of their behaviour - cope so effortlessly with the ever-changing world around them? How does the brain achieve such flexible control? What are the neural mechanisms that drive dynamic focus on the most important information? Novel methods for neuroimaging analysis enable new insights into how the brain processes information from the world and integrates it with internal representations of task rules and our current cognitive focus. This research uses multi-voxel pattern analysis with fMRI data to examine these processes as participants perform tasks in the scanner. It focuses on the contribution of a key network of frontal and parietal "multiple-demand" (MD) brain regions, which are important for a wide range of tasks. These regions are thought to behave flexibly: adjusting to process the most important information at each moment, and biasing processing elsewhere in the brain to drive a goal-directed response across the system. Current projects focus on the flexibility of processing in the MD system, the relationship between processing in these regions and more specialised brain regions such as the visual and somatosensory cortices, and what happens when the system fails and we make mistakes.
Towards understanding visual perception of the body: Neuroimaging and behavioural studies
A central question in cognitive neuroscience is how embodiment (i.e., having the kinds of bodies we have) influences our physical actions and how this shapes our perception of the world. This project aims to enhance our understanding of the influences of the body on perception, and its neural underpinnings. More specifically, the project investigates how information from the hands influences how healthy observers, as well as amputees, perceive external objects. In order to understand the neural mechanisms that underlie the use of our hands to manipulate objects, we use neuroimaging techniques to demarcate the neural representations of hand form and hand orientation, paying special attention to brain areas that have been found to be recruited in planning and controlling actions.
Research Group Leaders
- Associate Professor Matthew Finkbeiner
- Associate Professor Anina Rich
- Professor Mark Williams
Current Members
- Dr Thomas Carlson
- Associate Professor Matthew Finkbeiner
- Dr David Kaplan
- Associate Professor Anina Rich
- Dr Paul Sowman
- Dr Susan Wardle
- Professor Mark Williams
- Dr Alexandra Woolgar
- Dr Regine Zopf
Current Students
- Ann Carrigan
- Leidy Janeth Castro-Meneses
- Jade Jackson
- Manjunath Narra
- Marguerite Rowe
- Felice Smith
- Jasmina Vrankovic
- Kimberly Weldon
- Astrid Zeman
Current Research Assistants
- Marina Butko
- Dr Margery Pardey
Current External Associates
- Dr Nicolas Bullot
- Associate Professor Veronika Coltheart
- Dr Genevieve Quek
If you are interested in working in the lab as an intern, research assistant, honours student or PhD student, please email Marina Butko (firstname.lastname@example.org), who will assist you with your enquiry.
Department of Cognitive Science
Australian Hearing Hub, 16 University Ave
Macquarie University NSW 2109, Australia