Cognition in Action Facility
The Cognition in Action Facility employs natural human movements such as reaching and grasping as measures in various tasks investigating cognitive processes. These movements can be quantified to yield continuous datasets that are useful in the study of human cognition. Continuous data potentially constitute a much richer data source than discrete measures (e.g. reaction times captured in a button-press task). Where discrete measures reflect the culmination of several stages of information processing, a continuous measure has the potential to reveal these processes as they unfold in real time. The goal of the lab is to harness these measures to further our understanding of aspects of cognition such as attention, subliminal processing and perceptual decision making.
Cognition in Action Tools & Equipment
In the Cognition in Action laboratory we use a combination of different tools to investigate information processing as it unfolds in real time. These systems allow our researchers to use participant action as a measure for various tasks. For example, instead of asking participants to respond by pressing a button or triggering a voicekey, we ask them to reach out and touch or grasp an object. By analyzing the kinematics of the participant's reaching response, we are able to compare several different dependent variables (e.g. peak velocity, peak acceleration, curvature and others) across different experimental conditions. Data analysis can be done using C-Motion's Visual3D, or with custom, in-house analysis programs.
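The lab's in-house analysis programs are not described here, but purely as an illustration, a minimal Python sketch (the function name and the curvature definition are our assumptions, not the lab's actual code) of how kinematic dependent variables like those above might be derived from sampled 3D positions:

```python
import numpy as np

def kinematic_measures(positions, rate_hz):
    """Derive simple kinematic dependent variables from a reach trajectory.

    positions: (n, 3) array of 3D marker positions (metres).
    rate_hz: sampling rate of the motion tracker (e.g. 240 Hz).
    Returns peak speed (m/s), peak acceleration magnitude (m/s^2), and
    curvature, defined here as the largest perpendicular deviation from
    the straight start-to-end path, as a fraction of that path's length.
    """
    dt = 1.0 / rate_hz
    vel = np.gradient(positions, dt, axis=0)    # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    acc = np.gradient(vel, dt, axis=0)
    acc_mag = np.linalg.norm(acc, axis=1)

    # Curvature: peak perpendicular distance from the chord joining the
    # start and end points, normalised by the chord length.
    chord = positions[-1] - positions[0]
    unit = chord / np.linalg.norm(chord)
    rel = positions - positions[0]
    along = rel @ unit                          # projection onto the chord
    perp = np.linalg.norm(rel - np.outer(along, unit), axis=1)
    curvature = perp.max() / np.linalg.norm(chord)

    return speed.max(), acc_mag.max(), curvature
```

A straight-line reach at constant speed would yield a curvature near zero, while a trajectory that initially veers toward a competing target (as in the priming experiment described below) would yield a large one.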
Optotrak Certus
The lab is currently equipped with an Optotrak Certus, an optical motion capture device that can track an individual's movements with sub-millimeter and sub-millisecond precision. We also have an Ultrasound system that can be used in conjunction with the Optotrak system to record the movements of articulators that are otherwise out of view (e.g. the tongue).
Optotrak Extension for Presentation
We have developed an in-house extension to Presentation that interfaces with the Optotrak Certus. The tool is released under an open-source license and we are making it available to interested researchers. All that we ask is that you register with us for the download.
Liberty Polhemus
Human movement experiments are also carried out using the Liberty Polhemus motion tracking system. This electromagnetic system can track participants' reaching movements with high precision, sampling the finger's position 240 times per second. The lab is equipped with several Liberty systems, which, when used in parallel, enable researchers to record movement data from up to four participants simultaneously.
Virtual Hand System
Our virtual hand setup includes a CyberGlove motion capture system which measures the angles of 23 hand joints, a magnetic motion capture system (Fastrak) which measures arm movements, and a 3D screen. Custom-built Matlab-based software (designed by Jason Friedman, Repeated Measures) is used to collect detailed hand posture information that can be used for both later analysis and real-time animation of virtual hands or objects. The system enables researchers to manipulate both temporal movement parameters (e.g. the temporal delay between a performed and viewed movement) and spatial movement parameters (e.g. the viewed hand orientation or movement direction), and can be employed to answer a wide range of questions in perception and action research. Some areas we are working on include how the brain represents human hand information and how viewing a hand influences movement perception, attention and action.
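The temporal-delay manipulation described above amounts to buffering the posture stream before it drives the virtual hand. As a rough sketch (this is illustrative Python, not the lab's Matlab software; names and rates are assumptions):

```python
from collections import deque

def make_delay_line(delay_s, rate_hz):
    """Return a function that delays a stream of hand-posture samples.

    Feeding in one sample per frame yields the sample from delay_s
    seconds earlier, so the viewed virtual hand lags the performed
    movement by a controlled amount.
    """
    n = int(round(delay_s * rate_hz))
    buf = deque(maxlen=n + 1)

    def step(sample):
        buf.append(sample)
        return buf[0]   # oldest retained sample: delay_s ago once full
    return step
```

For example, at a (hypothetical) 10 Hz frame rate a 0.3 s delay line returns each input sample three frames late; setting the delay to zero reproduces the performed movement immediately.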
ASL Head Mounted Eyetracker
A recent acquisition in the lab is our ASL H6 Head Mounted Eyetracker. This system captures the participant's pupil diameter and point of gaze, superimposing this measurement on a head-mounted camera image of the environment from the participant's perspective. The system is able to integrate information about the position of the participant's head with their eyetracking data, such that point-of-gaze information can be obtained for multiple surfaces in the environment.
Additional Systems
Other systems currently in use within our lab include a Magstim Rapid2 Transcranial Magnetic Stimulator that enables both single-pulse and repetitive TMS experimental designs. The lab also uses a 64-Channel BioSemi ActiveTwo EEG system, a Minibird to measure 3D position and orientation, and a Phantom haptic (force-feedback) device that applies forces to the person holding it, so that the user feels as if they are interacting with a real object.
The Lab in Action
An important feature of the lab is the real time interface between the Optotrak system and the stimulus-presentation system. This allows researchers to employ experimental designs whereby they can change the stimulus display depending upon the participant's movement position or velocity. It also allows for the use of psychophysical experiments in which stimulus properties vary during the course of an experiment as a function of various kinematic measures. You can see a demonstration of this real time interface in the video below. Note how the square on the screen changes location and size to reflect movements in 3D. Notice also that the colour of the square changes as the velocity of the movement changes.
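The demonstration above maps tracked position to the square's location and size, and speed to its colour. Purely as an illustration of that mapping (the function, scaling and thresholds below are hypothetical, not the lab's actual interface code):

```python
def update_square(x, y, z, speed, slow=0.2, fast=0.8):
    """Map a tracked 3D position and instantaneous speed to square parameters.

    x, y (metres) drive the square's on-screen location; z (distance from
    the screen, metres) drives its size; speed (m/s) selects its colour.
    Thresholds slow/fast are illustrative values.
    """
    location = (x, y)
    size = max(10.0, 200.0 * (1.0 - min(z, 1.0)))   # nearer hand -> bigger square
    if speed < slow:
        colour = "green"
    elif speed < fast:
        colour = "yellow"
    else:
        colour = "red"
    return location, size, colour
```

In an actual experiment such a function would be called once per tracker sample, so the display follows the movement with only the tracker-to-screen latency.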
Movement trajectories from a single subject in a masked congruence priming experiment in which the task was "Is it an animal or a tool?". In the "Congruent" condition, both the masked prime and the target referred to the same category (e.g. dog [prime] - DOLPHIN [target]). In the incongruent condition, the prime and target stimuli referred to opposite categories (e.g. hammer - DOLPHIN). The greater tendency by this subject to initially point to the wrong target in the incongruent condition suggests that the processing of the prime stimulus proceeds all the way down to include the formulation of an overt motor response.
Cognition in Action Facility Guidelines
Oversight Committee: Matthew Finkbeiner, Jason Friedman, Paul Sowman, Mark Williams
The Oversight Committee (OC) is in charge of the lab and is responsible for the quality of the research conducted therein.
Sometimes this means that they have to make big decisions.
New Researchers in the lab
If you are hoping to run an experiment in the Action lab, you will first need to present your research proposal/plan at the Attention and Action meeting, which is held on Friday afternoons at 3:30. This will give the members of the OC a chance to assess the quality and feasibility of the proposal and to determine if it can be done in the lab.
All new researchers will need to 'align' themselves with one of the members of the OC. This is meant to ensure that the OC does what it is tasked with doing: overseeing the research conducted in the lab.
You will need to obtain ethics approval before beginning your experiment.
BioSemi
If you are hoping to use the BioSemi, you will first need to go through a series of training sessions with Shahd Al-Janabi (EEG System Custodian) to make sure that you know how to use it and how to take care of it.
TMS
If you are hoping to use TMS, you will first need to go through extensive training with Paul Sowman.
Optotrak
If you are hoping to use the Optotrak system, you will need to train with either Matthew Finkbeiner or Jason Friedman.
Funding
All researchers using EEG or TMS will be expected to pay $10/subject into the lab coffers. This allows us to keep the lab stocked with consumables.
Current Researchers
If you notice that we are getting low on consumables, let Shahd know via email. If you notice any hardware/software problems, contact Matthew immediately.
Cognition in Action Facility Researchers
Cognition in Action Oversight Committee
Dr Matthew Finkbeiner (Chair) - My research focuses primarily on nonconscious processes. To investigate this, I use the masked priming paradigm with several different dependent measures, including reaction times, ERPs, TMS and (mostly these days) motion capture. The action lab is ideally suited for my research program because in it we have motion capture, EEG and TMS systems.
Associate Professor Mark Williams - My research focuses primarily on the cognitive and neural mechanisms involved in face and facial expression perception. I am also interested in other aspects of perception such as the way we process other objects and complex scenes. I use neuroimaging techniques such as fMRI and simultaneous MEG/EEG to explore questions of the location and timing of neural events. I also work with neuropsychological patients and healthy individuals using visual psychophysics.
Dr. Paul Sowman - My research interest is in how the nervous system controls movement. I have a particular interest in motor control of the jaw and mouth. In my research I use electromyography (EMG), transcranial magnetic stimulation (TMS), electroencephalography (EEG) and magnetoencephalography (MEG). I am currently investigating sensorimotor integration in stuttering.
Cognition in Action Researchers
Regine Zopf - The aim of my research is to investigate the influence of body ownership information on perceptual and motor processes. Body ownership information enables us to distinguish our body from other bodies, and the sense of body ownership can be disturbed in neurological disorders (e.g. somatoparaphrenia). Research so far has focused on possible cues that may inform body ownership; however, the role of the basic sensory and sensorimotor processes that underlie and are affected by such cues remains unclear. One important question in my research is whether body ownership cues modulate the way our brain uses visual information about the seen hand for action.
Cognition in Action Student Researchers
Irene Chork - I am studying non-consciously processed stimuli through the use of masked primes and continuous motion tracking. This allows us to see reaching movements that reflect underlying cognitive processes. Past research has treated the prime as information that is integrated into, and thus modulates, the conscious response to the target. I will look at whether these non-consciously processed primes can elicit an overt behavioural response independent of the target.
Longjiao (Caroline) Sui - I am interested in exploring the advantages of bilingualism on non-linguistic tasks. The action lab provides me with the opportunity to measure the advantages of bilinguals over monolinguals with high accuracy.
Manjunath Narra - My thesis focuses on exploring the effect of bilingualism on non-linguistic response conflict tasks, i.e., tasks requiring a speeded response to one stimulus dimension while ignoring another, distracting stimulus dimension. The bilingualism literature reports a bilingual advantage in terms of reaction times and interference effects on these tasks. However, it is currently unclear how response conflict mechanisms differ between bilinguals and monolinguals. The main research question in my thesis is to understand the temporal dynamics of conflict resolution when subjects are engaged in a conflict task. I will investigate how the language groups differ when faced with conflict. I use refined temporal measures such as reaching responses and TMS motor evoked potentials to track conflict resolution processes for bilinguals and monolinguals.
Samantha Parker – My research investigates the perceptual processes that influence our interpretation of video and visual evidence and the impact this can have on legal decision making. Past research has demonstrated that video confessions shot from particular camera perspectives erroneously impact on the judgments mock jurors make as to the voluntariness of the confession and the suspect’s subsequent guilt. I am interested in examining the attentional and perceptual mechanisms that influence the relationship between video evidence and legal judgment.
Usha Sivaranjani Sista - I am interested in motor cognition. I am looking at the phenomenon of interference effect in arm reaching movements. I am currently studying what causes the effect by looking at arm movement trajectories using motion capture devices such as the Optotrak system. I intend to use other systems such as the minibird, and the cyberglove, in order to predict a model for the interference effect as well as test the predictions.
Cognition in Action Alumni
Bhuvanesh Awasthi - My broader research interests lie in exploring the cognitive aspects of brain-body-environment that bring about perceptual experience. There is growing consensus that perception is a parallel, distributed and interactive process. At the Action facility, I use visually guided reaching as a continuous behavioural measure to study perceptual processing of faces. Rather than several sequential stages, as suggested by early researchers, it seems likely that multiple, competing, parallel processes are involved in face processing. For example, using reach trajectories, we found behavioural evidence for early processing of low spatial frequency information in faces while reaching to high spatial frequency face targets. Reaching trajectories can reveal new information regarding otherwise hidden internal events. They can also reflect the continuity between brain-body-environment that enables mental phenomena. Perception and action are critical to behaviour and this approach can provide an insight into perceptual and cognitive deficits, besides having far-reaching influences on the way we think about sensory information processing.
Genevieve Quek - I study nonconscious processing of subliminal stimuli under different manipulations of temporal and spatial attention. I use motion capture technology to examine participants' reaching responses that reflect cognitive processes unfolding over time.
Dr Jason Friedman - I study arm movements and grasping as examples of how the brain plans movements in highly redundant systems. I use motion capture devices, such as the Optotrak system, the minibird, and the cyberglove, in order to test the predictions of these models. I also use arm movement trajectories as an analysis tool to provide new insights into problems in perceptual decision making.
Lincoln Colling - My primary research area is social cognition with a particular focus on understanding the mechanisms that allow people to engage in joint tasks. In particular, my experimental work is aimed at uncovering the mechanisms that allow people to plan and execute their actions in response to actions performed by other people. My research combines methods from experimental psychology and cognitive neuroscience. I employ techniques like motion capture and computer-based reaction time experiments to study how people produce and respond to actions, and brain imaging techniques like electroencephalography and magnetoencephalography (MEG) are employed to understand how the brain responds to action observation.
Lars Marstaller - I'm working on gestures, i.e., the hand and arm movements that accompany speech. I am interested in developing ways to automatically detect gestures based on movement tracking. The action lab provides me with the opportunity to measure hand movements with high accuracy. This kind of data will enable me to search for movement patterns that are not available to video-based methods of motion analysis.
Shahd Al-Janabi - My research interests lie in attention and non-conscious perception, and I explore these topics using EEG and the Optotrak system. The primary line of research concerns the automaticity (and depth) of non-conscious information processing – how task-based attention modulates the processing of non-consciously presented stimuli, and the consequences of such selective processing on non-conscious perception. I'll be investigating, in particular, whether different types of non-consciously presented stimuli can bypass the influence of task-based attention.
Information for Undergraduate Students
Cognition in Action researchers accept Psychology Honours students. Review the list of Cognition in Action researchers to learn more about their research interests.