Face Identification Using Haptic Interfaces (2006)

Macquarie University Safety Net Grant

Investigators: Kavakli, M.

In this project, our main goal is to investigate how to improve Face Recognition (FR) accuracy by measuring the standard deviations that facial expressions introduce relative to neutral face data. To achieve this goal, we will develop a markup language (FaceXML) to encode a neutral face and compare instances of a face against this structure.
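As a rough illustration of what a FaceXML encoding might look like (FaceXML is still to be designed, so the element names, landmark set, and millimetre units below are our own assumptions rather than the project's specification), a neutral face could be serialized from a set of named 3D landmarks using Python's standard library:

# Illustrative sketch only: FaceXML is yet to be defined, so the tags,
# landmark names, and millimetre units used here are assumptions.
import xml.etree.ElementTree as ET

# Hypothetical neutral-face landmarks: name -> (x, y, z) in millimetres.
neutral_landmarks = {
    "nose_tip":        (0.0,   0.0,  85.0),
    "left_eye_outer":  (-52.0, 38.0, 40.0),
    "right_eye_outer": (52.0,  38.0, 40.0),
    "mouth_left":      (-28.0, -42.0, 55.0),
    "mouth_right":     (28.0,  -42.0, 55.0),
}

def encode_neutral_face(subject_id, landmarks):
    """Serialize a neutral face as a FaceXML-style document (sketch)."""
    face = ET.Element("face", id=subject_id, expression="neutral")
    for name, (x, y, z) in landmarks.items():
        ET.SubElement(face, "landmark", name=name,
                      x=str(x), y=str(y), z=str(z))
    return ET.tostring(face, encoding="unicode")

print(encode_neutral_face("subject-001", neutral_landmarks))

Instances of the same face captured later could then be compared landmark by landmark against this stored neutral structure.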

Facial identification compares an input image against a database and reports a match. Studies demonstrate that, when 3D facial data is used, FR accuracy reaches 100% on frontal views and 97% on half-profile views. However, the performance of FR algorithms drops significantly when facial expression variation is introduced. Our hypothesis is that using 3D face data and identifying the structural elements of a neutral face may improve the accuracy of face identification under variations in facial features (beard, moustache, hair, etc.) and expressions. We will use gesture recognition technology to construct a 3D face and try to find a match for it in a 3D face database. Research on FR in humans, by contrast, has mainly relied on 2D images.
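A minimal sketch of the intended matching step, assuming each face (the reconstructed query and the stored neutral faces alike) has been reduced to a fixed-length vector of stacked 3D landmark coordinates; the dictionary database, plain Euclidean distance, and threshold are placeholders, not the project's final design:

# Sketch under stated assumptions: each face is a fixed-length NumPy vector of
# stacked 3D landmark coordinates; the database maps subject ids to such vectors.
import numpy as np

def best_match(query_vec, database, threshold=25.0):
    """Return the closest stored face, or None if no face is close enough.

    query_vec : 1-D array of stacked (x, y, z) landmark coordinates.
    database  : dict mapping subject id -> landmark vector of the same length.
    threshold : maximum acceptable distance (units follow the landmark data).
    """
    best_id, best_dist = None, float("inf")
    for subject_id, stored_vec in database.items():
        dist = np.linalg.norm(query_vec - stored_vec)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = subject_id, dist
    return (best_id, best_dist) if best_dist <= threshold else (None, best_dist)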

This approach has certain limitations. First, face encoding in 2D is problematic, whereas in reality face encoders or forensic examiners may be more spontaneous in exploring different views of a 3D face. Moreover, the volumetric information of a face is often confined to pictorial depth cues, making it difficult to assess the role of 3D shape processing. 2D face data relies on intensity variation, while 3D face data relies on shape variation.

Our proposal is similar to existing approaches in that it compares query vectors with stored vectors in the database, using a 3D morphable face. However, our approach is unique in that we will account for facial expressions in the construction of the query vectors, and we will use a gesture recognition system to interact with the 3D morphable face instead of obtaining images from calibrated stereo cameras. We will not use a 2D image but will reconstruct a 3D face.
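One simple way to account for expressions in the query vectors (a sketch on our part, not a committed design) is to down-weight each coordinate's deviation from the stored neutral face by the standard deviation that expressions were measured to introduce at that coordinate, so that highly mobile regions such as the mouth contribute less than rigid ones such as the nose bridge:

# Sketch only: the per-landmark expression standard deviations would come from
# the measurements described above; here they are assumed to be given.
import numpy as np

def expression_tolerant_distance(query_vec, neutral_vec, expression_std,
                                 eps=1e-6):
    """Distance between a query face and a stored neutral face, where each
    coordinate's deviation is scaled down by the variability that facial
    expressions introduce at that coordinate (a Mahalanobis-like weighting)."""
    weights = 1.0 / (expression_std + eps)   # mobile landmarks weigh less
    diff = (query_vec - neutral_vec) * weights
    return float(np.linalg.norm(diff))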

The technology we will use to reconstruct a face reads facial data in 3D space as a forensic artist sculpts the face. This will be an alternative to face scanners, a rather expensive technology that requires the voluntary involvement of the person to be identified; person identification for crime prevention cannot rely on voluntary involvement. We have been trained in tracking facial animations using an infrared Face Tracker, and in importing and manipulating facial animation data in Motion Builder.

We have also obtained Ethics approval for testing. The test results will allow us to assess the advantages of using VR technology in comparison to existing face recognition methods. In addition to a Digital Artist working as a Research Assistant on this project, we have a VR programmer. The project also proceeds in parallel with a PhD thesis.
