Expert Answers: Auburn researcher comments on facial recognition systems

By Chris Anthony

Published: Jul 25, 2019 1:39:00 PM

Anh Nguyen, assistant professor of computer science and software engineering in Auburn’s Samuel Ginn College of Engineering, comments on the growing capabilities and challenges of facial recognition technology. He researches deep learning, a subset of artificial intelligence.

Can you describe how facial recognition systems work?

The current state-of-the-art artificial intelligence technology can extract a large amount of useful information from a single photo of a human face, such as gender, age or identity. It can also recognize emotions (e.g., happy, angry, surprised or sad), estimate head pose and determine where a person's eyes are gazing (e.g., detecting whether a driver is focused on the road or distracted).

The technology has numerous real-world applications, from face-based phone unlocking, such as the Face ID feature on the iPhone XR, and camera surveillance to assisting humans in autonomous vehicles and smart homes. Many schools in China now take attendance automatically as students walk past the gate or enter the classroom.

A face recognition system typically has three steps. First, the system detects where a face is in the input photo and draws a bounding box around it. Second, because a face can appear in various poses, the detected face is transformed into a canonical frontal view, similar to a passport headshot, via a geometric warping procedure. Third, the transformed face photo is fed into a classifier (often a giant artificial neural network) that extracts meaningful information, such as gender or identity.
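To make the three steps concrete, here is a minimal sketch in Python. The detection step uses OpenCV's bundled Haar-cascade face detector; the alignment step is approximated by a plain crop-and-resize rather than a landmark-based geometric warp; and the classifier (identity_model) is a hypothetical placeholder, since real systems use large neural networks trained on millions of faces.

    import cv2

    # Step 1: detect faces with OpenCV's bundled Haar-cascade detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in boxes:
        # Step 2: warp the detected face to a canonical view. Production
        # systems align on eye/nose landmarks; a plain crop-and-resize
        # stands in for that geometric warp here.
        face = cv2.resize(gray[y:y + h, x:x + w], (112, 112))

        # Step 3: feed the canonical face to a classifier that extracts
        # attributes such as identity, gender or age.
        # label = identity_model.predict(face)  # hypothetical model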

Some critics have said facial recognition technology produces high numbers of false positives and contains racial and gender biases. What has the research shown?

Yes, facial recognition technology is indeed not yet perfect. In 2015, Google’s image-recognition system mislabeled an African-American user as “gorillas.” A recent groundbreaking study suggested that modern face recognition systems have a strong discriminatory bias against female and darker-skinned faces. For instance, researchers at MIT found that facial recognition systems by Microsoft, Megvii and IBM are almost always correct (i.e., less than 1 percent error) on photos of white males. However, they are wrong a surprising 20.8 percent, 34.5 percent and 34.7 percent of the time, respectively, on darker-skinned female faces. This issue has serious implications for security, law enforcement and many real-world applications. For example, self-driving cars are estimated to have a higher risk of failure when facing darker-skinned pedestrians.

How are you and other researchers working to improve the accuracy of facial recognition systems?

One goal of my research here at Auburn is to study the limits and biases of computer vision systems, including facial recognition systems, via rigorous and systematic tests. We are also proposing new methods for explaining the decision-making process of these giant neural network systems, which have long been considered mysterious black boxes.

For example, in a recent study, we found that state-of-the-art image recognition systems based on RGB photos are not robust enough for real-world tasks such as self-driving cars. These systems correctly label a familiar object only around 3 percent of the time when the object is outside its normal pose (e.g., when it is randomly positioned or rotated).
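As an illustration of this kind of systematic test, the sketch below rotates each test image by a random angle and measures how often a classifier's prediction survives. The model, images and labels names are hypothetical placeholders, not our actual experimental setup, which probed 3D pose changes beyond in-plane rotation.

    import random
    import cv2

    def rotate(image, degrees):
        # Rotate around the image center, keeping the original frame size.
        h, w = image.shape[:2]
        matrix = cv2.getRotationMatrix2D((w / 2, h / 2), degrees, 1.0)
        return cv2.warpAffine(image, matrix, (w, h))

    def pose_robustness(model, images, labels, trials=10):
        correct = total = 0
        for image, label in zip(images, labels):
            for _ in range(trials):
                rotated = rotate(image, random.uniform(-180, 180))
                correct += int(model.predict(rotated) == label)  # hypothetical API
                total += 1
        return correct / total  # accuracy on randomly rotated inputs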

Specifically, in the face recognition area, a leading technique is to balance heavily skewed training datasets, such as those that contain many more examples of white males than of darker-skinned females, by providing more examples of the underrepresented groups for the systems to train on. However, many open questions remain in fixing the biases of vision systems and explaining how they work.
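A minimal sketch of one such rebalancing strategy, oversampling underrepresented groups until every group matches the largest one, follows. The sample format and the group_of function are illustrative assumptions, not a specific published method.

    import random
    from collections import defaultdict

    def oversample_balance(samples, group_of):
        # Bucket samples by demographic group (group_of is assumed to map
        # a sample to its group label).
        groups = defaultdict(list)
        for sample in samples:
            groups[group_of(sample)].append(sample)

        # Duplicate examples from smaller groups until all groups match
        # the size of the largest one, then shuffle for training.
        target = max(len(members) for members in groups.values())
        balanced = []
        for members in groups.values():
            balanced.extend(members)
            balanced.extend(random.choices(members, k=target - len(members)))
        random.shuffle(balanced)
        return balanced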

Where do you see this technology headed in the future?

Face recognition technology has already been widely applied in domains where there is plenty of data for computers to learn to recognize faces. The interesting challenges, however, lie in applying it successfully to hard medical problems, such as diagnosing diseases from facial photos, or to scenarios where we don’t have much data (e.g., making it work as well for darker-skinned people).

This technology has prompted concerns over the invasion of people’s privacy. What role do you see computer scientists playing in mitigating these concerns?

Artificial intelligence is one of the most exciting and fastest-growing fields, and regulation needs to catch up with the technology. I agree that privacy is an issue on which policymakers and scientists need to work together. As computer scientists, we are attempting to identify the failures and biases of such systems and to explain their decision-making processes. These findings will inform society about the pros and cons of the technology and help shape privacy regulations.

Media Contact: Chris Anthony, chris.anthony@auburn.edu, 334.844.3447