Columbia University in the City of New York

Computer Models Mimic Brain’s Ease in Telling Faces Apart

Models that perform statistical analyses of hundreds of visual clues point the way to understanding how our brains give us the ability to distinguish faces

Comparing how different facial-modeling programs measure facial similarity can offer hints about how people make those judgments.

NEW YORK – There are currently 7.9 billion human faces on the planet. All are variations on the same template: two eyes flanking a nose above a mouth. Yet with a mere glance, most of us can tell the difference between any two faces. How do our brains make these lightning-fast judgments?

Spoiler alert: No one knows. And no computer program today is particularly good at discerning what makes one face different from another. But in a study published today in the Proceedings of the National Academy of Sciences (PNAS), an international team of researchers testing object-recognition programs has uncovered clues about the kinds of computations brains might be making when assessing the familiarity of faces.

“Distinguishing faces from one another is a very, very fine visual discrimination, yet we are masters at it even when the parts of faces, like noses and cheeks, can look quite alike,” said Kamila M. Jozwik, PhD, of the University of Cambridge and lead author of the paper. “Our study could help us understand how we recognize and perceive people, which influences how we think about them and treat them.”

“The ability to notice differences even in the same face, as in facial expressions or in familiar faces that have aged, influences our emotions and how we interact with people,” added Katherine R. Storrs, PhD, of the Justus Liebig University Giessen, Germany, and a coauthor of the study.

In their paper, Drs. Jozwik and Storrs, together with Nikolaus Kriegeskorte, PhD, a principal investigator at Columbia’s Zuckerman Institute and leader of the study, and several coauthors identified a “surprisingly simple” computer model that proved to be quite good at gauging facial differences.

“We hope this provides us with theoretical insight into the computations our brains make when recognizing faces or encountering new faces,” said Dr. Kriegeskorte, also a professor of psychology and neuroscience and director of cognitive imaging at the Zuckerman Institute.

Toward that end, the researchers recruited 26 undergraduates at the University of Cambridge and asked them to rank many pairs of computer-generated faces (based on scans of real faces) according to how similar the two faces in each pair appeared.

The researchers then tasked 16 different face-recognition programs, each running a model that represented faces in a different way, with making the same similarity ratings. Some of the models represented faces as digital images, massive arrangements of pixels. Some relied on geometric meshes, whose facets can be adjusted to represent faces. Still others compared facial landmarks and textural features like color and stubble.

“We were looking for a computer model that would make the same judgments people do when comparing faces,” said Dr. Jozwik. “That would put us in a great starting place to ask how the brain does this.”


The researchers found that two types of models were best at replicating the students’ similarity rankings. One type, deep neural networks (DNNs), is used on our mobile phones to recognize faces in photos and is often depicted in movies and TV shows whose storylines include artificial intelligence.

Programmers train DNNs to recognize an object, say, a cat or a human face, with galleries of digital images that a person has previously annotated as examples of the object of interest. The training phase continually readjusts the sequence of calculations a DNN makes until the program can spot the target object in new images. In the study, the DNN models compared the many thousands of pixels that make up different faces. From those comparisons, the DNNs calculated similarity rankings for the same face pairs the students rated.
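To make that idea concrete, here is a minimal sketch, not the study's actual code, of how a DNN-based dissimilarity score can be computed: run each face image through a pretrained network and measure the distance between the feature vectors it produces. The off-the-shelf network and preprocessing choices below are illustrative assumptions, standing in for the face-trained DNNs used in the study.

```python
# Minimal sketch (not the study's code): score two faces' dissimilarity
# as the distance between a pretrained network's feature vectors.
import torch
from torchvision import models, transforms
from PIL import Image

# An off-the-shelf image-recognition network stands in here for the
# face-trained DNNs evaluated in the study.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = torch.nn.Identity()  # drop the classifier; keep the feature vector
net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(image_path: str) -> torch.Tensor:
    """Map a face image to the network's internal feature vector."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return net(img).squeeze(0)

def dnn_dissimilarity(path_a: str, path_b: str) -> float:
    """Euclidean distance between feature vectors; larger means less alike."""
    return torch.dist(embed(path_a), embed(path_b)).item()
```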

The other type of program that was especially good at replicating the students’ facial-similarity judgments was derived from the Basel Face Model (BFM). Think of the BFM as a kind of digital clay for faces. By massaging various portions of a digital face (scanned from a real person), it becomes possible to morph it, more or less subtly, into a different face. The starting and morphed faces then become pairs of faces whose shapes and textures can be precisely and mathematically specified. For the PNAS study, the researchers created pairs of faces with the BFM and asked students to arrange them on a large computer touch screen according to how similar the face pairs appeared.
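Stripped to its essentials, the comparison works like this: in a morphable model, each face is a vector of shape and texture coefficients, and two faces can be compared by the distance between their coefficient vectors. The sketch below is a toy stand-in for the real BFM; the dimensionality and random draws are hypothetical.

```python
# Toy sketch of the morphable-model idea (not the real BFM): a face is a
# vector of shape and texture coefficients, and dissimilarity is the
# distance between two faces' coefficient vectors.
import numpy as np

rng = np.random.default_rng(0)
N_COEFFS = 199  # illustrative number of shape (and texture) components

def random_face() -> np.ndarray:
    """Draw a face as a vector of shape + texture coefficients."""
    return rng.normal(0.0, 1.0, size=2 * N_COEFFS)

def bfm_dissimilarity(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Euclidean distance in coefficient space; larger means less alike."""
    return float(np.linalg.norm(face_a - face_b))

base = random_face()
morphed = base + rng.normal(0.0, 0.1, size=base.size)  # a subtle morph
print(bfm_dissimilarity(base, morphed))        # small: faces look alike
print(bfm_dissimilarity(base, random_face()))  # larger: faces differ more
```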

The most striking result, the researchers said, is that the BFM was as good as the far more computationally intensive DNN models at replicating the facial-similarity perceptions of the students. This suggests that the types of statistical variations between faces assessed by the BFM are important to our brains, said Dr. Storrs.
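How well a model “replicates” human perception can be quantified simply: correlate the model’s predicted dissimilarities with the human ratings across all face pairs. The sketch below uses made-up numbers; it illustrates the general approach, not the study’s actual analysis.

```python
# Hypothetical example: correlate a model's predicted dissimilarities
# with human ratings across face pairs to score model-human agreement.
import numpy as np
from scipy.stats import pearsonr

human_ratings = np.array([0.9, 0.2, 0.6, 0.8, 0.1])    # rated dissimilarity per pair
model_distances = np.array([1.8, 0.5, 1.1, 1.6, 0.3])  # model distance per pair

r, p = pearsonr(human_ratings, model_distances)
print(f"model-human agreement: r = {r:.2f} (p = {p:.3f})")
```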

The researchers stress that their study has limitations. For one thing, the BFM was built from a narrow slice of humanity: scans of 200 mostly young, White faces.

“The natural variation in a population of faces is different for different people in different places,” said Dr. Kriegeskorte. Tools and datasets representative of the world’s facial diversity are not yet available. That limits the researchers’ confidence that their work does, in fact, point toward the brain’s own computational techniques for assessing faces.

“Our hope is that these findings can guide us toward research questions and methods that will unveil more precisely where and how in the brain this crucial information-processing task is going on,” said Dr. Kriegeskorte. “We also hope research like ours will help us understand the inner workings and shortcomings of artificial intelligence systems for recognizing faces, which are becoming more prevalent in our technological landscape.”

###

This paper is titled “Face dissimilarity judgments are predicted by representational distance in morphable and image-computable models.”

Additional contributors include Jonathan O’Keeffe, Wenxuan Guo and Tal Golan.

This research was supported by the Wellcome Trust [grant number 206521/Z/17/Z] awarded to KMJ; an Alexander von Humboldt Foundation postdoctoral fellowship awarded to KMJ; an Alexander von Humboldt Foundation postdoctoral fellowship awarded to KRS; the Wellcome Trust; and the MRC Cognition and Brain Sciences Unit.

The authors declare no competing interests.
