NIST: Masks wreak havoc on facial recognition algorithms

July 29, 2020
Research finds error rates between 5% and 50% when face masks are applied digitally

Security solutions aimed at helping organizations mitigate COVID-19-related threats have proliferated recently, from thermal imaging for temperature monitoring to video analytics for mask detection. Yet new research from the National Institute of Standards and Technology (NIST) finds that facial recognition algorithms have a difficult time identifying people wearing face coverings.

According to a NIST Interagency Report published this week, even the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching digitally applied face masks with photos of the same person without a mask.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” Mei Ngan, a NIST computer scientist and an author of the report, said in a statement. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”

In its research, the NIST team explored how well each of the algorithms was able to perform “one-to-one” matching, where a photo is compared with a different photo of the same person. The team tested the algorithms on a set of about six million photos used in previous studies. Researchers then digitally applied mask shapes to the original photos and tested the algorithms’ performance. Because real-world masks differ, the team also came up with nine mask variants, which included differences in shape, color and nose coverage.
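The "one-to-one" matching the researchers describe can be sketched in a few lines: an algorithm reduces each face photo to a feature vector (a "template"), compares the two templates with a similarity measure, and declares a match only if the score clears an operating threshold. The vectors, similarity measure, and threshold below are illustrative assumptions, not details from the NIST report:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face-feature vectors ("templates").
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(template_a, template_b, threshold=0.8):
    # One-to-one matching: match only if similarity clears the threshold.
    return cosine_similarity(template_a, template_b) >= threshold

# Toy templates: a mask hides nose and mouth features, so the masked
# template of the same person drifts away from the unmasked one.
unmasked = [0.9, 0.4, 0.7, 0.3]
same_person_masked = [0.1, 0.9, -0.5, 0.2]  # degraded by occlusion

print(verify(unmasked, unmasked))            # True: a correct match
print(verify(unmasked, same_person_masked))  # False: a "false negative"
```

A digitally applied mask perturbs the template on one side of the comparison, which is exactly what drives the authentication failures the study measured.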

“We can draw a few broad conclusions from the results, but there are caveats,” Ngan said. “None of these algorithms were designed to handle face masks, and the masks we used are digital creations, not the real thing.”

With these limitations in mind, the study provides a few general lessons when comparing the performance of the tested algorithms on masked faces versus unmasked ones, including:

  • Algorithm accuracy with masked faces declined substantially across the board. Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time.
  • Masked images more frequently caused algorithms to be unable to process a face, technically termed “failure to enroll or template” (FTE). Face recognition algorithms typically work by measuring a face’s features — their size and distance from one another, for example — and then comparing these measurements to those from another photo. An FTE means the algorithm could not extract a face’s features well enough to make an effective comparison in the first place.
  • The more of the nose a mask covers, the lower the algorithm’s accuracy. The study explored three levels of nose coverage — low, medium and high — finding that accuracy degrades with greater nose coverage.
  • While false negatives increased, false positives remained stable or modestly declined. Errors in face recognition can take the form of either a “false negative,” where the algorithm fails to match two photos of the same person, or a “false positive,” where it incorrectly indicates a match between photos of two different people. The modest decline in false positive rates shows that occlusion with masks does not undermine this aspect of security.
  • The shape and color of a mask matter. Algorithm error rates were generally lower with round masks. Black masks also degraded algorithm performance in comparison to surgical blue ones, though because of time and resource constraints the team was not able to test the effect of color completely.
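The distinction between the two error rates in the findings above can be made concrete with toy comparison scores. In this hedged sketch (the scores and threshold are invented for illustration), a false negative is a genuine same-person comparison that falls below the match threshold, and a false positive is a different-person comparison that clears it:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    # False negative: a genuine (same-person) score falls below threshold.
    false_negatives = sum(s < threshold for s in genuine_scores)
    # False positive: an impostor (different-person) score clears it.
    false_positives = sum(s >= threshold for s in impostor_scores)
    fnmr = false_negatives / len(genuine_scores)   # false non-match rate
    fmr = false_positives / len(impostor_scores)   # false match rate
    return fnmr, fmr

# Toy similarity scores. Masks mainly drag genuine scores down, which
# raises false negatives while leaving false positives steady.
genuine = [0.95, 0.91, 0.62, 0.88, 0.55]   # same person, some masked
impostor = [0.20, 0.35, 0.10, 0.42, 0.05]  # different people

print(error_rates(genuine, impostor, threshold=0.8))  # (0.4, 0.0)
```

This mirrors the study's pattern: occlusion pushes same-person scores toward the threshold, inflating the false negative rate, while scores between different people stay low, so false positives hold steady or even decline.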

“With respect to accuracy with face masks, we expect the technology to continue to improve,” Ngan said. “But the data we’ve taken so far underscores one of the ideas common to previous FRVT tests: Individual algorithms perform differently. Users should get to know the algorithm they are using thoroughly and test its performance in their own work environment.”