Tips for Navigating Ethics and Compliance with Facial Recognition Solutions

May 15, 2024
Can businesses adopt cutting-edge biometric technologies without running afoul of regulation?

Today’s threat landscape is evolving at an unprecedented rate, inviting equally rapid innovation to combat it. Artificial intelligence has become a cornerstone of both sides, as advances in cybersecurity, video analysis and access control see the technology increasingly integrated into existing systems.

Facial recognition solutions, aided by progressively more robust AI models, have become more precise than ever. But as the technology grows in sophistication, ethical and legislative pressures mount. Navigating these challenges ethically and compliantly is key for businesses looking to leverage biometrics and AI to protect their assets, experts said in a webinar shared by Biometric Update, titled "Integrating facial recognition capabilities into your business while ensuring compliance and data protection."

An Ethical Crossroads

As both a threat and an opportunity, AI innovations have dominated the security industry from many angles. However, the technology is not well understood by most consumers, and a string of massive breaches at organizations like Change Healthcare and Dell has made them understandably warier of security solutions that require their personal data.

What’s considered “ethical” can cover a broad spectrum of ideas and influences, so businesses must adopt a holistic approach and maintain transparency from the top down, says Pauline Norstrom, CEO of Anekanta AI.

“Ethical AI requires cooperation across a business,” says Norstrom. “All integration policies should be driven from the top down with a clear chain of accountability.”

Safety and consistency are key, she says, and businesses should seek guidance on how best to follow these principles. Independent advice can counter groupthink, preventing biases from creeping into both a business’s roadmap and its AI solutions. Test projects and employee training can further bolster an organization’s AI posture.

“Public trust can be built while achieving business goals, but the business needs to be transparent,” Norstrom says.

The Cyber Problem

One daunting aspect of maintaining an ethical stance on AI is its inextricability from the cyber realm. Data breaches are occurring with increasing frequency and compromising the personal data of millions of consumers, making the storage of biometric data by AI facial recognition solutions an unattractive liability.

Biometric data is particularly sensitive personal information, which makes it even more attractive to threat actors. Experts say organizations need to take an active role in protecting sensitive customer data to allay this fear.

Mike Gillespie, Founder and Thought Leader at Advent IM Limited, says businesses must focus on protecting this data through supply chain assurance, insider threat management or a host of other preventative strategies. He urges businesses to manage this technology as an information project rather than a technological one.

Gillespie warns organizations against leaning too far into “scope creep,” the gradual increase in a project’s scale to the point where it becomes unmanageable. Once a project is underway, scope creep can also have an adverse impact on privacy: every new data set procured is another asset for which businesses must be held accountable.

“Technology is part of the solution, but not all of it,” he stresses. “Businesses need to learn the difference between what is ethical and what is lawful.”

Navigating a Legal Minefield

Maintaining ethical standards is not the only hurdle, however. The rapid expansion of AI capabilities has easily outpaced regulatory reach, and many industry giants are now facing the consequences.

Growing scrutiny of biometric technology from both consumers and lawmakers arrives alongside a wave of legal challenges, including those currently being leveled against massive companies like Target, Rite Aid and Amazon, underscoring the importance of careful legal navigation.

“These cases exemplify the heightened regulatory landscape that now governs biometric technology,” says Tony Porter, Corsight AI’s Chief Privacy Officer. “They show the critical need for compliance and the repercussions of neglect.”

Porter advocates strong adherence to compliance and ethical AI practices to build consumer trust. Software legality needs to be assessed, and companies must do their due diligence to operate in a way that is “compliance-proof,” he says, allowing them to sidestep the pitfalls of noncompliance.

Businesses can also take the opportunity to build trust with their customer base by addressing consumers’ privacy concerns. The relentless push for more advanced security systems already sits in tension with consumer privacy, so companies need to understand these concerns from the get-go.

“We are at a crossroads – demand for enhanced security collides with privacy and ethical considerations. This is an old argument and an old tension,” says Porter. “If you’re going to use facial recognition, you need to let people know what you’re doing and why. Familiarizing and implementing AI in a safe and compliant manner represents a significant business opportunity and potential return on investment.”

Samantha Schober is Associate Editor at SecurityInfoWatch.com.
