Voice AI vs. Behavioral Biometrics

July 11, 2023
Examining the divide between technology and use case applications

This article originally appeared in Access Control Trends & Technology 2023, a special bonus publication to Security Business magazine, Security Technology Executive, and Locksmith Ledger magazine.  


Just last month, federal regulators at the FTC warned that “voice scams” are becoming more complex, targeted, and difficult to stop, thanks to an increasingly advanced process in which criminals “clone” a person’s voice for nefarious purposes. These scams are an evolution of traditional voice fraud, in which criminals impersonated law officers, bank officials, or loved ones to persuade the victim to send funds, typically through a payments platform or similar service. While many organizations have adopted physical biometrics such as voice confirmation to authorize payments (and lower losses), the advent of AI means that we are now seeing schemes that are increasingly targeted, advanced, and virtually undetectable by traditional security controls.

In the financial sector specifically, audio-based schemes have become a boon for bad actors looking to defraud customers through impersonation and deceit. With AI-powered voice scams, criminals can now impersonate a loved one with a high degree of accuracy, using a short voice sample to replicate the voice and then calling the victim to request money. This new take on an old scheme has become significantly more effective and dangerous.

The most common targets of these attacks are older individuals, who typically have lower levels of fraud awareness and security literacy with which to identify and defend against these schemes. Recently, senior citizens in Canada lost up to $200,000 to AI-based voice scams. Even more alarming, criminals can also use stolen voice samples to try to bypass the security protocols on a user’s bank account, allowing them to move about freely and siphon funds. And thanks to the rise of social media, there is no shortage of samples for criminals to choose from, ushering in an era in which quite literally anyone could be impersonated and used in an attack.

With consumers concerned about protecting their privacy and maintaining security, this raises the question: what alternative controls can secure accounts without being subject to theft or copying by cybercriminals? One answer lies in behavioral biometrics, a machine-learning-based technology that analyzes a user’s digital, physical, and cognitive behavior to distinguish cybercriminal, AI, and bot activity from legitimate customers, weeding out the bad actors from the good. As a result, it is imperative to clear up misconceptions about behavioral biometrics and the other tools that will be essential in defeating the scams of the future.

What’s the Difference?

To understand why behavioral biometrics is essential, we must first understand why traditional safeguards have come up short. Generative AI is now capable of making sophisticated copies of an individual’s physical biometrics, such as cloning their voice, which can bypass previously effective methods of authentication.

Beyond AI-generated voice profiles, fraudsters have many ways to build a physical biometric profile of a user. They can scour social media and/or employ social engineering tactics to produce convincing copies of real photo IDs, simply steal a user’s smartphone, or perform a SIM swap to trick the system into granting access to the user’s account. Photo ID scans are particularly vulnerable to criminal exploitation: it is relatively easy to make a realistic copy of a driver’s license or passport, and AI can trick even liveness detection systems. In each of these cases, a single breached access point can give the criminal access to the system and the ability to complete any number of illegal activities, most ending with a defrauded customer and the financial institution (FI) left to pick up the pieces.

Instead of relying on these traditional safeguards, FIs are better off establishing a risk-based authentication strategy whereby there is no single point of failure that can provide the fraudster access to the genuine user’s account. Even if all the frontline defenses are breached, behavioral biometric intelligence can sniff out signs of risk, alerting fraud teams when subtle anomalies appear, even mid-session, between the user’s genuine behavioral profile and the fraudster’s AI-generated behavior.

This goes beyond simple voice commands or 2FA and instead uses factors that analyze not only user behavior but how closely it matches patterns associated with criminal intent. For example, with this technology it is possible to detect an ongoing scam in real time, because elements like the following can be flagged as irregular (see the illustrative sketch after this list):

●    Unusually long sessions and user hesitation when submitting a transaction, which could indicate coercion or intimidation.

●     On mobile apps, an active voice call combined with movement of the phone from ear to mouth and back again. Intermittent changes in the x, y, and z coordinates of the device also signal unusual user activity during an online banking session.

●     On a desktop web login, aimless mouse motion is considered suspicious, a sign either that the fraudster may need time to charm, coerce, and guide their victim, or that the victim is keeping the session alive so that an automated logoff or screen saver does not end it.
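To make these signals concrete, below is a minimal, illustrative sketch in Python. It is not BioCatch's actual model; the field names and thresholds are assumptions for illustration, and a real system would learn per-user baselines rather than hard-code limits. It simply turns raw session measurements like those listed above into human-readable anomaly flags.

```python
from dataclasses import dataclass

# Illustrative thresholds only; a real system learns per-user baselines.
MAX_NORMAL_SESSION_S = 600           # sessions far longer than usual
MAX_NORMAL_PAUSE_S = 30              # long hesitation before submitting
MAX_NORMAL_ORIENTATION_CHANGES = 5   # repeated ear-to-mouth phone movement
AIMLESS_MOUSE_RATIO = 8.0            # long, wandering path vs. net movement


@dataclass
class SessionSignals:
    """Hypothetical per-session measurements collected by the banking app."""
    duration_s: float                 # total session length
    pause_before_submit_s: float      # hesitation before confirming a payment
    orientation_changes: int          # x/y/z device re-orientations (mobile)
    active_voice_call: bool           # phone call in progress during session
    mouse_path_px: float              # total mouse travel distance (desktop)
    mouse_net_displacement_px: float  # straight-line distance start to end


def flag_anomalies(s: SessionSignals) -> list[str]:
    """Return flags for the irregular patterns described in the list above."""
    flags = []
    if s.duration_s > MAX_NORMAL_SESSION_S:
        flags.append("unusually long session")
    if s.pause_before_submit_s > MAX_NORMAL_PAUSE_S:
        flags.append("hesitation before submitting transaction")
    if s.active_voice_call and s.orientation_changes > MAX_NORMAL_ORIENTATION_CHANGES:
        flags.append("active call with repeated ear-to-mouth device movement")
    if s.mouse_net_displacement_px > 0 and \
            s.mouse_path_px / s.mouse_net_displacement_px > AIMLESS_MOUSE_RATIO:
        flags.append("aimless mouse motion")
    return flags


if __name__ == "__main__":
    session = SessionSignals(
        duration_s=1450, pause_before_submit_s=75, orientation_changes=9,
        active_voice_call=True, mouse_path_px=52_000,
        mouse_net_displacement_px=3_100,
    )
    print(flag_anomalies(session))   # all four flags fire for this session
```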

In other words, these systems do not simply rely on a single point of authentication to grant permission, say, a voice-activated password, but instead continuously monitor user behavior to detect anomalies based on past and present inputs. This is in addition to verifying traditional controls such as location, device fingerprint, and IP address. Behavioral biometrics is capable not only of identifying hacked accounts but also of ascertaining when the genuine account owner is acting under duress or being coached by a criminal.
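As a simplified illustration of such a risk-based approach, the behavioral flags from the previous sketch might be blended with traditional controls into a single score that drives the response. The weights, thresholds, and response tiers below are hypothetical, not any vendor's actual scoring.

```python
# Hypothetical weights; a production system would calibrate these against
# labelled fraud outcomes rather than hard-coding them.
FLAG_WEIGHT = 0.2
UNKNOWN_DEVICE_WEIGHT = 0.25
IP_MISMATCH_WEIGHT = 0.15
GEO_MISMATCH_WEIGHT = 0.15


def risk_score(behavior_flags: list[str], device_known: bool,
               ip_matches_history: bool, location_plausible: bool) -> float:
    """Blend behavioral anomalies with traditional controls into a 0-1 score."""
    score = FLAG_WEIGHT * len(behavior_flags)
    score += 0.0 if device_known else UNKNOWN_DEVICE_WEIGHT
    score += 0.0 if ip_matches_history else IP_MISMATCH_WEIGHT
    score += 0.0 if location_plausible else GEO_MISMATCH_WEIGHT
    return min(score, 1.0)


def respond(score: float) -> str:
    """Map the score to an action: allow, step-up authentication, or alert."""
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step_up_authentication"
    return "alert_fraud_team_and_hold_transaction"


if __name__ == "__main__":
    flags = ["unusually long session",
             "hesitation before submitting transaction"]
    score = risk_score(flags, device_known=True, ip_matches_history=True,
                       location_plausible=True)
    print(score, respond(score))   # 0.4 step_up_authentication
```

The point of the second tier is the one made above: even when every frontline control passes (known device, familiar IP, plausible location), accumulated behavioral anomalies alone can still trigger a step-up or a fraud-team alert mid-session.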

Where Do We Go?

The long-term ramifications of the rush to generative AI may not be understood for some time. Still, it seems safe to say that true privacy will be elusive, if it is not already. It is important for financial organizations to install safeguards that do not rely on information that can be copied, including physical biometrics such as voice verification. All entities entrusted with safeguarding consumer privacy will need a multi-tiered, risk-based authentication approach and a rapid response system to ensure that their customers’ data is always secure. Behavioral biometrics fills this need and, crucially, does so without requiring any additional steps on the user’s part, making it a seamless, effective, and powerful counter to emerging AI cyber threats. As awareness and adoption of these tools become more commonplace, we will also see an increased willingness from consumers to use these services, and significant progress in the fight against fraudsters.

About the Author

Raj Dasgupta | Senior Director, Global Advisory at BioCatch

Raj Dasgupta is Senior Director, Global Advisory at BioCatch, with over 15 years of financial services industry experience in the U.S., having worked for HSBC, Yodlee, Intuit, ID Analytics, and TransUnion prior to joining BioCatch. Throughout his career, he has worked at the intersection of business and technology, advising customers on how technological solutions can be operationalized to solve real-life business problems.