RBI’s call to replace two-factor authentication requires a multi-layered security approach

April 25, 2024
SMS 2FA is vulnerable to exploitation through social engineering. And with the evolution of AI technology, the threats are growing.

Two-Factor Authentication (2FA) is among the most popular methods banks use to add a layer of security to consumer accounts. Recently, however, the Reserve Bank of India (RBI) has urged banking and financial entities in the country to reconsider their reliance on SMS one-time password (OTP) authentication and explore alternative authentication methods.

The RBI's concern is that the social engineering prowess of malicious actors can circumvent this 2FA measure; the regulator cites the need for “principle flexibility” as technological advancements progressively make the method more vulnerable.

The RBI introduces an interesting element to the discussion of consumer account security. SMS 2FA is currently among the most widely used methods banks rely on to restrict account access to authorized users. At the same time, the method is indeed vulnerable to exploitation through social engineering, and with the evolution of AI technology, the threats are growing.

While US banks have yet to make an official change, many calls have been made for a solution that can replace SMS 2FA. So far, however, no proposed replacement has gained enough traction to become a standard. The RBI's call for an additional factor of authentication (AFA) could introduce added layers of security, but this measure would need to be balanced against ease of login for genuine users, so that they are not forced through an unduly burdensome account access experience.

AI technology is exploiting SMS authentication, and it is scaling globally

2023 saw AI grow into a pervasive threat in the proliferation of scams — some consumers have faced AI deepfake voice messages designed to acquire their banking details through voice authentication. Data now shows that GenAI is a global technology accelerating the growth of social engineering scams such as romance scams, investment scams, impersonation scams and more.

One can imagine how increasingly sophisticated AI-based capabilities can raise the volume of unauthorized transactions when OTP is involved. Banks and federal entities warn consumers of these scams, but when fraudsters run phishing or social engineering campaigns at scale with AI-based tools and trick consumers into sharing confidential account information, 2FA methods like SMS OTP authentication are rendered quite inadequate.
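For readers unfamiliar with how one-time passwords work under the hood, here is a minimal sketch of time-based OTP generation per RFC 6238, using only Python's standard library (the six-digit default mirrors common authenticator apps). It also illustrates the article's point: however sound the cryptography behind code generation, the protection collapses the moment a user is socially engineered into reading the code to a fraudster.

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code changes every 30 seconds and cannot feasibly be guessed, yet it offers no defense at all against a victim who voluntarily discloses it, which is precisely the gap social engineering exploits.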

From a security standpoint, banks will need to approach the situation with nuance. Banking security is layered, and with that comes considerations for how to secure customer data, account access and internal infrastructure. The goal is always to determine how these security measures can exist in harmony with UX. For example, Santander, a large global retail and commercial bank, added payment prompts in the UK to prevent purchase scams on platforms such as Facebook Marketplace — an action fueled by its research showing that over 70% of purchase scams originated on social media.

Ultimately, consumer and banking security are intertwined, and developing the safeguards to protect both is more than a priority; the approach itself is evolving.

AI, deepfakes and security regulations

AI deepfakes, as mentioned previously, can trick both security systems (such as voice-activated authentication) and consumers via convincing social engineering scams. Fraudsters are taking advantage of this and can run amok across financial entities, while for banks, compliance and regulation always take higher priority.

Criminals, of course, operate outside laws and regulations. In the world of AI, social engineering scams and deepfakes are becoming a larger problem, and “fighting fire with fire”, countering AI with AI, can be tricky for banks. The execution and development of these scams can at times surpass the blanket security that 2FA aims to provide, and although 2FA remains useful, its future is uncertain as banks combat AI while managing risk and ensuring compliance with applicable regulation.

Under these circumstances, it's imperative that banks consider sharing information with one another and adopting the newest technologies into their customers' digital interactions. Combining financial transaction data with open-source intelligence, so that suspicious account behavior from a fraudster stands out as anomalous, will be essential in combatting fraud, scams and money mule networks that operate as organized crime syndicates.
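As a simplified illustration of what "standing out as anomalous" can mean in practice, the sketch below flags a transaction whose amount deviates sharply from an account's recent history using a z-score. This is a toy baseline, not a production fraud engine; real systems combine many more signals (device, location, session behavior), and the function and threshold here are hypothetical.

```python
from statistics import mean, stdev

def is_anomalous(history: list, amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the account's history."""
    if len(history) < 2:
        return False          # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu   # perfectly uniform history: any deviation is suspect
    return abs(amount - mu) / sigma > threshold
```

A routine grocery-sized payment on an account with a small-purchase history would pass quietly, while a sudden large transfer, common in scam and mule activity, would cross the threshold and merit review.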

Backend security technology combined with frontend security can protect consumers

Biometrics, like face and voice recognition software, can be exploited by AI technologies in addition to the bot scripts that run social engineering scams, but there is a major difference between biometric security and behavioral biometric intelligence. Behavioral biometric intelligence analyzes human online interactions, such as keystrokes and typing speeds on mobile, desktop or tablet, to protect users and data.
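To make the keystroke idea concrete, here is a deliberately simplified sketch comparing a session's inter-keystroke timing intervals against a user's enrolled baseline. Commercial behavioral biometric systems model far richer signals; the scoring function below is a hypothetical illustration, not any vendor's method.

```python
from statistics import mean

def typing_similarity(baseline: list, sample: list) -> float:
    """Compare inter-keystroke intervals (ms) against a user's enrolled baseline.

    Returns a score in [0, 1]; higher means the sample looks more like the user.
    """
    n = min(len(baseline), len(sample))
    # Relative difference per interval, ignoring degenerate zero-length pairs
    diffs = [abs(b - s) / max(b, s)
             for b, s in zip(baseline[:n], sample[:n]) if max(b, s) > 0]
    return 1.0 - mean(diffs) if diffs else 0.0
```

A genuine user typing at their usual rhythm scores high, while a bot pasting credentials or firing keystrokes at machine-perfect intervals scores low, which is the kind of anomaly this class of technology surfaces even when the credentials themselves are correct.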

As AI-enabled fraud scales, banks' ability to use technology to monitor and combat the heightened volume of attacks will become essential to banking and consumer security. Given the fallibility of SMS OTP as a 2FA method, consumer defense must also strongly consider technologies that protect the consumer and add layers of security without burdening the user experience.

Several facets of banking security may begin to lean on the efficacy of the behavioral biometric intelligence solutions banks have deployed. From mule account detection to scaling artificial intelligence capabilities, behavioral biometric intelligence gives banks a powerful measure to identify, target and prevent fraud and scams.

Its purpose, in fact, is to tackle advanced threats. In the face of the growing threat of AI and its presence in the digital channel, it is one of the most powerful lines of defense for financial institutions and vulnerable consumers. Combining the technology with MFA, digital channel prompts and ongoing security education can improve consumers' odds of avoiding fraud attacks and account takeovers.

Conclusion

As technology evolves, fraudsters adapt, and AI is changing the game in ways yet to be seen. While the future seems uncertain, banks have several security measures they can put in place to fortify the SMS OTP 2FA methods consumers use daily. AI deepfakes, social engineering scams and more are on the rise, but inter-bank communication via fraud/AML operations, behavioral biometric intelligence and exploration of MFA applications can provide the holistic, supportive approach necessary to combat this new wave of digital fraud and financial crime.


Raj Dasgupta is Senior Director, Global Advisory, BioCatch.