How fraud defenses’ predictions get predicted

Jan. 19, 2024
Business leaders must recognize that fraud actors are not a static threat and are constantly evolving

Fraud isn’t something that either happens at a certain point or doesn’t. It’s not a one-time threat, nor is it limited to a particular attack method or weakness. Rather, fraud is a threat that hangs over every business at all times – and the routes that criminals can take to defraud one are expanding by the day.

Business leaders must recognize that fraud actors are not a static threat and are constantly evolving their practices to keep up with and overtake common fraud prevention strategies.

Many companies fail to acknowledge this and put their faith in outdated systems and practices that may have worked years ago but are now well behind fraudsters’ capabilities, or overfocus on a particular attack vector while leaving others exposed.

As convenient as it would be, there is no singular “silver bullet” for the threat of fraud; any business hoping to keep itself safe needs multiple layers of protection that are regularly reassessed and updated. This article will discuss the evolving threats that businesses face, what they can do to shield themselves, and why – and how – they must adapt their defenses to stay ahead in the fraud race.

The State of the Modern Fraud Race

Fraudulent activity doesn’t take a singular form – far from it. With fraud defenses constantly emerging and evolving, and different types of companies each vulnerable to unique attack vectors, fraudsters often need specialized tactics to breach their targets. Furthermore, unpredictable circumstances, from business practices to world events (such as a global pandemic), can open a business up to unexpected weak points.

For example, the COVID-19 pandemic has been a major driver behind the recent proliferation of fraud vectors, having spurred a haphazard rush to digitize all manner of previously analog processes without the usual level of security planning attached.

Major services that traditionally required in-person, face-to-face interactions – such as opening a bank account or applying for government services, like aid and licensing – were suddenly forced to set up online portals through which users could complete them. These services remained as critical as they were pre-pandemic, so no compromise could be made on the speed of delivering these new processes; the compromise, then, fell to security.

This proved disastrous for many, including industries like financial services and the public sector. The United States, for one, saw its pandemic relief funds defrauded to the tune of potentially hundreds of billions of dollars (over 17% of all funds distributed).

With remote work and the widespread use of online services becoming the new norm, businesses and consumers alike face a broad array of fraud types, ranging from romance scams on dating apps to full account takeovers.

As security practitioners settle into a new status quo after the rush to digitize and bolster their defenses in response, fraud actors are pivoting their approach in turn.

For example, though fake identifying documents are nothing new, businesses have had to strengthen their means of detecting them within virtual onboarding processes. As a result, many fraudsters no longer focus on passing themselves off as someone else, and instead convince innocent people to perform the fraud on their behalf.

Social engineering scams like phishing have also been around for quite some time but have become more diversified and inventive in response to tightening security measures. Many target employees rather than attempting to breach their companies’ defenses directly, contributing to the insider threats that made up nearly a fifth of fraud incidents in 2022, per Verizon.

More than half of all social engineering incidents in Verizon’s 2023 Data Breach Investigations Report involved business email compromise (BEC), a common and dangerous form of fraud in which fraudsters masquerade as an internal executive to convince employees to use their access to perform a certain task.

Many employees fall victim to this type of fraud, persuaded to change third-party payment details, make purchases for criminals, or grant access where it shouldn’t be granted – though plenty of savvy employees can spot fraudulent emails and safely ignore them. But what if the threat isn’t just a convincing email? What if the fake request comes in the form of a video or phone call that looks, and sounds, just like your boss?

Keeping an Eye on AI

Artificial intelligence (AI) is currently having its moment in the spotlight for many reasons, not the least of which is its usefulness to fraud actors’ schemes. Deepfake technology in particular can allow a user to assume and manipulate the likeness of another person – often in the form of an image or voice.

While a facsimile of a company email may not fool the average worker, a direct call from someone who looks and sounds exactly like their CFO – urgently telling them to send a large payout for a new deal they’re closing – has a much better chance of successfully tricking them.

Aside from social engineering, deepfakes also provide criminals with a skeleton key of sorts to bypass security measures like biometric authentication through facial or voice recognition.

AI is a versatile tool that can make fraud activity both accessible and effortless. Generative AI, like the prominent ChatGPT, can enable a single fraudster to automate attacks like those described above en masse, rather than having to put in the legwork themselves. Even if just 0.1 percent of those automated attacks succeed – one in every thousand attempts – each success is a hands-free victory for the person behind it.

On the more sophisticated end of the spectrum, adversarial neural networks can be used to probe high-tech fraud prevention tools for weaknesses to exploit, highlighting flaws no human would recognize and keeping companies perpetually under threat. These threats will only expand as AI advances alongside fraud technology, and they require regularly updated, specialized counterstrategies and tools to defend against.

Fortunately, a key characteristic of AI is that it doesn’t take sides – and as such, can be used for defensive purposes as well.

Many of the common fraud methods mentioned above can be shielded against by incorporating AI into security strategies and identity verification tools. Doing so allows onboarding and authentication processes to pick up on minute, borderline-imperceptible details in fake documentation and deepfaked selfies, to perform liveness detection and ensure the image provided is of a real, present person, and even to predict the forms that future fraud attacks will take.
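For illustration, here is a minimal sketch (in Python) of how such layered, AI-assisted checks might be combined in an onboarding flow. The scoring functions are hypothetical stand-ins for whatever detection models a vendor actually runs, and the thresholds are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class OnboardingEvidence:
    id_document: bytes  # photo of the identity document submitted at onboarding
    selfie: bytes       # selfie captured during the same session


def document_forgery_score(document: bytes) -> float:
    """Stub for a model that flags tampered or synthetic ID documents (0 = clean, 1 = forged)."""
    return 0.05


def deepfake_score(selfie: bytes) -> float:
    """Stub for a model that detects deepfake artifacts in the selfie (0 = genuine, 1 = deepfaked)."""
    return 0.10


def liveness_score(selfie: bytes) -> float:
    """Stub for a liveness check (1 = a real, present person; 0 = a replayed or synthetic image)."""
    return 0.90


def verify_onboarding(evidence: OnboardingEvidence) -> str:
    """Combine the layered checks into a single accept / review / reject decision."""
    forgery = document_forgery_score(evidence.id_document)
    deepfake = deepfake_score(evidence.selfie)
    liveness = liveness_score(evidence.selfie)

    # One strong fraud signal is enough to reject outright.
    if forgery > 0.8 or deepfake > 0.8 or liveness < 0.2:
        return "reject"

    # Weaker but still suspicious signals go to manual review rather than auto-accept.
    if forgery > 0.4 or deepfake > 0.4 or liveness < 0.6:
        return "manual_review"

    return "accept"


if __name__ == "__main__":
    evidence = OnboardingEvidence(id_document=b"...", selfie=b"...")
    print(verify_onboarding(evidence))  # prints "accept" with the stubbed scores above
```

The point is the layering: no single check is trusted on its own, and borderline results are routed to a human rather than waved through.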

Of course, this leads fraudsters to use the same technology to predict these predictions – and with both sides fighting to remain one step ahead, we’re left with a fraud race.

Don’t Forget the End User

With sophisticated technology available to both criminals and security practitioners, the latter face a major disadvantage: companies can’t make their fraud defenses so comprehensive that they become a burden on the end user, lest they drive their customer base away out of frustration.

Sure, a 20-step verification process requiring multiple forms of ID, biometric scans, and other laborious procedures will keep out most scammers – but it’ll do the same to actual users, who can’t be bothered to undergo overly rigorous vetting and are liable to take their business elsewhere.

Friction is a necessary part of any worthwhile fraud defense, but companies need to balance how much customer friction is involved, as well as where in the customer journey it is placed: too early, and it may have little to no effect; too late, and the fraud may already have happened.
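As a rough sketch of what placing friction where it matters can look like in practice, the hypothetical Python policy below applies step-up verification only to higher-risk actions later in the customer journey; the actions, risk factors, and thresholds are invented for illustration, not a production policy.

```python
# Illustrative only: the actions, risk factors, and thresholds below are invented
# to show risk-based placement of friction, not a real policy.

HIGH_RISK_ACTIONS = {"change_payment_details", "large_payout", "add_beneficiary"}


def risk_score(action: str, amount: float, new_device: bool) -> float:
    """Toy heuristic: risk rises with action sensitivity, amount, and device novelty."""
    score = 0.0
    if action in HIGH_RISK_ACTIONS:
        score += 0.5
    if amount > 1_000:
        score += 0.3
    if new_device:
        score += 0.2
    return min(score, 1.0)


def required_friction(action: str, amount: float = 0.0, new_device: bool = False) -> str:
    """Decide how much friction to apply at this point in the customer journey."""
    score = risk_score(action, amount, new_device)
    if score >= 0.7:
        return "step_up_verification"  # re-verify identity before the action completes
    if score >= 0.4:
        return "otp_challenge"         # lighter challenge for moderate risk
    return "no_friction"               # don't burden low-risk activity


if __name__ == "__main__":
    print(required_friction("browse_catalog"))  # no_friction
    print(required_friction("change_payment_details", amount=5_000, new_device=True))  # step_up_verification
```

The idea is to keep everyday, low-risk activity frictionless while concentrating checks at the moments fraudsters actually target.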

Remaining Prepared for the Unknown

Fraud threats can come in just about any form, through just about any attack vector imaginable – and with fraud actors’ arsenals of tools and tactics expanding by the day, the threats extend well beyond those we’re currently even aware of.

A company that puts too much stock in certain forms of defense while ignoring others makes the act of defrauding them an easy one, as does a company that fails or refuses to continually update and refresh its defenses over time.

When pitted against criminals solely devoted to finding ways around companies’ security measures, those measures can become obsolete far quicker than one might expect. An effective fraud defense must not only be strong; it also needs to be multifaceted and dynamic.

New fraud technologies like deepfakes are just one example of the formidable threats being introduced regularly, and they must be dealt with proactively rather than reactively.

Security leaders must be vigilantly on the lookout for anything that appears out of the ordinary or comes out of nowhere and have processes in place to deal with threats as they take shape. This is a tall order for an internal security team – which is where external knowledge and resources come in. Third-party security vendors with specialized technology and extensive experience with fraud are an invaluable asset to security teams and take a good deal of this load off their shoulders.

Access to the right kind of information is critical as well: keeping tabs on crowdsourced information about novel scams and attacks as they emerge is a convenient way to predict and pre-empt the same attacks on one’s own business, and working alongside vendors with access to substantial fraud data – especially if it encompasses your business’ peers – is key to stopping repeat attacks across companies.

Staying one step ahead in the fraud race is a perpetually demanding task, but it’s important to remember that falling behind from even one misstep can be enough to bring an entire company to its knees.

David Divitt is Senior Director of Fraud Prevention & Experience at Veriff, a global identity verification provider. With more than two decades of experience working with major financial institutions, payment providers, and software vendors to help develop their fraud prevention strategies, Divitt supports the production and development of Veriff’s identity verification solutions to meet the need for modern and innovative fraud and financial crime prevention technology, while keeping the user experience seamless.

Divitt was most recently Vice President of Financial Crime Products at Vocalink, a Mastercard company. Before Mastercard, Divitt was the Product Manager of Financial Products at Alaric, an NCR company that offers global fraud prevention and intelligent transaction handling solutions. He has provided professional consultation to over 50 of the top global banks and helped design and structure fraud solutions and operations in multiple tier-one financial institutions.