Deepfakes at Scale: Why Digital Trust Is Collapsing and How Organizations Must Respond
Key Highlights
- Deepfake-driven cyberattacks targeting private enterprises rose 1,000% globally from 2022 to 2023, with North America seeing growth of more than 1,740%.
- Traditional security tools are ineffective against synthetic content, leaving humans as the last line of defense, despite their limited ability to detect deepfakes.
- Organizations must adopt AI-powered detection tools, strengthen employee awareness through training, and implement strict policies to verify identities and requests in high-risk scenarios.
- A shift to a zero-trust model is essential, where every voice, video, and visual must be actively verified to prevent deception in human interactions.
The digital world has always been rife with deception. The access, reach, and relative anonymity afforded by the online space naturally lend themselves to a certain degree of misuse and abuse. However, recent advances in technologies such as generative AI and deepfakes have ushered in an entirely new era of digital deception.
From 2022 to 2023, the total volume of deepfake-driven cyberattacks targeting private enterprises increased by 1,000% globally and by over 1,740% in North America. Capable of generating and mimicking human writing, speech, and even visual likenesses, these technologies have begun to blur the lines between fact and fiction. And they’re fundamentally threatening the foundations of digital trust in the process.
In this new, post-deepfake age, organizations and end users alike must equip themselves with the right knowledge, tools, and policies to remain secure in a world where trust is scarce.
The Long, (Ig)noble Tradition of Social Engineering
Spam emails, phishing scams, and the like have been around for roughly as long as the internet. And while other types of cyberattacks, such as malware and ransomware, also entered the modern lexicon, it was these confidence games, also known as social engineering, that became the most widely used attack vectors in the digital world.
Unlike cyberattacks that target digital systems and infrastructure, social engineering targets human psychology. For example, rather than using malware to harvest someone’s credentials or access sensitive information, a social engineering attacker might impersonate a co-worker so that the victim offers up the information willingly.
We’ve all seen these types of attacks. And historically, we’ve found most of them to be transparent. Glaring spelling errors, inaccurate personal details, poorly formatted emails, suspicious addresses…the list of red flags goes on and on. Although hackers’ tactics have evolved over time, it wasn’t until the advent of modern AI that we saw these attacks reach a new level of sophistication.
AI, Deepfakes, and the Unraveling of Digital Trust
With the advent of generative (and now agentic) AI, today’s threat actors can overcome these historical challenges regardless of their knowledge, fluency, or sophistication. With company org charts and professional relationships published openly across the web, and highly sophisticated, multi-lingual chatbots freely available to all, threat actors are now able to spin up highly targeted, polished social engineering campaigns with just a few clicks.
Add to that the rapid rise of deepfake technology, and the average threat actor now has a social engineering arsenal that is orders of magnitude more powerful and effective than those of the most advanced nation-state actors less than a decade ago.
Take, for example, the now-infamous Arup deepfake attack, which saw the British multinational defrauded of a staggering $25 million through a simple, impersonation-based social engineering attack. In the attack, a junior finance employee was invited to a Zoom call with several more senior colleagues (including the company’s CFO) to discuss an emergency funds transfer. After a brief exchange, the finance employee was persuaded to wire $25 million to an offshore account controlled by a hacker.
What makes this attack so exceptional isn’t only the enormous sum of money lost, but the fact that every single individual attending the Zoom call, other than the junior finance employee, was a deepfake. With pitch-perfect vocal recreations and likenesses indistinguishable from their real-life counterparts, these AI-generated impostors leveraged their apparent seniority and leadership positions to convince an unsuspecting employee to take drastic, dangerous action.
Traditional Tools Fall Short When Detecting Synthetic Content
While most of us would like to believe we would have known better than to have wired the funds, the reality is that when facing someone who looks and sounds exactly like your company’s chief financial officer, many of us would struggle mightily to defy their orders.
What makes matters worse, however, is that traditional security tools and technologies are no better at detecting this kind of synthetic content than humans are. Tools such as email filters and firewalls were never designed to detect or defend against synthetic content. And as deepfakes and other synthetic media are being used to supercharge social engineering attacks, even more modern email security tools are struggling to keep up. In a recent study, IRONSCALES found that traditional Secure Email Gateways (SEGs) fail to stop an average of 67.5 phishing attacks per 100 mailboxes every month.
All this means that, for most of today’s organizations, humans are being left as the last line of defense against deepfake-driven attacks. And that is not a good position to be in. In fact, a recent study published in the Proceedings of the National Academy of Sciences found that not only are humans highly unreliable at detecting deepfaked human images, but they also tend to perceive the synthetic faces as more trustworthy than their real-life counterparts.
Rethinking Security in a Post-Trust Age: Tech, Training, and Policies Unite
With all this in mind, it’s clear that new approaches are needed to counter this radical new threat. If all of the above represents a paradigm shift in the threat landscape, then an equivalent paradigm shift is needed in organizational cybersecurity.
There is no single silver bullet to defend against this shift. Instead, organizations will have to employ a blend of tools, training, and policy measures to remain secure in tomorrow’s post-trust digital age.
● Fight Fire with Fire: AI-Powered Defenses Are a Must – When it comes to defending against AI-powered threats, traditional technologies simply don’t cut it. Thankfully, innovation is happening on both sides of the cyber divide. AI-powered anomaly detection and behavioral analysis are critical for flagging social engineering attempts, especially via email. Meanwhile, video and audio authentication tools (including dedicated deepfake-detection technologies) are critical for detecting manipulated content when humans cannot. Finally, real-time verification platforms for high-risk communications (e.g., finance, HR, invoicing) are increasingly critical to protecting the most sensitive aspects of an organization’s operations.
● Empower Your People: Advanced Awareness Training & Testing – While the average individual may prove ineffective at detecting and defending against deepfake-driven attacks, the equation changes when dealing with a well-trained, aware, and vigilant workforce. AI-enabled security awareness training (SAT) and simulation testing tools have made it easier than ever to develop up-to-date, engaging, and effective training, testing, and simulation programs that can make all the difference for navigating a post-deepfake digital landscape.
● Policies & Governance That Get Ahead of the Problem – Looking back at the Arup attack, it’s easy to see how the right policies and governance could have prevented such a massive financial loss. Standard operating procedures for identity verification, especially for sensitive requests, should be table stakes for every modern organization, as should incident response protocols for suspected synthetic threats. Finally, ensure that deepfake defense is a collaborative effort, with frequent cross-departmental coordination among IT, HR, Legal, Communications, and more. (A minimal sketch of what such an identity-verification gate might look like follows this list.)
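To make the identity-verification point concrete, here is a minimal sketch of a zero-trust gate for high-risk requests. Everything in it is a hypothetical assumption made for illustration: the dollar threshold, the channel names, and the Request/evaluate structure are placeholders, not an IRONSCALES feature or a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical policy values; every number and name here is illustrative only.
WIRE_AMOUNT_THRESHOLD = 10_000                         # USD; above this, require step-up verification
HIGH_RISK_CHANNELS = {"video_call", "email", "chat"}   # channels where identity can be spoofed or deepfaked


class Decision(Enum):
    APPROVE = auto()
    REQUIRE_OUT_OF_BAND_VERIFICATION = auto()


@dataclass
class Request:
    requester: str                      # claimed identity, e.g. "cfo@example.com"
    channel: str                        # how the request arrived
    action: str                         # e.g. "wire_transfer"
    amount_usd: float
    verified_out_of_band: bool = False  # True only after confirmation on a known-good channel


def evaluate(request: Request) -> Decision:
    """Zero-trust gate for sensitive requests: assume synthetic until verified."""
    if request.action == "wire_transfer" and request.amount_usd >= WIRE_AMOUNT_THRESHOLD:
        if request.channel in HIGH_RISK_CHANNELS and not request.verified_out_of_band:
            # Identity on these channels can be deepfaked, so the claim alone is never enough.
            return Decision.REQUIRE_OUT_OF_BAND_VERIFICATION
    return Decision.APPROVE


if __name__ == "__main__":
    urgent_wire = Request(
        requester="cfo@example.com",
        channel="video_call",
        action="wire_transfer",
        amount_usd=25_000_000,
    )
    print(evaluate(urgent_wire))  # Decision.REQUIRE_OUT_OF_BAND_VERIFICATION
```

In practice, the check behind verified_out_of_band would be a callback to a phone number already on file or an in-person confirmation, never a number or link supplied in the suspicious request itself.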
The Paradigm Shift: All Media Are Synthetic Until Proven Authentic
The principle of “zero trust” is not new in cybersecurity. However, to date, the idea has been applied almost exclusively to the network perimeter. In an age of declining digital trust, it’s increasingly important for organizations to apply the same principle to human interactions.
Most of our day-to-day human interactions are based upon a fundamental assumption of trust. We trust that the person we are looking at or speaking to is who they claim to be. We generally assume their stated intentions are their real ones. And we trust that, barring exceptional circumstances, we are not at risk of being defrauded. Moving forward, in the digital, AI-first world, those assumptions will have to be inverted. To stay ahead of today’s threats, organizations must move from a paradigm in which trust is assumed to one in which trust is actively verified.
Every voice, video, and visual representation should be challenged for its veracity, with clear verification steps integrated into daily workflows while minimizing friction. How this plays out will depend on each organization's unique operations. But for all organizations, the imperative is the same: treat every digital interaction as potentially synthetic, and design your systems and operations accordingly.
About the Author

Eyal Benishti
Founder of IRONSCALES
Eyal Benishti is the CEO and Founder of IRONSCALES, pioneering the world’s first adaptive AI email security solution to combat advanced phishing, BEC, and account takeover attacks.
With over 15 years in the software industry, Eyal has held roles as a security researcher and malware analyst at Radware and a technical lead for information security solutions at Imperva. He also held R&D positions at Comverse and Amdocs.
Eyal earned his bachelor’s degree in computer science and mathematics from Bar-Ilan University in Israel and has been passionate about cybersecurity from a young age.


