Deepfakes Create a Crisis of Trust Inside Modern Organizations
Key Highlights
- AI-driven impersonation is making external attacks appear internal, creating confusion and eroding trust across teams.
- Deepfake voices, emails and video leave few forensic clues, often triggering suspicion of employees before the true source is uncovered.
- Organizations need shared visibility across departments to accurately determine whether a request came from inside or outside the business.
In an era where anything can be fabricated, distinguishing what’s real from what’s fake has become nearly impossible. Not long ago, companies could train employees to spot red flags in fraudulent emails or suspicious vendor requests. But that playbook no longer works.
Today, every channel — calls, videos, messages, even internal conversations — can be synthetically generated and indistinguishable from the real thing. What follows is a cloud of uncertainty, with teams scrambling to answer one very pressing question: did the request come from a colleague or an external attacker?
This is “insider threat confusion,” and it represents a new and troubling dimension of cybercrime. In the past, most scams left obvious fingerprints: misspelled domains, clumsy grammar or suspicious requests. Now those fingerprints are strikingly hard to find, leaving teams pointing fingers at one another, eroding trust, destabilizing leadership and casting suspicion on employees even when no one inside is at fault.
A problem that predates AI, now magnified
AI didn’t create these schemes — it supercharged them. Long before deepfakes and synthetic voices, attackers were already exploiting trust and routine. In 2015, Xoom Corp. fell victim to a business email compromise (BEC) scam that cost the company nearly $31 million. The fallout was immediate: Xoom’s stock plunged 17%, and its CFO resigned soon after.
Today’s attackers use AI to replicate human tone, cadence and context across emails, calls and video. They can impersonate executives, vendors or partners with such startling precision that even investigators struggle to determine whether an insider was involved. The line between internal error and external manipulation has never been thinner.
Take the British engineering firm Arup, for example. Last year, the company lost $25 million after an employee joined a video call with what appeared to be company executives, only to later discover that the participants were not employees, but AI-generated deepfakes. The attack was so convincing because it featured internal company-specific jargon and realistic visuals. In the wake of this incident, investigators initially suspected an insider plot, only to realize later that it was an external social-engineering scam.
Not all confusion starts with outsiders. Take Macy’s, which last year discovered that an employee had committed internal accounting fraud, concealing up to $154 million in expenses to cover up a smaller accounting mistake. Because of a breakdown in the retailer’s internal controls, the fraudulent activity went undetected for roughly three years.
Whether the culprit is an insider or an outsider, what comes next is predictable: major disruptions to business operations, followed by doubts, questions and finger-pointing, all of which can have profound consequences. In the case of Macy’s, the company had to delay its quarterly earnings report and later announced a CFO transition.
Another example involves the chemical manufacturer Orion, where an employee was tricked by attackers posing as an internal authority into authorizing a series of wire transfers totaling $60 million. There was no evidence of any external system compromise, which deepened confusion over whether the scheme was perpetrated by an employee or an outsider.
Even when an outsider is ultimately found to be the cause, there may be a period when employees and even executives are under suspicion, which can erode trust and derail careers. In some cases, the aftereffects reach all the way to boards and shareholders, who may lose confidence in leadership.
How attackers manufacture doubt
The reason these schemes are so disorienting isn’t just how real they look; it’s how effectively they mimic trusted internal behavior, blurring the line between insider and outsider. Some key elements include:
It begins on the outside: Most of these attacks originate outside company systems — from a compromised vendor, hijacked partner account or stolen executive credentials. Once attackers gain that foothold, every message they send arrives through legitimate-looking channels, tricking recipients and investigators alike into believing the threat started inside the organization.
It impersonates authority: Deepfake audio, AI-crafted emails, or synthetic videos of a trusted leader carry enormous influence. In one reported case, criminals used a cloned CEO’s voice to persuade an employee to transfer $243,000. This incident looked at first like an internal lapse in judgment rather than an external attack.
It’s timed and contextual: Requests are rarely random. Attackers time them to coincide with actual projects, vendor interactions, or in line with the company’s regular payment cycles. Whether the manipulation is human or AI-driven, the pattern looks the same from the outside: normal operations masking deception.
It leaves no breadcrumbs: These incidents often show no signs of intrusion. The logs are clean, malware is absent and the emails, voices or videos look legitimate. Further complicating matters, once the money moves, attackers tear down their infrastructure (look-alike domains, disposable mail servers and cloud assets), erasing DNS trails, IPs and header artifacts. The result is a forensic vacuum that points investigators inward and fuels insider threat suspicion, even though the operation was orchestrated entirely from the outside. One of the few artifacts that survives the teardown is the message sitting in the victim's mailbox, as the sketch below shows.
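As a minimal sketch of what an investigator can still check (Python standard library only; INTERNAL_DOMAIN is a placeholder, and the substring checks on the Authentication-Results header are a heuristic rather than a full RFC 8601 parser), here is one way to pull the signals that separate a message that is internal from one that merely claims to be:

```python
# Sketch: summarize where a message claims to come from vs. what the
# receiving mail gateway actually verified. Standard library only.
from email import policy
from email.parser import BytesParser

INTERNAL_DOMAIN = "example.com"  # placeholder: substitute your own domain


def origin_signals(raw_message: bytes) -> dict:
    """Extract origin signals from a raw RFC 5322 message."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)

    # Who the message *claims* to be from.
    addresses = msg["From"].addresses if msg["From"] else ()
    from_domain = addresses[0].domain.lower() if addresses else ""

    # What the receiving server *verified* (Authentication-Results header,
    # stamped by your own gateway). Substring checks are a heuristic.
    auth = str(msg.get("Authentication-Results", "")).lower()

    return {
        "from_domain": from_domain,
        "claims_internal": from_domain == INTERNAL_DOMAIN,
        "spf_pass": "spf=pass" in auth,
        "dkim_pass": "dkim=pass" in auth,
        "dmarc_pass": "dmarc=pass" in auth,
        # An unusual relay path can also hint at an external origin.
        "received_hops": len(msg.get_all("Received") or []),
    }
```

A message that claims an internal From domain yet fails SPF, DKIM and DMARC is evidence pointing outward, not inward, and running that check early can spare an innocent employee weeks of suspicion.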
The human cost, and the way forward
When an attack blurs the line between insider and outsider, the confusion doesn’t stop at the incident itself; it spreads through the organization. Finance questions IT. Leadership questions employees. Boards question leadership. Even after the truth surfaces, the doubt lingers.
That’s the real damage of insider threat confusion: it doesn’t just steal money; it erodes confidence, relationships and judgment.
Solving it doesn’t mean adding more awareness training or isolated controls. It means giving teams a shared view of reality. When finance, vendor management and security teams can see the same context — who made a request, from where and how it aligns with normal behavior — they can tell whether a threat came from inside or out.
That visibility restores trust. It turns finger-pointing into collaboration and lets organizations respond decisively, before uncertainty does more harm than the attack itself.
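To make that shared context concrete, here is an illustration-only sketch of such a cross-check. Every field, name and threshold below is hypothetical, not a product API; the point is that a request is compared against the vendor's own history, visible to finance, vendor management and security alike:

```python
# Illustrative sketch: flag a payment request that deviates from the
# requesting vendor's own history. All names and thresholds are hypothetical.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class PaymentRequest:
    vendor: str
    amount: float
    bank_account: str
    requested_by: str  # who initiated the request
    channel: str       # e.g. "email", "erp", "phone"


def anomaly_flags(request: PaymentRequest,
                  history: list[PaymentRequest]) -> list[str]:
    """Compare a request to the same vendor's past payments; return red flags."""
    past = [p for p in history if p.vendor == request.vendor]
    if not past:
        return ["new vendor: no payment history to compare against"]

    flags = []
    if request.bank_account not in {p.bank_account for p in past}:
        flags.append("destination account differs from every prior payment")
    if request.requested_by not in {p.requested_by for p in past}:
        flags.append("requester has never initiated a payment to this vendor")
    if request.channel not in {p.channel for p in past}:
        flags.append("request arrived over an unusual channel for this vendor")

    amounts = [p.amount for p in past]
    if len(amounts) >= 2 and request.amount > mean(amounts) + 3 * stdev(amounts):
        flags.append("amount is far outside this vendor's normal range")
    return flags
```

None of these flags proves fraud on its own, but surfacing them to every team at the same time is what turns “who did this?” into “where did this come from?”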
About the Author

Shai Gabay
Co-founder, CEO of Trustmi
Shai Gabay is a co-founder and the CEO of Trustmi, a leading end-to-end payment security platform founded in Israel in 2021. Prior to Trustmi, he was General Manager at Opera, VP of Product and Services at Cynet, CIO at Cyberbit and CISO at Discount Bank. Shai holds a Bachelor's degree in software engineering from Shenkar College and a Master's degree in Business Administration and Management from Tel Aviv University. He was also selected for the prestigious one-year, full-scholarship executive excellence program at the Hoffman Kofman Foundation, a program tailored to outstanding alumni of the IDF's elite units, through which he studied with prominent co-founders and leaders at renowned global tech companies and professors at elite universities.
