FBI Warns of AI-Generated Deepfake Videos Fueling Kidnapping, Extortion Scams

An FBI alert and new fraud data from Entrust show criminals are increasingly using AI-generated deepfake videos to escalate kidnapping and extortion scams, raising new risks for consumers and families.
Dec. 12, 2025

The FBI is warning that criminals are increasingly using generative AI to create deepfake “proof-of-life” videos as part of kidnapping and extortion schemes, signaling a new escalation in identity-based fraud tactics.

According to the alert issued by the FBI’s Internet Crime Complaint Center, attackers are leveraging artificial intelligence to fabricate convincing videos and images to pressure victims into making rapid payments.

Data from Entrust's newly released 2026 Identity Fraud Report suggests the FBI's warning reflects a broader shift in fraud activity driven by AI adoption. The report shows that one in five biometric fraud attempts now involves deepfakes, while deepfaked selfies have increased 58% year over year. Injection attacks, in which fraudsters feed pre-recorded or synthetic biometric data directly into authentication systems to bypass liveness checks, surged 40%, according to the report.

Entrust’s findings also indicate that fraud operations are becoming continuous and highly organized. Fraud activity now operates as a “24/7 enterprise,” with peak activity occurring overnight, when defenses may be reduced.

Vincent Guillevic, head of the Fraud Lab at Entrust, tells SecurityInfoWatch that virtual kidnappings represent a dangerous evolution in social engineering tactics.

“Virtual kidnappings are a frightening evolution of a broader trend we’re seeing: fraudsters using AI and psychological manipulation to manufacture urgency, panic, and compliance,” Guillevic said. “Our latest data shows deepfakes now account for one in five biometric fraud attempts — a clear sign that criminals are adopting AI faster than defenses can keep up.”

Guillevic added that many of these fraud operations function like global enterprises, using scalable tools to increase volume and speed. “These organized fraud rings operate much like global enterprises. They use AI-driven tools and fraud-as-a-service kits to scale their operations at low cost, producing convincing fake photos or videos scraped from social media and delivered at speed,” he said. “The barrier to entry for highly sophisticated scams has never been lower.”

The FBI warning and Entrust’s data highlight growing risks for consumers as AI-driven impersonation techniques become more accessible. Guillevic emphasized that awareness and verification remain critical defenses. “Awareness is your best defense,” he said. “Establish a family code word known only to your household, and always try to contact your loved one directly before responding to any ransom demand. Fraudsters thrive on fear. Taking even a moment to pause and verify information can stop a scam in its tracks.”

The FBI encourages the public to remain vigilant and report suspected scams through its Internet Crime Complaint Center.

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].
