NeuralTrust Survey Finds CISOs Struggling to Secure AI Agents as Adoption Outpaces Protection
A new report from NeuralTrust highlights a widening gap between enterprise AI deployment and security preparedness, with most organizations accelerating adoption faster than they can defend against emerging risks.
According to The State of AI Agent Security 2026, 73% of CISOs are “very” or “critically” concerned about the risks tied to AI agents, yet only 30% have mature safeguards in place. Based on responses from more than 160 security leaders worldwide, the survey shows that nearly half of enterprises fall into NeuralTrust’s Reactive maturity tier, while fewer than 10% have reached Proactive governance.
“AI agents are now part of enterprise operations, but the security controls protecting them are still catching up,” said Joan Vendrell, Co-Founder and CEO of NeuralTrust. “Our findings show that Agentic Security has become one of the most urgent and complex challenges in modern cybersecurity.”
The study found that one in five organizations has already experienced an AI agent–related breach, driven primarily by prompt injection and data exposure. Among those affected, 40% estimate losses between $1 million and $10 million, while 13% report impacts exceeding $10 million, putting the damage on par with large ransomware incidents.
While visibility into AI systems is improving, control remains limited. The survey found:
- 42% use activity monitoring to observe agent behavior
- 38% rely on access control to manage permissions
- 31% employ data loss prevention tools
- Only 19% conduct adversarial testing
- Just 16% validate their AI supply chain
Alarmingly, 25% of organizations report having no AI-specific controls at all.
NeuralTrust projects that by 2028, one in three enterprises will run more than 500 AI agents, and by 2030, over half will. As regulatory oversight expands, most organizations will need dedicated AI security specialists to manage risk and compliance.
The report concludes that enterprise resilience in the next phase of AI adoption will depend not on speed, but on trust and the ability to secure autonomous systems before attackers exploit them.
