Agentic AI Threats Surge Ahead of Security Spending, New Research Finds

Arkose Labs report reveals enterprises expect imminent incidents—but are investing just 6% of budgets to stop them.
March 31, 2026
4 min read

Key Highlights

  • 97% of security leaders anticipate AI-driven incidents within the next year, yet only 6% of security budgets are allocated to AI-related risks.
  • Traditional security models struggle to interpret autonomous AI activity that operates under legitimate credentials and mimics trusted users, complicating detection and response.
  • Fragmented ownership and limited executive engagement hinder effective governance of AI risks, increasing exposure to insider threats and scalable attacks.
  • Attribution remains a critical challenge: AI agents' actions are difficult to trace back to a specific origin, complicating incident response and forensic investigations.

A new global study from Arkose Labs underscores a growing imbalance in enterprise security strategy: organizations are rapidly deploying agentic AI capabilities while failing to invest proportionally in defenses against them.

The company’s latest Agentic AI Security Report, produced in collaboration with Tech Studio™, surveyed 300 enterprise security leaders and found near-universal concern about impending threats. An overwhelming 97% of respondents expect a material AI agent–driven security or fraud incident within the next 12 months, with nearly half anticipating impact within just six months.

Yet despite that urgency, enterprises are allocating only about 6% of their security budgets to AI-agent–related risk, highlighting one of the most significant emerging gaps in cyber risk management.

From External Threats to Autonomous Insider Risk

The findings reflect a fundamental shift in how threats manifest inside modern enterprises. Unlike traditional attack vectors, agentic AI systems operate with legitimate credentials and interact across systems in ways that closely mimic trusted users.

That evolution is exposing the limitations of legacy security models.

“These models were designed around human behavior and external threats,” said Frank Teruel, Chief Operating Officer at Arkose Labs. “Autonomous, AI-driven activity operating continuously across services is harder to interpret and isolate.”

Teruel emphasized that enterprises are now facing a dual challenge: distinguishing between malicious AI agents and authorized ones, and identifying when trusted agents themselves become compromised or behave unpredictably.

“It’s the ultimate insider threat,” he said.


Fragmented Ownership and Limited Executive Engagement

Beyond funding gaps, the report points to structural issues in how organizations govern AI-related risk. Responsibility for agentic AI security remains fragmented across security, fraud, identity, and AI teams, limiting coordinated response.

At the same time, executive-level engagement has not kept pace with the speed of AI deployment, further compounding exposure.

Key findings include:

  • Nearly all enterprises expect near-term AI-driven incidents
  • Budget allocation for AI-agent risk remains disproportionately low
  • 10% of organizations do not track AI-agent risk separately at all
  • Governance maturity is lagging behind deployment velocity
  • Attribution—understanding who or what is acting within systems—remains the weakest link

Attribution and Scale: The Breaking Points

One of the most critical challenges identified in the report is attribution. As AI agents operate across systems using valid credentials, tracing actions back to a specific origin—human or machine—becomes increasingly complex.

This lack of visibility directly impacts incident response and forensic investigation, particularly as attacks scale.
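One way to reason about the attribution gap is at the logging layer: when a human and an AI agent act under the same credential, undifferentiated logs record identical events. The sketch below illustrates the idea of tagging audit events with actor provenance so shared-credential activity stays traceable. The field names (`actor_type`, `agent_id`) and structure are hypothetical, for illustration only, not drawn from the report.

```python
# Illustrative sketch: attaching actor provenance to audit events so that
# actions taken under shared credentials remain attributable.
# Field names here are hypothetical, not from the Arkose Labs report.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditEvent:
    credential_id: str        # the credential used (may be shared)
    actor_type: str           # "human" or "ai_agent"
    agent_id: Optional[str]   # stable identifier for the agent, if any
    action: str
    timestamp: str

def record_event(credential_id: str, actor_type: str, action: str,
                 agent_id: Optional[str] = None) -> str:
    """Serialize an audit event as a JSON log line."""
    event = AuditEvent(
        credential_id=credential_id,
        actor_type=actor_type,
        agent_id=agent_id,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(event))

# Without the actor_type/agent_id fields, both events below would look
# identical in the logs: same credential, same action.
print(record_event("svc-billing-01", "human", "export_invoices"))
print(record_event("svc-billing-01", "ai_agent", "export_invoices",
                   agent_id="agent-7f3c"))
```

The point is not the schema itself but the principle: forensic attribution requires provenance metadata to be captured at write time, since it cannot be reconstructed after the fact from credential use alone.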

Current defenses, the report notes, are not architected to handle the speed, volume, or persistence of AI-driven activity. Traditional detection and mitigation frameworks struggle to keep up with autonomous systems that can operate continuously and adapt in real time.

Defense-in-Depth vs. Economic Reality of Attacks

Arkose Labs positions its Arkose Titan platform as a response to this new threat paradigm, emphasizing a defense-in-depth approach that integrates detection, behavioral analysis, and adaptive mitigation.

The strategy is not just to block attacks, but to make them economically unviable.

By introducing dynamic friction and increasing the cost of execution for adversaries, the platform aims to disrupt attacker ROI—a concept gaining traction as organizations confront automated, scalable threats.
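The attacker-ROI logic can be made concrete with simple arithmetic: an automated campaign is profitable only while expected revenue per attempt exceeds cost per attempt. The sketch below is a minimal model of that trade-off; all numbers are hypothetical and chosen purely to illustrate how raising per-attempt cost can flip a campaign from profit to loss.

```python
# Illustrative sketch of attacker economics. All figures are hypothetical.

def attacker_roi(value_per_success: float,
                 success_rate: float,
                 attempts: int,
                 cost_per_attempt: float) -> float:
    """Net profit for a campaign of automated attempts."""
    revenue = value_per_success * success_rate * attempts
    cost = cost_per_attempt * attempts
    return revenue - cost

# Baseline: cheap automated attempts are profitable at scale.
baseline = attacker_roi(value_per_success=5.0, success_rate=0.02,
                        attempts=1_000_000, cost_per_attempt=0.01)

# With dynamic friction raising the time/compute cost of each attempt,
# the same campaign runs at a loss.
with_friction = attacker_roi(value_per_success=5.0, success_rate=0.02,
                             attempts=1_000_000, cost_per_attempt=0.25)

print(baseline)       # 90000.0
print(with_friction)  # -150000.0
```

Under this toy model, a defender does not need to block every attempt, only to push the per-attempt cost past the break-even point (here, value_per_success x success_rate = $0.10 per attempt).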

A Market Signal Enterprises Can’t Ignore

The report’s respondent base—spanning global financial institutions, major technology platforms, and Fortune 500 enterprises—signals that this is not a theoretical concern. It is an operational reality already shaping risk models at scale.

Arkose Labs, which has been named to the Deloitte Fast 500 list for five consecutive years, counts major digital brands among its customers, including Microsoft, Meta, Roblox, and Adobe.

Closing the Gap Before It Widens

The central takeaway is clear: enterprises are entering an era where AI is both an accelerator of business value and a multiplier of risk.

But the imbalance between deployment and defense suggests many organizations are still operating with outdated assumptions.

As agentic AI becomes embedded more deeply in enterprise workflows, closing that gap through funding, governance, and technology modernization will be critical to maintaining operational resilience.

Those that fail to adapt may find that the next major breach isn’t driven by external actors alone—but by the very systems designed to drive efficiency and growth.
