As enterprises accelerate the adoption of AI assistants, a new class of insider risk is emerging: one driven not just by human users but also by autonomous and semi-autonomous digital agents. This week, Exabeam moved to address that challenge, announcing a significant expansion of its Agent Behavior Analytics (ABA) capabilities designed to bring visibility and control to how AI tools operate inside the enterprise.
The update extends behavior detection and response to widely deployed platforms, including OpenAI ChatGPT and Microsoft Copilot, as well as to existing support for Google Gemini. The goal: transform these tools from opaque productivity engines into measurable, monitorable components of the enterprise attack surface.
The Rise of the “Agentic” Insider Threat
Security leaders have long relied on user and entity behavior analytics (UEBA) to detect anomalous human activity. But as AI agents increasingly authenticate, access systems, and execute workflows, those same models are being stretched beyond their original design parameters.
“AI agents are evolving from simple chatbots into autonomous digital workers,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “When compromised, their activity often appears legitimate. Traditional guardrails—like prompt injection protections—don’t address that risk.”
That shift introduces a fundamental visibility gap. Without insight into what employees are asking AI systems, what data is being shared, and how frequently these tools are being invoked, organizations lack the baseline needed to distinguish normal behavior from potential misuse.
Turning AI Usage Into Security Telemetry
Exabeam’s approach centers on converting AI interaction data into actionable telemetry that feeds directly into threat detection, investigation, and response (TDIR) workflows.
By instrumenting AI platforms, the company enables security teams to track patterns such as query frequency, token consumption, tool usage, and outbound activity. This data is then used to establish behavioral baselines for both users and their associated AI agents.
According to Pete Harteveld, the expansion reflects a broader shift in enterprise operations.
“Organizations are deploying a digital workforce that operates at scale and speed,” he said. “Leaders need to understand how these systems behave internally if they want to manage risk while continuing to innovate.”
Five Capabilities Target the Agentic Attack Surface
The expanded ABA platform introduces a set of tightly integrated capabilities aimed at closing what many see as a growing governance gap:
AI Behavior BaseliningDynamic profiling establishes normal patterns across usage metrics such as API calls, token activity, and session behavior, flagging anomalies, such as sudden spikes, that may indicate misuse.Prompt and Model Abuse Detection
A significantly expanded detection library identifies prompt injection, model manipulation, and shadow AI usage at the point of entry, rather than after downstream impact.
Identity and Privilege Monitoring
The platform applies identity governance principles to AI systems, detecting unusual role assignments, privilege escalations, and permission changes tied to agent activity.
Agent Lifecycle Monitoring
Security teams gain visibility into the full lifecycle of AI agents, from creation to invocation, providing auditable tracking that has historically been absent in most environments.
Alignment With OWASP Agentic AI Risks
Coverage mapped to the OWASP Top 10 for Agentic AI introduces a structured framework for measuring and managing this emerging threat category.
From Alert Fatigue to Risk Prioritization
For practitioners, the value proposition extends beyond visibility to operational efficiency. As AI-driven activity increases, distinguishing meaningful threats from benign noise becomes critical.
That challenge is echoed by Nithin Reddy, Global VP of Cybersecurity at Dayforce, who noted that traditional detection models struggle to keep pace with hybrid human-agent environments.
“What we need is clear behavior visibility and a way to quantify risk,” Reddy said. “Otherwise, teams end up chasing thousands of low-value alerts instead of focusing on what actually matters.”
Platform Enhancements Support Analyst Workflows
In addition to the ABA expansion, Exabeam introduced updates to its New-Scale and LogRhythm platforms to improve usability for security analysts and administrators. Enhancements focus on workflow automation, alert prioritization, and streamlined investigation processes—areas that remain under pressure as SOC teams contend with increasing data volumes and complexity.
A Defining Moment for AI Governance
The announcement underscores a broader inflection point for enterprise security: the transition from securing users to securing ecosystems that include both humans and autonomous agents.
As organizations continue to embed AI into core business processes, the ability to monitor, baseline, and govern that activity is quickly becoming a prerequisite rather than an enhancement.
For more information, visit www.exabeam.com/whats-new or www.exabeam.com.
