AI agents are increasingly acting like digital employees within enterprise environments, but organizations are largely failing to treat them as such, according to a new report from BeyondID, a managed identity solutions provider.
The company’s 2025 survey of U.S.-based IT leaders uncovered a troubling disconnect: While a majority of organizations claim readiness for artificial intelligence in network security, fewer than half monitor the access or behavior of the very AI systems they deploy.
AI agents are autonomous software systems that can interact with enterprise applications, make decisions and carry out tasks — ranging from data analysis to system access and workflow execution — without direct human input. As these systems become embedded in daily operations, they present a new category of identity risk: digital insiders with broad access and minimal accountability.
The report, titled “AI Agents: The New Insider Threat?,” highlights a rising risk of AI agents becoming unmonitored, high-privilege actors within networks, operating autonomously and accessing sensitive systems with little oversight.
“Organizations often overestimate their AI security readiness,” Arun Shrestha, CEO of BeyondID, explained to SecurityInfoWatch. “This often occurs because of cognitive biases, where they believe they have more expertise or control than they actually do, and due to a lack of deep understanding of AI-specific risks.”
Competitive pressure is also fueling the problem. “Rapid adoption of AI tools can increase this overconfidence, sometimes without proper oversight — a.k.a. shadow AI,” Shrestha added, referring to AI systems deployed outside formal security review.
AI agents need identity governance, not just infrastructure
According to Shrestha, part of the issue stems from how companies conceptualize AI. “Many businesses treat AI as passive technology rather than active agents capable of influencing systems and decisions,” he said. This perception gap, coupled with the rise of shadow AI, has allowed many organizations to overlook critical governance functions.
To close this gap, BeyondID urges organizations to rethink their identity and access management (IAM) strategies to reflect the real-world role AI systems are now playing.
“Security leaders must treat AI agents as first-class identities, just like humans,” Shrestha explained. “Applying oversight similar to that for humans and service accounts — such as classifying entitlements, monitoring activities, enforcing policies and integrating AI behavior into threat detection tools — is essential.”
Fortunately, this shift doesn’t require a full overhaul of existing IAM infrastructure. “Integrators and CISOs can carefully update their IAM strategies for AI agents as ‘digital employees’ by recognizing these agents as distinct identities within their existing IAM systems,” Shrestha said.
He recommends that organizations assign clear ownership and descriptive metadata to each AI agent to support visibility and lifecycle management. In parallel, applying contextual access controls — such as restricting tasks based on time, sensitivity or location — helps reduce exposure without disrupting operations. Just-in-time provisioning, automated access reviews and behavior analytics can all be layered in to secure these identities within current frameworks.
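In practice, that might look something like the sketch below, which registers an AI agent as a first-class identity with a named owner and descriptive metadata, then applies a contextual check before each action. The `AgentIdentity` and `AccessPolicy` types, entitlement names and time window are illustrative assumptions, not any particular IAM product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: these types are not tied to any specific IAM product.
@dataclass
class AgentIdentity:
    agent_id: str                         # unique identifier, like a username for a human
    owner: str                            # accountable human or team
    description: str                      # what the agent is for (supports lifecycle reviews)
    entitlements: set = field(default_factory=set)

@dataclass
class AccessPolicy:
    allowed_entitlements: set
    allowed_hours_utc: range              # e.g. business hours only
    max_sensitivity: str                  # "public" < "internal" < "restricted"

SENSITIVITY_ORDER = ["public", "internal", "restricted"]

def authorize(agent: AgentIdentity, policy: AccessPolicy,
              entitlement: str, data_sensitivity: str) -> bool:
    """Contextual check: the agent must hold the entitlement, the policy must allow it,
    the request must fall inside the allowed window, and the data must not exceed
    the policy's sensitivity ceiling."""
    now_hour = datetime.now(timezone.utc).hour
    return (
        entitlement in agent.entitlements
        and entitlement in policy.allowed_entitlements
        and now_hour in policy.allowed_hours_utc
        and SENSITIVITY_ORDER.index(data_sensitivity)
            <= SENSITIVITY_ORDER.index(policy.max_sensitivity)
    )

# Example: a hypothetical claims-triage agent with a named owner and a narrow policy.
agent = AgentIdentity("claims-triage-bot-01", owner="ops-identity-team",
                      description="Routes insurance claims for review",
                      entitlements={"claims:read"})
policy = AccessPolicy(allowed_entitlements={"claims:read"},
                      allowed_hours_utc=range(13, 22),   # roughly US business hours, UTC
                      max_sensitivity="internal")
print(authorize(agent, policy, "claims:read", "internal"))  # True only inside the allowed window
```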
Healthcare risks: high stakes, limited controls
While AI-driven efficiencies are welcomed across industries, the healthcare sector appears especially vulnerable. The report shows that 61% of healthcare organizations experienced an identity-related attack in the past year, yet only 17% identified compliance as a top concern. Meanwhile, 42% failed an identity-related audit.
Shrestha said the speed of AI adoption is outpacing governance in this highly regulated environment. “The emergence of AI agents and bots introduces new access points with poor lifecycle management,” he noted. “These systems often have dynamic, just-in-time access and may skip review cycles.”
Healthcare organizations also face the added burden of regulatory compliance. “Using shadow AI tools without approval increases HIPAA risk, and limited auditability of AI decisions complicates compliance,” Shrestha warned. The complexity intensifies in federated healthcare ecosystems, where AI must operate across electronic medical records (EMRs), claims systems and cloud platforms.
To help healthcare providers manage the risks, BeyondID advocates for zero-trust principles and AI-native IAM solutions. “Integrators like BeyondID support healthcare by modernizing identity governance and deploying AI-native IAM solutions that adopt zero-trust principles, automate compliance and enhance data protection,” he said. This includes authenticating all interactions — human and machine — while improving observability with analytics and embedding governance in onboarding workflows.
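As a rough illustration of authenticating every interaction, the sketch below issues a short-lived signed credential to an agent and verifies it on each request rather than only at login. The shared-secret HMAC scheme is a stand-in assumption; production zero-trust deployments typically rely on mTLS or tokens issued by the IAM platform.

```python
import hashlib
import hmac
import time

# Minimal sketch of "authenticate every interaction": each request from an AI agent
# carries a short-lived, signed credential that is verified per call.
SECRET = b"rotate-me-often"          # placeholder; a real deployment would use managed secrets
TOKEN_TTL_SECONDS = 300

def issue_token(agent_id: str) -> str:
    issued_at = str(int(time.time()))
    sig = hmac.new(SECRET, f"{agent_id}.{issued_at}".encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}.{issued_at}.{sig}"

def verify_token(token: str) -> bool:
    try:
        agent_id, issued_at, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{agent_id}.{issued_at}".encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - int(issued_at) < TOKEN_TTL_SECONDS
    return hmac.compare_digest(sig, expected) and fresh

token = issue_token("claims-triage-bot-01")
print(verify_token(token))           # checked on every request, not just at login
```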
Monitoring AI: behavior baselines and anomaly detection
The report emphasizes that AI agents, even when not malicious, can cause harm if left unchecked. Shrestha recommends continuous monitoring and behavioral analysis to keep AI activity within policy boundaries.
“Security teams should look for unusual behavior, such as odd access patterns, privilege escalations, attempts to circumvent permissions, large data transfers or odd changes in system resources,” he said.
A strong monitoring strategy starts with establishing behavioral baselines — understanding what “normal” looks like for each AI agent. From there, real-time analytics and adaptive risk scoring can help distinguish benign automation from potential threats.
“Combining automated alerts with human oversight ensures that real threats are identified without generating too many false alarms,” Shrestha noted.
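A stripped-down version of that idea is sketched below: each agent’s activity is scored against its own history, and only sharp deviations are escalated to a human reviewer. The z-score approach, the sample feature (records accessed per hour) and the alert threshold are illustrative assumptions, not the report’s methodology.

```python
import statistics

# Minimal sketch: flag an AI agent's activity when it deviates sharply from its own
# baseline. Real deployments would use richer features and models.
class AgentBaseline:
    def __init__(self, history_size: int = 500):
        self.history = []                 # e.g. records-accessed-per-hour samples
        self.history_size = history_size

    def observe(self, value: float) -> None:
        self.history.append(value)
        self.history = self.history[-self.history_size:]

    def risk_score(self, value: float) -> float:
        """Z-score of the new observation against the agent's own baseline."""
        if len(self.history) < 30:        # not enough data to judge yet
            return 0.0
        mean = statistics.fmean(self.history)
        stdev = statistics.pstdev(self.history) or 1.0
        return abs(value - mean) / stdev

baseline = AgentBaseline()
for sample in [120, 95, 110, 130, 105] * 10:      # normal hourly access counts
    baseline.observe(sample)

score = baseline.risk_score(4_000)                # sudden bulk data access
if score > 3.0:                                    # alert, but keep a human in the loop
    print(f"anomaly score {score:.1f}: escalate for review")
```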
He also advised building intent logging into agent design. “It’s also helpful for AI agents to log their intended actions so teams can verify if behaviors align with their approved functions,” he said, adding that this approach promotes transparency and makes it easier to trace unauthorized or unintended activity.
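One simple way to realize that, sketched below, is for the agent to emit a structured intent record and check it against an approved scope before acting. The `APPROVED_ACTIONS` set and action names are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Illustrative pattern: the agent declares what it intends to do and why before doing it,
# so reviewers can diff declared intent against the agent's approved scope.
APPROVED_ACTIONS = {"claims:read", "report:generate"}

def log_intent(agent_id: str, action: str, reason: str) -> bool:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "intended_action": action,
        "reason": reason,
        "within_approved_scope": action in APPROVED_ACTIONS,
    }
    logging.info(json.dumps(record))
    return record["within_approved_scope"]

# The agent calls this before acting; out-of-scope intents can be blocked or escalated.
if log_intent("claims-triage-bot-01", "claims:export_all", "monthly summary"):
    pass  # proceed with the action
else:
    pass  # block and route to a human reviewer
```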
Treat AI as a user, because that’s how it’s acting
As AI continues to evolve from a static tool to a dynamic operator, enterprises must adapt their security posture accordingly. Governance frameworks, zero-trust policies and identity oversight are all crucial elements in minimizing risk and maximizing accountability.
“AI is acting like a user — logging in, making decisions, accessing systems,” Shrestha said. “It’s time security teams start treating it like one.”