Even Security Leaders Are Breaking AI Rules: CalypsoAI Report

The CalypsoAI survey reveals that many security leaders and professionals are knowingly violating company AI policies, underscoring growing insider risks across both physical and cybersecurity domains.
Aug. 14, 2025

A new study from AI security firm CalypsoAI adds weight to concerns raised in SecurityInfoWatch.com's recent article, "Survey: Widespread AI Use in the Workplace Creating New Security Risks."

While that earlier survey found employees across industries were willing to sidestep company AI policies, CalypsoAI’s Insider AI Threat Report shows the problem may be even more acute within the ranks of security leaders and professionals.

The survey of 1,002 full-time U.S. office workers, conducted by research firm Censuswide in June, found that 42% of security leaders and professionals admit they would use AI in violation of company policy if it made their work easier. Nearly half (46%) have already submitted proprietary company information to AI systems to complete a task, which is the highest rate among the highly regulated industries studied.

Perhaps more troubling, 58% of these security decision-makers say they trust AI more than their human colleagues, and one-third report feeling no guilt about breaking AI rules. Sixteen percent believe their IT team would be unable to detect a leak caused by AI, while 48% say their company’s AI policy is unclear, giving them latitude to decide for themselves how and when to use the technology.

“These numbers should be a wake-up call,” said Donnchadh Casey, CEO of CalypsoAI. “The most dangerous threats aren’t coming from outside the firewall, they’re coming from trusted insiders empowered by AI. When policy becomes optional and trust shifts from people to AI, security programs need more than compliance checklists. They need visibility, enforcement and cultural change.”

A growing workplace risk beyond security

The report’s broader findings reinforce the idea that unauthorized AI use is becoming normalized across the workforce, even in highly regulated industries such as finance, healthcare, and security. Overall, 52% of U.S. employees are willing to break policy if AI makes their job easier, and 28% admit to using AI to access sensitive data or documents. Nearly half (45%) of all respondents trust AI more than their coworkers, and more than a third (38%) would rather have AI as their manager.

CalypsoAI also found that executives — including those overseeing security operations — are not immune to risky behavior. Half of C-suite leaders surveyed say they would prefer an AI manager over a human, 28% have submitted proprietary company information to AI, and 38% admit they don’t know what an AI agent is, which is the highest level of unfamiliarity across all job levels.

For security leaders across the physical, IT and cybersecurity domains, the implications are clear: technical safeguards alone are insufficient. CalypsoAI’s analysis calls for organizations to pair technical controls with employee education, clear and enforceable policies, and a workplace culture that prioritizes AI security.

The Insider AI Threat Report offers further evidence that the rapid adoption of generative AI tools is outpacing the ability of many organizations to manage the risks. As CalypsoAI warns, inappropriate use of AI “isn’t a future threat — it’s already happening inside organizations today.”

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].
