Survey: Widespread AI Use in the Workplace Creating New Security Risks

Anagram’s latest survey reveals that 78% of employees use generative AI tools on the job — often without company oversight — raising the urgency for modern security training and governance models.
Aug. 5, 2025
5 min read

A new national survey by Anagram, a human-driven security training platform, reveals widespread use of generative AI tools in the workplace, along with a growing pattern of behavior that could put organizations at significant risk.

Conducted in July, the survey captured responses from 500 full-time employees across industries and regions in the United States, representing a range of ages, roles and income levels.

Key survey findings:

  • 78% of employees are already using AI tools like ChatGPT, Gemini and Copilot at work, even when their companies have not established clear policies.

  • 58% admit to pasting sensitive data into large language models, including client records, financial data and internal documents.

  • Nearly half (45%) say they have used banned AI tools on the job.

  • 40% would knowingly violate company policy to finish a task faster.

The release of these findings comes at a time when the Cybersecurity and Infrastructure Security Agency (CISA) is facing major budget cuts and workforce reductions. “With government resources shrinking, private companies must take on a bigger role in securing their networks and educating their teams,” Harley Sugarman, founder and CEO of Anagram, told SecurityInfoWatch. “Our survey makes it clear: employees are willing to trade compliance for convenience. That should be a wake-up call.”

Understanding the risks

Sugarman noted that the biggest security risk from pasting sensitive information into AI tools is data leakage. “When employees copy sensitive internal data — such as a legal document, some code or personal healthcare data — into tools like ChatGPT or Copilot, they can unknowingly violate compliance standards or lose control over how that data is stored,” he said. That data could be stored or logged by the models and potentially accessed by hackers, competitors or members of the public.

To mitigate this risk, Sugarman recommended offering secure AI environments with internal guardrails; training employees to anonymize data before feeding it to large language models; deploying lightweight data loss prevention (DLP) tools that scan and redact sensitive content at the API level; and using real-time nudges, such as pop-up warnings, when risky behavior is detected.
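To make the API-level approach concrete, the sketch below shows roughly what a proxy-side redaction pass could look like. It is an illustration only, not a description of Anagram’s product or any specific DLP vendor’s API; the patterns, names and masking format are assumptions.

```typescript
// Illustrative sketch: mask common sensitive patterns before a prompt leaves the network.
const REDACTION_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "EMAIL", pattern: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g },
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "API_KEY", pattern: /\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b/g },
];

interface RedactionResult {
  redactedText: string;
  findings: string[]; // labels of anything that was masked
}

// Scan a prompt and replace matches with placeholders before forwarding it to a model.
function redactPrompt(prompt: string): RedactionResult {
  let redactedText = prompt;
  const findings: string[] = [];
  for (const { label, pattern } of REDACTION_PATTERNS) {
    const masked = redactedText.replace(pattern, `[REDACTED_${label}]`);
    if (masked !== redactedText) {
      findings.push(label);
      redactedText = masked;
    }
  }
  return { redactedText, findings };
}

// Example: the proxy redacts first, then forwards only the cleaned prompt.
const { redactedText, findings } = redactPrompt(
  "Summarize this contract for jane.doe@example.com, account key sk-abc123def456ghi789jkl012",
);
if (findings.length > 0) {
  console.warn(`Masked ${findings.join(", ")} before forwarding to the LLM`);
}
console.log(redactedText);
```

Real deployments would lean on tested DLP classifiers rather than a handful of regexes, but the flow is the same: intercept, scan, redact, then forward.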

Policy challenges

The finding that nearly half of employees are using banned AI tools underscores a broader policy issue. “Blanket bans don’t work,” Sugarman said, pointing instead to tiered AI governance policies paired with clear communication about the reasoning behind the rules.

"We’re seeing a shift toward tiered approaches to AI governance where some uses are fully banned, some are conditionally approved and some are fully supported. And most importantly, companies need to explain the 'why' behind the policies," he explained. "There’s a big difference between using a model to edit the wording on a sales deck versus loading in customer data that could violate an NDA. Once employees understand why, they’re more likely to comply ... bad rhyme intentional!"

Expanding the insider threat model

Sugarman said security teams should broaden their insider threat models to account for well-intentioned employees misusing powerful tools, not just malicious actors. Examples include a junior engineer pasting an API key into a coding assistant or a manager uploading an employee review for summarization. He suggested monitoring behavioral signals such as large text blocks pasted into browser-based AI tools or spikes in traffic from high-risk departments as early warning signs.
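As a rough illustration of the kind of signal monitoring Sugarman describes, the sketch below counts large paste events per department and flags unusual spikes. The event shape, thresholds and alerting hook are assumptions made for the example, not a specific product’s telemetry.

```typescript
// Illustrative sketch: flag spikes in large text blocks pasted into AI tools, per department.
interface PasteEvent {
  department: string;
  charCount: number; // size of the pasted text block
  destinationHost: string; // e.g. "chat.openai.com" (hypothetical example value)
}

const LARGE_PASTE_CHARS = 2_000; // treat big text blocks as noteworthy (assumed threshold)
const SPIKE_THRESHOLD = 10; // events per department per window (assumed threshold)

const windowCounts = new Map<string, number>();

function recordPasteEvent(event: PasteEvent): void {
  if (event.charCount < LARGE_PASTE_CHARS) return; // ignore small snippets

  const count = (windowCounts.get(event.department) ?? 0) + 1;
  windowCounts.set(event.department, count);

  if (count >= SPIKE_THRESHOLD) {
    // In practice this would feed a SIEM or alerting pipeline, not the console.
    console.warn(`Spike: ${count} large pastes from ${event.department} to AI tools this window`);
  }
}

// Reset counts on a fixed window (e.g. hourly) so the spike check stays meaningful.
setInterval(() => windowCounts.clear(), 60 * 60 * 1000);

// Example event, as it might arrive from a browser extension or network proxy:
recordPasteEvent({ department: "finance", charCount: 5_400, destinationHost: "gemini.google.com" });
```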

Private sector responsibility

With CISA facing reduced funding and staffing, Sugarman emphasized that the private sector must play a larger role in maintaining a security-aware workforce. This includes delivering training tailored to the specific tools employees use and the data they handle, as well as helping them protect themselves in their personal digital lives.

"First, (the private sector) must protect the organization by delivering content tailored to the tools employees use and the data they handle, rather than relying on the traditional, generic, 'one-size-fits-all' model," he explained. "Second, it must teach employees how to protect themselves outside of work, especially as the line between their personal and professional online lives continues to blur."

Modernizing security awareness

Traditional once-a-year security awareness training is no longer enough, Sugarman said. Instead, organizations should deliver targeted, role-based content that fits into employees’ workflows and track engagement based on behavioral change rather than course completion rates.

“Are fewer employees pasting sensitive content into unsafe tools? Are high-risk departments engaging with tailored learning? The answers to these questions are more predictive of organizational safety than any LMS report,” he said.

Long-term, Sugarman sees value in “just-in-time” interventions. “Just-in-time nudges can help reinforce the right behavior exactly when a risk occurs — like a browser-based warning when someone pastes sensitive information into an AI prompt,” he said.
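A just-in-time nudge of the kind Sugarman describes could take the form of a browser content script that intercepts a paste into an AI prompt and asks the employee to confirm. The sketch below is a minimal assumption-laden example; the sensitivity check, wording and injection mechanism are illustrative, not a shipping implementation.

```typescript
// Illustrative sketch: warn at the moment of risk, when a paste looks sensitive.
const SENSITIVE_HINTS = [/\b\d{3}-\d{2}-\d{4}\b/, /confidential/i, /do not distribute/i];

function looksSensitive(text: string): boolean {
  return SENSITIVE_HINTS.some((pattern) => pattern.test(text));
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    if (!looksSensitive(pasted)) return;

    // Nudge the employee before the paste lands in the AI prompt.
    const proceed = window.confirm(
      "This text looks like it may contain sensitive data. Paste it into this AI tool anyway?",
    );
    if (!proceed) {
      event.preventDefault(); // block the paste if the employee backs out
    }
  },
  true, // capture phase, so the check runs before the page's own handlers
);
```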

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].
