Survey Finds AI Trust Eroding Amid Security Risks

A new AvePoint report reveals that while enterprises are accelerating AI adoption, more than three-quarters have suffered AI-related breaches, underscoring the urgent need for stronger governance, data discipline, and trust-building strategies.
Oct. 8, 2025

Artificial intelligence has moved from concept to reality, but trust in its outcomes is showing signs of strain. AvePoint’s newly released report, “The State of AI in 2025: Go Beyond the Hype to Navigate Trust, Security and Value,” finds that while organizations are racing to deploy AI at scale, most are struggling to balance innovation with governance.

The research, based on input from more than 750 business leaders across 26 countries, shows that more than 75% of organizations have experienced AI-related security breaches. In many cases, security and data-quality concerns have delayed deployments by as much as 12 months. AvePoint’s findings highlight that the biggest differentiator in AI success is not adoption speed but stewardship, with data quality, policy evolution and human oversight emerging as the cornerstones of trusted AI programs.

Among the report’s key findings:

• AI deployment delays average nearly six months, with some stretching to a year due to data quality and security challenges.
• Inaccurate AI output (68.7%) and data security concerns (68.5%) are the leading factors slowing the rollout of generative AI assistants.
• Nearly one-third (32.5%) of respondents cite AI hallucinations as their top generative AI threat.
• More than 64% say employees’ failure to see value in AI is a major barrier to adoption, underscoring the need for better enablement and communication.

Experts call for governance, data discipline and security-first design

AvePoint accompanied the report’s release with insights from cybersecurity leaders who warn that AI’s rapid evolution has made it a front-line security issue.

Diana Kelley, CISO at Noma Security, said AI risks have shifted “from a watch list item to a front-line concern.” She advised that effective defenses start with a complete inventory of AI systems, including agentic components, to establish a baseline for governance and risk management. Kelley emphasized the importance of adopting an AI Bill of Materials (AIBOM) to strengthen supply-chain visibility, combined with rigorous red-team testing, runtime monitoring and logging to detect and block attacks in real time.
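For illustration, here is a minimal sketch of what such an inventory entry and runtime logging hook might look like. The record fields and function names are hypothetical, loosely modeled on AIBOM ideas rather than any formal standard or the approach Kelley describes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json
import logging

# Illustrative inventory record for one AI system; field names are assumptions.
@dataclass
class AIAssetRecord:
    name: str                     # internal system name
    owner: str                    # accountable team or person
    model_provider: str           # vendor or in-house
    model_version: str
    agentic_components: list = field(default_factory=list)  # tools/plugins the system can invoke
    data_sources: list = field(default_factory=list)        # training or retrieval sources
    last_red_team_review: str = "never"                      # date of last adversarial test

    def to_json(self) -> str:
        return json.dumps(self.__dict__)

# Minimal runtime logging hook: every prompt/response pair is recorded so
# suspicious interactions can later be reviewed or blocked by other tooling.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-runtime")

def log_interaction(asset: AIAssetRecord, prompt: str, response: str) -> None:
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "asset": asset.name,
        "model_version": asset.model_version,
        "prompt": prompt,
        "response": response,
    }))

if __name__ == "__main__":
    record = AIAssetRecord(
        name="support-assistant",
        owner="customer-ops",
        model_provider="example-vendor",
        model_version="1.2.0",
        agentic_components=["ticket-search", "email-draft"],
        data_sources=["kb-articles"],
        last_red_team_review="2025-09-15",
    )
    log_interaction(record, "Reset my password", "Here are the steps...")
```

Even a registry this simple gives governance teams a baseline: what AI systems exist, who owns them, what they can touch, and when they were last tested.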

Nicole Carignan, senior vice president of security and AI strategy and field CISO at Darktrace, noted that before meaningful governance can occur, organizations must first master foundational data science practices. “AI systems are only as reliable as the data they’re built on,” she said, adding that proper data sourcing, classification, and security are essential to ensure accuracy and accountability.
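As a rough illustration of that point, the sketch below tags datasets with provenance and classification fields and filters out anything unreviewed or too sensitive before it feeds a model. The labels, field names and allow-list are assumptions for the example, not a prescribed scheme.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels; real classifications vary by organization.
ALLOWED_FOR_TRAINING = {"public", "internal"}

@dataclass
class DatasetRecord:
    name: str
    source: str           # where the data came from (provenance)
    classification: str   # e.g. "public", "internal", "confidential"
    reviewed: bool        # has a human validated the labeling?

def approve_for_training(datasets: list[DatasetRecord]) -> list[DatasetRecord]:
    """Return only datasets whose classification and review status permit use."""
    return [d for d in datasets if d.reviewed and d.classification in ALLOWED_FOR_TRAINING]

candidates = [
    DatasetRecord("support-tickets", "crm-export", "confidential", True),
    DatasetRecord("product-docs", "public-site", "public", True),
    DatasetRecord("forum-scrape", "third-party", "public", False),
]
print([d.name for d in approve_for_training(candidates)])  # ['product-docs']
```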

Carignan urged executives to establish tailored AI policies aligned to each organization’s risk profile and regulatory landscape. “There is no one-size-fits-all approach,” she said, stressing that leadership commitment and cross-functional collaboration are critical. She also pointed to the need for ongoing workforce education as AI becomes embedded across business functions: “Governance must be dynamic, real-time, and embedded from the start.”

New threats demand new defenses

John Watters, CEO and managing partner of iCOUNTER, warned that traditional security models are no longer adequate in the face of AI-driven attack methods. “Organizations need more than mere actionable intelligence,” he said. “They need AI-powered analysis of attack innovations and insights into their own specific weaknesses.”

Randolph Barr, CISO at Cequence Security, said AI’s evolution toward “agentic” systems is compounding the challenge. He cautioned that basic security controls are too often overlooked in the race to market. “When engineering teams cut corners to meet launch deadlines, those shortcuts make their way into production,” Barr said. “Security needs to be part of the development lifecycle from day one, not an add-on at launch.”

Ishpreet Singh, CIO at Black Duck, warned that malicious actors are using generative AI and deepfakes to create highly realistic false narratives capable of influencing public perception and eroding brand credibility. He described such campaigns as a direct threat to organizational trust and long-term value.

Mimoto CEO and co-founder Kris Bondi said enterprises should approach AI with a clear purpose. “Utilizing AI for the sake of using AI is destined to fail,” she said, adding that successful programs start with identifying specific problems AI can solve.

As Bondi emphasized, “Well-trained and monitored AI agents can help reduce the volume of potential security threats that a team must handle directly. In theory, this gives security professionals more time to analyze complex risks — which is where human judgment still matters most.”
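One way to picture that triage model is an agent that scores alerts and routes only the ambiguous or high-risk ones to people. In this sketch the risk score and thresholds are illustrative stand-ins, not a description of Mimoto’s product or any specific detection pipeline.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    description: str
    risk_score: float  # stand-in for a trained model's confidence, 0.0 to 1.0

# Hypothetical thresholds; real values would be tuned and monitored over time.
AUTO_CLOSE_BELOW = 0.2
ESCALATE_ABOVE = 0.7

def triage(alerts: list[Alert]) -> dict:
    """Route alerts so analysts only see the ones that need human judgment."""
    routed = {"auto_closed": [], "agent_handled": [], "analyst_queue": []}
    for a in alerts:
        if a.risk_score < AUTO_CLOSE_BELOW:
            routed["auto_closed"].append(a.alert_id)
        elif a.risk_score > ESCALATE_ABOVE:
            routed["analyst_queue"].append(a.alert_id)   # complex or high risk: human review
        else:
            routed["agent_handled"].append(a.alert_id)   # agent investigates and logs the outcome
    return routed

alerts = [Alert("A-1", "failed login burst", 0.15),
          Alert("A-2", "unusual data egress", 0.85),
          Alert("A-3", "new OAuth grant", 0.45)]
print(triage(alerts))
```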

The complete report is available to download from AvePoint’s website.

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].
