Lack of AI Governance Is Putting Organizations Across the Globe at Serious Risk

How insufficient oversight of AI is creating cyber and compliance risks, and a widening governance gap, for organizations worldwide.
Oct. 8, 2025

Earlier this year, Verizon released its annual Data Breach Investigations Report (DBIR), which examines the current state of cybersecurity through the lens of emerging and continuing trends. As AI becomes increasingly ubiquitous across nearly every industry, it comes as little surprise that the technology has become a fixture in the report. And while Verizon is careful to note that adversaries are not yet using AI to create entirely new attack tactics, they are using the technology to increase both the scale and effectiveness of their current methods. Tactics like social engineering are already challenging to stop, and attackers who use AI to generate more convincing phishing emails and SMS scams only make the problem worse. 

However, the most pressing issue regarding AI is the lack of effective governance practices. The DBIR highlighted a wide range of AI governance issues, including the widespread use of generative AI solutions outside of corporate policy and enforcement capabilities, which leads to significant security blind spots. A separate research report published this year noted that fewer than half of organizations have specific strategies in place to combat AI threats. With AI usage continuing to expand and technology like agentic AI becoming increasingly mainstream, organizations cannot afford to wait. They need a plan for AI governance before it’s too late.  

New Challenges Put AI Governance in the Spotlight

As AI models become faster and more capable, organizations are implementing and using them at an accelerating pace. This has enabled them to create a wide range of new efficiencies within their business processes, but it has also introduced risk. The apparent advantages of specific AI capabilities have fostered a “don’t get left behind” mentality, driving rapid AI adoption, sometimes before the technology has been adequately vetted. Meanwhile, generative AI tools such as ChatGPT and Perplexity are widely used and easily accessible, making them difficult for employers to regulate. As the Verizon DBIR notes, employers have limited oversight over what employees share with ChatGPT when using personal devices and non-corporate accounts. 

That’s a significant problem, because data leakage remains one of the most common (and potentially damaging) issues with AI usage today. Employees may not understand whether specific data is safe to share with AI-based solutions, which can result in sensitive or confidential data making its way into public AI models (or being stolen by crafty attackers via prompt manipulation and other emerging techniques). In fact, Gartner predicts that by 2027 more than 40% of AI-related data breaches will be caused by the improper use of generative AI, explicitly citing unauthorized cross-border data transfers by users. When it comes to AI, businesses are slowly realizing that security isn’t even their most significant challenge; it’s governance. 

Establishing Strong AI Governance Practices

For organizations seeking to strengthen their approach to AI governance, change begins at the top. Business leaders need to fully support the initiative because they set the tone for the organization and, more importantly, its risk appetite. When establishing governance practices, it’s essential to understand the organization’s overall risk tolerance. That means the organization itself needs a “risk culture” in which employees consistently consider the risks associated with their decisions. It’s not enough to understand the potential advantages of AI usage: businesses need to know the risks that come with different AI tools, understand whether those risks align with the organization’s overall risk tolerance, and consider the operational and reputational consequences of shadow AI and the misuse of approved AI models. 

This process starts with intake: collaborating with the various business units to understand where AI is already being used and where employees would like to use it in the future. It’s also essential to have a committee specifically tasked with developing acceptable use policies for AI, and several standards and security bodies have already published guidance that can help those committees move forward. Organizations like NIST and OWASP provide high-level frameworks that, while not comprehensive, give businesses a more thorough understanding of the risks posed by AI and how to navigate them. Armed with that knowledge, the AI committee can approve and reject use cases from a more informed perspective.  

Organizations that already have a robust risk management program will have an easier time implementing strong AI governance. As technology has advanced, the regulatory and compliance environment has worked (with varying degrees of success) to establish minimum security and data privacy standards. Fortunately, today’s organizations have access to a wide range of solutions that help automate elements of the risk management process, which can be a significant aid in establishing AI guidelines. The ability to continuously map acceptable use policies and AI use cases against existing standards and frameworks can help organizations more easily visualize whether they are adhering to AI best practices. Perhaps more importantly, it also provides them with a basis for comparison when it comes to potential partners—allowing businesses to move on from vendors using AI in risky or irresponsible ways before they can cause real damage. 
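
As a purely illustrative sketch of what that kind of continuous mapping can look like, the short Python example below compares each AI use case against a baseline set of required framework controls and reports any gaps. The control identifiers, the UseCase structure, and the covered_controls field are hypothetical assumptions made for illustration, not the data model of any particular GRC platform or framework.

from dataclasses import dataclass, field

# Hypothetical control identifiers, loosely inspired by the NIST AI RMF functions
# (Govern, Map, Measure, Manage); the exact labels here are illustrative assumptions.
REQUIRED_CONTROLS = {"GOVERN-1", "MAP-1", "MEASURE-2", "MANAGE-3"}

@dataclass
class UseCase:
    name: str
    owner: str
    covered_controls: set = field(default_factory=set)  # controls the team can evidence today

def gap_report(use_cases):
    """Return the required controls each AI use case has not yet addressed."""
    return {uc.name: REQUIRED_CONTROLS - uc.covered_controls for uc in use_cases}

cases = [
    UseCase("Customer support chatbot", "Customer Ops", {"GOVERN-1", "MAP-1"}),
    UseCase("Contract summarization", "Legal", {"GOVERN-1", "MAP-1", "MEASURE-2", "MANAGE-3"}),
]
for name, gaps in gap_report(cases).items():
    print(f"{name}: {'meets baseline' if not gaps else 'gaps: ' + ', '.join(sorted(gaps))}")

In practice, a risk management platform would attach evidence to each control and rerun this kind of comparison automatically as policies, use cases, and frameworks evolve.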

Put AI Governance in Place Before It’s Too Late

As AI becomes more powerful, organizations worldwide are seeking ways to quickly leverage its advanced capabilities to further their own business objectives. However, AI comes with risks, and organizations that have not established a culture of risk awareness to inform their decision-making may fail to identify the potential pitfalls associated with the technology. Today’s organizations cannot afford to wait until it’s too late. By emphasizing risk management and establishing clearly defined governance policies and practices based on accepted AI risk guidance, business leaders can maximize the benefits of their AI solutions without exposing themselves to significant and unnecessary risks. 

About the Author

Matt Kunkel

Matt Kunkel is the CEO and Co-Founder of LogicGate, a market-leading SaaS platform that operationalizes regulatory, risk, and compliance programs for organizations. Before LogicGate, he spent over a decade in the management consulting space building custom technology solutions to run regulatory, risk, and compliance programs for Fortune 100 companies.
