The AI Revolution: How Cybersecurity Professionals Must Adapt Now

June 17, 2025
The future of cybersecurity doesn’t belong to AI alone, but to those who can harness its power responsibly.

The emergence of artificial intelligence (AI) has fundamentally reshaped the cybersecurity landscape, with AI being used as both a solution and a threat. Looking at the cyber workforce alone, 88% of ISC2 members reported changes to their existing roles as a result of AI implementation. Despite AI’s rising influence, nearly half of cybersecurity professionals still claim to have minimal experience with AI tools, raising questions about whether the cybersecurity industry is prepared for the AI transition.

The good news is that despite AI’s growing presence, cybersecurity operations will always require human oversight. The ever-evolving nature of digital threats demands strategic thinking, ethical judgment and decision-making, all of which are areas where human professionals remain irreplaceable. Nevertheless, AI has proven invaluable in reducing the operational burden of data overload, offering much-needed relief to security teams operating under extreme duress.

As AI becomes increasingly embedded in cybersecurity products and workflows, professionals must proactively evolve their skill sets. From the C-suite down to entry level, cybersecurity workers need to gain new competencies in three critical areas: AI governance, risk visibility and quantification, and compliance oversight.

AI Governance: Building Trust and Transparency

As AI systems increasingly make autonomous security decisions, governance is more essential than ever. When AI systems fail to detect a data breach or block a user, accountability falls on the organization. Security leaders must establish governance frameworks that address bias, explainability, auditing and compliance. To ensure these frameworks are robust and effective, it’s crucial for security leaders to work with legal, risk and compliance teams to set AI usage policies. Being part of that dialogue means understanding regulatory implications and building transparent, auditable AI systems.

One of AI’s cybersecurity advantages lies in its ability to scale and automate repetitive and complex security tasks, such as real-time threat and anomaly detection. However, in most instances, cybersecurity teams rely on vendors to introduce GenAI and machine learning capabilities, meaning they need to evaluate these offerings to ensure that AI capabilities yield good outcomes. This reliance on vendors doesn’t eliminate the need for cybersecurity workers to develop hands-on AI skills, as introducing AI capabilities can add another layer of risk. The key is striking the right balance — trusting AI while maintaining human oversight.

To achieve this balance, cybersecurity workers need enough AI fluency to understand the limitations of AI tools. This doesn’t necessarily require deep coding knowledge, but it does require an understanding of the basics of machine learning, model training, bias and false positives. Workers must ask critical questions, including: How was this model trained? What does a flagged anomaly represent? Can this system be tricked?
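
The false-positive question in particular rewards a moment of arithmetic. A short sketch (with purely illustrative numbers, not drawn from any real product) shows why even an accurate detector drowns analysts in false alarms when genuine threats are rare:

```python
def alert_precision(prevalence, true_positive_rate, false_positive_rate):
    """Fraction of raised alerts that correspond to real threats."""
    true_alerts = prevalence * true_positive_rate
    false_alerts = (1 - prevalence) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical detector: catches 99% of attacks and flags only 1% of
# benign events -- yet if just 1 in 10,000 events is a real attack,
# roughly 99% of its alerts are still false positives (the base-rate
# problem behind alert fatigue).
precision = alert_precision(prevalence=0.0001,
                            true_positive_rate=0.99,
                            false_positive_rate=0.01)
print(f"Share of alerts that are real threats: {precision:.1%}")  # about 1.0%
```

This is exactly the kind of question an AI-fluent practitioner can put to a vendor: given our event volumes and base rates, what fraction of this tool’s alerts will actually be actionable?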

Whatever AI promises, cyber professionals still need to understand foundational concepts like network protocols, operating systems and architecture, log analysis and analytical thinking. Blind reliance on AI can lead to critical oversights if workers are unable to detect algorithmic errors or biases. That said, much like software engineers who shifted their focus from hardware mechanics to code logic and architecture, cybersecurity experts must move beyond manual execution and instead focus on analyzing, tuning and validating AI-driven processes. The real value lies in understanding both how and why an AI system arrived at its decision. Critical thinking and technical literacy will remain essential to avoid turning abstraction layers into blind spots that jeopardize system integrity.

Additionally, AI literacy must extend beyond the CISO and into the C-suite. Board members and senior leaders need to be educated on AI-enabled threats, compliance obligations and governance best practices. AI is not just an efficiency tool. It’s a strategic asset that redefines cyber risk management at every level of the organization.

Risk Visibility and Quantification

Data breaches are not just an inconvenience — they're a critical threat to business continuity and reputation. Seventy percent of organizations experienced a cyber-attack in the past year, with the average breach costing around $4.88 million. Furthermore, 68% of these incidents involved human error, reinforcing the need for stronger cybersecurity training and oversight.

The rise of AI is not just another technological trend; it’s a fundamental change in how threats are detected, decisions are made and defenses are deployed. However, teams cannot blindly trust AI. Without properly vetted data, AI outputs can significantly add to the already considerable risk enterprises face in today’s digital landscape.

The convergence of cybersecurity and data science is accelerating. As security tools become more data-driven, teams need hybrid skills. Analysts must be able to interpret AI-generated insights and collaborate closely with data scientists to enhance detection accuracy and minimize noise. Additionally, upskilling in areas like data analytics, Python scripting and AI ethics gives cyber professionals a competitive edge. Courses in machine learning for security, utilizing data lakes, Python, and machine learning software libraries, are becoming increasingly valuable.
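
To make the hybrid-skills point concrete, here is a minimal standard-library Python sketch (with made-up login counts, not real telemetry) of the kind of statistical outlier detection that security data-science courses build on:

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean.

    Note: in a small sample, a single extreme outlier inflates the
    standard deviation, which is why the threshold here sits below 3.0;
    robust alternatives (e.g. median absolute deviation) handle this better.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily failed-login counts for one account; the spike on
# the final day is the sort of outlier an analyst would want surfaced.
failed_logins = [3, 5, 4, 6, 2, 4, 5, 3, 4, 250]
print(zscore_anomalies(failed_logins))  # [250]
```

The point is not the method itself but the literacy it represents: an analyst who can reason about means, deviations and thresholds can also interrogate why an AI-driven tool did or did not raise an alert.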

AI-powered cyber risk quantification (CRQ) tools are also helping teams prioritize threats and allocate resources by modeling expected financial loss. To be effective in today’s AI-driven, risk-sensitive environment, CISOs and cyber professionals must use CRQ not just as a measurement tool but as a storytelling framework that drives action. By translating technical vulnerabilities into financial and operational impact, the CISO can frame cyber risk in terms that resonate with executives and boards, highlighting what’s at stake, what can be done, and what the return on security investment looks like. This narrative transforms abstract threats into tangible business scenarios, enabling leadership to make informed decisions on priorities, funding and risk acceptance. In essence, CRQ empowers the CISO to speak the language of business, align security strategy with enterprise goals and advocate for meaningful change backed by clear, data-driven insights.
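
The modeling idea behind CRQ tools can be sketched in a few lines. The figures below are entirely hypothetical (an assumed incident frequency and a lognormal loss-magnitude distribution), but they illustrate how expected loss and tail risk fall out of a simple Monte Carlo simulation:

```python
import random

def simulate_annual_loss(trials=50_000, seed=7):
    """Monte Carlo estimate of annual cyber loss under assumed inputs:
    roughly 0.7 incidents per year, each costing a lognormally
    distributed amount with a median around $490k.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # Approximate incident arrivals with 12 monthly Bernoulli draws.
        incidents = sum(rng.random() < 0.7 / 12 for _ in range(12))
        losses.append(sum(rng.lognormvariate(13.1, 1.2)
                          for _ in range(incidents)))
    losses.sort()
    return sum(losses) / trials, losses[int(0.95 * trials)]

expected, tail_p95 = simulate_annual_loss()
print(f"Expected annual loss:      ${expected:,.0f}")
print(f"95th-percentile tail loss: ${tail_p95:,.0f}")
```

Numbers like these, refreshed as inputs change, are what let a CISO frame funding requests in expected-loss and tail-risk terms rather than raw vulnerability counts.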

Finally, CRQ efforts must be part of a living process. Teams should establish feedback loops, updating CRQ models regularly based on threat landscape shifts, business changes and executive input. Staying current with AI capabilities, risk modeling best practices, and regulatory requirements is not optional. It’s essential.

Compliance Oversight

Seventy-eight percent of organizations expect compliance demands to increase annually — a trend that cybersecurity teams must prepare for. After all, effective cybersecurity governance depends on meeting compliance requirements, and AI is no exception. Global regulators are already setting new standards for AI transparency, risk reporting and accountability. The EU AI Act is a prime example, requiring organizations to provide greater clarity on how AI impacts data protection and risk management.

Incorporating cybersecurity into a broader governance framework allows companies to improve not just their risk posture but also their strategic decision-making. The goal is to create a unified structure where cybersecurity, compliance and business leadership work in concert, not in silos.

With regulatory demands accelerating, organizations should consider a more integrated approach, adopting governance, risk and compliance (GRC) platforms that place these functions at the center of their cybersecurity strategy. Such platforms help cyber workers align compliance with broader security objectives, automate risk assessments and monitor regulatory changes in real time. Leveraging AI in this context can streamline oversight and deliver more actionable compliance insights.

To further strengthen compliance oversight, organizations must narrow the gap between cybersecurity and legal governance. This includes recruiting board members with cyber expertise and appointing Chief Legal Officers who can oversee the complex intersection of technology and regulation.

Cybersecurity professionals should also be familiar with laws and standards that impact AI-powered practices, such as HIPAA, GDPR and industry-specific guidelines. Compliance isn’t just the legal team’s job anymore; it’s a core competency for cybersecurity.

The Future of Cybersecurity is AI-Enhanced, Not AI-Dependent

As AI continues to play a transformative role in cybersecurity, organizations can no longer afford to maintain the status quo. Professionals must evolve beyond basic skill sets and adopt AI-enhanced capabilities to meet emerging challenges head-on.

Success in this new landscape requires cybersecurity workers to incorporate AI into governance frameworks to provide automation, while maintaining rigorous oversight. It’s not just about making workflows faster, but also about making decisions smarter.

Cyber professionals must also become proficient in interpreting AI-generated risk assessments and translating them into strategic insights that can guide boardroom conversations. As compliance standards grow more complex, workers must bridge the divide between cybersecurity and governance, ensuring their organizations remain agile, secure and accountable.

The future of cybersecurity doesn’t belong to AI alone. It belongs to those who can harness its power responsibly, interpret its insights wisely, and build resilient systems that thrive in an increasingly digital world.

About the Author

Monica Landen | Chief Information Security Officer (CISO) at Diligent

Monica Landen serves as the Chief Information Security Officer (CISO) at Diligent, a leading GRC SaaS company. In her role, Monica leads Diligent’s security team, oversees the organization’s robust security standards and helps to ensure that the company’s data and IT infrastructure are protected against potential threats.

Before joining Diligent, Monica was the senior vice president and CISO at FactSet, a leading provider of financial data and analytics solutions. With over 20 years of experience in IT security, Monica’s expertise spans incident management, identity management, vulnerability management, and security technology and infrastructure. 

Monica holds a Bachelor of Science in Computer Science from Texas State University. She is a Certified Information Systems Security Professional (CISSP) and a Certified Secure Software Lifecycle Professional (CSSLP) through the International Information System Security Certification Consortium (ISC2).