Potion or poison? How AI is reshaping the cybersecurity battlefield

April 23, 2025
The double-edged sword that could save—or sabotage—enterprise security

Will artificial intelligence provide a fairy tale experience for enterprises? Many seem to think so. In the rush for adoption, CIOs prioritize building development teams with the promise that AI will completely transform workplace productivity. As someone deeply immersed in the tech landscape, I embrace opportunities to use new digital tools and software that allow me to be more efficient, agile, and innovative. After all, being on the cutting edge is what makes our jobs so exciting to begin with.

Yet I also know this eagerness and acceptance are not felt as strongly among all CISOs. Those of us with a security mindset know that enterprise “fairy tales” don’t always work out the way we hope, and the protagonist often falls into a trap through blind faith, overconfidence, and rose-colored glasses. With AI in particular, some security leaders believe eager adoption is not a dream come true but, in fact, security poison in disguise.

While this caution is valid, the industry narrative is shifting toward harmful rhetoric that fuels fear and distrust of AI. The cybersecurity ecosystem already circulates enough fear, uncertainty, and doubt that it can feel like there’s no light at the end of the tunnel. I want to champion a shift in perspective, where security leaders feel confident embracing AI across the workplace and within their defensive programs, with the caveat that proper data preparation is in place.

Keep Your Enemies Closer

Is AI a friend or foe? Well, it’s a little more complicated than that. AI can be both, but perhaps we start with the positive—how AI uniquely supports security. AI enhances security operations in two main use cases: threat detection and productivity acceleration.

 1.  AI in Threat Detection: AI's most distinctive use case is identifying patterns and making predictions to advance anomaly detection, behavioral analysis, and real-time threat intelligence. AI’s ability to ingest and learn from cyber threat data elevates threat detection from tracking known signatures to identifying previously unknown zero-day malware and ransomware attacks.

2.  AI for Automation: 61% of security teams deal with staff shortages. When a security issue is identified, time is of the essence, and security teams need support to analyze and respond to alerts efficiently and expediently. Aside from aiding in threat detection, AI can also automate response to suspicious activity, such as prioritizing threat remediation based on urgency and taking the necessary steps to block and eliminate malicious content without human intervention.   
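To make these two use cases concrete, here is a minimal, illustrative sketch of anomaly-based detection feeding a simple automated triage rule. The telemetry features, score thresholds, and the choice of an isolation-forest model are all assumptions for demonstration, not a prescription for any particular product or vendor workflow.

```python
# Illustrative sketch: anomaly-based detection plus simple automated triage.
# Feature names, thresholds, and the IsolationForest model are assumptions
# for demonstration; a real pipeline would use curated telemetry and tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: [logins_per_hour, bytes_out_mb, failed_auths]
normal = rng.normal(loc=[5, 20, 1], scale=[2, 5, 1], size=(1000, 3))

# A handful of suspicious events: heavy exfiltration and brute-force attempts
suspicious = np.array([[40, 900, 0], [3, 15, 60], [60, 1200, 25]])

events = np.vstack([normal, suspicious])

# Train on the event stream; IsolationForest isolates statistical outliers
model = IsolationForest(contamination=0.01, random_state=0).fit(events)
scores = model.decision_function(events)   # lower score = more anomalous
labels = model.predict(events)             # -1 = anomaly, 1 = normal

def triage(score: float) -> str:
    """Toy prioritization rule: map anomaly score to a response tier."""
    if score < -0.15:
        return "block-and-isolate"     # act without waiting for an analyst
    if score < 0:
        return "queue-for-analyst"
    return "log-only"

for idx in np.where(labels == -1)[0]:
    print(f"event {idx}: score={scores[idx]:.3f} -> {triage(scores[idx])}")
```

The point of the sketch is the division of labor: the model surfaces the outliers, and a deterministic rule decides which ones are handled automatically and which go to a human, which is how understaffed teams keep response times down without giving up oversight.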


Cybercriminals are undoubtedly using AI to enhance the complexity of their attacks, and security teams need to adapt their defensive strategies to match. This means embracing AI-powered security solutions and strategies to ward off AI-powered attacks. But what exactly are we up against?

The Rise of AI-Powered Attacks

There's long been talk about how AI aids cybercriminals' tactics, but we’re just starting to see this come to light. From reimagining old campaigns to paving the way for a new era of cyber threats, it is no surprise that threat actors have become skilled at wielding AI for their own malicious gain. Understanding the latest threat landscape in the era of AI will be critical to fortifying defenses now and into the future.

 1.  Advancing the ‘Tried and True’: We’re all familiar with the classic “CEO asks an employee to buy gift cards” phishing scam. Repeated exposure to campaigns like these, along with greater security awareness, has made such tactics less effective over the years. But gone are the days of looking for grammar and spelling errors or wonky requests: AI enables cybercriminals to create compelling content that accurately mimics the mannerisms, tone of voice, and jargon of the individual they are impersonating. From there, malware embedded into content and browser links is easier than ever to create with AI, with reproducibility at an all-time high. AI can craft, stress-test, and tweak malicious code to generate new variants that bypass traditional security defenses and exploit user error.


2.  The LLM Ingestion Era: Then there’s malicious activity targeting your organization’s own AI applications, such as prompt injection, where cybercriminals manipulate large language models (LLMs) to influence and skew outputs. On one hand, there’s concern that sensitive data will be put at risk if bad actors can override the “rules” within LLMs to either leak internal prompts built on sensitive information or coax end users into sharing that sensitive information to proceed with their requests. On the other hand, there’s fear that these LLMs are being fed sensitive data, from PII to corporate secrets, without proper data masking in place, meaning that critical data flows into generative and agentic AI tools without encryption or obfuscation, accessible to threat actors with the right tools or AI assistance.
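To make the data-masking point concrete, here is a minimal sketch of stripping recognizable sensitive values out of a prompt before it ever reaches an LLM. The regex patterns and the mask_prompt helper are illustrative assumptions; production deployments typically rely on dedicated DLP, tokenization, or vaulting services with far broader coverage.

```python
# Minimal sketch of masking sensitive fields before a prompt reaches an LLM.
# Patterns and the mask_prompt helper are illustrative assumptions; real
# deployments use DLP/tokenization tooling that covers names, secrets,
# structured records, and reversible vaulting.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_prompt(text: str) -> str:
    """Replace recognizable sensitive values with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

prompt = (
    "Summarize this ticket: customer jane.doe@example.com, "
    "SSN 123-45-6789, card 4111 1111 1111 1111, reports a billing error."
)

print(mask_prompt(prompt))
# -> Summarize this ticket: customer [EMAIL_REDACTED], SSN [SSN_REDACTED],
#    card [CARD_REDACTED], reports a billing error.
```

The LLM still gets enough context to do useful work, but nothing in the prompt is worth stealing if it leaks through injection, logging, or model training.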

These are just a few examples of how bad actors capitalize on AI. There’s also the threat of deepfakes, cyber espionage, and other campaigns that cybercriminals have been observed elevating with AI. Over the past year, CISA and the FBI have warned of the increasing role of AI in cybercriminal operations. That’s why I advocate for enterprises to fight AI with AI to safeguard the future of enterprise data and productivity.

Cybersecurity for the 21st Century

Most of us enter the cybersecurity field because we’re problem solvers, eager to tackle challenges head-on. As master troubleshooters, we should be intrigued, at least to a degree, by the prospect of taking this new, albeit complex, technology and making it both safe for the workplace to consume and an integral part of security operations. I know that’s easier said than done; security leaders face many technical and business obstacles.

But I want to encourage us to embrace the unknown, seeking opportunities to work together and learn from one another to build a resilient and innovative future. So long as we remember that every fairy tale has its dangers, we can take the proper precautions to make the potion a cure rather than a curse.

 

About the Author

Michael Bowen is a Sr. Solutions Engineer at Votiro, recently acquired by Menlo Security.