How AI Is Helping Hackers Hunt for Weaknesses

May 16, 2025
Armed with AI, cybercriminals are emulating the methodical tactics of Jurassic Park’s velociraptors — probing for weaknesses, adapting rapidly, and launching faster, more precise ransomware attacks.

As the cow is lowered into the velociraptor enclosure, Jurassic Park’s game warden, Robert Muldoon, explains that the “raptors” display “problem-solving intelligence.” He says they’re systematically testing the enclosure for weaknesses, throwing themselves at the electric fences to find a weak spot. 

“They remember,” he says, with a dreadful seriousness. 

Jurassic Park was released in 1993, but Muldoon may as well have been describing how cybercriminals operate today. Every day, these bad actors are systematically testing your systems, throwing their code at your defenses, looking for a weak spot. 

Hackers have operated this way for decades: going down a list of ports to “sniff” for openings, “wardialing” for open systems, phishing, or injecting their way into your network. The difference now is that they’re armed with AI. And they’re no longer simply using it to write malicious code faster. They’re using it to probe defenses, learn from failed attacks, and pinpoint weak spots before quickly striking again. 

And, just like Jurassic Park’s velociraptors, they remember.

Anatomy of a ransomware attack

The danger of having all of these adversaries continuously testing your defenses is that eventually they will find a weakness. When that happens, they won’t simply alert you and go about their day. Chances are, they will steal your data and then ransom it back to you. 

This isn’t purely hypothetical. Ransomware attacks are rising sharply, with ransom payments exceeding $1 billion in 2023. Research shows that 88% of organizations were hit by ransomware in the past year, and more than half had to shut down operations for an average of 12 hours. Attackers are out there testing your fences for weaknesses, and they are succeeding. 

The way they work, much like the velociraptors, is methodical and systematic. They create or purchase (more on this later) a payload, the code that does the actual damage. Then they create a layer of code called “the dropper,” which gains access to your system. Beneath the dropper sits the exploit, which targets a specific weakness in your security software and gets around it. Once they’re in, the payload activates and they execute their attack. 
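
To make that structure concrete, here is a small, conceptual Python sketch (not attack code) that simply maps each stage described above to its role and to the kind of defensive control that typically addresses it. The control descriptions are illustrative assumptions, not a definitive or complete mapping.

```python
# Conceptual sketch only: models the attack stages described above and the
# kind of defensive control that typically addresses each one. The controls
# listed are illustrative assumptions, not a complete or authoritative list.

ATTACK_STAGES = [
    {
        "stage": "dropper",
        "role": "gains access to the system and stages the rest of the code",
        "typical_counter": "email filtering, endpoint protection, least-privilege access",
    },
    {
        "stage": "exploit",
        "role": "targets a specific weakness in the security software and gets around it",
        "typical_counter": "patching, vulnerability management, hardened configurations",
    },
    {
        "stage": "payload",
        "role": "activates once inside and executes the attack",
        "typical_counter": "segmentation and containment to limit the blast radius",
    },
]

if __name__ == "__main__":
    for s in ATTACK_STAGES:
        print(f"{s['stage']:>8}: {s['role']}")
        print(f"          counter: {s['typical_counter']}")
```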

The challenge for the attackers is that we typically know how they will try to enter a system, and we’ve gotten very good at securing those access points. They can only use a few standard ways of moving around, like Remote Desktop Protocol (RDP), Secure Shell (SSH) and Server Message Block (SMB). Because of that, it’s fairly easy to predict their attacks and devise security measures to stop them. 
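
Because those protocols sit on well-known ports (3389 for RDP, 22 for SSH, 445 for SMB), even a very simple check can show where they are reachable. Below is a minimal sketch, assuming that TCP reachability from wherever the script runs is a rough proxy for exposure; the host list is hypothetical.

```python
# Minimal sketch: check whether the standard lateral-movement ports
# (RDP 3389, SSH 22, SMB 445) are reachable on a set of hosts.
# Reachability from where this script runs is only a rough proxy for exposure.
import socket

PORTS = {"RDP": 3389, "SSH": 22, "SMB": 445}
HOSTS = ["10.0.0.5", "10.0.0.6"]  # hypothetical inventory


def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host in HOSTS:
    reachable = [name for name, port in PORTS.items() if is_open(host, port)]
    if reachable:
        print(f"{host}: reachable -> {', '.join(reachable)}")
    else:
        print(f"{host}: no standard ports reachable")
```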

The challenge for us is that every time we devise a new defense strategy, they devise a new attack. And now, with AI, they’re devising new attacks faster than ever.

Everything’s for sale on the dark web

Everything we do to stop attackers has a mirror on their side, a corresponding way to circumvent our defenses. For example, you don’t typically create your own security software; you buy it. You do your research, check the ratings, maybe ask around, and buy a solution that suits your needs. Hackers do this, too.  

Marketplaces exist on the dark web where adversaries can purchase snippets of code as easily as buying a flat-pack table at IKEA. This code has been created by experts, tested in the wild, and put up for sale for the express purpose of letting others exploit the weaknesses in your security software. It even has ratings, like any online marketplace: other hackers have reviewed it. 

The end result is that invading your systems is easier than ever. With AI, adversaries are creating and testing these snippets of code faster than ever, and selling the AI-tested results to their compatriots on the dark web.

How AI is helping them

Part of any successful ransomware attack is the “reconnaissance” phase. It is exactly what it sounds like: adversaries scout out your environment, probing it like the velociraptors, to learn what defenses you have in place. Then, if they don’t already have evasion techniques, they can develop them on the fly using large language models. 

This AI enhancement allows them to create exploits very quickly, making it that much harder to secure your systems and contain a breach. Attackers can now say, “Okay, we found that this enterprise is using CrowdStrike and Palo Alto Networks,” and then prompt an AI model to develop an exploit that can evade those specific protections. 

In addition, they’re using AI to mount their reconnaissance. Imagine the velociraptors outsourcing their fence-testing to a machine that throws itself at the defenses a thousand times faster. That’s the adversaries’ current advantage.

How to thwart AI attacks

Although advanced technology is giving adversaries an advantage, the best method of thwarting them is to go back to basics. According to a recent Cortex Xpanse report, 32% of overall security issues were rooted in RDP. In other words, simply securing exposed RDP could eliminate roughly a third of the openings attackers rely on. 

By doing the basics (securing ports, maintaining good cyber hygiene, preventing lateral movement, and adopting Zero Trust principles), organizations can reduce and contain their risk dramatically. And even when these measures don’t stop an attack outright, they will slow it down, buying you time to contain the breach. 
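
One way to operationalize “preventing lateral movement” and “Zero Trust” is a default-deny allowlist for east-west traffic: any observed flow that is not explicitly permitted gets flagged or blocked. The sketch below is a simplified illustration; the flow format, tier labels, and ports are assumptions made for the example.

```python
# Simplified Zero Trust illustration: flag any observed east-west flow that is
# not on an explicit allowlist (default deny). Flow/policy formats are assumed.
from typing import NamedTuple


class Flow(NamedTuple):
    src: str
    dst: str
    port: int


# Explicitly allowed flows; anything else should be blocked or alerted on.
ALLOWED = {
    Flow("web-tier", "app-tier", 8443),
    Flow("app-tier", "db-tier", 5432),
}

# Hypothetical flows observed in traffic logs.
observed = [
    Flow("web-tier", "app-tier", 8443),
    Flow("app-tier", "db-tier", 5432),
    Flow("web-tier", "db-tier", 445),  # SMB straight from web tier to database
]

for flow in observed:
    if flow not in ALLOWED:
        print(f"ALERT: {flow.src} -> {flow.dst}:{flow.port} is not an allowed flow")
```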

Another common issue is patching. Understandably, organizations want to test patches before deploying them to avoid situations like the 2024 CrowdStrike outage. The problem is that applying a patch often requires rebooting the system, a process that is cumbersome, takes time away from work, and impacts productivity. 

The workaround many organizations have adopted is the “Patch Tuesday” concept, where patches are applied on a specific day of the week or even of the month. Setting a regular cadence for updates makes it more likely that patches will actually be applied. Unfortunately, it also means patches aren’t being applied on the other days of the week or month, leaving systems vulnerable in the meantime. 
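
The cost of that cadence can be put in rough numbers. The sketch below uses made-up dates to show how long a system stays exposed when a fix released mid-cycle waits for the next scheduled patch window.

```python
# Rough illustration with hypothetical dates: how long does a system stay
# exposed when a fix waits for the next monthly patch window?
from datetime import date

fix_released = date(2025, 4, 18)       # vendor ships the fix (hypothetical)
next_patch_window = date(2025, 5, 13)  # next scheduled patch day (hypothetical)

exposure_days = (next_patch_window - fix_released).days
print(f"System remains exposed for {exposure_days} days")  # 25 days in this example
```

Twenty-five days is a long time when reconnaissance is automated and continuous, which is why containment matters so much.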

This piecemeal approach to patching, combined with near-constant, AI-enhanced reconnaissance by bad actors, means it’s almost impossible to prevent every attack. That’s why organizations must implement containment tools powered by AI security graphs alongside prevention tooling, so they can contain attacks and protect across the entire attack lifecycle. 

Failing to identify and contain risks makes the job of the adversaries’ AI that much easier. After all, they’re attacking methodically and systematically. Are you defending the same way?

About the Author

Trevor Dearing | Director of Critical Infrastructure Solutions

Trevor Dearing is Director of Critical Infrastructure Solutions at Illumio. He has been at the forefront of new technologies for nearly 40 years, from the first PCs through the development of multi-protocol to SNA gateways, the deployment of resilient token ring in DC networks, and some of the earliest use of firewalls. Working for companies like Bay Networks, Juniper and Palo Alto Networks, he has led the evangelization of new technology. Now at Illumio, Dearing works on simplifying segmentation in Zero Trust and highly regulated environments.