ChatGPT is changing the phishing game

April 18, 2023
Understanding how cyber criminals are manipulating ChatGPT to slip past defenses

Historically, social engineering attacks have had a few red flags that the average user would notice. Strange greetings, misspelled names, poor grammar, confusing and high-priority requests – these are the infamous hallmarks of a phishing email that we’ve all come to recognize, allowing us to roll our eyes, delete the message, and move on with our day.

But new advances in artificial intelligence are removing these telltale indicators, making it increasingly difficult for users – even technical professionals – to differentiate between a legitimate email and a phishing one, leaving enterprises exposed. ChatGPT, the long-form AI text generator that is seemingly everywhere these days, has permanently altered the social engineering game. The tool has swept the news cycle in recent weeks, stirring up a flurry of uncertainty around how it will impact the cybersecurity landscape. While much of the conversation has been sensationalized, its impact on phishing campaigns is very real.

ChatGPT Is Changing Phishing Campaigns for the Worse

Launched by OpenAI in November 2022, ChatGPT is a generative AI model built to mimic human conversation, draft long-form content, compose music, and even write and correct code. This last capability has garnered plenty of attention in the cyber community as leaders consider the possibility of ChatGPT-scripted malware. The general consensus has been that the threat of AI-written malware is not yet upon us, as any nefarious code the tool has produced so far has been buggy and basic. However, the chatbot poses another threat to cybersecurity that has been missing from the conversation: AI-powered phishing emails.

ChatGPT allows threat actors to infuse their phishing emails with the communication skills they usually lack. With this chatbot, even the most rudimentary cybercriminal can produce – for free – phishing emails that are coherent, conversational, and indistinguishable from legitimate messages, upleveling their social engineering attacks. Gone are the days when a misspelled name or clunky grammar could raise suspicion. While ChatGPT has controls in place to prevent this sort of misuse, a threat actor can manipulate the tool simply by rewording the request to avoid any red-flag phrases. Another way to circumvent ChatGPT's guidelines is to use the tool to polish existing phishing communications. The resulting advanced phishing emails may well trick even the most tech-savvy user into clicking a suspicious link, leading to an uptick in account takeover attacks that can cost organizations and individuals their time, resources, and reputations.

Enterprise Security Teams Can Evolve in Tandem with Cybercriminals

Organizations need to act now to prepare their workforce for these sophisticated social engineering attacks, starting at the board level. As ChatGPT continues to make headlines, it’s likely that board members are asking what it means for their business from an opportunity or innovation standpoint. But it’s up to CISOs to keep them informed about what ChatGPT means from a cybersecurity standpoint, especially as many phishing emails impersonate board- and C-suite-level executives.

CISOs must run an educational campaign to build awareness of this new addition to threat actors' toolboxes. Employees at every level of the business should be instructed to use caution even when an email seems legitimate. That means always confirming that the sender's domain is correct – an inaccurate domain is a telltale sign that an email is illegitimate, even when its body seems authentic. Employees should also reach out to the supposed sender via a secondary channel to confirm that they actually sent the email, especially if it includes a suspicious request or link. When in doubt, it is better to be safe than sorry – double-checking that an email is legitimate takes far less time than remediating an account takeover attack.
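
The domain check in particular is easy to automate as a first line of defense. Here is a minimal Python sketch of the idea – the allowlist, function name, and sample message are hypothetical, and a real deployment would also have to handle lookalike domains and spoofed headers:

```python
# Minimal sketch: flag emails whose sender domain is not on a known-good
# allowlist. The allowlist and sample header below are hypothetical.
from email.utils import parseaddr

KNOWN_DOMAINS = {"example.com", "corp.example.com"}  # assumed allowlist

def sender_domain_is_trusted(from_header: str) -> bool:
    _, address = parseaddr(from_header)          # "CEO <ceo@example.com>" -> address
    domain = address.rsplit("@", 1)[-1].lower()  # extract the domain portion
    return domain in KNOWN_DOMAINS

# A lookalike domain ("examp1e" with a digit one) fails the check:
print(sender_domain_is_trusted("CEO <ceo@examp1e.com>"))  # False
```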

While education is important, humans are imperfect, and as social engineering campaigns become more sophisticated, the burden of defending against them cannot rest solely on employees' shoulders. Security teams need to take a comprehensive approach to combating phishing scams, leveraging advanced threat detection and remediation capabilities to analyze the output of single sign-on (SSO) systems and identify anomalous patterns. Today's automated solutions can immediately flag anomalous behavior, alerting security teams to an account takeover attempt and allowing them to intervene before it's too late.
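
What counts as an anomalous pattern varies by product, but the underlying idea is straightforward. The toy sketch below (with a hypothetical event schema and made-up data) flags an SSO login from a country the user has never logged in from before; real detection tooling would correlate many such signals rather than relying on one:

```python
# Toy sketch of SSO log analysis: alert when a user logs in from a country
# they have never been seen in before. Event fields and data are made up.
from collections import defaultdict

events = [
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "RU"},  # first login from a new country
]

seen_countries = defaultdict(set)  # user -> countries observed so far

for event in events:
    user, country = event["user"], event["country"]
    if seen_countries[user] and country not in seen_countries[user]:
        print(f"ALERT: {user} logged in from new country {country}")
    seen_countries[user].add(country)
```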

Chatbots Can Change How Security Teams Communicate

In the cybersecurity realm, the rise of ChatGPT isn’t all doom and gloom. The chatbot also opens up new opportunities for enterprise communication around cybersecurity best practices. Typically, organizations’ cybersecurity updates and reminders come in the form of dry monthly emails to the entire workforce, often laden with jargon that many readers may not understand and therefore may disregard.

ChatGPT can help security teams revamp and customize these communications to improve engagement, readability, and comprehension. They can meet employees where they are, optimizing content so it can be easily shared in Slack or Microsoft Teams messages. Just as threat actors can use ChatGPT to humanize their content, so can enterprise security teams.
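
Teams that want to do this at scale can even script it. As a rough illustration, the sketch below calls OpenAI's chat completion endpoint (using the openai Python library as it existed in early 2023) to turn a jargon-heavy bulletin into a Slack-friendly post; the model choice, prompt, and bulletin text are all placeholder assumptions:

```python
# Rough sketch: rewrite a dry security bulletin as a short, friendly Slack
# post. Uses the openai Python library's pre-1.0 ChatCompletion interface;
# the prompt and bulletin text are placeholder examples.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes a valid OpenAI API key

bulletin = (
    "Per policy IS-114, all personnel must complete MFA enrollment via the "
    "identity provider portal no later than the end of the current quarter."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Rewrite security bulletins as short, friendly Slack posts."},
        {"role": "user", "content": bulletin},
    ],
)

print(response.choices[0].message.content)  # Slack-ready rewrite
```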

Enterprises Have Tools to Bolster Defenses Against ChatGPT Phishing Emails

ChatGPT is unlocking a new era in cybersecurity, especially when it comes to social engineering. So much of what we have all learned about phishing emails must now be thrown out the window as the chatbot enables greater sophistication in these illegitimate communications. But as with all new threats, security teams and users will evolve right with these new capabilities. By taking a two-pronged approach of education and advanced threat analytics, enterprises can protect their employees and their data against these new chatty threat actors.

About the author: Matt Caulfield is the Founder & CEO of Oort, a VC-backed security startup that helps enterprise security teams to adopt an identity-first, data-driven approach to cybersecurity, starting with Identity Threat Detection & Response (ITDR). Prior to founding Oort, Matt led the Boston Innovation Team for Cisco Systems. He is an industry expert in distributed systems, computer networking, and cybersecurity.
