Corporations Must Fortify Defenses Against AI Threats

June 10, 2024

Enterprise and corporate data breaches have become as commonplace as men in blue suits and long red neckties standing before microphones outside a New York City courthouse during a former president’s criminal trial. You know they are coming, yet they remain a bewildering challenge.

The emergence of generative AI tools represents a turning point in corporate cyber risk and data insecurity. These cutting-edge technologies, capable of generating text, images, and code, have transformed business operations, offering unprecedented efficiencies and innovations. However, these benefits are accompanied by a substantial increase in the complexity and sophistication of cyber threats. Generative AI has provided cybercriminals with the means to create highly convincing phishing emails, generate malicious code, and even mimic legitimate user behavior, rendering traditional security measures less effective.

The adaptability of generative AI and its ability to learn from vast datasets have both improved its outputs and reshaped the cyber threat landscape. AI-generated phishing schemes can now be personalized to a concerning degree, significantly boosting their success rates. Similarly, AI-powered malware can autonomously modify its structure to evade detection. These capabilities outpace conventional defense mechanisms, rendering many traditional safeguards obsolete and compelling businesses to adopt more sophisticated, flexible defenses to stay ahead of evolving threats.

ChatGPT Is a Coin Flip

“Over the past year, we have seen AI applications, like ChatGPT, gain remarkable ground in practical utilization. Cybersecurity is an arms race and bad actors are constantly evolving their tools to circumvent detection, while defenders are trying to adapt. ChatGPT or other popular artificial intelligence tools can be used on both sides of the cybersecurity landscape. For cybersecurity professionals, AI’s natural language processing capabilities enable it to streamline threat intelligence analysis, extracting valuable insights from vast datasets to stay abreast of emerging threats. ChatGPT can assist in real-time incident response by providing quick insights and suggestions during security incidents. Cybersecurity professionals can use these capabilities to analyze logs, identify potential attack vectors, and recommend mitigation strategies,” Darren Guccione, CEO and Co-Founder at Keeper Security, recently told me.
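To make the defensive use case Guccione describes more concrete, here is a minimal sketch of LLM-assisted log triage. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name, prompt wording, and log lines are illustrative placeholders, not a prescribed workflow.

```python
# Minimal sketch: asking an LLM to triage a raw auth-log excerpt.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY
# is set in the environment. Model name, prompt, and log lines are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_excerpt = """
Jun 10 03:12:44 host sshd[4211]: Failed password for root from 203.0.113.7 port 52814 ssh2
Jun 10 03:12:47 host sshd[4211]: Failed password for root from 203.0.113.7 port 52815 ssh2
Jun 10 03:13:02 host sshd[4218]: Accepted password for admin from 203.0.113.7 port 52901 ssh2
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": "You are a security analyst. Identify likely "
                       "attack vectors in the log below and suggest "
                       "mitigation steps.",
        },
        {"role": "user", "content": log_excerpt},
    ],
)

# Print the model's analysis for a human analyst to review.
print(response.choices[0].message.content)
```

As with any generative tool, the output here is a starting point for a human analyst to verify, not an automated verdict.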

“Meanwhile, a bad actor can utilize ChatGPT in several ways, including to create convincing phishing emails. By leveraging ChatGPT or the natural language processing capabilities of other generative AI tools, bad actors can quickly and easily craft sophisticated messages tailored to specific individuals or organizations, making it more likely for recipients to fall victim to them. ChatGPT can also be utilized to generate deceptive content for social engineering, creating fake profiles or messages to manipulate individuals into disclosing sensitive information.”

Furthermore, integrating generative AI into legitimate business processes introduces new vulnerabilities. As companies increasingly depend on AI for data analysis, customer service, and decision-making, the risk of data breaches and unauthorized access escalates. The advanced nature of these AI tools also sparks ethical concerns about data privacy and the potential for misuse of sensitive information.

Guccione says that AI in the hands of adversaries has the potential to exponentially ramp up social engineering, currently one of the most successful scamming tactics available. Cybercriminals can use AI for password cracking, phishing emails, deepfakes, impersonation, and malware attacks.

“Phishing emails used to be easy to spot because they had frequent grammatical and spelling mistakes. However, AI is now making it easy for cybercriminals to generate well-written, convincing content for phishing scams. Instead of writing their own phishing emails or text messages, cybercriminals leverage AI to write the scams for them. Because AI algorithms can analyze large amounts of data, they can also create fake personas, such as impersonating someone’s voice or creating a deep fake video,” adds Guccione.

Vigilance Is Critical

In today’s rapidly evolving technological landscape, the rise of generative AI tools presents both unparalleled opportunities and significant threats to corporate security. Bad actors increasingly leverage these sophisticated technologies to target sensitive corporate and personnel data, necessitating a robust and proactive response from organizations.

Corporations must prioritize a multi-faceted approach to mitigate these threats. This includes investing in advanced cybersecurity measures, conducting regular vulnerability assessments, and fostering a culture of vigilance among employees. Collaboration with AI experts and ethical hackers can provide invaluable insights into emerging threats and effective countermeasures. Moreover, staying abreast of regulatory changes and participating in industry-wide information sharing can fortify defenses.

Ultimately, the key to protecting corporate integrity in the face of generative AI threats lies in a dynamic, informed, and collaborative strategy. By remaining vigilant and adaptive, corporations can safeguard their most valuable assets against cyber adversaries' ever-evolving tactics.

About the Author

Steve Lasky | Editorial Director, Editor-in-Chief/Security Technology Executive

Steve Lasky is a 34-year veteran of the security industry and an award-winning journalist. He is the editorial director of the Endeavor Business Media Security Group, which includes the magazines Security Technology Executive, Security Business, and Locksmith Ledger International, and the top-rated website SecurityInfoWatch.com. He is also the host of the SecurityDNA podcast series. Steve can be reached at [email protected]