2023 was undoubtedly the year of artificial intelligence (AI). Today, nearly every organization - regardless of size, sector, or location - has adopted some form of AI or ML, due in large part to rapid innovation with LLM models. By 2025, it’s projected that global investments in AI will reach $200 billion.
We are still in the infancy of society’s collective AI journey. This past year has taught us that the latest advances in AI and LLMs make its usage extremely powerful, but navigating this space alone – especially considering all of the technology’s ramifications – is challenging. As organizations continue to invest heavily in their rush to implement and experiment with AI, they should proceed with caution and heed the following best practices to ensure security isn’t an afterthought. After all, it’s easy to be blindsided by shiny, new technology.
- Test, test, test – and test again
With any technology that’s deeply embedded within an enterprise, continuous testing is paramount to ensure it’s free of security flaws. This same rule of thumb applies to AI and ML, specifically the Large Language Models (LLMs) powering such technologies. Organizations prioritize security testing of cloud services, internal and external networks, and applications, so they should ‘follow the sun’ to ensure LLMs are also being tested as part of this process.
Security testing should not only be done regularly, but it should also take on a holistic and contextual approach across the entire technology stack. This will help ensure any potential vulnerabilities are found, while developing a more comprehensive view of how LLMs integrate with other tools and services across the business.
- Understand use cases context around vulnerabilities
One of the best ways to protect AI and ML systems from adversarial threats and improve overall resiliency to such attacks is to better understand the use cases and, more importantly, the data fed to LLMs and the output they generate.
This kind of insight will go a long way in helping prevent targeted attacks on LLMs. Also, immerse yourself in the latest knowledge and guidance regarding this emerging technology. Security teams must critically review past and recent vulnerability reports and their remediation instructions for a more holistic view of LLM security.
By doing so, they’re also better positioned to predict what may happen next. As part of these reports, evaluating the defenses against major attacks is important to be better prepared for the next instance across the entire technology stack – LLMs included.
- Don’t embark on the AI journey alone
While AI has immense promise and potential, it’s also keeping security teams up at night. A study found that nearly half (46%) of security professionals believe generative AI will increase their organization’s vulnerability to attacks.
The top three generative AI threat issues include growing privacy concerns (39%), undetectable phishing attacks (37%), and an increase in the volume and velocity of attacks (33%). This can be overwhelming for a group of security professionals already short-staffed and burnt out.
The silver lining is that AI security doesn’t have to be a journey taken alone. Security teams need proper training, resources, and tools to ensure they feel supported. As part of this, AI and ML penetration testing services through a dedicated partner can help organizations stay creative and confident as they experiment with emerging technologies while ensuring security isn’t an afterthought.
By proactively testing AI and ML systems, organizations will be able to better understand and bolster the security of LLMs – before any damage is done. This security partner can also help with ideation, development, training, implementation, and real-world deployment of AI and ML technologies within the enterprise, which, in the long run, can mean business growth, differentiation, and success.
Every new paradigm shift brings along a new set of opportunities and challenges, and the widespread adoption of LLMs is no different. LLMs must now be a part of an organization’s security strategy, and overall business plan, in order to innovate freely and confidently. Only then can organizations unlock the full potential of AI and ML technologies.
Vinay Anand is CPO at NetSPI.