Salt Security uncovers security flaws within ChatGPT extensions that allowed access to third-party websites and sensitive data

March 13, 2024
Salt Labs researchers identified plugin functionality, now known as GPTs, as a new attack vector in which vulnerabilities could have granted attackers access to users' third-party accounts, including their GitHub repositories.

PALO ALTO, Calif., March 13, 2024 /PRNewswire/ -- Salt Security, the leading API security company, today released new threat research from Salt Labs highlighting critical security flaws within ChatGPT plugins, a new risk for enterprises. Plugins give AI chatbots like ChatGPT access and permissions to perform tasks on behalf of users within third-party websites, such as committing code to GitHub repositories or retrieving data from an organization's Google Drive. These security flaws introduce a new attack vector and could enable bad actors to:

  • Gain control of an organization's account on third-party websites
  • Access Personally Identifiable Information (PII) and other sensitive user data stored within third-party applications

ChatGPT plugins extend the model's abilities, allowing the chatbot to interact with external services. The integration of these third-party plugins significantly enhances ChatGPT's applicability across various domains, from software development and data management to educational and business environments. When organizations leverage such plugins, they give ChatGPT permission to send sensitive organizational data to a third-party website and to access private external accounts. Notably, in November 2023, ChatGPT introduced GPTs, a concept similar to plugins. GPTs are custom versions of ChatGPT that any developer can publish, and they contain an option called "Action" that connects them to the outside world. GPTs pose security risks similar to those of plugins.
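To make concrete what this delegated access looks like, below is a minimal sketch of a plugin-style backend reading a user's repositories with an OAuth token granted by that user. The backend and the token variable are illustrative assumptions; only the GitHub REST endpoint shown is part of GitHub's public API.

```python
# A minimal sketch of delegated third-party access, assuming a hypothetical
# plugin backend that holds an OAuth access token granted by the user.
# The token value is a placeholder; the endpoint is GitHub's public REST API.
import requests

USER_ACCESS_TOKEN = "gho_placeholder_token"  # placeholder: token the user granted the plugin


def list_user_repositories(token: str) -> list[str]:
    """Call GitHub on the user's behalf using the token the plugin was granted."""
    resp = requests.get(
        "https://api.github.com/user/repos",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [repo["full_name"] for repo in resp.json()]


# Example (requires a real token): print(list_user_repositories(USER_ACCESS_TOKEN))
```

Whoever holds such a token, or can hijack the authorization flow that issues it, effectively controls the connected account, which is why the flaws described below matter.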

The Salt Labs team uncovered three different types of vulnerabilities within ChatGPT plugins.

The first was found in ChatGPT itself, during the installation of new plugins. In this flow, ChatGPT redirects the user to the plugin's website to obtain an approval code. When ChatGPT receives the approved code back from the user, it automatically installs the plugin and can interact with it on the user's behalf. Salt Labs researchers discovered that an attacker could exploit this flow by sending a victim a link carrying an approval code for a new, malicious plugin; opening it causes ChatGPT to install the plugin with the attacker's credentials on the victim's account. Because any message the user writes in ChatGPT may be forwarded to a plugin, the attacker would gain access to a host of proprietary information.
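The standard countermeasure for this kind of code injection is to bind each authorization attempt to the user's own session with an OAuth "state" value (per RFC 6749), so an approval code the user never requested is rejected. The sketch below is illustrative only, assuming hypothetical endpoints and a simple in-memory session rather than OpenAI's implementation.

```python
# A minimal sketch of the OAuth "state" check that defeats injected approval codes.
# Endpoints, parameter names, and the session store are illustrative assumptions.
import secrets
from urllib.parse import parse_qs, urlencode, urlparse

session = {}  # stand-in for a per-user, server-side session


def build_authorization_url(auth_endpoint: str, client_id: str, redirect_uri: str) -> str:
    """Start the flow: bind a random, single-use state value to this user's session."""
    session["oauth_state"] = secrets.token_urlsafe(32)
    return auth_endpoint + "?" + urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "state": session["oauth_state"],
    })


def handle_callback(callback_url: str) -> str:
    """Finish the flow: reject any approval code that does not carry the state we issued."""
    params = parse_qs(urlparse(callback_url).query)
    state = params.get("state", [""])[0]
    if not secrets.compare_digest(state, session.pop("oauth_state", "")):
        # A link forwarded by an attacker carries a code the victim never requested,
        # so it cannot present the victim's state value and is rejected here.
        raise ValueError("state mismatch: this user did not request that authorization code")
    return params["code"][0]


# Example: a crafted link carrying someone else's approval code fails the check.
build_authorization_url("https://plugin.example.com/oauth/authorize",
                        "chatgpt-client", "https://chat.example.com/callback")
try:
    handle_callback("https://chat.example.com/callback?code=attacker-approved-code")
except ValueError as err:
    print(err)
```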

The second vulnerability was discovered within PluginLab (pluginlab.ai), a framework developers and companies use to build plugins for ChatGPT. During installation, Salt Labs researchers found that PluginLab did not properly authenticate user accounts, which would have allowed an attacker to insert another user's ID and obtain an authorization code representing the victim, leading to account takeover on the plugin. One of the affected plugins, "AskTheCode", integrates ChatGPT with GitHub, so by exploiting the vulnerability an attacker could gain access to a victim's GitHub account.
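Conceptually, the missing control was a server-side ownership check: an authorization code should only ever be issued for the account the authenticated caller actually owns. The sketch below shows that check in isolation; the field names and code format are illustrative assumptions, not PluginLab's actual API.

```python
# A minimal sketch of the server-side ownership check whose absence enables
# this account takeover. Identifiers and the code format are illustrative.
from dataclasses import dataclass


@dataclass
class AuthenticatedUser:
    member_id: str


def issue_plugin_code(current_user: AuthenticatedUser, requested_member_id: str) -> str:
    """Issue an authorization code only for the account the caller actually owns."""
    # The reported flaw: the identifier supplied in the request was trusted as-is,
    # so substituting a victim's ID yielded a code representing the victim.
    if requested_member_id != current_user.member_id:
        raise PermissionError("requested member does not match the authenticated user")
    return f"code-for-{current_user.member_id}"


# Example: an attacker authenticated as "attacker-123" cannot mint a code for "victim-456".
attacker = AuthenticatedUser(member_id="attacker-123")
try:
    issue_plugin_code(attacker, "victim-456")
except PermissionError as err:
    print(err)
```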

The third and final vulnerability, found in several plugins, was OAuth (Open Authorization) redirection manipulation. As with pluginlab.ai, it enables account takeover on the ChatGPT plugin itself. Here, an attacker sends a crafted link to the victim; because several plugins did not validate the redirect URL, the attacker could insert a malicious URL and steal the user's credentials. As in the pluginlab.ai case, the attacker would then hold the victim's credentials (authorization code) and could take over the account in the same way.
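The conventional defense against redirect manipulation is to accept only redirect URLs that were registered in advance, compared by exact match. The sketch below is a minimal illustration, assuming placeholder registered URLs rather than any specific plugin's configuration.

```python
# A minimal sketch of redirect-URL validation using an exact-match allow-list.
# The registered callback URL is a placeholder.
from urllib.parse import urlparse

REGISTERED_REDIRECTS = {
    "https://chat.example.com/aip/example-plugin/oauth/callback",  # placeholder
}


def validate_redirect(redirect_uri: str) -> str:
    """Accept only redirect targets registered in advance; never echo back arbitrary URLs."""
    parsed = urlparse(redirect_uri)
    if parsed.scheme != "https" or redirect_uri not in REGISTERED_REDIRECTS:
        raise ValueError("unregistered redirect_uri rejected")
    return redirect_uri


# An attacker-supplied link that would send the authorization code to their own
# server is rejected, so the victim's credentials never reach the attacker.
try:
    validate_redirect("https://attacker.example.net/steal")
except ValueError as err:
    print(err)
```

Exact matching, rather than prefix or pattern matching, is the design choice that closes the common bypasses in this class of flaw.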

Upon discovering the vulnerabilities, Salt Labs' researchers followed coordinated disclosure practices with OpenAI and third-party vendors, and all issues were remediated quickly, with no evidence that these flaws had been exploited in the wild.

"Generative AI tools like ChatGPT have rapidly captivated the attention of millions across the world, boasting the potential to drastically improve efficiencies within both business operations as well as daily human life," said Yaniv Balmas, Vice President of Research, Salt Security. "As more organizations leverage this type of technology, attackers are too pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data. Our recent vulnerability discoveries within ChatGPT illustrate the importance of protecting the plugins within such technology to ensure that attackers cannot access critical business assets and execute account takeovers."

According to the Salt Security State of API Security Report, Q1 2023, there was a 400% increase in unique attackers targeting Salt customers. The Salt Security API Protection Platform enables companies to identify risks and vulnerabilities in APIs before they are exploited by attackers, including those listed in the OWASP API Security Top 10. The platform protects APIs across their full lifecycle – build, deploy and runtime phases – utilizing cloud-scale big data combined with AI and ML to baseline millions of users and APIs. By delivering context-based insights across the entire API lifecycle, Salt enables users to detect the reconnaissance activity of bad actors and block them before they can reach their objective.

The full report, including how Salt Labs conducted this research and the steps for mitigation, is available here. To learn more about Salt Security or to request a demo, please visit https://content.salt.security/demo.html.

About Salt Security
As the pioneer of the API security market, Salt Security protects the APIs that form the core of every modern application. Protecting some of the largest enterprises in the world, Salt's API Protection Platform is the only API security solution that combines the power of cloud-scale big data and time-tested ML/AI to detect and prevent API attacks. With its patented approach to blocking today's low-and-slow API attacks, only Salt provides the adaptive intelligence needed to protect APIs. Salt's posture governance engine also delivers operationalized API governance and threat detection across organizations at scale. Unlike other API governance solutions, Salt Security's AI-based runtime engine pulls from the largest data lake in order to continuously train the engine. Salt supports organizations through the entire API journey from discovery, to posture governance and threat protection. Deployed quickly and seamlessly integrated within existing systems, the Salt platform gives customers immediate value and protection, so they can innovate with confidence and accelerate their digital transformation initiatives. For more information, visit: https://salt.security/