OWASP Releases First Top 10 List Targeting Security Risks in Autonomous AI Agents
Key Highlights
- The OWASP Top 10 for Agentic Applications identifies critical security risks in autonomous AI systems, including agent behavior hijacking, tool misuse, and privilege abuse.
- The framework was developed through extensive research and collaboration with industry leaders, security researchers, and global organizations like NIST and the European Commission.
- Security practitioners are encouraged to adopt the guidance to defend against emerging threats and ensure the safe deployment of agentic AI technologies.

WILMINGTON, Del. — Dec. 10, 2025 — The OWASP GenAI Security Project on Wednesday unveiled the OWASP Top 10 for Agentic Applications, a new framework designed to help organizations identify and mitigate security risks tied to the growing use of autonomous AI agents.
The release follows more than a year of research and incorporates input from more than 100 security researchers, industry practitioners, user organizations, and leading cybersecurity and AI technology providers. The list aims to provide not only a ranking of key risks but also practical, data-driven guidance for hands-on security teams.
The framework was reviewed by the project’s Agentic Security Initiative Expert Review Board, which includes representatives from global bodies such as NIST, the European Commission and the Alan Turing Institute.
“This new OWASP Top 10 reflects incredible collaboration between AI security leaders and practitioners across the industry,” said Scott Clinton, co-chair of the OWASP GenAI Security Project. “As AI adoption accelerates faster than ever, security best practices must keep pace.”
Highlighted risks include agent behavior hijacking, tool misuse and exploitation, and identity and privilege abuse—threats that underscore how attackers can subvert AI agent capabilities or supporting infrastructure. Project leaders say such incidents are already emerging across sectors.
“Companies are already exposed to agentic AI attacks — often without realizing that agents are running in their environments,” said Keren Katz, co-lead of the Top 10 for Agentic Applications and senior group manager of AI security at Tenable. Katz said defending against these threats requires both strong security intuition and a deep understanding of how AI agents operate.
John Sotiropoulos, a project board member and co-lead of the Top 10, said the guidance emphasizes real-world attacks and actionable mitigations. “This release marks a pivotal moment in securing the next generation of autonomous AI systems,” he said.
The new list joins a growing set of peer-reviewed resources from the OWASP GenAI Security Project, including a governance guide for autonomous AI, a quarterly analysis of agentic security tools, a practical guide for securing agentic applications, a capture-the-flag reference environment and a threat-model-based reference of emerging agentic AI threats.
Steve Wilson, project board co-chair and founder of the OWASP Top 10 for LLM Applications, said the rise of deployed agentic systems required an expansion of OWASP’s guidance. “This year, we’ve seen agentic systems move from experiments to real deployments, and that shift brings a different class of threats into clear view,” he said.
OWASP invited organizations, researchers, policymakers and practitioners to review the new Top 10, contribute to future updates and participate in the broader initiative to support secure AI development.
The OWASP GenAI Security Project is a global open-source community focused on the security and safety risks associated with generative and agentic AI.
About the Author
Steve Lasky
Editorial Director, Editor-in-Chief/Security Technology Executive
Steve Lasky is Editorial Director of the Endeavor Business Media Security Group, which includes SecurityInfoWatch.com, as well as Security Business, Security Technology Executive, and Locksmith Ledger magazines. He is also the host of the SecurityDNA podcast series. Reach him at [email protected].
