Inside the Emerging Internet of Agents and Its Security Implications

Pascal Geenens of Radware discusses the emerging Internet of Agents, the technologies driving it, and the new cybersecurity challenges it creates.
Aug. 13, 2025
6 min read

The Skinny

  • The Internet of Agents is taking shape with autonomous AI agents working across networks.

  • MCP and A2A protocols enable real-time discovery, information exchange and coordination.

  • Expanded attack surfaces require stronger trust, authentication and security measures.

The idea of an “Internet of Agents” might sound futuristic, but it’s already starting to take shape. In this new digital environment, autonomous AI agents — software programs capable of talking to each other, making decisions and taking action — are beginning to change how systems work together.

That shift brings opportunities for faster workflows and smarter automation, but it also opens the door to new kinds of cyber threats. To unpack what’s happening and why it matters, SecurityInfoWatch spoke with Pascal Geenens, director of threat intelligence at Radware, about the technologies enabling this change, the risks to watch and how organizations can prepare.

Defining the Internet of Agents

We’ve read and heard talk about a new internet that will replace the internet as we know it. Can you explain what is meant by the 'Internet of Agents'?

The Internet of Agents refers to an emerging digital ecosystem in which autonomous AI agents — powered by large language models (LLMs) and multi-modal systems — communicate, coordinate and execute tasks across interconnected networks without direct human oversight. Unlike the traditional internet of documents (webpages) or even the internet of things (connected devices), the Internet of Agents enables machines to interact using natural language, taking actions, summarizing data, invoking APIs, making decisions and even collaborating with other agents.

In essence, it’s a shift toward a new operational paradigm where software agents, not humans, are the primary actors in digital interactions. These agents can represent users, organizations or themselves, and they are beginning to form dynamic, interoperable networks across industries and infrastructures.

What is the Agent Ecosystem and how does it relate to the Internet of Agents?

The Agent Ecosystem is the foundation that supports the Internet of Agents. As agents grow in capability and are deployed across cloud, edge and enterprise environments, they begin to network with each other; thus, creating the Internet of Agents. Think of the ecosystem as the infrastructure and tools, and the Internet of Agents as the web of interactions made possible by them.

The role of MCP and A2A protocols

Which protocols are at the core of enabling the Internet of Agents, and what do they do?

Two protocols are at the heart of enabling the Internet of Agents, the Model Context Protocol (MCP) introduced by Anthropic and the Agent-to-Agent (A2A) protocol by Google.

MCP governs how tools, APIs and external data sources are described and exposed by MCP servers to LLMs (MCP clients). It enables agents (MCP clients) to understand what tools are available and how to use them by providing natural language context. It is the bridge that turns static tools into usable, dynamic extensions of an agent’s capabilities.

The A2A protocol allows agents to interact with other agents in a structured, interoperable way. It defines a messaging format that lets agents discover, query, and collaborate with one another, even across platforms.

Together, MCP and A2A allow agents not just to execute tasks, but to coordinate across distributed systems. This breaks down silos and enables composability, much like microservices did in traditional architecture. But now it’s happening via natural language and autonomous agents.

This is a major leap for agentic AI because it makes large-scale, cross-organizational orchestration of autonomous tasks feasible. Agents can now dynamically find services, delegate tasks and optimize workflows in real-time — without requiring hardcoded integrations.

Expanding the threat surface

How will this affect the threat landscape and threat surface for organizations?

The rise of agentic systems powered by MCP and A2A brings with it a radically expanded threat surface. These changes create multiple avenues for exploitation, which can be grouped into several key threat categories.

Natural language as an attack vector — Traditional security systems are designed to detect structured payloads (e.g., SQL injection, command injection). AI agents and LLMs interpret and act on natural language, meaning that malicious prompts or indirect instructions can be embedded into seemingly benign messages (prompt injection). These payloads can bypass filters, mislead agents, and trigger unauthorized actions leading to zero-click attacks such as EchoLeak.

Indirect prompt injection and supply chain risks — An attacker might not attack the agent directly but target upstream content, such as a document, webpage, or API response that the agent will process. This is the equivalent of the third-party software supply chain risk, but in a natural language form.

Autonomous execution with escalated impact — Since agents can autonomously make decisions and call APIs, any compromise can quickly scale. Agents can leak sensitive data, perform unauthorized actions or even interact with other compromised agents, escalating the blast radius.

Agent identity spoofing and trust boundaries — With agents operating on behalf of humans or systems, establishing trust and identity becomes critical. Without strong authentication and authorization mechanisms, malicious agents could impersonate trusted ones.

Emerging protocol abuse — MCP and A2A are new and not yet fully mature in terms of security hardening. This opens doors to protocol-level exploits, spoofed capability declarations, unauthorized agent discovery, enumeration attacks and misconfiguration or bad deployment issues.

Guidance for CISOs

Wrapping this up, can you summarize and provide some guidance for CISOs in the upcoming era of the Internet of Agents?

The Internet of Agents is the next big evolution in AI and distributed computing, but it comes with critical implications for cybersecurity. Just as cloud computing demanded a rethinking of perimeter security, the rise of agentic AI requires rethinking how we manage trust, execution, observability, and threat detection in a world where autonomous software agents make real decisions.

The AI Agent ecosystem is a big opportunity for businesses, and most will not have the luxury to miss this boat if they want to stay relevant in their industry. However, just as with the cloud, business will move into the agent ecosystem and only gradually start to appreciate and understand the expanse of this new threat surface. AI technology evolves at incredible speed and sooner rather than later, LLM-aware security solutions will have to be integrated in the ecosystem to provide prompt-level inspection and input sanitization combined with agent identity and trust protocols before large-scale adoption leads to large-scale exploitation.

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].

Sign up for SecurityInfoWatch Newsletters
Get the latest news and updates.

Voice Your Opinion!

To join the conversation, and become an exclusive member of SecurityInfoWatch, create an account today!