Cisco Warns of ‘Connective Tissue’ Risks in State of AI Security 2026 Report

The company’s "State of AI Security 2026" report finds enterprises racing to deploy agentic AI while leaving critical integration layers, identity pathways and model supply chains exposed to emerging AI-driven threats.
Feb. 19, 2026
3 min read
Cisco “State of AI Security 2026” report
Cisco’s latest research underscores how autonomous AI agents are interacting directly with enterprise systems and external data sources, introducing machine-to-machine security challenges that many organizations are not yet equipped to monitor.

Cisco on Tuesday released its “State of AI Security 2026” report, cautioning that enterprises are deploying agentic AI systems at a pace that far outstrips their ability to secure the infrastructure connecting those systems to corporate data and to one another.

According to the report, much of that “connective tissue” — the protocols, integrations and identity layers that allow large language models and autonomous agents to interact with tools, datasets and other AI systems — remains largely unmonitored and undefended. The company warns this shift marks a new phase in cybersecurity, where AI is not only assisting attackers but increasingly operating autonomously in offensive campaigns.

Speed of adoption outpaces security readiness

The report finds that 83% of enterprises planned to deploy agentic AI capabilities in 2025. However, only 29% reported feeling prepared to do so securely.

Cisco characterizes this imbalance as a trade-off between speed and security, with organizations prioritizing rapid integration of large language models and AI agents into workflows while postponing foundational controls. The result, the report suggests, is that poorly secured AI tools could be weaponized against the very organizations deploying them.

Expanding attack surface

As AI systems grow more autonomous, the attack surface is expanding beyond traditional endpoints and applications.

The report points to the protocols that connect AI systems to corporate data sources, APIs and other agents as emerging primary attack vectors. Identity risks are also intensifying as autonomous agents increasingly communicate with other agents, tools and enterprise systems, multiplying potential points of compromise.

Traditional security tools, Cisco notes, were not designed to monitor or defend these dynamic, machine-to-machine interactions at scale.

The report also highlights supply chain risks tied to AI model and dataset repositories.

Platforms that host millions of pre-trained models and hundreds of thousands of third-party datasets present a systemic risk, Cisco warns. A large-scale compromise of a widely used AI model or dataset repository could allow attackers to distribute poisoned updates to thousands of enterprise AI systems simultaneously — a scenario the company describes as a potential “SolarWinds of AI.”

Such a compromise would represent a shift from isolated breaches to synchronized, large-scale AI-driven intrusions.

Nation-state and criminal convergence

Cisco’s research further suggests that state-sponsored actors and criminal syndicates are industrializing their use of AI, frequently sharing infrastructure, payloads and advanced AI-driven tactics.

As these groups integrate AI more deeply into their operations, AI-enabled cyber threats are expected to remain a top global organizational risk, the report states.

For additional findings and to download the complete report, go here.
