AI agents are rapidly becoming a major source of web traffic, but most organizations lack the visibility needed to properly identify and manage these automated interactions, according to a new report from DataDome.
The company’s “AI Traffic Report: High Volume, Low Visibility, and a Growing Risk” analyzes the scale, composition, and security implications of AI agent activity across enterprise websites during the first two months of 2026. The findings suggest that organizations increasingly face large volumes of automated traffic from AI systems, much of which cannot be clearly identified or trusted.
According to the report, DataDome’s network recorded 7.9 billion AI agent requests in January and February 2026, representing a 5% increase over the fourth quarter of 2025. For at least one enterprise customer, AI agents accounted for nearly 10% of total web traffic over a 30-day period, highlighting how quickly agentic activity is scaling.
“Invisible traffic is unmanaged traffic, and right now most organizations cannot see this clearly enough to do anything meaningful about it,” said Jérôme Segura, vice president of threat research at DataDome. “AI agent traffic is complex. Billions of requests are hitting sites every month from agents with different identities, different purposes, and varying degrees of transparency about who they are.”
AI Agents Becoming a Significant Traffic Source
The report indicates that large technology platforms are driving much of the current AI agent activity online. In February 2026, Meta-ExternalAgent accounted for nearly 25% of the top AI agent traffic observed across DataDome’s network, followed by ChatGPT-User at 19.1% and Meta WebIndexer at 14.3%.
However, the analysis notes that high-volume agents are not necessarily high-value ones. Some agents may generate referral traffic or legitimate indexing activity, while others simply scrape content or extract data with little or no benefit to the websites they visit.
Spoofing and Impersonation on the Rise
A growing concern identified in the report is the impersonation of well-known AI agents. DataDome found that Meta-ExternalAgent was the most frequently spoofed identity, with 16.4 million fraudulent requests, followed by ChatGPT-User with 7.9 million spoofed requests.
Meanwhile, PerplexityBot recorded the highest impersonation rate, with nearly 2.4% of all requests using that identity deemed fraudulent.
These spoofing tactics pose risks to organizations that rely on traditional allowlisting based on user-agent strings, which attackers can easily forge.
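To see why the user-agent string is so easy to forge, consider the hedged sketch below. It builds an HTTP request that claims a Meta crawler identity; the exact UA value and the target URL are assumptions for illustration. At the header level, nothing distinguishes this request from one sent by the real agent.

```python
import urllib.request

# A User-Agent header is just a client-supplied string. The value below
# mimics Meta's crawler identity (the exact string is an assumption for
# illustration); nothing stops any client from sending it.
SPOOFED_UA = "meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)"

# https://example.com/ is a placeholder target, not a real endpoint.
# Constructing the Request does not send anything over the network.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": SPOOFED_UA},
)

# Seen server-side, this request's headers would be identical to the
# real agent's; urllib stores the key in "User-agent" capitalization.
print(req.get_header("User-agent"))
```

This is why a user-agent allowlist by itself is not a trust signal: the header is attacker-controlled input, and any defense built on it alone inherits that weakness.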
Agentic Browsers Introduce Additional Risk
The report also highlights agentic browsers—AI-powered browsing tools that autonomously interact with websites—as an emerging risk vector.
Agentic-browser traffic tends to concentrate in industries rich in valuable transactional data: e-commerce and retail accounted for roughly 20% of agentic traffic, followed by real estate (17%) and travel and tourism (15%).
Because these tools can interact with websites in ways that resemble legitimate user behavior, distinguishing them from both human visitors and malicious automation becomes increasingly difficult.
Visibility Gap Complicates Security Decisions
According to DataDome researchers, the core challenge facing many organizations is the inability to accurately classify AI agents by both identity and intent.
Without reliable identification mechanisms, security teams struggle to determine whether an automated system should be allowed, restricted, or blocked altogether. Websites that allowlist known crawlers solely based on user-agent identifiers may unintentionally create an attack surface when those identities are spoofed.
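One widely used mitigation is forward-confirmed reverse DNS, the verification scheme operators such as Google publish for their own crawlers: reverse-resolve the connecting IP, require the hostname to sit under the operator’s published domain, then forward-resolve that hostname and confirm it maps back to the same IP. The sketch below illustrates the idea; the function names and example domain suffixes are assumptions, since each operator publishes its own.

```python
import socket

def hostname_matches(hostname: str, allowed_suffixes: tuple) -> bool:
    """True if a reverse-DNS hostname ends with a trusted operator suffix."""
    return hostname.endswith(allowed_suffixes)

def is_verified_crawler(ip: str, allowed_suffixes: tuple) -> bool:
    """Forward-confirmed reverse DNS (FCrDNS) check for a claimed crawler:
    1. reverse-resolve the IP to a hostname;
    2. require the hostname to sit under the operator's published domain;
    3. forward-resolve the hostname and confirm it maps back to the IP.
    A spoofed User-Agent string alone cannot pass this check."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)              # step 1
        if not hostname_matches(hostname, allowed_suffixes):   # step 2
            return False
        _, _, forward_ips = socket.gethostbyname_ex(hostname)  # step 3
        return ip in forward_ips
    except OSError:  # failed reverse or forward lookup
        return False

# Example suffixes, assumed for illustration; real operators publish
# their own verification domains in their crawler documentation.
GOOGLEBOT_SUFFIXES = (".googlebot.com", ".google.com")
```

The check only authenticates crawlers whose operators maintain consistent DNS records; agents without published verification domains still require behavioral classification.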
The report concludes that as AI agents continue to expand their presence across the web, organizations will need more advanced traffic classification and trust management capabilities to safely manage automated interactions at scale.
For more details, the full report is available from DataDome.
