From Hype to Hard Results: How AI Is Delivering Real Value Inside the SOC
Key Highlights
- AI enhances SOC efficiency by filtering high-fidelity alerts, enabling analysts to focus on critical threats and reducing response times.
- Effective AI deployment relies on building accurate behavioral baselines, continuously refining models, and seamlessly integrating into existing workflows.
- Governance frameworks lag behind AI adoption, underscoring the need for organizations to develop policies to address risks such as model poisoning and adversarial attacks.
- Balancing AI capabilities with human judgment is essential, especially for high-stakes decisions such as account suspensions or content quarantines.
This summer, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released a concept paper and proposed action plan for the use and development of AI systems, pledging to create use cases that address risks across generative, predictive, agentic and other types of AI. While still in development, NIST’s framework reflects the need for a roadmap to govern AI. It also shows that even among policymakers and practitioners, the right approach to the responsible use of AI remains a work in progress.
At the same time, the security applications of AI are no longer theoretical. They are actively reshaping processes inside the SOC. In fact, IBM’s 2025 Cost of a Data Breach Report found that nearly one-third of organizations use AI and automation extensively across the security lifecycle, helping to shorten breach timelines and lower costs compared with organizations that do not use AI. Yet, new research from OpenText also shows that less than half (48%) of organizations have implemented a formal AI use policy. Adoption continues to outpace governance. While AI has become a critical part of security operations, its true value comes not from replacing analysts, but from enhancing what human practitioners can do and making their work more effective. Organizations can only harness AI’s full potential in the SOC when it is guided by human expertise and integrated into well-defined operational practices.
Where AI is Winning in the SOC
Analysts confront a continual flood of alerts and event logs, sifting through vast amounts of data to identify threats and anomalous behavior that may indicate an adversary. Any delay in detecting a real incident can have significant consequences for an organization, yet the sheer number of alerts makes rapid response a considerable challenge for SOCs. Furthermore, traditional approaches such as fixed rules or static signatures often miss subtle, anomalous behaviors that could signal a breach.
This is where AI is making a difference. By learning what normal behavior looks like across users, systems and applications, AI can identify deviations that might otherwise go unnoticed. For example, a user suddenly downloading hundreds of sensitive files or using unfamiliar tools may never trip a static rule or catch an analyst's eye, but AI can automatically flag the behavior as potentially malicious and surface it for further investigation.
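To make the baselining idea concrete, here is a minimal sketch of per-user behavioral baselining with a simple z-score test. It is an illustration of the technique, not any vendor's implementation; the event fields, sample history and threshold are assumptions, and production systems use far richer features and models.

```python
from statistics import mean, stdev

# Hypothetical per-user history of daily sensitive-file download counts.
# In practice these would come from DLP or file-access logs.
baseline_history = {
    "alice": [12, 9, 14, 11, 10, 13, 8],
    "bob":   [3, 5, 2, 4, 3, 6, 4],
}

def is_anomalous(user: str, todays_count: int, threshold: float = 3.0) -> bool:
    """Flag activity that deviates strongly from the user's own baseline."""
    history = baseline_history.get(user)
    if not history or len(history) < 2:
        return False  # not enough data to baseline; rely on other controls
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return todays_count != mu
    z = (todays_count - mu) / sigma
    return z > threshold  # only large upward deviations are suspicious here

# A user who normally downloads ~11 files suddenly pulls 400.
print(is_anomalous("alice", 400))  # True  -> surface for analyst review
print(is_anomalous("alice", 12))   # False -> within normal behavior
```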
Practitioners see several tangible benefits from these kinds of deployments:
- Noise Reduction: As SOC teams are inundated with high volumes of low-fidelity signals, AI surfaces high-value alerts, helping analysts prioritize their attention and resources. Improving the overall signal-to-noise ratio helps teams focus on the threats and incidents that matter.
- Faster Triage and Enrichment: By establishing system and user baselines, AI can analyze behaviors and automatically assign risk scores, producing a prioritized queue of incidents for practitioners to investigate (a minimal scoring sketch appears after this list). This accelerates response to high-risk threats and shortens the window in which adversaries can move laterally within a network. In addition, AI can enrich signals with threat intelligence and MITRE ATT&CK context.
- Improved Threat Hunting: By linking disparate behaviors together across systems and users, AI can detect coordinated or persistent threat activity that may otherwise go unnoticed by traditional monitoring. This helps analysts uncover stealthy campaigns before they turn into major breaches.
- Automated Remediation: AI can detect threats and, in well-defined cases, remediate them automatically. Human oversight will remain crucial until AI agents can be trusted to operate autonomously, but AI can already relieve much of the burden analysts face today.
- Real Outcomes: Leveraging AI effectively strengthens security operations in measurable ways, improving organizational security while generating savings in labor, operational cost and talent retention. AI use can also enhance practitioner morale and reduce burnout by shifting focus from mundane tasks to more rewarding work.
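The sketch below illustrates the triage-and-enrichment pattern described above: each alert receives a risk score blending behavioral deviation with asset criticality, plus a MITRE ATT&CK technique tag before being sorted into a queue. The weights, detection types and technique mapping are illustrative assumptions, not any product's actual logic; real enrichment would draw on live threat intelligence and the full ATT&CK knowledge base.

```python
from dataclasses import dataclass

# Hypothetical mapping from detection type to MITRE ATT&CK context.
ATTACK_CONTEXT = {
    "mass_file_download": "T1530 (Data from Cloud Storage)",
    "new_admin_tool":     "T1219 (Remote Access Software)",
    "impossible_travel":  "T1078 (Valid Accounts)",
}

@dataclass
class Alert:
    alert_id: str
    detection_type: str
    baseline_deviation: float  # e.g., z-score from the behavioral model
    asset_criticality: float   # 0.0 (lab box) .. 1.0 (domain controller)
    attack_context: str = ""

def risk_score(alert: Alert) -> float:
    """Combine behavioral deviation with asset value; weights are illustrative."""
    return 0.6 * min(alert.baseline_deviation / 10.0, 1.0) + 0.4 * alert.asset_criticality

def triage(alerts: list[Alert]) -> list[Alert]:
    """Enrich each alert with ATT&CK context and return a prioritized queue."""
    for a in alerts:
        a.attack_context = ATTACK_CONTEXT.get(a.detection_type, "unmapped")
    return sorted(alerts, key=risk_score, reverse=True)

queue = triage([
    Alert("a1", "new_admin_tool", baseline_deviation=2.1, asset_criticality=0.2),
    Alert("a2", "mass_file_download", baseline_deviation=9.5, asset_criticality=0.9),
])
for a in queue:
    print(f"{a.alert_id}: score={risk_score(a):.2f} context={a.attack_context}")
```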
AI’s Strengths, Limitations and the Human Role
AI excels at scale, speed and pattern recognition, but it cannot currently replace human judgment and intuition. AI might flag an employee accessing sensitive data late at night, but only a human can determine whether that activity is malicious, a misconfiguration or simply a teammate working late to meet a tight deadline.
AI adoption also introduces new risks. Model poisoning, prompt injection and adversarial manipulations are real threats, and attackers are already adapting to evade AI-driven defenses. Static, supervised machine learning models are particularly vulnerable. Without self-learning capabilities, AI systems risk becoming obsolete against adaptive adversaries and evolving threat tactics.
To mitigate these risks, AI should be integrated into a multilayered defense rather than treated as a standalone solution. Incorporating AI to strengthen foundational tools, such as traditional access controls, endpoint detection and network segmentation, can provide more holistic protection across an entire organization. Human oversight should also remain central, especially for high-stakes actions like suspending accounts or quarantining content.
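One simple way to keep humans central for high-stakes actions is an approval gate in the response pipeline: low-risk steps run automatically, while disruptive ones queue for analyst sign-off. The sketch below is a hypothetical pattern, not a reference to any specific SOAR product; the action names and approval flow are assumptions.

```python
from enum import Enum

class Action(Enum):
    ENRICH_ALERT = "enrich_alert"        # low impact, safe to automate
    QUARANTINE_FILE = "quarantine_file"  # high impact
    SUSPEND_ACCOUNT = "suspend_account"  # high impact

# High-stakes actions that must never execute without analyst sign-off.
REQUIRES_HUMAN_APPROVAL = {Action.QUARANTINE_FILE, Action.SUSPEND_ACCOUNT}

def execute_response(action: Action, target: str, analyst_approved: bool = False) -> str:
    """Run low-risk actions automatically; gate high-stakes ones on approval."""
    if action in REQUIRES_HUMAN_APPROVAL and not analyst_approved:
        return f"PENDING: {action.value} on {target} queued for analyst review"
    return f"EXECUTED: {action.value} on {target}"

print(execute_response(Action.ENRICH_ALERT, "alert-42"))
print(execute_response(Action.SUSPEND_ACCOUNT, "user:bob"))
print(execute_response(Action.SUSPEND_ACCOUNT, "user:bob", analyst_approved=True))
```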
SOC teams implementing AI can follow several practical principles to maximize its value:
- Start Targeted: Deploy AI first in high-impact areas of SOC operations with clear behavioral patterns, such as monitoring identity and access logs, where deviations are easier to detect and assess. Early successes in focused areas build confidence, demonstrate ROI and open the way to broader adoption.
- Build Behavioral Baselines: Accurate AI outputs depend on high-quality data inputs. Establish what “normal” looks like across users and systems to improve detection of true anomalies. Continuously refine baselines as the environment changes, incorporating analyst feedback to reduce false positives and strengthen model accuracy over time.
- Measure Outcomes: Define benchmarks, such as a reduced false positive rate, lower mean time to detect (MTTD) or improved analyst efficiency, to evaluate the return on AI investments (a minimal metrics sketch follows this list). Regularly reviewing these metrics validates effectiveness and supports cross-functional alignment on security.
- Integrate Seamlessly: AI tools should fit naturally into analyst workflows, supporting efficient handoffs and decision-making without adding friction. Prioritize tools that complement existing platforms and make AI-driven insights easy for analysts to interpret and act on, so the technology starts enhancing operations from day one.
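As a minimal illustration of the "measure outcomes" principle, the sketch below computes two of the benchmarks named above, MTTD and false positive rate, from incident records. The record schema, field names and numbers are assumptions for demonstration only.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: when activity began vs. when it was detected,
# and whether the triggering alert was ultimately a true positive.
incidents = [
    {"started": datetime(2025, 3, 1, 2, 0),  "detected": datetime(2025, 3, 1, 2, 20),  "true_positive": True},
    {"started": datetime(2025, 3, 4, 9, 0),  "detected": datetime(2025, 3, 4, 10, 30), "true_positive": True},
    {"started": datetime(2025, 3, 6, 14, 0), "detected": datetime(2025, 3, 6, 14, 5),  "true_positive": False},
]

def mean_time_to_detect(records) -> timedelta:
    """Average detection lag across confirmed (true positive) incidents."""
    lags = [r["detected"] - r["started"] for r in records if r["true_positive"]]
    return sum(lags, timedelta()) / len(lags)

def false_positive_rate(records) -> float:
    """Share of triaged alerts that turned out to be benign."""
    return sum(1 for r in records if not r["true_positive"]) / len(records)

print(f"MTTD: {mean_time_to_detect(incidents)}")                      # 0:55:00
print(f"False positive rate: {false_positive_rate(incidents):.0%}")   # 33%
```

Tracked over time, trends in these numbers (rather than any single snapshot) are what demonstrate whether an AI deployment is actually paying off.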
Continuous fine-tuning and evaluation of AI tools are essential. Threat actors adapt, environments change, and AI models must evolve alongside them to remain effective.
NIST’s proposed control overlays highlight an AI governance gap, as policy frameworks are still lagging behind the pace of AI adoption. Yet organizations don’t need to pause deployment while the rules are being finalized. By using AI strategically to detect anomalies, prioritize alerts and free analysts’ time for deeper investigation and response, SOCs can start achieving real security gains now.
The SOC teams that benefit most will be those that strike a balance between AI use and human insight. AI excels at scale and speed, but human judgment ensures decisions are grounded in context. As both AI and adversaries evolve, resilience will depend on keeping that balance at the core of all security strategies and operations.
About the Author

Tim Bramble
Director of Threat Detection and Response at OpenText
Tim Bramble, Director of Threat Detection and Response at OpenText, has more than twenty years’ experience developing enterprise solutions addressing cloud security, data encryption, identity and access management, email security and web fraud detection. He is well-versed in current information security threats and the challenges governments and enterprises face in defending against them.
