ISC West Panelists Say Stop Asking What AI Can Do and Start With What You Need

At ISC West 2026, a panel of security technology leaders argued that the organizations getting real value from AI aren't the ones chasing the technology. They're the ones who started with a problem worth solving.
April 8, 2026

Key Highlights

  • Panelists at the session said AI’s real value lies in outcomes: turning overwhelming security data into clear priorities and actions, not more dashboards

  • Successful deployments start with fixing processes first, then applying AI to specific, repeatable tasks where it can deliver measurable impact

  • Purpose-built, domain-specific AI — grounded in clean, connected data — will determine what works in real environments and what remains hype

Jordan Hill made it plain early on. When you're managing 10 buildings, you're looking at millions of access control events every single day. Nobody has time to sort through that manually.

"Three years ago you needed a data scientist just to run those reports,” said the HiveWatch co-founder and head of product. His company’s answer: use AI as compression — take the enormous data sets and collapse them into high-level insights a security manager can act on in under five minutes.

That framing set the tone for the “Purpose-Built: Unlocking the Real Power of AI in Physical Security” session at ISC West 2026. Moderated by Sarah Rodrigues, chief product officer at Acre Security, the conversation pulled in four voices with different vantage points on the same problem. The result was less of a hype session and more of a working discussion about where AI actually earns its keep in enterprise security — and where it still falls short.

Start with process, not AI

One of the sharpest points of the session came from Jason Veiock, founder and CEO of Bearing. A lot of security leaders, he said, are walking into boardrooms where every other department has an AI strategy and feeling the pressure to produce one fast. That's a trap.

“Technology doesn’t solve your problem,” Veiock said. “Process solves your problem.” His advice: map out what you’re doing today, find where it’s broken, identify the bottlenecks using real data — mean time to detect, mean time to contain, mean time to resolve — and then ask whether AI can help fix a specific piece of that. Not all of it. A piece.
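The baseline metrics Veiock names can be computed before any AI enters the picture. A minimal sketch, using hypothetical incident records and illustrative field names (nothing here reflects any panelist's product):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log; "occurred"/"detected"/"resolved" timestamps
# are illustrative assumptions, not any vendor's schema.
incidents = [
    {"occurred": "2026-03-01T08:00", "detected": "2026-03-01T08:12", "resolved": "2026-03-01T09:30"},
    {"occurred": "2026-03-02T14:00", "detected": "2026-03-02T14:03", "resolved": "2026-03-02T15:00"},
]

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# Mean time to detect and mean time to resolve across all incidents.
mttd = mean(minutes_between(i["occurred"], i["detected"]) for i in incidents)
mttr = mean(minutes_between(i["occurred"], i["resolved"]) for i in incidents)
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")  # MTTD: 7.5 min, MTTR: 75.0 min
```

Numbers like these are what make "find where it's broken" concrete: a bottleneck shows up as one metric that is out of line with the others.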

He gave a concrete example. Right now, when something happens in the world, a security team might receive four separate alerts from four different platforms, all about the same incident. A human in Outlook is manually correlating those. That's a well-defined, repeatable task that an AI agent can handle, he explained.

“That’s a human doing something a thousand times a year that takes 15 minutes,” Veiock said. Put an agent there, and that person is freed up for work that actually requires judgment.
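The correlation task Veiock describes is simple enough to sketch. Assuming alerts can be keyed by location and a coarse time window (the field names, bucket size, and sample data below are all hypothetical), the manual Outlook work reduces to a grouping step:

```python
from collections import defaultdict

# Hypothetical alerts from four platforms; three describe the same event.
alerts = [
    {"source": "platform_a", "site": "HQ", "ts": 1000, "text": "Severe weather alert"},
    {"source": "platform_b", "site": "HQ", "ts": 1030, "text": "Storm warning issued"},
    {"source": "platform_c", "site": "HQ", "ts": 1100, "text": "Weather advisory"},
    {"source": "platform_d", "site": "Depot", "ts": 5000, "text": "Door forced open"},
]

BUCKET = 600  # group alerts landing in the same 10-minute window

def correlate(alerts):
    """Collapse alerts that share a site and time bucket into one incident."""
    incidents = defaultdict(list)
    for a in alerts:
        incidents[(a["site"], a["ts"] // BUCKET)].append(a)
    return list(incidents.values())

incidents = correlate(alerts)
print(len(incidents))  # prints 2: four raw alerts collapse into two incidents
```

A production agent would match on fuzzier signals than a shared site and time bucket, but the shape of the task, many raw alerts in, a few correlated incidents out, is exactly what makes it well-defined and repeatable.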

Vertical depth matters more than general capability

Veiock reached for an analogy that was hard to argue with. A transportation company and a biotech firm are both running access control systems. But what those systems need to flag, prioritize and escalate looks completely different. The biotech company probably cares far more about interior lab doors than perimeter entries. A logistics hub is the opposite. The point wasn't just that every industry is different; it’s that every security program within an industry is different.

Jeffrey Groom, director of engineering for AI at Acre Security, built on that. “You could have verticals within verticals,” he said. General-purpose AI won't cut it. The systems need to be specialized not just to physical security broadly, but to the specific context of the organization running them. And, he added, you need a way to measure whether they actually are. Gut feel isn’t enough.

His “very simple mental model” for getting there: think of AI agents as very capable interns. Smart, worth using, but you check their work. Start them on workflows you’d give a junior employee. Evaluate the outputs. Iterate.

The agents are already here

Groom also made a point worth noting for anyone still thinking of AI as a smarter search engine. He described three inflection points in recent AI development: generative models, then reasoning, then agents. “I’ve always said agents are the future,” he told the room. “But the reality is I was wrong. Agents are the present.”

Veiock’s company builds on ServiceNow and uses at least two distinct types of AI in its product: conversational AI that can trigger multiple workflows from a single question, and a separate monitoring agent that sits on alert queues and looks for patterns across incoming data.

Looking further out, Veiock’s prediction for what comes next is agent-to-agent communication: systems that talk directly to other systems across organizational boundaries. His company integrates with Epic, the healthcare records platform. “We don’t want to pull HIPAA data out of Epic and bring it into ServiceNow,” he said. “We want it to stay in Epic.”

MCP — Model Context Protocol — came up as the direction the industry needs to move toward.

Groom was blunt about what this means for manufacturers: the companies that build agentic APIs will win. “If your API is so brittle that you can’t do this agent-to-agent thing, that reduction in friction will be massive.”

AI doesn't replace review — it requires it

Rodrigues raised a question that cut against the usual AI skepticism. As AI gets better, will security professionals trust it too much? Thru Shivakumar, CEO and founder of Cohesion, pushed back on the idea that human error is an acceptable benchmark.

“Humans make probably more mistakes than AI,” she said. But her real point was simpler: you don’t remove the review layer. You still check the analyst’s work. You still check the PowerPoint someone handed you. The same standard applies to AI outputs.

Groom framed it at the product level. “I look at it through two lenses,” he said. “One is using AI as a tool for internal business operations, and one is putting AI into your product. These are two very different things.”

At Acre, he said, the standard is evaluation sets: a defined baseline of inputs with known good outputs, tested against a minimum accuracy threshold before anything ships.

“It’s unlikely we’re ever going to be 100% perfect,” Groom said, “but we at least need to be good enough.”
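An evaluation set of the kind Groom describes can be sketched as a simple release gate. The sample events, the `classify()` stub, and the threshold below are all illustrative assumptions, not Acre's actual pipeline:

```python
# Hypothetical evaluation set: inputs paired with known good outputs.
EVAL_SET = [
    ("door held open 30s at loading dock", "alert"),
    ("badge-in during business hours", "ignore"),
    ("repeated failed badge attempts", "alert"),
    ("scheduled maintenance access", "ignore"),
]

MIN_ACCURACY = 0.75  # minimum share of correct outputs before anything ships

def classify(event: str) -> str:
    """Stand-in for the model under test; flags obvious anomaly keywords."""
    keywords = ("held open", "failed")
    return "alert" if any(k in event for k in keywords) else "ignore"

def passes_gate(model, eval_set, threshold) -> bool:
    """True if the model meets the accuracy threshold on the eval set."""
    correct = sum(model(inp) == expected for inp, expected in eval_set)
    return correct / len(eval_set) >= threshold

print(passes_gate(classify, EVAL_SET, MIN_ACCURACY))  # prints True
```

The value of the pattern is less the arithmetic than the discipline: a fixed baseline of known-good answers turns "is it good enough?" from gut feel into a number that either clears the bar or does not.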

Veiock argued that holding AI to a standard of perfection that humans themselves never meet is a losing game. Consider, he suggested, self-driving cars. They aren’t perfect, but they cause far fewer accidents and deaths than human drivers.

He’s heard the same logic applied to physical security and it frustrates him. “It drives me absolutely nuts,” he said. Security professionals claim the stakes are too high to trust AI, even as their CISOs down the hall are already using it.

“You're falling behind by not doing it,” he said. “I don’t know about being perfect. I think it’s about being better and being realistic.”

The strategic layer is next

When Rodrigues asked what AI delivers in physical security over the next 12 to 18 months, the answers converged on one idea: the move from operational workflows up to strategic decision-making.

Hill sees the next jump coming from data consolidation. Most large organizations are still fighting to get all their security data into one place. Once that’s solved, AI can start answering the harder questions — where should guards be deployed, what does the vulnerability profile look like across 200 sites, which risks need to be addressed first and why?

“You’re going to have a partner to process that data with you in a world where you would have hired an analyst,” Hill said.

Shivakumar pointed to video AI specifically, arguing that advances in what cameras can detect and interpret are accelerating fast enough that some sensor-based systems may become redundant. The gap, she said, is getting that video intelligence into the broader data ecosystem where it can actually drive decisions.

Groom put the long-term vision in stark terms: buildings that largely manage themselves. Ambient agents receiving inputs, catching anomalies, correcting access level errors, flagging policy violations — without waiting for a human to notice and log a ticket. “Just like we have dark factories,” he said.

None of it comes free. The message from every panelist was the same: the organizations that will get value from these tools are the ones that already have their processes in order, their data centralized, and their people willing to actually use the technology.

“If you are not good at it today,” Rodrigues said, “AI will only magnify where you have gaps — very, very loudly.”

About the Author

Rodney Bosch

Editor-in-Chief/SecurityInfoWatch.com

Rodney Bosch is the Editor-in-Chief of SecurityInfoWatch.com. He has covered the security industry since 2006 for multiple major security publications. Reach him at [email protected].
