Five AI Questions to Ask Vendors at GSX 2023

Sept. 11, 2023
AI is arriving via software components, integrations with AI SaaS services, and embedding or hosting in security devices. The avalanche of AI capabilities warrants a separate set of questions to ask vendors.

As I recently wrote in a Real Words or Buzzwords article, Operationalizing AI, Artificial Intelligence is not a product per se but a broad category of software, including cloud-based applications and device-embedded software, that is finding its way into many types of cyber-physical systems, including security system deployments. That avalanche of AI capabilities warrants a separate set of questions about AI, which is the subject of this article.

AI Models

As stated in my Real Words or Buzzwords article, AI Model, at the core of AI software lies an AI model, a software and data framework that represents or approximates certain aspects of a real-world phenomenon, encapsulating the knowledge, patterns, or relationships learned from data during the AI training process.

AI models are designed to enable two critical AI product capabilities: massively parallel computation (using large-scale parallel-processing hardware) and the ability to learn.

Nowadays, the AI model in an AI-enabled physical security product is pre-trained before the product is deployed.

AI-enabled products can utilize these two capabilities to perform highly accurate in-context real-time data analysis and response well beyond the capabilities of previous generations of electronic physical security products.

 
AI and the Physical Security Industry

Initially, many but not all physical security industry companies–especially those with video analytics products–were touting “machine learning” and “deep learning” capabilities as product differentiators and as substantiation of their status as “leading edge” technology.

They didn’t realize that the runaway growth of AI was fueled by the exponential advancement of computing technology, and especially by the development of computing and networking hardware specifically designed to support the kind of data processing and data throughput required by advanced AI applications. Intel, NVIDIA, and Google are three of more than a dozen big-name companies that are developing hardware to support specific aspects of AI processing, not just on server computers but on all kinds of mobile, IoT and IIoT devices.

Having AI-enabled capabilities is no longer a product differentiator. It’s the specific value to security operations and administration that sets a product apart from competitors. One aspect of that value is how easily the AI-enabled product works with existing physical security system deployments.

A very good example of this is Calipsa, which provides advanced video analytics for real-time security. Its initial claim to fame was its false alarm filtering, which determined whether a camera’s video motion detection alert was triggered by human or vehicle motion. Until Calipsa emerged, video motion detection produced so many non-alarm alerts that it was primarily used to decide whether or not to record video.

The ingenious aspect of Calipsa’s AI was that it could make its determination of human or vehicle motion using only three still images (or small video clips) just a few seconds apart. Most IP cameras and encoders could transmit three such images to Calipsa, which could then forward an alarm message to the VMS or monitoring service. No new hardware needed. No sending video streams to the cloud. This enabled effective video monitoring at large scale, something not possible or cost-feasible before Calipsa.
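The filtering logic described above can be sketched in a few lines. This is a conceptual illustration only, not Calipsa's actual implementation; the object labels and the `classify_burst` function are assumptions made for the sake of the example, standing in for a pre-trained classifier.

```python
# Conceptual sketch of alarm filtering on a short burst of still images.
# The classifier output here is a placeholder, not any vendor's model.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float             # seconds since the motion alert
    detected_objects: List[str]  # labels a pre-trained model might emit

def classify_burst(frames: List[Frame]) -> str:
    """Decide whether a motion alert is actionable from ~3 stills.

    Returns "alarm" if any frame shows a person or vehicle,
    otherwise "filtered" (foliage, shadows, animals, etc.).
    """
    ACTIONABLE = {"person", "vehicle"}
    for frame in frames:
        if ACTIONABLE.intersection(frame.detected_objects):
            return "alarm"       # forward to the VMS / monitoring service
    return "filtered"            # suppress the nuisance alert

# Example: a camera's motion alert with three stills a few seconds apart
burst = [
    Frame(0.0, ["shadow"]),
    Frame(2.0, ["person"]),
    Frame(4.0, ["person"]),
]
print(classify_burst(burst))     # prints "alarm"
```

The key design point is the small payload: three stills rather than a continuous video stream is what made cloud-side filtering practical at scale over ordinary network links.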

Another example is Alcatraz AI, whose Rock product provides facial authentication (not facial image recognition) based on using multiple sensors and AI to generate a super-accurate mathematical model of a face. The facial image cannot be reconstructed from the mathematical model; thus, the model contains no PII and complies with privacy regulations such as BIPA, CCPA and GDPR. It works with existing access control systems using Wiegand or OSDP connections. Additionally, it provides accurate multi-sensor tailgating detection and tailgating activity video capture.

The point is that the most important questions to ask are about the value that the AI-enabled product provides. That comes first and is the basis for this article’s questions. Technical details about the AI in the product only matter if the product itself provides a valuable enhancement to your security operations and/or management capabilities at the scale you need, especially if it increases the value of your existing security system deployment.

AI Questions

1. Time to Value. What is the “time to value” for the product, system or service?

This includes accounting for AI model training time and then initial learning in the deployment environment. How long will it take before the AI-enabled product is consistently doing exactly what you want it to do?

For Calipsa and Alcatraz AI, it’s practically instant, mainly because they have a very narrow, specific focus and are pre-trained. AI capabilities that must learn what constitutes “normal” activity for a specific environment, such as a sports field or university campus, present a much more complicated learning scenario and may require a “human in the loop” element. Time to value may also involve operationalizing the AI-enabled product, the subject of the following question.

2. Operationalizing AI. What is involved in operationalizing the AI-enabled capability?

I was recently part of a group conversation in which a security practitioner said that he was going to use AI-enabled products to transition from a reactive to a proactive, preventive and preemptive security posture, as if all that was required was to attend GSX and select some good AI-enabled products.

I asked, “Do you mean like the capability to identify an intrusion situation before the intrusion happens, as opposed to waiting for an alarm because the intruder has already gained access?” He responded, “Exactly.” I asked if he had worked out the requirements for operationalizing such a technology. He asked, “What do you mean?” I said I was pretty sure that his contracted security officer post orders did not cover stopping an intrusion before it happens.

How do you respond to that knowledge? How do you prevent the intrusion? Do you use AI to automate initial responses, such as alerting the person with a generic warning? Or would it be a customized warning such as, “Hey you in the green jacket and white hat!” At night, would you automatically turn on a PTZ-controllable LED spotlight and focus it on the individual? Would you have a live security officer inside the building go to the targeted intrusion location as a preemptive measure, or would you send a robot instead? Or would you simply turn on the building hallway lighting at the target location?

Would you respond differently if it were a current or former employee, such as by using a live two-way conversation first? Once you identified the likely scenarios, worked out the desired responses, and had all or most of the technical response capabilities in place, who would develop the officer training?

A well-qualified vendor should be able to discuss operationalizing the AI-enabled product capabilities in detail, and ideally relate them to your specific situation, based on customer experience.

3. Human in the Loop. Are there any “human in the loop” elements involved in deploying the product?

In the context of AI, “human in the loop” refers to a human being involved in the decision-making process of an automated system. This can be for validation and quality control, complex decision-making, or human approval or authorization of automated response actions. Typically, this means that instead of an AI system making decisions entirely on its own, it consults a human at certain steps or under certain conditions.

It may be that the product itself does not include a human-in-the-loop element, but you would like to have one. This is a vendor discussion topic. This question certainly applies to, but is not limited to, robotics. In essence, “human in the loop” is a way to combine human judgment and expertise with AI’s computational power. It offers a balanced approach, leveraging the strengths of both humans and machines.
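The consultation pattern described above can be sketched generically. This is a minimal illustration of the escalation logic only, assuming an illustrative confidence threshold and a made-up list of high-impact actions; it does not reflect any particular vendor's product.

```python
# Minimal human-in-the-loop sketch: the system acts autonomously only when
# its confidence is high AND the proposed action is low-impact; otherwise
# it consults a human. Threshold and action names are illustrative.

HIGH_IMPACT = {"dispatch_officer", "lock_down_area"}

def decide(event: str, confidence: float, action: str, human_review) -> str:
    if confidence >= 0.95 and action not in HIGH_IMPACT:
        return action                    # fully automated response
    return human_review(event, action)   # human validates or overrides

def operator(event: str, proposed: str) -> str:
    # Stand-in for a console operator; in this sketch the operator approves.
    print(f"Operator reviewing {event}: AI proposes {proposed}")
    return proposed

# Low confidence, so this routes through the operator before acting
decide("perimeter_breach", 0.80, "turn_on_lights", operator)
```

Note the two separate triggers for escalation: uncertainty (low confidence) and consequence (high-impact actions get a human sign-off even when the AI is confident).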

Even automated actions that don’t involve AI, such as an automated visitor management kiosk, can benefit from human involvement when complications arise that the automated programming doesn’t cover, and additional actions are required or more information is needed from the visitor. Companies such as Salesforce are using robots to provide a host of workplace functions across physical security, facilities management, EH&S (Environmental, Health, and Safety), and an array of operational capabilities.

Cobalt Robotics has created and manages the world’s largest fleet of commercial security robots. Realizing that at any time a security robot could identify or encounter a situation in which human judgment and expertise are required, its robot-as-a-service offering includes a Command Center staffed by Security Analysts, more than one-third of whom are U.S. combat veterans, including individuals with private and military security experience who speak over a dozen languages. Robot service customers don’t need such a Command Center capability every day, and certainly couldn’t afford to establish and maintain one, but it is available every day for when the need arises.

Recently, Cobalt Robotics has released its Cobalt Omni service, which is a security camera and access control system automation service designed for companies of all sizes. Omni takes your existing physical security hardware and uses Cobalt’s AI and Machine Learning software to monitor and detect threats in real time. Cobalt’s human-in-the-loop system has specialists verify alerts, reducing noise and providing security teams with actionable information.

4. Increasing Existing Security Investment ROI. How can your product increase the value we get from our existing physical security system investment and/or our security operations capabilities?

You should see many products at GSX that can add value to existing security systems. But don’t limit your product evaluations to only those.

For example, Davista is an AI platform for achieving proactive physical security operations capabilities. Davista is the pioneer in the delivery of artificial intelligence and data visualization solutions for the physical security and law enforcement industries.

Davista's goal is to help organizations fully automate their physical security operations by continuously orchestrating and dynamically mobilizing their safety and security apparatus and personnel in response to developing events and in anticipation of emerging and future trends.

5. Interoperability. What are the interoperability capabilities of your product?

Most security applications are stovepiped, creating data silos. Integrations with other products and systems are usually accomplished through open APIs. Can data easily be shared via the API? AI operates on data and creates data as well. An AI-enabled product should have sharable metadata that can be consumed by other applications. This is part of the value proposition for many, but certainly not all, AI-enabled products.
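As an illustration of what sharable AI metadata might look like, the sketch below parses a small JSON payload of the kind an open API might return. The field names and values are hypothetical, not any vendor's actual schema; the point is that downstream applications consume the metadata, never the raw video.

```python
import json

# Hypothetical AI-generated event metadata shared through an open API.
# Field names are illustrative assumptions, not a real vendor schema.
payload = """
[
  {"event": "person_detected", "camera": "lobby-1",
   "confidence": 0.97, "time": "2023-09-11T14:02:05Z"},
  {"event": "vehicle_detected", "camera": "gate-2",
   "confidence": 0.58, "time": "2023-09-11T14:03:11Z"}
]
"""

events = json.loads(payload)

# A consuming application (VMS, PSIM, dashboard) filters on the shared
# metadata without ever touching the video itself.
high_confidence = [e for e in events if e["confidence"] >= 0.9]
for e in high_confidence:
    print(e["camera"], e["event"])       # prints: lobby-1 person_detected
```

A vendor answer worth probing is whether this metadata is documented, machine-readable, and available to third-party applications, or only to the vendor's own tools.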

Ray Bernard is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). In 2018 IFSEC Global listed Ray as #12 in the world’s top 30 Security Thought Leaders. He is the author of the Elsevier book Security Technology Convergence Insights available on Amazon.

Follow Ray on Twitter: @RayBernardRBCS.