Could regulations tamp down promise of AI in security industry?

June 2, 2023
As the feds seek feedback on proposed guidelines, industry leaders are already discussing ethics and responsibility issues and showcasing how their firms use artificial intelligence.

With the tentacles of artificial intelligence (AI) beginning to reach into nearly every corner of American society, the U.S. government is taking the first steps toward slowing down the train. 

But doomsday scenarios aside, the security industry is grappling with how to respond to proposed government regulations even as it integrates the promising technology into beneficial applications. 

Through the National Telecommunications and Information Administration (NTIA), an agency within the U.S. Department of Commerce, the government has issued a Request for Comments on AI system accountability measures and policies. 

“We must move fast because these AI technologies are moving very fast,” said NTIA head Alan Davidson at a recent press conference at the University of Pittsburgh. “We’ve had the luxury of time with some of those other technologies. This feels much more urgent.” 

But some observers question whether this effort will capture industry-sector guidance requirements for the fastest-growing application -- generative AI. Generative AI refers to a class of AI systems that, after being trained on large data sets, can generate text, images, video, sound, metadata and other outputs in response to a well-formed series of prompts. 

Generative AI gets many things right, but in many use cases it produces subtle inaccuracies. 

The President’s Council of Advisors on Science and Technology (PCAST) has launched a working group on generative AI whose goals largely focus on improving the technology across a wide range of applications, from the accuracy of advice given to healthcare professionals, to computer code generation, to descriptive analyses of retail customer behavior and purchasing trends over time. 

In SecurityInfoWatch’s ISC West 2023 coverage, we reported on solutions like Dragonfruit’s Star Trails, powered by the generative AI engine GPT. Retail security professionals can view an infographic of behavior at a single location that includes human “Star Trails” -- a floor plan overlaid with layers of shopper behavior. 

A CEO or board member can make a request directly to the solution’s chat-capable input, such as: “Analyze customer trends across all stores, during the busiest times, of the most popular endcap displays.” The solution returns an output summarizing those behaviors and the projected results for the most popular endcaps. 

Endcaps help shoppers find what they need quickly without having to venture down multiple aisles and can encourage them to buy a product they normally wouldn’t.  

In this case, the C-level executive finds out customers are now buying a brand of cleaner they had not seen before. The retailer adjusts the ad rate for the new product, dedicates an expanded area and runs an “introductory” sale. It’s a win for the retailer, the manufacturer and, most importantly, the customer. 
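A minimal sketch of how such a natural-language analytics request could be assembled and handed to a generative model is below. The foot-traffic data model, field names and query helper are hypothetical, not Dragonfruit’s actual API:

```python
from collections import Counter

# Hypothetical foot-traffic records -- not Dragonfruit's actual data model.
events = [
    {"store": "Store 12", "hour": 17, "endcap": "Cleaning", "dwell_sec": 42},
    {"store": "Store 12", "hour": 17, "endcap": "Snacks", "dwell_sec": 15},
    {"store": "Store 7",  "hour": 18, "endcap": "Cleaning", "dwell_sec": 38},
]

def busiest_hours(records, top_n=2):
    """Return the hours with the most shopper events across all stores."""
    counts = Counter(e["hour"] for e in records)
    return [hour for hour, _ in counts.most_common(top_n)]

def build_prompt(records):
    """Compose a natural-language analytics request for a generative model."""
    peak = busiest_hours(records)
    visits = Counter(e["endcap"] for e in records if e["hour"] in peak)
    summary = ", ".join(f"{name}: {n} visits" for name, n in visits.most_common())
    return (
        "Analyze customer trends across all stores during the busiest times "
        f"(hours {peak}) for the most popular endcap displays. "
        f"Observed endcap traffic: {summary}."
    )

# The resulting prompt would then be passed to the generative engine,
# which returns the behavioral summary described above.
print(build_prompt(events))
```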

Leveraging AI Algorithms

To build the strongest NTIA response -- and, hopefully, earn inclusion in some form of national framework on AI accountability -- I solicited input from industry leaders already working on ethics and responsibility. 

The University of Houston, together with HoustonFirst and the Public Safety Interest Group within the Security Industry Association, is collaborating on a submission highlighting the public safety, life safety and security industry solutions that already leverage AI algorithms to deliver efficient, ethical and trustworthy services. 

Admittedly, China has developed a comprehensive regulatory framework in “The Measures,” but what works in an environment focused on preserving a 100-year-old authoritarian regime will not work for the complex and diverse needs of Western culture. This is covered in a separate article, “U.S., China and Europe begin push to regulate AI.” 

At a recent SIA Public Safety Interest Group meeting to kick off the AI System Accountability response to the NTIA, James Connor of Ambient AI described how the company’s artificial general intelligence solution works to achieve trustworthy AI processes. 

“It’s humanism associated with AI. You try to mimic human decisions, but in a very ethical way -- which is great because it approximates an easy way for end users to use it. The simple AI algorithm to detect piggybacking is one example.” 
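Connor does not spell out the algorithm, but a minimal rule-based sketch of a piggybacking (tailgating) check might compare credentials presented against people observed entering during one door cycle. The event fields and names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DoorCycle:
    """One door-open event: badge swipes granted vs. people observed entering."""
    door_id: str
    badges_granted: int     # valid credential reads during the open interval
    persons_detected: int   # people counted crossing the threshold (e.g., by video analytics)

def detect_piggybacking(cycle: DoorCycle) -> bool:
    """Flag a cycle when more people enter than credentials were presented."""
    return cycle.persons_detected > cycle.badges_granted

# Example: one badge swipe, two people through the door -> piggybacking alert.
cycle = DoorCycle(door_id="lobby-east", badges_granted=1, persons_detected=2)
if detect_piggybacking(cycle):
    print(f"Piggybacking suspected at {cycle.door_id}")
```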

Looking Into the Fire

The import: AI can improve the survivability of victims and the safety of first responders. 

Jim Cooper, a 20-year volunteer firefighter who is intimately familiar with the public safety technology side, weighed in. 

“Seeing the crossover between public safety and AI we're starting to see augmented video for first responders,” Cooper says. “We must figure out a way to achieve trust in the public safety community with tech like AI. The joke among the fire departments is that firemen hate new things and want things to stay the way they are.” 

Relevance is essential in developing AI accountability, and that need leads to use cases and workflow development. How can AI advancement continue unencumbered, yet proceed in a spirit of responsibility with a passionate focus on life and public safety? 

“I've got a use case with drones, machine learning, environmental detection, inference – it’s actually something that affects every single fire alarm,” Cooper continues. “It’s referred to as the art of reading smoke – looking at what a fire is doing with smoke volume, turbulence and color. 

“If you can determine what gases are being emitted, there’s a whole bunch of data that can be aggregated together for real-time situational awareness. So you just pop a drone up, fly it in circles and let it build a model to see what it’s doing.” 

Cooper continues describing the use case, in which unrestricted sensor fusion, together with AI inference, will ultimately save lives. He says one of the leading causes of injuries is fire flashover burns. 

“A lot of incident commanders get tied up with tactical details instead of standing across the street watching the big picture,” Cooper notes. “You need to watch that smoke and see what it’s been doing over the last hour. 

“As it moves through the phases, you can tell a lot about interior conditions based on what’s coming out of the windows, and at what point the fire chief needs to say, ‘Alright, it’s been going on too long. I’ve got to pull the guys.’” 
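As a rough illustration of how those “reading smoke” observations could feed an inference pipeline, the sketch below scores flashover risk from volume, turbulence and color features. The features, weights and thresholds are illustrative assumptions, not an actual fire-science model:

```python
def flashover_risk(volume: float, turbulence: float, color_darkness: float) -> str:
    """Very rough heuristic: dense, turbulent, dark smoke suggests rising interior heat.

    All inputs are normalized 0.0-1.0 scores produced upstream (e.g., by video
    analytics on drone footage). Weights and thresholds here are illustrative only.
    """
    score = 0.4 * volume + 0.35 * turbulence + 0.25 * color_darkness
    if score > 0.75:
        return "HIGH - consider pulling interior crews"
    if score > 0.5:
        return "ELEVATED - monitor smoke behavior closely"
    return "LOW"

# Example reading from a drone orbiting the structure.
print(flashover_risk(volume=0.9, turbulence=0.8, color_darkness=0.7))  # HIGH
```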

Cooper’s use case is an example of how AI could augment life safety. But an onerous privacy policy protecting civilians in the drone’s field of view could present legal challenges in using a drone for overwatch – although ideally it would be collecting visual smoke, thermal and gas data in areas where the public should not be present. 

This is also a use case where it will be possible – through low-power, high-TOPS AI processing in drone cameras – to anonymize bystanders in favor of privacy while still performing diverse data collection. 
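A minimal sketch of what that on-camera anonymization step could look like, assuming an upstream person detector already supplies bounding boxes (OpenCV is used here only for the blur; the detector itself and the box format are assumptions):

```python
import numpy as np
import cv2  # pip install opencv-python

def anonymize_bystanders(frame: np.ndarray,
                         person_boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Blur detected person regions before the frame is stored or transmitted.

    person_boxes are (x, y, width, height) rectangles from an upstream detector
    running on the drone's low-power AI processor.
    """
    out = frame.copy()
    for x, y, w, h in person_boxes:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (31, 31), 0)
    return out

# Synthetic 480x640 frame with one hypothetical detection.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
safe_frame = anonymize_bystanders(frame, [(100, 200, 60, 120)])
```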

SecurityInfoWatch has reported on these technologies, which are now available in IP cameras but not yet deployed on lower-cost drones. My projection is that CES 2024 will showcase some of these drones at a sub-$5,000 price point, with sensor fusion and AI inferencing included, making them ideal for firefighting. 

Sensing the Environment

The import: Helping aviators avoid wind shear on airport approaches can save lives, and detecting air quality issues reduces health risk. 

Frank De Fina of Vaisala, a sensor fusion company focused on the environment, has a solution that accommodates privacy and can analyze the air in real time through remote sensing of the atmosphere. 

“A lot of times people combine sensors in video surveillance, so the camera can see the weather while the sensor observes it. But we do upper air, so if there’s an inversion or mixing layer our LiDARs can observe this. They can also monitor the smoke layers in the air,” De Fina says. 

“Our LiDAR sensors are looking up rather than down. They tell you the wind speed and direction within an area and track the plume where the smoke is moving. They’re at every airport measuring the clouds and particulates in the atmosphere from a near-infrared laser.” 

One advantage of leveraging diverse environmental data for AI accountability is the benefit to day-to-day activities, he notes. 

“We’ve already had cases on the air-quality side where people were routing to where there’s not a lot of pollution when riding their bikes, going for a walk or run,” De Fina said. “Do they wait until it’s not as bad? This is LiDAR for the environment.” 
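A toy sketch of that kind of routing decision is below, assuming hypothetical air-quality-index readings per route segment rather than Vaisala’s actual data feed:

```python
def cleanest_route(routes: dict[str, list[float]]) -> str:
    """Pick the route whose worst air-quality-index reading is lowest."""
    return min(routes, key=lambda name: max(routes[name]))

# Hypothetical AQI readings along three candidate bike routes.
routes = {
    "riverfront": [42, 55, 48],
    "downtown":   [95, 120, 88],   # passes near heavy traffic
    "parkway":    [60, 65, 58],
}
print(cleanest_route(routes))  # riverfront
```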

Including Vaisala’s sensors as AI-inferenced data fusion in public situations, together with Jim Cooper’s smoke-reading workflow and James Connor’s AI “humanism,” will demonstrate to the NTIA how this industry sees and analyzes the “big picture” responsibly.

Potential of Public/Private Partnerships

The import: Policies and procedures are just as important as technology. Public/private partnerships reveal this. 

But how is industry linked to practitioners for responsible AI development? Maurice Singleton, a 30-year trusted advisor and fellow solutions architect for City of Houston projects, works with the U.S. Department of Homeland Security’s Law Enforcement Liaison Jack Hanagriff. 

They oversee and run two technology labs for the City of Houston, one dealing with public safety and security technology and the other dealing with smart-city transportation and mobility. 

Hanagriff describes the public/private partnership used at Super Bowl LIVE! -- a fan fest with concerts, NASA exhibits and the NFL Experience, attended by millions -- that I was fortunate to support as solutions architect. 

“We have two living labs, one for public sector/security and the other one for transportation mobility. AI governance is part of our advisory group where the University of Houston is involved,” Hanagriff says. “That means working with public safety officials and industry partners to start looking at AI governance in the public sector because the public is putting out all these analytics capabilities, but maybe without the policy and procedures dealing with it.” 

Don Zoufal of Crowznest Consulting, also a 20-year veteran of aviation and law enforcement leadership in the Chicago area, is working on AI governance at Duke University and with the International Association of Chiefs of Police committee on the topic. 

Sounding Off on AI

The import: Audio is of equal importance as video in smart-city use cases. 

Zenitel is an audio company focused on voice communication that people can hear and understand in virtually any environment. Dan Rothrock, Zenitel’s president of Security & Safety Americas, describes how AI accountability is relevant to its mission. 

When it comes to public safety, Rothrock says Zenitel is augmenting what it collects.

“The audio portion is really interesting right now because it’s not just AI, but it’s using machine learning that’s available now in the public safety area -- especially in the smart-city applications,” says Rothrock, who chairs a Security Industry Association committee on audio. 

“AI will not distinguish data, whether it’s voice or video. I believe they are subject to the exact same scrutiny as far as whether there’s bias in there, who has access to the information, what decisions you’re making, and the two-party consent state laws around privacy.” 

Importance of Data Diversity

The import: Data is front and center in the need for transparency in AI decisions.

Ambient AI’s Connor says diversity of data is a key topic.

“When you move up into what would be considered cognitive AI, you’re doing inferencing. One example is that after training you might conclude that black shirts have something to do with fence jumping. That’s very wrong,” Connor explains. “So, diversity of data is really important to ensure we’re transparent about the data inputs, what the AI does and what it doesn’t do.

“To answer trust issues, you need some governance or suggestions around how to govern connections and inputs, so you know what the technology is capable of delivering and what decision levels are being triggered.” 
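One lightweight way to surface the kind of spurious correlation Connor describes is to cross-tabulate an incidental attribute against the label in the training set. The records and field names below are made up for illustration:

```python
from collections import Counter

# Hypothetical training records: shirt color is incidental, the label is the event.
records = [
    {"shirt": "black", "label": "fence_jumping"},
    {"shirt": "black", "label": "fence_jumping"},
    {"shirt": "black", "label": "normal"},
    {"shirt": "white", "label": "normal"},
    {"shirt": "white", "label": "normal"},
]

def label_rates_by_attribute(rows, attribute, positive_label):
    """Rate of the positive label within each attribute value -- large gaps hint at bias."""
    totals, positives = Counter(), Counter()
    for r in rows:
        totals[r[attribute]] += 1
        positives[r[attribute]] += r["label"] == positive_label
    return {value: positives[value] / totals[value] for value in totals}

print(label_rates_by_attribute(records, "shirt", "fence_jumping"))
# e.g. {'black': 0.67, 'white': 0.0} -- the model could learn shirt color, not behavior.
```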

Customer-Facing Sectors Advance

The import: Industries need faster, more accurate client transactions. 

To prioritize responsible innovation safely, accurately and ethically, Salesforce has developed its Guidelines for Trusted Generative AI based on five goals. 

  1. Accuracy: Industries need to deliver verifiable results, balancing accuracy and speed. Why did the AI give the responses it did? What should be double-checked? Prevent critical tasks from being fully automated; e.g., launching code into a production environment without human review (see the sketch after this list).
  2. Safety: Every effort must be made to mitigate harmful output by conducting bias assessments, explainability analysis and red teaming. Protect the privacy of personally identifiable information present in datasets. Use digital sandboxes first and go into production later.
  3. Honesty: Respect data provenance and ensure consent to use open-source and user-provided data. Be transparent when content has been created autonomously by a generative AI system, such as chatbot responses to a consumer.
  4. Empowerment: Know when to augment and when to automate. There are some cases where it is best to fully automate processes, but in others AI should play a supporting role because human judgment is required.
  5. Sustainability: Simpler code may require a series of input prompts but will most likely take less time to test and execute and, therefore, use less power (a smaller carbon footprint). Smaller, better-trained models can outperform larger, more sparsely trained models and be more resilient.
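The accuracy and empowerment goals both come down to keeping a human in the loop for critical tasks. A minimal sketch of such a gate is below; the task names, confidence threshold and routing helper are assumptions, not Salesforce’s implementation:

```python
from enum import Enum

class Action(Enum):
    AUTO_APPLY = "auto_apply"        # low-stakes output, apply directly
    HUMAN_REVIEW = "human_review"    # critical task, route to a person first

# Tasks assumed critical enough to always require a human in the loop -- illustrative only.
CRITICAL_TASKS = {"deploy_to_production", "send_customer_refund"}

def route_ai_output(task: str, confidence: float, threshold: float = 0.9) -> Action:
    """Gate generative-AI output: never fully automate critical or low-confidence tasks."""
    if task in CRITICAL_TASKS or confidence < threshold:
        return Action.HUMAN_REVIEW
    return Action.AUTO_APPLY

print(route_ai_output("draft_product_description", confidence=0.95))  # Action.AUTO_APPLY
print(route_ai_output("deploy_to_production", confidence=0.99))       # Action.HUMAN_REVIEW
```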

Generative AI that meets these goals -- helping to understand customer behavior and personalize the timing, targeting and delivery of content -- can be useful. In ethical and efficient AI-powered commerce, a highly personalized experience unfolds with meaningful analysis at every touchpoint. 

The AI ‘On/Off Switch’

The import: The need for AI risk management.

So what happens when an AI process needs to be halted because something has gone very wrong? 

In a recent interview with Tucker Carlson, Elon Musk explained that the most complex AI processes having the greatest influence on society are done in very few places. “They generate a great deal of heat from energy consumption. You can even see them from space. You would simply cut power to these locations,” he says. 

But the security industry is dealing with many smaller AI processing locations. 

“We don't have any sort of on/off switches. How do we have the governance and remove liability?” Connor laments. “Companies will get shut down because people will challenge them and embroil them in litigation around bias. 

“We have to provide the framework so that people can feel safe to develop the technology within the framework, so they’re not liable.”
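The "on/off switch" Connor calls for does not have to be physical. A deployment-level sketch is below, assuming a shared halt flag that every inference worker checks before processing; the class and job names are hypothetical:

```python
import threading

class KillSwitch:
    """A process-wide halt flag that inference workers consult before each job."""
    def __init__(self):
        self._halted = threading.Event()

    def halt(self, reason: str):
        print(f"AI processing halted: {reason}")
        self._halted.set()

    def is_halted(self) -> bool:
        return self._halted.is_set()

switch = KillSwitch()

def run_inference(job: str):
    if switch.is_halted():
        # Fail closed: skip analytics and fall back to human monitoring.
        return None
    return f"processed {job}"

print(run_inference("camera-42 frame batch"))
switch.halt("bias complaint under investigation")
print(run_inference("camera-43 frame batch"))  # None -- processing stopped
```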

Steve Surfaro is Chairman of the Public Safety Working Group for the Security Industry Association (SIA) and has more than 30 years of security industry experience. He is a subject matter expert in smart cities and buildings, cybersecurity, forensic video, data science, command center design and first responder technologies. Follow him on Twitter, @stevesurf.