Tech Trends: Sensor Fusion Pushes AI Forward

Jan. 14, 2022
Combining data from multiple sensors can ease the strict reliance on video surveillance when creating deep learning algorithms

This article originally appeared in the January 2022 issue of Security Business magazine.

At October’s annual CONSULT conference in San Antonio, I had the opportunity to moderate a panel discussing the current state of artificial intelligence in the security industry. The panel included Quang Trinh from Axis Communications, Aaron Saks from Hanwha Techwin, and Srinath Kalluri from Oyla, and we discussed the many challenges in moving AI forward – particularly the area of AI known as deep learning.

While I have defined deep learning in previous articles, let’s summarize the concept for this column’s purposes as a form of AI where computers are taught to mimic the human thought process through multiple layers of algorithms known as “neural networks.” Within the security industry, the holy grail of deep learning would be the ability for computers to interpret video feeds and identify behaviors that constitute a security threat. Video analytics have that same mission, but they are focused on finding discrete elements based on very narrowly defined rules.

Deep learning algorithms would have the ability to interpret scenes based on a much broader set of rules – many of which the computer has defined itself through its own ability to learn from data.
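
To make the “layers” idea concrete, here is a minimal, illustrative sketch of a small neural network in Python using PyTorch. It is not drawn from any vendor’s product; the layer sizes, the 512-value input feature, and the two example classes are assumptions chosen purely for illustration.

    # Minimal sketch (illustrative only): a tiny multi-layer neural network.
    # Layer sizes and the two example classes are hypothetical.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(512, 128),  # layer 1: compress a 512-value feature vector
        nn.ReLU(),
        nn.Linear(128, 32),   # layer 2: learn higher-level combinations
        nn.ReLU(),
        nn.Linear(32, 2),     # output layer: scores for "threat" vs. "innocuous"
    )

    features = torch.randn(1, 512)   # stand-in for features extracted from a video frame
    scores = model(features)         # forward pass through the stacked layers
    print(scores.softmax(dim=1))     # class probabilities

The point of the sketch is simply that the rules live in the learned weights of the stacked layers rather than in hand-written logic, which is what separates deep learning from traditional rule-based analytics.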

Still a Ways to Go

While some sales and marketing folks would have potential customers believe that we are near the end-state of deep learning, in reality we are far from it. There are a multitude of issues with deep learning.

For one, for computers to train themselves to recognize a security incident, they need massive amounts of annotated training data showing what constitutes an actual threat. While teaching a computer to find a red balloon is easily achievable thanks to the massive number of free images available on the internet, video clips of security incidents are much more difficult to obtain. In addition, these clips would need to be annotated (i.e., this one is a video of a fight, this is a video of vandalism, this one is innocuous, etc.). Vast libraries of security-specific datasets are rare or nonexistent, and certainly not available to solution developers in a publicly accessible internet location. This problem is compounded by privacy laws, whose data retention limits would in many cases prohibit the creation of these libraries in the first place.
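
For readers who have not worked with training data, here is a hypothetical sketch of what an annotated clip library might look like; the file names and label set are invented for illustration only.

    # Hypothetical sketch of an annotated security-video library.
    # Clip names and labels are invented for illustration.
    annotated_clips = [
        {"file": "clip_0001.mp4", "label": "fight"},
        {"file": "clip_0002.mp4", "label": "vandalism"},
        {"file": "clip_0003.mp4", "label": "innocuous"},
    ]

    # A training pipeline would sample (video, label) pairs from a library like this;
    # the column's point is that such libraries barely exist for security incidents.
    labels = sorted({c["label"] for c in annotated_clips})
    label_to_index = {name: i for i, name in enumerate(labels)}
    print(label_to_index)   # e.g. {'fight': 0, 'innocuous': 1, 'vandalism': 2}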

Sensor Fusion Advances AI

One way the industry can make beneficial use of AI technology today is by combining data from multiple sensors. The concept of sensor fusion involves using multiple sensor types to create a more robust picture of reality that can help detect threats while also providing data used to train AI learning engines.

One company focused on sensor fusion is Oyla, which combines video data with LiDAR to create a 3-dimensional picture of a scene, as opposed to traditional 2-dimensional video feeds. I spoke with Srinath Kalluri, Founder and CEO of Oyla, about how sensor fusion will help advance AI technology.

“Neural network-based deep learning models, when combined with sensor fusion, ‘learn’ the environment and get better with use (data),” Kalluri says. “This enables the user to train the AI to recognize and eliminate false alarms. AI models can also be used to classify the nature of threats, further improving the accuracy of threat assessment.”

Sensor fusion also helps us move beyond a strict reliance on video surveillance as the only data source.

“Video is used for classifying the nature of the perimeter threat,” Kalluri explains. “The initial use of video in this application was retroactive – i.e. for investigative and evidentiary purposes – but recent advances in video analytics are enabling proactive, real-time applications. However, video as a sensor has limitations – it does not work in the dark and often lacks the detail in contrast or spatial context for high accuracy detection/classification.

“These limitations can be overcome by combining the video data with data from complementary sensors,” Kalluri adds. “For example, tripwire sensors such as fiber provide alerts for large, fenced perimeters, while radar and LiDAR add spatial information and work in poor environmental conditions.”

Kalluri also notes that sensor fusion is particularly helpful for perimeter security applications: “The AI revolution is increasingly important for perimeter security applications for enabling the automation of error-prone and expensive human tasks,” he says. “AI can take up these attention-intensive tasks and assist human operators in making better decisions. Deep learning AI approaches, when combined with new sensors such as LiDAR, radar and thermal, are increasingly being used to provide robust 24/7 perimeter solutions.”
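
At a conceptual level, one simple form of fusion is to stack aligned sensor data into a single input. The sketch below is illustrative only and is not Oyla’s implementation: it assumes a camera frame and a LiDAR depth map that have already been aligned to the same resolution, which the article does not specify.

    # Illustrative sketch only (not Oyla's implementation): early fusion of a
    # 2D video frame with an aligned LiDAR depth map into a single RGB-D array.
    # Resolution and alignment are assumptions for this example.
    import numpy as np

    height, width = 480, 640
    rgb_frame = np.random.rand(height, width, 3)     # stand-in for a camera frame
    lidar_depth = np.random.rand(height, width, 1)   # stand-in for per-pixel depth (meters)

    # Stack depth as a fourth channel alongside RGB, giving each pixel both
    # appearance and spatial (distance) information.
    rgbd = np.concatenate([rgb_frame, lidar_depth], axis=-1)
    print(rgbd.shape)   # (480, 640, 4)

Fusion can also happen later in the pipeline, by combining each sensor’s detections rather than its raw data; either way, the point is that the model gains spatial context that video alone cannot provide.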

AI Provides Tangible ROI

Ultimately, as AI solutions continue to evolve, they will help reduce operational costs for end-users.

“AI solutions, when engineered correctly, provide significant operational cost savings,” Kalluri says. “At a high level, the automation of error-prone tasks saves the cost of sending officers to investigate and resolve incidents – in this way AI is a force multiplier. Indirect cost savings include lower insurance and compliance costs, in addition to the direct costs due to loss or liability brought on by intrusions; however, to realize these significant cost savings, intrusion prevention technologies need to be accurate with low false positives.”

Learn more about Oyla at http://oyla.ai.

Brian Coulombe ([email protected]) is Principal and Director of Operations at Ross & Baruzzini | DVS. Connect with him on LinkedIn at www.linkedin.com/in/brian-coulombe or on Twitter, @DVS_RB.