Leveraging the power of Wave 2 video analytics

Oct. 31, 2022

When it comes to addressing physical security events, clarity is rarely provided by one single piece of information. Rather, clarity is most often achieved by applying learnings from past experiences, known patterns, the data at hand and perceived possible outcomes.

It’s this contextual information that drives more efficient, appropriate responses to fast-evolving security events.

For security professionals responding to such events, it is the human brain that must work hard to derive clarity from unknown situations. People can make their best guesses, but with limited contextual awareness, sound decision-making is often out of reach.

But what if there was a way to gather greater levels of context for use both by humans and technology to improve security event outcomes?

Thanks to advancements in AI, computer vision and deep learning, this kind of contextual awareness is possible. This is the second wave of video analytics.

What Are Wave 2 Analytics?

When the first wave of video analytics was introduced, security professionals rejoiced. There was now a way to extract both descriptive attributes (men in white shirts near blue trucks) and definitive attributes (face, person and vehicle identities) from fixed and mobile video streams.

It was with this descriptive and definitive information that security professionals could more quickly and efficiently respond to security events (e.g., scan all surveillance feeds in a given area for blue trucks and white shirts).
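
For illustration, a Wave 1-style attribute search over detection metadata might look like the minimal sketch below; the record fields and attribute names are assumptions made for the example, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Detection:
    """One Wave 1 detection record extracted from a video frame."""
    camera_id: str          # which feed produced the detection
    timestamp: datetime     # when the object was seen
    object_class: str       # e.g. "person", "vehicle"
    attributes: set         # descriptive attributes, e.g. {"white_shirt"}

def search(detections: List[Detection], object_class: str,
           required_attributes: set) -> List[Detection]:
    """Return detections of a given class carrying all requested attributes."""
    return [d for d in detections
            if d.object_class == object_class
            and required_attributes <= d.attributes]

# Example: find people in white shirts, and blue trucks, across all feeds.
# people_hits = search(all_detections, "person", {"white_shirt"})
# truck_hits  = search(all_detections, "vehicle", {"truck", "blue"})
```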

However, these analytics merely provide information and do little to understand the context in which the information was gathered and disseminated.

Moreover, many analytic providers have yet to even meet the basic capabilities indicative of Wave 1 analytics. This includes lightning-fast search, accurate tracking and technology that works in challenging scene conditions, on smaller objects and when objects are only partially visible.

Even though not every video analytic solution on the market has fully embraced Wave 1, we're at a point where the flourishing players offering Wave 1-like solutions will coalesce around those core capabilities.

But if the first wave of video analytics was all about information and data gathering, then the second wave is about offering recommendations and answers. Wave 2 analytics correlate data acquired from multi-class algorithms to deliver higher levels of analysis and understanding across event detection, classification, tracking and forensics.

By applying machine learning to the ever-growing body of machine learning-produced metadata captured from video, the system is pushed to check its own accuracy, learn from its own outputs and distinguish what matters most.

In short, Wave 2 analytics delegate more of the context-gathering responsibilities to the system, lessening the burden on human operators while supporting better decision-making on their part.
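
To make the "learning on top of metadata" idea concrete, the sketch below trains a lightweight second-stage model on features summarized from Wave 1 detections and scores which events deserve escalation. The feature names, example values and choice of a scikit-learn model are assumptions for illustration only, not a description of any particular Wave 2 product.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Each row summarizes one detected event: [dwell_seconds, hour_of_day,
# distance_to_asset_m, detections_last_hour]; labels mark events an
# operator previously judged worth escalating (1) or not (0).
X_train = np.array([
    [12,  14,  80.0,  3],
    [240,  2,   4.5,  1],
    [30,  11,  60.0,  5],
    [400,  3,   2.0,  2],
])
y_train = np.array([0, 1, 0, 1])

prioritizer = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new event; a high probability suggests it deserves attention.
new_event = np.array([[300, 1, 3.0, 1]])
print(prioritizer.predict_proba(new_event)[0, 1])
```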

Harnessing Wave 2 Analytics

Much of Wave 2 analytics' contextual awareness comes from adapting human behaviors for technology. Just as the human brain identifies and recognizes patterns over time and correlates these patterns to make logical decisions, Wave 2 analytics can do the same on a massive scale and in ways not feasible for humans.

Imagine trying to identify which people interact with a particular person of interest across six months of video footage containing millions of detections. Where do they meet? On which days and at what times? With what objects do they interact? And what about the other people the person of interest has met with?

Wave 2 analytics are designed to quickly establish distinct patterns, match events to those patterns and find anomalies where known patterns are violated, allowing an operator to quickly identify potential threats.
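
A hedged sketch of how such co-appearance mining could work over detection metadata is shown below; the record format and the two-minute co-occurrence window are assumptions made purely for illustration.

```python
from collections import Counter
from datetime import timedelta

# Each record: (identity, camera_id, timestamp) produced by the analytics.
def co_appearances(records, person_of_interest, window=timedelta(minutes=2)):
    """Count who appears on the same camera within `window` of the person
    of interest, and where/when those meetings happen."""
    poi_sightings = [(cam, ts) for ident, cam, ts in records
                     if ident == person_of_interest]
    partners, places = Counter(), Counter()
    for ident, cam, ts in records:
        if ident == person_of_interest:
            continue
        for poi_cam, poi_ts in poi_sightings:
            if cam == poi_cam and abs(ts - poi_ts) <= window:
                partners[ident] += 1
                places[(cam, poi_ts.strftime("%A %H:00"))] += 1
                break
    # Most frequent contacts, and the camera/day/hour patterns of the meetings.
    return partners.most_common(), places.most_common()
```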

In Wave 2, AI-powered video analytic solutions also know exactly what each camera is looking at, providing a level of detail and context never seen before. This contextual awareness helps frame a particular issue more clearly and improve situational awareness on an entirely new level.

For example, consider an analytic solution designed to alert on instances of loitering. The act of loitering by itself is not necessarily a security concern, so such alerts could generate many nuisance alarms.

Now suppose the video analytic identifies an individual loitering near an ATM or a backup generator. That specific behavior, with the added locational context, may warrant an alert for further investigation.
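
A toy rule along those lines might look like the following; the zone names and dwell-time threshold are invented for the example, and a production system would learn or tune this context rather than hard-code it.

```python
# Illustrative thresholds and zone names only; tuned per site in practice.
LOITER_SECONDS = 120
SENSITIVE_ZONES = {"atm_lobby", "generator_yard"}

def should_alert(dwell_seconds: float, zone: str) -> bool:
    """Loitering alone is not alert-worthy; loitering in a sensitive zone is."""
    return dwell_seconds >= LOITER_SECONDS and zone in SENSITIVE_ZONES

print(should_alert(300, "parking_lot"))     # False: long dwell, benign area
print(should_alert(300, "generator_yard"))  # True: same behavior, risky context
```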

Overcoming Previous Limitations

Early iterations of Wave 1 video analytics were marred by performance issues and customer complaints, mostly surrounding false alarms and misclassifications. And while the first true wave of widely adopted video analytics focused on fixing these issues, Wave 1 analytics are still often limited in their capabilities.

This is largely because many Wave 1 solutions are built on open-source models rather than purpose-built. As the scenes where analytics are deployed become more challenging and specific, purpose-built analytics often use unique approaches such as synthetic training data and transfer learning to extract relevant, potent information about the scene at hand. Wave 1 and open-source analytics can rarely say the same.

To extract more information from video, such as new detection categories, Wave 2 also demands flexible model architectures that make it easy to add those categories quickly and accurately. Purpose-built solutions come with the ability to create tailored detectors and classifiers for unique scenarios.

Of course, to enable such customization, training data and models must be constantly updated and expanded. For example, in the early days of the COVID-19 pandemic, it was this kind of technology that allowed mask detection analytics to be built in days, not months. It is this kind of technology that is now a hallmark of Wave 2 solutions.
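
As an illustration of the underlying approach, the sketch below fine-tunes only a small new classification head on a pretrained PyTorch backbone to add a new category such as mask/no-mask. The backbone choice, class names and hyperparameters are assumptions for the example, not any specific vendor's pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="DEFAULT")   # pretrained general-purpose backbone
for param in model.parameters():
    param.requires_grad = False              # freeze existing knowledge

num_new_classes = 2                          # e.g. "mask", "no_mask"
model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new head only

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training loop sketch: `loader` would yield batches of labeled image crops.
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```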

Deploying Smarter Technology Solutions

Wave 2 analytics have the potential to make technologies smarter in the same way they make humans smarter.

Take a central monitoring station, for example, where hundreds of feeds are manually monitored for threats. With Wave 2, the system as a whole controls monitor selection, showing only the camera feeds operators need to focus on at that moment. This is also known as "intelligent monitoring": rather than requiring operators to define rules or simplistic, empirical conditions, the AI-powered system already knows what truly matters and presents only that to the operator.
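
Conceptually, the feed-selection step can be as simple as ranking cameras by the relevance score the analytics assign them, as in this minimal sketch; the scores and camera IDs are placeholders.

```python
import heapq

def select_feeds(feed_scores: dict, wall_size: int = 8) -> list:
    """Pick the camera feeds the video wall should show right now,
    ranked by the relevance score the analytics assigned to each feed."""
    return heapq.nlargest(wall_size, feed_scores, key=feed_scores.get)

# Example: scores might reflect active detections, anomaly likelihood, etc.
scores = {"cam_01": 0.12, "cam_07": 0.91, "cam_12": 0.45, "cam_19": 0.88}
print(select_feeds(scores, wall_size=2))   # ['cam_07', 'cam_19']
```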

Furthermore, Wave 2 solutions can gather even more context when used on non-traditional cameras. This includes infrared cameras commonly used by first responders and healthcare professionals, and multi-imager cameras that are growing in popularity due to their wide-area coverage. As the breadth of applications for Wave 2 analytics grows, so does the number of industries that can make use of such solutions.

Looking to the Future

AI-driven video analytics are just a small piece of a greater puzzle for coalescing security data. Expect to see metadata gathered from other sensor types, such as badge access and infrared camera data, combined with Wave 2 video analytics to provide entirely new levels of intelligence.
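
As one hypothetical example of such fusion, video-based person counts could be cross-checked against badge-access events to flag possible tailgating; the sketch below assumes simple timestamped inputs and is illustrative only.

```python
from datetime import datetime, timedelta

def tailgating_suspected(badge_swipes: list, persons_counted: int,
                         door_open: datetime,
                         window=timedelta(seconds=10)) -> bool:
    """Cross-check a door event: if the camera counted more people entering
    than badges swiped in the same time window, flag it for review."""
    swipes_in_window = [t for t in badge_swipes
                        if abs(t - door_open) <= window]
    return persons_counted > len(swipes_in_window)

# Example: two people entered on one badge swipe -> suspected tailgating.
now = datetime.now()
print(tailgating_suspected([now], persons_counted=2, door_open=now))  # True
```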

The idea here is that with greater coalescence of security data, higher levels of contextual awareness can be achieved.

Labor resources can then be redeployed to perform mission-critical tasks with greater efficiency. For organizations, this is a win-win: operations are not only more cost-effective but also more secure, thanks to situational awareness gathered with less effort.

Brent Boekestein is the co-founder and CEO of Vintra, Inc., a Silicon Valley company that delivers AI-powered video analytics solutions that transform any real-world video into actionable, tailored and trusted intelligence. He holds an MSc from the University of Manchester (UK) and dual bachelor's degrees in Business and Communications from Westmont College.