The field of artificial intelligence (AI) has received an unprecedented amount of investment, research, education and product development in recent years. AI is advancing at an increasingly rapid pace and the rate of new product emergence is skyrocketing; in fact, ISC West will likely feature more vendors with AI-enhanced products than in any previous year.
AI-enhanced video analytics stand to transform the video landscape to an extent not previously conceived. What we’ve seen so far is just the tip of the iceberg.
Subscription-based offerings will fit AI best because today’s AI software is developed under the continuous delivery model – an engineering approach in which applications evolve in place through updates delivered every week or so, much like what we see happening with our smartphones and with the model that IT has shifted to.
Hype will abound, as it typically does around impressive technology breakthroughs, fueled by over-excitement and misunderstanding; thus, the impact of these significant changes will be hard to digest initially, partly because the hype will hinder clarity.
The role of AI in shaping the way we live and work, and the way organizations – including governments – operate, spurred Stanford University’s 100 Year Study on Artificial Intelligence initiative, which has produced annual reports since 2016, including the AI Index (available at https://ai100.stanford.edu). According to the 2017 AI Index report, “Without the relevant data for reasoning about the state of AI technology, we are essentially ‘flying blind’ in our conversations and decision-making related to AI.”
For several years we have heard a lot about AI in the security industry – including machine learning and deep learning – especially relating to video analytics. Because the industry has been through highly disappointing video analytics hype cycles in the past, it is important to understand why the new combination of AI and video analytics works so well – to avoid missing out on the AI-related business opportunities for RMR services.
Why Security Industry AI Decisions are Easier
It is important to realize that when it comes to AI for video surveillance applications, several factors make the evaluations and decisions easier than for AI applications in other industries:
1. Video analytics are visual. James Connor, former Sr. Manager Global Security Systems for Symantec and current CEO of security technology consulting firm N2NSecure, explains: “We can count the people in an image and check the results of a people-counting analytic. We can see whether a bicycle, person or vehicle is correctly recognized. We can compare the results of the analytics from multiple vendors by feeding in example video clips representing what we want the analytics to process. We can use multiple live video feeds from selected cameras to compare the performance of competing self-configuring and self-learning analytics. Results evaluation is quick and simple in comparison to other types of AI where the performance is not so easily observable.”
2. AI is transforming security technology beyond its protective purposes into active sources of business-relevant real-time data. Computer scientist Elaine Rich originally defined AI as “the study of how to make computers do things at which, at the moment, people are better.” Today, AI goes beyond replacing and exceeding human performance to extract valuable business operations information from camera data and other sensors in real-time, as we have seen recently with retail video analytics.
3. Security technology AI concepts are understandable. Although the underlying AI technologies in video and other IoT analytics are various and complex, the basic way they work is completely understandable to even a non-technical person. Vendors will help with this by providing plain language descriptions.
4. The number of AI-savvy people is growing quickly. Companies in many industries are developing and strongly supporting large AI scientist and developer communities, which are providing a wealth of AI-related educational materials and development tools. Last year’s enrollment in university AI courses was about five times that of 2017 – meaning that in addition to receiving support from AI product vendors, integrators will be able to find personnel at universities and in developer communities who are very interested in working on real-world AI applications and deployments.
5. High-performance computing hardware for AI is finally available for on-premises deployments. As of just a few years ago, the kind of computing power deep learning AI needed was only available in cloud data centers; now, Intel, Dell, NVIDIA and others are collaboratively developing hardware for high-performance “edge computing.” This means processing data from IoT devices – such as cameras and other sensors – close to where the data is created instead of making long data transmissions to central corporate or cloud data centers.
The Security Industry’s Advantage
Most of the hard work in AI and analytics advancement is being funded and developed by other industries; the security industry’s role is one of fine-tuning breakthrough AI results for security applications. The high levels of AI funding referred to earlier have made that feasible – thanks to the release of new tools that security product engineers can harness for AI development.
Last year, NVIDIA released its Tesla graphics processing unit (GPU), which contains hundreds of processing cores and is expected to accelerate the deep learning training that is central to most AI applications. Intel recently launched OpenVINO (Open Visual Inference & Neural Network Optimization), a toolkit for the quick deployment of computer vision for edge computing in cameras and IoT devices. The OpenVINO toolkit’s open source software works with Intel’s traditional CPUs or with chips designed specifically for AI calculations.
Intel’s Neural Compute Stick 2 contains the company’s Movidius Myriad VPU (Vision Processing Unit), along with software that offloads the deep learning processing to a USB stick. This enables software development to be done using traditional PC and laptop computers. A case in point for the security industry’s use of these tools came at GSX 2018, when Avigilon announced its next generation of advanced AI cameras – based on the Intel Movidius VPU.
AI brings valuable capabilities to security technology – especially video surveillance – that go beyond the traditional roles of security, such as retail analytics. As manufacturers like Avigilon continue to invest in AI-based product development, it makes sense for security service providers to start taking a closer look at how these products can fuel revenue improvements for their current and future client markets.
To do that, it is vital to understand the terminology, how the technologies work, and how they can be applied for security customers.
Machine Learning and Deep Learning
Machine learning is the science of getting computers to perform actions without specifically being programmed to do so. For example, machine learning software for email spam filtering would be “trained” on recognizing spam by being fed thousands of emails labeled either as spam or not spam, and the software would analyze each email and determine from those examples how to identify spam.
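The “train on labeled examples” idea above can be sketched in a few lines of code. This is a deliberately simplified illustration – the training emails, word-count scoring and threshold are assumptions made for the example, not how any commercial spam filter actually works:

```python
# A minimal sketch of learning from labeled examples: the program is never
# told the rules for spam; it derives them from the labeled training data.
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. non-spam emails."""
    spam_counts, ham_counts = Counter(), Counter()
    for text, is_spam_label in examples:
        (spam_counts if is_spam_label else ham_counts).update(text.lower().split())
    return spam_counts, ham_counts

def is_spam(text, spam_counts, ham_counts):
    """Score an email by whether its words appeared more often in spam."""
    score = sum(spam_counts[w] - ham_counts[w] for w in text.lower().split())
    return score > 0

# Illustrative labeled training data (True = spam).
training_data = [
    ("win a free prize now", True),
    ("free money claim your prize", True),
    ("meeting agenda for tuesday", False),
    ("lunch on tuesday with the team", False),
]
spam_counts, ham_counts = train(training_data)
print(is_spam("claim your free prize", spam_counts, ham_counts))        # True
print(is_spam("agenda for the team meeting", spam_counts, ham_counts))  # False
```

Feeding the trainer thousands of labeled emails instead of four would make the word counts, and therefore the classifications, far more reliable – which is exactly why training data volume matters so much in machine learning.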
Deep learning is a type of machine learning that involves artificial neural networks, whose designs are inspired by the way that scientists believe the brain works. A neural network is built from pieces of software called “nodes” – which are organized into layers. Each layer performs a step in the data processing, passing along its results from one layer to the next. Deep learning software typically contains three parts: an input layer, hidden layers and an output layer. Hidden layers are so named because there are no connections to them from the neural network’s input and output interfaces.
The term “deep” refers to neural network software that has many hidden layers – the number of layers determining the depth. A simple neural network has one or two hidden layers between the input and output layers; three or more hidden layers makes it a deep learning neural network, as shown in the nearby illustration. For example, an object detection deep neural network may have the following layers: input layer (receive a still image of a scene), hidden layers (detect moving object, detect object parts, classify object parts, classify object), output layer (provide information on object).
For bicycle detection, let’s say that the layer for “detect object parts” identifies wheels, handlebars, frame and cyclist in the image. The layer for “classify object parts” differentiates between a bicycle wheel and a motorcycle wheel. Based on all the analysis, the layer for “classify object” concludes the object is a bicycle, not a motorcycle.
How many hidden layers there are depends upon how challenging the various steps to object recognition are and how much software is required for each step. What if a cyclist’s backpack must be detected? What about a second rider on the bicycle? Do colors matter? Does object speed matter?
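The layered structure described above can be sketched as a toy feed-forward network. The layer sizes, weights and the “bicycle score” output are invented for illustration – a real object detector learns millions of weights from labeled images rather than using hand-picked numbers:

```python
# A toy feed-forward network: each layer transforms its inputs and passes
# the results to the next layer, and three hidden layers make it "deep."
import math

def layer(inputs, weights, biases):
    """One layer: weighted sum of inputs per node, then tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Input layer: four made-up feature values extracted from an image region.
features = [0.9, 0.2, 0.7, 0.1]

# Three hidden layers (hence a "deep" network), then one output node.
h1 = layer(features, [[0.5, -0.2, 0.1, 0.3], [0.2, 0.4, -0.5, 0.1]], [0.0, 0.1])
h2 = layer(h1, [[0.6, -0.3], [0.2, 0.5]], [0.1, -0.1])
h3 = layer(h2, [[0.4, 0.4], [-0.2, 0.6]], [0.0, 0.0])
out = layer(h3, [[0.7, -0.7]], [0.0])  # hypothetical "bicycle score"

print(len([h1, h2, h3]), "hidden layers -> score", round(out[0], 3))
```

Answering the questions above – backpacks, second riders, colors, speed – would mean more features at the input layer and more (or wider) hidden layers to process them, which is why harder recognition tasks demand deeper networks and more computing power.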
Recent advances in deep learning have made significant improvements in video analytics accuracy. For example, in people-counting applications where the machine accuracy ranged between 80 and 90 percent, deep learning improvements have brought that accuracy up to 98 percent or better. The greater the accuracy, the more complex the deep learning is – and the more computing power it requires.
Higher accuracy means higher cost; fortunately, not all applications require 98 percent or better accuracy.
Cameras on a Mission
Current AI research by Milestone Systems is applying context in a different way – using deep learning to automatically adjust camera configuration in real time to optimize camera settings based on the camera’s purpose.
At the 2019 annual Milestone Integration Platform Symposium (MIPS) event, about six minutes into his Day 1 presentation, Barry Norton, Milestone’s Director of Research, provided example video for a camera whose purpose is to perform license plate recognition (LPR). The demonstration used two Canon VB-S900F cameras, both initially configured optimally for best general performance using on-camera settings. Then one of the cameras activated server-based AI to constantly adjust for best contrast, lack of glare, and lack of motion blur – creating a startling difference between two versions of the same low-light scene. Obtaining this level of camera performance around the clock is not possible with on-camera configuration settings.
AI and RMR: Perfect Partners
Cities are already deploying AI technologies for public safety and security. In Stanford’s 2016 report, the authors concluded that by 2030, the typical North American city will rely heavily on AI technologies, including cameras. Although some types of AI perform better than others, AI-based quality improvements have made camera analytics much more effective than ever. Such analytics also apply beyond smart city use cases into many business and industry sectors.
AI-based technologies are typically offered under an “as-a-service” monthly-subscription model – whether the AI computing is done on the cloud or on premises. Typically, that delivery model results in customers expanding their subscriptions year over year due to the increasing value of the new features.
PSA Security Network is working hard to help its integrators transition into a world in which subscription-based services and RMR are foundational elements. PSA recently introduced its Managed Security Service Provider (MSSP) program, designed to help systems integrators diversify their service offerings and realize the full potential and benefits of a managed services business model. AI is poised to be the game-changer that helps advance that transition for integrators.
It has been an uphill journey because of the mismatch between 20th century security industry practices and the low number of as-a-service offerings available; however, reaching the summit where AI and RMR meet is within sight.
Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (www.go-rbcs.com), a firm that provides security consulting services for public and private facilities. In 2018 IFSEC Global listed him as No. 12 in the world’s Top 30 Security Thought Leaders. He is the author of Security Technology Convergence Insights, available on Amazon, and is an active member of the ASIS member councils for Physical Security and IT Security. Follow him on Twitter, @RayBernardRBCS.