Analyzing Analytics

Video analytics as a security technology has been hyped for the past several years, with the widespread view that it has been oversold, overpriced, misapplied and simply not ready for prime time. But recent advancements may change that view.

The field of video analytics — otherwise known as video content analysis — finds its roots in the broader field of computer vision, and, more specifically, machine vision, which evolved in the late 1970s and was used extensively in automotive manufacturing and other industries for parts inspection and quality control. The technology has also been used successfully in the Intelligent Transportation Systems (ITS) market for control of traffic lights at intersections and license plate recognition (LPR).

It only seems natural, then, that the technology would be applied to CCTV in physical security — where, when supplemented with some rules of logic, the value of surveillance cameras could be drastically enhanced, both for real-time and for forensic analysis. Over the last 15 years, companies began to emerge to address the opportunity, with many attracting venture capital due to the technology’s obvious promise. However, the dramatic spike in interest and start-up activity that occurred early on was followed by a measurable drop-off, as many products were plagued by underperformance, false alarms and misguided pricing policies.

But it would be a gross injustice to say that the technology does not work or that all analytics vendors are incapable. There are many cases where the technology has been successfully deployed, and these have generally been cases where the vendor focused on specific applications or environments; a la carte menus of video analytic algorithms tend not to be optimized across the board. Two areas where success has been demonstrated are customer-oriented settings (retail, for example) and outdoor environments. In the latter, the challenges of lighting, heat, wind, vibration and background clutter are quite difficult, and vendors have been forced to focus on those specific challenges to succeed. It is also important to note that working technology alone is not enough; the business or financial value to the customer — reduced manpower costs, fewer cameras, decreased shrink, etc. — must be realized.

Some recent technological advances are contributing to the long-term emergence of video analytics. Certainly one of the most significant is camera-level analytics, which allows a significant amount of processing to be performed at the edge. This conserves network bandwidth by controlling the times and rates of transmission — particularly helpful when transmission is over bandwidth-limited wireless networks.
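The bandwidth argument for edge analytics can be made concrete with a small sketch. The logic below is purely illustrative — the function names, frame rates and activity fractions are assumptions, not any camera vendor's API — but it shows the basic idea: stream at full rate only when the on-camera analytic flags activity, and at a low "heartbeat" rate otherwise.

```python
# Illustrative sketch (hypothetical names and rates): an edge camera sends
# full-rate video only when its analytic detects activity, and a minimal
# heartbeat rate otherwise, reducing upstream bandwidth use.

def transmit_rate(activity_detected: bool,
                  full_fps: int = 30,
                  idle_fps: int = 1) -> int:
    """Frame rate to send upstream for the current interval."""
    return full_fps if activity_detected else idle_fps

def bandwidth_saved(active_intervals: int, total_intervals: int,
                    full_fps: int = 30, idle_fps: int = 1) -> float:
    """Fraction of frames NOT sent, compared to streaming at full rate."""
    sent = (active_intervals * full_fps
            + (total_intervals - active_intervals) * idle_fps)
    return 1 - sent / (total_intervals * full_fps)
```

Under these assumed rates, a camera that sees activity in 5 of every 100 intervals would avoid sending more than 90 percent of the frames a continuously streaming camera would transmit — which is where the benefit on wireless links comes from.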

Many of the embedded analytic algorithms can compensate for various camera issues and quality levels, making it possible to use lower-cost cameras or to achieve better range and resolution. Closely associated with this is the increasing capability of thermal cameras (see the Jan. 2011 Tech Trends, “Thermal Imaging a Hot Technology”) in the outdoor environment, leading to improved detection capability at night. Megapixel cameras represent a different challenge, as they provide a richness of scene detail and information that allows greater analysis to occur over larger coverage areas.

We are also seeing a trend towards tighter integration with adjacent systems including Video Management (VMS), Physical Security Information Management (PSIM), Point of Sale (POS) and Building Automation — mirroring the broader integration and interoperability trend in our industry.

Similarly, analytic cameras are increasingly being tied to conventional PTZ cameras — usually via GPS coordinates — to create real-time tracking systems. Since most video is used in a forensic mode, video content analysis can also be quite useful in the intelligent management of storage resources, including content-sensitive resolution and frame rate adjustments, length of storage, and implementation of archiving and duplication policies. From the operator's standpoint, awareness of potential events not yet classified as alarms can be increased through decisions about which types of occurrences are worth highlighting.
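A content-sensitive storage policy of the kind described above might look like the following sketch. Everything here — the `StoragePolicy` structure, the frame rates, the retention windows and the severity threshold — is a hypothetical illustration of the concept, not any product's actual policy engine.

```python
# Hypothetical sketch of content-sensitive storage management: footage
# containing analytic events keeps full resolution and longer retention;
# routine footage is thinned and aged out sooner. All values illustrative.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    fps: int              # frame rate to retain the clip at
    retention_days: int   # how long before the clip is deleted
    archive: bool         # whether a duplicate archive copy is made

def policy_for(clip_has_event: bool, severity: int = 0) -> StoragePolicy:
    if clip_has_event:
        # Event footage: full rate, long retention; high severity archived.
        return StoragePolicy(fps=30, retention_days=90, archive=severity >= 2)
    # Routine footage: reduced frame rate, short retention, no archive copy.
    return StoragePolicy(fps=5, retention_days=14, archive=False)
```

The design point is that the analytic's classification, not a blanket schedule, drives how much storage each clip consumes — which is the "intelligent management of storage resources" the article refers to.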

We have every reason to expect continuing advances in processing power at the camera and at central servers. In parallel, the imagers in surveillance cameras will become increasingly capable. Combined, this will lead to improved performance, lower cost and more intelligence at the camera.

This greater horsepower and intelligence has several ramifications. Increasingly, the rich detail derived from megapixel imagery will be exploited for better overall performance. Next, as image quality becomes less of a differentiator for camera manufacturers, look for those devices to serve up new flavors of metadata that can be exploited by VMS and other systems for more intelligent, effective and faster search of stored video, as well as real-time search capability.
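To illustrate why camera-generated metadata speeds search: if each stored clip carries tags supplied by the camera's analytic, a VMS can answer a query by filtering those tags instead of decoding and re-analyzing video. The tag names and query interface below are invented for illustration only.

```python
# Illustrative sketch (hypothetical tag schema): searching stored clips by
# camera-supplied metadata instead of re-processing the video itself.

def search(clips, **criteria):
    """Return clips whose metadata matches every key/value in criteria."""
    return [c for c in clips
            if all(c.get("meta", {}).get(k) == v
                   for k, v in criteria.items())]

clips = [
    {"id": 1, "meta": {"object": "person",  "color": "red"}},
    {"id": 2, "meta": {"object": "vehicle", "color": "red"}},
    {"id": 3, "meta": {"object": "person",  "color": "blue"}},
]

# A query such as search(clips, object="person", color="red") narrows
# the result set without touching any video frames.
```

The same filtering can run as clips arrive, which is what makes the real-time search capability mentioned above plausible.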

It is reasonable to project that we will see more application-specific and market-specific products due to better information capture and processing. Those who process this data in centralized systems have the opportunity to develop customized algorithms based on different types of metadata flowing into the system; and they can execute them with increasingly powerful processors, providing advanced search functionality, contextual search, correlation with other events and real-time forensics. Systems that get smarter over time will provide better performance and easier installation and calibration.

Ultimately, the utility and business value of surveillance systems will be enhanced, enabling expanded use of services such as cloud monitoring, and tighter integration of other business and building functions.


Ray Coulombe is founder of, enabling interaction with specifiers in the physical security and ITS markets; and Principal Consultant for Gilwell Technology Services. He can be reached at