When installing an active surveillance system, the video analytics, and how easily they can be installed and maintained, are critical to the overall cost and accuracy of the solution.
The amount of calibration required is the main driver of both cost and accuracy. And while the industry has not agreed on a common definition of calibration, the generally accepted meaning is manually defining the height and size of a human within the specific field of view of an individual camera.
Systems integrators and users must define what they are looking for to decide which method, manual calibration or adaptive analytics, will deliver the required level of precision in identification.
Manual calibration is typically performed after the camera is mounted: multiple points in the scene are mapped out and recorded with a consistent reference object, such as a pole. The pole helps the camera determine the height of an average human and trains it to trigger an alarm when something of that height enters the field of view. The expectation is that the field of view will not change dramatically (for example, through landscaping, tree growth or new objects) and that the camera will never be repositioned or knocked during routine maintenance.

One challenge rarely discussed is how easily detection can be missed when a person is only partially visible, such as when obscured by objects within a room or walking behind a parked vehicle. To prevent this, analytics that require manual calibration often have to simulate every likely scenario during installation, including having individuals walk the area with and without cars present so that partial objects are detected properly. This further extends and complicates the calibration process. And if the foliage changes or new objects appear in the field of view, the camera must be recalibrated to detect people reliably in the new scene.
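The height-based detection that manual calibration produces can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual implementation; the scene points, pixel heights and tolerance are all invented for the example.

```python
# Illustrative sketch of detection after manual calibration: a reference
# pole's pixel height is recorded at mapped scene points, and an object
# is flagged as a person when its height matches the calibrated value.
# All numbers here are hypothetical.

calibration_points = {
    # (x, y) scene position -> pixel height of the reference pole there
    (100, 400): 180,
    (320, 300): 120,
    (500, 200): 70,
}

def expected_person_height(x, y):
    """Estimate human pixel height at (x, y) from the nearest
    calibrated point (a real system would interpolate more smoothly)."""
    nearest = min(calibration_points,
                  key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return calibration_points[nearest]

def is_person(x, y, pixel_height, tolerance=0.25):
    """Trigger when an object's height is within tolerance of the
    calibrated human height at that position in the scene."""
    expected = expected_person_height(x, y)
    return abs(pixel_height - expected) <= tolerance * expected

# A full-height figure near a calibrated point matches...
print(is_person(110, 390, 170))  # -> True (close to the 180-pixel reference)
# ...but a partially occluded figure (legs hidden behind a car) is missed.
print(is_person(110, 390, 90))   # -> False
```

The second call shows the partial-visibility failure described above: the occluded figure falls outside the calibrated height band, which is why installers end up walking the scene with and without cars present.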
Adaptive side of the equation
Adaptive analytics offer an alternative to manually calibrated analytics, with improved accuracy, reduced installation costs and no ongoing maintenance. They identify people, vehicles and boats by comparing an object's appearance (texture, silhouette or other distinctive visual features) and motion, in real time, against a set of representations synthesized from a database of hundreds of thousands of examples of people, vehicles and boats captured in all weather conditions, lighting and fields of view. Adaptive analytics automatically extend their internal representations by adding examples from the camera's own field of view, becoming more accurate over time. By classifying each object on multiple characteristics, this capability delivers the greatest accuracy on the market while eliminating the need to manually calibrate or tune the analytics, saving significant time, money and resources. As a result, installers and end users can simply install a camera with adaptive analytics and begin detecting people, vehicles and boats within minutes. They can also reposition any camera as a scene changes, without having to recalibrate the channel.
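The classify-by-example-and-adapt loop described above can be sketched in a few lines. This is a toy nearest-exemplar model under assumed features and thresholds, not VideoIQ's actual design; the feature vectors and the `adapt_threshold` value are invented for illustration.

```python
# Hypothetical sketch of exemplar-based classification: an object is
# labeled by its similarity to stored examples, and confident matches
# are added back to the exemplar set so the model adapts to its own
# field of view over time. Feature values are illustrative only.
import math

# Each exemplar: (label, feature vector standing in for appearance/motion
# cues such as aspect ratio, speed and a texture measure).
exemplars = [
    ("person",  [1.8, 0.5, 1.4]),
    ("vehicle", [0.6, 8.0, 0.3]),
    ("boat",    [0.5, 3.0, 0.9]),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, adapt_threshold=0.5):
    """Label by nearest exemplar; add confident matches back to the
    exemplar set, so accuracy improves for this particular scene."""
    label, nearest = min(exemplars,
                         key=lambda le: distance(le[1], features))
    if distance(nearest, features) < adapt_threshold:
        exemplars.append((label, features))  # adapt to this field of view
    return label

print(classify([1.7, 0.6, 1.3]))  # -> person
print(len(exemplars))             # -> 4: the new sighting was absorbed
```

The self-extending exemplar set is what removes the recalibration step: when the scene changes, new observations are folded into the model rather than requiring a technician to remap the field of view.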
Adaptive analytics bring a fresh approach, leading to improved accuracy, lower installation costs and less maintenance. They produce the results integrators and end users have been looking for: the highest levels of accuracy in all weather and lighting conditions, with a simple installation.
Mahesh Saptharishi, PhD, chief technology officer and chief scientist, oversees core technology architecture and development efforts for VideoIQ, Bedford, Mass.