More often than not it’s the process — not the technology — that is to blame for failed perimeter detection projects.
Q: My security team spent hours comparing megapixel cameras and analytics technologies and finally made selections for our fence line perimeter security project. After installation and several months of fine-tuning, we are still getting nuisance alarms, and we are also failing to detect intrusions. How can I hold the manufacturers accountable?
A: Unless they contractually promised to provide a perfectly working system, the most you can do is give them the opportunity to determine what role, if any, their products can play in the solution once you fix it.
The real problem lies in the technology selection and deployment process, which is the diagnosis for most failed technology projects. Unfortunately, this is a common situation when it comes to perimeter detection projects, from small to large. Some projects can cost upwards of $100 million and fail miserably. But when your own project is in crisis, it’s small comfort to know that other projects have failed as well.
The only way to guarantee success is to perform pilot tests in the field. That is actually what the project above turned out to be — a very expensive and extremely upsetting field test.
Requirements are a Critical Challenge
Detecting motion and objects is hardly ever the challenge. The challenge is to report only the valid objects or conditions as alarms, while ignoring the “normal” conditions that would otherwise trigger an analytics event report.
There are numerous stories of projects where animals turned out to be a significant source of nuisance alarms. Then there are the wind-blown newspapers and shopping bags that hit fences and stick, or cross video detection areas. Sunrise and sunset change the video image, as do headlights from passing cars outside the target area.
For this kind of project, developing the requirements is significantly more than a word-processing exercise. You have to be 100-percent sure what the field conditions are — which usually means temporarily installing the kind of cameras you expect to use, and recording a couple of weeks of video around the clock. During this time period, you want to create multiple instances of each type of intrusion activity you want to detect. Be sure to create each condition in three ways: at the near and far ends of the camera’s field of view, plus in the middle. Close to the camera, a dog can appear larger than a person crawling at the farthest point from the camera. Additional complicating factors are seasonal changes. Facility grounds can look very different in summer and winter, and leaves can cause many problems, too.
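To make sure no combination of condition and location gets skipped during recording, it can help to enumerate the full test matrix up front. The sketch below is a minimal illustration of that idea; the category names are hypothetical examples, not a standard list, and a real project would substitute its own intrusion scenarios and site conditions.

```python
from itertools import product

# Hypothetical categories for illustration only; a real test plan would
# use the intrusion scenarios and conditions specific to the site.
intrusion_types = ["walk", "crawl", "fence_climb", "vehicle_approach"]
zones = ["near", "middle", "far"]            # positions in the camera's field of view
conditions = ["day", "sunrise_sunset", "night"]

def build_test_matrix(types, zones, conditions):
    """Return one test case per combination so no scenario is overlooked."""
    return [
        {"intrusion": t, "zone": z, "condition": c}
        for t, z, c in product(types, zones, conditions)
    ]

matrix = build_test_matrix(intrusion_types, zones, conditions)
print(len(matrix))  # 4 types x 3 zones x 3 conditions = 36 recordings to stage
```

Even a short list of categories multiplies quickly, which is one reason a couple of weeks of around-the-clock recording is realistic rather than excessive.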
Testing is Also Critical
Save time and money by using lab testing before field testing. Now that you have recorded the video containing examples of valid events (what you want to report as alarms) and nuisance events (what you don’t want to report as alarms), give copies of the video to candidate analytics vendors. It saves time to test multiple analytics products and approaches — server-based, edge appliance-based and camera-based analytics — in parallel.
Providing all of the vendors with the same video clips is the only way to get a true competitive performance comparison. Vendors are usually receptive to this approach, as they get examples of what works well and what does not, and they can more easily avoid raising false expectations. If you are using multiple technologies as a way to reduce false positives, expect that to increase the testing time by at least 50 percent.
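Because every vendor sees the same labeled clips, the comparison can be reduced to two numbers per product: how many valid events it reported, and how many nuisance events it wrongly reported. The sketch below assumes a hypothetical labeling scheme (clip IDs mapped to true/false event labels) purely to show the arithmetic.

```python
# Hypothetical scoring of one analytics product against labeled test clips.
# Clip IDs, labels and results below are made-up illustrations.
def score_vendor(results, labels):
    """results/labels: dicts mapping clip id -> True (alarm) / False (no alarm).

    Returns the detection rate on valid events and the nuisance alarm rate
    on clips that should NOT have triggered an alarm.
    """
    valid = [c for c, is_event in labels.items() if is_event]
    nuisance = [c for c, is_event in labels.items() if not is_event]
    detected = sum(1 for c in valid if results.get(c))
    false_alarms = sum(1 for c in nuisance if results.get(c))
    return {
        "detection_rate": detected / len(valid),
        "nuisance_alarm_rate": false_alarms / len(nuisance),
    }

labels = {"clip1": True, "clip2": True, "clip3": False, "clip4": False}
vendor_a = {"clip1": True, "clip2": False, "clip3": True, "clip4": False}
print(score_vendor(vendor_a, labels))
# -> {'detection_rate': 0.5, 'nuisance_alarm_rate': 0.5}
```

Tracking both rates matters because a product can trivially maximize one at the expense of the other, and the whole point of the lab stage is to see that trade-off before anything is installed in the field.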
If and when you have found products that perform satisfactorily, it is then time for an actual field pilot test, where you can closely examine the products’ strengths and weaknesses. Create as many activities and conditions as you can think of to stress-test the technology and determine its boundaries of acceptable performance. Remember that not all sites are the same, so establishing a few pilot sites rather than just one might be wise.