Determining the Effectiveness of Physical Security Systems

This question can be prompted by a variety of situations, such as equipment purchase requests, annual maintenance and service budget approvals, security incidents, and unresolved investigations.

Q:    I’ve been asked to rate the effectiveness of our physical security system deployment. How do I do that?

A:    There are several perspectives involved in rating security system effectiveness. Which to use depends upon who is asking for the rating, and why.

Rating any aspect of a security program – people, process or technology – requires baseline criteria against which to measure the results being achieved. By baseline is meant, “a clearly defined starting point from which a comparison is made.” The baseline can be established by answering one or more of these questions:

  • What minimum results do we absolutely need?
  • What results do we want?
  • What results do we think we are currently achieving?
  • What have we told management or other stakeholders we’ll achieve?
  • For people and process: What results are possible to achieve according to current best practice?
  • For technology: What results are possible to achieve with the current state of technology?

Which questions to ask may depend upon what is prompting the need for evaluation. If there is an opportunity to obtain approval or funding for improvements, or if there has been a personnel change in a security management role, then it’s best to consider all of the questions, both (a) to be fully prepared to make the business case for the technologies (design and quality of deployment) and (b) to educate stakeholders, whether or not they are already familiar with the security program. Additionally, there may be a response time frame to consider. Is the effectiveness rating needed next week, in 30 days, or for a quarterly or annual review? The better your security system deployment documentation is, the shorter the evaluation will take.

For this article we’ll look only at door access control, but similar approaches can be used for radio communications, intercom systems, intrusion detection, photo ID badging, visitor/contractor management systems, entrance turnstile systems and so on.

Evaluation Context

The job of security is to reduce security risks to acceptable levels, at an acceptable cost. If formal security risk assessments have not been performed and used as a means of establishing the security technology requirements, then some of the risk factors will need to be documented as part of establishing the rating criteria. This is a thumbnail-sketch approach to what really should be a formal risk assessment, but it establishes the right perspective and should be accurate enough for quickly rating the effectiveness of security systems.

Considering door access control, for example, have facility floor plan maps been marked to establish which zones contain critical assets, such as executives, critical records, confidential information, information technology, cash, food, expensive or long-lead-time key equipment, manufacturing lines, customer service areas, and so on? If not, identify such areas and rate each as high, medium or low business-impact risk based on the impact of a serious threat actor obtaining physical access to the area. Use the worst-case scenario for evaluation, such as a disgruntled ex-employee tailgating into an area.
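
For illustration only, here is a minimal sketch (in Python) of how such zone ratings might be tabulated so that gaps stand out. The zone names, ratings and field names are made-up examples, not an assessment of any real facility.

    # Hypothetical sketch: recording critical-asset zones and their
    # business-impact ratings for the effectiveness evaluation.
    zones = [
        {"zone": "Executive suite",    "impact": "high",   "access_controlled": True},
        {"zone": "Server room",        "impact": "high",   "access_controlled": False},
        {"zone": "Records storage",    "impact": "medium", "access_controlled": True},
        {"zone": "Shipping/receiving", "impact": "low",    "access_controlled": False},
    ]

    # Flag high-impact areas that lack door access control.
    gaps = [z["zone"] for z in zones
            if z["impact"] == "high" and not z["access_controlled"]]

    print("High-impact areas lacking access control:", gaps or "none")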

Where door access control technology measures are not sufficient, what other measures are in place or could be established to reduce or eliminate the risk? For example, is proper notification performed when an employee is terminated, perhaps along with a broader reminder not to allow tailgating?

This kind of high-medium-low rating system provides a visual way of identifying and sharing the status of critical area door access control, by indicating which doors are controlled and highlighting where additional door control is needed. Consider layers of security, so that if someone tailgates into a certain part of the building, internal access zones exist to prevent close access to critical assets.

Baseline Criteria

Some organizations have security design plans on which their security system deployments are based. The rationale these plans contain for each security system technology, along with the documentation of how it is deployed, can be a good starting point for baseline evaluation criteria. Here are some examples of baseline design criteria that can be used as rating criteria; a brief sketch of how such criteria might be checked against door records follows the list:

  • Layered Access Control. All building areas containing critical assets must have at least three layers of access control starting at the building entrance point and following each possible pathway to the area holding the critical asset.
  • Video Record of Access. All access-controlled doors must have at least one camera covering the doorway so that a visual record of entry is retained. Ideally facial views will be captured.
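
As a rough illustration, the two example criteria above could be expressed as simple per-door checks. The sketch below (Python) assumes hypothetical record fields such as layers_from_entrance and cameras_covering; it is not a standard schema or an actual tool.

    # Minimal sketch: the two example baseline criteria as per-door checks.
    def meets_layered_access(door: dict) -> bool:
        # Criterion: at least three layers of access control on the pathway
        # from the building entrance to the critical asset area.
        return door.get("layers_from_entrance", 0) >= 3

    def meets_video_record(door: dict) -> bool:
        # Criterion: at least one camera covering the doorway.
        return door.get("cameras_covering", 0) >= 1

    door = {"name": "Server room east door",
            "layers_from_entrance": 2, "cameras_covering": 1}
    results = {
        "Layered Access Control": meets_layered_access(door),
        "Video Record of Access": meets_video_record(door),
    }
    print(door["name"], results)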

Along with descriptions like those above, the rationale for (the reason behind) each requirement should be stated.

I included the video record of access criterion because it is common for critical infrastructure deployments, and for conformance to CTPAT program site security plans. Video records also provide a means of determining to what extent tailgating occurs. Soon AI-enabled video systems will be able to identify tailgating situations and offenders. If there is no tailgating problem, anti-tailgating technology may not be needed. Stakeholders should also be led to understand that the technologies work together. That’s another evaluation criterion, but an advanced one that requires in-depth analysis.

If such security design documentation doesn’t exist, it can be created either as what is desired to have or “reverse-engineered” from what exists. Sometimes there is good thinking behind a deployment that’s simply not documented.

Do Check Technology Configurations

Door access control evaluation also includes the door itself and its electro-mechanical controls, as well as the door access control configuration. Does each door automatically close in an acceptably short time? If a door can only be partially closed, will a “door open” alarm be issued? Are the door-held-open times (before alarming) appropriate? What kind of notification is there for a door held open? Local audible only, or is there active notification to at least one security person? Can a screwdriver or other tool be used to improperly open the door (penetration testing)?
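
To illustrate, much of this configuration review amounts to comparing each door controller’s settings against policy limits. The following sketch (Python) uses assumed policy values and field names purely as an example; substitute your organization’s actual standards.

    # Illustrative sketch only: comparing a door's configured timers and
    # notification settings against assumed policy limits.
    POLICY = {
        "max_auto_close_seconds": 10,   # door should fully close within this time
        "max_held_open_seconds": 30,    # held-open alarm should trigger by this time
    }

    def audit_door_config(config: dict) -> list:
        findings = []
        if config.get("auto_close_seconds", 999) > POLICY["max_auto_close_seconds"]:
            findings.append("Door closes too slowly")
        if config.get("held_open_alarm_seconds", 999) > POLICY["max_held_open_seconds"]:
            findings.append("Held-open alarm delay exceeds policy")
        if not config.get("notifies_security_staff", False):
            findings.append("No active notification to a security person")
        return findings

    print(audit_door_config({"auto_close_seconds": 8,
                             "held_open_alarm_seconds": 60,
                             "notifies_security_staff": False}))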

Reporting Rating Results

Documenting the evaluation should include two parts:

  • Each evaluation criterion and its rating
  • What each rating means

The ratings show how well the deployed system conforms to the established criteria. Use a simple rating system such as percentage of compliant vs. non-compliant doors and/or percentage of compliant critical asset areas.

If tailgating is a concern, review enough video to accurately estimate how often it occurs and where, using two numbers per door: a count of tailgating events, and the percentage of access events in which tailgating occurs.
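
For example, the reporting calculations above amount to simple percentages. The short sketch below (Python) uses made-up counts purely for illustration.

    # Made-up counts for illustration of the two reporting calculations.
    compliant_doors, total_doors = 42, 50
    compliance_pct = 100 * compliant_doors / total_doors      # 84% compliant doors

    # Per-door tailgating metrics from video review
    access_events, tailgating_events = 1200, 18
    tailgating_rate_pct = 100 * tailgating_events / access_events  # 1.5% of access events

    print(f"Door compliance: {compliance_pct:.0f}%")
    print(f"Tailgating at this door: {tailgating_events} events, "
          f"{tailgating_rate_pct:.1f}% of access events")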

What About Rating Video Systems?

In the next column we’ll take up rating video surveillance systems, which is more complicated than rating door access control and deserves its own coverage.

About the author: Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). In 2018 IFSEC Global listed Ray as #12 in the world’s Top 30 Security Thought Leaders. He is the author of the Elsevier book Security Technology Convergence Insights available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security. Follow Ray on Twitter: @RayBernardRBCS.

© 2019 RBCS