Debunking vulnerability assessment myths: Part 1

Editor’s note: This is part one in a two-part series on clearing up misunderstandings people have regarding vulnerability assessments. Part two, which will examine who should conduct vulnerability assessments, will be published next week.   

The man on the other end of the phone was nothing if not enthusiastic.  “So,” he said, “you folks do vulnerability assessments?”  “Yes,” was the response.  “That’s great,” he said, “we have an outstanding new security product we would like you to test and certify.”  “Well, I’m afraid we don’t do that,” he was told.  “I guess I’m confused,” said the man.

As vulnerability assessors, we encounter this kind of confusion dozens of times a year.  The problem isn’t solely that security managers are confusing testing/certification (and other things) with vulnerability assessments; it’s that, by not understanding vulnerability assessments in the first place, they are probably not having them done, or at least not having them done well.  This isn’t conducive to good security.

The purpose of a vulnerability assessment is to improve security.  This is done in two ways: 1) by finding, and perhaps demonstrating, vulnerabilities (weaknesses) in a security device, system, or program that could be exploited by an adversary for nefarious purposes, and by possibly suggesting countermeasures; and 2) by providing one of the 10 or so major inputs to an overall modern risk management approach to security.
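To illustrate the second point, here is a toy sketch (not from the article) of how VA findings might feed into a risk management process. It assumes a deliberately crude risk model, likelihood times consequence, whereas real risk management draws on many more inputs (threat assessments, asset values, mitigation costs, and so on); the findings shown are hypothetical examples.

```python
# Toy sketch: VA findings as one input to risk management.
# Assumes a crude risk score = likelihood x consequence; real risk
# management uses many more inputs than a VA alone provides.

def rank_vulnerabilities(findings):
    """Sort VA findings by a crude risk score, highest risk first."""
    return sorted(findings,
                  key=lambda f: f["likelihood"] * f["consequence"],
                  reverse=True)

# Hypothetical findings from a VA (names and numbers are invented).
findings = [
    {"name": "unmonitored rear gate",   "likelihood": 0.7, "consequence": 8},
    {"name": "default camera password", "likelihood": 0.9, "consequence": 5},
    {"name": "guard fatigue at 3 p.m.", "likelihood": 0.8, "consequence": 9},
]

for f in rank_vulnerabilities(findings):
    print(f["name"])
```

The point of the sketch is only that the VA supplies the raw findings; deciding which risks are acceptable remains a separate, context-dependent judgment, as the article goes on to argue.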

Myth:  A vulnerability assessment is a test you pass.

You no more pass a vulnerability assessment (VA) than you “pass” marriage counseling.  Passing a VA certainly cannot mean there are no vulnerabilities, or even that all vulnerabilities have been mitigated.  Any given security device, system, or program has a very large number of vulnerabilities, most of which you will never know about.  Every time we examine a security device, system, or program a second or third time, we find new vulnerabilities that we missed the first time, as well as vulnerabilities that other assessors missed (and they, in turn, find vulnerabilities that we missed). A VA is never going to find all the vulnerabilities, but hopefully the assessors can, by thinking like the bad guys, find the most obvious, the most serious, and the most likely to be exploited.

What people sometimes mean when they say that they passed a vulnerability assessment is that they took the results of the VA as one of the inputs, and then made a subjective, context-dependent value judgment about whether their security is adequate for the specific security application of interest.  While it may be completely necessary and reasonable to make such a judgment, that decision belongs in the domain of the security manager, not the vulnerability assessor.

Myth:  The purpose of a VA is to accomplish one or more of these things:  Test performance;  do quality control;  justify the status quo;  apply a mindless stamp of approval;  engender warm and happy feelings;  praise or accuse somebody;  check against some standard;  generate metrics;  help out the guys in the marketing department;  impress auditors or higher ups;  claim there are no vulnerabilities;  endorse a product or security program;  rationalize the expenditures on research and development;  certify a security product as “good” or “ready for use”;  or characterize the ergonomics, ease of use, field readiness, or environmental durability. 

Certainly, some of these issues are very important and may have a bearing on security vulnerabilities, but they are not the focus or the purpose of a VA.

Myth:  A vulnerability assessment is the same thing as a threat assessment.

Threats are who might attack, why, when, how, and with what resources.  A threat assessment (TA) is an attempt to identify and characterize these threats.  Vulnerabilities are the specific weaknesses that these threats might exploit. 

Myth:  A TA is more important than a VA. 

Effective VAs and TAs are both essential for good security and for modern risk management.  A TA, however, entails speculations about groups and people who may or may not exist, their goals, motivations, and resources.  TAs are often reactive in nature, i.e., focused on past incidents and existing intelligence data.  Vulnerabilities, on the other hand, are right in front of you (if you will open your eyes and mind), and can often be demonstrated.  VAs are thus typically more proactive in nature. 

If anything, an effective VA may be more important than a TA.  If you get the threats exactly right, but have no clue as to your vulnerabilities, you are probably at significant risk.  If, on the other hand, you get the threats at least partially wrong (which is likely), but you have a good understanding of your vulnerabilities and have mitigated those you can, you may well have good security independent of the threats.

Myth:  These techniques are effective for finding vulnerabilities:  Security survey (walking around with a checklist), security audit (are the security rules being followed?), feature analysis, TA, design basis threat (DBT), fault or event tree analysis (from safety engineering), Delphi Method (getting a consensus decision from a panel of experts), and the CARVER method (DoD targeting algorithm). 

The truth is that many of these techniques—while very much worth doing—are not particularly effective at discovering new vulnerabilities.  The last four aren’t even about discovering vulnerabilities at all; they are tools to help decide how to field and deploy your security resources.  None of these make much sense for testing security, because the logic in using them that way is circular: they start from the threats and attack scenarios you have already assumed, which are precisely what a test of your security ought to question.

Myth:  Safety or safety-like analyses are good ways to find vulnerabilities.

Safety is a very different kind of problem because there is no malicious adversary attacking deliberately and intelligently at the weakest points.  Safety issues aren’t completely irrelevant for infrastructure security, but they are limited in their ability to predict many malicious attacks.

Myth:  These things are the vulnerabilities: The assets to be protected, possible attack scenarios, security delay paths, or security/facility features. 

These things are important in analyzing vulnerabilities and understanding your security, but they are not vulnerabilities in and of themselves.

Myth:  One-size-fits-all. 

Obviously, no single test or certification could have much meaning across a wide range of security applications.  The same thing is true for VAs.  Whenever possible, they should be done in the context of the actual security application and adversaries of interest.

Myth:  Past security incidents will tell you all you need to know about vulnerabilities. 

Looking only at the past is a good way to overlook the risk from rare but catastrophic attacks.  Moreover, the world is now rapidly changing, and what was once true may no longer be true.  Good security requires imagination, peering into the future, and seeing things from the adversary’s perspective.

Myth:  A software program or package will find your vulnerabilities. 

There is nothing wrong with using a software program as a VA starting point, as a checklist, and as a way to stimulate your thinking.  But with security, the devil is in the details.  No security program or package is going to understand your particular security application, facility, personnel, and adversaries in sufficient detail to adequately identify on-the-ground vulnerabilities.  A software app is unlikely, for example, to recognize that frontline security officer Bob falls asleep every day at 3 p.m.

Myth:  Vulnerabilities are bad news. 

Vulnerabilities are always present in large numbers, so finding one is actually good news: it means you can do something about it.  This concept is a tough sell to security managers, but it is the correct way to look at vulnerabilities and VAs.

Myth:  You can eliminate all your vulnerabilities.

The unfortunate fact is that some vulnerabilities can’t be fully eliminated; you just have to live with them (and that’s all right, as long as you are aware they exist).

Myth:  The ideal scenario is when a VA finds zero or just a few vulnerabilities.

The reality is that any such VA should be redone by assessors who are competent and/or willing to be honest with you.

Myth:  A VA should be done at the end, when the product is finished or the security program is ready to be fielded. 

VAs should be done early and iteratively.  If you wait until the end, it can be very difficult, expensive, and psychologically/organizationally challenging to make necessary changes.   In our experience, having intermittent VAs (even from the very earliest design stages) while a security product or program is being developed is a useful and cost-effective way to improve security.

About the Authors: Roger G. Johnston, Ph.D., CPP, is leader of the Vulnerability Assessment Team at Argonne National Laboratory.  He was founder and head of the Vulnerability Assessment Team at Los Alamos National Laboratory from 1992 to 2007.  Roger graduated from Carleton College (1977), and received M.S. and Ph.D. degrees in physics from the University of Colorado (1983).  He has authored over 170 technical papers and 90 invited talks (including six keynote addresses), holds 10 U.S. patents, and serves as editor of the Journal of Physical Security. 

Jon S. Warner, Ph.D., is a systems engineer with the Vulnerability Assessment Team at Argonne National Laboratory.  From 2002 to 2007, he served as a Technical Staff Member with the Vulnerability Assessment Team at Los Alamos National Laboratory.  His research interests include vulnerability assessments, microprocessor and wireless applications, nuclear safeguards, and developing novel security devices.  Warner received B.S. degrees in Physics and Business Management from Southern Oregon University (1994), and M.S. and Ph.D. degrees in physics from Portland State University (1998 & 2002).