An overarching objective of disaster exercises is to shed light on how well participants and other resources will perform in the heat of a live disaster. While this "predictive" function is clearly bound by the degree of realism in the exercise itself, the extent to which one can assess how prepared we truly are is largely determined by what, how and when we measure performance during the exercise. Performance measures therefore play a vital role in closing "proficiency gaps" and determining when we have in fact reached the desired levels of preparedness.
In the author's experience, performance measures are all too often dismissed as unnecessary or given low priority in relation to the overall success of an exercise. Yet, if well crafted, a performance measurement program can be a relatively small piece of the effort and cost that goes into a disaster exercise. A first and essential step in bringing performance measurements to the forefront of disaster exercise management is having guidelines and techniques that can make the design, recording and reporting of performance measurements a straightforward and streamlined task. To this end, this first in a series of articles offers some guidelines and a framework for developing a practical exercise scorecard. In subsequent articles, related topics such as recording technologies, reporting tools and after-action analysis techniques will be covered.
Measuring for Success
What makes a good exercise performance measurement process? At a minimum, the exercise performance measurements should serve two ends:
1. Answer the question "Did the exercise achieve the desired performance levels?", and
2. Provide the exercise participants with a clear understanding of how well they performed and what they need to improve upon.
Any exercise lacking in either of these top-level objectives is unlikely to be deemed a success.
To get one's arms around the seemingly large task of planning and carrying out an exercise measurement process, a number of carefully orchestrated steps are needed.
Alignment with Performance Objectives
As a starting point, the performance objectives of the exercise must be clearly defined. For instance, are we setting out to test the speed with which our internal security group can detect a breach of the grounds perimeter, evacuate a building, etc.? Or are we out to verify that our company's executives will notify government agencies properly and at the right moment in the event of a never-before-seen disaster? With objectives in hand, the exercise planner can go about defining measurements that will ultimately tell us whether or not these performance objectives were met.
Along the way, the exercise type must be taken into account. For instance, a discussion-based exercise might not be concerned with response times so much as it would be with making the correct decisions and commands. A scorecard for the exercise might therefore consist of measures that focus on the correctness of decisions and compliance with protocol rather than how quickly decisions were carried out. In contrast, field exercises involving search and rescue or bomb defusal would surely call for specific time-oriented performance measures.
The disaster type may also dictate what measures should be included in the exercise scorecard. An exercise involving the exposure of the company's corporate headquarters building to an airborne biohazard would probably not be concerned with how effectively the company's web site can fend off a denial of service attack.
Coverage and Balance
Given the uniqueness of each exercise, there's no "cast in stone" list of measures for disaster exercises. However, there is good reason to have a set of measures which together cover four essential categories: Timeliness, Effectiveness, Efficiency and Learning. One might ask why all four categories are advised, especially when only timeliness and effectiveness appear directly related to preparedness. The answer rests on the presumption that all stakeholders have a desire to know (1) how swiftly the exercise participants will act, (2) how well they will perform their duties, (3) whether or not the same result could be achieved with fewer resources, and (4) how to become better prepared for a live disaster. The following explains the role of each category in an exercise measurement scorecard.
Timeliness -- Perhaps the most frequently cited and arguably most important exercise performance measurement category is timeliness. Timeliness measures that might come to mind first are those relating to the management of the exercise, i.e., the times that key events, tasks and decisions start and end.
However, the scope of timeliness measurements extends well beyond scheduling and project timelines. In particular, of greater importance to evaluating preparedness are measures of how much time was taken for specific "mission critical" tasks and decisions. For example,
- From the time of first biohazard detection, how quickly did the security team seal off the building's ventilation system?
- How many minutes into the disaster did the CEO decide to send people home versus shelter them?
- For how long were the denial of service attacks allowed to persist before the internal network was restricted from outside connections?
- At what point in the disaster exercise did the CIO become fully apprised of the situation?
Operational measures like these and others are needed to rate how well prepared the exercise team is to perform in a live disaster. Merely completing an exercise that ran according to schedule is not enough. Having precise measures of time intervals for mission critical tasks and decisions is essential to knowing whether or not the disaster response capabilities of the organization are ready to avoid business operations meltdown, property loss, impairment of critical infrastructure and casualties.
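Interval measures like these can be derived directly from a time-stamped event log kept during the exercise. The sketch below is a minimal illustration of that idea; the event names, timestamps and helper function are hypothetical, invented for this example rather than drawn from any established exercise toolkit.

```python
from datetime import datetime

# Hypothetical exercise event log: event name -> timestamp, as an
# observer or controller might record it during the exercise.
event_log = {
    "biohazard_detected":   datetime(2005, 6, 1, 9, 0),
    "ventilation_sealed":   datetime(2005, 6, 1, 9, 12),
    "ceo_evacuation_order": datetime(2005, 6, 1, 9, 47),
}

def elapsed_minutes(log, start_event, end_event):
    """Minutes elapsed between two logged mission-critical events."""
    delta = log[end_event] - log[start_event]
    return delta.total_seconds() / 60

# From first detection, how quickly was the ventilation system sealed?
print(elapsed_minutes(event_log, "biohazard_detected", "ventilation_sealed"))  # 12.0
```

Capturing raw timestamps once and deriving intervals afterward lets the after-action team compute any pairing of events without burdening observers during the exercise.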
Effectiveness -- While speed of execution and punctuality are often key measures, how well mission-critical tasks and decisions are carried out may be of equal or greater importance. Such measures of effectiveness can address procedural compliance (e.g., Were evacuation procedures followed properly?) as well as the results of activities performed or decisions made during the course of the exercise (e.g., How many gallons of oil spilled over the containment barrier?, How many patients were diagnosed and inoculated in the first four hours?).
Human performance is not the only consideration, however. Other assets such as equipment, information systems, and protocols may also be appropriately placed under the effectiveness lens. For instance,
- What percentage of the breathing apparatus failed during the exercise?
- For how many members of the forward command team were protective suits made available before the team first entered the hot zone?
- Of the 17 critical steps, how many were successfully carried out by participant #21?
- Were the protocols sufficiently clear to be executed without hesitation or confusion?
Efficiency -- Efficiency measures direct our attention to how much was accomplished in relation to the number of resources utilized. Historically, efficiency has not been a top concern with respect to preparedness. However, in these times of shrinking budgets and an expanding set of threats, doing more with less has become an imperative. Measuring efficiency in exercises is a key first step in learning how to better utilize precious resources. Examples of efficiency measures are:
- How many security personnel were dispatched to the point of perimeter breach?
- How many analyst work-hours were required to arrive at a correct interpretation of the intelligence?
- On average, how many victims passed through each of the four decontamination stations per hour?
When critiquing exercise performance it is not uncommon to find a strong interdependency between measures of timeliness, effectiveness and efficiency. For example, a decontamination station will not likely be deemed timely or efficient if its people and equipment are highly prone to making errors. Having side-by-side measures of timeliness, efficiency and effectiveness sheds far more light on the root cause of any deficiencies than if only one or two measurements are used. In the decontamination station example, a prudent first step would be to lower the error rate through training and/or use of more accurate equipment. Simply adding more decontamination station resources could be a far more costly solution.
Having such measures will allow exercise directors and training instructors to set targets for improvement and build practical plans for achieving them over time. Diligence in this regard will surely pay off as and when a large-scale disaster reaches far beyond the capacity of available resources.
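To make the interdependency concrete, the following sketch computes side-by-side timeliness, effectiveness and efficiency figures for a single decontamination station. All counts here are invented for illustration; the point is only that the three categories are derived from the same few raw observations.

```python
# Hypothetical raw observations from one decontamination station
# during a single two-hour exercise run.
victims_processed = 48    # total victims passed through the station
errors = 6                # procedural errors observed
staff = 4                 # personnel assigned to the station
duration_hours = 2.0      # length of the exercise run

throughput_per_hour = victims_processed / duration_hours               # timeliness
error_rate = errors / victims_processed                                # effectiveness
victims_per_staff_hour = victims_processed / (staff * duration_hours)  # efficiency

print(f"{throughput_per_hour:.1f} victims/hr, "
      f"{error_rate:.1%} error rate, "
      f"{victims_per_staff_hour:.1f} victims per staff-hour")
```

Seen together, a healthy throughput paired with a high error rate points to training or equipment as the root cause, not staffing levels.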
Learning -- Foundational steps for effective learning are to perform fact-based after-action analyses and disseminate results to participants promptly. Yet measuring how well the exercise participants learned from their experience can be far more subjective in nature than the other three measurement categories discussed above.
Numerous techniques can be employed. Administering "participant satisfaction" surveys following the exercise can be an effective tool in measuring the perceived level of learning. In some cases, more precision may be achieved by comparing results of proficiency tests taken before and after the exercise.
When the exercise can be repeated over time with the same participants, comparisons of performance for each successive exercise can show the degree of progress. For example, in the first exercise only 64 percent of participants fully complied with the established protocol, yet by the end of the third exercise, the same group of participants demonstrated a 97 percent compliance rate.
If performance from exercise to exercise remains stagnant, or worse, declines, then something may be drastically wrong with the process by which participants are given feedback on their performance and advised on steps to improve. Exercise directors and trainers should pay close attention to such indicators and adjust or reinforce their exercise practices accordingly.
Design with the End in Mind
Upon taking a closer look at the measurement process, the number of possible performance measurement data points can become unwieldy. For example, with each measure that applies to human performance, the exercise planner will have to address questions including:
1. Will the measurement be recorded for specific individuals, teams, functional departments, or by some other grouping of resources?
2. Will the measure be recorded separately for a series of distinct tasks or decisions?
3. In the case where measurements are taken by a human observer, will the observer's location be recorded with each measured value?
Clearly, the number of data points needed to define a single measurement can grow multiplicatively, and rather quickly.
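As a rough illustration of this growth, the sketch below multiplies out three hypothetical recording dimensions for one measure. The counts are invented; the arithmetic is the point.

```python
# Hypothetical recording dimensions for a single performance measure.
groupings = 12   # individuals or teams the measure is recorded for
tasks = 17       # distinct tasks or decisions measured separately
observers = 3    # observer positions whose readings are logged

# One measure recorded across every combination of the dimensions:
data_points = groupings * tasks * observers
print(data_points)  # 612
```

At 612 data points for one measure, a scorecard of even a dozen measures quickly outstrips what a small observer team can record by hand.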
While there's no set rule for important planning decisions like those called out above, the exercise planner should let the desired end result guide decisions in this matter. Knowing the type of after-action feedback he or she wishes the participants to be given can serve as a valuable guide. Yet exercise planners may find themselves without enough resources to record all of the desired measurements. In the end, prudence and innovative thinking will likely be the key to striking the right balance of measurements.
Incorporating a sound exercise performance measurement process can make the difference between a productive learning experience and a costly project with no material benefit. Indeed, an upfront commitment must be made and followed through to the after-action analysis and debriefing. But if a practical, balanced measurement process is implemented, the benefits of knowing precisely how your organization can become better prepared and do more with less will surely outweigh the effort and cost of such a process. So placing exercise performance measurement high on the priority list is imperative to delivering the payback that the exercise participants and your organization's stakeholders surely deserve.
Having guidelines and a framework for covering all bases of an exercise scorecard starts with the material discussed above. But there should be no mistake that there's more to implementing the exercise measurement process than is covered herein. Things like techniques and information technologies that are used to efficiently record performance measurements, systems that can automate the analysis and reporting of results, and an approach for setting practical targets for performance improvements are just a few considerations. Stay tuned for future articles in which these and other important topics will be addressed.
About the author: William Comtois is managing director of Varicom, Inc., a consultancy and software company specializing in homeland defense and service logistics. He has over twenty years of experience in applying leading technologies and innovative process management practices to business and defense solutions. Over the past fourteen years, his work has focused on large service companies where he has led numerous performance improvement, training and process management initiatives that have resulted in major breakthroughs in financial performance, service levels and disaster preparedness. He can be reached by phone at (212) 561-5782 or via email at firstname.lastname@example.org.
(c) 2005 Varicom Inc., All rights reserved.