Only human: writing code to match behavior

Dec. 3, 2015
Instincts and characteristics play a key role in information security

Security embodies a basic contradiction in human behavior. It’s important, yet decidedly not fun. It has to be highly visible and top of mind, yet also unobtrusive. It’s a painful chore, but undeniably necessary. It’s an act of adult responsibility where, after all that effort, the best outcome is that nothing happens. It’s like eating vegetables: not what we want, but what we need.

Let’s face it: no one wants to have to buy security in the first place, let alone do the work needed to make it effective. We might install the software—or acknowledge that it came pre-installed on the computer—but then tuck it away and ignore warnings that deserve attention. And that’s the real tragedy: even the most user-friendly technology offering solid defenses against a range of threats can fail, simply because the users aren’t playing their role.

It’s no wonder, then, that so many capable products don’t actually do the job, and so many otherwise sophisticated users leave vulnerabilities unpatched and let exploits through. Developing software that’s visible when it needs to be, that works silently and unobtrusively in the background the rest of the time, and that leaves users glad they made the effort is vital.

So what’s the best way to get past this mental block? How can security software be made, if not easy, at least easier?

It’s not as if users today aren’t engaged with technology. They post daily on Twitter or Facebook and make connections on LinkedIn. They play games and download apps simply because they can. But when it comes to security, that’s another story entirely: user engagement all but disappears. This is a critical issue, yet one that often gets lost on software developers.

This is why a true sense of engagement is integral to the process. Sure, the level of security the software provides has to be the top priority, but usability is equally important. If security programs don’t match the ease, convenience, and even enjoyment of other applications, they lose the battle for the user’s attention and interest.

But why is this such a challenge? When it comes to usability, it may seem like a mistake to draw parallels between security software and video games; about the only characteristic they share is that they run on a computer. It’s like trying to model an oven’s controls on a TV sitcom just because people find sitcoms entertaining.

Actually, security software needs to do a better job of modeling the user’s understanding of what it means to be secure. Too often, consumer security software drags the user behind the curtain into technical complexity and jargon. This is why, to extend the metaphor, oven controls display temperature, not voltage. Security software designers can take the same approach.

Think simplicity: with some products, a wealth of features and capabilities can be appealing, but in security—thanks in part to the contradictory nature of human priorities—too many options (and even more colors) can discourage activation and interaction altogether. On a related note, the footprint must be small and the presence unobtrusive. (Think how annoying and confusing even necessary pop-ups can be.) However, the software can’t exactly be invisible either. Striking the right balance is critical.

Consumer security software vendors must accept responsibility for the usability of their products. Furthermore, they must understand the very real connection between product usability and product effectiveness. If users must play a role in the consumer security process, then vendors must come to see broken usability as a genuine security threat.

The hard conundrum for vendors is that usability and simplicity are deeply connected. Adding more features (a.k.a. reasons to charge more for the product) often undermines usability—and therefore security. This tension seems unique to security software, particularly since the “value” of security software isn’t measured in what you gain, but in what you don’t get (yet another conundrum).

Finally, keep in mind that the people on the other side of the fence are preying on human characteristics too. That’s why, on a different security front, social engineering tactics work: they lull users into a false sense of familiarity. On the flip side, the good guys can benefit from monitoring the bad guys’ behavior, writing code to address what attackers are doing now, what they’ve done in the past, and, of course, what they’re likely to do next.

This is the approach good security technology can and should take too. It can account for which forms of human behavior might lead to vulnerabilities, which habits don’t match up with a given role in the company, which features might appeal most to a particular demographic, and so on.

When it comes to security, remember that we’re only human. Technology development should treat that as a strength as much as a weakness.

About the Author:

Mark Patton is SVP of Engineering at Malwarebytes. Since 2014, Mark has led the engineering teams responsible for development, testing and delivery at Malwarebytes as well as the technology that supports its research teams.