Self-defense: Training your network to defend itself

Jan. 9, 2017
Software that evolves after deployment to match wits with hackers and mitigate new threats is no longer science fiction

Attacks on companies and organizations of all sizes are increasing, as are the sophistication and success rates of those attacks. Regrettably, this comes at a time when the best frontline defense for most organizations, the top-tier analysts who can instinctively react to threat indicators to protect their networks, is in critically short supply. Companies need help, and they may no longer have the luxury of relying solely on adding new humans to stem the tide. Even if a junior analyst can be found or lured away from another job, it might take six months to a year before they become reasonably proficient at defending a new network.

But what if technology could be trained to help defend itself? That may not be such a far-fetched idea anymore. Since they were first invented, computers have processed information solely on the basis of pre-programmed triggers. Programs do what they were designed to do when first created, and nothing more. They stick to their programmed behaviors even when it would be obvious to any thinking brain that a deviation from the norm is needed.

The idea that programs can’t evolve after deployment is being upended by the growing field of artificial intelligence. If human cybersecurity experts could teach computers the tricks of the trade, as well as the nuances and patterns of a specific network, it would be like adding a whole new set of analysts to the team. The process of training a computer to help defend itself and its network is different from creating a true artificial intelligence, so there is no worry about the program acting outside of its training. But within those parameters, it could become proficient enough to mimic human behavior without all the human weaknesses, like getting tired, needing to be fed or wanting a big raise. In terms of a security hierarchy, such a program could be deployed to think like a human analyst.

Designing software that can reason like an analyst requires a lot of thinking about the process of thinking. A crash course in neurology teaches that the part of the brain called the neocortex processes input from the human sensory systems to form an impression of the world. Computers and software don’t have sight, hearing, smell, taste or touch, but within a security operations center there can be hundreds of data feeds streaming in from logs, security appliances and threat intelligence sources. Just as the human brain processes its five sensory inputs, this new breed of software processes its data feeds and forms its own interpretation of the world.
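To make the sensory analogy concrete, here is a minimal sketch, in Python, of how such software might funnel heterogeneous feeds into one normalized event stream it can reason over. The feed schemas, field names and severity values below are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """A normalized observation, regardless of which 'sense' produced it."""
    source: str         # which feed the event came from
    timestamp: datetime
    entity: str         # host, user or IP the event concerns
    signal: str         # what was observed
    severity: float     # normalized 0.0-1.0

def normalize_firewall(line: dict) -> Event:
    # Hypothetical firewall log schema
    return Event(
        source="firewall",
        timestamp=datetime.fromtimestamp(line["ts"], tz=timezone.utc),
        entity=line["src_ip"],
        signal=f"blocked connection to {line['dst_ip']}:{line['dst_port']}",
        severity=0.3,
    )

def normalize_threat_feed(indicator: dict) -> Event:
    # Hypothetical threat-intelligence indicator schema
    return Event(
        source="threat_feed",
        timestamp=datetime.now(tz=timezone.utc),
        entity=indicator["ip"],
        signal=f"known-bad indicator: {indicator['campaign']}",
        severity=0.8,
    )

# Hundreds of feeds reduce to a single stream for the reasoning layer,
# just as the neocortex consumes a handful of sensory channels.
events = [
    normalize_firewall({"ts": 1483920000, "src_ip": "10.0.0.5",
                        "dst_ip": "203.0.113.9", "dst_port": 445}),
    normalize_threat_feed({"ip": "203.0.113.9", "campaign": "ExampleBotnet"}),
]
for e in sorted(events, key=lambda e: e.timestamp):
    print(e.source, e.entity, e.signal)
```

However many feeds exist, the reasoning layer only ever sees normalized events, just as the neocortex only ever sees neural signals.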

Researchers are just starting to understand how to mimic the neocortex’s processing power to allow software to think and reason like a human, but they have already achieved some impressive results.

Look at what Google’s AlphaGo program achieved using this new science, beating a top human professional at the ancient Chinese game of Go for the first time. Unlike chess engines, which can rely on brute-force computation, AlphaGo needed to think like a human to beat a human.

Go is played on a 19-by-19 grid, with two players alternately placing black and white stones. Players capture their opponent’s stones by surrounding them. The number of possible games of Go is practically unlimited, far exceeding the number of atoms in the known universe, so a complete strategy can’t be pre-programmed. Even during a game, it’s almost impossible to tell who is winning until the very end, and many of the top masters rely on instinct as opposed to any pre-game strategy. To beat Lee Se-dol, one of the strongest players in the world, the program needed to rely on superior pattern recognition and on-the-fly strategic decisions grounded in its training, just as a human player with years of experience would. Interviewed after the event, Lee Se-dol, who lost the five-game series four games to one, said that the gameplay of his opponent was indistinguishable from a human player’s, although one with a skill level he had never encountered.
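The scale of that search space can be checked with back-of-the-envelope arithmetic. Using the rough estimates commonly cited in the AlphaGo literature, about 250 legal moves per position over a game of about 150 moves, a few lines of Python show why brute force is hopeless:

```python
import math

# Commonly cited rough estimates for Go (used, e.g., in the AlphaGo literature):
branching_factor = 250   # legal moves available in a typical position
game_length = 150        # moves in a typical professional game

game_tree_size = branching_factor ** game_length  # possible games, roughly
atoms_in_universe = 10 ** 80                      # standard order-of-magnitude estimate

print(f"Go game tree: about 10^{math.log10(game_tree_size):.0f}")
print(f"Times larger than the atom count: "
      f"about 10^{math.log10(game_tree_size // atoms_in_universe):.0f}")
```

The result is on the order of 10^360 possible games, so even enumerating positions at nanosecond speed on every computer on Earth would not scratch the surface; learned pattern recognition is the only way through.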

That type of simulated human thinking, combined with machine processing power, is sorely needed to help defend today’s networks. The question now becomes whether we can effectively capture the experience and knowledge of a top cybersecurity analyst, allowing a program to think like a human but with all the speed advantages of a machine. The answer is yes, and we’re well on the way to perfecting that very capability.

This new science, in which humans train an artificial intelligence program to think like a human while it handles the overwhelming volume of data and alerts flooding into every security operations center, can make up for the worldwide shortfall of experienced analysts. A program with those skills can vastly reduce the wild goose chases of false positives by responding to them instantly, as it was trained, without tiring or failing. That frees up human analysts to deal with the very small percentage of incidents that are of genuine, high concern to the enterprise.
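As an illustration of that division of labor, the sketch below shows a triage loop in which a trained model closes out confident false positives and escalates only the residue to humans. The thresholds, rule names and scoring function are hypothetical, invented for this example, and are not any vendor’s actual method:

```python
def triage(alert: dict, false_positive_score) -> str:
    """Route an alert based on a trained model's confidence.

    false_positive_score: callable returning the model's estimated
    probability (0.0-1.0) that the alert is a false positive.
    The 0.95/0.05 thresholds are illustrative, not prescriptive.
    """
    p_fp = false_positive_score(alert)
    if p_fp >= 0.95:
        return "auto-close"         # machine handles it instantly, as trained
    if p_fp <= 0.05:
        return "escalate-to-human"  # the small fraction of genuine concern
    return "enrich-and-requeue"     # gather more context, then re-score

# Stand-in scoring function; a real one would be trained on the
# analysts' historical alert dispositions.
def toy_score(alert):
    return 0.99 if alert["rule"] == "noisy-port-scan" else 0.02

for alert in [{"rule": "noisy-port-scan"}, {"rule": "credential-theft"}]:
    print(alert["rule"], "->", triage(alert, toy_score))
```

The point of the middle branch is that the machine, like a junior analyst, should know when it doesn’t know, and go gather more evidence rather than guess.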

Fast, computational security software is finally realizing its true potential. By crossing the last barrier and adding human-like reasoning to an already powerful toolset, it is shifting the balance of power away from attackers and back to beleaguered network defenders. When software can finally be trained to think like an analyst, the good guys can’t lose.

About the Author: Ryan Hohimer is Co-Founder and CTO of DarkLight. Hohimer received a Bachelor of Science in Electrical Engineering (BSEE) from Washington State University in 1995 and went straight to work at the US Department of Energy’s (DOE) Pacific Northwest National Laboratory (PNNL), doing data collection and analysis in the energy and national security domains. That put him into “Big Data” before “Big Data” was cool, and the challenges of managing massive data sets sparked his interest in metadata. The Semantic Web technologies (SWT) that emerged from metadata representation became a central component of Ryan’s Knowledge Representation and Reasoning (KR&R) acumen.

Ryan honed his KR&R skills through nearly 19 years of research and project management at PNNL. He is the lead inventor of the DarkLight technology, and served as Principal Investigator and Project Manager overseeing development of its first prototypes, which addressed cyber-behaviors in the insider threat and cybersecurity domains.