There's no doubt that the age of online information has created new national security threats, which have made it a priority for enterprises and governments to ensure the security of their networks and IT infrastructure. The use of anthropological techniques offers an alternative perspective for researchers who aim to develop intuitive, multi-tiered security for cyberspace.
A focal point of this research is what's called 'tacit knowledge': implicit knowledge, as opposed to the formal knowledge that characterizes the duties and tasks performed by security institutions. One way to illustrate this kind of knowledge is the content of folk songs. We are all familiar with the sayings and implications of folk-song lyrics in our own culture, yet those of other cultures remain foreign to outsiders. You have to live it to understand it.
A primary concern for security analysts is that open-source and commercially developed tools lack an intrinsic understanding of security analysis, leaving many analysts with inadequate tooling. The result is labor-intensive resolution of questions such as exactly what data was compromised and how an attacker penetrated the system.
Where human technical tasks may take several minutes, an algorithm may need only a few seconds, freeing intelligent minds to concentrate on problems of greater complexity. Network attacks are frequently automated by software scripts, often run by bots. Network defenses are at a disadvantage when the analysts on the other side lack streamlined processes to repel such attacks or minimize the damage they cause. With more sophisticated tool support, researchers hope to automate standard tasks traditionally performed by human beings.
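As a rough sketch of the kind of routine task that lends itself to automation, consider scanning an authentication log for repeated failed logins, a common signature of automated attacks. The log format and threshold below are illustrative assumptions, not any particular product's output:

```python
from collections import Counter
import re

# Hypothetical log lines; real formats vary by system.
LOG_LINES = [
    "FAILED LOGIN from 10.0.0.5",
    "FAILED LOGIN from 10.0.0.5",
    "OK LOGIN from 192.168.1.2",
    "FAILED LOGIN from 10.0.0.5",
    "FAILED LOGIN from 172.16.0.9",
]

def flag_suspect_ips(lines, threshold=3):
    """Count failed logins per source IP and flag any at or above threshold."""
    pattern = re.compile(r"FAILED LOGIN from (\S+)")
    counts = Counter(m.group(1) for line in lines if (m := pattern.search(line)))
    return [ip for ip, n in counts.items() if n >= threshold]

print(flag_suspect_ips(LOG_LINES))  # → ['10.0.0.5']
```

A human could do this by eye for a dozen lines; the point is that the same check runs identically over millions of lines in seconds.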
Eradicating human error may improve the nation's defense apparatus
Because a good number of cyber-attacks are automated, automation is central to cyber security. Professional analysts, however, usually require considerable time to locate a system that has been breached by a virus or malware. Add the element of human error, and valuable time is lost in finding and deploying a resolution, leaving even more data at risk. Humans are prone to mistakes. Algorithms, while not entirely mistake-free, can include quality-control checks that a human analyst might skip because of fatigue or other reasons.
As a result, identifying a breach becomes a numbers game, and an automated attack has inherent advantages, not least time on its side. If some defense mechanisms are automated, including matrix processing, error correction, and other statistical techniques, faster problem resolution and a sharp reduction in human error can be achieved.
Central standards and algorithms can trigger mechanisms to combat threat scenarios
Algorithms translate processes performed by humans into instructions that computer systems can execute at a sophisticated level. At its core, an algorithm can 'understand' a problem and, based on the available data and instructions, produce a desirable result. Well-designed algorithms get at the heart of a process and produce computational output for a very large number of scenarios. Defense mechanisms must gather and process large amounts of data cast from a wide net, which is difficult for humans alone. With the aid of automation built on a framework, an appropriate response can be deployed more quickly. Macro components such as communication and power are constantly under attack, and while system redundancy is one means of protection, threats are advancing in complexity and damage potential.
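The idea of a framework that maps detected threats to automated responses can be sketched as a simple rule table. Every name below (event types, action names) is an illustrative assumption, not a real product's API; the point is only that a matching event triggers a canned response immediately, with anything unrecognized escalated to a human:

```python
# Illustrative rule table: detected event type → automated action.
RESPONSE_RULES = {
    "port_scan":      "block_source_ip",
    "malware_beacon": "quarantine_host",
    "brute_force":    "lock_account",
}

def respond(event_type, source):
    """Look up the response for an event type; unknown events go to a human."""
    action = RESPONSE_RULES.get(event_type, "escalate_to_analyst")
    return f"{action}({source})"

print(respond("port_scan", "10.0.0.5"))    # → block_source_ip(10.0.0.5)
print(respond("zero_day", "203.0.113.7"))  # → escalate_to_analyst(203.0.113.7)
```

Keeping a human fallback for unmatched events is the design choice that makes such automation safe: the framework handles the routine cases at machine speed and reserves analyst time for the genuinely novel ones.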
Algorithms could predict attacks and improve situational awareness