Beware the human aspect of social engineering attacks

April 6, 2018
The motivations for social engineering range from espionage and competitive research to electronic crimes and data theft

Most observers associate Social Engineering Attacks (SEAs) with deceptive phishing emails, but the rise of social media and messaging apps has allowed scammers to move beyond conventional email infection vectors.

The motivations for social engineering range from espionage and competitive research to electronic crime and data theft. Unlike malware and exploits, however, SEAs represent a much broader and less predictable category of attack. Con artists perpetrate these schemes by gaining the confidence of unsuspecting victims. No fixed set of rules applies to SEAs, so they cannot be identified by a simple signature or a static set of if-then-else sandbox rules.

Because no file exists for a signature- or sandbox-based detection system to examine, these attacks simply bypass existing defenses. Hackers spread social engineering attacks through a variety of digital communication media, but in nearly every case they deliver the final payload via the web. Protecting against these attacks therefore means stopping them at that common web delivery point, which requires a system designed from the ground up to detect and block web-based social engineering attacks.

In examining breach after breach, we find that the human techniques used to discover and understand SEAs differ fundamentally from the rules built into most cybersecurity systems. Humans can think and adapt as circumstances change, while most security solutions are hard-coded to perform the same functions over and over. An ever-changing threat landscape demands defense systems that adapt with it and can take on all internet-based threats, not just malware and exploits.

Unlike traditional machine learning algorithms, which must be constantly retrained to detect new types of attacks, a progressive-learning artificial intelligence derives its feature sets from dynamically curated dictionaries. In this way, the system can pinpoint zero-day and polymorphic cyberthreats before the user is tricked into acting.
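
As a rough illustration of that idea (not SlashNext's actual implementation), a hypothetical detector might derive its features from dictionaries of lure phrases and spoofed brands that researchers keep updated, then score each message or page against them. Everything in the sketch below, from the dictionary contents to the weights, is assumed for the example:

```python
# Hypothetical sketch: features derived from curated dictionaries, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class CuratedDictionaries:
    """Dictionaries a research pipeline might update as new lures are observed."""
    lure_phrases: set = field(default_factory=lambda: {
        "you have won", "claim your prize", "verify your account", "spin the wheel",
    })
    spoofed_brands: set = field(default_factory=lambda: {"facebook", "paypal", "office 365"})

def extract_features(text: str, dicts: CuratedDictionaries) -> dict:
    """Derive features from whatever the dictionaries currently contain."""
    lowered = text.lower()
    return {
        "lure_hits": sum(phrase in lowered for phrase in dicts.lure_phrases),
        "brand_hits": sum(brand in lowered for brand in dicts.spoofed_brands),
        "asks_for_login": int("password" in lowered or "log in" in lowered),
    }

def suspicion_score(features: dict) -> float:
    """Toy linear score; a real system would learn these weights."""
    weights = {"lure_hits": 0.4, "brand_hits": 0.3, "asks_for_login": 0.3}
    return sum(weights[name] * value for name, value in features.items())

page_text = "Spin the wheel to claim your prize! Log in with your Facebook password."
print(suspicion_score(extract_features(page_text, CuratedDictionaries())))  # higher = more suspicious
```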

In one prevalent scam, a web ad invites users to spin a colorful wheel for a chance at one of several prizes. Hey, what a zany distraction, and you can even win a fun prize – what’s not to like?

Prompts then ask users to log in to Facebook to claim their prize, but of course the Facebook login page is a convincing fake controlled by the attacker. Once the scammer has captured the user’s Facebook credentials, they can be reused to open new doors for subsequent attacks on other sites the same user accesses, such as banks and lenders.

To trap and block these kinds of attacks, a security system must recognize the underlying context of the message. For instance, it should identify the source of the malicious advertisement by examining the server behind the supposed Facebook prize page, which would reveal it as a phony.
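
As a hedged sketch of that single check, the snippet below flags a page that shows Facebook branding and a password prompt but is not served from a Facebook-controlled domain. The allow-list and URLs are illustrative only, not an exhaustive defense:

```python
from urllib.parse import urlparse

# Simplified allow-list; a real system would consult a maintained list of
# domains legitimately operated by the brand.
LEGITIMATE_FACEBOOK_DOMAINS = ("facebook.com", "fb.com")

def claims_facebook_but_is_not(page_url: str, page_html: str) -> bool:
    """Flag pages that imitate a Facebook login yet are served from elsewhere."""
    host = (urlparse(page_url).hostname or "").lower()
    served_by_facebook = any(
        host == domain or host.endswith("." + domain)
        for domain in LEGITIMATE_FACEBOOK_DOMAINS
    )
    looks_like_facebook_login = (
        "facebook" in page_html.lower() and "password" in page_html.lower()
    )
    return looks_like_facebook_login and not served_by_facebook

# The fake prize page shows Facebook branding but lives on an unrelated host.
print(claims_facebook_but_is_not(
    "http://spin-to-win.example/fb/login",
    "<h1>Log in to Facebook</h1><input type='password'>",
))  # True
```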

Again, the protection system must understand the underlying context of the message to trap and block these kinds of attacks. Working backward from the final attack screen, hundreds of clues can definitively identify a social engineering attack as malicious before it ever reaches a user. To succeed, though, security systems must move beyond existing sandbox and signature strategies to newer technologies that replicate the thought processes of human malware researchers and cybersecurity experts.
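
To give a flavor of how such clues might be combined, the sketch below works backward from a landing page and counts independent warning signs. The specific clues, field names, and example values are assumptions for illustration, not an actual product rule set:

```python
def clue_score(page: dict) -> int:
    """Count how many independent clues point to a social engineering attack."""
    clues = [
        page["brand_in_page"] != page["serving_brand"],        # brand/domain mismatch
        page["form_post_host"] != page["serving_host"],        # credentials posted elsewhere
        page["domain_age_days"] < 30,                          # freshly registered domain
        page["reached_via_ad_redirect_chain"],                 # redirect chain from an ad
    ]
    return sum(clues)

suspect_page = {
    "brand_in_page": "facebook",
    "serving_brand": "spin-to-win",
    "form_post_host": "collect.spin-to-win.example",
    "serving_host": "spin-to-win.example",
    "domain_age_days": 12,
    "reached_via_ad_redirect_chain": True,
}
print(clue_score(suspect_page))  # 4 of 4 clues fire, so block before the user ever sees it
```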

Most people tend to trust by default unless some peculiar clue triggers their suspicion, and SEA attackers prey upon this basic human willingness to trust others. For adequate protection, SEA security systems should imitate that humanlike mentality: put a contextual frame around every inbound message, whether it arrives through email, a social media post, an instant message, a webpage, or any other emergent infection vector. To defeat a hacker, your security system must first think like a hacker.

About the Author:

Atif Mushtaq, Founder & CEO of SlashNext, has spent most of his career on the front lines of the war against cybercrime. Before founding SlashNext, he spent nine years as a senior scientist at FireEye, where he was one of the main architects of FireEye’s core malware detection system. Mushtaq has worked with law enforcement and other global agencies to take down some of the world’s biggest malware networks, including the Rustock, Srizbi, Pushdo and Grum botnets. His natural product sense has contributed greatly to SlashNext’s Active Cyber Defense System, an extremely powerful tool that is at once elegant, functional and simple to use.