Blind spots – those areas we can’t see when looking in a car’s mirrors – are such a danger that the automotive industry began introducing blind spot warning technology nearly 20 years ago. Using sensors to detect vehicles in adjacent lanes, the technology alerts the driver, giving them greater awareness of the vehicles around them. Credited with a 23% reduction in lane-change injury crashes, blind spot warning systems have proven so effective that they are now a standard safety feature on most new vehicles. Drivers of older cars without the feature may not notice its absence until there is an accident.
The same goes for modern enterprise networks. Security teams today are tasked with protecting dynamic environments that are spread out across clouds, instances within clouds, untethered endpoints, and physical spaces. Unless they have full visibility into the network in the cloud and on-prem and a source of truth for how users, devices, and applications are behaving, it’s easy for bad actors to hide.
Blind spots are proliferating for two primary reasons:
Lack of visibility where and when needed
Pre-COVID, many organizations still operated with network security architectures that were primarily on-prem and appliance-based. The pandemic accelerated the shift to the cloud to the point where roughly 87% of enterprises now take a multi-cloud approach. This shift has made deploying sensors, taps, and other hardware in every place visibility is needed impractical, costly, and complex.
Furthermore, these same traditional architectures have relied on deep packet inspection (DPI); however, most traffic today is encrypted, which renders DPI largely ineffective. On-prem, cloud, multi-cloud, and hybrid environments also make DPI nearly impossible to scale. Add the sheer bandwidth and variety of traffic hitting all of those points, plus the computing resources required to inspect it all, and DPI becomes a prohibitively expensive endeavor for the majority of organizations.
Multiple, disparate sources of data
It’s no secret that security teams are suffering from alert fatigue. Recent research found that, on average, organizations have deployed more than 30 security tools to protect their clouds, networks, and applications: a mix of in-house builds, open-source projects, and solutions from multiple vendors. This “tool sprawl” means there is no unified view for security teams, who must switch between tools, consoles, and dashboards to see what is happening.
Naturally, this also leads to an overload of alerts that small teams must manage, sifting through noisy data to find what’s important. Invariably, some alerts are ignored because those teams simply can’t address them all.
To eliminate these blind spots, we need better threat detection, which requires:
Consolidating tools and breaking down silos
Because infrastructure is spread across legacy, on-premises, hybrid, and multi-cloud environments, specialization is unavoidable. Organizations have security operations center (SOC), network, cloud operations, and in some cases operational technology (OT) teams, all tasked with keeping the business running and secure. Each team consists of subject matter experts with specialized knowledge and their own specific tools. This can make it incredibly difficult to get a big-picture view of what is happening across the organization, let alone maintain visibility into the traffic traversing the network.
The key to consolidating disparate systems, processes, and data sources is pulling that information into a single platform and making it consistent across a unified data set that multiple teams can leverage. Because the data all lives in the same platform, with real-time views and detections, every team can see and understand it. Consolidating data and tools can eliminate blind spots and silos in the organization and open new forms of collaboration.
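To make the idea of a unified data set concrete, here is a minimal sketch of normalizing events from different tools into one shared record shape. The tool names, field names, and mappings are illustrative assumptions, not any vendor’s actual schema.

```python
# Hedged sketch: normalizing events from disparate tools into one schema.
# Source names and field mappings are illustrative assumptions only.

def normalize(source: str, raw: dict) -> dict:
    """Map vendor-specific event fields onto one shared record shape."""
    if source == "cloud_tool":
        return {"ts": raw["eventTime"], "src": raw["sourceIp"], "action": raw["eventName"]}
    if source == "firewall":
        return {"ts": raw["time"], "src": raw["src_addr"], "action": raw["rule_action"]}
    raise ValueError(f"unknown source: {source}")

events = [
    normalize("cloud_tool", {"eventTime": "2024-01-01T00:00:00Z",
                             "sourceIp": "10.0.1.5", "eventName": "DeleteBucket"}),
    normalize("firewall", {"time": "2024-01-01T00:00:05Z",
                           "src_addr": "10.0.1.5", "rule_action": "deny"}),
]

# Every team now queries the same keys: ts, src, action.
print(sorted({e["src"] for e in events}))  # ['10.0.1.5']
```

Once events share one shape, any team can correlate activity by the same keys without knowing each tool’s native format.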
Comprehensive, real-time visibility
Security teams must look for new approaches that can be deployed anywhere, at any time, for visibility where and when it is needed. Rather than deploying multiple point solutions that struggle to match the scale, speed, and agility of the cloud, teams should consider a cloud-native platform, such as a network defense platform (NDP). Cloud-native solutions provide the flexibility to scale services up or down on demand, making it easier to adapt to changing network requirements, and they work equally well in the cloud and on-prem. This scalability helps in deploying security measures across the network without constraints.
Flow-based solutions that do not rely on packets and are encryption-agnostic are also worth considering, since they can be deployed across the environment for more complete visibility. When flow data is paired with context-enriched metadata, security teams can inspect it in real time for a wide variety of behaviors and attributes, such as asset risk, type of environment, last known user, and vulnerability counts, ratings, and scores, that may otherwise go unseen by more traditional technologies.
A source of truth
Context also cuts out the manual research that analysts must often do to understand what they are seeing. With context-enriched metadata, analysts have the critical information that they need at their fingertips for identifying anomalous behavior and accelerating incident response.
Context-enriched metadata is also valuable for integrating with security tools and systems. It enables these tools to work more effectively by providing a broader understanding of the network and potential threats. By providing additional context, such as the relationship between different events or activities, enriched metadata can reduce false positives, allowing security teams to focus on genuine threats and decrease alert fatigue.
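One way to picture how context reduces false positives: a triage step that suppresses alerts matching known, sanctioned behavior and escalates those touching high-risk assets. The authorized-scanner list and alert fields are illustrative assumptions, not a real product’s logic.

```python
# Hedged sketch: using context to cut false positives and prioritize alerts.
# The scanner allowlist and alert fields are illustrative assumptions.

AUTHORIZED_SCANNERS = {"10.0.9.1"}  # e.g., the internal vulnerability scanner

def triage(alert: dict) -> str:
    """Route an alert using contextual metadata rather than the raw signal alone."""
    # Port scans from sanctioned tooling are expected behavior, not threats.
    if alert["type"] == "port_scan" and alert["src_ip"] in AUTHORIZED_SCANNERS:
        return "suppressed"
    # Context about the asset raises the priority of otherwise ordinary alerts.
    if alert.get("asset_risk") == "high":
        return "escalate"
    return "review"

print(triage({"type": "port_scan", "src_ip": "10.0.9.1"}))                        # suppressed
print(triage({"type": "beaconing", "src_ip": "10.0.1.5", "asset_risk": "high"}))  # escalate
```

The false-positive reduction comes from the first branch: without the allowlist context, every scheduled scan would page an analyst.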
All of this can be offered via a single console that provides a unified view of what is happening across the entire network.
Much like the blind spot warning technology used in automobiles, new approaches, such as a cloud-native NDP that combines flow-based technologies with context-enriched metadata, can be a powerful force in significantly reducing blind spots and eliminating “tool sprawl.” This approach provides enhanced visibility, automated responses, scalability, and the ability to embed security measures across the entire network infrastructure. A cloud-native NDP aligns well with modern networking requirements, especially as businesses increasingly rely on cloud services and dynamic infrastructure.
Matt Wilson is the Vice President of Product Management at Netography. Over his 25+ year career, Matt has held senior technology leadership positions at companies including Neustar, Verisign, and Prolexic Technologies. With a rich background in innovation and go-to-market strategies, Matt has been a critical leader in helping many companies conceptualize solutions from the customer lens and drive them to market with significant impact.