SIW Executive Q&A: Data Without Borders Means Closing the Visibility Gap Before It Becomes a Breach

Why CISOs must confront fragmented data, third-party sprawl and AI-accelerated risk head-on.
April 2, 2026
5 min read

Key Highlights

  • Data visibility is essential for effective cybersecurity, yet fragmentation across systems and third-party tools creates significant blind spots.
  • Organizations should conduct comprehensive audits to identify data locations and establish universal classification standards for consistent handling.
  • Reducing shadow IT by controlling OAuth connections and maintaining an approved application list minimizes unsanctioned access points.
  • AI can be both a threat and an opportunity, enabling faster detection of vulnerabilities and anomalies in complex environments.

Large organizations are finding that enterprise data growth has outpaced enterprise data governance. Accelerated cloud migration, SaaS proliferation, AI integration, and API-driven business models have dramatically expanded the digital footprint of most organizations over the past year. Yet while data volumes continue to surge, visibility has not kept pace. For many CISOs, the uncomfortable reality is this: they cannot definitively answer where their most sensitive data resides, who can access it, or how it is being used across third-party environments.

This visibility gap has become one of the most consequential security challenges of the last 12 months. High-profile supply chain incidents, regulatory scrutiny of data protection, and the rapid adoption of generative AI tools have heightened the risks posed by fragmented environments and shadow IT. Disconnected internal systems, unmanaged OAuth integrations, and misaligned SaaS platforms have created blind spots that adversaries increasingly exploit using AI-powered reconnaissance and automation.

In this executive Q&A with SecurityInfoWatch, Editorial Director Steve Lasky speaks with Peterson Gutierrez, VP of Information Security and Interim CISO at Barracuda, about the structural roots of the data visibility crisis and its implications for modern security leadership. Gutierrez brings operational experience managing complex, hybrid ecosystems where governance, third-party risk, and business velocity must coexist.

In the discussion that follows, he outlines why continuous data discovery is foundational, how universal classification standards can reduce fragmentation, and why aligning third-party tools with enterprise security frameworks is essential to shrinking today’s expanding attack surface.

SecurityInfoWatch: Why is data visibility such a critical issue for organizations today, and what are the most surprising challenges you’ve encountered in this area?

Peterson Gutierrez: Data visibility has become a cornerstone of modern cybersecurity, yet many organizations are still playing catch-up. The sheer volume of data being created and managed today is staggering, and it continues to grow exponentially. From a security leadership perspective, the challenge is straightforward but enormous: you can’t protect what you can’t see.

One of the most persistent and surprising issues is the fragmentation of data environments. It’s not just that data lives in multiple systems; different teams often adopt their own third-party tools without centralized oversight. This creates a shadow IT landscape in which the attack surface extends far beyond what security teams actively monitor.

Each unsanctioned tool becomes a blind spot. Without a unified view, enforcing consistent security policies or data classification standards becomes extremely difficult, leaving organizations exposed to unnecessary risk.


SIW: How does the fragmentation of internal systems and third-party tools exacerbate the data visibility crisis?

Gutierrez: Fragmentation breaks the chain of custody for data. When organizations operate across on-premises systems, cloud environments, and numerous third-party SaaS applications, they’re dealing with environments that don’t speak the same language.

You may have strong visibility internally, but the moment data moves into a third-party tool that doesn’t integrate with centralized monitoring, that visibility disappears. At that point, enforcing policies consistently becomes nearly impossible.

Standardization is the key to addressing this problem. Organizations need data policies and classification standards that extend across all systems, including third-party platforms. If a new tool doesn’t align with your security framework, it introduces more risk than value.

SIW: What steps should organizations take to improve data visibility and reduce their attack surface?

Gutierrez: It always starts with discovery. Organizations must conduct comprehensive audits to understand exactly where their data resides—across servers, SaaS platforms, endpoints, and third-party vendors.

Once that foundation is established, the next step is creating a universal data classification standard. Sensitive data should be identified and handled consistently, regardless of where it lives.
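To make the idea of a universal classification standard concrete, here is a minimal sketch of a shared taxonomy applied in code. The labels, regex patterns, and the classify_record helper are illustrative assumptions, not something described in the interview; real programs layer discovery tooling and DLP policy on top of a shared taxonomy like this.

```python
import re

# Hypothetical shared taxonomy: every system and vendor integration maps
# findings onto the same labels, so handling rules stay consistent.
CLASSIFICATION_RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),      # SSN-like pattern
    ("confidential", re.compile(r"\b\d{13,16}\b")),            # possible card number
    ("internal", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),      # email address
]

def classify_record(text: str) -> str:
    """Return the most sensitive label whose pattern matches the text."""
    for label, pattern in CLASSIFICATION_RULES:
        if pattern.search(text):
            return label
    return "public"

if __name__ == "__main__":
    samples = [
        "Customer SSN on file: 123-45-6789",
        "Contact us at support@example.com",
        "Quarterly newsletter draft",
    ]
    for record in samples:
        print(f"{classify_record(record):>12}  <- {record}")
```

The point of the sketch is that the same labels travel with the data wherever it is discovered, which is what allows consistent handling across internal systems and third-party platforms.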

From there, organizations must enforce these standards with vendors. Any third-party tool must align with internal classification and security requirements. Functionality alone isn’t enough.

It’s also critical to rein in shadow IT. Restricting who can authorize OAuth connections or maintaining an approved application allow list can dramatically reduce the number of unsanctioned access points.
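As a rough sketch of the allow-list idea, the snippet below compares OAuth grants taken from an identity provider export against an approved application list and flags everything else. The grant format, field names, and APPROVED_APPS entries are illustrative assumptions, not a specific vendor's API.

```python
# Hypothetical export of OAuth grants (e.g., from an IdP admin report):
# each entry records which third-party app a user authorized and its scopes.
grants = [
    {"user": "alice", "app": "corp-crm", "scopes": ["contacts.read"]},
    {"user": "bob", "app": "ai-notetaker", "scopes": ["mail.read", "files.read"]},
    {"user": "carol", "app": "corp-crm", "scopes": ["contacts.read"]},
]

# Approved application allow list maintained by the security team.
APPROVED_APPS = {"corp-crm", "expense-tool"}

unsanctioned = [g for g in grants if g["app"] not in APPROVED_APPS]

for grant in unsanctioned:
    print(f"Unsanctioned OAuth grant: {grant['user']} -> {grant['app']} "
          f"({', '.join(grant['scopes'])})")
```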

Finally, organizations need to focus on resilience. Even with strong visibility, breaches can occur. Having a clear recovery and communication plan ensures that a vendor incident doesn’t become a business-wide crisis.

SIW: How are AI-driven threats changing the way organizations approach data visibility and security?

Gutierrez: Attackers are increasingly using AI to identify and exploit visibility gaps faster than ever, shrinking the window organizations have to detect and respond.

At the same time, AI presents a major opportunity for defenders. Security teams are understaffed and overwhelmed by alerts, making it impossible to manually analyze the massive volumes of data generated by modern environments.

AI-driven security tools help monitor access patterns in real time and flag anomalies that could indicate a breach or rogue application. This allows organizations to scale visibility without scaling headcount. It’s an arms race: if attackers are using AI to find blind spots, defenders must use AI to illuminate them first.
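A toy illustration of the anomaly-flagging idea: the check below flags users whose latest daily access count sits far outside their own baseline. The data, threshold, and simple z-score approach are assumptions for illustration; production tools draw on far richer signals and models.

```python
from statistics import mean, stdev

# Hypothetical per-user daily access counts to a sensitive data store.
history = {
    "alice": [12, 9, 14, 11, 10, 13],
    "bob": [20, 22, 19, 21, 23, 180],   # last day spikes well above baseline
}

THRESHOLD = 3.0  # flag anything more than 3 standard deviations above the user's norm

for user, counts in history.items():
    baseline, latest = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (latest - mu) / sigma if sigma else float("inf")
    if z > THRESHOLD:
        print(f"Anomaly: {user} accessed {latest} records today (baseline ~{mu:.0f})")
```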

SIW: Looking ahead, what should be the top priorities for CISOs addressing the data visibility crisis?

Gutierrez: The top priority is shifting from reactive security to proactive data governance. Data discovery and mapping must be continuous, not one-time exercises.

Security also needs to be embedded into procurement and development processes from the start. The goal isn’t to slow innovation, but to ensure that new technologies operate within a visible, classified, and monitored data environment.

When organizations achieve strong data visibility, they can innovate faster and with greater confidence, reducing risk while supporting business growth.


About the Author

Steve Lasky

Editorial Director, Editor-in-Chief/Security Technology Executive

Steve Lasky is Editorial Director of the Endeavor Business Media Security Group, which includes SecurityInfoWatch.com, as well as Security Business, Security Technology Executive, and Locksmith Ledger magazines. He is also the host of the SecurityDNA podcast series. Reach him at [email protected].
