How do you manage persistent data in a DevOps world?

Oct. 31, 2019
The answer to the data problem is likely to require change along three separate fronts: people, process and technology

You often hear that data is the new oil. This valuable, ever-changing commodity has begun to play a starring role in many cloud-native applications. Yet, according to a number of DevOps teams, data issues continue to plague their efforts to continuously integrate, test and deploy frequent software releases. More specifically, issues with persistent data (and its underlying database engine) often appear to be the culprit.

Your organization might be pursuing emerging IoT applications that incorporate and analyze sensor data from multiple sources. Or, your DevOps team may be trying to develop applications that extract further actionable insight about customers. Whatever the use case, there’s no doubt that back-end database architectures have become increasingly important to the success of such projects. Yet, in many cases, those database systems appear to have difficulty just keeping up with the pace of today’s DevOps pipelines.

According to one report on the state of database deployments in 2019, 46% of DevOps teams on an accelerated release schedule (with weekly or even daily releases) found it “extremely or very difficult” to speed their database release cycles accordingly. In a related Redgate report on the “2019 State of Database DevOps,” 20% of respondents cited slow development and release cycles as one of the biggest drawbacks to “traditional siloed database development practices.” Another 23% saw a higher risk of deployment failure and extra downtime when database changes were introduced in a traditional database environment.

Getting to the Heart of the Data Problem

Are such database issues the result of the wrong choice of underlying database technology, such as an RDBMS, NoSQL or even a NewSQL system? Possibly.

Are database issues also caused by poor (or non-existent) cross-communication between database experts and their developer counterparts? Undoubtedly, this is true as well.

Would you be surprised to learn that both situations (wrong technology and poor communication) are often to blame? Also to blame is the urge to solve new data problems in the same old way that organizations approached their earlier, legacy use cases.

Ultimately, as with many things DevOps-related, the answer to the data problem is likely to require change along three separate fronts: people, process and technology. It will also require a fundamental shift in how such issues are approached at the start.

Starting with People and Process

Instead of worrying about database slowdowns, what if you began again with how DevOps teams first approach the many facets of the underlying data layer? This means:

1. From the start of a project, include database experts as part of the cross-functional DevOps team. They will help promote healthy cross-communication, and they will positively influence the development of appropriate underlying data layers and infrastructure to support your emerging use cases. (In larger organizations that manage many data sets, these may be specialized database administrators (DBAs); in smaller companies, they may be full-stack engineers who also have deep expertise in database operations.)

2. Identify early the different data types, domains, boundaries and optimal cloud-native patterns associated with your data. Once these are established, the team can gain a better understanding of how applications and data architectures may need to be developed differently for each new use case.

In effect, organizations need to make a concerted effort to “shift left” with their overall data architecture discussions. This shift allows more of the right questions about data to be asked, answered and incorporated early on in the design of the overall application.
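To make the idea concrete, here is one minimal, hypothetical sketch (not from any particular team or tool) of what shifting left on the data layer can look like: schema changes live as versioned migration files in the same repository as the application, and the pipeline applies them automatically so every build exercises the database change alongside the code change. The example uses only the Python standard library with SQLite as a stand-in database; the migrations/ layout and function names are illustrative conventions, and real teams would typically reach for a purpose-built migration tool.

```python
import sqlite3
from pathlib import Path

# Hypothetical layout: migrations/001_create_users.sql, migrations/002_add_index.sql, ...
MIGRATIONS_DIR = Path("migrations")

def apply_pending_migrations(conn: sqlite3.Connection) -> None:
    # Track which migrations have already run so the step is idempotent.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (filename TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT filename FROM schema_migrations")}

    for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
        if path.name in applied:
            continue
        conn.executescript(path.read_text())  # apply the schema change
        conn.execute(
            "INSERT INTO schema_migrations (filename) VALUES (?)", (path.name,)
        )
        conn.commit()
        print(f"applied {path.name}")

if __name__ == "__main__":
    # In CI this would point at a disposable test database, so a broken
    # schema change fails the build rather than the release.
    apply_pending_migrations(sqlite3.connect("ci_test.db"))
```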

Asking the Right Questions about Data

Part of this early collaborative work may involve asking key questions about data. These include how the application and its users will interact with the underlying data, such as:

  • How will data be ingested and from where?
  • How will we access the data?
  • How and where will we store it?
  • How do we update and/or maintain legacy databases as we build for new use cases?
  • What types of data will we have? How will the types change or be increased over time?
  • How will we query it (and what are the most common queries users will want to run)?
  • How will we manage and scale the data in use?
  • How will we protect the data from data loss, disaster or corruption?
  • How will we secure the data?

If these questions sound a lot like the stages of a data lifecycle, you’re right. But, such basic questions about data are often overlooked in the rush to deliver, integrate, test and deploy application code. Too often, what happens instead is that an application team creates a design, then hands it off saying, “Okay, now create a data architecture to support the application.”

This fundamental miss on data and collaboration at the start often leads to poor database architecture decisions later. It can also force DevOps teams into workarounds to accommodate those early architecture choices, and it may even prompt some development teams to circumvent DBA involvement altogether, opting instead for public cloud services (or some other type of data management) to support their application.

Instead, application teams should choose the path of early collaboration between developers and all functions of IT (including DBAs). This approach can offer better DevOps outcomes. It also helps move organizations toward a future where data architects provide an effective “bridge” between all parties critical to the application’s success: from the needs of developers to the platforms provided by DBAs and the underlying infrastructure needed to enable them.
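One lightweight, purely illustrative way to keep the questions above from being skipped is to capture the answers as a reviewable artifact that lives next to the code. The sketch below is a hypothetical convention rather than any standard: a simple record per data set, filled in and reviewed jointly by developers, DBAs and architects before the application design is finalized.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataProfile:
    """Hypothetical 'answers on record' for one data set, mirroring the
    lifecycle questions above; reviewed jointly by developers and DBAs."""
    name: str
    ingestion_sources: List[str]   # how data is ingested and from where
    access_patterns: List[str]     # most common queries or lookups
    storage: str                   # engine and location (RDBMS, NoSQL, object store, ...)
    expected_growth: str           # how volume and data types may change over time
    scaling_strategy: str          # partitioning, replication, read replicas, ...
    backup_and_recovery: str       # protection against loss, disaster, corruption
    security_controls: List[str] = field(default_factory=list)  # encryption, access control, auditing

# Example entry a team might review during design (illustrative values only):
orders = DataProfile(
    name="orders",
    ingestion_sources=["web checkout service", "nightly ERP import"],
    access_patterns=["lookup by order id", "orders per customer per month"],
    storage="PostgreSQL, managed instance",
    expected_growth="~2M rows/year; new payment-related fields expected",
    scaling_strategy="read replicas for reporting queries",
    backup_and_recovery="daily snapshots with point-in-time recovery; quarterly restore drills",
    security_controls=["encryption at rest", "role-based access", "audit logging"],
)
```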

Changing Skillsets, Changing Minds

As with most things DevOps-related, success at the data layer has as much to do with changing the education, mindset, and culture toward database architectures and database deployment methods as it does with the specific steps in the DevOps pipeline.

For example, if most DBAs in your organization are narrowly focused on one area (such as Oracle DBAs or SQL Server admins), your organization may benefit from investing in education to close skill gaps. Such investment can help expand the workforce’s knowledge of, and exposure to, DevOps practices and emerging database approaches for cloud-native application patterns.

It can also help to first identify those individuals in the organization who want to transform, innovate and learn about new tools, methods and practices.

Hiring those who already have that expertise is another option. When you hire new skillsets, however, the answer may not be simply to hire several NoSQL administrators; that tactic will not necessarily solve your data problem, either. It may be just as beneficial (or more so) to hire someone with little NoSQL-specific experience who is good at thinking about data in a different way.

Business First, Technology Second

In our practice, we are often asked to weigh in on technology choices to support DevOps efforts in emerging areas like IoT, Big Data and advanced analytics. Organizations ask us about using traditional relational databases (SQL Server, Oracle, etc.) vs. NoSQL databases (MongoDB, Cassandra, etc.). They ask about the merits of one NoSQL iteration over another. They ask about how to manage persistent data in container environments. They even ask about workarounds when the data choices they’ve made cause other, unexpected problems.

We do our best to answer these questions. But, whenever possible, we also tell organizations to back up a few steps and start, instead, from their business objectives:

  • What are you trying to accomplish?
  • What are your business drivers?
  • How do you hope data will be used in these contexts?

Answers to these questions can help you make better choices about the best data architecture to support your growing applications.

Ultimately, new tools and technologies can enable a lot of impressive data architectures. But, just throwing tools at the data problem is not enough. Early effort and investment in people, process and cross-departmental communication are just as important to a successful data outcome for any project.

Many of the discussions we have now are about taking everyone back to the steps they skipped. This harks back to the fundamentals of software engineering: addressing things early so you don’t have problems later.

Unfortunately, data is the one component in the technology stack that you can’t easily undo once you’ve chosen a specific path with your application’s data architecture. When it comes to planning for data, it really is a “Pay Now” or “Pay Later” equation.

Isn’t it best to take the time and choose your data path wisely rather than paying later for slowdowns or other issues with deployment and performance?

About the Author:

Jeff Bozic is a Principal Architect with Insight Enterprises, a Fortune 500-ranked global provider of Digital Innovation, Cloud + Data Center Transformation, Connected Workforce, and Supply Chain Optimization.