The Top 5 Mistakes Made When Apps and Data Move to the Cloud

“We don’t have the resources to invest in that new server farm, so let’s move it to the cloud.” Have you ever heard that from your C-suite?

New applications or significant upgrades to existing applications often require additional computing power. Because these expansions can strain existing in-house data centers in terms of power, floor space, and cooling, CIOs may want to shift new operations to outside cloud providers. For organizations that have no previous experience with the cloud, this can be a high-risk endeavor.

An important first step in moving assets to the cloud is to know the common mistakes that can lead to a data disaster or a breach of sensitive information.

         1. Treating the Cloud as an Extension of the Data Center

Information technology professionals have historically taken the “M&M” approach to security when designing network and application architectures: a hard outer shell around a soft, gooey core. Defending a perimeter is easier because the number of connections and ports is finite, so internal security controls have not traditionally needed to be as robust. Firewall rules further restrict what can reach the inside, so internal systems have received less attention.

When data and applications are moved into a cloud, IT staff should not attempt to transplant the internal data center architecture; the architecture must be redesigned around the unique risks the cloud brings. History has exposed a few common mistakes in credential management, addressing, and application management.

A robust credentialing architecture must balance the complexity of a secure system against the inconvenience end users will tolerate. When applications and data are hosted in the same protected network environment as the end users, the lower level of risk may allow developers to rely on a user ID and password for general users, with additional controls such as longer passwords or even multi-factor authentication (MFA) for administrator accounts. For access to cloud-hosted applications, a user ID and password alone is probably not appropriate for sensitive data, so additional controls such as device credentialing and/or MFA should be the minimum. Administrator accounts should use MFA plus additional measures, including VPN access, geolocation-aware edge devices, frequent password changes, and of course increased logging and alerting.
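
As a minimal sketch of what one of those additional controls looks like in practice, the following Python fragment verifies a time-based one-time password (TOTP, RFC 6238) of the kind produced by common authenticator apps. It assumes the shared secret was provisioned to the user out of band; the function names and parameters are illustrative, not a production implementation.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, for_time: float, step: int = 30, digits: int = 6) -> str:
        # Compute the TOTP code for a given Unix time (RFC 6238).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(for_time) // step)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_totp(secret_b32: str, submitted_code: str, window: int = 1) -> bool:
        # Accept the current code or its immediate neighbors to tolerate clock skew.
        now = time.time()
        return any(
            hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted_code)
            for drift in range(-window, window + 1)
        )

A check like this is only the second factor; the password check, device credentialing, and the logging and alerting described above still apply.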

There have been several breaches of sensitive data because application developers took web shortcuts, such as exposing records through direct, unauthenticated links. A more secure architecture requires a separate login to a web application before data can be accessed; it is never proper to create direct external links to sensitive data. In previously reported breaches, data was exposed because the end of the URL contained a sequentially assigned record number. The breaches came to light when a third party who had been sent a direct URL to their own personal data changed a single character and was able to view another individual’s data. There are a few safeguards. The strongest is to require a complete login into a secure environment, followed by specific credentials to access the sensitive data. While less secure, very long, randomly generated unique keys will thwart most accidental disclosures. Finally, any website can limit indexing by placing a ‘robots.txt’ file at its root, but while this will stop legitimate search engines, it will not stop malicious crawlers and spammers looking for email addresses, so it should not be treated as a security control.
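
As a rough sketch of the safer pattern, assuming a simple in-memory record store (the storage, session handling, and names here are illustrative only), records can be issued long random identifiers and every read can be checked against the authenticated caller:

    import secrets

    records = {}  # opaque_id -> {"owner": user_id, "data": record}

    def create_record(owner_id: str, data: dict) -> str:
        # Issue a long, random, non-sequential identifier instead of a row number.
        opaque_id = secrets.token_urlsafe(32)  # roughly 256 bits of randomness
        records[opaque_id] = {"owner": owner_id, "data": data}
        return opaque_id

    def fetch_record(session_user_id: str, opaque_id: str) -> dict:
        # Require an authenticated session AND verify the caller owns the record.
        record = records.get(opaque_id)
        if record is None or record["owner"] != session_user_id:
            raise PermissionError("record not found or not authorized")
        return record["data"]

The ownership check, not the random identifier, is what actually prevents one user from reading another’s data; the unguessable identifier is only a backstop against accidental disclosure.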

           2. Assuming the Current Staff Has the Right Skills

Amazon and Microsoft have made it easy for developers to put applications into their clouds; Microsoft even offers Azure service credits to Microsoft Developer Network (MSDN) subscribers. This means that many IT professionals can stand up cloud environments and move sensitive data into them without oversight or authorization, and most organizations have no technical controls and little governance in place to stop them.

The first question every CIO should ask is, “Just because we CAN use the cloud, are we PREPARED to use the cloud?” Application and network architects and operators who have spent their entire careers designing systems for a corporate data center may not have the skills or experience to design and operate a cloud-based solution. Designs and techniques used in a closed environment, such as a self-hosted data center, may not be appropriate for the cloud. Authentication, firewalls, and event monitoring tools well suited for local access may not be optimized for the cloud, and cloud-suited tools will likely be virtualized, so additional training may be needed before the IT team is proficient.

            3. Shortcutting the Testing and Validation Cycle

Other management processes that worked in a self-hosted data center may not be adequate to control cloud risks either. For example, organizations using cloud-hosted applications and data should consider a strong separation of duties between the developers and the operations team. At a minimum, cloud-facing applications should receive independent vulnerability scans and penetration testing beyond what internal applications receive, and these skills may not be available in every internal IT department.

Beyond separation of duties, it is even more important to have an independent gatekeeper with the authority to stop deployment of untested applications. In many IT departments there is pressure to meet deadlines, but rushing releases leads to higher error rates and ultimately more breaches. Formal quality gates, with formal reviews of independent testing results, can significantly reduce the risk of breaches caused by misconfigured or poorly designed systems.
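
A quality gate can be as simple as an automated check that refuses to promote a build while independent findings remain open. The sketch below assumes the scanning tool exports its findings as a JSON list with a severity field; the file name, schema, and threshold are assumptions for illustration only.

    import json
    import sys

    BLOCKING_SEVERITIES = {"critical", "high"}

    def quality_gate(report_path: str = "scan-findings.json") -> int:
        # Block deployment while unresolved high-severity findings remain.
        with open(report_path) as fh:
            findings = json.load(fh)
        blockers = [f for f in findings
                    if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
        if blockers:
            print(f"Deployment blocked: {len(blockers)} unresolved high/critical findings.")
            return 1
        print("Quality gate passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(quality_gate())

The point is not the tooling but the authority: whoever owns this gate must be able to stop a release even when the project is behind schedule.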

             4.  Skipping the Responsibility Matrix During Contract Negotiation

Another common mistake does not involve the technology itself, but how the cloud technology is managed. Not all cloud environments are the same: there are major architectural differences between Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Application as a Service (AaaS), also known as the Application Service Provider (ASP) model. Every one of these models still requires the organization moving into the cloud to mitigate the risks to confidentiality, integrity, and availability, because the ultimate responsibility for securing one’s data cannot be delegated.

One method for clarifying roles is to create a responsibility matrix and include it in the contract. The matrix should spell out who is responsible for each of the required security controls, as well as who will manage day-to-day operations. In at least one instance, an organization put applications and data into a PaaS cloud and assumed the hosting organization would manage the firewalls. The assumption was only partly correct: the hosting data center did have a firewall, but it controlled only external access into the data center and did not adequately firewall off individual users inside it.
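
An abbreviated, hypothetical example of such a matrix, expressed here as data so that gaps can be checked automatically, might look like the following; the control names and the provider/customer split are illustrative, not a recommended allocation.

    # Required controls the organization must account for before migration.
    REQUIRED_CONTROLS = [
        "physical data center security",
        "perimeter firewall",
        "internal network segmentation",
        "guest OS patching",
        "identity and access management",
        "data backup and disaster recovery",
    ]

    responsibility_matrix = {
        "physical data center security": "provider",
        "perimeter firewall": "provider",
        "guest OS patching": "customer",
        "identity and access management": "customer",
        "data backup and disaster recovery": "customer",
        # Note: "internal network segmentation" has no owner here --
        # exactly the gap described in the PaaS firewall example above.
    }

    unassigned = [c for c in REQUIRED_CONTROLS if c not in responsibility_matrix]
    if unassigned:
        print("Controls with no responsible party:", ", ".join(unassigned))

Whether kept in a spreadsheet or in code, the matrix is only useful if every required control has an explicitly named owner before the contract is signed.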

Other organizations have incorrectly assumed that an IaaS or PaaS solution would provide geographically separated environments for disaster recovery purposes. For example, many of the 9,000 clients of The Planet, a nationally recognized data center provider, learned the hard way that contracting for a secondary backup location is not foolproof. The Planet had its primary and backup data centers in the same building, sharing many of the same utilities. When a fire occurred in 2008, separate data centers on different floors went offline together because the fire department would not let the company run its backup generators.

Starting with availability, SaaS providers (who host their own software) and ASPs (who typically deliver complete solutions) generally offer turn-key arrangements that address data backup and disaster recovery. Contracts with SaaS and ASP providers typically contain fewer details about “how” the cloud is designed to be resilient and more about “what” happens if downtime occurs. Both SaaS and ASP providers commonly include contract language promising a minimum reliability, measured as uptime. Depending on the criticality of the application, uptime guarantees should fall in the 99.9 to 99.999 percent range. Financial penalties for missing these targets should be reasonable, yet stiff enough to incentivize better performance. Contracting officers may expect to see language where credits are paid based on the percentage of uptime missed, but this may not adequately compensate the customer organization, which must fall back on paper downtime procedures in the meantime.
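
The arithmetic behind those figures is worth doing before signing. The short snippet below converts each guarantee level into allowed downtime per year (a 365.25-day year is assumed purely for illustration).

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (99.9, 99.99, 99.999):
        downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100)
        print(f"{availability}% uptime allows about {downtime_minutes:,.0f} minutes "
              f"({downtime_minutes / 60:.1f} hours) of downtime per year")

That works out to roughly 8.8 hours per year at 99.9 percent versus about five minutes at 99.999 percent; a credit calculated only on the sliver of uptime missed will rarely cover the cost of running paper downtime procedures for that long.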

PaaS and IaaS providers require a much different contracting vehicle. In these cases, the customer organization is responsible for identifying all security and privacy controls, then either contracting for those services or providing them organically using virtual servers and virtual applications. A typical error to avoid is failing to plan for a second, geographically separate data center to serve as a disaster recovery site.

             5. Failing to Perform a New Risk Assessment

All of the mistakes above could have been avoided if a thorough risk assessment had been conducted before implementing any cloud-based solution. A proper risk assessment would have identified these risks, and a mature risk management plan would have mandated that they be mitigated to an acceptable level.

Performing a comprehensive risk assessment requires different skills and a different perspective than what is typically found in IT departments. Risk assessments conducted by untrained staff, or by staff with an ‘agenda’, tend to produce a myopic view of risk, with unsafe conditions scored lower than an independent assessor would score them.

The decision to move data and applications into a cloud is an excellent example of a justifiable and appropriate use of an independent risk assessor. The assessor can work for the same organization but should not report into the IT department, where he or she may be pressured not to slow down the project. If these conditions cannot be met, consider engaging an independent third-party assessor who can perform not only the risk assessment but also the vulnerability scans and penetration testing.

Conclusion

By now, it should be clear that any move from a self-hosted data and application environment to a third-party or cloud environment introduces significant risks for any organization that is unprepared for the change. There are technical challenges and, more importantly, organizational and management adjustments that must be made before the first connection is established. These challenges require careful planning, or organizations will find themselves in the news explaining why they exposed sensitive data.