Real words or buzzwords?: True Cloud - Part 2

Sept. 19, 2017
Examining the characteristics of 'cloud-native' applications and why they are important for the physical security industry

Editor’s note: This is the 14th article in the “Real Words or Buzzwords?” series from contributor Ray Bernard about how real words can become empty words and stifle technology progress.

Leading cloud technology organizations are continually revolutionizing the way true cloud applications are designed, developed, and delivered. This means that there is a growing gap between “true cloud” applications and those that aren’t. (True cloud is a term coined several years ago by Dean Drako, founder and CEO of Eagle Eye Networks and owner of Brivo Systems, to emphasize the differences between applications designed specifically for the cloud and legacy-design applications being run on a cloud-hosted virtual server.) True cloud, of course, means an application designed to take advantage of the cloud computing characteristics that make true cloud applications different from, and better than, traditional server-based applications.

The problem of legacy-design applications masquerading as cloud applications has not been confined to the physical security industry—it can be found in any area of cloud application development. Thus, it should be no surprise that additional terms have been developed to make the distinction between applications designed specifically for the cloud, and those that are not. One such term has come into common use: “cloud-native.” The term cloud-native resonates strongly with IT folks because “native” has been a mainstream term in IT for well over a decade.

A native application is one that has been developed for use on a specific platform or device, and executes more quickly and efficiently because it makes maximum use of the capabilities built into (i.e. native to) that platform or device, and doesn’t require any extra layers of translation or interface to run there. Thus, we see the terms “native iOS app” and “native Android app” used to refer to mobile apps whose software code is written just for Apple’s iOS or Google’s Android operating system.

In 2015 the Cloud Native Computing Foundation was founded with the specific purpose of creating and driving the adoption of cloud-native design.


Cloud-native refers to an application that has been designed and built to take maximum advantage—based on the purpose of the application—of the key characteristics of cloud computing. These computing characteristics are very well defined and each provides specific benefits not available in standard server-based computing.

Cloud computing is not simply one or more virtual servers that can be scaled up in real time from a virtual “standard server” to a virtual “humongous server” with gigantic CPU power, RAM and database storage. Scaling up every aspect of a virtual server, or firing up multiple instances of a server, is not what cloud scaling and on-demand services are about. It is inefficient and needlessly costly because it wastes cloud resources.

Cloud computing takes a set of computing resources, such as processors and memory, and puts them into a big pool, typically using virtualization. Thus, a cloud-native security system application, when experiencing a spike in user activity due to a critical security incident, would allocate exactly and only the resources it needs from the pool, such as 32 CPUs to process analytics for many video streams. The cloud infrastructure instantly assigns those resources to the application, so the user-responsiveness of the system is not affected by the spike in demand. There is no “maxing out” at 100% CPU utilization. When incident response activity is done, the application releases the resources (in this case, virtual CPUs) back into the pool for someone else to use.
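The borrow-and-return pattern described above can be sketched in a few lines. This is an illustrative model only, not any vendor's API; the ResourcePool class and CPU counts are invented for the example.

```python
# Illustrative sketch (not a real provider API): a shared CPU pool from
# which a cloud-native application borrows exactly what a workload needs
# and returns it when the incident is over.

class ResourcePool:
    def __init__(self, total_cpus):
        self.total = total_cpus
        self.available = total_cpus

    def allocate(self, cpus):
        """Borrow CPUs from the shared pool."""
        if cpus > self.available:
            raise RuntimeError("pool exhausted")
        self.available -= cpus
        return cpus

    def release(self, cpus):
        """Return CPUs to the pool for other subscribers to use."""
        self.available = min(self.total, self.available + cpus)

pool = ResourcePool(total_cpus=128)
borrowed = pool.allocate(32)   # spike: analytics on many video streams
# ... process video streams during the incident ...
pool.release(borrowed)         # incident over: CPUs back in the pool
```

Note that the application never asks for a bigger server; it asks for exactly the resource it needs, for exactly as long as it needs it.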

When this happens, cloud-application service providers pay only for the computing power that is used, and this resource-cost minimization is passed along to the subscribers. This kind of cost effectiveness can’t be duplicated with on-premises systems, and it can’t be duplicated in the cloud with non-cloud-native applications.

Why the Key Cloud Characteristics Are Important

Cloud-native applications are the only way to achieve the high performance and cost-effectiveness required for large scale (i.e. high subscriber count) cloud-based systems. They are also the only way to achieve the rapid application advancement required to keep up with accelerating trends in technology, business and security. There is zero future-proofing in non-cloud-native systems.

This is why it is important to know about the six key characteristics of cloud computing. Cloud application vendors should be able to explain to integrators and consultants exactly how they are using these cloud computing characteristics to support the features and capabilities of their applications. For example, two cloud-based video management systems, the Axis Video Hosting System solution and the Eagle Eye Cloud Security Camera VMS, have users specify video storage retention as the number of days of recorded video to retain, as opposed to the number of gigabytes of video storage space to reserve. The cloud applications manage the storage automatically, expanding and contracting it as needed to maintain the video retention requirement.
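The days-of-retention idea above can be sketched as a simple pruning rule: the application keeps recordings inside the retention window and lets released storage flow back to the shared pool. This is a hypothetical sketch, not either vendor's implementation; the data and function name are invented.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: retention is specified in days, not gigabytes.
# The application prunes recordings older than the retention window;
# storage expands as new video arrives and contracts as old video ages out.

def prune_recordings(recordings, retention_days, now):
    """Return only the recordings within the retention window.

    recordings: list of (timestamp, size_gb) tuples.
    """
    cutoff = now - timedelta(days=retention_days)
    return [(ts, gb) for ts, gb in recordings if ts >= cutoff]

now = datetime(2017, 9, 19)
recordings = [
    (datetime(2017, 9, 1), 4.0),   # 18 days old -> outside 14-day window
    (datetime(2017, 9, 10), 4.0),  # within window -> kept
    (datetime(2017, 9, 18), 4.0),  # within window -> kept
]
kept = prune_recordings(recordings, retention_days=14, now=now)
```

The subscriber states the requirement (days of video); the application, not the user, works out how much storage that takes on any given day.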

The Six Key Characteristics of Cloud Computing

The NIST Definition of Cloud Computing provides five key characteristics, and ISO/IEC 17788 adds a sixth. The original language is liberally paraphrased below to make it less technical and more user-friendly. In the paragraphs that follow, the original NIST broad term “consumer” is replaced with “cloud-native application,” “user” (meaning a customer or integrator using the cloud application) or “subscriber” (meaning the integrator’s customer) to specify which type of consumer is being referred to.

From the NIST Definition: Cloud computing is a model for enabling anywhere, anytime, convenient, on-demand network access to applications that run using a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or cloud service provider interaction—typically, automatically provisioned and released based upon the level of subscriber use of the cloud-based application.

However, since that definition was written, cloud server virtual resources have been further refined to enable just parts of a server to scale up, such as CPUs or RAM. Such capability has led to serverless computing, a broad category that refers to any situation where the cloud user doesn’t manage any of the underlying hardware or virtual machines, and just accesses exposed computing functions. It’s this kind of fine-grained resource virtualization that the Cloud Native Computing Foundation was formed to enable.
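The serverless idea above can be illustrated with a toy event dispatcher: the application registers a function, and the platform (simulated here in a few lines) runs it on demand, with the developer never touching a server or virtual machine. All names here are hypothetical; real serverless platforms work through their own registration and invocation mechanisms.

```python
# Minimal sketch of the serverless idea: register a function, let the
# platform invoke it per event. The "platform" below is simulated by a
# plain dictionary; no real provider API is being shown.

HANDLERS = {}

def on_event(event_type):
    """Decorator that registers a handler for an event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on_event("motion_detected")
def analyze_clip(event):
    # In a real deployment, compute is allocated only for the duration
    # of this call, and only that usage is metered and billed.
    return f"analyzing camera {event['camera']}"

def dispatch(event):
    """The platform's job: route the event to the registered function."""
    return HANDLERS[event["type"]](event)

result = dispatch({"type": "motion_detected", "camera": 7})
```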

Key Characteristics

1. Resource pooling. The computing resources of the cloud infrastructure provider (such as Microsoft Azure, Amazon AWS or Google Cloud Services) are pooled to serve multiple subscribers with different physical and virtual resources dynamically assigned and reassigned according to subscriber demand. Examples of resources include storage, processing, memory, and network bandwidth.

2. On-demand self-service. A cloud-native application can on its own automatically provision computing capabilities, such as server computing time and network storage, as needed, without requiring human interaction with service providers.
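A minimal sketch of self-service provisioning: the application itself grows its storage when utilization crosses a threshold, with no ticket and no human operator in the loop. The threshold, step size, and function name are all invented for illustration.

```python
# Sketch of on-demand self-service: the application provisions more
# storage automatically when usage crosses a threshold. In a real cloud,
# the growth step would be a provider API call; here it is simulated.

def provision_if_needed(used_gb, capacity_gb, threshold=0.8, step_gb=100):
    """Return the (possibly grown) capacity.

    Grows capacity when usage exceeds the threshold fraction of it,
    without any human interaction with the service provider.
    """
    if used_gb > threshold * capacity_gb:
        return capacity_gb + step_gb   # provider grants more, instantly
    return capacity_gb

grown = provision_if_needed(used_gb=85, capacity_gb=100)    # 85 > 80: grows
steady = provision_if_needed(used_gb=40, capacity_gb=100)   # unchanged
```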

3. Broad network access. Cloud-native application capabilities are available over a network and accessed through standard mechanisms, such as Internet Service Provider connections and corporate networks that provide Internet access, enabling use by various kinds of client devices (e.g., mobile phones, tablets, laptops, and workstations).

4. Rapid elasticity. Capabilities can be elastically provisioned and released, preferably automatically, to scale rapidly up and down commensurate with demand. To the user, the capabilities often appear to be unlimited and can be appropriated in any quantity at any time. For example, if a subscriber needed to send out an emergency notification to 5,000 employee mobile users, the network capabilities to establish several thousand mobile device connections at once would be automatically allocated to that subscriber, for the 5- to 15-minute duration of the message broadcast and the mobile users’ return responses. Network traffic of other subscribers would not be affected.
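The scale-up-then-release behavior in the notification example can be reduced to a capacity calculation: units of connection capacity follow the current demand, up for the broadcast and back down afterward. The connections-per-unit figure is a made-up assumption, not a real provider metric.

```python
# Hedged sketch of rapid elasticity: capacity is computed from current
# demand, allocated for the broadcast, and released afterward. The
# 500-connections-per-unit figure is purely illustrative.

def required_units(connections, connections_per_unit=500):
    """Units of connection capacity needed for the current demand
    (ceiling division, since partial units must be rounded up)."""
    return -(-connections // connections_per_unit)

# Emergency notification to 5,000 mobile users:
peak_units = required_units(5000)   # scaled up for the broadcast
idle_units = required_units(40)     # scaled back down when it ends
```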

5. Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability within the cloud infrastructure, at some level appropriate to the type of service (e.g., storage, processing, network bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the cloud-application provider and the subscriber to the utilized cloud service.
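Metering can be sketched as a per-resource usage ledger that both the provider and the subscriber can inspect. The resource names and rates below are invented numbers, not any provider's actual pricing.

```python
from collections import defaultdict

# Illustrative metering sketch: usage is recorded per resource type, so
# billing is transparent to provider and subscriber alike. Rates below
# are made-up numbers for the example.

class Meter:
    def __init__(self, rates):
        self.rates = rates                 # e.g. {"cpu_hours": 0.05}
        self.usage = defaultdict(float)

    def record(self, resource, amount):
        """Accumulate metered usage of one resource type."""
        self.usage[resource] += amount

    def bill(self):
        """Charge only for what was actually consumed."""
        return sum(self.usage[r] * self.rates.get(r, 0.0)
                   for r in self.usage)

meter = Meter(rates={"cpu_hours": 0.05, "storage_gb_days": 0.003})
meter.record("cpu_hours", 32 * 2)        # 32 CPUs for a 2-hour incident
meter.record("storage_gb_days", 500)     # video retained that day
```

Because usage is recorded per resource, the spike in CPU for the incident shows up as exactly 64 CPU-hours, no more, which is what makes the pay-for-what-you-use model auditable.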

6. Multitenancy. Multitenancy (sometimes hyphenated as “multi-tenancy”) is a software architecture in which a single instance of software serves multiple subscribers, referred to as tenants. A tenant is a group of users (for a security system, this could be subscriber employees and building occupants) who share a common access with specific privileges to the software instance and associated database storage. Under the multitenant model, each subscriber’s allocated cloud resources are separate and distinct from those of another subscriber, so that subscribers can only access their own data and will only use their own allocation of computing resources (necessary for accurate billing). This contrasts with giving each subscriber separate virtual servers, applications, and databases. In a cloud-native application, each cloud data center runs only a single instance of application software and any databases, shared by all subscribers.
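The tenant-isolation rule above comes down to one discipline: every data access path in the single shared instance is scoped by tenant. The sketch below illustrates that with an invented in-memory event store and made-up tenant names; a real system would enforce this in its database layer.

```python
# Sketch of the multitenant model: one application instance, one shared
# store, every query scoped by tenant. Store contents and tenant names
# are invented for illustration.

SHARED_EVENT_STORE = [
    {"tenant": "acme",   "event": "door_forced",  "site": "HQ"},
    {"tenant": "acme",   "event": "badge_denied", "site": "Lab"},
    {"tenant": "globex", "event": "door_forced",  "site": "Plant"},
]

def events_for_tenant(tenant_id):
    """Every access path filters on the tenant ID, so one subscriber
    can never reach another subscriber's rows."""
    return [e for e in SHARED_EVENT_STORE if e["tenant"] == tenant_id]

acme_events = events_for_tenant("acme")   # acme's 2 events, none of globex's
```

Because each row is attributed to a tenant, per-subscriber resource usage can also be totaled from the same data, which is what makes accurate billing possible in a single shared instance.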

Implications of Cloud-Native Systems

Cloud-native system architecture is very different from the client-server-based system architectures of traditional on-premises security systems. With the client-server-based systems, it was easy to perform lab-based proof of concept (POC) testing for systems integrations, and site-based security system acceptance testing. Now that the computing infrastructure has moved to the cloud, accomplishing the objectives of such testing requires different approaches. These and other cloud-native subjects are generally not being discussed within the security industry. True cloud computing, at least for electronic physical security systems, should take these things into account—because cloud computing is intended to improve the system experience, not negatively constrain it. True Cloud (Part Three) will directly address these subjects.

About the Author:

Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities. He is the author of the Elsevier book Security Technology Convergence Insights, available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security.
