Deploying Real-Time, High Availability Systems in Secure Environments

Integrators and IT teams are being asked to deliver more sophisticated architectures at lower cost

Computing demands on secure servers and storage continue to rise. More than ever, users demand higher levels of availability and data integrity, while IT organizations are challenged to deliver lower capital and operational expense. This creates a fundamental tension, placing pressure on system integrators and IT security teams to deliver more sophisticated architectures at lower acquisition and operational costs. In this environment, what good is a high-availability security solution that requires a rocket scientist to maintain? Then again, how do you implement an affordable high-availability security solution that can guard against a concerted terrorist threat?

Consider this common scenario: IT security organizations are often required to deliver real-time authentication for access control, ensuring that every credential is verified centrally at the moment of use rather than buffered to edge devices, which pose a potential security risk. Eliminating authentication lag means providing “always on” security systems that deliver continuous availability.
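The security gap described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's implementation; the badge IDs, the `CENTRAL_STORE` dictionary and the function names are all hypothetical.

```python
# Minimal sketch contrasting real-time, centrally verified authentication
# with edge buffering. All names and data here are hypothetical.

# Central credential store -- the authoritative source of truth.
CENTRAL_STORE = {"badge-1001": True, "badge-1002": False}  # badge -> active?

def authenticate_realtime(badge_id: str) -> bool:
    """Every access decision consults the central store immediately,
    so a revoked credential is rejected on the very next attempt."""
    return CENTRAL_STORE.get(badge_id, False)

# By contrast, an edge device that buffers credentials keeps granting
# access until its cache is refreshed -- the lag the "always on"
# requirement is meant to eliminate.
edge_cache = dict(CENTRAL_STORE)        # snapshot taken earlier
CENTRAL_STORE["badge-1001"] = False     # credential revoked centrally

print(authenticate_realtime("badge-1001"))   # central check sees the revocation
print(edge_cache.get("badge-1001", False))   # stale edge buffer still grants access
```

The point of the sketch is that real-time authentication only works if the central system is continuously available, which is what motivates the high-availability discussion that follows.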

At the same time, IT security organizations are challenged to do so economically, with solutions that cost less to acquire and to operate. Traditional high-availability solutions that rely on redundant systems address the availability requirement; however, they are costly because of the redundant servers, external storage and multiple software licenses they require. This is particularly true for virtualized systems. Nor does that account for the operational side, where highly trained experts are needed to maintain these complex systems. How many times has an IT security manager been on the phone late at night trying to rebuild a cluster after a basic server or hard drive failure? If you want to ensure a system is available, the last thing you want is complexity.

Hardware-based fault tolerance

In many situations, IT security organizations are turning to a relatively new answer to this common challenge: hardware-based fault tolerance. Through patented technology, fault-tolerant servers address real-time security application requirements while delivering sustainable capital and operational cost advantages over traditional redundant systems and clusters. This is achieved not by throwing another server at the problem, but by taking an intelligent approach: fully redundant, modular components that can be replaced as easily as a hot-swap disk drive. With thousands of systems now deployed in mission-critical applications such as 911, the cost of true fault tolerance has reached affordable levels. Because fault-tolerant systems use “lockstep” technology, they ensure there is no “failover” and no loss of data or access to systems (in contrast to a cluster failover). And because these systems are fully self-contained in an innovative rack-mount 4U package, external networks, storage, management software and additional software licenses are not required. This means lower costs in the short and long term.
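The availability benefit of fully redundant, lockstepped modules comes down to simple probability: the system is only down when every redundant module is down at once. The sketch below works through that arithmetic; the 99.9% single-module figure is an assumption chosen for illustration, not a vendor specification.

```python
# Illustrative availability arithmetic for redundant lockstepped modules.
# Every figure below is an assumption for illustration only.

def combined_availability(module_availability: float, modules: int = 2) -> float:
    """Availability of a system that stays up as long as any one of
    `modules` identical redundant modules is still running.
    P(all down) = (1 - A) ** modules, so A_system = 1 - P(all down)."""
    return 1.0 - (1.0 - module_availability) ** modules

single = 0.999  # assumed single-module availability (~8.8 h downtime/yr)
redundant = combined_availability(single)

hours_per_year = 24 * 365
print(f"single module:  {single:.6f} "
      f"(~{(1 - single) * hours_per_year:.1f} h downtime/yr)")
print(f"redundant pair: {redundant:.6f} "
      f"(~{(1 - redundant) * hours_per_year:.4f} h downtime/yr)")
```

This is why pairing modules pays off so steeply: squaring a small failure probability turns hours of annual downtime into minutes, without any failover event visible to the application.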

Take a traditional VMware High Availability solution used for access control. It requires external storage, networking, management servers and additional licenses for the standby servers. In terms of acquisition cost, such a system is actually more expensive than a hardware-based fault-tolerant system, not to mention the additional challenges of designing and deploying the infrastructure.
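The cost comparison above can be made concrete with a back-of-the-envelope tally. Every figure below is a placeholder assumption, not a quoted price; the point is the structure of the comparison: the HA cluster's bill of materials has several line items that a self-contained fault-tolerant server simply does not carry.

```python
# Back-of-the-envelope acquisition-cost comparison.
# All dollar figures are placeholder assumptions, not quoted prices.

ha_cluster = {
    "servers (2x, incl. standby)": 2 * 8_000,
    "external shared storage": 12_000,
    "management server": 5_000,
    "additional software licenses": 6_000,
}

ft_server = {
    # Self-contained 4U package: no external storage, management
    # server or extra licenses required.
    "fault-tolerant server": 25_000,
}

print("HA cluster total:", sum(ha_cluster.values()))
print("FT server total: ", sum(ft_server.values()))
```

With any similar set of assumptions, the redundant-cluster approach accumulates cost across items the fault-tolerant server folds into a single chassis, which is the article's core economic argument.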

These fault-tolerant solutions, combined with storage for video and system software that simplifies disaster recovery, have been qualified and deployed by leading security access control application providers throughout the world.
