Operating budgets are being slashed, and IT departments are being asked to do more with less money and fewer resources. Much of the IT staff's daily work revolves around maintaining servers. Furthermore, the growing number of servers in the enterprise is costly from many perspectives, including capital expenditure, energy consumption (factoring in the energy required to cool server rooms), and the real estate needed to house the equipment. This presents an opportunity for systems integrators to provide solutions.
The Federal Government of the United States has a serious issue to contend with. The reported number of Federal data centers grew from 432 in 1998 to 2,094 as of July 30, 2010 (based on agency submissions). Vivek Kundra, Chief Information Officer of the United States Government, wrote:
"This growth in redundant infrastructure investments is costly, inefficient, unsustainable and has a significant impact on energy consumption. In 2006, Federal servers and data centers consumed over 6 billion kWh of electricity and without a fundamental shift in how we deploy technology, it could exceed 12 billion kWh by 2011. In addition to the energy impact, information collected from agencies in 2009 shows relatively low utilization rates of current infrastructure and limited reuse of data centers within or across agencies. The cost of operating a single data center is significant, from hardware and software costs to real estate and cooling costs."
In parallel with this need to control costs in the face of growing demands on the IT department, computing power has increased significantly. The commonly quoted observation by Gordon Moore, co-founder of Intel Corporation, that the effective computing power of microprocessors has roughly doubled every 18 months since the first computer chips were sold remains generally true today. Unfortunately, most computers never realize their full processing potential.
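To see what that doubling rate implies for a server deployment, here is a back-of-the-envelope calculation (an illustration of the commonly quoted 18-month figure, not a measurement of any particular hardware):

```python
def projected_factor(years, period_months=18):
    """Relative computing power after `years`, assuming capability
    doubles once every `period_months` (the commonly quoted figure)."""
    return 2 ** ((years * 12) // period_months)

# Over a typical 3-year server refresh cycle, raw capability roughly
# quadruples: 36 months / 18 months = 2 doublings.
print(projected_factor(3))  # 4
```

If a server is only lightly loaded today, that widening gap between available and utilized capacity is exactly the headroom virtualization reclaims.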
By taking the processing power of today's server hardware and spreading it across multiple virtual servers, IT departments can leverage the company's investment and meet the growing need for individual servers without incurring the hardware cost each time.
First consider how an operating system like Microsoft Windows works. The operating system (OS) sits between the hardware components of the server and the applications running on that hardware. For example, the OS is the conduit for your World Wide Web browser to reach the local area network or the modem that lets you communicate with your Internet Service Provider (ISP). The browser sends messages to the OS that are destined for the network port; the OS converts those messages as necessary and sends the data out the port. What we want to do is have many copies of the OS running at the same time, but the OS as we know it wasn't designed to do that.
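The OS-as-conduit idea can be seen in miniature with sockets: the application never programs the network card itself; it hands bytes to the OS and lets the OS move them. In this sketch, `socketpair()` stands in for a real network connection so the example runs with no Internet access (the "GET" request is just illustrative data):

```python
import socket

# The "application" side hands data to the OS; the other end of the
# pair plays the role of the wire the OS delivers it to.
app_side, wire_side = socket.socketpair()
app_side.sendall(b"GET / HTTP/1.1\r\n\r\n")  # application -> OS
data = wire_side.recv(1024)                  # OS -> "network"
print(data.decode())
app_side.close()
wire_side.close()
```

The browser only ever sees the OS call (`sendall`); everything below it, drivers and hardware, is the OS's business. Virtualization inserts one more layer beneath that boundary.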
Enter server virtualization; just how does it work in the real world?
Now consider another kind of software that sits between the hardware and the OS. This software tricks the OS into thinking that it is the hardware. It is designed to support multiple instances of the OS (or different operating systems), all running concurrently, each thinking it is the only OS on the hardware. Therefore, you might have five copies of Microsoft Windows running, all talking to this new software service layer. This new service layer is called a hypervisor because it is one level up from a "supervisor". The hypervisor is also called a virtual machine manager because it virtualizes the hardware. Each virtual machine (VM) has its own OS, and software can be loaded on the VM as if it were a standard physical computer. Each VM is allocated physical RAM, hard disk space, and processor resources (multi-core and multi-processor systems work best for VM environments). The hypervisor allocates these resources and manages each VM's access to them.
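The allocation job described above can be sketched as a toy model. This is purely conceptual; the class and method names are hypothetical and do not correspond to any real hypervisor's API, and real hypervisors also support techniques such as oversubscription that this sketch deliberately omits:

```python
# A minimal conceptual model of a hypervisor handing out fixed shares
# of physical RAM, disk, and CPU cores to virtual machines.
class Hypervisor:
    def __init__(self, ram_gb, disk_gb, cores):
        self.free = {"ram_gb": ram_gb, "disk_gb": disk_gb, "cores": cores}
        self.vms = {}

    def create_vm(self, name, ram_gb, disk_gb, cores):
        request = {"ram_gb": ram_gb, "disk_gb": disk_gb, "cores": cores}
        # Refuse the VM if any physical resource would be exhausted.
        if any(request[k] > self.free[k] for k in request):
            raise ValueError(f"insufficient resources for {name}")
        for k in request:
            self.free[k] -= request[k]
        self.vms[name] = request
        return request

host = Hypervisor(ram_gb=64, disk_gb=2000, cores=16)
host.create_vm("web01", ram_gb=8, disk_gb=200, cores=2)
host.create_vm("mail01", ram_gb=16, disk_gb=500, cores=4)
print(host.free)  # {'ram_gb': 40, 'disk_gb': 1300, 'cores': 10}
```

Each VM sees only its own slice; from its point of view, the slice is the whole machine. That isolation is what lets five copies of Windows coexist on one box.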
What does this mean to the systems integrator and their specifications?