Making the move to a virtual environment…

Recently, many organizations have adopted the practice of migrating away from their old physical servers and consolidating them into a virtualized environment. This process, known as virtualization, consists of creating a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can then be run on that virtual machine.

There are several things to consider when the decision is made to migrate from conventional server management to a virtualized environment. A network administrator may find themselves thinking through all the issues that relate to such a migration. They may research questions such as…

 

What costs are involved?

What software licenses will be required?

Does my virtualization plan include a single point of failure?

Are all my applications supported in a virtual environment?

Do I have any servers that are not good virtualization candidates?

How will domain controller placement work?

What is the most suitable virtualization platform?

What is the contingency plan if a host server dies?

How many guest machines can each host accommodate?

 

Let’s look at each of these questions in detail.

 

What costs are involved? There is no doubt about it: implementing a virtualized environment takes a lot of thought. Cost is an important factor, as is making sure you have some redundancy built into your plan. Virtualization promises to reduce hardware acquisition costs by consolidating multiple servers onto one physical box. Traditionally, this was an expensive endeavor, often running thousands of dollars per machine in software licensing. Recently the cost has become bearable even for small enterprises. One of the main factors that forced the restructuring of VM pricing was Microsoft’s release of Hyper-V, which is now part of Windows Server 2008. Microsoft’s initial pricing of $28 per system ignited a price war with industry leader VMware, which promptly undercut Microsoft with its free ESXi hypervisor. ESXi is a limited version of the company’s top-of-the-line ESX product.


The combination of that price war between Microsoft and VMware and the fact that we live in a world fueled by open-source projects and advertising-supported Web services has made the ability to create multiple virtual systems on a single piece of hardware a commodity, free for the downloading. Meanwhile, managing a potentially chaotic virtualized environment with an exploding number of VMs has become the latest vendor arena. The short of it is that virtualization has adopted a business model similar to the one pioneered by shaving razors and blades, and since adopted by everything from inkjet printers to online applications: give away the base product and make money on associated options, accessories, consumables, and services. The costs depend on what platform you want to run, the hardware, your redundancy options (such as failover with a Storage Area Network), and software licensing.

What software licenses will be required? An often overlooked issue with virtualization is the complexity of licensing. For instance, a server running a Linux OS that hosts a virtualized Windows Server must still satisfy Windows licensing requirements. The on-demand flexibility of virtualization is therefore weighed down by closed-source, proprietary systems. Fortunately, some vendors of proprietary software have updated their licensing schemes to address virtualization, but flexibility and license cost remain opposing requirements.

Software licensing often works differently in a virtual environment. For example, if you are using Hyper-V, you may not be required to separately license the Windows operating systems running on your guest machines. Things aren’t always straightforward, though, because the actual license requirements vary depending on the versions of Windows being used. Windows Server 2008 R2 Enterprise allows you to run up to four software instances at a time in virtual operating system environments on a server under a single server license, while Windows Server 2008 R2 Datacenter allows you to run any number of software instances in physical and virtual operating system environments on a licensed server. As network administrators, it is important to understand the license requirements for the operating systems and applications that will run on your guest machines in a virtual environment.
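To make that license math concrete, here is a minimal sketch in Python, using hypothetical numbers, of how you might estimate Enterprise license counts under the one-license-per-four-instances rule described above. It ignores per-processor and downgrade-rights details, so treat it as an illustration rather than licensing advice:

    import math

    def enterprise_licenses_needed(num_vms, instances_per_license=4):
        # Windows Server 2008 R2 Enterprise: one server license covers
        # up to four virtual instances on that server (per the rule above).
        return math.ceil(num_vms / instances_per_license)

    def datacenter_hosts_needed(num_hosts):
        # Datacenter covers unlimited virtual instances per licensed host,
        # so the count simply follows the number of hosts.
        return num_hosts

    # Hypothetical example: ten guest VMs consolidated onto one host
    print(enterprise_licenses_needed(10))  # 3 Enterprise licenses
    print(datacenter_hosts_needed(1))      # 1 Datacenter-licensed host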

Does my virtualization plan include a single point of failure? This is certainly an important factor to consider. For example, let’s say that you have an organization with two domain controllers, both virtualized and hosted on the same host server. If that host dies, it takes all the domain controllers with it. It’s important to plan your virtual server deployment so that the failure of a single host server will not have catastrophic consequences. You could have at least two host machines and keep the guest operating systems on a Storage Area Network. Hyper-V offers failover for this type of setup: as long as you are not overcommitted on system resources in the cluster, the guest operating systems should immediately be picked up by the secondary host machine, and this should be completely transparent to the end user. A quick capacity check is sketched below.
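Here is a rough back-of-the-envelope check, written in Python with made-up host and guest sizes, for whether the surviving hosts in a small cluster could absorb the guests of any single failed host. It only looks at memory; real failover planning also has to account for CPU, storage, and networking:

    def can_absorb_failover(host_memory_gb, guest_memory_gb):
        # host_memory_gb: host name -> usable memory in GB
        # guest_memory_gb: host name -> list of guest memory allocations in GB
        for failed in host_memory_gb:
            displaced = sum(guest_memory_gb.get(failed, []))
            spare = sum(host_memory_gb[h] - sum(guest_memory_gb.get(h, []))
                        for h in host_memory_gb if h != failed)
            if displaced > spare:
                return False, failed  # this host's guests would not fit elsewhere
        return True, None

    # Hypothetical two-host cluster; guests live on a SAN so either host can run them
    hosts = {"HOST-A": 32, "HOST-B": 32}
    guests = {"HOST-A": [4, 4, 4], "HOST-B": [4, 4]}
    print(can_absorb_failover(hosts, guests))  # (True, None): no single point of failure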

Are all my applications supported in a virtual environment? Some fairly common applications are not supported on virtual servers; certain versions of Exchange Server, for example, are supported only on physical servers. Others are supported only on specific virtualization platforms, such as Hyper-V or VMware. Some software vendors manage copy protection or licensing using physical USB dongles, which of course cannot be virtualized. Before you begin virtualizing servers, it is important to make sure that your applications will be supported in a virtual environment.

Do I have any servers that are not good virtualization candidates? Some of us may want to cut the cost of implementing a virtualized environment by cannibalizing some of our existing servers. After all, isn’t the point of switching to this type of environment to consolidate our physical machines? While we may have good machines already in production that will soon be looking for new homes, so to speak, the simple fact is that some servers do not make good virtualization candidates. Some run resource-intensive applications or require special hardware. Going back to my dongle example, some enterprise applications enforce copy protection through a dongle, and dongles are almost never supported in a virtual environment. Some processors simply don’t support virtualization. Also, depending on how many guest operating systems you need to run, you may not have enough hard disk space, processing power, or memory to meet your needs.

How will domain controller placement work? As I have already stated, you shouldn’t place all of your domain controllers on a single host, but there is more to domain controller planning than that. You have to consider whether you want to virtualize all your domain controllers. Although there is no rule against this practice, if you do virtualize all of them, you will have to decide whether the host servers will be members of the domain. Making the host servers domain members when all of the domain controllers have been virtualized leads to a “which came first, the chicken or the egg” paradox: the hosts depend on the domain at boot, but no domain controller is available until a host has started its guests.

What is the most suitable virtualization platform? There are many choices available. I keep using VMware and Hyper-V as examples, but numerous server virtualization products are on the market, and each has its own strengths and weaknesses. You want to deploy a platform that is reliable, stable, and cost-effective, and that easily adapts to your changing business needs. Some network admins I have asked prefer VMware simply because it’s a proven platform that has been around for quite some time.

Personally, I only have experience with Hyper-V. Hyper-V is fairly new (released with Windows Server 2008) but is also stable, and it has its own strengths over VMware, such as inexpensive server-attached storage and aspects of live migration and failover for site-to-site disaster recovery. Plus, Hyper-V will have the capacity to access more virtual CPUs than ever before.

What is the contingency plan if a host server dies? Redundancy and failover should be at the heart of your implementation plan. Can your organization afford any downtime if a host server fails? One disadvantage of virtualization is that, if we are not careful, we can be tempted to “put all our eggs in one basket.” You can set up SANs that house the actual guest operating systems and have redundant host machines. Hyper-V allows for the failure of a host machine by immediately failing over to another host, assuming that the secondary host has the resources to sustain the new guest operating systems (in other words, you are not overcommitted on resources). Server failures are never good, and their effects are compounded in a virtual environment: a single host server failure can take down several virtual servers and cripple your network. Have a good plan in place before you set up your VMs.

How many guest machines can each host accommodate? This really depends on the hardware of the host machine and the licensing for the operating systems that will run on the guests. Again, Windows Server 2008 R2 Enterprise runs up to four software instances at a time in virtual operating system environments on a server under a single server license, while Windows Server 2008 R2 Datacenter runs any number of software instances in physical and virtual operating system environments. Having enough processor resources isn’t as big a concern as it was before multi-core processors. In fact, most server processors run at 20 percent or less utilization, which is part of what makes virtualization such a good fit.

For memory, a good rule of thumb is to consider the number of virtual machines you intend to support on the host hardware and the memory required for each virtual machine as well as the host. The formula for calculating the amount of memory to include in the virtual host is: (memory per VM × number of VMs) + (32 MB × number of VMs, for overhead) + 512 MB (for the host). For example, if I intend to host 7 VMs that will each have 4 GB of memory, I would calculate the total as 7 × 4 GB = 28 GB, plus 7 × 32 MB = 224 MB of overhead, plus 512 MB for the host, or roughly 28.7 GB. You’ll notice that this doesn’t come out to a round number, which is how we purchase memory. In this case, adding eight 4 GB DIMMs in the host server for a total of 32 GB covers the calculated total with some to spare.
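That rule of thumb is easy to script. Here is a small Python sketch of the same calculation; the 32 MB per-VM overhead and 512 MB host reserve are simply the figures from the rule above, not vendor-published numbers:

    def host_memory_gb(vm_memory_gb, num_vms,
                       overhead_mb_per_vm=32, host_reserve_mb=512):
        # (memory per VM x number of VMs) + (32 MB x number of VMs) + 512 MB
        total_mb = (vm_memory_gb * 1024 * num_vms
                    + overhead_mb_per_vm * num_vms
                    + host_reserve_mb)
        return total_mb / 1024

    # Seven guests at 4 GB each, as in the example above
    print(round(host_memory_gb(4, 7), 2))  # ~28.72 GB; eight 4 GB DIMMs (32 GB) covers it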

In conclusion, virtualization offers a flexible solution to cut hardware costs, consolidate rack space, and reduce power consumption. It also provides high availability for critical applications and streamlines application deployment and migrations. Virtualization can simplify IT operations and allow IT organizations to respond faster to changing business demands. It’s here to stay and seems to be the way of the future for businesses and the IT industry.

