How do you size a virtual machine host?

There are four main criteria to evaluate:

  • CPU
  • Memory
  • Storage
  • Network

CPU

Make sure you have CPU resources sufficient to exceed the needs of the most demanding guest VM you are planning to run on the host. If you are planning on running a SQL server, get at least a quad-core proc. Not to belittle this component, but you are unlikely to run into a problem with processor speed, since most processors are severely under-utilized as it is. The most important thing I can say about processors is to think ahead. If, at some point, you think the server you are designing might be part of a VMware cluster, go look at the processor compatibility list for VMware. Some of the features, such as Fault Tolerance (FT), require the exact same model of processor in all the nodes in the cluster. I'm sure Microsoft Hyper-V has similar restrictions, so if Hyper-V is your preference, check Microsoft's compatibility guides as well.
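
To make the FT point concrete, here is a minimal Python sketch of the kind of check you might run before building the cluster. The host names and CPU model strings are made up for illustration; in practice you would pull them from your hardware inventory and verify them against VMware's compatibility list.

    # Hypothetical inventory of the hosts you plan to cluster.
    # Host names and CPU model strings are examples only.
    planned_hosts = {
        "esx01": "Intel Xeon E5540",
        "esx02": "Intel Xeon E5540",
        "esx03": "Intel Xeon E5520",  # mismatch: would rule out FT
    }

    def check_ft_cpu_match(hosts):
        """Return True if every host reports the same CPU model."""
        models = set(hosts.values())
        if len(models) == 1:
            print("All hosts report the same CPU model:", models.pop())
            return True
        print("CPU model mismatch across hosts:", sorted(models))
        return False

    check_ft_cpu_match(planned_hosts)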

Memory

The more the better. Memory is cheap these days. Get as much as you can afford or as much as the physical server you are installing can support, whichever threshold you reach first. In addition, keep in mind that with most servers, memory slot population is very important, doubly so when you have a dual-proc config. On top of that, VMware ESX 4 is extremely picky when it comes to memory slot population. Have you ever heard of a NUMA node? Well, you don't want to learn about it when VMware ESX fails to boot because the NUMA node requirements for your server/blade have not been met. Most likely, if you follow your server's guidelines for memory slot population, you will also meet the NUMA node requirements for VMware. At least, that is what VMware support will tell you when you call them in a panic… 🙂 Moving along. If you have a single or dual proc server with fully populated memory slots, which you should, this isn't likely to be an issue anyway.
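
As a rough illustration of the slot-population point, here is a small Python sketch that flags an unbalanced layout, assuming one NUMA node per CPU socket. The DIMM counts and sizes are invented; the real rules come from your server vendor's population guidelines.

    # Hypothetical DIMM layout for an imaginary dual-proc server:
    # DIMM sizes in GB, grouped by the socket (NUMA node) they sit behind.
    dimms_per_socket = {
        "socket0": [4, 4, 4, 4, 4, 4],
        "socket1": [4, 4, 4, 4, 4],  # one slot short: unbalanced
    }

    def check_balanced_population(layout):
        """Warn if sockets differ in DIMM count or total memory."""
        totals = {s: (len(d), sum(d)) for s, d in layout.items()}
        if len(set(totals.values())) == 1:
            print("Memory population looks balanced:", totals)
        else:
            print("Unbalanced population (possible NUMA trouble):", totals)

    check_balanced_population(dimms_per_socket)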

Storage

This is the most difficult thing to get right and the hardest thing to fix if you get it wrong.  So…get it right.  For one, you are going to need more storage than you think you will need, unless you are just a gross overestimator.  Here are some of the things you need to plan for:

  • Amount of storage: you are going to need space for your OS, data, snapshots, pagefiles, and, at least in VMware's case, .vswp files. A .vswp file is created in a datastore when you boot a VM, and it is equal in size to the amount of memory configured for that VM. Not a big deal when you have a machine with 2 – 4 GB of RAM; a bigger deal when you have a SQL VM with 20 GB of RAM in it. You are also going to need room to grow. If you are consolidating a bunch of physical servers, I would recommend adding up the total amount of used space on all of them and multiplying by at least 4 (there is a rough sizing sketch after this list). That way, you have plenty of room to grow, and you have room to keep multiple generations of full VM backups on disk for quick restore if necessary.

  • Type of storage: this depends on what kinds of VMs you are going to be running. To some degree, you can plan it like you would storage for separate physical servers. Example: say you need to put up a SQL VM. That means you are going to need some fast storage, such as that provided by a RAID 10, for the log files. You are also going to need another volume, on separate disks if possible, for the database files themselves; that can probably reside on a RAID 5 or 6. You will also need a separate volume for the OS, which can probably just be a RAID 1. If you are working on a large virtualization project, you most likely have shared storage, such as a SAN. So, you might break out two large disks into a RAID 1 config to host the OS disks for several VMs. Then, you can take 4 disks and carve them into a RAID 10 for high-performing storage for your log files. Finally, you can take the remaining disks and partition them into a large RAID 5 or 6 for general storage, such as your database files. These volumes can all be shared by many different VMs; how many will depend on what types of VMs you have and the speed and capacity of your disks. General guidelines I have come across when learning about shared SAN storage say this:
  1. Shoot for LUN sizes of up to 500 – 600 GB
  2. Use a single LUN to support up to 15 – 20 virtual disks
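
To pull the numbers above together, here is a rough Python sizing sketch. The VM names, used-space figures, and RAM sizes are made up, and the number of VMs stands in for the number of virtual disks; the 4x multiplier, the 600 GB LUN size, and the 20-disks-per-LUN figure are just the guidelines mentioned above.

    # Hypothetical per-VM figures: used disk space (GB) and configured RAM (GB).
    vms = [
        {"name": "sql01",  "used_gb": 180, "ram_gb": 20},
        {"name": "web01",  "used_gb": 30,  "ram_gb": 4},
        {"name": "file01", "used_gb": 250, "ram_gb": 4},
    ]

    GROWTH_MULTIPLIER = 4    # room to grow plus on-disk VM backups
    MAX_LUN_GB = 600         # upper end of the LUN-size guideline
    MAX_DISKS_PER_LUN = 20   # upper end of the disks-per-LUN guideline

    used_gb = sum(vm["used_gb"] for vm in vms)
    vswp_gb = sum(vm["ram_gb"] for vm in vms)   # .vswp roughly equals configured RAM
    total_gb = used_gb * GROWTH_MULTIPLIER + vswp_gb

    # Rough LUN count: whichever limit (capacity or disk count) is hit first.
    luns_by_size = -(-total_gb // MAX_LUN_GB)        # ceiling division
    luns_by_disks = -(-len(vms) // MAX_DISKS_PER_LUN)
    print(f"Plan for roughly {total_gb} GB across at least "
          f"{max(luns_by_size, luns_by_disks)} LUN(s)")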

Network

Make sure you have plenty of network adapters in your physical server. I would recommend 4 network adapters (Ethernet ports) per physical server, and I will use VMware ESX as an example. With 4 adapters, you can dedicate one to the service console, which is where all the management traffic is handled. Another adapter can be dedicated to VMotion. Finally, you can either set up NIC teaming on the other two adapters in VMware and present them to your VMs as a single NIC, or just pass two NICs through to each VM and have two connections on each server. I recommend the former, since it saves you from having to set up NIC teaming inside each VM for two separate connections.
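
Here is a small Python sketch of that 4-adapter layout, the kind of sanity check you might run over a planning spreadsheet. The vmnic names and role labels are placeholders; they just mirror the layout described above.

    # Hypothetical mapping of the four physical ports on one ESX host.
    nic_plan = {
        "vmnic0": "Service Console",    # management traffic
        "vmnic1": "VMotion",            # live-migration traffic
        "vmnic2": "VM Network (team)",  # teamed pair presented to guests
        "vmnic3": "VM Network (team)",
    }

    def check_nic_plan(plan):
        """Confirm each role is covered and the VM team has two uplinks."""
        roles = list(plan.values())
        missing = {"Service Console", "VMotion", "VM Network (team)"} - set(roles)
        if missing:
            print("Missing roles:", missing)
        elif roles.count("VM Network (team)") < 2:
            print("The VM network team has fewer than two uplinks")
        else:
            print("NIC plan covers management, VMotion, and a teamed VM network")

    check_nic_plan(nic_plan)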

Also, a few websites that I would recommend for virtualization news and information:

TechTarget – SearchServerVirtualization

VMware Support (lots of documentation here, some specific to VMware, some general virtualization design info)

Microsoft Virtualization Calculator – Meeting your licensing requirements in your virtual environment

Thanks for reading!


2 Replies to “Sizing virtual machine hosts”

    1. It has to do with SCSI disk locking and the number of hosts accessing the same LUN at the same time; over a certain number, performance starts to degrade. I found, through research, that people were recommending 15 – 20 as a general guideline. I think the amount of space kind of follows from that, because if you have 30 GB OS disks and you have 20 of them, that is 600 GB. However, if you have a very demanding application, such as a SQL server, it is worth considering setting up the log file disk using Raw Device Mapping (RDM) so that it has a dedicated LUN and accesses it directly. That also eliminates a layer of overhead created by having a virtual disk that would, in turn, reside on a LUN.
