Using Virtual Machines at Home for Fun, Learning, and Ridiculousness

Technology has certainly progressed beyond the point of requiring a single physical server for every task. Using virtualization, I can easily have as many systems as I would like. Server virtualization makes it possible to build and use systems for single intended purposes with minimal physical overhead. This is particularly useful for technology enthusiasts or budding professionals who need an environment to "play around with" in order to learn about various systems and services. For example, my personal virtualization server, which I will describe in detail below, hosts three CentOS 6 servers, an Ubuntu 12.04 server, and three Windows Server instances, all running different services and purpose-built to run those services or applications efficiently. Additionally, it is easy to build a new server, or even an instance of a desktop OS, whenever I need to test something. I probably don't need all of these systems or all of the ones I'll build in the future, but it's fun and I like scaling my infrastructure to ludicrous proportions despite the fact that I am the only one who uses my network.

Virtualization Types and Options

All virtual systems, called "guests," have to run on a base system. This base system is called the hypervisor, and there are two major types of hypervisors:

  • Type 1: Bare-metal hypervisors. These hypervisors are operating systems in themselves and run directly on physical hardware. The only task that can be done with a bare-metal hypervisor is configuring and running a guest OS. The major players in this market, at least the ones I'm most familiar with, are:
    • Microsoft Hyper-V Server. This is a stripped-down version of Windows Server used only for running virtual machines. It is a very high quality, stable product, although it carries a bit more resource overhead and supports fewer guest operating system types than other solutions.
    • VMware ESXi. My hypervisor of choice since there is a free version which supports systems with up to 32GB of memory. This is a rock-solid product made for use in production environments with high availability requirements.
    • Xen. I do not have much experience with this hypervisor, but it's certainly on my list. It has a reputation for stability, although it is a bit complex to configure at first.
  • Type 2: Hosted hypervisors. These hypervisors run on top of pre-existing operating systems. For instance, there are software hypervisors that run on Windows which enable a guest OS to be run inside of the host OS. Some major type 2 hypervisors are:
    • VMware Workstation. Versions are available for Windows and Linux (VMware Fusion is the comparable product for Mac). This is a paid product, but it provides high quality, stable virtualization.
    • Windows Hyper-V. Hyper-V is a feature built into certain versions of Windows, including Windows 8 Pro and Windows Server 2012, which allows users to run guest operating systems inside of Windows.
    • VirtualBox. A free product available for Windows, Mac, and Linux with a good track record of support and stability, but not something that should be used in a high availability production environment.

Many people who simply want to play around with alternative operating systems will use a type 2 hypervisor to run a virtual machine on their existing computer. However, those who want these alternate operating systems running all the time, or who want to run a lot of virtual machines, have the option of building a separate machine dedicated solely to running virtual machines.

ESXi -- Stringent Hardware Requirements, but Worth the Effort

ESXi is my chosen virtualization solution due to its combination of stability, ease of setup and management, and free licensing for up to 32GB of memory. However, since ESXi is meant to run on server-class hardware in large production environments, there are some stringent hardware requirements. I did not want to spend the money on server hardware, so I had to build a custom machine and carefully select my components. I made sure the components were supported by the version of ESXi I would be running (ESXi 5.5) by checking them against the VMware hardware compatibility list. The hardest thing to find was a non-server motherboard with a compatible networking chipset. I ended up building the system with the following hardware:

  • Motherboard: ASUS P8Z77 WS -- this motherboard has dual server-class NICs. Some dual NIC motherboards run each NIC off of a different chipset, but both NICs on this board run on the same model chipset so both ports can be used by ESXi.
  • Processor: Intel Core i7-3770K -- I chose the Core i7 to take advantage of the extra cores and threads as much as possible. Component purchasing tip: MicroCenter always has the lowest prices on processors and they have great warranty coverage.
  • Memory: Corsair 32GB DDR3 1333 -- just make sure the memory is compatible with the motherboard you select.
  • Hard Drive: 2TB Seagate -- If you notice I only bought one hard drive, you'll realize my storage in this system is not set up in a RAID. That's a mistake I'll be rectifying soon.
  • Case -- It's a case with sufficient ventilation...I don't understand why people get so worked up over cases.
  • 500W PSU -- Once again, the boring component, although I probably could have spent a little more money on a power supply to guarantee clean, steady power.
  • DVD-ROM Drive -- No one's gotten excited about a DVD drive since 2004.

ESXi Tips

Here are a few things I've learned about running an ESXi server that will hopefully help some others along the way.

  • Set the virtual CD/DVD drive to the "Client Device" option before exporting virtual machines into OVA format packages. I have not tested this in ESXi 5.5, but in ESXi 5.1 at least, you must select the Client Device option instead of an image file before exporting to OVA. If you export while an image file is attached and then try to import the resulting OVA, the import will fail. (The first sketch after this list shows one way to check for this.)
  • The desktop vSphere Client has been deprecated by VMware. The desktop client used for managing the ESXi host will no longer be updated; VMware is pushing people to use the web client to manage ESXi hosts instead, but you need to pay for a license to run the web client.
  • Use VMware Workstation to manage settings on VM versions 9 and 10. VMware has been in the virtualization business for a long time, and they have continuously extended their virtual machine format to support newer technologies and operating systems. They keep track of which features different virtual machine containers support by using version numbers. Using the vSphere Client, you can only create VMs up through version 8; VMware Workstation, however, can connect to ESXi and create newer VMs. A free trial of VMware Workstation is available if you only need to do this once. (The second sketch after this list shows how to check which versions your VMs are running.)
  • ESXi does not natively support NAT. Any virtual machine with a physical network connection will connect directly to the network and must have its own IP address; ESXi will not perform NAT for hosts. However, you can go through the trouble of setting up a software firewall system to route your traffic through. I will likely do a post on this in the future, but the summary is that the firewall system would have two virtual NICs, one connected to the network and one connected to an ESXi internal virtual switch. All of the guest machines you want to perform NAT for would then connect to the internal virtual switch and use the software firewall as their gateway. (The last sketch after this list covers creating that internal switch.)
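
For the first tip, here is a minimal sketch of how you could audit a host for VMs whose CD/DVD drives are still attached to an ISO, using pyVmomi (VMware's official Python SDK for the vSphere API). The host address and credentials are placeholders for your own environment, and this only reports the problem; you still switch the drive to "Client Device" in the client before exporting.

```python
# Sketch: report VMs whose virtual CD/DVD drive is still backed by an ISO,
# using pyVmomi. Host address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # ESXi hosts commonly use self-signed certs
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualCdrom):
            if isinstance(dev.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo):
                # This drive points at a datastore ISO; set it to "Client Device"
                # in the vSphere Client before exporting the VM to OVA.
                print("%s: CD/DVD still attached to %s" % (vm.name, dev.backing.fileName))
view.Destroy()
Disconnect(si)
```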
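Related to the VM version tip, this second sketch (same pyVmomi assumptions and placeholder credentials as above) lists the virtual hardware version of every VM on the host, which is handy for confirming what the vSphere Client versus VMware Workstation actually created.

```python
# Sketch: print each VM's virtual hardware version, e.g. "vmx-08" for a
# version 8 VM or "vmx-10" for version 10. Placeholder host and credentials.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=context)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print("%s: %s" % (vm.name, vm.config.version))
view.Destroy()
Disconnect(si)
```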
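Finally, for the NAT tip, here is a rough sketch of the ESXi-side piece: creating an internal-only virtual switch (no physical uplink) and a port group on it, which the NAT'd guests and the inside interface of the firewall VM would attach to. Again this uses pyVmomi with placeholder host, credentials, and switch/port group names; the same thing can be done by hand in the vSphere Client's networking configuration, and the firewall VM itself still has to be built and configured separately with one NIC on the LAN and one on this internal port group.

```python
# Sketch: create an internal-only vSwitch and a port group for guests that
# will sit behind a software firewall/NAT VM. Names and credentials are
# placeholders -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=context)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]  # a standalone ESXi host has a single HostSystem
net = host.configManager.networkSystem

# No bridge/uplink in the spec means no physical NIC is attached, so traffic
# can only leave this switch through a VM that also has a leg on the LAN.
net.AddVirtualSwitch(vswitchName="vSwitch-Internal",
                     spec=vim.host.VirtualSwitch.Specification(numPorts=64))

net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="Internal-NAT",
    vlanId=0,
    vswitchName="vSwitch-Internal",
    policy=vim.host.NetworkPolicy()))

host_view.Destroy()
Disconnect(si)
```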

Conclusion

In the future, I'll go into detail about my personal network setup and what I use all of my virtual servers for. In the meantime, know that ESXi offers an excellent, high quality, high stability solution for the perfect price: free. If you need to build a virtualization lab at home for playing around or professional development, then my hardware selections will provide you with a high powered system at a much lower cost than a true server.