Backstory
I’ve been handling my own hosting over my home DSL line for a number of years, along with a few VPS instances and Squarespace for some other web sites. One key component is my mail server, currently hosted on OS X Server in a VM running on ESXi at home. Then the local telco accidentally cut off my DSL line, and I was without internet connectivity for almost three weeks starting around Christmas.
This meant that all mail destined for my infrageeks.com domain eventually bounced, which was the impetus to finally go out and get a Mac Mini hosted in a datacenter. I’d been thinking about doing this for a long time, but had been put off by the cost. Now, however, some of the Mac Mini colocation services will install ESXi as part of their standard offering, which makes the idea considerably more attractive: I’m no longer limited to a single OS instance and can run multiple OS environments. It also means I’ll be able to consolidate some of my other VPS and web hosting services onto this machine.
Yes, I could have done much the same thing with a base OS install and VMware Fusion, Parallels or VirtualBox, but the result would be neither as efficient nor as flexible as what I have in mind.
For the notes and instructions, I’m assuming you have a basic understanding of ESXi, so I won’t be including a ton of screenshots, although if I get some feedback I’ll go back and try to add more. Or (shameless plug) ping me for consulting assistance.
Configuration
I went with an upgraded Mac Mini with a 500 GB SSD plus a 500 GB internal HD, running ESXi from an SD card, which leaves the full disk space available for virtual machine storage.
The basics
Networking
One nice feature of running ESXi is that you can easily create a set of virtual networks behind a router VM instead of just putting your virtual machines directly on the internet. This also lets you do some interesting things, like creating a private network on your ESXi instance where your machines live, connected back to your own network via VPN so that it becomes just another subnet that behaves like an extension of your network.
This is important since my mail server is a member of an Open Directory domain hosted on another machine. By setting things up as an extended private network, the servers can talk to each other through normal channels without opening everything up to the internet or managing firewall rules on a machine-by-machine basis.
I’m using pfSense in a virtual machine, linked via VPN to a Cisco RV180.
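To make the ESXi side of this concrete, here’s a minimal sketch using esxcli; the vSwitch and port group names are just examples, not the ones from my setup. The pfSense VM gets one vNIC on the internet-facing port group (WAN) and one on the private port group (LAN), and the other VMs attach only to the private one.

```
# Internal-only vSwitch: with no physical uplink attached, traffic on it
# never touches the wire directly, so everything must pass through pfSense.
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Port group for the private network; the VMs and the pfSense LAN vNIC go here.
esxcli network vswitch standard portgroup add \
    --portgroup-name="Private LAN" --vswitch-name=vSwitch1
```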
Storage
You can use the internal disks formatted as VMFS-5 volumes to store your virtual machines natively, but I wanted to ensure that I had an effective backup plan that fit into my current system. I could have used VMware’s built-in replication feature or a third-party tool like Veeam or Zerto, which are designed for this kind of replication, but they all require vCenter and I wanted to see what could be done without additional investment.
So in this case I’m using OmniOS to create a virtual NFS server backed by VMDK files. It adds some overhead, since all I/O has to pass through NFS to the OmniOS virtual machine and then down to the disks, but in exchange I gain snapshots and free replication back to my ZFS servers at home.
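In rough terms, the plumbing looks like this; the pool and dataset names, IP address and subnet are all placeholders, not my actual configuration.

```
# On the OmniOS VM: a dataset for VM storage on a (hypothetical) pool "fast",
# exported over NFS; illumos serves the share directly from the zfs property.
zfs create fast/vmstore
zfs set sharenfs='rw=@10.0.1.0/24,root=@10.0.1.0/24' fast/vmstore

# On the ESXi host: mount the export as an NFS datastore.
esxcli storage nfs add --host=10.0.1.10 --share=/fast/vmstore \
    --volume-name=nfs-vmstore
```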
I’m in the middle of an existential debate concerning the most efficient zpool configuration for this situation. One option is two zpools, one on the SSD and one on the HD, replicating from the SSD to the HD on a regular basis and then off to the storage server at home. If the SSD fails, I can easily switch over to the HD. But in this case I don’t have any real automated protection against bit-rot, since each block lives on non-redundant (from the ZFS point of view) storage.

The other obvious option is a single zpool mirroring the SSD and the HD, which ensures there are two copies of each block; if there is a problem with the data, ZFS will read both copies and use the one that matches the checksum. The flip side is that performance becomes less predictable, since some reads will come from a slow 5400 RPM disk and others from the SSD, while all writes will be constrained by the speed of the spinning disk.

I could also just use the SSD as a very large cache in front of a zpool backed by the hard drive, but that seems a little silly given that it adds no data protection.
The other side effect of slow disk I/O is that I’d end up in many more situations where the CPU is waiting on the disks, slowing down the whole system, so I’m going with the two-pool setup for now. Budget permitting, a mirrored zpool with two SSDs would be the ideal solution.
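For reference, the two-pool layout and the replication step might look something like this on the OmniOS side, continuing the hypothetical names from the sketch above; the device names, snapshot names and remote host are illustrative, and in practice the snapshot/send cycle would be driven from cron.

```
# Two independent pools, one per device (actual device names will differ).
zpool create fast c1t1d0   # SSD: primary VM storage
zpool create slow c1t2d0   # HD: local replica

# Periodic replication: snapshot the active dataset, send the increment to
# the HD pool, then forward the same increment to the ZFS server at home.
# (The very first run would be a full send, without -i.)
zfs snapshot fast/vmstore@now
zfs send -i fast/vmstore@prev fast/vmstore@now | zfs recv -F slow/vmstore
zfs send -i fast/vmstore@prev fast/vmstore@now | \
    ssh backup.example.com zfs recv -F tank/offsite/vmstore

# The mirrored alternative discussed above would instead be:
#   zpool create tank mirror c1t1d0 c1t2d0
```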
Next up: Step by step