
Leveraging Solaris 10 ZFS Functionality for ESXi

With VMware's decision to release ESXi, its industry-leading hypervisor, for free, one might wonder where else money can be saved while still getting enterprise-level performance. One area is storage. The Solaris ZFS file system is a great starting point for small businesses looking to move into enterprise-level virtualization while still hitting a relatively small price point. Sun recently released three new products in the 7000 series: the 7110, the 7210 and the 7410. While the 7210 and 7410 would make a techie drool, the 7110 offers the small business most of the benefits of the two larger models at a very affordable price. The 7110 starts at around $10,000 list and rises to roughly $16,000 with the three-year platinum warranty and SCSI controller. Now, you could build a Sun server with similar performance characteristics, but you would lose one of the best features of the new Sun Storage line: Storage Analytics. This feature is built on DTrace and allows the administrator to drill down into the performance of the storage to isolate performance issues.

So what do you have to do to get VMware ESXi and Solaris ZFS doing their thing together? Build your Solaris 10 server. There are many good resources available on the internet for configuring Solaris 10 and ZFS. Here are the instructions from Sun. Some basic recommendations: keep the system drives and the data drives separate. Use a mirrored ZFS pool for the system drive. Use RAID-Z2 for the data drives, with a minimum of 5 drives (3+2) and no more than 11 drives (9+2). When you go above the 11-drive mark, just add an additional RAID-Z2 set to the ZFS pool. Leave a drive or two available as hot spares in the pool.

Another source is the OpenSolaris Bible
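
As a rough sketch of those recommendations, a six-drive RAID-Z2 pool with a hot spare could be created as shown below. The pool name tank and the disk names are placeholders for illustration; substitute the devices reported by the format command on your system.

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 spare c1t6d0

zpool status tank

The second command confirms the layout and health of the new pool.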

After you have created your ZFS pool, the first step is to create the file system. All you really need to decide is the name and whether you want a quota (an upper limit) or a reservation (a guaranteed minimum). You would enter the following command:

zfs create <ZFSPool>/<fileSystemName>

You can get creative with the organization. If you know that you are going to have multiple NFS file systems, you might enter zfs create pool/VM_NFS/FS01 or zfs create pool/VM_NFS/root (the parent dataset has to exist first, or you can pass -p to have it created automatically). Ok, done, you've created a ZFS file system. Now you might want to set a quota and/or a reservation.

zfs set quota=250G <ZFSPool>/<fileSystemName>

There, you have now set an upper limit of 250 GB on that file system. At this point the space is thin provisioned: very little will actually be used until data is written. The next step is to set a reservation, if so desired.

zfs set reservation=100G <ZFSPool>/<fileSystemName>

You now have a file system that has taken up 100 GB of the pool. While no data has actually been written to disk, the pool will show that the available size has been reduced by 100 GB. Now, the last step before we can start configuring the ESXi datastore is to share the file system over NFS.
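
If you want to confirm how the quota and reservation are being reported, a quick check against the same placeholder names looks like this:

zfs list -o name,used,available,quota,reservation <ZFSPool>/<fileSystemName>

zpool list <ZFSPool>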

zfs set sharenfs=root=<IP address of the ESXi server> <ZFSPool>/<fileSystemName>

The key in the above command is the sharenfs=root= option, which gives the ESXi server root access to the NFS share. Without it, you will be able to mount the share, but you will not be able to create or open VMs hosted on it. Ok, that's it; on to the ESXi server.
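
Pulling the whole sequence together, here is a minimal sketch using a hypothetical pool named tank, a file system VM_NFS/FS01 and an ESXi host at 192.168.1.20 (all names and addresses are illustrative only):

zfs create -p tank/VM_NFS/FS01

zfs set quota=250G tank/VM_NFS/FS01

zfs set reservation=100G tank/VM_NFS/FS01

zfs set sharenfs=root=192.168.1.20 tank/VM_NFS/FS01

A final zfs get quota,reservation,sharenfs tank/VM_NFS/FS01 will confirm the settings before you move over to the ESXi side.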

Build the ESXi Server

First, download the ESXi server software. You can always download the free version and apply a purchased license at a later date. Install ESXi per the instructions found here. Now, from a Windows workstation, open your browser and go to the IP address that you assigned to the ESXi server. In the left-hand corner of the page you will see a link labeled "Download VMware Infrastructure Client." Go ahead and install the client. When it is complete, click on the shortcut and log in with the username root and the password you created during the ESXi install. The next step is to install either the free license or a purchased license. Left-click the server name and select the Configuration tab. Next select the "Licensed Features" option. Select the top "Edit" to the right of "License Source." Select the "Use License" option and either type or paste the license provided by VMware.

Next, you need to enable NTP time synchronization. You can point it either towards an internal time server, if available, or towards public NTP servers. For my home network or other small businesses, I point towards the following: pool.ntp.org, 0.pool.ntp.org, 1.pool.ntp.org and 2.pool.ntp.org. You enter the NTP servers by selecting the Configuration tab, the "Time Configuration" option and then Properties in the upper left-hand corner. Ensure the "NTP Client Enabled" checkbox is checked. Next, select the Options button and select the "NTP Servers" option. Delete the default server. Click Add, type the first NTP server and repeat for each of the remaining three NTP servers. Check "Restart NTP service to apply changes." Select OK twice and wait until the change is applied.

Next we need to configure the networking. If your environment is small enough, you can get by with two network ports and push both client traffic and NFS traffic across them. You may notice latency if your traffic begins to exceed the available bandwidth; if you experience this, you will need to add or configure additional NICs. Ensure that any path carrying client and/or NFS traffic is redundant. Also, VMotion and the NFS datastore traffic should be placed on gigabit interfaces connected to gigabit switch ports.
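
If you prefer the command line and have access to the remote CLI or the (unsupported) console, the same kind of setup can be sketched with the esxcfg tools. The vSwitch name, NIC and IP address below are placeholders, not a recommendation for your environment:

esxcfg-vswitch -a vSwitch1

esxcfg-vswitch -L vmnic1 vSwitch1

esxcfg-vswitch -A NFS vSwitch1

esxcfg-vmknic -a -i 192.168.2.5 -n 255.255.255.0 NFS

This creates a second vSwitch, gives it an uplink, adds a port group named NFS and attaches a VMkernel interface for the NFS traffic.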

We are now ready to mount the NFS datastore. First select the Configuration tab and then the Storage option. In the right-hand corner, select "Add Storage." Select the "Network File System" option and click Next. Enter the IP address or fully qualified domain name (FQDN) of the Solaris NFS server. For the folder, enter /<ZFSPool>/<fileSystemName>. Ensure that "Mount read only" is unchecked and then enter an easily recognizable name for the datastore. Select OK and you should have a datastore that shows a size equal to the quota configured earlier. Go ahead and create a VM; you're done.
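
The same mount can also be scripted, if you have CLI access, with esxcfg-nas; the placeholders mirror the ones used above:

esxcfg-nas -a -o <IP address of the Solaris server> -s /<ZFSPool>/<fileSystemName> <datastoreName>

esxcfg-nas -l

The -l option lists the NFS datastores currently known to the host, which is an easy way to verify the mount.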

Other ZFS features that are useful with ESXi

One cool feature of Solaris ZFS is the ability to make writable clones of file systems. Imagine building one, two or five VMs, sysprepping them, cloning the file system and re-presenting the newly cloned file system to the ESXi server. Or you could build a test environment of a web and SQL server that could be stood up in seconds. The clone process leverages the snapshot functionality.

zfs snapshot <ZFSPool>/<fileSystemName>@<snapshotName>

zfs clone <ZFSPool>/<fileSystemName>@<snapshotName> <ZFSPool>/<fileSystemName1>

zfs set sharenfs=root=<IP address of the ESXi server> <ZFSPool>/<fileSystemName1>

zfs set quota=250G <ZFSPool>/<fileSystemName1>

zfs set reservation=100G <ZFSPool>/<fileSystemName1>

Now you have a second file system that contains an exact copy of the original file system as it existed at the time of the snapshot. The snapshot name can be nearly anything you want; if you are creating a snapshot that will be used for cloning test environments, you might name it webServer_SQL_Test_Gold. Also, if these are Windows VMs, run sysprep against the servers and shut them down fully before creating the snapshot and the clone. Another thing to be aware of is that you are limited to 8 NFS datastores by default in ESX; to go past that you will need to modify NFS.MaxVolumes under Configuration tab → Advanced Settings. You can run up to 32 NFS datastores per ESX host/cluster.
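
For a concrete, hypothetical run-through, using the tank/VM_NFS/FS01 file system from the earlier sketch and the snapshot name suggested above, the clone workflow would look something like this:

zfs snapshot tank/VM_NFS/FS01@webServer_SQL_Test_Gold

zfs clone tank/VM_NFS/FS01@webServer_SQL_Test_Gold tank/VM_NFS/FS02

zfs set sharenfs=root=192.168.1.20 tank/VM_NFS/FS02

zfs set quota=250G tank/VM_NFS/FS02

zfs set reservation=100G tank/VM_NFS/FS02

Mount tank/VM_NFS/FS02 as a second datastore on the ESXi host just as before, and the cloned VMs are ready to be registered.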

You can also present iSCSI drives to your ESX clients from the ZFS pool.

On the Solaris Wiki site, you will find a great write-up about how to present iSCSI ZFS LUNs to various initiators.
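
As a minimal sketch, assuming the Solaris iSCSI target packages are installed and using the hypothetical tank pool from earlier, creating a ZFS volume and sharing it as an iSCSI LUN looks roughly like this:

zfs create -V 100G tank/esx_lun01

zfs set shareiscsi=on tank/esx_lun01

iscsitadm list target

The target reported by iscsitadm is what you would point the ESX software iSCSI initiator at.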

If you have questions, concerns, gripes, etc… leave a comment and let me know.

