NVMe Tiering in vSphere 8.0 Update 3 is a Homelab game changer!

As someone who is always on the lookout for interesting and clever ways to make the most out of a vSphere homelab investment, I was surprised there has not been more noise about the new NVMe Tiering capability in vSphere 8.0 Update 3!?

NVMe Tiering is currently in Tech Preview and it enables ESXi to use an NVMe device as a secondary tier of memory for your workloads, which IMHO makes it one of the killer features in vSphere 8.0 Update 3, especially with some interesting implications for Homelabs!

As the old saying goes, a picture is worth a thousand words …


The picture on the left shows a system with 64GB of memory (DRAM) available before enabling NVMe Tiering, and the one on the right shows the amount of memory available after enabling NVMe Tiering, which is a whopping 480GB! 🤯

For my initial setup, I used an older Intel NUC 12 Enthusiast, as it allows for up to 3 x NVMe devices, which I have allocated to the ESXi installation, the Workload Datastore and NVMe Tiering. The maximum amount of physical DRAM the Intel NUC 12 Enthusiast supports is 64GB, which I have fully maxed out, and I am using a 1TB NVMe device for NVMe Tiering, which is how I was able to get to 318GB of memory on my physical ESXi host running on the Intel NUC!

So how usable is the Intel NUC with the “extra” available memory? … Well, I figured I should put it through a real test, and I was able to successfully deploy a fully operational VMware Cloud Foundation (VCF) Holodeck solution! 😎


Since the Intel NUC is a consumer platform, I was surprised at how responsive and fast the deployment was: it took a little over ~2hrs to complete, and once it was fully accessible I did not notice any real performance degradation when logging into SDDC Manager or the vSphere UI.

My second experiment used a more recent hardware platform, the ASUS PN64-E1, which had 96GB of DRAM, and after enabling NVMe Tiering on the same 1TB NVMe device, I was able to reach 480GB (which is actually the screenshot at the very top of this blog post).

Note: I opted to leave all CPU cores enabled, and I did observe that the overall deployment took a bit longer than on the Intel 12th Generation CPU; I also had to retry the bringup operation a couple of times with Cloud Builder, as the NSX VM had to be rebooted. It eventually did complete, so if you are using an Intel 13th Gen or later CPU, you may want to disable the E-Cores. Even though I had more physical DRAM, the impact was more on the CPU than on actual memory, which speaks volumes about how robust the NVMe Tiering capability is!

While I was able to supercharge several of my consumer-grade systems, just imagine the possibilities with a more powerful system with a server-grade CPU and memory, or what this could mean for the Edge!? The possibilities are truly endless, not to mention the types of workloads vSphere can now enable at a much lower cost! 🙌

Have I piqued your interest in upgrading to the latest vSphere 8.0 Update 3 and taking advantage of the new NVMe Tiering capability? What additional workloads might you be able to run now?

Below are the steps to configure NVMe Tiering:

Step 0 – Ensure that you have a single NVMe device that is not in use or partitioned before enabling NVMe Tiering; you cannot share the device with any existing functions. You should also review the KB 95944 article for additional considerations and restrictions before using NVMe Tiering.
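
To find the device path you will need for Step 2, and to double-check that the candidate device has no existing partitions, the standard ESXi commands work fine; substitute your own device identifier, as the one below is just a placeholder:

esxcli storage core device list

partedUtil getptbl /vmfs/devices/disks/<your-NVMe-device-identifier>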

Step 1 – Enable the NVMe Tiering feature by running the following ESXCLI command:

esxcli system settings kernel set -s MemoryTiering -v TRUE
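
If you want to confirm the setting has been applied before moving on, you should be able to query it with the corresponding list command (just a quick sanity check, exact output formatting aside):

esxcli system settings kernel list -o MemoryTiering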

Step 2 – Configure a specific NVMe device for use with NVMe Tiering by running the following command and providing the path to your NVMe device:

esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500

Note: After enabling NVMe Tiering for your NVMe device, you can see which device is configured by using “esxcli system tierdevice list” (shown below). This is also a one-time operation, meaning that if you reinstall ESXi or move the NVMe device, it will still contain the partition that marks the device for NVMe Tiering.
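
For reference, here is that verification command on its own, which you can run right after the create operation above:

esxcli system tierdevice list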

Step 3 – Configure the desired NVMe Tiering percentage (25-400) based on your physical DRAM configuration by running the following command:

esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
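
To put the percentage into context using my own numbers (assuming the tier size is simply DRAM multiplied by the configured percentage, which is how the math appears to work out on my systems): on the ASUS PN64-E1 with 96GB of DRAM, a value of 400 yields roughly 4 x 96GB = 384GB of NVMe-backed memory, which together with the 96GB of DRAM lines up with the ~480GB shown at the top of this post. Similarly, the 64GB Intel NUC works out to roughly 256GB + 64GB, close to the 318GB I observed once overhead is accounted for.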

Note: To learn more about the NVMe Tiering percentage configuration, please see the PDF document at the bottom of the KB 95944 article.


Step 4 – Reboot the ESXi host for the changes to take effect. After ESXi fully boots up, you will see the updated memory capacity that has been enabled by your NVMe device.
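
If you prefer to reboot from the command line as well, note that esxcli requires the host to be in maintenance mode for this, so the Host Client or DCUI may be more convenient in a homelab; the equivalent command would be:

esxcli system shutdown reboot -r "Enable NVMe Tiering"

Once the host is back online, the expanded memory capacity should show up in the vSphere UI, just like the before/after comparison at the top of this post.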
