SvSAN 6: now with memory and SSD caching

StorMagic SvSAN schematic

A couple of weeks ago StorMagic announced their newest SvSAN 6 release. The basics are still the same: SvSAN takes the internal disks from two hypervisor servers (Hyper-V or VMware) and turns them into highly available shared storage. Yes, that’s a two-server minimum, not three, so this should be a little cheaper than VMware VSAN and the like. What’s new in version 6 is the addition of an Advanced edition with SSD and memory-based caching and tiering.

SvSAN 6 Advanced Edition

A quick summary for those who aren’t familiar with SvSAN: it’s a piece of software (called a VSA, or Virtual Storage Appliance) that you run on a Hyper-V or VMware host. The SvSAN VSA takes the internal disks of the server, does a bit of magic and spits out a highly available datastore. This HA datastore is what allows virtual machine mobility and survivability in case one of the servers blows up.

The magic in this is that SvSAN can do this with just two ESXi/Hyper-V servers, whereas other VSAs usually need 3+ servers. It manages this with a single shared quorum resource that can arbitrate for potentially thousands of two-node clusters. The result: less server investment and lower power consumption.
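
To make that concrete, here’s a minimal sketch (my own illustration, not StorMagic’s actual quorum protocol) of how one shared witness can break the tie for many two-node clusters at once: when the mirror link between two nodes drops, each cluster asks the witness which node may keep serving IO.

    # Illustrative only: a toy tie-breaker, not StorMagic's implementation.
    function Resolve-SplitBrain {
        param(
            [string]$ClusterId,
            [string[]]$ReachableNodes   # nodes the witness can still see
        )
        if ($ReachableNodes.Count -eq 0) {
            return "Cluster ${ClusterId}: no survivor, halt IO to avoid split-brain"
        }
        # The first node to reach the witness wins; its peer suspends IO.
        return "Cluster ${ClusterId}: $($ReachableNodes[0]) continues, peer suspends IO"
    }

    # One witness arbitrating for several independent two-node clusters:
    Resolve-SplitBrain -ClusterId 'site-001' -ReachableNodes @('vsa-a','vsa-b')
    Resolve-SplitBrain -ClusterId 'site-002' -ReachableNodes @('vsa-b')
    Resolve-SplitBrain -ClusterId 'site-003' -ReachableNodes @()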

SvSAN 6 editions

All the bold features are new in SvSAN 6, and you can see the difference between the Advanced and Standard editions: caching and tiering. The Advanced edition is slightly more expensive (around $10k list), but obviously has a lot more performance potential, plus some potential to save on hardware investments. Licenses are capacity-based: anywhere between 2TB and unlimited, with 6TB and 12TB as intermediate steps.

Cache all the data!

In SvSAN 6 the VSA automatically decides on the optimal read:write ratio of the SSD cache. Less frequently used data is automatically evicted from the cache based on age.

Good to know: the older SvSAN 5.3 only did write caching, not read caching. If the blocks you wanted happened to be in the write cache, you could read them back out of it, but the VSA would not actively prefetch blocks and put them on SSD for rapid retrieval. This changes in 6.0 with active read caching.
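
To illustrate the difference, here’s a toy block cache (hypothetical, nothing to do with SvSAN’s actual code) that behaves like 6.0: writes land in the cache, read misses get promoted into the cache, and stale entries age out.

    # Toy write-back cache with read promotion and age-based eviction.
    $cache = @{}   # LBA -> @{ Data = ...; LastUsed = ... }

    function Write-Block($lba, $data) {
        # Writes always land in cache first (write-back), in 5.3 and 6.0 alike.
        $cache[$lba] = @{ Data = $data; LastUsed = Get-Date }
    }

    function Read-Block($lba) {
        if ($cache.ContainsKey($lba)) {        # hit: serve straight from cache
            $cache[$lba].LastUsed = Get-Date
            return $cache[$lba].Data
        }
        $data = "<LBA $lba read from spinning disk>"   # miss: go to the slow tier
        # 5.3 stopped here; 6.0's read caching also promotes the block to SSD:
        $cache[$lba] = @{ Data = $data; LastUsed = Get-Date }
        return $data
    }

    function Remove-StaleBlocks($maxAgeMinutes) {
        # Age-based eviction: less frequently used blocks fall out over time.
        $cutoff = (Get-Date).AddMinutes(-$maxAgeMinutes)
        foreach ($lba in @($cache.Keys)) {
            if ($cache[$lba].LastUsed -lt $cutoff) { $cache.Remove($lba) }
        }
    }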

Memory caching is currently read-only, and you can enable it in one of three modes:

  • Most frequently used: the default mode that looks at access patterns and stores the frequently used blocks in memory. This will benefit all workloads.
  • Read-ahead mode: looks for sequential read streams and starts prefetching data into memory (see the sketch after this list).
  • Data pinning mode: for targeted workloads such as VDI environments that suffer from boot storms.
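
Read-ahead detection is easy to picture: watch for consecutive block addresses and start warming the cache once a stream emerges. A rough sketch of the general idea, not StorMagic’s actual algorithm:

    # Hypothetical read-ahead detector: spot sequential streams, prefetch ahead.
    $script:lastLba   = -1
    $script:streakLen = 0

    function Register-Read {
        param([int]$Lba)
        if ($Lba -eq $script:lastLba + 1) { $script:streakLen++ }
        else                              { $script:streakLen = 0 }
        $script:lastLba = $Lba

        # After a few consecutive blocks, assume a stream and warm the next ones.
        if ($script:streakLen -ge 3) {
            Write-Host "Prefetching LBAs $($Lba + 1)..$($Lba + 8) into memory"
        }
    }

    10..14 | ForEach-Object { Register-Read -Lba $_ }   # sequential: prefetch kicks in
    Register-Read -Lba 99                               # random read: no prefetch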

You can increase the capacity of memory available for caching by assigning extra RAM to the VSA. I didn’t find an upper limit, but this will no doubt be disclosed at a later date.

It’s also possible to use memory and SSD caching at the same time: the algorithms in SvSAN 6 populate the cache tiers based on access frequency, so both tiers are used effectively.
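
One way to picture the tiering decision (again my sketch, not the real algorithm): rank blocks by access count and let the hottest land in RAM, the warm ones on SSD, and the rest stay on disk.

    # Hypothetical two-tier placement by access frequency.
    $accessCounts = @{ 100 = 250; 101 = 40; 102 = 3; 103 = 900; 104 = 12 }  # LBA -> hits
    $ramSlots = 2; $ssdSlots = 2    # tiny capacities to keep the example readable

    $i = 0
    foreach ($entry in ($accessCounts.GetEnumerator() | Sort-Object Value -Descending)) {
        $tier = if ($i -lt $ramSlots) { 'RAM' } elseif ($i -lt $ramSlots + $ssdSlots) { 'SSD' } else { 'NL-SAS' }
        '{0} hits on LBA {1} -> {2}' -f $entry.Value, $entry.Key, $tier
        $i++
    }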

StorMagic IO distribution

StorMagic looked at real customer data while designing the caching algorithms. The results were promising: up to 70% of IO would most likely be satisfied from the read cache, with 100% of the writes being absorbed by the write cache. So now to the savings: what does this mean?
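
A quick back-of-the-envelope, with made-up workload numbers, shows what that hit rate means for the spinning disks:

    # Assumed numbers: 1000 front-end IOPS, 70/30 read/write mix,
    # 70% read hit rate, writes fully absorbed by the write cache.
    $totalIops   = 1000
    $reads       = $totalIops * 0.7      # 700 read IOPS
    $writes      = $totalIops * 0.3      # 300 write IOPS
    $readHitRate = 0.7

    $backendReads = $reads * (1 - $readHitRate)   # only misses hit disk: 210 IOPS
    # Writes land in cache first and destage later, coalesced into larger,
    # friendlier sequential IO, so the disks see far less than 300 random IOPS.
    "Front-end: $totalIops IOPS; backend random reads: $backendReads IOPS"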

If you can absorb the majority of the IO in memory and on an SSD drive, you can use cheaper, slower, larger-capacity drives for persistent data storage. Swap out those SAS drives for an SSD and a couple of NL-SAS drives. The slightly more expensive license is then offset by cheaper servers and lower power consumption.

Ease of management

If you are trying to save money, it pays to look not only at hardware and licenses, but also at people. Specifically in the installation and maintenance phases: how much effort does it take to deploy SvSAN and keep it running?

StorMagic promises a number of options that help:

  • A handy GUI that helps with deploying multiple VSAs at the same time, with incremental naming and IP addressing.
  • Once you finish a GUI deployment, you can dump the corresponding PowerShell script, clearly marked with all the variables. You can then edit the variables in that script and scale it to world-conquering proportions (see the sketch after this list). Imagine deploying one site with the GUI, then doing the other 999 while leaning back in your chair and drinking mojitos…

    SvSAN PowerShell deployment
    Something like that…
  • The VSAs offer an out-of-box experience (OOBE): you (or the hardware partner) can stage them onto the hardware beforehand and configure them once you’re onsite.
  • And once everything is deployed and operational, you can upgrade multiple VSAs using the GUI, with plenty of options to stage upgrades overnight or run them straight away. Of course with plenty of automated health checks, so the upgrade doesn’t crash your whole environment…
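
To give an idea of what that scale-out could look like, here’s a rough sketch. Deploy-SvsanVsa is a hypothetical stand-in for whatever the GUI-exported script actually calls; the real cmdlet names come from the dumped script itself.

    # Deploy-SvsanVsa is a hypothetical placeholder, not a real StorMagic cmdlet.
    function Deploy-SvsanVsa {
        param([string]$Name, [string]$MgmtIp, [string]$VmHost)
        Write-Host "Would deploy $Name at $MgmtIp on $VmHost"
    }

    # Edit the variables once, then loop over the other 999 sites.
    1..999 | ForEach-Object {
        # Spread sites across two octets so the address stays valid past site 255.
        $ip = '10.{0}.{1}.10' -f [math]::Floor($_ / 256), ($_ % 256)
        Deploy-SvsanVsa -Name ('VSA-SITE-{0:d3}' -f $_) -MgmtIp $ip -VmHost ('esxi-{0:d3}.example.local' -f $_)
    }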

My thoughts on SvSAN 6

First of all, it’s good to see that StorMagic is continuing to improve SvSAN. They’ve partnered with a number of hardware vendors (Cisco, Lenovo, Dell and HP), who will install ESXi and the SvSAN VSA and preconfigure them for the customer. I’ve written an earlier post about the partnership between VMware and StorMagic, and it’s good to hear that this is paying off for both companies: VMware VSAN in the DC deployments, SvSAN in the ROBO environments. I imagine the same will happen with the new partnerships such as Cisco: HyperFlex for the 4+ node DC deployments, SvSAN for the ROBO 2-node deployments.

A little bit of flash goes a long way; using it as a caching layer in SvSAN looks like a proven way to lower latency and to get by with fewer, larger-capacity NL-SAS drives.

SvSAN 6 will be generally available on the 26th of August; keep an eye on the StorMagic website and the upgrade promotions!