MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array will have a limited amount of it. Spinning disks offer lots of capacity and are relatively cheap, but slow. To bridge the performance gap there are solid-state drives: both performance-wise and cost-wise they sit somewhere between DRAM and spinning disks. There's one problem though: a LUN usually isn't 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that makes sure only the hottest data lands on those SSDs and quickly adjusts that placement as the workload changes, you'd get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack that carries less overhead and so delivers better write performance.
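To make the idea a bit more concrete, here's a minimal toy sketch (in Python, so obviously nothing that runs on the array itself) of a hit-count based promotion policy: only chunks that are touched repeatedly get copied to flash. The 64 KB tracking granularity and the 3-hit promotion threshold are assumptions for illustration, not the array's actual internals.

```python
# Toy model of a FAST Cache style promotion policy: count how often each
# chunk of a LUN is touched and copy it to flash once it looks "hot".
# CHUNK_SIZE and PROMOTE_AFTER are illustrative assumptions.
from collections import defaultdict

CHUNK_SIZE = 64 * 1024      # assumed tracking granularity
PROMOTE_AFTER = 3           # assumed number of hits before promotion

class ToyFastCache:
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.hits = defaultdict(int)   # chunk id -> access count
        self.flash = set()             # chunk ids currently copied to SSD

    def access(self, offset):
        chunk = offset // CHUNK_SIZE
        if chunk in self.flash:
            return "ssd"               # served from flash
        self.hits[chunk] += 1
        if self.hits[chunk] >= PROMOTE_AFTER and len(self.flash) < self.capacity:
            self.flash.add(chunk)      # hot enough: promote a copy to SSD
        return "hdd"                   # still served from spinning disk

cache = ToyFastCache(capacity_chunks=2)
for _ in range(3):
    cache.access(0)                    # repeated hits on the same chunk
print(cache.access(0))                 # -> "ssd": the hot chunk was promoted
```

The point of the toy model is simply that cold chunks never consume flash: the SSDs only hold data that has recently proven itself to be hot, which is what lets a small amount of flash serve a much larger pool of spinning disks.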
FAST Cache
In September 2013 EMC announced the new generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. Apart from that performance boost there's also a boatload of new features: deduplication, active-active LUNs, smaller (256 MB) FAST VP slices, persistent hot spares, etc. Read more about that in my previous post.
It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess who claimed the job of installing them. Me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone stole my standby power supplies (SPS)!
The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The "traditional" systems were designed in an era when flash either wasn't around or was extremely expensive, and they simply weren't built to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that were never a bottleneck with mechanical disks now start to clog it up. To accommodate this increased use of flash drives, the VNX was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!
Earlier this month EMC announced the new VNX series, which promises more performance and capacity at a lower cost per GB and in a smaller footprint. The hashtag for the event was #Speed2Lead, which was trending on Twitter during the official event and in the weeks leading up to the Mega Launch in Milan, Italy. With performance being key in the new systems, the announcement was built around the Monza race track, where the Formula 1 circus was in town. Guess what the logo for the launch was?
I myself was on summer holidays during the big event (ending up only a hundred miles away from Milan, albeit a week late ;)), so I couldn't do much more than refresh Twitter and watch my timeline get blasted to bits. So consider this a catch-up post!
Welcome to the (mini-)series on VNX Performance and how to leverage SSDs in a VNX system. You can find part one, on skew and data placement, over here. The second post discussed the improvements to FAST Cache in VNX OE 32. This third and final post will discuss some of the ideal use cases for SSDs in a VNX.
If you've got SSDs in a VNX you can use them in two ways: as FAST Cache or as an extreme performance tier in your storage pools (FAST VP). Each implementation has its advantages and disadvantages, and they can also be used concurrently (i.e. FAST Cache AND FAST VP on the same array). It depends on what you want to do…
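To make that trade-off a bit more tangible, here's a rough (and deliberately oversimplified) Python sketch of the two options side by side. The 256 MB slice size matches what was mentioned earlier for FAST VP on VNX2; the 64 KB FAST Cache granularity and the rule of thumb in the function are assumptions for illustration, not official EMC sizing guidance.

```python
# Rough comparison of the two ways to spend SSDs in a VNX.
# Granularities/reaction times are simplified; the decision rule is a
# made-up rule of thumb, not an EMC best practice.

SSD_OPTIONS = {
    "FAST Cache": {
        "granularity": "64 KB chunks (assumed)",
        "reaction":    "continuous: hot chunks are copied to flash as they heat up",
        "data":        "copy: the original stays in the pool or RAID group",
    },
    "FAST VP": {
        "granularity": "256 MB slices",
        "reaction":    "scheduled relocation windows (typically nightly)",
        "data":        "move: the slice itself is relocated to the SSD tier",
    },
}

def suggest(workload_shifts_quickly: bool, hot_set_fits_in_cache: bool) -> str:
    """Very rough rule of thumb, purely for illustration."""
    if workload_shifts_quickly and hot_set_fits_in_cache:
        return "FAST Cache"   # small, rapidly moving hot set
    if not workload_shifts_quickly:
        return "FAST VP"      # stable skew: let slices settle on the right tier
    return "both"             # large and shifting hot set: combine the two

print(suggest(workload_shifts_quickly=True, hot_set_fits_in_cache=True))    # FAST Cache
print(suggest(workload_shifts_quickly=False, hot_set_fits_in_cache=False))  # FAST VP
```

In short: FAST Cache reacts quickly at a small granularity but only ever holds a copy, while FAST VP moves larger slices on a schedule and actually frees up the capacity on the lower tiers. The rest of this post walks through the use cases where each one shines.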