Not all data is accessed equally: some data is popular, while other data may only be accessed infrequently. With the introduction of FAST VP in the CX4 and VNX series, it is possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices, and each slice is assigned a temperature based on its activity: heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end-users won’t notice a thing. There’s one scenario that will get you in trouble, though, and that’s infrequent, heavy use of formerly cold data…
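The tiering logic described above can be sketched in a few lines of Python. This is purely illustrative: the slice/tier model, capacities, and function names are my own assumptions, not EMC's actual implementation.

```python
# Illustrative sketch of FAST VP-style tiering: rank slices by temperature
# (here simply an I/O count) and fill the fastest tier first.
# All names and numbers are hypothetical, not EMC's real algorithm.

def place_slices(slices, tiers):
    """Assign the hottest slices to the fastest tiers first.

    slices: list of (slice_id, io_count) tuples; io_count is the "temperature"
    tiers:  list of (tier_name, capacity_in_slices), fastest tier first
    """
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)  # hottest first
    placement, i = {}, 0
    for tier_name, capacity in tiers:
        for _ in range(capacity):
            if i >= len(ranked):
                return placement
            slice_id, _temp = ranked[i]
            placement[slice_id] = tier_name
            i += 1
    return placement

# Two SSD slots fill up with the two hottest slices; the rest spill to SAS.
tiers = [("SSD", 2), ("SAS", 3), ("NL-SAS", 10)]
slices = [("s1", 900), ("s2", 10), ("s3", 850), ("s4", 300), ("s5", 5)]
print(place_slices(slices, tiers))
```

Note how the cold slices ("s2", "s5") never reach the SSD tier; in a real pool those would end up on NL-SAS, which is exactly where the TCO win comes from.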
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array has a limited amount of it. Spinning disks offer large capacities and are relatively cheap, but slow. Bridging that performance gap are the solid-state drives, which sit somewhere between DRAM and spinning disks both performance-wise and cost-wise. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that makes sure only the hottest data is placed on those SSDs and that quickly adjusts this data placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
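The "promote only the hottest data" idea can be sketched as a simple hit-counting cache: a chunk is copied to the SSD cache once it has been accessed a threshold number of times, and the least recently used chunk is evicted when the cache is full. The threshold, chunk granularity, and eviction policy below are simplifying assumptions for illustration, not the actual FAST Cache internals.

```python
# Illustrative sketch of a FAST Cache-style promotion policy.
# promote_after and the LRU eviction are assumptions, not EMC's real design.

from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity, promote_after=3):
        self.capacity = capacity
        self.promote_after = promote_after
        self.hits = {}              # chunk -> access count while still on disk
        self.cache = OrderedDict()  # promoted chunks, kept in LRU order

    def access(self, chunk):
        """Return 'ssd' if served from the flash cache, else 'disk'."""
        if chunk in self.cache:
            self.cache.move_to_end(chunk)  # refresh LRU position
            return "ssd"
        self.hits[chunk] = self.hits.get(chunk, 0) + 1
        if self.hits[chunk] >= self.promote_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[chunk] = True            # promote to SSD
        return "disk"

c = FlashCache(capacity=2)
for _ in range(3):
    c.access("hot")        # third access triggers promotion
print(c.access("hot"))     # served from SSD from now on
print(c.access("cold"))    # a one-off access stays on spinning disk
```

The key property is the same one that makes FAST Cache attractive: a chunk that is touched once stays on cheap disks, while a genuinely hot chunk quickly earns its place on flash.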
When troubleshooting performance on a CLARiiON or VNX storage array, you’ll often see graphs that resemble something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs, the array starts a process called forced flushing to flush writes to disk and free up space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series, there should be far fewer forced flushes and much better performance.
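Conceptually, the write cache works with fill-level watermarks: above a high watermark the array flushes dirty pages to disk in the background, and only at 100% does it resort to forced flushing, stalling host writes. The sketch below illustrates that decision; the watermark percentages are made-up examples, as real arrays tune these per platform.

```python
# Simplified sketch of write-cache watermark behaviour.
# The 80/60 watermarks are illustrative, not actual VNX defaults.

HIGH_WATERMARK = 80  # start background flushing above this fill %
LOW_WATERMARK = 60   # background flushing drains the cache down to this %

def handle_write(cache_fill_pct):
    """Return the action the array would take at the given cache fill level."""
    if cache_fill_pct >= 100:
        # Cache is completely full: incoming host writes must wait for
        # dirty pages to hit disk -- the dreaded forced flush.
        return "forced flush"
    if cache_fill_pct > HIGH_WATERMARK:
        return "background flush"  # drain toward LOW_WATERMARK, writes still cached
    return "cache write"           # plenty of room, acknowledge immediately

print(handle_write(50))   # normal cached write
print(handle_write(85))   # background flushing kicks in
print(handle_write(100))  # forced flushing, performance tanks
```

The takeaway matches the graph: as long as the fill level stays between the watermarks, hosts see DRAM-speed writes; once the cache pins at 100%, every write effectively runs at disk speed.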
“Check your email ;)”. That was the first Twitter DM I read one sleepy morning in June. Suffice it to say, a minute later I was wide awake: I had been chosen to represent the EMC Elect at the EMC “Redefine Possible” MegaLaunch event in London (UK)! I knew about these launch events because my colleague Rob attended one last year in Milan. Excitement started building, and a couple of hours later I found out I wasn’t going alone…
EMC World 2014 is back in town, this time with the REDEFINE tagline. After some logistical challenges getting here, the show is on the road: general sessions, breakout sessions, hands-on labs (HOL). So what’s up with the REDEFINE tagline? What are we redefining in IT / data infrastructure? And what are the EMC Elect doing at EMC World when not flooding your Twitter feed?
Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED programs during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. In early 2013 this changed and EMC partners could also enroll. Too bad I ran out of time with a mountain of projects and other training material… up until now! So what does it mean when someone is USPEED certified?
XtremIO is the new all-flash array from EMC, announced not too long ago. Flash has an enormous performance advantage over traditional spinning disks. And although there are no moving parts in a solid-state drive, they can still fail! Data on the XtremIO X-Brick still needs to be protected against one or multiple drive failures. In traditional arrays (or servers) this is done using RAID (Redundant Array of Independent Disks). We could simply use RAID in the XtremIO array, but SSDs behave fundamentally differently from spinning disks. So while we’re at it, why not reinvent our approach to protecting data? This is where XtremIO XDP comes in.
The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era when flash wasn’t around or was extremely expensive, and thus they simply weren’t designed to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that were never a bottleneck with mechanical disks start to clog up your system. To accommodate this increased use of flash drives, the VNX was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th, 2013!
Earlier this month EMC announced the new VNX series, which promises more performance and capacity at a lower cost per GB and a smaller footprint. The hashtag for the event was #Speed2Lead, which was trending on Twitter during the official event and in the weeks leading up to the Mega Launch in Milan, Italy. With performance being key in the new systems, the announcement was built around the Monza race track, where the Formula 1 circus was in town. Guess what the logo for the launch was?
I myself was on summer holidays during the big event (ending up only a hundred miles away from Milan, albeit a week late ;)), so I couldn’t do much more than refresh Twitter and watch my timeline get blasted to bits. So consider this a catch-up post!
Welcome to the (mini-)series on VNX performance and how to leverage SSDs in a VNX system. You can find part one, on skew and data placement, over here. The second post discussed the improvements to FAST Cache in VNX OE 32. This third and final post will discuss some of the ideal use cases for SSDs in a VNX.
If you’ve got SSDs in a VNX, you can use them in two ways: as FAST Cache or as an extreme performance tier in your storage pools (FAST VP). Each of these implementations has advantages and disadvantages, and they can also be used concurrently (i.e. use FAST Cache AND FAST VP). It depends on what you want to do…