Not all data is accessed equally: some data is very popular, while other data is only touched infrequently. With the introduction of FAST VP in the CX4 and VNX series it is possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices and assigns each slice a temperature based on its activity: heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end-users won't notice a thing. There's one scenario that will get you in trouble though, and that's infrequent, heavy use of formerly cold data…
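To make the idea concrete, here's a minimal Python sketch of temperature-based slice placement. It's an illustration only, not EMC's actual algorithm; the tier names, capacities and the Slice structure are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Slice:
    lun: str
    offset_gb: int
    temperature: float  # hypothetical per-slice activity score

# Hypothetical tiers, fastest first, with free capacity expressed in slices
tiers = [("SSD", 100), ("SAS", 400), ("NL-SAS", 2000)]

def place_slices(slices, tiers):
    """Fill the fastest tier with the hottest slices, then spill over."""
    placement = {}
    ranked = sorted(slices, key=lambda s: s.temperature, reverse=True)
    tier_idx, remaining = 0, tiers[0][1]
    for s in ranked:
        # Move to the next tier once the current one is full
        while remaining == 0 and tier_idx < len(tiers) - 1:
            tier_idx += 1
            remaining = tiers[tier_idx][1]
        placement[(s.lun, s.offset_gb)] = tiers[tier_idx][0]
        remaining -= 1
    return placement
```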
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array only has a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Bridging that gap are solid-state drives, which sit somewhere between DRAM and spinning disks both performance-wise and cost-wise. There's one problem though: a LUN usually isn't 100% active all of the time. This means that placing a whole LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that made sure only the hottest data is placed on those SSDs and that quickly adjusted this placement as the workload changes, you'd have a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
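As an illustration of the general idea, here's a small Python sketch of a hot-data cache that promotes a chunk to SSD after repeated hits. The hit threshold, chunk granularity and eviction policy are assumptions for the example, not the actual MCx FAST Cache internals.

```python
from collections import OrderedDict, defaultdict

class HotDataCache:
    def __init__(self, capacity_chunks, promote_after=3):
        self.capacity = capacity_chunks
        self.promote_after = promote_after   # assumed hit threshold
        self.hits = defaultdict(int)         # disk reads seen per chunk
        self.cached = OrderedDict()          # chunk_id -> data, LRU order

    def read(self, chunk_id, read_from_disk):
        if chunk_id in self.cached:          # hit: served from SSD cache
            self.cached.move_to_end(chunk_id)
            return self.cached[chunk_id]
        data = read_from_disk(chunk_id)      # miss: served from spinning disk
        self.hits[chunk_id] += 1
        if self.hits[chunk_id] >= self.promote_after:
            self._promote(chunk_id, data)    # hot enough, move it to SSD
        return data

    def _promote(self, chunk_id, data):
        if len(self.cached) >= self.capacity:
            self.cached.popitem(last=False)  # evict the least recently used chunk
        self.cached[chunk_id] = data
```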
When troubleshooting performance on a CLARiiON or VNX storage array you'll often see graphs that look something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs the array starts a process called forced flushing to flush writes to disk and create new space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series there should be far fewer forced flushes and much improved performance.
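Here's a rough Python sketch of why a full write cache hurts: once the cache is completely full, an incoming write has to wait for a forced flush to disk before it can be acknowledged. The cache size and flush batch size are made-up numbers, not real array firmware behaviour.

```python
class WriteCache:
    def __init__(self, size_pages, flush_batch=64):
        self.size = size_pages               # total cache pages (assumed)
        self.flush_batch = flush_batch       # pages written out per forced flush
        self.dirty_pages = []

    def write(self, page, flush_to_disk):
        if len(self.dirty_pages) >= self.size:
            # Cache is 100% full: force-flush a batch to disk first.
            # The incoming host write stalls here, which is what hurts
            # every application using the array.
            batch = self.dirty_pages[:self.flush_batch]
            self.dirty_pages = self.dirty_pages[self.flush_batch:]
            flush_to_disk(batch)
        self.dirty_pages.append(page)        # acknowledged once it is cached
```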
A big change in the VNX2 hot spare policy compared to earlier VNX or CLARiiON models is the use of permanent hot spares. Whereas the earlier models had dedicated, configured hot spares that were only used for the duration of a drive failure, the VNX2 will spare to any eligible unused drive and NOT switch back to the original drive once it is replaced. I've written about this and other new VNX2 features before, but hadn't had the chance to try it first-hand. Until now: a drive died, yay! Continue reading to see how you can back-track which drive failed, which drive replaced it, and how to move the drive back to the original physical location (should you want to).
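To illustrate the policy, here's a small Python sketch of permanent hot sparing: any eligible unused drive of the same type with sufficient capacity is picked, and it stays a permanent member of the configuration with no automatic copy-back. The data model and selection rule are assumptions for the example, not the actual VNX2 sparing engine.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Drive:
    location: str      # e.g. "Bus 0 Enclosure 1 Disk 4"
    drive_type: str    # "SAS", "NL-SAS" or "SSD"
    capacity_gb: int
    in_use: bool

def pick_permanent_spare(failed: Drive, drives: List[Drive]) -> Drive:
    """Pick an eligible unused drive; it stays in place afterwards."""
    candidates = [d for d in drives
                  if not d.in_use
                  and d.drive_type == failed.drive_type
                  and d.capacity_gb >= failed.capacity_gb]
    if not candidates:
        raise RuntimeError("no eligible spare drive available")
    spare = min(candidates, key=lambda d: d.capacity_gb)  # smallest fitting drive
    spare.in_use = True   # becomes a permanent member; no automatic copy-back
    return spare
```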
July 8th 2014. EMC MegaLaunch 4. Theme: Redefine Possible (or #RedefinePossible on Twitter). What was previously impossible is now possible! A catchy theme, and something we've seen in IT a number of times now. For example: in the 1990s, who would have thought it would be possible to migrate a server from one datacenter to another, possibly a couple of miles away, without downtime, in a couple of seconds?! Doing things fundamentally differently, better: that's the goal we're always trying to achieve. So how can we apply this to Isilon?