FAST VP: Let it do its job!

Not all data is accessed equally: some data is popular, while other data is only accessed infrequently. With the introduction of FAST VP in the CX4 and VNX series it is possible to create a single storage pool that contains multiple different types of drives. The system chops your LUNs into slices and each slice is assigned a temperature based on the activity of that slice. Heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full the remaining hot slices go to the second fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs and your end users won’t notice a thing. There’s one scenario that will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
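
In (hypothetical) Python the placement logic boils down to something like the sketch below: rank the slices by temperature and fill the tiers from fastest to slowest. This only illustrates the idea and is not EMC’s actual relocation algorithm; the tier sizes and activity counters are made up for the example.

```python
# Illustrative sketch only, not EMC's FAST VP code: rank LUN slices by
# "temperature" and fill the fastest tier first, spilling the rest into
# progressively slower tiers.

def place_slices(slices, tiers):
    """slices: list of (slice_id, temperature); tiers: list of (name, capacity in slices), fastest first."""
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)   # hottest slices first
    placement, cursor = {}, 0
    for tier_name, capacity in tiers:
        for slice_id, _temp in ranked[cursor:cursor + capacity]:
            placement[slice_id] = tier_name
        cursor += capacity
    return placement

# Hypothetical pool: three tiers and ten slices with made-up activity counters.
tiers = [("SSD", 2), ("SAS", 4), ("NL-SAS", 10)]
slices = [(f"slice-{i}", temp) for i, temp in enumerate([5, 90, 3, 70, 1, 40, 2, 65, 0, 15])]
print(place_slices(slices, tiers))   # the two hottest slices end up on SSD
```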

Continue reading “FAST VP: Let it do its job!”

VNX2 MCx FAST Cache improvements

MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array has a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Bridging the performance gap are the solid-state drives, which sit somewhere between DRAM and spinning disks both performance-wise and cost-wise. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that makes sure only the hottest data lands on those SSDs and that quickly adjusts this placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
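
Conceptually FAST Cache behaves a bit like the toy promotion loop below: count how often a chunk of data is touched and copy it into the SSD cache once it gets hot enough. This is a sketch of the idea only; the hit threshold, chunk granularity and LRU eviction here are assumptions for the example, not the exact VNX2 behaviour.

```python
# Toy illustration of the FAST Cache idea, not the VNX implementation: promote
# a chunk to the SSD cache after it has been accessed often enough.
from collections import OrderedDict

PROMOTE_AFTER = 3          # assumed hit threshold before promotion
CACHE_CAPACITY = 4         # toy capacity, in chunks

access_counts = {}
ssd_cache = OrderedDict()  # chunk -> True, ordered by recency of use

def read(chunk):
    if chunk in ssd_cache:                     # cache hit: serve from SSD
        ssd_cache.move_to_end(chunk)
        return "hit"
    access_counts[chunk] = access_counts.get(chunk, 0) + 1
    if access_counts[chunk] >= PROMOTE_AFTER:  # hot enough: copy the chunk to SSD
        if len(ssd_cache) >= CACHE_CAPACITY:
            ssd_cache.popitem(last=False)      # evict the least recently used chunk
        ssd_cache[chunk] = True
    return "miss"

for chunk in ["a", "a", "a", "b", "a", "b", "b"]:
    print(chunk, read(chunk))                  # "a" becomes a hit after three misses
```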

Continue reading “VNX2 MCx FAST Cache improvements”

VNX2 MCx Cache improvements

When troubleshooting performance on a CLARiiON or VNX storage array you’ll often see graphs that look something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs the array starts a process called forced flushing to flush writes to disk and create new space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series there should be far fewer forced flushes and much improved performance.
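
To picture why a full write cache hurts, here’s a small watermark simulation. It’s a simplified model with made-up percentages, not the array’s actual cache code; the point is only that once the cache is 100% full an incoming write has to wait for a forced flush.

```python
# Simplified watermark model of a write cache and the "forced flush" condition;
# the numbers are examples, not the array's actual settings.

CACHE_PAGES = 100
HIGH_WATERMARK = 80   # start background flushing above this fill level
LOW_WATERMARK = 60    # stop background flushing below this fill level

dirty_pages = 0
flushing = False

def write(pages):
    """Accept a host write; report whether it had to wait for a forced flush."""
    global dirty_pages, flushing
    forced = False
    if dirty_pages + pages > CACHE_PAGES:      # cache full: the host write must wait
        forced = True                          # -> forced flush, latency spikes
        dirty_pages = LOW_WATERMARK            # pretend we flushed down to the low watermark
    dirty_pages += pages
    if dirty_pages >= HIGH_WATERMARK:
        flushing = True                        # background flushing kicks in
    elif dirty_pages <= LOW_WATERMARK:
        flushing = False                       # enough free pages again, stop flushing
    return forced

for burst in [50, 40, 30]:
    print(f"write {burst} pages -> forced flush: {write(burst)}, dirty pages: {dirty_pages}")
```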

Continue reading “VNX2 MCx Cache improvements”

VNX2 Hot Spare and Drive Mobility Hands-On

A big change in the VNX2 hot spare policy compared to earlier VNX or CLARiiON models is the use of permanent hot spares. Whereas the earlier models had dedicated, configured hot spares that were only used for the duration of a drive failure, the VNX2 will spare to any eligible unused drive and NOT switch back to the original drive afterwards. I’ve written about this and other new VNX2 features but hadn’t had the chance to try it first-hand yet. Until now: a drive died, yay! Continue reading to see how you can trace back which drive failed, which drive replaced it, and how to move the drive back to the original physical location (should you want to).
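
The spare selection itself can be thought of along these lines; a hypothetical sketch with made-up drive data, not the array’s actual sparing policy engine: pick any unused drive of the same type that is large enough and rebuild to it, with no copy-back afterwards.

```python
# Hypothetical sketch of the permanent hot spare idea: rebuild to any eligible
# unused drive of the same type with enough capacity, and stay there.
# The drive list below is made up for the example.

drives = [
    {"loc": "0_0_4", "type": "SAS",    "size_gb": 600,  "state": "failed"},
    {"loc": "0_0_9", "type": "SAS",    "size_gb": 600,  "state": "unused"},
    {"loc": "1_0_3", "type": "NL-SAS", "size_gb": 2000, "state": "unused"},
    {"loc": "1_0_7", "type": "SAS",    "size_gb": 900,  "state": "unused"},
]

def pick_spare(failed, pool):
    """Return the location of an eligible unused drive (same type, big enough), or None."""
    candidates = [d for d in pool
                  if d["state"] == "unused"
                  and d["type"] == failed["type"]
                  and d["size_gb"] >= failed["size_gb"]]
    # Prefer the smallest eligible drive so the larger ones stay available.
    return min(candidates, key=lambda d: d["size_gb"])["loc"] if candidates else None

failed = drives[0]
print(f"Drive {failed['loc']} failed, rebuilding permanently to {pick_spare(failed, drives)}")
```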

Continue reading “VNX2 Hot Spare and Drive Mobility Hands-On”

Redefine Possible – Isilon

July 8th 2014. EMC MegaLaunch 4. Theme: Redefine Possible (or #RedefinePossible on Twitter). What previously was impossible is now possible! A catchy theme and something we’ve seen in IT a number of times now. For example: in the 1990s, who would have thought it possible to migrate a server from one datacenter to another, possibly a couple of miles away, without downtime, in a couple of seconds?! Doing things fundamentally differently, better: that’s the goal we’re always trying to achieve. So how can we apply this to Isilon?

Continue reading “Redefine Possible – Isilon”

VNX2 hands-on (a.k.a. Who stole my SPS?!)

In September 2013 EMC announced the new generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. Apart from the vast performance increase there’s also a boatload of new features: deduplication, active-active LUNs, smaller (256MB) slices for FAST VP, permanent hot spares, etc. Read more about that in my previous post.

It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess which resources I claimed to install them. Me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone stole my standby power supplies (SPS)!

Continue reading “VNX2 hands-on (a.k.a. Who stole my SPS?!)”

EMC World 2014 – Redefine your storage

EMC World 2014 is back in town, this time with the REDEFINE punchline. After some logistical challenges getting here, the show is on the road: general sessions, break-out sessions, hands-on labs (HOL). So what’s up with the REDEFINE punchline? What are we redefining in the IT / data infrastructure? And what are the EMC Elect doing at EMC World when not flooding your Twitter feed?

Continue reading “EMC World 2014 – Redefine your storage”

Entering the league of USPEED Performance Gurus

Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED programs during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. At the beginning of 2013 this changed and EMC partners could also enroll. Too bad I ran out of time with a mountain of projects and other training material… up until now! So what does it mean when someone is USPEED certified?

Continue reading “Entering the league of USPEED Performance Gurus”

XtremIO XDP vs RAID

XtremIO is the new all-flash array from EMC, announced not too long ago. Flash has an enormous performance advantage over traditional spinning disks. Although there are no moving parts in a solid-state drive, they can still fail! Data on the XtremIO X-Brick still needs to be protected against one or multiple drive failures. In traditional arrays (or servers) this is done using RAID (Redundant Array of Independent Disks). We could simply use RAID in the XtremIO array, but SSDs behave fundamentally differently from spinning disks. So while we’re at it, why not reinvent our approach to protecting data? This is where XtremIO XDP comes in.
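
As a reference point, here’s what classic single-parity RAID protection looks like in a few lines of Python: XOR the data blocks into a parity block, and XOR again to rebuild a lost block. This is purely the “traditional RAID” baseline the post contrasts against, not an implementation of XDP.

```python
# Classic RAID-5-style XOR parity across a stripe, shown as the traditional
# baseline; this is not XtremIO's XDP scheme. Block contents are arbitrary.
from functools import reduce

def xor_parity(blocks):
    """Compute the parity block as the byte-wise XOR of all blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Recover a single lost block by XOR-ing the parity with the surviving blocks."""
    return xor_parity(surviving_blocks + [parity])

stripe = [b"\x11\x22\x33", b"\x44\x55\x66", b"\x77\x88\x99"]   # three data blocks
parity = xor_parity(stripe)

lost = stripe[1]                                               # simulate one SSD failing
recovered = rebuild([stripe[0], stripe[2]], parity)
print(recovered == lost)                                       # True: the block is rebuilt
```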

Continue reading “XtremIO XDP vs RAID”

Need more speed? Hello XtremIO!

The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era where flash wasn’t around or was extremely expensive, and thus simply weren’t designed to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that previously (with mechanical disks) were never a bottleneck now start to clog up your system. To accommodate this increased use of flash drives the VNX system was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!

Continue reading “Need more speed? Hello XtremIO!”