I can’t recall the last storage system installation that didn’t include at least some solid-state drives. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually translate into real business value. The doctor treating patients can get their paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid-state drives need to achieve even lower latencies. Just reducing the media latency won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
Not all data is accessed equally: some data is popular, while other data may only be accessed infrequently. With the introduction of FAST VP in the CX4 & VNX series it is possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices, and each slice is assigned a temperature based on its activity: heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier; once that tier is full, the remaining hot slices go to the second-fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end users won’t notice a thing. There’s one scenario that will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
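The slice-placement idea is easy to sketch in a few lines of Python. Note this is only an illustration of the "hottest slices to the fastest tier" principle; EMC's actual relocation algorithm is far more sophisticated (scheduled relocation windows, rate limits, per-LUN tiering policies), and the slice temperatures, tier names and capacities below are made up.

```python
def place_slices(slices, tiers):
    """Assign slices to tiers, hottest first.

    slices: list of (slice_id, temperature) tuples.
    tiers:  list of (tier_name, capacity_in_slices), fastest tier first.
    Returns a dict mapping slice_id -> tier_name.
    """
    placement = {}
    # Rank slices by temperature so the hottest land on the fastest tier.
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)
    tier_iter = iter(tiers)
    name, free = next(tier_iter)
    for slice_id, _temp in ranked:
        while free == 0:
            name, free = next(tier_iter)  # current tier full; spill down a tier
        placement[slice_id] = name
        free -= 1
    return placement

# Hypothetical pool: one SSD slice, two SAS slices, plenty of NL-SAS.
slices = [("s1", 90), ("s2", 10), ("s3", 70), ("s4", 5)]
tiers = [("SSD", 1), ("SAS", 2), ("NL-SAS", 10)]
print(place_slices(slices, tiers))
# s1 ends up on SSD, s3 and s2 on SAS, s4 on NL-SAS
```

The "formerly cold data" trap from the paragraph above falls out of this model naturally: a slice that was cold yesterday sits on NL-SAS today, so a sudden burst of heavy access hits the slowest disks until the next relocation moves it up.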
Last year a little bird told me about Tech Field Day: a big knowledge-sharing event where multiple vendors inform a select group of people, or delegates, about what’s hot, new or revolutionary in their field or lineup. There are multiple types of Field Days: networking, virtualization and (of course) storage. Storage Field Day 6 will be held on 5-7 November 2014 in Silicon Valley and… WOOHOOO, I’m attending!
Among this year’s presenters are Avere Systems, Coho Data, NEC Storage, Nexenta Systems, Nimble Storage, Pure Storage, StorMagic and Tegile. I’ll be honest: I’ve only heard of a couple of these names, so I’ve got a bit of reading up to do in the next couple of days, and I can’t wait to hear what they bring to the table.
I’d like to thank the Tech Field Day team (Stephen, Claire and Tom) for inviting me and flying me to San Jose next week. I’m looking forward to an exciting and challenging week filled with tech and discussions. Keep an eye on the Storage Field Day 6 page for the live stream of the event and this page for updates during the week! And maybe I’ll sneak in a picture of the Golden Gate Bridge if I get the chance… 😉
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array has a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Solid-state drives bridge the performance gap, sitting somewhere between DRAM and spinning disks both performance-wise and cost-wise. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most out of your investment. If only there were software that made sure only the hottest data was placed on those SSDs, and that quickly adjusted this placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
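The "promote only the hottest data" behaviour can be illustrated with a toy model: a chunk served from spinning disk gets copied to SSD once it has been hit a few times. This is only a sketch of the general idea; the promotion threshold, chunk tracking and capacity handling below are illustrative assumptions, not EMC’s exact implementation.

```python
from collections import Counter

# Illustrative promotion threshold; not necessarily EMC's actual value.
PROMOTE_AFTER = 3

class HotChunkCache:
    """Toy FAST Cache-style model: promote frequently hit chunks to SSD."""

    def __init__(self, capacity):
        self.capacity = capacity  # max number of chunks on SSD
        self.hits = Counter()     # access counts for chunks still on disk
        self.on_ssd = set()

    def access(self, chunk):
        if chunk in self.on_ssd:
            return "ssd"          # hot chunk, served from flash
        self.hits[chunk] += 1
        if self.hits[chunk] >= PROMOTE_AFTER and len(self.on_ssd) < self.capacity:
            self.on_ssd.add(chunk)  # chunk proved itself hot: promote it
        return "disk"             # this access still came from spinning disk

cache = HotChunkCache(capacity=2)
for _ in range(3):
    cache.access("chunk-A")       # three hits trigger promotion
print(cache.access("chunk-A"))    # ssd
print(cache.access("chunk-B"))    # disk
```

The hit threshold is what keeps one-off sequential reads from polluting the flash tier: a chunk only earns its SSD slot by being accessed repeatedly.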
When troubleshooting performance on a CLARiiON or VNX storage array you’ll often see graphs that resemble something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs the array starts a process called forced flushing to flush writes to disk and free up space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series there should be far fewer forced flushes and much improved performance.
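A simple way to picture why 100% is so painful is a watermark model: between the low and high watermarks the array flushes dirty pages lazily in the background, but at 100% every incoming host write has to wait for a forced flush. The watermark values below are illustrative defaults, not a statement about any particular array’s configuration.

```python
def cache_state(fill_pct, low=60, high=80):
    """Classify write-cache behaviour by fill percentage (toy model)."""
    if fill_pct >= 100:
        return "forced flush"          # host writes stall until pages drain
    if fill_pct >= high:
        return "high-water flushing"   # aggressive background flushing
    if fill_pct >= low:
        return "watermark flushing"    # steady background flushing
    return "idle flushing"             # plenty of headroom

for pct in (30, 70, 90, 100):
    print(pct, cache_state(pct))
```

In this model the watermarks exist precisely to keep the cache out of the forced-flush state: flushing starts early enough that, under a healthy workload, the fill level never reaches 100%.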
“Check your email ;)”. That was the first Twitter DM I read one sleepy morning in June. Suffice it to say, a minute later I was wide awake: I had been chosen to represent the EMC Elect at the EMC “Redefine Possible” MegaLaunch event in London (UK)! I knew about these launch events because my colleague Rob attended one last year in Milan. Excitement started building, and a couple of hours later I figured out I wasn’t going alone…
July 8th, 2014. EMC MegaLaunch 4. Theme: Redefine Possible (or #RedefinePossible on Twitter). What previously was impossible is now possible! A catchy theme, and something we’ve seen in IT a number of times now. For example: in the 1990s, who would have thought it possible to migrate a server from one datacenter to another, possibly a couple of miles away, without downtime, in a couple of seconds?! Doing things fundamentally differently, better: that’s the goal we’re always trying to achieve. So how can we apply this to Isilon?
In September 2013 EMC announced the new-generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. Apart from the vast performance increase there’s also a boatload of new features: deduplication, active-active LUNs, smaller (256MB) chunks for FAST VP, persistent hot spares, etc. Read more about that in my previous post.
It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess who claimed the installation work: me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone had stolen my standby power supplies (SPS)!
EMC World 2014 is back in town, this time with the REDEFINE punchline. After some logistical challenges getting here, the show is on the road: general sessions, breakout sessions, hands-on labs (HOL). So what’s up with the REDEFINE punchline? What are we redefining in IT and data infrastructure? And what are the EMC Elect doing at EMC World when not flooding your Twitter feed?
Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED programs during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. In early 2013 this changed and EMC partners could also enroll. Too bad I ran out of time with a mountain of projects and other training material… up until now! So what does it mean when someone is USPEED certified?