Once upon a time there was a data center filled with racks of physical servers. Thanks to hypervisors such as VMware ESX it was possible to virtualize these systems and run them as virtual machines, using less hardware. This had a lot of advantages in terms of compute efficiency, ease of management and deployment/DR agility.
To enable many of the hypervisor features such as VMotion, HA and DRS, the data of the virtual machine had to be located on a shared storage system. This had an extra benefit: it’s easier to hand out pieces of a big pool of shared storage than to predict capacity requirements for hundreds of individual servers. Some servers might need a lot of capacity (file servers); some might need just enough for an OS and maybe a web server application. This meant that the move to centralized storage was also beneficial from a capacity allocation perspective.
I can’t recall the last storage system installation that didn’t have at least some solid-state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get their paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid-state drives need to achieve even lower latencies. Reducing the media latency alone won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
Earlier this year Nimble Storage announced their all-flash array called the Predictive Flash Platform; you can read my thoughts on the launch over here. InfoSight is one of the core components of that announcement, which is why we had the opportunity for a fireside chat with the Nimble Storage data science team. We discussed the workings of InfoSight & VMVision and how these translate into actual benefits for an owner of a Nimble Storage array. This post will also touch on some of the key points discussed during the subsequent Storage Field Day 10.
On the 23rd of February, Nimble Storage announced their new Predictive Flash platform as an extension of their current product portfolio. It uses the same trusted software, but leverages the speed of flash and advanced analytics to offer higher-performance storage. A customer expects data to be available instantly, without delays. Nimble Storage makes sure this is the case based on a three-pronged approach: high-density solid-state storage, cloud-based management and big data analytics to proactively solve issues before they cause a problem for the business.
Storage Field Day 6 covered presentations from 8 vendors spread across 3 days. On day 3 it was Nimble Storage and NEC’s turn to tell us about their products. The previous two days were pretty heavy on easy-to-use (and shiny!) graphical user interfaces and centered around the simplicity of providing storage to the application / app owner. Day three would continue on this theme with amazement and… a bit of a disappointment.
Storage Field Day 6, Day 2! Having recovered somewhat from yesterday’s program we all hopped in the limo and drove to Sunnyvale, CA. Breakfast would be served at Coho Data, who would also give the first presentation of the day. Second presenter of the day was Nexenta Systems, talking about their SDS solutions. Another limo ride later we arrived at Pure Storage, who would talk about their all-flash arrays (AFAs). The day would be concluded with some video games, dinner and a trip in a time machine. Let’s go!
Storage Field Day 6, Day 1! The day before was spent acclimatizing to the 9 hours of time travel, but on Wednesday it was finally about to begin. I had a general idea of what was waiting for me: first of all, it was going to be an exhausting, challenging but also fun week, and also: 99% of Americans like stroopwafels. So armed with a bag of them I headed to the SFD6 breakfast briefing. Stephen laid out the ground rules, explained to the newbies what was about to happen, and off we went to the first vendors of the week: Avere, StorMagic and Tegile.
Not all data is accessed equally. Some data is popular, while other data may only be accessed infrequently. With the introduction of FAST VP in the CX4 & VNX series it is possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices, and each slice is assigned a temperature based on the activity of that slice. Heavily accessed slices are hot; infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, etc… This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs and your end-users won’t know a thing. There’s one scenario which will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
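To make the relocation idea concrete, here’s a minimal sketch of the hottest-first placement described above. This is purely illustrative pseudologic, not EMC’s actual FAST VP implementation: the tier names, slice temperatures and capacities are made-up assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    capacity_slices: int              # how many slices this tier can hold
    slices: list = field(default_factory=list)

def relocate(slices, tiers):
    """Rank slices by temperature and fill tiers fastest-first.

    slices: list of (slice_id, temperature) tuples
    tiers:  list of Tier objects, ordered fastest -> slowest
    """
    for tier in tiers:
        tier.slices.clear()
    # Hottest slices first, mirroring FAST VP's "hot goes to the fast tier" rule
    ranked = iter(sorted(slices, key=lambda s: s[1], reverse=True))
    for tier in tiers:
        for _ in range(tier.capacity_slices):
            s = next(ranked, None)
            if s is None:
                return tiers          # all slices placed
            tier.slices.append(s)
    return tiers

# Hypothetical pool: a small SSD tier, a SAS tier and a big NL-SAS tier
tiers = [Tier("SSD", 2), Tier("SAS", 3), Tier("NL-SAS", 10)]
slices = [("a", 90), ("b", 10), ("c", 75), ("d", 50), ("e", 5), ("f", 60)]
relocate(slices, tiers)
for t in tiers:
    print(t.name, [sid for sid, _ in t.slices])
```

Running this places the two hottest slices on SSD, the next three on SAS, and the coldest on NL-SAS, which is exactly why the infrequent-but-heavy access pattern hurts: a formerly cold slice sits on NL-SAS until the next relocation cycle catches up with its new temperature.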
Last year a little bird told me about Tech Field Day: a big knowledge-sharing event where multiple vendors inform a select group of people, or delegates, about what’s hot, new or revolutionary in their field or lineup. There are multiple types of Field Days: networking, virtualization and (of course) storage. Storage Field Day 6 will be held on 5-7 November 2014 in Silicon Valley and… WOOHOOO, I’m attending!
Among the presenters for this year are Avere Systems, Coho Data, NEC Storage, Nexenta Systems, Nimble Storage, Pure Storage, StorMagic and Tegile. I’ll be honest: I’ve only heard of a couple of these names, so I’ve got a bit of reading up to do in the next couple of days and I can’t wait to hear what they bring to the table.
I’d like to thank the Tech Field Day team (Stephen, Claire and Tom) for inviting me and flying me to San Jose next week. I’m looking forward to an exciting and challenging week filled with tech and discussions. Keep an eye on the Storage Field Day 6 page for the live stream of the event and this page for updates during the week! And maybe I’ll sneak in a picture of the Golden Gate bridge if I get the chance… 😉
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array will have a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. To bridge the performance gap there are solid-state drives, sitting somewhere between DRAM and spinning disks both performance-wise and cost-wise. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing a LUN on SSDs might not drive your SSDs hard enough to get the most from your investment. If only there were software that made sure only the hottest data was placed on those SSDs, and that quickly adjusted this data placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
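The “track hot data and pull it onto SSD” idea above can be sketched as a simple promotion policy. This is a toy model under assumptions of my own (a fixed promotion threshold, count-based eviction), not the actual FAST Cache internals:

```python
from collections import Counter

class FlashCache:
    """Toy FAST Cache-style policy: promote a chunk to SSD once it has been
    accessed often enough; evict the coldest resident chunk when full."""

    def __init__(self, capacity, promote_threshold=3):
        self.capacity = capacity          # max chunks resident on SSD
        self.threshold = promote_threshold
        self.hits = Counter()             # access count per chunk id
        self.resident = set()             # chunks currently on SSD

    def access(self, chunk):
        self.hits[chunk] += 1
        if chunk in self.resident:
            return "ssd-hit"
        if self.hits[chunk] >= self.threshold:
            if len(self.resident) >= self.capacity:
                # make room: drop the resident chunk with the fewest accesses
                coldest = min(self.resident, key=lambda c: self.hits[c])
                self.resident.discard(coldest)
            self.resident.add(chunk)
            return "promoted"
        return "disk-hit"

cache = FlashCache(capacity=2)
print(cache.access("A"))   # served from disk, counter ticks up
print(cache.access("A"))
print(cache.access("A"))   # hot enough now: copied to SSD
print(cache.access("A"))   # subsequent reads come from flash
```

The point of the model is the self-adjusting part: as the workload shifts, newly hot chunks displace chunks that have gone cold, so the limited SSD capacity keeps chasing the active working set.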