The first SSDs in our storage arrays were advertised at 2500-3500 IOPS per drive. Much quicker than spinning media: the recommended planning figure for a 10k SAS drive was around 140 IOPS. Yet it was still surprisingly easy to overload a set of SSDs and hit their maximum throughput, especially when they were used in an (undersized) caching tier.
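As a quick back-of-the-envelope check on those numbers (using the conservative end of the advertised range), this is roughly how many 10k SAS drives a single early SSD replaced, IOPS-wise:

```python
# Rough comparison of the per-drive IOPS figures mentioned above:
# 2500-3500 for early SSDs, ~140 as the planning figure for a 10k SAS drive.
ssd_iops = 2500          # conservative end of the advertised SSD range
sas_10k_iops = 140       # recommended figure for a 10k SAS drive

drives_to_match = ssd_iops / sas_10k_iops
print(f"One early SSD ~ {drives_to_match:.0f} x 10k SAS drives")  # ~18 drives
```

Which also explains why an undersized flash tier got hammered: one SSD was expected to absorb the work of a whole shelf of spindles.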
A year or so later, when you started adding more flash to a system, the collective "Oomph!" of the flash drives would overload other components in the storage system. These systems had been designed around spinning media, so with the suddenly much faster drives, buses and CPUs were hammered and couldn't keep up.
Cue all sorts of creative ways to avoid this bottleneck: faster CPUs, upgrades from FC to multi-lane SAS, or bigger architectural changes such as offloading to IO caching cards in the servers themselves (e.g. Fusion-io cards), scale-out systems, etc.
2019 – NVMe and 3D NAND SSDs: speed and capacity
Fast forward to 2019 and the first day of Storage Field Day 18. At these Storage Field Days there are usually one or two themes for the week. This time around, one of those themes was NVMe. We talked with several companies that built their systems from the ground up around this technology. All of them showed impressive performance numbers in synthetic benchmarks: tens of GBs per second of throughput and millions of IOPS, with latencies in the low microseconds.
And it's not just performance: capacity is also rapidly increasing with multi-layer 3D NAND flash, while prices are plummeting. So quickly, in fact, that VAST Data (one of the presenters) was bold enough to announce themselves as an "extinction level event for traditional spinning disks". And indeed, they do some really interesting things with a lot of low-cost flash and some 3D XPoint non-volatile memory to keep it all performing well. (But more on that in a different blog post.)
A little while later, WD told us quite a bit about their 3D NAND Flash production and how 96-layer 3D NAND is just the first commercial step, with 1xxL 3D NAND already on the roadmap.
What is interesting to me is that much of the discussion is moving away from pure IOPS numbers. Sure, there are still some "speeds and feeds" discussions, but most of the conversation in the room was instead about (geographical) placement of data, or about how lower storage media latency has a positive impact on the other components in the IT stack: enabling more efficient use of compute resources, for example, or higher business revenue thanks to quicker page loads.
And I recognize this in my own day-to-day job as well. When I started out, I was calculating IOPS all the time, making sure that the starting component of each MetaLUN was nicely distributed across the available RAID groups. Nowadays you just shove some SSDs in a box and that is fast enough for general-purpose workloads. We now worry more about data reduction (and thus placement of data) than about the number of IOPS a disk can deliver.
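For nostalgia's sake, the kind of sizing math this involved can be sketched as follows. The RAID write penalties are the classic textbook values; the workload figures are made up purely for illustration:

```python
# Back-of-the-envelope back-end IOPS sizing, as storage admins used to do it.
# Classic RAID write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6);
# workload numbers below are illustrative assumptions, not a real system.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(frontend_iops: int, read_pct: float, raid: str) -> float:
    """Translate host (front-end) IOPS into disk (back-end) IOPS."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + writes * RAID_WRITE_PENALTY[raid]

# Example: 5000 host IOPS, 70% reads, on RAID 5:
need = backend_iops(5000, 0.70, "raid5")   # 3500 + 1500*4 = 9500
disks = need / 140                          # at ~140 IOPS per 10k SAS drive
print(f"{need:.0f} back-end IOPS -> ~{disks:.0f} drives")
```

That whole exercise, repeated per RAID group and per LUN, is exactly the work that modern systems have automated away.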
Bottlenecks are also appearing outside the storage box. We've seen several products that can fill a 100 Gbps Ethernet link without breaking a sweat. If you want to install such a system in your datacenter, you will absolutely need to sit down with the networking team and talk it through. Just shouting "it's a problem for the network team" will not work. There needs to be bidirectional communication between the storage/data admins and the network admins (and other teams, for that matter).
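To put that in perspective, here is a rough sketch of how few drives it takes to saturate such a link. The ~3 GB/s of sequential read per NVMe SSD is an assumption on my part for illustration, not a vendor spec:

```python
# How few NVMe drives it takes to fill a 100 Gbps Ethernet link.
# Assumption: ~3 GB/s sequential read per NVMe SSD (illustrative figure).
link_gbps = 100
link_gbytes_per_s = link_gbps / 8          # 100 Gbps ~= 12.5 GB/s
drive_gbytes_per_s = 3.0

drives_to_saturate = link_gbytes_per_s / drive_gbytes_per_s
print(f"~{drives_to_saturate:.1f} drives saturate a {link_gbps} Gbps link")
```

In other words: a handful of drives out of a fully populated shelf is already enough to make the network the bottleneck, which is why that conversation with the networking team is unavoidable.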
My thoughts on the changing storage industry
NVMe (the storage protocol) is going to profoundly change how we address flash media. It’s tackling many of the command queue challenges we’ve previously had in FC/SAS/SCSI, as we saw in a previous presentation from SNIA. At the same time, solid state capacities and durability will continue to increase, while costs will decrease. So VAST Data’s claim that “HDD is dead” holds some merit. And I for sure hope it becomes a reality at some point.
At the same time though, WD is confident that HDDs will stay around for many years to come. And I must agree: the developments in HDDs are still strong, with techniques such as energy-assisted and shingled magnetic recording. HDDs currently still win from a € per raw TB perspective. You will need some pretty solid data reduction to bridge that price gap. And tape is still around too…
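To illustrate what "bridging that price gap" means: the effective price per TB is simply the raw price divided by the achieved reduction ratio. The prices below are purely illustrative assumptions, not actual street prices:

```python
# Effective cost per TB after data reduction. All prices are made-up
# illustrative figures, not real HDD/SSD pricing.
def effective_price_per_tb(raw_price_per_tb: float, reduction_ratio: float) -> float:
    return raw_price_per_tb / reduction_ratio

hdd_raw = 25.0    # assumed € per raw TB for HDD
ssd_raw = 100.0   # assumed € per raw TB for flash

# The reduction ratio flash needs just to reach HDD price parity:
required_ratio = ssd_raw / hdd_raw
print(f"Flash needs {required_ratio:.0f}:1 reduction to match HDD €/TB")
```

With a 4x raw price difference, flash needs a sustained 4:1 reduction ratio on your data just to break even, which not every workload (media, pre-compressed, or encrypted data) will deliver.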
I do think that SSDs are now so fast that they are completely changing the knowledge and tasks of a storage admin. Storage-specific skills and knowledge are not going away! Someone still needs to recognize the different types of flash, how they rank on the Drive Writes Per Day (DWPD) scale, and which data reduction techniques work best. At the same time though, we're not juggling disk IOPS or data placement across RAID groups anymore; systems do that automatically. That frees up our time to talk about adding value to the business, about what to do with all this data, or to help out other teams in IT. Maybe that's why the traditional storage admin is dying?
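As an example of that flash-ranking knowledge: the writes-per-day rating (commonly quoted as DWPD, Drive Writes Per Day) translates into total endurance over the warranty period with some quick math. The capacity and warranty figures below are assumptions for illustration:

```python
# Drive Writes Per Day (DWPD) to total lifetime writes (TBW): the quick
# math behind ranking flash types. Capacity/warranty figures are assumptions.
def tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total TB written over the warranty period at the rated DWPD."""
    return capacity_tb * dwpd * warranty_years * 365

# A hypothetical 3.84 TB read-intensive drive (1 DWPD, 5-year warranty):
print(f"{tbw(3.84, 1):.0f} TB written over its life")  # ~7008 TB
```

A write-intensive drive rated at 3 DWPD would triple that figure, which is exactly the kind of distinction a storage admin still needs to match to the workload.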
Check out the presentations from Storage Field Day 18 over here.
Disclaimer: I wouldn’t have been able to attend Storage Field Day 18 without GestaltIT picking up the tab for the flights, hotel and various other expenses like food. I was however not compensated for my time and there is no requirement to blog or tweet about any of the presentations. Everything I post is of my own accord and because I like what I see and hear.
WD handed each delegate a 1TB WD Black SN750 NVMe M.2 SSD, which I will install in my PC as a system drive. I expect it will not increase my blogging speed, although my games will probably load much quicker. This does not influence my opinion on the eventual death of the HDD, many years from now.