flash

23 posts

Put all your data on flash with VAST Data

Roughly 6-7 years ago (around 2012), flash storage became affordable as a performance tier. At least, for the companies I was visiting. It was the typical “flash tier” story: buy 1-2% of flash capacity to speed everything up. All-flash storage systems were still far off in the future for them. They existed, and they were incredibly fast, but they also drove the €/GB price up too far, out of their reach.

However, in the background you could already hear the drums: it is going to be an all-flash future! Not just for performance, but also for capacity/archive storage. In fact, one of the people beating that drum was my colleague Rob. I vividly recall our “not yet!” discussions…

And it makes sense. Solid-state drives are:

  • More reliable: there are no moving parts in SSDs, and media failures are easier to correct with software/design.
  • More power efficient at rest: there is no little motor to keep platters spinning 24/7.
  • Faster: the number of heads and the rotational speed of the platters limit a hard drive’s performance. Not so with flash!

They are still quite expensive, looking at €/TB. Fortunately, cost is coming down too. Over the last year or two, all-flash arrays have taken flight in general-purpose workloads. Personally, I have not installed a traditional tiered SAN storage system in over a year. Hyper-converged infrastructure: same story, all flash. The development of newer, cheaper types of QLC flash only helps close the gap in €/GB between HDD and SSD. But there is still a 20x gap. And one company we met at Storage Field Day 18 has a pretty solid plan to bridge that gap: VAST Data.

Continue reading

PSA: Isilon L3 cache does not enable with a 1:1 HDD:SSD ratio

I recently expanded two 3-node Isilon X210 clusters with one additional X210 node each. The clusters were previously installed with OneFS 7.x and upgraded to OneFS 8.1.0.4 somewhere in late 2018. A local team racked and cabled the new Isilon nodes, after which I added them to the cluster remotely via the GUI. Talk about teamwork!

A little while later, the new node showed up in the isi status command output. As you can see in the picture to the right, something was off: the SSD storage didn’t show up as Isilon L3 cache. A quick check showed that the hardware configuration was consistent with the existing nodes. The SmartPools settings/default policy was also set up correctly, with SSDs employed as L3 cache. Weird…

Continue reading

Faster and bigger SSDs let us talk about something other than IOps

Bus overload on an old storage array after adding a few SSDs

The first SSDs in our storage arrays were advertised at 2500-3500 IOps per drive. Much quicker than spinning drives, looking at the recommended 140 IOps for a 10k SAS drive. But it was in fact still easy to overload a set of SSDs and reach their max throughput, especially when they were used in an (undersized) caching tier.

A year or so later, when you started adding more flash to a system, the collective “Oomph!” of the flash drives would overload other components in the storage system. Systems were designed around spinning media, so with the suddenly faster media, buses and CPUs were hammered and couldn’t keep up.
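To put some rough numbers on that, here is a back-of-the-envelope sketch in Python. Only the IOps figures come from the paragraphs above; the block size, the 8 Gb/s backend link and the overhead factor are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope sketch: how quickly a handful of SSDs can saturate a
# backend bus that was sized for spinning media. Only the IOps figures come
# from the text above; block size, link speed and overhead are assumptions.

SSD_IOPS = 3000      # middle of the 2500-3500 IOps advertised per SSD
HDD_IOPS = 140       # recommended planning figure for a 10k SAS drive
BLOCK_KB = 32        # assumed average I/O size for a mixed workload
LINK_GBIT = 8        # assumed backend link, e.g. a single 8 Gb/s FC port

def throughput_mb_s(iops: int, block_kb: int) -> float:
    """Throughput of one drive at a given IOps rate and block size."""
    return iops * block_kb / 1024

ssd_mb_s = throughput_mb_s(SSD_IOPS, BLOCK_KB)   # ~94 MB/s per SSD
hdd_mb_s = throughput_mb_s(HDD_IOPS, BLOCK_KB)   # ~4 MB/s per HDD
link_mb_s = LINK_GBIT * 1000 / 8 * 0.9           # ~900 MB/s usable after overhead

print(f"One SSD delivers the IOps of ~{SSD_IOPS // HDD_IOPS} SAS drives")
print(f"SSDs needed to fill the backend link: ~{link_mb_s / ssd_mb_s:.0f}")
print(f"HDDs needed to fill the same link:    ~{link_mb_s / hdd_mb_s:.0f}")
```

In other words: a backend link that would never be the bottleneck behind a couple of hundred spinning drives can be flooded by roughly ten SSDs.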

Cue all sorts of creative ways to avoid this bottleneck: faster CPUs, upgrades from FC to multi-lane SAS. Or bigger architectural changes, such as offloading to IO caching cards in the servers themselves (e.g. Fusion-io cards), scale-out systems, etc.

Continue reading

Intel SPDK and NVMe-oF will accelerate NVMe adoption rates

Once upon a time there was a data center filled with racks of physical servers. Thanks to hypervisors such as VMware ESX, it was possible to virtualize these systems and run them as virtual machines, using less hardware. This had a lot of advantages in terms of compute efficiency, ease of management and deployment/DR agility.

To enable many of the hypervisor features such as vMotion, HA and DRS, the data of the virtual machine had to be located on a shared storage system. This had an extra benefit: it’s easier to hand out pieces of a big pool of shared storage than to predict capacity requirements for hundreds of individual servers. Some servers might need a lot of capacity (file servers), while others might need just enough for an OS and maybe a web server application. This meant that the move to centralized storage was also beneficial from a capacity allocation perspective.
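As a rough illustration of that capacity argument, here is a small Python sketch. Every server count and capacity figure in it is made up for the example; the point is only the difference between sizing each server for its worst case and sizing one shared pool for the aggregate.

```python
# Hypothetical illustration of why a shared pool is easier to size than
# per-server capacity planning. All numbers are invented for this example.
import random

random.seed(1)
NUM_SERVERS = 100

# Eventual usage per server is uncertain: most need little, a few need a lot.
usage_gb = [random.choice([40, 60, 80, 120, 400]) for _ in range(NUM_SERVERS)]

# Per-server planning: without knowing which servers will grow, each one gets
# local disks sized for the worst case you expect to see.
per_server_total_gb = NUM_SERVERS * max(usage_gb)

# Pooled planning: size the shared array for the aggregate demand plus some
# headroom, and hand out capacity from the pool as servers actually need it.
pooled_total_gb = sum(usage_gb) * 1.25

print(f"Per-server worst-case provisioning: {per_server_total_gb / 1024:5.1f} TB")
print(f"Shared pool with 25% headroom:      {pooled_total_gb / 1024:5.1f} TB")
```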

Continue reading

Hardware has set the pace for latency, time for software to catch up

I can’t recall the last storage system installation that didn’t have at least some solid-state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get his paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.

To reduce application wait times even further, solid-state drives need to achieve even lower latencies. But just reducing the media latency won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
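A simple latency budget shows why. The media latencies in this Python sketch are rough ballpark figures and the fixed software overhead is an assumption, but the trend is what matters: as the media gets faster, the software stack becomes the dominant share of every I/O.

```python
# Rough latency budget per I/O, assuming a fixed software/stack overhead.
# The media latencies are ballpark figures for illustration, not benchmarks.

SOFTWARE_US = 30  # assumed time spent in the OS, driver and storage stack

media_latency_us = {
    "10k RPM HDD":    5000,   # dominated by seek + rotational delay
    "SATA NAND SSD":    90,
    "NVMe NAND SSD":    20,
}

for media, media_us in media_latency_us.items():
    total_us = media_us + SOFTWARE_US
    software_share = SOFTWARE_US / total_us * 100
    print(f"{media:<14} ~{total_us:>5} us total, software is {software_share:4.1f}% of it")
```

With spinning media the stack overhead disappears in the noise; with NVMe it can account for more than half of the response time, and that software share is exactly what SPDK goes after.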

Continue reading