solid-state

23 posts

Put all your data on flash with VAST Data

Roughly 6-7 years ago (around 2012), flash storage became affordable as a performance tier, at least for the companies I was visiting. It was the typical “flash tier” story: buy 1-2% of flash capacity to speed everything up. All-flash storage systems were still far in the future for them. They existed, and they were incredibly fast, but they also drove the €/GB price up and out of reach.

However, in the background you could already hear the drums: it is going to be an all-flash future! Not just for performance, but also for capacity/archive storage. In fact, one of the people beating that drum was my colleague Rob. I vividly recall our “not yet!” discussions…

And it makes sense. Solid-state drives are:

  • More reliable: there are no moving parts in SSDs, and media failures are easier to correct with software/design.
  • More efficient: power consumption at rest is very low, since there is no motor keeping the platters spinning 24/7.
  • Faster: the number of heads and the rotational speed of the platters limit a hard drive’s performance. Not so with flash!

They are still quite expensive in terms of €/GB. Fortunately, that cost is coming down too. Over the last year or two, all-flash arrays have taken flight in general-purpose workloads. Personally, I have not installed a traditional tiered SAN storage system in over a year. Hyper-converged infrastructure: same story, all flash. The development of newer, cheaper types of QLC flash only helps close the €/GB gap between HDD and SSD. But there is still a 20x gap. And one company we met at Storage Field Day 18 has a pretty solid plan to bridge that gap: VAST Data.

Continue reading

How To: Clone Windows 10 from SATA SSD to M.2 SSD (& fix inaccessible boot device)

A few weeks ago I received a 1TB Western Digital Black SN750 M.2 SSD, boasting an impressive 3470 MB/s read speed on the packaging. I already had a SATA SSD installed in my gaming/photo editing PC. Nevertheless, those specs got me to pick up a screwdriver and install the new M.2 SSD. The physical installation is dead simple: remove the graphics card, install the M.2 SSD, reinstall the graphics card. I wasn’t really looking forward to a full reinstallation of Windows 10 though. There are just too many applications, settings and licenses on that system that I didn’t want to recreate or re-enter. Instead, I wanted to clone Windows 10 from the SATA SSD to the M.2 SSD.

After a bit of research, I ended up with Macrium Reflect, which is freeware disk cloning software. Long story short: I cloned the old SSD to the M.2 SSD, rebooted from the M.2 SSD, and… was greeted with a variety of errors. The main recurring error was Inaccessible Boot Device, though in my troubleshooting attempts I saw many more.

Continue reading

Faster and bigger SSDs enable us to talk about something other than IOps

Bus overload on an old storage array after adding a few SSDs

The first SSDs in our storage arrays were advertised with 2500-3500 IOps per drive. Much quicker than spinning drives, compared to the recommended 140 IOps for a 10k SAS drive. But it was in fact still easy to overload a set of SSDs and reach their maximum throughput, especially when they were used in an (undersized) caching tier.

A year or so later, when you started adding more flash to a system, the collective “Oomph!” of the flash drives would overload other components in the storage system. Systems were designed around spinning media, so with the suddenly faster media, buses and CPUs were hammered and couldn’t keep up.

Cue all sorts of creative ways to avoid this bottleneck: faster CPUs, upgrades from FC to multi-lane SAS. Or bigger architectural changes, such as offloading to IO caching cards in the servers themselves (e.g. Fusion-io cards), scale-out systems, etc.
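To put some numbers to that bottleneck, here is a back-of-the-envelope sketch. The drive IOps figures come from the post above; the 64 KB IO size and the roughly 4 Gb/s of usable back-end bandwidth are assumptions purely for illustration. With those assumptions it takes around 45 spinning 10k SAS drives to saturate the back end, but only a couple of SSDs.

```python
# Back-of-the-envelope sketch with hypothetical numbers: how few SSDs it takes
# to saturate a legacy back-end bus that was sized for spinning disks.

SAS_10K_IOPS = 140              # rule-of-thumb IOps for a 10k SAS drive
SSD_IOPS = 3000                 # early SSDs: advertised 2500-3500 IOps per drive
IO_SIZE_KB = 64                 # assumed average IO size
BUS_MB_S = 4 * 1000 / 8 * 0.8   # assumed ~4 Gb/s back-end loop, ~80% usable

def drives_to_saturate(iops_per_drive):
    mb_s_per_drive = iops_per_drive * IO_SIZE_KB / 1000
    return BUS_MB_S / mb_s_per_drive

print(f"10k SAS drives needed to fill the bus: {drives_to_saturate(SAS_10K_IOPS):5.1f}")
print(f"SSDs needed to fill the bus:           {drives_to_saturate(SSD_IOPS):5.1f}")
```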

Continue reading

NVMe and NVMe-oF 101 with SNIA: queues everywhere!

Dr. J. Metz talked with us about NVMe at Storage Field Day 16 in Boston. NVMe is rapidly becoming one of the new hype topics in the storage infrastructure market. A few years ago, everything was cloud. Vendors now go out of their way to mention that their array contains NVMe storage, or is at the very least ready for it. So should you care? And if so, why?

SNIA’s mission is to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement, and security of information. They do that in a number of ways: standards development and adoption for one, but also through interoperability testing (a.k.a. plugfest). They aim to help in technology acceleration and promotion: solving current problems with new technologies. So NVMe-oF fits this mission well: it’s a relatively new technology, and it can solve some of the queuing problems we’re seeing in storage nowadays. Let’s dive in!
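To give a feel for the “queues everywhere” part, here is a tiny sketch comparing the commonly quoted spec limits for outstanding commands per device. These are the protocol maximums; real devices typically expose far fewer queues than the NVMe ceiling.

```python
# Commonly quoted spec limits for outstanding commands per device; real
# devices typically expose far fewer queues than the NVMe maximum.

protocols = {
    "AHCI/SATA": (1, 32),           # 1 queue, 32 commands deep (NCQ)
    "SAS":       (1, 254),          # 1 queue, ~254 commands deep
    "NVMe":      (65_535, 65_536),  # up to ~64K I/O queues, 64K commands each
}

for name, (queues, depth) in protocols.items():
    print(f"{name:10s}: {queues:>6,} queue(s) x {depth:>6,} commands "
          f"= {queues * depth:>13,} outstanding I/Os")
```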

Continue reading

XtremIO X2: easier scaling, fewer cables and metadata aware replication

The Dell EMC High-End Systems Division talked about two systems: first the VMAX All Flash, and later the XtremIO X2. This post is about the latter. The XtremIO X2 builds on the foundation of the original “old” XtremIO, but also does a couple of things differently. This post will explore those differences a bit, and will also talk about asynchronous and synchronous replication.

Continue reading

Excelero NVMesh: lightning fast software-defined storage using commodity servers & NVMe drives

Excelero Storage launched their NVMesh product back in March 2017 at Storage Field Day 12. NVMesh is a software-defined storage solution using commodity servers and NVMe devices. Using NVMesh and the Excelero RDDA protocol, we saw some mind-blowing performance numbers, both in raw IOps and in latency, while keeping hardware and licensing costs low.

Continue reading

Hardware has set the pace for latency, time for software to catch up

I can’t recall the last storage system installation that didn’t have some amount of solid state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get his paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.

To reduce application wait times even further, solid-state drives need to achieve even lower latencies. Just reducing the media latency won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
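A rough illustration of why the software side matters, using hypothetical numbers: the fixed software overhead in the I/O path is negligible next to a hard drive’s seek time, but it dominates once the media itself responds in microseconds.

```python
# Hypothetical latency budget: a fixed software overhead matters little for a
# hard drive, but dominates once the media itself responds in microseconds.

SOFTWARE_US = 20  # assumed block layer + driver overhead, in microseconds

for label, media_us in [("15k HDD", 5_000), ("SATA SSD", 100),
                        ("NVMe SSD", 20), ("next-gen NVM", 5)]:
    total = media_us + SOFTWARE_US
    share = SOFTWARE_US / total * 100
    print(f"{label:12s}: {media_us:>5} us media + {SOFTWARE_US} us software "
          f"-> software is {share:4.1f}% of the total")
```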

Continue reading

NexGen Storage: All-Flash Arrays can be hybrids too!

NexGen has been building hybrid storage systems for several years: systems with spinning disks for capacity and flash for performance. That skill set will not go away with the onset of all-flash arrays. There are many types of flash available, and each type of non-volatile memory has its advantages and disadvantages in capacity, performance, cost, power draw, etc. Mixing those characteristics properly inside one array allows a vendor to leverage the strengths of each technology. Say hi to the hybrid all-flash array!

Continue reading

FAST VP: Let it do its job!

Not all data is accessed equally. Some data is popular, while other data may only be accessed infrequently. With the introduction of FAST VP in the CX4 and VNX series, it is possible to create a single storage pool with multiple types of drives. The system chops your LUNs into slices, and each slice is assigned a temperature based on the activity of that slice. Heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second-fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end-users won’t notice a thing. There is one scenario that will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
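For illustration only (this is a minimal sketch, not EMC’s actual FAST VP algorithm or its real slice sizes), temperature-based slice placement could look something like this: rank the slices from hot to cold, fill the fastest tier first, and let the overflow spill into the next tier.

```python
# Minimal sketch of temperature-based slice placement (illustration only, not
# EMC's actual FAST VP algorithm): hottest slices land on the fastest tier,
# the overflow spills to the next tier, and so on.

def place_slices(slice_temps, tiers):
    """slice_temps: {slice_id: temperature}; tiers: [(name, capacity_in_slices)]"""
    ranked = sorted(slice_temps, key=slice_temps.get, reverse=True)  # hottest first
    placement = {}
    for tier_name, capacity in tiers:
        for slice_id in ranked[:capacity]:
            placement[slice_id] = tier_name
        ranked = ranked[capacity:]
    return placement

slices = {"s1": 95, "s2": 10, "s3": 60, "s4": 5, "s5": 80}
tiers = [("SSD", 1), ("10k SAS", 2), ("NL-SAS", 10)]
print(place_slices(slices, tiers))
# {'s1': 'SSD', 's5': '10k SAS', 's3': '10k SAS', 's2': 'NL-SAS', 's4': 'NL-SAS'}
```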

Continue reading

VNX2 MCx FAST Cache improvements

MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array will have a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. To bridge the performance gap there are solid-state drives, which sit somewhere between DRAM and spinning disks both performance-wise and cost-wise. There is one problem though: a LUN usually isn’t 100% active all of the time. This means that placing a LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that made sure only the hottest data was placed on those SSDs, and that quickly adjusted this placement as the workload changed, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack that has less overhead, resulting in better write performance.
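As a rough sketch of the idea (hypothetical code, not the actual MCx implementation; the promotion threshold and chunk naming are assumptions), a promote-after-repeated-access flash cache could look like this: a chunk keeps being served from spinning disk until it has been hit a few times, after which it is copied to SSD and served from there.

```python
# Rough sketch of a promote-after-repeated-access flash cache (hypothetical,
# not the actual MCx implementation): a chunk is copied to SSD only after it
# has been read from spinning disk a few times.

from collections import defaultdict

PROMOTE_AFTER_HITS = 3   # assumed promotion threshold

class FlashCache:
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.cached = set()
        self.hits = defaultdict(int)

    def read(self, chunk_id):
        if chunk_id in self.cached:
            return "served from SSD cache"
        self.hits[chunk_id] += 1
        if self.hits[chunk_id] >= PROMOTE_AFTER_HITS and len(self.cached) < self.capacity:
            self.cached.add(chunk_id)        # warm chunk promoted to flash
        return "served from spinning disk"

cache = FlashCache(capacity_chunks=64)
for _ in range(4):
    print(cache.read("chunk-42"))   # spinning disk x3, then the SSD cache
```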

Continue reading