Storage Field Day


NVMe and NVMe-oF 101 with SNIA: queues everywhere!

Dr. J. Metz talked with us about NVMe at Storage Field Day 16 in Boston. NVMe is rapidly becoming one of the new hype topics in the storage infrastructure market; a few years ago everything was cloud, and now vendors go out of their way to mention that their array contains NVMe storage, or is at the very least ready for it. So should you care? And if so, why?

SNIA’s mission is to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards, and educational services that facilitate the efficient management, movement, and security of information. They do that in a number of ways: standards development and adoption for one, but also interoperability testing (a.k.a. plugfests). They also aim to accelerate and promote technology: solving current problems with new technologies. So NVMe-oF fits this mission well: it’s a relatively new technology, and it can solve some of the queuing problems we’re seeing in storage today. Let’s dive in!
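To give a sense of why the session was subtitled “queues everywhere”: NVMe allows vastly more parallelism than the single-queue AHCI/SATA model it replaces. A quick back-of-the-envelope comparison using the commonly cited spec maximums (a sketch, not a benchmark):

```python
# Commonly cited spec maximums; illustrative, not measured hardware behavior.
PROTOCOL_QUEUES = {
    # protocol: (max queues, max commands per queue)
    "AHCI/SATA": (1, 32),
    "NVMe": (65_535, 65_536),
}

for protocol, (queues, depth) in PROTOCOL_QUEUES.items():
    print(f"{protocol}: {queues:,} queue(s) x {depth:,} deep "
          f"= {queues * depth:,} outstanding commands")
```

That works out to roughly four billion outstanding commands for NVMe versus 32 for AHCI, and NVMe-oF extends those queue pairs across a network fabric.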

Continue reading

CloudIQ is looking out for your storage system’s health

A few weeks ago we visited Dell EMC in Boston for Storage Field Day 16, where Susan Sharpe presented CloudIQ to us. If you’re unfamiliar with CloudIQ: it keeps track of your storage system’s performance, health, and capacity, and notifies you of any anomalies. If you’ve got a Dell EMC Unity storage system, you can already use it for free. It’s also being actively developed, so expect many new features to come into production over time!
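CloudIQ’s actual analytics model isn’t covered in this teaser, but to illustrate the kind of check such a tool performs, here’s a minimal anomaly-detection sketch: compare a fresh metric sample against its own history and flag it when it drifts too far. All numbers and the three-sigma threshold are assumptions for illustration.

```python
# Illustrative only: flag a latency sample as anomalous when it sits more
# than three standard deviations from the historical mean (a z-score test).
from statistics import mean, stdev

history_ms = [0.8, 0.9, 1.1, 1.0, 0.9, 1.2, 1.0, 0.8]  # past latency samples
sample_ms = 4.5                                          # latest observation

mu, sigma = mean(history_ms), stdev(history_ms)
z = (sample_ms - mu) / sigma
if abs(z) > 3:
    print(f"Anomaly: {sample_ms} ms latency (z-score {z:.1f})")
```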

Continue reading

I’m shipping up to Boston for Storage Field Day 16

Yes, I’m sorry about the title too. But I’m also glad to announce I’m shipping up to Boston for Storage Field Day 16 this week! Just ignore the fact that I’m not on a ship but on a train for now, and all should be well… Next stop is AMS, then a direct flight to BOS. It’s going to be a slightly shorter, two-day Storage Field Day this time around, but that doesn’t mean we’ll receive much less content!

Continue reading

XtremIO X2: easier scaling, fewer cables and metadata aware replication

The Dell EMC High-End Systems Division talked about two systems: first the VMAX All Flash, and later the XtremIO X2. This post is about the latter. The XtremIO X2 builds upon the foundation of the original “old” XtremIO, but also does a couple of things differently. This post will explore those differences a bit, and will also talk about asynchronous and synchronous replication.
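The “metadata aware replication” from the title deserves a quick illustration. Because XtremIO fingerprints every block for deduplication, replication can avoid shipping data the target already holds and send only metadata instead. Below is a conceptual sketch of that idea; the hash choice, data structures, and message flow are my own assumptions, not XtremIO’s actual wire protocol.

```python
# Conceptual dedupe-/metadata-aware replication: send a block's content
# fingerprint first; ship the full data only if the target lacks it.
import hashlib

target_store = {}    # fingerprint -> block data already on the target
target_volume = []   # replicated volume as an ordered list of fingerprints

def replicate_block(block: bytes) -> str:
    fp = hashlib.sha256(block).hexdigest()
    if fp not in target_store:        # unknown block: full data transfer
        target_store[fp] = block
        sent = len(block)
    else:                             # known block: metadata-only update
        sent = len(fp)
    target_volume.append(fp)
    return f"block {fp[:8]}: {sent} bytes on the wire"

print(replicate_block(b"A" * 8192))   # new block: 8192 bytes sent
print(replicate_block(b"A" * 8192))   # duplicate: fingerprint only
```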

Continue reading

VMAX All Flash: Enterprise reliability and SRDF at <1ms latency

Back in October we visited Dell EMC for a few Storage Field Day 14 presentations. Walking into the new EBC building, we bumped into two racks: one with a VMAX All Flash system and another with an XtremIO X2. Let’s kick off the Storage Field Day 14 report with the VMAX All Flash. There’s still a lot of thought going into this enterprise-class storage array…

Continue reading

Back to SFO for Storage Field Day 14!

Storage Field Day 14 is taking place next month, on 8-10 November, in Silicon Valley. After having to skip one Storage Field Day, I’m glad to be back at the table for this one. If you look at the event page, it might seem there aren’t that many presentations going on: only four companies are listed as of today. But don’t be mistaken: Dell EMC will have 5 presentations, so we will not be slacking!

Continue reading

Excelero NVMesh: lightning fast software-defined storage using commodity servers & NVMe drives

Excelero Storage launched their NVMesh product back in March 2017 at Storage Field Day 12. NVMesh is a software-defined storage solution using commodity servers and NVMe devices. Using NVMesh and the Excelero RDDA protocol, we saw some mind-blowing performance numbers, both in raw IOPS and in latency, while keeping hardware and licensing costs low.

Continue reading

Moving to and between clouds made simple with Elastifile Cloud File System

Moving your data and applications to the cloud isn’t the easiest of tasks, if you want to do it right. There’s a multitude of decisions to make, and some you’ll get wrong, which might make you reconsider your cloud operating model or cloud provider. Which brings up the next question: are you locked in to your cloud provider? Can you move your data between clouds?

One start-up that attempts to make the move to the cloud, and moving between clouds, easier is Elastifile. An Israeli company founded in 2013, with the first version of its product out in Q4 2016, it created the Elastifile Cross-Cloud Data Fabric. Their objective: bring cloud-like efficiency to the on-premises data center, and facilitate an easy lift-and-shift into the hybrid cloud.

Continue reading

SNIA: Avoiding tail latency by failing IO operations on purpose

Consistency and predictability matter. You expect Google to answer your search query within a second. If it takes two seconds, that is slow but OK. Much longer, and you will probably hit refresh because ‘it’s broken and maybe that will fix it’.

There are many examples that could substitute for the scenario above: starting a Netflix movie, refreshing your Facebook timeline, or powering on an Azure VM. Or in your business: retrieving an MRI scan or patient data, compiling a 3D model, or listing all POs from last month.

Ensuring your service can meet this demand for predictability and consistency requires a multifaceted approach, in both hardware and procedures. You can have a modern hypervisor environment with fast hardware, but if you allow a substantially lower-spec system into the cluster, performance will not be consistent. What happens when a virtual machine moves to the lower-spec system and suddenly takes longer to finish a query?

Similarly, in storage, tiering across different disk types helps improve TCO. However, what happens when data trickles down to the slowest tier? Achieving that lower TCO comes at the cost of less predictable latency.
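To make that tradeoff concrete, here’s a toy calculation with assumed (not measured) tier latencies and read distribution. A small slice of reads landing on the slowest tier barely moves the average, yet it defines the tail:

```python
# Assumed tier latencies and read distribution; illustrative numbers only.
tiers = [
    ("SSD",     0.5, 0.80),   # name, latency in ms, fraction of reads
    ("SAS",     6.0, 0.15),
    ("NL-SAS", 12.0, 0.05),
]

avg = sum(latency * share for _, latency, share in tiers)
print(f"average read latency: {avg:.2f} ms")   # ~1.9 ms: looks healthy
# With 5% of reads on NL-SAS, roughly everything beyond the 95th
# percentile runs at NL-SAS speed: the tail is about 6x the average.
print(f"~p95+ read latency:   {tiers[-1][1]:.1f} ms")
```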

These challenges are not new. If they impact user experience too much, you can usually work around them. For example, ensure your data is moved to a faster tier in time. If you have a lot of budget, maybe forgo the slowest & cheapest NL-SAS tier and stick to SAS & SSD. But what if the source of the latency inconsistency is something internal to a component, like a drive?
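That drive-internal inconsistency is where “failing IO operations on purpose” comes in: rather than waiting out a device’s internal retries, the host gives each IO a short deadline, deliberately fails it when the deadline passes, and serves the read from another copy. Here is a minimal sketch of that pattern; the timeouts, latencies, and replica model are made up for illustration.

```python
# Fail-fast sketch: give each replica a short deadline; if it can't answer
# in time, fail the IO on purpose and try the next copy.
import concurrent.futures
import time

def read_from(replica: str, latency_s: float) -> str:
    time.sleep(latency_s)                 # simulated device latency
    return f"data from {replica}"

replicas = [("drive-A", 0.500), ("drive-B", 0.002)]  # drive-A is struggling
DEADLINE_S = 0.010                                   # fail fast after 10 ms

with concurrent.futures.ThreadPoolExecutor() as pool:
    for name, latency in replicas:
        future = pool.submit(read_from, name, latency)
        try:
            print(future.result(timeout=DEADLINE_S))
            break                          # got an answer within the deadline
        except concurrent.futures.TimeoutError:
            # In this toy model the slow IO keeps running in the background;
            # a real device would abort it and return an error immediately.
            print(f"{name}: failed on purpose after {DEADLINE_S * 1000:.0f} ms")
```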

Continue reading

Intel SPDK and NVMe-oF will accelerate NVMe adoption rates

Once upon a time, there was a data center filled with racks of physical servers. Thanks to hypervisors such as VMware ESX, it was possible to virtualize these systems and run them as virtual machines, using less hardware. This had a lot of advantages in terms of compute efficiency, ease of management, and deployment/DR agility.

To enable many of the hypervisor features such as vMotion, HA, and DRS, the data of a virtual machine had to be located on a shared storage system. This had an extra benefit: it’s easier to hand out pieces of a big pool of shared storage than to predict capacity requirements for hundreds of individual servers. Some servers might need a lot of capacity (file servers); some might need just enough for an OS and maybe a web server application. This meant that the move to centralized storage was also beneficial from a capacity allocation perspective.
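That capacity argument is easy to put in numbers. Here is a toy model (all figures illustrative) comparing sizing every server for its individual worst case against sizing one shared pool for the aggregate demand:

```python
# Toy capacity model with made-up demands; illustrative only.
import random

random.seed(42)
SERVERS = 100
# Each server will end up needing 50-400 GB, but nobody knows which upfront.
demands_gb = [random.randint(50, 400) for _ in range(SERVERS)]

per_server = SERVERS * 400            # provision every server for its worst case
pooled = int(sum(demands_gb) * 1.2)   # shared pool: aggregate demand + 20% headroom

print(f"per-server worst-case provisioning: {per_server:,} GB")
print(f"shared pool with 20% headroom:      {pooled:,} GB")
```

The pool wins because individual peaks rarely line up, which is exactly the capacity allocation benefit the move to centralized storage brought.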

Continue reading