I can’t recall the last storage system installation that didn’t include at least some solid state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually translate into real business value. The doctor treating patients can get their paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid state drives need to achieve even lower latencies. Just reducing the media latency won’t cut it anymore: the software components in the I/O path need to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
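To get a feel for why the software side matters at these latencies, here is a toy sketch (my own illustration, not how SPDK is actually implemented) comparing a sleep-based wait for an I/O completion with a busy-polling loop. When the pretend device completes in 100 µs, the millisecond-granularity sleep overshoots the completion by far more than the polling loop does:

```python
# Toy illustration: at microsecond media latencies, how you *wait* for the
# completion dominates. Conceptual sketch only, not SPDK internals.
import time

def wait_sleepy(deadline, tick=0.001):
    """Interrupt-style waiting: yield the CPU, wake up every millisecond."""
    while time.perf_counter() < deadline:
        time.sleep(tick)
    return time.perf_counter() - deadline  # overshoot past the completion

def wait_polling(deadline):
    """Polled mode: spin on the clock, reacting within microseconds."""
    while time.perf_counter() < deadline:
        pass
    return time.perf_counter() - deadline

io_time = 0.0001  # pretend the device completes an I/O in 100 microseconds
sleepy = wait_sleepy(time.perf_counter() + io_time)
polled = wait_polling(time.perf_counter() + io_time)
print(f"sleep-wait overshoot: {sleepy * 1e6:.0f} us, polling: {polled * 1e6:.0f} us")
```

The exact numbers depend on your OS scheduler, but the point stands: once the medium answers in tens of microseconds, a millisecond of software overhead is the bottleneck.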
Continue reading “Hardware has set the pace for latency, time for software to catch up”
According to Tintri, the rise of server virtualization broke the traditional storage system. Initially we had relatively simple environments where one server talked to a number of LUNs on a storage system. Sometimes we’d have a small cluster of servers accessing those volumes. Still relatively simple.
Fast forward to now: large clusters of hypervisor hosts are the norm, collectively accessing an even larger number of volumes. Each hypervisor in turn hosts a large number of virtual machines. In case of performance problems, how are you ever going to figure out the root cause and which other systems are affected?
Continue reading “Making life a whole lot easier with Tintri VM-aware storage”
Datera was founded in 2013 with a clear mission: bringing hyperscale operations and economics to private clouds. Big corporations such as Facebook and Google don’t manage individual pieces of hardware. Instead they use policies, and “the system” decides where to spin up an app or place some data. This means an admin can manage a lot more servers or storage. So why is this level of automation only used by the big corporations? Datera aims to change that!
Continue reading “Bringing hyperscale operations to the masses with Datera”
Earlier this year Nimble Storage announced their all-flash array called the Predictive Flash Platform; you can read my thoughts on the launch over here. InfoSight is one of the core components of that announcement, so we had the opportunity for a fireside chat with the Nimble Storage data science team. We discussed the workings of InfoSight & VMVision and how this translates into actual benefits for an owner of a Nimble Storage array. This post also touches on some of the key points discussed later at Storage Field Day 10.
Continue reading “Squashing assumptions with Data Science”
I will be attending Dell World this October in Austin, Texas and will be trying to find out how the merger between these massive companies will impact the Dell and EMC storage product portfolios. Flying in under the EMC Elect program, I should have a front-row seat to all the exciting announcements!
Continue reading “I’ll be at Dell World this October!”
A couple of weeks ago StorMagic announced their newest SvSAN 6 release. The basics are still the same: SvSAN takes the internal disks from two hypervisor servers (Hyper-V or VMware) and turns them into highly available shared storage. Yes, that’s a two-server minimum, not three; so this should be a little bit cheaper compared to VMware VSAN and the like. What’s new in version 6 is the addition of an Advanced edition with SSD and memory-based caching and tiering.
Continue reading “SvSAN 6: now with memory and SSD caching”
Question: What do you get when Pure Storage gets to build a system that can start small, grow big, handle file requests quickly and is simple to manage?
FlashBlade: Pure’s newest addition to its hardware portfolio. The Pure Storage FlashBlade is not just another NAS filer. It’s an all-flash, scale-out storage system for file (NFSv3 for now) and object (soon), delivering some pretty good performance, as you can see in the sheet above. And the chassis just looks sexy…
Continue reading “FlashBlade: custom hardware still makes sense”
If you want to build a private S3 object store, Cloudian HyperStore might be the product for you. Using commodity servers in a scale-out architecture, you can build your own, fully S3-compliant object store located in your own datacenter. If you’d rather not supply your own servers, you can opt for the Lenovo Storage DX8200C appliance, powered by Cloudian!
Continue reading “Cloudian HyperStore: manage more PBs with less FTE”
I had the opportunity to play with an EMC product that was new to me last week: ScaleIO. It’s definitely not a new EMC product (I troubleshot the 1.31 version, and EMC released 2.0 at EMC World 2016), but I just hadn’t had the honor of working with one of these systems yet. ScaleIO is a software-defined storage solution that takes the local disks in your commodity servers and shares them out as block LUNs over Ethernet. That means the architecture can scale pretty well, in both capacity and performance, across hundreds (if not thousands) of servers and disks.
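To illustrate why this kind of many-to-many architecture scales so well, here’s a back-of-the-envelope sketch (my own simplified math with made-up disk numbers, not ScaleIO’s actual algorithms): aggregate capacity grows linearly with node count, while the time to re-protect a failed node shrinks, because every surviving node contributes rebuild bandwidth:

```python
# Back-of-the-envelope scaling for a distributed block storage pool.
# The disk counts, sizes, and throughput figures below are assumptions
# for illustration only, not ScaleIO specifications.

def pool_stats(nodes, disks_per_node=10, disk_tb=2.0, disk_mbps=150.0):
    """Return (raw capacity in TB, hours to re-protect one failed node),
    assuming rebuild traffic is spread across all surviving nodes."""
    raw_tb = nodes * disks_per_node * disk_tb
    failed_tb = disks_per_node * disk_tb
    # Many-to-many rebuild: every surviving node's disks pitch in.
    rebuild_mbps = (nodes - 1) * disks_per_node * disk_mbps
    rebuild_hours = (failed_tb * 1e6) / rebuild_mbps / 3600
    return raw_tb, rebuild_hours

for n in (3, 10, 100):
    raw, hours = pool_stats(n)
    print(f"{n:>3} nodes: {raw:>7.0f} TB raw, node rebuild in about {hours:.2f} h")
```

The pattern is the interesting part, not the exact figures: ten times the nodes means roughly ten times the capacity and performance, and a much shorter window of reduced protection after a failure.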
Continue reading “ScaleIO Architecture and failure units”
Primary Data unveiled their DataSphere product at VMworld US back in August 2015. With DataSphere, Primary Data virtualizes the different types of storage in the datacenter, creating a global dataspace and breaking down the traditional silos of storage. It attempts to do for storage what VMware did for computing: any piece of data can reside on any storage, movable at any time, without interruption. In essence, it increases data mobility by decoupling the logical storage from the physical hardware. The team gave us an update on their product at Storage Field Day 10, so here goes!
Continue reading “Breaking down storage silos with Primary Data DataSphere”