Moving your data and applications to the cloud isn’t the easiest of tasks if you want to do it right. There’s a multitude of decisions to make, and some you’ll get wrong, which might make you reconsider your cloud operating model or cloud provider. That raises the next question: are you locked in to your cloud provider, or can you move your data between clouds?
One start-up that attempts to make the move to the cloud, and between clouds, easier is Elastifile. An Israeli company founded in 2013, with the first version of its product released in Q4 2016, it created the Elastifile Cross-Cloud Data Fabric. Its objective: bring cloud-like efficiency to the on-premises datacenter and facilitate an easy lift-and-shift into the hybrid cloud.
Continue reading “Moving to and between clouds made simple with Elastifile Cloud File System”
Consistency and predictability matter. You expect Google to answer your search query within a second. If it takes two seconds, that is slow but ok. Much longer and you will probably hit refresh because ‘it’s broken and maybe that will fix it’.
There are many examples that could substitute for the scenario above. Starting a Netflix movie, refreshing your Facebook timeline, or powering on an Azure VM. Or in your business: retrieving an MRI scan or patient data, compiling a 3D model, or listing all POs from last month.
Ensuring your service can meet this demand for predictability and consistency requires a multifaceted approach, in both hardware and procedures. You can have a modern hypervisor environment with fast hardware, but if you allow a substantially lower-spec system into the cluster, performance will not be consistent. What happens when a virtual machine moves to the lower-spec system and suddenly takes longer to finish a query?
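A back-of-the-envelope illustration of that point (all numbers are invented, and real runtimes depend on far more than clock speed): if query runtime scales roughly inversely with host speed, one slower node breaks the consistency users expect.

```python
# Toy model: query runtime scales inversely with relative host speed.
# Host names and speed figures are made up for illustration.
hosts = {"fast-node-1": 3.0, "fast-node-2": 3.0, "old-node": 1.5}

def runtime(work_units, host_speed):
    """Time to finish a fixed amount of work on a given host."""
    return work_units / host_speed

# The same 30-unit query, before and after a migration to the old node:
print(runtime(30, hosts["fast-node-1"]))  # 10.0
print(runtime(30, hosts["old-node"]))     # 20.0 -> twice as slow, same query
```

The absolute numbers don’t matter; the point is that a user who got an answer in ten seconds yesterday now waits twenty, purely because of placement.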
Similarly, in storage, tiering across different disk types helps improve TCO. But what happens when data trickles down to the slowest tier? Achieving that lower TCO comes at the cost of reduced latency predictability.
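To make the TCO side of that tradeoff concrete, here is a rough sketch; the prices and capacity mix are made-up assumptions, not vendor figures:

```python
# Hypothetical $/TB per tier and share of capacity on each tier.
tiers = {
    "SSD":    {"price_per_tb": 400, "share": 0.10},
    "SAS":    {"price_per_tb": 150, "share": 0.30},
    "NL-SAS": {"price_per_tb":  50, "share": 0.60},
}

def blended_cost(tiers):
    """Weighted-average $/TB across the tier mix."""
    return sum(t["price_per_tb"] * t["share"] for t in tiers.values())

# Far below the 400 $/TB of an all-SSD configuration -- but 60% of the
# data now sits on the slowest media.
print(round(blended_cost(tiers), 2))
```

The blended cost drops sharply, which is exactly why tiering is attractive; the latency of any single IO, however, now depends on which tier the data happens to live on.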
These challenges are not new. If they impact user experience too much, you can usually work around them. For example, ensure your data is moved to a faster tier in time. If you have a lot of budget, maybe forgo the slowest & cheapest NL-SAS tier and stick to SAS & SSD. But what if the source of the latency inconsistency is something internal to a component, like a drive?
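The idea hinted at in the title, roughly sketched below, is to put a deadline on an IO, deliberately fail it when the deadline is missed, and serve it from another copy of the data. This is my own toy sketch of that general pattern (device behavior is simulated with sleeps, and all function names are mine), not the specific mechanism from the talk:

```python
import concurrent.futures
import time

def read_with_deadline(primary, fallback, deadline_s):
    """Issue the read to the primary device; if it misses the deadline,
    fail the IO on purpose and serve it from a fallback copy instead."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        first_try = pool.submit(primary)
        try:
            return first_try.result(timeout=deadline_s)
        except concurrent.futures.TimeoutError:
            # Deliberately give up on the stalled device; read the replica.
            return pool.submit(fallback).result()

# Simulated devices: the primary stalls (think internal error recovery),
# the replica answers immediately.
def slow_read():
    time.sleep(0.5)
    return "data-from-primary"

def fast_read():
    return "data-from-replica"

print(read_with_deadline(slow_read, fast_read, deadline_s=0.05))
# prints "data-from-replica"
```

The tail latency of the combined read is now bounded by the deadline plus the replica’s latency, instead of by the worst-case stall of a single drive.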
Continue reading “SNIA: Avoiding tail latency by failing IO operations on purpose”
Once upon a time there was a data center filled with racks of physical servers. Thanks to hypervisors such as VMware ESX it was possible to virtualize these systems and run them as virtual machines, using less hardware. This had a lot of advantages in terms of compute efficiency, ease of management and deployment/DR agility.
To enable many of the hypervisor features, such as vMotion, HA and DRS, the data of the virtual machine had to be located on a shared storage system. This had an extra benefit: it’s easier to hand out pieces of a big pool of shared storage than to predict capacity requirements for hundreds of individual servers. Some servers might need a lot of capacity (file servers); some might need just enough for an OS and maybe a web server application. This meant the move to centralized storage was also beneficial from a capacity allocation perspective.
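A minimal sketch of the pooled-allocation idea, assuming nothing about how any particular array implements it: volumes are carved out of one shared pool, so free space is fungible instead of stranded on individual servers.

```python
class Pool:
    """Minimal shared-capacity pool: hand out volumes, track free space."""
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.allocated_tb = 0

    def provision(self, size_tb):
        # Refuse an allocation the pool cannot back.
        if self.allocated_tb + size_tb > self.capacity_tb:
            raise ValueError("pool exhausted")
        self.allocated_tb += size_tb

    def free_tb(self):
        return self.capacity_tb - self.allocated_tb

pool = Pool(100)
pool.provision(8)    # file server: needs a lot
pool.provision(2)    # small VM: just an OS and a web server
print(pool.free_tb())  # 90 -- leftover capacity serves whoever grows next
```

Contrast that with per-server disks: any headroom bought for the small VM is wasted there, while the file server may run out, even though the total capacity would have been enough.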
Continue reading “Intel SPDK and NVMe-oF will accelerate NVMe adoption rates”
I’m excited to announce I’ll be attending Storage Field Day 12! During the event we’ll talk storage technology for three days, starting on March 8th. There’s an impressive line-up of companies and delegates gathering in Silicon Valley and of course we’ll live stream the presentations for the folks back home, who can pitch in over Twitter. Did I mention the line-up of companies already? Oh boy!
Continue reading “Storage Field Day 12: storage drop bears reunited!”
There’s no denying that off-premises cloud services are growing. Just look at the year-to-year growth of big public cloud providers. There’s big potential if you focus on two aspects of cloud. The first is speeding up access to data that is potentially not located in the same city or even geographic area. The second is supporting new protocols and storage methodologies that are suited for cloud native applications. One player in this area of IT is Avere, which aims to connect on-premises storage and compute to their siblings in the cloud.
Continue reading “Building a hybrid cloud with Avere”
I can’t recall the last storage system installation that didn’t have some amount of solid state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get his paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid state drives need to achieve even lower latencies. Reducing the media latency alone won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
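SPDK itself is C code that drives NVMe hardware from userspace; purely as an illustration of its polled-mode idea (no interrupts, the CPU repeatedly checks the completion queue itself), here is a toy sketch with invented class and method names:

```python
from collections import deque

class CompletionQueue:
    """Toy stand-in for an NVMe completion queue."""
    def __init__(self):
        self._done = deque()

    def complete(self, io_id):
        # Device side: an IO finished, its completion entry is posted.
        self._done.append(io_id)

    def poll(self, max_entries=32):
        # Driver side: no interrupt fires; we just look for new entries.
        batch = []
        while self._done and len(batch) < max_entries:
            batch.append(self._done.popleft())
        return batch

cq = CompletionQueue()
for io_id in range(3):
    cq.complete(io_id)

# Polled-mode loop: spin until all submitted IOs have completed.
completions = []
while len(completions) < 3:
    completions.extend(cq.poll())
print(completions)  # [0, 1, 2]
```

Trading interrupt-driven wakeups for a busy-polling core costs CPU cycles, but removes the interrupt and context-switch latency from every single IO, which is exactly the overhead that dominates once the media itself responds in microseconds.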
Continue reading “Hardware has set the pace for latency, time for software to catch up”
Storage Field Day 11 is taking place on October 5th to 7th in Silicon Valley, and I’m delighted to report: I’ll be one of the delegates again. In fact, this will be my 5th event! Wouldn’t it be awesome if British Airways read this blog and handed me some champagne to celebrate as soon as we’re in the air…
The Tech Field Days are organized by GestaltIT and follow an efficient concept. A number of companies are invited to present to a group of hand-picked individuals (delegates), and the two-hour sessions are broadcast live over the interwebs. The sessions are very interactive: delegates ask questions throughout the presentations. People back home can ask their questions on Twitter (don’t forget the hashtag, which is #SFD11 for this event), and usually one of the delegates will pick it up and relay it to the presenters.
Continue reading “Storage Field Day 11, here I come!”
Question: What do you get when Pure Storage gets to build a system that can start small, grow big, handle file requests quickly and is simple to manage?
FlashBlade: Pure’s newest addition to its hardware portfolio. The Pure Storage FlashBlade is not just another NAS filer. It’s an all-flash, scale-out storage system for file (NFSv3 for now) and object (soon), delivering some pretty good performance, as you can see in the sheet above. And the chassis just looks sexy…
Continue reading “FlashBlade: custom hardware still makes sense”
If you want to build a private S3 object store, Cloudian HyperStore might be the product for you. Using commodity servers to form a scale-out architecture, you can build your own, fully S3 compliant object storage that’s located in your own datacenter. If you don’t want to supply your own servers, you can opt for the Lenovo Storage DX8200C appliance, powered by Cloudian!
Continue reading “Cloudian HyperStore: manage more PBs with less FTE”
Primary Data unveiled their DataSphere product at VMworld US back in August 2015. With DataSphere, Primary Data virtualizes the different types of storage in the datacenter, creating a global dataspace and breaking down the traditional storage silos. It attempts to do for storage what VMware did for compute: any piece of data can reside on any storage, movable at any time, without interruption. In essence, it increases data mobility by decoupling logical storage from the physical hardware. The team gave us an update on their product at Storage Field Day 10, so here goes!
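This is not Primary Data’s actual implementation, but the decoupling idea at its core is an indirection layer: clients address logical paths, and a map decides, and can change, where the bytes physically live. A minimal sketch, with all names invented:

```python
class DataSpace:
    """Toy logical-to-physical map: move data without renaming it."""
    def __init__(self):
        self._map = {}  # logical path -> physical location

    def place(self, logical, physical):
        self._map[logical] = physical

    def migrate(self, logical, new_physical):
        # The data moves between arrays or tiers; the logical path
        # the client mounts and uses stays exactly the same.
        self._map[logical] = new_physical

    def locate(self, logical):
        return self._map[logical]

ds = DataSpace()
ds.place("/projects/model.vmdk", "nas-array-1")
ds.migrate("/projects/model.vmdk", "all-flash-2")  # e.g. hot data promoted
print(ds.locate("/projects/model.vmdk"))  # all-flash-2
```

One extra lookup per access buys the freedom to rebalance, tier, or evacuate storage underneath running applications, which is the same indirection trick hypervisors used to free VMs from physical servers.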
Continue reading “Breaking down storage silos with Primary Data DataSphere”