Moving your data and applications to the cloud isn't an easy task if you want to do it right. There's a multitude of decisions to make, and some you'll get wrong, which might make you reconsider your cloud operating model or cloud provider. That raises the next question: are you locked in to your cloud provider, or can you move your data between clouds?
One start-up that attempts to make both the move to the cloud and movement between clouds easier is Elastifile. An Israeli company founded in 2013, with the first version of its product released in Q4 2016, it created the Elastifile Cross-Cloud Data Fabric. Its objective: bring cloud-like efficiency to the on-premises environment and facilitate an easy lift-and-shift into the hybrid cloud.
Continue reading “Moving to and between clouds made simple with Elastifile Cloud File System”
Consistency and predictability matter. You expect Google to answer your search query within a second. If it takes two seconds, that is slow but ok. Much longer and you will probably hit refresh because ‘it’s broken and maybe that will fix it’.
There are many examples that could substitute for the scenario above: starting a Netflix movie, refreshing your Facebook timeline, or powering on an Azure VM. Or in your business: retrieving an MRI scan or patient data, compiling a 3D model, or listing all the POs from last month.
Ensuring your service can meet this demand for predictability and consistency requires a multifaceted approach, spanning both hardware and procedures. You can have a modern hypervisor environment with fast hardware, but if you allow a substantially lower-spec system into the cluster, performance will not be consistent. What happens when a virtual machine moves to the lower-spec system and suddenly takes longer to finish a query?
Similarly, in storage, tiering across different disk types helps improve TCO. However, what happens when data trickles down to the slowest tier? Achieving that lower TCO comes with the tradeoff of less predictable latency.
These challenges are not new, and if they impact the user experience too much, you can usually work around them: for example, by ensuring your data is moved to a faster tier in time, or, if budget allows, by forgoing the slowest and cheapest NL-SAS tier altogether and sticking to SAS and SSD. But what if the source of the latency inconsistency is internal to a component, like a drive?
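As a back-of-the-envelope illustration of the idea in the talk's title, here is a minimal Python sketch of deliberately failing a slow IO and hedging to another copy of the data. The latency numbers are made up and `read_from_replica` is a hypothetical stand-in for a real storage call:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hypothetical latency model: most reads are fast, but ~1% hit an
# internal drive hiccup (garbage collection, media retries) and
# land in the long tail.
def read_from_replica(replica: int) -> str:
    latency = random.choices([0.002, 0.200], weights=[99, 1])[0]
    time.sleep(latency)
    return f"data from replica {replica}"

def read_with_deadline(replicas, deadline=0.010):
    """Deliberately fail any IO that exceeds the deadline and retry
    it against the next replica, bounding the tail latency."""
    pool = ThreadPoolExecutor(max_workers=len(replicas))
    try:
        for replica in replicas:
            future = pool.submit(read_from_replica, replica)
            try:
                return future.result(timeout=deadline)
            except TimeoutError:
                continue  # abandon the slow IO, hedge to another copy
        raise IOError("all replicas exceeded the deadline")
    finally:
        pool.shutdown(wait=False)  # don't wait for abandoned IOs

print(read_with_deadline(replicas=[0, 1, 2]))
```

The tradeoff is a small amount of duplicate work on the rare slow IO in exchange for a much tighter latency tail.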
Continue reading “SNIA: Avoiding tail latency by failing IO operations on purpose”
Once upon a time, there was a data center filled with racks of physical servers. Thanks to hypervisors such as VMware ESX, it was possible to virtualize these systems and run them as virtual machines on less hardware. This had a lot of advantages in terms of compute efficiency, ease of management, and deployment/DR agility.
To enable many of the hypervisor features such as VMotion, HA, and DRS, the data of the virtual machine had to be located on a shared storage system. This had an extra benefit: it's easier to hand out pieces of a big pool of shared storage than to predict capacity requirements for hundreds of individual servers. Some servers might need a lot of capacity (file servers); some might need just enough for an OS and maybe a web server application. This meant that the move to centralized storage was also beneficial from a capacity allocation perspective.
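To see why pooling helps, here is a hedged back-of-the-envelope sketch in Python; the demand figures are entirely invented, but the effect holds:

```python
import random

random.seed(42)

# Made-up demand model: 100 servers, each eventually using between
# 50 GB and 500 GB, but we can't predict which server needs what.
demands = [random.uniform(50, 500) for _ in range(100)]

# Sizing per server: every box gets the worst-case 500 GB, because
# any one of them *might* turn out to be a file server.
per_server_total = 500 * len(demands)

# Sizing one shared pool: individual uncertainty averages out, so
# the aggregate only needs actual demand plus a modest margin.
pool_total = sum(demands) * 1.2

print(f"per-server provisioning:    {per_server_total / 1000:.1f} TB")
print(f"shared pool (20% headroom): {pool_total / 1000:.1f} TB")
```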
Continue reading “Intel SPDK and NVMe-oF will accelerate NVMe adoption rates”
There’s no denying that off-premises cloud services are growing; just look at the year-over-year growth of the big public cloud providers. There’s big potential if you focus on two aspects of cloud. The first is speeding up access to data that is potentially not located in the same city or even the same geographic area. The second is supporting new protocols and storage methodologies suited to cloud-native applications. One player in this area is Avere, which aims to connect on-premises storage and compute to their siblings in the cloud.
Continue reading “Building a hybrid cloud with Avere”
When the news about the Dell and EMC merger became public last year, I was somewhat skeptical. I’ve had some really sketchy experiences with Dell servers and storage products, so it didn’t feel like a step forward. Then there was the organizational and support aspect: mergers usually result in confusion, both in the sales processes and for those of us in the field who have to glue all the products together. Not something I was looking forward to.
Lo and behold: I got an invite from the EMC Elect program to attend DellEMCWorld in Austin, Texas! This was my chance to fly over there and experience the merger announcements firsthand, plus ask questions. So I did! And I have to say: I was impressed.
Continue reading “DellEMCWorld 2016 – I’m impressed!”
I can’t recall the last storage system installation that didn’t have some amount of solid-state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get the paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid-state drives need to achieve even lower latencies. Just reducing the media latency won’t cut it anymore: the software components in the chain need to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
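To put rough numbers on that, here is an illustrative calculation showing how a fixed per-IO software overhead grows from a rounding error into the dominant cost as media gets faster. All latency figures are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope: the same fixed software overhead per IO
# (interrupt handling, context switches, locking) becomes the
# dominant cost as media latency shrinks.
software_overhead_us = 10

for media, media_latency_us in [("SAS SSD", 100),
                                ("NVMe SSD", 20),
                                ("next-gen media", 5)]:
    total = media_latency_us + software_overhead_us
    share = software_overhead_us / total * 100
    print(f"{media:>15}: {total:>4} us per IO, "
          f"{share:.0f}% of it spent in software")
```

Once software eats half or more of every IO, shaving microseconds out of the IO path (for example by polling for completions instead of taking interrupts, which is the approach SPDK takes) pays off directly in application latency.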
Continue reading “Hardware has set the pace for latency, time for software to catch up”
According to Tintri, the rise of server virtualization broke the traditional storage system. Initially we had relatively simple environments where one server talked to a number of LUNs on a storage system. Sometimes we’d have a small cluster of servers accessing those volumes. Still relatively simple.
Fast forward to now: large clusters of hypervisor hosts are the norm, collectively accessing an even larger number of volumes. Each hypervisor in turn hosts a large number of virtual machines. In case of performance problems, how are you ever going to figure out the root cause, and which other systems are affected?
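As a sketch of what VM-level visibility buys you, consider the following snippet with hypothetical per-VM latency samples. With only LUN-level counters, these three very different problems would blur into one shared-volume average:

```python
# Hypothetical per-VM latency samples (ms), broken down by layer.
# All VM names and numbers are invented for illustration.
samples = {
    "web-01":  {"host": 0.4, "network": 0.2, "storage": 0.8},
    "sql-03":  {"host": 0.3, "network": 0.1, "storage": 9.2},
    "build-7": {"host": 6.5, "network": 0.2, "storage": 0.9},
}

for vm, layers in samples.items():
    total = sum(layers.values())
    worst = max(layers, key=layers.get)
    print(f"{vm}: {total:.1f} ms total, bottleneck in {worst} "
          f"({layers[worst]:.1f} ms)")
```

Attribution per VM and per layer immediately separates a storage-bound database from a CPU-starved build host, instead of leaving you to guess from one aggregate LUN statistic.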
Continue reading “Making life a whole lot easier with Tintri VM-aware storage”
Datera was founded in 2013 with a clear mission: bringing hyperscale operations and economics to private clouds. Big corporations such as Facebook and Google don’t manage individual pieces of hardware. Instead, they use policies, and “the system” decides where to spin up an app or place some data. This means a single admin can manage far more servers and storage. So why is this level of automation only used by the big corporations? Datera aims to change that!
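A toy sketch of what such policy-driven placement looks like; all node names, media types, and policy fields here are invented for illustration, not Datera's actual model:

```python
# Intent-based placement: the admin declares a policy, "the system"
# picks concrete nodes. Everything below is a made-up example.
nodes = [
    {"name": "node-a", "media": "flash",  "free_gb": 800},
    {"name": "node-b", "media": "hybrid", "free_gb": 2000},
    {"name": "node-c", "media": "flash",  "free_gb": 1500},
    {"name": "node-d", "media": "flash",  "free_gb": 300},
]

policy = {"replicas": 2, "media": "flash", "min_free_gb": 500}

def place(policy, nodes):
    candidates = [n for n in nodes
                  if n["media"] == policy["media"]
                  and n["free_gb"] >= policy["min_free_gb"]]
    # Prefer the emptiest nodes so capacity stays balanced.
    candidates.sort(key=lambda n: n["free_gb"], reverse=True)
    return [n["name"] for n in candidates[:policy["replicas"]]]

print(place(policy, nodes))  # e.g. ['node-c', 'node-a']
```

The admin's job shifts from picking disks to maintaining policies; the placement logic scales to thousands of nodes unchanged.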
Continue reading “Bringing hyperscale operations to the masses with Datera”
Earlier this year Nimble Storage announced their all-flash array, the Predictive Flash Platform; you can read my thoughts on the launch here. InfoSight is one of the core components of that announcement, which is why we had the opportunity for a fireside chat with the Nimble Storage data science team. We discussed the inner workings of InfoSight and VMVision, and how these translate into actual benefits for the owner of a Nimble Storage array. This post will also touch on some of the key points discussed during the subsequent Storage Field Day 10.
Continue reading “Squashing assumptions with Data Science”
I will be attending Dell World this October in Austin, Texas, trying to find out how the merger between these massive companies will impact the Dell and EMC storage product portfolios. Flying in under the EMC Elect program, we should have a front-row seat to all the exciting announcements!
Continue reading “I’ll be at Dell World this October!”