There’s no denying that off-premises cloud services are growing: just look at the year-over-year growth of the big public cloud providers. There’s big potential if you focus on two aspects of cloud. The first is speeding up access to data that may not be located in the same city or even the same geographic area. The second is supporting new protocols and storage methodologies suited to cloud-native applications. One player in this area of IT is Avere, which aims to connect on-premises storage and compute with their siblings in the cloud.
When the news about the Dell and EMC merger became public last year, I was somewhat skeptical. I’ve had some really sketchy experiences with Dell servers and storage products, so it didn’t feel like a step forward. Then there’s the organizational and support aspect: mergers usually cause confusion, both in the sales process and for those of us in the field who have to glue all the products together. Not something I was looking forward to.
Lo and behold: I got an invite from the EMC Elect program to attend DellEMCWorld in Austin, Texas! This was my chance to fly over there and experience the merger announcements firsthand, plus ask questions. So I did! And I have to say: I was impressed.
I can’t recall the last storage system installation that didn’t have at least some solid-state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology, and faster applications usually translate into real business value. The doctor treating patients gets their paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid-state drives need to achieve even lower latencies. Reducing the media latency alone won’t cut it anymore: the software components in the I/O path need to catch up as well. Intel is doing just that with the Storage Performance Development Kit (SPDK).
A long, long time ago, when public cloud IaaS (Infrastructure as a Service) was still relatively new, I was doing some contract work for a big international company. One of the department’s tasks was an IaaS proof of concept: does offloading servers to the public cloud result in cost savings? Long story short: the PoC was halted after several months because the AWS IaaS offering was prohibitively expensive and inflexible. We had enough systems and data to keep a team of qualified engineers busy and to negotiate good purchasing discounts. An additional problem was the very rigid service catalog: there weren’t that many machine flavors available back then, and custom machines were either not possible or even more expensive.
Fast forward to 2016, and I’m looking into public/private/on-premises IaaS again, this time for a different company. Prices have dropped, but there are still some things to keep in mind when considering a move to public cloud IaaS.
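Whether the numbers work out depends heavily on utilization: an always-on workload pays the on-demand rate around the clock, while on-premises hardware amortizes a one-time purchase over its lifetime. A minimal back-of-the-envelope sketch, with entirely made-up prices (none of these figures come from any real vendor quote), might look like this:

```shell
#!/bin/sh
# Hypothetical 3-year cost comparison: one on-premises server vs. one
# always-on public cloud instance. All numbers below are invented for
# illustration only -- plug in your own quotes and instance prices.
ONPREM_CAPEX=6000          # server purchase price, USD
ONPREM_MONTHLY=150         # power, cooling, support contract per month, USD
CLOUD_CENTS_PER_HOUR=50    # on-demand instance price: $0.50/hour, in cents
MONTHS=36                  # comparison period: 3 years
HOURS_PER_MONTH=730        # ~24 hours * 365 days / 12 months

# On-prem: one-time purchase plus recurring operational cost.
onprem_total=$(( ONPREM_CAPEX + ONPREM_MONTHLY * MONTHS ))

# Cloud: hourly rate, billed around the clock; computed in cents to
# avoid floating point in POSIX shell arithmetic.
cloud_total=$(( CLOUD_CENTS_PER_HOUR * HOURS_PER_MONTH * MONTHS / 100 ))

echo "On-prem 3-year total: \$${onprem_total}"
echo "Cloud 3-year total:   \$${cloud_total}"
```

With these hypothetical numbers the always-on instance ends up more expensive over three years; idle or bursty workloads tip the balance the other way, which is why utilization belongs at the top of any IaaS cost comparison.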
Several years ago cloud computing became the hottest thing in IT. You couldn’t read an article or product description without being smothered with ‘cloud’ every other sentence. In fact, many organizations renamed their product and service offerings to include cloud. If you didn’t have ‘cloud products and services’, you were going to lose business.
I’ve noticed that there’s still a surprising number of IT people who don’t fully comprehend what cloud computing means. Is it a location or a concept? Are there different types and layers of cloud computing? Let’s see if we can explain a few of the more common options…
Performing a Brocade firmware upgrade once in a while is highly recommended: new releases usually squash bugs and add new features, which helps keep your SAN infrastructure stable and efficient. The upgrade process itself is relatively straightforward, as long as you keep in mind that you can only non-disruptively upgrade one major release at a time. With that knowledge you only need an FTP server (like FileZilla Server), an SSH client (PuTTY), the upgrade packages and some patience.
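The core of the procedure boils down to a handful of Fabric OS commands, run over SSH on the switch itself. The sketch below is not runnable outside a Brocade switch, and the exact prompts and flags vary by Fabric OS release, so treat it as an outline and check the release notes for your target version first:

```shell
# From an SSH session on the Brocade switch (sketch; answers to the
# interactive prompts depend on your environment):
firmwareshow             # confirm the currently installed version
configupload             # back up the switch configuration first
firmwaredownload         # pull the next major release from your FTP server;
                         # you'll be prompted for server IP, user, protocol
                         # and the path to the release directory
firmwaredownloadstatus   # watch the progress while the switch activates
                         # the new code
firmwareshow             # afterwards: verify both partitions run the new release
```

Repeat the cycle once per major release until you reach the target version, since skipping a major release forces a disruptive upgrade.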
According to Tintri, the rise of server virtualization broke the traditional storage system. Initially we had relatively simple environments where one server talked to a number of LUNs on a storage system. Sometimes we’d have a small cluster of servers accessing those volumes. Still relatively simple.
Fast forward to now: large clusters of hypervisor hosts are the norm, collectively accessing an even larger number of volumes. Each hypervisor in turn hosts a large number of virtual machines. In case of performance problems, how are you ever going to figure out the root cause, and which other systems are affected?
Datera was founded in 2013 with a clear mission: bringing hyperscale operations and economics to private clouds. Big corporations such as Facebook and Google don’t manage individual pieces of hardware. Instead they use policies, and “the system” decides where to spin up an app or place some data. This means one admin can manage far more servers or storage. So why is this level of automation only used by the big corporations? Datera aims to change that!
Storage Field Day 11 is taking place October 5th to 7th in Silicon Valley, and I’m delighted to report that I’ll be one of the delegates again. In fact, this will be my 5th event! Wouldn’t it be awesome if British Airways read this blog and handed me some champagne to celebrate as soon as we’re in the air…
The Tech Field Day events are organized by GestaltIT and follow an efficient concept: a number of companies are invited to present to a group of hand-picked individuals (the delegates), and the two-hour sessions are broadcast live over the interwebs. The sessions are very interactive: delegates ask questions throughout the presentations. Viewers at home can ask their questions on Twitter (don’t forget the hashtag, which is #SFD11 for this event), and usually one of the delegates will pick a question up and relay it to the presenters.
Earlier this year Nimble Storage announced their all-flash array, the Predictive Flash Platform; you can read my thoughts on the launch over here. InfoSight is one of the core components of that announcement, which is why we had the opportunity for a fireside chat with the Nimble Storage data science team. We discussed the inner workings of InfoSight and VMVision, and how they translate into actual benefits for the owner of a Nimble Storage array. This post also touches on some of the key points discussed during the subsequent Storage Field Day 10 sessions.