Reliability of a system is usually expressed as a percentage of uptime. A system with an uptime of at least 99.9% should typically not exceed roughly 8 hours and 45 minutes of unplanned downtime per year. ‘Five nines’, or 99.999% availability, is often used in IT: this equates to roughly 5 minutes of downtime per year. For Infinidat this wasn’t good enough, so they built the Infinibox with a reliability of 99.99999%. That’s only 3.2 seconds of downtime per year. Yikes!
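The arithmetic behind those numbers is simple: take the unavailable fraction of a year and convert it to seconds. A quick sketch (the `downtime` helper is just an illustration, using 365.25 days to average out leap years):

```shell
#!/bin/sh
# Allowed annual downtime for a given availability percentage.
# awk does the floating-point math.
downtime() {
  awk -v a="$1" 'BEGIN {
    secs = (100 - a) / 100 * 365.25 * 24 * 3600
    printf "%.9g%% availability: %.1f seconds/year (~%.1f hours)\n", a, secs, secs / 3600
  }'
}

downtime 99.9      # three nines: just under 8h46m per year
downtime 99.999    # five nines: roughly 5 minutes per year
downtime 99.99999  # seven nines: roughly 3.2 seconds per year
```

Plugging in 99.99999% indeed yields about 3.2 seconds of allowed downtime per year.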
Today, February 1st, EMC announced the EMC Elect 2016. I’m happy to be part of the EMC Elect again, now for the 4th year in a row!
Each year the EMC Elect program selects a number of individuals from all over the world who have shared knowledge about or worthwhile news on EMC systems. It does not matter how you do this: via EMC’s own community network ECN, on Twitter, via blogs, while speaking at events, or through other media. As long as you share your knowledge with the rest of the world, you can get nominated. Yes, even if you have an occasional rant about that feature your VNX still does not support (*cough* removing private RAID groups from a storage pool *cough*), as long as you keep it constructive; whining is too easy.
I can’t hide the fact that I was looking forward to the Qumulo presentation at Storage Field Day 8: I love scale-out NAS systems, with all the advantages they bring in terms of performance, scalability, manageability and upgradeability. I quickly learned that the founders of Qumulo previously worked on Isilon and OneFS. I work with Isilons in the field, so my interest was piqued.
Back to Qumulo: they build a data-aware, scale-out, primary storage system. And it’s software-defined, meaning you have full flexibility in the hardware you use and in how big/fast/expensive you make the system. Plus, it gives you real-time insight into the data on the Qumulo system. Interested? Read on!
Back at Storage Field Day 8, Cohesity presented their newly announced solution to optimize secondary storage usage and get more bang for your buck on secondary storage. One critical thing to note here is that Cohesity changes the definition of secondary storage! In their view, secondary storage is everything that’s not Tier 1, high-performance, mission-critical stuff. So yes, that’s backups… but it’s also test and development, file shares, archives, etc.
Did you ever install an Isilon cluster, connect all the cables and run through the configuration wizard, only to find out you still can’t connect to the cluster? Sure you did; it happens to everyone. Maybe the cluster only has one 10GigE port online while the network team is still scrambling for 10GigE modules. Or maybe you have configured an invalid VLAN tag on the subnet. This post groups some of the more useful Isilon network commands, so you can enable VLAN tagging or add additional ports to the pool via the CLI.
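To give a flavor of what’s ahead, here is a rough sketch of those two fixes. I’m assuming OneFS 8.x syntax here (OneFS 7.x uses the older `isi networks` command family instead), and the object names, VLAN ID and interface range (`groupnet0.subnet0`, `pool0`, `200`, `1-4:10gige-1`) are placeholders for your own configuration:

```shell
# Check which interfaces are up and which pools they belong to
isi network interfaces list

# Enable VLAN tagging on the subnet with the correct VLAN ID
isi network subnets modify groupnet0.subnet0 --vlan-enabled=true --vlan-id=200

# Add additional 10GigE ports to an existing pool
isi network pools modify groupnet0.subnet0.pool0 --add-ifaces=1-4:10gige-1
```

Always verify the exact syntax against the CLI guide for your OneFS release before running these on a production cluster.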
No matter the brand, type, size or performance of your storage system, they should all have one thing in common: stability. Pure Storage talked about their systems design at Storage Field Day 8, which centers around “non-disruptive everything”. Not only the hardware, but also the software running on top of it. Because in an ideal world, the storage system should only go down 5 years after the installation date, when you’re replacing it with a new one. And in Pure’s case this means: never.
Coho Data was first to present at Storage Field Day 8 last October, with Andy Warfield (co-founder and CTO) running the entire presentation start to finish. We knew what to expect: last year Andy also presented at SFD6 and cooked the brains of half the delegates. So this time we came prepared, with ample coffee and not too much breakfast in our stomachs. And we weren’t disappointed: Andy gave us a crystal-clear company mission (NOT focused on the ever-so-hyped “disruption”, but instead on transformation), backed with ample shiny tech and intelligence inside the Coho array. So what is Coho Data trying to do and how?
Increasing the storage capacity of a Data Domain is usually a matter of adding an additional disk shelf to the Data Domain head. Upon connecting the additional enclosure, your new disks will be in an Unknown state. This does not necessarily mean there’s anything wrong with the topology of the system; it just indicates the disks aren’t in use yet. This is, however, easily fixed with a couple of CLI commands.
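The general shape of that fix looks something like the sequence below. This is a sketch, not a definitive procedure: the enclosure ID (`2`) is a placeholder for your new shelf, and the exact `storage add` syntax varies between DD OS releases, so double-check against the command reference for your version:

```shell
# Verify the new disks show up (they will be listed as unknown)
disk show state

# Add the new enclosure to the storage configuration
storage add enclosure 2

# Expand the file system onto the newly added capacity
filesys expand
```

After the expansion completes, `disk show state` should report the new disks as in use.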
Typing this at Schiphol airport, I can’t help feeling slightly tired. The alarm clock started blaring at 03:00 this morning and with that, my trip to Storage Field Day 8 has officially started. With the first leg of the journey from DUS to AMS already behind me, I can now settle in for the long haul to SFO. It’s bound to be an incredibly exciting week once again and I’m looking forward to meeting up with a lot of familiar faces, two new members of the Tech Field Day family and 10 super exciting companies. Stroopwafels incoming!
Last week I implemented two new Data Domain systems for a new customer who would like to use them as backup targets for their existing Veeam 8 environment. Backups will be replicated to the secondary system to guarantee recoverability even if the first system or data center experiences a catastrophic failure. In this case, replication is handled by the Data Domain system itself. You’d like your backup software to be aware of the replicas at the secondary location. This in turn means Veeam should be able to read from the replica, which turned out to be a bit of a configuration challenge. Bring out the CLI!