Last week I attended Dell Technologies World 2018 in Las Vegas. I was invited by Dell Technologies via the influencer program, which basically means we get access to the same content as the press and analysts. I spent the week alternating between general keynotes, briefings, interviews and catching up with friends and colleagues. In this post I’ll try to summarize some of the main topics of Dell Technologies World 2018, focusing on what appealed most to me.
Good news: I will be attending Dell Technologies World 2018, scheduled for April 30th to May 3rd at the Sands Expo convention center in Las Vegas. Dell has been kind enough to invite me to this conference, and I’m looking forward to meeting the former Dell EMC Elect and many other industry friends.
Moving to the cloud comes with many advantages. There are the obvious ones: if you put all your servers or services in the public cloud, you do not need your own datacenter. Provisioning is typically fast, which shortens the time needed to spin up a new service. Plus, pretty much every cloud is based on a “pay as you use/grow” billing model, giving you predictable costs.
Different public cloud providers charge different prices for their services. So what if you could use the best-value components from each cloud provider? Say hello to the multi-cloud.
The Dell EMC High-End Systems Division talked about two systems: first the VMAX All Flash, and later the XtremIO X2. This post is about the latter. The XtremIO X2 builds upon the foundation of the original “old” XtremIO, but also does a couple of things differently. This post will explore those differences a bit, and will also talk about asynchronous and synchronous replication.
Back in October we visited Dell EMC for a few Storage Field Day 14 presentations. Walking into the new EBC building, we bumped into two racks: one with a VMAX All Flash system and another with an XtremIO X2. Let’s kick off the Storage Field Day 14 report with the VMAX All Flash. There’s still a lot of thought going into this enterprise-class storage array…
Recently I’ve been upgrading a vSphere 5.5 environment to vSphere 6.5U1. The vCenter upgrade process is pretty bulletproof by now: the installer is almost completely automated. I did run into some issues during the ESXi upgrade though, one of which was that some conflicting VIBs were present in the old installations, preventing the ESXi 6.5U1 upgrade from starting. Time to start hacking in the CLI!
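As a rough sketch of what that CLI work can look like (the VIB name below is a placeholder, not the actual conflicting package from this environment), you can list the installed VIBs with esxcli and remove the offending ones before retrying the upgrade:

```
# Show all VIBs installed on the host, including vendor and acceptance level
esxcli software vib list

# Remove a conflicting VIB by name, then retry the 6.5U1 upgrade
# (example-conflicting-vib is a placeholder name)
esxcli software vib remove --vibname=example-conflicting-vib
```

A reboot of the host is usually required after removing VIBs before the upgrade will proceed cleanly.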
Today I visited a customer to connect two RecoverPoint clusters: one cluster is connected to a Unity array, the other to a VNX. After installing both clusters, we ran the RecoverPoint Connect Cluster wizard and were greeted with an “Internal Error. Contact support” error message. Awesome! Fortunately it turned out to be a pretty basic error that was easy to fix. A short story about RecoverPoint installation types in mixed-array configurations…
Storage Field Day 14 is taking place next month, on 8-10 November in Silicon Valley. After having to skip one Storage Field Day, I’m glad to be back at the table for this one. If you look at the event page, it might seem there aren’t that many presentations going on: only four companies are listed as of today. But don’t be mistaken: Dell EMC will have 5 presentations, so we will not be slacking!
Last month I performed an Isilon tech refresh of two clusters running NL400 nodes. In both clusters, the old NL400 36TB nodes were replaced with 72TB NL410 nodes with some SSD capacity. The first step in the whole process was the replacement of the InfiniBand switches. Since the clusters were fairly old, a OneFS upgrade was also on the list before the clusters could recognize the NL410 nodes. Dell EMC has extensive documentation on the whole OneFS upgrade process: check the support website, because there are a lot of version dependencies. Finally, everything was prepared and I could begin with the actual Isilon tech refresh: getting the new Isilon nodes up and running, moving the data and removing the old nodes.
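To give an idea of the kind of commands involved (a minimal sketch assuming OneFS 8.x CLI syntax; the exact commands depend on your OneFS version, so check the documentation first), retiring an old node once the new nodes have joined the cluster comes down to a smartfail:

```
# Verify overall cluster and node health before making changes
isi status

# Smartfail an old NL400 node; OneFS migrates its data to the
# remaining nodes before the node is removed from the cluster
# (the logical node number 1 is just an example)
isi devices node smartfail --node-lnn 1
```

The smartfail can take a long time on large nodes, since all data has to be re-protected elsewhere in the cluster before the node actually leaves.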
If you’re remotely managing a Linux machine, you’ll probably use an SSH connection to run commands on that machine. There’s one problem with this approach: if you close the SSH connection, any long-running jobs/commands will halt. If you know a job will take a long time and you won’t be able to babysit the SSH connection, you can plan accordingly. But what if you underestimated the time a job will take, and you need to disconnect anyway? Here’s how to keep the job running AND make it home in time for dinner!
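As a quick illustration of the general idea (one common approach using nohup and disown, not necessarily the only option; the script name below is just a placeholder), you can either start the job detached from the terminal up front, or rescue a job that is already running before you disconnect:

```
# Start the job immune to the hangup signal (SIGHUP) that is sent
# when the SSH session closes, and capture its output in a log file
nohup ./long_running_job.sh > job.log 2>&1 &

# Or rescue a job that is already running in the foreground:
# suspend it with Ctrl-Z, continue it in the background, then
# tell the shell not to send it SIGHUP on exit
bg %1
disown -h %1
```

Terminal multiplexers like screen or tmux achieve the same goal and let you re-attach to the session later to check on the job.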