The Dell EMC High-End Systems Division talked about two systems: first the VMAX All Flash, and later the XtremIO X2. This post is about the latter. The XtremIO X2 builds upon the foundation of the original “old” XtremIO, but also does a couple of things differently. This post will explore those differences a bit, and will also talk about asynchronous and synchronous replication.
Back in October we visited Dell EMC for a few Storage Field Day 14 presentations. Walking into the new EBC building we bumped into two racks: one with a VMAX All Flash system and another with an XtremIO X2. Let’s kick off the Storage Field Day 14 report with VMAX All Flash. There’s still a lot of thought going into this enterprise-class storage array…
Today I visited a customer to connect two RecoverPoint clusters. One RecoverPoint cluster is connected to a Unity array, the other to a VNX. After installing both clusters, we ran the RecoverPoint Connect Cluster wizard and were greeted with an “Internal Error. Contact support” error message. Awesome! Fortunately it turned out to be a pretty basic error that was easy to fix. A short story about RecoverPoint installation types in mixed-array configurations…
Storage Field Day 14 is taking place next month on 8-10 November in Silicon Valley. After having to skip one Storage Field Day, I’m glad to be back at the table for this one. If you look at the event page, it might seem there aren’t that many presentations going on: only 4 companies are listed as of today. But don’t be mistaken: Dell EMC will have 5 presentations, so we will not be slacking!
Last month I performed an Isilon tech refresh of two clusters running NL400 nodes. In both clusters, the old 36TB NL400 nodes were replaced with 72TB NL410 nodes with some SSD capacity. The first step in the whole process was the replacement of the InfiniBand switches. Since the clusters were fairly old, a OneFS upgrade was also on the list before the clusters could recognize the NL410 nodes. Dell EMC has extensive documentation on the whole OneFS upgrade process: check the support website, because there are a lot of version dependencies. Finally, everything was prepared and I could begin with the actual Isilon tech refresh: getting the new Isilon nodes up and running, moving the data, and removing the old nodes.
Troubleshooting any system requires information about the configuration of the system and how it’s behaving over time. Unsurprisingly, this is also valid when you’re troubleshooting performance on a Dell EMC VNX. So help your storage engineer, and enable performance data logging on the VNX!
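On the Block side, statistics logging can be switched on from the CLI as well as from Unisphere. A minimal sketch using naviseccli, assuming the host agent/CLI is installed and reachable; the SP address is a placeholder, and you should verify the analyzer flags and supported interval values against the CLI reference for your VNX OE release:

```shell
# Placeholder address of storage processor SP-A -- replace with your own.
SPA=192.0.2.10

# Set the statistics polling interval (in seconds) and enable
# periodic archiving so .nar files are written automatically.
naviseccli -h "$SPA" analyzer -set -narinterval 600 -periodicarchiving 1

# Start performance data logging.
naviseccli -h "$SPA" analyzer -start

# Verify that logging is running.
naviseccli -h "$SPA" analyzer -status
```

The resulting .nar archives are what your storage engineer will feed into Unisphere Analyzer when troubleshooting, so having logging enabled before a problem occurs saves a lot of back-and-forth.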
This is a long overdue post covering Dell EMC World 2017 in Las Vegas and the announcements that were made during the event. I’ll recap some of the topics that resonated most with me, namely that cloud computing is not a place but a way of doing IT. Secondly, I spoke briefly with some of the Dell EMC server guys, who gave me hope that the Dell 14th generation servers are a big step up from previous experiences. Finally, I’ll share a bit of insight into what the Dell EMC Elect picked up during an interview with John Roese, and will link to a few posts from friends that attended Dell EMC World 2017. Maybe it’s a bit more of a “Dear diary”-style post, so hang in there.
A VNX Unified upgrade is fairly easy: Unisphere Service Manager (USM) does most of the heavy lifting. A Block only system is the simplest of all: you upload a .ndu software package to the system and wait for the update to complete.
A Unified system is a combined package of a VNX Block system and a File component consisting of one or two Control Stations and at least two Data Mover blades. In a VNX Unified upgrade, you first need to upgrade the File part of the system and afterwards the Block part. For the File upgrade, you need to select an .upg package. But… you can’t download this from the EMC/VCE website. Now what?