Today I visited a customer to connect two RecoverPoint clusters: one RecoverPoint cluster is connected to a Unity array, the other to a VNX. After installing both clusters, we ran the RecoverPoint Connect Cluster wizard and were greeted with an “Internal Error. Contact support” error message. Awesome! Fortunately, it turned out to be a pretty basic error that was easy to fix. A short story about RecoverPoint installation types in mixed-array configurations…
Storage Field Day 14 is taking place next month, 8-10 November, in Silicon Valley. After having to skip one Storage Field Day, I’m glad to be back at the table for this one. If you look at the event page, it might seem there aren’t that many presentations going on: only 4 companies are listed as of today. But don’t be mistaken: Dell EMC will have 5 presentations, so we will not be slacking!
Last month I performed an Isilon tech refresh of two clusters running NL400 nodes. In both clusters, the old 36TB NL400 nodes were replaced with 72TB NL410 nodes with some SSD capacity. The first step in the whole process was the replacement of the InfiniBand switches. Since the clusters were fairly old, a OneFS upgrade was also on the list before the clusters could recognize the NL410 nodes. Dell EMC has extensive documentation on the whole OneFS upgrade process: check the support website, because there are a lot of version dependencies. Finally, everything was prepared and I could begin the actual Isilon tech refresh: getting the new Isilon nodes up and running, moving the data, and removing the old nodes.
If you’re remotely managing a Linux machine, you’ll probably use an SSH connection to run commands on that machine. There’s one problem with this approach: if you close the SSH connection, any long-running jobs or commands started in that session will be terminated. If you know a job will take a long time and you won’t be able to babysit the SSH connection, you can plan accordingly. But what if you underestimated the time a job will take, and you need to disconnect anyway? Here’s how to keep the job running AND make it home in time for dinner!
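As a quick preview, one common way to do this is to detach the job from the terminal with `nohup` (or run it inside `tmux`/`screen`). A minimal sketch, using `sleep` as a stand-in for the real long-running command:

```shell
# Start the job immune to SIGHUP so it survives the SSH disconnect;
# 'sleep 300' stands in for the real long-running command.
nohup sleep 300 > job.log 2>&1 &
echo $! > job.pid                 # save the PID so we can check on it later

# After reconnecting, verify the job is still alive:
kill -0 "$(cat job.pid)" && echo "job still running"
```

For a job that’s already running in the foreground, suspending it with Ctrl+Z, resuming it in the background with `bg`, and then detaching it with `disown -h` achieves the same effect in bash.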
While upgrading OneFS, it’s important to keep the InsightIQ software version compatible with the Isilon systems. In this case, InsightIQ hadn’t been updated for a while and I had to upgrade from 3.0 -> 3.1 -> 3.2 -> 4.x. The actual upgrade process isn’t too hard (it just takes a lot of time), but there’s one little prerequisite for the 3.1 -> 3.2 upgrade: a minimum of 502MB free space in the root partition. As you can see in the screenshot, I wasn’t even close to that requirement. I got to 357 MB, and that’s after cleaning up redundant stuff. Time to add some more disk space and extend the root partition!
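For a preview of the general approach: on a Linux VM whose root filesystem lives on LVM, you can add a new virtual disk and grow the root logical volume into it. The device name (`/dev/sdb`), volume group and logical volume names (`vg_root/lv_root`), and the ext4 assumption below are all hypothetical, so verify your own layout with `lsblk` and `vgs` first. A sketch of the steps:

```shell
# Assumes a freshly added virtual disk appeared as /dev/sdb and the root
# filesystem is on the (hypothetical) logical volume vg_root/lv_root.
pvcreate /dev/sdb                            # initialize the new disk for LVM
vgextend vg_root /dev/sdb                    # add it to the root volume group
lvextend -l +100%FREE /dev/vg_root/lv_root   # grow the root LV into the new space
resize2fs /dev/vg_root/lv_root               # grow the ext4 filesystem online
df -h /                                      # confirm the extra free space
```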
Every once in a while you might need to replace an Isilon InfiniBand switch: possibly because of a broken switch, the need for more ports, or because the old switch is… well, too old. Good news: it’s a fairly straightforward job. And if your cluster has two switches, you can replace one switch at a time without an outage.
Troubleshooting any system requires information about the configuration of the system and how it behaves over time. Unsurprisingly, this also applies when you’re troubleshooting performance on a Dell EMC VNX. So help your storage engineer, and enable performance data logging on the VNX!
This is a long overdue post covering Dell EMC World 2017 in Las Vegas and the announcements made during the event. I’ll recap some of the topics that resonated most with me, first and foremost that cloud computing is not a place but a way of doing IT. Secondly, I spoke briefly with some of the Dell EMC server guys, who gave me hope that the Dell 14th generation servers are a big step up from previous experiences. Finally, I’ll share a bit of insight into what the Dell EMC Elect picked up during an interview with John Roese, and link to a few posts from friends who attended Dell EMC World 2017. Maybe it’s a bit more of a “Dear diary”-style post, so hang in there.
Excelero Storage launched their NVMesh product back in March 2017 at Storage Field Day 12. NVMesh is a software-defined storage solution using commodity servers and NVMe devices. Using NVMesh and the Excelero RDDA protocol, we saw some mind-blowing performance numbers, both in raw IOPS and in latency, while keeping hardware and licensing costs low.
We’re in the midst of a VCE vBlock 340 software upgrade. Part of this process is upgrading the Cisco Nexus 5K switches that connect the blades and storage to the customer network. After upgrading the first switch, we suddenly noticed that the VNX Unified standby data mover (server_3) interface was suspended with a “no LACP PDUs” error message. A quick check on the switch that hadn’t been upgraded yet showed that interface to be online. So what’s up with that?