Today I visited a customer to connect two RecoverPoint clusters. One RecoverPoint cluster is connected to a Unity array, the other to a VNX. After installing both clusters, we ran the RecoverPoint Connect Cluster wizard and were greeted with an “Internal Error. Contact support” error message. Awesome! Fortunately it turned out to be a pretty basic error which was easy to fix. A short story about RecoverPoint installation types in mixed-array configurations…
Troubleshooting any system requires information about its configuration and how it behaves over time. Unsurprisingly, this also holds true when you’re troubleshooting performance on a Dell EMC VNX. So help your storage engineer and enable performance data logging on the VNX!
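For example, assuming you run naviseccli from a management host (the exact flags below are from memory, so verify them against the CLI reference for your array’s release), enabling statistics logging and Analyzer archiving looks roughly like this:

```sh
# Enable statistics logging on both storage processors
# (replace the SP addresses and credentials with your own)
naviseccli -h <sp_a_ip> -user admin -password '<password>' -scope 0 setstats -on
naviseccli -h <sp_b_ip> -user admin -password '<password>' -scope 0 setstats -on

# Archive performance data every 60 seconds and start Analyzer logging
naviseccli -h <sp_a_ip> analyzer -set -narinterval 60
naviseccli -h <sp_a_ip> analyzer -start

# Verify that data logging is actually running
naviseccli -h <sp_a_ip> analyzer -status
```

The resulting .nar archives are what your storage engineer will later pull into Unisphere Analyzer to see what the array has been doing over time.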
We’re in the midst of a VCE vBlock 340 software upgrade. Part of this upgrade process is upgrading the Cisco Nexus 5K switches that connect the blades and storage to the customer network. After upgrading the first switch, we noticed that the interface for the VNX Unified standby data mover (server_3) was suspended with a “no LACP PDUs” error message. A quick check on the switch that hadn’t been upgraded yet showed that interface to be online. So what’s up with that?
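As a hint of where this is going: a standby data mover doesn’t actively participate in LACP, so it’s worth checking how the upgraded NX-OS release treats member ports that send no LACP PDUs. A hedged sketch of the commands involved (the interface and port-channel numbers are made-up examples):

```
! Check port-channel membership and why the member port is suspended
show port-channel summary
show lacp interface ethernet 1/3

! If a port that sends no LACP PDUs should stay up as an individual link,
! this relaxes the default suspension behavior on the port-channel
interface port-channel 3
  no lacp suspend-individual
```

Whether changing that knob is appropriate for a standby data mover is worth confirming against the NX-OS release notes before applying it.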
A VNX Unified upgrade is fairly easy: Unisphere Service Manager (USM) does most of the heavy lifting. A Block-only system is the simplest of all: you upload a .ndu software package to the system and wait for the update to complete.
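For reference, the same Block upgrade can also be monitored (or driven) from naviseccli. A rough sketch, with an illustrative package name and ndu flags quoted from memory:

```sh
# List the software packages currently installed on the array
naviseccli -h <sp_a_ip> ndu -list

# Install the new Block OE bundle (illustrative package name)
naviseccli -h <sp_a_ip> ndu -install VNX-Block-OE.ndu

# Follow the non-disruptive upgrade as it reboots one SP at a time
naviseccli -h <sp_a_ip> ndu -status
```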
A Unified system is a combined package of a VNX Block system and a File component consisting of one or two Control Stations and at least two data mover blades. In a VNX Unified upgrade you first need to upgrade the File part of the system and afterwards the Block part. For the File upgrade, you need to select a .upg package. But… you can’t download this from the EMC/VCE website. Now what?
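Before going .upg hunting, it helps to know exactly which File OE version you’re running. Assuming SSH access to the primary Control Station (command quoted from memory; check the VNX File documentation for your release):

```sh
# Log in to the primary Control Station and show the installed NAS code version
ssh nasadmin@<control_station_ip>
nas_version
```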
Today EMC announced their new and improved mid-range storage at EMCWorld: say hello to Unity! It will be replacing the VNX1 and VNX2 systems with a simple, modern, flexible and affordable new platform, and can be purchased in either an all-flash or hybrid configuration. There are quite a lot of differences between the new Unity and the old VNX systems, with a number of improvements (available at GA or later) that make me really, really happy. Let’s dive in!
I’m just about ready to start my 18-hour trip to Las Vegas for EMCWorld 2016. The first hop to Miami should be a relatively quiet one (if I can get some sleep); on the second leg I’ll start preparing for the madness that will ensue next week. The agenda is packed with good events and new product launches…
Not all data is accessed equally: some data is more popular than other data that may only be accessed infrequently. With the introduction of FAST VP in the CX4 and VNX series, it became possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices, and each slice is assigned a temperature based on the activity of that slice: heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end users won’t notice a thing. There’s one scenario that will get you in trouble, though: infrequent, heavy use of formerly cold data…
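To see how a pool’s slices are actually spread across tiers, the breakdown can be pulled from the CLI. A sketch with flags quoted from memory, so double-check them against the naviseccli reference:

```sh
# Show the configured pools, including per-tier capacity and data placement
naviseccli -h <sp_ip> storagepool -list -tiers

# Show the auto-tiering relocation schedule and its current status
naviseccli -h <sp_ip> autotiering -info
```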
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array will have a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Bridging the performance gap are solid-state drives, which sit somewhere between DRAM and spinning disks in both performance and cost. There’s one problem, though: a LUN usually isn’t 100% active all of the time, which means that placing a whole LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that ensured only the hottest data landed on those SSDs, and quickly adjusted that placement as the workload changed, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
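To check whether FAST Cache is configured and what state it’s in, something like the following should work (again, flags from memory; verify before relying on them):

```sh
# Show the FAST Cache configuration: state, size, RAID type and backing SSDs
naviseccli -h <sp_ip> cache -fast -info
```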
When troubleshooting performance on a CLARiiON or VNX storage array you’ll often see graphs that resemble something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs, the array starts a process called forced flushing to push writes down to disk and make room in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series there should be far fewer forced flushes and much improved performance.
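You can eyeball the cache state straight from the CLI as well. A hedged example (output fields vary by release; the MCx variant of the command is from memory):

```sh
# MCx (VNX2): show SP cache state and sizes
naviseccli -h <sp_ip> cache -sp -info

# FLARE (CX/VNX1): the classic equivalent, including dirty-page percentages
naviseccli -h <sp_ip> getcache
```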
A big change in the VNX2 hot spare policy compared to earlier VNX and CLARiiON models is the use of permanent hot spares. Whereas the earlier models used dedicated, pre-configured hot spares that were only used for the duration of a drive failure, the VNX2 will spare to any eligible unused drive and NOT switch back to the original drive. I’ve written about this and other new VNX2 features before, but hadn’t been able to try it firsthand yet. Until now: a drive died, yay! Continue reading to see how you can backtrack which drive failed, which drive replaced it, and how to move the drive back to the original physical location (should you want to).
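As a preview of the commands involved (hedged, since the exact syntax differs per release), the VNX2 CLI can show the sparing policy, point out the faulted and sparing drives, and, if you insist on the original layout, copy the data back afterwards:

```sh
# Show the hot spare policy per drive type
naviseccli -h <sp_ip> hotsparepolicy -list

# Find the faulted drive and the drive that was spared to
naviseccli -h <sp_ip> faults -list
naviseccli -h <sp_ip> getdisk -state

# Optionally copy the data back to the replacement drive in the original slot
# (disks addressed as bus_enclosure_disk; these IDs are made-up examples)
naviseccli -h <sp_ip> copytodisk 1_0_4 0_0_5
```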