Back in October we visited Dell EMC for a few Storage Field Day 14 presentations. Walking into the new EBC building we bumped into two racks: one with a VMAX All Flash system and another with an XtremIO X2. Let’s kick off the Storage Field Day 14 report with VMAX All Flash. There’s still a lot of thought going into this enterprise-class storage array…
Troubleshooting any system requires information about the configuration of the system and how it’s behaving over time. Unsurprisingly, this is also valid when you’re troubleshooting performance on a Dell EMC VNX. So help your storage engineer, and enable performance data logging on the VNX!
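A toy illustration of why that periodic logging matters (this is not the VNX's actual logging mechanism, just the idea behind it): sampling a cumulative counter at a fixed interval is what turns raw totals into rates you can plot over time.

```python
# Illustrative sketch only: archives of cumulative counters, sampled at
# a fixed interval, can be converted into per-interval rates (IOPS).
def to_rates(samples, interval_s):
    """samples: cumulative I/O counts taken every interval_s seconds.
    Returns the per-interval rate (I/Os per second)."""
    return [(b - a) / interval_s for a, b in zip(samples, samples[1:])]

# A cumulative I/O counter read every 60 seconds:
print(to_rates([0, 6000, 18000, 21000], 60))  # [100.0, 200.0, 50.0]
```

Without the logged history, all you have is the current counter value and no way to see how the load developed before the complaint came in.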
A VNX Unified upgrade is fairly easy: Unisphere Service Manager (USM) does most of the heavy lifting. A Block-only system is the simplest of all: you upload a .ndu software package to the system and wait for the update to complete.
A Unified system is a combined package of a VNX Block system and a File component, consisting of one or two Control Stations and at least two data mover blades. In a VNX Unified upgrade you first need to upgrade the File part of the system and afterwards the Block part. For the File upgrade you need to select an .upg package. But… you can’t download this from the EMC/VCE website. Now what?
Recently I ran a project for a new EMC Symmetrix VMAX 10k installation. The install was a breeze and the migration of data to the system fairly straightforward. The customer saw some good performance improvements on the storage front and there is plenty of capacity in the system to cater for immediate growth. Yet when I opened the Unisphere for VMAX interface and browsed to the performance tab, my heart skipped a beat. What are those red queue depth utilization bars? We were seeing good response times, weren’t we? Were we at risk? How about scalability? Let’s dig deeper and find out.
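The metric behind those red bars can be sketched in one line: queue depth utilization is the number of outstanding I/Os as a fraction of the maximum a port will queue. The max_queue_depth value below is a made-up example, not an actual VMAX constant.

```python
# Illustrative only: queue depth utilization as a percentage of the
# maximum number of outstanding I/Os a front-end port will queue.
def queue_depth_utilization(outstanding_ios, max_queue_depth):
    """Return utilization as a percentage (0-100)."""
    return 100.0 * outstanding_ios / max_queue_depth

# A port with 28 of 32 queue slots in use is at 87.5% utilization: it
# may still deliver good response times, but headroom is running out.
print(queue_depth_utilization(28, 32))  # 87.5
```

This is why the bars can glow red while response times still look fine: utilization warns about remaining headroom, not about current latency.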
Several weeks ago we performed a resiliency test on a VMAX10k that was about to be put into production. The customer wanted confirmation that the array was wired up properly and would respond correctly in case there were any issues like a power or disk failure. This is fairly standard testing and makes sense: better to pull a power plug and see the array go down while there is no production running against it, right?
We pulled one power feed from each system bay and storage bay. Obviously: no problem for the VMAX, it dialed out, notified EMC it lost power on a couple of feeds and that’s it. Next up we yanked out a drive: I/O continued, the array dialed out to EMC that something was wrong with a drive, but… we didn’t see anything in Unisphere for VMAX!
Not all data is accessed equally. Some data is popular, while other data may be accessed only infrequently. With the introduction of FAST VP in the CX4 & VNX series it is possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices and assigns each slice a temperature based on its activity. Heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, etc… This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs and your end-users won’t notice a thing. There’s one scenario that will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
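The placement logic described above can be sketched as follows. This is a deliberately simplified illustration, not EMC's actual relocation algorithm: rank slices by temperature, fill the fastest tier first, and spill the remainder down to slower tiers.

```python
# Minimal sketch of FAST VP-style slice placement (illustrative only).
def place_slices(slices, tiers):
    """slices: list of (slice_id, temperature).
    tiers: list of (tier_name, capacity_in_slices), fastest first.
    Returns {slice_id: tier_name} with the hottest slices on the fastest tier."""
    placement = {}
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)
    i = 0
    for tier_name, capacity in tiers:
        for slice_id, _temp in ranked[i:i + capacity]:
            placement[slice_id] = tier_name
        i += capacity
    return placement

slices = [("s1", 900), ("s2", 10), ("s3", 500), ("s4", 5)]
tiers = [("SSD", 1), ("SAS", 2), ("NL-SAS", 10)]
print(place_slices(slices, tiers))
# s1 lands on SSD, s3 and s2 on SAS, s4 on NL-SAS
```

The trouble scenario from the paragraph above falls straight out of this model: a slice that has been cold for weeks sits on NL-SAS, and a sudden burst of heavy I/O hits the slowest disks until the next relocation window moves it back up.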
When troubleshooting performance on a CLARiiON or VNX storage array you’ll often see graphs that look something like this: write cache maxing out at 100% on one or even both storage processors. Once this occurs the array starts a process called forced flushing to flush writes to disk and free up space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements made in the VNX2 series there should be far fewer forced flushes and much better performance.
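The watermark behaviour behind this can be sketched as a simple state machine. Note that the 80/60 watermark percentages below are hypothetical example values, not the actual CLARiiON/VNX defaults.

```python
# Illustrative sketch of write-cache watermark behaviour (the 80/60
# watermarks are made-up example values). Incoming writes land in cache;
# past the high watermark the SP destages in the background, and a full
# cache forces flushing at the expense of host I/O.
def cache_state(fill_pct, high_watermark=80, low_watermark=60):
    if fill_pct >= 100:
        return "forced flush"     # host writes stall while cache drains
    if fill_pct >= high_watermark:
        return "watermark flush"  # background destaging toward low_watermark
    return "idle"                 # plenty of free cache pages

for fill in (50, 85, 100):
    print(fill, cache_state(fill))
```

The painful part is the top branch: once the cache is full, the array can only accept new writes as fast as it can empty pages to disk, so every host on the array feels the back-end's speed directly.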
A big change in the VNX2 hot spare policy compared to earlier VNX or CLARiiON models is the use of permanent hot spares. Whereas the earlier models had dedicated, configured hot spares that were only used for the duration of a drive failure, the VNX2 will spare to any eligible unused drive and NOT switch back to the original drive. I’ve written about this and other new VNX2 features but hadn’t had a chance to try it first hand yet. Until now: a drive died, yay! Continue reading to see how you can trace back which drive failed, which drive replaced it and how to move the data back to the original physical location (should you want to).
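The selection side of permanent hot sparing can be sketched like this. This is a simplified illustration, not EMC's actual policy engine: pick an eligible unused drive of the same type with at least the failed drive's capacity, and leave the data there afterwards.

```python
# Simplified sketch of permanent hot spare selection (illustrative only):
# any eligible unused drive can become the spare, and there is no
# automatic copy-back to the replaced drive.
def pick_spare(failed, unused_drives):
    """failed / unused_drives: dicts with 'id', 'type' and 'size_gb'.
    Returns the chosen spare (smallest eligible drive) or None."""
    eligible = [d for d in unused_drives
                if d["type"] == failed["type"]
                and d["size_gb"] >= failed["size_gb"]]
    return min(eligible, key=lambda d: d["size_gb"], default=None)

failed = {"id": "1_0_4", "type": "SAS", "size_gb": 600}
unused = [{"id": "0_1_10", "type": "SAS", "size_gb": 900},
          {"id": "0_1_11", "type": "SAS", "size_gb": 600},
          {"id": "0_1_12", "type": "NL-SAS", "size_gb": 2000}]
print(pick_spare(failed, unused)["id"])  # 0_1_11: smallest eligible SAS drive
```

The practical consequence is exactly what the post explores: after a failure your data may live in a different enclosure than your documentation says, so you need to trace where it went.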
In September 2013 EMC announced the new generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. Apart from the vast performance increase there’s also a boatload of new features: deduplication, active-active LUNs, smaller (256MB) chunks for FAST VP, persistent hot spares, etc. Read more about that in my previous post.
It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess which resources I claimed to install them. Me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone stole my standby power supplies (SPS)!
Over the last couple of months I’ve been busy phasing out an old EMC CLARiiON CX3 system and migrating all the data to newer VNX and Isilon systems. The hard work paid off: the CX3 is now empty and we can start to decommission it. But before we ship it back to EMC we need to perform some form of CLARiiON data erasure to make sure the data doesn’t fall into the wrong hands.
Update 2019-01: You can also use these commands on a VNX. I’ve updated the post with several additional screenshots and fixed a few typos.