CLARiiON and VNX Data Erasure – DIY edition

Over the last couple of months I’ve been busy phasing out an old EMC CLARiiON CX3 system and migrating all the data to newer VNX and Isilon systems. The hard work paid off: the CX3 is now empty and we can start to decommission it. But before we ship it back to EMC, we need some form of CLARiiON data erasure to make sure the data doesn’t fall into the wrong hands.


Update 2019-01: You can also use these commands on a VNX. I’ve updated the post with several additional screenshots and fixed a few typos.

Do you need a certificate?

If you need to show proof to auditors that the array was completely wiped of data, there’s only one real option. EMC offers CLARiiON Data Erasure as a professional service. At the end of the engagement they’ll present you with a certificate and a list of any disks that didn’t erase properly, and you’re covered.

A quick check determined that we do not have such a strict policy and do not need a certificate. We DO need to make sure data cannot be easily retrieved, which is pretty much common sense in my opinion. If someone gets their hands on a drive, they shouldn’t be able to read the data.


Let’s assume I leave the LUNs bound on the CLARiiON and pull out a drive. In a worst case scenario, assume I’ve gotten my hands on a RAID1 drive: I now have a fully functioning copy of the data that I can play around with. Which is bad news…

At the other end of the spectrum you have the full (DoD 5220.22-M approved) 7-pass overwrite, which the CLARiiON Data Erasure service also uses. This is done to eliminate data remanence, or residual data on the platters. The theory behind it is simple: say you’ve got a platter filled with binary data and you erase it all (write a zero to every possible location). If you then removed the platter and placed it under specialized laboratory equipment, you’d see a difference in magnetism between “true zeroes” and “zeroes that were formerly a one”. There’s a slight amount of residual magnetism left.

Digging a bit further it seems that this is slightly exaggerated. NIST publication 800-88 Rev 1 states the following:

For storage devices containing Legacy Magnetic media, a single overwrite pass with a fixed pattern such as 0s typically prevents recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data.

That sounds good enough for our purposes.

DIY data erasure

In previous engagements (which also didn’t require a certificate) I’ve always used a combination of removing all LUNs and RAID groups, swapping drive positions, creating new random RAID groups and LUNs (and waiting for the bind process to complete), attaching the LUNs to a server, and using software to overwrite the data. A CLARiiON LUN bind is followed by an automatic zeroing of the space, which erases all previous traces of data, but that’s hard to report to management. “Yeah, if you look in the event logs and see this and that event code…”

All in all a very time-consuming operation with no fixed procedure and very limited reporting on whether the data is actually gone. I didn’t want to go through that entire nightmare again this time. I know from a previous erasure that EMC has a tool to connect to the array and wipe all the drives in parallel. But even though I work for an EMC partner and don’t need a certificate, I’m not allowed to use that tool myself. So Google it is!

I ran into a blog post called “How to scrub/zero out data on a decommissioned Clariion” which talks about the zerodisk CLI command. Included in that command is a switch that checks whether or not a drive has been zeroed. That’s exactly what we need!

Zerodisk procedure

First of all, you might want to retrieve the zero mark for all the disks in your array. The command syntax is as follows:

naviseccli -h <IP_of_SP> zerodisk -messner <drive_ID> getzeromark

As long as this zero mark is NOT 69704 or 69760, your drive is NOT zeroed. Since this array contains 210 drives, I wasn’t really planning on entering all the drive IDs manually. So I exported the full list of drives from Unisphere and used some Excel concatenate formulas to generate the drive IDs (0_0_0 etc.). Only then did I try substituting “all” for the drive ID in the command… sigh! So yeah, do that instead; it saves time.
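Once you have the zero marks for all drives saved to a file, a quick filter will list any drives that still need wiping. This is just a sketch: the file name is my own, and the awk pattern assumes each drive’s mark appears at the end of a line such as `Bus 0 Enclosure 0 Disk 1 Zero Mark: 1234` — adjust the field if your FLARE release formats the output differently.

```shell
# Illustrative sample of a saved getzeromark report; in real life you'd
# generate it with:
#   naviseccli -h <IP_of_SP> zerodisk -messner all getzeromark > zeromarks.txt
printf '%s\n' \
  'Bus 0 Enclosure 0 Disk 0 Zero Mark: 69704' \
  'Bus 0 Enclosure 0 Disk 1 Zero Mark: 1234' \
  'Bus 0 Enclosure 0 Disk 2 Zero Mark: 69760' > zeromarks.txt

# Any line printed here is a drive that is NOT fully zeroed
awk '/Zero Mark/ && $NF != 69704 && $NF != 69760 {print}' zeromarks.txt
```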

I don’t want to wipe my vault drives just yet. There’s conflicting information on the internet about whether the zerodisk command removes your FLARE code, so better safe than sorry: create a RAID group and LUN on your vault drives. The zerodisk command will not run against disks that have LUNs bound, so you can now be certain your vault is protected against accidental zeroing.
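On a CX3 the vault spans drives 0_0_0 through 0_0_4, so a minimal sketch of that protective bind could look like this (RAID group ID 200 and LUN ID 200 are arbitrary picks of mine — use whatever is free on your array):

```shell
# Create a RAID group on the five vault drives (CX3 vault: 0_0_0 - 0_0_4)
naviseccli -h <IP_of_SP> createrg 200 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4

# Bind a small RAID5 LUN on it; zerodisk will now refuse to touch these drives
naviseccli -h <IP_of_SP> bind r5 200 -rg 200 -cap 1 -sq gb
```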

naviseccli -h <IP_of_SP> zerodisk -messner <drive_ID> start
naviseccli -h <IP_of_SP> zerodisk -messner <drive_ID> status

For a single disk, enter the commands above and zeroing will start. A single 1TB disk wiped at roughly 5 minutes per 2%, so the process should complete in a little over four hours.
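If you don’t want to keep re-running the status command by hand, a trivial watch loop will do. A sketch — just Ctrl-C out of it once everything reports complete:

```shell
# Print the zeroing status for all drives every 10 minutes,
# with a timestamp so you can gauge the wipe rate over time
while true; do
  date
  naviseccli -h <IP_of_SP> zerodisk -messner all status
  sleep 600
done
```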

If you’re feeling confident, substitute “all” for the drive ID and watch the magic happen. In the output below you’ll see that the vault drives are skipped because of the bound LUN; the other error is the drive that was already zeroing.

[Screenshot: zerodisk starting for all disks; the vault drives and the already-zeroing drive report errors]

Now just wait until all disks are finished, run the getzeromark command, export the output to a file to prove all drives are empty, and you’re set! Fun fact: this CX3 is so old that the sudden I/O of all drives zeroing at once instantly faulted two drives.
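For the report itself, redirecting the final getzeromark run to a file is enough for our purposes — a sketch, with an arbitrary file name of my own:

```shell
# Save the final zero marks as lightweight proof that the drives are empty
naviseccli -h <IP_of_SP> zerodisk -messner all getzeromark > zeromarks_final.txt

# Count how many drives report a zeroed mark; this should match the
# total drive count (210 in my case)
grep -cE '69704|69760' zeromarks_final.txt
```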

[Screenshot: CLARiiON data erasure complete]


VNX Data Erasure

The exact same procedure applies to a VNX: you can zero your disks with the same naviseccli zerodisk command. On the VNX, however, the percentage and status commands aren’t that useful; they’ll just show you an absurd completion percentage.

Alternatively, you can monitor the disks in the Unisphere GUI. Disks that are still erasing show up as “binding”:

[Screenshot: VNX disks erasing, shown as “binding” in Unisphere]
I noticed I hadn’t wiped the FAST Cache disks (0_0_4-0_0_5); this was fixed after taking the screenshot.

Comments, suggestions, questions? Plenty of room in the comments section! Happy erasing!