Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED program during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. In early 2013 this changed and EMC Partners could also enroll. Too bad I ran out of time with a mountain of projects and other training material… until now! So what does it mean when someone is USPEED certified?
About 7 or 8 months ago we installed two 3-node Isilon clusters at a hospital, destined to host PACS data. Since then it’s been all quiet on the Isilon front: no hardware failures, no performance problems. No complaints there of course, but also… slightly… boring. FINALLY, a couple of days ago the Isilon sent us an email: “Device disconnected. Boot mirror is critical. Unhealthy Isilon boot disk. Mirror is degraded.” That kind of stuff. Woohoo, ACTION!
XtremIO is the new all-flash array from EMC, announced not too long ago. Flash has an enormous performance advantage over traditional spinning disks. Although there are no moving parts in a solid state drive, they can still fail! Data on the XtremIO X-Brick will still need to be protected against one or multiple drive failures. In traditional arrays (or servers) this is done using RAID (Redundant Array of Independent Disks). We could simply use RAID in the XtremIO array, but SSDs behave fundamentally differently from spinning disks. So while we’re at it, why not reinvent our approach to protecting data? This is where XtremIO XDP comes in.
Over the last couple of months I’ve been busy phasing out an old EMC CLARiiON CX3 system and migrating all the data to newer VNX and/or Isilon systems. The hard work paid off: the CX3 is now empty and we can start to decommission it. But before we ship it back to EMC, we need to perform some form of CLARiiON data erasure to make sure the data doesn’t fall into the wrong hands.
Update 2019-01: You can also use these commands on a VNX. I’ve updated the post with several additional screenshots and fixed a few typos.
The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era where flash wasn’t around, or was extremely expensive, and thus simply weren’t built to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that were never a bottleneck with mechanical disks now start to clog up your system. To accommodate this increased use of flash drives, the VNX system was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!
EMC sends out a VNX Uptime Bulletin every quarter to update customers on best practices and fixes that will help you achieve the maximum possible uptime and robustness for your VNX. You can subscribe to it as you would to any other ETA (EMC Technical Advisory): log in at http://support.emc.com, go to Support by Product, open your product page (in this case the VNX) and click “Get Advisory Alerts” to subscribe. This bulletin discusses pools and LUN ownership, vault drives, software versions, etc.
LUNs on a storage system represent the blobs of storage that are allocated to a server. A (VNX) storage admin creates a LUN on a RAID Group or Storage Pool and assigns it to a server. The server admin discovers this LUN, formats it, mounts it (or assigns a drive letter) and starts to use it. Storage 101. But there’s more to it than just carving LUNs out of a big pile of terabytes. One important aspect is LUN ownership: which storage processor will process the I/O for that specific LUN?!
Once your Isilon cluster is up and running, you’ll want to keep an eye on it. A piece of software that’s extremely useful for monitoring both performance and capacity usage is InsightIQ. Easy to set up, it’s powerful in both proactive and reactive monitoring scenarios. Either sit back and watch the scheduled reports land in your mailbox, or take a more active approach and drill down to find the source of a performance problem. Let’s explore further!
Earlier this month EMC announced the new VNX series, which promises more performance and capacity at a lower cost per GB and a smaller footprint. The hashtag for the event was #Speed2Lead, which was trending on Twitter during the official event and the weeks leading up to the Mega Launch in Milan, Italy. With performance being key in the new systems, the announcement was built around the Monza race track, which had the Formula 1 circus in town. Guess what the logo for the launch was?
I myself was on summer holidays during the big event (ending up only a hundred miles away from Milan, albeit a week late ;)), so I couldn’t do much more than refresh Twitter and get my timeline blasted to bits. So consider this a catch-up post!