EMC World 2014 is back in town, this time with the REDEFINE tagline. After some logistical challenges getting here, the show is on the road: general sessions, breakout sessions, hands-on labs (HOL). So what’s up with the REDEFINE tagline? What are we redefining in the IT / data infrastructure? And what are the EMC Elect doing at EMC World when not flooding your Twitter feed?
Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED programs during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. At the beginning of 2013 this changed and EMC partners could also enroll. Too bad I ran out of time with a mountain of projects and other training material… up until now! So what does it mean when someone is USPEED certified?
XtremIO is the new all-flash array from EMC, announced not too long ago. Flash has an enormous performance advantage over traditional spinning disks. But although there are no moving parts in a solid state drive, it can still fail! Data on the XtremIO X-Brick will still need to be protected against one or more drive failures. In traditional arrays (or servers) this is done using RAID (Redundant Array of Independent Disks). We could simply use RAID in the XtremIO array, but SSDs behave fundamentally differently from spinning disks. So while we’re at it, why not reinvent our approach to protecting data? This is where XtremIO XDP comes in.
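To see what “protecting data against a drive failure” means in its simplest form, here is a minimal single-parity XOR sketch, the core trick behind classic RAID. To be clear: this is not XDP’s actual algorithm (XDP uses a more sophisticated dual-parity, content-aware scheme); it only illustrates how one lost block can be rebuilt from the survivors plus parity.

```python
# Minimal single-parity (RAID-style XOR) sketch.
# Illustration only -- NOT the actual XtremIO XDP algorithm.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Four "drives", each holding one data block of a stripe.
stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_blocks(stripe)          # stored on a fifth drive

# Drive 3 fails: rebuild its block from the survivors plus parity.
survivors = [stripe[0], stripe[1], stripe[3]]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == b"CCCC"            # the lost block is recovered
```

Single XOR parity survives exactly one failure per stripe; surviving two simultaneous drive failures (which XDP does) requires a second, independent parity calculation.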
The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era when flash wasn’t around or was extremely expensive, so they simply weren’t built to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that were never a bottleneck with mechanical disks now start to clog up your system. To accommodate this increased use of flash drives the VNX was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th, 2013!
LUNs on a storage system represent the blobs of storage that are allocated to a server. A (VNX) storage admin creates a LUN on a RAID group or storage pool and assigns it to a server. The server admin discovers the LUN, formats it, mounts it (or assigns a drive letter) and starts using it. Storage 101. But there’s more to it than just carving LUNs out of a big pile of terabytes. One important aspect is LUN ownership: which storage processor will process the I/O for that specific LUN?!
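The ownership idea can be captured in a toy model: each LUN has a default owner (set by the storage admin) and a current owner, and when the owning storage processor goes down the LUN “trespasses” to its peer. The sketch below is a simplified illustration of that active/passive behaviour, not the actual VNX (FLARE/MCx) logic; the SP names and method names are mine.

```python
# Toy model of LUN ownership on a dual-SP array (VNX-style).
# Simplified illustration -- not the actual array firmware logic.

class Lun:
    def __init__(self, name, default_owner):
        self.name = name
        self.default_owner = default_owner   # set by the storage admin
        self.current_owner = default_owner   # may change via trespass

    def route_io(self, alive_sps):
        """Return the SP that services I/O; trespass if the owner is down."""
        if self.current_owner not in alive_sps:
            # Owning SP unavailable: the peer SP takes over the LUN.
            self.current_owner = "SPB" if self.current_owner == "SPA" else "SPA"
        return self.current_owner

lun = Lun("LUN_42", default_owner="SPA")
print(lun.route_io({"SPA", "SPB"}))  # normal operation: SPA services the I/O
print(lun.route_io({"SPB"}))         # SPA failed: LUN trespasses to SPB
```

In a real environment you also care about balancing default ownership across both SPs, so one SP doesn’t end up doing all the work.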
Once your Isilon cluster is up and running, you’ll want to keep an eye on it. A piece of software that’s extremely useful for monitoring both performance and capacity usage is InsightIQ. It’s easy to set up and extremely powerful in both proactive and reactive monitoring scenarios. Either sit back and watch the scheduled reports land in your mailbox, or take a more active approach and drill down to find the source of a performance problem. Let’s explore further!
Earlier this month EMC announced the new VNX series, which promises more performance and capacity at a lower cost per GB and in a smaller footprint. The hashtag for the event, #Speed2Lead, was trending on Twitter during the official event and in the weeks leading up to the Mega Launch in Milan, Italy. With performance being key in the new systems, the announcement was built around the Monza race track, where the Formula 1 circus was in town. Guess what the logo for the launch was?
I myself was on summer holidays during the big event (ending up only a hundred miles away from Milan, albeit a week late ;)), so I couldn’t do much more than refresh Twitter and watch my timeline get blasted to bits. So consider this a catch-up post!
When migrating servers from one storage system to another there are basically two options: migrate using storage features like SAN Copy or MirrorView, or migrate using server-based tools like PowerPath Migration Enabler Host Copy, VMware Storage vMotion or Robocopy. Which option you choose depends on a lot of factors, including the environment you’re in, the amount of downtime you can afford, the amount of data, etc. I’ve grown especially fond of PowerPath Migration Enabler due to its ease of use. You can throttle the migration speed, your “old” data is left intact (so you’ve got a fallback), and once you’ve gotten used to the commands it’s child’s play to migrate non-disruptively and quickly.
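For a rough idea of what those commands look like, here is a sketch of a typical Host Copy migration with the `powermig` CLI. This is from memory of the PowerPath Migration Enabler documentation, with placeholder device names and handles; verify the exact syntax against `powermig help` and the PPME user guide for your PowerPath version before using it.

```shell
# Hedged sketch of a PPME Host Copy migration -- verify flags with "powermig help".
# <src>/<tgt> are placeholders for the source and target pseudo devices.

powermig setup -src <src> -tgt <tgt> -techType hostcopy   # returns a migration handle
powermig sync -handle <handle>                            # start the background copy
powermig throttle -handle <handle> -throttleValue 5       # adjust the copy rate
powermig query -handle <handle>                           # check migration progress
powermig selectTarget -handle <handle>                    # switch I/O to the target
powermig commit -handle <handle>                          # commit; source data left intact
powermig cleanup -handle <handle>                         # remove the migration session
```

The nice part is that until you commit, the source device still holds a consistent copy of the data, which is exactly the fallback mentioned above.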
Yesterday we racked and stacked the EMC Isilon systems, prepared most of the cabling and were pretty much ready to start the Isilon systems. Which is fairly uneventful, if you consider we’d been dragging hundreds of kilograms of equipment around all day… The whole process can be split into four parts: configure the cluster and the initial node, join the remaining nodes, configure the network, and configure the rest.
I’m currently contracted by a customer that has been experiencing chronic capacity and performance issues in their storage environment. After analyzing the environment and writing an advisory report, we got to work and started correcting and improving many aspects of the storage systems. One component of this overhaul is installing a pair of new Isilon systems, which will store PACS (Picture Archiving and Communication System) data generated by the radiology department. The planning and design phase took place over the last couple of months, in which we involved both internal IT people and external resources such as the PACS vendor and the suppliers. All said, discussed and done: the actual implementation of the Isilon systems is scheduled for this week. Today: Isilon rack and stack!