PowerPath

3 posts

Need more speed? Hello XtremIO!

The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era when flash was either not around or extremely expensive, so they simply weren’t built to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that were never a bottleneck with mechanical disks start to clog up your system. To accommodate this increased use of flash drives, the VNX was recently redesigned around MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th, 2013!

Continue reading

Migrating with PowerPath Migration Enabler

When migrating servers from one storage system to another, there are basically two options: migrate using storage array features like SAN Copy or MirrorView, or migrate using server-based tools like PowerPath Migration Enabler Host Copy, VMware Storage vMotion or Robocopy. Which option you choose depends on a lot of factors, including the environment you’re in, the amount of downtime you can afford, the amount of data, etc. I’ve grown especially fond of PowerPath Migration Enabler due to its ease of use. You can throttle its migration speed, your “old” data is left intact (so you’ve got a fallback) and once you’ve gotten used to the commands it’s child’s play to migrate non-disruptively and quickly.
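To give a flavor of those commands, here’s a minimal Host Copy sketch. The pseudo-device names and the migration handle number are placeholders, and exact flags and throttle semantics can vary per PowerPath release, so treat this as an outline rather than a definitive runbook:

```
# Pair the source and target LUNs using the Host Copy technology type
powermig setup -techType hostcopy -src /dev/emcpower41 -tgt /dev/emcpower42

# Start copying data to the target (setup returns a migration handle, e.g. 1)
powermig syncStart -handle 1

# Check progress; throttle the copy rate if production I/O suffers
powermig query -handle 1
powermig throttle -throttleValue 5 -handle 1

# Once the pair is in sync: flip I/O to the target, then commit the migration
powermig selectTarget -handle 1
powermig commit -handle 1

# Remove the migration pairing; the source LUN stays untouched as a fallback
powermig cleanup -handle 1
```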

Continue reading

Rebalance VNX storage paths without downtime

Recently I ran into an environment with a couple of VNX5700 systems that were attached to the front-end SAN switches with only two ports per storage processor. The customer was complaining: performance was OK most of the time, but at certain times of day it dropped noticeably. Analysis revealed that the back-end was coping well with the workload (30–50% load on the disks and storage processors). The front-end ports, however, were overloaded and spewing QFULL errors. Time to cable in some extra ports and rebalance the existing hosts over the new storage paths!

[Figure: Analyzer Helper view of the VNX front-end ports]
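As a taste of the host side of that rebalancing, here’s a rough sketch using PowerPath’s powermt; the HBA number is a placeholder, and the zoning and storage group changes on the switch and array obviously come first:

```
# After zoning the new front-end ports, masking them to the host and
# rescanning the SCSI bus, let PowerPath pick up the new paths
powermt config

# Verify the new paths are visible and alive
powermt display dev=all
powermt display ports

# Optionally park the old, overloaded paths in standby so I/O prefers the new ones
powermt set mode=standby hba=2 dev=all

# Persist the configuration so it survives a reboot
powermt save
```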

Continue reading