Migrating with PowerPath Migration Enabler

When migrating servers from one storage system to another, there are basically two options: migrate using storage array features like SAN Copy or MirrorView, or migrate using server-based tools like PowerPath Migration Enabler Host Copy, VMware Storage vMotion or Robocopy. Which option you choose depends on a lot of factors, including the environment you’re in, the amount of downtime you can afford, the amount of data, etc. I’ve grown especially fond of PowerPath Migration Enabler due to its ease of use. You can throttle its migration speed, your “old” data is left intact (so you’ve got a fallback), and once you’ve gotten used to the commands it’s child’s play to migrate non-disruptively and quickly.

PowerPath Migration Enabler is a component of PowerPath. Prior to version 5.7 SP1 you had to install and license it separately. Starting from PowerPath 5.7 SP1 it’s installed by default and no longer needs a separate license. This is a big, big advantage, since adding PowerPath Migration Enabler to an existing installation afterwards (officially) requires a reboot. Since you’re trying to migrate data non-disruptively in the first place, this reboot is annoying or even impossible in some environments.

Assuming the software is in place, the migration consists of the following steps:

  1. Assign new storage to the server.
  2. Set up the PowerPath Migration Enabler Host Copy sessions.
  3. Start and monitor the sessions.
  4. Once the initial sync completes, switch over to your target storage.
  5. Finalize and clean up.
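For reference, the steps above map onto these powermig commands (all covered in the sections that follow; <src> and <tgt> are PowerPath pseudo device names and <id> is the session handle printed by powermig setup):

```shell
powermig setup -src <src> -tgt <tgt> -techType hostcopy   # step 2: create a session
powermig sync -handle <id>                                # step 3: start the bulk copy
powermig query -handle <id>                               # step 3: monitor progress
powermig selectTarget -handle <id>                        # step 4: read from the target
powermig commit -handle <id>                              # step 5: finalize
powermig cleanup -handle <id>                             # step 5: clean up
```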

Starting your PowerPath Migration Enabler Host Copy migration

First of all, assign your new storage to the server. The target LUN needs to be equal in size to or larger than the source, and should not be in use by any application. Assign the LUN to the server, make sure PowerPath recognizes it correctly, then STOP! Do not initialize or format the disk, assign drive letters or all that other nonsense. As long as PowerPath sees it, you’re set.

PowerPath Migration Enabler uses the pseudo devices created by PowerPath. Look up the names of your source and target devices: running the “powermt display dev=all” command on the server is the easiest method.

In this specific scenario we’re migrating two Commvault backup LUNs, each 5.5 TB. The pseudo names for these devices are harddisk1 and harddisk2. The target devices are identical in size: harddisk11 and harddisk12. Now that we’ve got all the information, let’s start the migration!

Setting up the PowerPath Migration Enabler sessions

Create the migration sessions with the following syntax:
powermig setup -src <source_pseudo> -tgt <target_pseudo> -techType hostcopy
Additionally, you can add -throttlevalue <0-9> to govern the speed of the migration. If you don’t specify a custom throttle it will default to 2. More on the various throttle values further down in this post.

The command will output a handle for each session. Make a note of these handles, since you’ll need them with all future commands. Next up, start the sessions with the sync command:
powermig sync -handle <id>
Substitute <id> with your handle number.
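With more than a couple of LUNs it’s handy to loop over the source/target pairs. This is a minimal sketch using the device names from this post; the loop and the echo are my own addition (drop the echo to actually run the commands), the powermig syntax is as shown above.

```shell
#!/bin/sh
# Source:target pseudo device pairs from the example in this post.
pairs="harddisk1:harddisk11 harddisk2:harddisk12"

for pair in $pairs; do
  src=${pair%%:*}   # pseudo name of the source device
  tgt=${pair##*:}   # pseudo name of the target device
  # Print the setup command for each pair; remove "echo" to execute it.
  echo powermig setup -src "$src" -tgt "$tgt" -techType hostcopy
done
# Each setup prints a handle; start the copy afterwards with:
# powermig sync -handle <id>
```
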

Tuning and monitoring

Migration speed is governed by the throttle value associated with the session. The throttle ranges from 0 to 9, 0 being the fastest mode and 9 the slowest.

Changing the throttle speed is easy:
powermig throttle -handle <id> -throttlevalue <0-9>
The default throttle of 2 was a bit too aggressive for this particular server, so I slowed it down to a throttle of 3. This allows it to run 24/7 without impacting the production environment and still finish in a reasonable time. Don’t be afraid to adjust the throttle during the day to find the optimal mix between performance impact and migration duration.
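For example, assuming the two sessions from this post got handles 1 and 2 (the handle numbers are hypothetical; use whatever powermig setup printed for you), slowing both down looks like this:

```shell
powermig throttle -handle 1 -throttlevalue 3
powermig throttle -handle 2 -throttlevalue 3
```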

Monitoring the session is done using the query command:

powermig query -handle <id>

This will show you (among other things) the devices participating in the migration, their sizes, the migration state, throttle, throughput and estimated time to completion. If you have a lot of sessions active, you can also use “powermig info -all” to get a quick overview of all sessions and their states.

Migration states and their I/O patterns

While we wait for the initial sync to complete, let’s explore the different states a migration session can be in:

  1. Setup. In this state you’ve only just created the PowerPath Migration Enabler session and not started the migration yet. Reads are serviced from the source and writes also only go to the source LUN.
  2. Syncing. The bulk copy from source to target has started. Reads are still serviced from the source LUN. Writes now go to both the source and the target LUN to make sure the target has the most recent data available. Without this write cloning the target would never become completely in sync with the source LUN.
  3. SourceSelected. Once the bulk copy completes, the session automatically transitions into the next stage: SourceSelected. The bulk copy is no longer active; reads are serviced by the source, writes go to both source and target.
  4. TargetSelected. Moving to this stage requires operator interaction and only changes which device services the read I/O. You’ve guessed it already: the target device. Write I/O still goes to both source and target devices. You can hang around in this state for as long as you like and make sure your target array/device is performing as desired. You can even fall back to the SourceSelected state if the target device performance isn’t up to spec (using the powermig selectSource command).
  5. Committed. Following the powermig commit command the I/O paths are changed. All this happens completely transparently and non-disruptively to the OS and applications. I/O is now serviced entirely by the target device and the source device is no longer kept in sync.
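Summarizing the I/O behavior per state:

```
State           Bulk copy   Reads from   Writes go to
Setup           no          source       source
Syncing         yes         source       source + target
SourceSelected  no          source       source + target
TargetSelected  no          target       source + target
Committed       no          target       target
```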

Cutting over

Back to the migrations at hand. Synchronization has completed and the migrations are shown as SourceSelected. Next we’ll switch read I/O over to the target LUN by moving to the TargetSelected state: powermig selectTarget -handle <id>

(As you might have noticed, I’ve moved to a different server. I’m too impatient to wait for the 5.5 TB LUN migrations to complete before finishing this post…)

Next, check whether performance is acceptable on your new LUN. If it is, commit the migration: powermig commit -handle <id>
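Put together, the cutover for one of the example sessions looks like this (handle 1 is hypothetical; use your own handle):

```shell
powermig selectTarget -handle 1   # reads now come from the target LUN
powermig query -handle 1          # verify the state shows TargetSelected
# Not happy with target performance? Fall back:
# powermig selectSource -handle 1
powermig commit -handle 1         # point of no return: source no longer kept in sync
```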

Cleaning up

We’re not there yet! First of all, clean up your migration using the powermig cleanup -handle <id> command. This deletes some data on the old LUN so that you never have two identical disks attached to your server (which might confuse the OS), and it also removes the PowerPath Migration Enabler handle. Now remove your old LUN from the storage group and rescan your host. Remove the old device from the PowerPath configuration to stop the systray icon from flashing. If you’ve migrated to a larger LUN, extend your partition to take advantage of the new space. And that’s basically it: you’ve migrated to a new LUN or storage system without having to shut down the application and without major performance impact. NEXT!

If you have questions, look up the PowerPath Migration Enabler Administration Guide on support.emc.com: it’s a solid document. Or you can always leave a comment a little further down!

  • dynamox

    Another good use for PPME is for customer who are migrating LUNs that have been encrypted using PowerPath encryption for RSA. PPME will allow you to migrate from encrypted to regular LUN on the fly while decrypting the data.

  • JayST

    I’ve played with PPME and noticed you need your source and target to be on supported arrays to be able to migrate. PowerPath won’t display anything it does not support. As a result, you cannot migrate from “any vendor” to EMC or vice versa. Supported third party arrays are HDS, IBM, HP but not much more I think.

    Great article btw.

    • Jon Klaus

      Indeed, if PowerPath doesn’t support the array you won’t be able to use this tool. Thanks!

  • dynamox

    Another interesting tidbit learned last week: PowerPath 5.7 SP1 supports migrating Microsoft failover clusters online. No longer are you required to take all but one node offline during the copy process and bring the failover resource offline to commit the session. The entire migration is now an online process, just like any other Windows box. Yay!

    • Drew

      I have to say I am impressed with the new PowerPath 5.7 SP1 PPME cluster migration solution. Supporting a large number of cluster deployments, it becomes somewhat of a challenge negotiating downtime to perform migrations in order to refresh hardware. I’ve done several and it is as easy as advertised. The only stumbling block is with the initial setup for existing clusters and the eventual offline/online of the disk resources in your target cluster resource group(s) in order to kick off the process.

      • dynamox

        Drew,

        My impression was that with 5.7 SP1 you no longer have to take the failover resource offline to commit the session? Did you find it otherwise?

        • Michael

          Online cluster migration is supported on 2008 R2; if you have a 2003 cluster then you have to offline passive resources.

          Also, make sure you use 5.7 SP3 for 2003 cluster migrations as there’s a bug in 5.7 SP1 when you commit.

  • Soke Well

    Excellent post thank you