VNX Storage Pool LUN Ownership

LUNs on a storage system represent the blobs of storage that are allocated to a server. A (VNX) storage admin creates a LUN on a RAID Group or Storage Pool and assigns it to a server. The server admin discovers this LUN, formats it, mounts it (or assigns a drive letter) and starts to use it. Storage 101. But there’s more to it than just carving out LUNs from a big pile of terabytes. One important aspect is LUN ownership: which storage processor will process the I/O for that specific LUN?

Pool LUN Ownership

Let’s get back to basics. A VNX has two storage processors. Both are active and servicing host I/O, but a LUN is owned by only one storage processor at a time. This is called an active/passive system. There’s a neat trick to run host I/O through both storage processors: Asymmetric Logical Unit Access (or ALUA) will emulate an active-active array, but isn’t REALLY active-active. With the new VNX models announced in Milan a while ago, a VNX can run true active/active on RAID Group LUNs. But I digress…
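Whether a host actually uses ALUA depends on the failover mode of its initiator records, which you set per host on the array. A minimal sketch in Python (the SP address and host name are placeholders; failover mode 4 is the documented ALUA value, but verify the exact syntax with “naviseccli storagegroup -help” on your release):

    import subprocess

    SP_IP = "192.168.1.10"    # placeholder SP management address
    HOST = "esxhost01"        # placeholder host registered on the array

    # Failover mode 4 = ALUA (Asymmetric Logical Unit Access); mode 1 is
    # the classic active/passive behavior. "-o" suppresses the confirmation
    # prompt.
    subprocess.run(
        ["naviseccli", "-h", SP_IP, "storagegroup", "-sethost",
         "-host", HOST, "-failovermode", "4", "-o"],
        check=True,
    )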

When you create a LUN, you decide which storage processor it should run on. This is the “default owner”. Your friendly storage admin usually balances the LUNs according to storage processor utilization, efficiently using the available CPU and memory/cache resources. Apart from the default owner there’s the “current owner”: the SP that owns the LUN right now. The LUN doesn’t necessarily have to be owned by the default owner at all times: it might have been trespassed for a number of reasons (e.g. connectivity issues, SP failure, etc.).

Ideally you’ll want to keep your current and default owner identical. There’s no real performance impact from a non-matching current and default owner, but identical owners make it much easier to check whether everything is OK in your storage environment and to spot potential host connectivity issues. And to be honest: you’ve (hopefully!) made a conscious decision about which LUNs to assign to which SP, so the next step is to make sure it’s running as designed. Operate according to your design!
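A quick way to verify this is to ask the array for both owners of a LUN from the CLI. Below is a minimal sketch in Python (all CLI examples in this post assume naviseccli is installed and authenticated against the array; SP addresses and LUN numbers are placeholders, and output labels can differ per FLARE/VNX OE release):

    import subprocess

    SP_IP = "192.168.1.10"   # placeholder SP management address
    LUN_ID = "42"            # placeholder LUN number

    # "getlun -default" reports the default owner, "-owner" the current
    # owner (classic CLI; verify with "naviseccli getlun -help").
    out = subprocess.run(
        ["naviseccli", "-h", SP_IP, "getlun", LUN_ID, "-default", "-owner"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
    # The LUN is in its designed state when both lines report the same SP.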

LUN Ownership Properties

With VNX storage pools a third type of owner is added: the “allocation owner”, which manages slice allocation in the pool. Unlike the default owner (which you can change in the LUN properties) or the current owner (which you change by trespassing the LUN), the allocation owner cannot be changed. You will have to resort to LUN migrations to change the allocation owner: more on that later.

Now why would you want to keep the allocation owner identical to the current and default owner? Simple: performance!

A storage pool is built from (potentially) a lot of private RAID groups. The private FLARE LUNs on these RAID groups are carved into slices, and each slice is assigned to a storage processor. As soon as I create a LUN on SPA, its allocation owner is also set to SPA and that pool LUN will be built from slices allocated to SPA. If I change the current owner of the LUN to SPB, the host will communicate with the storage system via SPB. However, since the slices are still owned by SPA, every I/O has to traverse the back-end CMI bus to reach the relevant slices. This incurs at the very least some additional latency and SP utilization, but it might escalate into a full bottleneck as soon as the CMI bus is saturated.

Check & Correct LUN Ownership

So how can we make sure all LUN ownerships are optimal on our array?

First of all: analyze your SP utilization. This will help you decide which SP has the most headroom in terms of CPU utilization and thus which SP can handle more LUNs. This will drive your decisions once you run into LUNs that have incorrectly defined owners. If you’re using MirrorView, be aware that LUN ownership needs to be identical on both the primary and secondary system! So if you change your LUN ownership to SPA on the primary array, be ready to do the same exercise on the secondary array for the corresponding secondary image…
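For a trend analysis you’ll want Analyzer data, but for a first impression of how busy each SP is, something like the sketch below works from the CLI (getcontrol is classic CLI; the “-busy” option and its output wording may differ per release, so verify with “naviseccli getcontrol -help”):

    import subprocess

    # Placeholder management addresses for both storage processors
    SPS = {"SPA": "192.168.1.10", "SPB": "192.168.1.11"}

    for name, ip in SPS.items():
        # "getcontrol -busy" reports the SP busy percentage on most releases
        out = subprocess.run(
            ["naviseccli", "-h", ip, "getcontrol", "-busy"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(f"{name}: {out}")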

For RAID Group LUNs there’s no allocation owner. Just make sure the default owner matches the current owner, either by changing the default owner or by trespassing the LUN to change the current owner. The main benefit of this is a clean “Trespassed LUNs” overview and a return to a “normal” state. For me, a trespassed LUN is a good indication that something might be wrong in the environment: maybe a couple of cables failed and some hosts can’t reach an SP anymore, or there are software or zoning issues, etc. If you’ve got a hundred false positives in the trespassed LUNs overview, chances are you won’t even look at them anymore and will ignore possible warning signs…
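Both fixes can be scripted. The trespass command below is classic CLI; the chglun flag for setting the default owner is an assumption from memory, so treat this as a sketch and verify it with “naviseccli chglun -help” first:

    import subprocess

    SP_IP = "192.168.1.10"   # placeholder: the SP that should own the LUN
    LUN_ID = "42"            # placeholder LUN number

    def navi(*args):
        """Run a naviseccli command against one SP and return its output."""
        return subprocess.run(
            ["naviseccli", "-h", SP_IP, *args],
            capture_output=True, text=True, check=True,
        ).stdout

    # Fix the current owner: "trespass lun" moves the LUN to the SP the
    # command is issued against.
    print(navi("trespass", "lun", LUN_ID))

    # Fix the default owner instead: the "-d" value (0 = SPA, 1 = SPB) is
    # an assumption -- double-check "naviseccli chglun -help" before use.
    # print(navi("chglun", "-l", LUN_ID, "-d", "0"))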

For pool LUNs all three owner types will have to be the same. First, generate a report using the report wizard’s “Pool LUNs” option. This will quickly list all LUNs and their allocation/default/current owners. If you have an enormous number of LUNs you might want to export it to Excel and run some basic filters/formatting against it to quickly find out which LUNs need work.
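If you’d rather skip the Excel step, the current/default comparison can be scripted against “lun -list” output. A sketch (the field labels are assumptions based on typical CLI output and may differ per release; the allocation owner isn’t exposed on every CLI revision, so the report wizard remains the authoritative source for that field):

    import csv
    import re
    import subprocess

    SP_IP = "192.168.1.10"   # placeholder SP management address

    # "lun -list" prints one text block per pool LUN; verify the exact
    # labels on your release.
    out = subprocess.run(
        ["naviseccli", "-h", SP_IP, "lun", "-list"],
        capture_output=True, text=True, check=True,
    ).stdout

    rows = []
    for block in out.split("\n\n"):
        number = re.search(r"LOGICAL UNIT NUMBER (\d+)", block)
        current = re.search(r"Current Owner:\s*(SP \w)", block)
        default = re.search(r"Default Owner:\s*(SP \w)", block)
        if number and current and default:
            rows.append({
                "lun": number.group(1),
                "current": current.group(1),
                "default": default.group(1),
                "match": current.group(1) == default.group(1),
            })

    # Filter on match == False afterwards to find the LUNs that need work.
    with open("lun_owners.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["lun", "current", "default", "match"])
        writer.writeheader()
        writer.writerows(rows)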

If you decide you only need to change the current or default owner, you can follow the same approach as with the RG LUNs. If, however, you need to change the allocation owner (for example because your SP utilization is unbalanced and you want to permanently move a LUN to a different SP), you’ll need to perform a LUN migration (a CLI sketch of the steps follows after the list below). Let’s assume you’ve got a LUN with: Allocation Owner = SPA, Default Owner = SPB, Current Owner = SPB, and you want to keep it on SPB.

  1. Create a LUN on the target SP (in this case SPB), identical in size or larger.
  2. Start a LUN migration from your current LUN to your new LUN, using the LUN Migrate option in the VNX. This is a non-disruptive operation: the host won’t even know you’re migrating.
  3. Wait for it to complete, then check and if necessary adjust the LUN ownership (both default and current).
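In CLI terms those three steps look roughly like the sketch below (LUN numbers, pool name and capacity are placeholders; the “lun -create” and “migrate” options shown are the common ones, but verify them against your release, since extra options such as the LUN type may be required):

    import re
    import subprocess
    import time

    SP_IP = "192.168.1.11"   # placeholder: SPB, the target owner
    SRC_LUN = "42"           # placeholder: LUN with the wrong allocation owner
    DST_LUN = "200"          # placeholder: free LUN number for the target

    def navi(*args):
        # No check=True: "migrate -list" may return non-zero once no
        # migration sessions remain.
        res = subprocess.run(["naviseccli", "-h", SP_IP, *args],
                             capture_output=True, text=True)
        return res.stdout

    # Step 1: create an identically sized (or larger) LUN owned by the
    # target SP. Pool name and capacity are placeholders.
    navi("lun", "-create", "-capacity", "500", "-sq", "gb",
         "-poolName", "Pool0", "-sp", "b", "-l", DST_LUN)

    # Step 2: start the non-disruptive migration at a low rate.
    navi("migrate", "-start", "-source", SRC_LUN, "-dest", DST_LUN,
         "-rate", "low", "-o")

    # Step 3: poll until the source LUN no longer shows up in the session
    # list (the "Source LU" label is an assumption -- check the output of
    # "naviseccli migrate -list" on your array).
    while re.search(rf"Source LU.*\b{SRC_LUN}\b", navi("migrate", "-list")):
        time.sleep(300)
    print("Migration finished; now verify the default and current owner.")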

Repeat the above until all your LUNs have matching owners. Pretty easy, but depending on the amount of data you need to move it might take days or weeks.

For more information, either read the corresponding knowledge base article or the whitepaper on VNX Virtual Provisioning. Or leave a comment down below!