This is the second post in a (mini-)series on VNX Performance and how to leverage SSD in VNX systems. You can find part one on skew and data placement over here. This post will dive deeper into the FAST Cache improvements that have been added in VNX OE R32.
If you’ve got SSD drives in a VNX you can use them in two ways: as FAST Cache or as an extreme performance tier in your storage pools (FAST VP). Each implementation has advantages and disadvantages, and they can also be used concurrently (i.e. FAST Cache AND FAST VP on the same system). It depends on what you want to do…
The last post ended with some improvements in VNX OE R32. But there is more…
FAST Cache Promotion Avoidance
If you’ve read whitepapers on FAST Cache before, you probably remember the best practice that you should not enable FAST Cache on LUNs that receive a lot of small sequential I/O. A prime example would be the log LUNs for a database or Exchange server: log entries are appended to a file and flushed to storage sequentially. Why would this be an issue for FAST Cache?
FAST Cache uses a tracking map/table to identify which 64KB blocks should be promoted to FAST Cache. Each time an I/O hits a block the corresponding counter in the tracking map is incremented. Once it hits a certain threshold (e.g. 3), this block is eligible to be promoted.
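The tracking mechanism can be sketched as follows. This is a hypothetical, simplified model for illustration only (the real VNX internals are not public); the class name, the dictionary-based map and the threshold value of 3 are all assumptions taken from the example in the text:

```python
BLOCK_SIZE = 64 * 1024        # FAST Cache tracks I/O in 64KB blocks
PROMOTION_THRESHOLD = 3       # e.g. three hits make a block eligible

class PromotionTracker:
    """Toy model of the FAST Cache tracking map (illustrative only)."""

    def __init__(self):
        self.hit_counts = {}  # block index -> hit counter

    def record_io(self, byte_offset):
        """Record an I/O; return True once the block is eligible for promotion."""
        block = byte_offset // BLOCK_SIZE   # map the I/O to its 64KB block
        self.hit_counts[block] = self.hit_counts.get(block, 0) + 1
        return self.hit_counts[block] >= PROMOTION_THRESHOLD
```

With a threshold of 3, the third I/O landing in the same 64KB block marks it for promotion, regardless of the I/O size itself, which is exactly what causes the problem described next.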
Let’s think this through. We start with no data in FAST Cache. Now picture a sequential write stream to a LUN, each I/O being 4KB large. Once the first I/O is received by the storage system the associated FAST Cache counter for that block is incremented to 1. Easy!
The second 4KB I/O comes in. Because FAST Cache tracks in 64KB blocks, for FAST Cache this is a re-hit of the same block. Thus the counter is incremented to 2.
The third I/O comes in and writes 4KB of data. Again, FAST Cache sees it as a re-hit and increments the tracking counter to 3. The 64KB chunk is now marked for promotion!
Add a couple more of these sequential I/Os (we can actually write sixteen of these 4KB blocks before leaving the 64KB chunk, so FAST Cache would still think we’re re-hitting the same block) and at some point during the stream the block will actually be promoted into FAST Cache. During this promotion there is a slight delay (a lock): in the worst case this is noticeable on the writing server as a performance drop of up to 30%. And even worse: you now have a 64KB chunk promoted into FAST Cache that you are not going to use again any time soon. Talk about wasting precious SSD capacity!
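The alignment arithmetic behind this is easy to verify. A quick sketch (sizes taken from the text; Python purely for illustration):

```python
BLOCK_SIZE = 64 * 1024   # FAST Cache tracking granularity
IO_SIZE = 4 * 1024       # size of each sequential write in the example

# Sixteen sequential 4KB writes starting at offset 0:
blocks_touched = {(i * IO_SIZE) // BLOCK_SIZE for i in range(16)}
print(blocks_touched)                    # all sixteen I/Os map to 64KB block 0

# Only the seventeenth I/O crosses into the next 64KB block:
print((16 * IO_SIZE) // BLOCK_SIZE)      # block 1
```

So a purely sequential 4KB stream re-hits each 64KB block sixteen times, blowing past the promotion threshold every time.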
To prevent this behavior you’d have to disable FAST Cache on the LUNs that receive small sequential I/O. And here you run into another problem: FAST Cache is enabled at the storage pool level. That leaves you with pretty much only bad options: keep using RAID groups (where you can still enable/disable FAST Cache per LUN), move all your LUNs with small sequential I/O to a separate pool, or accept the performance and efficiency loss.
EMC has come to the rescue and solved this issue in R32 with a small sequential I/O filter. The filter identifies small sequential I/O and prevents it from triggering a FAST Cache promotion. The immediate benefit is that you, the storage admin, no longer have to juggle LUNs around and decide for each and every one whether to enable or disable FAST Cache. You can simply enable FAST Cache on your pools, even if some of the LUNs receive small sequential I/O: the VNX will do the rest!
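One way such a filter could work is to track where the previous I/O ended and suppress the counter update when a new I/O simply continues the stream. The actual R32 implementation is not documented publicly; this is a hedged sketch of the general idea, with all names and the sequentiality heuristic being assumptions:

```python
BLOCK_SIZE = 64 * 1024

class FilteredTracker:
    """Toy promotion tracker with a small sequential I/O filter (illustrative)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.hit_counts = {}      # block index -> hit counter
        self.last_io_end = None   # end offset of the previous I/O

    def record_io(self, offset, size):
        """Record an I/O; sequential continuations never count toward promotion."""
        sequential = (self.last_io_end == offset)  # continues the prior stream?
        self.last_io_end = offset + size
        if sequential:
            return False          # filtered: no counter update, no promotion
        block = offset // BLOCK_SIZE
        self.hit_counts[block] = self.hit_counts.get(block, 0) + 1
        return self.hit_counts[block] >= self.threshold
```

With this filter in place, the sixteen-write sequential stream from the earlier example only bumps the counter once, while genuinely random re-hits of a block still accumulate toward promotion.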
FAST Cache Fault Isolation
Apart from the small sequential I/O filter there have also been some improvements on fault handling with FAST Cache.
In R31 a faulted drive in a FAST Cache mirror would cause FAST Cache to flush all outstanding writes to disk to prevent data loss. FAST Cache would then go into a read-only mode: it would still cache reads but no longer cache writes.
If you have a lot of FAST Cache drives this is a bit of overkill. Let’s assume you have 8 FAST Cache drives (4 mirror groups) and one hot spare. If you lose one FAST Cache drive, you still have three healthy FAST Cache mirrors. Why disable write caching on those as well?
So starting with R32 the fault isolation behavior has improved significantly:
- The healthy mirrors will continue to service both read and write promotions into FAST Cache.
- New promotions coming into FAST Cache will be sent to the healthy mirrors.
- The existing data on the faulted mirror will continue to be available for reads.
- Any outstanding writes (dirty pages) on a faulted mirror will still be flushed to disk to prevent data loss. If any of those blocks continue to see high I/O activity, they will be re-promoted into FAST Cache on a healthy mirror.
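The behavior above can be sketched as a simple routing model. This is purely hypothetical (class names, the placement rule and the data structures are all my own inventions; the real logic is internal to the array), but it captures the two key points: new promotions go only to healthy mirrors, while existing pages on a faulted mirror stay readable:

```python
class Mirror:
    """Toy model of one FAST Cache mirror pair (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.pages = {}           # block index -> cached data

class FastCache:
    def __init__(self, mirrors):
        self.mirrors = mirrors

    def promote(self, block, data):
        """Promote a block; only healthy mirrors receive new promotions."""
        healthy = [m for m in self.mirrors if m.healthy]
        if not healthy:
            return None           # no FAST Cache available: I/O goes to disk
        target = healthy[block % len(healthy)]   # naive placement rule
        target.pages[block] = data
        return target

    def read(self, block):
        """Reads are still served from a faulted mirror's existing pages."""
        for m in self.mirrors:
            if block in m.pages:
                return m.pages[block]
        return None               # cache miss: read from disk
```

Failing one mirror in this model leaves its cached pages readable while all new promotions land on the remaining healthy mirrors, mirroring the R32 behavior described above.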
You will still lose some performance when a drive fails, but it will no longer be all or nothing.