MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array will have a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Solid-state drives bridge the performance gap: both performance-wise and cost-wise they sit somewhere between DRAM and spinning disks. There’s one problem though: a LUN usually isn’t active 100% of the time. This means that placing a LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that made sure only the hottest data was placed on those SSDs and quickly adjusted that placement to the changing workload, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
FAST Cache works by copying repeatedly used 64KB blocks from spinning disks to the FAST Cache SSDs. A memory map tracks which blocks are in MCx FAST Cache. Once an I/O is sent to the VNX, the array checks whether the block is stored in the DRAM cache or in FAST Cache. One of the downsides of the FLARE-based systems is that the FAST Cache memory map is checked before the DRAM cache is queried. This is an artifact of FLARE already being quite mature when FAST Cache was introduced (it basically couldn’t be squeezed in any lower in the stack). Since MCx is a complete architectural redesign, EMC Engineering had the chance to reorder the software stack.
Looking at the picture above, I/O comes in via the front end and flows down to the back end. In the FLARE systems (1st gen VNX and CLARiiONs) FAST Cache sits on top of the FLARE code, meaning FAST Cache is queried before the I/O enters FLARE and the DRAM cache. In the MCx implementation, FAST Cache is queried only after the Multicore Cache layer. The net result is that with MCx FAST Cache there is no longer a delayed write acknowledgement.
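To make that reordering concrete, here’s a minimal Python sketch of the difference between the two write paths. Every name and data structure here is illustrative; this is not actual EMC code.

```python
# Minimal sketch of the query-order change; all names are illustrative.

dram_cache = {}         # data held in DRAM (Multicore) cache
fast_cache_map = set()  # blocks currently promoted to FAST Cache

def flare_write(block, data):
    """FLARE path: FAST Cache sits above FLARE, so its memory map is
    consulted on every write before the page is cached and the host ACKed."""
    promoted = block in fast_cache_map  # extra lookup on every single I/O
    dram_cache[block] = data            # only now can the write be ACKed
    return "ACK (delayed)"

def mcx_write(block, data):
    """MCx path: the write lands in Multicore Cache straight away; the
    FAST Cache map is only consulted later, when dirty pages are flushed."""
    dram_cache[block] = data
    return "ACK"                        # immediate acknowledgement
```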
MCx FAST Cache Page Management
MCx FAST Cache uses a 64KB page size. Compare this to the MCx FAST VP slice size of 256MB: FAST Cache works with a unit that is 4096 times smaller. This increases the likelihood of having actual hot data on the FAST Cache drives instead of some hot data surrounded by a lot of cold data. Not coincidentally, this is also why EMC best practices recommend adding FAST Cache to an array first and only then adding a flash tier to a storage pool.
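The 4096× figure is easy to verify:

```python
# Granularity check: FAST VP slices vs FAST Cache pages.
fast_vp_slice = 256 * 1024 * 1024    # 256MB FAST VP slice
fast_cache_page = 64 * 1024          # 64KB FAST Cache page
print(fast_vp_slice // fast_cache_page)   # -> 4096
```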
Once a block is hit 3 times it is copied to FAST Cache, and future I/O to that block will be serviced from the solid-state FAST Cache drives instead of the spinning drives. An incoming write to the block will land in FAST Cache and mark the page as dirty. Asynchronously, FAST Cache writes the dirty page back to the spinning disk; once this completes the page is marked clean again. The now clean page is not discarded from FAST Cache by default; instead it stays in FAST Cache until it is overwritten by a different, hotter promoted block. This makes sure FAST Cache always gives the maximum possible performance improvement to the array.
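In rough Python terms, the promotion and dirty-page lifecycle could look like the sketch below. The 3-hit threshold and the copy (not move) semantics come from the behaviour just described; all names and data structures are assumptions.

```python
# Hypothetical sketch of promotion and the dirty-page lifecycle.

PROMOTE_AFTER_HITS = 3

disk = {}          # stand-in for the spinning disks
fast_cache = {}    # block -> {"data": ..., "dirty": bool}
hit_counts = {}    # block -> accesses observed on the back-end disks

def read(block):
    """Serve promoted blocks from FAST Cache; otherwise count the hit and
    promote the block (a copy, the disk keeps its data) on the third one."""
    if block in fast_cache:
        return fast_cache[block]["data"]
    hit_counts[block] = hit_counts.get(block, 0) + 1
    data = disk.get(block)
    if hit_counts[block] >= PROMOTE_AFTER_HITS:
        fast_cache[block] = {"data": data, "dirty": False}  # clean copy
    return data

def write(block, data):
    """A write to a promoted block lands in FAST Cache and dirties the page;
    a write to a cold block goes straight to the spinning disk."""
    if block in fast_cache:
        fast_cache[block] = {"data": data, "dirty": True}
    else:
        disk[block] = data

def copy_back(block):
    """Asynchronous copy-back: persist the dirty page, then mark it clean.
    The clean page stays put until a hotter block needs the space."""
    disk[block] = fast_cache[block]["data"]
    fast_cache[block]["dirty"] = False
```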
MCx FAST Cache will try to keep the number of dirty pages low; this enables it to rapidly promote new pages when the workload changes. One of the mechanisms behind this is the proactive cleaner process: when FAST Cache is relatively idle, the proactive cleaner copies dirty pages back to the disks based on an LRU (Least Recently Used) algorithm. Should FAST Cache ever run out of space (e.g. too many dirty pages while a whole bunch of new promotions are coming up), it creates space by selecting an appropriate number of dirty pages and flushing them to the back-end disks before reusing the now clean space for the new promotions.
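The proactive cleaner could then be sketched along these lines. Using a last_used timestamp for the LRU ordering and a dirty_target knob are my assumptions for illustration, not documented internals.

```python
# Illustrative sketch of the proactive cleaner.

def proactive_clean(fast_cache, disk, dirty_target=0):
    """Run while FAST Cache is relatively idle: copy the least recently
    used dirty pages back to disk until at most dirty_target dirty pages
    remain. Cleaned pages stay in FAST Cache; only the dirty flag drops."""
    dirty = [b for b, page in fast_cache.items() if page["dirty"]]
    dirty.sort(key=lambda b: fast_cache[b]["last_used"])  # LRU pages first
    for block in dirty[: max(0, len(dirty) - dirty_target)]:
        disk[block] = fast_cache[block]["data"]
        fast_cache[block]["dirty"] = False

# Example: two dirty pages; the least recently used one is cleaned first.
cache = {
    "a": {"data": b"old", "dirty": True, "last_used": 100},
    "b": {"data": b"new", "dirty": True, "last_used": 200},
}
proactive_clean(cache, disk={}, dirty_target=1)
print(cache["a"]["dirty"], cache["b"]["dirty"])   # -> False True
```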
FAST Cache Fair Use Policy
We’ve already seen with the MCx DRAM cache that cache usage is policed: no longer can a single LUN on slow disks claim all the write cache and cause forced flushes for the entire system. A similar idea applies to FAST Cache: each LUN gets a minimal amount of FAST Cache pages. This minimal page count is calculated by equally sharing 50% of the available FAST Cache pages across all LUNs, with the restriction that a LUN’s minimal page count can never exceed 50% of that LUN’s size. The remaining 50% of FAST Cache is free-for-all. This makes sure every LUN has at least some space reserved in FAST Cache. If a LUN doesn’t need it, that’s fine: the pages are free for use by other LUNs. But as soon as the LUN turns hot, it can claim at least its minimal amount of pages.
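Here’s a worked example of that reservation formula. The 50%/50% split and the per-LUN cap follow the description above; the array and LUN sizes are made up.

```python
# Worked example of the fair-use reservation; concrete numbers are invented.

PAGE = 64 * 1024   # 64KB FAST Cache page size

def minimal_pages(total_fc_pages, lun_sizes_bytes):
    """Each LUN's guaranteed minimum: an equal share of 50% of all FAST
    Cache pages, capped at 50% of the LUN's own size (in pages)."""
    share = (total_fc_pages // 2) // len(lun_sizes_bytes)
    return {lun: min(share, (size // 2) // PAGE)
            for lun, size in lun_sizes_bytes.items()}

# 100GB of FAST Cache shared by three LUNs, one of them tiny:
total_pages = (100 * 1024**3) // PAGE
luns = {"big": 2 * 1024**4, "mid": 500 * 1024**3, "tiny": 10 * 1024**3}
print(minimal_pages(total_pages, luns))
# -> {'big': 273066, 'mid': 273066, 'tiny': 81920}
```

Note how the tiny LUN’s guaranteed minimum is capped at half its own size, while the larger LUNs simply get the equal share.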
Access Patterns – Write & Read
As discussed, the software stack is reordered in MCx: instead of querying the FAST Cache memory map first, an incoming write now lands directly in DRAM cache. This results in a faster ACK to the host.
Once the DRAM write cache needs to flush dirty pages to disk, it queries the FAST Cache memory map to see if the block has been promoted to FAST Cache. If it has, it flushes the page to the FAST Cache SSDs; if the block isn’t promoted, the page goes straight to the spinning disk. So the FAST Cache memory map query only decides where the block is flushed to: FAST Cache will NOT function as an overflow cache that catches everything that doesn’t fit in DRAM cache!
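In sketch form (illustrative names once more), the flush decision boils down to this:

```python
# Sketch of the flush decision described above; all names are illustrative.

def flush_dirty_page(block, data, fast_cache_map, fast_cache_ssd, spinning_disk):
    """DRAM cache flush: the FAST Cache map only decides *where* the page
    lands. A miss never triggers a promotion, so FAST Cache is not an
    overflow cache for writes that don't fit in DRAM."""
    if block in fast_cache_map:
        fast_cache_ssd[block] = data      # block was promoted earlier
    else:
        spinning_disk[block] = data       # straight to the back-end disk
```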
Reads also hit the Multicore Cache first. If the data is in cache, it is immediately returned to the host. If it isn’t (a read miss), the FAST Cache map is queried; depending on the result, the data is retrieved from either the FAST Cache SSDs or the spinning disks and staged in DRAM cache before being returned to the host. Again, FAST Cache does not function as an overflow cache once the read cache is full; data is only promoted to FAST Cache once a 64KB page has received enough repeated I/O to be deemed hot.
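And the read path, sketched the same way:

```python
# Sketch of the MCx read path; again, all names are illustrative.

def read(block, dram_cache, fast_cache_map, fast_cache_ssd, spinning_disk):
    """Multicore (DRAM) cache is checked first; only on a read miss is the
    FAST Cache map consulted to pick the backing device. Either way, the
    data is staged into DRAM cache before it is returned to the host."""
    if block in dram_cache:               # DRAM hit: fastest possible path
        return dram_cache[block]
    if block in fast_cache_map:           # read miss on a promoted block
        data = fast_cache_ssd[block]
    else:                                 # read miss on a cold block
        data = spinning_disk[block]
    dram_cache[block] = data              # stage into DRAM cache
    return data
```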
In conclusion…
If you want to squeeze more performance out of your VNX array, FAST Cache is an ideal first step. MCx FAST Cache uses a small page size of 64KB, which means very little SSD space is wasted on cold data, giving you maximum return on your investment. Data residing on spinning disks is copied to the FAST Cache drives once a block is repeatedly accessed, and it remains in FAST Cache until it needs to make room for hotter data. This does mean that FAST Cache needs a warm-up period and your data might not always be in FAST Cache when you need it to be. So if you need constant SSD-level performance and response times, you might be better off putting the LUN on dedicated SSDs or pinning it in the highest tier of a storage pool that has an Extreme Performance tier. It’s also good to remember that FAST Cache does not function as an overflow cache: if you have a long sequential write burst to data that doesn’t reside in FAST Cache, those writes will end up going to the spinning drives. So size your back-end appropriately!
That said, FAST Cache is a powerful tool to accelerate your VNX performance to SSD speeds at minimal cost. If you want to read more about FAST Cache in CLARiiON or VNX systems, head over to support.emc.com and find either the MCx Multicore Everything whitepaper or the more specific FAST Cache whitepapers. Or you can just leave a comment down below!