MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast, but also (relatively) expensive, so an array has a limited amount of it. Spinning disks offer lots of cheap capacity but are slow. Solid-state drives bridge that gap: both performance-wise and cost-wise they sit somewhere between DRAM and spinning disks. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that makes sure only the hottest data is placed on those SSDs and that quickly adjusts this data placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack with less overhead, which results in better write performance.
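To make the idea of "only the hottest data on SSD" concrete, here is a minimal toy model of a promotion policy: a chunk is copied to SSD once it has been accessed a few times. The class name, threshold, and bookkeeping are all illustrative assumptions, not EMC's actual FAST Cache algorithm or data structures.

```python
PROMOTION_THRESHOLD = 3  # assumed: promote a chunk after this many hits

class HotDataCache:
    """Toy model of a hot-data promotion cache (illustrative only)."""

    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.ssd = set()   # chunk IDs currently promoted to SSD
        self.hits = {}     # access counts for chunks still on spinning disk

    def access(self, chunk_id):
        """Record an access; return 'ssd' or 'hdd' to show where it was served."""
        if chunk_id in self.ssd:
            return "ssd"
        self.hits[chunk_id] = self.hits.get(chunk_id, 0) + 1
        if self.hits[chunk_id] >= PROMOTION_THRESHOLD and len(self.ssd) < self.capacity:
            # Hot enough: promote so the *next* access is served from SSD.
            self.ssd.add(chunk_id)
            del self.hits[chunk_id]
        return "hdd"
```

The point of the sketch: a workload that keeps hammering the same chunks automatically ends up served from SSD, while cold data never wastes the expensive tier.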
When troubleshooting performance on a CLARiiON or VNX storage array you’ll often see graphs that resemble something like this: write cache maxing out at 100% on one or even two storage processors. Once this occurs the array starts a process called forced flushing: it flushes writes to disk to create new space in the cache for incoming writes. This absolutely wrecks the performance of all applications using the array. With the MCx cache improvements in the VNX2 series there should be far fewer forced flushes and much improved performance.
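The mechanism can be sketched in a few lines: as long as there is free space, host writes land in cache and are destaged in the background; once the cache is full, the incoming write has to wait for a synchronous flush. This is a hypothetical toy model to show the effect, not VNX cache internals.

```python
class WriteCache:
    """Toy model of a write cache that force-flushes when full (illustrative)."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.dirty = 0
        self.forced_flushes = 0

    def host_write(self):
        """Accept one dirty page; force-flush first if the cache is already full."""
        if self.dirty >= self.capacity:
            self.forced_flushes += 1  # host write stalls here: latency spike
            self.dirty -= 1           # one page flushed synchronously to make room
        self.dirty += 1

    def background_flush(self, pages):
        """Normal, asynchronous destaging to disk."""
        self.dirty = max(0, self.dirty - pages)
```

As soon as background destaging can no longer keep up with incoming writes, every extra host write triggers a forced flush, which is why the latency impact hits all applications at once.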
It is that time of year again: time to select the EMC Elect 2015! EMC Elect is a recognition program for the top EMC social media gurus worldwide. Customers, EMC partners or employees, independent analysts: anyone can become part of the EMC Elect 2015. To do so, you only need to be nominated (that’s phase 1) and survive the judging process (phase 2, performed by the EMC Elect judging panel). Interested?
Last year EMC installed two new three-node Isilon clusters for one of our customers. Thirteen months later the customer needs more capacity, so the time for adding Isilon nodes to the existing clusters has finally come. Good news for me: since the initial install I’ve been certified to install Isilon systems myself, so these expansions are all mine! EMC marketing promises an Isilon cluster expansion in 60 seconds; let’s put that to the test!
A big change in the VNX2 hot spare policy compared to earlier VNX and CLARiiON models is the use of permanent hot spares. Whereas the earlier models used dedicated, pre-configured hot spares that were only used for the duration of a drive failure, the VNX2 will spare to any eligible unused drive and will NOT switch back to the original drive afterwards. I’ve written about this and other new VNX2 features before, but hadn’t had the chance to try it first-hand. Until now: a drive died, yay! Continue reading to see how you can back-track which drive failed, which drive replaced it, and how to move the data back to the original physical location (should you want to).
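Logically, the permanent hot spare behavior boils down to: pick any eligible unused drive, rebuild onto it, and leave it there as a full RAID group member with no copy-back. The sketch below is a hypothetical model of that policy; the eligibility rule is simplified to "same drive type, at least the same size" and does not reflect the array's actual selection logic.

```python
def handle_drive_failure(raid_group, unused_drives, failed_drive):
    """Replace failed_drive with the first eligible unused drive, permanently."""
    for spare in unused_drives:
        # Simplified eligibility: matching type, equal or larger capacity.
        if (spare["type"] == failed_drive["type"]
                and spare["size_gb"] >= failed_drive["size_gb"]):
            unused_drives.remove(spare)
            idx = raid_group.index(failed_drive)
            raid_group[idx] = spare  # rebuild onto the spare...
            return spare             # ...which stays put: no copy-back
    raise RuntimeError("no eligible spare: RAID group runs degraded")
```

Note the contrast with the classic behavior: there is no second rebuild (equalize/copy-back) once you physically replace the dead drive, which is exactly why the replacement drive simply becomes a new unused drive.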
“Check your email ;)”. That was the first Twitter DM I read one sleepy morning in June. Suffice it to say, a minute later I was wide awake: I had been chosen to represent the EMC Elect at the EMC “Redefine Possible” MegaLaunch event in London (UK)! I knew about these launch events because my colleague Rob attended one last year in Milan. Excitement started building, and a couple of hours later I figured out I wasn’t going alone…
July 8th, 2014. EMC MegaLaunch 4. Theme: Redefine Possible (or #RedefinePossible on Twitter). What previously was impossible is now possible! A catchy theme, and something we’ve seen in IT a number of times now. For example: in the 1990s, who would have thought it possible to migrate a server from one datacenter to another, possibly a couple of miles away, without downtime, in a matter of seconds?! Doing things fundamentally differently, better: that’s the goal we’re always trying to achieve. So how can we apply this to Isilon?
In September 2013 EMC announced the new generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. Apart from the vast performance increase there’s also a boatload of new features: deduplication, active-active LUNs, smaller (256MB) chunks for FAST VP, persistent hot spares, etc. Read more about that in my previous post.
It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess who claimed the installation: me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone stole my standby power supplies (SPS)!
Isilon scale-out NAS clusters (or grids) are built out of nodes: servers using (relatively) cheap commodity hardware. Compared to a traditional storage system this has the advantage that with every capacity (= disks) expansion you add to the cluster, the number of CPUs, the amount of RAM, and the number of network ports grow accordingly. You just add another node and poof: more TBs, more speed. The proprietary OneFS software then glues all those nodes together into one single immense filesystem (up to 20PB with current drive specs). But what’s one trait of commodity hardware? You occasionally need an Isilon firmware upgrade! The LSI disk controller needs a new release once in a while, as do perhaps the front panel or the InfiniBand components. This howto explains what to do to make sure you’re running the latest firmware!
Every IT system needs a software upgrade once in a while, either to enable additional functionality or to patch security holes. Yes, even an Isilon scale-out NAS. Good news: performing an Isilon OneFS upgrade is peanuts! Including the pre-checks and post-checks, our 3-node cluster was upgraded in less than 2 hours without downtime. Curious how it’s done?