In September 2013 EMC announced the new generation VNX with MCx technology (or VNX2). The main advantage of the new generation is a massive performance increase: with MCx technology the VNX2 can effectively use all the CPU cores available in the storage processors. On top of that there’s a boatload of new features: deduplication, active-active LUNs, smaller (256MB) chunks for FAST VP, persistent hot spares, etc. Read more about that in my previous post.
It took a while before I could get my hands on an actual VNX2 in the field. So when we needed two new VNX2 systems for a project, guess who claimed the installation job. Me, myself and I! Only to have a small heart attack upon unboxing the first VNX5400: someone stole my standby power supplies (SPS)!
Continue reading “VNX2 hands-on (a.k.a. Who stole my SPS?!)”
A new quarter, a new VNX Uptime Bulletin! This month is all about target releases of code and associated bugs. It’s important to keep up to date with current code releases; not only because certain newer models of disks/modules may not be supported by old code, but also because EMC is constantly fixing known problems and bugs. This VNX Uptime Bulletin headlines with VNX OE 33 updates and continues with target code for R32 and R31.
Continue reading “VNX Uptime Bulletin Q1 2014 – Start Upgrading!”
Finally! I’ve just passed the USPEED Performance Guru exam! I first became aware of the SPEED programs during a Celerra Performance Workshop a couple of years ago. Initially it was an EMC-internal program, so I couldn’t get in without switching employers. At the beginning of 2013 this changed and EMC Partners could also enroll. Too bad a mountain of projects and other training material kept getting in the way… up until now! So what does it mean when someone is USPEED certified?
Continue reading “Entering the league of USPEED Performance Gurus”
The storage market has gradually been using more and more flash to increase speed and lower cost for high I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era when flash either wasn’t around or was extremely expensive, so they simply weren’t built to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that (with mechanical disks) never used to be a bottleneck now start to clog up your system. To accommodate this increased use of flash drives the VNX was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!
Continue reading “Need more speed? Hello XtremIO!”
EMC sends out a VNX Uptime Bulletin every quarter to update customers on best practices and fixes that help you achieve the maximum possible uptime and robustness for your VNX. You can subscribe to them as you would with any other ETA (EMC Technical Advisory): log in at http://support.emc.com, go to Support by Product, open your product page (in this case the VNX) and click “Get Advisory Alerts” to subscribe. This bulletin discusses pools and LUN ownership, vault drives, software versions, etc.
Continue reading “VNX Uptime Bulletin Q3 2013”
Recently I ran into an environment with a couple of VNX5700 systems that were attached to the front-end SAN switches with only two ports per storage processor. The customer was complaining: performance was OK most of the time, but at certain times of day it was noticeably lower. Analysis revealed that the back-end was coping well with the workload (30-50% load on the disks and storage processors). The front-end ports, however, were overloaded and spewing QFULL errors. Time to cable in some extra ports and rebalance the existing hosts over the new storage paths!
Continue reading “Rebalance VNX storage paths without downtime”
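The core idea behind that rebalancing act can be sketched in a few lines. This is a hypothetical illustration only: the port names and host names are made up, and on a real VNX you’d gather the actual port/host data with Unisphere or naviseccli before touching any zoning or storage groups.

```python
def rebalance(hosts, ports):
    """Assign hosts to front-end ports round-robin so load spreads evenly."""
    assignment = {port: [] for port in ports}
    for i, host in enumerate(sorted(hosts)):
        # Round-robin: host i lands on port i modulo the number of ports.
        assignment[ports[i % len(ports)]].append(host)
    return assignment

# Example: eight (fictional) hosts over four ports, two per storage
# processor, including the newly cabled ones.
ports = ["SPA-0", "SPA-1", "SPB-0", "SPB-1"]
hosts = [f"esx{n:02d}" for n in range(1, 9)]
result = rebalance(hosts, ports)
for port, assigned in result.items():
    print(port, assigned)
```

In practice the moves happen one host at a time, so multipathing keeps at least one path alive at every step and nothing needs downtime.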