Last year Dell acquired EMC. EMC was rebranded to “Dell EMC” and moved under the Dell Technologies umbrella. Over the last couple of months we’ve seen the websites, mailing lists, Twitter accounts and more change from EMC to Dell EMC. So it was inevitable that the EMC Elect program would change as well. Meanwhile, Dell had its own influencer program: the Dell TechCenter Rockstars. Going forward, these two programs will merge and, as of today, be known as the Dell EMC Elect!
I will be attending Dell EMC World this October in Austin, Texas and will try to find out how the merger between these massive companies will impact the Dell and EMC storage product portfolios. Flying in under the EMC Elect program, we should have a front-row seat to all the exciting announcements!
I had the opportunity to play with an EMC product last week that was new to me: ScaleIO. It’s definitely not a new EMC product (I troubleshot version 1.31, while EMC released 2.0 at EMC World 2016), but I just hadn’t had the honor of working with one of these systems yet. ScaleIO is a software-defined storage solution that uses the local disks in your commodity servers and shares them out as block LUNs over Ethernet. This means the architecture can scale very well, in both capacity and performance, to hundreds (if not thousands) of servers and disks.
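To make the scale-out idea concrete, here’s a toy sketch (my own simplification, not ScaleIO’s actual data layout): volume chunks are spread across the local disks of many servers, so adding a server adds both capacity and spindles serving I/O. The chunk count and server names are illustrative only.

```python
# Toy sketch of scale-out block storage: spread volume chunks round-robin
# across the local disks of many commodity servers. Illustrative only --
# this is NOT ScaleIO's real placement algorithm.

def layout_volume(num_chunks, servers):
    """Map each volume chunk to a server, round-robin."""
    return {chunk: servers[chunk % len(servers)] for chunk in range(num_chunks)}

servers = [f"server-{i}" for i in range(4)]
layout = layout_volume(8, servers)
print(layout[0], layout[5])  # chunks land on different servers
```

With this kind of layout, a read of the whole volume touches every server at once, which is where the performance scaling comes from.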
Today, February 1st, EMC announced the EMC Elect 2016. I’m happy to again be part of the EMC Elect, now for the 4th year in a row!
Each year the EMC Elect program selects a number of individuals from all over the world who have shared knowledge on, or worthwhile news about, EMC systems. It doesn’t matter how you do this: via EMC’s own community network (ECN), on Twitter, through blogs, while speaking at events, or in other media. As long as you share your knowledge with the rest of the world, you can get nominated. Yes, even if you have an occasional rant about that feature your VNX still doesn’t support (*cough* removing private RAID groups from a storage pool *cough*), as long as you keep it constructive; whining is too easy.
Did you ever install an Isilon cluster, connect all the cables and run through the configuration wizard, only to find out you still can’t connect to the cluster? Sure you did; it happens to everyone. Maybe the cluster only has one 10GigE port online while the network team is still scrambling for 10GigE modules. Or maybe you configured the wrong VLAN tag on the subnet. This post groups some of the more useful Isilon network commands, so you can enable VLAN tagging or add additional ports to a pool via the CLI.
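As a taste of what’s involved, here’s a hedged sketch of the relevant commands. I’m assuming OneFS 8.x syntax here (on OneFS 7.x the equivalents live under `isi networks` instead), and the `groupnet0.subnet0.pool0` object names, VLAN ID and interface spec are placeholders; substitute your own.

```shell
# List the subnets and pools so you can find the object names (examples below)
isi network subnets list
isi network pools list

# Enable VLAN tagging on a subnet and set the tag
isi network subnets modify groupnet0.subnet0 --vlan-enabled=true --vlan-id=200

# Add an extra 10GigE interface on node 2 to an existing pool
isi network pools modify groupnet0.subnet0.pool0 --add-ifaces=2:10gige-1
```

These obviously only run on an actual cluster, so treat them as a template rather than something to paste verbatim.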
Several weeks ago we performed a resiliency test on a VMAX10k that was about to be put into production. The customer wanted confirmation that the array was wired up properly and would respond correctly in case there were any issues like a power or disk failure. This is fairly standard testing and makes sense: better to pull a power plug and see the array go down while there is no production running against it, right?
We pulled one power feed from each system bay and storage bay. Obviously no problem for the VMAX: it dialed out, notified EMC that it had lost power on a couple of feeds, and that was it. Next up we yanked out a drive: I/O continued and the array dialed out to EMC that something was wrong with a drive, but… we didn’t see anything in Unisphere for VMAX!
Today EMC announced the names of the EMC Elect 2015. I’m thrilled to be able to say: I’m one of them! Three years in, the Elect program is EMC’s way of saying “thank you!” to its online advocates. We share our knowledge, either by blogging or tweeting or speaking at public events. In return EMC gives us (among other things) early access to new, hot product information, an easy path to the various EMC business units, some swag, comfy beanbags to sit in at Tier 1 events like EMC World, etc. And of course bragging rights…
In 2014 EMC announced its participation in the VMware EVO:RAIL program, which combines storage, networking and VMware compute into a hyper-converged infrastructure appliance: 1 to 4 x86 appliances with internal storage, running VMware vSphere and VSAN on top. Connect it to your network, open the management interface and you’re set. Today EMC delivers on that promise with the VSPEX BLUE hyper-converged appliance. The mission of VSPEX BLUE: add simplicity to your IT infrastructure and let the EVO:RAIL user easily adopt all the other EMC offerings, like RecoverPoint and ESRS. Let’s see how it does that…
Not all data is accessed equally: some data is popular, while other data may only be accessed infrequently. With the introduction of FAST VP in the CX4 and VNX series, it became possible to create a single storage pool that contains multiple types of drives. The system chops your LUNs into slices and assigns each slice a temperature based on its activity: heavily accessed slices are hot, infrequently accessed slices are cold. FAST VP then moves the hottest slices to the fastest tier. Once that tier is full, the remaining hot slices go to the second fastest tier, and so on. This does absolute wonders for your TCO: your cold data is now stored on cheap NL-SAS disks instead of expensive SSDs, and your end users won’t notice a thing. There’s one scenario that will get you in trouble though, and that’s infrequent, heavy use of formerly cold data…
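The placement logic described above can be sketched in a few lines. This is a deliberate simplification of FAST VP (real slices are fixed-size extents and relocations are scheduled, not instantaneous); the tier names, capacities and temperature scores are made up for illustration.

```python
# Simplified FAST VP-style placement: sort slices hottest-first and fill the
# fastest tier until it's full, then spill over to the next tier, and so on.
# Tier capacities are expressed in slices for simplicity.

def place_slices(slices, tiers):
    """slices: list of (slice_id, temperature); tiers: list of (name, capacity),
    ordered fastest to slowest. Returns {tier_name: [slice_id, ...]}."""
    placement = {name: [] for name, _ in tiers}
    tier_idx = 0
    for slice_id, _temp in sorted(slices, key=lambda s: s[1], reverse=True):
        # Move on to the next (slower) tier once the current one is full
        while len(placement[tiers[tier_idx][0]]) >= tiers[tier_idx][1]:
            tier_idx += 1
        placement[tiers[tier_idx][0]].append(slice_id)
    return placement

tiers = [("SSD", 2), ("SAS", 3), ("NL-SAS", 10)]
slices = [("s1", 90), ("s2", 5), ("s3", 70), ("s4", 40), ("s5", 1), ("s6", 55)]
print(place_slices(slices, tiers))
# {'SSD': ['s1', 's3'], 'SAS': ['s6', 's4', 's2'], 'NL-SAS': ['s5']}
```

The trouble scenario from the paragraph above is visible here too: a slice that was cold yesterday (say `s5`) sits on NL-SAS, and a sudden burst of heavy I/O hits it long before the next relocation moves it up a tier.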
MCx FAST Cache is an addition to the regular DRAM cache in a VNX. DRAM cache is very fast but also (relatively) expensive, so an array has a limited amount of it. Spinning disks are large in capacity and relatively cheap, but slow. Solid-state drives bridge that performance gap, sitting somewhere between DRAM and spinning disks in both performance and cost. There’s one problem though: a LUN usually isn’t 100% active all of the time. This means that placing an entire LUN on SSDs might not drive those SSDs hard enough to get the most from your investment. If only there were software that made sure only the hottest data was placed on those SSDs, and that quickly adjusted this placement as the workload changes, you’d get a lot more bang for your buck. Enter MCx FAST Cache: now with an improved software stack that has less overhead, resulting in better write performance.
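The core idea, promote data to SSD only after repeated accesses so one-off I/O doesn’t pollute the cache, can be sketched like this. It’s a toy model loosely inspired by FAST Cache, not EMC’s implementation: the promotion threshold, capacity and LRU eviction policy here are my own illustrative choices.

```python
from collections import OrderedDict

# Toy promote-on-repeated-access cache: a chunk is only copied into the small
# SSD cache after it has been hit several times while still on spinning disk.
# Threshold, capacity and eviction policy are illustrative, not EMC's values.

class PromotionCache:
    def __init__(self, capacity, promote_after=3):
        self.capacity = capacity
        self.promote_after = promote_after
        self.hits = {}              # chunk -> access count while still on disk
        self.cache = OrderedDict()  # promoted chunks, least recently used first

    def access(self, chunk):
        """Return 'cache' if served from the SSD cache, 'disk' otherwise."""
        if chunk in self.cache:
            self.cache.move_to_end(chunk)        # refresh LRU position
            return "cache"
        self.hits[chunk] = self.hits.get(chunk, 0) + 1
        if self.hits[chunk] >= self.promote_after:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
            self.cache[chunk] = True
            del self.hits[chunk]
        return "disk"

c = PromotionCache(capacity=2)
for _ in range(3):
    c.access("A")      # the third access triggers promotion to SSD
print(c.access("A"))   # prints: cache
```

Note how the first accesses still come from disk; only once a chunk proves itself hot does it earn a spot on the SSDs, which is exactly why FAST Cache squeezes more value out of a small amount of flash than static LUN placement would.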