Every once in a while you might need to replace an Isilon InfiniBand switch: perhaps a switch broke, you need more ports, or the old switch is simply too old. Good news: it’s a fairly straightforward job. And if your cluster has two switches, you can replace one switch at a time without an outage.
Last year EMC installed two new three-node Isilon clusters for one of our customers. Thirteen months later the customer needs more capacity, so the time for adding Isilon nodes to the existing clusters has finally come. Good news for me: since the initial install I’ve become certified to install Isilon systems myself, so these expansions are all mine! EMC marketing promises an Isilon cluster expansion in 60 seconds; let’s put it to the test!
Isilon scale-out NAS clusters (or grids) are built out of nodes or servers using (relatively) cheap commodity hardware. Compared to a traditional storage system this has the advantage that with every capacity (i.e. disk) expansion you add to the cluster, the number of CPUs, the amount of RAM, and the number of network ports grow accordingly. You just add another node or server and poof: more TBs, more speed. The proprietary software OneFS then glues all those nodes together to create one single immense filesystem (up to 20PB with current drive specs). But what’s one trait of commodity hardware? You occasionally need an Isilon firmware upgrade! The LSI disk controller needs a new release once in a while, as do, occasionally, the front panel and the InfiniBand components. This howto explains what to do to make sure you’re running the latest firmware!
The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era when flash wasn’t around or was extremely expensive, and thus simply weren’t designed to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that previously (with mechanical disks) were never a bottleneck now start to clog up your system. To accommodate this increased use of flash drives, the VNX system was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!
Yesterday we racked and stacked the EMC Isilon systems, prepared most of the cabling, and got just about ready to power on the Isilon systems. Which is pretty uneventful, if you consider we’d been dragging hundreds of kilograms of equipment around all day yesterday… The whole process can be split into four parts: configure the cluster and the initial node, join the remaining nodes, configure the network, and configure the rest.
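The four parts above can be sketched roughly as follows. This is a hedged outline, not a definitive procedure: the wizard prompts and `isi` commands are reproduced from memory of OneFS 7.x-era installs, so verify every step against the OneFS CLI reference for your release before running anything.

```shell
# 1. Configure the cluster and the initial node:
#    on first boot the node drops into the serial-console wizard;
#    choose the "create a new cluster" option and answer the prompts
#    (cluster name, root/admin passwords, internal IP ranges, etc.).

# 2. Join the remaining nodes:
#    boot each additional node and pick the "join an existing cluster"
#    option in the same wizard, then confirm membership from any node:
isi status

# 3. Configure the network:
#    the interactive configuration shell covers external IP ranges,
#    netmask, gateway and so on; "help" lists subcommands and
#    "commit" saves the changes:
isi config

# 4. Configure the rest:
#    before handing the cluster over, verify drive and node health:
isi devices
```

The exact subcommand names and wizard wording differ between OneFS versions, so treat the block as a checklist of the four phases rather than a copy-paste script.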
I’m currently contracted by a customer that has been experiencing chronic capacity and performance issues in their storage environment. After analyzing the environment and writing an advisory report we got to work and started correcting and improving many aspects of the storage systems. One component of this overhaul is installing a pair of new Isilon systems which will store PACS (Picture Archiving and Communication System) data generated by the radiology department. The planning and design phase took place over the last couple of months, in which we involved both internal IT people and external resources such as the PACS vendor and the suppliers. All said, discussed and done: the actual implementation of the Isilon systems is scheduled for this week. Today: Isilon rack and stack!