Last year EMC installed two new three-node Isilon clusters for one of our customers. Thirteen months later the customer needs more capacity, so the time for adding Isilon nodes to the existing clusters has finally come. Good news for me: since the initial install I've been certified to install Isilon systems myself, so these expansions are all mine! EMC marketing promises an Isilon cluster expansion in 60 seconds; let’s put it to the test!
“Check your email ;)”. That was the first Twitter DM I read one sleepy morning in June. Suffice it to say, a minute later I was wide awake: I had been chosen to represent the EMC Elect at the EMC “Redefine Possible” MegaLaunch event in London (UK)! I knew about these launch events because my colleague Rob attended one last year in Milan. Excitement started building, and a couple of hours later I figured out I wasn’t going alone…
July 8th 2014. EMC MegaLaunch 4. Theme: Redefine Possible (or #RedefinePossible on Twitter). What previously was impossible is now possible! A catchy theme, and something we’ve seen in IT a number of times now. For example: in the 1990s, who would have thought it would be possible to migrate a server from one datacenter to another, possibly a couple of miles away, without downtime, in a couple of seconds?! Doing things fundamentally differently, better: that’s the goal we’re always trying to achieve. So how can we apply this to Isilon?
Isilon scale-out NAS clusters (or grids) are built out of nodes or servers using (relatively) cheap commodity hardware. Compared to a traditional storage system this has the advantage that with every capacity (= disks) expansion you add to the cluster, the number of CPUs, the amount of RAM and the number of network ports grow accordingly. You just add another node or server and poof: more TBs, more speed. The proprietary OneFS software then glues all those nodes together into one single immense filesystem (up to 20PB with current drive specs). But what’s one trait of commodity hardware? It occasionally needs a firmware upgrade! The LSI disk controller needs a new release once in a while, as do, occasionally, the front panel and the InfiniBand components. This howto explains what to do to make sure you’re running the latest firmware!
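As a rough illustration, a firmware check on a OneFS 7.x-era cluster might start something like the sketch below. The `isi firmware status` command name is my recollection of that era's CLI, not something confirmed by this post, so treat it as an assumption and verify against the release notes of your firmware package. The script only stores and prints the command sequence, so it can be reviewed before anything is run against a real cluster:

```shell
# Hypothetical firmware-check sequence for a OneFS 7.x cluster.
# Command names are assumptions from the 7.x-era CLI; verify against
# your firmware package release notes. Stored as text and printed,
# not executed, so the plan can be reviewed first.
firmware_plan='
isi status             # cluster healthy? no degraded nodes or drives
isi firmware status    # installed firmware versions per node/component
'
printf '%s\n' "$firmware_plan"
```

The idea is simply: confirm the cluster is healthy first, then compare the reported firmware versions against the latest package before touching anything.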
Every IT system needs a software upgrade once in a while, either to enable additional functionality or to patch security holes. Yes, even an Isilon scale-out NAS. Good news: performing an Isilon OneFS upgrade is peanuts! Including pre- and post-checks, our 3-node cluster was upgraded in less than two hours without downtime. Curious how it’s done?
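The rough shape of such an upgrade can be sketched as a three-step plan. The `isi update` syntax and the install-image path below are assumptions based on the pre-8.0 CLI (not taken from this post); always follow the official upgrade guide for your target release. The script prints the plan for review rather than executing it:

```shell
# Hedged outline of a rolling OneFS 7.x upgrade. "isi update" and the
# /ifs path are assumptions from the pre-8.0 CLI; follow the upgrade
# guide for your release. Printed for review, not executed.
upgrade_plan='
isi status                             # pre-check: all nodes healthy
isi update /ifs/data/OneFS_install.tar.gz  # rolling, node-by-node
isi status                             # post-check: health + versions
'
printf '%s\n' "$upgrade_plan"
```

The bulk of the two hours mentioned above goes into the pre- and post-checks and the rolling node reboots; the command itself is the easy part.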
A couple of months ago InsightIQ 3.0 was released. This new release offers improvements in the interface and under the hood, especially when paired with OneFS 7.1. Upgrading is straightforward, takes under 30 minutes, and also makes sure you’re no longer affected by the Heartbleed bug. Start clicking!
About 7 or 8 months ago we installed two 3-node Isilon clusters at a hospital, destined to host PACS data. Since then it’s been all quiet on the Isilon front: no hardware failures, no performance problems. No complaints there, of course, but also… slightly… boring. FINALLY, a couple of days ago the Isilon sent us an email. “Device disconnected. Boot mirror is critical. Unhealthy Isilon boot disk. Mirror is degraded.” That kind of stuff. Woohoo, ACTION!
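Faced with alerts like these, a first triage pass might look like the sketch below. OneFS is FreeBSD-based, and *if* the boot mirror is a standard FreeBSD gmirror (an assumption on my part, not stated in this post), `gmirror status` would show its health; `isi devices` covers the data drives. As before, the commands are only stored and printed so the plan can be reviewed first:

```shell
# Hypothetical triage plan for a degraded boot mirror. The gmirror
# assumption is mine (OneFS is FreeBSD-based); verify with support
# before acting. Printed for review, not executed.
bootdisk_plan='
isi status        # which node is raising the alert?
isi devices       # data drive states on the local node
gmirror status    # FreeBSD boot-mirror health, if gmirror is in use
'
printf '%s\n' "$bootdisk_plan"
```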
Once your Isilon cluster is up and running you’ll want to keep an eye on it. A piece of software that’s extremely useful for monitoring both performance and capacity usage is InsightIQ. Very easy to set up, it’s remarkably powerful in both proactive and reactive monitoring scenarios. Either sit back and watch the scheduled reports land in your mailbox, or take a more active approach and drill down to find the source of a performance problem. Let’s explore further!
Yesterday we racked and stacked the EMC Isilon systems, prepared most of the cabling, and were pretty much ready to start the Isilon systems. That part is fairly uneventful, especially considering we’d been dragging hundreds of kilograms of equipment around all day yesterday… The whole process can be split into four parts: configure the cluster and the initial node, join the remaining nodes, configure the network, and configure the rest.
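The four parts above can be sketched as an outline. The wizard steps and interface names (int-a/int-b internal, ext-1 external) are paraphrased from the OneFS 7.x era from memory, so treat the details as assumptions and follow the install guide for your release; the script only prints the outline for review:

```shell
# The four install phases as a reviewable outline (OneFS 7.x era;
# details are assumptions, follow your install guide). Printed only.
install_plan='
# 1. Initial node: serial-console configuration wizard
#    (cluster name, passwords, int-a/int-b internal and ext-1
#     external IP ranges)
# 2. Remaining nodes: "Join an existing cluster" in the wizard
isi status    # verify every node has joined and reports healthy
# 3. Network: SmartConnect zone name plus DNS delegation
# 4. The rest: NTP, DNS, directory services, SMB/NFS shares
'
printf '%s\n' "$install_plan"
```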