I’m currently contracted by a customer that has been experiencing chronic capacity and performance issues in their storage environment. After analyzing the environment and writing an advisory report we got to work and started correcting and improving many aspects of the storage systems. One component of this overhaul is installing a pair of new Isilon systems which will store PACS (Picture Archiving and Communication System) data generated by the radiology department. The planning and design phase took place over the last couple of months, in which we involved both internal IT people and external resources such as the PACS vendor and the suppliers. All said, discussed and done: the actual implementation of the Isilon systems is scheduled for this week. Today: Isilon rack and stack!
We required two systems that could scale out easily (in both capacity and performance; the latter is often forgotten!) and would also be easy to maintain. This "easy to maintain" applies both while the system is in service and when it is written off and needs to be replaced. Radiology departments have a habit of always wanting to store more data from more (and better) scanners. This drives the number of TBs we need to store up to quantities that are difficult to manage, let alone migrate without downtime. If we could avoid having to migrate in the first place, that would be great.
Enter Isilon! One filesystem. Capacity expansion in under 60 seconds after racking and stacking. Expansion also adds CPU, memory and NICs, so performance scales linearly as well. Management of the system is pretty easy, too. System written off? Add a new node, put an old node in "maintenance mode", wait for the rebalance to complete, then remove the old node. Repeat till all nodes have been replaced -> voila, a new system without downtime or migration!
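The rolling-replacement loop above can be sketched in a few lines. This is purely illustrative: the node names and the `rolling_replace` helper are my own inventions, not OneFS commands, and the real process of course waits for the rebalance between each step.

```python
# Sketch of the rolling node-replacement idea: join a new node, retire an
# old one, repeat until the whole cluster is renewed. Illustrative only;
# these are not OneFS commands.

def rolling_replace(cluster, new_nodes):
    """Replace every old node while the cluster stays online."""
    cluster = list(cluster)
    for new in new_nodes:
        cluster.append(new)  # new node joins; capacity usable in <60s
        # retire one old node: data rebalances off it before removal
        old = next(n for n in cluster if n.startswith("NL400"))
        cluster.remove(old)
    return cluster

old_cluster = ["NL400-1", "NL400-2", "NL400-3"]
print(rolling_replace(old_cluster, ["NEW-1", "NEW-2", "NEW-3"]))
# -> ['NEW-1', 'NEW-2', 'NEW-3']
```

At no point does the cluster drop below its original node count, which is why the filesystem stays online throughout.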
Our current PACS demand is archiving only, so we went for nodes from the NL-series. Three nodes at 36TB each, throw in an N+2:1 protection ratio, and you end up with roughly 70-ish TBs of available space. And if we run out of space we'll plug in an additional node, press some buttons and we've got extra capacity and performance (since we're not only adding disks but also all the other resources such as CPU, memory and network interfaces).
If you are considering purchasing a new system with NL nodes, make sure you get nodes with plenty of RAM! Don't try to save a couple of bucks/euros by skimping on RAM; you'll pay double in performance.
No heavy lifting required
The two systems were spread across 3 pallets. After unpacking we ended up with two carts, each cart containing one entire cluster: two Infiniband switches, three NL400 chassis and nine boxes of disks. Be aware: the disks are matched to the node serial number, so don’t start mixing up the boxes!
The fact that the nodes ship empty of disks makes them pretty easy to lift. Install the rails and slide the node onto them. Insert the drives, keeping an eye on the SATA connectors: there could be some dust or foam trapped in there from transport. Just blow them clean.. oldskool NES style! ;). Install the front panel. Rinse and repeat for all nodes, then install the Infiniband switches in your rack. Keep in mind that Infiniband cables have a maximum length of 10 meters: place those switches in a tactical spot so that you can confiscate.. I mean, expand into other racks once your cluster grows.
Cable it up!
Next up: cabling! First, the Infiniband cabling. Be careful with these cables: they're fragile and you don't want to bend them in too small a radius. Connect node 1 to switchport 1 on both switches, node 2 to port 2, and so on.
The Infiniband network is an internal network in the Isilon cluster and will be used to transport chunks of data between nodes. Performance matters!
Next up: front end network cabling. These systems are equipped with four 1 GigE interfaces each. Cable them redundantly to your network so that you can sustain a switch reboot.
The end result should look something like this. Please: take your time cabling this gear in! Spaghetti is good, but it belongs on a plate covered with sauce and some Parmesan cheese. It doesn't belong in a data center rack: it's a pain in the behind for the next guy/girl who needs to work in that rack. And to be really honest: it's just a disgrace to leave a rack looking like something exploded inside. Have some pride in your work!
The observant reader might be thinking by now: "Hey, do they have wireless power in The Netherlands?" Nope… The systems came shipped with cables that had normal wall-outlet connectors. Since we need to plug into PDUs (Power Distribution Units), we need the extension-cord type (officially: C13-C14 power cables). Since we don't have nearly enough of those in stock, and most of the ones we do have are 2+ meters long, we're postponing the power cabling till tomorrow so we can source some shorter, proper cables.
That’s it for day one. Day two covers starting up the Isilon and configuring the network and other basic settings. You can read about that over here.