Isilon scale-out NAS systems lean heavily on DNS. DNS plays a vital role in how clients access files on the Isilon cluster: clients are distributed across nodes and network interfaces based on what your DNS reports back to them. This is all handled by the SmartConnect component in the Isilon, which is configurable to a certain extent. In this post I’ll explain what SmartConnect does and share a tip to speed up your Isilon DR procedure.
Isilon SmartConnect zone
An Isilon cluster consists of 3 or more nodes, each having multiple network interfaces. You can chop your cluster and/or nodes up into “SmartConnect zones” to segregate traffic. For example: the first 10Gbit NIC on your X-nodes is for the R&D department, the second NIC on the X-nodes is for manufacturing and all other clients end up using one of the four 1Gbit NICs on the NL-nodes. Each of these zones has a range of IP addresses which are automatically spread across the available interfaces and nodes. But how do you make sure the client load is spread across all those nodes and interfaces?
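To make the example concrete, the zone-to-interface mapping sketched above might look like this (all zone names, pool names and IP ranges here are hypothetical):

```
SmartConnect zone                      Nodes / interfaces               IP range (example)
research.isilon001.customer.lan        X-nodes, 10Gbit NIC 1            10.10.1.10 - 10.10.1.50
manufacturing.isilon001.customer.lan   X-nodes, 10Gbit NIC 2            10.10.2.10 - 10.10.2.50
generic-smb.isilon001.customer.lan     NL-nodes, 1Gbit NICs 1-4         10.10.3.10 - 10.10.3.100
```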
The Isilon does this by taking over part of the DNS process. Instead of creating an A-record in your own DNS saying “hey Client, you can find Isilon-1 at 192.0.2.10”, you create a zone delegation that tells the client “hey Client, this is the SmartConnect Service IP (SSIP) of the Isilon; ask it which IP you should use to access your files!”. You can create multiple SmartConnect zones: one for that R&D department, one for manufacturing, etc. For each SmartConnect zone you can configure how the IP addresses are handed out to the clients: for example round-robin, or load-balanced based on node CPU usage or interface bandwidth usage, etc.
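In BIND-style notation, that delegation in your own DNS might look like the fragment below (all names and addresses are hypothetical). The NS record hands the whole subdomain to the SSIP, so every lookup under it is answered by SmartConnect:

```
; in the customer.lan zone file
isilon001.customer.lan.        IN NS   ssip.isilon001.customer.lan.
ssip.isilon001.customer.lan.   IN A    10.10.0.5    ; SmartConnect Service IP (glue record)
```

From then on, a query for e.g. research.isilon001.customer.lan is forwarded to the SSIP, and SmartConnect replies with a pool IP chosen according to the configured balancing policy.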
If done right and using the earlier example, the clients that access the Isilon via research.isilon001.customer.lan will now get an IP matching the least loaded X-node and first 10Gbit interface, another client accessing the Isilon via manufacturing.isilon001.customer.lan will end up on the least loaded X-node and second 10Gbit interface, and all the other clients accessing for example generic-smb.isilon001.customer.lan will end up on one of the NL-nodes and any of the 1Gbit interfaces.
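You can watch this happen from any client with dig or nslookup (hypothetical names and addresses again; the answer typically differs per query, because SmartConnect picks an interface for each request):

```
$ dig +short research.isilon001.customer.lan
10.10.1.12

$ dig +short research.isilon001.customer.lan
10.10.1.17
```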
The above approach makes it easy to integrate your Isilon in the existing infrastructure (the changes to your DNS are minimal) and yet allows the Isilon to spread the clients efficiently across its components.
So how about Disaster Recovery?
Disasters happen. Even if the Isilon itself doesn’t experience a catastrophic failure, the power might fail in your data center or an admin might press the wrong buttons and bring down the system. To make sure your disaster recovery is quick and your clients can access the data as soon as possible, you need:
- Your data on a secondary Isilon. Typically this means replicating the data using SyncIQ. During a disaster you would have to make your replicas read/write accessible: you can do this with the press of a button.
- Make sure your clients can access the replicated data. SMB shares and NFS exports on the secondary Isilon need to be configured correctly to make sure that (after you make the data accessible in step 1) your clients can actually access it. This is a matter of consistent and correct management: if you create or modify a share on the primary Isilon, do the same on the secondary Isilon.
- Point your clients to the right Isilon, in this case the secondary, DR Isilon. There are two sub-options:
  - Reconfigure all your clients to point to Isilon-002. This is a LOT of work, so not the recommended approach.
  - Make sure Isilon-002 mimics Isilon-001. Part 1 is redirecting your DNS requests to the secondary Isilon; this is a simple change in your DNS server where you update the zone delegation. Part 2 is making sure your secondary Isilon actually responds to those requests, i.e. make it think it is the primary Isilon. You can do this in advance (before the actual disaster recovery) by creating SmartConnect zone aliases, but this is unfortunately a CLI-only operation.
First of all, list your IP address pools on the secondary Isilon with isi networks list pools
Find the pool you want to add your alias to, then run the following command to add the DNS name for the primary Isilon:
isi networks modify pool --name subnet0:pool0 --add-zone-aliases isilon-001-research.customer.lan
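Note that the exact syntax depends on your OneFS version: the command above matches OneFS 7.x. On OneFS 8.x, where pools live under a groupnet, the equivalent should be something like the following (groupnet, subnet and pool names are examples; check isi network pools modify --help on your own cluster):

```
isi network pools modify groupnet0.subnet0.pool0 --add-sc-dns-zone-aliases isilon-001-research.customer.lan
```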
It is entirely possible to have multiple aliases for one zone on your secondary Isilon: in that case use a comma to separate them in the above command.
Repeat these steps for all zones and ensure that each SmartConnect Zone on the secondary Isilon also listens to the equivalent SmartConnect Zone on the primary Isilon.
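Afterwards you can verify that the aliases were registered by listing the pool details again in verbose mode (OneFS 7.x syntax; the aliases should show up in the pool's SmartConnect settings):

```
isi networks list pools --verbose
```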
In the image above you can see the original zone name (smb.file001.xxx.xx) and the alias for it (smb.file101.xxx.xx).
Your DR process is now reduced to making your SyncIQ replicas read/write accessible and changing an IP address in your DNS server. Job done!