Storage

113 posts

PSA: Isilon L3 cache does not enable with a 1:1 HDD:SSD ratio

Isilon L3 cache not enabling

I recently expanded two 3-node Isilon X210 clusters with one additional X210 node each. The clusters were previously installed with OneFS 7.x and upgraded to OneFS 8.1.0.4 somewhere in late 2018. A local team racked and cabled the new Isilon nodes, after which I added them to the cluster remotely via the GUI. Talk about teamwork!

A short while later the new node showed up in the isi status output. As you can see in the picture to the right, something was off: the SSD storage didn't show up as Isilon L3 cache. A quick check showed that the hardware configuration was consistent with the existing nodes. The SmartPools settings/default policy was also set up correctly, with SSDs employed as L3 cache. Weird…
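For reference, this is roughly how you can check the L3 cache state from the CLI. It's a minimal sketch assuming OneFS 8.x command syntax; the node pool name below is just a placeholder for whatever your cluster reports.

    # Cluster overview; with L3 enabled, a node's SSD capacity shows up as cache
    isi status

    # Node pools and their L3 cache setting
    isi storagepool nodepools list
    isi storagepool nodepools view x210_example_pool   # placeholder pool name

    # Global SmartPools settings (default L3 behaviour for new node pools)
    isi storagepool settings view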

Continue reading

How To: Clone Windows 10 from SATA SSD to M.2 SSD (& fix inaccessible boot device)

1TB WD Black SN750 NVMe M.2 SSD

A few weeks ago I received a 1TB Western Digital Black SN750 M.2 SSD, boasting an impressive 3470 MB/s read speed on the packaging. I already had a SATA SSD installed in my gaming/photo editing PC. Nevertheless, those specs got me to pick up a screwdriver and install the new M.2 SSD. The physical installation is dead simple: remove the graphics card, install the M.2 SSD, reinstall the graphics card. I wasn't really looking forward to a full reinstallation of Windows 10 though. There are just too many applications, settings and licenses on that system that I didn't want to recreate or re-enter. Instead, I wanted to clone Windows 10 from SATA SSD to M.2 SSD.

After a little bit of research I ended up with Macrium Reflect, a piece of freeware disk cloning software. Long story short: I cloned the old SSD to the M.2 SSD, rebooted from the M.2 SSD, and… was greeted with a variety of errors. The main recurring one was Inaccessible Boot Device, but my troubleshooting attempts surfaced plenty of others.
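The full post covers the exact fix, but as a rough illustration of the kind of repair work involved: from a Windows Recovery Environment command prompt, rebuilding the boot configuration and (if needed) injecting a missing storage driver looks something like the sketch below. The drive letters and the D:\drivers path are placeholders; WinRE often assigns different letters, so verify them first.

    rem Check which drive letters WinRE assigned to the cloned volumes
    diskpart
    list volume
    exit

    rem Rebuild the boot records / BCD store for the Windows install on C:
    rem (bootrec /fixmbr and /fixboot mainly matter for BIOS/MBR setups;
    rem on UEFI systems bcdboot does the heavy lifting)
    bootrec /fixmbr
    bootrec /fixboot
    bootrec /rebuildbcd
    bcdboot C:\Windows

    rem If the boot failure is caused by a missing storage/NVMe driver,
    rem it can be injected into the offline image with DISM
    dism /image:C:\ /add-driver /driver:D:\drivers /recurse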

Continue reading

Faster and bigger SSDs enable us to talk about something other than IOps

Bus overload on an old storage array after adding a few SSDs

The first SSDs in our storage arrays were advertised at 2500-3500 IOps per drive: much quicker than spinning drives, considering the recommended 140 IOps for a 10k SAS drive. But it was in fact still easy to overload a set of SSDs and reach their maximum throughput, especially when they were used in an (undersized) caching tier.

A year or so later, when you started adding more flash to a system, the collective "Oomph!" of the flash drives would overload other components in the storage system. Those systems were designed around spinning media, so with the suddenly much faster drives, busses and CPUs were hammered and couldn't keep up.
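To put some purely illustrative numbers on that (assuming the advertised IOps and a 64 KB IO size; none of these figures are from the original post):

    1 SSD:              3,500 IOps x 64 KB ≈ 220 MB/s
    A handful (8):      8 x 220 MB/s       ≈ 1.8 GB/s
    2 x 4 Gb FC loops:                     ≈ 0.8 GB/s usable

In other words, even a modest number of flash drives could already out-run the back-end loops of an array that was designed for spinning disks.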

Cue all sorts of creative ways to avoid this bottleneck: faster CPUs, upgrades from FC to multi-lane SAS, or bigger architectural changes such as offloading to IO caching cards in the servers themselves (e.g. Fusion-io cards), scale-out systems, etc.

Continue reading

Reassign Isilon node IP addresses; go OCD!

A while ago I installed two new Isilon H400 clusters. With any IT infrastructure, consistency and predictability are key to a trouble-free experience in the years to come. Cables should be neatly installed, labeled and predictable. When wiring the internal network cables, it helps if nodes 1 through 4 are connected to switch ports 1 through 4 in order, instead of 1, 4, 2, 3. While some might consider this OCD, it's the attention to detail that makes later troubleshooting easier and faster. Like a colleague said: "If someone pays enough attention to the little details, I can rest assured that he definitely pays attention to the big, important things!".

So I installed the cluster, configured it, then ran an isi status to verify everything. Imagine my delight when I saw this:

Isilon nodes before reassigning node IPs

Aaargh!

Continue reading

Data Domain migration and retaining your system name and IP addresses

DD3300

Several of our Data Domains are end-of-life and need to be replaced with new hardware. In most cases it's a small site with a small Data Domain that only holds roughly one month of backups. In those cases we just install a new Data Domain next to it, reconfigure our backup software, and that's it. After a month the old backups have expired and you can switch off the old Data Domain.

For the slightly larger sites there's more than one backup client/server writing to the Data Domain: there are Oracle RMAN backups, SQL dumps, etc. Plus, the retention of the backups on the Data Domain is much, much longer. In these cases you want to perform a proper Data Domain migration that retains the name and IP address of the old Data Domain, so you don't have to touch all the clients. Here's how you do that, and a DDBoost gotcha you should be aware of!
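The full procedure is in the post, but the final cut-over boils down to giving the new system the old system's identity. A minimal sketch, assuming DD OS CLI syntax and using placeholder hostnames, interface name and IP address (exact commands can vary per DD OS release):

    # On the new Data Domain, after the data has been migrated and the old
    # system has been renamed or taken offline:
    net set hostname olddd.example.com
    net config eth0a 192.0.2.10 netmask 255.255.255.0
    net show settings

Clients then keep talking to the same name and address, which is exactly the point; the DDBoost gotcha mentioned above is covered in the full post.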

Continue reading