The fast WekaIO file system saves you money!

It is a fact of IT life: hardware becomes faster and more powerful with every new generation on the market. That absolutely applies to CPUs. A few weeks ago at Intel’s Data-Centric Innovation Day in San Francisco, Intel presented its new Intel Xeon Scalable processors. These beasts now scale up to 56 cores per socket, with up to 8 sockets per system/motherboard. This incredible amount of compute power enables applications to “do things”, whether it’s analytics, machine learning, or running cloud applications.

One thing all applications have in common is that they don’t want to wait for data. As soon as your %iowait goes up, you are wasting precious and expensive compute power, because the storage subsystem can’t keep up. Fortunately, WekaIO wants to make sure this won’t be the case for your applications.
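If you want to see whether your compute is waiting on storage, %iowait is easy to check. For example, on a Linux host with the sysstat tools installed:

```
# Watch CPU and per-device utilization, refreshing every 5 seconds.
# A steadily climbing %iowait in the avg-cpu line means cores are
# sitting idle while they wait on the storage subsystem.
iostat -x 5
```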

Continue reading

PSA: Isilon L3 cache does not enable with a 1:1 HDD:SSD ratio

I recently expanded two 3-node Isilon X210 clusters with one additional X210 node each. The clusters were previously installed with OneFS 7.x and upgraded to OneFS 8.1.0.4 sometime in late 2018. A local team racked and cabled the new Isilon nodes, after which I added them to the cluster remotely via the GUI. Talk about teamwork!

A little while later the node showed up in the isi status output. As you can see in the picture to the right, something was off: the SSD storage didn’t show up as Isilon L3 cache. A quick check showed that the hardware configuration was consistent with the previous, existing nodes. The SmartPools settings/default policy was also set up correctly, with SSDs employed as L3 cache. Weird…
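The full diagnosis is in the rest of the post, but as a starting point, this is roughly how you would verify the L3 setting from the OneFS CLI. A hedged sketch, assuming OneFS 8.x syntax and a placeholder pool name:

```
# List the node pools and their SSD usage; look for "L3 Enabled".
isi storagepool nodepools list -v

# Check the default SmartPools settings (SSD strategy, L3 defaults).
isi storagepool settings view

# If a pool reports L3 as disabled, it can normally be switched on with:
isi storagepool nodepools modify x210_pool --l3 true
```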

Continue reading

How To: Clone Windows 10 from SATA SSD to M.2 SSD (& fix inaccessible boot device)

A few weeks ago I received a 1TB Western Digital Black SN750 M.2 SSD, boasting an impressive 3470 MB/s read speed on the packaging. I already had a SATA SSD installed in my gaming/photo editing PC. Nevertheless, those specs got me to pick up a screwdriver and install the new M.2 SSD. The physical installation is dead simple: remove graphics card, install M.2 SSD, reinstall graphics card. I wasn’t really looking forward to a full reinstallation of Windows 10 though. There are just too many applications, settings and licenses on that system that I didn’t want to recreate or re-enter. Instead, I wanted to clone Windows 10 from the SATA SSD to the M.2 SSD.

After a little bit of research, I ended up with Macrium Reflect, which is freeware disk cloning software. Long story short: I cloned the old SSD to the M.2 SSD, rebooted from the M.2 SSD, and… was greeted with a variety of errors. The main recurring error was Inaccessible Boot Device, but in my troubleshooting attempts I saw many more.
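The fix that actually worked is in the full post. For context, these are the standard boot-repair commands from a Windows 10 recovery command prompt that usually come up first when troubleshooting Inaccessible Boot Device; whether they help depends on the root cause:

```
REM Run from the Windows Recovery Environment command prompt.
REM Rewrite the master boot record and the partition boot sector:
bootrec /fixmbr
bootrec /fixboot
REM Scan for Windows installations and rebuild the BCD store:
bootrec /rebuildbcd
```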

Continue reading

Faster and bigger SSDs enable us to talk about something other than IOps

Bus overload on an old storage array after adding a few SSDs

The first SSDs in our storage arrays were advertised at 2,500-3,500 IOps per drive: much quicker than spinning drives, considering the recommended 140 IOps for a 10k SAS drive. Even so, it was still easy to overload a set of SSDs and reach their maximum throughput, especially when they were used in an (undersized) caching tier.

A year or so later, when you started adding more flash to a system, the collective “Oomph!” of the flash drives would overload other components in the storage system. Systems had been designed around spinning media, so with the suddenly much faster drives, buses and CPUs were hammered and couldn’t keep up.
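Some back-of-the-envelope math shows how quickly that happens. Assuming 3,500 IOps per SSD at a 64 KB IO size (both purely illustrative numbers):

```
# Per-drive throughput: 3500 IOps * 64 KB ≈ 218 MB/s
echo $((3500 * 64 / 1024))       # -> 218 (MB/s)

# Four such SSDs already push ~875 MB/s, which is more than an
# 8 Gb/s FC back-end (~800 MB/s usable) can move.
echo $((4 * 3500 * 64 / 1024))   # -> 875 (MB/s)
```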

Cue all sorts of creative ways to avoid this bottleneck: faster CPUs, upgrades from FC to multi-lane SAS, or bigger architectural changes, such as offloading to IO caching cards in the servers themselves (e.g. Fusion-io cards), scale-out systems, etc.

Continue reading

My brain will be melting at Storage Field Day 18!

Storage Field Day 18 will be a full event, according to Stephen Foskett. And Stephen doesn’t use italics too often! Three days, likely 3-4 sessions a day, each two hours long. Add jet lag, a foreign language and new technology, all of which need inline processing to keep up. Outside of the sessions there will be very interesting conversations (tech and non-tech) while we drive between companies, so no naps either. In other words: our brains will be melting for three days at Storage Field Day 18. And I’m VERY much looking forward to it!

Continue reading

Reassign Isilon node IP addresses; go OCD!

A while ago I installed two new Isilon H400 clusters. With any IT infrastructure, consistency and predictability are key to a trouble-free experience in the years to come. Cables should be neatly installed, labeled and predictable. When wiring the internal network cables, it helps if nodes 1 through 4 are connected to switch ports 1 through 4 in order, instead of 1, 4, 2, 3. While some might consider this OCD, it’s the attention to detail that makes later troubleshooting easier and faster. As a colleague once said: “If someone pays enough attention to the little details, I can rest assured that he definitely pays attention to the big, important things!”

So I installed the cluster, configured it, then ran an isi status to verify everything. Imagine my delight when I saw this:

Isilon nodes before reassigning node IPs

Aaargh!
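The cleanup itself is in the rest of the post. One way to line node numbers back up is the lnnset command inside the isi config shell, which renumbers a node’s logical node number (LNN). A hedged sketch, with placeholder node numbers, assuming the mismatch is in the LNN assignment:

```
# Enter the cluster configuration console:
isi config
# Inside the console: give the node currently known as LNN 4 the number 3.
# (The node briefly drops out while it re-registers.)
>>> lnnset 4 3
# Write the change and leave the console:
>>> commit
>>> exit
```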

Continue reading

Data Domain migration and retaining your system name and IP addresses

Several of our Data Domains are end-of-life and need to be replaced with new hardware. In most cases it’s a small site with a small Data Domain that only holds roughly one month of backups. There we simply install a new Data Domain next to it, reconfigure our backup software, and that’s it. After a month, the old backups have expired and you can switch off the old Data Domain.

For the slightly larger sites, there’s more than one backup client/server writing to the Data Domain. There are Oracle RMAN backups, SQL dumps, etc. Plus the retention of backups on the Data Domain is much, much longer. In these cases you want to perform a proper Data Domain migration which retains the name and IP address of the old Data Domain, so you don’t have to touch all the clients. Here’s how you do that, and a DDBoost gotcha you should be aware of!
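For reference, a hedged sketch of the identity-swap portion of such a cutover on DD OS; the hostnames, interface name and addresses below are placeholders, and the full procedure (including the DDBoost gotcha) is in the rest of the post:

```
# On the OLD Data Domain: step aside to a temporary identity.
net set hostname dd-old-tmp.example.com
net config eth0a 192.0.2.50 netmask 255.255.255.0

# On the NEW Data Domain: take over the original name and IP address,
# so the backup clients keep working without reconfiguration.
net set hostname dd-site1.example.com
net config eth0a 192.0.2.10 netmask 255.255.255.0
```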

Continue reading

Isilon node loses network connectivity after reboot

In my previous post I described how to reformat an Isilon node if for some reason the cluster creation is defective. After we got our new Gen 6 clusters up and running, we ran into another peculiar issue: the Isilon nodes lose network connectivity after a reboot. If we then unplugged the network cable and moved it to a different port on the Isilon node, the network would come online again. Move the cable back to the original port: connectivity OK. Reboot the node: “no carrier” on the interface, and no connectivity.
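Useful first checks when this happens (a hedged sketch; OneFS runs on FreeBSD, and the external interface name below is a placeholder that varies per model):

```
# From the cluster, list the network interfaces and their status:
isi network interfaces list

# From the affected node's console, inspect the physical link state;
# "no carrier" in the output confirms the port never negotiated a link.
ifconfig mlxen2
```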

Continue reading

Reformat an Isilon node and try again!

While installing a new Dell EMC Isilon H400 cluster, I noticed node 1 in the chassis was acting up a bit. It allowed me to go through the initial cluster creation wizard, but didn’t run through all the steps and scripts afterwards. I left the node in that state while I installed another cluster, but after two hours or so, nothing had changed. With no other options left, I pressed Ctrl + C: the screen became responsive again and eventually the node rebooted. However, it would never finish that boot, instead halting at “/ifs not found”. Eventually, it needed a reformat before it would function properly again…
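For reference, the reformat itself is done from the node’s console. A hedged note: isi_reformat_node wipes the node back to an unconfigured state, so only run it on a node that hasn’t successfully joined a cluster:

```
# On the broken node's serial console, wipe it back to factory state:
isi_reformat_node
# After the reformat and reboot, the configuration wizard starts over
# and the node can attempt cluster creation (or joining) again.
```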

Continue reading

PSA: Unity VMware VMFS replication limit hit at 64 sessions

Our company recently replaced a lot of VNX storage with new Dell EMC Unity all-flash arrays. Since we are/were primarily a VMware hypervisor house, we decided to go ahead and create the new LUNs as VMware VMFS (Block) LUNs/datastores. This, however, resulted in us hitting a weird and unexpected replication limit at 64 sessions.
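To see how close a Unity system is to that limit, the replication sessions can be listed with uemcli. A hedged example; the management IP and credentials are placeholders:

```
# List all replication sessions on the array; count the session IDs
# to see how close you are to the 64-session ceiling.
uemcli -d 192.0.2.100 -u admin -p 'MyPassword1!' /prot/rep/session show
```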

Continue reading