I am going to Tech Field Day 20! And I can wholeheartedly say that I am looking forward to it. If you exclude the brief TFDx with VMware NSX last year, this will be my first full-length, non-storage Field Day. For the last 10-ish years I’ve been focusing on storage, backup, data replication & disaster recovery. But I actually started out in the hypervisor corner of IT, and throughout my career I’ve been shoving my nose into other IT fields and product suites. It’s one of the advantages of working for a (back then) smaller VAR, and it keeps life interesting. Tech Field Day 20 will be the perfect opportunity to find out how outdated my knowledge is on those non-storage fronts.
Dell EMC uses Secure Remote Services (SRS, formerly known as ESRS) to enhance the tech support experience for their products. There are two sides to this support: connect home and connect in. Connect home is your device dialing back home to Dell EMC to report various things such as errors, automatic support uploads, etc. If any of these results in a Service Request at Dell EMC, an engineer can then use SRS to dial in (connect in) and have a look at the faulty system. The latter saves you from having to host a Webex session.
Dell EMC likes to have all Dell EMC systems connected to SRS, for two reasons. First, it reduces the time engineers spend troubleshooting an issue: if an engineer can dial in directly, without having to negotiate a Webex session with the customer, that means more SRs per engineer per day and lower support costs for Dell EMC. Second, it results in faster incident resolution, and thus a happier customer. The support engineer can look up the state of a defective drive independently and order new parts while the customer is sleeping. Win-win!
Last year we decommissioned a physical Avamar grid in London because it was out of support and the location was about to close down. However, the Avamar was still being used for desktop/laptop (dt/lt) backups. A separate project was taking care of replacing those laptops, but in the meantime we needed to keep the Avamar backup service running.
We did a quick calculation on the required capacity and deployed four new Avamar Virtual Edition systems in our central VMware farm. After configuring them and connecting them to the Avamar Enterprise Manager dashboard, we were able to move the majority of clients over.
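That quick capacity calculation can be sketched along these lines. To be clear, the client counts, per-client data sizes, change rates and dedup factor below are hypothetical placeholders for illustration, not the numbers we actually used:

```python
# Back-of-the-envelope Avamar sizing sketch (all numbers hypothetical).
# Avamar stores deduplicated data, so a rough steady-state footprint is the
# unique data per client times the client count, plus the daily change
# retained for the retention window, reduced by an assumed dedup factor.

def required_capacity_tb(clients, unique_gb_per_client, daily_change_gb,
                         retention_days, dedup_factor):
    """Rough steady-state capacity estimate in TB."""
    initial = clients * unique_gb_per_client
    incrementals = clients * daily_change_gb * retention_days / dedup_factor
    return (initial + incrementals) / 1024  # GB -> TB

# Example: 2,000 laptops, 20 GB unique each, 0.5 GB daily change,
# 30-day retention, 10:1 dedup on the incrementals (all assumed values).
estimate = required_capacity_tb(2000, 20, 0.5, 30, 10)
print(f"~{estimate:.1f} TB usable needed")
```

Split across four Avamar Virtual Editions, an estimate like this maps to an AVE size per system; the point is to size for steady state, not just the initial full.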
Now, almost a year later, many of those laptops have been replaced and are no longer backed up by these four new Avamars. This clearly shows in the utilization, as you can see: three out of four systems are less than 10% utilized. Since these Avamars claim a fair bit of resources from the VMware farm, I set out to consolidate the systems into the first virtual Avamar, thus reclaiming 75% of the resources.
I recently installed a new Data Domain DD6300. Part of the whole installation procedure is to run a DD OS upgrade to bring the system up to the target DD OS release. You can find the target releases over here. While running the upgrade to the target release, the Data Domain correctly rebooted as part of the upgrade. Logging back in, the system GUI kept throwing an “Upgrade in progress” popup, blocking everything else in view. There is also an alert that shows “DD OS Upgrade is in progress. The system will not be available for backup and restore operations. The alert will be cleared after the upgrade operation is complete.” Which I guess is NEVER when the upgrade is hung…
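When the GUI is wedged like this, the CLI is usually a better vantage point. The commands below are a sketch from memory; names and output vary per DD OS release, so verify them against the DD OS command reference for your version:

```
# SSH into the Data Domain as sysadmin, then:
system upgrade status     # is the upgrade still running, or hung?
alerts show current       # shows the active "upgrade in progress" alert
system show version       # confirms which DD OS release is actually active
```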
I’ve installed quite a few new Isilon clusters in 2019. All of them are generation 6 clusters (H400, H500, A200), using the very cool 4-nodes-in-a-chassis hardware. Common to all these systems is a 1GbE management port next to the two 10GbE ports. While Isilon uses in-band management, we typically use those UTP ports for management traffic: SRS, HTTP, etc. We assign those interfaces to subnet0:pool0 and make it a static SmartConnect pool. This assigns one IP address to each interface; if you do it right, these should be sequential.
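The one-IP-per-interface behavior of a static pool can be illustrated with a short sketch; the subnet, starting address, and node count below are made up for illustration:

```python
import ipaddress

# A static SmartConnect pool hands out exactly one IP per member interface.
# If the pool range starts at the first node's address, the mapping is
# simply sequential: node N gets base + (N - 1).

def static_pool_ips(first_ip: str, node_count: int) -> dict:
    """Map each node number to its sequential management IP."""
    base = ipaddress.ip_address(first_ip)
    return {n: str(base + (n - 1)) for n in range(1, node_count + 1)}

# Hypothetical 4-node Gen6 chassis with a pool starting at 10.0.0.11:
for node, ip in static_pool_ips("10.0.0.11", 4).items():
    print(f"node {node}: mgmt {ip}")
```

This is exactly why getting the pool range right matters: a sequential mapping lets you tell at a glance which IP belongs to which node.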
A recent addition to my install procedure is to create DNS A-records for those management ports. This makes it a bit more human-friendly to connect your browser or SSH client to a specific node. In line with the Isilon naming convention, I followed the -# suffix format: if the cluster is called cluster01, node 1 is cluster01-1, node 2 is cluster01-2, etc. However, it turns out this messes up your SyncIQ replication behavior!
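For reference, in BIND zone-file terms the A-records described above would look roughly like this; the zone, names and addresses are illustrative, matching the sequential static-pool example:

```
; forward zone entries for a hypothetical 4-node cluster "cluster01"
cluster01-1    IN A    10.0.0.11
cluster01-2    IN A    10.0.0.12
cluster01-3    IN A    10.0.0.13
cluster01-4    IN A    10.0.0.14
```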