Replication

12 posts

Isilon SyncIQ uses incorrect interface: careful with mgmt DNS records!

I’ve installed quite a few new Isilon clusters in 2019. All of them are generation 6 clusters (H400, H500, A200), using the very cool 4-nodes-in-a-chassis hardware. Common to all these systems is a 1GbE management port next to the two 10GbE ports. While Isilon uses in-band management, we typically use those UTP ports for management traffic: SRS, HTTP, etc. We assign those interfaces to subnet0:pool0 and make it a static SmartConnect pool. This assigns one IP address to each interface; if you do it right, these should be sequential.

A recent addition to my install procedure is to create DNS A-records for those management ports. This makes it a bit more human-friendly to connect your browser or SSH client to a specific node. In line with the Isilon naming convention, I followed the -# suffix format: if the cluster is called cluster01, node 1 is cluster01-1, node 2 is cluster01-2, and so on. However, it turns out this messes up your SyncIQ replication behavior!
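
As an illustration of the naming and addressing scheme, here is a rough sketch of how the per-node management A-records could be generated. The cluster name, domain, starting IP, and node count are all hypothetical; this is not part of any Isilon tooling.

```python
import ipaddress

def management_a_records(cluster: str, domain: str, first_ip: str, nodes: int) -> list[str]:
    """Build A-record lines for the per-node management interfaces.

    Follows the "-#" suffix convention: cluster01-1, cluster01-2, ...
    Assumes the static SmartConnect pool handed out sequential addresses.
    """
    base = ipaddress.ip_address(first_ip)
    return [
        f"{cluster}-{n}.{domain}.  IN  A  {base + (n - 1)}"
        for n in range(1, nodes + 1)
    ]

# Hypothetical 4-node chassis with management IPs starting at 192.0.2.11
for record in management_a_records("cluster01", "example.com", "192.0.2.11", 4):
    print(record)
```
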

Continue reading

PSA: Unity VMware VMFS replication limit hit at 64 sessions

Our company recently replaced a lot of VNX storage with new Dell EMC Unity all-flash arrays. Since we are/were primarily a VMware hypervisor house, we decided to go ahead and create the new LUNs as VMware VMFS (Block) LUNs/datastores. This, however, resulted in us hitting a weird and unexpected replication limit at 64 sessions.
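
As a back-of-the-envelope illustration, this is the kind of pre-migration sanity check that would have flagged the problem up front. The 64-session figure is the limit mentioned above; the datastore count is made up.

```python
# Hypothetical pre-migration check against a per-type replication session limit.
VMFS_REPLICATION_SESSION_LIMIT = 64   # the limit we ran into for VMware VMFS (Block) LUNs
planned_vmfs_datastores = 80          # made-up number of datastores to migrate off the VNX

if planned_vmfs_datastores > VMFS_REPLICATION_SESSION_LIMIT:
    overflow = planned_vmfs_datastores - VMFS_REPLICATION_SESSION_LIMIT
    print(f"Warning: {overflow} datastores exceed the {VMFS_REPLICATION_SESSION_LIMIT}-session "
          "replication limit; rethink the LUN type or replication design before provisioning.")
```
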

Continue reading

Zerto facilitates IT resiliency with a single VM replication platform

Back at Storage Field Day 16 in Boston, Zerto presented their VM replication software. It’s block-level, continuous, hypervisor-based replication, using a journal to log I/O in a VCR-like fashion. This enables you to rewind to any point in time that’s covered in the journal and recover your VMs to that exact state. Zerto’s plans are a bit grander than “just VM Replication” though… they aim to cover the complete IT Resiliency market.
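
To make the VCR analogy concrete, here is a minimal toy model of a write journal that can be replayed up to a chosen point in time. It is only an illustrative sketch, not Zerto’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float   # seconds since replication of the protected VM started
    block: int         # block address the write touched
    data: bytes        # new contents of that block

def rewind(journal: list[JournalEntry], point_in_time: float) -> dict[int, bytes]:
    """Replay the journal up to (and including) point_in_time.

    Returns the block -> data state of the recovered disk at that moment,
    i.e. the 'rewind the tape to here' operation in the VCR analogy.
    """
    state: dict[int, bytes] = {}
    for entry in sorted(journal, key=lambda e: e.timestamp):
        if entry.timestamp > point_in_time:
            break
        state[entry.block] = entry.data   # later writes overwrite earlier ones
    return state

# Hypothetical journal: three writes to two blocks
journal = [
    JournalEntry(1.0, block=7, data=b"v1"),
    JournalEntry(2.5, block=7, data=b"v2"),
    JournalEntry(3.0, block=9, data=b"a"),
]
print(rewind(journal, point_in_time=2.6))   # {7: b'v2'}
```
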

Continue reading

RecoverPoint Connect Cluster wizard: Internal Error, contact support

Today I visited a customer to connect two RecoverPoint clusters. One RecoverPoint cluster is connected to a Unity array, the other to a VNX. After installing both clusters, we ran the RecoverPoint Connect Cluster wizard and were greeted with an “Internal Error. Contact support” error message. Awesome! Fortunately it turned out to be a pretty basic error which was easy to fix. A short story about RecoverPoint installation types in mixed-array configurations…

Continue reading

Deploying RecoverPoint – Part 3 – Deployment Manager

Configuration of the deployed vRPAs is performed with the RecoverPoint Deployment Manager. This is a tool on your laptop that, using a multi-step process, assigns IP addresses to the RecoverPoint appliances and their various networks and connects these appliances to the VNX array. The previous part of this series discussed the first steps to get into the tool: now it’s time to start entering some configuration data.
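
To give a feel for the kind of data the wizard collects, here is a hypothetical sketch of the per-appliance network configuration. Every name and address below is made up for illustration; the real tool uses its own input forms.

```python
# Hypothetical layout of the data entered in the Deployment Manager wizard.
cluster_config = {
    "cluster_name": "RPA-Cluster-A",
    "array_target": {"spa": "10.0.10.10", "spb": "10.0.10.11"},   # made-up VNX SP addresses
    "appliances": [
        {
            "name": "vRPA1",
            "lan_ip": "10.0.20.21",                       # management network
            "wan_ip": "10.0.30.21",                       # replication link to the remote cluster
            "data_ips": ["10.0.40.21", "10.0.41.21"],     # data paths to the array
        },
        {
            "name": "vRPA2",
            "lan_ip": "10.0.20.22",
            "wan_ip": "10.0.30.22",
            "data_ips": ["10.0.40.22", "10.0.41.22"],
        },
    ],
}

def sanity_check(config: dict) -> None:
    """Catch the classic typo: two appliances given the same IP address."""
    ips = [a["lan_ip"] for a in config["appliances"]]
    ips += [a["wan_ip"] for a in config["appliances"]]
    for a in config["appliances"]:
        ips += a["data_ips"]
    assert len(ips) == len(set(ips)), "duplicate IP address in the wizard input"

sanity_check(cluster_config)
```
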

Continue reading

Deploying RecoverPoint – Part 1 – Intro & Gotchas

It has been a while since I’ve written anything about RecoverPoint; my original post on RecoverPoint 4.0 dates back several years. To my delight I was recently put on a couple of projects to deploy new virtual RecoverPoint clusters for two customers. Several things had changed since the first appearance of the virtual RecoverPoint Appliance (RPA), so why not write a small series on the deployment of these appliances? Gotchas included!

Continue reading

vSphere Replication – Pros and Cons

For the last couple of months I’ve been busy consolidating a couple of European data centers into one location in the Netherlands. Technically, this meant we had to migrate a large number of virtual machines with as little downtime as possible across WAN links of varying speeds (30 Mbit up to 500 Mbit). There are a number of methods to go about this, but we chose to use the vSphere Replication infrastructure that is included in vSphere 5.x for free. Unfortunately there are a couple of downsides in the management interface which become a pain if you have to manage several hundred replications…
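
As a rough, hypothetical sizing example, this is the arithmetic behind planning the initial full sync over those WAN links. The dataset size and efficiency factor are made up; only the link speeds come from the paragraph above.

```python
def full_sync_hours(total_gb: float, link_mbit: float, efficiency: float = 0.7) -> float:
    """Estimate hours to seed total_gb of VM data over a WAN link.

    efficiency is a made-up fudge factor for protocol overhead and the
    link not being dedicated to replication traffic.
    """
    bits = total_gb * 8 * 1000**3                      # decimal gigabytes to bits
    seconds = bits / (link_mbit * 1000**2 * efficiency)
    return seconds / 3600

# Hypothetical workload: 5 TB of virtual machines to move
for mbit in (30, 500):
    print(f"{mbit} Mbit link: ~{full_sync_hours(5000, mbit):.0f} hours")
```
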

Continue reading

Rebalance VNX storage paths without downtime

Recently I ran into an environment with a couple of VNX5700 systems that were attached to the front-end SAN switches with only two ports per storage processor. The customer was complaining: performance was OK most of the time, but at certain times of the day it was noticeably lower. Analysis revealed that the back-end was coping well with the workload (30-50% load on the disks and storage processors). The front-end ports, however, were a bit (over)loaded and spewing QFULL errors. Time to cable in some extra ports and rebalance the existing hosts over the new storage paths!

[Image: Analyzer Helper front-end ports]
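
A minimal sketch of the balancing idea, not the exact procedure from the post: spread the existing hosts round-robin over the old and newly cabled front-end ports so no single port carries a disproportionate share. Host and port names are hypothetical.

```python
from collections import defaultdict
from itertools import cycle

def rebalance(hosts: list[str], fe_ports: list[str]) -> dict[str, list[str]]:
    """Assign hosts round-robin over the available front-end ports.

    In reality each host gets paths to ports on both storage processors and
    is moved without downtime; this only shows the distribution idea.
    """
    assignment: dict[str, list[str]] = defaultdict(list)
    port_cycle = cycle(fe_ports)
    for host in sorted(hosts):
        assignment[next(port_cycle)].append(host)
    return dict(assignment)

# Hypothetical: 8 hosts, two original ports plus two newly cabled ones
hosts = [f"esx{i:02d}" for i in range(1, 9)]
ports = ["SPA-0", "SPB-0", "SPA-1", "SPB-1"]   # SPA-1/SPB-1 are the new ports
for port, assigned in rebalance(hosts, ports).items():
    print(port, assigned)
```
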

Continue reading