Coho Data was first to present at Storage Field Day 8 last October, with Andy Warfield (co-founder and CTO) running the entire presentation start to finish. We knew what to expect: last year Andy also presented at SFD6 and cooked the brains of half the delegates. So this time we came prepared, with ample coffee and not too much breakfast in our stomachs. And we weren’t disappointed: Andy gave us a crystal clear company mission (NOT focused on the ever-so-hyped “disruption”, but instead on transformation), backed with plenty of shiny tech and intelligence inside the Coho array. So what is Coho Data trying to do, and how?
Increasing the storage capacity of a Data Domain is usually a matter of adding an additional disk shelf to the Data Domain head. Upon connecting the additional enclosure, your new disks will be in an Unknown state. This does not necessarily mean there’s anything wrong with the topology of the system; it just indicates the disks aren’t in use yet. This is easily fixed with a couple of CLI commands.
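As a rough sketch of the sequence on the DD OS CLI (the enclosure ID is an example — check what your system actually reports):

```shell
# Verify the new disks are visible and currently show as Unknown
disk show state

# Confirm which enclosure ID the new shelf received
enclosure show summary

# Add the new enclosure's disks to the active storage tier
# (enclosure 2 is a placeholder for your actual shelf ID)
storage add enclosure 2

# Grow the file system onto the newly added capacity
filesys expand
```

After the expand completes, `filesys show space` should reflect the additional capacity.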
Typing this at Schiphol airport, I can’t help feeling slightly tired. The alarm clock started blaring at 03:00 this morning and with that, my trip to Storage Field Day 8 officially started. With the first leg of the journey from DUS to AMS already behind me, I can now settle in for the long haul to SFO. It’s bound to be an incredibly exciting week once again and I’m looking forward to meeting up with a lot of familiar faces, two new members of the Tech Field Day family and 10 super exciting companies. Stroopwafels incoming!
Last week I implemented two new Data Domain systems for a new customer who wants to use them as backup targets for their existing Veeam 8 environment. Backups will be replicated to the secondary system to guarantee recoverability even if the first system or data center suffers a catastrophic failure. In this case replication is handled by the Data Domain systems themselves, but you’d still like your backup software to be aware of the replicas at the secondary location. This in turn means Veeam should be able to read from the replica, which turned out to be a bit of a configuration challenge. Bring out the CLI!
After you’ve built a new storage environment you will probably want to monitor it and/or integrate the equipment into existing monitoring tools. SNMP is one of the protocols to use for this, but for some reason I always forget how to do a Cisco NX-OS SNMP v3 configuration. There’s a big difference in security between SNMP v2c and v3, and they’re configured quite differently: SNMPv2c uses community strings, while SNMPv3 builds on the user accounts in the switch. This post will show you how to configure SNMP v3 in the DCNM SAN GUI and on the Cisco MDS NX-OS CLI.
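To give a flavour of the CLI side: a minimal NX-OS SNMPv3 setup boils down to creating a user with authentication and privacy passphrases and pointing the switch at your monitoring station. The username, passphrases and IP address below are examples, not values from this environment:

```shell
# Create an SNMPv3 user with SHA authentication and AES-128 privacy
# ("monitor" and both passphrases are placeholders)
snmp-server user monitor auth sha MyAuthPass123 priv aes-128 MyPrivPass123

# Send SNMPv3 traps to the monitoring station at security level "priv",
# i.e. authenticated and encrypted (192.0.2.10 is an example address)
snmp-server host 192.0.2.10 traps version 3 priv monitor
```

Your monitoring tool then polls the switch with the same user, auth and priv settings.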
Woohoo: I will be attending Storage Field Day 8! The 8th edition will be held from October 20th through the 23rd in the same location as always: Silicon Valley, CA (USA, Earth, etc.). I’m very excited to return: Storage Field Day 7 was a big success and this time around it shouldn’t be any different. The line-up of presenting companies is impressive: I’ve met some of them at previous Storage Field Days, and there are a couple of new names that I’ve really been looking forward to meeting.
Isilon scale-out NAS systems lean heavily on DNS. DNS plays a vital role in the way clients access files on the Isilon cluster: clients are distributed across nodes and network interfaces based on what your DNS reports back to them. This is all handled by the SmartConnect component in the Isilon, which is configurable to a certain extent. In this post I’ll explain what SmartConnect does and share a tip to speed up your Isilon DR procedure.
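The usual way to hook SmartConnect into DNS is a zone delegation: your DNS server delegates the SmartConnect zone to the cluster’s SmartConnect Service IP, so the cluster itself answers the lookups and hands out a different node IP per query. A sketch of that delegation in a BIND-style zone file (all names and addresses below are examples):

```
; Delegate the SmartConnect zone to the Isilon cluster
; (zone name, host name and address are placeholders)
isilon.example.com.     IN  NS  sip.isilon.example.com.
sip.isilon.example.com. IN  A   192.0.2.20
```

Clients then mount `isilon.example.com` and each new DNS lookup can land on a different node interface, which is what spreads the load.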
Recently I ran a project for a new EMC Symmetrix VMAX 10k installation. The install was a breeze and the migration of data to the system fairly straightforward. The customer saw some good performance improvements on the storage front and there is plenty of capacity in the system to cater for immediate growth. Yet when I opened the Unisphere for VMAX interface and browsed to the performance tab, my heart skipped a beat. What are those red queue depth utilization bars? We were seeing good response times, weren’t we? Were we at risk? How about scalability? Let’s dig deeper and find out.
For the last couple of months I’ve been busy consolidating a couple of European data centers into one location in The Netherlands. Technically this meant migrating a large number of virtual machines with as little downtime as possible across WAN links of varying speeds (30 Mbit up to 500 Mbit). There are a number of ways to go about this, but we chose the vSphere Replication infrastructure, which is included in vSphere 5.x for free. Unfortunately there are a couple of downsides to the management interface which become a pain if you have to manage several hundred replications…
Maxta was the last presenter at Storage Field Day 7 and spoke about their hyper-convergence product. Hyper-convergence… what’s that all about again? Simplicity, mostly: combine your compute and storage in one single unit and manage it through a single integrated user interface. Maxta offers their product in two form factors: the software-only MxSP package, or the MaxDeploy appliance, which is hardware and software combined and preconfigured. Maxta adds three additional key values to hyper-convergence: Choice, Scalability and Cost. The former two usually don’t spring to mind when you hear about hyper-convergence products: the simplicity you seek comes with a trade-off. Let’s dive in!