Need more speed? Hello XtremIO!

The storage market has gradually been using more and more flash to increase speed and lower cost for high-I/O workloads. Think FAST VP or FAST Cache in a VNX, or SSDs in an Isilon for metadata acceleration. A little bit of flash goes a long way. But as soon as you need enormous amounts of flash, you start running into problems. The “traditional” systems were designed in an era where flash wasn’t around or was extremely expensive, and thus simply weren’t designed to cope with the huge throughput that flash can deliver. As a result, if you add too much flash to a system, components that previously (with mechanical disks) never were a bottleneck now start to clog up your system. To accommodate this increased use of flash drives, the VNX was recently redesigned and now uses MCx to remove the CPU bottleneck. But what if you need even more performance at low latency? Enter EMC XtremIO, officially GA as of November 14th 2013!

The hardware

XtremIO is a 100% all-flash, scale-out, enterprise storage array. It was designed from the ground up to fully utilize flash for maximum performance and consistently low latency (we’ll get into the consistent part later). An XtremIO array consists of building blocks (or X-Bricks) in a clustered set-up. A single X-Brick consists of:

  • One 2U Disk Array Enclosure (DAE), containing:
    • 25 eMLC SSDs
    • Two redundant power supply units (PSUs)
    • Two redundant SAS interconnect modules
  • One Battery Backup Unit
  • Two 1U Storage Controllers (redundant storage processors). Each Storage Controller includes:
    • Two redundant power supply units (PSUs)
    • Two 8Gb/s Fibre Channel (FC) ports
    • Two 10GbE iSCSI ports
    • Two 40Gb/s InfiniBand ports
    • One 1Gb/s management/IPMI port
  • NO proprietary hardware! Since commodity hardware is used, this speeds up the hardware update cycle.

XtremIO X-Brick

One X-Brick delivers approximately 250,000 IOps for fully random 4KB reads, or 150,000 IOps for a fully random, 4KB, 50/50 read/write workload. An X-Brick has 10TB of physical capacity, which translates to 7.3TB of usable storage; the rest is used for internal housekeeping. Mind you, that’s without deduplication, which we’ll cover later. LUNs are active/active (no ALUA needed) and response time is less than 1 millisecond.
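
To put those figures in perspective, here’s a quick back-of-the-envelope calculation (a rough sketch using only the numbers quoted above; real-world throughput will of course vary with I/O mix and queue depth):

```python
# Rough throughput estimate for a single X-Brick, assuming 4KB I/Os.
read_iops = 250_000    # fully random 4KB reads
mixed_iops = 150_000   # fully random 4KB, 50/50 read/write
block_size = 4 * 1024  # bytes per I/O

print(f"Read throughput:  ~{read_iops * block_size / 1e9:.1f} GB/s")   # ~1.0 GB/s
print(f"Mixed throughput: ~{mixed_iops * block_size / 1e9:.1f} GB/s")  # ~0.6 GB/s
```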

About that performance and response time: XtremIO prides itself on consistent, predictable performance. If you’ve used SSDs before, you might be aware of the write performance penalty once a drive fills up. Most drives and arrays employ some sort of garbage collection to physically erase deleted data and free up the space. This free space is then consolidated, creating new full stripes to write to. The downside is that garbage collection generates a lot of extra read and write I/O on your array, which results in a performance drop. Since the XtremIO system can handle partial stripe writes and thus doesn’t need a garbage collection process, performance is consistent and the same for a new, empty array and a used, 97% full array. And in an enterprise array, you want consistent behaviour!
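
To make the difference concrete, here’s a toy model of the back-end I/O involved (my own simplified illustration, not XtremIO’s actual algorithm): an array that relies on garbage collection has to read the still-live blocks out of partially-used stripes and rewrite them elsewhere before it can free those stripes, whereas an array that can write partial stripes just writes the new data.

```python
# Toy model: back-end I/O cost of freeing space via garbage collection versus
# writing into partial stripes. Numbers are made up for illustration only.
def gc_io(stripes_reclaimed, live_blocks_per_stripe):
    """Garbage collection: live blocks are read and rewritten to consolidate free space."""
    moved = stripes_reclaimed * live_blocks_per_stripe
    return {"extra_reads": moved, "extra_writes": moved}

def partial_stripe_io():
    """Partial stripe writes: no background consolidation needed."""
    return {"extra_reads": 0, "extra_writes": 0}

# Reclaiming 100 half-full stripes of 24 blocks each:
print(gc_io(100, 12))        # {'extra_reads': 1200, 'extra_writes': 1200}
print(partial_stripe_io())   # {'extra_reads': 0, 'extra_writes': 0}
```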

Not enough? I mentioned scale-out, right? Just add another X-Brick! They’re clustered together using redundant 40Gb/s (QDR) InfiniBand switches. InfiniBand? Yeah, that’s the technology that transfers a 4KB block between X-Bricks in 7 microseconds. Yes, you read that right: microseconds, not milliseconds! InfiniBand is fast…

Currently you can cluster up to four X-Bricks in an XtremIO cluster. This limit is imposed for two reasons: bigger configs simply haven’t been fully tested yet and (allegedly) there isn’t any real commercial demand for >1 MILLION IOps yet! But that’s bound to change, and rumors about eight X-Brick clusters are circulating already…
Capacity-wise, January 2014 will see 20TB X-Bricks being announced, which stretches the current four-Brick clusters to 80TB of physical capacity. No, we can’t mix different capacity X-Bricks in the same XtremIO cluster just yet.
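
A quick bit of math on what a full four-Brick cluster adds up to (a sketch assuming near-linear scaling, the per-Brick IOps quoted earlier and the 20TB X-Brick expected in January 2014):

```python
# Back-of-the-envelope scaling for a four-X-Brick cluster (assumes near-linear scaling).
bricks = 4
read_iops_per_brick = 250_000   # fully random 4KB reads
capacity_tb_per_brick = 20      # the 20TB X-Brick expected in January 2014

print(f"~{bricks * read_iops_per_brick:,} random 4KB read IOps")  # ~1,000,000 IOps
print(f"{bricks * capacity_tb_per_brick} TB physical capacity")   # 80 TB

# And the interconnect: a 4KB block crosses the InfiniBand fabric in ~7 microseconds,
# so even a single in-flight transfer moves data at roughly half a GB/s.
print(f"~{4096 / 7e-6 / 1e9:.2f} GB/s per in-flight 4KB transfer")  # ~0.59 GB/s
```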

You might think: “Hey, Isilon is a scale-out cluster too! And it uses Infiniband! Are these products related?” Nope! Isilon is scale-out NAS, XtremIO is scale-out block. No code is shared between products, and there are currently no plans to consolidate code or technology.

Data is protected with XDP (XtremIO Data Protection) instead of RAID and stored in 4KB blocks. Each block is assigned a hash or fingerprint (based on the SHA-1 algorithm), which is where the in-line deduplication feature comes in: write the same block again and only the metadata is updated, which is stored in the 256GB of RAM per X-Brick. Real-world examples should see deduplication ratios between 3:1 and 10:1, so your 7.3TB of usable physical capacity per X-Brick houses between roughly 22TB and 73TB of logical data!
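
As a rough illustration of the fingerprint idea (a minimal sketch of content-addressed deduplication in general, not XtremIO’s actual data structures), each 4KB block is hashed with SHA-1 and only stored if that fingerprint hasn’t been seen before; a repeat write just adds a metadata pointer:

```python
import hashlib

BLOCK_SIZE = 4 * 1024  # 4KB blocks

class DedupStore:
    """Minimal sketch of fingerprint-based (content-addressed) deduplication."""
    def __init__(self):
        self.blocks = {}    # fingerprint -> physical 4KB block
        self.metadata = {}  # (volume, lba) -> fingerprint (kept in RAM on the array)

    def write(self, volume, lba, block):
        assert len(block) == BLOCK_SIZE
        fingerprint = hashlib.sha1(block).hexdigest()
        is_new = fingerprint not in self.blocks
        if is_new:
            self.blocks[fingerprint] = block        # physical write
        self.metadata[(volume, lba)] = fingerprint  # metadata-only update otherwise
        return is_new

store = DedupStore()
block = b"\x42" * BLOCK_SIZE
print(store.write("vol1", 0, block))  # True  -> new data, physically stored
print(store.write("vol1", 1, block))  # False -> duplicate, metadata update only
print(len(store.blocks))              # 1 physical block backing 2 logical blocks
```

In this model, a 3:1 deduplication ratio simply means that on average three logical blocks point at every physical block, which is where the 22-73TB of logical data per 7.3TB of usable capacity comes from.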

XtremIO is fully integrated with VAAI: apply the above deduplication example to a VM clone operation and you can see the advantages!

XtremIO VAAI

Availability-wise, the XtremIO array ticks the necessary boxes: no single point of failure, a self-healing system with end-to-end data verification, and non-disruptive updates. Monitor and manage it via the GUI, CLI, REST API or vSphere plugin. And it plays nicely with existing EMC products such as VPLEX, PowerPath and ESRS.

Stay tuned for additional information on XDP!