VCE 101: What’s Next with Deduplication?


Welcome to the latest post in our series entitled ‘Virtualization Changes Everything’.

In our last post we discussed the CapEx savings provided by data deduplication of both production storage capacity and storage array cache. Deduplication is fundamentally changing the data center and it appears that there’s more on the horizon. I’d like to share with you what I call ‘Data Center Ethernet Acceleration’.

Overview

“Some call it a software mainframe, others call it cloud; it depends on when you were born.” – Steve Herrod, CTO, VMware

When discussing multiple applications on a single hardware platform, I prefer the term ‘software mainframe’. I think this is due more to my time working with VMware products than to my age. I still recall my introduction to this concept back in 2001.

[Image: VMware, circa 2001]

It won’t be too long, say within the next 18 to 24 months, before the purchase of a 64-core system is as common as the purchase of an 8-core system is today. When ultra-high CPU density is commonplace and is combined with ESX/ESXi, DRS, VMotion, HA, and FT, we will truly have a massive-scale software mainframe.

With this architecture how many VMs will we be able to host on a single server chassis? A conservative starting place is probably somewhere between 100 and 200.

With architectures this dense, efficiency becomes key to ensuring the ability to scale. Sure, we’ll have plenty of CPU and memory in the physical server, but the amount of data being processed (read) and generated (written) by such a large number of VMs running on a single physical server could be mind-numbing.

The obvious way to address the I/O requirements of a software mainframe would be to increase the physical network infrastructure (adapters, connections, and ports). I’m confident that we will have affordable 64-core servers before 40 GbE is commonplace. Case in point: 10 GbE is just getting a toehold in data centers today, and while 10 GbE can significantly reduce the number of network connections to a common 8- or 12-core system, I expect that the number of connections per server will continue to increase along with server density.

The not-so-obvious way to address these I/O requirements is through the advancement of more efficient data communication protocols, specifically Data Center Ethernet Acceleration in the form of deduplication-aware NFS.

Let’s Get into the Technical Details

As you can see in this image, we can significantly reduce storage consumption and optimize the effectiveness of array cache through deduplication. However, these efficiencies do not translate into network bandwidth savings.

[Image: Deduplication and intelligent caching]
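To put rough numbers to that gap, here’s a quick back-of-the-envelope sketch. The VM count, image size, and duplicate ratio below are assumptions of mine for illustration only, not measured figures:

```python
# A back-of-the-envelope sketch with assumed numbers (mine, not NetApp's):
# 100 VMs, each with a 20 GB OS image that is ~90% identical across VMs
# once deduplicated.

vm_count = 100
image_gb = 20
duplicate_fraction = 0.90   # assumed share of blocks common to every image

logical_gb = vm_count * image_gb
stored_gb = logical_gb * (1 - duplicate_fraction) + image_gb * duplicate_fraction

print(f"Logical capacity:            {logical_gb} GB")     # 2000 GB
print(f"Physical capacity (deduped): {stored_gb:.0f} GB")  # ~218 GB

# Today's storage protocols are dedupe-unaware, so if every VM reads its
# full image (say, during a boot storm) the network still carries the
# logical amount, not the deduplicated amount.
print(f"Data crossing the network:   {logical_gb} GB")
```

The capacity and cache savings are real, but every VM still reads its full logical image across the network, and that is exactly the gap a dedupe-aware protocol aims to close.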

Mike Eisler is leading NetApp’s efforts to add deduplication-aware communication mechanisms to the NFS spec. Mike is one of our Senior Technical Directors, and to say he is an expert on NFS is an understatement. Just a few of his accomplishments include co-authoring the NFSv4 spec (RFC 3530) and editing the NFSv4.1 (or pNFS) spec (the RFC number is yet to be assigned). There’s more on Mike at his site.

I am not an expert at the protocol level, but I will make every effort not to let Mike down (or embarrass myself) as I attempt to summarize the work he is coordinating.

This is Not a New Idea

I decided to coin the phrase ‘Data Center Ethernet Acceleration’ because the submitted enhancements to NFS mirror what is already available on the market today in terms of WAN acceleration technologies such as WAAS from Cisco or the Steelhead appliances from Riverbed.

At a high level, the current WAN acceleration technologies preserve bandwidth by identifying whether any block (or subcomponent) of the data being transferred already exists at the destination. If there are matches for these data blocks at the destination, then those blocks are not transmitted and metadata is sent in their place. The data is reconstructed in whole by copying the data blocks that already exist at the destination.
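If it helps to see the idea in miniature, here is a simplified sketch of fingerprint-based transfer. It is my own illustration of the general technique, not how WAAS or Steelhead are actually implemented; the fixed block size and SHA-256 fingerprints are assumptions:

```python
import hashlib

BLOCK = 4096  # assumed fixed block size; real products use finer-grained schemes

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def encode(data: bytes, receiver_store: dict) -> list:
    """Replace blocks the receiver already holds with their fingerprints."""
    stream = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = fingerprint(block)
        if fp in receiver_store:
            stream.append(("ref", fp))            # metadata only: the savings
        else:
            stream.append(("data", fp, block))    # block must travel the wire
            receiver_store[fp] = block            # receiver will now hold it
    return stream

def decode(stream: list, receiver_store: dict) -> bytes:
    """Rebuild the original data from local blocks plus any new ones."""
    out = bytearray()
    for item in stream:
        if item[0] == "ref":
            out += receiver_store[item[1]]
        else:
            _, fp, block = item
            receiver_store[fp] = block
            out += block
    return bytes(out)

# Example: a second transfer of identical data moves only metadata.
store = {}
payload = b"A" * BLOCK * 4
first = encode(payload, store)
second = encode(payload, store)
assert decode(second, store) == payload
print(sum(1 for s in second if s[0] == "ref"), "of", len(second),
      "blocks sent as references")
```

The first time a block crosses the wire it travels in full; every later occurrence is replaced by a small fingerprint, which is where the bandwidth savings come from.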

To my friends at both Cisco and Riverbed – if I have misrepresented your technologies please let me know and I will update this section.

This is the same type of bandwidth savings that is provided natively when replicating deduplicated data from one Data ONTAP powered storage controller to another.

Enabling WAN Acceleration inside the Data Center

Enabling dedupe-aware communications between a server and a storage array is a two-part process. Similar to the WAN acceleration analogy, the client needs block-level awareness of the remote file system, and the beginning of this process is available to us via Remote Direct Memory Access (RDMA).

I think I can safely state that the primary goal of adding RDMA capabilities to NFS was to place the overhead associated with NFS on par with that of Fibre Channel by allowing the NFS client to bypass the kernel and read from or write to the memory space of the storage array directly. This design in effect allows the client to treat the network file system as a local file system, without the semantics and operations required by a clustered file system.

Enabling File System Intelligence

This is where some of the most recent submissions to the IETF regarding adding dedupe awareness to the NFS protocol become very exciting. Now that we have enabled the hypervisor to treat the remote storage as local, we want to extend the virtualization intelligence and allow the client to leverage the storage efficiencies of a dedupe-aware cache, extending these savings onto the storage or data center network.

[Image: RDMA and dedupe-aware caching]

In this future architecture, the only I/O required to travel the network will be data blocks that are globally unique, and the dedupe benefits increase along with the density of redundant data.
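To make that behavior concrete, here is my conceptual sketch of a dedupe-aware read path. The object names and the fingerprint “recipe” exchange are hypothetical simplifications of mine, not the actual protocol mechanics being proposed to the IETF:

```python
import hashlib

BLOCK = 4096  # assumed block size for illustration

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

class ToyDedupeServer:
    """In-memory stand-in for a deduplicated filer (illustrative only)."""

    def __init__(self):
        self.files = {}    # path -> ordered list of fingerprints
        self.blocks = {}   # fingerprint -> block data (stored once)

    def write_file(self, path, data):
        fps = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            fp = fingerprint(block)
            self.blocks[fp] = block
            fps.append(fp)
        self.files[path] = fps

    def get_recipe(self, path):
        return self.files[path]        # metadata only

    def read_block(self, fp):
        return self.blocks[fp]         # actual data on the wire

class DedupeAwareClient:
    """Caches blocks by fingerprint and pulls only blocks it has never
    seen before; duplicate blocks never cross the network."""

    def __init__(self, server):
        self.server = server
        self.cache = {}                # fingerprint -> block data
        self.bytes_on_wire = 0

    def read_file(self, path):
        data = bytearray()
        for fp in self.server.get_recipe(path):
            if fp not in self.cache:
                block = self.server.read_block(fp)
                self.cache[fp] = block
                self.bytes_on_wire += len(block)
            data += self.cache[fp]
        return bytes(data)

# Two VMs with identical OS images: the second read moves no data at all.
server = ToyDedupeServer()
server.write_file("/vm1-flat.vmdk", b"OS" * 2 * BLOCK)
server.write_file("/vm2-flat.vmdk", b"OS" * 2 * BLOCK)

client = DedupeAwareClient(server)
client.read_file("/vm1-flat.vmdk")
print("vm1:", client.bytes_on_wire, "bytes on the wire")
client.read_file("/vm2-flat.vmdk")
print("total after vm2:", client.bytes_on_wire, "bytes on the wire")
```

Reading the second, largely identical VM image moves essentially no data, because the client already holds every block it needs.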

Now I should caution you: protocol enhancements don’t happen overnight, so while what I am sharing with you looks promising, it will be a few years before you are able to deploy this architecture. This should not prevent you from considering data deduplication technologies where they are available today.

Additional I/O Optimization Enhancements

In future minor and major releases of vSphere, VMware will be delivering I/O offload mechanisms as part of the vStorage APIs for Array Integration (VAAI). This capability will allow VAAI-integrated storage arrays, such as any running Data ONTAP, to manage the movement of data on the back end without consuming server and network resources. I highlight this technology as I believe it reinforces my original premise that efficiency is key to scaling. I will speak specifically about I/O offload in a future VCE post (check the syllabus). What is clear is that deduplication-aware storage protocols will be the next major advancement in the I/O efficiency model.
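As a rough illustration of why offload matters, consider the two data paths below. The array model and method names are hypothetical, purely to contrast host-based copying with array-side copying; they are not the VAAI primitives or the Data ONTAP API:

```python
MB = 1 << 20

class ToyArray:
    """Hypothetical array model; it only exists to show where the data moves."""

    def __init__(self):
        self.luns = {}   # name -> bytearray

    def read(self, lun, off, length):
        return bytes(self.luns[lun][off:off + length])

    def write(self, lun, off, data):
        self.luns[lun][off:off + len(data)] = data

    def offload_copy(self, src, dst, length):
        # Offloaded "full copy": the array moves the blocks internally,
        # so nothing crosses the host's adapters or the network.
        self.luns[dst][:length] = self.luns[src][:length]

def host_based_clone(array, src, dst, length, chunk=1 * MB):
    """Without offload, every byte is read to the host and written back."""
    bytes_on_wire = 0
    for off in range(0, length, chunk):
        data = array.read(src, off, min(chunk, length - off))
        array.write(dst, off, data)
        bytes_on_wire += 2 * len(data)   # once in, once back out
    return bytes_on_wire

# Cloning a (scaled-down) 64 MB template: ~128 MB crosses the network
# without offload, and none with it.
array = ToyArray()
array.luns["template"] = bytearray(64 * MB)
array.luns["new-vm"] = bytearray(64 * MB)

print(host_based_clone(array, "template", "new-vm", 64 * MB), "bytes on the wire")
array.offload_copy("template", "new-vm", 64 * MB)   # zero bytes on the wire
```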

In Closing

We already see the savings from reducing server footprints being spent on increasing storage capacity. When the day comes that servers with ultra-dense core counts are the norm, we will have to consider how to expand our effective network bandwidth. I ask: would you prefer to tackle this challenge as we have in the past, by purchasing more hardware, or would you prefer to address it via virtualization technologies like dedupe-aware storage protocols?

I vote for the latter.

I know this post was not my norm, as it focused on futures rather than what is available today. I hope you enjoyed it, and I look forward to your feedback. I would be remiss if I didn’t acknowledge the efforts of the many individuals within NetApp who, along with Mike Eisler, are making this technology a reality.

Remember Virtualization Changes Everything!

Vaughn Stewart