NFS, VMware, and Unintended Consequences!


When it comes to storage interconnects for VMware deployments I’m rather biased; unified multi-protocol storage platforms like NetApp FAS should be the ONLY consideration for any sizable installation. Unified multi-protocol platforms allow customers to receive the benefits of both NAS (NFS) and SAN (FC, FCoE, iSCSI) connectivity from a single array. Once you collapse your FC and Ethernet networks into a unified storage fabric (à la Cisco 10GbE), it would seem rather odd not to consider deploying a unified array platform as well.

It’s been well documented that for shared datastores and most applications, NFS delivers the greatest flexibility, scalability, and storage virtualization. The true value of NFS isn’t a technical argument but an operational one: NFS provides unbelievable simplicity when managing massive numbers of VMs. Think ‘set-it-and-forget-it.’

For applications that are required to run on a block-based storage protocol, whether because of block-based data management toolsets or out-of-date technical support statements, FC/FCoE/iSCSI fills the gap and completes the value of a unified storage platform.

I understand that those who have yet to experience VMware on NetApp NFS may be skeptical of my statements. That’s fair; as always, one should be skeptical of claims made by any vendor.

To help validate some of the points I’ve shared around the use of NFS, I’d like to introduce you to a recent post from Martin Glassborow, aka the StorageBod, entitled “NFS, VMware, and Unintended Consequences!” Martin is a serious data center architect and storage expert who knows his stuff.

Check out his short post and its comments section on running VMware on NFS and the unexpected results it delivered for Martin. BTW – I love this quote from the post: “(NFS) moved VMWare firmly and squarely into NetApp’s sweet spot.”


Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books, including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.



5 Comments

  1. Nice article. One question – I have an NFS data store with deduplication turned on and I have recently started receiving messages to the effect that my data store is over 500% deduplicated. I assume that this is meant to warn me that should I decide to deduplicate I might experience a shortage of storage – is this correct? Are there any downsides to this degree of deduplication?

  2. Jeff,
    First, let me say nice job on your deduplication rate!
    This “over-deduplicated” message is a threshold recently added to Operations Manager. There are two reasons NetApp informs you of this. The first is that un-deduplicating your storage would now require more physical capacity than you have (I’ve yet to see a VMware/NetApp customer want to do this); a rough sketch of that capacity math follows these comments. The second is if you use SnapVault for D2D backups. SnapVault does not transfer data in its deduplicated state the way SnapMirror does (though the data is automatically deduplicated again once it lands on the secondary storage), so you would just need to think through the ramifications for your D2D backups.

  3. This is a question I get from customers periodically (I’m a VAR engineer). Basically it’s just a “heads up” message… consider it confirmation that dedup is doing what you want it to.
    What I generally recommend to customers is that as you reach higher and higher dedup ratios, leave more free space in the datastore — you’re saving so much space there’s no need to skimp.
    Generally speaking, on larger NFS datastores (500 GB and above) I recommend leaving at least 20% free, if not 25%, as you start to see better dedup and/or thin provisioning savings.

  4. Thanks for the comments – I just noticed in my initial post I should have said “should I decide to de-deduplicate”. Or maybe just “duplicate” – sounds silly when you put it that way…unless you have no choice I suppose.
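To put rough numbers behind the “over-deduplicated” discussion above, here is a minimal back-of-the-envelope sketch in Python. All of the capacities and the savings percentage are hypothetical illustration values (not figures from Jeff’s environment or from any NetApp tool): the point is simply that once dedup savings get large, rehydrating the data would require more physical capacity than the volume actually has, which is why the warning exists and why the 20–25% free-space headroom suggested above is cheap insurance.

    # Hypothetical example values, in GiB -- not measurements from a real system.
    physical_capacity = 1000.0   # usable size of the volume backing the NFS datastore
    physical_used     = 400.0    # blocks actually consumed after deduplication
    dedup_savings_pct = 80.0     # 80% savings is roughly a 5:1 ("500%") efficiency ratio

    # Logical data the VMs believe they have written (the pre-dedup footprint).
    logical_data = physical_used / (1.0 - dedup_savings_pct / 100.0)

    # Un-deduplicating would mean storing the full logical footprint again.
    shortfall = logical_data - physical_capacity
    print(f"Logical data stored:       {logical_data:,.0f} GiB")
    print(f"Physical capacity on hand: {physical_capacity:,.0f} GiB")
    if shortfall > 0:
        print(f"Rehydrating would need {shortfall:,.0f} GiB more than the volume provides.")

    # The 20-25% free-space guideline from the comments, applied to this example volume.
    print(f"Suggested free-space headroom (20%): {0.20 * physical_capacity:,.0f} GiB")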

