VMware VVols: It’s time to revisit the SAN and NAS debate


Which is better with VMware, SAN or NAS storage? This simple question sparked numerous debates throughout the IT industry during the formative and hyper-growth years of VMware. The conversation spanned performance concerns, dove into operational trade-offs and, more importantly, defined a new set of capabilities and requirements which have led us to VVols. For me this was a magical time, one that, in hindsight, clearly shows how unprepared we all were for the conversation, adoption, disruption and innovation that resulted. As with every topic, time has moved the conversation out of the limelight; however, with the upcoming release of VMware VVols the time seems right to revisit the SAN versus NAS debate.

Are you ready for round two?

The perspective of an insider

Before we dive into SAN and NAS with VVols, I think it's beneficial to ensure we are on the same page. I'd like to offer the following definitions for our discussion.

SAN: SCSI-based block storage protocols used with VMware in the form of VMFS datastores and VM-direct-accessed RDMs. Protocols include Fibre Channel (FC), iSCSI and Fibre Channel over Ethernet (FCoE).

NAS: File-access storage protocols used with VMware in the form of NFS datastores and VM-direct-accessed Linux mount points and Windows file shares. Protocols include Network File System (NFS) and Server Message Block (SMB, formerly known as the Common Internet File System or CIFS).

I classify the SAN vs NAS debate into three eras…

2006 – 2010: VMware releases VI3 and introduces the Ethernet storage options NFS and iSCSI alongside the existing support for Fibre Channel (FC).

At the time most viewed FC as enterprise class, iSCSI as a new low-cost alternative and frankly ignored NFS. What we as an industry learned was that network file systems were performant, more capable and ideal for managing tens to thousands of VMs compared with clustered file systems. NFS provided VM-granular functionality and was free of challenges like shallow per-LUN I/O queues, SCSI reservation contention and file-locking overhead.

During this era the edge clearly went to NFS; it originated what would eventually become the VVols initiative and is at the core of many new storage platforms, including Tintri, Nutanix and SimpliVity.

2010 – 2014: VMware releases vSphere 4.1 and vStorage APIs for Array Integration (VAAI).

The masses fawned over 'Full Copy', the copy-offload mechanism, but the largest benefit was found in the SCSI Atomic Test and Set (ATS) API, which replaced SCSI reservations with optimistic, fine-grained on-disk locking and in turn reduced metadata lock contention. During this time many SAN vendors introduced large I/O queues on their target adapters, providing a dynamic pool of I/O per datastore. Together these enhancements removed many of the hidden I/O bottlenecks that had frustrated and limited the scale of SAN deployments.
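
To make the locking change concrete, here is a minimal, hypothetical sketch in plain Python (not VMware's actual on-disk format) contrasting a whole-LUN reservation with an ATS-style compare-and-swap on a single metadata record:

```python
import threading

# Hypothetical shared metadata region of a datastore: one record per on-disk lock.
records = {i: {"version": 0, "owner": None} for i in range(256)}

lun_reservation = threading.Lock()   # pre-ATS: one reservation covers the whole LUN
command_latch = threading.Lock()     # models the atomicity of a single SCSI command

def update_with_reservation(host: str, rec: int) -> None:
    """Pre-VAAI behaviour: reserve the entire LUN to update one record,
    stalling every other host's I/O to the datastore in the meantime."""
    with lun_reservation:
        records[rec] = {"version": records[rec]["version"] + 1, "owner": host}

def update_with_ats(host: str, rec: int) -> None:
    """ATS-style behaviour: an atomic compare-and-swap on a single record.
    Hosts touching different records never contend; a lost race on the
    same record is simply re-read and retried."""
    while True:
        expected = records[rec]["version"]
        with command_latch:           # only this one compare-and-swap is atomic
            if records[rec]["version"] == expected:
                records[rec] = {"version": expected + 1, "owner": host}
                return
```

The point of the sketch is scope: the reservation serializes every host behind one lock, while the atomic test-and-set only retries when two hosts race for the same record.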

During this time we saw 'protocol parity' emerge: while SAN and NAS became peers in terms of I/O scaling, trade-offs and advantages still existed. NAS environments were still simpler to manage and saw integrations at higher levels of the VMware stack (e.g. View and vCloud Director). By contrast, SAN supported more applications and application deployment models while also appearing to receive greater engineering investment from VMware. In summary, VAAI quieted much of the SAN versus NAS debate.

2014 – onward: VMware releases the beta of VVols, with GA likely within a year.

The architecture provides functional parity for SAN and NAS along with a framework for storage vendors to expose their unique capabilities and innovations. VVols will deliver much more than VM-granular capabilities; they carry the potential to end the fragmented data management we've adopted over the past eight years and unify storage operations from application to infrastructure, through backup and compliance.

Vendor-specific capabilities are outside the scope of this post; however, we can discuss some capabilities that you should know about when considering SAN or NAS with VVols.

5 Reasons why I believe you should consider SAN over NAS for VVols

  1. T10 UNMAP is an INCITS standard that reduces storage requirements by enabling data deleted in a VM to be automatically released on the storage array. UNMAP is available on a per-VM basis with Hyper-V today and provides significant storage savings over the limited UNMAP capabilities in VMFS (see the first sketch after this list).
  2. T10 DIF is another INCITS standard; it ensures end-to-end data integrity from application to array, reducing the need for, and the system load of, the retroactive data validation today's storage arrays perform. DIF is designed to eliminate little-known yet real-world issues such as misdirected writes, writes of incorrect data and over-the-wire corruption (see the second sketch after this list).
  3. Greater support for applications. Some applications and application deployment scenarios either aren't certified for or simply cannot operate over NFS. While I question some of these limitations, like the lack of NFS support with Microsoft Exchange Server, others appear as though they will always exist with NFS. Microsoft Cluster Service (MSCS) is one example, as it requires SCSI pass-through and SCSI-3 reservation support to deploy. While this capability is not in the current VVols beta, one should expect VMware to provide it in a future release and in turn retire the need for RDMs.
  4. Enhanced storage bandwidth capabilities. This is an area where SAN has been and will continue to be more capable than NAS. The VVols beta currently does not include NFS v4.1 with pNFS, which is required for link aggregation. As such, each VVol will be limited to a single link for I/O with NFS, whereas SAN can use multiple links via VMware's native or third-party multipathing software.
  5. Enhanced storage I/O capabilities. Like the previous item, this is an area where SAN has been and will continue to be more capable than NAS. Because the I/O to an NFS VVol is limited to a single Ethernet link, any congestion cannot be addressed by VMware's native or third-party multipathing software. Addressing the congestion may require a Storage vMotion operation, which itself adds load to the link, exacerbating the problem before the VM's I/O can be shifted to another link (see the final sketch below).
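
On the first point, here is a minimal sketch (hypothetical names, plain Python) of why end-to-end UNMAP matters to a thin-provisioned array: a delete inside the guest frees space in the guest file system, but the array only returns capacity to its free pool when it is told which LBAs are no longer in use.

```python
class ThinVolume:
    """Hypothetical thin-provisioned volume: blocks are backed on first write
    and only released when an UNMAP names the affected LBAs."""

    def __init__(self) -> None:
        self.allocated = set()

    def write(self, lbas: range) -> None:
        self.allocated.update(lbas)             # first write allocates backing blocks

    def unmap(self, lbas: range) -> None:
        self.allocated.difference_update(lbas)  # T10 UNMAP releases them

vol = ThinVolume()
vol.write(range(0, 1000))        # a guest writes a 1,000-block file
# ...the guest deletes the file. Without UNMAP the array still backs every block:
assert len(vol.allocated) == 1000               # capacity stranded on the array
vol.unmap(range(0, 1000))        # end-to-end UNMAP, from guest through to the array
assert len(vol.allocated) == 0                  # capacity back in the free pool
```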
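
On the second point, a simplified illustration (not a driver implementation) of the protection information DIF appends to each 512-byte sector: a CRC guard tag catches in-flight corruption, and a reference tag derived from the LBA catches misdirected reads and writes.

```python
def crc16_t10dif(data: bytes) -> int:
    """CRC-16 guard tag using the T10-DIF polynomial (0x8BB7), bitwise for clarity."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def protect(sector: bytes, lba: int) -> dict:
    """Build the protection information carried alongside a sector."""
    return {"guard": crc16_t10dif(sector),   # detects bit-level corruption
            "app_tag": 0,                    # application/owner defined
            "ref_tag": lba & 0xFFFFFFFF}     # ties the data to its intended LBA

def verify(sector: bytes, lba: int, pi: dict) -> None:
    """Checked at each hop (HBA, fabric, array) so corruption is caught in flight."""
    if pi["guard"] != crc16_t10dif(sector):
        raise IOError("guard tag mismatch: data corrupted in flight")
    if pi["ref_tag"] != (lba & 0xFFFFFFFF):
        raise IOError("reference tag mismatch: misdirected read or write")

pi = protect(b"\x00" * 512, lba=4096)
verify(b"\x00" * 512, lba=4096, pi=pi)       # passes
# verify(b"\x00" * 512, lba=8192, pi=pi)     # would raise: misdirected write
```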
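
And on the last two points, a toy round-robin path selector (hypothetical, plain Python) showing why multiple SAN paths can absorb a burst of I/O that a single NFS link must carry alone:

```python
from itertools import cycle

class StorageObject:
    """Hypothetical per-path load counter: round-robin spreads I/O across
    every active path, so no single link carries the whole burst."""

    def __init__(self, paths):
        self.load_kb = {p: 0 for p in paths}
        self._next = cycle(paths)

    def issue_io(self, size_kb: int) -> None:
        self.load_kb[next(self._next)] += size_kb

san_vvol = StorageObject(["fc_path_a", "fc_path_b", "fc_path_c", "fc_path_d"])
nfs_vvol = StorageObject(["single_ethernet_link"])   # no pNFS, so one link only

for _ in range(1000):            # the same burst of I/O against each object
    san_vvol.issue_io(64)
    nfs_vvol.issue_io(64)

print(san_vvol.load_kb)   # ~16,000 KB spread across four FC paths
print(nfs_vvol.load_kb)   # 64,000 KB piled onto the one Ethernet link
```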

In Closing…

Regardless of your preference for storage protocol with VMware, the SAN versus NAS debate has served all of us well. It provided customers with storage options that allowed them to advance their virtualization initiatives. NFS guided VMware, storage vendors and industry standards boards to develop new, optimized mechanisms better suited for virtual storage objects and paved the way for new storage vendors and their innovations to come to market.

I’ve long held fast to the notion that Ethernet provided the best solution when considering storage protocol with VMware. NFS provided simplicity with large shared datastores and iSCSI or FCoE enabled SCSI capabilities for applications requiring such connectivity.

As the IT industry has advanced, so has my perspective. I rarely, if ever, speak to a customer struggling with the issues that led to the SAN versus NAS debate. Virtualization has matured into cloud computing, supporting business-critical applications and large numbers of business processes. Storage bottlenecks are still abundant, but they are inherent in the workload cloud computing places on disk-based storage architectures, not in the storage protocols. Flash addresses this issue.

As you look at VVols you should revisit the protocol your infrastructure runs on. Obviously, if you're on Fibre Channel you'll remain on Fibre Channel, but if you're an NFS customer on Ethernet you need to investigate what's possible with iSCSI or FCoE and VVols.

There's much more to discuss around VVols besides protocols, so stay tuned; I've got more to share.

 

 

Vaughn Stewart (http://twitter.com/vStewed)
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.


3 Comments

  1. T10 UNMAP’s advantage over NFS is NAS platform specific. On ZFS appliances from Oracle, the moment a VM is deleted the storage is reclaimed and returned to the available pool. More importantly, storage isn’t locked up within the NFS share, it is available for other customers of the appliance as well. All storage allocations are truly ‘thin’ in this appliance; not just allocation but also reclamation. No special actions required to reclaim stranded storage.

    We run thousands of VMs using NFS over 10GbE and find the storage management to be trivial. I still continue to believe NFS is a better choice for VMware: faster (10GbE, 40GbE, QDR IB, …), easier to administer, allocation changes are instantaneous (want to add/remove 5TB to/from a datastore? type in the new quota and you're done, with no additional actions needed), …

    I vote for NFS.

    • Anantha,

      Thanks. I'm with you; NFS has been great for virtualization and has advanced the entire IT landscape when it comes to providing storage for VMs. As technologies advance we must revisit their impact on our architectures. You're correct; NFS datastores do not retain data after a VM has been removed (via deletion or Storage vMotion migration) the way VMFS datastores do.

      In the article I am discussing T10 UNMAP from an end-to-end perspective: from the VM (guest OS file system such as NTFS, ext, etc.) through to the storage on the array. T10 UNMAP will return vast quantities of capacity, as this data significantly outweighs what gets temporarily left in VMFS (until a new VM is deployed or migrated and overwrites those blocks).

      End-to-end T10 UNMAP allows thin, or more precisely capacity-efficient, storage to achieve matched levels of storage utilization and in turn reduce the cost of storage.

      – cheers
      v
