Updated: vSphere on NetApp Storage Best Practices v2.1


Quick heads-up: our VMware vSphere on NetApp: Storage Best Practices technical report, TR-3749 version 2.1, has been released.

Here’s a quick summary of what’s new in version 2.1:

  • vStorage APIs for Array Integration (VAAI)
  • Storage I/O Control
  • New Performance Counters for NFS Datastores
  • Virtual Storage Console 2.0 (VSC2 plug-in)
  • All CLI processes have been removed and replaced with plug-ins (except one for MTS iSCSI)
  • A 26% reduction in document size (from 124 pages to 92) – aka driving simplicity

As always, the content in TR-3749 applies to NetApp FAS and V-Series arrays, as well as the N-Series from IBM. Note that the best practices in this document supersede those in previous versions and should be considered authoritative regarding the use of NetApp technologies in VMware deployments.

Sorry to be brief, but I’m heads down with VMworld preparations and wanted to get the word out!


Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.



6 Comments

  1. A bit of a different question: if you want to dedicate two VMkernel ports to iSCSI traffic and a separate VMkernel port to NFS traffic, is it possible to keep iSCSI from using all three VMkernel ports and just use two? It seems like it should be (possibly via non-routable IP subnets on top of the same VLAN), but I keep seeing more paths than should be there.

  2. @Andrew – IMO if you are using 10GbE it doesn’t matter. Use 2 links for all traffic.
    If you are using 1GbE, then you can use multiple TCP sessions with iSCSI, which will bind the iSCSI I/O to two links. To bind NFS to the third link you will need to put the iSCSI traffic on one subnet and the NFS traffic on a second, because NFS uses the hypervisor’s routing table to decide which VMkernel port to use. You will also need to restrict the NFS exports to the subnet carrying the NFS traffic. (A sketch of this routing-table behavior follows the comment thread.)

  3. Andrew, are you by any chance attempting IP hash for NFS as well as multiple TCP sessions for iSCSI? If so, and you have some extra pNICs, you could do something like this:
    http://www.yellow-bricks.com/2010/08/06/standby-nics-in-an-ip-hash-configuration/comment-page-1/#comment-9505
    It’s nice to see that v2.1 of TR3749 finally explicitly warns about mixing iSCSI port binding and NFS NIC Teaming. However, the TR doesn’t clarify that the warning wouldn’t apply if iSCSI and NFS were configured on separate vSwitches.

  4. Interesting. One of your replies on Chuck Hollis’ blog claimed that RAID-10 did not offer enough protection for today’s workloads. Yet this document is vague on that point and only makes the claim with certainty about RAID-5.
    For the record, are you claiming that the mean time to data loss of RAID-DP is superior to that of RAID-10 for an equal number of physical disks?

  5. @Geert:
    Thank you for the paper reference. However, I don’t see anything addressing my question. Please point out the specific reference that factually demonstrates that RAID-DP has an MTTDL superior to that of RAID-10 with all failure scenarios considered, not just dual failures.
    Examine multiple physical disk failures in a 16-disk group. On the first failure, data remains available for both RAID types. On the second, data remains available with RAID-DP (and any RAID-6); with RAID-10, data becomes unavailable 1/15th of the time (the random failure hits the mirror partner of the already-failed disk). On failures n = 3 to 8, RAID-DP always loses data. For RAID-10 the likelihood of data unavailability (failure of the mirror of an already-failed disk) is only (n-1)/(17-n), considerably better than the 100% of RAID-DP. Only on the 9th failure does data unavailability become certain with RAID-10, whereas it is certain for any failure beyond the 2nd with RAID-DP. The resiliency advantage of RAID-10 over RAID-DP (or any RAID-6) grows as the number of disks increases. (The probability sketch after the comment thread replays this arithmetic.)
    Further, I only asked about MTTDL. But since you want to talk about parity overhead: in a 4-disk group, RAID-DP and RAID-10 have exactly the same overhead (50%). In the more useful 16-disk case, RAID-10 overhead is 8 disks (50%) and RAID-DP overhead is 2 disks (12.5%), a reduction of 6 disks (37.5% of the total). Where does 46% come from (a group of 50 disks?!?)? It’s not in your paper.
    As far as performance is concerned, RAID-10 will outperform RAID-DP for reads, since for any member block in RAID-DP only one I/O at a time can be done, while RAID-10 can do two (one to the disk containing the sector and one to its mirror). Some systems, like the Symmetrix (and very old DEC HSCs…), will actually optimize between a disk and its mirror, directing the I/O to the physical disk that can service it soonest, a neat little optimization unavailable to RAID-6 derivatives. As for writes, RAID-DP has to update those additional two parity blocks, so there are at least three writes; with RAID-10 there are only two, so array IOPS performance will be better with RAID-10. I will concede that for full-stripe writes, parity RAID schemes in general (RAID-DP included) have less overhead and will perform better, but they are not the typical case. RAID-10 has the further advantage, now small given fantastically fast processors, of not having to calculate any parity.
    RAID-10 is expensive, but like all things you don’t get what you don’t pay for.
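
To illustrate the point in comment 2 about NFS following the hypervisor’s routing table, here is a minimal Python sketch of that selection logic. The VMkernel names, addresses, and subnets are hypothetical; this only models the idea and is not ESX code.

```python
# Illustrative model only: the host picks the VMkernel port whose subnet
# contains the NFS server's address, much like an ordinary routing-table lookup.
import ipaddress

# Hypothetical VMkernel ports: two on the iSCSI subnet, one on the NFS subnet.
vmk_ports = {
    "vmk1": ipaddress.ip_interface("192.168.10.11/24"),  # iSCSI (bound link 1)
    "vmk2": ipaddress.ip_interface("192.168.10.12/24"),  # iSCSI (bound link 2)
    "vmk3": ipaddress.ip_interface("192.168.20.11/24"),  # NFS only
}

def select_vmk_for_target(target_ip: str) -> str:
    """Return the VMkernel port whose subnet contains the target address."""
    target = ipaddress.ip_address(target_ip)
    for name, iface in vmk_ports.items():
        if target in iface.network:
            return name
    return "default route"

# An NFS export presented on the 192.168.20.0/24 subnet lands on vmk3 ...
print(select_vmk_for_target("192.168.20.50"))  # -> vmk3
# ... while a target on the iSCSI subnet would share vmk1 instead.
print(select_vmk_for_target("192.168.10.50"))  # -> vmk1
```

This is also why the NFS exports should be restricted to the NFS subnet: if the same export were reachable on the iSCSI subnet, the lookup above would steer NFS traffic onto the iSCSI links.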
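
To replay the arithmetic in comment 5, here is a short Python sketch of the conditional data-loss probabilities for a 16-disk group. It mirrors the commenter's combinatorics only (each failure hits a random surviving disk, and all earlier failures were survived); it is not an MTTDL model, since it includes no failure rates or rebuild windows.

```python
# Probability that the n-th disk failure causes data loss in a 16-disk group,
# given that every earlier failure was survived, per the reasoning in comment 5.
DISKS = 16
PAIRS = DISKS // 2  # RAID-10: 8 mirrored pairs

def raid10_loss_given_survival(n: int) -> float:
    """P(n-th failure is fatal | first n-1 failures hit distinct mirror pairs)."""
    if n < 2:
        return 0.0
    if n > PAIRS + 1:
        return 1.0                 # guard; surviving more than 8 failures is impossible
    exposed = n - 1                # surviving disks whose mirror already failed
    remaining = DISKS - (n - 1)    # disks still alive before the n-th failure
    return exposed / remaining     # (n-1)/(17-n) for a 16-disk group

def raid_dp_loss_given_survival(n: int) -> float:
    """RAID-DP (dual parity) survives any two failures, never a third."""
    return 0.0 if n <= 2 else 1.0

for n in range(2, 10):
    print(f"failure #{n}: RAID-10 {raid10_loss_given_survival(n):.3f}"
          f"  RAID-DP {raid_dp_loss_given_survival(n):.0f}")
# failure #2: RAID-10 0.067  RAID-DP 0
# failure #3: RAID-10 0.143  RAID-DP 1
# ...
# failure #9: RAID-10 1.000  RAID-DP 1
```

The 1/15 figure for the second failure and the certainty at the 9th failure both fall out of the same (n-1)/(17-n) expression the commenter gives.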

