VMware Admins are Storage Admins – vStorage Integration Part 3


This is the final post in the ‘VMware Admins are Storage Admins’ series. In part 1 we discussed how VI admins are becoming storage admins through the virtualization of our data centers. We reviewed the traditional provisioning model, and introduced a new model designed to simplify, automate, and standardize storage operations by allowing storage admins to create resource pools that the VI admins can use at will via vCenter plug-ins.

In part 2 we drilled into the storage functionality and technical details of the NetApp vCenter plug-ins. That feature list highlighted automating ESX/ESXi storage connectivity in line with joint best practices, storage array reporting and connectivity automation, automated datastore provisioning and management, and provisioning zero-cost, pre-deduplicated VMs for servers and desktops.

The Devil is in the Details

I find that many readers appreciate understanding the differences behind the marketing claims of similar products and solutions. I’d like to use the last part of this series to compare and contrast the vCenter integrations from NetApp with similar offerings from EMC.

I’m sure some of you are asking, ‘why EMC again?’ The reason is simple: when it comes to VMware integration, NetApp and EMC appear to be leading the pack. Don’t take my word for it, consider the following quote:

“It’s no secret that I have a lot of respect for NetApp, both as a company and their technology. When it comes to VMware-focus and integration, in my eyes (which may be off-base), it’s basically EMC and NetApp and then the others are so far back that it doesn’t matter.” – Chad Sakac, VP of VMware alliances, EMC

I’d like to tackle this comparison using the same components of the new model that I shared earlier.

My intent here is to be very accurate and fair in representing the non-NetApp technology and to help clarify the public’s understanding. If I’ve overlooked a detail, please let me know and I will revise this content.

Creating Storage Resource Pools

Defined as enabling storage admins to assign storage resources for use in the virtual infrastructure.

NetApp Plug-ins allow storage admins to assign the following resources:

  • NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Aggregates (RAID protected pools of physical disks)
  • FlexVols (logical storage containers that reside on Aggregates)
  • Storage I/O interfaces (select Ethernet ports)

EMC Plug-ins deliver the following capabilities:

  • At present the EMC plug-in does not support a model of resource pool creation.
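
To make the resource-pool model more concrete, here is a minimal sketch of how such a pool could be represented in code. This is purely illustrative; the class and field names are my own assumptions and not the plug-in’s actual objects or APIs.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Aggregate:
        name: str          # RAID-protected pool of physical disks
        raid_type: str     # e.g. "RAID-DP"
        capacity_gb: int

    @dataclass
    class FlexVol:
        name: str          # logical container residing on an aggregate
        aggregate: str
        size_gb: int
        thin: bool = True

    @dataclass
    class StorageResourcePool:
        controller: str                                   # FAS, vSeries, or N-Series
        aggregates: List[Aggregate] = field(default_factory=list)
        flexvols: List[FlexVol] = field(default_factory=list)
        ethernet_ports: List[str] = field(default_factory=list)  # storage I/O interfaces

    # The storage admin publishes this pool; the VI admin consumes it via the plug-in.
    pool = StorageResourcePool(
        controller="fas3170-a",
        aggregates=[Aggregate("aggr1", "RAID-DP", 10240)],
        flexvols=[FlexVol("vmware_nfs_01", "aggr1", 2048)],
        ethernet_ports=["e0a", "e0b"],
    )
    print(pool)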

Provisioning Datastores

Defined as giving VI admins the ability to provision datastores (a sketch of this workflow follows the lists below).

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Provisions FC, FCoE, iSCSI, and NFS datastores from the resource pools created by the storage admins
  • Provisions at the host, cluster, or data center level
  • Provisions data deduplicated datastores (VMFS & NFS)
  • Provisions thin datastores (VMFS & NFS)
  • Sets multipathing policy and balances IO load across all storage interfaces
  • Secures storage object (masks LUNs and sets NFS exports)
  • Sets the options in compliance with storage best practices

EMC Plug-ins deliver the following capabilities:

  • Supports Celerra controllers (which can address local, Clariion, or Symmetrix disk)
  • Provisions NFS datastores on any NFS export
  • Provisions at the host, cluster, or data center level
  • Provisions compressed datastores (NFS)
  • Provisions thin datastores (NFS) (aka virtual provisioning)
  • Sends IO across a single storage interface
  • Secures storage object (sets NFS export)
  • Allows one to set options in compliance with storage best practices
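
For readers who prefer pseudocode to bullet points, the following sketch walks through the basic NFS provisioning flow described above: carve a volume from the pool, secure the export, and mount it on every host in the cluster. The function and field names are illustrative assumptions, not the plug-in or vSphere API.

    def provision_nfs_datastore(pool, volume_name, size_gb, cluster_hosts):
        """Provision one NFS datastore from a storage resource pool and mount it
        on every host in a cluster (i.e., provision at the cluster level)."""
        # 1. Carve a thin, dedupe-enabled FlexVol from the assigned aggregate.
        flexvol = {"name": volume_name, "aggregate": pool["aggregate"],
                   "size_gb": size_gb, "thin": True, "dedupe": True}
        # 2. Secure the storage object: restrict the NFS export to the cluster's
        #    VMkernel addresses.
        export = {"path": f"/vol/{volume_name}", "allowed_hosts": list(cluster_hosts)}
        # 3. Mount the export on every host so the datastore appears cluster-wide.
        datastores = [{"host": h, "nfs_path": export["path"]} for h in cluster_hosts]
        return flexvol, export, datastores

    flexvol, export, datastores = provision_nfs_datastore(
        {"aggregate": "aggr1"}, "vmware_nfs_02", 1024,
        ["esx01.lab.local", "esx02.lab.local"])
    print(export["path"], "mounted on", len(datastores), "hosts")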

Managing and Reconfiguring Datastores

Defined as the management capabilities available to adjust settings of existing datastores from within vCenter.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Resizes FC, FCoE, iSCSI, and NFS datastores (grow VMFS & NFS, shrink and autogrow NFS)
  • Enables, reports on, and disables block-level data deduplication on the datastore (VMFS & NFS)
  • Enables, reports on, and disables thin provisioning of the datastore (VMFS & NFS)

EMC Plug-ins deliver the following capabilities:

  • Supports Celerra controllers (which can address local, Clariion, or Symmetrix disk)
  • Provisions NFS datastores on any NFS export or can create a new export on any file system
  • Resizes NFS datastores (autogrow NFS)
  • Provisions at the host, cluster, or data center level
  • Sends IO across a single storage interface
  • Secures storage object (sets NFS export)
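
As a rough illustration of the reconfiguration operations above (resizing a datastore and toggling dedupe), consider the sketch below. It is a toy model under my own assumptions, not actual plug-in code; the resize rule reflects the grow/shrink distinction noted in the NetApp list.

    def resize_datastore(datastore, new_size_gb):
        # NFS datastores can grow or shrink; VMFS datastores can only grow.
        if datastore["type"] == "VMFS" and new_size_gb < datastore["size_gb"]:
            raise ValueError("VMFS datastores can only be grown, not shrunk")
        datastore["size_gb"] = new_size_gb

    def set_dedupe(datastore, enabled):
        # Block-level dedupe is a property of the backing volume, so it can be
        # enabled or disabled for both VMFS and NFS datastores.
        datastore["dedupe"] = enabled

    ds = {"name": "vmware_nfs_01", "type": "NFS", "size_gb": 1024, "dedupe": False}
    resize_datastore(ds, 2048)
    set_dedupe(ds, True)
    print(ds)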

Audit and Automate ESX/ESXi Storage Settings

Defined as the ability to audit and set ESX/ESXi storage connectivity settings from within vCenter. Note these settings are outside of what’s configurable with VMware host profiles.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Supports FC, FCoE, iSCSI, and NFS settings
  • Reports on storage settings for vSphere and VI3
  • Provides automated, non-disruptive updates to storage settings for vSphere

EMC Plug-ins deliver the following capabilities:

  • Supports FC with Celerra controllers
  • Supports FC with Symmetrix controllers
  • Supports iSCSI with Clariion controllers
  • Reports on storage settings for vSphere and VI3
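
Conceptually, auditing ESX/ESXi storage settings boils down to comparing each host’s advanced settings against a set of recommended values and flagging drift, as in the sketch below. The setting names and values are examples of the sort of parameters joint best-practice documents cover, not an authoritative list.

    # Example best-practice values; treat these as placeholders, not guidance.
    RECOMMENDED = {
        "NFS.MaxVolumes": 64,
        "NFS.HeartbeatFrequency": 12,
        "Disk.QFullSampleSize": 32,
    }

    def audit_host(hostname, current_settings):
        """Return the settings on a host that do not match the recommended values."""
        return {key: {"current": current_settings.get(key), "recommended": value}
                for key, value in RECOMMENDED.items()
                if current_settings.get(key) != value}

    drift = audit_host("esx01.lab.local",
                       {"NFS.MaxVolumes": 8, "NFS.HeartbeatFrequency": 12})
    print(drift)  # the settings an automated, non-disruptive update would correct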

Report Storage Details and Utilization

Defined as the ability to report storage utilization at the datastore, LUN, volume (NFS), and RAID level.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Supports FC, FCoE, iSCSI, and NFS
  • Reports on storage utilization at all four layers

EMC Plug-ins deliver the following capabilities:

  • At present the EMC plug-in does not support reporting storage utilization levels, only provisioned capacities.
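
The value of reporting at all four layers is that the numbers differ: a datastore can look nearly full while the thin-provisioned volume and aggregate underneath have plenty of headroom, or vice versa. A trivial sketch with illustrative figures only:

    def pct_used(used_gb, total_gb):
        return round(100.0 * used_gb / total_gb, 1)

    # One datastore viewed at each layer (example numbers, not real measurements).
    layers = {
        "datastore": (900, 1024),
        "LUN":       (900, 1024),
        "volume":    (450, 2048),    # dedupe reduces what the volume actually stores
        "aggregate": (3000, 10240),  # RAID level
    }
    for layer, (used, total) in layers.items():
        print(f"{layer:10s} {pct_used(used, total):5.1f}% used")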

Report Storage Faults

Defined as the ability to communicate storage controller-related issues in vCenter.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Supports FC, FCoE, iSCSI, and NFS

EMC Plug-ins deliver the following capabilities:

  • Supports FC with Celerra controllers
  • Supports FC with Symmetrix controllers
  • Supports iSCSI with Clariion controllers

Ensure Optimal I/O Settings within VMs

Defined as the ability to enhance the I/O efficiency of a VM by adjusting GOS configurations. These settings include SCSI settings and partition alignment; a simple alignment check is sketched at the end of this section.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Supports FC, FCoE, iSCSI, and NFS
  • Supports Windows and Linux GOS

EMC Plug-ins deliver the following capabilities:

  • At present the EMC plug-in does not provide this level of functionality
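
Of the two GOS adjustments mentioned, partition alignment is the easiest to illustrate: a guest partition whose starting offset is not a multiple of the array’s block size forces the array to touch two blocks for many single-block guest I/Os. A simple check, assuming a 4 KB array block (my assumption for illustration, not the plug-in’s method):

    SECTOR_BYTES = 512
    ARRAY_BLOCK_BYTES = 4096   # assumed array block size

    def is_aligned(start_sector):
        """A partition is aligned when its byte offset falls on an array-block boundary."""
        return (start_sector * SECTOR_BYTES) % ARRAY_BLOCK_BYTES == 0

    print(is_aligned(63))     # False: the classic 63-sector MBR default (31.5 KB offset)
    print(is_aligned(2048))   # True: the 1 MB offset used by newer guest OS installers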

Instant Provisioning/Cloning of VMs

Defined as the ability to provision or clone VMs either individually or in bulk.

NetApp Plug-ins deliver the following capabilities:

  • Supports all NetApp controllers (FAS or vSeries for 3rd party arrays or IBM N-Series)
  • Supports cloning of VMs on FC, FCoE, iSCSI, and NFS datastores
  • Clones running VMs, VMs from templates, and vApps
  • Provisions data deduplicated VMs on VMFS & NFS (an NFS clone is an immediate FlexClone; a VMFS clone is a VMware clone followed by the dedupe process)
  • Offloads cloning I/O (NFS & VMFS with datastore level clones)
  • Clones support dedupe updates
  • Clones support Storage VMotion
  • Bulk VM clones are balanced across ESX/ESXi hosts
  • Clones support guest customization
  • Imports desktop pools into View Manager, XenDesktop, and vWorkspace

EMC Plug-ins deliver the following capabilities:

  • Supports Celerra controllers (which can address local, Clariion, or Symmetrix disk)
  • Supports cloning of VMs on NFS datastores
  • Clones running VMs and VMs from templates
  • Provisions linked VMs on NFS (aka fast cloning)
  • Provisions thick VMs on NFS
  • Provisions compressed thick VMs on NFS (not available with fast clones)
  • Offloads cloning I/O (NFS & VMFS with datastore level clones)
  • Clones support compression updates (thick VMs only – not with fast clones)
  • Clones support Storage VMotion (thick VMs only – not with fast clones)
  • Bulk VM clones are balanced across ESX/ESXi hosts
  • Clones support guest customization
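
To show why the NFS and VMFS cloning paths differ in cost, here is a hedged sketch of the two approaches described in the NetApp list above (an immediate file-level clone on NFS versus a full copy followed by dedupe on VMFS). The function is illustrative only and does not represent NetApp or VMware APIs.

    def clone_vm(source_vm, source_size_gb, count, datastore_type):
        """Clone a VM `count` times, tracking the interim space each method consumes."""
        clones = []
        for i in range(count):
            if datastore_type == "NFS":
                # Immediate file-level clone (FlexClone): no data is copied, so the
                # incremental space at creation time is effectively zero.
                interim_gb = 0
            else:
                # VMFS: a full VMware clone is taken first, then the dedupe pass
                # reclaims duplicate blocks, so the interim cost is one full copy.
                interim_gb = source_size_gb
            clones.append({"name": f"{source_vm}-clone{i:03d}",
                           "interim_space_gb": interim_gb})
        return clones

    desktops = clone_vm("win7-gold", 20, 250, "NFS")
    print(len(desktops), "desktops provisioned from one golden image")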

Summarizing the vCenter Integrations

I think it’s clear that EMC and NetApp are leading the pack in terms of enabling storage integration within VMware environments. There are additional capabilities that I could have compared, and in some of those areas EMC is ahead of NetApp, like the SRM failback plug-in. For this post I chose to restrict the scope to the capabilities we covered in this series.

I’ve made every effort to be accurate in the data I’ve shared above. If I have made an oversight, please let me know and I will correct it. Trust me, it is always tough to stay on top of another vendor’s technical offerings. If we operate on the assumption that the majority of what I have shared is accurate (+/- 10%), then I believe the following are fairly accurate statements…

  • NetApp enables storage admins to deploy resource pools for use by the VI admins. This model keeps control of the resources with the storage team while enabling the VI admin to use the resources as needed and on demand. The VI admin receives tools which work identically across all platforms, support every storage protocol without feature restrictions, and simplify operations while helping ensure the uptime of the environment by enforcing best practices.
  • EMC enables the VI admin to provision NAS storage resources on demand. The VI admin receives tools that work differently across the various storage platforms and provide inconsistent support for multiple storage protocols, with feature restrictions. Their plug-ins do simplify tasks (where supported) and offer a means to optionally set best practices. The EMC tools do not allow the storage admin to retain ultimate control of the storage resources.

Closing Thoughts

The enablement of storage management within vCenter has only just begun, and we should expect to see levels of integration unparalleled compared to what was available with physical infrastructures (and there were many cool integrations with those systems). As we navigate this landscape, I’d suggest the unified architecture of Data ONTAP will allow us to innovate faster and provide more tools than the other guys, as they have to write to an inconsistent set of array specs (Enginuity with Symmetrix, DART with Celerra, and FLARE with Clariion). Obviously time will tell, but I believe what I’ve shared in this post demonstrates why I hold this view.

Before I sign off, I thought I’d update the VMware integration chart that originally appeared in my post “EMC: The Storage Most Integrated with VMware?”. I think the details listed above are easier to digest when presented visually.

[Chart: updated VMware integration comparison (click on the image to view at full size)]
Updated the image on 3/11/10 based on feedback from EMC


Vaughn Stewart – http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing“.


8 Comments

  1. Vaughn
    I thought this was a good post until you started making stuff up again.
    I think we’d all agree that the proliferation of VMware has increased the popularity of the “VMware admin in charge” model for storage management.
    And then you went off the deep end again.
    It seems that anytime you comment on EMC’s capabilities, you get it wrong. That chart towards the end is yet another excellent example of this.
    Once again, you’re making stuff up (MSU).
    Not only that, as Chad and I have both pointed out repeatedly, you don’t need to overstate NetApp’s capabilities to be successful here.
    Recent egregious examples include your claims about market penetration with VDI (“1 million desktops”, “9 out of 10”, etc.), which make all of us wonder what you’ve been smoking. And your “5000 user” statements were candidates for an industry blooper reel.
    The sad thing is that this behavior on your part isn’t really needed.
    All customers want is the truth, and if you’re not prepared to give it to them, best for you to keep quiet.
    — Chuck

  2. Hi Chuck,
    Can you identify the specific things you feel were made up? There are specific claims being made in the post, which of them are incorrect?
    Thanks!

  3. Vaughn,
    please fix the following:
    dedupe VM’s – celerra
    reports dedupe savings – celerra
    auto configure ESX nfs settings – celerra
    High Performance Multipathing I/O…this one made me laugh so hard, my wife looked at me weird.
    yea dude, we have an amazing product that costs money which is by far superior to NMP RR !!
    SRM Failback, please add the Clariion, RecoverPoint, oh, and we also have an interface showing all the VM’s that are not in compliance for SRM!

  4. @ Mr. Hollis
    Thank you for the initial compliment on the discussion. Believe me when I say NetApp and EMC are clearly leading the way with VMware integration, and in the end the customers will benefit from the innovations of our engineering staffs.
    Regarding the comparison of the technologies, I clearly state in my post that my intentions are to provide accurate data, specifically in the areas of technology produced outside of NetApp. Moreover, I have asked for a public review of the content and stated that I would make corrections in the event of an oversight. I’m attempting to be as transparent as I can be.
    I believe one can label your statement dismissing the accuracy of the chart without providing supporting data as an “argument from authority.” This is commonly referred to as a logical fallacy and is defined as positioning a claim as true without having to substantiate it, because it is derived from a privileged position of knowledge.
    I’d like to ask you to change the dialog regarding this topic. Instead I’d like to see you contribute to the discussion by providing myself, and the community, with the data points that allow us to increase the level of accuracy in these (and other) technical comparisons. I’m sure the sales teams in both companies would love to have a mutually agreed upon document listing the capabilities of each other’s technologies. In fact this is what I’m striving to provide and if you could help it would display a level of transparency on your part.

  5. @Itzik
    Thank you for providing feedback and contributing to the sharing of information.
    May I ask you to clarify a few points before I post your suggested edits?
    – dedupe VM’s – I believe they can only be compressed with EMC. I am taking this data directly from Chad’s post on the capabilities of F-RDE
    – reports dedupe savings – celerra
    Again, sorry to be literal here. As the arrays do not offer the ability to reduce the storage consumed between two VMs via block-level data deduplication, I would rather use another term or phrase. Would “reports storage savings provided by the array” work?
    – auto configure ESX nfs settings – celerra
    Got it, my oversight
    – High Performance Multipathing I/O
    I know you tout the use of the native multipathing but if I quote directly from the EMC website, “PowerPath/VE enables you to automate optimal server, storage, and path utilization in a dynamic virtual environment. This eliminates the need to manually load-balance hundreds or thousands of virtual machines and I/O-intensive applications in hyper-consolidated environments.” This is where I get a bit confused by EMC’s pitch. You offer X number of ways to accomplish a task and every way is the right way. I’m gonna deny this request as it directly contradicts the corporate positioning of EMC. Is this fair?
    – SRM Failback, please add the Clariion
    Got it. Which protocols are supported? (btw – not that I’m adding it to the chart, can you add which replication models are supported?)
    – On RecoverPoint, the compliance piece is cool. Are you suggesting I add a line to the chart introducing a new metric? If so, I’ll need more details.
    Itzik, I commend you for standing up to help refine this data. I look forward to publishing the updated chart upon receiving your feedback.
    Thanks again.
    V

  6. Reposting from memory, as my original post never appeared.
    —-
    One change would be to remove the “FC only” and “iSCSI only” notations in the “Physical to Virtual Storage Mgmt” row. EMC Storage Viewer has included support for both Symmetrix and CLARiiON, FC and iSCSI, since the beginning.
    Also, I have to agree with Itzik that spinning native multi-path support from VMware into a win for Data ONTAP is a bit of a stretch. To be fair, maybe you should break that row into two: 1) Native Multipathing I/O and 2) 3rd Party MultiPath Plugin. Then both EMC Storage and Data ONTAP will get green checks for #1, and ONTAP will get red exclamations for #2.

  7. @Lee
    Thank you for the comments and ensuring accuracy in this chart. I am very concerned about presenting accurate data.
    On multipathing… I can see your point around multipathing; however, I don’t believe you see ours. NetApp arrays work optimally with VMware’s Round Robin path selection policy as our arrays do not serve LUNs with per-LUN queue depths. As you know, EMC arrays do serve LUNs with queues, and when a LUN enters a queue-full condition all I/O on the path is subject to performance degradation. It is for this condition that EMC recommends that customers concerned with performance purchase PowerPath/VE.
    I don’t think it is accurate to say EMC arrays run optimally with the native multipathing software included in ESX/ESXi. Do you?
    On Storage Viewer support for iSCSI and FC on Symmetrix and Clariion… Can you point to public documents so I can verify your statement? If it is correct I will absolutely update the chart.
    Thank you again for the feedback.

  8. DISCLOSURE – EMC Employee here.
    @Vaughn – it is:
    1) ABSOLUTELY correct to say that EMC arrays run optimally with NMP Round-Robin mode. No worse, no better than all other arrays (including NetApp) that support this model. The array LUN queue model is a total red-herring. Implying that the network is never congested or unbalanced is an implication that all network QoS mechanisms are a waste of time, always.
    2) ABSOLUTELY correct to say that NMP RR is still relatively primitive compared to other open systems multipathing (improving, no doubt!). It has no adaptive host-side queue management, doesn’t do predictive path testing, and requires manual new path discovery. PowerPath/VE does all of that, so it’s completely accurate to say it’s BETTER. It’s also not free. Customers can choose, just like they choose between the VMware dVS and the Nexus 1000v. Good, and Better.
    You don’t have to try to make us seem more complex 🙂 It’s basic:
    – VI3.x multipathing = really not great.
    – vSphere NMP with any array on the VMware HCL that supports either active/active or ALUA = good
    – vSphere MPP with PowerPath/VE = best.
    simple enough for ya? 🙂
    OK – now for fact check, row by row.
    Table as a whole:
    – over-positioning vSeries, N-Series, FAS – they are all the same, right? I mean, if you’re going to hammer us for having different array types, isn’t it fair to say you have one? But, whatever, let’s just let that slide.
    – Symmetrix is one family, and the columns are the same for vCenter plugins (and will stay that way), so like the comment above, you’re artificially making us look like we have more than 3, just like you’re artificially making yourselves look like you have more than 1.
    1) Auto-Provision Datastores – correct.
    2) Dynamically mask LUNs – incorrect. There is a plugin for Celerra that does that.
    3) Dynamically grow/shrink datastores – correct.
    4) Dedupe VMs – putting an exclamation mark in there is fine if it makes you feel good 🙂 We both save capacity, in different ways. EMC’s F-RDE is a combo file-level dedupe and sub-file compress. In the VM datastore use case (which we’re talking about here), file-level dedupe does nothing; sub-file compress generally nets a 40-50% capacity savings (in general purpose NAS use cases, file-level dedupe often saves more than block-level or compress). It has the advantages of having NO impact on filesystem size, features, or behavior, and being unaffected by local and remote replication (ergo, there is no “pinning” for elements of a filesystem that are being referenced by a snapshot). Can the same be said of the NetApp approach? Not saying better/worse – just pointing out that a NetApp-constructed table, from a NetApp-constructed world-view, won’t note the pros/cons on both sides – just as an EMC-constructed one wouldn’t, would it?
    5) Report dedupe savings – same as above. I personally would say “report space savings”, and then it’s incorrect.
    6) Auto-configured iSCSI settings – correct.
    7) Auto-configure NFS settings – incorrect, it’s auto in the Celerra NFS plugin.
    8) High Performance Multipathing – incorrect, see above.
    9) Physical to virtual storage management – incorrect. iSCSI and FC are supported across the board, the Celerra also has it for NFS.
    10) array based VM cloning – this is incorrect, but on your side. this can only be done on NFS datastores, not VMFS datastores. We both can “cheat” (taking a snapshot/clone) of the LUN, mount and copy out a VM, but that is not a VM-level snapshot. the reason we’re able to do it on NFS is that both Celerra (as of DART 5.6.47) and ONTAP (I think as of 7.2.3?) can do file-level snapshots (which in the VMware use case manifest as “VM-level” snapshots). Is this statement of mine correct?
    11) Array based datastore cloning – incorrect. Celerra can do it for NFS and iSCSI.
    12) IO/offload for VM clones – incorrect, on your side. See 10.
    13) SRM support – correct
    14) SRM failback – incorrect. It is supported on all four – in the latest SRDF SRA, RecoverPoint 3.3, MirrorView Insight, and the Celerra SRA.
    15) Not sure if I understand the last one – can you explain to me what it means?
    BTW – you know why I think this is an exercise in futility (and why I avoid making direct comparisons to others, and something I have talked to you verbally OVER and OVER again?)
    a) if you look at the list above, you’re wrong on more than half. If this were in front of a customer, you and NetApp would have lost credibility. This is why I try my darnedest to train EMCers to never go negative on a competitor, but emphasize why customers choose EMC. When you’re a tiny startup, you HAVE to go negative. NetApp is not a tiny startup. You guys are a great company, with great products and people. Coming from a short guy, lose the Napoleon/David vs Goliath complex 🙂
    b) The list will be COMPLETELY wrong within about a month (seriously), and yet you, and the NetApp folks, and partners who use this will continue to use a doc that is completely out of date.
    Will you commit to constantly (literally constantly!) updating this thing? If so, what a waste of YOUR time. Personally, I try to make sure my team and I, along with EMC partners, simply stay on top of VMware, Cisco, and EMC technologies. We have little time to try to track others. Inevitably, things we would say about others would be incorrect, just as I pointed out with the table above. Don’t get me wrong – when good companies (like NetApp) do innovative things that customers like (such as RCU), we work with engineering to see what we can do. No “not invented here” syndrome allowed.
    c) EMC is more than a storage company. What about our VADP integration with Avamar for backup use cases (not just VDDK but also CBT)? What about SCM’s integration to extend ESX host profiles and guest remediation? What about RSA envision’s integration with vCenter for security auditing/remediation? Do you have any of those? Anything like that? Anything planned? BTW, that’s a SHORT LIST of all the integration points beyond simply storage.
    d) You selected the battlefield here to be “vCenter Plugins”, and extrapolated out to “VMware integrated”. Well, let’s broaden the context a bit shall we?
    – Does NetApp’s array element management tool directly connect to the vCenter APIs, showing VMware contextual info directly in the storage context, like EMC’s midrange does? Can I see what VM is being replicated via SnapMirror and which aren’t? If you can, is that a free feature?
    – Do ESX host initiator records (or igroups I believe they are called in NetApp-land) automatically get registered by vSphere 4?
    Moving on to perhaps a more productive discussion…
    It **IS** fair to say that EMC has multiple platforms, and that increases our engineering burden as we develop plugins.
    I’ve always said – whether it’s a company or an individual, our strengths are our weaknesses. NetApp’s strength is laser focus on, in essence, one product. EMC’s strength is breadth of capability. They each have flipside weaknesses.
    Three quick comments on that one (why would we intentionally “burden ourselves” with more than one storage array type?):
    1) Does NetApp have something like a V-Max? What percentage of that “enterprise array” market (the definitions vary – but generally it’s defined by broader host attach than open systems only, coupled with N+1 architectural designs sharing a global cache model) does the FAS6000 series have relative to IBM, HDS, and EMC? How hard would it be to extend FAS into that space? Scale-out is not enough in that market.
    2) If one product is always the way, all the time, why the big fight over Data Domain? Isn’t that an implicit statement that growth into new markets isn’t always best served by a given technological approach and that perhaps not all problems can be solved “in the storage platform”? I wouldn’t be embarrassed to make that statement – it seems patently obvious to me.
    3) EMC DOES need to simplify our product families – **where the needs can be met via one architectural model, one approach – we need to simplify.** No argument from me there. You can bet your bottom dollar we’re beavering away at it here 🙂
    All just my 2 cents (my opinion) of course. If you want to commit yourself to consistently stating stuff about others that are always going to be a little wrong, a little behind the times (in some cases not a little, but a lot), then so be it, I’ll be glad that you’re doing that rather than playing with betas of VMware and your own products 🙂
