It’s been two weeks since VMworld 2009 concluded. I believe the show was an overwhelming success for VMware and its ecosystem partners. This year’s event demonstrated that VMware is executing on its strategy to lead data center virtualization through the refinement of numerous technologies, enhancements, and partnerships.
As there has been a ton of coverage of the show (the sessions, the product and partnership announcements, etc…), I thought I’d go in a different direction with today’s post and share a theme that was top of mind in nearly every discussion I had with attendees, whether they were customers, prospects, partners, analysts, administrators, or executives.
Virtual data centers today face large challenges when it comes to managing their storage.
What seems common in most environments is that the VMware administrative teams are staffed with individuals who have modest experience with storage architectures and administration. This model has produced very successful server virtualization architectures built on unmanageable, hard-to-troubleshoot, and extremely expensive storage platforms.
One customer labeled their situation as one of “Ever Multiplying Complexity.”
This scenario is so widespread that it has literally spawned a relatively new market of technologies designed to help tame this beast, available from nearly everyone, ranging from start-ups to storage industry leaders.
It seems the message being presented is that the only way to virtualize one’s data center is to purchase a multitude of tools that make traditional storage arrays and storage networks less complicated. How does adding more equate to less complexity?
It is clear the model is broken.
I fundamentally believe you don’t surround complicated solutions with more tools; you replace the complicated with simpler, more advanced technology. We do this in other areas of our lives, and if you’re reading this post you’ve already done it with the servers in your data center.
At this moment a representative offering traditional storage array technology is probably claiming that I’m misinforming the public or overstating my case. That’s rubbish.
It takes an extraordinary amount of engineering advancements to make technology simple yet full featured. Does anyone challenge the sophistication and capabilities of technologies like ESX or Mac OS X?
When Cisco recognized that IOS was limited in the value it provided to virtual infrastructure, they didn’t choose to sell customers additional software to make up for the shortcomings of IOS. Instead, Cisco began developing NX-OS as a way to eliminate those challenges.
Why are customers giving the storage vendors a free pass and not asking them to get on board with the virtual era?
I have a hard time understanding the current storage array offerings from most vendors. Different architectures for enterprise, mid-tier, and small office, each running a separate software stack, which prevents native interoperability (OK, you can buy bolt-on products to enable some amount of interoperability). Because there are multiple array software architectures, the integration points with VMware suffer, resulting in some features being available only on some models via some protocols.
If we were to apply this model to our server virtualization efforts our data centers would run three hypervisors, each with its own set of management tools and offering differing capabilities.
Would you do this with your server virtualization efforts?
Why NetApp is Simple and Simple is Ideal
As VMware virtualizes server hardware with ESX, NetApp virtualizes storage hardware with Data ONTAP. DOT is available on every array from the enterprise class to the small office, and it can virtualize your existing enterprise-class arrays from EMC, HP, HDS, 3Par, etc…
This ‘storage hypervisor’ design enables interoperability between every controller, provides equal functionality across all platforms, reduces the training requirements of the storage teams, and eliminates gaps in VMware integration points.
Making Storage for VMware Simple
The choice of storage connectivity can make a huge impact on the operational team.
Most don’t realize that each storage protocol available with VMware offers different strengths and benefits. Again, this is probably due to most of us VMware admins having a limited storage background or working in an environment where only a single storage protocol is in use.
With virtualization, customers are best served by Ethernet storage networks as opposed to Fibre Channel, as Ethernet supports multiple storage protocols (NFS, iSCSI, & FCoE), allowing customers to leverage the benefits of each.
Consider that ~80% of all systems being virtualized are virtualized for the purposes of consolidation. Did you know that consolidation ratios with NFS datastores are significantly greater than with iSCSI or FC datastores?
The NFS model results in significantly fewer datastores, which results in operational savings through reduced tasks in the areas of backup scheduling, storage provisioning, LUN masking, data replication, storage network management (like zoning), SRM recovery plans, node connectivity, multipath assignments, etc…
A 10X increase in density results in a 10X operational savings.
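To make that arithmetic concrete, here’s a minimal sketch. The VMs-per-datastore figures and the per-datastore task count below are illustrative assumptions of mine, not vendor-published numbers; the point is only how datastore density drives the count of objects an operations team must touch:

```python
# Illustrative sketch: how datastore density affects the number of
# storage objects an operations team must manage. The density and
# task figures are assumptions for illustration, not measured limits.

def datastores_needed(total_vms, vms_per_datastore):
    """Round up: every partially filled datastore must still be provisioned."""
    return -(-total_vms // vms_per_datastore)  # ceiling division

total_vms = 400
vmfs_density = 10    # assumed VMs per FC/iSCSI (VMFS) datastore
nfs_density = 100    # assumed VMs per NFS datastore

vmfs_count = datastores_needed(total_vms, vmfs_density)
nfs_count = datastores_needed(total_vms, nfs_density)

# Each datastore carries its own backup schedule, replication
# relationship, SRM protection group entry, masking/zoning record, etc.
tasks_per_datastore = 6
print(f"VMFS datastores: {vmfs_count}, recurring objects: {vmfs_count * tasks_per_datastore}")
print(f"NFS datastores:  {nfs_count}, recurring objects: {nfs_count * tasks_per_datastore}")
```

With these assumed densities, 400 VMs means 40 VMFS datastores versus 4 NFS datastores, and every recurring task scales by the same 10X factor.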
Another example of simplicity: in a previous post I covered the plug-and-play SAN capabilities of vSphere and NetApp. The traditional storage array guys will claim I’m overselling here. This isn’t because they are bad guys; rather, they fundamentally believe every storage array operates similarly and as such shares the same limits. Check out the post for more details.
Make Storage Monitoring Simple
With a simpler storage array architecture, the number of reporting and monitoring tools can be reduced. Ideally we want to avoid monitoring individual storage objects such as LUNs, or abstracted sub-objects like Meta-LUNs or concatenated volumes. Ever try to find a hot spot when multiple VMs run on multiple partial disk stripes? You bet you’ll need additional tools (and a bit of luck).
Admittedly, NetApp doesn’t offer the number of monitoring tools offered by other storage vendors. This is a testament to the array’s architecture, as with ONTAP storage monitoring is performed on pooled resources such as disk, CPU, link, and port utilization. This design is a strength and not a weakness, as some may suggest.
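A toy sketch of the difference, using entirely hypothetical numbers (no real array or ONTAP API is involved): per-LUN monitoring produces a set of objects to watch that grows with every LUN carved, while pooled-resource monitoring watches a fixed handful of shared resources no matter how many VMs or volumes sit on top.

```python
# Hypothetical illustration: count of monitored objects under per-LUN
# monitoring versus pooled-resource monitoring. All figures are
# assumptions; no real array API is used here.

def per_lun_objects(lun_count, metrics_per_lun=4):
    # e.g., IOPS, latency, throughput, and queue depth per LUN
    return lun_count * metrics_per_lun

def pooled_objects(pools=2, metrics_per_pool=4):
    # e.g., disk, CPU, link, and port utilization per pooled resource
    return pools * metrics_per_pool

for luns in (50, 200, 800):
    print(f"{luns} LUNs -> {per_lun_objects(luns)} per-LUN series "
          f"vs {pooled_objects()} pooled series")
```

The pooled figure stays constant as the environment grows, which is the crux of why fewer monitoring tools are needed.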
We live in an abstract world
We have choices to consider as we continue growing our virtual data centers: deploy complex storage and surround it with gobs of management tools, clients, agents, etc… or deploy simple, virtualized, integrated storage technologies that save your budget and your staff’s time.
You’ve already made the decision to simplify operations by virtualizing your servers, so why not do the same with your storage arrays?
“Simplicity is the ultimate sophistication.” – Leonardo da Vinci