Simple Versus Complicated – Verified


One day after sharing my customer engagement in ‘Storage for Desktops – Simple Versus Complicated’, where we discussed provisioning, I was informed of a post from another storage vendor that verified my premise.

While my post discussed 500 desktops with VMware View and the latter covered 1,000 with Citrix XenDesktop, the storage complexities I described were confirmed by a team lead at that storage vendor.


I don’t understand the need for this level of rigidity and complexity. Why would anyone want the following attributes from their storage platform?

  • A requirement to provision physical RAID groups and map each to a function in the solution
  • No sharing of capacity or IOPS between the RAID groups
  • No dynamic resizing of the capacity in a RAID group if it is sized incorrectly
  • SSD to compensate for the slow performance of the spinning media
  • The increased financial cost of SSD drives (still roughly a 20× price premium per GB)
  • The increased CPU cost of processing the separate RAID calculations for each group
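The core problem with dedicated RAID groups is that each one must be sized for its own peak, with no borrowing between groups. A minimal sketch of that sizing math, using entirely hypothetical VDI workload numbers and an assumed concurrency factor, shows why a shared pool provisions less:

```python
# Hypothetical comparison: dedicated RAID groups vs. a shared pool.
# Per-group peaks and the 0.7 concurrency factor are illustrative
# assumptions, not measurements from either vendor's design.

def dedicated_total(workloads):
    """Each RAID group is sized in isolation for its own peak IOPS."""
    return sum(w["peak_iops"] for w in workloads)

def pooled_total(workloads, concurrency=0.7):
    """A shared pool is sized for the combined peak; since per-workload
    peaks rarely coincide, an assumed concurrency factor (< 1.0) applies."""
    return dedicated_total(workloads) * concurrency

# Hypothetical VDI functions that a rigid design maps to separate groups
workloads = [
    {"name": "replica",       "peak_iops": 4000},
    {"name": "linked_clones", "peak_iops": 9000},
    {"name": "user_data",     "peak_iops": 3000},
]

print(dedicated_total(workloads))  # 16000 IOPS provisioned across groups
print(pooled_total(workloads))     # 11200.0 IOPS provisioned in one pool
```

The same stranding argument applies to capacity: a group sized incorrectly cannot donate its free space to a neighbor, while a pool absorbs the error.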

I still believe the NetApp model is much simpler and more efficient.


Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing“.



2 Comments

  1. Working with another vendor, I can tell you that I appreciate your design, and I can use it for my deployments too, but I understand why sometimes you separate pools with different RAID levels and don’t mix all disks into a single pool: if you have SLA requirements, or if you simply don’t want I/O contention in one datastore to affect performance on another, it can be useful to make a separation.
    In general I use your approach, but I am often asked to make sure that, for example, a VDI implementation doesn’t suffer from I/O contention (or vice versa, for that matter), so some degree of separation is needed if you need to guarantee performance no matter what, because in your design performance is very high at the beginning, but the more VMs you add, the more all VMs are affected.
    That’s just my opinion, of course.

  2. Boring.
    I really wish that you and EMC would stop the flame wars. Your posts (and EMC’s too) are as much about the negative aspects of the other (be it truth or FUD, it doesn’t matter) as they are about the tech.
    Every post follows the format “Blah, blah, blah, EMC BAD, blah, blah, blah…” or vice versa, depending on who’s posting.
    To someone interested in the technology, you all come across as spoilt little kids who refuse to play nice (or just ignore each other if required) in the playground.
    Can we just listen, and be educated, on the merits of each technology? We are quite able to make decisions by ourselves, without the constant online spats.

