As some of you may know, if I’m not talking to someone (partner, customer, etc.) I’m probably listening to my iPhone. Besides my love for music (I’m a bass player), I listen to audiobooks, courses from The Teaching Company, and a half-dozen or so podcasts.
Today during my flight from NC to CA (I believe we were somewhere over Kentucky) the VMware Communities Roundtable podcast #85 started playing, and John Troyer’s featured guest in this episode was none other than one of my good friends, John Dodge! For those of you who’ve yet to meet John, he’s a Sr. Manager within VMware’s Desktop Specialist Consultants, and his team has the following charter:
- Provide delivery capabilities to field teams.
- Develop services IP for PSO field delivery, and share it with Practice Development for the purpose of releasing it to the entire partner community.
- Provide partner enablement, such as working with the largest SISOs on the development of DaaS Reference Architecture and GTM strategies.
- Provide thought leadership through a number of vehicles including PEX, VMworld, podcasts, and blogging.
BTW – I understand John’s team is currently expanding and will continue to do so over the next four quarters (I also understand they need additional NetApp experts).
Skip the Remainder of this Post and Listen to Episode #85
If you’re still reading, I’ll cover a few points around storage requirements with VMware View and all virtual desktop deployments…
I wholeheartedly agree when John states that desktop virtualization is more challenging than server virtualization, as it incorporates all of the architectural components of servers plus the complexities of managing desktops.
I’m always torn over how much of a technology to cover: should I discuss a solution holistically, or just cover the impact of storage specific to the solution? VMware View is such a rich topic of discussion that for this post I’ll focus only on storage. I made this decision because I want you to listen to John on the podcast.
2010: The Year of the Virtual Desktop
If you caught my ‘Chinwag’ with Mike Laverick you probably recall that NetApp is seeing a HUGE uptick in our virtual desktop deployments with VMware, Citrix, Microsoft, Quest, and more. As I stated earlier, storage is a very complicated challenge with desktop environments.
Virtualized desktops have a very different steady state compared to what one sees with virtualized servers. Whereas server workloads tend to be very predictable, desktops fluctuate massively between their steady state and storm events.
Many are surprised to learn that when one desktop’s IO load increases, there’s a high probability that a large number of other desktops are increasing their load at the same time. This environmental behavior is referred to as a storm event.
Storm events include:
- Boot storms (after a desktop refresh, recompose, or patch update)
- Logon storms (the 90 minutes each morning when 85% of the users log on and open their apps)
- Antivirus storms (where every file on a VMDK must be read)
- Logoff storms (the 90 minutes each afternoon when 85% of the users log off and close their apps)
- Shutdown storms (before a desktop refresh, recompose, or patch update)
Why Sizing/Financing Storage for Desktops is Difficult
While every virtual desktop deployment has a different steady state in terms of IOPS, John cites a common range of 4-8 IOPS per desktop, with storm events commonly ranging between 100-150 IOPS per desktop and some reaching 300 IOPS.
Storage costs can kill the financial model of adopting desktop virtualization.
If you need to provide 150 IOPS per desktop in order to ensure a quality user experience, then you may need to deploy one 15k FC drive for every two users. With SSD drives one could host more users per drive, as SSDs provide a significant increase in the number of read IOPS. However, SSDs aren’t exactly a perfect medium, as their write performance is commonly reported to be closer to that of SATA drives. Not to mention that SSD drives are nowhere near the density of today’s FC drives.
While I struggle to forecast the desktops-to-drive ratio available with SSDs, I am confident that the cost per GB per desktop of SSD and FC drives is probably very close (I’m comparing a few expensive SSD drives to several inexpensive FC drives).
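To see why storms, not steady state, drive the spindle count, here’s a rough back-of-the-envelope sketch in Python. The 180 IOPS per 15k FC spindle is my own hypothetical planning figure, not a number from the podcast; plug in whatever your array vendor quotes:

```python
import math

def spindles_needed(desktops: int, iops_per_desktop: float, iops_per_spindle: float) -> int:
    """Minimum number of spindles to satisfy an aggregate IOPS demand."""
    return math.ceil(desktops * iops_per_desktop / iops_per_spindle)

DESKTOPS = 1000
SPINDLE_IOPS = 180  # hypothetical planning figure for one 15k FC drive

# Mid-point of the 4-8 IOPS steady-state range vs. top of the common storm range
steady = spindles_needed(DESKTOPS, 6, SPINDLE_IOPS)
storm = spindles_needed(DESKTOPS, 150, SPINDLE_IOPS)

print(f"steady state: {steady} spindles")  # 34
print(f"storm event:  {storm} spindles")   # 834
```

Sizing for the storm means roughly 25x the spindles the steady state would suggest, which is exactly why storage can kill the financial model.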
Making Storage Cost-Effective, Dense, and ‘Burstable’
Storage for desktops includes the OS, applications, and user data. These data sets are stored separately in multiple virtual disks on multiple datastores. By separating this data, View can provide a persistent user experience while providing a means to dynamically update the OS and apps. Ideally, one would store the OS and apps on high-performance drives and user data on more cost-friendly drives.
Whether one provides storage savings via View Composer’s Linked Clones capabilities or via NetApp’s hardware-accelerated cloning with the Rapid Cloning Utility, pairing either with our Transparent Storage Cache Sharing allows for SSD-like read performance with cost-effective FC drives.
I just wrote about this capability in my two-part post on Transparent Storage Cache Sharing (see part 1 & part 2). In short, TSCS allows the controller to load a single copy of data into the array’s cache even though the object is presented as a unique, independent object outside of the array. For example, with View one would deploy a number of replicas from a desktop gold image (remember, VMware recommends 64 Linked Clones per replica).
Consider a 2,000-seat deployment of a 20 GB desktop. When deploying Linked Clones with traditional storage arrays, one would have 32 replicas stored on disk and in cache, consuming 640 GBs of array cache! 640 GBs of cache is a lot of cache! With NetApp’s TSCS, there is a single copy of the replica on disk and in the array’s cache, consuming only 20 GBs.
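The cache arithmetic above can be reproduced in a few lines of Python (the 64-clones-per-replica figure is the VMware recommendation cited earlier):

```python
import math

seats = 2000
image_gb = 20
clones_per_replica = 64  # VMware's recommended Linked Clones per replica

# Traditional array: every replica is an independent object on disk and in cache
replicas = math.ceil(seats / clones_per_replica)
traditional_cache_gb = replicas * image_gb

# TSCS: the array recognizes the duplicate blocks and caches one copy
tscs_cache_gb = image_gb

print(replicas, traditional_cache_gb, tscs_cache_gb)  # 32 640 20
```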
In addition, user data also generates a fair amount of I/O load, especially with Outlook OST files (which I also spoke about in part two of the TSCS post). As with OS and app images, TSCS allows one to ensure the user experience will be greater than what can be achieved with the same types and number of drives in a traditional storage array.
Virtualization Changes Everything
Recently one of my comrades in the storage and virtualization blogosphere challenged the sizing I provided in my View Express post. Specifically, my friend took issue with the 4-8 IOPS per desktop I used in the sizing. He claimed that a more real-world number would be 8-15 IOPS with bursts of up to 25 IOPS.
The dismissiveness of these statements reminded me of the Josh Billings quote, “It ain’t so much the things we don’t know that get us into trouble. It’s the things we know that just ain’t so.”
I’m sure the vendors of traditional, legacy storage architectures are frustrated. Storage virtualization technologies from NetApp deliver a higher-quality user experience by providing more IOPS and capacity per disk drive, at a lower cost per GB, than what is available from traditional, legacy array architectures.
I assure you the NetApp technologies that are so ideal for virtual desktops and View deployments work just as well for virtual server deployments.
John’s expertise around virtual desktops, storage requirements, insight into OS builds, and views on the future direction of View are all part of the podcast. Check it out, it’s great.