Last week I presented our portfolio of desktop virtualization solutions to a group of Cisco engineers and account managers. These solutions are built on FlexPod, include desktop virtualization technologies from VMware and Citrix, ship with automated configuration instructions, and each is certified as a Cisco Validated Design (CVD). In short, we are working together to deliver integrated solutions intended to address the broadest set of IT and user requirements.
Today I want to share with you some of the data presented during this call, data that is truly amazing and that clearly demonstrates why NetApp is the market leader in providing storage for virtual desktop solutions with Cisco, VMware, and Citrix.
Allow me to level-set the discussion before going deeper.
In terms of I/O load, desktop virtualization has two distinct operational modes that I’ll refer to as ‘Steady State’ and ‘I/O Storms’.
- Steady State is the day-to-day operational usage, which on a per-desktop basis constitutes a relatively light load, commonly ranging from 16-32 IOPS with a 20% : 80% read-to-write ratio. At scale these I/O requirements can be significant; however, they are easily addressed by most storage arrays with cost-effective storage footprints.
- I/O Storms are the recurring, sharp peaks in demand for server and storage resources that occur simultaneously across hundreds or thousands of virtual desktops. There are a number of different types of I/O Storm events, including boot (which includes refresh and recompose), log-on, log-off, antivirus scans, shutdowns, etc. On a per-desktop basis this is a relatively heavy load; a boot storm, for example, has a 95% : 5% read-to-write ratio. At scale these I/O requirements can bring a storage controller to its knees, resulting in the loss of access to tens or hundreds of virtual desktops. The sketch after this list puts rough numbers on the difference.
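To make the contrast concrete, here is a back-of-the-envelope sketch. The Steady State rate and both read-to-write mixes come from the list above; the 200 IOPS per-desktop boot-storm rate is my own hypothetical figure for illustration, not a measured value.

```python
# Aggregate I/O at 1,000 desktops: Steady State vs. a boot storm.
# Steady State figures and both read/write mixes come from the post;
# the per-desktop boot-storm rate is a hypothetical assumption.

DESKTOPS = 1_000

steady_iops_per_desktop = 24      # midpoint of the 16-32 IOPS range
steady_read_ratio = 0.20          # 20% read / 80% write

storm_iops_per_desktop = 200      # ASSUMED for illustration only
storm_read_ratio = 0.95           # 95% read / 5% write

def aggregate(iops_per_desktop, read_ratio, desktops=DESKTOPS):
    """Return total, read, and write IOPS across all desktops."""
    total = iops_per_desktop * desktops
    return total, total * read_ratio, total * (1 - read_ratio)

for label, per_vm, mix in [("Steady State", steady_iops_per_desktop, steady_read_ratio),
                           ("boot storm", storm_iops_per_desktop, storm_read_ratio)]:
    total, reads, writes = aggregate(per_vm, mix)
    print(f"{label:>12}: {total:>7,.0f} IOPS ({reads:,.0f} read / {writes:,.0f} write)")
```

Under these assumptions the storm demands roughly eight times the aggregate IOPS of Steady State, and the mix flips from write-heavy to almost entirely reads.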
In short, desktop virtualization solutions require the most cost-effective storage infrastructure with the densest storage footprint, one that also provides advanced storage technologies, in the form of storage tiering, to meet the extreme load of an I/O Storm event and the Steady State activity that is sure to follow.
One Way
One way to address these demands, regardless of the storage infrastructure and its efficiencies, is to size the environment for the worst possible I/O Storm and the highest peak of Steady State activity. You should be saying, “Vaughn, that is crazy, highly inefficient, and a significant waste of your available resources.” Well, you’re right, and as such storage companies have created solutions and approaches that enable more efficient responses to these periods of I/O demand – some good, some not so good and expensive (based on today’s technology costs). The sketch below shows just how wasteful sizing for the peak can be.
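A minimal sketch of that waste, reusing the aggregate figures from the previous snippet and assuming roughly 175 random IOPS per 15K SAS spindle (a common rule of thumb; RAID write penalties and controller cache are ignored here):

```python
# Spindle count if you size for the storm vs. for Steady State.
# 175 IOPS/drive is an assumed rule of thumb, not a vendor spec.
import math

SAS_IOPS_PER_DRIVE = 175

steady_total_iops = 24_000    # 1,000 desktops x 24 IOPS (from the post)
storm_total_iops = 200_000    # 1,000 desktops x 200 IOPS (assumed above)

steady_drives = math.ceil(steady_total_iops / SAS_IOPS_PER_DRIVE)
storm_drives = math.ceil(storm_total_iops / SAS_IOPS_PER_DRIVE)

print(f"drives needed for Steady State:   {steady_drives}")
print(f"drives needed if sized for storm: {storm_drives}")
print(f"spindles idle most of the day:    {storm_drives - steady_drives} "
      f"({1 - steady_drives / storm_drives:.0%})")
```

Under these assumptions, sizing for the storm leaves nearly 90% of the spindles idle during normal operations.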
The Traditional Storage Array’s Way
One of the typical ways I’ve seen traditional storage vendors provide storage tiering is to move the hot data onto higher-performing disk drives such as solid state drives (SSDs), or what some call enterprise flash drives (EFDs). Their argument is that the hot data constitutes the operating system data, with the user data placed on a lower-performing tier. This ensures that the peak loads associated with the client operating system are served from the best possible media in the stack. The benefit of this design is the ability of the SSD media to deliver a high level of read Input/Output Operations Per Second (IOPS) in periods of peak activity. There are, of course, added benefits in separating the “not-so-important” operating system from the “more-important” user data; that is not for me to cover, and I am sure those traditional storage array vendors have their own best practices around it (again, that is one approach).
The use of SSDs is a rather direct means of addressing the I/O requirements of I/O Storm events, though I am not so sure it’s the right approach for Steady State. The downsides to this design are that the underlying tiering software does not provide instant relief to the load, and that the high cost of SSDs (still running about 18 times the price of a SAS drive) makes it a costly solution to implement. The sketch below puts a rough number on that cost gap.
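To see why 18x the drive price stings, here is a rough $/GB comparison. The 18x multiple comes from the paragraph above; the SAS price and both drive capacities are hypothetical circa-2011 figures, not quotes.

```python
# Rough cost-per-gigabyte comparison of SSD vs. 15K SAS.
# Only the 18x price multiple comes from the post; the rest is assumed.

sas_price, sas_gb = 300.0, 600            # hypothetical 600GB 15K SAS drive
ssd_price, ssd_gb = sas_price * 18, 200   # "about 18 times the price", 200GB EFD

sas_cost_per_gb = sas_price / sas_gb
ssd_cost_per_gb = ssd_price / ssd_gb

print(f"SAS: ${sas_cost_per_gb:.2f}/GB")
print(f"SSD: ${ssd_cost_per_gb:.2f}/GB ({ssd_cost_per_gb / sas_cost_per_gb:.0f}x per GB)")
```

SSDs typically fare better on a cost-per-IOPS basis, but an OS tier is sized by capacity as well as by IOPS, and on capacity the gap is enormous.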
Traditional Storage Array SSD model in the storage architecture when considering VMware View 4.5
Now let me be very fair – this SSD model can significantly increase the performance of an array in servicing the I/O load (or challenge) of a boot storm, as shown in the image below.
The NetApp Way
A more advanced way to provide storage tiering is the Virtual Storage Tiering (VST) capability delivered by NetApp’s FlashCache technology.
As you can see from the image above, our virtual storage tiering provides near bus-level access to hot data: the FlashCache design means that blocks of data can be cached immediately into optimized memory. And because FlashCache leverages the same deduplication technology used throughout the NetApp portfolio, it scales to meet the I/O demands of data sets beyond the capacity of the array’s read cache in both I/O Storms and Steady State events. The toy model below illustrates why a deduplication-aware cache stretches so far with cloned desktops.
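Here is a toy model of that effect. It is my own illustration of the general idea, not NetApp’s actual FlashCache implementation, and every figure in it is an assumption: when hundreds of clones share identical OS blocks, a deduplication-aware cache holds one copy of each shared block and serves every clone from it.

```python
# Toy model: cache footprint of 1,000 cloned desktops, per-VM caching
# vs. a deduplication-aware cache. Illustration only; all figures are
# assumptions, and this is not NetApp's actual implementation.

CLONES = 1_000
OS_BLOCKS = 250_000        # ~1GB hot OS working set in 4KB blocks (assumed)
SHARED_FRACTION = 0.95     # fraction of OS blocks identical across clones (assumed)

# Naive cache: every clone's working set occupies its own cache slots.
naive_slots = CLONES * OS_BLOCKS

# Dedup-aware cache: shared blocks are cached once; unique blocks per clone.
shared_slots = int(OS_BLOCKS * SHARED_FRACTION)
unique_slots = CLONES * int(OS_BLOCKS * (1 - SHARED_FRACTION))
dedup_slots = shared_slots + unique_slots

print(f"per-VM cache footprint:      {naive_slots:>13,} blocks")
print(f"dedup-aware cache footprint: {dedup_slots:>13,} blocks "
      f"({naive_slots / dedup_slots:.0f}x denser)")
```

Under these assumptions the deduplication-aware cache covers the same working set in roughly one twentieth of the space, which is why a single cache card can absorb a boot storm across an entire pool of clones.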
Our VST technology provides extreme read IOPS for I/O Storm events in an elegant and immediate fashion. The downside to this design is that you have to add FlashCache to your NetApp array; however, these cards cost significantly less than a single SSD drive. So maybe I misspoke, and in reality there isn’t really a downside to the NetApp Way.
The NetApp VST model in the storage architecture when considering VMware View 4.5 with NetApp hardware-accelerated VM clones.
The NetApp VST model can significantly increase the performance of an array in servicing the I/O load (or challenge) of a boot storm or a Steady State event.
These results were obtained on a FlexPod leveraging a mid-tier FAS3170. The storage technology that delivers this capability and performance is a fundamental component of our array architecture, one we have been developing specifically for virtual infrastructure over the past five years.
Unreal Results
The NetApp VST design provides greater performance and capability than the most advanced storage tiering offered by traditional storage arrays. I believe you’ll agree the NetApp results are truly extraordinary!
- 25% fewer disk drives (24 vs. 32)
- 25% more desktops (1,250 vs. 1,000)
- 11% less time to boot (40 minutes vs. 45)
- 50% more data per desktop (287MB with Win7 vs. 192MB with WinXP)
- 87% more data transferred (359GB vs. 192GB)
(the above data was updated on 6/6/2011 to correct an error in the original post)
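For the curious, those deltas fall straight out of the raw numbers; the arithmetic is reproduced below (nothing assumed):

```python
# Recomputing the deltas from the raw figures quoted above.
print(f"disk drives:    {1 - 24 / 32:.1%} fewer (24 vs. 32)")
print(f"desktops:       {1250 / 1000 - 1:.1%} more (1,250 vs. 1,000)")
print(f"time to boot:   {1 - 40 / 45:.1%} less (40 min vs. 45)")
print(f"data/desktop:   {287 / 192 - 1:.1%} more (287MB vs. 192MB)")
print(f"data moved:     {359 / 192 - 1:.1%} more (359GB vs. 192GB)")
```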
Wrapping Up
It would be remiss of me not to highlight the role Cisco’s Unified Computing and Unified Networking played in delivering these stellar results. While one has many options to consider when selecting a provider of virtual desktop infrastructure, Cisco and NetApp are leading the industry in providing dense datacenter architectures designed specifically for VMware View and Citrix XenDesktop. You can be assured that when you select a FlexPod based solution powered by VMware or Citrix, your desktop architecture will deliver unmatched data center density, performance, and availability.
If you’re considering a desktop virtualization strategy, may I suggest you consider a FlexPod based solution from Cisco and NetApp? You won’t find another hardware architecture more ideally suited to this workload.
Solution URLs
Cisco & NetApp Desktop Virtualization Solution with VMware View
Cisco & NetApp Desktop Virtualization Solution with Citrix XenDesktop
Citrix XenDesktop on VMware vSphere on Cisco Unified Computing System with NetApp Unified Storage
Citrix XenDesktop on XenServer on Cisco Unified Computing System with NetApp Unified Storage
VMware View 4.5 on Cisco Unified Computing System and NetApp Storage
I do have a question…
Why would EMC publish a report on desktop virtualization for 2,250 desktops and only show the boot storm for 1,000 of them?
That point should immediately jump out at anyone reading that report. You can guess how long it would take to boot 2,250 desktops… and it ain’t 45 minutes… or 50 or 60 or 70 or 80… at 45 minutes per 1,000 desktops, a linear extrapolation puts it at over 1 1/2 hours.
Isn’t it unrealistic to be booting 2,250 desktops simultaneously? Wouldn’t you stagger booting to happen outside of business hours and in increments of a few hundred at a time?
@James – Great question. I would suggest that one needs to consider scale and the capabilities of the desktop software to deploy pools of desktops.
I think we agree that one could micromanage a small number of desktops. I challenge anyone’s ability to micromanage thousands or tens of thousands of seats, as doing so would require a significant increase in the number of points to manage.
Operational nightmare.
Vaughn, it was a great session for the Cisco team! Thanks for all the updates.
Correct me if I’m wrong, but it seems like you are referring to two different EMC technologies: Fully Automated Storage Tiering (FAST) and FAST Cache, which acts as an array cache expansion by simply adding EFDs with no downtime of the array. FAST tiering occurs on a scheduled basis to move hot data to high-performance drives (and vice versa for cold data), while FAST Cache is globally available to the entire array and acts in real time to absorb peaks such as VDI boot storms. Your chart is demonstrating the effects of FAST Cache, not FAST.
@JeremyKeen – Good catch. I will revise the post for accuracy around the data EMC has provided as validation of their tiering technology when used with VMware View.
I do not claim to understand why they would design such an architecture; however, I do believe I have demonstrated there is a better way to meet the storage requirements of a virtualized desktop environment.
Thanks again