Last week I presented our portfolio of desktop virtualization solutions to a group of Cisco engineers and account managers. These solutions are built on FlexPod, include desktop virtualization technologies from VMware and Citrix, come with automated configuration instructions, and are each certified as a Cisco Validated Design (CVD). In short, we are working together to deliver integrated solutions intended to address the broadest set of IT and user requirements.
Today I want to share with you some of the data presented during this call that is truly amazing and clearly demonstrates why NetApp is the market leader in providing storage for virtual desktop solutions with Cisco, VMware, and Citrix.
Allow me to level-set the discussion before going deeper.
In terms of I/O load, desktop virtualization has two distinct operational modes that I’ll refer to as ‘Steady State’ and ‘I/O Storms’.
- Steady State is the day-to-day operational usage, which on a per-desktop basis constitutes a relatively light load, commonly ranging from 16-32 IOPS with a 20% : 80% read-to-write ratio. At scale these I/O requirements can be significant; however, they are easily addressed by most storage arrays with cost-effective storage footprints.
- I/O Storms are the recurring and sharp peaks in demand for server and storage resources that occur simultaneously on hundreds or thousands of virtual desktops. There are a number of different types of I/O Storm events, including boot (which includes refresh & recompose), log-on, log-off, antivirus scans, shutdowns, etc. On a per-desktop basis this is a relatively heavy load; for example, a boot storm has a 95% : 5% read-to-write ratio. At scale these I/O requirements can bring a storage controller to its knees, resulting in the loss of access to tens or hundreds of virtual desktops.
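The difference between the two modes becomes vivid at scale. Here's a back-of-the-envelope sketch in plain Python; the read/write ratios and steady-state IOPS come from the figures above, but the 150 IOPS-per-desktop storm load is an illustrative assumption of mine, not a measured value:

```python
def aggregate_iops(desktops, iops_per_desktop, read_pct):
    """Total front-end IOPS for a pool of desktops, split by read/write ratio."""
    total = desktops * iops_per_desktop
    return {"total": total,
            "read": total * read_pct,
            "write": total * (1 - read_pct)}

# Steady State: 16-32 IOPS per desktop, 20% : 80% read-to-write
print(aggregate_iops(1000, 32, 0.20))

# I/O Storm (boot): 95% : 5% read-to-write; 150 IOPS per desktop
# is a hypothetical illustration, not a measured figure
print(aggregate_iops(1000, 150, 0.95))
```

Note how the workload flips: steady state is write-heavy at a modest aggregate, while a boot storm is an almost purely read burst several times larger.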
In short, desktop virtualization solutions require the most cost-effective storage infrastructure, with the densest storage footprints, that also provides advanced storage technologies in the form of storage tiering to meet both the extreme load of an ‘I/O Storm’ event and the ‘Steady State’ that is sure to follow.
One way to avoid these demands, regardless of the storage infrastructure and its efficiencies, is to size the environment for the worst possible I/O Storm and the highest peak activity during Steady State. You should be saying, “Vaughn, that is crazy: highly inefficient and a significant waste of your available resources.” Well, you’re right, and as such storage companies have created solutions and approaches that enable more efficient responses to these periods of I/O demand – some good, some not so good and expensive (based on today’s technology costs).
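To put rough numbers on that waste, consider how many spindles you'd have to buy if you sized purely for the storm peak. The 175 IOPS-per-spindle rule of thumb and the 150 IOPS-per-desktop storm load below are illustrative assumptions of mine, not vendor figures:

```python
import math

def spindles_needed(total_iops, iops_per_spindle=175):
    """Disks required to serve a given aggregate IOPS load.

    175 IOPS per 15K spindle is an assumed rule of thumb,
    used here only to illustrate the sizing gap."""
    return math.ceil(total_iops / iops_per_spindle)

desktops = 1000
for_steady = spindles_needed(desktops * 32)    # high end of steady state
for_storm = spindles_needed(desktops * 150)    # hypothetical storm load
print(for_steady, for_storm)  # → 183 858
```

Under these assumptions, sizing for the storm means buying more than four times the disk, most of which sits idle outside those brief peaks.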
The Traditional Storage Array’s Way
One of the typical ways I’ve seen traditional storage vendors provide storage tiering is to move the hot data onto higher-performing disk drives like solid state drives (SSDs), or what some call enterprise flash drives. They would say that the hot data constitutes the operating system data, with the user data being placed on a lower-performing tier. This ensures that the peak loads associated with the client operating system are served by the best possible media in the stack. The benefit of this design is the ability of the SSD media to provide a high level of read Input/Output operations Per Second (IOPS) in those periods of peak activity. There are, of course, added benefits in the separation of the “not-so-important” operating system from the “more-important” user data – not for me to cover here; I am sure those traditional storage array vendors have their own best practices around this (again, that is one approach).
The use of SSDs is a rather direct means to address the I/O requirements of I/O Storm events, though I am not so sure it’s the right approach for Steady State. The downside to this design is twofold: the underlying tiering software does not provide instant relief to the load, and the high cost of SSDs (still running about 18 times the price of a SAS drive) makes it a costly solution to implement.
Traditional Storage Array SSD model in the storage architecture when considering VMware View 4.5
Now let me be very fair – this SSD model can significantly increase the performance of an array in servicing the I/O load (or challenge) of a boot storm as shown in the image below.
The NetApp Way
A more advanced way to provide storage tiering is the Virtual Storage Tiering (VST) delivered by NetApp’s FlashCache technology.
As you can see from the image above, our virtual storage tiering provides near bus-level access to hot data: NetApp’s FlashCache design means that blocks of data can be immediately cached into optimized memory. And because FlashCache uses the same deduplication technology used throughout the NetApp portfolio, it scales to meet the I/O demands of data sets beyond the capacity of the array’s read cache, in both I/O Storms and Steady State events.
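To see why deduplication multiplies the effective reach of a cache, consider that desktops cloned from a common image share identical OS blocks, so a dedup-aware cache stores one copy and serves every desktop from it. A toy model in Python (the block counts, the 5% unique ratio, and the cache size are all hypothetical numbers of mine, not NetApp specifications):

```python
def cache_hit_fraction(desktops, os_blocks, unique_ratio, cache_slots):
    """Toy model: fraction of a boot storm's OS reads servable from cache.

    With dedup, only the unique blocks of the shared image compete for
    cache slots; without it, every desktop's full copy does.
    Illustrative only -- real caching behavior is far more nuanced.
    """
    dedup_unique = int(os_blocks * unique_ratio)   # shared image blocks
    raw_copies = os_blocks * desktops              # one full copy per desktop
    hit_with_dedup = min(1.0, cache_slots / dedup_unique)
    hit_without = min(1.0, cache_slots / raw_copies)
    return hit_with_dedup, hit_without

# 1,000 desktops, 100k OS blocks each, 5% unique after dedup,
# cache holding 50k blocks (all numbers hypothetical)
with_d, without_d = cache_hit_fraction(1000, 100_000, 0.05, 50_000)
print(with_d, without_d)  # → 1.0 0.0005
```

In this toy model the deduplicated working set fits entirely in cache, while the undeduplicated one barely registers – which is the intuition behind cache plus dedup absorbing a boot storm.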
Our VST technology provides extreme read IOPS for I/O Storm events in an elegant and immediate fashion. The downside to this design is that you have to add FlashCache to your NetApp array; however, these cards cost significantly less than a single SSD drive. So maybe I misspoke, and in reality there isn’t really a downside to the NetApp Way.
The NetApp VST model in the storage architecture when considering VMware View 4.5 with NetApp hardware-accelerated VM clones.
The NetApp VST model can significantly increase the performance of an array in servicing the I/O load (or challenge) of a boot storm or steady state event.
These results were obtained on a FlexPod leveraging a mid-tier FAS3170. The storage technology that delivers this capability and performance is a fundamental component of our array architecture, which we have been developing specifically for virtual infrastructure over the past five years.
The NetApp VST design provides greater performance and capabilities than the most advanced storage tiering capabilities provided by traditional storage arrays. I believe you’ll agree the NetApp results are truly extraordinary!
- 25% fewer disk drives (24 vs. 32)
- 25% more desktops (1,250 vs. 1,000)
- 11% less time to boot (40 minutes vs. 45)
- 50% more data per desktop (287MB with Win7 vs. 192MB with WinXP)
- 87% more data transferred (359GB vs. 192GB)
(the above data was updated on 6/6/2011 to correct an error in the original post)
It would be remiss of me not to highlight the role Cisco’s Unified Computing and Unified Networking had in delivering these stellar results. The point I want to highlight is that while one has many options to consider when selecting a provider of virtual desktop infrastructure, Cisco and NetApp are leading the industry in providing dense datacenter architectures designed specifically for VMware View and Citrix XenDesktop. You can be assured that when you select a FlexPod-based solution powered by VMware or Citrix, your desktop architecture will deliver unmatched data center density, performance, and availability.
If you’re considering a desktop virtualization strategy, may I suggest you consider a FlexPod-based solution from Cisco and NetApp? You won’t find another hardware architecture more ideally suited for this workload.