One of the things I love about my current role at NetApp (beyond the great culture, people, and technologies) is the amount of time I spend meeting with customers, prospects, partners, analysts, and the online community to discuss the topic of virtualization and how it has changed everything from IT architectures to corporate business models.
Over the past four to five years we have witnessed data centers change at a break-neck pace. During this evolution, the profile of storage technologies has risen: storage has gone from siloed black boxes once relegated to a small set of applications to a shared, global component of the cloud that is crucial to the success of the virtual infrastructure.
The architectures and the bread-and-butter capabilities of traditional storage arrays have changed significantly since virtualization became mainstream (which I’d suggest occurred in June 2006 with the release of VI3 from VMware).
Back then traditional storage arrays looked like:
- Various Distinct Storage Platforms were available for high-end SAN, mid-tier SAN, NAS gateways, and backup-only arrays, each delivering a unique set of capabilities
- Fibre Channel networks were considered by many as enterprise class connectivity
- RAID 10 was the norm for highly available, high performance applications
- RAID 5 was everywhere, as it provided cost-effective data protection for applications with moderate performance requirements
- Storage integration was commonly defined as completing a validation test for a particular application
- Copies of Data Sets, whether used for disk-to-disk backup or application test & development, were full 100% duplicates
It wasn’t too long ago that some of the traditional storage array vendors disagreed with and dismissed NetApp’s architectural design, along with our vision that storage arrays should provide their feature set and capabilities across every platform.
NetApp dared to pose the question, ‘Why should customers have to choose between a full-featured yet expensive high-end array and a mid-tier array with a crippled set of capabilities?’
By contrast the NetApp arrays of 2006 featured:
- Unified Storage Platform providing SAN, NAS, and backup capabilities. Hardware platforms differed only in scale; all delivered identical functionality
- Ethernet Networking was promoted as enterprise class (while we also supported Fibre Channel connectivity)
- RAID-DP provided highly available, high-performance data protection with a cost (RAID overhead) equal to, and often less than, RAID 5
- Storage integration was available for the administrators of Microsoft, SAP, Oracle, and IBM applications via SnapManager and SnapDrive. Admins could self provision, backup, restore, and create instant data set clones (with access granted by the storage admin)
- Storage Efficient snapshot based backups, zero cost clones of volumes and LUNs, and of course support for data deduplication.
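The RAID overhead claim above is simple arithmetic: overhead is just the fraction of raw disks consumed by redundancy. The sketch below illustrates it with hypothetical but typical group sizes (an 8-disk RAID 5 group and a 16-disk RAID-DP group are my assumptions for illustration, not vendor-mandated configurations):

```python
def raid_overhead(total_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity consumed by redundancy."""
    return parity_disks / total_disks

# RAID 10: every disk is mirrored, so half the raw capacity is redundancy.
raid10 = raid_overhead(total_disks=16, parity_disks=8)        # 0.50
# RAID 5 (7 data + 1 parity): one parity disk per 8-disk group.
raid5 = raid_overhead(total_disks=8, parity_disks=1)          # 0.125
# RAID-DP (14 data + 2 parity): two parity disks per 16-disk group,
# surviving a double disk failure at the same overhead as RAID 5 above.
raid_dp = raid_overhead(total_disks=16, parity_disks=2)       # 0.125
# Wider RAID-DP groups push overhead below RAID 5 (e.g., 26 data + 2 parity).
raid_dp_wide = raid_overhead(total_disks=28, parity_disks=2)  # ~0.071

print(f"RAID 10: {raid10:.1%}, RAID 5: {raid5:.1%}, "
      f"RAID-DP: {raid_dp:.1%}, wide RAID-DP: {raid_dp_wide:.1%}")
```

At these group sizes, double-parity protection costs no more raw capacity than single-parity RAID 5, which is the point of the bullet above.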
I think it’s fair to say that NetApp’s leadership was keenly aware that storage requirements for the next-generation data center would be radically different from the norm of the shared storage arrays of the 1990s and early 2000s.
Fast Forward to Present Day 2011
Who would have expected the data center requirements of 2011 to be what they are today? With the advent of virtualization and cloud, the market has spoken: traditional legacy storage arrays, complete with a limited set of functionality, have gone the way of the dodo bird.
Shared virtual infrastructures designed to support cloud services require storage platforms with the following capabilities:
- Unified Storage Architectures that provide identical functionality across all array platforms
- Converged Ethernet I/O is required for unified communications
- Data Protection in the form of RAID must excel in performance and reliability, while being cost effective
- Simultaneous SAN & NAS Access to the same set of physical disks, which eliminates dead spots in disk pools
- Virtual Storage Tiering provides I/O burst-ability without having to relocate data
- Unified Storage Efficiencies of deduplication, compression, thin provisioning, space reclamation, snapshot backups, and zero-cost clones for production use with both SAN & NAS data sets.
- Beyond Disk-to-Disk Backup – backup copies should also provide DR capabilities. Why have multiple offline copies of data, one for each need?
- Scale-out NAS – it’s the future, as it provides universal access, enabling storage as a networked service by decoupling access from the host/hypervisor
- Integrated Functionality Everywhere, spanning hypervisors, backup applications, and orchestration and data management suites. Soon there will be too much data to be able to complete a ‘migration’ without integration.
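The storage-efficiency bullet above, deduplication in particular, boils down to storing each unique block once and letting logical copies share it. Here is a minimal, hypothetical sketch of block-level dedupe (a toy model, not ONTAP’s actual implementation; the block size and fingerprinting scheme are assumptions for illustration):

```python
import hashlib

def dedupe(blocks: list[bytes]) -> tuple[dict[str, bytes], list[str]]:
    """Store each unique block once, keyed by its fingerprint;
    the logical volume becomes a list of fingerprint references."""
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # physically write the block only if unseen
        refs.append(fp)              # every logical block keeps a reference
    return store, refs

# Ten "4 KB" logical blocks with only three unique patterns,
# as you might see across cloned VM images.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096] * 3 + [b"C" * 4096]
store, refs = dedupe(blocks)
saved = 1 - len(store) / len(refs)
print(f"{len(refs)} logical blocks, {len(store)} stored -> {saved:.0%} saved")
```

With highly redundant data sets such as virtual machine images, this is why dedupe (and its cousin, zero-cost cloning, which creates the references up front) pays off so dramatically in shared virtual infrastructures.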
Virtualization Changes Everything, Including Storage!
I’d like to thank Val Bercovici for the following data points. I think they paint a clear picture of NetApp’s growth in the storage market and leadership position with cloud deployments.
Unified Leadership…
- There are over 180,000 NetApp Unified Arrays shipped
- In a little over a year, we shipped nearly 3PB of Unified Solid-State FlashCache in an optimal Virtual Storage Tier configuration
- NetApp leads the high-performance Unified Primary Dedupe market with over 87,000 deduplication customers in production deployed across nearly 40,000 Unified Systems
- NetApp revenue grew 54.9% since last year while EMC grew 28.3% (IDC)
- NetApp gained 2.8 percentage points in market share as storage efficiency resonates with end-users facing budget and data center space, power & cooling constraints (Gartner)
- NetApp is positioned in the Leaders Quadrant in Gartner’s Modular Mid-Range & High-End Storage System, Storage Resource Management and SAN Management Software Magic Quadrants (Gartner)
Go with a Leader
Some in the storage industry are preparing for a major new product launch, one which will introduce their next-generation storage arrays, and from what I understand these include many claims of NetApp-like features.
If we use history as our guide (examples 1, 2, & 3), then I expect we will see these new systems ship with a set of half-unified capabilities. I’m confident the actual capabilities, complete with caveats, won’t be too hard to discern.
EMC is Beginning to Sound Like a Broken Record
I’d suggest that these new platforms will sound eerily like what NetApp has been shipping for years. Don’t take my word for it; here’s what a few outside of NetApp are saying…
Is EMC leading or following NetApp?
Dave Raffo, Senior News Director | SearchStorage.com
—
Enrico Signoretti | Founder and CEO of Cinetica
—
EMC follows NetApp’s lead with VNX line
Chris Mellor | Channel Register
EMC is the current market-share leader in the storage industry, and yet it sees a need to change direction: to leave the traditional legacy storage array architecture behind and to follow NetApp. Judging by its actions this seems obvious; however, with all of this change I can’t help but consider what EMC may look like in the future. Is it possible that it may devolve into a software-only company?
Time will tell.
Wrapping Up This Post
Imitation is the sincerest form of flattery, but who wants an imitation when they can have The Original Unified Storage Platform?
Interesting overview & analysis. NetApp has indeed been a true innovator. What I’m missing is something about data protection beyond RAID. Many agree that RAID will not be able to provide sufficient availability & reliability levels. Erasure coding and dispersed/distributed storage are growing in popularity. What’s your view on the future of RAID – how much longer will it be around?
One of the best parts about being a NetApp customer is seeing the light that NetApp is shining; it also inspires an almost grassroots urge to grab everyone else by the wrists and drag them along for the ride as well.
I’ve done that several times with a few non-NetApp users, and it was amazing to see their eyes light up and the smiles come across their faces when the light bulb went on and they “got it.”