Thoughts on 2011 – Technical Trends for Clouds


Recently, a number of us at NetApp were asked to share our expectations for 2011. The topics shared by NetApp thought leaders, from Val Bercovici to Jay Kidd, ran the gamut from global economic conditions to ITaaS offerings to object-oriented storage architectures. As one might expect, the thoughts I shared were a bit more technical in nature and focused on the integration of storage technologies with virtualization and cloud-based constructs.

Over the past several years, I’ve been providing this type of guidance (regarding product requirements, technical advancements, partner solutions, and the like) to the product managers and technical directors responsible for developing our virtualization-related solutions. After sharing my thoughts within NetApp, I thought you might be interested in some of the guidance I provided for 2011.

I will admit, as I write this post, that I am thrilled by the prospect of sharing these thoughts, receiving feedback, and eventually reviewing the accuracy of my “predictions.” Before I begin, I would like to stress that the views I share are my own; they are not to be considered official NetApp statements.

So with that disclaimer out of the way, here are my predictions for 2011 pertaining to how storage will advance within cloud & virtual infrastructures…

  1. Cloud architectures will standardize on Network Attached Storage (NAS) as the de facto form of storage access. In 2011, we should see a significant number of announcements, coupled with technology releases, that add support for NAS and in many cases actually recommend it as the preferred form of storage access. Consider this trend alongside the maturation of object-oriented storage as a means to scale massive cloud-based services, and NAS seems primed to be the storage infrastructure of tomorrow.
  2. Unified 10Gb Ethernet networking will increase in adoption, converging today’s disparate Ethernet and Fibre Channel networks. The short-term ROI, driven by reductions in port count and rack space, coupled with further technical advancements, will fuel this growth.
  3. The adoption of storage efficiency technologies like data deduplication with production data sets and applications will continue to increase. As I cited in my post on IDC’s forecast on data growth, in 2011 I expect to see application vendors formally endorse the use of such technologies as a means to reduce the total cost of their solutions. I also expect a new metric of success to emerge, one based on a “cost per IOPS” model (a worked example follows this list). This advancement will motivate cloud architects to deepen their knowledge of these technologies and their idiosyncrasies, with a focus on when and where it is appropriate to implement each.
  4. The trend established in 2010 will continue with the announcement of additional strategic alliances and relationships dedicated to prevalidated cloud-based architectures like FlexPod. The jury is in on this one; customers are in love with these offerings and are demanding more integration, simplicity, and standardization as a means to accelerate their private cloud adoptions and next-gen projects.
  5. Storage caching technologies, both hardware-based modular expansion units and enhanced forms of caching software, will begin to gain acceptance as a primary form of storage access (a minimal illustration follows this list). Caching technologies will continue to develop over the next few years, as they enable non-disruptive application migration and increased IO scaling within the hypervisor.
  6. I foresee an increase in the number of applications that integrate advanced storage array capabilities as a means of hardware offload, replacing constructs that historically were software based. Such integration enables solutions to scale to new limits, which is needed given the ever-increasing volume of data. Offloading data management functions to a storage platform that can provide these functions more efficiently looks to become the norm, and the vStorage APIs appear to be the start of a potentially popular trend (see the offload sketch after this list).
  7. Software-based Virtual Storage Arrays (VSAs) will advance in their adoption, capabilities, and points of integration. This technology will come of age, so to speak, and in doing so will blaze a trail of net-new solutions previously unobtainable due to the requirements of traditional shared storage arrays.
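
To make the “cost per IOPS” model from prediction 3 concrete, here is a minimal sketch in Python. Every dollar figure and IOPS number below is an illustrative assumption of mine, not a benchmark, a price list, or a NetApp figure:

```python
# Hypothetical "cost per IOPS" comparison: all figures below are
# illustrative assumptions, not vendor benchmarks or pricing.

def cost_per_iops(total_cost_usd: float, delivered_iops: float) -> float:
    """Dollars spent per IO operation per second delivered."""
    return total_cost_usd / delivered_iops

# Baseline array: raw capacity purchased for the full working set.
baseline = cost_per_iops(total_cost_usd=250_000, delivered_iops=50_000)

# Same workload with deduplication: a smaller purchase serves the same
# logical data, and more of the (deduplicated) working set fits in
# cache, lifting delivered IOPS.
with_dedup = cost_per_iops(total_cost_usd=175_000, delivered_iops=65_000)

print(f"baseline:   ${baseline:.2f} per IOPS")    # $5.00 per IOPS
print(f"with dedup: ${with_dedup:.2f} per IOPS")  # ~$2.69 per IOPS
```

The point of the metric is that efficiency technologies can move both the numerator (what you buy) and the denominator (what the system delivers), so comparing arrays on capacity alone understates their value.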
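
Prediction 5 rests on a simple mechanism: a cache that answers hot reads locally and touches the backing array only on a miss. Here is a minimal LRU sketch of that read path; the ReadCache class and the fetch_from_array callback are placeholders of my own invention, not any vendor’s interface:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read-cache sketch: serve hot blocks from local
    memory/flash and fall back to the backing array on a miss."""

    def __init__(self, capacity_blocks, fetch_from_array):
        self.capacity = capacity_blocks
        self.fetch = fetch_from_array   # invoked only on a cache miss
        self.blocks = OrderedDict()     # block_addr -> data, in LRU order

    def read(self, block_addr):
        if block_addr in self.blocks:
            self.blocks.move_to_end(block_addr)  # refresh recency
            return self.blocks[block_addr]       # hit: no array IO
        data = self.fetch(block_addr)            # miss: one array IO
        self.blocks[block_addr] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least-recent block
        return data
```

The closer this layer sits to the application (or the hypervisor), the more IO it can absorb before the request ever reaches the array, which is what makes caching plausible as a primary form of storage access.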
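
And for prediction 6, a conceptual sketch contrasting a host-mediated copy with an array-offloaded one, in the spirit of the vStorage APIs’ full-copy primitive. The InMemoryLun and Array classes are hypothetical stand-ins used only to show where the data flows, not a real API:

```python
# Conceptual hardware-offload sketch. All classes here are toy stand-ins.

BLOCK = 1 << 20  # 1 MiB transfer size for the host-mediated path

class InMemoryLun:
    """Toy LUN backed by a bytearray."""
    def __init__(self, size):
        self.data = bytearray(size)

    def read(self, offset, length):
        return self.data[offset:offset + length]

    def write(self, offset, chunk):
        self.data[offset:offset + len(chunk)] = chunk

def copy_via_host(src, dst, length):
    """Traditional software path: every byte crosses the host twice,
    once as a read from the array and again as a write back to it."""
    for offset in range(0, length, BLOCK):
        chunk = src.read(offset, min(BLOCK, length - offset))
        dst.write(offset, chunk)

class Array:
    """Hypothetical array that clones extents internally, so the host
    issues one command instead of shuttling the payload itself."""
    def clone_extent(self, src, dst, offset, length):
        dst.data[offset:offset + length] = src.data[offset:offset + length]

def copy_via_array(array, src, dst, length):
    """Offloaded path: one request; the array moves the data internally,
    freeing host CPU, memory bandwidth, and fabric links."""
    array.clone_extent(src, dst, offset=0, length=length)
```

Functionally the two paths produce the same result; the difference is that the offloaded path consumes one command’s worth of host and fabric resources instead of moving the full payload through the host twice.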

At this point, the floor is yours. I encourage you to share your thoughts; nothing is off limits. I eagerly await the criticism, skepticism, agreement, augmentation, and advancement of these views. In twelve months, we will revisit this thread and review the accuracy of our thoughts. Until then, thanks for reading and for giving these ideas a moment or two of thought.

Cheers!


Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.

2 Comments

  1. I see storage vendors making a push to do for traditional storage arrays what VMware did for physical servers.
    I hope to see a complete abstraction of logical storage containers from being held down to any one controller, shelf stack, or array.
    I hope to be able to “right click, maintenance mode,” and similar to vMotion, have all of my data shuffled off to another “host/filer/controller” in my “cluster” of storage arrays, unbeknownst to the end user.
    I understand that it is limited by connection mediums between controllers and trad disk arrays/shelves (among other things), but I see this becoming abstracted VERY soon.
    With regard to NetApp specifically, DataMotion is cool, but take it to the next level and completely abstract my logical FlexVols from being tied down to a particular filer. Abstract my filers in a “cluster” completely from the dumb pool of disk. Give me a “vCenter” equivalent for my storage arrays and allow me to build logical clusters of filers/heads/controllers/etc., and allocate certain workloads (e.g. random vs. sequential) seamlessly without any real need for hard configuration.
    I see BIG pushes from all of the manufacturers to make something like this more of a reality, and if it already is, then it should be PoC’ed and made more public.
    -Nick

  2. Is this a real suggestion or a tongue-in-cheek look at NetApp’s business plan for 2011? You’re looking at the IT world through a very narrowly focused telescope here, Vaughn.
    If there’s something you should be considering as an emerging technology, it’s pNFS, something you’ve not mentioned at all. How about a more reasoned view of the market than just a look at where NetApp wants us all to go?
