The 4 major trends in enterprise storage in 2014


Has there ever been a more exciting time to be in IT? Seriously, the industry seems to have turned the volume on innovation up to 11 and there’s no stopping in sight. If we focus only on enterprise storage – a small portion of the industry – I think most agree that the 4 major trends that clearly crossed the chasm from emerging to mass adoption in 2014 were: hybrid storage arrays, host-based acceleration, hyper-converged architectures and all-flash arrays. Many within the industry view these 4 as the disruptors driving the decline in external disk array revenues.

Innovation overload?

With a market clearly in flux, it’s likely that your organization will be considering one or more of these technologies in 2015. So where do you begin?

It’s time we hit the pause button on tired storage talk tracks like performance or architecture and ask more actionable and quantifiable questions like ‘Where does each of these 4 clearly provide benefits?’ and ‘Where should I consider adopting each in 2015?’ I want to advocate that you prioritize innovation and consider products from an innovation leader, for without their advancements we would not be witnessing this disruption.


The many forms of hybrid storage: accelerate disk with flash

All storage architectures designed to increase the performance of a disk array by leveraging flash as an I/O acceleration cache are forms of hybrid storage. This shared storage DNA makes hybrid storage arrays, host-based acceleration and hyper-converged architectures all closely related.

Disk storage is renowned for exhausting its performance long before its capacity, leaving that capacity stranded. I/O acceleration is a trusted, tried-and-true means to raise the performance ceiling in a cost-effective manner. The formula is simple: store the active dataset in high-performance flash and performance gains follow.
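
As a rough illustration of why this formula works, here’s a minimal sketch of an LRU read cache fronting a disk tier. The capacities and the 90/10 access skew are assumptions chosen for illustration, not any vendor’s actual caching algorithm.

```python
import random
from collections import OrderedDict

# Minimal sketch: an LRU read cache standing in for a flash tier in front of disk.
# The sizes and the 90/10 access skew below are illustrative assumptions.
DISK_BLOCKS = 100_000      # total blocks on the disk tier
CACHE_BLOCKS = 10_000      # flash cache sized at ~10% of capacity
HOT_SET = 10_000           # "active dataset": 10% of blocks receive 90% of reads

cache = OrderedDict()      # block id -> True, ordered by recency

def read(block):
    """Return 'flash' on a cache hit, 'disk' on a miss (then cache the block)."""
    if block in cache:
        cache.move_to_end(block)          # refresh recency
        return "flash"
    cache[block] = True
    if len(cache) > CACHE_BLOCKS:
        cache.popitem(last=False)         # evict the least recently used block
    return "disk"

random.seed(42)
hits = 0
READS = 200_000
for _ in range(READS):
    if random.random() < 0.9:             # 90% of reads land on the hot set
        block = random.randrange(HOT_SET)
    else:
        block = random.randrange(DISK_BLOCKS)
    hits += read(block) == "flash"

print(f"cache hit rate: {hits / READS:.1%}")  # most reads are served from flash
```

The exact figures don’t matter; the point is that a flash tier sized to hold only the active dataset can absorb the vast majority of reads, which is the premise all three flavors of hybrid storage share.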

From my discussions with CIOs, CTOs, IT directors, architects and administrators around the globe, I believe it’s fair to say the market shares the following views…

Hybrid arrays – built from the ground up to optimize performance with I/O caching via flash as a native function of the kernel. They often outperform traditional arrays retrofitted with flash. If I supported a heterogeneous environment and had a storage budget of less than $50K USD, I would consider a hybrid array. It’s risk-averse, supports every Intel-based server platform and delivers a bump in performance.

Host-based acceleration – moves flash and the I/O caching logic (in the form of software) out of the array and into a server, on a server-by-server basis. If I had responsibility for a performance-starved application and my needs weren’t being addressed by the storage team, I would consider deploying host-based acceleration. It can provide immediate gains for $10K to $30K USD per server.

Hyper-converged architectures – relocate all flash and disk to the server and implement multi-host replicas for increased data availability. If I had a number of small branch or remote offices that were 100% virtualized with 2 to 4 servers and a small storage array, I would consider a hyper-converged platform. At this size there are cost savings to consider.


All-flash arrays: replace disk with flash 

All-flash arrays are not new, but 2014 was a milestone year as the entire storage market was forced to follow the lead of the all-flash innovators and offer flash at the price of disk. Most implemented data reduction software to lower the price per usable GB. As in the section above, let’s focus on the area of greatest commonality in this space; for those who seek to know the deltas, they are well documented and strewn across the interwebs.

All-flash arrays – built from the ground up to deliver consistent sub-millisecond latency for all data access. They often include means to overcome the write limitations of NAND flash. In contrast to disk-based storage, all-flash arrays tend to exhaust capacity well ahead of performance.

If I were looking to build a next-generation storage infrastructure, it would be composed entirely of flash. Sure, I’d have to start with performance-centric applications today, but in a few short years I’m confident flash will be supporting capacity-based applications and uses as well.


A tip for 2015: adopt innovation

Given the pace of IT innovation – not just in storage but across the entire ecosystem – with what degree of accuracy do you believe you can predict the new software and services your CIO will ask you to deploy 24 months from now? How about 36 months from now?

So ask yourself, “Which of the innovative storage technologies should I adopt?” The answer to this question returns to the opening of this post. Don’t fall into the trap of talking old-school storage. It’s not about IOPS or architectures – it’s about your goals and your area of influence. If scale, budget or authority limits you, then one or more of the hybrid architectures is likely right for you.

By contrast, if you are a member of the storage architecture team, an innovation leader, or carry enough weight to influence the storage decision (never underestimate the influence application owners wield), you are likely looking to build a next-generation architecture that addresses today’s needs and has the highest likelihood of success with the unknowns that tomorrow will bring. If this is you, then an all-flash array is your best move.

I’m very happy to see the innovation occurring in today’s market. Many close friends are leading the direction of hybrid storage arrays, host-based acceleration, hyper-converged architectures and all-flash arrays. Collectively we are helping customers exceed their goals and are shaping the next generation of IT. I wish them all the best and would ask you to consider one of the disruptive technologies from an innovative vendor in 2015.

The innovation isn’t coming from big storage – viva la revolution!

Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.


2 Comments

  1. Hi Vaughn,

    I am always interested in the debate around the all-flash datacentre and therefore the death of the HDD.

    My understanding is that for the foreseeable future (i.e. 5-10 years) the SSD will keep getting bigger and lower cost per GB, equally the same will happen with the HDD. At all times the cost per GB of HDDs will be significantly lower than SSDs and the cost per IO of SSDs will be significantly lower than HDDs.

    I think we all agree that most organisations have an awful lot of data (as always the 80:20 rule applies) that is not very active.

    Therefore over the next 5-10 years the best place for the active 20% of data will be on SSDs and the best place for the inactive 80% will be HDDs.

    So my questions are:

    1. How do you/Pure see the above?
    2. Will Pure be looking to support HDDs in their arrays moving forward?
    3. If they are not, when do you see the death of the HDD?

    Much more of my thoughts are available at http://blog.snsltd.co.uk/does-the-all-flash-array-really-make-sense/

    Your comments would be much appreciated.

    Best regards
    Mark

    • Mark
      Traditional storage vendors sell you 100 TBs raw and if you are lucky you get 70 TBs usable. With Pure you purchase 70 TBs raw and you get 250 TBs usable (or more).

      Pure Storage has multiple data reduction techniques (pattern matching, compression, deduplication) that are ALWAYS on (http://www.purestorage.com/flash-array/flashreduce.html).

      Pure’s operating environment was designed from the ground up for flash. This allows the data reduction techniques to have zero impact on performance. Traditional storage vendors may have the ability to implement some of their own reduction processes but they either come at a cost to performance or simply don’t provide significant reduction.

      So, it isn’t the fact that SSDs are getting larger and cheaper over time; as you point out, HDDs are also getting larger and cheaper. It is the way in which innovative vendors like Pure are able to get far more usable capacity out of SSDs than traditional storage vendors can with HDDs, all while delivering sub-millisecond performance.
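
      To put the raw-versus-usable arithmetic above into concrete form, here’s a small sketch. The overheads, reduction ratio and per-TB prices are deliberately hypothetical assumptions for illustration, not Pure’s (or any vendor’s) published figures.

      ```python
      # Deliberately hypothetical raw-vs-usable arithmetic; the overheads,
      # reduction ratio, and per-TB prices below are illustrative assumptions only.

      def usable_tb(raw_tb, overhead, reduction):
          """Raw capacity minus RAID/metadata overhead, multiplied by data reduction."""
          return raw_tb * (1 - overhead) * reduction

      # Traditional disk array: ~30% overhead, no always-on data reduction assumed.
      hdd_usable = usable_tb(raw_tb=100, overhead=0.30, reduction=1.0)   # 70 TB usable

      # All-flash array with always-on reduction: ~20% overhead, ~4.5:1 reduction assumed.
      afa_usable = usable_tb(raw_tb=70, overhead=0.20, reduction=4.5)    # 252 TB usable

      # Assumed street prices per raw TB (illustrative only): $400 for HDD, $2,000 for flash.
      hdd_cost = 100 * 400
      afa_cost = 70 * 2000

      print(f"Disk array: {hdd_usable:.0f} TB usable, ${hdd_cost / (hdd_usable * 1000):.2f} per usable GB")
      print(f"All-flash:  {afa_usable:.0f} TB usable, ${afa_cost / (afa_usable * 1000):.2f} per usable GB")
      ```

      Under these assumed numbers the effective price per usable GB of the all-flash array lands in the same range as the disk array, which is the point: data reduction, not raw media price, is what closes the gap.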
