Comparing and Contrasting All-Flash Arrays


Recently I shared how EMC’s entry into the All-Flash Array market provided market clarity and would likely accelerate the adoption of All-Flash Arrays (AFAs). Most found the post accurate and a fair representation of how the market is divided between performance-focused and storage-efficient AFA architectures.

The performance capabilities and datacenter/environmental benefits of All-Flash Arrays are widely understood. As such, I felt it appropriate to classify and clarify some of the key differences between performance-focused and storage-efficient AFA architectures. My efforts are reflected in the chart below.

[Chart: All-Flash Array comparison of performance-focused and storage-efficient platforms]

(Replication for Pure Storage FlashArrays will be available in beta for customers on Purity O.E. 3.3)

With the ability to compare and contrast the AFA market, it is easy to identify the performance-focused and storage-efficient platforms. What may be more interesting are the areas where these platforms align and where they diverge in capabilities. For example…

  1. MLC-based NAND is used in all of the platforms.
  2. As one would expect, the storage-efficient arrays provide significant increases in usable storage capacity.
  3. Most of the AFAs are locked hardware configurations and only support single SSD failures.
  4. There’s little market consistency in areas of data and operational management features.
  5. Half of the systems require external infrastructure elements like UPS and InfiniBand networks.

I would like to be very clear – this is my attempt to have a substantive conversation around the capabilities inherent in a number of All-Flash Arrays. I do not claim to be an expert on any AFA outside of the Pure Storage FlashArray. The information in this chart was obtained via publicly available content provided by Violin Memory, IBM, EMC and Pure Storage. I will update and/or correct any misinformation as long as the revised data was produced by the AFA vendor and can be publicly referenced.

Note: I had to guestimate as to what is possible with each array. Admittedly this is likely less than what any vendor would prefer to see published. My apologies.

References:

Violin Memory

IBM FlashSystem

EMC XtremIO Launch Event Chat Transcript

Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.


24 Comments

  1. Odd you left out Nimbus, which is based on the same Rising Tide Linux base as Pure and claims better scalability per array. Also, claiming XtremIO suffers performance degradation on controller failure is more than a bit of a slippery slope. Of course it does; it actually scales beyond an HA pair. That isn’t a detriment, it’s a superior architectural decision compared to an HA pair. It will inherently scale to better performance.

  2. Thanks for the article. I think you need to make some corrections on the Violin specs; we wouldn’t want any false information flying around. The below is all publicly available information.

    1. Violin does both SLC for performance and MLC for performance + storage efficiency
    2. Violin can support multiple concurrent flash VIMM module failures without disruption (more than double)
    3. Violin does have full hardware and software NDU all the way from the gateways to the VIMMs themselves (http://www.violin-memory.com/products/vmos/)
    4. InfiniBand is a supported protocol (http://www.violin-memory.com/products/vmos/)

    I recommend you update the table to provide the correct information.

    Also, I can’t find any public information about how Pure does full hardware NDU (as in SSD firmware upgrades). Could you provide me with any links?

    • Hey there, Jonas from Pure here.
      A few questions about Violin that hopefully you can clear up for us. We aren’t in the FUD business, and I’d like to help Vaughn make this right.

      Can Violin lose ANY 2 or more VIMMs, or only specific pairs of VIMMs based on the layout in the appliance? Is there a scenario where you can lose the “wrong 2 VIMMs” and go down, a la a 3+1 RAID-5 set or the wrong 2 drives in a RAID-10? My admittedly rudimentary understanding is that you effectively have sets of RAID groups within the box, like a traditional RAID array, and yes, you can lose multiple drives as long as they aren’t in the same RAID set. If you hit 2 in a RAID-5-like parity group, you’re down. (See the small sketch of this reasoning below.)

      Does the NDU have no impact on performance, or does it require a planned outage window? Do Violin customers do this in the middle of the day? In other words, if the system is driving, say, 200K IOPS and the OS is upgraded, does it halve the performance of the frame, or is there NO impact at all? If it halves the performance like a VNX, or worse (disables caching or something), it can’t be done in the middle of the day and thus it’s not really an enterprise-grade NDU.

      Looking at the page you referenced, I had a few additional questions:
      How are encryption, thin provisioning, and snapshots provided? Is that through Veritas Volume Manager or natively within the array itself? I have customers with Violin on the floor and they have none of these features, so perhaps they have older firmware or out-of-date hardware. What mechanism is used for the snaps? COFW? Can Violin customers non-disruptively move to the next gen with an in-place upgrade, or is it a forklift to get to the new Violin box?
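
      For readers tracing the failure-tolerance question above, here is a minimal sketch of that reasoning. The 16-VIMM, four-group, 3+1 single-parity layout is purely hypothetical (it is not Violin’s actual geometry); the point is only that failures spread across groups are survivable, while two failures inside the same single-parity group are not.

```python
# Minimal sketch of single-parity (RAID-5-like) failure tolerance.
# The 16-VIMM, four-group 3+1 layout below is hypothetical, for illustration only.
from collections import Counter

GROUPS = {vimm: vimm // 4 for vimm in range(16)}   # VIMM id -> parity-group id
PARITY_TOLERANCE = 1                               # failures survivable per group

def array_survives(failed_vimms):
    """Return True if no single-parity group has lost more than one member."""
    per_group = Counter(GROUPS[v] for v in failed_vimms)
    return all(count <= PARITY_TOLERANCE for count in per_group.values())

print(array_survives({0, 5}))   # True  -- two failures, but in different groups
print(array_survives({0, 1}))   # False -- the "wrong 2 VIMMs", same group
```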

      • Hi Jonas,

        Who defines the specific criteria behind each of the categories in the table? I believe only Vaughn can answer that, as he has conducted this analysis. If there are specific criteria for each entry, then they should be clearly stated.

        I am not in the personal game of whose flash chip is bigger than the others’. I only believe that the correct information should be used and be available.

  3. “I would like to be very clear – this is my attempt to have a substantive conversation around the capabilities inherent in a number of All-Flash Arrays.”

    I find that pretty unlikely. I would expect, given your role at a flash vendor, that this is your attempt to make your product look better than those of your main rivals. After all, given that in your own words you have had to “guestimate” what is possible with each array, it cannot work as an exercise in factual comparison.

    Still, it’s given me an idea so I thought I’d do the same thing:

    http://flashdba.com/2013/12/04/postcards-from-storageland-competitive-blogging/

  4. Your asterisk regarding replication on Pure should be removed, given that you likely don’t have adequate info on the competitors’ upcoming features. The platforms either support a feature now or they don’t. You don’t get to give your own platform a pass unless your post is sales-focused rather than purely informative. I have come to respect your posts over the years as being extremely fair and objective, and look forward to more of the same!

  5. You left off the leader in flash arrays. HDS just hit 1 million IOPS on their HUS VM with all flash. Plus it scales to 100 TB of flash with the ability to add spinning disks. After reading this article, I wonder if you work for EMC.

  6. Good analysis, Vaughn. Looks like your competitors’ portfolios are clearly lacking in this space.

    Calvin, I read your blog; best not to get into a pissing contest over features, etc. Shooting holes in 3PAR is just as easy if you want to go down that path.

    • I would say it’s an account with a number of inaccuracies. It’s clear that there wasn’t much effort put into finding the right information. It annoys me when clearly intelligent people provide false information with little time spent making sure it is accurate. This is a fair game, and the information presented should also be fair and accurate.

  7. Coming from IBM (the company offering the flash array with all of the “N”s in this table, as long as you really use it stand-alone), my view is most probably biased, but: isn’t flash about speed? About latency and IOPS? Those numbers are missing, while the capacity is naturally listed. I guess at the moment only very few customers say, “I don’t have enough free capacity in my environment. I don’t care at all about speed. Let’s buy all-flash arrays!”

  8. Vaughn, some more correction requests from the Violin camp.

    If you are going to compare RAW vs. available capacity, it’s also sensible to look at what people really mean by RAW. When Violin says that we have 70 TB of RAW flash, we mean that is the RAW capacity of all the flash cells. However, vendors using SSDs tend to refer to RAW as the formatted capacity of the SSDs they source from XYZ manufacturer. So, for example, EMC claims a RAW capacity of 400 GB for their SSDs; in reality these are either 512 GiB or 576 GiB drives formatted down to 400 GB. In the case of EMC that formatting is obviously to allow for GC, etc.; the additional drop from 10 TB per X-Brick to 7.5 TB is for RAID overhead and metadata. If one were being completely accurate with EMC, one should also probably include the capacity of the 4 additional SSDs (2 per controller) which are also used to hold metadata during some kinds of failure.

    Incidentally, Violin has 44 TB after formatting, RAID, and metadata, not 40 (it’s 40 if we are talking TiB); a quick unit-conversion sketch follows below.
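
    For anyone checking the unit math, here is a quick sketch of the decimal-versus-binary conversions at play. The 44 TB, 512 GiB, 576 GiB, and 400 GB figures are taken from the comment above and are not independently verified.

```python
# Quick check of decimal (TB/GB) versus binary (TiB/GiB) capacity units.
# Figures come from the comment above; nothing here is vendor-verified.
TB, TiB = 10**12, 2**40
GB, GiB = 10**9, 2**30

print(f"44 TB   = {44 * TB / TiB:.1f} TiB")   # ~40.0 TiB, the 44-vs-40 difference
print(f"512 GiB = {512 * GiB / GB:.1f} GB")   # ~549.8 GB before formatting down to 400 GB
print(f"576 GiB = {576 * GiB / GB:.1f} GB")   # ~618.5 GB before formatting down to 400 GB
```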

  9. Disclosure: I work for a national VAR that has business relationships with multiple AFA vendors.

    The unfortunate thing is that, to a customer, these checkbox comparison sheets have little to no value because they don’t provide an understanding of what is important when selecting an AFA. Also, I think it’s a bit ridiculous for a vendor to post a comparison sheet like this in public without first vetting all the data with the other vendors — “I do not claim to be an expert on any AFA outside of the Pure Storage FlashArray” is a very poor excuse for even the smallest inaccuracy if proper due diligence hasn’t been performed. Regardless of the spin, this IS a competitive post, not some altruistic attempt to “help customers understand the AFA market”. If this isn’t a marketing activity (which it is) and the interest is truly to provide market clarification, it would have been much better served if the content had been generated in collaboration with other vendors.

    My complete thoughts here: http://vjswami.com/2013/12/04/a-buyers-guide-for-the-all-flash-array-market/

  10. I like the chart. I would consider it a first go at an all-vendor AFA comparison. Thanks for putting it out there for regular IT folk like me to use as a data point.

    “Note: I had to guestimate as to what is possible with each array.”
    As the saying goes: “When you assume, you make an ass out of you and me”
    Question marks for unknowns would have been readily apparent and less of an issue for vendors.
    I also agree that leaving IOPS out, however theoretical the number each company came up with, was an oversight.

    • Leaving out IOPS, or for that matter any other measure of performance, was more than an oversight. It actually renders the table rather pointless. All-Flash Arrays exist to provide better performance than spinning-disk arrays.

      But performance, the defining characteristic of an AFA, is missing.

      If I need capacity I can get that with lots of cheap SATA drives, but that measure is in the table!!

  11. One question: how do SSD-based arrays manage firmware NDU? I see a table with ticks in the boxes for array vendors using SSDs, and I can see how it’s possible for the controllers, but how do you update the firmware in each of the SSDs? Or is this another hole in the analysis?

  12. Pardon my late arrival to this post and subsequent discussion.

    I find posts like this extremely useful because it gives people some points to reference when researching the various products available. There will always be bias in such posts, and I would like to think most people are cognizant of the author (or the blog) and their association(s).

    Comparisons of the various AFA products like this post are very sparse, and I laud Vaughn for starting a discussion about these products. I look forward to some updates to the chart. 🙂

  13. Vaughn

    I have updated your table. Since AFAs are primarily being installed to improve I/O performance and to reduce size and power consumption, I have added the missing performance section as well as a power and rack-space section. I hope the numbers I put in for the Pure array are correct. If you want me to change them, I will be happy to do so.

    http://wp.me/p41RCn-H

    Regards,
    Andrew Harrison, Violin Memory

  14. Great post, Vaughn. Would have loved to see a performance category as well as a mention of the scalability feature set. Possibly we will see the P-AFA and S-AFA categories merge.

    • I agree with you; however, customers require information in order to make an informed decision. I believe the content in this post is not only accurate but fair in supporting deployment considerations. AFAs offer much more than disk-based arrays; we need to evolve the conversation beyond mere IOPS and capacity claims.

      Thanks for the comment,
      v

  15. Double disk failures, NDU even in hardware expansions, replication, and compression are all supported in XtremIO. This comparison is outdated; please change it.
