Pure Storage 101: Scale-up or Scale-out?


Often we geeks will propound and pontificate on technical topics in what I call ‘the vacuum of hypothetical merits’. Understanding and considering the potentials and shortcomings satisfy our intellectual curiosity and fuel our passion; however, these conversations are often devoid of business objectives and practical considerations (like features that are shipping versus those that are still in development).

Mark Twain once said, “Give a man a reputation as an early riser and he can sleep ’til noon.” The significance of this quote often comes to mind during technical comparisons.

Consider scale-up and scale-out storage architectures. You likely have an opinion on each, but before diving into the vacuum, what do you want from your storage infrastructure?


The Ideal Storage Architecture

Enterprise IT departments are in the midst of a transition from technology operator to IT services provider, where ensuring a high-quality customer experience is paramount. As part of this evolution, customers are selecting technologies with greater availability than what currently proliferates in the datacenter. This translates into storage platforms that can be deployed, and can scale capacity, performance, or both, non-disruptively and without performance degradation.

Obviously there are additional requirements, including affordability, interoperability, and operational simplicity, but for the sake of this post let’s consider these items as addressed. Would you consider the definition I posited as sound and in line with modern datacenter initiatives? I think it is.


The Pure Storage FlashArray

By definition the FlashArray is a scale-up storage platform; however, this scale-up architecture is unlike any scale-up array in your datacenter. The FlashArray provides non-disruptive operations that allow customers to scale performance and capacity in line with their business growth, without downtime or loss of performance.

[Figure: scale-up vs. scale-out]

Historically, scale-up architectures support the hot addition of storage capacity but tend to struggle with controller upgrades. Software upgrades often incur a drop in performance, and hardware upgrades require downtime due to a lack of compatibility between the existing and the new hardware.

These issues aren’t wholly eliminated with scale-out architectures – but let’s save that for another post.

The Pure Storage FlashArray is a stateless architecture, which is a significant advancement over traditional scale-up platforms. The controllers provide CPU, memory, and IO ports; unlike traditional arrays, the NVRAM-based write cache is located in the storage shelf. This means a controller upgrade does not impact normal data processing operations between the write cache and the persistent flash layer.
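To make the idea concrete, here’s a deliberately simplified Python sketch of my own (a toy model, not Pure’s implementation): because the write cache lives in the shelf rather than in the controller, replacing a controller never touches acknowledged writes.

```python
# Toy model of a stateless controller in front of a shared storage shelf.
# Illustrative only; class names and structure are invented for this post.

class Shelf:
    """Holds the NVRAM write cache and the persistent flash layer."""
    def __init__(self):
        self.nvram = []   # acknowledged writes not yet flushed to flash
        self.flash = []   # persistent data

class Controller:
    """Provides CPU, memory, and IO ports; holds no durable state."""
    def __init__(self, name, shelf):
        self.name, self.shelf = name, shelf

    def write(self, block):
        self.shelf.nvram.append(block)   # ack only after landing in shelf NVRAM
        return "ack"

shelf = Shelf()
Controller("CT0", shelf).write("block-1")

# "Upgrade" the controller: the old one is discarded entirely, yet the
# in-flight write survives because it was never controller state.
new_controller = Controller("CT0-new", shelf)
assert shelf.nvram == ["block-1"]
```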

Another unique element is that the FlashArray utilizes only half of its controller CPU and memory resources during normal operations. Combined with the stateless architecture, this ensures 100% performance through both planned and unplanned downtime.
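The arithmetic behind that claim is simple; here’s a minimal sketch with figures chosen purely for illustration:

```python
# Why capping each controller at 50% utilization preserves performance
# through a controller loss. Figures are illustrative, not specifications.

controllers = 2
cap = 0.50   # each controller limited to half of its own capacity

# Normal operation: both controllers share the load at <= 50% each, so the
# array delivers the equivalent of one fully utilized controller.
delivered = controllers * cap                 # 1.0

# Upgrade or failure: the surviving controller absorbs the entire load,
# which fits exactly within the headroom reserved all along.
survivor = delivered / (controllers - 1)      # 1.0, i.e. 100% with no overload

print(f"Survivor utilization during failover: {survivor:.0%}")
assert survivor <= 1.0
```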

I discuss the stateless architecture of the FlashArray in greater detail in Pure Storage Flash Bits: Ensuring 100% Performance.


Maybe I’m Expressing a Vendor Bias

Some may question my view, as Pure Storage doesn’t offer a scale-out array. This is a fair criticism; however, let’s consider XtremIO, the scale-out AFA from EMC, the storage market share leader. There’s a lot of marketing activity heralding the level of engineering sophistication in this scale-out platform; yet for all the tens or hundreds of man-years invested in its development, you cannot non-disruptively scale an XtremIO today… and it’s been generally available (GA) for six months.

How do you scale out an XtremIO, you ask? Simple: back up the data, zero out and reconfigure the existing hardware with the new, and restore the data. How long could that take? (Hint: downtime correlates with the capacity of data; the larger the install, the longer the outage.)
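For a back-of-the-envelope feel, here’s a sketch assuming a hypothetical 1 GB/s effective restore rate (substitute whatever your backup infrastructure actually delivers):

```python
# Rough outage estimate for a backup/reconfigure/restore "scale-out".
# The restore rate is an assumption, not a measured XtremIO figure.

def outage_hours(dataset_tb, restore_gb_per_sec=1.0):
    """Hours to restore dataset_tb terabytes at the given rate."""
    seconds = (dataset_tb * 1024) / restore_gb_per_sec   # TB -> GB
    return seconds / 3600

for tb in (10, 50, 100):
    print(f"{tb:>4} TB -> ~{outage_hours(tb):.1f} hours of downtime")
# 10 TB ~ 2.8 h, 50 TB ~ 14.2 h, 100 TB ~ 28.4 h
```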

So if a scale-out all-flash array doesn’t scale out, is it a scale-out architecture? You can argue that technically it is, but if it doesn’t satisfy the business goal of scaling non-disruptively, then what’s the point of the architecture?

I think it’s more accurate to classify the current version of XtremIO as a fixed, distributed storage architecture. Ever heard of such a thing?


Where Scale-out is a Must Have

There is a significantly different level of effort (and pain) associated with migrating SAN data as compared to NAS data. The sophistication of modern applications and virtual infrastructure platforms allows for non-disruptive migration of these applications; examples include Oracle ASM, VMware Storage vMotion, and Exchange DAGs. These applications by and large run on SAN storage protocols (Fibre Channel, iSCSI, and FCoE).
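That host-side mobility can even be scripted. Here’s a minimal Storage vMotion sketch using pyVmomi, where the vCenter address, credentials, and object names are all placeholders:

```python
# Storage-only relocation of a running VM to another datastore (Storage
# vMotion) via pyVmomi; the VM keeps serving IO throughout the move.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Locate a managed object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "app-vm-01")
target = find(vim.Datastore, "flasharray-ds-01")

spec = vim.vm.RelocateSpec(datastore=target)  # storage-only relocation
task = vm.RelocateVM_Task(spec)               # non-disruptive to the guest
print("Storage vMotion started:", task.info.key)

Disconnect(si)
```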

NAS data (NFS and SMB, formerly CIFS) commonly provides file services for engineering applications and for users in the form of home directories. NAS datasets are a challenge to migrate due to the fixed path from hosts to data, which is complicated by large volumes of data composed of small files with granular access controls. Migrations can be simplified once one crosses the chasm of pain and installs a global namespace, in the form of either a network service or a scale-out NAS storage platform.
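Conceptually, a global namespace decouples the path clients mount from the filer that stores the data, turning a migration into a mapping update. A toy Python sketch, with invented filer and volume names:

```python
# Toy global namespace: clients resolve logical paths through a mapping,
# so relocating a dataset is a table update, not a client-side re-path.

namespace = {
    "/home/engineering": "filer1:/vol/eng_home",
    "/projects/render":  "filer2:/vol/render",
}

def resolve(logical_path):
    """Map the path a client mounts to its current physical location."""
    return namespace[logical_path]

print(resolve("/home/engineering"))   # filer1:/vol/eng_home

# Copy the dataset to a new filer, then flip the mapping; clients keep
# mounting /home/engineering and never notice the move.
namespace["/home/engineering"] = "filer3:/vol/eng_home"
print(resolve("/home/engineering"))   # filer3:/vol/eng_home
```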


Enabling the Flash Datacenter

Pure Storage is delivering a storage experience that is unmatched by the storage industry incumbents. Paradigms are changing. Flash for less than the price of disk is a pleasant surprise, but a scale-up storage architecture that scales more readily than a scale-out AFA is simply mind-blowing.

Before considering a scale-up or scale-out storage architecture, it’s important to determine your operational and business requirements and apply them to the technology available to you, both in-house and in the market.

Unfortunately, some vendors make product statements based on future technologies and features. We want to have a different relationship with our customers: we’re working on some amazing technology, and when it’s ready we’ll let you know. Until then, check out the FlashArray 400 series powered by Purity 4.0.

