SPEC SFS2008 Verifies You Can Run Faster on Fewer Disks with PAM


NetApp is the only storage vendor focused on reducing storage hardware footprints. Evidently, discussions surrounding the use of storage-saving technologies in a production environment must be upsetting to the traditional storage array manufacturers, because any time I discuss the technologies that enable such environments, the comments sections of my posts inevitably end up reading (and smelling) like the zoo after the monkey cage has endured a poo-flinging outburst.

It is my premise that the attempts of the traditional legacy storage array vendors to dissuade customers from considering and implementing storage savings technologies are self-serving rather than community-driven. Maybe my statement is a bit inflammatory, but just consider the following statements regarding the use of data deduplication from a well-known traditional storage manufacturer’s VP…

“Typically, when storage admins run into I/O density problems, they have two fundamental approaches: more disks, or faster disks.”

“By spreading the load on more spindles, I/O density can be reduced. However, you now require more storage, power, cooling, etc. — which kind of tends to defeat the whole premise of data deduplication to begin with.”

“The other approach is to use faster disks — either disks that spin faster, or perhaps enterprise flash drives. Invariably, these cost more as well — again tending to defeat the whole premise of attempting to save money through data deduplication.”

[Image: DMXSaysDedupeIsSlow.jpg]

Statements like these are clearly meant to caution customers against the use of ‘risky’ new concepts (the reduction of storage footprints) and the primary enabling technology (data deduplication). So, is the information shared in this warning accurate?



I find it interesting that the warning omitted any mention of storage array cache as a means to increase storage I/O performance. Why was this basic storage construct left out of the precaution? Is array cache not a viable option for increasing array performance?

The answer is ‘yes, I/O performance is increased with the addition of cache.’

NetApp recently released the second generation of our Performance Acceleration Module (PAM), which provides customers the ability to modularly increase the total amount of storage cache available in their array. Just how much cache can be added depends on the controller model; however, you may find it interesting that the PAM II cards are available in 256GB and 512GB increments and that both the 3000 and 6000 series models support multiple cards.

PAM II is more than just storage array cache; it operates as a dedupe-aware cache. This combination of Intelligent Caching with PAM II results in a dramatic increase in the amount of data that can be served from cache and thus made available to every application in the data center. For background on deduplication and array cache, see http://blogs.netapp.com/virtualstorageguy/2009/07/vce-101-deduplication-storage-capacity-and-array-cache.html

[Image: dedupe+IC.jpg, deduplication combined with Intelligent Caching]
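For readers who want a mental model of what “dedupe-aware cache” means, here is a minimal Python sketch of my own, an illustration rather than NetApp’s actual implementation: the cache is keyed by block content (a fingerprint) instead of by logical address, so the many identical blocks shared across deduplicated VMs occupy a single cache entry.

import hashlib
from collections import OrderedDict

class DedupeAwareCache:
    """Toy read cache keyed by block fingerprint instead of logical address."""

    def __init__(self, max_blocks):
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()        # fingerprint -> block data, in LRU order

    def read(self, logical_addr, block_map, disk):
        # block_map: dedupe metadata mapping logical address -> fingerprint
        # disk: mapping fingerprint -> block data (simulated backend reads)
        fp = block_map[logical_addr]
        if fp in self.blocks:              # hit: one entry serves every logical
            self.blocks.move_to_end(fp)    # address that shares this block
            return self.blocks[fp], True
        data = disk[fp]                    # miss: go to the spindles
        if len(self.blocks) >= self.max_blocks:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[fp] = data
        return data, False

# 100 logical blocks (think: the same guest OS block in 100 deduplicated VMs)
# all point at one unique block on disk, so one cache entry serves them all.
golden = b"common guest OS block"
fp = hashlib.sha256(golden).hexdigest()
block_map = {addr: fp for addr in range(100)}   # dedupe metadata
disk = {fp: golden}

cache = DedupeAwareCache(max_blocks=4)
hits = sum(cache.read(addr, block_map, disk)[1] for addr in range(100))
print(f"{hits} of 100 reads served from a single cached block")   # prints 99

A conventional cache keyed by logical address would treat those 100 reads as 100 distinct blocks, missing on every one of them and needing 100 entries to hold what the dedupe-aware cache holds in one.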

I suspect that cache was left out of the list of options for increasing performance because the amount of cache in a traditional array is fixed, limited to the capacity allotted to the model when it is manufactured. With traditional arrays, if you need more cache you must purchase a new array.

I believe this is why the traditional array VP framed his statement as though scaling could only be addressed by adding disk drives to the array.

Alternatives to PAM with Traditional Arrays

In the future, all of us will replace our spinning disk drives with solid-state or flash-based drives. SSDs natively serve data at near-cache levels of performance and latency. SSD drives are fast, and, as with any new technology, they are expensive (roughly 10x the cost per GB of the fastest FC disk drive).

SSD is an ideal medium for storage-reducing, or dense, storage technologies; however, at today’s price point its adoption isn’t viable for the majority of virtual datasets, whether servers or desktops.

With the advent of PAM, customers can have SSD performance with today’s disk drives, allowing us to do more with less while we all bide our time until the eventual commoditization of SSDs.

Before We Begin, Introductions Are in Order

The majority of my readers are virtual infrastructure admins, and some may not be familiar with storage industry benchmarks, so before we go any further I need to introduce you to SPEC.

The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from its member organizations and other benchmark licensees.

The SPEC Results with Today’s Disk Drives

Every now and again I will push the envelope in a post, and this type of messaging sometimes gets under the skin of a few readers, so this is your last chance. After this paragraph there is no turning back. You can figuratively take the blue pill and the story ends; you flip to another blog or website and believe whatever you want to believe. Or you take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.

[Image: matrix_morpheus_redblue.jpg]

The SPEC SFS2008 Results

The most recent NetApp submissions to the SPEC SFS2008 benchmark (an NFS server I/O benchmark) include the following configurations of our mid-tier FAS3160 array:

Config 1: 224 300GB, 15K RPM FC drives (16 drive shelves)
(blue line on graph)

Config 2: 56 300GB, 15K RPM FC drives (4 drive shelves) and 512GB of PAM II
(red line on graph)

Config 3: 96 1TB, 7,200 RPM SATA drives (4 drive shelves) and 512GB of PAM II
(green line on graph)

The chart below contains the response times reported for every 6,000 IOPS reached in the test; a better result is one with a lower response time.

What is of interest is that both configurations with the PAM II cards installed outperformed the classic configuration while operating with significantly fewer disks, and, in the case of the SATA configuration, with disks spinning at half the rotational speed.

[Chart: SPECwnwoPAM.png, SPEC SFS2008 response time vs. achieved IOPS for the three configurations]
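To put “significantly fewer disks” into numbers, here is a quick back-of-the-envelope comparison using the drive counts listed above; this is my own arithmetic, not data from the SPEC reports.

configs = [
    ("Config 1: 224 x 300GB 15K FC",                224, 16),
    ("Config 2: 56 x 300GB 15K FC + 512GB PAM II",   56,  4),
    ("Config 3: 96 x 1TB SATA + 512GB PAM II",       96,  4),
]

baseline = configs[0][1]                  # the all-FC configuration, no PAM II
for name, drives, shelves in configs:
    if drives == baseline:
        note = "baseline"
    else:
        note = f"{100 * (1 - drives / baseline):.0f}% fewer spindles"
    print(f"{name}: {drives} drives in {shelves} shelves ({note})")

Config 2 runs with 75% fewer spindles than the baseline, and Config 3 runs with 57% fewer while also stepping down from 15K RPM FC to 7,200 RPM SATA.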

Pulling It All Together

I opened today’s post by sharing the warning being spread by a storage industry leader about the use of storage savings technologies: the claim that their use will not permit a reduction in the actual number of disk drives in your environment due to a negative performance impact.

The proof is in the pudding: PAM increases the performance of datasets stored on fewer spindles. When combined with storage savings technologies like thin-provisioned virtual disks, thin-provisioned LUNs, and data deduplication, PAM can enable customers to reduce their storage footprints and costs while deploying a greener data center.

Virtualization changes everything, and PAM is another example. PAM allows you to reduce your storage footprint and its associated costs today while waiting for the price of SSDs to fall.

Vaughn Stewart
http://twitter.com/vStewed
Vaughn is a VP of Systems Engineering at VAST Data. He helps organizations capitalize on what’s possible from VAST’s Universal Storage in a multitude of environments including A.I. & deep learning, data analytics, animation & VFX, media & broadcast, health & life sciences, data protection, etc. He spent 23 years in various leadership roles at Pure Storage and NetApp, and has been awarded a U.S. patent. Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.

5 Comments

  1. Vaughn,
    Great post. We’ve found in our own practice that PAM provides a huge performance advantage. I do have a question. While I certainly agree with your statement that virtualization changes everything – how does PAM specifically benefit virtualization as opposed to storage performance in general?

  2. Steve,
    By adding dedupe into the virtualized mix you get to store more VMs on the same footprint (or the same on a smaller, if you will).
    Now with dedupe-aware PAM cache you also get to serve far more VMs or VDIs from that PAM cache because you’re not caching redundant blocks.
    This behaviour makes the PAM shine even more in VM and VDI environments specifically than it already does in “general” workloads.
    In TLAs (Traditional Legacy Arrays) there is no such thing as primary dedupe, let alone dedupe-aware cache. Hence their requirement for huge numbers of spindles AND cache to store VM and VDI environments and get the same performance…

  3. Are the algorithms for the PAM cache and/or the stock controller cache documented? I’m curious to know whether it is something more intelligent than LRU, so that when all of my Linux VMs update the locate database at 4am my PAM cache or controller cache doesn’t evict valuable cached data with a much longer in-cache age and higher frequency of cache hits. Thanks, and looking forward to receiving my FAS3000 series sometime soon.

  4. Vaughn,
    Great points. SPECsfs2008 results demonstrate storage efficiency, including write caching for massive reductions in the number of disks required and linear performance scaling in a single file system.
