I want to tell you a story about how my evening went the other night. I hope you don’t mind a narrative.
Monday I received an email from a friend in the VMware community: “Did you see The Register? It’s unreal – EMC arrays crushed the SPEC benchmarks!” As you’d expect, this news got my attention. I mean, EMC hasn’t published a SPEC benchmark in years.
(The last EMC SPEC submission was published in November of 2007)
Crushed? My buddy said EMC crushed the results! I couldn’t stop my mind from racing: had EMC unleashed some new technology that could revolutionize the storage industry? I mean, I thought NetApp had the coolest tech. We’re the only ones to offer production dedupe, intelligent caching, and the most integrated vCenter plug-ins. What could EMC have created?
I’m man enough to admit, I began to panic. Maybe it was time to freshen up the old resume and beg Chad for enlistment in his Army. That wouldn’t be such a bad place for me to land. I mean Chad is a great guy, heck I know and respect many of the guys on his team, so I know we’d kick butt together. Now I realize I’d have to take a few lumps from Zilla, but privately we’re friendly, so I think he’ll hold back on a few of his punches; however, I’m sure, I mean I’m positive Chuck would beat me with a pillowcase full of oranges. That would be a pretty rough outing, but bruises heal right? I’d get over it.
It was time to get a grip and calm down. I fired up the MacBook, and as Chrome launched I braced myself for the worst.
There it was, right in front of me: EMC had in fact returned to SPEC and published 110,621 and 118,463 IOPs in the SPECsfs2008 NFS and CIFS benchmarks, respectively!
I Needed to Know More, How Was This Possible?
I clicked on the link to read the EMC test bed configuration…
- 1 Symmetrix V-Max array (4 engines) with 256GB of cache
- 3 Celerra NS-G8 Datamovers (aka NAS gateways) with 8GB of cache each (two active, one passive)
- 96 400GB EFD flash drives (for the test data) in two-disk RAID-1 pairs
- 4 450GB Seagate FC drives (required to operate the Symmetrix)
- 1 24-port FC switch (to connect the Celerra gateways to the Symmetrix)
This is clearly a best-in-class configuration. I mean, it’s the V-Max with EFD; it’s no wonder EMC was able to obtain 110,621 and 118,463 IOPs respectively in the NFS and CIFS benchmarks.
Feeling Like a Kicked Dog…
At this point I’m feeling pretty low. I get up from my desk and head into the kitchen. I steal a few pieces of Ghirardelli chocolate from my wife’s stash (the peanut butter and chocolate squares are a nice pick me up when you’re feeling blue).
Knowing I will have to face the music – I mean, everyone is gonna know about the EMC results – I muster up the courage to peek at the posted NetApp SPEC submission. And there it was: NetApp had published meager results of 60,507 IOPs. Ugh, does it get any worse? 60k? 60k? 60k stinks compared to 110k!
I Needed to Know More, Why Such Low Results?
I clicked on the link to read the NetApp test bed configuration…
- 1 NetApp FAS3160 Array with 16GB of cache
- 2 256GB Performance Acceleration Modules
- 56 300GB FC drives
I nearly fell out of my chair!
Where’s all of the hardware?
I mean, sure the NetApp results were a bit more than half of the EMC results, but the hardware in the test bed fits in 18u of rack space. Heck, the FAS3160 is the middle model in NetApp’s mid-tier platform.
And That’s When It Came into Perspective…
Are the EMC numbers outstanding? Yes they are.
Relative to the hardware used to obtain them, are the EMC numbers impressive? Absolutely not.
Who would purchase EMC’s largest SAN array, three NAS gateways, and EFD drives, which run approximately ten times the cost per GB of a 15k FC drive, to obtain results less than what is available with two NetApp mid-tier arrays?
I am begging anyone out there who can ballpark the price of the EMC configuration to please post their information in the comments section of this post. I assure you, the cost to purchase the EMC configuration is easily double (if not triple) the cost to acquire two mid-tier FAS3160 arrays.
Obviously, I’m Having Some Fun at the Expense of EMC
Chuck, Chad, Mark (aka Zilla) – guys, what your company put out serves no one’s interests but those of the EMC sales force. Test results like this one accomplish nothing but setting a false expectation of what a customer should expect to receive when purchasing this technology.
Take, for example, how the EMC performance team used the most expensive type of drive (EFD) and threw away half of the storage capacity by enabling RAID-1, purely for the sake of ensuring high performance.
By contrast, two mid-tier NetApp arrays would not only provide greater performance at a lower acquisition price; the two combined would provide more storage capacity with 300GB FC drives than the 400GB EFD drives used in your config (21.8TB as opposed to 18.8TB), and would do so with fewer file systems to mount (4 versus 8).
If you want to promote high performance results based on a large amount of hardware, I’d suggest you take a look at the NetApp SPEC submission of 120,011 IOPs. These results were obtained with a single FAS6080 and a fair amount of hardware, primarily 324 15k FC drives and no PAM. This large config beats both of EMC’s submissions while providing 64.6TB of storage (344% of the EMC config’s capacity) with smaller (300GB) drives. I’ll also wager this test bed costs less than the EMC config.
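As a quick sanity check on that capacity claim, here is a minimal back-of-envelope sketch (the exported-capacity figures are the ones quoted in this post, taken as given rather than re-derived from the SPEC reports):

```python
# Back-of-envelope check of the capacity comparison above.
# The figures below are the exported capacities quoted in this post (assumptions).
netapp_exported_tb = 64.6   # FAS6080 submission, 324 x 300GB 15k FC drives
emc_exported_tb = 18.8      # V-Max/Celerra submission, 96 x 400GB EFDs in RAID-1

ratio = netapp_exported_tb / emc_exported_tb
print(f"NetApp exported capacity is {ratio:.0%} of the EMC config's")  # ~344%
```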
Shenanigans seems to be the word Chuck uses when discrediting other storage vendors and the enhancements they offer over technologies available from EMC.
On multiple occasions Chuck has claimed that NetApp technology is “unobtainable” and to meet said claims we would be “breaking the laws of physics”.
Maybe it’s tough being the big dog for so long. I mean, how hard is it to admit that you just don’t have the storage get up and go you once had? Sorry, there’s no little blue pill for what ails you.
Chuck, EFD drives push 6,000 IOPs @ 20 ms latency. A 15k FC drive pushes 220 IOPs @ 20 ms. How can two NetApp mid-tier arrays with 112 slow 15k drives outperform the V-Max, Celerra, and 96-EFD combo? Did we break the laws of physics? Did we trick out the array by configuring it in a non-real-world manner?
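For anyone who wants to run the raw spindle math behind that question, here is a minimal sketch using the per-drive figures quoted above (treat them as this post’s working assumptions, not vendor-verified specs):

```python
# Raw back-end IOPS implied by the per-drive numbers quoted in this post.
fc_drives, iops_per_fc = 112, 220      # two FAS3160s, 56 x 15k FC drives each
efd_drives, iops_per_efd = 96, 6_000   # EMC config, per the figure above

netapp_raw = fc_drives * iops_per_fc   # 24,640 raw disk IOPS
emc_raw = efd_drives * iops_per_efd    # 576,000 raw disk IOPS

print(netapp_raw, emc_raw)
print(f"EMC has ~{emc_raw / netapp_raw:.0f}x the raw back-end IOPS")  # ~23x
```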
Give EMC Credit
It was very smart of EMC to diversify their portfolio and invest in becoming a software company. Eventually we’re gonna have to become friends and business partners. It is inevitable. I foresee us as long-time technology partners: EMC with your VMware subsidiary, and NetApp as the storage that VMware runs on!
Just to keep things honest, because you know that I love you Vaughn, 😉
I want to discuss what has perhaps been overlooked – I’ll try not to cover what you’ve already mentioned (whether right or wrong), so please don’t try to u-turn the conversation into something irrelevant to the point I am making, okay? :)
You mention that the NetApp array is a FAS3160 (your words: “1 NetApp FAS3160 Array with 16GB of cache”), but in the SPEC report (and in how the 3100 series is deployed) it is a cluster (the SPEC report later notes it is an active/active cluster). [I won’t go into how the BOM seems to omit various required licenses, such as the cluster license and so forth – that doesn’t even count! ;)]
So, taken from the SPEC report, this is effectively a two-node cluster with 8GB of cache per controller, plus the PAM module – another 256GB slapped into the mix – which is isolated to each node of the cluster. So if I’m doing my math correctly (please do correct me if I am wrong):
(8 x 2) + (256 x 2) = 528GB
And there’s the NVRAM, which I honestly don’t count, but for the sake of argument the SPEC report shows another 2GB per node – for a sum total of 532GB of memory allocated to the system.
Please do correct me if I am wrong, but the most memory addressable at any single time on a single node within this configuration is in fact 266GB (8 + 256 + 2), since the data stored on the PAM card is not shared across controllers for access by a connection going into, say, node B of the cluster.
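To make that tally explicit, a minimal sketch of the arithmetic (assuming the per-component sizes quoted above are what the SPEC report lists):

```python
# Memory tally for the two-node FAS3160 cluster as described above.
controller_cache_gb = 8    # system cache per controller
pam_gb = 256               # PAM II per controller, not shared across nodes
nvram_gb = 2               # NVRAM per controller (counted for argument's sake)
nodes = 2

per_node_gb = controller_cache_gb + pam_gb + nvram_gb   # 266GB addressable per node
system_total_gb = per_node_gb * nodes                   # 532GB across the cluster
print(per_node_gb, system_total_gb)
```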
(If I am mistaken and the PAM II memory is indeed shared in the active/active cluster, where node A can interactively access the PAM II card of node B and vice versa, let me know.)
But whether that is true/valid or otherwise – please correct me if I am mistaken, but is this ‘comparison’ report basically saying that this NetApp configuration is using almost 2x the amount of memory that the compared EMC configuration is using?
If that is the case, that’s 2x the ‘cache’ thrown at the problem, but producing 1/2 the speeds and feeds in IOPs. (And don’t even get me started on how little I care about speeds and feeds unless I have an app specifically designed to perform this way in real time and in production – which is rare.)
But putting that aside, as I am exclusively discussing memory ramifications – both sides of the coin seem to deviate from reasonable practice for volume/aggregate layout. I won’t say how, because I frankly am not sure how the EMC side conforms to best practice, but the NetApp array in this configuration is in violation of ‘best practices’ (hell, I helped define some of those practices ;)), and I’ve seen enough disasters to know NEVER to run a prod environment in the exact way this environment was laid out.
So, taking a step back, it’s quite possible everything I’ve said is a bunch of noise (and I’m sure many people on both sides will agree and disagree with everything I’ve said)
My lessons learned are:
Some systems can get really fast speeds (yay?!), and unrealistic expectations that ignore application behavior can cause us to do crazy things in our environments.
People like to write blog posts about their SPEC reports, and people like to write counter blog posts about SPEC reports. I typically hate commenting on SPEC reports when there’s so much controversy around them, but with the level of intimate NetApp knowledge I have, I couldn’t stand idly by and be… misled, perhaps, is the word, on the merits of its own configuration.
I guess my follow-up lesson learned is that you could rip out everything I’ve said about the EMC environment, and the points and facts I’ve made about the NetApp SPEC report would remain entirely valid (except for the areas I specifically flagged as needing confirmation/validation :))
We both know that I know VERY specific ways to make both of these reports fail miserably, and that one side can shine greater than the other when we take into consideration a ‘mixed workload’, as is often the common consideration when it comes to a Unified platform – so we won’t open up that can of worms 😉
Thanks for your time on this matter, Vaughn and others – and if you disagree with anything I’ve concretely said or referenced, or find somewhere I may be incorrect or mistaken, I welcome the correction; though just as it may be better for me not to post this, it may equally be better not to question the integrity of the facts 😉
Oh yeah, and don’t write comment responses at 1:30am 🙂
Second comment: the SPEC report you reference for the single FAS6080 did not link to a valid report, so I didn’t check it in my response.
Disclaimer: My comments represent me and my own personal thoughts and opinions, comments, especially for where they’re HORRIBLY WRONG AND MISTAKEN, and even where they’re OMG RIGHT 😉 Subsequently the content here does not reflect the opinions or thoughts of EMC, NetApp, or even SPEC as an organization!
– All is fair in love and SPEC reports ~me
Hi Vaughn
I checked, and the SPEC test we submitted was simply the result of some lab work done by the engineering team.
They thought it might be interesting to post, and I agree.
There was no breathless press release or announcement. We didn’t pick a competitor’s gear and do a trumped-up head-to-head.
Nope, none of that carnival-style marketing-stunt stuff we see so often in the industry.
No claim that it represents an actual customer environment, or that it’s better than anything else. No representation that someone might actually want to buy such a thing.
Nope, just an interesting engineering result that some might find useful. And many people did find it interesting, including you!
It’s important to keep in mind that enterprise flash drive costs are starting to come down, so it’s not entirely unlikely that we’ll start seeing “all flash” configs before too long.
And then it does become interesting where the bottleneck moves to in this world — the controller, the network, the host adaptor, etc.
Which is one of the reasons the team ran the tests.
Best wishes …
– Chuck
@Chris – Thanks for the comments, I have a soft spot for NetApp alumni. I’ll keep it brief.
Does a FAS3160 have 2 storage processors? Yes
Is a PAM II array cache? Well, kind of. While it is a form of memory, it manages data as an extension of WAFL, so it does behave a bit differently than system cache. With that said, who cares, PAM makes systems extremely fast.
The nice thing about SPEC is that the benchmark runs the same for all vendors; the only variables are the array and server configs. As I highlighted, the NetApp config is a stock, production config.
And this is where the rubber meets the road. Who cares about PAM or EFD drives?
You have published this benchmark, so let’s put the benchmark in relative terms. What was the cost of the EMC config? 10X that of the NetApp config? I bet I’m close to the mark with this guess.
This is where benchmarks are very valuable: they show the performance relative to the cost of the hardware. These data points help customers in selecting a storage vendor.
Also, thank you for citing the broken link – it’s fixed.
Cheers!
@Chuck – Thank you for the reply. While we go at each other rather aggressively at times I am always appreciative of your comments (even if more often than not we disagree).
On the fall of enterprise flash costs, I agree. In time EFD will be available at commodity prices. When that day draws nearer, NetApp will offer them.
EFD is a game changer. Crazy IOPs, but it’s not a perfect medium. Have you looked at the write performance of EFD? Maybe it would be better stated as the write penalty of EFD.
So while everyone whoops it up over EFD, why do you think slow 15k FC drives with PAM perform so well when the EFD drives have ~30X the IOP capacity?
So today we can meet EFD with a much more cost effective architecture – and customers really like saving money.
As for tomorrow, we all have to face the EFD write challenge. In your opinion, will RAID-1 become the default RAID type for EMC arrays with EFD? RAID-1 has the lowest RAID performance penalty for a traditional storage array, so it seems plausible.
I’ve just realized that I may be going at you a bit with my comments. My apologies.
I thank you for sharing your thoughts.
Cheers!
Vaughn — now you’ve got me totally confused.
My understanding is that PAM is volatile, and not safe for writes. Random reads with locality of reference — great. Other read profiles — it depends.
Of course, no one at NetApp would ever come out and say that directly, but it’s pretty clear. If I am wrong, please set the record straight so we can minimize FUD based on wrong perceptions.
My other understanding is that the STEC EFD implementation is significantly faster than any configuration of rotating rust for most write profiles. We’ve seen this so many times in so many customer environments, so trust me on this.
So, when you’re talking “write penalty” — compared to what? Some theoretical number? Or compared to other alternatives in the marketplace today?
Thanks …
— Chuck
V-man, you were right – that was WAY TOO MUCH NARRATIVE up front – get to the smackage faster. Sometimes the other guys use a lab queen to get their results. That’s benchmarking! NetApp could do something similar, couldn’t it?
@Chuck You are correct, PAM does not directly address writes; however, the IOPs savings from PAM frees up IOPs on the drives, and those freed IOPs become available for write operations. As for sharing the secret sauce (in order to end the FUD), I don’t think there’s a benefit to having that level of discussion. First, I’m not sure that we can have this level of discussion without an NDA. Second, what does the discussion solve? The proof is all around you.
If EMC arrays with EFD drives were actually as effective as NetApp arrays with PAM and FC drives, then why does NetApp own ~90% of the VMware View (aka VDI) market share? Your team knows that View is very challenging with regard to storage: the array must significantly reduce the price per GB in order for the customer to deploy, yet that same inexpensive architecture must be able to address the numerous IO storm events (boot, login, A/V scans, etc.).
Can EFD address the IO challenges? Absolutely. 6,000 IOPs per drive is perfect! Can EFD address the cost savings? Not today, as they are way too expensive. Yet FC + PAM can. Extremely cost effective with outrageous performance.
Imagine when NetApp ships EFD + PAM! I bet you guys already have plans in place in an effort to slow down our momentum in advance of any such release.
Have you guys thought about rolling out a NetApp take out program? Maybe you could identify 2 or 3 key accounts in each sales district and aggressively go after them with over the top pricing discounts? In your opinion, do you think I have a future as an EMC sales strategist?
To close my thoughts, NetApp and EMC have completely different architectures. We have different strengths and weaknesses. I’ll even offer that today EMC storage arrays can do things NetApp cannot (connect to a VAX, IBM System/360, UNIVAC, etc.). Today I’m pretty sure we own you guys when one grades arrays on storage utilization, performance, data protection, and ease of use. Funny enough, these are exactly the foundation attributes required for the deployment of cost-efficient virtual datacenters with technologies like VMware and Hyper-V.
Please don’t misunderstand my words. I am not underestimating EMC’s ability to innovate, deliver, or execute. You guys are very good at all three. I just believe the market has realized we might be a better fit for where most want to go.
Chuck, at some point in the future I’d like to work with you. It may never happen at EMC or NetApp, but I would like it to happen. You’re a fierce competitor who is thought-provoking and evangelizes on behalf of your company and, indirectly, the storage industry. Again, thank you for the comments.
@Marc
You’ve got me on this one… What’s a lab queen?
Someone sent me an email estimating that the cost of the EMC test bed used in the SPEC tests was $6M.
While I am not in sales, I would estimate that the FAS3160 config is in the $300-400k range.
So here’s my math: two FAS3160s outperform the EMC mountain of hardware at roughly 1/7th the cost, with more capacity, a smaller footprint (36U), and served on fewer file systems.
Can someone check my math?
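For anyone inclined to check it, a minimal sketch of that math, using the $6M estimate above and the $300-400k-per-FAS3160 guess (both ballpark figures from this thread, not quotes):

```python
# Cost-ratio estimate based on the ballpark figures in this comment thread.
emc_estimate = 6_000_000
two_fas3160_low, two_fas3160_high = 2 * 300_000, 2 * 400_000

print(emc_estimate / two_fas3160_high)  # ~7.5x -> roughly "1/7th the cost"
print(emc_estimate / two_fas3160_low)   # ~10x at the low end of the estimate
```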
Also, Chuck posed the same PAM question on my blog. I gave a little more detail on the impact of writes with PAM. I don’t think I violated any NDA since all of this stuff has been talked about publicly or at least published in one of our whitepapers, KB articles or manuals.
http://blogs.netapp.com/efficiency/2010/02/this-or-that-this-and-that-it-makes-a-difference.html#comments
Mike
A lab queen is a system that you would only find in a lab – too expensive and tricked out for the real world.
Vaughn!
Thank you for your thoughtful comments.
“NetApp owns ~90% of the VMware View (aka VDI) market share” — that’s simply amazing!
That’s funny; I subscribe to all the independents (IDC, Gartner, et al.) and I’ve never seen that stat from anyone.
If it’s actually true, that would be an amazing achievement on your part.
If it’s a made-up number, that would be more reinforcement of my premise that — yes — NetApp is not beyond making stuff up when it suits their purpose.
So, anywhere we can find the data behind it? Someone showed me some slideshow thingie, but no sources were mentioned, or even the methodology.
Thanks!
Hey Vaughn – I always enjoy your posts. This is a fun one. 🙂
One thing has me confused… early on you ask, “had EMC unleashed some new technology which could revolutionize the storage industry?”
The answer is yes, and you skipped over this. While the configuration EMC used is a larger and faster frame than what NetApp tested with (and how did we get to comparing the two anyway? Since we didn’t build an array to target 60k IOPS, we don’t know that the two would be that different!), it is still industry-changing technology.
The V-Max with EFDs is truly revolutionary and can deliver some amazing performance results, even in smaller configurations than this one. You imply this configuration is not for the real world, but maybe you mean it is only a play for the high-end market – a space in which NetApp still isn’t taken seriously.
@Chuck you’ll have to wait for more details. I know that’s not what you want to hear, but it is what it is.
@Adrian long time no speak! Kudos to you on your new role @ EMC, setting direction for the EMC–Microsoft Hyper-V partnership. Well deserved. Following up on our thoughts…
What was tested is the latest and greatest from EMC, which has a speculative cost of around $6M. This config achieved roughly 110,000-118,000 IOPs (NFS and CIFS, respectively). Great numbers. Cool tech. No challenges there.
My point is pure economics: why would a customer spend $6M on said config when they could alternatively spend approximately $600k-$800k for two mid-tier NetApp arrays, where the config consumes significantly less rack space, provides more usable capacity, and achieves 60,000 IOPs from each unit for a total output of 120,000 IOPs?
NetApp also has a submission with a high-end array config, albeit an older publication, achieving 120,000 IOPs with a FAS6080 array. This config provides 344% of the storage capacity of the EMC config and costs much less than the suggested $6M of the EMC config.
So, let’s get real here.
Customers look to purchase storage on a number of metrics. One is cost per GB, another is cost per usable GB, and a third is cost per IOP. The EMC V-Max + Celerra + EFD combo loses to NetApp configurations on all three metrics. I applaud your ability to defend your architecture as being production-worthy and feasible for the market.
Do you pay 10X the market value for food? What about consumer electronics? I doubt it. Then why would you expect your customer base to consider the demonstrated configuration as being realistic? Again, I doubt it.
Thanks @marcfarley for teaching me what a lab queen was. Sounded scary at first.
– Polly Pearson
Another great post on this topic is available from Nick.
http://blogs.netapp.com/storage_nuts_n_bolts/2010/02/how-to-do-less-with-more.html
Boy do these SPEC arguments devolve into intense battles, huh?
Chuck says EMC did a “casual post” showing what can be done with SSD.
This strikes many as ridiculous, as the commercial value of the systems under test is dubious. I think the $/IOP and $/usable-GB pretty much say it all: you can build it, but nobody should want it because its value is highly questionable. Also, there are clearly better ways of skinning the cat.
It does seem like an odd place to put a “technology demo”, Chuck – the SPEC results database? No?
Regarding unleashing a new technology that is revolutionizing the storage industry, I have to say there are ingredients of change here, but we have a way to go, don’t we? After all, just look at the total IOPS EMC is getting relative to the unbottlenecked capability of the EFDs in the set-up: can you say bottleneck? Maybe a 6X bottleneck? God forbid the SSDs were any faster, or you added any more of them.
I think Marc adapted the term aviators use for a plane that spends all its time in the hangar because it is so unreliable – hangar queen… replace hangar with lab, and unreliable with, well, worthless? I don’t know.
I see your point Chuck, but it is odd to pop up after two and a half years with a SPEC posting like this one.
Mike
@Mike,
I don’t know. Let’s give EMC the benefit of the doubt on that one. After all, with all that inventory overhang http://bit.ly/5io7Fa they very well might just have piles of EFD lying about the lab.
What I thought was more interesting was how many backend disk IOPS EMC expended for each CIFS op.
They used 96 of these http://bit.ly/c2ZeJd in a V-Max, so with somewhere around 6 million IOPs available, the three-node Celerra (two active, one standby) produced 120K CIFS ops. That works out to something like 50 disk IOPS on the backend for every CIFS op out.
In the Apple config, for the CIFS, it works out to something like 1 disk IOP for every two CIFS ops. In the Apple NFS config, it works out to something like 1.5 disk IOPS/NFS op. In the NetApp NFS 6080 config, it works out to about 1 disk IOP for every two NFS ops. In the 3160 config, it’s more like 1 disk IOP for every 6 NFS ops.
Seems to me that the EMC configuration was very lossy. It’s not exactly what I’d call efficient. It takes EMC 50 disk IOPS to do what it takes Apple or NetApp less than 1 disk IOP to do. No wonder EMC needs EFD.
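A minimal sketch of that ratio, assuming ~60K IOPS per EFD (the figure cited elsewhere in this thread) and counting available back-end IOPS rather than measured disk I/O:

```python
# Available back-end disk IOPS per delivered CIFS op for the EMC config.
efd_count, iops_per_efd = 96, 60_000   # assumed per-EFD figure from this thread
cifs_ops = 118_463                     # EMC SPECsfs2008 CIFS result

backend_available = efd_count * iops_per_efd   # ~5.76M back-end IOPS available
print(backend_available / cifs_ops)            # ~49 disk IOPS per CIFS op
```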
John
The backend was removed entirely from this test as it was a test of the front end DataMovers.
From the results you can see exactly where the redlines are for CIFS and NFS and that they’re comparable regardless of the protocol.
No weird drop off as shown in some of the other results posted by those who tested both protocols and not just one protocol.
The test used exactly the required amount of storage with all shipping components. No hardware or weird cuts of software in the testbed six months before any customer can get their hands on them.
What I find interesting is that in the FAS6080 test, out of the 324 drives used, only 72 were required to support the test data set.
What were the other 252 drives for?
That’s a box configured with 4.5 times the required storage.
A 15K RPM FC drive is good for about 200-250 IOPS. You have to consider BOTH IOPS and space. At 1 disk IOP per 2 NFS ops, the NetApp 6080 config used 324 15K FC spindles.
The EMC V-Max config used 96 EFDs @ 60K IOPS per spindle, per the manufacturer documentation. That works out to 50 disk IOPS per CIFS op. That’s an efficiency 100X worse than either the NetApp 6080 result or the Apple Xserve result.
John
@Mark thanks for the comments. I’ll agree with you that the FAS6080 results required a lot of hardware. I’m not sure why the decision was made not to include PAM II versus gobs of spinning rusted metal (I’m paraphrasing Chuck here).
The same logic applies to the question of why only two Celerra datamovers were active. If the V-Max wasn’t sweating, then more datamovers would have provided greater results, so why the absence of hardware? Was a $6M config OK, but $7M viewed as excessive?
I look forward to the next submissions from both EMC and NetApp, I am confident challenges to the existing results will be addressed.
I have results for that right here. Four Data Movers was over 200K IOPs. It scales linearly as you add Data Movers because… say it with me… the backend isn’t the bottleneck.
But two is always going to be the base DataMover configuration. And that’s what’s serving the NFS/CIFS IOPS, the DataMovers.
While I’m sure you’d like to make out that this was a systems benchmark, if that were the case you’d never have provisioned four and a half times the storage required for the workload in the 6080 test.
One could call shenanigans right there if one so chose. 252 excess disk drives.
But it’s not a systems test, and all that tells anyone is that there was 4.5 times the rust in the box; assuming it wasn’t the backend that choked, the NFS result tops out when you redline the 6080 controller pair.
When the controllers/DataMovers/whatever they’re called in FAS-land choke, that’s your SPECsfs number.
There’s nothing beyond that in the benchmark; that’s the number.
I look forward to reading your CIFS submission.
@Mark,
For that, look to the 3160 result which did use PAM II. It used 56 spindles of good old spinning rust to reach just over half of what EMC did with 96 EFDs. So, just use 2 of the 3160s. With 112 spindles of rust, you get the performance of 96 EFDs on EMC.
So, you’re telling me it’s ok to use 50 disk ops per CIFS op, when even Apple gets 0.5 disk ops per CIFS op. There’s something wrong with that picture.
John
I suspect the only people still reading by now are just the motivated vendor debaters, but alas … 🙂
NetApp have only scratched the surface with PAM II powered benchmark results, both block & file. As always we will stick to our emphasis on practical applicability (i.e. affordability) of these results, since we already know we can easily hit any artificial max performance milestone when money is no object.
Throughout 2010, expect NetApp to rock the industry with (revolutionary PAM II accelerated) SATA-based high performance benchmark results. That’s what *real* customers need – and that’s what NetApp is delivering.
The “usable benchmark” bar has been set by NetApp. This year we will keep raising it. Let’s see who else can keep up – or even stay in the same ballpark! 🙂
@Val @John
Well said, customers want data that helps them make a sound decision… Period.
This message was sent by my thumbs… fault them for the brevity and poor spelling
Simply speaking, PAMII + WAY LESS old-fashioned disks = cost savings for pretty good performance.
I’ve seen several workloads (including one on DMX with over 400 drives) where FAS + PAM + way less drives provide equal if not better performance.
Ultimately, that’s what customers WANT and NEED.
Bang-for-buck.
And PAM delivers that in spades.
D
Vaughn,
I suspect that there is a lot of inference occurring here when it comes to the CIFS-vs-Disk IOPS and price/performance argument.
Specifically, you can’t infer that the backend V-Max was actually servicing 6 million IOPs just because its configuration is theoretically capable of doing so. As you know, CIFS and NFS ops are NOT disk IOs. It is pretty clear from the sizing of the V-Max in this test (and supported by Storagezilla’s comments) that the goal here was to test the performance of the NS datamovers, NOT the V-Max, and as such the V-Max was configured for far more throughput than necessary for the test. That being said, making any sort of argument that EMC must need 50 IOPS for a single CIFS op is impossible with the data at hand, and would be unbelievable anyway.
Further, the context of the test being on the datamovers themselves also makes any sort of price/performance conclusion impossible. A customer requiring 100,000 CIFS Ops would work with EMC to size the backend appropriately and I would venture an educated guess that the backend would be smaller than what was used in this lab scenario.
The only way to get a valid price/performance comparison between two systems is to size the front-end AND back-end for the workload.
The beauty of the EMC approach is you have the ability to address bottlenecks where they exist rather than rip and replace or having to purchase a disparate system.
Well – what I don’t get is why EMC didn’t use all 8 Data Movers, since that’s the max for the NS-G8. That way, since the back-end is not the issue, EMC could have posted a better-than-2x number (7 active DMs instead of 3).
Thoughts?
As to why the benchmark didn’t include more datamovers… Only EMC knows.
This message was sent by my thumbs… fault them for the brevity and poor spelling
@Storagesavvy – I respectfully disagree with the assertion that EMC was merely testing the scaling limits of the Celerra and that the backend was not hitting I/O limits, as the config tested included 3 Celerra data movers (2 active, one passive).
My position is that if the V-Max had underutilized resources, it was a very odd decision not to enable the 3rd Celerra data mover (the passive DM) and increase the published benchmark results by 50%. Why would one make such a decision?
It is illogical to ask us to rationally accept that EMC could have published a result of approximately 160,000 IOPs without increasing the hardware in the test bed, yet simply chose not to.