Last night I got caught in what I would classify as a tweet-fight with several of my chums over at EMC on the role storage arrays play within a virtual datacenter. The conversation was pretty intense and at one point I posted the following string of tweets…
@sakacc @chuckhollis – Guys EMC & NetApp just look at virtualization differently. Allow me to explain…
@sakacc @chuckhollis – VMware allows customers to share CPU, memory, and network ports among multiple VMs allowing a reduction in servers
@sakacc @chuckhollis – Cisco allows customers to share ports among multiple connection protocols thus reducing network and storage switches
@sakacc @chuckhollis – NetApp allows customers to share disk capacity at a sub VM level among multiple VMs which reduces total storage
@sakacc @chuckhollis – EMC provides shared disk access to multiple VMs. Shared access is not shared resource usage; this is not virtualized
@sakacc @chuckhollis – VMware, Cisco, & NetApp want customers to purchase less hardware. What is EMC’s plan to reduce their HW footprint?
As you can see, my premise is clear: for customers to be successful in their virtualization efforts, they must virtualize the entire datacenter, and that includes servers, networks, and storage.
Who decided that server and network hardware should be reduced while the shared storage footprint is permitted to grow uncontrollably?
So after last night’s rumble I find that Chuck Hollis dropped out of the conversation in order to craft his thoughts in a blog post on the key component of this discussion – storage virtualization should provide hardware reduction in the same manner as server and network virtualization technologies, and that begins with deduplicating production workloads.
I’m unclear as to Chuck’s intention. Is he warning potential customers based on…
• Facts which he will share with us
• Misinformation provided to him by an EMC competitive analysis
• Fear mongering because EMC doesn’t have a comparable offering
As Chuck is an upstanding individual who has been in the storage industry since the mainframe days, I’d like to suggest that he is merely misinformed and devoid of malice.
Let’s Review Chuck’s Misunderstandings
Point One: I/O Density
“Now let’s consider the primary data store use case. By definition, it’s a “hot” storage workload. Maybe you’ve taken a database that used to run on 20 disks, and now found that you can fit it on 10. The I/O density of those 10 disks has now doubled.”
Point Two: Disks are Inherently Slow
“Typically, when storage admins run into I/O density problems, they have two fundamental approaches: more disks, or faster disks… Indeed, if one of those media types is enterprise flash, primary storage dedupe can create incredible I/O densities, and we’re good with that. “
Point Three: Disks Fail
“And when they fail, the array has to rebuild them. This inevitably puts a big hurt on I/O response times during the rebuild — different schemes have different impacts. Mirroring schemes tend to have less impact than parity schemes, but require (wait for it!) more storage.”
Chuck You Are Correct on Every Point!
What would you expect me to say? Chuck’s not stupid; in fact, he’s very clever. He’s so clever that he is only providing partial information in order to make his point and influence your purchasing decisions.
The Truth with Full Disclosure
Let’s take Chuck’s concerns out of order so they make more sense here.
Disks are Inherently Slow
Regardless of rotational speed or drive type, disk drives are slow. Traditionally, performance is increased by providing a combination of storage array cache and additional disk drives to the array. Did I miss the storage best practice of adding lots of disks rather than a small amount of cache when performance gains are required?
Starve any workload of cache and performance suffers horribly.
This is why Data ONTAP can deduplicate the storage array cache of NetApp FAS, IBM N-Series, and third-party arrays with our vSeries. With Intelligent Caching we eliminate redundant data within the cache, resulting in substantially more effective cache available to serve the workload. This technology was covered extensively in my post – VCE-101: Deduplication: Storage Capacity and Array Cache.
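To illustrate the concept, here is a rough sketch of a dedupe-aware read cache (Python pseudocode of my own, not our actual implementation; the class and helper names are hypothetical). Blocks are tracked by a content fingerprint, so a thousand identical blocks read by a thousand cloned VMs occupy a single cache slot instead of a thousand.

```python
import hashlib
from collections import OrderedDict

class DedupeAwareCache:
    """Toy read cache keyed by block-content fingerprint, with LRU eviction.

    Illustrative only: many block addresses can map to one fingerprint, so
    identical blocks share a single cached copy.
    """

    def __init__(self, max_blocks):
        self.max_blocks = max_blocks
        self.addr_to_fp = {}             # block address -> content fingerprint
        self.fp_to_data = OrderedDict()  # fingerprint -> cached block (LRU order)

    def read(self, addr, read_from_disk):
        fp = self.addr_to_fp.get(addr)
        if fp is not None and fp in self.fp_to_data:
            self.fp_to_data.move_to_end(fp)      # cache hit: refresh LRU position
            return self.fp_to_data[fp]
        data = read_from_disk(addr)              # cache miss: go to disk
        fp = hashlib.sha1(data).hexdigest()
        self.addr_to_fp[addr] = fp
        if fp not in self.fp_to_data:            # duplicate content is cached once
            if len(self.fp_to_data) >= self.max_blocks:
                self.fp_to_data.popitem(last=False)   # evict least recently used
            self.fp_to_data[fp] = data
        return data
```

With a cache like this, 1,000 cloned desktops reading the same OS blocks warm a single copy of each hot block, which is the effect Intelligent Caching is after.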
I/O Density Impacts Performance
Per-disk I/O density does increase when you remove disks, and an excellent example of this is virtual desktops, or VDI. With VDI, customers want to leverage a single desktop image in order to serve thousands of desktop users. For more on VDI see my post – A New Era for Virtual Desktops
The proposed I/O density issue is addressed by array cache (see the section above). I’d like to introduce some proof points…
Below are the results of the most I/O-intensive operation known to VDI, called a ‘boot storm.’ In this test I am simultaneously booting 1,000 virtual desktops. This activity is known as the bane of VDI – don’t take my word for it, ask VDI expert Brian Madden.
In the test run we have 1,000 desktops, each 10 GB in size. In addition, the dataset includes a 512 MB vswap file and 4 GB of user data per desktop. This test serves 14.5 TB of logical data on 5.2 TB of physical storage on a FAS3170 mid-tier array with a PAM I module installed.
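The math on that dataset is simple; here is the back-of-the-envelope version (a sketch using only the per-desktop sizes stated above):

```python
# Back-of-the-envelope math for the 1,000-desktop boot storm dataset
desktops = 1000
os_image_gb = 10.0   # per-desktop OS image
vswap_gb = 0.5       # 512 MB vswap file per desktop
user_data_gb = 4.0   # user data per desktop

logical_tb = desktops * (os_image_gb + vswap_gb + user_data_gb) / 1000
physical_tb = 5.2    # physical capacity consumed on the FAS3170

print(f"Logical data served: {logical_tb:.1f} TB")              # 14.5 TB
print(f"Physical storage used: {physical_tb} TB")
print(f"Capacity savings: {1 - physical_tb / logical_tb:.0%}")  # ~64%
```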
Below are the test results. As you can see, we have very good results just running on the deduplicated dataset; however, I’d like to highlight that at the 15:39 mark we enable Intelligent Caching. Note that the total data being served remains constant at ~250 MB/s while the disk I/O is reduced by ~60%. As an extra bonus, I/O latency is reduced by ~90%.
[Charts: data served (MB/s), disk IOPS, and I/O latency during the 1,000-desktop boot storm]
A special thanks to Chris Gebhardt for whipping up this little test for me.
I believe this data clearly demonstrates that customers can run deduplicated datasets without any performance impact. If you need more data, try these posts – The Highlight of SNW and Deduplication Guarantee from NetApp – Fact or Fiction?
If you need more data points, maybe you could ping a few of the VMware vExperts, Chad knows who they are, and ask them what they are seeing with deduplicating their production data sets.
Disks Fail
This is true: spinning disks are prone to failure, and when they fail the storage array spends a tremendous amount of resources rebuilding their content. I cannot speak for EMC, but NetApp arrays monitor the health of disk drives, allowing the array to identify and proactively replace drives before they physically fail.
Proactive failing catches greater than 99% of drive failures. Come on, Chuck, EMC must offer something similar to this technology. The arrays don’t still rebuild drives from parity sets, do they?
Deduplicating Production Data Completes the Cloud / Virtual Datacenter
It is well known that shared storage is required for the high availability features of VMware, Hyper-V, XenServer, etc. So while others are consolidating, storage vendors are enjoying a boom – every virtualized system must move from direct-attached storage to shared storage.
Might it be possible that vendors of traditional, legacy storage array architectures want to pooh-pooh storage savings technologies like dedupe in order to preserve their ability to grow their footprint and revenue based on your virtualization efforts? The more you virtualize, the more storage you must buy…
In addition to the production footprint, server virtualization makes backup to tape difficult and DR very easy (thank you, VMware, for SRM). This statement may be obvious to most, but both of these business continuity functions require additional storage arrays.
The Pervasive Effect of Dedupe
What EMC, HP, HDS, Dell, and other traditional storage array vendors don’t want to tell you is that by deduplicating the production footprint you realize savings throughout your data center. Backup disk pools are reduced, storage for DR is cut in half, and replication bandwidth requirements receive the same savings.
To wrap up this post, may I offer these demos from VMworld 2009:
Technical Details on Running VMware on Deduplicated Storage
The Pervasive Effect of Storage Reduction Technologies
In Closing
Granted, not everyone is going to dedupe every dataset. But as more servers are virtualized, more data will reside on our shared storage platforms – just think of the capacity reductions from deduplicating even 80% of these datasets (and their numerous versions and copies for backup, DR, test & dev, etc.).
Customers win with dedupe; it enables storage utilization rates to rise to be on par with what is available with server and network virtualization technologies.
As I often say, ‘Virtualization Changes Everything’ including one’s understanding of storage architectures.
EMC has SSD’s and touts the many benefits they bring. NetApp does not have them, and explains why you shouldn’t use them or why you don’t need them. NetApp can de-dup primary storage and touts the many benefits. EMC cannot de-dup primary storage and explains why you shouldn’t use it or why you don’t need it. I think I have this figured out!
Now for the assumptions that Chuck made. Just because we free up more space via primary de-dup doesn’t mean we will keep piling more VMs, databases, files, etc. onto the same spindles. What we have found is that by reclaiming space on our volumes we gain flexibility and functionality. We have more room for things like snapshots, QA environments, and cloned data. We have drastically cut the amount of data we replicate over the WAN as well!
Additionally, imagine how nice it would be if you just had the option to de-dup primary data! Maybe your budgets were slashed this year, but the business still needs additional room for the aforementioned databases and VMs. Now, I can de-dup my primary data to make room for those other things!
Chuck also mentions things like file stores and databases for primary de-dup. Now, while you certainly can de-dup any type of production data you want, the real question is why would we? De-duping an SQL database, for example, isn’t going to gain me much if anything in terms of duplicate blocks. Now, if I had 2 copies of the exact same database on the same volume, then yeah, I could de-dup and get all that back. Of course I don’t need to since I can already make 0 space copies (flexclone!) with no performance hit.
I am a current NetApp customer, and a former EMC customer. I was an EMC customer at my last job, and I chose that storage because it fit what we needed to do. I would make the same choice in that environment again. At my current employer NetApp was the right call. With each iteration of ONTAP we keep getting more and more features that reaffirm our decision. Customers need to be educated on their options, and while I don’t agree with the stance Chuck took, I didn’t think it required an aggressive rebuttal from NetApp. The arguments he made were debatable, not deplorable, and that’s just my 2 cents!
I think the part that EMC chooses not to accept is the efficiency of de-duped cache.
Let’s say I have 1000 Windows VMs on a Traditional Legacy Array, and 1000 Windows VMs de-duped on a NetApp array. I then boot all of the VMs at the same time.
On the traditional legacy array, each VM begins reading files. A thousand copies of each and every file overflow the cache in short order, making it ineffective. The result is reads from disk, and a lot of them.
On the NetApp array, instead of 1000 copies of each file being read into cache, only a single copy of each file is read into cache. A far smaller cache then becomes more effective than a much larger cache on the TLA.
When you add PAM, the most frequently used items remain in the main cache in RAM on the controller. Those items that are accessed a bit less frequently move to PAM instead of being tossed when the main cache fills. By design, the most frequently accessed items are on the fastest media automatically. Those that are a little less frequently accessed are still on fast media, albeit a bit slower than main cache. A copy of all data, including cold data that is infrequently accessed, remains on disk. There is no Rube Goldberg vaporware to shuffle LUNs or 768K chunks of data around consuming IOPS and bandwidth; it’s not needed. The most frequently accessed data automatically flows up to the fastest media.
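As a rough illustration of that tiering behavior (a sketch with hypothetical names, not NetApp’s actual implementation), a two-level read cache can demote blocks evicted from controller RAM into a larger PAM-like extension tier instead of discarding them, so warm data stays on fast media while every block still lives on disk:

```python
from collections import OrderedDict

class TwoTierReadCache:
    """Toy two-level read cache: a small RAM tier backed by a larger extension tier.

    Blocks evicted from the RAM tier are demoted to the extension tier rather
    than dropped; a hit in the extension tier promotes the block back to RAM.
    """

    def __init__(self, ram_blocks, ext_blocks):
        self.ram = OrderedDict()   # hottest blocks, LRU order
        self.ext = OrderedDict()   # warm blocks demoted from RAM
        self.ram_blocks = ram_blocks
        self.ext_blocks = ext_blocks

    def read(self, addr, read_from_disk):
        if addr in self.ram:                 # hit in the hottest tier
            self.ram.move_to_end(addr)
            return self.ram[addr]
        if addr in self.ext:                 # hit in the warm tier: promote it
            data = self.ext.pop(addr)
        else:                                # cold data: read from disk
            data = read_from_disk(addr)
        self._insert_ram(addr, data)
        return data

    def _insert_ram(self, addr, data):
        if len(self.ram) >= self.ram_blocks:
            old_addr, old_data = self.ram.popitem(last=False)
            self.ext[old_addr] = old_data    # demote instead of discard
            if len(self.ext) > self.ext_blocks:
                self.ext.popitem(last=False) # truly cold data falls out of cache
        self.ram[addr] = data
```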
John
Erick – thanks for the follow-up. I agree, my reply was a bit aggressive; for that, my apologies.
As for SSD, you make a great point which I did not address. SSD is the medium on which all data stored on disk will eventually reside. SSDs are an ideal medium to combine with deduplicated working sets as they are ultra-dense in terms of IOPS.
The problem with SSDs is that they are cost-prohibitive to deploy in any volume today. This issue will resolve itself in the near future, but until that time comes NetApp provides SSD-class performance for data stored on FC & SATA disks via our PAM II modular cache expansions.
PAM II modules are available in 256GB and 512GB capacities, and when combined with Intelligent Caching they allow customers to load practically entire working sets into cache, resulting in the performance capabilities of SSD while using existing disk technologies.
Thanks for the comments and for being a NetApp customer.
Hi Vaughn
I must have missed your twitterstorm last night. I was eating dinner with my family, and didn’t see it until the middle of the next day.
First, I admit I’m unclear as to whether PAM is intended as read-only cache, or is designed to be non-volatile and handle writes as well.
If it’s only the former, I’d be interested in how you address I/O density for write intensive environments. Given that sustained write performance hasn’t been exactly a WAFL strong point in the past, I think that’s a fair question.
Claiming that PAMs offer “the performance of SSDs” would mean that they could be used as SSDs, i.e. for writes as well as reads.
Second, read cache for storage is not NetApp magic juju. EMC has had large read/write caches on our products since the early 1990s.
As you know, most operating systems and databases cache frequently used blocks in the server, presenting a relatively random profile to the storage array, making read cache significantly less effective for many production use cases.
And, of course, read cache is of limited use for most sequential I/O streams.
You did hit on a particular case where read caches do seem to make a difference, but you have to admit that large-scale VDI boots present a very unique I/O pattern.
Looks like you’re getting good results from your PAM card there. Looks like the exact same result you’d get from using VMware’s linked clones feature on any array with a decent read cache.
BTW, how does it look for normal production workloads? Oracle, Exchange, SQLserver — you know, “any production workload”?
I know it’s an uncomfortable question, but you and the others keep dancing around the central argument, which I will repeat here for convenience.
1. Dedupe increases I/O density which can be a problem for production workloads.
2. Remediating I/O density problems takes more resources: spindles, cache, PAMs, tuning effort, whatever.
3. When NetApp claims they “save money on storage” when used for primary storage, do they include the costs of remediation?
As an example, it’d be interesting to compare the relative costs of the ~9TB of usable storage saved in your example (after factoring in formatting overhead, reserves, licenses, etc.) with the cost of the jumbo 512 GB PAM cards used to remediate performance.
Those half-terabyte memory cards don’t look cheap.
A similar “save money” proposition from the good people at NetApp had us all stumped a while back.
There was a claim that the V thingie would “pay for itself” with reclaimed storage capacity.
No matter how hard we worked the numbers (using NetApp’s published data), we never could come up with a scenario where that would be true. NetApp never provided an example either.
And, finally, I agree with Erick. Time to seriously consider decaf.
— Chuck
I think the most irksome thing for EMC has to do with relevance. When it came to SSD technology, I don’t *think* we argued against it (not sure – different sales teams take different approaches). Personally, I hope not since we qualified TMS arrays with V-Series and our very own Jay Kidd announced that we will ship SSDs w/the DS4243 shelves in the near future.
NetApp simply had a different flash strategy. EMC used SSD as a disk replacement strategy; another disk qualification – with limitations. No more innovative really than qualifying SATA drives.
NetApp pursued an ONTAP + PAM/PAM II strategy. ONTAP had features such as Thin Provisioning, Cloning and Dedupe. You take those capabilities; you look at flash technology and the most appropriate starting place for us was a PAM/PAM II strategy. All that stuff that Chuck warns about – boot storms, I/O density issues? Yeah, we don’t have those. That bothers EMC. Intelligent caching of shared blocks (via cloning or dedupe) dramatically mitigates those issues and effectively delivers on the promise of FAST today. That has to be a concern for EMC who has pursued a disk qualification strategy and who only has FAST working on overhead projectors at the moment. The EMC FAST message is largely irrelevant before it even ships so you have to see if you can get any traction on the “yeah, dedupe is good and all but it really doesn’t save you all that much when you think about it” strategy. All I know is customers are loving the benefits (and I’m sure that bothers EMC a bit, too, since they don’t have a dog in this fight yet).
@Chuck,
We target PAM for the very same workloads as EMC targets SSD (per EMC documentation): highly random reads, so you’re offering up a red herring on writes. That’s not where EMC recommends using SSDs.
Not taking the bait on the WAFL write performance. Another EMC Yeti story on WAFL. Love the effort, though. Keep clicking those slippers together!
That big read cache you refer to with EMC is used as a victim cache storing tracks of data. It’s a guess-ahead lobby that, yes, does reflect the ’90s era in which it was born. PAM is “dedupe aware,” so efficiency on disk translates to efficiency in cache, and WAFL has a little more intelligence as to which blocks are related to each other, so I look at it as more of an educated guess on our part. I know EMC is trying to shim WAFL-like file systems into their arrays now, but you don’t have that intelligence layer in there yet. If you did, we’d be reading a post on how EMC did WAFL right. I’m sure I can look forward to that post, though.
As an aside, you disable cache for SSD drives and don’t support dedupe or thin provisioning on them. There’s a good use case there, but you have to do more than simply offer up a faster set of drives, especially if you make customers compromise on other efficiency features.
The PAM cards aren’t free (I’ve actually seen the pricing) but they do cost less than the multiple trays of disks they can replace. The SPECsfs benchmark info we published with the PAM II announcement shows identical performance profiles on a system without PAM II and a system with PAM II and 75% fewer drives. Even if the hardware costs were the same, the power and cooling costs alone are a huge savings.
And, yep, that V-thingy really can pay for itself because we can do things with EMC SSD drives that EMC can’t – like dedupe, thin provision, clone. Weird, huh?
So, I don’t think we’re debating whether or not primary dedupe saves customers money. You just don’t believe it will save as much as we believe it will. The cool thing is we both believe it will save customers money. Other than the fact that EMC doesn’t support dedupe on primary storage, what’s not to like?
@Chuck: “large-scale VDI boots present a very unique I/O pattern”
The same sorts of things happen with virus scans on all kinds of virtualised server infrastructure, and these happen on an annoyingly regular basis.
They also occur if you have a backup product that uses an agent in the virtual machine to do “incremental forever” style backups, or an SRM tool that walks the filesystem looking for new/changed files.
All of these are relatively common workloads in virtualised environments that, without the benefit of something like primary storage deduplication with intelligent caching, may require a bucket load of spindles to complete within a reasonable timeframe.
I have heard from customers that the solutions to these problems recommended by traditional storage vendors have been so expensive that they would kill the ROI of the VDI project to the point where, without NetApp, the project never would have gone ahead at all.
It’s not just about cost savings in the storage; it’s about what those savings enable the company to do in the rest of their infrastructure.
Guys, this is getting downright silly on your part.
I don’t expect you to admit in public that you’ve got a serious issue, but the FUD here is coming fast and furious from you folks.
Let’s get started, shall we?
Why is it that no one at NetApp is willing to say “we save $X on physical storage but to overcome performance problems you have to spend $Y on a big fat memory card and hope you’ll mask the problem”?
That would mean you’re being honest with customers and the marketplace, rather than just hyping people.
SSDs *can* be used for writes, guys, unlike PAM. Don’t be silly. Read the documentation carefully.
Their performance profile isn’t as stellar as for random reads, but you’re far better off than with spinning disk. Keep in mind that large amounts of nonvolatile storage cache (such as found on EMC products) do a great job of soaking up writes — another topic we’ll have to bring up with you before too long.
How come you guys don’t want to talk about real-world production workloads? Oracle, SQLserver, Exchange, etc.?
Uncomfortable topic?
Buying a few half-terabyte memory cards to make a virus scan run faster, or an SRM file walk go faster?
I wouldn’t be able to say that with a straight face in front of a customer.
SPECsfs? Huh? Next thing you know, you’ll be claiming that the SPC is a real-world test 🙂
All of us who’ve taken the time to really test your products have seen the exact same thing — dramatic performance degradation under sustained write loads.
For the very latest published result from your competitor HP, please see here:
http://www.communities.hp.com/online/blogs/datastorage/archive/2009/09/26/understanding-fas-esrp-results.aspx
The effect is repeatable to anyone who takes the time to let any write-intensive test run long enough on a full system. We’ve all seen it again and again and again.
And we both know primary dedupe doesn’t help this effect any, does it?
@Mike — I don’t know where you’re getting your information regarding how EMC uses SSDs in its products. I’d be glad to send you some documentation so that you can be accurate in front of your customers. Or you can take one of our classes.
We both know that you strive for accuracy!
Also, all you have to do to shut me up on the V thingie is do a simple side-by-side comparison with “money saved on storage” on one side of the equation and “overall cost of the V thingie” on the other.
Just like you’d do for any customer. It’s that simple.
Guess I exposed a sensitive issue here!
— Chuck
A couple of thoughts on the whole NetApp vs EMC as a Virtual Infrastructure storage back end Twitterstorm of Thursday night. Let me start off with this disclaimer. I am speaking only for myself and not as a representative of CSC management. Furthermore, let me also say that both EMC and NetApp are valued alliance partners for CSC and we do good business worldwide with both parties.
I must confess a bias towards NAS storage for Virtual Infrastructures as opposed to Block access protocols. I believe that the abstraction of NAS greatly reduces the amount of administrative effort required to configure and support the VMWare storage environment. There are fewer places where defects (misconfigurations) can be injected in the process and therefore reliability is higher. Working for a service provider, I want simple, reliable and cheap. Because you are dealing with objects in a file system, there are fewer administrative touches required. The overall level of staff training can be lower. Properly engineered, NAS should generate a cheaper overall virtual infrastructure than one based on shared storage connected via a SAN.
Although I hear that FC performance has been improved in vSphere, in ESX 3, NFS demonstrated superior performance characteristics under higher concurrent I/O workloads with more graceful degradation as more VMs were added. NFS was only edged out in raw throughput by FC when a single virtual machine was running. There are workloads where FC is the right choice, but the vast majority of VMWare workloads work just fine on NAS. With NAS, there is no need to go for FCoE in order to get to a converged datacenter network fabric: you already have one in your existing data center LAN.
Everyone’s mileage varies, but most “average” VMs in our environments seem to generate on the order of 25 I/Os per second at 8KB block sizes. If you multiply these rates by 320 (or more) virtual machines in a single blade chassis, our experiences bear out Chuck’s statement that (paraphrased) spindle count will be the biggest determinant of how the back-end storage performs at these I/O densities. Flash drives and FAST-like in-box tiering will be welcome improvements (shoutout to AmberRoad 😉 ), but cache or flash drives are still comparatively scarce resources that can eventually be saturated. I think the number of spindles will still matter going forward, but perhaps not as much as today.
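To put rough numbers on that (a quick sketch; the per-spindle IOPS rating is an illustrative assumption, not a measurement from our environment):

```python
# Rough spindle math for the workload described above
vms_per_chassis = 320
iops_per_vm = 25            # "average" VM at 8 KB block sizes
iops_per_spindle = 180      # assumed rating for a 15K RPM FC drive (illustrative)

total_iops = vms_per_chassis * iops_per_vm
spindles_needed = -(-total_iops // iops_per_spindle)    # ceiling division

print(f"Aggregate demand: {total_iops} IOPS per chassis")                      # 8,000 IOPS
print(f"Spindles needed (ignoring RAID/write penalties): {spindles_needed}")   # ~45
```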
As to the dread “boot storm”, CSC designed its first iteration of Dynamic Desktop (VDI) infrastructure around NetApp with Cache (PAM) accelerator boards. Chuck implied that the cache memory would be prohibitively expensive, but we found that the additional cost of the cache was far offset by the 5X scaling factor we were able to get. So in our case, paying for the additional cache was a significant net cost saving to us.
For me, the critical attributes of VI back-end NAS storage are as follows:
1) Writeable snapshots – Create a generic, gold image of an operating system and stamp it out so there is minimal storage consumed by each new VM instance. Just the blocks with different data between images are consumed.
2) Data deduplication – As you patch each image, each “branch” of the writable snapshot will have a lot of duplicate data added to it. To optimize storage consumption, you need a process to come along behind and collapse these blocks to reclaim the duplicate space.
3) Tight integration between the storage device and Virtual Center to ensure consistent snapshots of virtual machine images can be efficiently generated without human intervention, and replicated (if required), as workload moves around the server farm.
In my opinion, NetApp has historically had the edge in these categories but EMC’s Celerra team is catching up rapidly. I believe that today, NetApp will likely retain a modest edge if all it comes down to is a NAS functionality shootout with Celerra for the VI back-end. But that is not the ground that EMC is choosing to fight the battle for long-term VI back-end supremacy on. Far more worrisome for NetApp it seems to me is that EMC is positioning itself to leapfrog NetApp in terms of fundamental enabling technologies on a couple of fronts.
First, EMC has acquired FastScale, and that technology takes dedupe to a whole new level for VMWare. Will there be teething pains? Of course, but for virtual infrastructures this technology works at a much higher level and yields memory savings in addition to disk space reductions. This will be pretty cool if EMC can get the price and reliability right for this technology.
Chuck has circled back a bunch of times on the barriers to vApp mobility. Data movement is the big problem he keeps coming back to. I have seen a spate of recent posts of his on the subject. Solving this is key to getting VMWare’s vCloud world off the ground. ATMOS is certainly looking to handle the lower I/O tier apps and aspires to be the file/object store of choice in the vCloud that can facilitate workload mobility and be a persistent object store. But what about distributed storage for high performance block applications like databases that need to move between datacenters in a hybrid cloud? What about VMDK mobility? There is still a gap in the technology to address these high I/O needs and really permit the vCloud ecosystem to take off.
I have to believe that given the increasing levels of cooperation and integration between VMWare and EMC (compared to the past), EMC will be well positioned to leverage or influence development of new hypervisor capabilities to address this high performance cloud I/O functionality gap. Since the hypervisor knows about “dirty” blocks and I/O hot spots, it is probably the layer best positioned to exploit shortcuts to keeping VMDK or vApp images in sync over a distance. To me, this seems to be the logical place to start looking at optimizations. That doesn’t preclude NetApp or other storage vendors from playing, but it probably means EMC will be first to market with a comprehensive set of solutions across all classes of I/O workloads in the vCloud. It also probably means that NetApp’s traditional strength in terms of efficient, low-cost distance replication will to some extent be negated by a combination of hypervisor functionality, advances by network equipment vendors, or refinements to array-based replication targeted at virtual server environments from other storage players like EMC, IBM and HDS.
For me, the questions come down to simple ones. How do new EMC and NetApp storage innovations enable me to lower the cost of delivering each virtual server instance? How do these storage technologies facilitate the widest range of vCloud application mobility? It does me no good to have cheap vCloud compute resources that are prohibitively expensive to access, either because of onerous network requirements or massively expensive storage infrastructure. If the infrastructure is cheap but complex and hard to support, that does me no good either. I think bun fights over the gory technical details of today’s storage technologies (let’s face it – all of which are fit for purpose, reliable enough, and perform well enough) are really proxy wars over which technical approach yields the best “real world” TCO today and whether or not the various vendor approaches will best facilitate workload mobility in the vCloud ecosystem going forward. Both companies have good technical solutions and sound value propositions. It will be interesting to see how their different approaches play out over the next couple of years.
I would welcome any comments on the above to improve my understanding of these issues.
Cross posted to (http://chucksblog.emc.com/chucks_blog/2009/09/a-quick-note-on-primary-data-dedupe-and-io-density.html)
Aw, Chuck, you’re fun to watch, I’ll give you that. You remind me of the Black Knight in Monty Python and the Holy Grail.
I merely mentioned that EMC recommends SSD for the same environments where we would recommend PAM cards: highly random read environments. You don’t need to send me the documentation. I have it right here and the first line under “Enterprise Flash drive (EFD) Storage” reads:
“EFDs offer the best performance with highly concurrent, small-block, read-intensive, random I/O workloads.”
If you like I can quote other EMC docs, but that’s where I’m getting my info. Does the instructor-led training use a different set of manuals?
I know there’s a use case for writes but we’re talking storage efficiency here not drive substitutions. If you support SSD but have to disable cache, virtual provisioning and dedupe, then what’s your point on efficiency exactly? There’s a benefit there for writes but not nearly what it could be if it negates valuable efficiency features.
If you don’t like SPEC then take down the EMC submissions. I was simply pointing out that you can achieve the same performance profile on NetApp gear with 1/4 of the drives using a PAM card. It was a NetApp to NetApp comparison. I’m not sure how this got to be about you but… ah crap I know how it got to be about you but we’re getting off topic.
If you want to talk about how we perform in Oracle environments, you can always ask Oracle. I hope more readers ask this same question if they’re headed out at Oracle Openworld in a couple of weeks and, you’re right, they should ask about V-Series when they’re there. We should absolutely offer to make that cost savings comparison.
Microsoft? We were just named Microsoft’s Storage Solutions Partner of the Year for 2009. I’m comfortable with that as a great place to start if you wanted to know more about how we perform in Microsoft Exchange and SQL environments.
And – before I sign off – were you seriously suggesting that HP is your 3rd party authority on NetApp performance? Now, I have to say, I didn’t see that one coming.
I don’t think you exposed a sensitive issue. I’m just hoping you’ll continue to tell customers your theories on WAFL write performance and that dedupe doesn’t buy them much. (I really hope this new strategy of using HP as a source of objective validation is added to the arsenal. That one is a peach!) Keep tossing these out there. I look forward to the posts where you threaten to bite our legs off.
@Mike, I’ve re-read my comments and yours, and — I have to admit — you’re doing an excellent job of twisting around each and every point in a way that’s very creative.
You may think this behavior on your part is clever. It’s not. It’s tiresome, it’s blatantly obvious what you’re doing, and I don’t think it reflects well on you.
Let’s get back to the core points, shall we?
We started all of this in reaction to “data dedupe for primary workloads may cause I/O density increases which may incur additional costs”, e.g. more disks, PAM, etc.
We’ve heard much from the NetApp team that “large amounts of read-only cache” can really help, and “we use our large amounts of read-only cache more effectively than others”.
Fine. Nice talk — where are the numbers?
I think the answer to all of that depends on the workload.
Reading the same files over and over again (a-la SPEC) certainly favors read cache. As do VDI solutions implemented without using VMware’s tools. Etc.
However, the Bold Marketing Claim is “any workload”. People are taking you at your word, and finding problems.
If the claim was “highly concentrated read-only workloads that tend to use the same files over and over again”, it’d be far more accurate, and I would agree with that claim.
Let’s get back to the original question, shall we?
NetApp claims primary data dedupe saves money on storage. NetApp claims investing in big, read-only memory caches is needed to reclaim some semblance of storage performance.
For given production workload X, show the $$$ you saved and what it cost to provide equivalent (or better) performance.
All we’re asking NetApp to do is to show that they’re not making stuff up.
Same question for the V thingie. Show the cost of the storage you saved, and how much the V thingie costs when all is said and done. Please show before-and-after on application performance to make sure the end user experience didn’t go down the tubes in the process.
It would be preferable to name the real-world workload you’re using, rather than referring to a synthetic test, such as a test that reads the exact same files over and over again 🙂
Although most of us would not consider VDI boot a typical production workload, we could use that as an example, since Vaughn talked about it above.
The claim is that ~9TB usable was saved, with an additional cost for a big PAM II. How much storage $$$ was saved, and how much did using the PAM cost, when all was said and done?
Same thing for the V thingie.
The claim is that it “pays for itself”. We can’t find any way that could be true. And that’s *before* we start looking at application performance issues.
At the end of the day, my goal here is to help customers and the industry.
You may have come up with something truly magical and valuable for customers. Or not. To help all of us decide whether this is the case, you’ll have to show your math.
Otherwise, it’s nothing more than another marketing stunt.
Best regards!
— Chuck
@Mike
Karl exposed a well-known behaviour of Jetstress auto-tune; nothing more, nothing less. Yes, Jetstress auto-tune isn’t perfect, and the Microsoft documentation on TechNet tells you how to work with that. I don’t think I’ve ever seen Karl claim to be an Exchange expert, so I’ll forgive him for the lapse.
@Chuck
You actually started with “One particular vendor is starting to push dedupe for primary data stores (e.g. hot application data), and I’ve recently observed that they’re starting to bump into a few laws of physics that fly in the face of aggressive marketing.”, and we both know there’s only one vendor that does de-dupe of primary data.
You went on to pooh-pooh PAM and its impact on VDI boot storms when you said “However, program binaries are subject to “boot storms” — scenarios where all the virtual machines are booting simultaneously. Not a huge effect if you’re talking small-scale VDI (maybe a hundred machines) — but a substantial effect when we’re talking many thousands of virtual machines all trying to read the exact same files at the exact same time.
As some customers have been experimenting with deduping other forms of primary data (databases, email, etc.) they’re starting to notice the effect as well — they’ve ended up with increased I/O density that can only be overcome by — well — adding more storage!”.
It was only when Randy Arthur of CSC validated the NetApp position on VDI boot storms by saying “As to the dread “boot storm”, CSC designed its first iteration of Dynamic Desktop (VDI) infrastructure around NetApp with Cache (PAM) accelerator boards. Chuck implied that the cache memory would be prohibitively expensive, but we found that the additional cost of the cache was far offset by the 5X scaling factor we were able to get. So in our case, paying for the additional cache was a significant net cost saving to us.” that you abandoned that position and chose to use HP as your NetApp workload performance authority. You should really have a few of your EMC internal Exchange experts validate HP’s test design before you take a leap of faith like that.
Of course, in less than 24 hours you returned to form. You said “Although most of us would not consider VDI boot a typical production workload, we could use that as an example, since Vaughn talked about it above.
The claim is that ~9TB usable was saved, with an additional cost for a big PAM II. How much storage $$$ was saved, and how much did using the PAM cost, when all was said and done?” Didn’t you read a single word of Randy’s comment?
In the end it’s not you or I, but the customer, that matters. You really ought to listen to them.
John
@Mike
I’d have to say I disagree with the statement that Chuck will bash WAFL until EMC shims in a WAFL-like file system, and then proclaim loudly that EMC did WAFL right. This has not been the EMC track record at all. From my point of view, when a competitor releases a new product/feature and EMC has that oh @#%@! moment realizing there’s a serious gap, they respond in a predefined manner. It’s their trademark Modus Operandi…
1. EMC Bash & Copy Division begins trashing said product/feature.
2. EMC IP Division scans the portfolio searching for any technology that can conceivably be claimed as a “like” product/feature.
3. EMC Research Division attempts to redirect any ongoing research, no matter how distant the relationship to the problem, to fill the gap.
4. EMC M & A Division attempts to buy a company that could conceivably be claimed to have a competitive product/feature.
5. EMC Marketeering Division makes an announcement that the product/feature is on the EMC roadmap at some point in the not too distant future.
6. EMC Bash & Copy Division begins spreading the word the product/feature is a non issue because EMC now has a similar product/feature.
This happened with RAID 6. It happened with Thin Provisioning. It happened with de-dupe. It’s happening again with Intelligent Cache. The track record is very clear. They’ll eventually offer up something like a mashed together lash-up of linked clones/Fastscale as an alternative to Intelligent Cache. It’ll be about as effective as their RAID 6, but that doesn’t matter; the point is they’ll be able to fill the checkbox on the RFP and redirect the customer to their latest Rube Goldberg creation. It’s become oh so predictable. If you’re an EMC competitor, have just released a new product/feature, and EMC starts bashing it, you know you’ve done good.
John
Exactly John – whenever we’re asked to show the cost savings and we have actual customers comment on their actual savings and success – that for some reason gets ignored or discounted.
Whenever we are asked for proof on performance, we offer up 3rd party benchmarks – that for some reason gets ignored or discounted.
Whenever we are asked for sources for info and we quote a vendor’s best practice or installation manual – that for some reason gets ignored or discounted.
Whenever we’re asked to stand behind our claims, we issue guarantees – that for some reason gets ignored or discounted.
So, if I get this right: if you offer facts, references, benchmarks, guarantees, and a 17-year track record to validate it all, that’s misleading; but if you offer up unsubstantiated intuition and suspicion, well, that shouldn’t be considered deflection or FUD but fact?
Not buying it, but more importantly, neither is this ever-expanding pool of customers. The choice Chuck would have you make is clear: either we can do what we say we can do, or customers are gullible and we can fool an ever-expanding pool of them into repeatedly buying what we’re selling. Personally, coming from the customer side before joining NetApp, I err on the side of customers. All we can do is keep putting the facts, references, and benchmarks in front of them, and if they need to see it for themselves first hand, we’ll do that too.
Disclosure: Chad (EMC employee) here. I’m going to try to help, rather than fan flames.
Deduplication – everywhere and as much as possible is fundamentally a GOOD thing.
Use of more RAM (and use as efficiently as possible) in storage arrays (particularly as RAM and Flash prices continue to drop) is fundamentally a GOOD thing.
Frankly – every technology we can deploy that reduces footprint/cooling/complexity – while meeting the SLA requirements of the customer = GOOD thing.
It’s up to each vendor to prove to each and every customer that “all in” we can provide the most efficient solution (where efficiency is measured on capex, opex, flexibility, and ease of use – including space/power/cooling) – while meeting the SLAs for the various use cases.
This is a non-triviality, though our respective marketing and sales teams (as all do) need to try to make it as simple as possible. (This is also important, as engineers – including me – have a habit of making everything sound crazy complicated.)
As someone in the comment thread pointed out, each company implies that the efficiency technologies of the other are useless in general, and at the same time imply that the efficiency technologies they have solve world hunger – each while furiously working to fill any competitive gaps.
EMC offers production data dedupe on Celerra, but this technique, while highly efficient in the general purpose NAS use case – it currently skips large (>200MB) files, so has limited use in cases where the files are large (VMDKs as an example).
I’ve shared a lot of public customer feedback showing it within or above NetApp’s production dedupe approach by percentage points with that class of dataset. @Mike R – just to highlight that no one can cast stones here (we all live in glass houses) – our approach continues to be “discounted” by NetApp folks.
We aren’t stopping there – and you can BET we’re working furiously to apply the idea of deduplication everywhere we can (including the production block use cases). Of course, NetApp isn’t staying still either – as a respected competitor – I would hope for no less.
We would of course position the “total capacity efficiency” – of all production use cases (including NAS), as well as backup use cases as the measure of how much we can save our customers. We would also include ideas where we’re still pushing such as auto-tiering (at file and block levels), dense storage, spin-down and archiving as efficiency technologies.
Now, on EFDs, as an interesting note – the performance data we have published (which is public) shows that where we see a 30x uplift in IOPs and 6-10x lower latency from the use of solid state on our block devices (CLARiiON/Symmetrix), on our NAS device (Celerra) we see about 7x-10x more IOPs and about 4x lower latency. Now some would claim it’s because of our NAS code (meaning ours specifically, not NAS generally), but I suspect that EMC has smart engineers, as NetApp does. It looks like the long-lived code paths in the Celerra stack have a harder time maximizing 10x lower device latency and 30-50x more IOPs from a drive. The flip side is that the extra abstraction layer makes things like deduplication easier (which is why it’s there first). As NetApp introduces SSDs (which I’m sure they will at some point – since after all, it’s as “simple as plugging them in”, right? 🙂), we’ll see. Perhaps PAM was an easier architectural model for NetApp – and that doesn’t discount it, or make EMC’s approach less innovative.
Each of us has architectural models which make certain things easier, and others harder. That doesn’t invalidate each other’s innovations, and each other’s efficiency technologies.
If I read Chuck correctly, he wasn’t saying production dedupe is bad – but rather that it is a capacity efficiency technique. It needs to be coupled with performance (IOPs/MBps) efficiency techniques for maximum effect on reducing power/cooling/acquisition cost (or be applied to workloads that don’t have a need for performance, only GBs). In some cases this effect is extreme, and in others, less extreme.
NetApp responds with PAM, and it’s up to customers to evaluate the cost/benefit advantage (no implication that it isn’t there). I think what he’s saying (and I don’t speak for Chuck, but just looking at the thread): “does the cost of PAM, and using new hardware (including using vFilers if they want to use an existing array) translate into the full ROI picture you expect after a sales call?”
I will say this – the cost of EFDs is dropping VERY fast – and there is a benefit to being a standard form factor, in that everyone is competing to make those denser and less costly.
As a “reductio ad absurdum” example – in a recent competitive campaign between EMC and NetApp for a VDI use case, the customer did their own math and concluded, for their initial 7,000-user deployment, that leveraging their existing Symm and adding SATA with a small number of EFDs was the cheapest (acquisition and operating) option. The idea of production deduplication was very appealing (particularly because of “simple” positioning along the lines of “add this to your environment and reduce your storage cost by 50%”), but in the final measure, with quotes and costs from both, they chose one way. Other customers choose otherwise.
When it comes to efficiency (and again, I’m speaking for myself here), I respond with claims of good utilization, writeable snapshots, thin provisioning, parity RAID (R5/R6 depending on the customer), production deduplication with general-purpose NAS, and backup deduplication (at the source and target). I talk about EFD for focused use cases today, and I also show them where we are going with fully automated tiering in the near future to broaden its applicability.
We (and others) both make broad use of thin provisioning and writeable snapshots across wide sets of use cases.
I try to spend the bulk of my time on showing customers HOW to leverage technology, and how to integrate it with their virtualization/datacenter transformation efforts. In my experience, more often than not – that’s where the efficiencies are really made/lost.
Then, I go back to my day job, working on VMware/EMC technology integration with the goal of making storage invisible – and apply as Randy points out in the datacenter, and in the cloud (including cost points that are 1/10th traditional enterprise storage).
I guess what I’m trying to say is:
1) This stuff is never as simple as it gets painted out to be.
2) Perhaps we aren’t as different as we might think we are (wildly different in technology – but not different in the sense that we strive to be as efficient as we can).
3) As soon as we assume the competition is dumb, start to discount what customers tell us, or refuse to accept that innovation occurs everywhere – that’s a dangerous place for anyone – an individual or an enterprise.
@ Randy – you were at my 1:1 CSC briefing, so have visibility into not only what we have but where we are going. Thank you for your comments.
@Chad Nicely said, and thank you for being a voice of reason at EMC. These sorts of conflicts are tough since we all love what we do and are passionate about the technology, so when I read stuff like “All we’re asking NetApp to do is to show that they’re not making stuff up,” it feels like a personal attack to me. Even when a customer comes forward and says it works as advertised, it doesn’t seem to slow Chuck down. I just get red-faced and try to figure out how to better explain to him that we don’t operate like an EMC device, therefore his logic is broken. I certainly don’t expect EMC to prove to me personally the things your devices can do; you are a respected major company and I have no reason to think otherwise. It would be nice to be treated with the same respect. So again, thank you for the comments.
Keith (NetApp Employee)
I’d like to say thanks to Chuck for the follow-up. While I fear we won’t agree as to the value of using data deduplication with every working set, I would like to comment on a few points in the thread…
@Chuck
You mention, “Second, read cache for storage is not NetApp magic juju. EMC has had large read/write caches on our products since the early 1990s.”
You are correct; caching of disk blocks by reading sectors where the content of the disk is unknown does sound like antiquated 1990s tech. NetApp has always provided content-aware cache, which is now enhanced by being content- and deduplication-aware.
@Chuck
You stated, “Looks like you’re getting good results from your PAM card there. Looks like the exact same result you’d get from using VMware’s linked clones feature any array with a decent read cache.”
Linked clones are only useful for temporary Virtual Machines – period. This is the reason why they are only available in View and Lab Manager. I’d share more but I think there is a more critical point to make. Your generic positioning of the use of Linked Clones, without stating the caveats, is a sales gimmick that misleads customers as to what is a viable solution.
@Mike
You shared, “If you did, we’d be reading a post on how EMC did WAFL right”
WAFL is available for EMC arrays today through the vSeries. In fact, EMC has ‘go to market partners’ who virtualize their Symmetrix arrays with NetApp.
@Randy
Thank you for being a NetApp customer and sharing the success that CSC has had with developing VMware cloud solutions based on NetApp storage technologies. You posted “EMC will be well positioned to leverage or influence development of new hypervisor capabilities to address this high performance cloud I/O functionality gap.”
I say this with confidence: VMware does not develop technologies or features for individual technology vendors; rather, they create open frameworks, such as the vStorage API for Array Integration or the Pluggable Storage Architecture, which allow ecosystem partners to participate.
Customers should feel confident that thru VMware, their hardware vendors must deliver on their promises and capabilities or they can be replaced easily and non-disruptively.
My last point (I promise), VMware typically licenses their products per CPU socket. How happy would their top distributors (like HP, Dell, and IBM) be if they developed an EMC-only solution or enhancement?
I’d suggest the risk to the revenue stream would be too great for any considerations to be taken seriously.
@Chad
Thanks for joining the fray, but I believe you have done what Chuck is being dinged for. Through the generic positioning of Celerra dedupe as an ideal solution for VMware, you misrepresent your technology.
Does EMC recommend Celerra NFS as their preferred platform and storage protocol for VMware environments?
What about recommending dedupe for most virtual machines?
Does dedupe run on FC, FCoE, iSCSI, Symmetrix (DMX & VMAX), CLARiiON, Centera, or ATMOS?
The answers are quite simply ‘No.’
So are we spinning here, or addressing actual customer needs?
You are very correct that EMC & NetApp have the same goals, but we are starting from different approaches. Begin by considering the fundamental component of a virtual datacenter or cloud… VMware.
VMware eliminates the hardware differences in servers, so how do customers do the same with the multiple array offerings from EMC? Celerra dedupe appears to be an edge-use-case technology, deep in the EMC tradition of ‘you can have feature X, but not in conjunction with features Y or Z.’
This design screams of the rigidity and inflexibility of the physical server infrastructure, which these arrays were designed for.
NetApp starts from the position that hardware differences equate to scaling differences, not a reduction in capabilities or feature sets. Data ONTAP eliminates the hardware differences between storage arrays, enables dedupe for virtually every workload, and allows customers to virtualize more systems faster, for less, and in a much simpler manner.
Point is, none of you have completely virtualized your storage, have you? I still have to manage individual pools of disks that are statically owned by controllers.
Being an NDA NetApp customer, I’m privy to some of the stuff they have coming, and know that they’re moving in that direction of complete abstraction with things like “vFiler” and “Data Motion,” but they’re not completely here and “bought into” yet. Exciting nonetheless.
Some of the stuff you guys are bickering about like 6-year-olds, we customers really couldn’t care less about, and you’re just making yourselves look silly.
* What percentage of customers are actively looking into SSD?
* What percentage of customers are actively doing ‘enough’ VDI on storage systems where boot storms are/would be an issue?
I would venture a guess that this percentage would be a very small one in both cases.
This isn’t a pissing contest, and if you want to do that, take it offline, where you’re not going to alienate potential future customers from wanting to deal with either of you at all.
@Vaughn – I was VERY clear in my post, Vaughn. I know you’re busy, but so am I. Please re-read. Perhaps the caffeine suggestion is a good one 🙂
My point was not to equate your dedupe to Celerra dedupe, and call them the same (they are not the same).
My point was that it’s an example of the fact that we share a view: efficiency = good. Each of us has areas where we can be more, or less, efficient. Dedupe for production storage is easier when you have an abstraction layer between the block storage and the data presentation layer. This is where TP happens (on all storage arrays), and where that layer is mature, it is the natural target for how you do dedupe. In NetApp’s case (from my understanding – I don’t claim to be an expert), the dedupe function is a natural side effect of some of WAFL’s characteristics. In other words, you are leveraging your core architecture to provide customer benefit. GREAT.
My point on Celerra dedupe (to reiterate – LITERALLY requoting myself from the earlier comment: “EMC offers production data dedupe on Celerra, but this technique, while highly efficient in the general purpose NAS use case – it currently skips large (>200MB) files, so has limited use in cases where the files are large (VMDKs as an example).”. I don’t know how I can be MORE clear. Currently – our dedupe approach on NAS does diddly for VMDK on NFS use cases. As I stated, we’re not stopping there, and we will offer production dedupe over time for more use cases, and across different platforms. It’s not easy with our architectural model, but count on it.
BUT – customers' arrays are used for LOTS of use cases, not just one. My point is that we're all striving for efficiency, using the tools in our quivers. In some cases we're each ahead, in others behind.
Now – to answer the other question you asked:
“Does EMC recommend Celerra NFS as their preferred platform and storage protocol for VMware environments?”
NO WE DON'T. I've trained everyone at EMC I've ever talked to: pick the right protocol for the customer. Sometimes it's block, sometimes it's NAS; in general, the most flexible configurations are a bit of both. Sometimes it can all be met with one platform (here I'm not speaking in general terms, I'm speaking about EMC – customers can determine whether it applies as a principle to others); sometimes it can't.
Come on man – look at the joint NFS whitepaper we posted together.
It’s as wrong to be reflexively “NAS always” as it is to be “block always”.
We do both, and we do both well. Now, in our case, there is a difference between the functions and behavior of Celerra NAS and the block targets of CLARiiON and Symmetrix. There are of course differences between functions on NetApp too, based on protocol. For examples, look at the dialog we had about ALUA and FC/FCoE relative to iSCSI on NetApp FAS platforms, or file/VM-level snapshot/clone operations. But I'll acknowledge that those differences are minor relative to the differences between a CLARiiON and a Symm.
Sometimes I see that as a weakness and sometimes I see it as a strength.
The question is – do you always see it as a weakness? If so, I beg to differ.
What I was trying to point out is that every vendor (EMC included) suggests that their efficiency technology is all that matters, and what others have is bunk. The reality is there’s goodness all around. Customers choose what’s the best for them.
Vaughn – the core principle you state – that there is a sweet spot where a single design (to which you add features/functions) is a good tradeoff to make – has indeed served NetApp and its customers well, and has been an important part of your growth to date.
That said, the Data Domain acquisition battle was an interesting one. If NetApp's dedupe is so on the mark, in all use cases, ALL THE TIME – why battle it out so furiously for SISL? The answer is that for B2D, where the only real performance metric that matters is ingest rate, the inline dedupe model works better than a post-process one – and that is VERY hard to engineer into a production storage target. If you're aiming for the highest SLAs, the N+1 storage architectures of HDS, the IBM DS8000, and Symmetrix rule the day – VERY hard to engineer into a classic "two head" storage architecture (not impossible – and you know that NetApp and EMC are both working hard in this direction).
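For readers following along, the inline-versus-post-process distinction being drawn here can be sketched roughly like this (Python, with made-up helper names; the real engineering – in-memory fingerprint indexes, ingest throughput, and so on – is far more involved):

    import hashlib

    def fingerprint(block):
        return hashlib.sha256(block).hexdigest()

    def inline_write(store, block):
        # Inline: dedupe happens in the write path, before data lands on disk,
        # so ingest rate hinges on how fast fingerprints can be looked up.
        fp = fingerprint(block)
        store.setdefault(fp, block)
        return fp

    def post_process_pass(raw_landing_area, store):
        # Post-process: writes land untouched first (simple, low-latency path),
        # then a background pass folds duplicate blocks together later.
        for block in raw_landing_area:
            store.setdefault(fingerprint(block), block)
        raw_landing_area.clear()

For a backup-to-disk target fed a hugely redundant stream, never landing the duplicates is the win; for a latency-sensitive production array, keeping that work out of the write path is the more natural choice – which is the tradeoff described above.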
This has been my point – respect for differences, acknowledge innovation where it occurs, and sometimes a strength in one context is a weakness in others.
That’s all.
@Nick – boot storms are not the practical challenge with VDI use cases, and are addressed with: a) relatively small/primitive caches; b) staged boot; c) configuring the broker for log-off rather than shutdown on client shutdown; d) configuring the client guest swap to be fixed.
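As a trivial illustration of the staged-boot item in that list, something like the following is all that's meant (a sketch only – the power_on callback, batch size, and pause are hypothetical, not any connection broker's actual API):

    import time

    def staged_boot(desktops, power_on, batch_size=25, pause_seconds=120):
        """Power on desktop VMs in small batches so the storage never sees
        the entire pool booting at once (batch size and pause are assumptions)."""
        for i in range(0, len(desktops), batch_size):
            for vm in desktops[i:i + batch_size]:
                power_on(vm)
            time.sleep(pause_seconds)  # let this batch's boot I/O drain

    # staged_boot(["desktop-%03d" % n for n in range(500)], power_on=print)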
The real challenge (in my experience) is the other elements of the desktop lifecycle, if process change is out of bounds. These are AV, patching, and other similar tasks. This is alleviated through user-data redirection to NAS, but that introduces some complexity (check-in/check-out gets busted, for example, if you don't use folder synchronization). This has been the feedback from NetApp and EMC customers.
More than all of this – most VDI projects get sidelined (to date) based on end-user experience unless the use case is narrowly defined to exclude general knowledge workers.
Re: SSD adoption – I don't know what you've heard, but here are the facts I have access to. Every customer is actively looking. We've sold out for the last 6 quarters. I just tried to order 40 (for various VMware/EMC performance-testing use cases), and we're backlogged. To understand why, understand that we're not talking about Intel X-25Ms here. While they are 8x more expensive when expressed as $/GB, they are roughly 1/4 the cost when expressed as $/IOP, and roughly 1/95 when expressed as watts/IOP. They are also on an incredibly steep price decline.
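To show why those three metrics tell such different stories, here is a back-of-the-envelope comparison in Python; every figure below is an assumed, illustrative number, not anyone's actual pricing or spec sheet:

    # Assumed, illustrative circa-2010 figures -- not vendor pricing.
    drives = {
        "15K FC disk":      {"price": 800,  "gb": 300, "iops": 180,   "watts": 15},
        "enterprise flash": {"price": 6500, "gb": 200, "iops": 30000, "watts": 8},
    }

    for name, d in drives.items():
        print(f"{name:17s} $/GB={d['price']/d['gb']:7.2f}  "
              f"$/IOP={d['price']/d['iops']:7.3f}  "
              f"W/IOP={d['watts']/d['iops']:8.5f}")

Plug in real numbers and the exact ratios move, but the shape doesn't: flash looks poor on capacity and remarkable on I/O and power per I/O, which is why it complements, rather than replaces, capacity-efficiency features like dedupe.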
That DOES NOT make them a replacement for production dedupe. Rather, they are orthogonal but related efficiency technologies for storage use cases. Other examples are "dense" packaging (more drives per enclosure or 2.5" disks), fully automated storage tiering, and spin-down. Depending on the use case, each is more or less important, but most customers have a LOT of use cases.
You offered to take ideas for additional tests. I'd really like to see what the system was capable of before deduplication was enabled, without PAM, and then also with PAM but no deduplication. I agree 100% that an intelligent caching engine would increase the performance of a deduplicated data set; that looks to be proven well by your results.
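In other words, the ask is simply to fill in the remaining cells of a 2x2 test matrix: dedupe on/off crossed with PAM present/absent. A throwaway sketch (run_load_test is a hypothetical placeholder for whatever load generator was actually used):

    from itertools import product

    def run_load_test(dedupe_enabled, pam_installed):
        """Hypothetical placeholder for the actual benchmark run."""
        raise NotImplementedError

    results = {}
    for dedupe, pam in product((False, True), repeat=2):
        print(f"config: dedupe={dedupe}, PAM={pam}")
        # results[(dedupe, pam)] = run_load_test(dedupe, pam)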
As a consumer of storage and a guy who gets exposure across lots of apps and systems, let me say my focus is on simplicity. The boxes are tools in a toolbox, and so are the protocols. Use what works.
The game changer is the interface and the interop across the stack. The winner is the company that is app aware, location (network) aware, workload aware, and able to float these globs of user function around when systems break. This should be done without 84,615 lines of shell script written by the consumer.
Pure speed can be accomplished in many ways, function and usability are what I appreciate.
I can say this is a great and interesting time to be in the field. We are in a spot where a technology shift is happening. I am sure you have had that "a-ha" moment. Mine was setting up a VM solution for a family member who owns a dental practice. Sadly, his budget did not support storage from either vendor in this thread.
As for competition and passion, all for it. As a customer I see two guys who care about their technology and are willing to back it up. Good for both of you.
I guess the energy consumption of the entire system is lower, and that’s what really matters most.
Unfortunately, consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization, given its first-wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).
From a consolidation standpoint, the emphasis is often on boosting resource utilization to reduce physical hardware and management costs by increasing the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft Hyper-V, Citrix/Xen, and others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.
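A quick worked example of that point (all counts are assumptions for illustration): consolidation shrinks the hardware footprint, but the number of OS instances to patch and manage does not.

    # Assumed, illustrative environment.
    physical_servers_before = 200   # one workload per physical server
    vms_per_host = 15               # assumed consolidation ratio

    hosts_after = -(-physical_servers_before // vms_per_host)  # ceiling division
    print(f"Physical machines: {physical_servers_before} -> {hosts_after}")
    print(f"OS instances still to patch/manage: {physical_servers_before}")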
@Generic
You are absolutely correct, consolidation is only one use case for server virtualization. For additional details on virtualizing other apps, may I suggest you read my latest post…
http://blogs.netapp.com/virtualstorageguy/2010/03/transparent-storage-cache-sharing-part-1-an-introduction.html
Data deduplication is useful stuff. Much the way compression shows up everywhere in the infrastructure stack, data deduplication can be thought of the same way.