You may recall that on March 25th I shared that I had learned from a source what I believe to be the largest known VMware datastore. I suggested a quid pro quo: you share the average size and the largest size of your datastores, and in exchange I'd share what I'd learned.
First, I'd like to thank everyone who participated: 326 of you shared your averages and 253 shared your largest. Without further ado, here are the results…
Average Datastore Size
81.4% of VMFS datastores range from 500GB up to 2TB
80.2% of NFS datastores range from 1TB to greater than 2TB
Size of Largest Datastore
83.3% of VMFS datastores range from 500GB up to 2TB
82.6% of NFS datastores range from 1TB to greater than 2TB
In hindsight, limiting the top polling option to 'greater than 2TB' was a mistake; I should have added additional tiers.
I hope some find this data set, while simple, at least interesting. Who knows, maybe it will help those with datastores smaller than 500GB feel comfortable increasing their capacity.
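For anyone curious where their own environment falls, datastore capacities are easy to pull from the ESX service console; a minimal sketch, assuming classic ESX (with a service console) and its VMFS-aware vdf utility:

```shell
# vdf is the VMFS-aware variant of df shipped with the ESX
# service console; unlike plain df, its output includes the
# /vmfs/volumes entries for VMFS and NFS datastores.
vdf -h
```

PowerCLI users can pull the same capacity figures remotely with the Get-Datastore cmdlet.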
The Largest Known Datastore?
So a few weeks back, someone with significant insight into the storage deployments, goals, and desires of customers shared with NetApp that the largest datastore VMware is aware of is a 12-terabyte NFS datastore! That blew me away, as it is much larger than I expected.
Ironically it runs on NetApp, and we were informed of its existence by VMware. Could you imagine a 12TB datastore in your environment?
I wouldn’t mind running a datastore that size. Restoring it across a WAN in the event of a failure, though…
The VMware datastore sizing issue has always struck me as a bit of an oddity, given the scale of a VMware deployment: dozens of servers, terabytes of RAM, hundreds of CPUs.
I wouldn’t give it a second thought if I were provisioning a 12TB volume group on a Unix server, or if I saw a 12TB NFS share mounted across dozens of servers.
Fundamentally, 12TB is now only 2-3 shelves of disks, even after RAID6 protection, and being forced to break those 3 shelves down into 10 “logical” units because of a software limitation is a step backwards.
I’m sure VMware are working hard on lifting the VMFS limitations; otherwise people will have to start looking at NFS to reduce management costs.
Re: [NetApp – The Virtual Storage Guy] Andrew Mitchell submitted a comment to Poll Results – What is the Size of Your Average Largest Datastores
@Andrew Hey man, how have you been?!? You attending the TechSummit in May?
Yeah, 12TB would be tough. I think we could pull it off with our Long Distance VMotion, as it allows for VM access while the data is in flight between sites. I'll post on this next week.
Cheers!
Re: [NetApp – The Virtual Storage Guy] Ewan submitted a comment to Poll Results – What is the Size of Your Average Largest Datastores
@Ewan Thanks for chiming in. I thought your comment, “…otherwise people will have to start looking at NFS to reduce management costs,” was interesting, as this is exactly what we have been seeing since 2006 (with the release of VI3). I'd suggest that NetApp customers who run 500 VMs or more are commonly on NFS, primarily for ease-of-management reasons. Now, I'd clarify that most of the larger installations started on Fibre Channel and as such may have some amount of a legacy footprint running alongside their NFS datastores.
One thing to keep in mind is that for large environments, the difference between the maximum number of NFS shares and the maximum number of VMFS volumes per host might slightly affect the outcome of this poll.
I think for VMFS the general best practice has always been 300-500GB datastores to avoid SCSI reservation conflicts. However, the locking mechanism has been vastly improved, and 1TB is not uncommon in vSphere environments. 12TB is, however, a completely new ball game. But that is of course very specific to a single customer and a single use case!
@vaughn Yes Andrew will be at Tech Summit.
Duncan
Yellow-Bricks.com
@Duncan – Thanks for chiming in. The max number of datastores per host is effectively a cluster limit. I haven't seen a customer reach this limit on VMFS or NFS; however, I'm confident that in the future the max will be the same for both VMFS and NFS.
The results of this poll did align with what we see in our customer engagements, which is that NFS deployments have larger (and fewer) datastores and larger DRS/HA clusters. With the release of the Atomic Test & Set (ATS) locking mechanism in ESX/ESXi 4.1, I would expect we will see customers move to larger VMFS datastores and clusters.
This message was sent by my thumbs… fault them for the brevity and poor spelling
Honestly, I’d be glad if I could create a 32TB volume and consolidate the fifteen 2TB-or-so volumes I’ve got right now attached to a 20-host ESX cluster. I hope that after upgrading to ONTAP 8 it will be possible.
Vaughn, do you have any reports on how an aggregate with 36 1TB SATA disks would behave? My plan is to create one, at most two, aggregates with as many disks as possible, create one huge NFS volume, and get rid of VMFS/FC altogether.
I’m asking because a couple of your competitors had problems in the past (I have no idea what the situation is right now) putting 1TB SATA disks in a huge disk pool. I know because I was working as a Storage Solutions Architect for one of those vendors at the time. Even though it was possible to create a disk pool with 96 1TB disks, that kind of implementation was ‘not recommended’.
Have your folks at NetApp tested 64-bit aggregates with a bunch of 1TB SATA disks? Has anyone tested such an aggregate/volume specifically for VMware deployments?
I’ve got >60TB of usable storage on a 6080 running >400 VMs, and twice that number of VMs is planned in the near future. Unfortunately, I do not have an extra 6080 in my garage to test 64-bit aggregates.
Following on from the previous post: we are currently moving from FC (Pillar) to NFS (NetApp) datastores and are trying to decide whether we want one large NFS datastore or should break it up. We currently have 7 hosts and about 110 VMs. I am having a real hard time tracking down whether we will see performance issues with one large NFS datastore for all the VMs. Does anyone have any input/links? Thanks
@Sam – great question. You could easily run 110 VMs on an NFS datastore. The strength of NFS is how easily it handles large numbers of VMs: NFS delivers direct access to hardware-accelerated VM cloning, transparent dedupe, etc. The storage network design with 1GbE Ethernet can be a little complex (as compared to, say, iSCSI).
Will you deploy on 10GbE or 1GbE?
I would suggest that if the total number of VMs you plan to deploy is less than 150 or 200 and you are running on 1GbE, then you may want to consider iSCSI. The storage network setup is incredibly simple to configure for link resiliency and throughput aggregation (with vSphere).
Long story short, you can't go wrong here.
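For reference, mounting an NFS export as a datastore from the ESX 4.x service console is a one-liner via esxcfg-nas; the filer hostname, export path, and datastore label below are hypothetical:

```shell
# Mount the filer's NFS export as a datastore named nfs_ds1.
# -o = NFS server (hostname or IP), -s = exported path on the filer.
esxcfg-nas -a -o filer1.example.com -s /vol/vmware_ds1 nfs_ds1

# List the NFS datastores currently configured on this host.
esxcfg-nas -l
```

The same mount must be repeated on (or scripted across) each host in the cluster so every host sees the datastore under the same label.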
Punched by thumbs, please excuse typos
@Sam We have 170 VMs and have 3 NFS datastores. Feel free to email me if you want to chat
What is the volume size limit for a FAS2040HA? I would like to know so we can look into a proposal using 2TB drives.
Thanks!
Rene
Would love to see a new poll regarding average VMware datastore sizes in 2016.
I hadn't thought about asking this question again, but with the emergence of new storage platforms – from AFAs to HCI – it might be interesting. Thanks for the suggestion.