In yesterday’s post we discussed how VI admins are becoming storage admins as an unexpected byproduct of the virtualization of our data centers. We looked at the storage provisioning process with emphasis on the number of tasks a VI admin must execute. After reviewing the traditional model, we introduced the new model: one jointly created by NetApp and VMware engineering and designed to simplify, automate, and standardize storage operations in the cloud.
The last item we discussed was how the new model allows storage admins to provision and secure ‘raw’ storage resource pools for use by the VI admin team. This model frees the storage team to focus on higher-value functions such as resource utilization, quality of service, and data protection.
In today’s post our discussion shifts to the storage management integrations available to VI admins through the NetApp vCenter plug-ins.
The Value of Unified Architectures
As you probably know, NetApp arrays run Data ONTAP across all of our storage controllers, including when we virtualize other arrays. There is tremendous value in unified architectures like Data ONTAP, ESX/ESXi, and NX-OS, as they allow vendors, partners, third-party developers, service providers, and customers to develop a single set of tools that works across the data center regardless of hardware platform.
The NetApp plug-ins for vCenter leverage this unified architecture to deliver all functionality across every array and for every storage protocol. Imagine deploying processes across all of your data centers without having to consider the hardware model.
In the rare event a function is unavailable, it tends to be due to a lack of support for a particular operation within the entire architecture stack.
Audit and Automate ESX/ESXi Storage Settings
I felt that a good place to start was with basic connectivity. By selecting the NetApp tab in vCenter, the VI admin is shown the storage arrays configured in part 1 of this series. The storage arrays can identify the ESX/ESXi hosts connected to them and, through vCenter, audit the settings related to FC, FCoE, iSCSI, and NFS to verify they match the values defined in NetApp best practices. Should these settings require updating, the VI admin can select one or more hosts and execute a non-disruptive update to the storage settings.
The audit process can be run at any time without disruption to the production environment, empowering VI admins to ensure optimal uptime as the environment grows. The settings we update are not currently covered by VMware host profiles; in other words, we are extending the automated configuration process. Finally, the ability to update host settings is limited to vSphere hosts. From what I understand, VI3 hosts do not expose the APIs we need to make these changes; however, once a system is identified as non-compliant, one can apply the settings manually as outlined in TR-3428.
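Conceptually, an audit like this boils down to comparing each host’s current settings against a table of recommended values and flagging any drift. The sketch below illustrates that idea only; the setting names and numbers are placeholders I’ve made up for illustration, not NetApp’s actual best-practice values (those live in the plug-in and TR-3428).

```python
# Illustrative only: placeholder setting names and values, not NetApp's
# actual best-practice numbers.
RECOMMENDED = {
    "NFS.HeartbeatFrequency": 12,    # placeholder value
    "NFS.HeartbeatMaxFailures": 10,  # placeholder value
    "Disk.QFullSampleSize": 32,      # placeholder value
}

def audit_host(current_settings):
    """Return {setting: (current, recommended)} for every non-compliant key.

    A key that is missing from the host entirely also counts as drift.
    """
    return {
        key: (current_settings.get(key), want)
        for key, want in RECOMMENDED.items()
        if current_settings.get(key) != want
    }

# One host's current settings, as the plug-in might collect them via vCenter.
host = {"NFS.HeartbeatFrequency": 9, "NFS.HeartbeatMaxFailures": 10}
drift = audit_host(host)
```

In this toy run, `drift` flags the out-of-spec heartbeat frequency and the missing `Disk.QFullSampleSize`, while the compliant setting is left alone; the remediation step is then just writing the recommended values back, which the plug-in does non-disruptively.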
Report Storage Details and Utilization
Also available in the NetApp tab is the ability to report on storage utilization for SAN- and NAS-based datastores. When one selects a datastore, one is presented with a wealth of detail about the storage object, such as LUN serial number, igroup, ALUA enablement, dedupe savings, and more.
One of the key benefits of this screen is the ability to report on storage utilization through the numerous layers, beginning at the datastore and ending at the aggregate (a physical collection of RAID-protected disks).
The value here really stands out when one enables storage savings technologies like our block-level data deduplication, zero-cost clones, or thin provisioning.
Report Storage Faults
The NetApp plug-in also provides VI admins with feedback on the health of the storage controllers. The ability to report the ‘health’ of the physical infrastructure reduces the time required when the VI admins and storage admins need to address an issue together.
Ensure Optimal I/O Settings within VMs
Another component of our plug-in is the ability to audit and adjust settings within the VM to ensure optimal I/O. The first set of tools includes scripts that can be run from within VMs, or applied to VM templates, to set local SCSI settings within the GOS.
The second set of tools is our MBRscan and MBRalign. The two tools combine to audit and correct the partitions and file systems within the VM. I’ve covered the importance of partition alignment before. If you’re unfamiliar with this topic, I’d suggest you get acquainted, as it is key to ensuring optimal VM performance, especially if one leverages the cloud and runs VMs across a dissimilar set of storage arrays.
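To make the alignment problem concrete, here is a minimal check, assuming 512-byte sectors and a 4 KB array block boundary (the actual boundary depends on the array): a partition is aligned when its starting byte offset lands exactly on that boundary. A misaligned partition causes every guest I/O to straddle two array blocks, doubling back-end work.

```python
SECTOR_BYTES = 512
ALIGNMENT_BYTES = 4096  # assumed array block boundary for illustration

def is_aligned(start_sector):
    """True if the partition's starting byte offset falls on the boundary."""
    return (start_sector * SECTOR_BYTES) % ALIGNMENT_BYTES == 0

# The classic DOS-era MBR layout starts the first partition at sector 63:
# 63 * 512 = 32256 bytes, which is NOT a multiple of 4096 -> misaligned.
legacy_ok = is_aligned(63)

# Starting at sector 64 (or 2048, as modern tools do) lands on the boundary.
shifted_ok = is_aligned(64)
```

This is the essence of what MBRscan detects; MBRalign then shifts the partition (and its file system) to an aligned starting offset.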
Provisioning Datastores from Resource Pools
Do you remember where we left off in Part 1 of this series? It was with the storage admin provisioning pools of storage resources for use by the VI admin. Do you recall that the storage admin did not have to create LUNs, FlexVols, LUN masks, or NFS exports, set multipathing policies, and so on? We’ve left all of those details to our plug-ins.
With NetApp one can select an ESX/ESXi host, cluster, or data center and provision a datastore to that unit. The datastore can be FC, FCoE, iSCSI, or NFS and will be configured from one of the resource pools established by the storage admin. Our plug-in handles path selection, load balancing, setting multipathing policies, securing the storage target, and enabling thin provisioning and data deduplication. While the VI admin receives automation and an on-demand provisioning process, the environment receives a consistent implementation of NetApp best practices.
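As a purely illustrative sketch of the workflow just described (the class and method names here are invented for this post, not the plug-in’s real API), the steps the plug-in automates might be modeled like this:

```python
# Hypothetical model of the automated provisioning flow; all names invented.
class Volume:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.thin = False
        self.dedupe = False

class ResourcePool:
    """A pre-secured pool of raw capacity handed over by the storage admin."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb

    def carve(self, size_gb):
        if size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.capacity_gb -= size_gb
        return Volume(size_gb)

class Host:
    def __init__(self, name):
        self.name = name
        self.access = []   # stands in for igroup membership / NFS exports
        self.mpio = {}     # multipathing policy per volume

    def grant_access(self, volume):
        self.access.append(volume)

    def set_multipathing(self, volume, protocol):
        self.mpio[id(volume)] = protocol

def provision_datastore(pool, hosts, protocol, size_gb):
    """Everything the VI admin never has to do by hand, in one call."""
    volume = pool.carve(size_gb)
    volume.thin = True    # storage efficiency enabled as part of provisioning
    volume.dedupe = True
    for host in hosts:
        host.grant_access(volume)
        host.set_multipathing(volume, protocol)
    return volume

pool = ResourcePool(1000)
hosts = [Host("esx1"), Host("esx2")]
ds = provision_datastore(pool, hosts, "nfs", 200)
```

The point of the sketch is the bundling: carving capacity, securing access, and setting path policies happen together, every time, which is how the best practices stay consistent.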
I trust you, as the reader, appreciate the number of options one must consider when deploying SAN or NAS with VI3 and vSphere. Suffice it to say, each protocol differs with each major release of the hypervisor.
Managing and Reconfiguring Datastores
Ever provision storage and later wish you could modify it? Maybe a datastore is nearing its capacity limit. Should you initiate a migration with Storage VMotion? While I can’t say whether you should or shouldn’t, with NetApp you have additional choices in the area of dynamic, non-disruptive resizing of datastores (sorry, only NFS datastores can be shrunk).
To change the capacity or storage efficiency settings, one simply selects the datastore, right-clicks, and chooses the appropriate option from within vCenter.
Instant Provisioning of Pre-Deduplicated VMs as Servers and Desktops
The NetApp plug-ins began when a number of us at NetApp and VMware started looking at ways to reduce storage costs in virtual desktop deployments. This demo from VMworld 2007 is the earliest public content showing our zero-cost cloning technology, FlexClone, cloning files. Until then, FlexClone was limited to LUNs and FlexVols.
Today we can deploy a single VM, multiple VMs, or an entire desktop pool (across multiple datastores) instantly, without consuming any additional storage in the process. By simply selecting a running or shut-down VM, template, or vApp, the VI admin can deploy the industry’s most space-efficient clones.
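Some back-of-the-envelope math shows why this matters. The numbers below are illustrative, not a benchmark: a 20 GB golden image and a 250-desktop pool, comparing full-copy clones to clones that share the template’s blocks at deployment time.

```python
# Illustrative numbers only, not a benchmark.
template_gb = 20     # size of the golden image
num_desktops = 250   # desktops in the pool

# Full-copy cloning duplicates the entire image per desktop.
full_copy_gb = template_gb * num_desktops

# Block-sharing clones reference the template's blocks, so at deploy time
# the pool consumes roughly the template alone; only new writes add space.
shared_gb = template_gb

savings_pct = 100 * (1 - shared_gb / full_copy_gb)
```

In this toy scenario, full copies need 5 TB up front while block-sharing clones start from the template’s footprint alone; the gap only grows with pool size.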
Unlike some cloning technologies, NetApp clones are permanent, high-performance VMs that can be handled in the same manner as any VM. There are no restrictions with our cloning.
Our cloning is integrated into vCenter, View Manager, XenDesktop, and Quest vWorkspace. If our clones didn’t work so well, you wouldn’t find them so tightly integrated across products from the industry’s leading vendors of virtual desktop solutions.
There are a number of additional features available when cloning VMs with our plug-in, but those details are probably better suited for a future post. (Hint: think hardware-assisted desktop refresh.)
Wrapping Up Part 2
I hope you have found the details around our (did I mention free?) vCenter plug-ins useful. I’d love to hear what you think. Please feel free to post comments and suggestions for future updates.
I have one additional post planned in this series, in which we take a look at the plug-ins available from the rest of the storage industry. I’d like to think of part 3 as an audit of the storage industry’s response to the challenge I made following VMworld 2009.
It’s late, and I need to wrap up. I hope you’ll be back for tomorrow’s conclusion.