Since VMworld 2009, one of the recurring questions posed to me on Twitter has been: "When will NetApp release the VSC?" Wait no longer: the VSC is available for download via NOW.
For those of you unfamiliar with the VSC, it is a vCenter 4.0 plug-in that simplifies the configuration and management of NetApp, IBM, and virtualized 3rd-party arrays. The VSC supports ESX/ESXi 4.0, along with monitoring- and reporting-only capabilities for ESX/ESXi 3.5.
The Virtual Storage Console is the next generation of the NetApp ESX Host Utilities Kit. Gone are the days of installing the EHU into the service console of ESX or cursing NetApp for the lack of support with ESXi. The VSC adds a NetApp tab to vCenter that enables one to:
• Manage ESX/ESXi storage connectivity
• Report ESX/ESXi storage details
• Accelerate resolving storage connectivity related support cases
• Optimize VM storage options for performance and availability
May I introduce you to the Virtual Storage Console version 1.0…
The NetApp Tab
The VSC management interface is fairly straightforward, complete with its left-hand navigation. Upon closer inspection you may note that the left-hand navigation is collapsible. Current plans call for vertical scrolling of plug-ins within the left-hand navigation, to support the integration of our VMware management tools like the RCU and VMInsight (both currently stand-alone plug-ins), along with SMVI (currently not a plug-in).
I hope I didn't mislead anyone with the use of the term left-hand navigation. Unfortunately, I am not referring to storage integration between NetApp and HP offerings. For this post I am merely referring to the navigation menu located on the left side of the NetApp page.
The Overview Page
Here a VMware admin can view the storage arrays and ESX/ESXi nodes that comprise the VMware environment. The Storage Controller section includes a health monitoring function built on a red/yellow/green status notification system. With the appropriate permissions granted, the VMware admin can connect directly into the controller to collect more details or even resolve the issue.
The lower half of this page contains the ESX Hosts section, which includes an overview and notification system similar to the one in the Storage Controller section. Here the simple red/yellow/green status system is used to report on storage-related configuration issues from an ESX/ESXi perspective. Settings verified include those for NFS & iSCSI, HBAs, and NMP. In addition, storage-side options related to these settings, such as ALUA, are also reported on.
To correct these issues, the VMware admin can simply highlight the offending hosts and fix the settings with a right click, without disruption to running VMs.
The Storage Details Pages – SAN & NAS
Here VMware admins get insight into the details surrounding the datastores served from each storage controller. Details in this section include:
• Datastore to LUN mapping
• Datastore to FlexVol mapping
• Datastore to aggregate mapping
• Datastore to storage array capacity reporting
• LUN details including LUN masking (igroup), thin provisioning (reservation), dedupe savings, etc…
• LUN capacity reporting transcending all four layers of storage: VMFS datastore, LUN, FlexVol, and aggregate (RAID protected physical disks)
• NAS details including thin provisioning (guarantee), dedupe savings, autogrow settings, etc…
• NAS capacity reporting transcending the three (well actually two) layers of storage: Datastore & FlexVol (which are the same) and aggregate (RAID protected physical disks)
Traditionally, NAS deployments have been able to display storage savings from data deduplication within the datastore, thus providing transparency between the storage and VMware admin teams. By contrast, SAN-based deployments were oblivious to any underlying storage virtualization, which typically led to additional management overhead when enabling these technologies within a SAN. The VSC beta customers were absolutely ecstatic about the capability to view storage capacity from datastore to physical disk, especially when leveraging storage savings technologies like dedupe and thin provisioning.
This reporting capability allows customers to truly manage physical disk pool capacities thus increasing storage utilization rates.
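To illustrate why end-to-end capacity visibility matters, here is a minimal sketch (not VSC code; the function name and the example figures are invented) of the underlying arithmetic: with thin provisioning, the usable free space behind a datastore is bounded by the least free space at any layer beneath it.

```python
# Hypothetical illustration (not part of the VSC): with thin
# provisioning, the usable free space behind a datastore is bounded
# by the smallest free space at any layer beneath it.
def effective_free_space(layers):
    """layers: dict of layer name -> free bytes; returns the bottleneck layer."""
    name = min(layers, key=layers.get)
    return name, layers[name]

# Example: the VMFS datastore looks roomy, but the aggregate is the
# real constraint once thin provisioning and dedupe are in play.
layers = {
    "datastore": 500 * 2**30,   # 500 GiB free in the VMFS datastore
    "lun":       500 * 2**30,   # thin LUN, same logical free space
    "flexvol":   300 * 2**30,   # 300 GiB free in the FlexVol
    "aggregate": 120 * 2**30,   # only 120 GiB of physical disk left
}
bottleneck, free = effective_free_space(layers)
print(bottleneck, free // 2**30)  # -> aggregate 120
```

An admin watching only the datastore layer would see 500 GiB free while the physical aggregate has 120 GiB, which is exactly the blind spot this reporting closes.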
The Data Collection Page
Ever opened a support case for a storage connectivity issue? They aren't fun, particularly due to the amount of data collection you will be asked to perform for multiple support teams. Hopefully this scenario will never occur; however, in the unlikely event it does, we hope you find the Data Collection tab useful.
From this pane the VMware admin can collect support logs and content from ESX/ESXi, storage network switches, and storage controllers, then zip these files up and upload them to the various support teams to help identify and remedy the issue.
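For a sense of what the "zip it up and send it" step amounts to, here is a hypothetical helper (not part of the VSC; the function name and file names are made up) that bundles collected logs into a single archive ready to attach to a support case:

```python
import zipfile
from pathlib import Path

# Hypothetical helper (not part of the VSC): bundle collected log
# files into one compressed archive for attachment to a support case.
def bundle_logs(log_paths, archive="support-bundle.zip"):
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in log_paths:
            # Store each file by name only, flattening directory paths.
            zf.write(p, arcname=Path(p).name)
    return archive
```

The VSC automates the collection half of this; the bundle is then uploaded to whichever support teams are working the case.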
The Tools Page
Here we provide add-on tools designed to optimize the performance and availability of the I/O 'stack' within the VM. First up are the MBR Tools. This mini-suite includes mbrscan & mbralign, which, respectively, identify and correct misaligned file systems within VMs.
Aligning the file system of a VM provides higher per VM I/O performance, can significantly reduce the load on a storage array, and can improve data deduplication rates. The MBR Tools install on an ESX host and are run via the command line. I’ll post more on file system alignment and some of our partner details later this week.
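For the curious, the detection half of the problem can be sketched in a few lines. This is not mbrscan itself, just an illustration of the check it performs: read the MBR partition table and see whether each partition's starting byte offset lands on a storage-block boundary (WAFL uses 4 KiB blocks). The function names here are my own.

```python
import struct

def partition_start_lbas(mbr: bytes):
    """Parse the four primary partition entries from a 512-byte MBR."""
    assert len(mbr) >= 512 and mbr[510:512] == b"\x55\xaa", "not a valid MBR"
    lbas = []
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        # Starting LBA is a 32-bit little-endian value at offset 8.
        start_lba = struct.unpack_from("<I", entry, 8)[0]
        if start_lba:  # zero means an unused slot
            lbas.append(start_lba)
    return lbas

def is_aligned(start_lba, block=4096, sector=512):
    # Aligned when the partition's byte offset falls on a block boundary.
    return (start_lba * sector) % block == 0

# Classic misaligned guest: DOS-era partitioning starts at LBA 63
# (63 * 512 = 32256 bytes, not a multiple of 4096). A well-aligned
# guest starts at, e.g., LBA 2048.
```

A partition starting at sector 63 forces every guest file system block to straddle two storage blocks, which is why correcting it with mbralign pays off in both per-VM I/O and array load.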
The second set of tools is referred to as the Guest OS Timeout scripts. These scripts are delivered as ISO images. Once an image is mounted to a VM, a sysadmin can execute the scripts, which set optimal SCSI retry and timeout values. These settings are critical to ensuring VM availability in the event of an array failover.
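As a rough sketch of what such a script does on a Linux guest, here is a hypothetical snippet that renders a udev rule raising the SCSI disk command timeout. The 190-second value and the rule text are my assumptions for illustration; check the actual scripts on the ISO images for the real values NetApp sets.

```python
# Hypothetical sketch of what a guest OS timeout script might do on a
# Linux guest. The 190 s value is an assumption for illustration; the
# shipped scripts define the supported value.
TIMEOUT_SECONDS = 190  # long enough to ride out a controller failover

def udev_timeout_rule(timeout=TIMEOUT_SECONDS):
    """Render a udev rule that raises the SCSI disk command timeout."""
    return (
        'ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{type}=="0", '
        f'RUN+="/bin/sh -c \'echo {timeout} > /sys$DEVPATH/timeout\'"'
    )
```

Without a raised timeout, a guest can declare its disk dead mid-failover; with it, I/O simply stalls briefly and then resumes.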
The About Page
This page provides a simple overview of the VSC and the number of objects it manages and reports on.
As I shared with you shortly after VMworld 2009, the shift toward a virtualized data center is simultaneously reducing server footprints while increasing storage footprints and storage management complexity. Most people I speak with increasingly express a desire to avoid the 'ever multiplying complexity' associated with growing a storage infrastructure. Avoiding it begins with architecting and implementing simpler designs.
The VSC is a part of the NetApp architecture for VMware, where we are focused on removing complexity. The VSC will be added to TR-3749, vSphere on NetApp Best Practices, as a recommended best practice when configuring ESX/ESXi storage parameters. In doing so, we plan to remove 2-3 pages of existing content for every page the VSC adds to the document.
While the VSC is available at no cost to our customers, we hope you find it invaluable. Please share your feedback with us through your account team or NetApp partner, in the comments section of this post, or via Twitter.
The Virtual Storage Console is available with NetApp FAS, IBM N-Series, and 3rd-party storage arrays virtualized with the NetApp V-Series, and can be downloaded here.
Note: IBM will release an IBM branded version of the VSC in the near future.
Update: The VSC has the following Data ONTAP version requirements, which I should have listed in the original post. From the release notes…
7.3.1.1 and later – Supports all SAN and NAS functions
7.2.4 to 7.3.0 – Supports all NAS functions. SAN storage controllers are discovered, but no LUN information is displayed. These controllers will be flagged as ‘unmanaged.’
This is excellent. Does this tool integrate with SANscreen in any way? I’d like to see NetApp produce a VSC for Sun OpsCenter.
Nick Howell says
Outstanding! Love it, love it, love it!
Timo Sugliani says
I just tried to install it but I’m facing a “network” issue.
The VSC checks all NFS-type datastores from vCenter and then tries to identify the storage arrays from their IP addresses.
But in my setup we have separate networks, let's say 10.X.X.X for the vSphere "network" and an isolated 192.X.X.X network for the storage part (storage backend only).
So the VSC plugin is trying to import an array at 192.X.X.X (which isn't accessible from that network), but I still have an e0M network iface on the correct network, just with another IP address.
Isn't there a way to set up an array in the VSC when the "production" vifs aren't reachable from the "vSphere" network? (It should be possible, as the plugin connects to the array and can work out the relationship between the vifs and the NFS shares in my case.)
Thanks in advance.
Timo, it sounds like you may have to dual-home your vCenter server: one NIC for the 10.x net and one NIC for your 192.x net.
I had the same issue with SANscreen's VMInsight, and dual-homing didn't work for us. We had to completely redesign our public/private network architecture with routed VLANs and ACLs to make it work.
“I hope I didn’t mislead anyone with the use of the term left-hand navigation. Unfortunately I am not referring to storage integration between NetApp and HP offerings.”
Cool google trap there! designed to bring the left hand searchers clicking in : )
Vaughn Stewart says
LOL – I didn’t intentionally do that – but it sure reads that way.
I really like the look of Jimi Hendrix's guitars. I think Strats look awesome when played by those with a dominant left hand.