Last week I introduced you to SnapProtect, our unified business continuance suite that leverages NetApp’s hardware-accelerated data protection capabilities to enhance and simplify business continuance processes.
I believe the best way to share all of the goodness we have to offer with SnapProtect is to begin by reviewing the current state of business continuance. With a common point to communicate from, I’ll proceed into our integrated data protection technologies and additional partner solutions in follow-on posts.
Let’s get started…
Business Continuance is Backup & Disaster Recovery
I want to begin by asking you to look at backup and disaster recovery (DR) as two ends of the business continuance spectrum. Both functions provide processes and procedures as a means to return an application, or set of applications, to a normal operational state in the event of corruption, loss or disaster.
What truly separates backup from DR is the cost associated with each process. On a per-copy basis, backup is much less expensive, and as such it is the standard data protection process for all systems. By contrast, DR is very expensive, which commonly relegates its use to a select set of applications and systems.
Outside of the desire for applications, systems, and facilities to never go offline, businesses truly only care about restoring services as quickly as possible. The means of recovery is irrelevant; it can be accomplished from either backup or DR.
Time is money.
Backup is Broken
I’m at a loss for words to describe it any other way; whether you back up your datacenter to tape or via a disk-based solution, your backup process stands in stark contrast to what is being achieved with server virtualization. When a customer deploys VMware vSphere they receive core benefits that we all know and appreciate, and certainly that I (and many others online) have outlined in the past.
Of all the net-new products released over the past few years to enhance business continuance with server virtualization, most have had little impact. Today’s business continuance applications, processes, and architectures offer nothing approaching the benefits enabled by server virtualization: reduced infrastructure resources, streamlined operational processes, reduced time requirements, and so on.
Let’s take a deeper look and start with a familiar concept: Backup – the process of making copies of select datasets at specific points in time, which are later used as a restore source to return that data to a desired state.
Backup begins by copying data from the production array and storing the copy on tape or on disk in a ‘backup-optimized array’. Such purpose-built arrays often implement storage savings in the form of data deduplication and compression in order to retain a large number of backup recovery points.
The backup array provides the first point of restore in the event the production data is lost or becomes inconsistent. Like the backup process, a restore requires the data to be read, reassembled from the deduped bits, and then copied back to the production array or an alternative location, which must be able to store the data in its native, non-deduplicated format.
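To make that rehydration round-trip concrete, here is a minimal, hypothetical sketch of a content-hash deduplicating store (a toy model, not any vendor’s actual implementation): backup splits data into fixed-size chunks and stores each unique chunk once; restore reassembles the full native-format byte stream from the chunk index.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking; real arrays often use variable-size chunks


class DedupStore:
    """Toy backup-optimized store: keeps one copy of each unique chunk."""

    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes (stored once)
        self.backups = {}  # backup name -> ordered list of digests

    def backup(self, name, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            d = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(d, chunk)  # duplicate chunks stored once
            digests.append(d)
        self.backups[name] = digests

    def restore(self, name):
        # Rehydrate: reassemble the full, non-deduplicated byte stream
        return b"".join(self.chunks[d] for d in self.backups[name])


store = DedupStore()
monday = b"A" * 8192 + b"B" * 4096
tuesday = b"A" * 8192 + b"C" * 4096   # only the last chunk changed
store.backup("mon", monday)
store.backup("tue", tuesday)
print(len(store.chunks))                # 3 unique chunks stored, not 6
print(store.restore("tue") == tuesday)  # True
```

Note how the restore path must touch the chunk index and reassemble every block before the data exists again in its native format – that reassembly work is a big part of why restores from deduplicated media take time.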
How long you keep backups in the queue is specific to your business, but like most, I am sure you have a requirement to offload a subset of those backups to a more long-term archival system.
Data Archives
Beyond the need to protect data daily, businesses are also required to create backup archives – the process of making copies of data in order to meet legally enforced compliance requirements. These copies are usually created from a select interval of backup data, say weekly or monthly, and commonly written to another set of media – often with a different set of efficiencies.
Backup and archive are, for all intents and purposes, the same process. The primary differences between the two are the recovery points available (current vs. dated), restore times (hours vs. days), retention periods (days vs. years), and in some cases the type of storage medium (disk vs. tape).
The process of archiving backups usually involves copying data from “purpose-built” backup arrays to either tape or a different kind of “purpose-built” array designed for compliance or archival data (such as a content-addressable array). Like the restore described above, the copy process requires the deduplicated data to be reassembled, transferred, received, and processed before it can be stored in a dense format. If the target is tape, the data is likely stored in a compressed format; if the target is disk, the storage format likely combines deduplication and compression.
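As a rough illustration of that re-encoding step, here is a hypothetical sketch assuming a compressed, tape-style archive target (the function name and data are made up; zlib stands in for whatever compression the archive tier actually uses). The point is that each tier applies its own space savings, so the fully rehydrated backup must exist in between:

```python
import zlib


def archive_copy(rehydrated: bytes, level: int = 9) -> bytes:
    """Re-encode a fully rehydrated backup for a compressed archive target."""
    # The deduplicated source must first be reassembled into its native
    # format; only then can the archive tier apply its own space savings.
    return zlib.compress(rehydrated, level)


weekly_full = b"application data " * 10000  # hypothetical rehydrated weekly backup
archived = archive_copy(weekly_full)
print(len(weekly_full), len(archived))      # the archive holds a recompressed copy
```

Every hop in the chain pays this rehydrate-and-re-encode tax, which is why each additional copy adds both time and infrastructure cost.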
Intermission
I’ve tried very hard to reduce the amount of content in my posts. It looks like this post isn’t going to hit the mark. – Sorry about that ;^)
So far we have only covered the part of the business continuance process that occurs in the same datacenter where the production data resides. We need to move forward and start looking at business continuance beyond the production datacenter.
Disaster Recovery
Disaster Recovery can be summarized as the process of recovering application services in the event of a critical loss of infrastructure or the loss of an entire site. DR architectures are relatively straightforward when compared to backup and archive. Simply double the storage, compute, and networking layers in a remote facility, replicate the production data to the remote system, and implement DR automation like VMware vCenter Site Recovery Manager or Citrix StorageLink Site Recovery, which is available for XenServer and Hyper-V.
I propose that if DR were available at the price of backup, then DR would protect every system in the datacenter. This has tremendous value and speaks to a unified approach to business continuance – especially since backup and data archive systems cannot provide DR-level access to their copies of the production data.
DR does have one item in common with backup and archival processes: replication (aka the copy process). In this post, DR becomes the third transfer of the data, and that transfer is in a format native to the array, but different from that of the backup and archive platforms.
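The difference in transfer format can be sketched too. Assuming a simple block-level snapshot model (hypothetical, not any vendor’s replication protocol), DR replication ships only the blocks that changed since the last transfer, in the array’s native format, with no rehydration or re-encoding along the way:

```python
BLOCK_SIZE = 4096


def replicate_delta(prev_snapshot: bytes, curr_snapshot: bytes) -> dict:
    """Hypothetical snapshot-based replication: collect only the blocks
    that changed between two snapshots, in native block format."""
    delta = {}
    for off in range(0, len(curr_snapshot), BLOCK_SIZE):
        block = curr_snapshot[off:off + BLOCK_SIZE]
        if prev_snapshot[off:off + BLOCK_SIZE] != block:
            delta[off] = block
    return delta


def apply_delta(replica: bytearray, delta: dict) -> None:
    """Apply changed blocks to the DR copy at their original offsets."""
    for off, block in delta.items():
        replica[off:off + len(block)] = block


prod_v1 = b"A" * 8192
prod_v2 = b"A" * 4096 + b"B" * 4096   # one block changed since last transfer
replica = bytearray(prod_v1)          # DR copy, seeded by an initial full transfer
delta = replicate_delta(prod_v1, prod_v2)
apply_delta(replica, delta)
print(len(delta))                     # only 1 block crosses the wire
print(bytes(replica) == prod_v2)      # True
```

Because the replica is byte-for-byte identical to production, it can serve application workloads directly – the DR-level access that backup and archive copies cannot provide.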
Let’s Take Inventory
In short, it’s very likely you have many, many copies of your production data strewn across a number of systems, stored in many formats, with restricted access granted only to application-specific interfaces.
Your datacenter probably includes a production storage array that hosts the day-to-day workload of the virtual infrastructure. This system likely has an identical twin located in a remote facility providing data for DR purposes. You may also have two purpose-built systems to store point-in-time copies of data: the first provides short-term retention of backups in order to fulfill restore requests; the second provides long-term retention of backups in order to meet compliance and legal requirements.
I often hear about VM sprawl – how about business continuance sprawl?
Consider the impact that server virtualization is having on the industry. The movement from physical to virtual directly impacts how backup and DR processes are implemented and managed. This impact plays an important role in my use of the phrase ‘time is money’. Regardless of the workload, the protection of data skips across array types based on the maturity of the recovery set: backups as the first point of recovery, archival as the second, and DR as a hybrid of the two.
Traditional business continuance designs are ugly, but to most they are an accepted pain of doing business. It is also the reason why NetApp released SnapProtect. I’ll continue this conversation in a day or two…
This release has me very confused because of the recent NetApp / Syncsort partnership. This partnership really completed the puzzle for me as it finally gave us a very efficient way to leverage NetApp storage and protect a large variety of physical servers and applications easily (basically OSSV on steroids) with instant availability and recovery. Now with SnapProtect I am confused: is it meant to only consolidate our SMVI/SMSQL/SMExchange products from a single catalog, or is it meant to go beyond that?
@Clay – I admit my posts probably would have been best served if I had begun with this one, followed by Syncsort, and then SnapProtect. Unfortunately that wasn’t possible.
I plan to cover NSB in the next day or two. There’s lots to share here.
Thanks for the feedback. Cheers!
Vaughn, I agree with Clay. What is going on? Where is the focus? SANScreen or BalancePoint? DFM or System Manager? NSB or SnapProtect? I’m Snap*’ed to death by all the interfaces. Why can’t you unify this into a single console? Is NSB dead? It’s a stellar product that gets no love from NetApp marketing. Your customers are confused.