
We need your help!… on SCDPM tape features

7:08 pm in Uncategorized by mikeresseler

Although we don’t know the release date yet, we are getting closer to the release of System Center Data Protection Manager 2012.  And this time, we need your help.  The (former) SCDPM MVPs are looking for your help.  We need YOU to give us information / requests / whatever… on what the SCDPM product team can do to improve the tape management in SCDPM.  Yes, you read that correctly.  We are going to gather all your feedback and get it to the product team.

You can find more information on http://robertanddpm.blogspot.com/2012/02/tape-management-improvements.html and you can always get the info through this blog.  I will make sure that we get it to the right people.

Thanks for helping!

Cheers,

Mike

Microsoft’s internal use of SCDPM 2010

1:53 pm in Uncategorized by mikeresseler

A couple of weeks ago, Microsoft released a case study about System Center Data Protection Manager 2010.  The customer in this case study is Microsoft’s own internal IT.  This is a very interesting case study, as Microsoft is a very large organization, with divisions worldwide, many datacenters and lots of data to protect (3.5 petabytes!!!!).  But as with any other case study, it is an interesting read but it doesn’t give you any technical details or information on how they did the work.

Another SCDPM MVP and good friend of mine, Yegor, mentioned on his blog last week that Microsoft has released two more documents around the case study.  I downloaded those and found out that they actually contain great information for everybody that is tasked with a new setup of SCDPM or a migration from 2007 to 2010.

A must read:

Protecting Server Data with System Center Data Protection Manager 2010: http://www.microsoft.com/download/en/details.aspx?id=26659

Managing Data Back Up at Microsoft with Data Protection Manager 2010: http://www.microsoft.com/download/en/details.aspx?id=5898

Cheers

Mike

DPM 2010 QFE Rollup available

10:55 am in Uncategorized by mikeresseler

Microsoft has just released a DPM 2010 QFE Rollup.

This much-awaited hotfix rollup fixes the following issues:

  • You cannot protect the Microsoft Exchange Database Availability Group (DAG) on a secondary DPM 2010 server.
  • You are prompted to restart a client computer after you install an agent on the client.
  • DPM services crash, and you receive the error, “Unable to connect to the database because of a fatal database error.”
  • MSDPM crashes, and event ID 945 is logged in the event log.
  • When you change a Protection Group, add a very large database, change the disk allocation, and then commit the Protection Group, DPM 2010 does not honor the user intent, and instead, DPM 2010 sets the sizes of replica and shadow copy volumes to the default sizes.
  • The Management tab does not link to information about the latest Microsoft Knowledge Base article for DPM 2010.
  • You receive the message, “Computers not synchronized,” when you try to replicate DPM 2010 databases to a System Center Operations Manager server.

The information for the KB can be found here: http://support.microsoft.com/kb/2250444

The download can be found here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=f399fbfa-5c8b-4eb6-bda2-ea997745919a

Enjoy

Mike

New DPM 2010 datasheets released

6:30 am in Uncategorized by mikeresseler

Hey All,

From the All Backed Up blog:

Five new datasheets about DPM 2010 have been released:

Product Overview of DPM 2010

How to Protect Microsoft SQL Server with DPM 2010

How to Protect Microsoft Exchange with DPM 2010

How to Protect Microsoft SharePoint with DPM 2010

How to Protect Windows Clients with DPM 2010

Enjoy!

Cheers,

Mike Resseler

Bare Metal Recovery: How to add all volumes

6:55 am in Uncategorized by mikeresseler

Hey All,

On the DPM newsgroup (http://social.technet.microsoft.com/Forums/en-us/dataprotectionmanager) there was a very interesting thread over the last few days.  One of the users asked if it was possible to include all volumes in a Bare Metal Recovery.

As you might know, Bare Metal Recovery only protects the critical volumes (boot + system + volumes hosting files of server roles), so if you have a volume with applications or user data or whatever, you need to protect it separately.  Now that is not a problem, because you can choose BMR and also select the additional volumes:

[Screenshot: choosing BMR plus the additional volumes]

Now the user said that this was not OK, because in a disaster he wanted to recover as quickly as possible.

Luckily, Praveen D [MSFT] found out a good solution, one which I think can be very helpful in some cases, so here goes…

DPM uses Windows Backup to do the job.  So in your DPM\bin folder, you will find a file called BmrBackup.cmd.  Inside this cmd file you will find the command that drives Windows Backup.

With BMR, you will see something like:

start  /WAIT %SystemRoot%\system32\wbadmin.exe start backup -allcritical -quiet -backuptarget:%1

If you add the option -include:VolumeLetter:,VolumeLetter: then your extra volumes are added to the BMR backup, as shown in the example below.  Don’t forget to increase the size of your replica and recovery point volumes accordingly.
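For example, to also include data volumes D: and E: in the BMR backup, the edited line in BmrBackup.cmd would look something like this (a sketch; D: and E: are placeholders for your own volumes):

start /WAIT %SystemRoot%\system32\wbadmin.exe start backup -allcritical -include:D:,E: -quiet -backuptarget:%1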

Thanks Praveen

Cheers,

Mike Resseler

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

8:21 am in Uncategorized by mikeresseler

Hey All,

Here’s part 6, and the final part of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

In this part I will give an overview of the partner announcements made @ MMS.

As you all know Microsoft has partnered with some companies to provide protection to the cloud.  But there are also partnerships around DPM on an appliance and on virtual tape library software.

 

1. Cristalink Firestreamer

Firestreamer is a utility that can create a virtual tape library and virtual tapes based on different kinds of storage such as internal and external hard disk drives, flash memory, Blu-ray, DVDs, and so on.

Very cool solution if you use this in conjunction with DPM2DPM4DR.

For more information: http://www.cristalink.com/Default.aspx

2. i365

i365, a division of Seagate, delivers their EVault software together with DPM to support non-Windows environments.  Workloads such as Linux, VMware, Sun Solaris, HP-UX, Oracle and so on will be protected by this, creating a solution which is fantastic for Windows (DPM) and at the same time gives you the opportunity to protect other workloads.

They also offer their solution as an appliance, based on a rebranded Dell server, with everything preinstalled on it.

http://www.i365.com/products/data-backup-software/microsoft-backup-recovery/index.html for more information

3. Iron Mountain

Iron Mountain delivers protection to the cloud.  With this company, you can protect your data and send it straight to the cloud from the DPM console.  A very cool solution for off-site backup.

www.ironmountain.com

 

That’s it for DPM week 2010.  In my humble opinion, the new version of DPM is a must-have for every Windows environment.  It has improved a lot over the DPM 2007 SP1 solution, which was already a good product.  Now it just got better.  And because Microsoft realizes that not everything is Microsoft in your environment, they have built strong partnerships with other companies that leverage the product and allow you to do things like tape library sharing, so that you can protect your other apps with whatever you want…

To be continued

Cheers,

Mike

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

6:28 am in Uncategorized by mikeresseler

Hey All,

Here’s part 5 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

This session was given on Friday morning and was originally supposed to be presented by David Allen (System Center Operations Manager MVP – Deloitte) and Sergio De Chiara (DPM Architect – Microsoft Corporation).

Due to the ash cloud, both of them couldn’t make it to Las Vegas, which was quite a disappointment since I really wanted to see David in action.  He owns the blog http://www.scdpmonline.org, which is a great resource for all of you that need to work with DPM.

Luckily for me, the DPM team decided to throw in another session and the title sounded promising: Disaster Recovery and Advanced Scenarios.

So session 5 of DPM for me, on a Friday morning.  And Jason, if you are reading this, don’t forget the promise you made to the guys that followed all of your sessions… I’m eagerly waiting for the book :-)

Anyway, session 5 with Jason Buffington and Vijay Sen. 

On the agenda for today:

  • End-User Backup and Recovery
  • Bare Metal Recovery
  • Disaster Recovery
  • Misc
    • Agent Deployment in the Enterprise
    • Non-Domain Servers
    • SCOM Management Pack

So the session starts with some figures about what it costs for each hour that the environment is down when disaster strikes.  All nice figures, but a little bit too oriented toward American business.  I don’t think I know a company that will lose 6.4 million dollars of income for each hour that they are out.  But no matter how much it costs, when your business is down, it will cost money, a lot of money, not to mention the image loss or worse, the compliance issues that you will be facing.  So in the worst case, how are we going to recover, and how are we going to do this as fast as possible?

Definition of a disaster:

The process of recovering from any natural or man-made disaster that results in partial or complete loss of the data center and infrastructure.

What I really liked is that this definition covers more than a hurricane, a flood, 9-11 (hey, we were in Vegas…); it also includes a disk crash, a stolen laptop and so on.  Basically, when data is lost, no matter in what form, it costs money.  So we need to recover.

All right, the first topic discussed is DPM2DPM4DR (read: DPM to DPM for Disaster Recovery).

[Screenshot: DPM2DPM4DR overview]

This was already working in DPM 2007, so nothing new here. 

However, they increased the possibilities with this:

  • One-click DPM DR failover and failback
  • Separate schedules per DPM server
  • Chaining support
  • Offsite tapes without courier services
  • Restore servers directly from offsite DPM

 

Suppose your primary DPM server fails.  By using the switch protection option you can redirect recovery to the secondary server.  Rebuild or fix the primary DPM server, and use the same switch to move protection back to the primary server.

For each DPM server you can use a different schedule, so your primary will probably have a very tight schedule, but your secondary will protect on a much slower schedule if there is a WAN between them.

Chaining support is also one of the cool new features.  It basically allows you to do backup to backup to backup, or to protect multiple primary DPM servers with one secondary.  You can also cross: your primary server will also act as a secondary, and vice versa.

Offsite tapes without courier services is how they describe it when your secondary server is in an offsite location.  Since the tapes are already offsite, it is not necessary to hand them to a courier anymore.

And last but not least… Still need to recover after a major failure?  Recover straight from the secondary server.

Many other things were discussed during this session, such as post- and pre-backup scripts, configured through

ScriptingConfig.xml

<?xml version="1.0" encoding="utf-8"?>
<ScriptConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns="http://schemas.microsoft.com/2003/dls/ScriptingConfig.xsd">
  <DatasourceScriptConfig DataSourceName="Data source">
    <PreBackupScript>"Path\Script"</PreBackupScript>
    <PreBackupCommandLine>parameters</PreBackupCommandLine>
    <PostBackupScript>"Path\Script"</PostBackupScript>
    <PostBackupCommandLine>parameters</PostBackupCommandLine>
    <TimeOut>30</TimeOut>
  </DatasourceScriptConfig>
</ScriptConfiguration>
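(For reference: on a protected server, ScriptingConfig.xml normally lives under the protection agent’s installation folder, for example %ProgramFiles%\Microsoft Data Protection Manager\DPM\Scripting\ScriptingConfig.xml, and if I remember the documentation correctly the TimeOut value is in minutes.)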

We also saw a great demo of a BMR recovery.  Just start your server with a Windows CD (make sure that the network card and disk subsystem are recognized, so use a WIM file with injected drivers if necessary), choose recovery mode and connect to the location of the BMR files.
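One small command-line tip: once you are in the recovery environment, you can verify that your BMR backup versions are actually visible before starting the restore.  The in-box wbadmin tool lists them (the share path here is a placeholder for your own BMR target):

wbadmin get versions -backupTarget:\\fileserver\BMRshare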

The definition of a BMR backup is the following:

  • Backup of all critical volumes
  • Critical volumes = boot + system + volumes hosting files of server roles (e.g. boot, system, Active Directory for DCs)
  • Used for both System State recovery and BMR recovery

So, important to remember: have a separate backup for other volumes that contain data!

Hereunder is a great overview screenshot of a BMR recovery

[Screenshot: BMR recovery overview]

Till next,

Cheers,

Mike

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

7:29 am in Uncategorized by mikeresseler

Hey All,

Here’s part 4 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

This was the last session of DPM Wednesday, given by Asim Mitra and Vijay Sen, two program managers at Microsoft responsible for the virtualization protection within DPM.

On the agenda of this session:

  • Protecting your Hyper-V environment
  • Hyper-V Recovery Options
  • Recovering from a disaster
  • Sample Customer Deployments

They started by outlining the top priorities for CIOs in 2010

[Screenshot: top priorities for CIOs in 2010]

If you look at the screenshot, you will see that Disaster Recovery / Business Continuance and Server Virtualization come in 2nd and 3rd.  The first one is cost reduction, but I guess that will be so for the next x years :-)

I know that virtualization is more “sexy” than disaster recovery for an IT Pro, but it is of course pretty important to think about backup / disaster recovery whenever you deploy a new solution into your environment.  So why not do this hand in hand?  DPM is designed to fully protect Hyper-V, and if you have read one of my previous posts you know that it is also capable of backing up VMware virtual machines… if you tweak a bit :-)

So what are the features of DPM 2010 for protecting Hyper-V?

  • Host-level backup of Hyper-V on WS 2008 R2
  • Cluster Shared Volumes (CSV) support
  • Seamless protection of Live Migrating VMs
  • Alternate Host Recovery
  • Item Level Recovery

Sounds interesting?  Let’s continue to have a look.

First, they started with a discussion on what to protect.  Should we protect on the host and back up entire VMs?  Or should we protect inside the guests and take the data?  Now this was the sign for many people in the room to shoot the profile of their environment at the two presenters and ask what the solution would be for their specific case.  Luckily these guys were smart enough (or well trained :-)) to leave all options open.  Why?  I think they share the same opinion as I have: you can never take this decision without first assessing an environment thoroughly.  There are so many questions you need to ask before you can decide on what strategy you are going to use.  And even then, in many cases, you will be using both.

I actually had a discussion that evening with a guy who could not believe that in certain cases you would choose only a host-level backup for a virtual machine.  I do think there are cases where this can be done.  Imagine a webserver that is running in production and where the configuration only changes once in a while.  A daily backup of the guest should be enough.  I think a lot of servers that just keep running and don’t contain user data or business data can be protected that way.  I mean, who cares that you lose log files, as long as you don’t have to be compliant with something?  If you can recover the server quickly when it’s down, that’s more important than those log files, right?  And if they are important, I’m sure the business has a solution to archive these logs into an auditing system.

But as a conclusion, this really should be looked at on an individual basis, and below are some points that can be used to make that decision:

  • Host
    • Protect or recover the whole machine
    • “Bare Metal Recovery” & “Item Level Recovery” of every VM
    • Protect non-Windows servers & LOB applications that don’t have VSS writers
    • No granularity of backup
    • Single DPM license on host, all guests protected
  • Guest
    • Protect or recover data specifically
    • SQL database
    • Exchange
    • SharePoint
    • Files
    • No different than protecting the physical server
    • One DPM license (DPML) per guest

Next topic: how does it work?

As always, you start with an initial replica.  After that, a full copy is never made again.  What happens is the following:

  1. DPM initiates the backup process
  2. Using the VSS framework, an application-consistent snapshot is created inside the guest virtual machine
  3. A snapshot of the VM is created on the host (important: use a hardware VSS provider if you are using a CSV)
  4. Then there is a checksum comparison of the VM snapshot with the DPM replica
  5. Finally, only the changed blocks are replicated to the DPM server
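By the way, you don’t have to wait for the schedule: an express full can also be kicked off from the DPM Management Shell.  A minimal sketch, run on the DPM server; the server, protection group and VM names are placeholders for your own environment:

# Find the Hyper-V protection group and the VM datasource (placeholder names)
$pg = Get-ProtectionGroup -DPMServerName "DPMSERVER01" | Where-Object { $_.FriendlyName -eq "Hyper-V Protection" }
$ds = Get-Datasource -ProtectionGroup $pg | Where-Object { $_.Name -like "*MyVM*" }
# Create a disk-based express full recovery point for that VM
New-RecoveryPoint -Datasource $ds -Disk -BackupType ExpressFull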

Seamless protection of Live Migrating VMs

Yep, you’ve read it correctly.  The backup administrator (I would like to introduce a new title for this job; I would like to call him a Business Continuity and Protection Engineer or Officer… what do you think? :-)) doesn’t need to care where the actual virtual machine resides.  With live migration, PRO tips in SCVMM and virtualization admins, you can imagine that the placement of a virtual machine is never fixed.  And you can also imagine that the virtualization admins won’t like to update the backup guy every time a machine has moved.  With all the automation you can create these days (SCVMM, Opalis, SCOM…) they probably won’t have a clue either.  DPM will know where the virtual machine is, and protect it from there.  If a machine is moved, then DPM will follow it to its new path.

What about Storage Migration?  Will that work also?  Yep, it will.  Again, DPM will follow the path.

All nice and well, you are protected.  But issues happen, and you need to recover.  What are your options?

  • Restore VM back to original host or cluster

Probably the most expected option, system went down, recover to the same location and you’re up and running again.

  • Restore VM to a different host or cluster

A little less expected.  Restore the server to another cluster or individual host.  Now this opens up options.  Take a backup of a production server, and restore it to another host for testing purposes.  Just make sure that your test environment doesn’t have the capability to talk to your production environment.  Not sure about the latest patches or service packs?  Restore to another environment, deploy the patches and see if the server starts nicely again.

  • Item Level Recovery (ILR) to file share

And this will become a much-used feature in the future.  Mount the virtual machine, get inside the virtual machine or guest and get the items out of the disk.  This can be extremely handy if you decommissioned a server but forgot to copy one or two files.

What they also discussed is disaster recovery and how to prepare for it, but this will be much more highlighted in the next part.

Finally they showed some real-life implementations.  I’ll add the example of a mid-sized Asian hoster here.

CSV Production Environment

This customer has multiple 3-5 node CSV clusters with 30+ VMs on each.

Each CSV is on a Fibre Channel SAN – Dell EqualLogic with a hardware VSS provider.

They maintained a ratio of 1 CSV per cluster node, and the VHDs for a VM are co-located in a CSV.

Backup Configuration:

The VM workload mix comprises almost all Microsoft workloads (a complete Microsoft shop).

The average size / VM is ~70 GB.

All VMs are backed up at the host level with DPM 2010 on a daily basis.

35% of servers, which require granular backup and near-continuous RPO, continue to get backed up at guest level using DPM 2010, just as earlier in a physical environment.

Typical DPM 2010 Server Configuration

Number of Processors on DPM Servers: Intel 2×4 cores

Amount of RAM on DPM Server: 8 GB RAM

DPM 2010 protects a fan-in of 3 such CSV clusters

 

Till next post

Cheers,

Mike

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

6:37 am in Uncategorized by mikeresseler

Hey All,

Here’s part 3 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

A session given by Tim Kremer and, as backup, you guessed it, Jason Buffington :-)

This session was all about protecting your clients.  The first thing we started with was the reason why we want to protect clients.  Many companies or IT pros will react that users should save their valuable data somewhere on the network or take a backup on their own.  While this probably works in one or two percent of companies, I’m sure it fails in the other 98 percent.  The reason for that is simple.  When people are travelling, they won’t be uploading their data to a network share, and even when they are in the office and need to copy their data on a Friday evening to the server… guess what will happen :-).  If they need to back up their own data, then you will probably have users that have 100 copies of their data on an expensive network share and others who never bother, or who back up to the local drive of their laptop.  So if the laptop gets stolen or the disk is dead…

 

According to some research company (Forrester or Gartner or so, I forgot which one), about 60% of the intelligence of a company resides on users’ local disks.  Now that’s a lot.  So if we want to protect that knowledge, then we need to find a good way to do that without too much trouble and without disturbing the users or letting them do it themselves.  It just won’t happen.  Period.  (This process of letting the end users do the backup themselves is often called the “tax” on the end users.)

 

When designing the solution, the architects @ Microsoft had the following challenges:

  • Mobile workforce
  • Different users with different needs
  • Large scale (many many desktops / laptops)

So they created the following goals:

  • Remove the end user tax
  • Support roaming user backups
  • Allow customizability for specific users
  • Enforce admin defined restrictions
  • Keep IT costs low

How did they solve those requirements?

With the same agent as the one for the servers you can start protecting your clients.  By using your favorite deployment method (SCE, SCCM, AD, MDT…) you can get the agents out there.  Remember, you don’t pay licenses for an agent if you don’t use it.  So deploying it over your entire network is not going to give you a licensing issue.  You start paying the moment you start to protect it.  Period.

Second, an IT pro can create different policies.  Let’s say that we want a client to protect its My Documents, a specific company directory and maybe some more folders that can be important for the user, such as Favorites.  But of course, we don’t want the My Pictures or My Music folder to be protected.  The company is not interested in getting all the vacation pictures or the mp3 library of their employees.  (Ok, the IT pros might be interested in the mp3 collection :-)).  By defining a policy and including / excluding folders you can achieve this.  And it gets even better: you don’t need to know the exact location of the My Documents folder.  DPM will use the path variable to figure out where it is.  And last but not least, you can actually deny certain extensions.  No .mp3 files is a good example of this.  Whether we like it or not, end users are mostly smart enough to see that certain folders are excluded and will move their “valuable data” to a folder that is protected.

 

Now what if users want to be able to protect some specific folders?  Folders that are not default in the company but still contain valuable information.  By giving the end users (or some of them) the rights, they can choose certain folders themselves to be protected.

[Screenshot: end users selecting their own folders to protect]

Now what about users on the road?  How is this going to work?  Here’s the answer.

1. They support backup over VPN and DirectAccess.  So whenever a client is connected to the main office over VPN or DirectAccess, it has the possibility of synchronizing with the office.  Remember the block-level copy from part 2!  So the data that is sent over is really not that much.

2. DPM provides you with two mechanisms.  While performing a backup, it will send the data to the DPM server if it is reachable.  At the same time, it will keep a local copy on the laptop.  So users will be able to restore from their local cache if necessary.  Will this protect you from hardware failure or from a stolen laptop?  No, it won’t, but users will be able to go back to a previous version of a document when necessary, even when they are working on the road.

3. What about notifications?  Everybody who has ever worked with DPM 2007, or with whatever backup solution for that matter, will know that the system will start complaining whenever it can’t reach its clients.  DPM will do that also, but they built in a setting where you can specify how long it takes before it starts to complain.  Consider the fact that many people take 14 days of vacation.  Add the weekends to that and you get 18 days.  So only after 18 days do you let the DPM server complain that it is missing a connection to a client.  This way you will avoid a lot of false alarms, and only those that take more than 2 weeks of vacation or those that are travelling longer will raise an alert.

What about the costs?  You can imagine that all the user data will take a lot of disk space.  First, you know that you can use low-cost storage for this, and second, because the system works pretty well, it doesn’t require much human effort.  Compare it with letting the users back up their own data to a network share.  That is mostly high-end storage which costs a lot, is never cleaned up by the users, and you will probably have many files stored there 50 times.  DPM does not need this because it only keeps the changes.  Second, think about the value of the data.  Ask the business what it costs when a road warrior loses his laptop and the data it contains.  You can do the math quickly.

 

So how does the end user see this?

Below are a few screenshots of the end-user experience

[Screenshot: End-user recovery]

[Screenshot: Agent in the notification area]

[Screenshot: Agent UI]

 

Want more?  How about this…

A user loses his or her laptop.  Or the machine just died.  You have a backup from yesterday on your DPM server.  The deployment team quickly prepares a new laptop with their favorite OSD tool.  The agent is installed or sysprepped on it.  You jump behind the DPM console and do a restore to another location.  The user gets the data back :-)

Even more?

The DPM agent allows the end user to synchronize now.  So suppose they made some important changes to a document: they can synchronize it whenever they want, to the DPM server if they have a connection or to the local cache if they are not connected.  So if the end user really did some important work, then he or she can create a “backup” of their own before flying out or going on vacation.  With one simple click, the system will do the work.

 

Till next for part 4

Cheers,

Mike

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

6:45 pm in Uncategorized by mikeresseler

Hey All,

Here’s part 2 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

Let’s continue today with Session 2

Session 2: Protecting Application Servers with DPM 2010

Session two of an entire DPM day (the first four sessions were all on the same day, so for me it was like being a child in a candy store for the entire day!  Not to mention that I had the chance to speak to Jason himself in the evening… :-))

Again this was presented by Jason Buffington.  This session explained the entire VSS process and the differences when protecting Exchange, SQL or SharePoint.  So here are my notes:

How does this VSS writer thingy work?

Here’s how:

To start, when you decide to protect a workload, DPM will create a replica.  This means that it will make an exact copy of the original sources, whether this is SharePoint, SQL databases or files.  After that, DPM will never ever again make an entire copy of the data.

So what does it do?  DPM works with express full backups, which are block-level based, and synchronizations, which are byte-level based.  The express full backup is the latest version.  All previous versions are the so-called layers.

So after the replica, DPM will create a volume map of the data.  Is it large?  No: a 0 or a 1 for every 120 KB, so the footprint is small.  Here’s an example of a volume map:

[Screenshot: volume map example]

So let’s say that after one hour an express full backup is taken; this is how the volume map looks then:

[Screenshot: volume map after an express full backup]

This is what happens:

1. A VSS snapshot is taken on the production volume to ensure consistent data

2. The cache of changed blocks is sent to the DPM server

[Screenshot: express full backup process]

Important to know here is that file IO continues; the VSS writer only “freezes” the blocks that have changed, so that the server can continue normal operation!  So no more taking databases offline or bringing solutions into maintenance mode… If it has a VSS writer, it all stays online.

Finally, after the blocks are sent to the DPM server, the VSS writer releases the frozen blocks.

In a nutshell, this is how the express full backups work.
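By the way, if you want to check which VSS writers a protected server actually exposes (and whether they are in a healthy state), the built-in Windows vssadmin tool will show you.  Run this in an elevated command prompt on the protected server:

vssadmin list writers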

But how does the synchronization work?  Again, this was explained with an example, so here goes:

[Screenshot: synchronization example]

We assume in our example that we are working with a database.

Every xx minutes (depending on your settings) you synchronize the closed transaction logs:

[Screenshot: transaction log synchronization]

In the case that you need to recover, you return to the database express full backup from 0:00 and roll the transaction logs forward to the point in time that you want.

That’s it.  So with DPM 2010 you can go back to about any point in time you want.
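If you are curious what all those points look like, the DPM Management Shell can enumerate them for a datasource.  Again a minimal sketch; the server, protection group and database names are placeholders:

# Locate the datasource on the DPM server (placeholder names)
$pg = Get-ProtectionGroup -DPMServerName "DPMSERVER01" | Where-Object { $_.FriendlyName -eq "SQL Protection" }
$ds = Get-Datasource -ProtectionGroup $pg | Where-Object { $_.Name -like "*MyDatabase*" }
# List every available recovery point, oldest first
Get-RecoveryPoint -Datasource $ds | Sort-Object RepresentedPointInTime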

Now how many points can you create?

Suppose you use the following schedule: you want to be able to go back 512 weeks, with one express full per day and a synchronization every 15 minutes (24 × 4 = 96 per day).  That gives 512 × 7 × 96 = 344,064 points in time.  This is the maximum, and it would mean point-in-time recovery covering almost the last ten years!

Now here is the joke:

  • MS doesn’t want you to recover a SQL 2005 database in 9 years
  • It will cost a lot of disk
  • It will cost A LOT of disk

(For your information, these are not my words :-))

If you want more information about this mechanism, make sure you check out http://edge.technet.com/Media/DPM-2007-SP1-How-does-DPM-really-work/ or one of our SCUG offline DPM events

You can imagine of course that there are some differences in protecting the different workloads.  So here is an overview of the differences:

Exchange 2007 LCR (Local Continuous Replication)

What is it?  One Exchange server with a redundant copy of the database.  It can fail over to the redundant copy in case of database corruption or when the drive where the active database resides is lost.

DPM will back up the database from the active database drive.

Exchange 2007 CCR (Cluster Continuous Replication)

What is it?  Redundant Exchange servers and redundant databases.  These can be geo-diverse, and the database logs are replicated.

DPM can now back up the active or the passive database, whichever you prefer.

You can choose this on a role-preferred basis:

  • Active – most current data
  • Passive – least production impact

Or you can choose this on a node-preferred basis when you are working geo-diverse; then you choose the node closest to the DPM server.

Exchange 2007 SCR (Standby Continuous Replication)

[Screenshot: Exchange 2007 SCR protection topology]

This is even more intelligent.  Suppose you have a DPM server on your main site and Exchange SCR.  The first DPM server will protect from the passive node.  SCR means that Exchange replicates to a standby node, and as this picture shows, that replication may need to cross the WAN.  Suppose that you have a secondary DPM server on that other site.  Instead of replicating twice over the WAN, DPM is smart enough to do the secondary protection from the standby node, so no additional bandwidth is necessary.

Exchange 2010 DAG

And finally there is DAG, where DPM works with a copy instead of a full backup.  This lowers the resources necessary for protecting your Exchange environment.  See the screenshot:

[Screenshot: Exchange 2010 DAG protection]

SQL Server Mirrored database

  • Mirrors feature redundant SQL servers and redundant databases
  • Database logs are replicated
  • Database failover is handled automatically

SQL Server Log Shipping

This features one SQL server with redundant databases.

  • Each copy is treated as a unique drive by DPM
  • Redundant backups require that both drives be protected
  • Express fulls only – no T-logs

If you are wondering why there are no transaction log backups with this kind of solution, the reason is pretty simple… Never, but never, let a protection application work with the transaction logs when the system is handling them itself.  That would be asking for trouble.

SharePoint

SharePoint uses a lookup to determine what data is necessary to protect the SharePoint farm, including the content databases, web front-end servers and so on.

For 2007 you still need a recovery farm if you want to do an item-level recovery, but with SharePoint 2010 you can actually do item-level recovery WITHOUT a recovery farm.

 

In the end, one final question…

Each time you deploy something… How are you going to back it up?

Till next,

Cheers,

Mike