System Center Data Protection Manager 2007 SP1: How it works

8:34 am in Uncategorized by mikeresseler

Hey All,

As promised, a post about how Data Protection Manager works.

First, System Center Data Protection Manager 2007 supports three types of backup as shown in the picture.



1) Disk-to-disk (D2D)

2) Disk-to-tape (D2T)

3) Disk-to-disk-to-tape (D2D2T)

Many organizations struggle with the same problem: the amount of data keeps growing while the backup window keeps shrinking, or is simply no longer big enough to back up all the necessary data.  Even the weekend is becoming problematic these days.  Because of this, a lot of backup products now work with disk-to-disk backups.  Why?  Disk storage is getting cheaper, it is more reliable than tape, and restores are much faster than from tape.  SCDPM does this as well.

Still, the end of tape is far away.  Many organizations will keep tape backups to store offsite in case of a disaster, so a backup to tape is still necessary.  SCDPM supports this with D2T, better known as the ‘old school’ backup to tape, or in combination with D2D, which makes it D2D2T: the disk-to-disk-to-tape backup.


First, how does it work?  As an example, we will look at the backup of a file server.



As you can see, I will back up a volume (D:) on a file server.  The DPM server will hold a replica of this volume.

After that, the DPM server will take snapshots at different times (which you can adjust yourself), based on changes to the file system.  Only the changes will be replicated to the replica.


In the picture, you can see that the DPM server will synchronize every 15 minutes.  It will keep the data for 12 days, and five times a day a recovery point will be taken.
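To make these numbers concrete, here is a small sketch (my own illustration, not part of DPM) that calculates what such a schedule means in practice: how many synchronization jobs run per day at a given interval, and how many recovery points are kept on disk for the retention range. The function names are my own.

```python
# Illustrative sketch: translate a DPM-style protection schedule into
# concrete numbers.  Not DPM code; the function names are assumptions.

def syncs_per_day(sync_interval_minutes: int) -> int:
    """How many synchronization jobs run per day at the given interval."""
    return (24 * 60) // sync_interval_minutes

def recovery_points_kept(retention_days: int, points_per_day: int) -> int:
    """Total recovery points retained across the whole retention range."""
    return retention_days * points_per_day

# The numbers from the example above: synchronize every 15 minutes,
# keep data for 12 days, take 5 recovery points per day.
print(syncs_per_day(15))            # 96 synchronizations per day
print(recovery_points_kept(12, 5))  # 60 recovery points kept on disk
```

So with this schedule, a user browsing previous versions can choose from up to 60 recovery points of the volume.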


This screenshot shows how a user can see the different versions of a file on a file server.

With this mechanism, you can actually give users the ability to restore their documents without the intervention of an IT engineer.  An administrator / IT engineer decides on a schedule for when the DPM server synchronizes the newer versions with the replica residing on the DPM server.  This process is called Continuous Data Protection (CDP).
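The core idea behind this synchronization, as described above, is that only the changes travel to the replica. The toy model below illustrates that idea with fixed-size blocks; the block size and data structures are my own assumptions for readability and do not reflect DPM's actual on-disk format.

```python
# Toy model of change-only replica synchronization (not DPM's real format):
# split the data into blocks, send only the blocks that differ, and apply
# them to the replica.

BLOCK_SIZE = 4  # deliberately tiny so the example stays readable

def split_blocks(data: bytes) -> list:
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(source: bytes, replica: bytes) -> dict:
    """Return only the block indexes whose content differs from the replica."""
    src, rep = split_blocks(source), split_blocks(replica)
    return {i: b for i, b in enumerate(src) if i >= len(rep) or rep[i] != b}

def apply_changes(replica: bytes, changes: dict) -> bytes:
    """Write the changed blocks into a copy of the replica."""
    blocks = split_blocks(replica)
    for i, b in sorted(changes.items()):
        if i < len(blocks):
            blocks[i] = b
        else:
            blocks.append(b)
    return b"".join(blocks)

replica = b"the quick brown fox!"   # last synchronized state on the DPM server
source  = b"the quick brown fog!"   # one small edit on the file server
delta = changed_blocks(source, replica)
print(len(delta))                               # 1 -- only one block travels
print(apply_changes(replica, delta) == source)  # True -- replica is up to date
```

Even though the file is 20 bytes, only the single 4-byte block that changed crosses the network, which is why frequent synchronization stays cheap.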


Example of a real-life problem

DPM offers this mechanism not only for files, but also for a few key applications in your environment, such as Exchange, SQL Server and SharePoint.  As an example, I will discuss the Exchange technology.

Just like it does with files, DPM will make a full replica of the Exchange databases.  Then, depending on your settings, it will synchronize a copy of the closed transaction logs to the DPM server every x minutes (by default every 15 minutes).


First step: Full replica


Second step: Synchronization of the closed transaction logs

At least every 12 hours, there is an express full backup of the Exchange replica.  This applies every change since the last express full backup to the replica.  DPM uses a special filter that keeps track of changes at the byte level, so only a minimum of data is transferred over the network.

The combination of the express full backup and the synchronization of the Exchange logs gives an IT engineer / administrator the possibility to restore the full database and the transaction logs up to just before the problem arose or the Exchange server went down.
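Conceptually, that restore works by starting from the last express full backup and replaying the synchronized transaction logs up to the chosen moment. The sketch below is a deliberately simplified model of that idea (a dictionary standing in for a database, tuples for log records); it is my own illustration, not how Exchange or DPM store data.

```python
# Toy model of point-in-time restore: take the last express full backup,
# then replay transaction-log records up to the moment just before the
# failure.  Data structures are illustrative assumptions only.

def restore_to_point(full_backup: dict, logs: list, up_to: int) -> dict:
    """Replay (timestamp, key, value) log records onto a copy of the backup."""
    db = dict(full_backup)
    for ts, key, value in sorted(logs):
        if ts > up_to:
            break  # stop replaying just before the failure
        db[key] = value
    return db

backup = {"mailbox-a": "v1", "mailbox-b": "v1"}   # express full backup
logs = [(10, "mailbox-a", "v2"),
        (20, "mailbox-b", "v2"),
        (30, "mailbox-a", "CORRUPTED")]           # the change we must not replay

print(restore_to_point(backup, logs, up_to=25))
# {'mailbox-a': 'v2', 'mailbox-b': 'v2'}
```

By choosing the cut-off point, you recover all the good work done after the last full backup while leaving the damaging change out.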


Another (great) possibility is the option to build a disaster recovery setup with a secondary DPM server protecting the first one.  If you can locate this second DPM server in another datacenter or another location, you have a very good disaster recovery plan and the ability to quickly recover from problems or disasters.


One of the most important things in these scenarios is calculating the storage necessary to support these technologies.  But I will come back to that in my next post.

Although this all seems nice, and many IT administrators / engineers will want it in their environment, one of the questions IT decision makers will ask is how difficult it is to work with.  Will the IT staff need special training?  Can they easily monitor the system?  What about reporting?  How can I verify that everything is running as smoothly as it should?  And, maybe last but not least, how much work will this application give my IT staff on a daily basis?

The good news is that I can answer all of these questions very positively.

No special training is needed, because the interface is based on the same GUI as, for example, Outlook or Microsoft System Center Operations Manager 2007 (SCOM).


Monitoring the system is easy if you have SCOM in place (there is a dedicated management pack for this technology); if not, you can always let the system send an email each time there is a problem.  And, as with every System Center product, when there is an alert, the application will suggest possible solutions.


Reporting is also no problem: there are a lot of predefined, usable reports in the system that can be mailed daily.


One of the major advantages of the system is that it will automatically run a consistency check after each “backup”.  This allows the IT staff to quickly find inconsistent data in the environment.
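Conceptually, a consistency check boils down to verifying that the replica still matches the protected data. The sketch below shows that idea with a simple checksum comparison; it is my own illustration, and DPM's real consistency checks are far more involved than this.

```python
# Simplified illustration of a consistency check: compare a checksum of
# the protected source data with one of the replica, and flag a mismatch.
# This is a conceptual sketch, not DPM's actual verification logic.

import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 fingerprint of the given data."""
    return hashlib.sha256(data).hexdigest()

def is_consistent(source: bytes, replica: bytes) -> bool:
    """The replica is consistent when its checksum matches the source's."""
    return checksum(source) == checksum(replica)

print(is_consistent(b"payroll.xlsx v3", b"payroll.xlsx v3"))  # True
print(is_consistent(b"payroll.xlsx v3", b"payroll.xlsx v2"))  # False
```

When such a check fails, the product can re-synchronize the replica instead of silently keeping a backup that would not restore correctly.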

And last but not least: will this give the IT staff a lot of work?  Honestly, no.  DPM is not a product that requires a “babysitter”.  As long as everything is well designed and implemented, your staff can read the daily reports, check for errors in SCOM or view the alerts in the console once a day (or through email), and the system will run on its own.  Second, you will save a lot of time whenever a restore needs to be done, because of the speed and ease with which you can recover.  Of course, you will need to invest a lot of time in the initial architecture / design and implementation of the system.  How can you achieve this?  Check out my next post on designing a DPM solution.