
DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

8:21 am in Uncategorized by mikeresseler

Hey All,

Here’s part 6, and the final part of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

In this part I will give an overview of the partner announcements made @ MMS.

As you all know, Microsoft has partnered with some companies to provide protection to the cloud.  But there are also partnerships around DPM appliances and virtual tape library software.


1. Cristalink Firestreamer

Firestreamer is a utility that can create a virtual tape library and virtual tapes on top of different kinds of storage, such as internal and external hard disk drives, flash memory, Blu-ray, DVDs and so on.

A very cool solution if you use it in conjunction with DPM2DPM4DR.

For more information:

2. i365

i365, a Seagate company, delivers its EVault software together with DPM to support non-Windows environments.  Workloads such as Linux, VMware, Sun Solaris, HP-UX, Oracle and so on can be protected this way, creating a solution that is fantastic for Windows (DPM) and at the same time gives you the opportunity to protect other workloads.

They also offer their solution as an appliance, based on a rebranded Dell server with everything preinstalled on it.

3. Iron Mountain

Iron Mountain delivers protection to the cloud.  With this company, you can protect your data and send it straight to the cloud from the DPM console.  A very cool solution for off-site backup.


That’s it for DPM week 2010.  In my humble opinion, the new version of DPM is a must-have for every Windows environment.  It has improved a lot over DPM 2007 SP1, which was already a good product.  Now it just got better.  And because Microsoft realizes that not everything in your environment is Microsoft, they built strong partnerships with other companies that leverage the product and allow you to do tape library sharing, so that you can protect your other apps with whatever you want…

To be continued



DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

6:28 am in Uncategorized by mikeresseler

Hey All,

Here’s part 5 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

This session was given on Friday morning and was originally supposed to be presented by David Allen (System Center Operations Manager MVP – Deloitte) and Sergio De Chiara (DPM Architect – Microsoft Corporation).

Due to the ash cloud, neither of them could make it to Las Vegas, which was quite a disappointment since I really wanted to see David in action.  He owns a blog which is a great resource for anyone who needs to work with DPM.

Luckily for me, the DPM team decided to throw in another session and the title sounded promising: Disaster Recovery and Advanced Scenarios.

So session 5 of DPM for me, on a Friday morning.  And Jason, if you are reading this, don’t forget the promise you made to the guys who followed all of your sessions… I’m eagerly waiting for the book :-)

Anyway, session 5 with Jason Buffington and Vijay Sen. 

On the agenda for today:

  • End-User Backup and Recovery
  • Bare Metal Recovery
  • Disaster Recovery
  • Misc
    • Agent Deployment in the Enterprise
    • Non-Domain Servers
    • SCOM Management Pack

The session started with some figures about what each hour of downtime costs when disaster strikes.  All nice figures, but a little too oriented toward American business.  I don’t think I know a company that will lose 6.4 million dollars of income for each hour they are out.  But no matter the exact figure, when your business is down it costs money, a lot of money, not to mention the damage to your image or, worse, the compliance issues you will be facing.  So in the worst case: how are we going to recover, and how are we going to do it as fast as possible?

Definition of a disaster:

The process of recovering from any natural or man-made disaster that results in partial or complete loss of the data center and infrastructure.

What I really liked is that this definition covers more than a hurricane, a flood or 9/11 (hey, we were in Vegas…); it also includes a disk crash, a stolen laptop and so on.  Basically, when data is lost, no matter in what form, it costs money.  So we need to recover.

All right, first topic discussed is dpm2dpm4dr (read: DPM to DPM for Disaster Recovery)


This was already working in DPM 2007, so nothing new here. 

However, they increased the possibilities with this:

  • One-click DPM DR failover and failback
  • Separate schedules per DPM server
  • Chaining support
  • Offsite tapes without courier services
  • Restore servers directly from offsite DPM


Suppose your main DPM server fails.  By using the switch protection option you can change recovery to the secondary server.  Rebuild or fix the primary DPM server, and use the same switch to move protection back to the primary server.

For each DPM server you can use a different schedule: your primary will probably have a very tight schedule, while your secondary will protect much less frequently if there is a WAN between them.

Chaining support is also one of the cool new features.  It basically allows you to do backup-to-backup-to-backup, or to protect multiple primary DPM servers with one secondary.  You can also cross: your primary server can act as a secondary at the same time, and vice versa.

Offsite tapes without courier services refers to the scenario where your secondary server is in an offsite location.  Since the tapes are already offsite, it is no longer necessary to send them with a courier.

And last but not least… Still need to recover after a major failure?  Recover straight from the secondary server.

Many other things were discussed during this session, such as pre- and post-backup scripts:


<?xml version="1.0" encoding="utf-8"?>
<ScriptConfiguration xmlns:xsi="" xmlns:xsd="" xmlns="">
  <DatasourceScriptConfig DataSourceName="Data source">
    <PreBackupScript>"Path\Script"</PreBackupScript>
    <PostBackupScript>"Path\Script"</PostBackupScript>
  </DatasourceScriptConfig>
</ScriptConfiguration>





We also saw a great demo of a BMR recovery.  Just start your server with a Windows CD (make sure the network card and disk subsystem are recognized, so use a WIM file with injected drivers if necessary), choose recovery mode and connect to the location of the BMR files.

The definition of a BMR backup is the following:

  • Backup of all critical volumes
  • Critical volumes = boot + system + volumes hosting files of server roles
    • E.g. boot, system, Active Directory (for DCs)
  • Used for both System State recovery and BMR recovery

So it is important to remember to take a separate backup of the other volumes that contain data!

Below is a great overview screenshot of a BMR recovery


Till next,



DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

7:29 am in Uncategorized by mikeresseler

Hey All,

Here’s part 4 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

This was the last session of DPM Wednesday, given by Asim Mitra and Vijay Sen, two program managers at Microsoft responsible for virtualization protection within DPM.

On the agenda of this session:

  • Protecting your hyper-v environment
  • Hyper-V Recovery Options
  • Recovering from a disaster
  • Sample Customer Deployments

They started by outlining the top priorities for CIOs in 2010


If you look at the screenshot, you will see that Disaster Recovery / Business Continuance and Server Virtualization come in 2nd and 3rd.  Number one is cost reduction, but I guess that will be the case for the next x years :-)

I know that virtualization is more “sexy” than disaster recovery for an IT pro, but it is of course pretty important to think about backup and disaster recovery whenever you deploy a new solution into your environment.  So why not do this hand in hand?  DPM is designed to protect Hyper-V fully, and if you have read one of my previous posts you know that it is also capable of backing up VMware virtual machines… if you tweak a bit :-)

So what are the features of DPM 2010 for protecting hyper-v?

  • Host-level backup of Hyper-V on WS 2008 R2
  • Cluster Shared Volumes (CSV) support
  • Seamless protection of Live Migrating VMs
  • Alternate Host Recovery
  • Item Level Recovery

Sounds interesting?  Let’s continue to have a look.

First, they started with a discussion on what to protect.  Should we protect at the host level and back up entire VMs?  Or should we protect inside the guests and take the data?  This was the sign for many people in the room to shoot the profile of their environment at the two and ask what the solution would be for their specific case.  Luckily these guys were smart enough (or well trained :-)) to leave all options open.  Why?  I think they share my opinion: you can never take this decision without first assessing an environment thoroughly.  There are many questions to ask before you can decide on a strategy, and even then, in many cases, you will use both.

I actually had a discussion that evening with a guy who could not believe that you would ever choose host-level backup only for a certain virtual machine.  I do think there are cases where this can be done.  Imagine a web server running in production whose configuration only changes once in a while.  A daily backup of the guest should be enough.  A lot of servers that just run and don’t contain user data or business data can be protected that way.  I mean, who cares that you lose log files if you don’t have to be compliant with something?  If you can recover the server quickly when it’s down, that’s more important than those log files, right?  And if they are important, I’m sure the business has a solution to archive those logs into an auditing system.  In conclusion, this really should be looked at on an individual basis, and below are some points that can be used to make that decision:

  • Host
    • Protect or recover the whole machine
    • “Bare Metal Recovery” & “Item Level Recovery” of every VM
    • Protect non-Windows servers & LOB applications that don’t have VSS writers
    • No granularity of backup
    • Single DPM license on host, all guests protected
  • Guest
    • Protect or recover data specifically
    • SQL database
    • Exchange
    • SharePoint
    • Files
    • No different than protecting the physical server
    • DPML per Guest
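To make the trade-offs above concrete, here is a tiny illustrative helper that applies the session’s rules of thumb.  This is plain Python, not a DPM API, and the attribute names are my own assumptions:

```python
def backup_level(vm: dict) -> str:
    """Pick a protection level for a VM using the session's rules of thumb.

    `vm` uses hypothetical keys; DPM itself exposes no such API.
    """
    # Guests without VSS writers (non-Windows OSes, some LOB apps)
    # can only be captured whole from the host.
    if not vm.get("has_vss_writer", True):
        return "host"
    # Granular recovery of SQL, Exchange, SharePoint or files needs an
    # agent inside the guest (one DPML per protected guest).
    if vm.get("needs_granular_recovery", False):
        return "guest"
    # Otherwise a daily whole-VM backup from the host is often enough,
    # e.g. a web server whose configuration rarely changes.
    return "host"
```

A Linux appliance would land on host-level backup, a production SQL Server guest on guest-level, and a static web server on host-level; in practice many environments end up using both.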

Next topic, how does it work.

As always, you start with an initial replica.  After that, a full copy is never made again.  What happens is the following:

  1. DPM initiates the backup process
  2. Using the VSS framework, an application-consistent snapshot is created inside the guest virtual machine
  3. A snapshot of the VM is created on the host (important note: use a hardware VSS provider if you are using a CSV)
  4. A checksum comparison of the VM snapshot with the DPM replica is performed
  5. Finally, only the changed blocks are replicated to the DPM server
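Steps 4 and 5 can be sketched in a few lines.  The block size and hash function below are illustrative only; DPM’s real block size and comparison mechanism are internal:

```python
import hashlib

BLOCK = 4096  # illustrative block size, not DPM's real one


def blocks(data: bytes):
    """Split an image into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]


def changed_blocks(snapshot: bytes, replica: bytes):
    """Step 4: compare per-block checksums of the VM snapshot and the replica."""
    repl = blocks(replica)
    out = []
    for i, blk in enumerate(blocks(snapshot)):
        old = hashlib.sha256(repl[i]).digest() if i < len(repl) else b""
        if hashlib.sha256(blk).digest() != old:
            out.append(i)
    return out


def replicate(snapshot: bytes, replica: bytearray):
    """Step 5: copy only the changed blocks onto the DPM replica."""
    for i in changed_blocks(snapshot, bytes(replica)):
        blk = snapshot[i * BLOCK:(i + 1) * BLOCK]
        replica[i * BLOCK:i * BLOCK + len(blk)] = blk
    return replica
```

The point of the checksum pass is that an unchanged VM costs almost no replication traffic, no matter how big its VHDs are.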

Seamless protection of Live Migrating VMs

Yep, you’ve read it correctly.  The backup administrator (I would like to introduce a new title for this job: Business Continuity and Protection Engineer, or Officer… what do you think? :-)) doesn’t need to care where the actual virtual machine resides.  With Live Migration, PRO tips in SCVMM and virtualization admins, you can imagine that the placement of a virtual machine is never fixed.  And you can also imagine that the virtualization admins won’t update the backup guy every time a machine has moved; with all the automation you can create these days (SCVMM, Opalis, SCOM…) they probably won’t have a clue either.  DPM knows where the virtual machine is and protects it from there.  If a machine is moved, DPM follows it to its new location.

What about storage migration?  Will that work too?  Yep, it will.  Again, DPM will follow the path.

All nice and well: you are protected.  But an issue happened, and you need to recover.  What are your options?

  • Restore VM back to original host or cluster

Probably the most expected option: the system went down, recover to the same location and you’re up and running again.

  • Restore VM to a different host or cluster

A little less expected.  Restore the server to another cluster or an individual host.  Now this opens up options.  Take a backup of a production server and restore it to another host for testing purposes.  Just make sure your test environment can’t talk to your production environment.  Not sure about the latest patches or service packs?  Restore to another environment, deploy the patches and see if the server still starts nicely.

  • Item Level Recovery (ILR) to file share

And this will become a much-used feature in the future.  Mount the virtual machine, get inside the guest and get the items off the disk.  This can be extremely handy if you decommissioned a server but forgot to copy one or two files.

They also discussed disaster recovery and how to prepare for it, but that will be highlighted much more in the next part.

Finally they showed some real-life implementations.  I’ll add the example of a mid-sized Asian hoster here.

CSV Production Environment

This customer has multiple 3–5 node CSV clusters with 30+ VMs on each.

Each cluster has a Fibre Channel SAN – Dell EqualLogic with a hardware VSS provider.

They maintained a ratio of one CSV per cluster node, and the VHDs for a VM are co-located in a single CSV.

Backup Configuration:

The VM workload mix comprises almost all Microsoft workloads (a complete Microsoft shop).

The average size per VM is ~70 GB.

All VMs are backed up at the host level with DPM 2010 on a daily basis.

The 35% of servers that require granular backup and near-continuous RPO continue to be backed up at guest level using DPM 2010, just as before in a physical environment.

Typical DPM 2010 Server Configuration

Number of processors on the DPM server: Intel 2 × 4 cores

Amount of RAM on the DPM server: 8 GB

Each DPM 2010 server protects a fan-in of 3 such CSV clusters.


Till next post



DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

6:37 am in Uncategorized by mikeresseler

Hey All,

Here’s part 3 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

A session given by Tim Kremer with, you guessed it, Jason Buffington as backup :-)

This session was all about protecting your clients.  The first thing we started with was the reason why we want to protect clients at all.  Many companies or IT pros will respond that users should save their valuable data somewhere on the network or take a backup on their own.  While this probably works in one or two percent of companies, I’m sure it fails in the other 98 percent.  The reason is simple.  When people are travelling, they won’t be uploading their data to a network share, and even when they are in the office and need to copy their data to the server on a Friday evening… guess what will happen :-).  If they need to back up their own data, you will probably have users who keep 100 copies of their data on an expensive network share, and others who never bother, or who back up to the local drive of their laptop.  So if the laptop gets stolen or the disk dies…


According to some research company (Forrester or Gartner or so, I forgot which one), about 60% of the intelligence of a company resides on the local disks of its users.  Now that’s a lot.  So if we want to protect that knowledge, we need to find a good way to do it without too much trouble and without disturbing the users or letting them do it themselves.  It just won’t happen.  Period.  (This process of letting end users do the backup themselves is often called the “tax” on the end users.)


When designing the solution, the architects @ Microsoft had the following challenges:

  • Mobile workforce
  • Different users with different needs
  • Large scale (many many desktops / laptops)

So they created the following goals:

  • Remove the end user tax
  • Support roaming user backups
  • Allow customizability for specific users
  • Enforce admin defined restrictions
  • Keep IT costs low

How did they solve those requirements?

With the same agent as the one for the servers, you can start protecting your clients.  Using your favorite deployment method (SCE, SCCM, AD, MDT…) you can get the agents out there.  Remember, you don’t pay a license for an agent if you don’t use it, so deploying it across your entire network is not going to give you a licensing issue.  You start paying the moment you start protecting.  Period.

Second, an IT pro can create different policies.  Let’s say we want a client to protect its My Documents, a specific company directory and maybe some more folders that can be important for the user, such as Favorites.  But of course, we don’t want the My Pictures or My Music folder to be protected; the company is not interested in the vacation pictures or MP3 library of its employees.  (OK, the IT pros might be interested in the MP3 collection :-).)  By defining a policy and including/excluding folders you can achieve this.  And it gets even better: you don’t need to know the exact location of the My Documents folder, because DPM uses the path variable to find out where it is.  And last but not least, you can actually deny certain extensions; no .mp3 files is a good example.  Whether we like it or not, end users are mostly smart enough to see that certain folders are excluded and will move their “valuable data” to a folder that is protected.
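As a sketch of how such a policy could evaluate a single file, here is some illustrative Python.  The folder names and the precedence rule (deny beats exclude beats include) are my own assumptions, not DPM’s actual policy engine:

```python
from pathlib import PureWindowsPath


def is_protected(path, included, excluded, denied_exts):
    """Apply an include/exclude/deny client policy to one file path."""
    p = PureWindowsPath(path)
    # Denied extensions (e.g. ".mp3") always lose, wherever the file lives.
    if p.suffix.lower() in denied_exts:
        return False

    def under(folder):
        return PureWindowsPath(folder) in p.parents

    # Assumed precedence: an excluded folder beats an included one.
    if any(under(f) for f in excluded):
        return False
    return any(under(f) for f in included)
```

For example, with `included = [r"C:\Users\bob\Documents"]`, `excluded = [r"C:\Users\bob\Documents\My Music"]` and `denied_exts = {".mp3"}`, a report in Documents is protected, while MP3 files and anything under My Music are not.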


Now what if users want to protect some specific folders themselves?  Folders that are not standard in the company but still contain valuable information.  By giving (some of) the end users the rights, they can choose certain folders to be protected themselves.


Now what about users on the road?  How is this going to work?  Here’s the answer.

1. Backup is supported over VPN and DirectAccess.  So whenever a client is connected to the main office over VPN or DirectAccess, it can synchronize with the office.  Remember the block-level copy from part 2: the amount of data sent over is really not that much.

2. DPM provides you with two mechanisms.  While performing a backup, it will send the data to the DPM server if it is reachable.  At the same time, it keeps a local copy on the laptop, so users can restore from their local cache if necessary.  Will this protect you from hardware failure or a stolen laptop?  No, it won’t, but users will be able to go back to a previous version of a document when necessary, even while working on the road.

3. What about notifications?  Everybody who has ever worked with DPM 2007, or with any backup solution for that matter, knows that the system will start complaining whenever it can’t reach its clients.  DPM will do that too, but they built in a setting for how long it waits before it starts to complain.  Consider that many people take 14 days of vacation.  Add the weekends and you get 18 days.  So only after 18 days do you let the DPM server complain that it is missing a connection to a client.  This way you avoid a lot of false alarms, and only the clients of people who take more than two weeks of vacation, or who travel longer, go into alert.

What about the costs?  You can imagine that all the user data will take a lot of disk space.  First, you can use low-cost storage for this, and second, because the system works pretty well, it doesn’t take much human effort.  Compare that with letting users back up their own data to a network share: that is mostly high-end storage which costs a lot, is never cleaned up by the users, and will probably hold the same file 50 times.  DPM doesn’t need that, because it only stores the changes.  Second, think about the value of the data.  Ask the business what it costs when a road warrior loses a laptop and the data it contains.  You can do the math quickly.


So how does the end user see this?

Below are a few screenshots of the end-user experience


End-user recovery


Agent in the notification area


Agent UI


Want more?  How about this…

A user loses his or her laptop.  Or the machine just died.  You have a backup from yesterday on your DPM server.  The deployment team quickly prepares a new laptop with their favorite OSD tool, with the agent installed or sysprepped on it.  You jump behind the DPM console and do a restore to another location.  The user gets the data back :-)

Even more?

The DPM agent allows the end user to synchronize on demand.  So suppose they made some important changes to a document: they can synchronize whenever they want, to the DPM server if they have a connection or to the local cache if not.  So if end users really did some important work, they can create a “backup” of their own before flying out or going on vacation.  With one simple click, the system does the work.


Till next for part 4



DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

6:45 pm in Uncategorized by mikeresseler

Hey All,

Here’s part 2 of our DPM 2010 launch week overview

For the full set:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

Let’s continue today with Session 2

Session 2: Protecting Application Servers with DPM 2010

Session two of an entire DPM day (the first four sessions were all on the same day, so for me it was like being a child in a candy store all day!  Not to mention that I had the chance to speak to Jason himself in the evening… :-))

Again this was presented by Jason Buffington.  This session explained the entire VSS process and the differences when protecting Exchange, SQL or SharePoint.  So here are my notes:

How does this VSS writer thingy work?

Here’s how:

To start, when you decide to protect a workload, DPM creates a replica.  This means it makes an exact copy of the original sources, whether that is SharePoint, SQL databases or files.  After that, DPM never again makes an entire copy of the data.

So what does it do?  DPM works with express full backups, which are block-level based, and synchronizations, which are byte-level based.  The express full backup is the latest version; all previous versions are the so-called layers.

So after the replica, DPM creates a volume map of the data.  Is it large?  No: a 0 or 1 for each 120 KB, so the footprint is small.  Here’s an example of a volume map


So let’s say that after one hour an express full backup is taken, and this is how the volume map looks


This is what happens:

1. A VSS snapshot is taken on the production volume to ensure consistent data

2. The cache of changed blocks is sent to the DPM server

Important to know here is that file I/O continues; the VSS writer only “freezes” the blocks that have changed, so the server can continue normal operation!  No more taking databases offline or bringing solutions into maintenance mode… if it has a VSS writer, it all stays online.

Finally, after the blocks are sent to the DPM server, the VSS writer releases the frozen blocks.

In a nutshell, this is how the express full backups work.
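The volume map described above can be sketched as a dirty-block bitmap.  The ~120 KB block size comes from the session; everything else here is illustrative, not DPM internals:

```python
BLOCK = 120 * 1024  # one bit per ~120 KB block, as quoted in the session


class VolumeMap:
    """Minimal sketch of a dirty-block map between two express fulls."""

    def __init__(self, volume_bytes: int):
        self.bits = [0] * ((volume_bytes + BLOCK - 1) // BLOCK)

    def mark_write(self, offset: int, length: int):
        """A production write flips the bit of every block it touches."""
        for i in range(offset // BLOCK, (offset + length - 1) // BLOCK + 1):
            self.bits[i] = 1

    def express_full(self):
        """Ship only the dirty blocks, then reset the map for the next cycle."""
        dirty = [i for i, bit in enumerate(self.bits) if bit]
        self.bits = [0] * len(self.bits)
        return dirty
```

This is why the footprint stays small: tracking a 1 TB volume at one bit per 120 KB is only about a megabyte of map.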

But how does synchronization work?  Again, this was explained with an example, so here goes:


We assume in our example that we are working with a database

Every xx minutes (depending on your settings) the closed transaction logs are synchronized



In case you need to recover, you return to the database express full backup from 0:00 and roll the transaction logs forward to the point in time that you want.

That’s it.  So with DPM 2010 you can go back to just about any point in time you want.
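The roll-forward described above can be sketched like this (illustrative Python; timestamps are minutes since the express full at 0:00):

```python
def recover(express_fulls, log_syncs, target):
    """Pick the latest express full at or before `target`, then replay every
    transaction-log synchronization between that full and `target`.

    `express_fulls` and `log_syncs` are sorted lists of timestamps (minutes).
    """
    base = max(t for t in express_fulls if t <= target)
    replay = [t for t in log_syncs if base < t <= target]
    return base, replay
```

With an express full at 0:00 and a synchronization every 15 minutes, recovering to 0:50 restores the 0:00 full and replays the 0:15, 0:30 and 0:45 logs.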

Now how many points can you create?

If you perform the following schedule:

You want to be able to go back 512 weeks, times 7 days (one express full per day), times 24 × 4 (for a synchronization every 15 minutes).  That means roughly 344,000 points in time (344,064, to be exact).  This is the maximum, and it would mean point-in-time recovery for almost the last 10 years!
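Spelled out as a quick calculation:

```python
weeks = 512             # maximum retention range
syncs_per_day = 24 * 4  # one synchronization every 15 minutes
points = weeks * 7 * syncs_per_day
print(points)                # 344064 recovery points
print(round(weeks / 52, 1))  # 9.8 years of point-in-time recovery
```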

Now here is the joke:

  • MS doesn’t want you to recover a SQL 2005 database in 9 years
  • It will cost a lot of disk
  • It will cost A LOT of disk

(For your information, these are not my words :-))

If you want more information about this mechanism, make sure you check out one of our SCUG offline DPM events

You can imagine of course that there are some differences in protecting the different workloads.  So here is an overview of the differences

Exchange 2007 LCR (Local Continuous Replication)

What is it? One Exchange server with a redundant copy of the database.  It can fail over to the redundant copy in case of database corruption or when the drive where the active database resides is lost.

DPM backs up the database from the active database drive.

Exchange 2007 CCR (Cluster Continuous Replication)

What is it? Redundant Exchange servers and redundant databases.  These can be geo-diverse, and the database logs are replicated.

DPM can now back up the active or the passive database, whichever you prefer.

You can choose this on a role preferred base:

  • Active – most current data
  • Passive – least production impact

Or you can choose on a node-preferred basis when you are working geo-diverse; then you choose the node closest to the DPM server.

Exchange 2007 SCR ( Standby Continuous Replication)


This is even more intelligent.  Suppose you have a DPM server at your main site and Exchange SCR.  The first DPM server protects the passive node.  SCR replicates to a standby node, and as the picture shows, that replication goes over the WAN.  Suppose you have a secondary DPM server at that other site.  Instead of replicating twice over the WAN, DPM is smart enough to do the secondary protection from the standby node, so no additional bandwidth is necessary.

Exchange 2010 DAG

And finally there is the DAG (Database Availability Group), where DPM works with a copy instead of a full backup.  This lowers the resources necessary for protecting your Exchange environment.  See the screenshot.


SQL Server Mirrored database

  • Mirrors feature redundant SQL servers and redundant databases
  • Databases logs are replicated
  • Database failover is recovered automatically

SQL Server Log Shipping

This features one SQL server with redundant databases.

  • Each copy is treated as a unique drive by DPM
  • Redundant backups require that both drives be protected
  • Express Full’s only – no T-Logs

If you are wondering why there are no transaction log backups with this kind of solution, the reason is pretty simple: never, but never, let a protection application touch the transaction logs when the system is handling them itself.  That would be asking for trouble.


SharePoint

SharePoint uses a lookup to determine what data is necessary to protect the SharePoint farm, including the content databases, web front-end servers and so on.

For 2007 you still need a recovery farm if you want to do an item-level recovery, but with SharePoint 2010 you can actually do item-level recovery WITHOUT a recovery farm.


In the end, one final question…

Each time you deploy something… How are you going to back it up?

Till next,



DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

5:34 pm in Uncategorized by mikeresseler

Hey All,

Back to normal life, so I found some time to blog about the most important thing that happened at MMS 2010: DPM 2010 has RTM-ed :-).  Yes, you read it correctly, DPM RTM-ed on Monday, April 20th, at MMS.  The evaluation version is already available, EA and VL will follow in May, and GA and the MS price list will be there on June 1st.

For all those who love DPM, this was certainly something we were looking forward to, and I certainly got spoiled over there.  No fewer than 5 breakout sessions, 1 instructor-led lab and 4 hands-on labs were there for the DPM fan.

Here’s the overview of what you have missed when you weren’t there:

Break-out sessions

  • Technical Introduction to DPM 2010
  • Protecting Applications with DPM 2010
  • Protecting Windows Clients with DPM 2010
  • Virtualization and Data Protection, Better Together
  • Disaster Recovery and Advanced DPM 2010 Scenarios

Instructor Led Lab

  • Technical Introduction to DPM 2010 – Instructor Led Lab

Hands-on labs

  • Technical Introduction to DPM 2010
  • How to protect SQL Server with DPM 2010
  • How to protect SharePoint with DPM 2010
  • How to protect Exchange Server with DPM 2010

In this series of posts I will cover the five break-out sessions and the partner announcements.  Here is the overview:

DPM 2010 launch week @ MMS 2010: Part 1: Technical Introduction

DPM 2010 launch week @ MMS 2010: Part 2: Protection Applications

DPM 2010 launch week @ MMS 2010: Part 3: Protecting Windows Clients

DPM 2010 launch week @ MMS 2010: Part 4: Virtualization and Data Protection, better together

DPM 2010 launch week @ MMS 2010: Part 5: Disaster recovery and advanced scenarios

DPM 2010 launch week @ MMS 2010: Part 6: Partner announcements

Session 1: Technical Introduction to DPM 2010

The first session was given by the backup guy himself, Jason Buffington.  The technical introduction started with the reason why MS decided to build a backup solution.  Microsoft builds applications such as Exchange, SQL and SharePoint.  Third-party vendors build solutions to protect these environments.  Microsoft found that many companies waited to implement new applications until the backup vendors were ready to protect them.  With the years passing and the applications evolving, the backup vendors had more and more issues protecting the workloads.  And that’s why Microsoft decided to create its own solution.

Second important reason… When you are in disaster recovery mode, and something is failing while you are trying to recover, who do you turn to?  The backup vendor?  They will say it is a Microsoft issue.  And Microsoft?  They will say that the data wasn’t written correctly to tape or disk.  So there is a gap.  Now that Microsoft has its own backup solution, it is much simpler.  Something wrong?  Microsoft support.  Their applications, their backup solution.  Fix it :-)


Here is an overview of what DPM is capable of protecting.  This slide has been shown many times already, and you will see it on many more occasions.

The DPM statement couldn’t stay away either.  Those who followed my DPM session @ Microsoft Belgium or watched it online through Edge (link 1, link 2) will certainly remember this one:

System Center Data Protection Manager 2010 delivers unified data protection for Windows servers and clients as a best-of-breed backup & recovery solution from Microsoft, for Windows environments. DPM 2010 provides the best protection and most supportable restore scenarios from disk, tape and cloud — in a scalable, reliable, manageable and cost-effective way.

Next up was a high-level overview of the capabilities of DPM 2010.

These are the platforms supported by DPM 2010:

  • Windows Server® 2008 R2
  • Windows Server® 2008
  • Windows Storage Server 2008
  • Windows Server® 2003 R2
  • Windows Server® 2003 Service Pack 1
  • Windows Storage Server 2003 R2
  • Windows Unified Data Storage Server
  • Windows® 7
  • Windows Vista® Business or higher
  • Windows® XP Professional – Service Pack 2

And these are the applications supported by DPM 2010:

  • Microsoft® SQL Server™ 2008
  • Microsoft® SQL Server™ 2005
  • Microsoft® SQL Server™ 2000 Service Pack 4
  • SAP® running on Microsoft SQL Server
  • Microsoft® Exchange Server 2010 – including DAG
  • Microsoft® Exchange Server 2007 – including LCR, CCR , and SCR
  • Microsoft® Exchange Server 2003 Service Pack 2
  • Microsoft® Office SharePoint® Server 2010
  • Microsoft® Office SharePoint® Server 2007
  • Microsoft® Office SharePoint® Portal Server 2003
  • Windows® SharePoint® Foundation Services 4.0
  • Windows® SharePoint® Services version 3.0
  • Windows® SharePoint® Services version 2.0
  • Microsoft® Dynamics® AX 2009
  • Windows® Essential Business Server 2008
  • Windows® Small Business Server 2008

A short overview was also given of what DPM can do with your application workloads.

File Services:

  • Windows Server 2003 through 2008 R2
  • Self-Service End-User Restore directly from Windows Explorer or Microsoft Office (yes, end-user recovery is supported from Office 2003 onward)


SQL Server:

  • SQL Server 2000 through 2008, including SAP®
  • Protect entire SQL instance – auto-protection of new DB’s (Just select an instance and every new database within that instance is discovered and protected!)
  • Ability to protect 1000’s of DB’s using a single DPM server
  • Self-Service Restore Tool for Database Administrators (Let your SQL admins recover their databases themselves.  No more backup administrator intervention!)
  • Recover 2005 databases to 2008 servers (Great feature to test the compatibility of line of business applications onto the 2008 version of SQL)


SharePoint:

  • Office 14 (SharePoint 2010), MOSS 2007 and SPS 2003
  • Auto-protection of new content databases within Farm
  • Protect the Farm, Recover an Individual Document (Item-level recovery for SharePoint 2010, now without the need for a dedicated recovery farm!)


Exchange:

  • Exchange 2003 through 2010
  • Optimizations for SCC, CCR, SCR, DAG and ESE offloading


Virtualization:

  • Host-level backup of Hyper-V on WS 2008 R2
  • Cluster Shared Volumes (CSV) support
  • Seamless protection of Live Migrating VMs (VM moved to another host?  DPM follows it to keep protecting it!)
  • Alternate Host Recovery
  • Item Level Recovery (Mount a vhd and restore only certain files out of the virtual machine!)

More information about the Client support

  • Support for XP, Vista, and W7
  • Backup over VPN and, breaking news at MMS… DirectAccess is supported in the RTM version!
  • Scale to 1000 clients per DPM server
  • “Unique user data” only
  • Not the whole machine, so that the OS is not repeatedly backed up
  • Integration with local Shadow Copies for Vista & W7
  • Centrally configured from DPM admin UI
  • End User enabled restore from local copies offline and online, as well as DPM copies
  • Admin enabled restore from DPM copies

One of the most frequently heard comments on DPM 2007 was that it wasn’t enterprise-ready.  The team worked hard to change this, and they succeeded:


Scalability:

  • 100 Servers, 1000 Laptops, up to 2000 Databases per Server
  • Significantly increased fan-in of data sources per DPM server
  • Up to 80 TB per DPM server


Reliability:

  • Automatic re-running of jobs and improved self-healing
  • Automatic protection of new data sources for SQL & MOSS
  • Decreased “Inconsistent Replicas” errors
  • Reduced Alert volume




Of course there had to be some explanation about the licensing.  One of the cool features of DPM is that it only has one agent.  Yes, there are two versions of it, 32-bit and 64-bit, but in the end it is one agent.  Want to protect workstations?  One agent.  Want to protect Exchange?  Same agent.  System state?  One agent.

Is it the same for the licensing?  No, there are three different licenses for the agent, as you can see in the screenshot:

  • Client DPML
    • 1 workstation protected means 1 client DPML.
  • Standard DPML
    • A server where you only protect files or system state will cost you 1 standard DPML
  • Enterprise DPML
    • A server where you protect application workloads such as Exchange, SQL Server or SharePoint, or where you use Bare Metal Recovery or DPM2DPM4DR (DPM to DPM for Disaster Recovery, more on that in part 5: Disaster recovery and advanced scenarios), will cost you 1 enterprise DPML

Do you need to calculate this yourself?  No, from the moment you start protecting something, DPM will calculate which kind of license you need.  It gets even better: you can deploy a DPM agent to every server or workstation with your favorite deployment tool.  If a machine is not protected, you don’t pay anything; the agent will just sit there, disabled, waiting to start working when YOU decide.
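The licensing rules above boil down to a simple classification.  Here is a small sketch of that logic, purely for illustration (the function and workload names are hypothetical, and DPM does this calculation for you automatically):

```python
# Illustrative sketch of the DPML licensing rules described above.
# Workload names are hypothetical labels; DPM calculates this for you.

# Workloads that require an Enterprise DPML on a server
APP_WORKLOADS = {"exchange", "sql", "sharepoint", "bmr", "dpm2dpm4dr"}

def required_dpml(machine_type, protected_workloads):
    """Return which DPM license a protected machine would need."""
    if machine_type == "workstation":
        # Any protected workstation needs a Client DPML
        return "Client DPML"
    # A server protecting any application workload (or using BMR /
    # DPM2DPM4DR) needs an Enterprise DPML
    if any(w in APP_WORKLOADS for w in protected_workloads):
        return "Enterprise DPML"
    # A server with only files and/or system state needs a Standard DPML
    return "Standard DPML"

# Example: a plain file server vs. an Exchange server
print(required_dpml("server", {"files", "systemstate"}))  # Standard DPML
print(required_dpml("server", {"files", "exchange"}))     # Enterprise DPML
```

An unprotected machine simply never enters this calculation, which matches the “agent deployed but disabled costs nothing” model described above.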

Where does DPM fit in?


The above screenshot shows the positioning of DPM.  It is part of the System Center suite and works both with the “big brother” System Center products and with its “little sister,” SCE.  If you are still deciding what to use as your management solution, make sure you check out the SMSE and SMSD licenses for the suite.

That’s it for part 1; the next parts will go more in depth on what has been covered here.

Till then,