You are browsing the archive for 2009 July.

System Center Data Protection Manager 2007: Interesting Powershell Script

7:16 am in Uncategorized by mikeresseler

Hey All,

One of my colleagues (thanks Arne!) found this interesting post about a PowerShell script included in Service Pack 1 of DPM 2007.  It is called the MigrateDatasourceDataFromDPM.ps1 script.


The MigrateDatasourceDataFromDPM is a command-line script that lets you migrate DPM data for individual “data source(s)” or all Replica volumes and recovery point volumes to different physical disks. Such a migration might be necessary when your disk is full and cannot be expanded, your disk is due for replacement, or disk errors show up.

Depending on how you have configured your environment, this could mean one or more of the following scenarios for moving data source data:

· DPM Physical disk to another DPM Physical disk

· DPM Data source to different DPM Physical disk

· DPM Data source to Custom volume.

The MigrateDatasourceDataFromDPM script moves all data for a data source or disk to the new volume or physical disk. After the migration is complete, the original disk from which the data was migrated is no longer chosen for hosting any NEW backups; however, the recovery points located on the source disk can still be used for restores until they expire.

Note: You must retain your old disks until all recovery points on them expire. After the recovery points expire, DPM automatically de-allocates the replicas and recovery point volumes on these disks.
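From what I have seen in the TechNet documentation, the disk-to-disk scenario looks roughly like this when run from the DPM Management Shell (the server name and disk indexes are placeholders for your environment):

```powershell
# Run from the DPM Management Shell on the DPM server
# (the script lives in the DPM install's bin folder).
# "DPM01" and the disk indexes are placeholders.
$dpmServer = "DPM01"
$disks = Get-DPMDisk -DPMServerName $dpmServer        # disks in the storage pool
# Migrate everything from the first storage-pool disk to the second:
./MigrateDatasourceDataFromDPM.ps1 -DPMServerName $dpmServer `
    -Source $disks[0] -Destination $disks[1]
```

Check the output of Get-DPMDisk first so you are sure which index is the full/failing disk and which is the new one.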

Read more @



System Center Virtual Machine Manager: Reports error

6:17 am in Uncategorized by mikeresseler

Hey All,


Today I got a notification from a customer telling me that he couldn’t run the reports anymore in SCVMM.  He sent me this screenshot.

The error he’s got is: Server was unable to process request. –> There is not enough space on the disk.

He also told me that he had more than enough free space on his Virtual Machine Manager server.  When I asked him about the storage on his Operations Manager server, we found the problem: the disk over there was full.

(I still wonder though, if you have Operations Manager, shouldn’t you see this one coming ;-))



Virtual Machine Manager 2008: Cost Center

10:21 am in Uncategorized by mikeresseler

Hey All,

One of the nice things in Virtual Machine Manager is the Cost Center feature.  Although this seems like “just another field in the database”, it can come in handy when you need to prove to your management which business unit is using what resources.  Or, if the servers are used for certain projects, you can prove which project is using what.  This can be handy when you need more hardware and the management asks for proof that you are using everything you’ve got 😉

In my example, I’ve added some cost centers to some servers.  You do this by double-clicking the virtual machine or by opening its properties.


As you can see, I’ve added the Cost Center MSS to this server (which is actually the business unit I’m working for ;-))
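For bulk changes, the cost center can also be set from the VMM PowerShell interface; here is a sketch assuming the VMM 2008 cmdlets, with hypothetical server and VM names:

```powershell
# Connect to the VMM server and tag a VM with a cost center.
# The server and VM names below are hypothetical.
Get-VMMServer -ComputerName "vmm01.contoso.local"
$vm = Get-VM -Name "SRV-APP01"
Set-VM -VM $vm -CostCenter "MSS"
```

This saves a lot of clicking when you need to tag dozens of VM’s with their business unit at once.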

Now the fun starts when you have Operations Manager in place and you have imported the Virtual Machine Manager Management Pack into OpsMgr.  Now you get cool reporting.

In Virtual Machine Manager, I go to reporting and I have one interesting report called Virtual Machine Allocation


Ok, I hid the names of the cost centers, but the important thing is that for each cost center you can see the # of VM’s, # of VM’s deployed, # in the library, # of processors, total allocated memory, number of disks, max disk space allocated and the number of NICs.

Now the management has a great overview of who’s using all the hardware resources within the company.



System Center Data Protection Manager 2007: Important Hotfix Rollup Package

9:30 am in Uncategorized by mikeresseler

Hey All,

Microsoft has released a hotfix rollup package for SCDPM 2007 SP1.  This package resolves a lot of issues with this product:

Issue 1
If you enable library sharing, you cannot delete a protection group, and you receive the following error message:

Cannot promote the transaction to a distributed transaction because there is an active save point in this transaction.

Issue 2
The SharePoint backup process fails if DPM 2007 cannot back up a content database. If you install this update, the SharePoint backup process will finish. However, an alert will be raised if DPM 2007 cannot back up a content database.

Issue 3
DPM 2007 jobs randomly fail. In the administrator console, you see error code 0x800706BA if you check the detailed information about the failed job.

Issue 4
DPM 2007 does not delete directories that are no longer being protected from the replica volume.

Issue 5
When you restore a Microsoft SharePoint site that is configured to use a host header, the incorrect SharePoint site is restored.

Issue 6
DPM 2007 performs a Volume Shadow Copy Service (VSS) full backup. Because the transaction logs are deleted when the DPM 2007 backup job is completed, this backup may interfere with other applications that are backing up transactional applications such as Microsoft SQL Server or Microsoft Exchange.
After you install this update rollup, you can configure DPM 2007 to perform VSS copy backup. This means that the application transaction logs will not be deleted when the DPM 2007 backup job is finished.

Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:

322756: How to back up and restore the registry in Windows

To configure DPM 2007 to perform VSS copy backup, add the following CopyBackupEnabled registry value under the following registry subkey:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0

Registry value: CopyBackupEnabled
Value Data: 1
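If you prefer to script the change instead of using Regedit, something like the following should work (note that the DWORD value type is my assumption; the KB article only gives the value name and data):

```powershell
# Back up the registry first (see KB322756), then run on the DPM server.
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\2.0"
# DWord is an assumption -- the KB only specifies the name and the data (1).
New-ItemProperty -Path $key -Name "CopyBackupEnabled" -Value 1 -PropertyType DWord
```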

Issue 7
Reconciling a VSS shadow copy causes the DPM 2007 service to crash.

Issue 8
The DPM 2007 service crashes if a tape backup job is canceled during the CheckConcurrencyBlock operation.

Issue 9
The Pruneshadowcopies.ps1 script cannot delete expired recovery points.

Issue 10
If a parent backup job of a SharePoint farm fails, but the child backup succeeds, the DPM 2007 service crashes.
This hotfix also includes the following previously released rollups for Data Protection Manager 2007 Service Pack 1:

961502: A replica may be listed as inconsistent after you install System Center Data Protection Manager 2007 Service Pack 1 on a computer that is running an x64-based version of Windows Server 2003

963102: Description of the hotfix rollup package for System Center Data Protection Manager 2007: February 16, 2009

968579: Description of the hotfix rollup package for System Center Data Protection Manager 2007: April 14, 2009

Make sure you install this one (KB970867) on all of your DPM installations.



System Center Service Manager: What’s this?

7:23 pm in Uncategorized by mikeresseler

In early 2010, Microsoft will release a new application as part of the System Center family.  The name: System Center Service Manager.  A much-awaited application, because it will be the ticketing system from Microsoft.


Why is everybody waiting for this application?

Every time I’m talking to IT decision makers about the future of IT management, the discussion will sooner or later turn to helpdesk systems.  Why?  Because many IT decision makers admit that this is a very strategic asset for their environment.  Whether it is a small internal IT team or a large one, a ticketing system is necessary for all of them.  But what is on the market at the moment?  Either you have small ticketing systems (open source, from a vendor…) that simply do what they are: a ticketing system.  Disadvantages… manual adding of the assets (which, as you all know, starts well the first week and after that…).  Or larger IT teams implement expensive “service desk” solutions, sometimes with built-in scanning to build up a CMDB, and others implement large solutions but have spent hours, days, months and even years integrating the solution into their environment.

Again, why is everybody so eager for this application?

Because of the success of the other System Center products (SCOM, SCCM) and Microsoft’s promise of full integration with their ticketing system, many IT decision makers are considering this product for implementation.  Why?  SCCM will do the scanning and the SCCM CMDB will be fully integrated into the ticketing system.  The alerts coming from SCOM will automatically be created as tickets in the system, so a lot of manual work will be done automatically.

But can this ticketing system be a solution for many companies?

Yes, and maybe.  For many smaller IT teams, this will be a great solution.  It will have everything that they need.  It can be customized easily to their own processes and procedures, it has a self-service portal and it can be integrated with two management applications (SCCM and SCOM) that are very popular in those environments.

Maybe, because I’m not sure yet about the larger environments.  Will it be able to compete with, for example, a CA?  Will it be possible to define SLA’s for different suppliers, or for different internal divisions?  Will the system be usable as a commercial service desk for ICT providers?  I could keep thinking of questions, but I won’t.  Instead, I will start investigating what can be done with the system and how far we can go.

One thing I know for sure.  Microsoft really built the platform so that it can be compliant with COBIT, ITIL, MOF (their own framework) and so on.  If they are able to get it ITIL certified, then the story for success is created.  And if it is flexible enough to adapt the product without too much code so that it can be used for large environments, then I think the larger ticketing system companies have a new, dangerous competitor.

In the next months, I will start with my investigation.

Till then,



System Center Data Protection Manager SP1: How to start a DPM project

2:43 am in Uncategorized by mikeresseler

// Note: For some reason, my tables are not shown.  I’ll try to fix this but I don’t seem to be able to figure out why… Sorry about that.

As promised, here is some guidance on how to start a DPM project, based on the IPD guides from Microsoft.

First thing to know: when you implement a DPM project (and this goes for just about ALL of the System Center Suite), you need to know and understand exactly what the business requirements are.  Without them, it is impossible to deliver a great implementation.  With them, and understanding them, the implementation will be quick and easy, and your backup worries will decrease a lot.

Okay, here goes.

First, I will give the Decision Flow according to the IPD


All these steps will now be discussed

Step 1: Project Scope

In this Step, you will need to collect all the information necessary for the implementation.

  • AD domain & Forest information

You start with the AD domain and forest information.  The servers you need to protect have to be in the same domain, or there has to be a two-way cross-domain or cross-forest trust between the domains where the protected servers are located.

I normally use the following table to write down the information

Domain Name | FQDN | NetBIOS Name | DC
  • Network Topology and Bandwidth

Make sure that you have an overview of the Network Topology and Bandwidth.  It will be difficult to protect a server every 15 minutes if it is located on a WAN connection that has high latency or doesn’t always have connectivity.  A drawing of the topology can be very handy when designing the solution.

  • Data Loss Tolerance

The business will need to give this input.  What is tolerated in case of a disaster?  This is the equivalent of the recovery point objective (RPO).  It is necessary to determine the load on servers, storage and tapes.  Don’t let the business tell you that there is no tolerance for data loss if they are not prepared to pay the price for the storage, servers and tapes.  If there is not enough budget, then you can’t get it all…

  • Retention Range

Q from IT: How long must data be kept for availability?

A from business:…

Mostly it is important to verify whether all services / data / … have the same retention range.  From time to time it is not necessary to keep certain data for 6 months or longer.  You should always ask the business whether they need a 3-month copy or if the last week is OK.  The more precise your questions about the different applications, the better the answers will be, and you will save storage and tapes that you can use for more important things.

Q from IT: Are you under some regulatory compliance? (HIPAA, SOX…)

A from business:…

If you are under compliance, the retention ranges and stuff are already defined for you.  Read them and implement them.  End of story.

  • Speed of data Recovery

This is similar to the recovery time objective (RTO) and will determine whether disk or tape is used.  The quicker you need to be able to recover, the more disk you will use, and vice versa.

  • End-User Recovery

Will end-users be able to recover their own deleted files without the intervention of IT? What’s the business requirement on this one?

  • BCP / DRP

Will this implementation be part of the Business Continuity Plan (BCP) and/or Disaster Recovery Plan (DRP)?  In other words, if it is part of the BCP, this means that you need to be able to recover crashed items asap.  If it is only part of the DRP, then you need a good strategy to recover when things fail, but it is not necessary to recover on the spot.

  • Future plans

Are there any business acquisitions or divestments planned in the near future?  Will the DPM solution be used for them?  Are there servers or applications that will be retired in the near future?  Do we need to account for a new application in the design?

Step 2: Determine What Data Will Be Protected

In this step, you will need to figure out what kind of data you will be protecting.

  • Virtual Machines

As you know, DPM can protect entire guest VM’s.  Fill in the next tables to get an overview of all VM’s you need to protect (This includes Hyper-V and Virtual Server 2005 SP1 virtual machines)

Additional note: pass-through disks are NOT protected with this method.  You will need to back up such a disk with an agent inside the VM.


Host | IP

Guest | IP



  • Exchange Server

DPM can protect only mailbox servers, so no edge servers or other roles.  Only the data is protected.

Additional information:

– Exchange Server 2003 and Exchange Server 2007 Single Copy Cluster (SCC): install the DPM agent on all nodes in the cluster

– Exchange Server 2007 local continuous replication (LCR): Install the DPM agent on both the active and passive node

– Exchange Server 2007 cluster continuous replication (CCR): The DPM agent must be installed on both nodes in the cluster

– Exchange Server 2007 SP1 standby continuous replication (SCR): Install the DPM agent on the active node and standby nodes

Again, I use a simple table and fill in the data



Server | IP | Storage Group


  • Sharepoint services

What are we going to protect here?  Again my tables… :-)



Server | IP




This can be for SharePoint Services 3.0, MOSS 2007 or SharePoint Portal Server 2003.

Please note that when you protect a SharePoint farm or a SharePoint Services site, you don’t need to back up the databases separately afterwards.  It will only cause you trouble.

Also note that for recovering your SharePoint sites, you will need a recovery server.  So keep that in mind when you need to ask for more servers.

  • Volumes, folders and shares

No explanation necessary here I think; only remember that we are talking about Windows Server 2003 SP1 or later.  No more Windows 2000 Server!




  • System State

Things that are protected by the system state are listed in the following article:



Server | IP

  • Exclusions

Write down the exclusions here.  These can be folder based, file based or file-extension based.  Maybe interesting if you don’t want to back up the entire company’s MP3 collection 😉

Think also about the following: if DFS-N (Distributed File System Namespaces) is in place, then map to the actual file locations, because shares in the DFS hierarchy cannot be selected for protection; only the target paths can be selected.  If DFS-R (Replication) is used, then map to all the replicas and select one of them for protection.



Excluded folder





Excluded files




Server IP

Excluded file extension



Step 3: Create Logical Design for Protection

In this step, the protection requirements will be translated into a logical design, and that logical design will be configured as one or more protection groups.  But before you start, stop for a moment and consider the following VSS limitations:

  • File protection to disk is limited to 64 shadow copies
  • File protection can have a maximum of 8 scheduled recovery points for each protection group each day
  • Application protection to disk is limited to 512 shadow copies, incremental backups are not counted towards this limit

Keep those in mind while designing this step
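As a quick sanity check, you can verify a proposed schedule against the first limit; this is just arithmetic, and the numbers below are only an example:

```powershell
# Check a proposed file-protection schedule against the 64-shadow-copy limit.
# Example numbers; adjust to your own schedule.
$recoveryPointsPerDay = 7     # must also stay <= 8 scheduled recovery points per day
$retentionDays        = 10
$shadowCopies = $recoveryPointsPerDay * $retentionDays   # 70 in this example
if ($shadowCopies -gt 64) {
    "This schedule needs $shadowCopies shadow copies and exceeds the limit of 64"
}
```

In a case like this you either shorten the retention range on disk, lower the recovery point frequency, or move the longer retention to tape.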

To do this, you will now need to fill in the next table so you can determine the recovery goals, the protection media and how the replicas will be created.

Here is the explanation for the various parameters

Server or workstation: name of the machine and whether it is a server or a workstation
Location: location of the data
Data to be protected: application data or file data
Data Size: current size of the data
Rate of Change: how fast does the data change?
Protected Volume: the name of the protected volume (if applicable)
Synchronization Frequency: how often do we need to apply the changes to the replica
Retention Range: how long must this data be kept available (online and/or offline)
Recovery Point Schedule: how much time between recovery points
Media: which media is used? (disk, tape or disk/tape)
Replica Creation Method: automatic or manual (backup/restore)?
Protection Group Name: choose a name
DPM Server: choose the correct DPM server (if more than 1 server will be in place)


SQL Production Databases

Server or workstation | 
Location | Physical location, e.g. Antwerp office
Data to be Protected | 
Data Source Type | 
Data Size | 600 GB in total, projected to grow to 1 TB in five years
Rate of Change | 
Protected Volume | SQL Store
Synchronization Frequency | 15 min
Retention Range | 7 days
Recovery Point Schedule | 9:00, 12:00, 15:00, 18:00, 21:00
Media | 
Replica Creation Method | 
Protection Group Name | SQL Production
DPM Server | 


If you fill in this table for each set of data you need to back up, you have already designed your protection groups.

Step 4: Design the Storage

Here’s the tricky part.  It is almost impossible to calculate exactly how much storage you need.  There are a few helpful resources on the internet, but most of the time I have seen that taking the total size of the protected data and doubling it is good enough.  This is (of course) when you want to use the synchronization features to the fullest.  If you are only interested in the traditional way of backing up, then you can go with less.
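With that rule of thumb, a back-of-the-envelope sizing looks like this (the data size is a placeholder; this is my rough heuristic, not an official formula):

```powershell
# Rough storage-pool estimate: double the size of the protected data.
# $protectedDataGB is a placeholder for your environment.
$protectedDataGB = 600
$storagePoolGB   = 2 * $protectedDataGB
"Plan roughly $storagePoolGB GB of storage pool for $protectedDataGB GB of protected data"
```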

Anyway, here are a few links for storage calculation

  • Custom Volumes

Do we need to consider custom volumes?  Only if:

  1. Critical data must be manually separated onto a high-performance LUN
  2. You need to meet regulatory requirements
  3. You want to separate IO-intensive workloads across multiple spindles
  • Choose the Disk Subsystem

If you have the option, decide what you are going to use as the disk subsystem.  Will you be using DAS, SAN, iSCSI?  What RAID configuration?  Choose this based on the peak IOps during backup or restore, but in my humble opinion a good iSCSI solution will do the trick without any problems (think Dell MD3000i, for example…).

  • Tape Storage

What tape drive model or robotic library will you be using?  Is it compatible with DPM?

Check for compatibility

  • Placement of Disk and Tape Storage

What is the location of the disk and tape storage relative to the DPM server?  Is it close?  Is it network-connected, fiber-connected or SCSI-connected?

Step 5: Design the DPM Server

Finally you are getting to the end of this process.  You can design the DPM server itself.

  • Calculate how many DPM Servers are needed

These are the limitations of one DPM Server:

  1. Maximum 250 storage groups
  2. Maximum 10 TB for 32-bit DPM servers
  3. Maximum 45 TB for 64-bit DPM servers
  4. Maximum 256 data sources per DPM server (64-bit), where each data source needs two volumes
  5. Maximum 128 data sources per DPM server (32-bit)
  6. Maximum 8000 VSS shadow copies
  7. VSS addressing limits: add a DPM server for every 5 TB (32-bit) or 22 TB (64-bit)
  8. Maximum 75 protected servers and 150 protected workstations per server
  9. Data sources in another, untrusted domain / forest: add a new DPM server
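A rough way to turn the 64-bit limits above into a server count (a sketch with example numbers, not an official sizing formula):

```powershell
# Estimate how many 64-bit DPM 2007 servers are needed.
# $dataSources and $protectedTB are example numbers for your environment.
$dataSources = 300
$protectedTB = 60
$byDataSources = [math]::Ceiling($dataSources / 256)   # data source limit
$byCapacity    = [math]::Ceiling($protectedTB / 45)    # 45 TB capacity limit
$byVssLimit    = [math]::Ceiling($protectedTB / 22)    # VSS addressing limit
$serversNeeded = [math]::Max($byDataSources, [math]::Max($byCapacity, $byVssLimit))
"At least $serversNeeded DPM servers needed"   # 3 in this example
```

Note that the VSS addressing limit is usually the one that bites first on large deployments.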
  • Map protection groups to servers and storage.

Well, as already said, if more than one DPM server is in place, map the table to the correct server.  A few pointers here:

  1. Separate data that cannot coexist on the same server for legal or compliance reasons
  2. Group protection groups that have different synchronization frequencies
  3. Group protection groups with the same media requirements
  4. Group protection groups that comprise data sources that are within the same high-speed network.
  5. Group protection groups that will be backed up from or to VM’s.
  • Hardware requirements

According to Microsoft:





Component | Minimum | Recommended

CPU | 1 GHz | 2.33 GHz quad-core
RAM | 2 GB | 4 GB
Pagefile | 0.2% of the size of all recovery points, plus 1.5 times the RAM | 
Disk Space | Program files: 410 MB; database file drive: 900 MB; system drive: 2650 MB | 2-3 GB free on the program files volume
Disk Space for Storage Pool | 1.5 times the size of the protected data | 2-3 times the size of the protected data
Logical Unit Number (LUN) |  | Maximum of 17 TB for GPT dynamic disks; 2 TB for MBR disks

  • Software Requirements

You need to know these 5 things before deciding to place DPM on a server:

  1. NO IA-64 OS
  2. NO Microsoft System Center Operations Manager on the same server
  3. NO domain controller or application server
  4. Windows Server 2008 (Standard & Enterprise Edition)
  5. Windows Server 2003 with SP2 (R2)
  • Virtual or not?

Yes, you can run DPM virtually when you use pass-through disks or an iSCSI device.  Please note that in that case you can’t connect to a tape library directly attached to that server.

  • Database

Please keep in mind that you need to run the DPM database on a separate SQL instance!  You also need to plan for SSRS to be implemented on each DPM server.  It is necessary; you can’t do without it.

  • Dedicated Network

Will you be using a dedicated network?  If so, write it down.

  • Fault Tolerance and protection for DPM

Two components of DPM can be made fault tolerant: the DPM server and the DPM database.  However, keep this in mind for fault tolerance:

  1. The server cannot be run as an MSCS clustered application
  2. The server can run in a VM, which can be part of a clustered environment
  3. The database is not supported in an MSCS cluster
  4. A DPM server can back up its own databases to tape
  5. A DPM server can be used to protect the data of other DPM servers


Okay, that’s it.  Before you have even started to do anything, you have gathered all the information necessary to deploy a good DPM implementation.

It will lower the chances of failure and even (if necessary) point out to management that additional resources are needed, or that you cannot deliver the requested business requirements.