
System Center Data Protection Manager 2007: Installation Part I: Prerequisites

9:02 am in Uncategorized by mikeresseler

Hey All,

In the next three posts, I'll present an installation guide for System Center Data Protection Manager 2007.  Let's start by explaining the environment I've used and the prerequisites.

First, let’s look at the prerequisites as they are provided by Microsoft. (http://technet.microsoft.com/en-us/library/bb808832.aspx)

1) Security Requirements

To install DPM, you must log on as a local administrator on the computer where you want to install it.  OK, this can be arranged :-)

After the installation of DPM, you need a domain user account with administrator access to use the DPM Administrator Console.  Basically, this means that to use the Administrator Console, you need to be a local administrator on the DPM server.

2) Network Requirements

According to Microsoft, you need a Windows Server 2003 or Windows Server 2008 Active Directory domain.  My test environment is a native Windows Server 2008 domain, so no problems there.  The DPM server also needs persistent connectivity with the servers and desktop computers it protects, and if you are protecting data across a WAN, you need a minimum network bandwidth of 512 Kbps.  So far so good.

3) Hardware Requirements

Disks:  You should have a disk that contains the storage pool and one that is dedicated to system files, DPM installation files, prerequisite software and database files.  Basically, this means that you need an OS and program files disk just as on any other server (you can divide it into multiple volumes), but the storage dedicated to the backups can't be on the same disk.  When DPM takes the storage, it allocates it completely and it is not usable for anything else.

CPU: Minimum 1 GHz, recommended 2.33 GHz

Memory: Minimum 2 GB RAM, recommended 4 GB RAM

Pagefile: 0.2 percent of the combined size of all recovery point volumes, in addition to the recommended size (1.5 times the RAM)
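For clarity, here is that pagefile rule as a small Python sketch; the 4 GB of RAM and 2 TB of recovery point volumes below are made-up example figures, not recommendations:

```python
# Pagefile sizing per the figures above: 1.5 x RAM plus 0.2 percent
# of the combined size of all recovery point volumes.

def pagefile_gb(ram_gb: float, recovery_point_volumes_gb: float) -> float:
    """Return the suggested pagefile size in GB."""
    return 1.5 * ram_gb + 0.002 * recovery_point_volumes_gb

# Example: 4 GB RAM and 2 TB (2048 GB) of recovery point volumes.
print(round(pagefile_gb(4, 2048), 2))  # -> 10.1
```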

Disk Space:

– Minimum: Program Files: 410 MB, Database: 900 MB, System Drive: 2650 MB

– Recommended: 2 to 3 GB of free space on the program files volume

Disk space for storage pool: Minimum 1.5 times the size of the protected data, recommended 2 to 3 times the size of the protected data

LUNs: Maximum of 17 TB for GPT dynamic disks, 2 TB for MBR disks

32-bit servers: 150 data sources

64-bit servers: 300 data sources
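As a rough illustration of these sizing and data source limits, here is a small Python sketch; the 600 GB of protected data and the 200-data-source figure are hypothetical example inputs:

```python
# Sizing sketch based on the figures above.

def storage_pool_gb(protected_gb: float, factor: float = 1.5) -> float:
    """Storage pool: minimum 1.5x the protected data (2-3x recommended)."""
    return protected_gb * factor

def within_data_source_limit(sources: int, is_64bit: bool) -> bool:
    """Per the limits above: 150 data sources (32-bit), 300 (64-bit)."""
    return sources <= (300 if is_64bit else 150)

print(storage_pool_gb(600))                  # -> 900.0
print(within_data_source_limit(200, True))   # -> True
print(within_data_source_limit(200, False))  # -> False
```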

Since this will be a test environment, there is no way I’m going to follow the complete requirements for DPM. :-)

4) Operating System Prerequisites

32-bit or 64-bit servers.  IA-64 is not supported.

The DPM server cannot be a management server for SCOM.

The DPM server cannot be an application server or a domain controller.

There is a VSS limitation on 32-bit servers: they cannot protect more than 10 TB of data, and 4 TB is the preferred maximum on a 32-bit server.

OS: Windows Server 2008 Standard or Enterprise Edition, Windows Server 2003 (R2) SP2 or later, Windows Server 2003 Advanced Server, or Windows Server 2003 Storage Server.

The management shell can also run on Windows XP SP2, Windows Vista, and Windows Server 2003 SP2 or later.

 

My Environment

I will run the server on a Hyper-V platform (not yet R2), and I've created a virtual machine with the following specifications:

CPU: 2 * 3.60 GHz Xeon

Memory: 2048 MB

Storage: C:\ 40 GB  (OS and application) D:\ 60 GB (Play storage for backup)

OS: Windows Server 2008 x64 SP2, Fully patched

Software Prerequisites

Before starting the installation, I installed the following first:

– IIS 7 with the following options on:

  • Common HTTP Features
    • Static Content
    • Default Document
    • Directory Browsing
    • HTTP Errors
    • HTTP Redirection
  • Application Development
    • ASP.NET
    • .NET Extensibility
    • ISAPI Extensions
    • ISAPI Filters
    • Server Side Includes
  • Health and Diagnostics
    • HTTP Logging
    • Request Monitor
  • Security
    • Windows Authentication
    • Request Filtering
  • Performance
    • Static Content Compression
  • Management Tools
    • IIS Management Console (not really a prerequisite but handy ;-))
    • IIS 6 Management Compatibility
      • IIS 6 Metabase Compatibility
      • IIS 6 WMI Compatibility
      • IIS 6 Scripting Tools
      • IIS 6 Management Console

 

– SIS (Single Instance Store)

To enable this, open a command prompt with administrator privileges and run the following command:

start /wait ocsetup.exe SIS-Limited /Quiet /norestart


After this is finished, restart the server.  To check whether SIS is enabled, open regedit and verify the existence of the following key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SIS


That’s it.

In the next part: the installation of SCDPM 2007.

Till then,

Cheers,

Mike

System Center Data Protection Manager SP1: How to start a DPM project

2:43 am in Uncategorized by mikeresseler

// Note: For some reason, my tables are not shown.  I’ll try to fix this but I don’t seem to be able to figure out why… Sorry about that.

As promised, here is guidance on how to start a DPM project, based on the IPD Guides from Microsoft.

First thing to know: when you implement a DPM project (and this holds for just about ALL of the System Center suite), you need to know and understand exactly what the business requirements are.  Without them, it is impossible to deliver a great implementation.  With them, and understanding them, the implementation will be quick and easy, and your backup worries will decrease a lot.

OK, here goes.

First, I will give the Decision Flow according to the IPD


All these steps will now be discussed

Step 1: Project Scope

In this Step, you will need to collect all the information necessary for the implementation.

  • AD domain & Forest information

You start with the AD domain and forest information.  The servers you need to protect must be in the same domain, or there has to be a two-way cross-domain or cross-forest trust between the domains where the protected servers are located.

I normally use the following table to write down the information

Domain Name | FQDN | NetBIOS Name | DC
  • Network Topology and Bandwidth

Make sure that you have an overview of the Network Topology and Bandwidth.  It will be difficult to protect a server every 15 minutes if it is located on a WAN connection that has high latency or doesn’t always have connectivity.  A drawing of the topology can be very handy when designing the solution.

  • Data Loss Tolerance

The business will need to give this input.  What is tolerated in case of a disaster?  This is the equivalent of the recovery point objective (RPO) and is necessary to determine the load on servers, storage and tapes.  Don't let the business tell you that there is no tolerance for data loss if they are not prepared to pay the price for the storage, servers and tapes.  If there is not enough budget, then you can't have it all…

  • Retention Range

Q from IT: How long must data be kept for availability?

A from business:…

Mostly it is important to verify whether all services / data / … have the same retention range.  Sometimes it is not necessary to keep certain data for 6 months or longer.  You should always ask the business whether they need a 3-month copy or whether the last week is OK.  The more precise your questions about the different applications, the better the answers will be, and you will save storage and tapes that you can use for more important things.

Q from IT: Are you under some regulatory compliance? (HIPAA, SOX…)

A from business:…

If you are under compliance, the retention ranges and related settings are already defined for you.  Read them and implement them.  End of story.

  • Speed of data Recovery

This is similar to the Recovery Time Objective (RTO) and will determine when disk is used or when tape is used.  The quicker you need to be able to recover, the more disk you will use and vice versa.

  • End-User Recovery

Will end-users be able to recover their own deleted files without the intervention of IT? What’s the business requirement on this one?

  • BCP / DRP

Will this implementation be part of the Business Continuity Plan (BCP) and/or the Disaster Recovery Plan (DRP)?  In other words, if it is part of the BCP, you need to be able to recover crashed items ASAP.  If it is only part of the DRP, then you need a good strategy to recover when things fail, but it is not necessary to recover on the spot.

  • Future plans

Are there any business acquisitions or divestments planned in the near future?  Will the DPM solution be used for these?  Are there servers or applications that will be retired in the near future?  Do we need to account for a new application in the design?

Step 2: Determine What Data Will Be Protected

In this step, you will need to figure out what kind of data you will be protecting.

  • Virtual Machines

As you know, DPM can protect entire guest VMs.  Fill in the next table to get an overview of all the VMs you need to protect (this includes Hyper-V and Virtual Server 2005 SP1 virtual machines).

Additional note: Pass-through disks are NOT protected with this method.  You will need to back up such disks with an agent inside the VM.

Host | Host IP | Guest | Guest IP

 

  • Exchange Server

DPM can protect only mailbox servers, so no edge servers or other roles.  Only the data is protected.

Additional information:

– Exchange Server 2003 and Exchange Server 2007 Single Copy Cluster (SCC): Install the DPM agent on all nodes in the cluster

– Exchange Server 2007 Local Continuous Replication (LCR): Install the DPM agent on both the active and the passive node

– Exchange Server 2007 Cluster Continuous Replication (CCR): Install the DPM agent on both nodes in the cluster

– Exchange Server 2007 SP1 Standby Continuous Replication (SCR): Install the DPM agent on the active node and the standby nodes

Again, I use a simple table and fill in the data

Servername | OS | Server IP | Storage Group | Database
  • Sharepoint services

What are we going to protect here?  Again my tables… :-)

Servername | OS | Server IP | Farmname | Site

This can be SharePoint Services 3.0, MOSS 2007 or SharePoint Portal Server 2003.

Please note that when you protect a SharePoint farm or a SharePoint Services site, you don't need to back up the underlying database separately afterwards.  That will only cause you trouble.

Also note that for recovering your SharePoint sites, you will need a recovery server.  Keep that in mind when you need to ask for more servers.

  • Volumes, folders and shares

No explanation necessary here, I think.  Only remember that we are talking about Windows Server 2003 SP1 or later.  No more Windows 2000 servers!

Servername | IP | Name
  • System State

Things that are protected by the system state are listed in the following article: http://technet.microsoft.com/en-us/dpm/bb808714.aspx

Servername | OS | Server IP
  • Exclusions

Write down the exclusions here.  These can be folder-based, file-based or file-extension-based.  Maybe interesting if you don't want to back up the entire company's MP3 collection 😉

Think also about the following: if DFS-N (Distributed File System Namespace) is in place, map to the actual file locations, because shares in the DFS hierarchy cannot be selected for protection; only the target paths can be selected.  If DFS-R (Replication) is used, map to all the replicas and then select one of them for protection.

Servername | Server IP | Excluded folder

Servername | Server IP | Excluded files

Servername | Server IP | Excluded file extension

 

Step 3: Create Logical Design for Protection

In this step, the protection requirements will be translated into a logical design, and that logical design will be configured as one or more protection groups.  But before you start, stop for a moment and consider the following VSS limitations:

  • File protection to disk is limited to 64 shadow copies
  • File protection can have a maximum of 8 scheduled recovery points for each protection group each day
  • Application protection to disk is limited to 512 shadow copies, incremental backups are not counted towards this limit

Keep those in mind while designing this step
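These limits can be checked mechanically before you commit to a schedule. Below is a minimal Python sketch for file protection; the schedules tested are made-up examples:

```python
# Check a file protection schedule against the VSS limits above:
# at most 64 shadow copies, and at most 8 scheduled recovery points
# per protection group per day.

def file_schedule_ok(retention_days: int, recovery_points_per_day: int) -> bool:
    if recovery_points_per_day > 8:
        return False
    return retention_days * recovery_points_per_day <= 64

print(file_schedule_ok(7, 4))    # 28 shadow copies -> True
print(file_schedule_ok(12, 8))   # 96 shadow copies -> False
```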

To do this, you will now need to fill in the next table so you can determine recovery goals, protection media, and how the replicas will be created.

Here is the explanation for the various parameters

Server or workstation: Name of the machine and whether it is a server or a workstation
Location: Location of the data
Data to be protected: Application data or file data
Data Size: Current size of the data
Rate of Change: How fast does the data change?
Protected volume: The name of the protected volume (if applicable)
Synchronization Frequency: How often do we need to apply the changes to the replica?
Retention Range: How long must this data be kept available (online and/or offline)?
Recovery Point Schedule: How much time between recovery points?
Media: Which media is used? (disk, tape or disk/tape)
Replica Creation Method: Automatic or manual (backup/restore)?
Protection group name: Choose a name
DPM Server: Choose the correct DPM server (if more than one server will be in place)
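As an illustration, the parameters above could be captured per protection group in a small script instead of a document; all values below are fictitious examples:

```python
# One way to record the planning parameters above per protection group.
# Every value here is a made-up example, not a recommendation.

protection_group = {
    "server": "FS01",
    "location": "Antwerp Office",
    "data_to_protect": "File",
    "data_size_gb": 250,
    "rate_of_change": "Moderate",
    "protected_volume": "D:",
    "sync_frequency_min": 15,
    "retention_days": 7,
    "recovery_points_per_day": 4,
    "media": "Disk/Tape",
    "replica_creation": "Automatic",
    "group_name": "File Servers",
    "dpm_server": "DPM01",
}

# A simple derived figure: scheduled recovery points kept on disk.
total_recovery_points = (protection_group["retention_days"]
                         * protection_group["recovery_points_per_day"])
print(total_recovery_points)  # -> 28
```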

 

SQL Production Databases

Server or workstation: SQL1, SQL2, SQL3
Location: Physical location, e.g. Antwerp office
Data to be Protected: Application
Data Source Type: Disk
Data Size: 600 GB in total, calculated to grow to 1 TB in five years
Rate of Change: Frequent
Protected Volume: SQL Store
Synchronization Frequency: 15 min
Retention Range: 7 days
Recovery Point Schedule: 9:00, 12:00, 15:00, 18:00, 21:00
Media: Disk/Tape
Replica Creation Method: Automatic
Protection Group Name: SQL Production
DPM Server: DPM01

If you make such a map for each set of data you need to back up, you have already designed your protection groups.

Step 4: Design the Storage

Here’s the tricky part.  It is almost impossible to calculate exactly how much storage you need.  There are a few helpful aids on the internet, but in my experience, taking the complete protected storage and doubling it is usually good enough.  This is (of course) when you want to use the synchronization features to the full.  If you are only interested in the traditional way of backing up, then you can go with less.

Anyway, here are a few links for storage calculation

http://technet.microsoft.com/en-us/library/bb795684.aspx

http://technet.microsoft.com/en-us/library/bb808859.aspx

http://blogs.technet.com/dpm/archive/2007/10/31/data-protection-manager-2007-storage-calculator.aspx

http://www.microsoft.com/downloads/details.aspx?FamilyID=445BC0CD-FC93-480D-98F0-3A5FB05D18D0&displaylang=en
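The rule of thumb from the paragraph above (roughly double the protected data when synchronization is used heavily, less for traditional backups) can be sketched in Python; the 600 GB figure is just an example:

```python
# Rough storage pool estimate per the rule of thumb in the text:
# about 2x the protected data with frequent synchronization,
# about 1.5x for traditional-style backups. A sketch, not a sizing tool.

def pool_estimate_gb(protected_gb: float, frequent_sync: bool = True) -> float:
    return protected_gb * (2.0 if frequent_sync else 1.5)

print(pool_estimate_gb(600))         # -> 1200.0
print(pool_estimate_gb(600, False))  # -> 900.0
```

For anything beyond a first guess, the storage calculator linked above is the better tool.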

  • Custom Volumes

Do we need to consider custom volumes?  Only if:

  1. Critical data must be manually separated onto a high performance LUN
  2. To meet regulatory requirements
  3. To separate IO-intensive workloads across multiple spindles
  • Choose the Disk Subsystem

If you have the option, decide what you are going to use as a disk subsystem.  Will you be using DAS, SAN or iSCSI?  What RAID configuration?  Choose based on the peak IOPS during backup or restore, but in my humble opinion, a good iSCSI solution will do the trick without any problems (think Dell MD3000i, for example…).

  • Tape Storage

What tape drive model or robotic library will you be using?  Is it supported?

Check http://technet.microsoft.com/en-us/dpm/cc678583.aspx for compatibility.

  • Placement of Disk and Tape Storage

Where are the disk and tape storage located relative to the DPM server?  Are they close by?  Are they network-connected, fiber-connected or SCSI-connected?

Step 5: Design the DPM Server

Finally, you are getting to the end of this process.  You can now design the DPM server itself.

  • Calculate how many DPM servers are needed

These are the limitations of one DPM server:

  1. Maximum 250 storage groups
  2. Maximum 10 TB for 32-bit DPM servers
  3. Maximum 45 TB for 64-bit DPM servers
  4. Maximum 256 data sources per DPM server (64-bit), where each data source needs two volumes
  5. Maximum 128 data sources per DPM server (32-bit)
  6. Maximum 8,000 VSS shadow copies
  7. VSS addressing limits: Add a DPM server for every 5 TB (32-bit) or 22 TB (64-bit)
  8. Maximum 75 protected servers and 150 protected workstations per server
  9. Data sources in another untrusted domain / forest? Add a new DPM server
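Given these per-server limits, a rough Python sketch can estimate how many servers a deployment needs. The sample inputs are hypothetical, and only the data size and data source limits from the list above are considered:

```python
# Estimate DPM server count from two of the per-server limits above:
# protected data (45 TB for 64-bit, 10 TB for 32-bit) and data sources
# (256 for 64-bit, 128 for 32-bit). Other limits would also apply.
import math

def servers_needed(protected_tb: float, data_sources: int,
                   is_64bit: bool = True) -> int:
    per_server_tb = 45 if is_64bit else 10
    per_server_sources = 256 if is_64bit else 128
    by_size = math.ceil(protected_tb / per_server_tb)
    by_sources = math.ceil(data_sources / per_server_sources)
    return max(by_size, by_sources)

print(servers_needed(60, 300))        # -> 2 (60 TB exceeds one 64-bit server)
print(servers_needed(8, 500, False))  # -> 4 (500 sources / 128 per server)
```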
  • Map protection groups to servers and storage.

Well, as already said, if more than one DPM server is in place, map the table to the correct server.  A few pointers here:

  1. Separate data that cannot coexist on the same server for legal or compliance reasons
  2. Separate data with different synchronization frequencies into different protection groups
  3. Group protection groups with the same media requirements
  4. Group protection groups that comprise data sources that are within the same high-speed network.
  5. Group protection groups that will be backed up from or to VM’s.
  • Hardware requirements

According to Microsoft:

What | Minimum | Recommended
Processor | 1 GHz | 2.33 GHz quad-core CPUs
Memory | 2 GB | 4 GB RAM
Pagefile | 0.2% of the size of all recovery point volumes + 1.5 times the RAM | N/A
Disk Space | Program files: 410 MB; database file drive: 900 MB; system drive: 2650 MB | 2-3 GB free on the program files volume
Disk Space for Storage Pool | 1.5 times the size of the protected data | 2-3 times the size of the protected data
Logical Unit Number (LUN) | N/A | Maximum of 17 TB for GPT dynamic disks; 2 TB for MBR disks

  • Software Requirements

You need to know these 5 things before deciding to place DPM on a server.

  1. No IA-64 OS
  2. No Microsoft System Center Operations Manager on the same server
  3. No domain controller or application server
  4. Windows Server 2008 (Standard & Enterprise Edition)
  5. Windows Server 2003 with SP2 (R2)
  • Virtual or not?

Yes, you can run DPM virtually when you use pass-through disks or an iSCSI device.  Please note that in that case you can't connect to a tape library directly attached to that server.

  • Database

Please keep in mind that you need to run the DPM database on a dedicated SQL Server instance!  You also need to plan for SQL Server Reporting Services (SSRS) to be implemented on each DPM server.  It is required; you can't do without it.

  • Dedicated Network

Will you be using a dedicated network?  If so, write it down.

  • Fault Tolerance and protection for DPM

Two components of DPM can be made fault tolerant: the DPM server and the DPM database.  However, keep this in mind for fault tolerance:

  1. The server cannot be run as an MSCS clustered application
  2. The server can run in a VM, which can be part of a clustered environment
  3. The database is not supported in an MSCS cluster
  4. A DPM server can back up its own databases to tape
  5. A DPM server can be used to protect the data of other DPM servers

 

OK, that’s it.  Before you have even started to do anything, you have gathered all the information necessary to deploy a good DPM implementation.

This will lower the chances of failure and even (if necessary) point out to management that additional resources are needed or that you cannot deliver the requested business requirements.

Cheers,

 

Mike

 



 

System Center Data Protection Manager 2007 SP1: How it works

9:34 am in Uncategorized by mikeresseler

Hey All,

As promised, a post about how Data Protection Manager works.

First, System Center Data Protection Manager 2007 supports three types of backup as shown in the picture.


 

1) Disk-to-disk (D2D)

2) Disk-to-tape (D2T)

3) Disk-to-disk-to-tape (D2D2T)

Many organizations suffer from the same problem: the size of the data is growing, and the backup window is getting smaller or is simply not big enough anymore to back up all the necessary data.  Even the weekend is becoming problematic these days.  Because of that, a lot of backup programs now work with disk-to-disk backups.  Why?  Disk storage is getting cheaper, it is more reliable than tape, and restores are much faster than from tape.  SCDPM does this too.

Still, the end of tape is far away.  Many organizations will keep tape backups to store offsite in case of a disaster, so a backup to tape is still necessary.  SCDPM supports this with D2T, better known as the ‘old school’ backup to tape, or in combination with D2D, which makes it D2D2T: the disk-to-disk-to-tape backup.

 

First, how does it work?  As an example, we will talk about the backup of a file server.

 


As you can see, I will back up a volume (D:) on a file server.  The DPM server will keep a replica of this volume.

After that, the DPM server will take snapshots at different times (adjustable by yourself), based on changes to the filesystem.  Only the changes will be replicated to the replica.


In this example, the DPM server will synchronize every 15 minutes.  It will keep the data for 12 days, and 5 times a day a recovery point will be taken.
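This example schedule can be checked against the VSS limit of 64 shadow copies for file protection; a quick Python sketch:

```python
# 12 days of retention at 5 recovery points a day, per the example
# above, measured against the 64-shadow-copy limit for file data.

retention_days = 12
recovery_points_per_day = 5
shadow_copies = retention_days * recovery_points_per_day

print(shadow_copies)        # -> 60
print(shadow_copies <= 64)  # -> True (the schedule fits the limit)
```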


This is a screenshot of what a user sees: the different versions of a file on a file server.

With this mechanism, you can actually give users the ability to restore their documents without the intervention of an IT engineer.  An administrator / IT engineer can decide on a schedule for when the DPM server will synchronize the newer versions with the replica residing on the DPM server.  This process is called Continuous Data Protection (CDP).


Example of a real-life problem

DPM does not only have this mechanism for files, but also for a few key applications within your environment, such as Exchange, SQL and SharePoint.  As an example, I will discuss the Exchange technology.

Just like it does with files, DPM will make a full replica of the Exchange databases.  Then, depending on your settings, it will synchronize a copy of the closed transaction logs to the DPM server every x minutes (by default every 15 minutes).


First step: Full replica


Second step: Synchronization of the closed transaction logs

At minimum every 12 hours, there is an express full backup of the Exchange replica.  This applies every change since the last express full backup to the replica.  DPM uses a special filter that keeps track of the changes at byte level, so only a minimum of content is transferred over the network.
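From these defaults, the worst-case data loss and the number of log synchronizations between express fulls follow directly; a small Python sketch, using the 15-minute and 12-hour figures described above:

```python
# Derived figures from the default Exchange protection schedule:
# log synchronization every 15 minutes, express full every 12 hours.

sync_minutes = 15
express_full_hours = 12

rpo_minutes = sync_minutes  # newest logs on the DPM server are at most 15 min old
syncs_between_fulls = express_full_hours * 60 // sync_minutes

print(rpo_minutes)          # -> 15
print(syncs_between_fulls)  # -> 48
```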

The combination of the express full backup and the synchronization of the Exchange logs gives an IT engineer / administrator the possibility to restore the full database and the transaction logs up to just before the problems arose or the Exchange server went down.


Another (great) possibility is the option to create a disaster recovery setup with a secondary DPM server protecting the first one.  If you can locate this second DPM server in another datacenter or location, you are creating a very good disaster recovery plan, with the ability to quickly recover from problems or disaster.


One of the most important things in these scenarios is the calculation of the storage necessary to support these technologies.  But I will come back to that in my next post.

Although this all seems nice, and many IT administrators / engineers will want this in their environment, one of the questions IT decision makers will ask is how difficult this is to work with.  Will the IT staff need special training?  Can they easily monitor the system?  What about reporting?  And how can I verify that everything is running as smoothly as it should?  And maybe last but not least, how much work will this application give my IT staff on a daily basis?

The good news is, I can answer all these questions very positively.

No special training is needed, because the interface is based on the same GUI as, for example, Outlook or Microsoft System Center Operations Manager 2007 (SCOM).


Monitoring the system is easy if you have SCOM in place (there is a dedicated management pack for this technology), and if not, you can always let the system send emails each time there is a problem.  And, as with every System Center product, when there is an alert, the application will always suggest possible solutions.


Reporting is also no problem; there are a lot of predefined, usable reports in the system that can be mailed daily.


One of the major advantages of the system is that it will automatically do a consistency check after each “backup”.  This allows the IT staff to quickly find inconsistent data in the environment.

And last but not least, will this give the IT staff a lot of work?  Honestly, no.  DPM is not a product that requires a “babysitter”.  As long as everything is well designed and implemented, your staff can read the daily reports, check for errors in SCOM or view the alerts in the console once a day (or through email), and the system will run by itself.  Second, you will gain a lot of time each time a restore needs to be done, because of the speed and ease with which you can recover.  Of course, you will need to invest a lot of time in the initial architecture / design and implementation of the system.  How can you achieve this?  Check out my next post on designing a DPM solution.

Cheers,

Mike