
SCOM 2007: installation bypassing the prerequisite checker

9:47 am in Uncategorized by Dieter Wijckmans

Most of the time, when the prerequisite checker complains during a SCOM 2007 installation, it is right: a prerequisite for the specific role or component of SCOM 2007 really is not met.

However, if you are 100% sure everything is there, you can bypass the prerequisite checker by running the install with the following command:

MSIEXEC /i <path>\MOM.msi /qn /l*v D:\logs\MOMUpgrade.log PREREQ_COMPLETED=1

This is however NOT supported by Microsoft.

Note: in Windows Server 2008 always run commands in an elevated prompt.

This should be your last resort to get things going. Most of the time there’s indeed a prerequisite not met and therefore the checker is right.

If you want to double check your prerequisites you can find them here:

http://technet.microsoft.com/en-us/library/bb309428.aspx

A known issue with the prerequisite checker is that ASP.NET is not correctly detected. More info here: http://support.microsoft.com/kb/934759

SCVMM 2012 Beta: Installation process

1:21 pm in Uncategorized by Dieter Wijckmans

I’ve installed the SCVMM 2012 beta in my test lab and thought I’d share the different steps involved.

Preparations:

“By failing to prepare you are preparing to fail” (Benjamin Franklin).

First of all: Get the eval files. You can get them here in different forms:

http://technet.microsoft.com/en-us/library/gg671824.aspx

I’m using the installation files and not the pre-prepped VHD in this post.

On the same page there’s also a link to documentation to get things started.

Prerequisites:

To save you a lot of time, I’ve listed the prerequisites you need to fulfill to install the VMM 2012 beta software as quickly and painlessly as possible:

Memory: If you use a virtual machine in Hyper-V, make sure you give it at least 2048 MB of memory. Even when you use Dynamic Memory, the minimum amount needs to be 2048 MB, otherwise the installer will not pass the prerequisite check and just stops. When you give the machine 2048 MB it will pass the prerequisite check with a warning, because the recommended amount of RAM is 4096 MB.

WAIK: Install WAIK 2.0 (the Windows Automated Installation Kit), which can be downloaded here: http://www.microsoft.com/downloads/en/confirmation.aspx?displaylang=en&FamilyID=696dd665-9f76-4177-a811-39c26d3b3b34

IIS 7 (for the self-service portal): Install IIS 7 through the Web Server role.

SQL: You can use an existing SQL Server or install one locally. There’s no option (yet) to install SQL Express during setup like in SCVMM 2008. I’ve installed SQL Server 2008 R2 Express, which is free and downloadable through this link: http://www.microsoft.com/express/Database/

.NET Framework 3.5 SP1: Can be installed through the Server Manager features wizard.

User: Add the VMM admin user to the local Administrators group. Even when the user is in a group which is itself a member of the local Administrators group, you still need to add the specific user to the local Administrators group before the install, or the install will fail. More info below.
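
If you prefer to script these prerequisites instead of clicking through Server Manager, something like the sketch below should work from an elevated prompt on Windows Server 2008 R2. The feature IDs are the 2008 R2 names and CONTOSO\svc-vmm is a hypothetical VMM admin account; adjust both to your environment:

REM Install .NET Framework 3.5.1 and the Web Server (IIS) role
servermanagercmd -install NET-Framework-Core
servermanagercmd -install Web-Server
REM Explicitly add the (hypothetical) VMM admin account to the local Administrators group
net localgroup Administrators CONTOSO\svc-vmm /add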

Install:

So after the download is complete, run the archive on your server:

scvmm2012_1

scvmm2012_2

Start the setup from an elevated prompt. This is a habit I’ve made myself stick to: always start a setup like this. It’s a small effort, but it can save you a lot of troubleshooting time when things are not installing correctly.

scvmm2012_3

If all goes well you will see the install splash screen (which is actually quite nice imho).

Click install:

scvmm2012_4

Oops, first crash. The setup crashed.

scvmm2012_6

It turned out I still had a pending reboot after installing the prerequisites, and therefore the installer was not able to verify whether I had the .NET Framework installed.

I rebooted my test machine and ran the sequence above again, and this time it passed the step without crashing. Remember, this is still a BETA, so things like this may happen.

Let’s continue on our journey:

Tick the “I have read” checkbox and click next:

scvmm2012_7

Select the desired features and click next.

Note that, in comparison with SCVMM 2008, if you select the VMM server it automatically installs the VMM admin console as well. In my test scenario I’m installing all the features:

scvmm2012_8

Fill in your data and click next:

scvmm2012_9

Choose your install location (some straightforward stuff, bear with me).

scvmm2012_10

Although we are using dynamic memory on this machine and it can get all the memory it requests, the machine thinks that there’s only 2048 MB installed and gives us a warning, but we’re able to pass and click next:

scvmm2012_14

In the SQL database configuration you need to fill in the desired server, instance, the user to use, and the dbase name.

scvmm2012_15

I have chosen to install this on a locally installed SQL Server 2008 R2 Express edition. However, when I selected it, the installer was not able to find the proper dbase.

You have to make sure that the protocols are enabled on your SQL install to be able to connect to the dbase, like shown below:

scvmm2012_16
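
Once the protocols are enabled (and the SQL service restarted), a quick sqlcmd smoke test tells you whether the instance accepts connections at all. A minimal sketch, assuming a local default-named Express instance (.\SQLEXPRESS) and Windows authentication:

sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@VERSION"

If this returns the SQL Server version string instead of a connection error, the installer should be able to find the dbase as well.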

The install continued after that. In the next screen you need to fill in your accounts. You’ll need a domain account to be able to use the high availability options. In this case I chose not to store the encryption key in my AD, but again, for high availability environments this is mandatory:

scvmm2012_17

If your account is not in the local admin group you’ll get the error below:

scvmm2012_18

I just left all the ports at their defaults, but if you need specific ports in your environment, make sure to put them here and configure your firewall accordingly.

scvmm2012_19

Review all the settings in the summary and press Install (finally some action).

scvmm2012_22

Installing…

scvmm2012_23

Success! scvmm2012_24 

So we now have our SCVMM 2012 beta installed. If you are using Hyper-V I would recommend taking a snapshot of your install. The added advantage is that if you installed your dbase locally, it’s also included in the snapshot, which is great for testing environments.

In the next blog post I’ll go further into detail to get the environment up and running and see what nice features are in the new release.

Finally:

scvmm2012_25

The new icon: will there be more cloud-specific services in there?

SCOM: Moving the Opsdb Datawarehouse to another drive

8:57 pm in Uncategorized by Dieter Wijckmans

Recently I got a question from a customer to move the Opsdb Datawarehouse (DW) to another drive, because the disk on which it was originally installed was not big enough. In fact they wanted to move the DW to an iSCSI disk to boost performance.

To verify whether there would be an issue or whether it would be a straightforward move, I did some browsing in the biggest manual out there… the internet!

However, all that came up were moves from one server to another, not from one drive to another on the same server…

I did some testing in my lab and thought I ‘d share the outcome with you.

First of all: this is your DW you are tampering with. Make sure you have proper backups of your db and read the entire blog post before proceeding, just to be on the safe side. It would be a shame to lose all your data older than 8 days (if this is your grooming setting) because of a bad manipulation.

Ok enough said. Let’s get things started.

These are the steps I followed and in my case everything went smoothly without any problems.

First of all (again), take backups of your dbase, and secondly, plan SCOM downtime. To be absolutely sure that there’s no interference with or blocking of the DW dbase, you need to shut down your RMS and any MS and GW servers in your environment (or at least those in the management group of which the DW is part). Some sources just drop the connections to the dbase, which is an option as well, but I prefer the first approach. In my opinion it’s safer to do it like this.

Connect to the SQL server where your DW resides and open up Microsoft SQL Server Management Studio:

scom_db_move01

Open up the connection to your DW. In my case it is residing on my VSERVER05.

Again better safe than sorry. Backing up!

scom_db_move02

The DW can be very big, so the backup may need some time to complete. Wait until it’s finished.
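
If you prefer to script the backup instead of using the Management Studio dialog, a sqlcmd one-liner like the one below should do it. This is a minimal sketch: OperationsManagerDW is the default DW dbase name, VSERVER05\OPSDBDW is a hypothetical instance name, and the backup folder must already exist. COPY_ONLY keeps this ad hoc backup from interfering with your regular backup chain:

sqlcmd -S VSERVER05\OPSDBDW -E -Q "BACKUP DATABASE [OperationsManagerDW] TO DISK = N'E:\Backup\OperationsManagerDW.bak' WITH COPY_ONLY, CHECKSUM"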

At this point, shut down your environment. This means the RMS, MS and GW’s. This sounds like a draconian measure, but it ensures that your environment is completely down and no queries are made to the dbase.

When this is done, we can proceed to move the dbase.

Take the DW offline by right-clicking it and choosing “Take Offline”.

scom_db_move04

A small dialog will pop up, and if all goes well it will eventually tell you the dbase went offline successfully. Notice the red arrow on the DW dbase.

Now take the ReportServer$OpSDBDW and ReportServer$OPSDBDWTempDB offline as well. Note that these dbases can have different names in your environment, or may not be present at all.

Note: my OpsdbDW is installed in a separate SQL instance. Be cautious with restarting your SQL service, as this impacts all dbases under that instance.

When all the dbases are down, they can be detached. This is done by right-clicking the dbase > Tasks > “Detach”.

scom_db_move05

Choose the option to drop the connections to the dbase and hit OK.
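
For reference, the offline and detach steps can also be scripted, which is handy if you ever have to repeat this. A minimal sqlcmd sketch with the same assumed names as above; ROLLBACK IMMEDIATE kills any remaining connections, the scripted equivalent of the “drop connections” checkbox:

sqlcmd -S VSERVER05\OPSDBDW -E -Q "ALTER DATABASE [OperationsManagerDW] SET OFFLINE WITH ROLLBACK IMMEDIATE"
sqlcmd -S VSERVER05\OPSDBDW -E -Q "EXEC sp_detach_db @dbname = N'OperationsManagerDW'"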

Now we can copy (yes, copy) the data. Again, better safe than sorry: make a copy of the data rather than moving it.

After the copy is done, we are going to attach the copied DW to SQL.

Right click Databases and click Attach:

scom_db_move06

Select your dbase and attach:

scom_db_move07

In this case I’m moving my DW from E: to F: drive.

scom_db_move08

NOTE: the attach dialog does not automatically select the correct log file. Make sure you select it manually by clicking the icon behind the path in the lower section.

When the attach has completed successfully, you will see that the dbases are moved to your new drive.
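
The attach can be scripted too. A minimal sketch, assuming the copied files landed in a hypothetical F:\Data folder; note that the second FILENAME entry is exactly the log-file path the GUI does not pick up automatically:

sqlcmd -S VSERVER05\OPSDBDW -E -Q "CREATE DATABASE [OperationsManagerDW] ON (FILENAME = N'F:\Data\OperationsManagerDW.mdf'), (FILENAME = N'F:\Data\OperationsManagerDW_log.ldf') FOR ATTACH"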

Start your SCOM environment again by starting your RMS first, then any MS and/or GW servers you might have.

Just to be on the safe side, verify whether you’re able to generate a report in the reporting view of your console with data older than 7 days (if your grooming settings are different, adjust this so you’re sure the report contains data older than your grooming setting).
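
You can also check on the SQL side that historical data survived the move. A minimal sketch, assuming the default dbase name and the Perf.vPerfRaw view from the DW schema (instance name again hypothetical); if the returned timestamp is comfortably in the past, your history made the trip:

sqlcmd -S VSERVER05\OPSDBDW -E -d OperationsManagerDW -Q "SELECT MIN([DateTime]) AS OldestSample FROM Perf.vPerfRaw"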

If all went well, you have now successfully moved your dbase to another drive and you are free to delete the initial copy in the old location.

Preparing SCOM for cross platform monitoring

10:18 pm in Uncategorized by Dieter Wijckmans

Today at a customer I came across a problem with cross platform monitoring.

They had several Linux servers running a Red Hat distro, and they had installed the Linux management pack for cross-platform monitoring of their Linux environment.

They had installed all the agents on the Linux servers but had not configured the proper action accounts to perform the discovery and monitoring.

To give my client some documentation on how to perform these actions, I came across this article on the Microsoft website:

http://technet.microsoft.com/en-us/library/dd788981.aspx

The instructions, however, are outdated for SCOM 2007 R2, so I’ll document them below.

First things first.

If you notice these events in the Operations Manager event log:

Event Type: Error
Event Source: HealthService
Event Category: Health Service
Event ID: 1107
Date: 11/24/2008
Time: 2:18:03 PM
User: N/A
Computer: RMS_SERVER
Description:
Account for RunAs profile in workflow “Microsoft.Linux.RedHat.Computer.Discovery”, running for instance “Linux_server_name” with id “{384D2415-A49D-4002-768B-51D8D2EDBDD*}” is not defined. Workflow will not be loaded. Please associate an account with the profile. Management group “group_name”

This most likely will indicate an issue with the run as accounts to connect to your Linux environment.
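
Before you change anything, you can verify basic WSMan connectivity from the management server to the cross-platform agent with winrm. This is a sketch of the commonly used troubleshooting query; the host name linuxserver01 is a placeholder, 1270 is the default agent port, and you should substitute the account you intend to use:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -username:root -password:<password> -r:https://linuxserver01:1270/wsman -auth:basic -skipCACheck -encoding:utf-8

If this returns an SCX_OperatingSystem instance, the agent side is fine and the 1107 events really do point at the Run As configuration.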

The article above is outdated at some points, so here’s the proper way, with clearer instructions and some extra info I’ve learned in the field while configuring it for my customer.

Outlined steps:

  1. Open the Operations console with an account that is a member of the Operations Manager 2007 R2 Administrators profile.

  2. Select the Administration view.

  3. In the navigation pane under Run As Configuration, select Profiles.scom1

  4. In the results pane, double-click the UNIX Action Account or the UNIX Privileged Account profile. You need to configure both.

  5. Click next on the first page. This is the overview page; nothing can be changed here.

  6. scom2

  7. Click Add to create the action account which we are going to link to the UNIX Action Account profile. scom3
  8. In the next screen you need to select which user you are going to use as an action account on the Unix / Linux system. This screen consists of two portions: the upper portion, which is used to define the user, and the bottom portion, which defines the target. scom4

  9. Select the Run As account from the drop-down list or create a new one. In this case we’ll create a new one. Click New…

  10. Click next on the welcome screen to proceed with creating the account: scom5

  11. In the next screen you need to fill in the type of the account and the desired display name in SCOM. In this case we’re going to use the basic authentication type and we’ll name the user “UNIX Action Account”, as shown below: scom6

  12. Click next, and in the next screen fill in the credentials which have access to the Unix / Linux machine. In this example I’ve used the root account; this can be any account with the proper access rights on your Unix / Linux server. scom7

  13. Click Next. The next thing you need to select is whether you want to manually select the targets this action account will be used against, or whether you want to target it to all computers (which is less secure, because all the admins on those machines can see the username and password). In this example we’ll choose the more secure way. scom8

  14. Click Create, and on the following screen click Close. It’s actually telling you that this first step is not enough: you still have to associate the account with a profile, which will be done in the following steps. scom9

  15. Now we’re back at our two-portioned screen. The top portion is filled in with the newly created user, so the next step will be to target it. scom10

  16. Select the “A selected class, group, or object” field and click the Select button. A little selection list will pop up. In this example we chose to target the action account to a class… scom12

  17. The class selected for this example is Unix Computer. You have to see what’s manageable for your environment; another approach is to target the Run As account to the Linux Computer group or to specific Linux objects. scom13

  18. Click OK. Now you’re back at the two-portioned screen with both sections filled in. Hit OK at this point. scom14

  19. Click Save on the next screen. scom15

  20. Because we’ve chosen to manually select the computers at which we want to target the newly created action account, the following screen will appear to do so. scom16

  21. Click on the UNIX Action Account hyperlink to go to the settings page of the UNIX Action Account. scom17

  22. In this example I’ve added VSERVER07 to the list and clicked OK.

Normally all your Linux servers should now become discovered and the 1107 events should disappear. In my environment I had to manually close the events on the RMS, and the state then also came back to healthy.

It’s probably a good idea to create a notification for these 1107 events to make sure you don’t miss any of them: they are easy to miss, but they have a great impact on the monitoring of the Linux servers, as those servers are not monitored at all while these events are coming up.

You need to repeat all the steps to also create a UNIX Privileged Account, which is used to perform tasks that need more elevated rights.

After this, the Linux servers’ status went from unmonitored to monitored and all the components were detected successfully.

SCOM: #Exchange 2010 SP1 MP is here

8:03 pm in Uncategorized by Dieter Wijckmans

exchange2010

Today the updated management pack for Exchange 2010 with support for SP1 was published. It can be downloaded from the MS Download site:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7150bfed-64a4-42a4-97a2-07048cca5d23

The new version is: 14.02.0071.0

Be sure to also download the accompanying guide, which holds all the changes in this management pack. Some great info in there!

Download the correct file from the site:

  • Exchange2010ManagementPackForOpsMgr2007-EN-i386.msi
  • Exchange2010ManagementPackForOpsMgr2007-EN-x64.msi

This is not a standard, straightforward management pack: it also requires you to install the Exchange Correlation Engine.

The correlation engine is basically a Windows service which uses the Operations Manager SDK to first retrieve the health model and then process the state change events. The correlation engine is capable of checking the health status before raising an alert. This significantly reduces the number of alerts generated, as the engine logically looks at the relationships between alerts and closes those that are caused by other alerts stemming from the same underlying issue.

The correlation engine is enabled by default. Be cautious when you are using helpdesk tools which don’t like alerts being closed automatically.
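
Once the MP is imported and the correlation engine is installed, it’s worth checking that its Windows service is actually running on the server where you installed it (often the RMS). A quick sketch from an elevated prompt; the service name below is from memory, so verify it in services.msc if it doesn’t resolve:

sc query MSExchangeMonitoringCorrelation

The service should report a RUNNING state; if it’s stopped, nothing gets correlated or auto-resolved.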


Changes in This Update

The Exchange 2010 SP1 version of the Exchange 2010 Management Pack includes significant improvements beyond those included in the RTM version of the Exchange 2010 Management Pack. The following list includes some of the new features and updates:

  • Capacity planning and performance reports   New reports dig deep into the performance of individual servers and provide detailed information about how much capacity is used in each site.
  • SMTP and remote PowerShell availability report   The management pack now includes two new availability reports for SMTP client connections and management end points.
  • New Test-SMTPConnectivity synthetic transaction   In addition to the inbound mail connectivity tasks for protocols such as Outlook Web App, Outlook, IMAP, POP, and Exchange ActiveSync, the Management Pack now includes SMTP-connectivity monitoring for outbound mail from IMAP and POP clients. For information about how to enable this feature, see Optional Configurations.
  • New Test-ECPConnectivity view   Views for the Exchange Control Panel test task are now included in the monitoring tree.
  • Cross-premises mail flow monitoring and reporting   The Management Pack includes new mail flow monitoring and reporting capabilities for customers who use our hosted service.
  • Improved Content Indexing and Mailbox Disk Space monitoring   New scripts have been created to better monitor content indexing and mailbox disk space. These new scripts enable automatic repair of indexes and more accurate reporting of disk space issues.
  • The ability to disable Automatic Alert Resolution in environments that include OpsMgr connectors   When you disable Automatic Alert Resolution, the Correlation Engine won’t automatically resolve alerts. This lets you use your support ticketing system to manage your environment. For information about how to disable this feature, see Optional Configurations.
  • Several other updates and improvements were also added to this version of the Management Pack, including the following:
    • Suppression of alerts that only occur occasionally was added to many monitors.
    • Most of the event monitors in the Exchange 2010 Management Pack are automatically reset by the Correlation Engine. Automatic reset was added to those event monitors so that issues aren’t missed the next time they occur. For a list of the event monitors that are not reset automatically, see Understanding Alert Correlation.
    • Monitoring was added for processes that crash repeatedly.
    • Additional performance monitoring was added for Outlook Web App.
    • Monitoring of Active Directory access was improved.
    • Monitoring of anonymous calendar sharing was added.
    • Reliability of database offline alerts was improved.
    • Monitoring for the database engine (ESE) was added.

I’ll be playing with this MP shortly and post my findings.

Source: Exchange Server 2010 Management Pack Guide.doc

First insight into SCOM 2012: What’s up next…

7:56 pm in Uncategorized by Dieter Wijckmans

There were actually quite a few sessions which gave a good preview of SCOM 2012, which is pre-beta now and will become RTM by the end of 2012.

Until then more and more features will be communicated.

One of the most interesting features is that SCOM 2012 will tackle one of the biggest nightmares of all SCOM admins: the SPOF called the RMS. All SCOM admins will have to admit that at one point or another they have faced problems with an RMS that was acting up. In SCOM 2007 you are only allowed to run one RMS, which is actually an MS that holds the Root MS role. The SDK service can only and exclusively run on this machine, making it the heart of your SCOM environment.

Your environment is highly impacted when your RMS is down.

The consequences:

  • You cannot perform any admin tasks.
  • All consoles (including web) connect to the RMS and will not open.
  • Product connectors depend on the RMS and therefore cannot get info out of SCOM.
  • Subscriptions depend on the RMS, therefore you will not get notifications.

RMS becomes a “Management Pool”

RMS_gone

Fortunately this is tackled in SCOM 2012 by a new organization of the management servers. The RMS, which was introduced in SCOM 2007, will be history. Instead, all the management servers (MS) will automatically be joined to a management pool, and they will all have the SDK service running. Because all the MS run the SDK service, they can all perform the tasks of the old RMS.

This has some nice advantages:

  • MS can easily be added and removed because there’s automatic failover between all the MS in the management pool.
  • There’s no need to cluster management servers anymore to assure high availability of the RMS role.
  • High availability is now available out of the box!
  • The MS share the workload over the entire management pool.

The management pool is created automatically when you install the first MS, and it will automatically add all MS’s which are installed afterwards.

Pretty cool feature if you ask me.

New network monitoring features

infoblox-microsoft-scom-integration

Seems like Microsoft really beefed up the network monitoring features. There’s a completely new way of discovering the devices in your environment. A nice cool feature is the map which is drawn of your network; you can also check which components are in the vicinity of a troublesome device, which can be very helpful in case of a faulty device.

  • The network will be drawn in nice topology maps, and the monitoring will have some cool gauges / dashboards to make network monitoring much more clean and sleek.
  • MSFT plans to support roughly 90 vendors (IIRC) out of the box, so not much customization is needed and there’s more time to tackle the real day-to-day issues!
  • Monitoring includes all the small nuts and bolts of your network, like network port monitoring, memory counters, VLAN health, HSRP health, and connection health at endpoints.

SCVMM Dynamic Memory: Do we need it?

7:54 pm in Uncategorized by Dieter Wijckmans

question-mark

Among a lot of new things implemented with the recent SP release for Windows Server 2008 R2 were two nice features for Hyper-V:

Dynamic Memory and RemoteFX.

In this blog post I’ll discuss Dynamic Memory.

Dynamic Memory answers a question a lot of customers (and admins) ask me when implementing a Hyper-V environment:

How much memory do I have to give my machines to make them run smoothly?

With physical machines it’s rather easy to determine the amount of memory you need. You make an assessment when you buy the machine and order a certain amount of memory for it. If all goes well you will never have complaints about out-of-memory apps. To determine the amount of memory, you look at the role of the server and the software manufacturer’s hardware requirements, add some more just to be safe, and you’re done.

BUT if the server is not using all its memory, it’s just a waste of memory, money,… You’ll never notice it, but this is actually the case for a lot of physical servers. There’s just so much unused memory out there. Why not take advantage of this wasted memory in a virtualized environment?

The same assessment is made for virtual machines.

How much memory do I need to assign to my machine to make it run smoothly, so that as an admin I can relax, knowing the virtual machine will be up to the job it’s supposed to do?

There are some common scenarios which I’ve seen at customers. It’s often a company policy to take one or more of these approaches:

  • The trial and error approach: just give the new virtual machine 1 GB, see whether the customer complains, and then add some RAM to keep them satisfied.
  • The physical machine approach: just give all new virtual machines 4 GB of RAM. I have no clue what’s happening inside with the memory, but everybody is happy.
  • The common sense approach: the software requirements say 4 GB, so I’ll give the server 4 GB plus a margin of 50% to assure it has plenty of memory. I don’t have a clue whether the app needs all this memory, but I don’t care…

If you use one of the approaches above, you know in advance how many machines can operate simultaneously on a Hyper-V host, because you can do the math.

For example: if you have a host with 16 GB of RAM, you can run 3 machines with 4 GB and 1 with 3 GB, leaving 1 GB for the host itself. That’s it…

If one machine needs more RAM, you’ll have to move it to another Hyper-V host or add RAM to the host to keep all the machines (and your customers) happy…

Wouldn’t it be cool to have a feature that dynamically checks whether the memory assigned to a server is still needed, and transfers memory from one machine to another when one of them requires extra? Well, this is where Dynamic Memory comes into play…

Dynamic memory will actively manage the memory of the different virtual machines!

So you can do more with the same amount of memory? So…

Thumbs-Up

Yes, we need it!

So how does this actually work?

First of all, the Dynamic Memory driver gathers all the physical memory and adds it to the memory pool. The memory pool consists of all the memory on the machine, minus the buffer you specify to keep the Hyper-V host itself up and running.

This memory pool is then used to dynamically distribute memory to virtual machines whenever they request additional memory.

When you enable dynamic memory you have to make sure the settings below are properly filled in:

Startup RAM: the amount of RAM the machine receives from the DM driver when it boots.

Maximum RAM: the maximum amount of RAM the machine can get from the DM driver.

Buffer: the percentage of RAM the DM driver reserves for the machine, to make sure it can handle sudden bursts in workload and temporarily needing more memory to cope.

3632_image_4B4FAD24

When these options are set, you can fire up the machine and the DM driver takes over the assignment of RAM to the machine.

How does the DM driver determine how much memory is given to a machine?

The DM driver carefully monitors the amount of memory the machine needs to operate and computes a memory pressure ratio. This ratio is basically the relationship between the amount of memory the machine currently has and how much it wants in order to operate properly, and it can be calculated from the total committed memory of the guest operating system running inside the virtual machine. When the machine asks for more memory than there is in its buffer, the DM driver will supply extra memory to the machine.
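
As a rough worked example with illustrative numbers of my own (not official figures): if the guest OS has 1536 MB committed and you configured a 20% buffer, the DM driver aims to keep roughly 1536 * 120 / 100 = 1843 MB assigned to the VM. You can even do the math at a command prompt:

REM illustrative buffer math: committed MB * (100 + buffer percent) / 100
set /a target=1536*120/100
echo Target memory: %target% MB

If the committed memory rises, the target rises with it and the VM puts pressure on the pool for more RAM.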

How does the DM driver reclaim memory from one machine to give it to another?

This is where the ballooning driver comes into play. The ballooning driver actively manages the machines to give memory back to the pool, where it can be handed out to other virtual machines.

There’s a big difference between adding and removing memory.

  • The addition of memory is active: it is immediately added to the VM when requested.
  • The reclamation of memory is passive: memory is not reclaimed as long as there’s no need for extra memory in the pool to supply to other VMs requesting additional memory. Unutilized memory is collected every 5 minutes.

The addition of memory is handled by the synthetic memory driver and is lightweight. It does not require any hardware emulation and adds the memory to the machine instantly.

The reclamation of memory is done by the ballooning driver. It basically takes the amount of memory requested by the DM driver and reserves it inside the virtual machine. It then notifies the DM driver that it has captured the memory within the virtual machine. Because the virtual machine cannot use this memory anymore, it can be extracted from the machine without interrupting its service. The amount of memory to reclaim is calculated from the ratio I explained earlier in this post.

The memory is placed back in the pool, ready to be distributed to another VM that is in desperate need of some additional memory to keep things going.

Because of this dynamic adding and reclaiming of memory, the amount of total physical RAM shown in Task Manager is the high watermark of the memory allocated to the VM since it was booted. You can’t see the balloon size from within the guest; you have to use performance counters in the parent partition.

SCVMM 2008 R2 SP1 now fully supports these new features of Windows Server 2008 R2 SP1, so you can continue to use your usual tools to manage your Hyper-V environment.

Next blog post will be about the installation / upgrade of your environment.