SCOM: Connect management groups between on-prem and Azure

August 21, 2014 at 4:27 pm in Azure, SCOM 2012, sysctr by Dieter Wijckmans

 

During a recent project I explored the benefits of hosting a two-legged SCOM environment spanning both on-prem and cloud services. Although this is possible with just one management group and a site-to-site VPN to the cloud, the customer opted for a two-management-group approach to keep a clear divider between on-prem and the cloud.

In this blog post (who knows, it could become a series) I’ll show you how to connect the management groups to each other so they can exchange alerts and be managed from one console, while still benefiting from the presence of a management group on both platforms.


In this scenario I’m going to use connected management groups, as explained here: http://technet.microsoft.com/en-us/library/hh230698.aspx

Connecting management groups in SCOM 2012 gives you a couple of benefits. The biggest one, in my opinion, is that you can have multiple management groups with different settings but use one console to see all the alerts. The customer wanted the ability to monitor their clients’ systems with different thresholds than their own systems. Their own systems were mainly located on site, while the other systems were at client sites or in the cloud.

The management group that holds the consolidated view is called the local management group. In my example this is VLAB, which is on-prem. The other management groups are called “connected management groups”, in this case VCLOUD.

They relate to each other in a hierarchical fashion, with connected groups in the bottom tier and the local group in the top tier. The connected groups are in a peer-to-peer relationship with each other. Each connected group has no visibility or interaction with the other connected groups; the visibility is strictly from the local group into the connected group.

So in this scenario it’s a good idea to connect these management groups and see all data, both on-prem and client based, in one console. From VCLOUD it’s not possible to see the alerts of VLAB, but the other way around it is.

So what do we need to do to achieve this (even with different AD domains and firewalls in between)?

First of all prep the VCLOUD in Azure:

Create endpoints on Azure machine

In order to reach the Azure management group from on-prem we need to make sure a connection to the VCLOUD management server is possible. This is done over ports 5723 and 5724.

Open the Azure management portal:

My server is called vcloud-ms1

printscreen-0231

Open the endpoints and add 5723 and 5724 to the endpoints. This in fact opens the Azure firewall towards your machine. All communication will happen over these two ports.

printscreen-0232

Click add and fill in the endpoints as shown below.

printscreen-0233
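If you prefer scripting over the portal, the endpoints can also be created with the classic Azure (service management) PowerShell module; a sketch assuming the cloud service and VM are both called vcloud-ms1 as in this example:

# Requires the Azure service management module and an authenticated session (Add-AzureAccount)
foreach ($port in 5723, 5724) {
    Get-AzureVM -ServiceName "vcloud-ms1" -Name "vcloud-ms1" |
        Add-AzureEndpoint -Name "SCOM-$port" -Protocol tcp -LocalPort $port -PublicPort $port |
        Update-AzureVM
}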

Next, find the following:

  • The public virtual IP address (VIP) and take note of it. In my case it’s 23.101.73.xxx
  • The DNS name: in my case vcloud-ms1.cloudapp.net

 

printscreen-0234

Prepare the onsite management server

Now that the management server of our VCLOUD management group is configured, we need to configure the management server in our VLAB environment so it becomes the local management group that will receive the alerts.

First we need to make sure that the onsite server can resolve AND reach the server in the VCLOUD management group.

This can be done by changing the hosts file on the VLAB management server.

Go to c:\windows\system32\drivers\etc\ and open the hosts file:

printscreen-0235 

Note: I’ve removed the last 3 digits of all the IP addresses above; you need to fill in the full IP address as documented in the Windows Azure portal.
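For reference, the entry can also be appended from an elevated PowerShell prompt; a minimal sketch (the IP is deliberately incomplete, use the full VIP, and the name on the right must be exactly the name you will later enter in the Connected Management Group wizard):

# Map the VCLOUD management server name to its public VIP (run elevated)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "23.101.73.xxx    vcloud-ms1.cloudapp.net"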

Let’s check whether this works from the VLAB management server by doing the routine check: ping the hostname:

printscreen-0236

Hmmm, not working. Did we configure something incorrectly? Check, double check. No.

Well, this makes perfect sense because PING IS DISABLED towards Azure machines. You will get a “Request timed out” every time you test, no matter what you configure!
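A more useful check is to test the actual SCOM ports instead of ICMP. On Windows Server 2012 R2 (or anything that has the Test-NetConnection cmdlet) something like this does the trick, assuming the name from the hosts file:

# ICMP is blocked, so probe the SCOM channel ports directly
Test-NetConnection -ComputerName vcloud-ms1.cloudapp.net -Port 5723
Test-NetConnection -ComputerName vcloud-ms1.cloudapp.net -Port 5724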

Connecting the management groups

Now that we have both ends configured it’s time to see whether we can connect the management groups. Remember: initiate the connection from the local management group (the one that needs to see all alerts and sits at the top of the hierarchy).

So let’s connect to the management server in VLAB:

Open the Administration pane and select Connected Management Groups:

printscreen-0237

Right click and choose Add Management Group

printscreen-0238

Fill in all the data requested:

  • Management Group Name: the name of the VCLOUD management group
  • Management Server: the name of the management server in VCLOUD (make sure to use the exact name as entered in the hosts file)
  • Account: because the account we use as the SDK service resides in the VLAB AD and is not known in VCLOUD, we need to use VCLOUD credentials

printscreen-0239

Note: you need to initiate this from the management server where you changed the hosts file, so make sure there’s a console installed on it.
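If you prefer scripting this step, the Operations Manager Shell has an Add-SCOMTieredManagementGroup cmdlet that should do the same thing; treat the parameters below as a sketch and verify them with Get-Help on your installation:

# Run on the VLAB (local, top-tier) management server; names follow this example
$cred = Get-Credential   # an account that is valid in the VCLOUD domain
Add-SCOMTieredManagementGroup -Name "VCLOUD" -ServerName "vcloud-ms1.cloudapp.net" -Credential $cred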

You will get the message below because it’s not possible to validate the account in the local AD:

printscreen-0240

Just click next and normally you should be connected at this point:

printscreen-0241

Success!

So now all we have to do is configure what we want to show on the local management group.

 

I’ll explain this further in the next blog in this series.

Microsoft System Center Advisor Limited Preview is live!

May 14, 2014 at 8:33 pm in operations manager, SCOM, SCOM 2012, Uncategorized by Dieter Wijckmans

There are days when a product suddenly becomes the hot topic. It’s all about cloud lately, and sometimes it’s amazing how fast things are evolving for us IT pros.

One of these cool products, one that leverages the possibilities of the cloud, using its virtually endless storage space to store data and its computing power, is System Center Advisor.

printscreen-0207

When System Center Advisor first emerged it was a small service in the cloud: you had to separately set up a small proxy agent to send data into the cloud and configure it to get useful data. You had to set up or designate a server as a gateway to send data to the online service. The data was only updated once per day and was only available through a web console. It was a nice product, but it was ahead of its time. One of its problems was that not a lot of people understood the need for Advisor, as it was branded as just another piece of System Center software…

The potential of the product was already there but it had to be easier to use…

Since SCOM 2012 SP1, Advisor got a revamp and is fully integrated into the SCOM console. It received more rules and better performance, and people started embracing the fact that they gained access to Microsoft’s vast database of best practices to automatically evaluate their systems. No need for those complex MBSA scans (ouch, remember those…).

More and more people started using the service, but for a lot of customers I visited, System Center Advisor was still not that well known; it was rather a big unknown. As soon as I explained the possibilities they started using and appreciating the service and installed it in their environment.


source: https://channel9.msdn.com/Blogs/C9Team/System-Center-Advisor-Limited-Preview

Now with the new Limited Preview Microsoft is showing the future of this cool product. All the different and familiar functions are still there but there’s more…

Intelligence Packs

If you are familiar with SCOM you’ll definitely know management packs, but intelligence packs? Intelligence packs are the new way of adding functionality to your Advisor environment, tailored for your business. They are the key to customizing Advisor to your environment and to showing the data you specifically want to see.

These intelligence packs are stored in the Advisor gallery and are installed online. At a later stage it will be possible to configure and create your own intelligence packs that gather data specific to your environment, similar to what you are doing with management packs in your SCOM environment.

Currently there are Intelligence Packs for:

advisor1

All are available from the Intelligence Pack Gallery and install with just a couple of clicks. Not much configuration is needed afterwards.

The store can be reached through the Intelligence Pack button on the portal:

printscreen-0203

Let’s take the Log Management intelligence pack as an example (this one will take some time to get used to). It enables a cool new feature: gathering the event logs of your servers in one central place and searching and querying them, giving you a single place to get insight into your environment.

After we have installed the Intelligence Pack through the console it will appear in our main portal view:

printscreen-0216

(notice that I already played with the other intelligence packs as well)

So if we click the “Log Management” tile we jump to the configuration, where we tell Advisor which logs we would like to gather so we can get insights with queries. Again, this is a great way of gathering all your data in one place, and once you have it all in one place you can use it to get insight into your environment, because let’s face it: it’s you who knows your environment best.

printscreen-0217

After we have told Advisor to gather the System log (both errors and warnings) on all the machines connected to Advisor, the intelligence pack kicks in and gathers the info for the first time to give you a view of the collected data.

Search Data explorer

Now that we have data in Advisor, we would love to find things out on our own, for example the root cause of why systems are running slow. Searching is done with the Search Data Explorer. Open the Search Data Explorer on the right to access the search tool:

printscreen-0208

This will open the Search where you can start your journey through the gathered data:

printscreen-0209

On the right you’ll find common search queries to get you started. Expect more and more lists of search queries to come online, but if you really need to create your own search query you can always check out the syntax reference link under documentation to get you going.

In fact there are 3 easy steps to get your data:

1. Enter the search term:

In this example I’m using * to get all my data; because Advisor hasn’t run that long it doesn’t have a lot of data yet, so I would like to see what’s already in there:

printscreen-0210

The next step is to filter the results with the tools in the right column so we only get the data we are after:

The facets are the different objects gathered, grouped by type, with facets per type. In addition it’s also possible to scope the time frame for the gathered events. This can come in handy when you want to troubleshoot a problem in your environment, for example:

printscreen-0211

For now this data is not exportable through PowerShell and is only available online. Further down the road in the development of Advisor it will be possible to query this data through PowerShell and use it in your own applications.

Feedback

Another feature that has been introduced in the console is the feedback option.

The button is located on the bottom right and will open the feedback page in a separate window:

printscreen-0212

This will take you straight to the feedback window.

printscreen-0213

People who have already worked with Connect and the forums will find that it’s a mix between those two. Here you can give tips or requests to further enhance the product with new possibilities, but also file bugs you’ve come across. Members of the community can answer questions to get you going or vote for another request.

This gives you a nice one-stop place to get up to speed fast with the product, but most importantly it gives you the opportunity to give feedback first hand. This list will be used by the product team to prioritize new enhancements.

Conclusion

This limited preview of the next generation of Advisor gives you the possibility to gather even more data about your environment and use that data to gain further insight. Because the system is built around intelligence packs it’s very easy to tailor the console to your needs. Add the performance of cloud storage and computing to the mix and we have a powerful additional tool to gather and analyse data.

Will this completely replace all other monitoring needs? Not yet… Will it be a great enhancement to the tools we already have in place? Certainly!

This tool is free of charge during the preview period. So for now the only thing that is stopping you from using this tool is… yourself.

Keep an eye on the blog, as I’ll dig deeper into the different intelligence packs when data comes in.

A first glance at Squared-Up Operations 1.8

May 14, 2014 at 1:10 am in SCOM 2012 by Dieter Wijckmans

Face it: in my belief Operations Manager is a cool product with lots of capabilities out of the box, but there is room for improvement as well. One of these areas of improvement is showing the data you so eagerly collect in SCOM to operators, or even to people who are not that tech minded. All they want to see is whether everything is running fine so they can happily (I do hope so) continue their work. SCOM is very good at showing data to operators, but it lacks the capability of showing data in a simpler way.

Don’t get me wrong on this… it DOESN’T need to have this capability on board by default… Luckily there are a number of players on the market offering easy-to-set-up dashboards and visualizations of the data in your SCOM environment, like Squared Up.

During MMS 2013 I came across Squared Up, a small UK-based company that took a rather different approach towards dashboarding. The difference is that they are not focusing on creating dashboards in the console as such, but on generating these dashboards on top of a lightweight HTML5 web server which can be installed on a management server or another server if you like. All you need to do is install the Squared Up app and connect it to your environment. From there on, all the data is collected by tapping into the SCOM SDK without interfering with the console.

No fuss, no hassle, just straightforward dashboards out of the box…

So the first part of this blog series (yep, I will dig deeper into this product) looks at how to install it and what is available out of the box. (Note that the print screens are based on version 1.7. I recently installed 1.8 on top of this version without issues.)

Website: http://www.squaredup.com/

Install

So let’s start the install:

SNAG-0266

Read through the entire license agreement, like I always do (right).

SNAG-0267

Install the HTML5 web-server

SNAG-0268

Yep installing…

SNAG-0269

After install we need to connect it to our management group to be able to tap into the SDK.

SNAG-0270

When all is done we can open the console for the first time by clicking on the link provided on screen:

SNAG-0271

We are using the Operations Manager user “administrator”. No extra users need to be created; the users already present in SCOM will do:

SNAG-0272

After the first log in you need to enter your activation key and you are good to go.

SNAG-0273

To my surprise, data was already coming in and being shown in the website. Without any additional configuration or settings I already had a standard view of my environment.

SNAG-0274

First browse through the standard views:

Active Directory view out of the box

SNAG-0313

On the left we get a quick overview of the status of the different services, and on the right we get, straight out of the box, graphs of the key performance indicators pulled from the data warehouse in real time. Pretty impressive if you ask me.

Web servers view out of the box:

SNAG-0314

If I click an alert it instantly opens a new web view with the alert and all its parameters in a very sleek design, giving you all parameters and data at a glance.

SNAG-0308

This dashboard is also fully functional. It’s possible to close alerts, assign alerts or even reset monitors at a glance, as shown below.

SNAG-0309

First conclusion

I’ll definitely have to play more with the product to get to know its full potential, but so far I’m pleased with what I’m seeing: easy setup, dashboards straight out of the box filled with data, speed (although my environment is running locally on my demo laptop),…

At a later stage, when I find the 25th hour in a day, I’ll dig into the creation of custom dashboards, which will hopefully be as easy as the install.

Small tip

If you want to test drive this web console without moving back and forth on your screen, you can always open it in a Page view in the console itself, as shown below:

printscreen-0215

SCOM 2012 R2 UR2: version number agent not increased

May 12, 2014 at 10:24 am in SCOM, SCOM 2012 by Dieter Wijckmans

 

Recently the new update rollup (UR2) for SCOM 2012 R2 was released to the general public. One of the things that came up in the community was that the agent version number was not increasing in the SCOM console when the update was pushed through the console.


Stanislav Zhelyazkov worked closely with other community members to pinpoint the problem and found a workaround which is both ingenious and simple: run a repair from the console.
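To quickly see which version each agent reports before and after the repair, a one-liner from the Operations Manager Shell helps (just a convenience check, not part of the workaround itself):

# List agent versions so you can spot the ones still showing the pre-UR2 number
Import-Module OperationsManager
Get-SCOMAgent | Sort-Object Version | Select-Object DisplayName, Version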

Please read his full blog post here:

http://cloudadministrator.wordpress.com/2014/05/10/system-center-2012-r2-operations-manager-ur2-does-not-updated-agents-trough-the-console/

Social Update: A manner of speaking…

May 9, 2014 at 10:27 am in LiveMeeting, scu, sysctr by Dieter Wijckmans

A lot of exciting things are happening in the System Center community these days. New releases and new features have been delivered or are on the brink of being delivered shortly. TechEd NA is right around the corner and other events are being planned as well. I always enjoy being part of these events and meeting old and new friends, all with the same interest: System Center products.


This blog post will be my (and your) one place to keep track of all sessions which I’m presenting and events I’ll be attending both national and International.

Hope you will attend one of my sessions and if you do, make sure to take the time to meet up!

If you have any suggestions or questions about this list, please be sure to drop me a line on Twitter or send me a mail.

Event | Date | Location | Session | URL
SCU Network | 22/05/2014, 1PM CET | Online webinar | Exploring monitoring beyond the borders of Microsoft: Part 1 Linux monitoring | http://www.systemcenteruniverse.com/scunetwork.htm
SCU Network | 27/05/2014, 1PM CET | Online webinar | Exploring monitoring beyond the borders of Microsoft: Part 2 PowerShell | http://www.systemcenteruniverse.com/scunetwork.htm
SCU Network | 05/06/2014, 1PM CET | Online webinar | Exploring monitoring beyond the borders of Microsoft: Part 3 Monitoring API based devices | http://www.systemcenteruniverse.com/scunetwork.htm
ITPROceed | 12/06/2014 | Antwerp (Belgium) | Can SCOM monitor other stuff than Windows thingies? Euhm, yes it can! | http://www.itproceed.be

 

Home automation: Putting a child lock on my Nest thermostat using SCOM

April 24, 2014 at 10:23 am in SCOM 2012, sysctr by Dieter Wijckmans

 

This post is part of a series in which I demonstrate how to use SCOM to monitor basically everything. The other parts can be found here:

After successfully getting data into SCOM from my Nest thermostat and my Flukso energy meter, it’s time to do some cool stuff with it. More devices are in the pipeline to get data into SCOM to create the ultimate domotics controller, or should I say “SCOMotics”…

The world: Keeping an eye on Teen Trouble

One problem I have in real life is that it’s very hard to explain to my wife and kids how radiant floor heating works. It takes some time to heat up but it stays warm a long time, so there’s no point in setting the thermostat higher to get instant heat: it takes approximately one hour to heat up 2 degrees Celsius (something I also learned from getting my Nest thermostat data into SCOM).

But you can explain all you want: if they find it chilly they’ll turn up the thermostat assuming it will get warm instantly, while in fact they are just using more energy than necessary to heat the house two hours later, when they have already left.

So the mission was very simple: stop them from doing this. Yes… I could put a lock code on the Nest thermostat and make it only available to me, but then if I’m not home and they really need to turn the heating up, they can’t.

So I came up with another solution: setting a hard limit on the temperature and enforcing it.

So in short what do I need to achieve with SCOM:

  • Detection of the current temperature set: Target temperature
  • Alerting when the Target temperature breaches the set limit
  • Take corrective action to make sure the target temperature is set below the max temperature.

Let’s start with detecting the current target temperature. I can reuse the work I already did to read in this value and compare it to the limit. To keep track of things, and because this is a more general approach, I’ve documented the process of creating a PowerShell script monitor using Silect MPAuthor here: http://scug.be/dieter/2014/04/24/scom-creating-a-powershell-script-monitor-with-silect-mpauthor/

So now that we have the monitor in place let’s check out whether it’s working!

First of all I’m setting my Nest thermostat to 20 degrees Celsius while my limit is set to 19 degrees Celsius:

SNAG-0257

After the first run the monitor is picking up that indeed the temperature is higher than the requested limit. This is detected by running the PowerShell script monitor we’ve configured earlier:

SNAG-0263

Here you can see that the recovery task which I configured kicked in as well. This recovery consists of a PHP file located on my web server, which is called using PowerShell’s Invoke-WebRequest cmdlet.

Note: I’m running this recovery against my Watchernode class, which consists of one server, and thus I’ve copied “settempnest.ps1” to a local folder on that particular server.

How did I configure the recovery task

First open the monitor and click Add in the “configure recovery tasks” section.

SNAG-0260

Fill in the name of the recovery and select the health state to react upon.

SNAG-0261

Enter the command:

  • Full path: C:\Windows\System32\WindowsPowerShell\V1.0\powershell.exe
  • Parameter: -noexit "& 'C:\scripts\settempnest.ps1'"

SNAG-0262

The PowerShell script runs an Invoke-WebRequest against my web server. The PHP script it calls is copied below:


<?php

// Load local configuration (Nest credentials, timezone) and the nest-api class
require 'inc/config.php';
require 'nest-api-master/nest.class.php';

// The nest-api class reads its credentials from these constants
define('USERNAME', $config['nest_user']);
define('PASSWORD', $config['nest_pass']);
date_default_timezone_set($config['local_tz']);

// Force the thermostat back to 18 degrees in heating mode
$nest = new Nest();
$nest->setTargetTemperatureMode(TARGET_TEMP_MODE_HEAT, 18.0);
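The settempnest.ps1 the recovery executes is then essentially a one-liner around Invoke-WebRequest; a minimal sketch (the URL is an assumption, point it at wherever you host the PHP page above):

# settempnest.ps1 - call the PHP page that resets the Nest target temperature
Invoke-WebRequest -Uri "http://mywebserver/settempnest.php" -UseBasicParsing | Out-Null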

So after running the recovery we see the monitor changing back from error to healthy:

SNAG-0259

There we go… All good again saving some energy

SNAG-0265

And final check on the thermostat itself… Back humming at 18 degrees.

SNAG-0264

SCOM: Creating a PowerShell script monitor with Silect MPAuthor

April 24, 2014 at 10:15 am in SCOM 2012, sysctr by Dieter Wijckmans

Sometimes it’s necessary to create a monitor for something which is not covered by the standard management packs. Unfortunately it’s not possible to create a PowerShell script monitor in the SCOM console. Although it’s not a good idea to start authoring in the operations console, it can sometimes be a quick and easy way to create a monitor.

Recently Silect Software released a free version of MPAuthor to create your management packs. I’m using it to create the script monitors that collect and monitor the data used in my monitoring-my-home series: http://scug.be/dieter/2014/02/19/monitor-your-home-with-scom/

Download the tool here: http://www.silect.com/mp-author

Below is an example of how I monitor the target temperature set on my Nest Thermostat.

So open the tool and create a new management pack => Create New Script Monitor…

SNAG-0243

Name the script (if you already have the script as a PS1 file somewhere, it will load the script body automatically).

SNAG-0246

This is the script I’m using:


param([int]$maxtarget)
[void][system.reflection.Assembly]::LoadFrom("C:\Program Files (x86)\MySQL\MySQL Connector Net 6.8.3\Assemblies\v2.0\MySQL.Data.dll")

#Create a variable to hold the connection:

$myconnection = New-Object MySql.Data.MySqlClient.MySqlConnection

#Set the connection string:

$myconnection.ConnectionString = "Fill in the connection string here"

#Call the Connection object’s Open() method:

$myconnection.Open()

$API = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $API.CreatePropertyBag()

#uncomment this to print connection properties to the console
#echo $myconnection

#The dataset must be created before it can be used in the script:
$dataSet = New-Object System.Data.DataSet

$command = $myconnection.CreateCommand()
$command.CommandText = "SELECT target FROM data ORDER BY timestamp DESC LIMIT 1";
$reader = $command.ExecuteReader()
#echo $reader
#The data reader will now contain the results from the database query.

#Processing the Contents of a Data Reader
#The contents of a data reader is processes row by row:

while ($reader.Read()) {
    #And then field by field:
    for ($i = 0; $i -lt $reader.FieldCount; $i++) {
        $value = $reader.GetValue($i) -as [int]
    }
}
#echo $value
$myconnection.Close()
#$value = $value -replace ",", "."

if($value -gt $maxtarget)
{
    $PropertyBag.addValue("State","ERROR")
    $PropertyBag.addValue("Description","Target temperature currently set to " + $value + ": is higher than the maximum target temp " + $maxtarget)
}
else
{
    $PropertyBag.addValue("State","OK")
    $PropertyBag.addValue("Description","Target temperature currently set to " + $value + ": is lower than the maximum target temp " + $maxtarget)
}

$PropertyBag

Note that you need to pass the results back to SCOM via a property bag. I’m also a fan of doing the logic in the script itself, as shown above, to avoid any logic in SCOM afterwards: it’s far easier to do the comparison in the PowerShell script. In this case I’m setting State to either ERROR or OK. This also avoids output format conflicts over whether the value is a string or an integer.
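Before wiring the script into MPAuthor you can dry-run it from a normal PowerShell prompt. A minimal sketch (the file name is mine; if memory serves, temporarily swapping the final $PropertyBag line for $API.Return($PropertyBag) prints the bag as XML so you can check the State and Description values):

# Run the monitor script by hand with the threshold you plan to configure in SCOM (hypothetical file name)
.\Get-NestTargetTemp.ps1 -maxtarget 19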

I’m setting the maxtarget parameter to 19

SNAG-0245

Next you need to create the conditions for the monitor states:

SNAG-0247

As I’m only using a 2 state monitor I’m deleting the OverWarning state and only using UnderWarning (= Healthy state) and OverError (= Error state).

SNAG-0248

For the Healthy state I’m detecting the “State” property value as OK (note that I’m defining the Type as a String as the state is just plain text)

SNAG-0249

For the Error state I’m detecting the “State” property value as ERROR

SNAG-0250

Now we need to target the monitor. In my case it’s the watcher node target I’ve created earlier on.

 

SNAG-0251

Naming and enabling the rule

SNAG-0252

Set the schedule for how often to check the status against the max temp.

SNAG-0253

Specify the alert that needs to be raised, if any:

SNAG-0255

And create.

SNAG-0256

Now save the management pack and test it in your environment.

System Center 2012 R2 Update Rollup 2 Released

April 23, 2014 at 10:55 am in sysctr by Dieter Wijckmans

 

Just a quick note that System Center 2012 R2 Update Rollup 2 was released last night. For a full view of the different updates included head over to the official KB which is located here: http://support.microsoft.com/kb/2932881

A lot of features and fixes.

Below you can find the links to the different fixes.

Data Protection Manager (KB2958100) (6 fixes in total)

Operations Manager (KB2929891) (9 fixes in total)

Operations Manager – UNIX and Linux Monitoring (Management Pack Update KB2929891) (1 fix in total)

2929891 System Center 2012 Operations Manager R2 Update Rollup 2

Orchestrator (KB2904689) (3 fixes in total)

Service Manager (KB2904710) (15 (!) fixes in total)

Service Provider Foundation (KB2932939) (6 fixes in total)

 

Virtual Machine Manager (KB2932926) (30 (!) fixes in total)

 

As always these packages are cumulative and contain all the fixes of Update Rollup 1 as well. I’ll be taking the different packages for a test spin in my lab environment and will keep you informed about anything I come across.

Last but not least, the Windows Azure Pack also received quite an extensive update.

More info can be found here: http://support.microsoft.com/kb/2932946

SCOM: Agentpostinstall.ps1 PowerShell demo script Webcast 01042014

April 3, 2014 at 10:29 am in LiveMeeting, SCOM, SCOM 2012 by Dieter Wijckmans

On April 1st 2014 (a day I will remember for a long time for various reasons) I held a webcast for Microsoft TechNet Belux regarding the automation of admin tasks in SCOM.

I went over the basics to get started, the pitfalls and gave some tips and tricks to get you going. This session was recorded and together with the slide deck it’s made available here:

http://www.slideshare.net/technetbelux/make-scom-work-for-you-and-not-the-other-way-around

In this demo I created a small PowerShell script that could save you some time when agents are installed in your environment through an image. In this particular scenario the agents are automatically in the “pending approval” list in SCOM.

Running this PowerShell script will add them to the environment, make them remotely manageable, point them all to a management server of your choice and set agent proxying to true.

Feel free to adapt the script for your needs.

The script in question:


#=====================================================================================================
# AUTHOR:    Dieter Wijckmans
# DATE:        01/04/2014
# Name:        agentpostinstall.PS1
# Version:    1.0
# COMMENT:    Approve agents after install, make remotely manageable, assign to 1 management server
#           and enable agent proxying.
#
# Usage:    .\postinstallagenttasks.ps1 mgserverfrom mgserverto sqlserverinstance dbase
# Parameters: mgserverfrom: the primary server at this point
#             mgserverto: The new primary server
#             sqlserverinstance: the sql server where the opsdb resides + instance
#             dbase: name of the opsdb
#
#=====================================================================================================

param ([string]$mgserverfrom,[string]$mgserverto,[string]$sqlserverinstance,[string]$dbase)
###Prepare environment for run###

####
# Start Ops Mgr snapin
###

##Read out the Management server name
$objCompSys = Get-WmiObject win32_computersystem
$inputScomMS = $objCompSys.name

#Initializing the Ops Mgr 2012 Powershell provider#
Import-Module -Name "OperationsManager"
New-SCManagementGroupConnection -ComputerName $inputScomMS

#Get all agents which are in pending mode and approve
$pending = Get-SCOMPendingManagement | Group AgentPendingActionType
$Count = $pending.count
echo $count

If ($count -eq $null)
{
    echo "No agents to approve"
    Exit
}
Else
{
    Get-SCOMPendingManagement | where {$_.AgentPendingActionType -eq "ManualApproval"} | Sort AgentName | Approve-SCOMPendingManagement
}

#Let all servers report to 1 primary management server

$serverfrom = Get-SCOMManagementServer | ? {$_.name -eq "$mgserverfrom"}
$agents = Get-SCOMAgent -ManagementServer $serverfrom
$serverto = Get-SCOMManagementServer | ? {$_.name -eq "$mgserverto"}
Set-SCOMParentManagementServer -Agent:$agents -FailoverServer:$null
Set-SCOMParentManagementServer -Agent:$agents -PrimaryServer:$serverto
Set-SCOMParentManagementServer -Agent:$agents -FailoverServer:$serverfrom

#Set all servers to remotely manageable in SQL

$ServerName = "$sqlserverinstance"
$DatabaseName = "$dbase"
$Query = "UPDATE MT_HealthService SET IsManuallyInstalled=0 WHERE IsManuallyInstalled=1"

#Timeout parameters
$QueryTimeout = 120
$ConnectionTimeout = 30

#Action of connecting to the Database and executing the query and returning results if there were any.
$conn=New-Object System.Data.SqlClient.SQLConnection
$ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2}" -f $ServerName,$DatabaseName,$ConnectionTimeout
$conn.ConnectionString=$ConnectionString
$conn.Open()
$cmd=New-Object system.Data.SqlClient.SqlCommand($Query,$conn)
$cmd.CommandTimeout=$QueryTimeout
$ds=New-Object system.Data.DataSet
$da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
[void]$da.fill($ds)
$conn.Close()
$ds.Tables

#Set all servers to agent proxy enabled

Get-SCOMAgent | where {$_.ProxyingEnabled.Value -eq $False} | Enable-SCOMAgentProxy

It can be downloaded here


Note

  • that you need to give the proper parameters for it to work as stated in the description.
  • that perhaps you will have to check the SQL connection string on line 68 with your SQL DBA and adapt it accordingly.
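For completeness, a hypothetical invocation (every server and database name below is made up, substitute your own):

# Approve pending agents, repoint them to the new primary management server and enable agent proxying
.\agentpostinstall.ps1 -mgserverfrom "scomms01.contoso.local" -mgserverto "scomms02.contoso.local" -sqlserverinstance "sql01\OPSMGR" -dbase "OperationsManager"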

Received MVP 2014 award

April 2, 2014 at 10:02 am in Uncategorized by Dieter Wijckmans

 

Yesterday I received the news that I have been awarded the Microsoft Most Valuable Professional award 2014 in Cloud and Datacenter Management.

SNAG-0229

I can’t describe how thrilled I am to be a part of this community, to share even more knowledge with true experts in the field and to gain even more insight into the System Center products.

This wouldn’t have been possible without the help and support of a lot of people who guided me into the world of System Center. There’s a small problem with name dropping, however: you always forget some people. But hey, I’m happy to take the risk.

First of all I would like to go back to 2010. While I was working at a client I came across Kurt Van Hoecke (who’s an MVP now as well), who introduced me to the System Center suite. I did have an ITIL background but had never heard of System Center as such. I agreed to join him at MMS 2010 and barely got there due to the ash cloud. During that MMS I already met the people of the System Center User Group and other System Center engineers who became good friends afterwards.

Time went by and I started to experiment with SCOM and other Sysctr products. I changed employer specifically to start working with Sysctr products and from then on it started rolling.

I officially joined SCUG Belgium in 2011 and have blogged ever since. I started speaking at events as well, recently with a couple of highlights (ExpertsLive, System Center Universe US,…) and hopefully many more to come.

During the past years I have enjoyed sharing my knowledge and findings regarding the Sysctr products, helping out people with issues and just meeting new people with the same passion. I can’t count the hours I’ve spent on these activities, but I enjoy doing it, otherwise you would not continue, right?

So what now? Well euhm basically nothing. I will continue blogging, speaking, helping out and hopefully meet even more people with the same passion. As a board member of SCUG I can say that we will continue to provide a platform for System Center content in Belgium and throughout the world. If you would like to start blogging / speaking / contributing here just drop me a line.

So finally I would like to start name dropping… The dangerous stuff right?

First of all thanks to Arlindo Alves and Sigrid VandenWeghe: As Microsoft Belux community leads they provide us (and me) with a solid platform to build and grow our community platform.

Second I would like to thank the members of the SCUG who helped me in the beginning of my wanders through the System Center world.

Third I would like to give a shout out to some specific people who had a significant impact on the journey I’ve travelled so far. Thanks Maarten Goet, Kenny Buntinx, Tim de Keukelaere, Cameron Fuller, Kurt van Hoecke, Kevin Greene, Marnix Wolf, Mike Resseler and so many more I’m forgetting to mention right now.

It’s because of these individuals, and even more because of the buzz in the Sysctr community, that I really like sharing my knowledge and meeting new people while I’m speaking.

Last I would like to express a special thanks to the Sysctr community members who provided good content in the past, do so now and will in the future. It’s their blogs, effort and guidance that helped me in the beginning to gain a good insight into the Sysctr world.

Some blogs that really helped me in the beginning (and still are helping me today)

Last but not least I want to encourage you to share your knowledge in the community as well. Every bit of effort, even the smallest, really contributes to keeping this community alive and helping others to fully understand the potential of the System Center suite. Hopefully I’ll see you at one of the events in the near future!
