You are browsing the archive for 2010 April.

MMS 2010: Monday overview

2:01 pm in Uncategorized by mikeresseler

Hey All,

The first day of MMS is over. It's now way too early (still suffering from jet lag after the crazy roadtrip), but I'm very glad I came because I already saw some great stuff.

Here’s an overview…

Session: Data Warehouse and Reporting in System Center Service Manager 2010

This was an Instructor Led Lab (ILL), which was great to do.  In about 1 hour and 15 minutes, we got a nice overview of the data warehouse and reporting capabilities of the product.  And I'm not only talking about the out-of-the-box reports, I'm also talking about creating your own reports through Excel or SQL Report Builder.  It is clear that they made a real effort to let you create and customize your own reports.  Very cool. 

Session: Conquering the Summit: A Freshmen Orientation

It is my first year here, so I thought this would be a great session to see what is happening and what we can do…

The answer: there is way too much, and after this session I found it even more difficult to create my schedule.  :-) Luckily for me, I'm here at the summit with some experienced guys and they helped me a lot. 

Session: Incident and Change Management in Service Manager

Again an ILL, which failed completely because the virtual images weren't workable.  However, what I did learn is that SCSM is really built to be customized for your needs.  Designing a workflow for specific situations isn't really that hard.  It only requires that you have a good workflow in advance :-). I'm not going to discuss SCSM at length here, because our fellow SCUG member Kurt can do that much better 😉

Session: Configuration Manager Dashboard

Heard a lot about it, never saw it live in action, so I figured now is my chance.  I must say, I'm impressed.  It's free, it's easy, and out of the box it delivers great reports.  If you are using SCCM, just install it and use it.  It's great.

They showed us the Configuration Manager dashboard that Microsoft's internal IT runs against its own SCCM environment.  Pretty impressive figures over there, and great reporting from the dashboard.  This is the kind of dashboard every boss wants from you.

Session: System Center Operations Manager 2007 R2 – Advanced Concepts

Another ILL (yep, yesterday was very busy :-)).  I was a little disappointed in this one, because the technical level was a bit low.  I figured "Advanced Concepts" would be more in depth…

Expo and Reception

If you ever want to see 4,000 people rush to one booth to get a MyITForum badge (which is an entrance ticket to the MyITForum party), then you have to come to MMS.  I didn't get one (I still have a chance in some drawing today, but what are the chances…), but I couldn't help watching from a distance and enjoying all those people trying to get a badge.  It was awesome :-)

The reception itself was good (there was beer… unlimited, just no Belgian beer :-( ), good food, and a lot of gadgets to pick up from the different vendors.  (I packed light, but I fear I will return heavy :-))

The only downside to yesterday was that it was very difficult to discuss things with the vendors because of the crowd and noise (and maybe also because of the beers :-))

I'm certainly going to stop by a few more times when it is less crowded.

I had one great conversation though… I spoke to the authors of the VMM and SCSM documentation.  Great people who had interesting things to say and are very keen to listen to the users.

Final notes

Buzz of the day: SCE 2010 and DPM 2010 got RTMed.

And finally check out http://techmet.com/wiki

 

Till next

Cheers,

Mike

MMS 2010: The roadtrip

6:47 am in Uncategorized by mikeresseler

As many of you have heard, the ash cloud from the volcano eruption in Iceland is causing problems for a lot of European travelers headed to MMS 2010 in Las Vegas.  Many of us will not make it and others will arrive late.

Thanks to our operations coordinator, I succeeded in finding another flight, but I have to get to Madrid to take it.  This means I have a 1,600 km (roughly 1,000 mile) drive coming up just to get to the Madrid airport.  Luckily, my colleagues Arne Peleman and Kenny Buntinckx (SCCM MVP) will be joining me for the drive.  But still, I'm not looking forward to such a long drive.

This means that we will be a little more than 48 hours on the road (plane + drive) just to get to Las Vegas.  I'll be a mess when I arrive there. 

You might wonder why we are crazy enough to do this.  Well, the answer is simple… MMS 2010 is THE summit for everyone who deals with IT management in all its forms, whether operational, technical, implementation or architecture and design.  The people who write out the processes will also find all the information they need over there.  All the product teams and MVPs in the System Center suite will be there.  It is that moment of the year when we discuss the future of management, the newest solutions and the best practices with System Center specialists from the entire world.

And for that reason, we are doing this crazy trip to Madrid.

And for that reason, it will be terrible that many of the European specialists won’t be there…

So if all goes well, I'll be in my hotel around midnight Las Vegas time on Sunday.

So for those who are going to be there… See you then

For those who are not going to make it… I’m really sorry and hope that you can follow as much as possible through webcasts, blogs and so on…

Cheers,

Mike

SCE 2010: Part 2: Comparison & Thoughts

5:56 pm in Uncategorized by mikeresseler

Hey All,

In my previous post (link) I described in short what System Center Essentials 2010 is.  In this post, we dive a little deeper and compare SCE 2010 with Operations Manager, Configuration Manager and Virtual Machine Manager.

While Operations Manager, Configuration Manager and Virtual Machine Manager are three different products with three different consoles, SCE combines them all in one product, one console.  But SCE is built for midsize businesses, which means it doesn't contain all the functionality of its three 'big brothers'.  Here's the comparison.

SCE 2010 versus Operations Manager

[Table: SCE 2010 vs. Operations Manager feature comparison]

The table above shows the differences.

  • Monitoring of Windows Servers, Clients, Hardware, Software and Services (both)
    • The big difference is the way Essentials monitors network devices.
  • Management packs with expert knowledge (both)
    • As stated previously, Essentials uses the same management packs as Operations Manager, so no differences there.
  • Agentless Exception Monitoring (AEM) (both)
  • Add monitoring wizard (both)
  • Reporting (both)
    • First difference: Essentials doesn't have a data warehouse.  Operations Manager works with an operational database and a data warehouse database and can retrieve data for one year.  Essentials has only one database and holds a maximum of 40 days of data.
    • Although there are many reports built into the product, you can't do authoring.  Operations Manager gives you the flexibility to create your own reports, but Essentials doesn't have that possibility.
  • Branch Office Monitoring (both)
    • As already said, Essentials is a one-box solution, so if you are monitoring servers or clients in a branch office, everything needs to go over the wire, while Operations Manager gives you the flexibility to place gateways and multiple management servers.
  • Role Based Security (only OpsMgr)
    • If you want to work with Essentials, you need to be a local admin on the SCE server or a domain admin.  End of story.  Operations Manager gives you the flexibility of working with different roles, where you can give limited access to certain users.  SCE doesn't.
  • Connector framework (only OpsMgr)
    • Operations Manager has a connector framework that lets you connect the system to other tools (helpdesk systems, other management groups…).  SCE doesn't.
  • Audit Collection Services (only OpsMgr)
    • Operations Manager has something called Audit Collection Services (ACS).  With ACS, you can do audit tracking on security and save the data to a dedicated database for compliance reasons.  SCE doesn't have this.
  • Web Console (only OpsMgr)
    • Operations Manager gives you a web console where you can log on and do almost everything you can do with the installed console.  SCE doesn't have this.  If you want to work with SCE, you need access to a console.
  • Cross Platform support (only OpsMgr)
    • Operations Manager can monitor non-Windows environments, such as Red Hat Enterprise Linux.  SCE can't.

SCE 2010 versus Configuration Manager

[Table: SCE 2010 vs. Configuration Manager feature comparison]

  • Patch Management (Microsoft and Third Party) (both)
    • Although the table doesn't say so, there is a difference between SCE and SCCM.  SCCM has much more flexibility than SCE.  But everything that you can deploy as a patch with SCCM can be deployed with SCE.
  • Software Distribution (both)
    • SCCM is much more flexible and allows you to do advanced packaging.  SCE is about deploying MSIs and EXEs with some parameters; in the end, it is only capable of basic software distribution.
  • Hardware and Software Inventory (both)
    • SCE collects quite a lot but can't be extended.  If you need additional inventory, you can use SCCM, which can be extended through MOF files.
  • Branch office updates and software distribution (both)
    • Again, don't forget that Essentials is one box, so software distributions and patches fly over the wire.  OK, it is using BITS, but still, keep that in mind when choosing a solution.  SCCM can work with remote distribution points.
  • Operating System Deployment (only ConfigMgr)
  • Desired Configuration Management (only ConfigMgr)
  • Wake on LAN (only ConfigMgr)
  • NAP integration (only ConfigMgr)

SCE 2010 versus Virtual Machine Manager and Hyper-V console

[Table: SCE 2010 vs. Virtual Machine Manager / Hyper-V console feature comparison]

This table compares SCE 2010 not only with Virtual Machine Manager but also with the Hyper-V console.

  • Templates (Essentials and VMM)
  • VM Cloning (Essentials and VMM)
  • Candidate Identification (Essentials and VMM)
  • Physical to Virtual Conversion (Essentials and VMM)
  • Virtual to Virtual Conversion (Essentials and VMM)
  • Migration across physical machines (Essentials and VMM)
  • Virtualization Reports (Essentials and VMM)
  • Monitoring VMs (Essentials and VMM)
  • PRO tips (Essentials and VMM)
  • Library (Essentials and VMM)
  • Provisioning (All three)
  • VM Configuration and properties (All three)
  • VM State (All three)
  • Checkpoints (Snapshots) (All three)
  • 64 bit guest OS (All three)
  • Hardware Assisted Virtualization (All three)
  • Live Thumbnail (All three)
  • Synthetic Network Support (All three)
  • Import VM (multiple VHDs + snapshots) (Hyper-V console and VMM)
  • Configure advanced network settings (Hyper-V console and VMM)
  • Inspect Disk (Hyper-V console and VMM)
  • Export VM (Hyper-V console)
  • VMWare Management (VMM)
  • Self-service console (VMM)

 

Thoughts

Above is the comparison of SCE with the three tools (OpsMgr, ConfigMgr and Virtual Machine Manager).  I don't want to compare it with the Hyper-V console, since that is a free management console. 

If you have a mid-sized company (meaning around 50 servers or fewer and 500 desktops or fewer), you now need to make a decision: do I go for the SCE solution, which has fewer features, or do I go for the full-blown solution with all three products?  The answer to that is (as always) not simple.  For each feature noted above, check whether you really, really need it.  If you really need it and it is not included in SCE… well, then go for the full suite.  If you don't need it, consider SCE for a moment.  But what if the company grows?  What if it outgrows the 50 servers and 500 desktops?  I don't know whether this will be possible for the new version, but with SCE 2007 you could buy an upgrade path to the full solutions at no extra cost: you had already paid for SCE, and you then paid the price of OpsMgr and ConfigMgr minus the price of SCE.  So no loss there.  Again, I don't have pricing information for SCE 2010 yet, so I don't know whether they will keep that option.

Now let's look at a few features that differ in Essentials.  I will just ask some questions that can help you decide.  The answer should not be given by me, but by the company.

Differences between OpsMgr and SCE 2010

– Network monitoring: Neither product has a "great" way to monitor network devices.  If you need this, the solution won't be to upgrade to Operations Manager but to look at third-party add-ons for OpsMgr and SCE.

– Reporting: As said, OpsMgr allows you to author reports and has a data warehouse.  So the questions you need to ask yourself are: do I really need to author reports, or am I happy with the out-of-the-box reports (over 60)?  And how long do I want to keep my data: one year, or the maximum of 40 days in SCE?  Both questions are crucial for deciding.  Do you really want (or are you obliged) to keep the performance data for a server for a year?  Do you really need to retrieve an alert from a year ago?

– Branch office monitoring: This can be a tricky one.  How is the connection to your main office?  Still using dial-up?  SCE might not be a good option.  Having a very slow WAN link that is already overused for other things?  SCE again might not be a good option.  On the other hand, can I deploy additional OpsMgr roles to that branch office?  Do I have a (virtual) server over there that can do the trick?

– Role-based security: An important one!  Who needs access to the console?  Does it need to be limited for some users?  Then SCE is not an option.  Do you have just a few admins who all have the same rights?  Then nobody cares…

– Connector framework: Are you going to connect your monitoring solution to an external solution?  Then SCE is not an option anymore.  If you want alerts (for example) to appear immediately in a helpdesk system, you need to consider Operations Manager (and check that your helpdesk solution can connect).  If this is not important, well, another feature gone :-)

– ACS: Do you need to audit your security?  If you don't have a solution in place, ACS can help you, but then you need OpsMgr.  Otherwise, the options remain open.

– Web console: If you need to view alerts, performance and other items through a web console, OpsMgr does the trick.  On the other hand, this usually means you also need role-based security.  If your admins have a console installed locally (we call these consoles the Outlook for Admins) or published through RDS or Citrix, they can also access it anywhere.  Make sure you check with your admins whether they really need it or whether it is just something "nice" to have.

– Cross-platform management: Do you need to monitor non-Windows environments?  Are they supported by the cross-platform agents from OpsMgr?  Are there third-party add-ons that deliver the same functionality?  Make sure you know these answers before deciding.

Differences between ConfigMgr and SCE 2010

– Patch management: How much of the patch management do you want to automate?  If you want to automate the entire patch management process, including installing and rebooting your servers, then SCCM is the way to go.  But if you don't need that, and you are perfectly happy with user patch management that is almost fully automated (meaning you just approve the updates you don't have an auto-approval rule for) and more manual server patch management, then both products can do the same.  (But keep in mind that patch management is handled quite differently in SCCM.)

– Hardware and software inventory: A simple question: what do you want to know about your hardware and software?  If you don't need some really, really specific items that require adjusting MOF files or writing your own WMI queries, SCE will do the job.  If you need to know more, go for SCCM.  It all depends on how important that data is.

– Branch office updates and software distribution: See above; think about the connection bandwidth again.  Don't forget that it uses BITS and will download its updates during the day when traffic is low, but still, this can be crucial for the decision.

– Operating system deployment: Do you need operating system deployment?  Yes?  SCE doesn't have it.  But wait, before you shout SCCM!  Do you need zero-touch deployment, meaning you don't touch anything and the computer boots through Wake-on-LAN or Intel vPro, or is light-touch deployment (meaning, in the lightest case, pressing F12) enough?  If LTI is enough, then Bing MDT 2010 ASAP.  (And put it on the same server as SCE ;-))

– Desired Configuration Management: Do you want DCM?  With it you can create baselines (for example: Windows Server 2008 R2, IIS, PowerShell enabled, HIT driver version x, latest patches, AV version x, etc…) and have a tool that checks whether everything is OK (you can do the same for your workstations).  If you want that, go for SCCM.  If you are not interested, this is another feature you don't need.  (By the way, this is a very nice feature; it takes time to deploy, but it is still very nice ;-))

– Wake-on-LAN: SCCM has it; SCE doesn't.  SCCM can use Wake-on-LAN for its purposes.  If you want this, go for SCCM, but first ask your network team if they allow it (you can't believe how many network people start shooting the moment I drop the words Wake-on-LAN… Welcome to the real world, gentlemen.  Wake-on-LAN is great to have, and not every workstation has Intel vPro. :-))

– NAP integration: SCCM has NAP integration.  With the right policies this is a great feature.  Imagine that a workstation is denied through NAP and quarantined to a separate VLAN.  At that moment, SCCM can be used to automatically push all the requirements.  User disabled the anti-virus?  Don't think so.  User doesn't have the latest patches?  You guessed it.  If you need this, SCCM is the tool.  If not (because you use NAP but update a quarantined workstation another way), then we lose another feature to choose from :-)

Differences between Virtual Machine Manager and SCE 2010

Before I start, one important statement.  I said I'm not going to compare the Hyper-V console with SCE 2010, but do keep in mind that the features SCE can't do, and only Hyper-V can, require more work.  It's much easier to do these things from VMM than through the Hyper-V console.  Why?  Well, you need to know on which host the virtual server resides.  If you have a limited set of Hyper-V hosts, this is still perfectly possible.  If you have a lot of Hyper-V hosts, start considering Virtual Machine Manager, but then again, you are probably over the 50-server limit…

– VMware management: Do you also need to manage virtual servers running on ESX?  Use Virtual Machine Manager.  It connects through your Virtual Center, and you can do everything Virtual Center can.

– Self-service provisioning: This is a fantastic feature if you have people who need to be able to create their own servers, or if you want certain people to be able to restart their own servers and follow the boot process.  This is quite often used in development environments where developers have their own environments (and infrastructure guys don't want to restart a server every five seconds because it is blocked by bad code or a badly formed SQL query).  But again, do you need this in your environment? 

Conclusion

Before deciding which tool to use, make sure you go through all the questions.  SCE is a very powerful tool with the advantage of a single console, but it lacks features compared to its big brothers.  It is also a one-server solution, so flexibility is limited: you can't separate roles onto different servers.  If you doubt whether one server is capable of managing 50 servers and 500 desktops, I can guarantee you it is.  Size it well and it won't be a problem.  But think about the features, because they should determine whether you need SCE or the others…

Just my 2 cents,

Cheers

Mike

SCE 2010: What can we expect part 1: Overview

6:38 am in Uncategorized by mikeresseler

Hey All,

System Center Essentials 2010 will be released soon, and a few weeks back I attended some live meetings about this new product.  Since I have used and implemented SCE 2007 a few times, I wondered how the product has evolved.  SCE 2007 is quite a nice product, but it had its flaws and shortcomings.  So in the next few posts, I'm going to describe some interesting features of the product.

So let’s start with what System Center Essentials 2010 exactly is and what the requirements are.

SCE 2010 is called a unified solution for midsize businesses.


Unified experience: SCE 2007 was already quite a unified experience.  You could manage your software and updates through one console, create your line-of-business application packages through one console, and monitor your servers and workstations through one console.  As said, it had its shortcomings, but for 90% of the cases this was enough.  I heard many times that you can't deploy every piece of software with System Center Essentials.  That's true, but then again, if you can't deploy it with SCE 2007, the software needs advanced techniques to be deployed, and the question is what you are going to do at that point.  Any package that a vendor delivers as an MSI can be deployed.

Proactive management: SCE 2007 was the "little brother" of Operations Manager.  It allowed you to monitor your environment and do proactive management.  It lacks a data warehouse compared to Operations Manager, but if you need that, you probably need more than SCE.

Increased efficiency: Working with one console and being able to perform many management tasks from it is indeed an important asset of this tool.  With SCE 2010 you will also be able to manage your virtual environment from that one console, which makes it even more efficient.

 

First thing I learned: they increased the number of servers that can be managed.  With SCE 2010 you can manage 50 servers (instead of the previous 30) and 500 clients.

Second, System Center Essentials 2010 will need SQL Server 2008 instead of the SQL Server 2005 that SCE 2007 used.

[Diagram: SCE 2010 architecture, built on Operations Manager 2007 and WSUS 3]

This picture represents the architecture overview for SCE 2010.  Here you can see very well what SCE 2010 is built on: a combination of Operations Manager 2007 and WSUS 3.  Those who worked with SCE 2007 know that not many management packs were available in the SCE 2007 catalog, but that it was quite easy to check whether a management pack from Operations Manager 2007 SP1 could be used.  That is something many of us did and still do.  My first question here was whether this would still be the case, or whether the management pack library for SCE 2010 is going to be maintained better than before.  The statement they made is clear: Essentials uses the same management packs as Operations Manager 2007.  So that's a good thing.  A very good thing.

Platforms SCE 2010 can be installed on:

  • Windows Essentials Server
  • Windows Server 2003 Standard, Enterprise SP2 or later
  • Windows Server 2008 Standard, Enterprise
  • Windows Small Business Server 2008 (note: x64 only)
  • Windows Essential Business Server 2008 (note: x64 only)

Of course, the management console can be installed on remote machines, and the database can also be placed on another server (SQL Server 2008 Express, Workgroup, Standard or Enterprise, SP1 or later).

Also very important to know: if you want to work with the virtualization part of Essentials 2010, you need to install it on Windows Server 2008 or R2, and only on an x64 platform.

Managed nodes

What can we manage with Essentials 2010?

  • XP SP2 or later
  • Vista Business, Enterprise or Ultimate
  • Windows 7 Professional or Ultimate
  • Windows Server 2003 Web, Standard, Enterprise SP2 or later
  • Windows Server 2008 Standard, Enterprise
  • Windows Essential Business Server 2008 (note: x64 only)

Prerequisites

What is needed to run this software?  I'm assuming here that you want the full-blown solution, including virtualization management:

  • 2.8 GHz or faster CPU
  • 4 GB of RAM
  • 20 GB of disk space + an additional 150 GB for virtualization + 100 GB for the WSUS updates if stored locally
  • Windows Server 2008 with IIS 6.0 or 7.0 and .NET 3.5 SP1
  • .NET Framework 3.0
  • Active Directory
  • Domain Admin / Group Policy Admin

Now, this Group Policy Admin right is optional, but I've been in situations where I couldn't create the GPO for SCE 2007.  That is quite messy to manage afterwards, so I strongly suggest you have the rights to create a GPO during the implementation of SCE 2010.  SCE works with Group Policy to pass the correct settings to the clients, and it is quite handy that this can be arranged for you.

All on 1 server

SCE 2007 was a one-server solution.  It didn't have the flexibility to spread roles over multiple servers like its big brothers.  For SCE 2010, this remains the same: put everything on one box.  The only two things you can separate are the management consoles (remember, only 5 consoles can be used at one time) and the reporting server / databases, which can be placed on another server. 

That’s it for now, next post: SCE 2010 compared against Operations Manager, Configuration Manager and Virtual Machine Manager.

Cheers,

Mike

SCE 2007: Agent deployment issues

6:55 pm in Uncategorized by mikeresseler

Hey All,

Today I was with a customer and they told me their system administrators weren't able to deploy an agent to a certain workstation.  The workstation had just been deployed clean with MDT.

The error they got is the following:

The MOM Server could not start the MOMAgentInstaller service on computer <computername> in time

I also received error 80070102.

I had never seen this error before, and the logs didn't tell me much, except for a note somewhere in the installation log file that the firewall exceptions couldn't be created.

This was quite strange, since I deploy the firewall exceptions through GPO (it is not only for SCE, so I used one GPO for all firewall exceptions).

So I started surfing on the net for solutions.

There wasn't much info out there, but finally I came across a blog post from Kevin Holman:

http://blogs.technet.com/kevinholman/archive/2009/01/27/console-based-agent-deployment-troubleshooting-table.aspx

OK, since this was one isolated case, I could have gone to the user and installed the agent manually, as many people on the internet advised, but since I'm quite lazy I didn't feel like it.

According to Kevin, and I quote:

Sometimes – the Windows Firewall service – even if disabled – will have a stuck rule. Run: (netsh advfirewall firewall delete rule name=”MOM Agent Installer Service”)

So I fired up psexec from my workstation and checked whether this could be true.  And yes, there was a rule called "MOM Agent Installer Service".

Since the agent wasn't installed, the rule wasn't needed, so I used the command to delete it.

I tried the remote installation again, and it worked.
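For reference, the whole check-and-delete sequence looks something like this from an elevated PowerShell prompt; a minimal sketch, where PC001 stands in for the real workstation name, psexec is the Sysinternals tool, and the netsh syntax is straight from Kevin's post:

    # Check whether the stuck rule exists on the remote workstation (PC001 is a made-up name)
    psexec \\PC001 netsh advfirewall firewall show rule name="MOM Agent Installer Service"

    # If it shows up even though the agent was never installed, delete it
    psexec \\PC001 netsh advfirewall firewall delete rule name="MOM Agent Installer Service"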

Thanks, Kevin :-) I didn't have to leave my chair today.

Cheers,

Mike

Wrong assumptions about the System Center Suite

6:45 pm in Uncategorized by mikeresseler

Hey All,

It's Friday (TGIF) and I'm sitting on the train (for quite a long time) heading home after an exciting but exhausting week.  I've had some nice successes with my customer, and I went to Techdays 2010 in Belgium.  Sitting here, I was thinking about a few things that were said at Techdays about System Center, and also about what the new products will mean for system administrators in their day-to-day work.  And I realized that many people have the wrong ideas about the System Center suite.  So here are my thoughts on some wrong assumptions.

 

Wrong assumption #1: System Center is a technical solution

 

Every time I'm talking to IT pros, I basically meet two kinds of people.  The ones who are convinced about System Center tell me that it can do practically everything.  The ones who are not yet convinced tell me that it is a great suite but needs some additional work, and most of them either use it in their environment or are certainly looking at it on a constant basis.  But this is where they all go wrong.  We look at it as a great tool that can do great technical things.  So what are most people doing?  They integrate it as a technical solution, deploy as many management packs, DCM rules, packages or whatever as possible, and finally deploy all the agents.  OK, we are monitoring everything, deploying everything, backing up everything… how cool is that? 

Yes, it is cool, but we should also look at it from a business perspective, and preferably BEFORE we start to implement the solution.  Why?  To deploy it as a monitoring tool for services, or a deployment solution for services, or…  Think about it.  What if you can tell your peers (and I know I'm using an easy example here) that you monitor the email system as a whole?  They will like it (and probably ask for a monthly report about it).  Imagine instead that you tell your peers you can use Operations Manager to monitor all your Exchange servers, both the virtual and physical ones, your switches, your internet line for sending and receiving email, your gateway, your anti-virus, your…  What will the response be?  If you are lucky, they will say "Good for you", and if not they will say "I don't care", or even worse, "Did I pay that much money and that's the only thing you can do with it?"

Back to the "email service".  Peers or managers or directors or whatever title your bosses have will not care what technical magic you are doing.  They only care about the fact that their email is working, that it is safe, and that they can't catch viruses.  That's it.  So if you say that you monitor the email service (call it the messaging service, that sounds even better to them) and can give a report about it, they will say "Job well done" and you are on your way to buying your next System Center product 😉

Doing it like this gives you additional benefits.  Think about it: the only thing you need to monitor is the "messaging service".  As long as it runs, no problem.  So if you have failover, clustering, DAG, live migration and who knows what extra protection, you can deliver your 99.99% messaging service availability almost every time.  Even if one of the components goes down, you get an alert, your SLA is not violated, you fix it with the knowledge in the system, case closed, nobody cared, no time pressure, and no slap on the head because you didn't meet your SLAs.
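(A quick back-of-the-envelope to show why that matters: a 30-day month has 30 × 24 × 60 = 43,200 minutes, and 99.99% availability leaves you 0.01% of that, so only about 4.3 minutes of allowed downtime per month.  You really want the platform catching and fixing a failing component before anybody even picks up the phone.)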

 

Wrong assumption #2: Opalis takes the ITPro’s work away

A new product is coming to the System Center suite.  For the moment it is still called System Center Opalis (no idea whether they are going to rebrand it) and it is the talk of the town.  Opalis is what they call an ITPA (IT Process Automation) or RBA (Run Book Automation) tool.  A few weeks ago, I was telling a colleague about Opalis, and the first thing he said was "Damn, something to create automatic ITIL processes, keep it away from me as long as possible".  Wrong assumption.  While it can perfectly well be used to implement ITIL processes (or COBIT, or MOF), it can also do general maintenance.  So, he says, it is a "scheduler".  Wrong again; true, it can do that, but it is also event-driven.  "So what is it?" he asked.  I explained to him that the purpose of this tool is to create processes in such a way that they take away as much of the day-to-day work as possible and automate solutions to problems when things go wrong.  Before I could say more, he drew his gun and shot me down :-).  The reason?  I was talking away his work.  He would get fired because he wouldn't be necessary anymore.  And if it wasn't him, it would be one of his colleagues.  Wrong assumption.  Although he was wrong three times, he is quite a clever IT guy.  I told him that people are still necessary to build these processes, and that these processes don't always work.  I'll explain that a little later with an example given by Maarten Goet yesterday at Techdays in Belgium, which I found very good.  Anyway, what Opalis will be doing is:

  1. Try to solve issues on its own
  2. Fill in data (ticketing systems, alerts, …)
  3. Do day-to-day maintenance on its own, without a user intervening, preventing human errors (and we all know those happen most often when you have to do repetitive work)
  4. Be faster than humans! Think about it: if you can automate many things, a problem gets resolved much faster than you could manage as an IT pro.  And is that a bad thing?  No way; the fewer fires you need to put out, the better.

So what is the IT pro going to do instead?  Simple.  As I said, he is a clever guy, so he can use his time to improve the IT infrastructure and processes.  He can use his time to think actively about the requirements of the users, his customers.  If he's lucky, he will even have the chance to discuss the future roadmap of the company.  And that is exactly what he should do.  No, chances are he won't be taking the decisions.  Hey, he probably won't even be in the meeting where the decision is taken.  But he will be actively involved in finding a good solution for the business requirements and assist his manager in finding and defending a good solution towards management.  The manager who doesn't want this is, in my opinion, not a good manager.  And the IT pro who doesn't want time to investigate new things on the market or study business requirements, but only wants to fight fires?  I have yet to meet that guy.  And even if he doesn't get to do these things, it will still mean less evening work, less weekend work, and less stress.  Hooray!

Now, about that example.  Maarten showed us a process where Operations Manager alerted that a certain Windows service went down.  The process created a ticket in the helpdesk system, updated the alert in Operations Manager, tried to start the service again, and on success updated the alert and the ticket; case closed.  Nobody did anything except see that there was a new ticket in the helpdesk system.  But Maarten also created the path for when the service didn't start again.  Then the ticket got escalated: an SMS was sent to an engineer (in the demo a pop-up, but you get the picture, right? :-)) and now it was up to the engineer to troubleshoot why the service wouldn't start anymore.  Still, the first step, restarting the service, had already been taken and the ticket already existed, so the engineer gained at least 15 minutes, if not more.
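Just to make that flow concrete, here is a minimal PowerShell sketch of the same restart-or-escalate logic.  To be clear, this is not what Opalis looks like inside (it is a graphical workflow designer); the service name is just an example, and the ticket/SMS steps are placeholders for what the runbook would actually wire up to Operations Manager and the helpdesk system:

    # Stand-in for the runbook logic; 'Spooler' is just an example service
    $svc = Get-Service -Name 'Spooler'
    if ($svc.Status -ne 'Running') {
        # Step 1: the automated remediation attempt
        Start-Service -Name 'Spooler' -ErrorAction SilentlyContinue
        $svc.Refresh()
        if ($svc.Status -eq 'Running') {
            # Success path: update the alert, close the ticket, nobody gets woken up
            Write-Output 'Service restarted; ticket updated and closed'
        } else {
            # Failure path: escalate to a human, who starts with the restart already tried
            Write-Output 'Restart failed; ticket escalated, SMS sent to the engineer'
        }
    }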

 

Wrong assumption #3: The suite are stand-alone separate products

The System Center suite consists of a number of applications, and they don't work together.  A difficult one, because it is both true and not true.  Yes, you can perfectly well set up an infrastructure where all these tools run next to each other without working together.  In fact, I hear that many projects are handled this way.  Why?  Probably because it is much cheaper and the project is implemented much faster, so you have quicker results.  This is (of course) short-term thinking and should be avoided.  Operations Manager and Virtual Machine Manager work perfectly together.  Configuration Manager and Operations Manager work perfectly together.  And while you are busy, deploy your virtual machines with Configuration Manager, just the same way you deploy your physical machines.  Don't forget to use the patch management within Configuration Manager and view the failures in Operations Manager.  Data Protection Manager and Operations Manager?  Check.  Data Protection Manager and Virtual Machine Manager?  Check.  Deploy your different agents with Configuration Manager?  Check.  (I can continue for a while, but I hope you get the picture.)

The statement is also true, though, because the suite was lacking two important things:

  1. A central helpdesk system (or service desk, whatever term you use; and don't get me wrong, I refuse to look down on the "helpdesk".  I invite every engineer, architect or whoever to spend a few weeks in an average helpdesk; they will start smoking and drinking by day 3 ;-))
  2. An automation system

Hey, number 2 was just discussed, and number 1 will be there very soon.  Say again that Microsoft has no vision; they just proved you wrong :-)

But for all those guys who like to shoot at Microsoft: it was already possible before.  There were connectors to other products that could do the job just fine.

 

Wrong assumption #4: It is only for technical guys

No, no, three times no.  Almost every product in this suite can be organized so well that whoever needs access to whatever data, you can arrange it WITHOUT risking those persons screwing everything up.  This takes me back quite a few years, to when I wanted to delegate the right to create, update and disable (not delete) users in Active Directory to the secretary of Human Resources.  Reasons:

  1. I'm lazy and type names wrong too many times :-)
  2. By the time I knew somebody new had started at department x, they were already standing at my desk asking for a username (don't you just hate that?  But hey, another reason for Configuration Manager and OS deployment, or MDT if that fits your needs)
  3. I never had the time (I used to be an excellent firefighter) to drop everything and create that user.  Not to mention that I always wanted to see the paperwork from HR; you never know who is trying to fool you :-)

Just about every colleague I had started shouting that this was about the most stupid idea I could have (and I can assure you, from time to time I really can have stupid ideas ;-)).  Why, I asked?  Answer: nobody wanted the secretaries (there were actually two) to do anything inside the heart of our infrastructure.  Now you have to remember that identity management solutions at the time were either extremely expensive or basically unusable, so that was not an option.  So I convinced them by using delegation of rights, tutored the secretaries for two hours, and let them use it, closely monitored by my colleagues.  After one month, nobody cared anymore, and none of us needed to create users anymore.  Only when a user needed to be deleted (which I think is bad practice…) did we need to step in.  No more changing AD data because a user had a new address, or changed her last name to her married one, and so on.

Back to 2010 and System Center.  Why not delegate certain jobs to other people?  Wouldn't it be great if the helpdesk could use Configuration Manager to take over desktops, or to deploy workstations?  (I even have a scenario where the guy from the warehouse accepts the new workstations, takes the attached email sheet received from VendorX (I can't advertise here, right? :-)), puts it into SCCM, takes the computers out to a desk, plugs in the cable, and magic happens… 50 new workstations ready to be given to new users.)  Or what about the SQL team?  Give them a limited view in Operations Manager so they can see their alerts.  And how about some managers?  Give them reports on software metering, or the number of alerts per month (imagine this: a manager sees 500 alerts in one month, but no user complaints and, more importantly, he never noticed anything… Meet your new nickname: Speedy Gonzalez).  I can probably continue with hundreds of examples, but just think about your own situation at the office.  How many times do you have to deliver data or reports or whatever to managers, other IT teams and so on?  Give them the rights to look for themselves.  Then they won't bother you, and you get some more free time.

 

Wrong assumption #5: First we implement a project, then we think about System Center.

Yeah, a common mistake, but a big one.  Try to convince your co-workers, peers and whoever is involved to think about management up front.  I promise you, you will gain from it.  We actually sometimes use Operations Manager to take human errors out of our implementations.  If it is there, use it.  Oh, and never forget backup before you start a new project.  I hate it when teams skip that, and it always costs me way too much time to sort everything out afterwards.  First think, then touch the keyboard.

 

Conclusion

Do you agree with me?  Some will say no, others will say yes, and others will say yes, but…  And that is normal.  You should look at the System Center suite as a framework.  Don't just install everything with next-next-next-finish; think about it before you start.  Think about the advantages you can have if you model the suite to the business needs.  You will save valuable time to spend with your family, investigate new stuff, or get a go from your boss to be at Techdays next year and come over to the SCUG booth to have a discussion with me :-). 

If you have comments about these Friday evening thoughts, just shoot.  Don't agree?  Fire away.  The more people, the more ideas, and hey, the better the results.  Some of you probably have even more wrong assumptions about System Center.  I'd be glad to hear them.

Hoping that you do the things above so that you don't have to read this over the weekend 😉 Oh, and sorry about my prose; it is Friday evening after all, and I really just typed out my thoughts :-)

Cheers,

Mike

Techdays 2010: What I saw

6:34 am in Uncategorized by mikeresseler

Hey All,

Techdays 2010 Belgium is over.  After two days of intensive sessions, talks and meetings with great people, it's back to the day-to-day work again.

So here are my thoughts about the two days (I didn't go to the pre-conference).

I arrived way too early (how come there is always traffic when you are in a hurry, and when you actually factor traffic into your travelling time, there is none whatsoever? :-)), but I started the day with some breakfast and a lot of coffee, and went over to the SCUG booth (of course…).  After talking to a lot of people, it was time to start.

It all started with the keynote from Luc Van de Velde, called "IT in a Transformative Time: how we can change the game".  It has been years since I saw a keynote this good.  Yes, it contained the traditional "what is Microsoft up to", "how good Windows 7 is", "what the future will bring", …  But it also brought an interactive session with three great speakers who gave a great performance on stage and demonstrated, in a story-telling way, what the future will bring and what the new Microsoft tools will give us.  I especially liked the part where I could work from my lazy chair at home and still have all the rich tools with me.  I'm already dreaming of working this way, with a nice glass of wine next to me.  For those who want to view it (and I think you should): http://edge.technet.com/Media/TechDays-2010-Keynote-IT-in-a-transformative-time-how-can-we-change-the-game/

Things I certainly remembered from this session:

  • BitLocker To Go
  • System Center for Azure
  • Branch Cache

Next session: Corey Hynes, Managing Server 2008 R2 and Windows 7 with Windows PowerShell V2.

This session was all about PowerShell, and in particular remote PowerShell.  With a few quick demos and some interesting tools, Corey showed us how easy it is to do remote work with PowerShell.  To quote him: "If you want to mean something in IT the next few years, start learning PowerShell or IPv6…"

Things I certainly remembered from this session:

  • Modules (Import-Module, Get-Module, …)
  • Troubleshooting packs (certainly worth checking out; you can troubleshoot remote computers with these things)
  • GPO backup with two commands on one line: Get-GPO -All | Backup-GPO (see the sketch after this list)
  • Steps to enable remote management: through the GUI (Win2k8 R2) or PowerShell (Core / Windows 7)
  • The firewall needs port 5985 open for HTTP
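To give you an idea, here is a minimal sketch of those last items, assuming the standard Windows Server 2008 R2 / PowerShell v2 cmdlets (the backup folder and server name are made up, and the folder must already exist):

    Enable-PSRemoting -Force                          # one-time setup on the box you want to manage (WinRM, HTTP on port 5985)
    Import-Module GroupPolicy                         # the GPO cmdlets ship in this module on 2008 R2
    Get-GPO -All | Backup-GPO -Path 'C:\GPOBackup'    # the one-liner from the session: back up every GPO in the domain
    Invoke-Command -ComputerName SRV01 { Get-Service }    # remoting in action: run a command on a remote machine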

Yep, I think I’m going to need to invest some time in PowerShell because I saw really cool stuff.

Next up: lunchtime, which was not bad, but not great.  Those catering businesses should really learn that we work in IT; we need heavier food :-)

Next session: Sneak-preview of Windows Server 2008 R2 and SP1, by Bryon Surace.

This was an "undisclosed session", and the reason for that was that they couldn't announce up front that there was going to be a sneak preview of SP1.

What we saw was a history of (Microsoft) virtualization throughout the years, with the last 45 minutes or so spent on SP1. 

And yes, SP1 contains what everybody expected: Dynamic Memory.  And they thought it through.  You can configure your virtual machines with a startup amount of RAM and a maximum.  This doesn't mean your server will keep at least the startup RAM; it will go lower if it can.  But they also built in a kind of buffering, so that certain servers can get a block of memory very fast.  You can set the thresholds and the importance of each server separately, which I think is a good thing.

Also in SP1: RemoteFX.  If you are running Windows Server 2008 R2 as the terminal server (yeah, I know, RDS…) and Windows 7 as the client (you need RDP 7.1), you can have a full-blown remote Windows 7 desktop with all the features in it.  Silverlight, DirectX, Windows Media, Aero Glass, Flash… it will work, and fluently.  How did they do it?  You can now configure your virtual machine with a "3D graphics card".  Something to watch out for.

Things I certainly remembered from this session:

  • Dynamic Memory
  • Memory Priority
  • RemoteFX
  • 3D graphics card in a server :-)

Next up: Kurt Roggen with Building Hyper-V in a Real-Life World.

To be honest, not exactly what I expected.  I figured this would be tips and tricks for a Hyper-V implementation, but it was more a session on how to size your Hyper-V environment to make the best hardware choices.  Looking at it afterwards, this was a nice approach for an architect, and I certainly took some points with me.  His point of view on asking the hardware vendor the right questions was a good approach, as long as you know what you need.

Things I remembered:

  • Second Level Address Translation (make sure that you have that now!)
  • How many NICs?  And what will that do to your switches (cables!)?
  • VLANs and network teaming
  • VMQ and network teaming
  • TOE
  • Jumbo frames

Then I headed to Rhonda Layfield for Deploy Windows 7 Using Microsoft's FREE Deployment Tools.

Rhonda is an unbelievable speaker.  She threw 75 slides and 6 demos at us in 75 minutes, and at the end of the session, it was all clear and understood.  She is a passionate speaker, and explained many possibilities and pros/cons of MDT and WDS.  Since I work with MDT quite often, not much was new for me here, but it was certainly a confirmation, and hearing people talk afterwards, they all agreed this was a session not to miss.

Things I remembered:

  • WIM vs. VHD (VHDs not supported in MDT 2010… yet?)

And with that, the first day ended for me.  There was one more timeslot, but I had to attend something else.

Day 2 also started not so well, since I had to go to the office for a few hours.  Luckily, I could wrap up quickly and was in time for session 2.

Session: WMI for the SCCM Admin by Kim Oppalfens

Yep, our very own SCUG co-founder, MVP and SCCM expert gave a session at Techdays.  What we got there was, to start with, a short but thorough overview of WMI, the tools, and some good pointers and tricks.  And then came the magic: he actually succeeded in making SCCM event-driven instead of schedule-based.  This was certainly a great session, but maybe a little hard for most of the people in the room.  People from MMS, if you read this, this is a must-have session at MMS :-)  Already waiting for the session to come online.

Things I remembered (with a small WMI taste after the list):

  • WbemTest, WMI Explorer, WMIC, Policy Spy
  • Synchronous, Asynchronous and semi-synchronous WMI calls
  • Licenses that expire on the 1st of April :-)
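If you have never poked at WMI from PowerShell, here is a tiny appetizer of the kind of queries the session built on (root\ccm is the namespace where the ConfigMgr client keeps its data; both class names are the standard ones, but treat this as my sketch, not as anything Kim showed):

    # A general WMI query: basic OS info from the standard CIMv2 namespace
    Get-WmiObject -Class Win32_OperatingSystem | Select-Object Caption, LastBootUpTime

    # ConfigMgr-specific: the SMS_Client class in the client's own namespace
    Get-WmiObject -Namespace 'root\ccm' -Class SMS_Client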

Lunch again, and this time it was sandwiches (those little ones, you know…).  Most heard comment: they should have provided plates so you could take a few at once and didn't have to run to the table all the time 😉

Next on my plate: Performance, Resource, and Security Optimizations for Hyper-V and System Center Virtual Machine Manager by Bryon Surace and Belgium’s Arlindo Alves.

This session was a 75-minute trip with a lot of tips for implementing Hyper-V.  If you missed it, it is certainly worth watching if it comes online.

Things I remembered:

  • Tips for AV (exclude the VHDs)
  • Use the SCVMM library
  • Always create a Windows Server 2003 template with 2 CPUs; downsize later if necessary (so you always get the multiprocessor HAL)
  • Performance of dynamic VHDs is getting close to fixed VHDs
  • Pass-through performance is slightly better, so only use it for IO-intensive workloads
  • Snapshot chaining performance is much better in R2
  • Always close the VM connection windows, they take resources!
  • Don't use the root partition for anything other than Hyper-V, management consoles and AV
  • PRO tips: exclude the SCVMM server from PRO actions if it is running virtual

Next up: Maarten Goet with Opalis.

Maarten is a well-known speaker on System Center products and gave a session about Opalis.  He started by explaining what Opalis is and what it can mean for an IT environment.  The dynamic data center is getting closer and closer :-)

Things I remembered:

  • Microsoft has a lot of work to do
  • DPM is not yet on the roadmap (as a DPM fan, I’m disappointed)
  • Opalis is a continuous work-in-progress product
  • But you can do great stuff with it…

The final session for me (again, I had some other business to attend to… damn): Virtualization for the End User – Implementing VDI with Windows, by Corey Hynes.

When you give a presentation about VDI and start by telling the audience they will be convinced NOT to use VDI by the end of the session, you have my attention :-)  In 75 minutes, we got an incredible overview of the VDI solutions from Microsoft and the Windows + Citrix offering.  He explained all the different layers and gave us great information on how to handle a project implementation.  Corey gave us a lot of pointers and showed where the pitfalls are.  By the end of the session, he had certainly convinced me NOT to use VDI except in specific cases.  And I think he is right (funny enough, we had this very discussion among the SCUG members just before this session).  VDI is certainly something to look at and will give a lot of advantages in many cases, but it won't always be a great solution.  Make sure you understand the requirements of the project very well before deciding on VDI, plain RDS or XenApp.  He ended with a funny movie about one computer that provisioned 250 workstations over one gigabit UTP line.  And the damn thing worked :-) http://youtu.be/IWGMwmv13UE

Conclusion:

Some great sessions, some good sessions, and others where I had hoped for more or already knew the content.  Still, considering the audience, I think it was a great match of sessions and speakers.  Keep up the good work on selecting your speakers.  The food could be better (or maybe I'm just too attached to good food :-)), we had some nice questions at the user booth, I met some great people, and I even got interviewed at the SCUG booth (I hope they lose that movie ;-)).  The conference bag was lousy (hey, there is a crisis after all), but the coffee mugs made up for it :-).  As a true System Center fanatic, I of course missed System Center sessions.  So for next year, Arlindo: get JBuff over here or contact us for DPM and SCE sessions :-)

Cheers,

Mike