Channel: Stefan Roth

OMS – Agent for Linux Troubleshooting Help


In my previous post I introduced the OMS Agent for Linux. This time I would like to give you some troubleshooting starting points. There are countless ways errors can occur, so it is nice to have at least a consolidated list of where to find the log and configuration files. This should give you a pretty good overview of the most important places to look. For detailed configuration scenarios, read the documentation on GitHub.


Log file paths:

In general, the logs for the OMS Agent for Linux can be found at:

/var/opt/microsoft/omsagent/log/

The logs for the omsconfig (agent configuration) program can be found at:

/var/opt/microsoft/omsconfig/log/

Logs for the OMI and SCX components (which provide performance metrics data) can be found at:

/var/opt/omi/log/ and /var/opt/microsoft/scx/log

Logs for the DSC setting can be found at:

/opt/microsoft/omsconfig/Scripts/



Specific log files:

The log files for omsagent (fluentd) can be found here:

/var/opt/microsoft/omsagent/log/omsagent.log

The log files for onboarding & certificates:

/var/opt/microsoft/omsagent/bin/omsadmin.log

The log files for the omsconfig (DSC) feature:

/var/opt/microsoft/omsconfig/omsconfig.log
/var/opt/omi/log/omiserver.log

The log files for performance counter issues:

/var/opt/microsoft/scx/log/scx.log
/var/opt/omi/log/omiserver.log



Specific OMS agent tests:

Operating system namespace probe on OMI agent:

/opt/microsoft/scx/bin/tools/omicli ei root/scx SCX_OperatingSystem

Agent namespace probe on OMI agent:

/opt/microsoft/scx/bin/tools/omicli ei root/scx SCX_Agent

If you want to display the desired configuration:

sudo su omsagent -c /opt/microsoft/omsconfig/Scripts/GetDscConfiguration.py

If you want to test desired configuration:

sudo su omsagent -c /opt/microsoft/omsconfig/Scripts/TestDscConfiguration.py



Configuration files:

If you want to configure the Syslog collection edit one of these files, depending on your distribution:

/etc/rsyslog.d/rsyslog-oms.conf

/etc/syslog.conf

/etc/rsyslog.conf

/etc/syslog-ng/syslog-ng.conf (SLES)

If you want to configure general agent settings:

/etc/opt/microsoft/omsagent/conf/omsadmin.conf

If you want to configure performance counters, alert settings for Zabbix and Nagios, and container data:

/etc/opt/microsoft/omsagent/conf/omsagent.conf

If you want to configure omiserver:

/etc/opt/omi/conf/omiserver.conf

If you want to configure omicli:

/etc/opt/omi/conf/omicli.conf



General problems & solutions:

image

(Source: Microsoft)

You can find some more OMI-specific troubleshooting steps here: http://social.technet.microsoft.com/wiki/contents/articles/19527.scom-2012-r2-manually-installing-and-troubleshooting-linuxunix-agents.aspx


Filed under: Azure Operational Insights, Configuration, OMS, Recommended, System Center, Troubleshooting, Xplat

OMS – Price & Size Calculator


image

You might have already heard of Operations Management Suite (OMS), or you are already using the free OMS version, which is great apart from its limitations :). Now you decide to actually buy licenses for your company and you don’t know how much they will cost. Luckily, Microsoft has created an online calculator to estimate the cost and the actual services you get. Navigate to http://omscalculator.azurewebsites.net/ to get an overview of which license model is appropriate for you.

Data gathering page…

image

…and the actual comparison between the two license options…

image

Enjoy!


Filed under: OMS, System Center, Tool

PowerShell – Remote Desktop Cmdlets “A Remote Desktop Services deployment does not exist…”


PowerShellBanner

Recently, while automating some cool stuff, I needed to create a PowerShell workflow for deploying VDI clients using Windows Server 2012 R2 Remote Desktop Services. One of the first things I always do is check the existing PowerShell support, and I found that there is a large number of cmdlets available for managing RDS. So my first thought was: this is going to be an easy walk in the park. Well, not really…

One of the first things I wanted to know was which user is assigned to which client. The Get-RDPersonalVirtualDesktopAssignment cmdlet gives you this information when you provide the connection broker and collection name…

Get-RDPersonalVirtualDesktopAssignment [-CollectionName] <String> [-ConnectionBroker <String> ]

Because I will execute the script as a PowerShell workflow from a remote machine (SMA) using WinRM, I did some tests and used Invoke-Command for some PowerShell Remoting just to get started. We usually develop PowerShell workflows starting with the core functionality and then wrap everything else around it, like logging, error handling and the PowerShell workflow structure.

My test command looks like this…

$ConnectionBroker = "ConnectionBroker.domain.com"
$VDICollection = "MyVDICollection"
$UserName = "domain\user"

Invoke-Command -ComputerName $ConnectionBroker -Credential (Get-Credential -UserName $UserName -Message "Enter credentials") -ScriptBlock `
{
Import-Module RemoteDesktop;`
Get-RDPersonalVirtualDesktopAssignment -CollectionName $Using:VDICollection -ConnectionBroker $Using:ConnectionBroker
} 

The specified user has administrator permission on the connection broker and VDI deployment itself, so it should be working just fine. Well, it did not and I received an error…

A Remote Desktop Services deployment does not exist on ComputerName. This operation can be performed after creating a deployment. For information about creating a deployment, run "Get-Help New-RDVirtualDesktopDeployment" or "Get-Help New-RDSessionDeployment".
+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Get-RDPersonalVirtualDesktopAssignment
+ PSComputerName : ComputerName

To make it short: it seems that Get-RDPersonalVirtualDesktopAssignment connects to the connection broker with another hop, so we run into a second-hop problem here. What is a "second-hop problem"? Don Jones has published a nice post here explaining the second hop. In this paper, on page 39, Ravikanth Chaganti explains our problem in a bit more detail and how to handle it.

Finally, to solve the problem we need to use CredSSP to pass the authentication on to the second hop. To do that, we use the parameter "-Authentication CredSSP", which delegates our credentials to the "second" hop. Be aware that you also need to enable CredSSP, either via GPO or via PowerShell using the Enable-WSManCredSSP cmdlet. Once that was in place, it worked like a charm.

$ConnectionBroker = "ConnectionBroker.domain.com"
$VDICollection = "MyVDICollection"
$UserName = "domain\user"



Invoke-Command -ComputerName $ConnectionBroker -Credential (Get-Credential -UserName $UserName -Message "Enter credentials") -ScriptBlock `
{
Import-Module RemoteDesktop;`
Get-RDPersonalVirtualDesktopAssignment -CollectionName $Using:VDICollection -ConnectionBroker $Using:ConnectionBroker
} -Authentication CredSSP
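For completeness, enabling CredSSP from PowerShell could look roughly like this – a minimal sketch, assuming the machine running Invoke-Command acts as the CredSSP client and the connection broker name used above:

# On the machine that runs Invoke-Command: allow delegating fresh credentials to the connection broker
Enable-WSManCredSSP -Role Client -DelegateComputer "ConnectionBroker.domain.com" -Force

# On the connection broker itself: allow it to receive delegated credentials
Enable-WSManCredSSP -Role Server -Force

# Verify the current CredSSP configuration
Get-WSManCredSSP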

I would like to thank my buddies Fulvio Ferrarini and Marc van Orsouw for helping to troubleshoot this issue.

This is an old problem, but as you can see in this example, it does not always present itself with an "Access Denied" error or anything like that. I hope it saves you some time!


Filed under: Configuration, Script, SMA, Troubleshooting

SCOM – Authoring History and System Center Visual Studio Authoring Extensions 2015


mp

I usually don’t blog about new releases of management packs or similar things, but this time I feel I have to. If you have been working with SCOM for some time, you know there is a (long) history behind authoring MOM/SCOM management packs. Back in the days when MOM 2005 ruled the monitoring world, you had AKM management pack files which could not be changed or authored outside of MOM. In 2007, when SCOM 2007 was released, Microsoft changed that to the sealed (MP extension) / unsealed (XML extension) management pack concept, which is still valid to this day. In the same wave Microsoft released the widely loved Authoring Console, a GUI-driven approach that was more or less intuitive for an IT pro to work with.

ac

In 2009 the next version, SCOM 2007 R2, was released, and it included a newer version of the Authoring Console as part of the System Center Operations Manager 2007 R2 Authoring Resource Kit, which also contained the MP Best Practice Analyzer, MP Spell Checker, MP Visio Generator, MP Diff and other tools to make your MP authoring experience a bit more comfortable. Three years later, in 2012, Microsoft released SCOM 2012 and with it a new way of authoring management packs – the Visual Studio Authoring Extensions for System Center Operations Manager were born.

vsae

This extension is basically an add-on for Visual Studio 2012 that lets you author MP fragments in XML in a semi-GUI-driven way. It has several advantages:

  • Work directly with the XML of the management pack allowing you to create any management pack element and monitoring scenario.
  • Provides XML templates and IntelliSense for different management pack elements so that you don’t have to have detailed knowledge of the schema.
  • Allows you to create XML fragments containing different management pack elements. The fragments can be copied within the management pack, to another management pack, and combined to build the final management pack.
  • Allows multiple authors to work on a single management pack project at the same time.

(Source:TechNet Wiki)

The downside of VSAE was / is that it is aimed at experienced IT pros and MP developers rather than the average SCOM administrator, because of the MP authoring knowledge required – you need to know what you are doing.

Microsoft’s answer to this problem was a huge flop called the System Center 2012 Visio MP Designer (VMPD), an add-in for Visio 2010 Premium. The idea was to author MPs in a graphical way, using Visio as the interface, and push the MP to SCOM at the press of a button. This way of authoring was very limited: just some basic monitors, rules and a health model.

vmpd

Some time later, Microsoft stopped investing in this tool and started a cooperation with Silect to build a free "successor" of the former Authoring Console called Silect MP Author. This tool was / is meant for the IT pro, who gets wizard-driven support for authoring management packs. In its first version MP Author was kind of buggy and was missing some basic functionality, like editing an already authored MP and PowerShell script support, which was fixed in the later service packs. The current version is MP Author SP5. In the meantime Microsoft released Visual Studio Authoring Extensions 2013 for System Center Operations Manager, which was basically just a compatibility release for Visual Studio 2013.

Up to 2015 not much changed, and then Visual Studio 2015 was released. The problem was that the Visual Studio Authoring Extensions 2013 for System Center Operations Manager did not support it, and Microsoft did NOT even consider supporting Visual Studio 2015 or any newer version of Visual Studio! In summer 2015 Microsoft released a UserVoice questionnaire asking for feedback on any SCOM topic, and the community feedback was so strong and powerful that Microsoft decided to release a new version, Visual Studio Authoring Extensions 2015 for System Center Operations Manager, which supports Visual Studio 2012/2013/2015 (all editions). The release date was yesterday :).

The feature summary looks like this:

  • VS Projects for Monitoring MPs, System Center 2012 and later MPs including Operations Manager and Service Manager.
  • MP Item Templates for quick creation of MP Items.
    • XML MP Item Templates (generates MP XML for editing).
    • Template Group Item Templates (Abstract your intent from MP XML).
    • Snippet Templates (generates MP XML from CSV)
  • IntelliSense for MP XML for the following versions:
    • System Center Operations Manager 2007 R2
    • System Center Operations Manager 2012 and later
    • System Center Operations Manager 2016
    • System Center Service Manager 2012 and later
  • Integrates into Visual Studio Project System with *.mpproj.
    • Enables building within VS & MSBuild.
    • Supports custom build tasks (simply edit *.mpproj or *.sln)
    • Build multiple MPs (multiple *.mpproj) in a solution.
    • Integrates into any VS supported Source Control systems.
  • MP Navigation Features
    • Management Pack Browser for browsing MP Items.
    • Go to Definition
    • Find All References
  • ResKit Tools integrated
    • Workflow Simulator
    • Generate Visio Diagram
    • MP Best Practice Analyzer
    • MP Spell Checker
    • MP Cookdown Analyzer

I am very happy with this decision. This short history lesson shows how Microsoft listens to you and how strong the community feedback can be – it can even steer the US Titanic a little bit in its direction.


Filed under: Authoring, Development, Management Pack, Tool

PowerShell – SCCM Cmdlet Library “Get-CMDeviceCollection : Specified cast is not valid.”


While doing some SCCM automation we bumped into an issue with the SCCM Cmdlet Library 5.0.8249.1128.

When you try to execute a workflow using PowerShell Remoting in SMA like this…

workflow test {

    InlineScript {
        $VerbosePreference = "Continue"
        $ModuleName = (Get-Item $env:SMS_ADMIN_UI_PATH).Parent.FullName + "\ConfigurationManager.psd1"
        Import-Module $ModuleName
        cd P01:
        $DeviceCollection = Get-CMDeviceCollection -CollectionId "P010000C"
        return $DeviceCollection
    } -PSComputerName "SERVERFQDN"

}

You will receive an error in SMA like this…

Get-CMDeviceCollection : Specified cast is not valid.
At test:3 char:3
+ 
    + CategoryInfo          : NotSpecified: (:) [Get-CMDeviceCollection], InvalidCastException
    + FullyQualifiedErrorId : System.InvalidCastException,Microsoft.ConfigurationManagement.Cmdlets.Collections.Commands.GetDeviceCollectionCommand
    + PSComputerName        : [SERVERFQDN]

After some investigation we could not determine the cause, so the last option was to roll back to Cmdlet Library version 5.0.82.31.1004, and then everything worked fine. The problem occurs when we provide a named parameter like -CollectionId or -Name; it also exists in other cmdlets and in the latest version of SCCM 2016 (vNext). Microsoft has confirmed and fixed this issue, and the fix will be available in the next version. I have filed this bug on Connect.
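If you want to verify which Cmdlet Library build a machine is actually running before deciding on a rollback, a quick check could look like this (a sketch only; the module path is derived from the admin console environment variable used above):

# Load the ConfigurationManager module from the admin console installation
$ModuleName = (Get-Item $env:SMS_ADMIN_UI_PATH).Parent.FullName + "\ConfigurationManager.psd1"
Import-Module $ModuleName

# The module version reflects the installed Cmdlet Library build
(Get-Module ConfigurationManager).Version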

I hope this helps!


Filed under: Configuration, SCCM, SMA, System Center, Troubleshooting

OMS – Free Microsoft Operations Management Suite (OMS) E-Book


image

Great achievements deserve great attention! The "Black Belts" of OMS – Tao Yang, Stanislav Zhelyazkov, Pete Zerger and Anders Bengtsson – have just released a free new e-book about the latest and greatest Microsoft Operations Management Suite (OMS). It has over 400 pages and covers the following topics…

Chapter 1: Introduction and Onboarding
Chapter 2: Searching and Presenting OMS Data
Chapter 3: Alert Management
Chapter 4: Configuration Assessment and Change Tracking
Chapter 5: Working with Performance Data
Chapter 6: Process Automation and Desired State Configuration
Chapter 7: Backup and Disaster Recovery
Chapter 8: Security Configuration and Event Analysis
Chapter 9: Analyzing Network Data
Chapter 10: Accessing OMS Data Programmatically
Chapter 11: Custom Management Pack Authoring
Chapter 12: Cross-Platform Management and Automation

If you are curious about OMS and need some guidelines to get you started, or even some deeper knowledge, I highly recommend reading this book. You can download it here!

Thank you guys for providing such a great contribution to the community!


Filed under: Book, OMS, Recommended

Azure Automation – ISE Add-On Editing Runbooks


image

Well, it has been a while since my last post, because there has been a lot going on in my private life as well as in my job. But now some "tasks" are completed and I will have more time for community work again. The Microsoft product machinery is running at high speed in all areas. One tool I really appreciate is the ISE add-on for Azure Automation. I have written quite a lot of runbooks for SMA in the past using the regular ISE and Visual Studio, but a tool for writing runbooks that integrates into the SMA environment is still missing. This add-on integrates seamlessly into your ISE environment, lets you write runbooks for Azure Automation in different flavors – regular PowerShell scripts and PowerShell workflows – and executes them in Azure Automation. As a target you can choose either Azure itself or a Hybrid Worker Group. Joe Levy (PM Azure Automation) has already written a post about this add-on; I would like to dive a bit deeper into it.

What does it look like?

As you can see it seamlessly integrates into ISE…

image

How do I install it?

The installation is quite easy, depending on your needs. The ISE add-on module is available from the PowerShell Gallery. Just open ISE and run:

> Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser

Then, if you want the PowerShell ISE to always automatically load the add-on, run:

> Install-AzureAutomationIseAddOn

Otherwise, whenever you want to load the add-on, just run the following in the PowerShell ISE:

> Import-Module AzureAutomationAuthoringToolkit

The add-on will prompt you to update if a newer version becomes available.
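If you prefer to check for a newer version yourself, the PowerShellGet cmdlets can do that too – a small sketch, nothing more:

# Compare the installed version with the latest one in the PowerShell Gallery
$Installed = Get-Module -ListAvailable AzureAutomationAuthoringToolkit | Sort-Object Version | Select-Object -Last 1
$Online = Find-Module AzureAutomationAuthoringToolkit
"Installed: $($Installed.Version) / Gallery: $($Online.Version)"

# Update in place if the gallery has a newer build
if ($Online.Version -gt $Installed.Version) { Update-Module AzureAutomationAuthoringToolkit }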

How does it work?

I just started ISE; on the right side you can provide all the necessary configuration. You can connect to Azure with your account and subscription in the Configuration tab…

image

As soon as you are connected, you can manually download existing runbooks and assets or upload locally created runbooks and assets…

image

If you leave the defaults, all your configuration, like runbooks and assets, gets downloaded to your user profile path…

image

…within that folder, after some folder hopping, you find the actual files…

C:\Users\StefanRoth\AutomationWorkspace\[Subscription]\[Resource Group]\[Automation Account]

image

If you look at the encrypted (SecureLocalAssets.json) and unencrypted (LocalAssets.json) files you will see this…

image

The strange thing is that the connection strings are saved within the encrypted file although they are not encrypted.

What’s cool?

Well, you can run your PowerShell scripts or PowerShell workflows either on Azure or on your Hybrid Worker Group, and the output is displayed in a separate window…

image

Right from ISE you are able to create the scripts or workflows…

image

…and of course all necessary Assets either encrypted or not…

image

Conclusion:

It is a very lightweight tool that works just right. I really like this approach and hope Microsoft will do the same for SMA. A few enhancements I would suggest are:

  • Some sort of grouping in a folder structure for runbooks and assets
  • Managing the runbooks with some Tags for classification
  • Having some sort of version control, like integration into TFS Online
  • Dependency view (TreeView) to see which child runbook belongs to which parent runbook
  • SMA integration

I hope this gives you a good overview of this add-on! Download the source code here.


Filed under: Authoring, Azure Automation, Configuration, Development, Script, Software

Quick Post – Get Cmdlet Related DLL


image

In some situations you are running a cmdlet but have no idea where it is stored. I mean, you don’t know which "*.dll" it belongs to, or maybe you want to know some more details about the command.

A very easy way to figure this out for the Get-AzureRmResource cmdlet…

(Get-Command Get-AzureRmResource).DLL

image

…as you can see the output will be the path to the “Microsoft.Azure.Commands.ResourceManager.Cmdlets.dll”. Of course you could run this command with any other cmdlet.

If you want to see other interesting details just run…

Get-Command Get-AzureRmResource | Select *

image
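Taking this one step further, you can also turn it around and list which cmdlets each assembly of a module provides – a small sketch, using AzureRM.Resources purely as an example:

# Group all commands of a module by the assembly they are implemented in
Get-Command -Module AzureRM.Resources | Group-Object DLL | Select-Object Count, Name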

Finding the related DLL was quite useful for me in the past, so I thought it might help you as well.


Filed under: PowerShell, Script

Office 365 – Microsoft.Exchange.Data.Storage.UserHasNoMailboxException


While playing around with Office 365 I bumped into an issue which you might also face. I created an administrator role in Azure Active Directory and activated my Office 365 E3 license (thank you, Microsoft, for this free license!). After setting up my tenant properly, I assumed I could log into my Office 365 mailbox. But then I faced this error…

1

…hmm, and when I tried to edit the user in the Exchange admin console to add a mailbox, I saw this greyed-out "pencil" sign…

image

I did "bing" around, but did not find any solution to this problem. After a short while of inspecting my admin user in the Office 365 Admin Center…

image

…I figured out that I had not assigned a license to my user…

image

After assigning the license I could create a mailbox for my user and also log into the mailbox.
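The same fix can also be scripted. Here is a rough sketch using the MSOnline module – the user principal name, the usage location and the E3 SKU name ENTERPRISEPACK are placeholders you would adapt to your tenant:

# Connect to Office 365 with an admin account
Connect-MsolService

# List the SKUs available in the tenant, e.g. "tenant:ENTERPRISEPACK" for E3
Get-MsolAccountSku

# A usage location must be set before a license can be assigned
Set-MsolUser -UserPrincipalName "admin@tenant.onmicrosoft.com" -UsageLocation "CH"

# Assign the E3 license; the mailbox gets provisioned afterwards
Set-MsolUserLicense -UserPrincipalName "admin@tenant.onmicrosoft.com" -AddLicenses "tenant:ENTERPRISEPACK"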

Well, the error message is correct, but you will find misleading information when you try to find the answer online. Sometimes the solution is not that complicated :). I hope this saves you some time…


Filed under: Office 365, Troubleshooting

PowerShell – PowerShellGet Module “Publish-PSArtifactUtility : Cannot process argument transformation on parameter ‘ElementValue’”


In PowerShell 5.0, Microsoft introduced the PowerShellGet module. This module contains cmdlets for different tasks; e.g. it lets you easily install / upload PowerShell modules and scripts from and to an online gallery such as PowerShellGallery.com. It even lets you find scripts, modules and DSC resources in such repositories. This is a fantastic way to share your script goodies and make them available to others, who can use them on-premises or even in Azure Automation for their runbooks or DSC projects.

In every collaboration scenario there must be some rules. Publishing scripts also has rules to follow, otherwise everything ends in chaos and no one will ever find the appropriate script or the latest version. Therefore we need to provide structured metadata for version control, prerequisites and author information. This can be done using the PowerShellGet module.

Here is just an overview of the cmdlets provided by this module…

image

Here comes the first pain point: if you try to run a cmdlet, e.g. from your Windows 10 client, check the version of the module first. In the screenshot above I ran it on an Azure VM with Windows Server 2016 TP4 installed. On my actual Windows 10 client I see this…

image

As you can see, there is a difference in version and cmdlet count. If you are now thinking that you could just upgrade PowerShell to the latest release on your Windows 10 box: well, you need to wait until the end of February 2016, because Microsoft has pulled the latest RTM release back due to some issues. You can find the post and status on the PowerShell blog. Once you have the latest release of the PowerShellGet module and the full set of cmdlets available, you are ready to start.

So how does that work?

Let’s assume we want to publish a PowerShell script to http://PowerShellGallery.com . Before you can start, you need to register with your Microsoft or organizational account, and then you will be asked to grant PowerShell Gallery access to your account.

image

After registration you will get a key which will be needed later for uploading your files.

The minimum information needed to publish a script or a module is the following metadata, provided in the header part of the script:

  • Version number
  • Description
  • Author
  • A URI to the license terms of the script

[Source]

In order to get a template structure for the metadata just run…

New-ScriptFileInfo -Path C:\Temp\myscript.ps1 -Version 1.0 -Description "My description"

image

This will create a new script file with a bunch of header data. As I mentioned before, only VERSION, DESCRIPTION, AUTHOR and LICENSEURI are required if you want to publish your script. If you don’t add this data, the Publish-Script or Publish-Module cmdlet will complain and you won’t be able to upload the files to PowerShellGallery.com. After you have finished editing the data and everything is the way you want it, you are ready to publish your script. As an example I have just played with it, and this is how it could look…

image
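For reference, the required part of such a header boils down to something like the following – a trimmed-down sketch of what New-ScriptFileInfo generates (the GUID, author and license URI are placeholders), not the complete template:

<#PSScriptInfo
.VERSION 1.0
.GUID 11111111-2222-3333-4444-555555555555
.AUTHOR Stefan Roth
.LICENSEURI https://example.com/license
#>

<#
.DESCRIPTION
My description
#>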

If you already have a script written and just need to update the metadata, you could use Update-ScriptFileInfo -Path "C:\Temp\Script.ps1" -Version 2.0 -PassThru. I was not able to do so; the cmdlet always failed, requesting all parameters (a null value was not allowed).

If you are in doubt about your metadata, you can simply test it using the cmdlet Test-ScriptFileInfo -Path C:\temp\Get-ExpiredWebhook.ps1, which will read the information and display it accordingly…

image

…and all properties shown here…

image

But there is another problem, which I initially wanted to blog about and which took me a few minutes to figure out. If you have a line break within your description, it looks like this…

image

…it shows a comma although there is no comma in the description…

image

Trying to upload to the PowerShell Gallery using Publish-Script -Path C:\Users\returnone\Desktop\Get-ExpiredWebhook.ps1 -NuGetApiKey 12345678-1234-1234-1234-123456789123 fails with the following error, which is in my opinion not very clear…

Publish-PSArtifactUtility : Cannot process argument transformation on parameter 'ElementValue'. Cannot convert value
to type System.String.
At C:\Program Files\WindowsPowerShell\Modules\Powershellget\1.0.0.1\PSModule.psm1:2154 char:17
+ ... Publish-PSArtifactUtility -PSScriptInfo $PSScriptInfo `
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [Publish-PSArtifactUtility], ParameterBindingArgumentTransformationExce
ption
+ FullyQualifiedErrorId : ParameterArgumentTransformationError,Publish-PSArtifactUtility

image

After removing the line break, the Publish-Script cmdlet worked perfectly. I could reproduce the error, and each time I saw the same problem. The encoding was UTF-8 and it was just a plain text file / script.

If you want to know more about publishing scripts to PowerShellGallery.com, go to that site and explore it. If you want to know more about the PowerShellGet module in general, which is available in PowerShell 5.0, go to TechNet here.

The idea behind these cmdlets is very cool and they are easy to use, but there is still some work to do to fix some of these bugs. These few hints might answer some questions :).


Filed under: Configuration, PowerShell, Script, Troubleshooting

Azure Automation – Twitter + IFTTT + Webhook = Start Runbook


image

I assume you know Twitter, and you probably also know what a webhook is, right? No? OK, a webhook is just an HTTP POST. In Azure Automation we can create a webhook for a runbook: the runbook "consumes" the webhook request (URL) plus the POST data and starts. The cool thing is that you can trigger a runbook in a secure way without the need for credentials, and you can pass parameters within the request. Well, this is nothing special in today’s world, but sometimes the combination of things makes the magic.

Another technology, which has been around for a few years, is IFTTT (If This Then That). It is an online service that lets you choose a channel A (trigger), and if a certain condition happens it triggers channel B (action). For example, channel A could check the weather in Switzerland (because you are planning a trip to Switzerland), and if it starts raining, channel B could send you a warning by email. This combination of channels is called a "recipe". You can choose from dozens of channels and combine them as you like. I highly recommend checking this service out, it is easy and fun.

In this post I want to show how to trigger a webhook / runbook when someone tweets about SCOM.

1) Create a runbook

In Azure Automation I just created a simple PowerShell runbook called Hello-Twitter that looks like this…

image

param (
 [object]$WebhookData
 )

if ($WebhookData -ne $null) 
	{
		$BodyContent = $WebhookData.RequestBody
		Write-Output "There was a tweet from $BodyContent"
	}
else
	{
		Write-Error "Something went wrong buddy"	
	}

There is not much more to say than that, so let’s create a webhook next.

2) Create a webhook

There are two ways to create a webhook: either in the GUI or using PowerShell. I prefer PowerShell because it is easier. Make sure you change the parameters to match your environment. In this example the webhook will expire 10 days from now…

$Credential = Get-Credential

#Authenticate to Azure and AzureRM
Add-AzureAccount -Credential $Credential | Out-Null
Add-AzureRmAccount -Credential $Credential | Out-Null

#Provide the necessary information for your environment
$Webhook = New-AzureRmAutomationWebhook `
 -Name "TriggeredByTwitter" `
 -RunbookName "Hello-Twitter" `
 -IsEnabled 1 `
 -ExpiryTime (Get-Date).AddDays(10) `
 -ResourceGroupName "Automation" `
 -AutomationAccountName "AutomationAccount"

#Print the webhook uri
Write-Host $Webhook.WebhookUri -ForegroundColor Green

If you don’t have Azure PowerShell installed locally and want to know how to do that, read this blog here. Once it is installed, you can execute this script in ISE or your favorite PowerShell editor.

After you executed the script, you will see something like this…

image

Make sure you copy the URI (green output) to Notepad because we will need it later on.

If I check in the GUI, everything seems ok…

image
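Before wiring up IFTTT you can sanity-check the webhook yourself. A minimal sketch – the URI is the one you copied to Notepad, and the body is just a dummy user name:

# Paste the webhook URI you copied from the script output
$Uri = "<your webhook URI>"

# Send a test POST; the body shows up as $WebhookData.RequestBody in the runbook
Invoke-RestMethod -Method Post -Uri $Uri -Body "TestUser"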

So far we have the runbook and the webhook, next we will create the IFTTT recipe.

3) Create IFTTT recipe

First go to http://ifttt.com and create an account. Then click Create Recipe

image

Choose Twitter channel…

image

Choose New tweet from search…

image

Then type hashtag #SCOM and hit Create Trigger

image

So that is the IF part, or the trigger. Next we build the actual action for our webhook…

Click on the "that" link and you will be redirected to choose the action channel. Search for Maker…

image

Choose make a web request….

image

This step is the real meat. Paste the webhook URI into the URL field, set the Method to POST and the Content Type to application/json. In the body of the request we add the UserName field of the user who tweeted about SCOM; this data comes from the previously configured Twitter channel. Then hit Create Action.

image

And create the recipe…

image

So that’s it :). If you configured everything properly you will see in the Azure portal that the runbook gets triggered and the output looks like this…

image

Every time someone tweets about SCOM, my runbook gets triggered. I think that rocks!

There are endless possibilities to combine these technologies and build cool solutions. Here I just wanted to show how to get started and share a simple idea.


Filed under: Azure Automation, PowerShell, Script

OMS – Intelligence Packs Cheat Sheet


image

Operations Management Suite (OMS) is one of the (probably) hottest technologies Microsoft is currently working on. If you want to bet on a horse which will win the crazy technology race now and in the future, OMS is a safe choice. Because of that, I highly recommend starting to use and learn OMS today. There are plenty of sources on the internet to get you started.

OMS uses solutions / intelligence packs to add functionality, logic, data and visualization to OMS. As soon as you add a solution to OMS, files are downloaded (though not in all cases) to the servers where the Microsoft Monitoring Agent is installed. These files look like SCOM management packs and their internal structure is also similar. In many cases these management packs contain collection rules which are executed at a certain interval. For OMS you can choose between two different ways: either you use the Microsoft Monitoring Agent (MMA) in an "agent only" scenario, or you use the Microsoft Monitoring Agent in conjunction with SCOM. In both situations you will be able to collect data, but you will not be able to use all solutions in both situations, because some solutions require a SCOM management group. Another interesting finding is that not all data gathering processes use the same "methods". E.g. some solutions just execute PowerShell to gather event and log entries which are sent to OMS, while other solutions use a lot of bundled DLL files to deliver sophisticated data collection. I was interested in getting an overview of which solution uses which "technology" to collect the information from your systems and what targets these collection rules use. I did some basic investigation: first activating a solution in the OMS portal, then checking which management pack got downloaded to SCOM. After figuring this part out, I checked the management pack itself to see what rules and assemblies it contains, etc.

Because every management pack behaves differently, I tried to put the most important information into a cheat sheet. I know the solutions will change rapidly and new solutions will come out, but I think it will help as a first overview / impression in certain meetings or troubleshooting scenarios.

You will find some deeper information like the following:

  • All rules involved in the data collection process
  • The target class the rules are using
  • Which resource files (DLL, execpkg) are used
  • How frequently the data gets collected (interval)
  • Whether RunAs profiles are involved
  • Which technologies are supported
  • Agent requirements: MMA only or MMA + SCOM
  • The solution title is a hyperlink to the TechNet article
  • The intelligence pack title is a hyperlink to SystemCenterCore.com, which shows all IP / MP details

Let me know if you have any comments, updates or ideas. I will try to frequently update this sheet. You can download the PDF from TechNet here.


Filed under: Azure Operational Insights, OMS, Troubleshooting, White Paper

SCOM – How Data is Encrypted


data_encryption_button-600x450

Recently I got a question from a customer about how SCOM traffic is encrypted. Well, I knew that the traffic IS encrypted, but how the encryption works is a different story.

First we need to know what traffic we are talking about. Is it the communication between agents, respectively healthservices? Is it the encryption of RunAs accounts / credentials within the communication channel? Or are we talking about the encryption of RunAs accounts within the SCOM database? On TechNet you will find an article about communication and encryption (https://technet.microsoft.com/en-us/library/bb735408.aspx), but what is the context when certificates or Kerberos are in place? To get the full picture, we need to answer these questions.

No one could answer these questions better than Microsoft itself, and therefore of course "Mr. SCOM" Kevin Holman. All credit goes to him; he provided me with this very interesting information and allowed me to publish it. Thank you Kevin!

Let’s first talk about the healthservice to healthservice communication.

1. Healthservice to Healthservice Encryption and Authentication:

Communication among these Operations Manager components begins with mutual authentication. If certificates are present on both ends of the communications channel (and enabled for use in the registry for the healthservice), then certificates will be used for mutual authentication.  Otherwise, the Kerberos version 5 protocol is used. If any two components are separated across an untrusted domain/forest boundary that doesn’t support Kerberos, then mutual authentication must be performed using certificates.

If Kerberos is available, the agent is authenticated via Kerberos, and then still using Kerberos, the data channel is encrypted using Kerberos AES or RC4 cypher.  A by-product of the Kerberos authentication protocol is the exchange of the session key between the client and the server. The session key may subsequently be used by the application to protect the integrity and privacy of communications. The Kerberos system defines two message types, the safe message and the private message to encapsulate data that must be protected, but the application is free to use a method better suited to the particular data that is transmitted.

If certificates are used for mutual authentication, the same certificates are used to encrypt the data in the channel.

=> Agents are initially authenticated via Kerberos, or Certificates.  Then that same protocol is used for encryption of the channel – per:

https://technet.microsoft.com/en-us/library/bb735408.aspx

From the agent to the gateway server, the Kerberos security package is used to encrypt the data, because the gateway server and the agent are in the same domain. The alert is decrypted by the gateway server and re-encrypted using certificates for the management server. After the management server receives the alert, the management server decrypts the message, re-encrypts it using the Kerberos protocol, and sends it to the RMS where the RMS decrypts the alert.

2. RunAs Credential Encryption Decryption in the Channel

In addition to the default channel authentication and encryption, there is an additional layer of encryption for RunAs account credentials.  There is a self-signed cert that gets generated under the OperationsManager folder in certificates, which gets generated when the healthservice starts, and updated/replaced as needed.  This certificate is solely used to protect RunAs account credentials in the transmission from the management server to the agent.  It doesn’t have any impact on how they are stored on the agent itself, and is not used for authentication.  On the agent RunAs accounts are stored in the registry and protected using DPAPI.  RunAs accounts are sent to the agent as part of the OpsMgrConnector.Config.xml file.  In that file the RunAs accounts are encrypted, base64 encoded and placed in the Message/State/SecureData element.  The encryption key is the agent self-signed certificate.  When the agent starts up it creates or gets the existing certificate and publishes the public key to its management server, which then submits it to the database via the SDK service.  When the configuration service generates configuration for an agent, it looks up the public key for that agent, and then uses that key to encrypt the SecureData part of the configuration XML.  The agent has a cert lifetime set of 1 year and will generate and transmit a new certificate when it is getting near expiration.

You will see associated events for this cert:

Log Name:      Operations Manager
Source:        HealthService
Date:          12/3/2015 8:23:12 AM
Event ID:      7006
Task Category: Health Service
Level:         Information
Keywords:      Classic
User:          N/A
Computer:     [ServerName]
Description:
The Health Service has published the public key [4A A8 71 8B 0D 3F 9E 9D 4A 59 44 D8 EE BC B1 42 ] used to send it secure messages to management group [MG Name].  
This message only indicates that the key is scheduled for delivery, not that delivery has been confirmed.

3. Encryption for RunAs Accounts Stored in the SCOM DB

There IS an RMS Encryption key.  Well, there “was”.  However – it isn’t really an “RMS” key anymore, it is simply a “RunAs Account password encryption key”.  The key is used to encrypt the passwords of the RunAs account credentials in the database, and then decrypt them for use. The first management server generates it, and all subsequent management servers get their copy key when they are installed. We no longer care anymore, because the management servers are federated for the SDK and config, and as long as you have one MS left in a DR scenario – you simply add management servers and the key is preserved.  If you lose ALL your management servers, you install using the /Recover switch, and a name of a previously existing MS – and this automatically regenerates a new key, and this is why you must re-enter your RunAs account passwords for all of them in this scenario, to re-encrypt them in the database using the new key.

This works very similarly to how it worked in SCOM 2007, for which Microsoft released the CREATE_NEWKEY=1 setup option in SP1. However, it happens automatically now – and is a hands-off process – so we (Microsoft) don’t discuss it or back it up, since it isn’t necessary. The key is stored in the Management Server registry, to my understanding.

  • SCOM 2007:  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\MOMBins
  • SCOM 2012 (and SCSM):  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center\2010\Common\MOMBins

The encryption is done via the Windows CryptoAPI services (Crypt32.dll) using CryptProtectData: https://msdn.microsoft.com/en-us/library/windows/desktop/aa380261(v=vs.85).aspx . It uses the pOptionalEntropy parameter with a random generator function to add additional entropy to the encryption. From the article linked, here is the relevant detail about the encryption strength:

DPAPI Security

DPAPI provides an essential data protection capability that ensures the confidentiality of protected data while allowing recovery of the underlying data in the event of lost or changed passwords. The password-based protection provided by DPAPI is excellent for a number of reasons.

  • It uses proven cryptographic routines, such as the strong Triple-DES algorithm in CBC mode, the strong SHA-1 algorithm, and the PBKDF2 password-based key derivation routine.
  • It uses proven cryptographic constructs to protect data. All critical data is cryptographically integrity protected, and secret data is wrapped by using standard methods.
  • It uses large secret sizes to greatly reduce the possibility of brute-force attacks to compromise the secrets.
  • It uses PBKDF2 with 4000 iterations to increase the work factor of an adversary trying to compromise the password.
  • It sanity checks MasterKey expiration dates.
  • It protects all required network communication with Domain Controllers by using mutually authenticated and privacy protected RPC channels.
  • It minimizes the risk of exposing any secrets, by never writing them to disk and minimizing their exposure in swappable RAM.
  • It requires Administrator privileges to make any modifications to the DPAPI parameters in the registry.
  • It uses Windows File Protection to help protect all critical DLLs from online changes even by processes with Administrator privileges.

SSLv3 is not required by SCOM. When SCOM uses authentication, and SSLv3 is not present, the standard OS security handshake will negotiate another method, such as TLS.

I hope this answers your questions and provides you with solid information.


Filed under: Configuration, System Center, Troubleshooting

SMA – Database Grooming Some Things You Should Know


know

SMA is Microsoft’s on-premises automation engine and the successor of Opalis / Orchestrator. We have used this engine quite a lot and have plenty of experience developing PowerShell workflows for SMA. But like every system, you need to maintain and pamper it, otherwise it will strike back at some point. We recently experienced such an issue, which could also happen in your environment.

When a runbook is executed it generates a job (see more details here). A job can have different statuses: failed, stopped, suspended or running. So, if you decide you want to debug a runbook because it fails all the time, you can turn on different log levels, also known as runbook streams. There is an excellent post on the System Center: Orchestrator Engineering Blog explaining how to turn on one of the six different streams: Output, Progress, Warning, Error, Verbose, and Debug. Depending on the type you receive different levels of information.

As soon as you turn on e.g. the verbose stream, you will see it in the job output like this…

image

A best practice is to keep these streams turned off and only enable them if you really need them. But why is that? Well, this output has to be stored "somewhere", otherwise it would not be "persistent". In SMA this output is stored in the Stream.JobStreams table. If you run a select query against this table you will see something like this…

image

If you have a closer look at the Stream TypeName column you can figure out the stream type, like Verbose, Output, Progress etc. If you see Output, it is not only data from Write-Output; it is also data returned by a runbook to be passed as input to the next runbook. As a side note, you should never use Write-Output for logging in your runbooks; use Write-Verbose instead. Write-Output is only meant for outputting objects to be consumed by other runbooks.

Let’s assume you accidentally left the switch LOG VERBOSE RECORD turned on, set to TRUE…

image

What happens is that the runbook logs verbose data into the SMA Stream.JobStreams database table, and the table will grow quickly. If you want to figure out which runbooks have e.g. verbose logging activated, use this query…

SELECT
      [RunbookName]
      ,[LogDebug]
      ,[LogVerbose]
      ,[LogProgress]
  FROM [SMA].[Core].[Runbooks]
  WHERE LogVerbose = 1 
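If you prefer not to query the database directly, the SMA cmdlets should give you the same answer – a sketch, assuming the runbook objects expose the Log* properties shown in the table above (adjust the web service endpoint to your environment):

# List all runbooks that currently have verbose logging turned on
Get-SmaRunbook -WebServiceEndpoint "https://sma-server.domain.com" |
    Where-Object { $_.LogVerbose } |
    Select-Object RunbookName, LogVerbose, LogProgress, LogDebug

# Turning it off again per runbook could then look like this:
# Set-SmaRunbookConfiguration -WebServiceEndpoint "https://sma-server.domain.com" -Name "MyRunbook" -LogVerbose $false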

If you don’t check these settings, you could run into some trouble. Let me show you what I mean.

From time to time it makes sense to run the default SQL Server report Disk Usage by Top Tables, which shows the largest tables in your database…

image

This will show the largest tables like in this example…

image

As you can see, the Stream.JobStreams table is the largest table in SMA.

This leads us to the question: isn’t there some kind of grooming that takes care of this? The answer is yes, there is. According to TechNet:

  • By default, the database purge job runs every 15 minutes, and it runs only if there are records to purge.
  • Records are purged only if they are older than the default duration of 30 days. This time is configurable by using the Set-SmaAdminConfiguration cmdlet and setting the –PurgeJobsOlderThanCountDays parameter.
  • If the total job record count exceeds the MaxJobRecords parameter set by the same Set-SmaAdminConfiguration cmdlet, then more job records will be purged. The default value for this parameter is 120,000 records.

To check these settings we can run Get-SmaAdminConfiguration, which in our case shows the default settings…

12
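If you need to tighten these defaults, the same cmdlet family can change them – a sketch, with the endpoint and values as examples only:

# Read the current purge settings
Get-SmaAdminConfiguration -WebServiceEndpoint "https://sma-server.domain.com"

# Keep only 14 days of job data and cap the job record count at 100000
Set-SmaAdminConfiguration -WebServiceEndpoint "https://sma-server.domain.com" `
    -PurgeJobsOlderThanCountDays 14 `
    -MaxJobRecords 100000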

But how does grooming work in more detail? When you install SMA, a SQL job is created called SMA Database Purge Job

image

This job runs every 15 minutes and executes a stored procedure called Purge.PurgeNextBatch. This stored procedure triggers a bunch of other stored procedures that groom data to keep the database small and in a consistent shape. But now let’s have a look at the Purge.PurgeNextBatch stored procedure to understand WHEN it cleans records out of the SMA database.

The grooming process first deletes all records that are older than 30 days, in batches of 1000 records. If there are no records older than 30 days (based on the LastModifiedTime time stamp in the Core.Jobs table) but the row count of the Core.Jobs table is higher than the defined MaxJobRecords, e.g. 120,000 (default), it will also start grooming out records. If the row count of Core.Jobs is e.g. 120,900, the batch size will not be 1000; instead it gets reassigned to 900 and those records will be deleted.

As I mentioned, the Purge.PurgeNextBatch stored procedure is just the main trigger for the other stored procedures seen here, named Purge.xxxxxx…

14

After Purge.PurgeNextBatch has determined which records will be deleted, it passes the jobs to Purge.PurgeJobs, which takes care of these tables: Stream.JobStreams, Stats.JobStatusLog, Stats.JobSummary, WorkflowState.BinaryData, WorkflowState.TextData, Core.JobExceptions, Core.JobPendingAction, Core.JobStreamStatus and Core.Jobs. As you can see, our large Stream.JobStreams table gets purged every time the job runs.

Well, this sounds great, but there is an issue if you are running the SMA database in a SQL Always-On cluster. When you build your SQL Always-On cluster you need to be aware of certain things, as mentioned in this article https://msdn.microsoft.com/en-us/library/hh270282.aspx :

Logins and jobs are not the only information that need to be recreated on each of the server instances that hosts an secondary replica for a given availability group. For example, you might need to recreate server configuration settings, credentials, encrypted data, permissions, replication settings, service broker applications, triggers (at server level), and so forth.

We have seen before that the grooming job is created when you install SMA. If you run the SMA database on a SQL Always-On cluster, it could be that the job exists on node A (secondary, read-only) while the SMA database is active on node B (primary, read/write). This means the grooming job will never succeed, and in the job history you will see entries like this…

image

You can solve this issue by creating the grooming job on the node that does not have it yet. Select the job and choose Script Job as > CREATE To > New Query Editor Window; this will dump the job definition into the query window. Then run this script on the other node of the cluster.

image

Finally, you have a grooming job on both nodes. But there is another problem: the job will run on both nodes and will fail on the node which is currently the secondary (read-only), because the job cannot modify the database. Therefore, we need to check whether the job runs on the primary node; if so, the job can start, otherwise it should exit. Luckily there is a snippet on Stack Exchange with this kind of logic, so we just need to implement it in the job. How do we do that?

Open the job properties…

image

Click edit and add these lines of code…

DECLARE @ServerName NVARCHAR(256)  = @@SERVERNAME 
DECLARE @RoleDesc NVARCHAR(60)

SELECT @RoleDesc = a.role_desc
    FROM sys.dm_hadr_availability_replica_states AS a
    JOIN sys.availability_replicas AS b
        ON b.replica_id = a.replica_id
WHERE b.replica_server_name = @ServerName


IF @RoleDesc = 'PRIMARY'
BEGIN
	exec SMA.Purge.PurgeNextBatch    
END 

ELSE
	PRINT 'SMA Purge Job skipped - ' + @@SERVERNAME + ' is ' + @RoleDesc

Like this…

image

That’s it! So every time (every 15 minutes) the job runs, it will check if it is running on the primary node and if so it will execute and purge your tables.

I want to thank my buddy Fulvio Ferrarini for helping and working on this issue and for his awesome SQL knowledge, input and ideas!

I hope this helps you keeping the SMA database in good shape!


Filed under: Configuration, Performance, SMA, System Center, Troubleshooting

SMA – Invoke Runbook Error "Cannot find the '' command."


error

This is just a quick post about SMA. I have bumped into this error many times while writing PowerShell workflows in SMA…

SMASpaceError

At line:3497 char:21 + PAT000287-RelateAppSetSR -AppSetID $AppSet -SRID $Applicatio ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cannot find the 'PAT000287-RelateAppSetSR' command. If this command is defined as a workflow, ensure it is defined before the workflow 
that calls it. If it is a command intended to run directly within Windows PowerShell (or is not available on this system),
place it in an InlineScript: 'InlineScript { PAT000287-RelateAppSetSR }'

…and I figured out that there are many reasons for this error.

If you are nesting runbooks like this…

workflow ParentWorkflow 
{
	ChildWorkflow -Param1 "Test" 	#Calling child workflow
}

Several reasons could lead to this problem:

  • The child workflow does not exist in SMA.
  • The child workflow has not been published. (Both of these can be verified quickly, see the sketch after this list.)
  • The name of the child workflow does not match the name used in the invoke command.
  • There is a space in front of the child workflow name. Just edit the line where you call the child runbook and check the parent runbook in again.
  • Before you check in the parent workflow, you must check in the child workflow. This applies only when the child runbook did not exist before. If the child runbook was created any time before the parent workflow, this error will not happen.
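A quick way to rule out the first two reasons is to ask SMA directly – a sketch, assuming the published version is exposed as PublishedRunbookVersionID (adjust the endpoint and runbook name):

# Check that the child runbook exists and has a published version
$Child = Get-SmaRunbook -WebServiceEndpoint "https://sma-server.domain.com" -Name "ChildWorkflow" -ErrorAction SilentlyContinue

if (-not $Child) { "Child runbook not found in SMA" }
elseif (-not $Child.PublishedRunbookVersionID) { "Child runbook exists but has never been published" }
else { "Child runbook exists and is published" }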

I hope this helps you eliminate this quite common error during development.


Filed under: Configuration, PowerShell, SMA, Troubleshooting

SCOM – Comtrade Nutanix MP Beta Release Overview


image

Last week Comtrade invited me to a demo of their latest management pack product for monitoring Nutanix. Comtrade is / was known for their outstanding Citrix management packs. These management packs cover the entire monitoring experience end-to-end, and I mean a real end-to-end experience. At the beginning of this year Citrix bought all the Citrix management packs from Comtrade and now offers them as part of their Platinum license; get more information about this deal here.

Maybe because of that, Comtrade decided to build another management pack, for another flourishing technology called Nutanix. Nutanix offers a hyperconverged solution that packs compute power (CPU and RAM) and software-defined storage into so-called nodes; 1 to 4 of these nodes form a "block". If you need more computing or storage power you can just add more blocks to run more workloads. It is supposed to be very easy to add blocks to the Nutanix cluster, and there are nifty logics for placing workloads on the proper storage as well as replicating the VMs to another node for backup purposes. This means you can scale to your computing needs and these blocks or boxes will just collaborate with each other. If you want to know more about this technology, visit their website. Here is a picture for a better understanding…

architecture

As I said, Comtrade is currently working on their first beta release for monitoring Nutanix. As we are used to getting high-quality management packs from Comtrade, I was very interested in seeing what they came up with for their beta release. The first thing I wanted to know was the architecture of the management pack. In SCOM we create a dedicated resource pool and add either gateway servers or management servers to this resource pool. On each of these members (gateway and/or management servers) you need to install a piece of software called the Nutanix data collector, which runs as a Windows service in the background. This data collector talks to the Nutanix cluster using the Nutanix REST API to gather all monitoring data (pull requests). The port used is the Nutanix Prism port, 9440 by default. If the port is changed, an override can be used to instruct the MP to send requests to a non-default port.

The data collector also does data aggregation and preparation, which is consumed by the SCOM management pack. Additionally it is used to discover applications on VMs in Nutanix clusters using the WinRM or SSH protocols. Talking about permission requirements: the management pack requires a basic (read-only) Nutanix Prism account to access the Nutanix REST interface for monitoring the Nutanix environment. The Application Awareness functionality (which we will explain a bit later) requires an account with local admin rights on the desired VMs for a connection to be established. Additionally, for discovering Citrix applications, another account with Citrix administrator rights and permission to establish a remote management connection is needed.

image

So what does the MP deliver so far? Remember, it is in a beta state and Comtrade’s first "draft". The key areas this MP focuses on are the following…

  • Cluster / Hardware
    • HW health (CPU, GPU, RAM, disk, NIC)
    • Certificate expiration, clock skew, drive configuration, etc.
    • Resource use (CPU, memory, metadata, etc.)
  • Storage
    • Storage pool and container high use, avg IO latency high
    • Erasure coding garbage, suboptimal performance
  • Data Protection
    • Replication health, Metro available
    • Configuration issues, volume group and snapshot issues
  • VMs
    • CPU load, IOPS, IO latency, memory use
    • Controller VM health
  • Application
    • Identification of apps hosted on VMs (App Awareness)

Time to get some insights. A quick check running a few PowerShell commands will get some figures. There are 3 sealed management packs; 2 of them contain dashboards and one contains widgets…

image

The Comtrade.Nutanix.Base MP contains 69 performance collection rules, 202 unit monitors and 50 dependency monitors….

image
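If you want to reproduce these figures in your own environment, a few lines of PowerShell will do. This is just a sketch, assuming the OperationsManager module is available on a management server and the Comtrade MPs are already imported.

# Count the rules and monitors shipped in the Comtrade.Nutanix.Base MP.
Import-Module OperationsManager

$mp = Get-SCOMManagementPack -Name "Comtrade.Nutanix.Base"

# Total number of rules in the MP (performance collection rules plus any others)
(Get-SCOMRule -ManagementPack $mp).Count

# Monitors split by type: UnitMonitor / DependencyMonitor / AggregateMonitor
Get-SCOMMonitor -ManagementPack $mp |
    Group-Object -Property XmlTag |
    Select-Object Name, Count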

If you group these monitors by category, you will find 183 monitors which will alert…

image
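Roughly the same figure can be pulled out by filtering on the monitors’ alert settings, continuing with the $mp variable from the snippet above.

# Monitors with a non-empty <AlertSettings> element are the ones that raise alerts.
Get-SCOMMonitor -ManagementPack $mp |
    Where-Object { $_.AlertSettings } |
    Measure-Object |
    Select-Object -ExpandProperty Count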

If you need more details about what these MPs contain, here is a detailed Excel sheet covering all the management pack content…

image

What does it look like? In this version we basically see lots of views and dashboards, grouped into the following categories…

image

I cannot show all of the views and dashboards, but to give you an impression, here are some screenshots. There are…

…several (native) SCOM dashboards, like this VM overview dashboard…

image

…or like this overview dashboard showing some new widgets…

image

…diagram views, like this one showing the blocks and nodes and, finally, the hardware that is failing…

image

…performance views, like this performance view showing the CPU load across two clusters…

image

…state views, like this one showing the state of the controller VMs…

image

One feature Comtrade is really pushing is called Application Awareness / App Awareness. Application Awareness will discover what kind of workload is running in which VM. The management pack will identify whether Citrix or some Microsoft workload like Exchange, SharePoint or Lync is running within the VMs. Why is this useful? Well, imagine your MP dynamically knows what is running inside your VMs; the simplest scenario is then to group the servers / VMs dynamically and present the information in some nice dashboards. The next step could be to build management packs on top of the Application Awareness data to represent entire services, a kind of distributed application but more dynamic. In this beta version we just see a dashboard with the (dynamic) application group which discovered Citrix workloads within the VMs respectively the cluster…

image
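Once App Awareness has populated such a dynamic group, its members can also be queried from PowerShell. A quick sketch; the display name filter is an assumption, so check the actual group name the MP creates in your console.

# List the members of the dynamically discovered application group.
$group = Get-SCOMGroup -DisplayName "*Citrix*"      # group name is an assumption
Get-SCOMClassInstance -Group $group | Select-Object DisplayName, HealthState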

As I mentioned, the list of views is extremely long and, as far as I can tell, very detailed information is available for each of the Nutanix components. Now you are asking yourself about reports, right? I mean, there are almost 70 collection rules, so we should have plenty of data for some shiny reports. Well, at this point in time there are no reports available. You would have to use the SCOM generic reports to pull out long-term information. But Comtrade has confirmed that reports are on their list and will be provided in the next release. Some more things are on their near-term road map:

  • Deeper HW health visibility (fans, power supply, rack info, …)
  • App Awareness for Microsoft applications (Exchange, SQL, Skype for Business, SharePoint, etc.)
  • App Awareness for F5 virtual appliances
  • Out of box reports (Level 1 – 3 support reports, SLA/Mgmt reports)
  • Insight into VM processes
    • Combine Nutanix MP with other MPs
    • Citrix MPs + Nutanix MP
    • F5 MP + Nutanix MP

Another question you might ask yourself: why should I buy a management pack if Nutanix offers one for free? Well, good question; as always, it depends. I have not dissected the free Nutanix MP, but if you just have a brief look at this blog post here you will see that the amount monitored is far less than what Comtrade offers in its beta version. In fact, the free Nutanix MP only delivers 24 monitors and 7 rules, whereas the Comtrade MP for Nutanix delivers 252 monitors and 69 rules. To compare both solutions in all aspects, Comtrade will provide a comparison sheet, which I will publish as soon as it is available.

Conclusion:

This MP already offers a large amount of visualization, rules and monitors in its beta version. What is definitely missing are the nifty reports which provide you with the “wow!” information, like the ones we are used to from the Citrix MPs. In addition, some stunning dashboards are also missing so far. In my opinion Comtrade is building a solid foundation for a management pack, and I know they will ship all this nifty visualization in upcoming releases. Application Awareness seems to be a promising technique which opens new capabilities for identifying workloads dynamically across Nutanix clusters and correlating this data with other management packs and dashboards.

If you are interested in the beta program of this MP you can sign up here; it will start on March 31, 2016.


Filed under: Management Pack, System Center

Quick Post – Windows 10 & Visual Studio 2015 Getting Started Links

image

I have been in the IT industry for quite some time, starting on the client side and then moving towards the backend side. All these years I have used Windows operating systems, from Windows 3.1 up to the latest and greatest Windows 10. But there is another hot topic going on besides Windows 10, which is the release of Visual Studio 2015. Why? Well, Visual Studio 2015 and the Windows 10 platform provide a perfect combination to write “Universal Apps”. What does that mean? In short, it means that Windows 10 offers a Universal Windows Platform (UWP) which provides a common platform across all devices. In other words, you will be able to write apps once that run on many different Windows 10 devices.

This means you can create a single app package that can be installed onto a wide range of devices. And, with that single app package, the Windows Store provides a unified distribution channel to reach all the device types your app can run on.

[Source]

image

[Source]

Now, instead of writing your app for an operating system, you will start to write apps for different kinds of device families. All these device families inherit the APIs from the Universal device family, which guarantees that the Universal device APIs are present on all “child families”. So if you want to run your app on as many devices as possible, you will use the Universal device family APIs.

image

[Source]

The consequence of this very exciting approach of Windows 10 and Visual Studio 2015 is that you have less work and more devices (targets) you can run your apps on. If I think about all the countless “things” you are able to develop, I get very enthusiastic and excited. I think this is a good time to explore and learn these new possibilities, so let’s start to unleash your skills.

I would like to share some good places to learn about Windows 10 and Visual Studio 2015. I would also like to compliment Microsoft for doing a very good job in providing learning material to the community over the past few years!

Here just some of the most important links to get you started:

Windows 10:

image

Visual Studio 2015:

image

I hope this gives you a very good starting point, shows you a new direction and makes you want to try new things :).


Filed under: Development, Recommended, Windows 10

Quick Post – Influence SCOM vNext Features aka “SCOM Wish List”

Wish list

It is not quite Christmas yet, but right now you are allowed to submit your wish list for SCOM features and improvements! The SCOM product team yesterday opened a feedback forum for submitting any ideas for SCOM vNext. There might be things you are missing, things you hate the way they work, or things you have seen in other monitoring tools that you would like to have in SCOM. Now is the time to let Microsoft know WHAT YOU WANT! YES! YOU! Don’t complain about SCOM, help to improve it and bring SCOM to the next level.

You can find the “wish list” here http://systemcenterom.uservoice.com/forums/293064-general-operations-manager-feedback .

I also submitted a couple of improvements and you might want to vote for them http://systemcenterom.uservoice.com/users/95583468-stefan-roth .

Nothing in life is perfect, but we can help to make SCOM almost perfect!


Filed under: Software, System Center

SCOM – Silect Software / Infront Consulting MP University Recordings

On August 12, Silect Software and Infront Consulting hosted a webinar about MP authoring with all the big shots in the management pack authoring space. I also attended the webinar and the content was just awesome. If you missed the event, you can watch the recordings on YouTube. All of these guys did an awesome job and I highly recommend watching these recordings.

Brian Wren MP Best Practices

MP Best Practices

Randy Roffey (Silect) / Brian Wren Visual Studio Authoring Extensions

Visual Studio Authoring Extensions

Freddy Mora Silva MP Reporting Authoring

MP Report Authoring

Mike Sargeant  How to Monitor a Network Device

Monitor Network Device

Kevin Holman Authoring PowerShell Performance Rules

Authoring Performance Rules

Jonathan Almquist MP Authoring for UNIX/Linux

MP Authoring for UNIX/Linux

Thanks to all of the presenters for creating such great content! Find more details here http://www.systemcentercentral.com/infront_university/ .


Filed under: Authoring, Development, Recommended

SCOM 2016 TP3 – Connecting Operations Management Suite Problem

Microsoft released SCOM 2016 TP3 at the end of August 2015 and of course I am eager to know what’s new. One problem I hit was trying to onboard SCOM to Operations Management Suite (OMS) on Windows Server 2016 TP3. Onboarding to OMS is as easy as 1-2-3 (usually), but this time Internet Explorer settings were blocking the sign-in process.

I am talking about this connector here…

image

When you first start the connector, you will face this screen…

3

Because this window just contains an embedded website, we check the URL…

4

…after adding the URL above to the Trusted Sites, you are prompted for credentials…

5

…if you try to sign in, you will hit another error saying that cookies must be allowed. We check the URL and add this site to the Trusted Sites in IE…

image

At one point I even hit another error, showing me a URL and telling me that OMS is not available. Checking the URL redirected me to the ancestor of OMS :)…

7

After I added the URL above I still could not run the OMS connector wizard. Because JavaScript needs to be enabled, I enabled Active Scripting for the Internet zone in IE…

8

I still was not able to run the configuration wizard, but luckily I saw a tweet from MVP Adin Ermie which mentioned setting the security level of the Trusted Sites zone to Medium-Low…

9

…and finally adding these three sites to the Trusted Sites zone…

10

After all these configuration steps, I was able to run the OMS connector successfully.
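If you have to repeat this on several management servers, the manual IE tweaks can also be scripted against the registry. The following is a consolidated sketch of the steps above for the current user; the domain names are only examples, so use whatever URLs the connector wizard actually complains about in your environment.

# Add the required domains to the Trusted Sites zone (zone 2) for the current user.
$domains = "microsoftonline.com", "microsoft.com", "systemcenteradvisor.com"   # example domains only
$zoneMap = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"

foreach ($domain in $domains) {
    $key = Join-Path $zoneMap $domain
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name "https" -Value 2 -PropertyType DWord -Force | Out-Null   # 2 = Trusted Sites
}

$zones = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones"

# Internet zone (3): enable Active Scripting (value 1400, 0 = Enabled)
Set-ItemProperty -Path "$zones\3" -Name "1400" -Value 0 -Type DWord

# Trusted Sites zone (2): set the security level to Medium-Low (0x10500)
Set-ItemProperty -Path "$zones\2" -Name "CurrentLevel" -Value 0x10500 -Type DWord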


Filed under: Configuration, System Center, Troubleshooting