Integrating Azure IoT Devices with MongooseOS MQTT and PowerShell

Introduction and Recap

In my last post here on IoT I detailed getting started with Azure IoT Hubs: registering an IoT device and sending telemetry from the IoT device to the Azure IoT Hub, all using PowerShell.

If you haven’t read that post or worked through those steps, stop here, work through that and then come back. This post details configuring MongooseOS to receive MQTT messages from Azure IoT which is the last mile to making the IoT Device flexible for integration with anything you can think of.


The only change to my setup from the previous post is that I installed the Mongoose Demo App onto my ESP8266 device; specifically the demo-js App detailed in the application list here. Install is quick and simple on Windows using the MOS Tool. Details are here. I also enabled the Mongoose Dashboard on my Mongoose IoT device so that I don't have to have the device connected to my laptop when configuring and experimenting with it. Essentially, check the Dashboard checkbox when configuring the IoT device while it is connected locally via a USB cable.

The rest of the configuration is using the defaults in Azure IoT with respect to MQTT.

MongooseOS MQTT Subscribe Configuration – Init.js

On your IoT device, in the MongooseOS init.js, we need to configure the ability to subscribe to an MQTT topic. In the first post we were publishing to send telemetry; now we want to receive messages from Azure IoT.

Include the following lines in your init.js configuration file and restart your IoT Device. The devices//messages/devicebound/# path for the MQTT Subscription will allow the IoT device to subscribe to messages from the Azure IoT Hub. 

// Receive MQTT Messages from Azure
MQTT.sub('devices/' + Cfg.get('') + '/messages/devicebound/#', function(conn, topic, msg) {
 print('Topic:', topic, 'message:', msg);
}, null);

In order to test the configuration of the IoT Device I initially use the Device Explorer. It is available from GitHub here. The screenshot below shows me successfully sending a message to my IoT Device.

DeviceExplorer to IoT Device.PNG

From the Mongoose OS Dashboard we can inspect the Console Log and see the telemetry we are sending to the IoT Hub, but also the message we have received. Success.

Mongoose Device Log.PNG

Sending MQTT Messages from Azure IoT to MongooseOS using PowerShell

Now that we've verified that we have everything set up correctly, let's get to the end goal: sending messages to the IoT device using PowerShell. Here is a little script that uses the AzureIoT module we used previously for configuration automation to send a message from cloud to device.

Update it for your Resource Group, IoT Hub, DeviceID and IoTKeyName, and also the message if you feel the need (line 40).
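If you want a quick way to test cloud-to-device delivery outside the script, the Azure CLI IoT extension also offers an equivalent one-liner. This is a hedged alternative, not the script above: the hub and device names are placeholders, and it assumes the azure-cli-iot-ext extension is installed.

```
az iot device c2d-message send --hub-name {iothub-name} --device-id {device-id} --data "Hello from the Cloud via PowerShell"
```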

Hello from the Cloud via PowerShell MQTT Message Received.

Cloud to Device ConsoleLog.PNG


Through the two blog posts I've detailed the creation of an Azure IoT Hub, registration of an IoT device, sending telemetry from MongooseOS on the IoT device to Azure IoT, and now sending messages back to the IoT device from Azure, all via PowerShell. Now that we have bi-directional, end-to-end connectivity, what can we do with it? Stay tuned for future posts.

Validating a Yubico YubiKey's One Time Password (OTP) using Single Factor Authentication and PowerShell

Multi-factor authentication comes in many different formats. Physical tokens have historically been very common and, with the FIDO v2 standards, will likely continue to be so for many security scenarios where soft tokens (think authenticator apps on mobile devices) aren't possible.

Yubico YubiKeys are physical tokens with a number of properties that make them desirable: they don't use a battery (so aren't limited by battery life), they come in many formats (NFC, USB-3, USB-C), they can hold multiple sets of credentials, and they support open standards for multi-factor authentication. You can check out Yubico's range of tokens here.

YubiKeys ship with a pre-loaded configuration that allows them to be validated against YubiCloud. Before we configure them for a user I wanted a quick way to verify that a YubiKey was valid. You can do this using Yubico's demo webpage here, but for other reasons I needed to write my own. There weren't any PowerShell examples anywhere, so now that I've worked it out, I'm posting it here.


You will need a YubiKey, and you will need to register and obtain a Yubico API key (using your YubiKey) from here.

Validation Script

Update line 2 of the following script with the ClientID you received after registering for the Yubico API above.
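The core of such a validation call can be sketched as follows. This is a hedged outline, not the full script: the ClientID value is a placeholder, and the nonce generation and output parsing are illustrative.

```powershell
# Sketch of a YubiCloud validation call; $clientID is the ClientID from your Yubico API registration
$clientID = '12345'                      # placeholder ClientID
$otp = Read-Host 'Insert your YubiKey and touch the button'

# Nonce: 16-40 random alphanumeric characters, echoed back in the response
$nonce = -join ((48..57) + (97..122) | Get-Random -Count 16 | ForEach-Object { [char]$_ })

$uri = "$clientID&otp=$otp&nonce=$nonce"
$response = Invoke-RestMethod -Uri $uri

# The response is plain text; the status line will be OK, REPLAYED_REQUEST, BAD_OTP etc.
($response -split "`n") | Where-Object { $_ -match '^status=' }
```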

Running the script validates that the key is valid.

YubiKey Validation.PNG

Re-submitting the same OTP (i.e. I didn't generate a new one) gets the expected response that the request is replayed.

YubiKey Validation Failed.PNG


Using PowerShell we can avoid needing any Yubico client libraries and validate a YubiKey directly against YubiCloud.


Commanding your Philips Hue lights with PowerShell

A couple of years ago I bought a number of Philips Hue bulbs and put them in the living areas of my house. Typically we control them via the Hue App on our phones, or via the Google Assistant. This all works very well, but of course I’m a techie and have a bunch of other Internet of Things devices and it would be great to integrate the Hue lights with those.

This post is the first step in that integration: setting up access to the Philips Hue Bridge and manipulating the lights. For ease of initial experimentation I'm using PowerShell to perform the orchestration. The API calls can easily be transposed to any other language as they are simple web requests.


First you will need to have your Philips Hue lights setup with your Philips Hue Bridge. Test the lights are all working via the Philips Hue mobile app.

Locate the IP address of your Philips Hue Bridge. I found mine easily via my Unifi console and you should be able to get it via your home router.

Getting Started

Navigate to your Philips Hue Bridge in a browser using its IP address. You will see a splash screen listing the open source modules that it utilises. Now append /debug/clip.html to the IP address to open the API debugger.

Create an Account

The REST API takes a JSON payload. We can quickly create that in the API debugger. See my example body below and change the URL to /api. Whilst pressing the button on the top of your Philips Hue Bridge, select the POST button. This will create an account that you can then use to orchestrate your Hue lights.

{"devicetype":"AzureFunction#iphone Darren"}

Create Philips Hue User.PNG

Via the API we've just created an account. Copy the username from the response; we'll need it for the API calls that follow.
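The same account creation can be done from PowerShell instead of the debugger. This is a sketch: the bridge IP is a placeholder, and you still need to press the physical link button on the Bridge just before running it.

```powershell
# Sketch of the account creation via PowerShell; bridge IP is a placeholder
$bridgeIP = ""
$body = '{"devicetype":"AzureFunction#iphone Darren"}'

# Press the link button on top of the Bridge, then run within ~30 seconds
$result = Invoke-RestMethod -Method Post -Uri "http://$bridgeIP/api" -Body $body

# Save the generated username for all subsequent API calls
$username = $result.success.username
$username
```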

Test Connection

Change the URL in the debugger as shown below and clear the message body. Select GET and the response should list the light(s) connected to your Philips Hue Bridge.



Controlling a Light

I have many lights. In our kitchen we have three pendant lights in a row, all Philips Hue lights. I'm going to start by testing with one of them. Choosing one from the list in the response above, Light 5 should be the middle light. The command is:


In the body put on and true to turn the light on (false would turn it off). Select PUT. My light turned on. Updating the message body to false and pressing PUT turned it off.
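For reference, the debugger URL and body for this step follow the standard Hue light-state pattern (bridge IP and username are placeholders):

```
PUT http://<bridge ip address>/api/<username>/lights/5/state

{"on": true}
```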

Turn Light On.PNG

Using PowerShell to Manage a Philips Hue Light

Now that we have an account and know the light we want to manage, let's manipulate the Hue light using PowerShell.

Update the following test script with the IP address of your Philips Hue Bridge, the light number you wish to control, and the username you obtained during the enablement steps earlier. The script will then get the current status of the light and reverse it (turn it OFF if it was ON, and ON if it was OFF).
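A minimal sketch of such a flip-state script is below. The bridge IP, username and light number are placeholders, not values from my environment.

```powershell
# Sketch: flip the state of one Hue light (all values are placeholders)
$bridgeIP = ""
$username = "<username from the account creation step>"
$light    = "5"

$baseUri = "http://$bridgeIP/api/$username/lights/$light"

# Read the current on/off state of the light
$currentState = (Invoke-RestMethod -Method Get -Uri $baseUri).state.on

# Reverse it: ON becomes OFF and vice versa
$body = @{ on = -not $currentState } | ConvertTo-Json
Invoke-RestMethod -Method Put -Uri "$baseUri/state" -Body $body
```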

Flipping the state of a light

If you have configured everything correctly your light will change and you will get a success reply with the state it transitioned to.

Reverse Light State.PNG

Controlling Multiple Lights

Now let's do that for multiple lights. In my kitchen we have three drop lights over the counter bench; let's control all three of those. I created a collection of the light numbers, then iterated through each one and flipped its state. NOTE: you can also control multiple lights through the Groups method, but I won't be covering that here.
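That iteration can be sketched like this (bridge IP, username and light numbers are placeholders):

```powershell
# Sketch: flip the state of each light in a collection (all values are placeholders)
$bridgeIP = ""
$username = "<your API username>"
$lights   = @('5','6','7')

foreach ($light in $lights) {
    $uri = "http://$bridgeIP/api/$username/lights/$light"
    # Read the current state, then reverse it
    $state = (Invoke-RestMethod -Method Get -Uri $uri).state.on
    Invoke-RestMethod -Method Put -Uri "$uri/state" -Body (@{ on = -not $state } | ConvertTo-Json)
}
```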

I had one set in an inverse state to the other two before I started, to show each is individually updated.

Reverse Multiple Lights State.PNG

Controlling Multiple Lights and Changing Colors

Now let's change the color. Turn the lights on if they aren't already and make the color PINK.
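Hue colors are set via hue/saturation/brightness values in the state body. A sketch follows; the hue value 56100 is my approximation of pink, and the bridge IP, username and light numbers are placeholders.

```powershell
# Sketch: turn each light on and set it to (approximately) pink
$bridgeIP = ""
$username = "<your API username>"

foreach ($light in @('5','6','7')) {
    $body = '{"on":true,"hue":56100,"sat":254,"bri":254}'
    Invoke-RestMethod -Method Put -Uri "http://$bridgeIP/api/$username/lights/$light/state" -Body $body
}
```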

As you can see, iterating through the lights the script turns them on and makes them Pink.

Turn Lights On and make them PINK.PNG

Finally, effects for multiple lights

Now let's turn on all the lights, set them to use the color loop effect (transitioning through the color spectrum) for 15 seconds, and then make them all pink.
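The colorloop effect is enabled and disabled via the effect property in the state body. A sketch (bridge IP, username, light numbers and the pink hue value are placeholders/approximations):

```powershell
# Sketch: run the colorloop effect for 15 seconds, then settle on pink
$bridgeIP = ""
$username = "<your API username>"
$lights   = @('5','6','7')

foreach ($light in $lights) {
    Invoke-RestMethod -Method Put -Uri "http://$bridgeIP/api/$username/lights/$light/state" `
        -Body '{"on":true,"effect":"colorloop"}'
}

Start-Sleep -Seconds 15

foreach ($light in $lights) {
    Invoke-RestMethod -Method Put -Uri "http://$bridgeIP/api/$username/lights/$light/state" `
        -Body '{"effect":"none","hue":56100,"sat":254}'
}
```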

The lights transition through the color spectrum for 15 seconds then the effect is turned off and the color is set to pink.

Turn Lights On Color Effect and make them PINK.PNG


We created an account on the Philips Hue Bridge, were able to enumerate the lights that we have and then orchestrate them via PowerShell.

Here is a short video showing three lights being turned on, changing color and iterating through the color spectrum then setting them Pink.

Now to integrate them with other IoT Devices.

A synopsis of my first Microsoft (MVP) Summit

Last week I attended my first Microsoft Most Valuable Professional (MVP) Summit. Compared to a lot of the conferences I’ve been to over the years this was tiny with just over 2000 attendees. The difference however is that every attendee is an expert in their field (associated with at least one Microsoft technology) and they come from over 80 countries. It is the most diverse mix of attendees for the number of participants.

The event is also not the typical tech type conference that provides you details on current trends, public road maps and guidance on how to implement or migrate technology. Instead it is a look behind the development curtain and an almost fully transparent dialogue with the product and engineering teams determining and building the future for each technology stream. It also isn't held at a sterile function center. It's held on site at Microsoft's headquarters in Redmond, Washington. Everywhere you look you can find nuggets of Microsoft's history. Nightly activities are predominantly centered around Bellevue (a short distance from Redmond).


My MVP award is associated with Identity & Access. Internally at Microsoft they refer to the small number of us in that category as Identity MVPs. I spent the week in deep technical sessions around Identity and Access Management, getting insights into the short, medium and longer term plans for all things Identity & Access Management related and conversing with my peers. I can't say more than that, as that level of insight is a privilege only possible through a strict and enforced NDA (Non-Disclosure Agreement) between each MVP and Microsoft.


I thoroughly enjoyed my first MVP Summit. I reconnected with a number of old colleagues and acquaintances and made a bunch of new connections, both within Microsoft and the Identity MVP community. It has given me a vision of what's coming that will be directly applicable to many of the longer term projects I'm currently designing. It definitely filled in the detail between the lines of recent Microsoft announcements in the Identity and Access Management space.

Want to become an MVP? Looking to know what it takes to be awarded MVP status? Want a full rundown of the benefits? Check out this three-part blog post (starting here) by Alan about the MVP program.

Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager v2, k-Anonymity and Have I Been Pwned



In August 2017 Troy Hunt released a sizeable list of Pwned Passwords. 320 million, in fact.

I subsequently wrote this post on Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager, which called the API and set a boolean attribute in the MIM Service that could be used with business logic to force users with compromised passwords to change them on next logon.

Whilst that was a proof of concept/discussion point of sorts, and I had a disclaimer about sending passwords across the internet to a third-party service, there was a lot of momentum around the HIBP API, so I developed a solution and wrote this update to check the passwords locally.

Today Troy has released v2 of that list and updated the API with new features and functionality. If you’re playing catch-up I encourage you to read Troy’s post from August last year, and my two posts about checking Active Directory passwords against that list.

Leveraging V2 (with k-Anonymity) of the Have I Been Pwned API

With v2 of the HIBP password list and API, the number of leaked credentials in the list has grown to half a billion. 501,636,842 Pwned Passwords, to be exact.

With the v2 list, and in conjunction with Junade Ali from Cloudflare, the API has been updated so it can be queried with a level of anonymity. Instead of sending a SHA-1 hash of the password you're checking, you can now send just a truncated prefix of that SHA-1 hash, and the HIBP v2 API returns the set of hash suffixes that share that prefix; the match is then checked locally. This is done using a concept called k-anonymity, detailed brilliantly here by Junade Ali.

v2 of the API also returns a count for each password in the list: basically, how many times the password has previously been seen in leaked credential lists. Brilliant.

Updated Pwned PowerShell Management Agent for Pwned Password Lookup

Below is an updated Password.ps1 script for my Pwned Password Management Agent for Microsoft Identity Manager, replacing the one written for the previous API version. It functions by;

  • taking the new password received from PCNS
  • hashing the password to SHA-1 format
  • looking up the v2 HIBP API using part of the SHA-1 hash
  • updating the MIM Service with the Pwned Password status
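The k-anonymity lookup at the heart of those steps can be sketched as follows. This is illustrative only, not the MA script itself: the example password and output handling are assumptions.

```powershell
$password = 'P@ssw0rd'      # example only

# SHA-1 hash the password, then split into a 5-character prefix and the remaining suffix
$sha1   = [System.Security.Cryptography.SHA1]::Create()
$bytes  = [System.Text.Encoding]::UTF8.GetBytes($password)
$hash   = ([System.BitConverter]::ToString($sha1.ComputeHash($bytes))) -replace '-', ''
$prefix = $hash.Substring(0, 5)
$suffix = $hash.Substring(5)

# Query the range API with only the prefix; the password itself never leaves the machine
$response = Invoke-RestMethod -Uri "$prefix"

# Check the returned suffixes locally for a match
$match = ($response -split "`r`n") | Where-Object { $_ -match "^${suffix}:" }
if ($match) { "Pwned. Seen $(($match -split ':')[1]) times." } else { "Not in the list." }
```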

Check out the original post with all the rest of the details here.


Of course you can also download (recommended via torrent) the Pwned Passwords dataset. Keep in mind that the compressed dataset is 8.75 GB, and 29.4 GB uncompressed. Convert that into on-premises SQL table(s), as I did in the post linked at the beginning of this post, and you'll need well in excess of that.

Awesome work from Troy and Junade.


Automating the submission of WordPress Blog Posts to your Microsoft MVP Community Activities Profile using PowerShell



In November last year (2017) I was honored to be awarded Microsoft MVP status for Enterprise Mobility – Identity and Access. MVP status is awarded based on community activities, and even once you've attained it you need to keep your community activity contributions updated on your profile.

Up until recently this was done by accessing the portal and updating your profile manually; however, mid last year an MVP PowerShell module (big thanks to Francois-Xavier Cat and Emin Atac) was released that allows for some automation.

In this post I'll detail using the MVP PowerShell module to retrieve your latest WordPress blog post and submit it to your MVP profile as an MVP Community Contribution.


In order for this to work you will need;

  • to be a Microsoft MVP
  • Register at the MS MVP API Developer Portal using your MVP registered email/profile address
    • subscribe to the MVP Production Application
    • copy your API key (you’ll need this in the script below)

The Script

Update the script below for;

  • MVP API Key
    • Update Line 5 with your API key as detailed above
  • WordPress Blog URL
    • Update Line 11 with your WP URL. $wordpressBlogURL = ''
  • Your Award Category
    • Update Line 17 with your category: $contributionTechnology = "Identity and Access"
    • type New-MVPContribution -ContributionTechnology and you'll get a list of the MVP Award Categories

Award Categories.PNG

The Script

Here is the simple script. Run it after publishing a new entry that you also want added to your Community Contributions profile. No error handling, nothing complex, but you're an MVP, so you can plagiarize away for your submissions to make it do what suits your needs.
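The shape of the flow can be sketched as below. This is a hedged outline: Set-MVPConfiguration and the New-MVPContribution parameter names are assumptions based on the MVP module, and the wp-json route is the standard WordPress REST API path.

```powershell
# Sketch of the submission flow; cmdlet parameters are assumptions
Import-Module MVP
Set-MVPConfiguration -SubscriptionKey '<your API key>'

# Get the most recent post from the blog via the WordPress REST API
$latest = Invoke-RestMethod -Uri "$wordpressBlogURL/wp-json/wp/v2/posts?per_page=1" | Select-Object -First 1

# Submit it as a community contribution
New-MVPContribution -ContributionType 'Blog Site Posts' `
    -ContributionTechnology 'Identity and Access' `
    -StartDate $ `
    -Title $latest.title.rendered `
    -ReferenceUrl $ `
    -Description 'Blog post'
```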


Hopefully that makes it simple to keep your MVP profile up to date. If you're using a different blogging platform, I'm sure the basic process will work with a few tweaks to the query that retrieves the content. Enjoy.

Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code

The creation of an Azure IoT Hub is quick and simple, either through the Azure Portal or using PowerShell. But what can get more time-consuming is the registration of IoT Devices with the IoT Hub and generation of SAS Tokens for them for authentication.

In my experiments with micro-controllers and their integration with Azure IoT services, I often found I kept having to manually do tasks that should just have been automated. So I automated them. In this post I'll cover using PowerShell to;

  • create an Azure IoT Hub
  • register an Azure IoT Device
  • generate a SAS Token for the IoT Device to use for authentication to an Azure IoT Hub from a Mongoose OS enabled ESP8266 micro controller

IoT Integration


In order to fully test this, ideally you will have a micro-controller. I’m using an ESP8266 based micro-controller like this one. If you want to test this out without physical hardware, you could generate your own DeviceID (any text string) and use the AzureIoT Library detailed further on to send MQTT messages.

You will also require an Azure subscription. I detail using a Free Tier Azure IoT Hub, which is limited to 8,000 messages per day. And instead of PowerShell/PowerShell ISE, get and use Visual Studio Code.

Finally you will need the AzureRM and AzureIoT PowerShell modules. With WMF 5.x you can get them from the PowerShell Gallery with;

Install-Module AzureRM
Install-Module AzureIoT

Create an Azure IoT Hub

The script below will create a Free Tier Azure IoT Hub. Change the location (line 15) for which Azure Region you will use (the commands on the lines above will list what regions are available), the Resource Group Name that will be created to hold it (line 18) and the name of the IoT Hub (line 23) and let it rip.
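A condensed sketch of what that script does is below. The resource group name, hub name and region are illustrative, not the values from the script; the F1 SKU is the Free Tier.

```powershell
# Sign in and list the available regions (names below are illustrative)
Login-AzureRmAccount
Get-AzureRmLocation | Select-Object Location

# Create a Resource Group, then a Free Tier (F1) IoT Hub inside it
New-AzureRmResourceGroup -Name 'IoT-Dev-RG' -Location 'westus2'
New-AzureRmIotHub -ResourceGroupName 'IoT-Dev-RG' -Name 'myFreeIoTHub' -SkuName 'F1' -Units 1 -Location 'westus2'
```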

From your micro-controller we will need the DeviceID. I’m using the ID generated by the device which I obtained from the Device Configuration => Expert View of my Mongoose OS enabled ESP8266.

Device Config.PNG

Register the IoT Device with our Azure IoT Hub

Using the AzureIoT PowerShell module we can automate the creation/registration of the IoT device. Update the script below with the name of your IoT Hub and the Resource Group you created earlier to contain it (lines 7 and 11). Update line 21 with the DeviceID of your new IoT device. With WMF 5.x you can install the AzureIoT module quickly from the gallery with install-module AzureIoT

Looking at our IoTHub in the Azure Portal we can see the newly registered IoT Device.


Generate an IoT Device SAS Token

The final step is to create a SAS token for our IoT device to use to connect to the Azure IoT Hub. Historically you would use the IoT Device Explorer to do that. Alternatively, you can use the code samples to implement SAS device token generation via an Azure Function App; examples exist for JavaScript and C#. However, as of mid-January 2018 you can do it directly from VS Code or the Azure Cloud Shell using the Azure CLI and the IoT extension. I'm using this method here as it is the quickest and simplest way to generate the device SAS token.

The command to generate a token that would work for all Devices on an IoT Hub is

az iot hub generate-sas-token --hub-name

Here I show executing it via the Azure Cloud Shell after installing the IoT extension as detailed here. To open the Bash Cloud Shell, select the >_ icon next to the notification bell in the top-right menu.

Generate IOT Device SAS Token.PNG

As we have done everything else via PowerShell and VS Code, we can also do it easily from VS Code. Install the Azure CLI Tools (v0.4.0 or later) in VS Code as detailed here. Then from within VS Code press Control + Shift + P to open the Command Palette and enter Azure: Sign In. Sign in to Azure. Then Control + Shift + P again and enter Azure: Open Bash in Cloud Shell to open a Bash Azure CLI shell. You can check whether you already have the Azure CLI IoT extension (if you've previously used the Azure CLI for IoT operations) by typing;

az extension show --name azure-cli-iot-ext

and install it if you don’t with;

az extension add --name azure-cli-iot-ext

Then run the same command from VS Code to generate the SAS Token

az iot hub generate-sas-token --hub-name

VSCode Generate SAS Token.PNG

NOTE: that token can then be used for any device registered with that IoT Hub. Best practice is to have a token per device. To do that type

az iot hub generate-sas-token --hub-name  --device-id

Generate SAS Token VS Code Per Device.PNG

By default you will get a token valid for one hour. Use the --duration switch to specify the duration (in seconds) of the token you require for your environment.
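For example, a per-device token valid for 24 hours would look like the following (hub and device names are placeholders):

```
az iot hub generate-sas-token --hub-name {iothub-name} --device-id {device-id} --duration 86400
```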

We can now take the SAS Token and put it into our MQTT Config on our Mongoose OS IoT Device. Update the Device Configuration using Expert View and Save.

Mongoose SAS Config.PNG

We can then test our IoT Device sending updates to our Azure IoT Hub. Update Init.js using the telemetry sample code from Mongoose.


let topic = 'devices/' + Cfg.get('') + '/messages/events/';

// Publish free RAM telemetry every second
Timer.set(1000, true /* repeat */, function() {
 let msg = JSON.stringify({ ram: Sys.free_ram() });
 let ok = MQTT.pub(topic, msg, 1);
 print(ok, topic, '->', msg);
}, null);

We can then see the telemetry being sent to our Azure IoT Hub using MQTT. In the device logs, after the datestamp and before device/, if you see a 0 instead of a 1 (as shown below) then your connection information or SAS token is not correct.

Mongoose IOT Events.png

On the Azure IoT side we can then check the metrics and see the incoming telemetry using the counter Telemetry Metrics Sent, as shown below.

Telemetry Metrics Sent.PNG

If you don’t have an IoT Device you can simulate one using PowerShell. The following example shows sending a message to our IoT Hub (using variables from previous scripts).

# Get the Device Key
$deviceParams = @{
 iotConnString = $IoTConnectionString
 deviceId = $deviceID
}
$deviceKeys = Get-IoTDeviceKey @deviceParams
# Get Device
$device = Get-IoTDeviceClient -iotHubUri $IOTHubDeviceURI -deviceId $deviceID -deviceKey $deviceKeys.DevicePrimaryKey

# Send Message (splat the parameters)
$deviceMessageParams = @{
 deviceClient = $device
 messageString = "Azure IOT Hub"
}
Send-IoTDeviceMessage @deviceMessageParams


Using PowerShell we have quickly been able to;

  • Create an Azure IoT Hub
  • Register an IoT Device
  • Generate the SAS Token for the IoT Device to authenticate to our IoT Hub with
  • Configure our IoT Device to send telemetry to our Azure IoT Hub and verify integration/connectivity

We are now ready to implement logic onto our IoT Device for whatever it is you are looking to achieve.

New Laptop time. What do I need and what did I buy?

I joined the IT industry as a full-time career in January 1992. It's now January 2018, and in June last year I bought my very first laptop. WTF? 26 years and you've never bought a laptop? Yep. For all of my career I've worked for IT integrators and have been supplied with the core equipment required to perform my role, which obviously includes a laptop. I've had a myriad of them, and have even performed evaluations to recommend what to purchase for technical staff. From memory, I've used laptops from vendors such as Compaq, AST, HP, IBM (ThinkPad), Dell, Microsoft (Surface RT/Pro) and Lenovo (ThinkPad) for extended periods.

So when it came time to purchase my own, now that I (and a lot of us) are in the BYOD (Bring Your Own Device) cycle and we're in a highly IaaS/SaaS/PaaS world, what do you buy? How do you work these days and what do you need from your daily interface into the world of an IT/Identity professional? Here is the process I went through when evaluating my purchase last year.

What don’t I do anymore?

  • I don’t run virtual machines anymore on my laptop. I used to do that a lot, either for customer pre-sale demo’s or for development of solutions.
  • I don’t have all my documents located on my laptop anymore. They are in a cloud service and I can access them quickly when required which is pretty irregular. Everything I require regularly is sync’d.

What do I do?

  • I use the Microsoft Office 365 Suite along with Visio and Project. Between those and a browser, that is my regular day.
  • For development I primarily have virtual machines in Azure with the environment(s) required. These replaced running VM’s on my laptop.

What did I want?

In essence what I considered mandatory were;

  1. Life of the unit needs to be 3 yrs: 3 yrs of fully functioning everyday use, not 2 yrs and limping along for the 3rd.
    • This fed into decision points for processor and RAM requirements. Keep in mind that a number of laptops will have to remain at the spec you purchase them at; you cannot increase the amount of RAM, or replace the battery, hard disk etc. If you think you'll need 16Gb of RAM, you need to buy it with that now.
  2. The laptop and power supply must be slimline and light being that it gets lugged around everywhere and often counted against me for weight on carry-on airlines in Australia and New Zealand. As a sidenote I also wanted to downsize my daily work bag.
    • This fed into decision points for screen size and laptop type: convertibles, 2-in-1s etc.
  3. Budget
    1. Having not purchased a laptop recently (or ever), I didn't really have a grasp of what you got for your $$$. Ideally I'd love to spend only $2000, and I figured I should be able to get something better than what I had that was reaching end of life.
      • I had been using a Lenovo X1 Carbon for the last 3yrs. It had an i5 Proc, 8Gb RAM, 160Gb HDD, touch screen and finger scanner. Ideally I just wanted to double all of that with its successor.
  4. Optional/Nice to have
    • Stylus, pencil pointing drawing thing. Not sure I’ll use it, but would consider it
    • 4/5G SIM port. As a consultant I’m always mobile. Having the SIM in the device would be great rather than having another unit in the bag each day as a mobile access point

What did I find?

I found out pretty quickly that when you hit the latest i7 processor, 16Gb of RAM and 512Gb SSD spec machines, you are instantly in AU$2500-AU$3000 territory.

Looking at what I could get for that, from manufacturers I'd had positive experiences with, left me with a pretty short list:

  1. Lenovo X1 Carbon Gen5
    • This unit was available and had most of what was on my list. The gap, though, was that it wasn't touch screen.
  2. Dell XPS
    • I’ve had positive and negative experiences with Dell over the years. A number of my colleagues are using the Dell XPS series of machines with varying degrees of reliability.
  3. Lenovo Yoga 910
    • A hybrid of tablet and laptop. I was skeptical about the hinge, but research showed it had been around for some time and a few colleagues had them. One colleague had recently had a hot beverage incident with his former laptop, had replaced it with a Yoga 910, and was loving it.


What did I get?

I purchased the Lenovo Yoga 910. I managed to get a deal on it in Platinum (silver over black) with the latest i7 Proc, 16Gb RAM and 1Tb of SSD. It had touch screen, finger scanner and with a deal and coupon actually hit my original thought estimates on pricing.

Did I get what I wanted/needed? Happy?

I wrote all the above when I originally made my purchase decision in June last year. Six months on, what are my thoughts?

This is a solid workhorse. It is light and is a great form factor. The finger scanner is quick and accurate, much more than the one from the X1 Carbon. It is fast and quiet. The 4k screen is fantastic. The coating on the screen leaves less of my finger prints on it too. Whilst I don’t regularly flip it over into tablet mode, I do when I’m commuting and not on crowded transport.

If there is one fault I'd have to point out, it's the battery life. I was dubious about the claimed battery life of 9+ hours, and maybe I didn't pay close enough attention to the conditions under which that'd be realised. Essentially I'm getting around 4-5 hours of battery life with normal use. That is still better than my old Lenovo X1 Carbon. I do see 9+ hours estimated, but that's when all I'm doing is typing and it's in flight mode. That's not normal operation for me.

Another important item to keep in mind is that the unit has two USB-C ports (one of which will normally have the power pack plugged in) and one USB 3.0 port. This means you need to think about the devices you normally plug into your laptop: external mouse (non-Bluetooth or wired), keyboard, docking station, monitor(s) etc. I use a Bluetooth headset and mouse, but often present at customer sites via VGA or HDMI, so I had to buy USB-C to VGA/HDMI dongles to be able to continue doing that.

All in all, I’m very happy with it. I’d happily recommend the Yoga 910.

Automating the generation of Microsoft Identity Manager Configuration Documentation


Last year Microsoft released the Microsoft Identity Manager Configuration Documenter which is available here. It is a fantastic little tool from Microsoft that supersedes its predecessor from the Microsoft Identity Manager 2003 Resource Toolkit (which only documented the Sync Server Configuration).

Running the tool (a PowerShell module) against a base out-of-the-box reference configuration for FIM/MIM servers, reconciled against configuration exported from an implementation's MIM Sync and Service servers, generates an HTML report that details the existing configuration of the MIM Service and MIM Sync.


Last year I wrote this post based on an automated solution I implemented to perform nightly backups of a FIM/MIM environment during development.

This post details how I've automated another daily task for a large development environment where a number of changes are going on, and I wanted documentation generated that detailed the configuration for each day: partly to quickly work out what has changed when needing to roll back/re-validate changes, and also to keep the individual configs from each day so they could be used if we need to roll back.

The process uses an Azure Function App that uses Remote PowerShell into MIM to;

  1. Leverage a modified (streamlined) version of my nightly backup Azure Function to generate the Schema.xml and Policy.xml MIM Service configuration files, and the Lithnet MIIS Automation PowerShell module installed on the MIM Sync Server to export the MIM Sync Server configuration
  2. Create a sub-directory for each day under the MIM Documenter Tool to hold the daily configs
  3. Execute the generation of the Report and have the Report copied to the daily config/documented solution

Obtaining and configuring the MIM Configuration Documenter

Download the MIM Configuration Documenter from here and extract it to somewhere like c:\FIMDoco on your FIM/MIM Sync Server. In this example in my Dev environment I have the MIM Sync and Service/Portal all on a single server.

Then update the Invoke-Documenter-Contoso.ps1 (or whatever you’ve renamed the script to) to make the following changes;

  • Update the following lines for your version, include the new variable $schedulePath and add it to the $pilotConfig variable. Create the C:\FIMDoco\Customer and C:\FIMDoco\Customer\Dev directories (replace Customer with something appropriate).
######## Edit as appropriate ####################################
$schedulePath = Get-Date -format dd-MM-yyyy
$pilotConfig = "Customer\Dev\$($schedulePath)" # the path of the Pilot / Target config export files relative to the MIM Configuration Documenter "Data" folder.
$productionConfig = "MIM-SP1-Base_4.4.1302.0" # the path of the Production / Baseline config export files relative to the MIM Configuration Documenter "Data" folder.
$reportType = "SyncAndService" # "SyncOnly" # "ServiceOnly"
  • Comment out the Host Settings lines, as these won’t work via a WebJob/Azure Function
#$hostSettings = (Get-Host).PrivateData
#$hostSettings.WarningBackgroundColor = "red"
#$hostSettings.WarningForegroundColor = "white"
  • Comment out the last line, as the script will be executed as part of the automation and we want it to complete silently at the end.
# Read-Host "Press any key to exit"

It should then look something like this;
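In place of a screenshot, a sketch of the edited sections of the script is below. This simply combines the edits listed above; the ellipsis stands in for the documenter execution logic that sits between them.

```powershell
######## Edit as appropriate ####################################
$schedulePath = Get-Date -Format dd-MM-yyyy
$pilotConfig = "Customer\Dev\$($schedulePath)"   # Pilot / Target config export path, relative to the Documenter "Data" folder
$productionConfig = "MIM-SP1-Base_4.4.1302.0"    # Production / Baseline config export path, relative to the Documenter "Data" folder
$reportType = "SyncAndService"                   # or "SyncOnly" / "ServiceOnly"

# Host settings commented out - they don't work from a WebJob / Azure Function
#$hostSettings = (Get-Host).PrivateData
#$hostSettings.WarningBackgroundColor = "red"
#$hostSettings.WarningForegroundColor = "white"

# ... documenter execution ...

# Commented out so the automated run completes silently
# Read-Host "Press any key to exit"
```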

Azure Function to Automate execution of the Documenter

As per my nightly backup process;

  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 6am every morning using the following CRON expression (Azure Functions NCRONTAB format: {second} {minute} {hour} {day} {month} {day-of-week})
0 0 6 * * *
  • I also needed to increase the timeout for the Azure Function, as generating the configuration files and executing the report exceeded the default timeout of 5 mins in my environment (19 Management Agents). I increased the timeout to the maximum of 10 mins as detailed here, essentially adding the following to the host.json file in the wwwroot directory of my Function App.
 {
   "functionTimeout": "00:10:00"
 }

Azure Function PowerShell Timer Script (Run.ps1)

This is the Function App PowerShell Script that uses Remote PowerShell into the MIM Sync/Service Server to export the configuration using the Lithnet MIIS Automation and Microsoft FIM Automation PowerShell modules.

Note: If your MIM Service is on a different host you will need to install the Microsoft FIM Automation PowerShell Module on your MIM Sync Server and update the script below to change references to http://localhost:5725 to whatever your MIM Service host is.
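The script itself was published as an embedded gist; the outline below is a hedged reconstruction of its overall shape, not the published script. Export-FIMConfig and ConvertFrom-FIMResource are real FIM Automation cmdlets, but the hostname, paths and documenter script name are placeholders, and the exact Lithnet MIIS Automation cmdlet for the Sync export depends on your module version.

```powershell
# Hedged outline of Run.ps1 - not the published script
$secPwd  = ConvertTo-SecureString $env:MIMSyncCredPassword -AsPlainText -Force
$creds   = New-Object System.Management.Automation.PSCredential ($env:MIMSyncCredUser, $secPwd)
$session = New-PSSession -ComputerName 'mimsyncserver' -Credential $creds -UseSSL  # placeholder hostname

Invoke-Command -Session $session -ScriptBlock {
    Import-Module FIMAutomation
    $outPath = "C:\FIMDoco\Data\Customer\Dev\$(Get-Date -Format dd-MM-yyyy)"
    New-Item -ItemType Directory -Path $outPath -Force | Out-Null

    # MIM Service: Policy.xml and Schema.xml (change localhost if the Service is on another host)
    Export-FIMConfig -Uri 'http://localhost:5725' -PolicyConfig -PortalConfig |
        ConvertFrom-FIMResource -File "$outPath\Policy.xml"
    Export-FIMConfig -Uri 'http://localhost:5725' -SchemaConfig |
        ConvertFrom-FIMResource -File "$outPath\Schema.xml"

    # MIM Sync: export the server configuration using the Lithnet MIIS Automation module
    # (check the cmdlets available in your version: Get-Command -Module LithnetMiisAutomation)

    # Generate the report and copy it next to today's configs
    & 'C:\FIMDoco\Invoke-Documenter-Customer.ps1'
    Copy-Item 'C:\FIMDoco\Report' $outPath -Recurse -Force
}
Remove-PSSession $session
```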

Testing the Function App

With everything configured, manually running the Function App and checking the output window will show success in the Logs (assuming you’ve configured everything correctly), as shown below. In this environment with 19 Management Agents it takes 7 minutes to run.

[Image: Running the Azure Function]

The Report

The outcome every day just after 6am is that I have (via automation);

  • an export of the Policy and Schema configuration from my MIM Service
  • an export of the MIM Sync Server configuration (the Metaverse and all Management Agents)
  • the MIM Configuration Documenter Report generated
  • the ability to roll back changes on a daily interval (either a MIM Service change or an individual Management Agent change)

Under the c:\FIMDoco\Data\Customer\Dev\\Report directory is the HTML Configuration Report.

[Image: Report Output]

Opening the report in a browser we have the configuration of the MIM Sync and MIM Service.



Checking and patching your Microsoft Windows computer for Meltdown and Spectre


In mid 2017 a Google team named Project Zero identified vulnerabilities in many Intel, AMD and ARM CPUs that allow speculative execution of code to be abused. Speculative execution aids performance, which is why it exists. When used maliciously, however, it allows an attacker to use JavaScript in a webpage to access memory that could contain information present in a user’s environment, such as keystrokes, passwords and other sensitive personal information.

A very good overview on the how (and a little of the why) is summarised in a series of tweets by Graham Sutherland here.


In the January Security updates Microsoft has provided updates to protect its operating systems (Windows 7 SP1 and later); more on this below. Microsoft has also provided a PowerShell module to inspect and report on the status of a Windows operating system.

What you are going to need to do is patch your Windows Operating System and update your computers firmware (BIOS).

Using an Administrative PowerShell session on a Windows workstation with Windows Management Framework 5.x installed, the following three lines will download and install the PowerShell module, import it, and execute it to report on the status.

Install-Module SpeculationControl
Import-Module SpeculationControl
Get-SpeculationControlSettings

The output below shows that the operating system does not contain the updates for the vulnerability.

[Image: PowerShell Check]

Obtaining the Windows Security Updates

Microsoft included updates for its operating systems (Windows 7 SP1 and newer) on January 3 2018 in the January update as shown below. They can be obtained from the Microsoft Security Portal here. Search for CVE-2017-5715 to get the details.


Go to the Microsoft Update Catalog to obtain the update individually.

The quickest and easiest though is to press your Windows Key, select the Gear (settings) icon, Update & Security, Windows Update.

[Image: Update & Security]

Check status, install the updates, and restart your Windows computer.

[Image: Windows Update]

Speculation Control Status

After installing the updates and restarting the computer we can run the check again. It now shows we are partially protected. Protected for Meltdown but partially protected for Spectre. A BIOS update is required to complete the mitigation for Spectre.
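Rerunning the check is just a matter of opening a new elevated PowerShell session and running the cmdlet again. The field names below are from the SpeculationControl module as at January 2018; the interpretation comments are my summary.

```powershell
Import-Module SpeculationControl
Get-SpeculationControlSettings

# Fields of interest in the output:
#   BTIHardwarePresent                     - Spectre variant 2: true only after a firmware/BIOS update
#   BTIWindowsSupportPresent / Enabled     - Spectre variant 2: OS support installed and active
#   KVAShadowWindowsSupportPresent / Enabled - Meltdown: OS support installed and active
```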

[Image: Rerun PowerShell Check]

I obtained the latest BIOS for my laptop from the manufacturer’s support website. If you are also on a Lenovo Yoga 910, that is here. However, the latest Lenovo firmware doesn’t yet include updates for this vulnerability, and my particular model of laptop isn’t listed as being affected. I’ll keep checking to see if that changes.


In Microsoft environments your patching strategy will get you most of the way with the Microsoft January Security updates. BIOS updates to your fleet will take additional planning and effort to complete.