Using AutoRest for PowerShell to generate PowerShell Modules

AutoRest for PowerShell

Recently the beta of the AutoRest for PowerShell generator was made available. At the recent Microsoft MVP Summit in Seattle, Garrett Serack gave those who were interested a one-hour corridor session on getting started with AutoRest for PowerShell.

AutoRest is a tool that generates client libraries for accessing RESTful web services. Microsoft is moving towards using AutoRest to generate SDKs for its APIs in each of the standard languages it provides SDKs for. In addition, the AutoRest for PowerShell generator aims to automate the generation of PowerShell modules for Azure APIs.

In this post I’ll give an intro to getting started with AutoRest to generate PowerShell modules using an AutoRest example and then an Azure Function based API.

AutoRest for PowerShell Prerequisites

AutoRest is designed to be cross-platform. As such its dependencies are NodeJS, PowerShell Core and .NET Core.

AutoRest for PowerShell Installation

Installation is done via the command line once you have NodeJS installed and configured:

npm install -g autorest@beta

If you have previously installed AutoRest you will want to update to the latest version, as updates and fixes are released regularly:

autorest --reset

AutoRest Update.PNG

Building your first AutoRest for PowerShell Module

The AutoRest documentation has a couple of examples that are worth using initially to get to know the process and the expected output from AutoRest.

From a PowerShell Core command prompt (type pwsh in a command window) make the following Invoke-WebRequest (IWR) to retrieve the XKCD Swagger/OpenAPI file. I pre-created a sub-directory named XKCD under the directory where I ran the following command.

iwr https://raw.githubusercontent.com/Azure/autorest/master/docs/powershell/samples/xkcd/xkcd.yaml -outfile ./xkcd/xkcd.yaml

Download XKCD Yaml.PNG

We can now start the process to generate the PowerShell module. Change directory to the directory where you put the xkcd.yaml file and run the following command.

autorest --powershell --input-file:./xkcd.yaml

AutoRest Generate Libs.PNG

Following a successful run, a generated folder will be created. We can now build the PowerShell module. Adding the -test flag means that once the module is generated, Pester will be used to test it. First run PowerShell Core (pwsh), then build-module:

pwsh 
./generated/build-module.ps1 -test

AutoRest - Build Module.PNG

With the module built and the tests passed, we can run it. Load the module and look at the cmdlets that have been generated for the API:

.\generated\run-module.ps1 xkcd.psm1
get-command -module xkcd

AutoRest - Load Module.PNG

Using the Get-XkcdComicForToday cmdlet we can query the XKCD API and get the Comic of the Day.

Get-XkcdComicForToday | fl

AutoRest - Comic of the Day.PNG

Taking it one step further we can download the comic of the day and open it, with this PowerShell one-liner.

invoke-webrequest (Get-XkcdComicForToday).img -outfile image.png ; & ./image.png

Download and Display XKCD Comic of the Day.PNG

Let’s Create a Simple API – BOFH Excuse Generator

I created an HTTP Trigger based PowerShell API Function that takes GET requests and returns a BOFH – Bastard Operator from Hell (warning: link has audio) style excuse. For those who haven't been in the industry for 20+ years or aren't familiar with the Bastard Operator from Hell, it is essentially a set of (semi) fictional transcripts of user interactions told from the Service Desk Operator's (from Hell's) perspective, set in a university environment. Part of the schtick is an Excuse of the Day. My API, when queried, returns a semi-plausible Excuse of the Day.

Here is my HTTP Trigger PowerShell Azure Function (v1): a simple random excuse generator.
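The function itself is a short script; a minimal sketch of a random excuse generator in the Azure Functions v1 PowerShell model ($req and $res are file paths provided by the runtime, and the excuse list here is purely illustrative, not the original set) looks something like this:

# run.ps1 - HTTP Trigger, Azure Functions v1 (experimental PowerShell)
# $req and $res are file paths provided by the Functions runtime
$excuses = @(
    "solar flares interfering with the network card",
    "static electricity from your nylon carpet",
    "the backup tape is being used as a door stop",
    "someone unplugged the server to plug in the vacuum cleaner"
)

# Pick a random excuse and return it as JSON
$excuse = Get-Random -InputObject $excuses
$response = @{ excuse = $excuse } | ConvertTo-Json

Out-File -Encoding Ascii -FilePath $res -InputObject $response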

I configured the function for anonymous access, GET operations only, and the route to be excuse.

Testing the Function from my local machine showed it was working and returning an excuse.

Now it’s time to generate the OpenAPI Spec. Select the Azure Function App => Platform Features => API definition

Select Generate API definition template and the basics of the OpenAPI spec for the new API will be generated. I then updated it to specify the application/json output and what the API returns, as shown in the highlighted sections below. Select Save.

Now we can test it out and we see that we have success.

Taking the OpenAPI spec (YAML format file) for our BOFH Excuse API, we go through the same steps as before to generate the PowerShell module using AutoRest. Running the freshly baked cmdlet from the BOFH PowerShell module returns a BOFH excuse. Awesome.
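For reference, the sequence mirrors the XKCD example above (bofh.yaml and the generated Get-BofhExcuse cmdlet name are hypothetical placeholders; the actual cmdlet name depends on your OpenAPI spec):

autorest --powershell --input-file:./bofh.yaml
./generated/build-module.ps1 -test
./generated/run-module.ps1 bofh.psm1
Get-BofhExcuse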

Summary

Using AutoRest it is possible to generate PowerShell modules for our APIs. Key to successful generation is the definition of the OpenAPI spec for the API. In my example above, if you don't define what the API call returns, the resulting cmdlet will query the API but won't return the result.

Azure AD/Active Directory User Security Evaluation Reporter

From December 2018 to February 2019, Microsoft ran an online Microsoft Graph Security Hackathon on Devpost.

The criteria of the hackathon was;

  • Build or update a functioning Microsoft Graph-powered solution that leverages the Microsoft Graph Security API

Following the announcement of the hackathon I was encouraged by Kloud management to enter. During the busy month of December I started to formulate a concept for an entry in the hackathon, taking learnings from the hackathon I entered in 2018. Over the Christmas holiday period I started working on my entry, which continued into January and February at nights and on weekends.

Problem

A Security Administrator within an Organisation enables security related configuration options on an Azure Tenant to implement security controls that align an organisation with Microsoft recommendations and best practice.

The Azure Security Score provides an evaluation of an organisation's alignment with best practice; however, to some extent it still requires end users to have the right configuration for security-related elements of their profile. As a Service Desk Operator or Cyber Security Officer, there isn't a single view of a user's security posture that gives you an individual user security score summary. My solution…

Microsoft User Security Evaluation Reporter (USER)

Microsoft User Security Evaluation Reporter (USER) is an Azure AD and Active Directory tool for use by the Service Desk and Cyber Security Officers to get instant visibility of an organisation's Azure Security Score, allowing them to evaluate current risks within an organisation right down to individual users.

When Microsoft USER loads, the current Azure Security Score is retrieved, evaluated and displayed for alignment with Microsoft recommendations. Also on load, the last five active security risk events are displayed.

Microsoft USER Recent Risk Events and Azure Secure Score.PNG

The Service Desk Operator or Cyber Security Officer can select one of the recent security events or search for a user and drill down into the associated identity. They will quickly be able to understand the user's individual security posture relative to best practice.

What are the recent Security Risk Events for that user? For example:

  • Does the user have MFA enabled? Is MFA enabled with an Authenticator App as the primary method?
  • Is the user's Active Directory password in the Pwned Passwords v4 list from Have I Been Pwned?
  • Has the user recently been attempting Azure password reset functions?
  • What are the last 10 logins for that user?
  • What is the base user information for that user, and what devices are registered to them? Are they Azure AD joined?

User Secure Score Summary.PNG

The clip below gives a more detailed walk-through of my Microsoft USER tool.

How I built it

The solution is built using:

  • NodeJS and JavaScript
  • Azure Functions to interface with Azure AD, Microsoft Graph and the Azure Table Service
  • Lithnet Password Protection for Active Directory, which in turn leverages the Have I Been Pwned v4 dataset
  • Azure Key Vault, where all secrets are stored
  • Application Insights, which is enabled on the WebApp
  • A Docker container to deploy the WebApp into Azure App Service

The architecture is shown below.

MS User Security Evaluation Reporter Architecture

The Code

A repo with the code can be found here. Keep in mind I'm not a developer, and this is my first WebApp; it was put together late at night and over weekends and has only been tested in Chrome and Edge. The Readme hopefully contains everything you need to deploy it.

 

An Azure PowerShell Trigger Function for MAC Address Vendor / Manufacturer Lookup

Recently I started working on another side IoT project. As part of that I needed to identify the vendor / manufacturer of networking equipment. As you are probably aware, each network device has a unique MAC address. A MAC address looks like this: 60:5b:b4:f9:63:05. The first 24 bits (6 hex characters) identify the vendor / manufacturer.

There are a number of online lookup tools that determine the vendor from a MAC address, and some, like that one, have an API to allow lookups too. If you are only looking up small volumes that is all good, but beyond that you get into subscription fees. I needed more than 1,000 lookups per day, but I also had a good idea of what the vendors were likely to be for a lot of my requests. So I rolled my own using an Azure Trigger Function.

Overview

The IEEE standards body maintains a list of the manufacturers assigned the 24-bit identifiers. The full list, which is updated regularly, can be found here. I downloaded this list and wrote a simple parser that creates a PowerShell object with the hex, base16 and name of each manufacturer.

I then extracted the manufacturers I expect to need to reference/look up into a PSObject that is easily exportable and importable (Export-Clixml / Import-Clixml) and use that locally in my application. The full list is too large to keep locally, so I exported it (again using Export-Clixml) and implemented the lookup as an Azure Function that reads in the full list as a PSObject (which takes ~1.7 seconds for 25,000+ records). The function can then be queried with either the hex or base16 value, as per the format in the IEEE list, and the vendor name is returned.

Converting the IEEE List to a PowerShell Object

This little script will download the latest version of the OUI list and convert it to a PowerShell object. The resulting object looks like this:

vendor      base16 hex
------      ------ ---
Apple, Inc. F0766F F0-76-6F
Apple, Inc. 40CBC0 40-CB-C0
Apple, Inc. 4098AD 40-98-AD

Update:

  • Line 4 for the local location to output the OUI list to
  • Line 39 for the PSObject file to create
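A minimal sketch of that download-and-parse approach (the IEEE URL and file paths are placeholders, and the line numbers in the update notes above refer to the original script, not this sketch):

# Download the IEEE OUI list and parse it into a PowerShell object
# (URL and output paths are placeholders - adjust as required)
$ouiURL  = "http://standards-oui.ieee.org/oui/oui.txt"
$ouiFile = "C:\temp\oui.txt"
Invoke-WebRequest -Uri $ouiURL -OutFile $ouiFile

$vendors = foreach ($line in (Get-Content $ouiFile)) {
    # Match lines of the form "F0-76-6F   (hex)    Apple, Inc."
    if ($line -match "^\s*([0-9A-F]{2}-[0-9A-F]{2}-[0-9A-F]{2})\s+\(hex\)\s+(.+)$") {
        [pscustomobject]@{
            vendor = $matches[2].Trim()
            base16 = ($matches[1] -replace "-", "")
            hex    = $matches[1]
        }
    }
}

# Export the full list for later import (locally or by the Azure Function)
$vendors | Export-Clixml "C:\temp\Vendors.xml"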

If you want to query the file locally using PowerShell, you can do so like this:

$query="64-70-33"
$result = $vendors | Select-Object | Where-Object {$_.hex -like $query}
$result
which will output:
vendor      base16 hex
------      ------ ---
Apple, Inc. 647033 64-70-33

If you want to extract all entries associated with a hardware vendor (e.g. Apple) you can do so like this:

$apple = $vendors | Select-Object | Where-Object {$_.vendor -like "Apple*"}

And FYI, Apple have 671 registrations. Yes, they make a LOT of equipment.

Azure Function

Here is the Azure Trigger PowerShell Function that takes a JSON object with a query containing the base16 or hex value for the 24-bit vendor / manufacturer identifier and returns the vendor / manufacturer name, e.g.

{"query": "0A-00-27"}

Don’t forget to upload the Vendors.xml exported above to your Azure Function (you can drag and drop using Kudu) and update the path in Line 7.

An example PowerShell script to query would be similar to the following. Update $queryURI with the URI to your Azure Function.

$queryURI = "https://FUNCTIONAPP.azurewebsites.net/api/AZUREFUNCTION?code=12345678/uiEx6kse6NujQG0b4OIcjx6B2wHLhZepBD/8Jy6fFawg=="
$query = "0A-00-27"
$body = @{"query" = $query} | ConvertTo-Json
$result = Invoke-RestMethod -Method Post -Uri $queryURI -Body $body
$result
The output will then be the manufacturer name, e.g.
Microsoft Corporation

To look up the vendors for all MAC addresses on your local Windows computer, the following snippet will do that after updating $queryURI for your Azure Function.

# Query MAC Address
$queryURI = "https://FUNCTIONAPP.azurewebsites.net/api/AZUREFUNCTION?code=12345678/uiEx6kse6NujQG0b4OIcjx6B2wHLhZepBD/8Jy6fFawg=="
$netAdaptors = Get-NetAdapter

foreach ($adaptor in $netAdaptors){
    $mac = $adaptor.MacAddress
    $macV = $mac.Split("-")
    $macLookup = "$($macV[0])$($macV[1])$($macV[2])"
    $body = @{"query" = $macLookup} | ConvertTo-Json
    $result = Invoke-RestMethod -Method Post -Uri $queryURI -Body $body -Headers @{"content-type"="application/text"}
    Write-Host -ForegroundColor Blue $result
}

Summary

With the power of PowerShell it is quick to take a large amount of information and transform it into a usable collection that can then also be quickly exported and re-imported. It is also quickly searchable, and thanks to Azure Functions supporting PowerShell it's simple to stand up the collection and query it as required programmatically.

 

A Voice Assistant for Microsoft Identity Manager

This is the third and final post in my series around using your voice to query/search Microsoft Identity Manager or, as I'm now calling it, the Voice Assistant for Microsoft Identity Manager.

The two previous posts in this series detail some of my steps and processes in developing and fleshing out this Voice Assistant for Microsoft Identity Manager concept. The first post detailed the majority of the base functionality whilst the second post detailed the auditing and reporting aspects into Table Storage and Power BI.

My final architecture is depicted below.

Identity Manager integration with Cognitive Services and IoT Hub 4x3
Voice Assistant for Microsoft Identity Manager Architecture

I’ve put together more of an overview in a presentation format embedded here.

GitPitch Presents: github/darrenjrobinson/MIM-VoiceAssistant/presentation


If you're interested in building the solution, check out the GitHub repo here, which includes the ReSpeaker Python script, Azure Function, etc.

Let me know how you go @darrenjrobinson

Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post that detailed using your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month) but was still incomplete if it was to be used in production within an enterprise. I hinted then that there were additional enhancements I was looking to make. One is an auditing/reporting aspect, and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT Device. As a potential future enhancement this could also be a Bot. For each request I wanted to log/audit;

  • The device the query was initiated from (it is possible to have many IoT devices, physical or bot, leveraging this function)
  • The query
  • The response
  • Date and Time of the event
  • User the query targeted

To achieve this, my solution is:

  • On the IoT device, the query, target user and date/time are held during the query event
  • At the completion of the query, the response along with the earlier information is sent to the IoT Hub using the IoT Hub REST API
  • The event is consumed from the IoT Hub by an Azure Event Hub
  • The message containing the information is processed by Stream Analytics and put into Azure Table Storage and Power BI.

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses. Power BI provides the reporting aspect. These two services provide visibility into what requests have been made, against whom, when, etc. The graphic below shows this in the bottom portion of the image.

Auditing Reporting Searching MIM with Speech.png
Voice Search for Microsoft Identity Manager Auditing and Reporting

Sending IoT Device Events to IoT Hub

I covered this piece in PowerShell in a previous post here. I converted it from PowerShell to Python to run on my device. In PowerShell though, for initial end-to-end testing while developing the solution, building the body of the message and sending it looks like this:

[string]$datetime = get-date
$datetime = $datetime.Replace("/","-")
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body
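For reference, $iotHubRestURI and $Headers are constructed as per that earlier post: the device-to-cloud messaging endpoint with a SAS token in the Authorization header. A rough sketch, with the hub name, device ID and token as placeholders:

# Placeholders - SAS token generation is covered in the earlier post
$iotHub    = "YOURIOTHUB"
$deviceID  = "YOURDEVICEID"
$sasToken  = "SharedAccessSignature sr=..."   # SAS token for the device or an IoT Hub policy

$iotHubRestURI = "https://$($iotHub).azure-devices.net/devices/$($deviceID)/messages/events?api-version=2016-11-14"
$Headers = @{
    "Authorization" = $sasToken
    "Content-Type"  = "application/json"
}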

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

IoTHub Event Hub.PNG
Azure IoT Hub Events

Stream Analytics

I then created a Stream Analytics job and configured two inputs: one from my IoT Hub and one from my Event Hub.

Stream Analytics Inputs.PNG
Azure Stream Analytics Inputs

I then created two outputs: one for Table Storage, for which I used an existing storage account for my solution, and the other for Power BI, using an existing workspace but creating a new dataset. For the Table Storage output I specified deviceId as the Partition key and messageId as the Row key.

Stream Analytics Outputs.PNG
Azure Stream Analytics Outputs

Finally, as I'm keeping the data I'm sending simple, my query basically copies from the inputs to the outputs: one statement gets the events to Table Storage and the other gets them to Power BI. The query therefore looks something like the sketch below (and the screenshot that follows).
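Assuming input aliases of IoTHubInput and EventHubInput and output aliases of TableStorageOutput and PowerBIOutput (the alias names here are assumptions to illustrate the shape), the pass-through query is simply:

SELECT * INTO [TableStorageOutput] FROM [IoTHubInput]
SELECT * INTO [PowerBIOutput] FROM [EventHubInput]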

Stream Analytics Query.PNG
Azure Stream Analytics Query

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data the schema-less Table Storage obliged and dynamically added another column to the table.

Table Storage.PNG
Table Storage Events

A full record looks like this.

Full Record.PNG
Voice Search Table Storage Audit Record

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data. I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT Device it magically showed up in the Power BI Dataset Table.

PowerBI.PNG
PowerBI Voice Search Analytics

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.

Let me know what you think on twitter @darrenjrobinson

Using your Voice to Search Microsoft Identity Manager – Part 1

Introduction

Yes, you've read the title correctly. Speaking to Microsoft Identity Manager. The concept behind this was born off the back of some other work I was doing with Microsoft Cognitive Services. I figured it shouldn't be that difficult if I just broke the concept down into individual elements of functionality and put together a proof of concept to validate the idea. That's what I did, and this is the first post of the series, giving an overview of the solution.

Here’s a quick demo.

Overview

The diagram below details the basis of the solution. There are a few extra elements I’m still working on that I’ll cover in a future post if there is any interest in this.

Searching MIM with Speech Overview

The solution works like this:

  1. You speak a query for Microsoft Identity Manager into a microphone connected to a single board computer
  2. The spoken phrase is converted to text using Cognitive Services Speech to Text (Bing Speech API)
  3. The text phrase is then processed:
    1. it is sent to the Cognitive Services Language Understanding Intelligent Service (LUIS) to identify the target of the query (firstname lastname) and the query entity (e.g. Mailbox)
    2. Microsoft Identity Manager is then queried via API Management and the Lithnet REST API for the MIM Service
  4. The result is returned to the single board computer as a text result phrase, which it converts to audio using Cognitive Services Text to Speech
  5. The result is spoken back

Key Functional Elements

  • The microphone array I’m using is a ReSpeaker Core v1 with a ReSpeaker Mic Array
  • All credentials are stored in an Azure Key Vault
  • An Azure Function App (PowerShell) interfaces with the majority of the Cognitive Services being used
  • Azure API Management is used to front end the Lithnet MIM Webservice
  • The Lithnet REST API for the MIM Service provides easy integration with the MIM Service

Summary

Leveraging a lot of serverless (PaaS) services, a bunch of scripting (Python on the ReSpeaker and PowerShell in the Azure Function) and the Lithnet REST API, it was pretty simple to integrate the ReSpeaker with Microsoft Identity Manager. An alternative to MIM could be any other service you have an API interface into. MIM is obviously a great choice as it can aggregate from many other applications/services.

Why a female voice? Based on a small sample of responses, it was the popular choice.

Let me know what you think on twitter @darrenjrobinson

How to quickly copy Azure Functions between Azure Tenants and implement ‘Run From Zip’

As mentioned in this post yesterday, I needed to copy a bunch of Azure WebApps and Functions from one tenant to another. With anything cloud based, things move fast, and some of the methods I found were too onerous and more complex than they needed to be. There is of course the Backup option for Azure Functions, but it requires a storage account associated with the Function App plan. My Functions didn't need storage, the plan tier they were on meant that wasn't a prerequisite, and I didn't want to add a storage account just to back up and then migrate.

Overview

In this post I show my method to quickly copy Azure Functions from one Azure Tenant to another. My approach is;

  • In the Source Tenant from the Azure Functions App
    • Using Kudu take a backup of the wwwroot folder (that will contain one or more Functions)
  • In the Target Tenant
    • Create an Azure Function App
    • Using Kudu locate the wwwroot archive in the new Azure Function App
    • Configure Azure Function Run From Zip

Backing up the Azure Functions in the Source Tenant

Using the Azure Portal in the Source Tenant go to your Function App => Application Settings and select Advanced Tools. Select Debug Console – Powershell and navigate to the Site Folder. Next to wwwroot select the download icon to obtain an archive of your functions.

Download WWWRoot Folder 2.PNG

Copying the Azure Functions to the Target Tenant

In the Target Tenant first create a New Azure Function App. I did this as I wanted to change the naming, the plan and a few other configuration items. Then using the Azure Portal go to your new Function App, Application Settings and select Advanced Tools.

Function Advanced Tools

Create a folder under D:\home\data named SitePackages.

Create Site Packages Folder

Drag and drop your wwwroot.zip file into the SitePackages Folder.

Drag Drop wwwroot

In the same folder select the + icon to create a file named siteversion.txt

Site Packages

Inside the file, give the name of your archive file, e.g. wwwroot.zip. Select Save.

Siteversion.txt.png

Back in your new Function App select Application Settings

Application Settings

Under Application Settings add a new setting for Website_Use_Zip with a setting value of ‘1’.

Website Use Zip.PNG

Refresh your Function App and you’ll notice it is now Read Only as it is running from Zip. All the Functions that were in the Zip are displayed.

Functions Migrated.PNG

Summary

This is a quick and easy method to get your functions copied from one tenant to another. Keep in mind that if your functions use Application Settings, Key Vaults, or Managed Service Identity type options, you'll need to add those settings, certificates and credentials in the target environment.

Evaluating the migration of Azure Functions to Microsoft Flow – Twitter IoT Integration

 

Introduction

Almost 18 months ago I wrote this post on integrating Twitter with Azure Functions to Tweet IoT data. A derivative of that solution has been successfully running for about the same period. Azure Functions have been bullet proof for me.

After recently implementing Microsoft Flow as detailed in my Teenager Notification Device post here, I started looking at a number of the Azure Functions I have running and considered which would be better suited to being implemented with Flow. What could I simplify by migrating to Microsoft Flow?

The IoT Twitter Function linked above was one of the simpler Functions I had running; I've transposed it and it has been running seamlessly. I chose this particular function to migrate as the actions it was performing were ones that Microsoft Flow supports. Keep in mind (see the Summary) that there isn't a one-size-fits-all answer. Flow and Functions each have their place and often work even better together.

Comparison

Transposing the IoT Twitter Function App to Microsoft Flow provided me with the same outcome; however, the effort to get to that outcome is considerably less. As a quick comparison, below are the key steps I needed to perform with the Azure Function to enable the integration versus what it took to implement with Microsoft Flow.

Function vs Flow.PNG

That’s pretty compelling. For the Azure Function I needed to register an App with Twitter and I needed to create an Azure Function App Plan to host my Azure Function. With Microsoft Flow I just created a Flow.

To set up and configure the Azure Function I needed to set up Deployment Options to upload the Twitter PowerShell module (a third-party module), and I needed to store the two credential sets associated with the Twitter account/app. In Microsoft Flow I just chose Twitter as an action and provided consent to the OAuth2 challenge.

Finally for the logic of the Azure Function I had to write the script to retrieve the data, manipulate it, and then post it to Twitter. In Microsoft Flow it was simply a case of configuring the workflow logic.

Microsoft Flow

As detailed above, the logic is still the same. On a schedule, get the data from the IoT devices via a REST API, manipulate/parse the response and output a tweet with the environment info. Doing that in Flow, though, just means selecting an action and configuring it. No code, no modules, no keys.

Below is the resulting Flow (overview) that achieves the same result as the Azure Function I originally implemented, as detailed here.

MS Flow - Twitter.PNG

The schedule part is triggered hourly. Using Recurrence it is easy to set the schedule (much easier than a CRON format in Azure Functions) complete with timezone (within the advanced section). I then get the Current time to allow me to acquire the Date and Time in a format that I will use in the resulting tweet.

Schedule

Next is the first REST API call to get the data from the first of the IoT devices, and parsing of the JSON response to get the temperature value.

GET

Repeat the above step for the other IoT Device located in a different environment and parse that. Formulate the Tweet using elements of information from the Flow.

Repeat and Tweet

Looking at Twitter we see a resultant Tweet from the Flow.

Tweet.PNG

Summary

This is a relatively simple Flow. Bear in mind I haven't included any logic to validate what is returned or to perform any conditional operations during processing. But very quickly it is possible to retrieve, manipulate and output to a different medium.

So why don't I use Flow for everything? The Teenager Notification Device post I mentioned at the beginning also uses a Flow together with an Azure Function. For that use case the integration of the IoT device with Azure IoT is via MQTT, and there isn't currently that capability in Flow. Instead, Flow was used to trigger an Azure Function that in turn sent an MQTT message to the IoT device. The combination of Flow with Functions provides a lot of flexibility and power.

 

Building a Teenager Notification Service using Azure IoT an Azure Function, Microsoft Flow, Mongoose OS and a Micro Controller

Introduction

This is the third and final post on my recent experiments integrating small micro controllers (ESP8266) running Mongoose OS integrated with Azure IoT Services.

In the first post in this series I detailed creating the Azure IoT Hub and registering a NodeMCU (ESP8266 based) micro controller with it. The post detailing that can be found here: Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code.

In the second post I detailed communicating with the micro controller (IoT device) using MQTT and PowerShell. That post can be found here: Integrating Azure IoT Devices with MongooseOS MQTT and PowerShell.

Now that we have end to end functionality it’s time to do something with it.

I have two teenagers who've been trained well to use headphones. Whilst this is great for not having to hear the popular teen bands of today, and the numerous FaceTime, Skype, Snapchat and similar communications, it does come with the downside of them not hearing us when we require their attention and they are at the other end of the house. I figured that to avoid the need to shout to get attention, a simple visual notification could be built to achieve the desired result. Different colours for different requests? Sure, why not. This is that project, and the end device looks like this.

IoT Notifier using Neopixel
IoT Notifier using Neopixel

Overview

Quite simply the solution goes like this;

  • With the Microsoft Flow App on our phones we can select the Flow that will send a notification
2018-03-25 18.56.38 500px.png
Send IoT Notification Message
  • Choose the Notification intent which will drive the color displayed on the Teenager Notifier.
2018-03-25 18.56.54 500px
IoT Notifier Task Message
  • The IoT Device will then display the color in a revolving pattern as shown below.

The Architecture

The end to end architecture of the solution looks like this.

IoT Cloud to Device - NeoPixel - 640px
IoT Message Cloud to Device

Using the Microsoft Flow App on a mobile device gives a nice way of having a simple interface that can be used to trigger the notification. Microsoft Flow sends the desired message and details of the device to send it to, to an Azure Function that puts a message into an MQTT queue associated with the Mongoose OS driven Azure IoT Device (ESP8266 based NodeMCU micro controller) connected to an Azure IoT Hub. The Mongoose OS driven Azure IoT Device takes the message and displays the visual notification in the color associated with the notification type chosen in Microsoft Flow at the beginning of the process.

The benefits of this architecture are:

  • The majority of the orchestration happens in Azure, yet thanks to Azure IoT and MQTT no inbound connection is required where the IoT device resides. There are no port forwarding / inbound rules to configure on your home router. The micro controller is registered with our Azure IoT Hub and makes an outbound connection to subscribe to its MQTT topic. As soon as there is a message for the device it triggers its logic and does what we've configured.
  • You can initiate a notification from anywhere in the world (most simply using the Flow mobile app as shown above).
  • Using Mongoose OS allows the device to be managed remotely via the Mongoose OS Dashboard. This means that if I want to add an additional notification (color) I can update Flow with a new option to select, and update the configuration on the Notifier device to display the new color if it receives such a command.

Solution Prerequisites

This post builds on the previous two. As such the prerequisites are:

  • an Azure account, with an IoT Hub set up and an IoT Device registered with it
  • an IoT device (micro controller) that can run Mongoose OS. I'm using a NodeMCU ESP8266 that I purchased from Amazon here.
  • an RGB LED light ring (generic Neopixel), which I purchased from Amazon here
  • a 3D printer if you want to print an enclosure for the IoT device

With those sorted we can;

  • Install and configure my Mongoose OS Application. It includes all the necessary libraries and sample config to integrate with a Neopixel, Azure IoT, Mongoose Dashboard etc.
  • Create the Azure PowerShell Function App that will publish the MQTT message the IoT Device will consume
  • Create the Microsoft Flow that will kick off the notifications and give us a nice interface to send what we want
  • Build an enclosure for our IoT device

How to build this project

The order in which I've detailed the elements of the architecture here is how I'd recommend approaching this project. I'd also recommend working through the previous two blog posts linked at the beginning of this one, as that will get you up to speed with Mongoose OS, Azure IoT Hub, Azure IoT devices, MQTT, etc.

Installing the AzureIoT-Neopixel-js Application

I’ve made the installation of my solution easy by creating a Mongoose OS Application. It includes all the libraries required and sample code for the functionality I detail in this post.

Clone it from GitHub here and put it into your .mos directory, which should be in the root of your Windows profile directory, e.g. C:\Users\Darren\.mos\apps-1.26. Then from the MOS Configuration page select Projects, select AzureIoT-Neopixel-JS, then select the Rebuild App spanner icon from the toolbar. When it completes, select the Flash icon from the toolbar. When your micro controller restarts, select Device Setup from the top menu bar and configure it for your WiFi network. Finally, configure your device for Azure MQTT as per the details in my first post in this series (which will also require you to create an Azure IoT Hub if you don't already have one and register your micro controller with it as an Azure IoT Device). You can then test sending a message to the device using PowerShell or Device Explorer as shown in post two in this series.

I have the Neopixel connected to D1 (GPIO 5) on the NodeMCU. If you use a different micro controller and a different GPIO then update the init.js configuration accordingly.

Creating the Azure Function App

Now that you have the micro controller configured and working with Azure IoT, let's abstract the sending of the MQTT messages into an Azure Function. We can't send MQTT messages from Microsoft Flow, so I've created an Azure Function that uses the AzureIoT PowerShell module to do that.

Note: You can send HTTP messages to an Azure IoT device but … 

Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. MQTT and AMQP support server push when receiving cloud-to-device messages.

….. that doesn’t suit my requirements 

I'm using the Managed Service Identity functionality to access the Azure Key Vault where the credentials for the identity that can interact with my Azure IoT Hub are stored. To enable and use that (which I highly recommend), follow the instructions in my blog post here to configure MSI on an Azure Function App. If you don't already have an Azure Key Vault, follow my blog post here to quickly set one up using PowerShell.
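As a rough sketch of that pattern (the MSI endpoint usage shown is the App Service mechanism documented at the time; the vault and secret names are placeholders), retrieving a secret from Key Vault inside the Function App looks something like this:

# Get an access token for Key Vault using the Function App's Managed Service Identity
$tokenAuthURI = "$($env:MSI_ENDPOINT)?resource=https://vault.azure.net&api-version=2017-09-01"
$tokenResponse = Invoke-RestMethod -Method Get -Uri $tokenAuthURI -Headers @{"Secret" = $env:MSI_SECRET}

# Retrieve the secret (placeholder vault and secret names)
$secretURI = "https://YOURVAULT.vault.azure.net/secrets/YOURSECRET?api-version=2016-10-01"
$secret = Invoke-RestMethod -Method Get -Uri $secretURI -Headers @{"Authorization" = "Bearer $($tokenResponse.access_token)"}
$password = $secret.value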

Azure PowerShell Function App

The Function App is an HTTP Trigger based one using PowerShell. In order to interact with the Azure IoT Hub and integrate with the IoT device via Azure, I'm using the same modules as in the previous posts, so they need to be located within the Function App.

Specifically they are;

  • AzureIoT v1.0.0.5
  • AzureRM v5.5.0
  • AzureRM.IotHub v3.1.0
  • AzureRM.profile v4.2.0

I’ve put them in a bin directory (which I created) under my Function App. Even though AzureRM.EventHub is shown below, it isn’t required for this project. I uploaded the modules from my development laptop (C:\Program Files\WindowsPowerShell\Modules) using WinSCP after configuring Deployment Credentials under Platform Features for my Azure Function App. Note the path relative to mine as you will need to update the Function App script to reflect this path so the modules can be loaded.

Azure Function PS Modules.PNG
Azure Function PS Modules

The configuration in WinSCP to upload to the Function App for me is

WinSCP Configuration
WinSCP Configuration

Edit the AzureRM.IotHub.psm1 file

The AzureRM.IotHub.psm1 module will find an older version of the AzureRM.Profile module already present within Azure Functions. As we've uploaded the versions we need, we need to comment out the following lines in AzureRM.IotHub.psm1 so that it doesn't do the version check. The lines to remark out (put a # in front of each, as indicated below) are near the start of the module. The AzureRM.IotHub.psm1 file can be edited via WinSCP and Notepad.

#$module = Get-Module AzureRM.Profile
#if ($module -ne $null -and $module.Version.ToString().CompareTo("4.2.0") -lt 0)
#{
# Write-Error "This module requires AzureRM.Profile version 4.2.0. An earlier version of AzureRM.Profile is imported in the current PowerShell session. Please open a new session before importing this module. This error could indicate that multiple incompatible versions of the Azure PowerShell cmdlets are installed on your system. Please see https://aka.ms/azps-version-error for troubleshooting information." -ErrorAction Stop
#}
#elseif ($module -eq $null)
#{
# Import-Module AzureRM.Profile -MinimumVersion 4.2.0 -Scope Global
#}

HTTP Trigger Azure PowerShell Function App

Here is my Function App Script. You’ll need to update it for the location of your PowerShell Modules (I created a bin directory under my Function App D:\home\site\wwwroot\myFunctionApp\bin), your Key Vault details and the user account you will be using. The User account will need permissions to your Key Vault to retrieve the password (credential) for the account you will run the process as and to your Azure IoT Hub.

You can test the Function App from within the Azure Portal where you created the Function App as shown below. Update for the names of the IoT Hub, IoT Device and the Resource Group in your associated environment.

Testing Function App.PNG
Test Function App

Microsoft Flow Configuration

The Flow is very simple. A manual button and a resulting HTTP Post.

Microsoft Flow Config 1
Microsoft Flow Configuration

For the message I have configured a list. This is where you can choose the color of the notification.

Manual Trigger.PNG
Microsoft Flow Manual Trigger

The Action is an HTTP Post to the Azure Function URL. The body has the configuration for the IoTHub, IoTDevice, Resource Group Name, IoTKeyName and the Message selected from the manual button above. You will have the details for those settings from your initial testing via the Function App (or PowerShell).

The Azure Function URL you get from the top of the Azure Portal screen where you configure your Function App. Look for “Get Function URL”.

HTTP Post
Microsoft Flow HTTP Post

Testing

Now that you have all the elements configured, install the Microsoft Flow app on your mobile if you don't already have it (it's available from the Apple iOS App Store and Android Google Play). Log in with the account you created the Flow as, select the Flow, select the message, and done. Depending on your internet connectivity you should see the notification displayed on the Notifier device in under 10 seconds.

Case 3D Printer Files

Lastly, we need to make it look all pretty and make the notification really pop. I’ve created a housing for the neopixel that sits on top of a little case for the NodeMCU.

As you can see from the final unit, I’ve printed the neopixel holder in a white PLA that allows the RGB LED light to be diffused nicely and display prominently even in brightly lit conditions.

Neopixel Enclosure
Neopixel Enclosure

I've printed the base that holds the micro controller in a different color. The top fits snugly through the hole in the micro controller case. The wires that connect the neopixel to the micro controller slide through the shaft of the top housing. It also has a backplate that attaches to the back of the enclosure, which I secure with a little hot glue.

Here is a link to the Neopixel (WS2812) 16 RGB LED light holder I created on Thingiverse.

NodeMCU Enclosure.PNG
NodeMCU Enclosure

Depending on your micro controller you will also need an appropriately sized case for that. I’ve designed the neopixel light holder top assembly to sit on top of my micro controller case. Also available on Thingiverse here.

Summary

Using a combination of Azure IoT, Azure PaaS services, Mongoose OS and a cheap micro controller with an RGB LED light ring, we have a very versatile Internet of Things device. The application here is a simple visual notifier. A change of output device, or adding an input device, could change the application, whilst still re-using all the elements of the solution that glue it together (micro controller, Mongoose OS, Azure IoT, Azure PaaS). Did you build one? Did you use this as inspiration to build something else? Let me know.

Automating the generation of Microsoft Identity Manager Configuration Documentation

Introduction

Last year Microsoft released the Microsoft Identity Manager Configuration Documenter which is available here. It is a fantastic little tool from Microsoft that supersedes its predecessor from the Microsoft Identity Manager 2003 Resource Toolkit (which only documented the Sync Server Configuration).

Running the tool (a PowerShell module) against a base out-of-the-box reference configuration for FIM/MIM servers, reconciled against an exported configuration from the MIM Sync and Service servers of an implementation, generates an HTML report that details the existing configuration of the MIM Service and MIM Sync.

Overview

Last year I wrote this post based on an automated solution I implemented to perform nightly backups of a FIM/MIM environment during development.

This post details how I've automated another daily task for a large development environment where a number of changes are going on, and where I wanted documentation generated that detailed the configuration for each day: partly to quickly work out what has changed when needing to roll back or re-validate changes, and also to have the individual configs from each day so they could be used if we need to roll back.

The process uses an Azure Function App that uses Remote PowerShell into MIM to:

  1. Leverage a modified (streamlined) version of my nightly backup Azure Function to generate the Schema.xml and Policy.xml MIM Service configuration files, and the Lithnet MIIS Automation PowerShell module installed on the MIM Sync Server to export the MIM Sync Server configuration
  2. Create a sub-directory for each day under the MIM Documenter tool to hold the daily configs
  3. Execute the generation of the report and have the report copied to the daily config/documented solution

Obtaining and configuring the MIM Configuration Documenter

Download the MIM Configuration Documenter from here and extract it to somewhere like c:\FIMDoco on your FIM/MIM Sync Server. In this example in my Dev environment I have the MIM Sync and Service/Portal all on a single server.

Then update the Invoke-Documenter-Contoso.ps1 (or whatever you’ve renamed the script to) to make the following changes;

  • Update the following lines for your version, include the new variable $schedulePath and add it to the $pilotConfig variable. Create the C:\FIMDoco\Customer and C:\FIMDoco\Customer\Dev directories (replace Customer with something appropriate).
######## Edit as appropriate ####################################
$schedulePath = Get-Date -format dd-MM-yyyy
$pilotConfig = "Customer\Dev\$($schedulePath)" # the path of the Pilot / Target config export files relative to the MIM Configuration Documenter "Data" folder.
$productionConfig = "MIM-SP1-Base_4.4.1302.0" # the path of the Production / Baseline config export files relative to the MIM Configuration Documenter "Data" folder.
$reportType = "SyncAndService" # "SyncOnly" # "ServiceOnly"
#################################################################
  • Remark out the Host Settings as these won’t work via a WebJob/Azure Function
#$hostSettings = (Get-Host).PrivateData
#$hostSettings.WarningBackgroundColor = "red"
#$hostSettings.WarningForegroundColor = "white"
  • Remark out the last line as this will be executed as part of the automation and we want it to complete silently at the end.
# Read-Host "Press any key to exit"

It should then look something like this;

Azure Function to Automate execution of the Documenter

As per my nightly backup process;

  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 6am every morning using the following CRON configuration
0 0 6 * * *
  • I also needed to increase the timeout for the Azure Function, as generating the files for the report and executing the report exceeded the default timeout of 5 minutes in my environment (19 Management Agents). I increased the timeout to the maximum of 10 minutes as detailed here, essentially adding the following to the host.json file in the wwwroot directory of my Function App:
{
 "functionTimeout": "00:10:00"
}

Azure Function PowerShell Timer Script (Run.ps1)

This is the Function App PowerShell Script that uses Remote PowerShell into the MIM Sync/Service Server to export the configuration using the Lithnet MIIS Automation and Microsoft FIM Automation PowerShell modules.

Note: If your MIM Service is on a different host you will need to install the Microsoft FIM Automation PowerShell Module on your MIM Sync Server and update the script below to change references to http://localhost:5725 to whatever your MIM Service host is.
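A rough sketch of the remoting and MIM Service export portion of such a script (the server name, paths and setting names are placeholders; the Sync Server export via the Lithnet MIIS Automation module and the Documenter invocation itself are omitted):

# Credentials from the Function App Application Settings (see the nightly backup post)
$securePassword = ConvertTo-SecureString $env:MIMSyncCredPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:MIMSyncCredUser, $securePassword)

# Remote into the MIM Sync/Service server
$session = New-PSSession -ComputerName "YOURMIMSERVER" -Credential $credential -UseSSL

Invoke-Command -Session $session -ScriptBlock {
    Add-PSSnapin FIMAutomation

    # Daily sub-directory to hold the configs
    $exportPath = "C:\FIMDoco\Data\Customer\Dev\$(Get-Date -Format dd-MM-yyyy)"
    New-Item -ItemType Directory -Path $exportPath -Force | Out-Null

    # Export the MIM Service Schema and Policy configuration
    $schema = Export-FIMConfig -Uri "http://localhost:5725" -SchemaConfig
    $schema | ConvertFrom-FIMResource -File "$exportPath\schema.xml"

    $policy = Export-FIMConfig -Uri "http://localhost:5725" -PolicyConfig -PortalConfig
    $policy | ConvertFrom-FIMResource -File "$exportPath\policy.xml"

    # ... export the Sync Server configuration and run the Documenter script here
}

Remove-PSSession $session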

Testing the Function App

With everything configured, manually running the Function App and checking the output window (if you've configured everything correctly) will show success in the logs, as shown below. In this environment with 19 Management Agents it takes 7 minutes to run.

Running the Azure Function.PNG

The Report

The outcome every day just after 6am is that I have (via automation):

  • an export of the Policy and Schema configuration from my MIM Service
  • an export of the MIM Sync Server configuration (the Metaverse and all Management Agents)
  • the MIM Configuration Documenter report generated
  • the ability to roll back changes on a daily interval if I need to (either for a MIM Service change or an individual Management Agent change)

Under the c:\FIMDoco\Data\Customer\Dev\\Report directory is the HTML Configuration Report.

Report Output.PNG

Opening the report in a browser we have the configuration of the MIM Sync and MIM Service.

Report