Integrating with SailPoint IdentityNow Private (v1) APIs using PowerShell

How to generate the ‘Password Hash’ to leverage the IdentityNow Private APIs

Recently I’ve posted about integrating with the SailPoint IdentityNow APIs. Specifically;

So why another post on a very similar subject? Well, not all IdentityNow APIs are exposed on the v2 API endpoints that were leveraged in the previous posts. The v1 (Private API List) is detailed on SailPoint Compass here and contains a number of functions that are extremely useful. And here is where it gets interesting: the authentication methods between v1 and v2 are not the same. There is a document, also on SailPoint Compass here, that gives you most of what you need and describes IdentityNow’s oAuth methods.

But here is the kicker. In order to use the older v1 API endpoints you need to generate a hash for the user associated with the oAuth 2.0 authentication, in addition to the Client ID and Client Secret. The only method that I’m aware of (and believe me I lost time and effort searching) that SailPoint provides to generate the hash is the File Upload Utility, which is available on Compass here. And that method only came to my attention after posting to the IdentityNow Community.

Now let’s say you run into the problem I did with the version of Java (yes it is a Java Utility) I had installed (v1.7);

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
Wrong Version of Java.PNG
identitynow fileuploader hash generator – idnFileUploader.jar

I didn’t have a need for Java; in fact I was surprised it was even installed on my laptop. It seems it was left over from an installation of VirtualBox some time ago. I didn’t want to go and install another version just for this, but I did spin up a VM and get the Java app to run and generate the hashed password. A hashed password for IdentityNow v1 API integration looks like this;

160a028019a1ce58c679f2216a7f707fa666a674772b4742cef7b08eab99de7b

Looking at that I guessed SHA256, but there was something else going on (check the case).

The Premise and Intent

I’d generated the password hash, so why not just move on? What I was looking to achieve was the ability for a number of services to interact with IdentityNow via the v1 Private APIs. As part of those processes, and for management of that privileged access, it needs to be automated. Running a Java app to generate the hashed password for each identity and on each password change wasn’t an option.

The Solution

What I ended up doing (as there was no accessible documentation around how the hashed password is generated for v1 API integration) still makes me feel a bit dirty and guilty. Maybe I shouldn’t even be documenting it? But we are licensed for this service, and integration with it is still limited to the functions available via the API and the identity you are authenticating with.

I decompiled the Java App, peeked inside and looked to see how the hash was being generated. Then mimicked the process in PowerShell.

Direct Integration with IdentityNow Private (v1) APIs

Direct integration is essentially like any other API. What you need is;

  • Client ID and Client Secret (enabled via the Admin => Global => Security Settings => API Management)
    • these are passed in the WebRequest Header as Basic AuthN (Base64 Encoded)
  • Admin Account and Password (account name and password of an account granted Admin permissions in IdentityNow)
  • Token End Point to get a Token
    • “https://orgname.identitynow.com/api/oauth/token?grant_type=password&username=AdminUser&password=HashedAdminPassword”

The process is;

  1. Generate the Request to the Token Endpoint
  2. Submit the request and obtain a token
  3. Execute subsequent requests with the Bearer Token received in Step 2
  4. Renew token before it expires if continuing to make requests longer than the token life (default 60 mins)

That all looks simple, right? Except for that bit about the ‘hashed password’.

The Special Sauce

You’ve made it this far, so you’re probably looking to do this. Here it is. The inputs are the IdentityNow Login Name for an Admin account and the plaintext password for that account.

In order to simplify generating the hash you will need to install the PowerShell Community Extensions (PSCX) module, which is in the PowerShell Gallery here. This provides the super handy Get-Hash cmdlet that I use extensively in a number of my scripts. Here I’m using it to generate the SHA256 hashes.

You should be on PowerShell 5.1 or later so the following will install it for you (from an elevated PowerShell console);

Install-Module -Name Pscx

Generate passwordhash Script

Update lines 2 and 3 for the Admin Identity you are generating the hash for. Lines 8 and 9 generate the hash and prepare it for use in requesting a token.
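Here is a minimal sketch of that script. The double SHA256 hash and the lowercasing below are my reading of the decompiled utility, so treat the exact ordering and casing as assumptions and verify the output against a hash generated by the File Upload Utility;

# Lines 2 and 3: the IdentityNow Admin Login Name and plaintext password
$adminUSR = ([string]'admin.user').ToLower()
$adminPWDClear = 'yourAdminPassword'

# Requires the PSCX module for Get-Hash
Import-Module Pscx

# Lines 8 and 9: SHA256 hash the password, lowercase the hex string, append the
# lowercased login name, then SHA256 hash the combined string and lowercase it
$pwdHash = ($adminPWDClear | Get-Hash -Algorithm SHA256 -StringEncoding utf8).HashString.ToLower()
$adminPWD = (($pwdHash + $adminUSR) | Get-Hash -Algorithm SHA256 -StringEncoding utf8).HashString.ToLower()
$adminPWD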

Calling IdentityNow Private APIs with PowerShell

Here is an example of using PowerShell to generate the hashed password, obtain a token, and then call the API to return the list of Profiles in IdentityNow.

  • Update the obvious lines for your instance (Lines 2, 4, 8, 10, 11)
  • Update Line 25 if you want to call a different v1 API
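Here is a sketch of that script. The Client ID/Secret, org name and admin account are placeholders for your environment, and the profile list endpoint path is the one I used; verify it against the Private API List on Compass;

# Lines 2 and 4: Client ID & Secret, Base64 encoded for the Basic AuthN header
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# Lines 8, 10 and 11: Tenant Org Name, Admin Login Name and the hashed password generated above
$org = 'yourOrgName'
$adminUSR = 'admin.user'
$adminPWD = '160a028019a1ce58c679f2216a7f707fa666a674772b4742cef7b08eab99de7b'

# Steps 1 & 2: request a token from the v1 Token Endpoint
$tokenURI = "https://$($org).identitynow.com/api/oauth/token?grant_type=password&username=$($adminUSR)&password=$($adminPWD)"
$v1Token = Invoke-RestMethod -Method Post -Uri $tokenURI -Headers @{Authorization = "Basic $($encodedAuth)" }

# Step 3 (line 25): call a v1 Private API with the Bearer Token; here, the list of Profiles
$profiles = Invoke-RestMethod -Method Get -Uri "https://$($org).identitynow.com/api/profile/list" -Headers @{Authorization = "Bearer $($v1Token.access_token)" }
$profiles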

Summary

Hopefully I’ve saved others the couple of hours of messing around to work out how to programmatically generate the needed password hash to allow automated integration with the SailPoint IdentityNow v1 Private APIs.

Lifecycle Management of Identities in SailPoint IdentityNow via API and PowerShell

Introduction

If you’ve been following along I’ve been posting about leveraging the SailPoint IdentityNow API for;

Now that I’ve covered Searching and Authoring, all that is left is lifecycle management. And that’s what I’ll cover in this post: updating and deleting entities via the API.

Updating SailPoint IdentityNow Entities

If you have not read the first post in this series, start there as ‘updating’ builds on top of Search/Reporting. It also covers enabling the API.

My quick start guide to updating IdentityNow Entities starts with searching to find the Entities (probably Users) you want to update. In my example below I’m searching for all objects on a Source. Then I iterate through the results and update them. I’m updating the Country attribute.

When updating an entity (e.g. User) you need to perform a PATCH web request specifying the underlying ID (objectID) of the object. The URI format looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808365bd1f010165caf761625bcd?org=orgName

Example Script

Here is an example script. As per the previous two posts, change all the lines for your tenant and your API details.

  • Line 16 is the query for objects to update
  • Lines 39-41 set the attribute to update
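Here is the shape of it, reusing the $org, $searchLimit and $encodedAuth setup from the Search post. The query string and the result property names are my assumptions; adjust them for what your search actually returns;

# Line 16: the query for objects to update; here, everything on my Source
$URI = "https://$($org).api.identitynow.com/v2/search/identities?"
$query = '@accounts(source.name:"External Entities")'
$results = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

# Lines 39-41: iterate the results and PATCH the Country attribute on each object
foreach ($user in $results) {
    $updateURI = "https://$($org).api.identitynow.com/v2/accounts/$($user.id)?org=$($org)"
    $body = @{ country = 'Australia' } | ConvertTo-Json
    Invoke-RestMethod -Method Patch -Uri $updateURI -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
}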

Updating Manager

For manager, the attribute is a reference on the IdentityNow Source to the Manager. On my “External Entities” Source I locate the object representing the Manager and obtain their accountId (which in my case is firstname.lastname) and set that as the ManagerID. I then find the users I want to update for this manager and update them as we did in the previous example, but with a reference to the accountId of the Manager for the Manager attribute.

NOTE: When querying IdentityNow via the API the syntax is very important, especially when also incorporating variables. If I have a variable $manager with a displayName value, that will normally contain a space, so we need to capture the whole string. In order to query for $manager = "Rick Sanchez" in PowerShell that would be:

$queryManager = "attributes.displayName:"+'"'+"$($manager)"+'"'

which will give us attributes.displayName:"Rick Sanchez", which in my case returns the single object for Rick Sanchez and not a list of references to Rick Sanchez.

Deleting SailPoint IdentityNow Entities

Deleting is very similar to updating. Again the easiest method is to search and obtain the object(s) to be deleted, then delete via a DELETE web request specifying the underlying ID (objectID) of the object to be deleted. The URI looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808565bd1f110165cb628d1a702f?org=orgName

Example Script

Here is an example script. It searches IdentityNow based on object naming (see line 14), then locates the object on the Source we wish to delete from. In this example the Source is the “External Entities” Source I created in the last post. Update for the name of your Source (line 25).
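A sketch of that script follows, again reusing the base search setup. The .accounts and source.name property names reflect my assumptions about the shape of the search results, so check them against your own output before deleting anything;

# Line 14: search based on object naming
$query = 'attributes.displayName:"Rick Sanchez"'
$results = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

# Line 25: the Source holding the objects we wish to delete
$sourceName = 'External Entities'

foreach ($identity in $results) {
    $account = $identity.accounts | Where-Object { $_.source.name -eq $sourceName }
    if ($account) {
        $deleteURI = "https://$($org).api.identitynow.com/v2/accounts/$($account.id)?org=$($org)"
        Invoke-RestMethod -Method Delete -Uri $deleteURI -Headers @{Authorization = "Basic $($encodedAuth)" }
    }
}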

Summary

Using the API we can Search for Identities, Author, Update and Delete them.

Authoring Identities in SailPoint IdentityNow via the API and PowerShell

Introduction

A key aspect of any Identity Management project is having an authoritative source for identity. Typically this is a Human Resources system. But what about identity types that aren’t in the authoritative source? External vendors, contingent contractors, and identities used by end user computing systems, such as Privileged Accounts, Service Accounts and Training Accounts.

Now, some Identity Management solutions allow you to author identity through their portals and provide a nice GUI to create a user/training/service account. SailPoint IdentityNow doesn’t have that functionality. It does however have an API, and I’ll show you in this post how you can use it to author identity into IdentityNow.

Overview

So, now you’re thinking: great, I can author identity into IdentityNow via the API. But am I supposed to get managers to interface with an API to kick off a workflow to create identities? Um, no. I don’t think we want to be giving them API access into our Identity Management solution.

The concept is this: an Identity Request WebApp would collect the necessary information for the identities to be authored and facilitate their creation in IdentityNow via the API. SailPoint kindly provide a sample project that does just that. It is available on GitHub here. Through looking at this project and the IdentityNow API I worked out how to author identity via the API using PowerShell. There were a few gotchas I had to work through, so I’m providing a working example for you to base a solution around.

Getting Started

There are a couple of things to note.

  • Obviously you’ll need API access
  • You’ll want to create a Source that is of the Flat File type (Generic or Delimited File)
    • We can’t create accounts against Directly Connected Sources
  • There are a few attributes that are mandatory for the creation
    • At a minimum I supply id, name, givenName, familyName, displayName and e-mail
    • At an absolute bare minimum you need the following. Otherwise you will end up with an account in IdentityNow that will show as “Identity Exception”
      • id, name, givenName, familyName, e-mail*

* see note below on e-mail/email attribute format based on Source type

Creating a Flat File Source to be used for Identity Authoring

In the IdentityNow Portal under Admin => Connections => Sources select New.

Create New Source.PNG

I’m using Generic as the Source Type. Give it a name and description. Select Continue

New Generic Source.PNG

Assign an Owner for the Source and check the Accounts checkbox. Select Save.

New Source Properties.PNG

From the end of the URL of the now-saved new Source, get and record the SourceID. This will be required so that when we create users via the API, they will be created against this Source.

SourceID.PNG

If we look at the Accounts on this Source we see there are obviously none.

Accounts.PNG

We’d better create some. But first you need to complete the configuration for this Source. Go and create an Identity Profile for this Source, and configure your Identity Mappings as per your requirements. This is the same as you would for any other IdentityNow Source.

Authoring Identities in IdentityNow with PowerShell

The following script is the bare minimum to use PowerShell to create an account in IdentityNow. Change;

  • line 2 for your Client ID
  • line 4 for your Client Secret
  • line 8 for your Tenant Org Name
  • line 12 for your Source ID
  • the body of the request for the account to be created (lines 16-21)
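Here is a sketch of that script. I’m assuming the v2 accounts endpoint accepts a POST with the sourceId as a query string parameter, consistent with the PATCH and DELETE examples in the previous post; the account values are obviously samples;

# Lines 2 and 4: Client ID & Secret, Base64 encoded for Basic AuthN
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# Line 8: Tenant Org Name; Line 12: the SourceID recorded earlier
$org = 'yourOrgName'
$sourceID = '12345'

# Lines 16-21: the account to be created. Use 'email' for a Generic Source,
# 'e-mail' for a Delimited Source (see the NOTE below)
$body = @{
    id          = 'rick.sanchez'
    name        = 'rick.sanchez'
    givenName   = 'Rick'
    familyName  = 'Sanchez'
    displayName = 'Rick Sanchez'
    email       = 'rick.sanchez@customer.com.au'
} | ConvertTo-Json

$createURI = "https://$($org).api.identitynow.com/v2/accounts?sourceId=$($sourceID)&org=$($org)"
Invoke-RestMethod -Method Post -Uri $createURI -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body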

NOTE: By default on the Generic Source the email attribute is ‘email’. By default on the Delimited Source the email attribute is ‘e-mail’. If your identities are showing as ‘Identity Exception’ after executing the script and a correlation, it’s probably because this field is incorrect for the Source type. If in doubt check the Account Schema on the Source.

Execute the script and refresh the Accounts page. You’ll see we now have an account for Rick.

Rick Sanchez.PNG

Expanding Rick’s account we can see the full details.

Rick Full Details.PNG

Testing it out for a Bulk Creation

A few weeks ago I wrote this post about generating user data from public datasets. I’m going to take that and generate 50 accounts. I’ve added additional attributes to the Account Schema (for suburb, state, postcode, street). Here is a script combining the two.
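The combined script is essentially a loop over the generated dataset calling the same create request as above. A sketch, where the property names on $users are hypothetical and depend on how you generated your data;

foreach ($user in $users) {
    $body = @{
        id          = $user.UserID
        name        = $user.UserID
        givenName   = $user.FirstName
        familyName  = $user.LastName
        displayName = "$($user.FirstName) $($user.LastName)"
        email       = $user.Email
        street      = $user.Street
        suburb      = $user.Suburb
        state       = $user.State
        postcode    = $user.Postcode
    } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri $createURI -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
}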

Running the script creates our 50 users in addition to the couple I already had present.

Bulk Accounts Created.PNG

Summary

Using the IdentityNow API we can quickly author identity into SailPoint IdentityNow. That’s the easy bit sorted. Now to come up with a pretty UI and a UX that passes the end-user usability tests. I’ll leave that with you.

Reporting on SailPoint IdentityNow Identities using the ‘Search’ (Beta) API and PowerShell

Introduction

SailPoint recently made available in BETA their new Search functionality. There’s some great documentation around using the Search functions through the IdentityNow Portal on Compass^. Specifically;

^ Compass Access Required

Each of those articles is great, but they are centered around performing the search via the Portal. For some of my needs I need to do it via the API, and that’s what I’ll cover in this post.

*NOTE: Search is currently in BETA. There is a chance some functionality may change. SailPoint advise not to use this functionality in Production whilst it is in Beta.

Enabling API Access

Under Admin => Global => Security Settings => API Management select New and give the API Account a Description.

New API Client.PNG

Client ID and Client Secret

ClientID & Secret.PNG

In the script to access the API we will take the Client ID and Client Secret and encode them for Basic Authentication to the IdentityNow Search API. To do that in PowerShell use the following example replacing ClientID and ClientSecret with yours.

$clientID = 'abcd1234567'
$clientSecret = 'abcd12345sdkslslfjahd'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

Searching

With API access now enabled we can start building some queries. There are two methods I’ve found: using query strings on the URL, and using JSON payloads in an HTTP POST. I’ll give examples of both.

PowerShell Setup

Here is the base of all my scripts for using PowerShell to access the IdentityNow Search.

Change;

  • line 3 for your Client ID
  • line 5 for your Client Secret
  • line 10 for your IdentityNow Tenant Organisation name (by default the host portion of the URL, e.g. https://orgname.identitynow.com )
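A minimal sketch of that base script (the values are placeholders for your tenant, and the search limit is whatever suits your result sizes);

# Lines 3 and 5: Client ID & Secret, Base64 encoded for Basic AuthN
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# Line 10: your IdentityNow Tenant Organisation name
$org = 'yourOrgName'
$searchLimit = '2500'
$URI = "https://$($org).api.identitynow.com/v2/search/identities?"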

Searching via URL Query String

First we will start with searching by having the query string in the URL.

Single attribute search via URL

$query = 'firstname EQ Darren'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Single Attribute URL Search.PNG

Multiple attribute search via URL

Multiple criteria queries need to be constructed carefully. The query below just looks wrong, yet if you place the quotes where you think they should go, you don’t get the expected results. The following works.

$query = 'attributes.firstname"="Darren" AND attributes.lastname"="Robinson"'

and it works whether you Encode the URL or not

Add-Type -AssemblyName System.Web   # needed for [System.Web.HttpUtility] in Windows PowerShell
$queryEncoded = [System.Web.HttpUtility]::UrlEncode($query)
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($queryEncoded)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Multiple Attribute Query Search.PNG

Here is another search, based on identities having a connection to a Source containing the word ‘Directory’ AND having fewer than 5 entitlements.

$URI = "https://$($org).api.identitynow.com/v2/search/identities?"
$query = '@access(source.name:*Directory*) AND entitlementCount:<5'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Multiple Attribute Query Search2.PNG

Searching via HTTP Post and JSON Body

Now we will perform similar searches, but with the search strings in the body of the HTTP Request.

Single attribute search via POST and JSON Based Body Query

$body = @{"match"=@{"attributes.firstname"="Darren"}}
$body = $body | convertto-json 
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
Single Attribute JSON Search.PNG

Multiple attribute search via POST and JSON Based Body Query

If you want to have multiple criteria and submit them via a POST request, this is how I got it working. I construct each part, convert it to JSON, and build up the body with each search element.

$body1 = @{"match"=@{"attributes.firstname"="Darren"}}
$body2 = @{"match"=@{"attributes.lastname"="Robinson"}}
$body = $body1 | ConvertTo-Json
$body += $body2 | ConvertTo-Json
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body
Multiple Attribute JSON Search.PNG

Getting Full Identity Objects based off Search

Lastly, now that we’ve been able to build queries via two different methods and we have the results we’re looking for, let’s output some relevant information about them. We will iterate through each of the returned results and output some specifics about their sources and entitlements. Same as above, update for your ClientID, ClientSecret, Orgname and search criteria.
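Here is a sketch of that iteration. The accounts and access property names reflect my assumptions about the result shape, so inspect one returned object first and adjust accordingly;

$query = 'firstname EQ Darren'
$identities = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

foreach ($identity in $identities) {
    Write-Output "$($identity.displayName) ($($identity.name))"
    # Sources the identity has accounts on
    foreach ($account in $identity.accounts) {
        Write-Output "  Source: $($account.source.name)"
    }
    # Entitlements associated with the identity
    foreach ($item in ($identity.access | Where-Object { $_.type -eq 'ENTITLEMENT' })) {
        Write-Output "  Entitlement: $($item.displayName)"
    }
}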

Extended Information.PNG

Summary

Once you’ve enabled API access and understood the query format it is super easy to get access to the identity data in your IdentityNow tenant.

My recommendation is to use the IdentityNow Search function in the Portal to refine your searches for what you are looking to return programmatically and then use the API to get the data for whatever purpose it is.

Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post that detailed using your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month) but was still incomplete if it was to be used in production within an Enterprise. I hinted then that there were additional enhancements I was looking to make. One is an Auditing/Reporting aspect and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT Device. As a potential future enhancement this could also be a Bot. For each request I wanted to log/audit;

  • Device the query was initiated from (it is possible to have many IoT devices; physical or bot leveraging this function)
  • The query
  • The response
  • Date and Time of the event
  • User the query targeted

To achieve this my solution is to;

  • On my IoT Device the query, target user and date/time are held during the query event
  • At the completion of the query the response along with the earlier information is sent to the IoT Hub using the IoT Hub REST API
  • The event is consumed from the IoT Hub by an Azure Event Hub
  • The message containing the information is processed by Stream Analytics and put into Azure Table Storage and Power BI.

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses. Power BI provides the reporting aspect. These two services provide visibility into what requests have been made, against whom, when, etc. The graphic below shows this in the bottom portion of the image.

Auditing Reporting Searching MIM with Speech.png
Voice Search for Microsoft Identity Manager Auditing and Reporting

Sending IoT Device Events to IoT Hub

I covered this piece in PowerShell in a previous post here. I converted it from PowerShell to Python to run on my device. For initial end-to-end testing when developing the solution though, building the message body and sending it in PowerShell looks like this;

[string]$datetime = Get-Date
$datetime = $datetime.Replace("/","-")
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

IoTHub Event Hub.PNG
Azure IoT Hub Events

Streaming Analytics

I then created a Stream Analytics Job. I configured two Inputs. One each from my IoT Hub and from my Event Hub.

Stream Analytics Inputs.PNG
Azure Stream Analytics Inputs

I then created two Outputs. One for Table Storage for which I used an existing Storage Group for my solution, and the other for Power BI using an existing Workspace but creating a new Dataset. For the Table storage I specified deviceId for Partition key and messageId for Row key.

Stream Analytics Outputs.PNG
Azure Stream Analytics Outputs

Finally, as I’m keeping all the data simple in what I’m sending, my query basically copies from the Inputs to the Outputs. One query gets the events to Table Storage and the other gets them to Power BI. The query looks like this.

Stream Analytics Query.PNG
Azure Stream Analytics Query
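In Stream Analytics Query Language that is just two pass-through statements. The input and output names below are the aliases you gave them when configuring the job (mine here are placeholders);

SELECT * INTO [TableStorageOutput] FROM [IoTHubInput]
SELECT * INTO [PowerBIOutput] FROM [EventHubInput]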

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data the schema-less Table Storage obliged and dynamically added another column to the table.

Table Storage.PNG
Table Storage Events

A full record looks like this.

Full Record.PNG
Voice Search Table Storage Audit Record

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data. I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT Device it magically showed up in the Power BI Dataset Table.

PowerBI.PNG
PowerBI Voice Search Analytics

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.

Let me know what you think on twitter @darrenjrobinson

Using your Voice to Search Microsoft Identity Manager – Part 1

Introduction

Yes, you’ve read the title correctly. Speaking to Microsoft Identity Manager. The concept behind this was born off the back of some other work I was doing with Microsoft Cognitive Services. I figured it shouldn’t be that difficult if I just break down the concept into individual elements of functionality and put together a proof of concept to validate the idea. That’s what I did and this is the first post of the solution as an overview.

Here’s a quick demo.

 

Overview

The diagram below details the basis of the solution. There are a few extra elements I’m still working on that I’ll cover in a future post if there is any interest in this.

Searching MIM with Speech Overview

The solution works like this;

  1. You speak to a microphone connected to a single board computer with the query for Microsoft Identity Manager
  2. The spoken phrase is converted to text using Cognitive Speech to Text (Bing Speech API)
  3. The text phrase is;
    1. sent to Cognitive Services Language Understanding Intelligent Service (LUIS) to identify the target of the query (firstname lastname) and the query entity (e.g. Mailbox)
    2. Microsoft Identity Manager is queried via API Management and the Lithnet REST API for the MIM Service
  4. The result is returned to the single board computer as a text result phrase, which it then converts to audio using Cognitive Services Text to Speech
  5. The result is spoken back

Key Functional Elements

  • The microphone array I’m using is a ReSpeaker Core v1 with a ReSpeaker Mic Array
  • All credentials are stored in an Azure Key Vault
  • An Azure Function App (PowerShell) interfaces with the majority of the Cognitive Services being used
  • Azure API Management is used to front end the Lithnet MIM Webservice
  • The Lithnet REST API for the MIM Service provides easy integration with the MIM Service

 

Summary

Leveraging a lot of Serverless (PaaS) Services, a bunch of scripting (Python on the ReSpeaker and PowerShell in the Azure Function) and the Lithnet REST API it was pretty simple to integrate the ReSpeaker with Microsoft Identity Manager. An alternative to MIM could be any other service you have an API interface into. MIM is obviously a great choice as it can aggregate from many other applications/services.

Why a female voice? From a small poll it was the popular majority.

Let me know what you think on twitter @darrenjrobinson

Utilising Azure Speech to Text Cognitive Services with PowerShell

Introduction

Yesterday I posted about using Azure Cognitive Services to convert text to speech. I also alluded to the fact that I’ve been leveraging Cognitive Services to do the conversion from speech to text. I detail that in this post.

Just as with the Text to Speech we will need an API key to use Cognitive Services. You can get one from Azure Cognitive Services here.

Source Audio File

I created an audio file in Audacity for testing purposes. In my real application it is directly spoken text, but that’s a topic for another time. I set the project rate to 16000 Hz for the conversion source file, then exported the file as a .wav file.

Capture Audio

The Script

The script below needs to be updated for your input file (line 2) and your API key (line 7). Run it line by line in VS Code or PowerShell ISE.
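If you just want the shape of it, here is a sketch. It exchanges the API key for a Bearer token and then posts the WAV to the Bing Speech API; the response parsing assumes the ‘detailed’ output format;

# Line 2: path to your 16000 Hz WAV source file
$audioFile = 'C:\Temp\speech.wav'
$audioBytes = [System.IO.File]::ReadAllBytes($audioFile)

# Line 7: your Cognitive Services API key
$apiKey = 'yourAPIKey'

# Exchange the API key for a short-lived Bearer token
$token = Invoke-RestMethod -Method Post -Uri 'https://api.cognitive.microsoft.com/sts/v1.0/issueToken' -Headers @{'Ocp-Apim-Subscription-Key' = $apiKey }

# Submit the audio for recognition
$uri = 'https://speech.platform.bing.com/speech/recognition/interactive/cognitiveservices/v1?language=en-US&format=detailed'
$result = Invoke-RestMethod -Method Post -Uri $uri -Headers @{Authorization = "Bearer $($token)" } -ContentType 'audio/wav; codec="audio/pcm"; samplerate=16000' -Body $audioBytes

# The converted text
$result.NBest[0].Display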

Summary

That’s it. Pretty simple once you have a reference script to work with. Enjoy.

Converted

 

 

Getting started with the Lithnet REST API for the Microsoft Identity Manager Service

Introduction

A common theme in my posts on Microsoft Identity is its extensibility, particularly with the Lithnet tools that Ryan has released.

One such tool that I’ve used but never written about is the Lithnet REST API for the Microsoft Identity Manager Service. For a small proof of concept I’m working on I was again using this REST API, and I needed to update it as Ryan has recently added some new functionality. I realised I hadn’t set it up in a while, and while Ryan’s documentation is very good, it was written some time ago when IIS Manager looked a little different. So here are a couple of screenshots and a little extra info to get you started if you haven’t used it before, to supplement Ryan’s documentation located here.

Configuring the Lithnet REST API for the Microsoft Identity Manager Service

You can download the Lithnet REST API for the FIM/MIM Service from here.

If you are using the latest version of the Lithnet REST API you will need to make sure you have .NET 4.6.1 installed. If you are running Windows Server 2012 R2 you can get it from here.

When configuring your WebSite make sure you choose .NET v4.5 Classic for the Application Pool.

WebSite AppPool Settings.PNG

The web.config must match your MIM version. Currently the latest is 4.4.1749.0 as detailed here. That therefore looks like this.

WebConfig Resource Management Version.PNG

Finally you’ll need an SSL Certificate. For development environments a Self-Signed Certificate is fine. Personally I use this Cert Generator. Make sure you put the certificate in the cert store on the machine you will be testing access with. Here’s an example of my command line for generating a cert.

Cert Generation.PNG

You could also use Lets Encrypt.

In your bindings in IIS have the Host Name match your certificate.

Bindings.PNG

If you’ve done everything right you will be able to hit the v2 endpoint help. By default with Basic Auth enabled you’ll be prompted for a username and password.

v2 EndPoint.PNG

Using PowerShell to query MIM via the Lithnet REST API

Here is an example script to query MIM via the Lithnet MIM REST API. Update for your credentials (lines 2 and 3), the URL of the server running the API endpoint (line 11) and what you are querying for (line 14). My script takes into account Self-Signed Certs in a development environment.
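A sketch of that script; I’m assuming the v2 resources endpoint takes an XPath filter on the query string, so double-check against Ryan’s documentation;

# Lines 2 and 3: credentials for an account permitted to query the MIM Service
$username = 'DOMAIN\mimadmin'
$password = ConvertTo-SecureString 'yourPassword' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

# Development environment: trust the Self-Signed Certificate
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Line 11: the server running the API Endpoint
$baseURI = 'https://mimservice.yourdomain.com:8080'

# Line 14: what you are querying for
$filter = "/Person[AccountName='darrenjrobinson']"

$results = Invoke-RestMethod -Method Get -Uri "$($baseURI)/v2/resources/?filter=$($filter)" -Credential $credential
$results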

Example output from a query is shown below.

Example Output.PNG

Summary

Hopefully that helps you quickly get started with the Lithnet REST API for the FIM/MIM Service. I showed an example using PowerShell directly, but using an Azure Function is also a valid pattern. I’ve covered similar functionality in the past.
 

Building a Teenager Notification Service using Azure IoT, an Azure Function, Microsoft Flow, Mongoose OS and a Micro Controller

Introduction

This is the third and final post on my recent experiments integrating small micro controllers (ESP8266) running Mongoose OS with Azure IoT Services.

In the first post in this series I detailed creating the Azure IoT Hub and registering a NodeMCU (ESP8266 based) micro controller with it. The post detailing that can be found here. Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code

In the second post I detailed communicating with the micro controller (IoT device) using MQTT and PowerShell. That post can be found here. Integrating Azure IoT Devices with MongooseOS MQTT and PowerShell

Now that we have end to end functionality it’s time to do something with it.

I have two teenagers who’ve been trained well to use headphones. Whilst this is great for not having to hear the popular teen bands of today, and the numerous FaceTime, Skype, Snapchat and similar communications, it does come with the downside of them not hearing us when we require their attention and they are at the other end of the house. I figured that to avoid the need to shout to get attention, a simple visual notification could be built to achieve the desired result. Different colours for different requests? Sure, why not. This is that project, and the end device looks like this.

IoT Notifier using Neopixel
IoT Notifier using Neopixel

Overview

Quite simply the solution goes like this;

  • With the Microsoft Flow App on our phones we can select the Flow that will send a notification
2018-03-25 18.56.38 500px.png
Send IoT Notification Message
  • Choose the Notification intent which will drive the color displayed on the Teenager Notifier.
2018-03-25 18.56.54 500px
IoT Notifier Task Message
  • The IoT Device will then display the color in a revolving pattern as shown below.

The Architecture

The end to end architecture of the solution looks like this.

IoT Cloud to Device - NeoPixel - 640px
IoT Message Cloud to Device

Using the Microsoft Flow App on a mobile device gives a nice way of having a simple interface that can be used to trigger the notification. Microsoft Flow sends the desired message, and the details of the device to send it to, to an Azure Function that puts a message into the MQTT queue associated with the Mongoose OS driven Azure IoT Device (an ESP8266 based NodeMCU micro controller) connected to an Azure IoT Hub. The Mongoose OS driven Azure IoT Device takes the message and displays the visual notification in the color associated with the notification type chosen in Microsoft Flow at the beginning of the process.

The benefits of this architecture are;

  • the majority of the orchestration happens in Azure, yet thanks to Azure IoT and MQTT no inbound connection is required where the IoT device resides. No port forwarding / inbound rules to configure on your home router. The micro controller is registered with our Azure IoT Hub and makes an outbound connection to subscribe to its MQTT topic. As soon as there is a message for the device it triggers its logic and does what we’ve configured
  • You can initiate a notification from anywhere in the world (most simply using the Flow mobile app as shown above)
  • And using Mongoose OS allows the device to be managed remotely via the Mongoose OS Dashboard. This means that if I want to add an additional notification (color) I can update Flow with a new option to select, and update the configuration on the Notifier device to display the new color if it receives such a command.

Solution Prerequisites

This post builds on the previous two. As such the prerequisites are;

  • you have an Azure account and have set up an IoT Hub, and registered an IoT Device with it
  • an IoT device (micro controller) that can run Mongoose OS. I’m using a NodeMCU ESP8266 that I purchased from Amazon here.
  • the RGB LED Light Ring (generic Neopixel) I used I purchased from Amazon here.
  • 3D printer if you want to print an enclosure for the IoT device

With those sorted we can;

  • Install and configure my Mongoose OS Application. It includes all the necessary libraries and sample config to integrate with a Neopixel, Azure IoT, Mongoose Dashboard etc.
  • Create the Azure PowerShell Function App that will publish the MQTT message the IoT Device will consume
  • Create the Microsoft Flow that will kick off the notifications and give use a nice interface to send what we want
  • Build an enclosure for our IoT device

How to build this project

The order I’ve detailed the elements of the architecture here is how I’d recommend approaching this project. I’d also recommend working through the previous two blog posts linked at the beginning of this one as that will get you up to speed with Mongoose OS, Azure IoT Hub, Azure IoT Devices, MQTT etc.

Installing the AzureIoT-Neopixel-js Application

I’ve made the installation of my solution easy by creating a Mongoose OS Application. It includes all the libraries required and sample code for the functionality I detail in this post.

Clone it from GitHub here and put it into your .mos directory, which should be in the root of your Windows profile directory, e.g. C:\Users\Darren\.mos\apps-1.26. Then from the MOS Configuration page select Projects, select AzureIoT-Neopixel-JS, then select the Rebuild App spanner icon from the toolbar. When it completes, select the Flash icon from the toolbar. When your micro controller restarts, select Device Setup from the top menu bar and configure it for your WiFi network. Finally, configure your device for Azure MQTT as per the details in my first post in this series (which will also require you to create an Azure IoT Hub if you don’t already have one and register your micro controller with it as an Azure IoT Device). You can then test sending a message to the device using PowerShell or Device Explorer as shown in post two in this series.

I have the Neopixel connected to D1 (GPIO 5) on the NodeMCU. If you use a different micro controller and a different GPIO then update the init.js configuration accordingly.

Creating the Azure Function App

Now that you have the micro controller configured and working with Azure IoT, let’s abstract the sending of the MQTT messages into an Azure Function. We can’t send MQTT messages directly from Microsoft Flow, so I’ve created an Azure Function that uses the AzureIoT PowerShell module to do that.

Note: You can send HTTP messages to an Azure IoT device but … 

Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. MQTT and AMQP support server push when receiving cloud-to-device messages.

….. that doesn’t suit my requirements 

I’m using the Managed Service Identity functionality to access the Azure Key Vault where credentials for the identity that can interact with my Azure IoT Hub are stored. To enable and use that (which I highly recommend) follow the instructions in my blog post here to configure MSI on an Azure Function App. If you don’t already have an Azure Key Vault then follow my blog post here to quickly set one up using PowerShell.

Azure PowerShell Function App

The Function App is an HTTP Trigger Based one using PowerShell. In order to interact with Azure IoT Hub and integrate with the IoT Device via Azure I’m using the same modules as in the previous posts. So they need to be located within the Function App.

Specifically they are;

  • AzureIoT v1.0.0.5
  • AzureRM v5.5.0
  • AzureRM.IotHub v3.1.0
  • AzureRM.profile v4.2.0

I’ve put them in a bin directory (which I created) under my Function App. Even though AzureRM.EventHub is shown below, it isn’t required for this project. I uploaded the modules from my development laptop (C:\Program Files\WindowsPowerShell\Modules) using WinSCP after configuring Deployment Credentials under Platform Features for my Azure Function App. Note the path relative to mine as you will need to update the Function App script to reflect this path so the modules can be loaded.

Azure Function PS Modules.PNG
Azure Function PS Modules

The configuration in WinSCP to upload to the Function App for me is

WinSCP Configuration
WinSCP Configuration

Edit the AzureRM.IotHub.psm1 file

The AzureRM.IotHub.psm1 module will find an older version of the AzureRM.Profile PowerShell module within Azure Functions. As we’ve uploaded the version we need, we need to comment out the following lines in AzureRM.IotHub.psm1 so that it doesn’t do a version check. See below the lines to comment out (put a # in front of the lines indicated below) near the start of the module. The AzureRM.IotHub.psm1 file can be edited via WinSCP and Notepad.

#$module = Get-Module AzureRM.Profile
#if ($module -ne $null -and $module.Version.ToString().CompareTo("4.2.0") -lt 0)
#{
# Write-Error "This module requires AzureRM.Profile version 4.2.0. An earlier version of AzureRM.Profile is imported in the current PowerShell session. Please open a new session before importing this module. This error could indicate that multiple incompatible versions of the Azure PowerShell cmdlets are installed on your system. Please see https://aka.ms/azps-version-error for troubleshooting information." -ErrorAction Stop
#}
#elseif ($module -eq $null)
#{
# Import-Module AzureRM.Profile -MinimumVersion 4.2.0 -Scope Global
#}

HTTP Trigger Azure PowerShell Function App

Here is my Function App Script. You’ll need to update it for the location of your PowerShell Modules (I created a bin directory under my Function App D:\home\site\wwwroot\myFunctionApp\bin), your Key Vault details and the user account you will be using. The User account will need permissions to your Key Vault to retrieve the password (credential) for the account you will run the process as and to your Azure IoT Hub.
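Rather than reproduce the full script from memory, here is a sketch of its core: building a SAS token and sending the cloud-to-device message via the IoT Hub REST API (the same REST approach as the earlier posts, swapped in here for the module calls). The Key Vault/MSI credential retrieval described above wraps around this, and the variable values arrive in the HTTP Trigger body;

# Values posted from Microsoft Flow via the HTTP Trigger body
$iotHub = 'yourIoTHub'
$deviceID = 'yourIoTDevice'
$keyName = 'iothubowner'
$key = 'yourIoTHubSharedAccessKey'   # in my solution this is retrieved from Key Vault via MSI
$message = 'Red'

# Build a SAS token for the IoT Hub REST API
Add-Type -AssemblyName System.Web
$resourceURI = "$($iotHub).azure-devices.net/devices/$($deviceID)"
$encodedURI = [System.Web.HttpUtility]::UrlEncode($resourceURI)
$expiry = [string]([DateTimeOffset]::UtcNow.ToUnixTimeSeconds() + 3600)
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($key)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("$($encodedURI)`n$($expiry)")))
$sasToken = "SharedAccessSignature sr=$($encodedURI)&sig=$([System.Web.HttpUtility]::UrlEncode($signature))&se=$($expiry)&skn=$($keyName)"

# Send the Cloud-to-Device message
$uri = "https://$($iotHub).azure-devices.net/devices/$($deviceID)/messages/deviceBound?api-version=2016-11-14"
Invoke-RestMethod -Method Post -Uri $uri -Headers @{Authorization = $sasToken } -Body $message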

You can test the Function App from within the Azure Portal where you created the Function App as shown below. Update for the names of the IoT Hub, IoT Device and the Resource Group in your associated environment.

Testing Function App.PNG
Test Function App

Microsoft Flow Configuration

The Flow is very simple. A manual button and a resulting HTTP Post.

Microsoft Flow Config 1
Microsoft Flow Configuration

For the message I have configured a list. This is where you can choose the color of the notification.

Manual Trigger.PNG
Microsoft Flow Manual Trigger

The Action is an HTTP Post to the Azure Function URL. The body has the configuration for the IoTHub, IoTDevice, Resource Group Name, IoTKeyName and the Message selected from the manual button above. You will have the details for those settings from your initial testing via the Function App (or PowerShell).

The Azure Function URL you get from the top of the Azure Portal screen where you configure your Function App. Look for “Get Function URL”.

HTTP Post
Microsoft Flow HTTP Post

Testing

Now that you have all the elements configured, install the Microsoft Flow App on your mobile if you don’t already have it (Apple iOS App Store and Android Google Play), log in with the account you created the Flow as, select the Flow, then the message, and done. Depending on your internet connectivity you should see the notification displayed on the Notifier device in < 10 seconds.

Case 3D Printer Files

Lastly, we need to make it look all pretty and make the notification really pop. I’ve created a housing for the neopixel that sits on top of a little case for the NodeMCU.

As you can see from the final unit, I’ve printed the neopixel holder in a white PLA that allows the RGB LED light to be diffused nicely and display prominently even in brightly lit conditions.

Neopixel Enclosure
Neopixel Enclosure

I’ve printed the base that holds the micro controller in a different color. The top fits snugly through the hole in the micro controller case. The wires from the neopixel to connect it to the micro controller slide through the shaft of the top housing. It also has a backplate that attaches to the back of the enclosure that I secure with a little hot glue.

Here is a link to the Neopixel (WS2812) 16 RGB LED light holder I created on Thingiverse.

NodeMCU Enclosure.PNG
NodeMCU Enclosure

Depending on your micro controller you will also need an appropriately sized case for that. I’ve designed the neopixel light holder top assembly to sit on top of my micro controller case. Also available on Thingiverse here.

Summary

Using a combination of Azure IoT, Azure PaaS Services, Mongoose OS and a cheap micro controller with an RGB LED light ring we have a very versatile Internet of Things device. The application here is a simple visual notifier. A change of output device or even in conjunction with an input device could change the application, whilst still re-using all the elements of the solution that glues it all together (micro-controller, Mongoose OS, Azure IoT, Azure PaaS). Did you build one? Did you use this as inspiration to build something else? Let me know.

Automating the submission of WordPress Blog Posts to your Microsoft MVP Community Activities Profile using PowerShell

 

Introduction

In November last year (2017) I was honored to be awarded Microsoft MVP status for Enterprise Mobility – Identity and Access. MVP status is awarded based on community activities, and even once you’ve attained MVP status you need to keep the community activity contributions on your profile updated.

Up until recently this was done by accessing the portal and updating your profile. However, mid last year an MVP PowerShell Module (big thanks to Francois-Xavier Cat and Emin Atac) was released that allows for some automation.

In this post I’ll detail using the MVP PowerShell Module to retrieve your latest WordPress blog post and submit it to your MVP Profile as an MVP Community Contribution.

Prerequisites

In order for this to work you will need;

  • to be a Microsoft MVP
  • Register at the MS MVP API Developer Portal using your MVP registered email/profile address
    • subscribe to the MVP Production Application
    • copy your API key (you’ll need this in the script below)

The Script

Update the script below for;

  • MVP API Key
    • Update Line 5 with your API key as detailed above
  • WordPress Blog URL (mine is blog.darrenjrobinson.com)
    • Update Line 11 with your WP URL. $wordpressBlogURL = ‘https://public-api.wordpress.com/rest/v1/sites//posts/?number=1’
  • Your Award Category
    • Update Line 17 with your category $contributionTechnology = “Identity and Access”
    • type New-MVPContribution -ContributionTechnology and you’ll get a list of the MVP Award Categories

Award Categories.PNG

The Script

Here is the simple script. Run it after publishing a new entry you also want added to your Community Contributions Profile. No error handling, nothing complex, but you’re an MVP so you can plagiarize away for your submissions to make it do what suits your needs.
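A sketch of it, assuming the current parameter names on New-MVPContribution (check Get-Help New-MVPContribution if the module has moved on);

# Line 5: your API key from the MVP API Developer Portal
$apiKey = 'yourAPIKey'
Import-Module MVP
Set-MVPConfiguration -SubscriptionKey $apiKey

# Line 11: your WordPress Blog URL; grab the latest post
$wordpressBlogURL = 'https://public-api.wordpress.com/rest/v1/sites/blog.darrenjrobinson.com/posts/?number=1'
$latestPost = (Invoke-RestMethod -Method Get -Uri $wordpressBlogURL).posts[0]

# Line 17: your Award Category
$contributionTechnology = 'Identity and Access'

New-MVPContribution -StartDate $latestPost.date `
    -Title $latestPost.title `
    -ReferenceUrl $latestPost.URL `
    -ContributionType 'Blog Site Posts' `
    -ContributionTechnology $contributionTechnology `
    -AnnualQuantity 1 `
    -Visibility 'Everyone'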

Summary

Hopefully that makes it simple to keep your MVP profile up to date. If you’re using a different blogging platform I’m sure the basic process will work with a few tweaks to the query that returns the content. Enjoy.