Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those oh shit moments where I thought I’d lost a couple of weeks of work. A couple of weeks of development around multiple Management Agents, MV Schema changes etc. Luckily for me I was just connecting to an older VM Image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function WebApp is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script;
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements
      1. MAExports
      2. ServerExport
      3. MAExtensions
      4. PortalExport
    3. Utilises the Lithnet MIIS Automation PowerShell Module to;
      1. Enumerate each of the Management Agents on the FIM/MIM Sync Server and export each Management Agent to the MAExports folder
      2. Export the FIM/MIM Sync Server Configuration to the ServerExport folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions folder
    5. Utilises the FIM/MIM Export-FIMConfig cmdlet to export the FIM Service configuration to the PortalExport folder
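Below is a minimal sketch of that remote portion (the steps under point 3). It is not the exact script used for this post; the Lithnet export cmdlet names, the Extensions install path and the Export-FIMConfig switches are assumptions to verify against your environment.

# Minimal sketch of the remote backup steps (runs on the MIM Sync/Service server).
# Lithnet cmdlet names and install paths below are assumptions - verify before use.
$backupRoot = "C:\backup\$(Get-Date -Format 'dd-MM-yyyy-hh-mm')"
'MAExports','ServerExport','MAExtensions','PortalExport' | ForEach-Object {
    New-Item -ItemType Directory -Path (Join-Path $backupRoot $_) -Force | Out-Null
}

Import-Module LithnetMiisAutomation

# Export each Management Agent (the Export-ManagementAgent cmdlet name is assumed)
foreach ($ma in Get-ManagementAgent) {
    Export-ManagementAgent -MA $ma.Name -Path "$backupRoot\MAExports\$($ma.Name).xml"
}

# Export the Sync Server configuration (the Export-MIMSyncConfiguration cmdlet name is assumed)
Export-MIMSyncConfiguration -Path "$backupRoot\ServerExport"

# Copy the PowerShell MA Extensions folder (default install path assumed)
Copy-Item 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions' `
    -Destination "$backupRoot\MAExtensions" -Recurse

# Export the MIM Service (Portal) configuration
Add-PSSnapin FIMAutomation
Export-FIMConfig -PolicyConfig -PortalConfig -SchemaConfig |
    ConvertFrom-FIMResource -File "$backupRoot\PortalExport\PortalPolicyConfig.xml"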

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell Sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App, and set the Function App Application Settings MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App timezone so that the nightly backup runs at the correct time for my location. I got my timezone index from here, and set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time.

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 & 6 if your variables and password key file are different. The path to your password keyfile will be different on line 6 anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS Share).
If your MIM Service Server is not on the same host as your MIM Sync Server change line 59 for the hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.
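To give a feel for the overall shape, below is a stripped-down sketch of the timer function wrapper. It is not the full script referenced above; the keyfile path, the server name, and the way the encrypted MIMSyncCredPassword setting pairs with the uploaded keyfile are assumptions.

# Sketch of the timer function wrapper (not the full script referenced above).
# The keyfile path and server name are placeholders - adjust for your Function App.
$keyPath  = 'D:\home\site\wwwroot\<YourFunctionName>\keys\mimsync.key'
$username = $env:MIMSyncCredUser
$password = $env:MIMSyncCredPassword | ConvertTo-SecureString -Key (Get-Content $keyPath)
$cred     = New-Object System.Management.Automation.PSCredential ($username, $password)

# Remote PowerShell over HTTPS to the MIM Sync Server (WinRM listener from the pre-requisites)
$session = New-PSSession -ComputerName 'yourmimsyncserver.yourdomain.com' -Credential $cred -UseSSL `
    -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

Invoke-Command -Session $session -ScriptBlock {
    # ... the backup logic from the Overview section runs here ...
}

Remove-PSSession $session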

Running the Function App I have limited output, but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it’s less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

Summary

The script itself can be run standalone and you could implement it as a Scheduled Task on your FIM/MIM Server. However I’m using Azure Functions for a number of things, and having something that is easily portable, repeatable and centralised with my other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson

How to create an Azure Function App to Simultaneously Start|Stop all Virtual Machines in a Resource Group

Just on a year ago I wrote this blog post that detailed a method to “Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group”. It’s a simple script that I use quite a lot and I’ve received a lot of positive feedback on it.

One year on though and there are a few enhancements I’ve been wanting to make to it. Namely;

  • host the script in an environment that is a known state. Often I’m authenticated to different Azure Subscriptions, my personal, my employers and my customers.
  • prioritize the order the virtual machines startup|shutdown
  • allow for a delay between starting each VM (to account for environments where the VMs have roles with cross dependencies; e.g. a Domain Controller, a SQL Server, Application Servers). You want the DC to be up and running before the SQL Server, and so forth
  • and if I do all those the most important;
    • secure it so not just anyone can start|stop my environments at their whim

Overview

This blog post tackles the first of those enhancements: implementing the script in an environment that is a known state, aka implementing it as an Azure Function App. This won’t be a perfect implementation as you will see, but it will set the foundation for the other enhancements. Subsequent posts (as I make time to develop the enhancements) will add the new functionality. This post covers;

  • Creating the Azure Function App
  • Creating the foundation for automating management of Virtual Machines in Azure using Azure Function Apps
  • Starting | Stopping all Virtual Machines in an Azure Resource Group

Create a New Azure Function App

First up we are going to need a Function App. Through your Azure Resource Manager Portal create a new Function App.

For mine I’ve created a new Resource Group and a new Storage Account as this solution will flesh out over time and I’d like to keep everything organised.

Now that we have the Azure App Service Plan set up, create a new PowerShell HTTP Trigger Function App.

Give it a name and hit Create.

 

Create Deployment Credentials

In order to get some of the dependencies into the Azure Function we need to create deployment credentials so we can upload them. Head to the Function App Settings and choose Go to App Service Settings.

Create a login and give it a password. Record the FTP/Deployment username and the FTP hostname along with your password as you’ll need this in the next step.

Upload our PowerShell Modules and Dependencies

Just as my original PowerShell script did I’m using the brilliant Invoke Parallel Powershell Script from Rambling Cookie Monster. Download it from that link and save it to your local machine.

Connect to your Azure Function App using your favourite FTP Client using the credentials you created earlier. I’m using WinSCP. Create a new sub-directory under /site/wwwroot/ named “bin” as shown below.

Upload the Invoke-Parallel.ps1 file from wherever you extracted it to on your local machine to the bin folder you just created in the Function App.

We are also going to need the AzureRM Powershell Modules. Download those via Powershell to your local machine (eg. Save-Module -Name AzureRM -Path c:\temp\azurerm). There are a lot of modules obviously and you’re not going to need them all. At a minimum for this solution you’ll need;

  • AzureRM
  • AzureRM.profile
  • AzureRM.Compute

Upload them under the bin directory also as shown below.

Test that our script dependencies are accessible

Now that we have our dependent modules uploaded let’s test that we can load and utilise them. Below are the commands to load the Invoke-Parallel script and test that it has loaded by getting its help.

# Load the Invoke-Parallel Powershell Script
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"

# See if it is loaded by getting some output
Get-Help Invoke-Parallel -Full

Put those lines into the code section, hit Save and Run and select Logs to see the output. If successful you’ll see the help. If you don’t you probably have a problem with the path to where you put the Invoke-Parallel script. You can use the Kudu Console from the Function App Settings to get a command line and verify your path.

Mine worked successfully. Now to test that our AzureRM module loads. Update the Function to load the AzureRM Profile PSM1 as per below and test that you have the path correct.

# Import the AzureRM Powershell Module
import-module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Get-Help AzureRM

Success. Fantastic.

Create an Azure Service Principal

In order to automate the access and control of the Azure Virtual Machines we are going to need to connect to Azure using a Service Principal with the necessary permissions to manage the Virtual Machines.

The following script does just that. You only need to run this as part of the setup for the Azure Function so we have an account we can use for our automation tasks. Update line 6 for your naming and the password you want to use. I’m assigning the Service Principal the “DevTest Labs User” Azure Role (Line 17) as that allows the ability to manage the Virtual Machines. You can find a list of the available roles here.
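If the embedded script doesn’t render for you, a rough equivalent is sketched below. The names, URIs and password are placeholders, and the AzureRM cmdlet parameter sets vary between module versions, so treat it as a guide rather than the exact script.

# Rough sketch of creating the Service Principal - names, URIs and password are placeholders
Add-AzureRmAccount

$password = 'Sup3rS3cretP@ssw0rd!'   # use your own strong password
$app = New-AzureRmADApplication -DisplayName 'VMPowerAutomation' `
    -HomePage 'https://vmpowerautomation.local' `
    -IdentifierUris 'https://vmpowerautomation.local' `
    -Password $password

New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
Start-Sleep -Seconds 15   # allow the Service Principal to replicate before assigning the role

# Give the Service Principal enough rights to manage the Virtual Machines
New-AzureRmRoleAssignment -RoleDefinitionName 'DevTest Labs User' -ServicePrincipalName $app.ApplicationId

# The two values to record (property names vary slightly between AzureRM versions)
"ApplicationID : $($app.ApplicationId)"
"TenantID      : $((Get-AzureRmSubscription)[0].TenantId)"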

Take note of the key outputs from this script. You will need to note the;

  • ApplicationID
  • TenantID

I’m also securing the credential that has the permissions to Start|Stop the Virtual Machines using the example detailed here in Tao’s post.

For reference here is an example to generate the keyfile. Update your path in line 5 if required and make sure the password you supply in line 18 matches the password you supplied for the line in the script (line 6) when creating the Security Principal.

Take note of the password encryption string from the end of the script to pair with the ApplicationID and TenantID from the previous steps. You’ll need these shortly in Application Settings.
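For those who can’t see the embedded example, the keyfile/encryption approach is the standard AES key pattern (paths are placeholders):

# Generate a 256-bit AES key, save it, then produce the encrypted password string
# that is paired with the ApplicationID and TenantID in Application Settings
$keyPath = 'C:\temp\AES.key'   # placeholder path
$key = New-Object Byte[] 32
[Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key)
$key | Out-File $keyPath

# Must match the password used when creating the Service Principal
$password  = Read-Host 'Service Principal password' -AsSecureString
$encrypted = $password | ConvertFrom-SecureString -Key (Get-Content $keyPath)
$encrypted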

Additional Dependencies

I created another sub-directory under the function app site named ‘keys’ again using WinSCP. Upload the passkey file created above into that directory.

Whilst we’re there I also created a “logs” directory for any erroneous output (aka logfiles created when you don’t specify them) from the invoke-parallel script.

Application Variables

Using the identity information you have created and generated, we will populate Application Settings variables on the Function App that we can then leverage in our Function. Go to your Azure Function App => Application Settings and add an application setting (with the respective values you gathered in the previous steps) for each of;

  • AzureAutomationPWD
  • AzureAutomationAppID
  • AzureAutomationTennatID (bad speed typing there)

Don’t forget to click Save up the top of the Application Settings screen.

 

The Function App Script

Below is the sample script for your testing purposes. If you plan to use something similar in a production environment you’ll want to add more logging and error handling.
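If the embedded script isn’t visible, the sketch below shows its general shape: read the request, sign in with the Service Principal from Application Settings, then start or stop the Resource Group’s VMs in parallel. The module paths, keyfile path and runspace timeout are assumptions.

# Sketch only - not the full production script referenced above
# Load the Invoke-Parallel helper and AzureRM modules uploaded to the bin folder earlier
. 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1'
Import-Module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Import-Module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.Compute\2.4.0\AzureRM.Compute.psm1'

# Request body supplies the mode (Start|Stop) and the target resource group
$requestBody   = Get-Content $req -Raw | ConvertFrom-Json
$mode          = $requestBody.mode
$resourceGroup = $requestBody.resourcegroup

# Sign in with the Service Principal using the Application Settings created earlier
$keyPath = 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\keys\AES.key'   # placeholder
$secPwd  = $env:AzureAutomationPWD | ConvertTo-SecureString -Key (Get-Content $keyPath)
$spCred  = New-Object System.Management.Automation.PSCredential ($env:AzureAutomationAppID, $secPwd)
Add-AzureRmAccount -ServicePrincipal -TenantId $env:AzureAutomationTennatID -Credential $spCred

# Start or stop every VM in the Resource Group in parallel
$vms = Get-AzureRmVM -ResourceGroupName $resourceGroup
$vms | Invoke-Parallel -ImportVariables -ImportModules -RunspaceTimeout 30 -ScriptBlock {
    if ($mode -eq 'Start') { Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name }
    else { Stop-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name -Force }
}

Out-File -Encoding Ascii -FilePath $res -InputObject "Processed '$mode' for resource group '$resourceGroup'"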

Testing the Function

Select the Test option from the right-hand side pane and update the request body for what the Function takes (mode and resourcegroup) as below. Select Run and watch the logs. You will need to select Expand to get more screen real estate for them.

You will see the VMs enumerate and then the script starting them all up. My script has a 30-second timeout for the Invoke-Parallel runspace, as the VMs will take longer than 30 seconds to start up, and you pay for use, so we want to keep this lean. Increase the timeout if you have more VMs or latency that doesn’t see all your VMs’ states transitioning.

Checking in the Azure Portal I can see my VMs all starting up (too fast on the screenshot for the spfarm-mim host).

 

Sample Remote PowerShell Invoke Script

Below is a sample PowerShell script that remotely calls the Azure Function, providing the info the Function takes (mode and resourcegroup), the same as we did in the Test Request Body in the Azure Function Portal. This time to stop the VMs.
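A minimal version of that call (the function URL and key are placeholders) looks like this:

# Call the Function App remotely - the URL and function key below are placeholders
$functionUri = 'https://yourfunctionapp.azurewebsites.net/api/RG-Start-Stop-VirtualMachines?code=<your-function-key>'
$body = @{ mode = 'Stop'; resourcegroup = 'RG01' } | ConvertTo-Json

Invoke-RestMethod -Uri $functionUri -Method Post -Body $body -ContentType 'application/json'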

Looking in the Azure Portal we can see all the VMs shutting down.

 

Summary

A foundational implementation of an Azure Function App to perform orchestration of Azure Virtual Machines.

The Function App is rudimentary in that the script exits (as described in the Runspace timeout) after 30 seconds, which is prior to the VMs fully returning after starting|stopping. This is because the Function App will time out after 5 minutes anyway.

Now to work out the enhancements to it.

Finally, yes I have renewed/changed the Function Key so no-one else can initiate my Function 🙂

Follow Darren Robinson on Twitter

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a Powershell Azure Function App to get information from a RestAPI and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the difference in this post is outputting the manipulated data to social media (Twitter) whilst still using a TimerTrigger Powershell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution;

Create a folder on your local machine for the Powershell Module, then save the module to your local machine using the powershell command ‘Save-Module’ as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don’t worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function. For my app I’m changing from the default of a 5 min schedule to hourly on the top of the hour. I did this after I’d already created the Function App as shown below. To update the schedule I edited the Function.json file and changed the schedule to "schedule": "0 0 * * * *".

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select “Configure app settings”. Create new App Settings for your Twitter Account, Twitter Account AccessToken, AccessTokenSecret, APIKey and APISecret using the values from when you created your Twitter App earlier.

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function Plan do that and take note of them so you can upload the Twitter Powershell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.


Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter Powershell Module using a FTP Client. I’m using WinSCP.

From the Applications Settings option start Kudu.

Traverse the folder structure to get the path to the Twitter Powershell Module and note it.

Validating our Function App Environment

Update the code to replace the sample created with the Trigger Azure Function, as shown below, so that it imports the Twitter Powershell Module. Include a get-help line for the module so we can see in the logs that the module was imported and the cmdlets it contains. Select Save and Run.

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and sends a tweet out to Twitter.
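As a rough guide only, the flow is along these lines. The sensor URL, Application Setting names, the JSON property, and the InvokeTwitterAPIs cmdlet name and parameters are all assumptions; check the module’s help for the real ones.

# Rough sketch only - the Twitter module cmdlet/parameter names are assumptions
Import-Module 'D:\home\site\wwwroot\<YourFunctionName>\bin\InvokeTwitterAPIs\InvokeTwitterAPIs.psm1'

# Get the current reading from the IoT sensor REST API (URL held in Application Settings)
$reading = Invoke-RestMethod -Uri $env:SensorTempURL -Method Get
$status  = "Current temperature is $($reading.celsius_degree) degrees C #IoT"

# OAuth values come from the Application Settings created from the Twitter App earlier
$oauth = @{
    ApiKey            = $env:TwitterAPIKey
    ApiSecret         = $env:TwitterAPISecret
    AccessToken       = $env:TwitterAccessToken
    AccessTokenSecret = $env:TwitterAccessTokenSecret
}

# Post the status update to the Twitter statuses/update endpoint
# (Invoke-TwitterRestMethod is assumed to be the module's wrapper cmdlet)
Invoke-TwitterRestMethod -ResourceURL 'https://api.twitter.com/1.1/statuses/update.json' `
    -RestVerb 'POST' -Parameters @{ status = $status } -OAuthSettings $oauth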

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways. In this example a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

How to use a Powershell Azure Function App to get RestAPI IoT data into Power BI for Visualization

Overview

This blog post details using a Powershell Azure Function App to get IoT data from a RestAPI and update a table in Power BI with that data for visualization.

The data can come from anywhere, however in the case of this post I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the major change is to use a TimerTrigger Azure Function to perform the work and leverage the “serverless” Azure Functions model. No need for a reporting server or messing around with Windows scheduled tasks.

Prerequisites

The following are the prerequisites for this solution;

  • The Power BI Powershell Module
  • Register an application for RestAPI Access to Power BI
  • A Power BI Dataset ready for the data to go into
  • AzureADPreview Powershell Module

Create a folder on your local machine for the Powershell Modules, then save the modules to your local machine using the powershell command ‘Save-Module’ as per below.

Save-Module -Name PowerBIPS -Path C:\temp\PowerBI
Save-Module -Name AzureADPreview -Path c:\temp\AzureAD 

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption Plan for the Hosting Plan so you only pay for what you use, and select an appropriate location and Storage Account.

Register a Power BI Application

Register a Power BI App if you haven’t already using the link and instructions in the prerequisites. Take a note of the ClientID. You’ll need this in the next step.

Configure Azure Function App Application Settings

In this example I’m using Azure Functions Application Settings for the Azure AD AccountName, Password and the Power BI ClientID. In your Azure Function App select “Configure app settings”. Create new App Settings for your UserID and Password for Azure (to access Power BI) and our PowerBI Application Client ID. Select Save.

Not shown here, I’ve also placed the URLs for the RestAPIs that I’m calling to get the IoT environment data as Application Settings variables.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function App. The default of a 5 min schedule should be perfect. Give it a name and select Create.

Upload the Powershell Modules to the Azure Function App

Now that we have created the base of our Function App we’re going to need to upload the Powershell Modules we’ll be using that are detailed in the prerequisites. In order to upload them to your Azure Function App, go to App Service Settings => Deployment Credentials and set a Username and Password as shown below. Select Save.

Take note of your Deployment Username and FTP Hostname.

Create a sub-directory under your Function App named bin and upload the Power BI Powershell Module using a FTP Client. I’m using WinSCP.

To make sure you get the correct path to the powershell module from Application Settings start Kudu.

Traverse the folder structure to get the path to the Power BI Powershell Module and note the path and the name of the psm1 file.

Now upload the Azure AD Preview Powershell Module in the same way as you did the Power BI Powershell Module.

Again using Kudu, validate the path to the Azure AD Preview Powershell Module. The file you are looking for is the Microsoft.IdentityModel.Clients.ActiveDirectory.dll file. My file after uploading is located in “D:\home\site\wwwroot\MyAzureFunction\bin\AzureADPreview\2.0.0.33\Microsoft.IdentityModel.Clients.ActiveDirectory.dll”.

This library is used by the Power BI Powershell Module.

Validating our Function App Environment

Update the code to replace the sample created with the Trigger Azure Function, as shown below, so that it imports the Power BI Powershell Module. Include a get-help line for the module so we can see in the logs that the module was imported and the cmdlets it contains. Select Save and Run.

Below is my output. I can see the output from the Power BI Module get-help command. I can see that the module was successfully loaded.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and puts the data directly into Power BI.
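As a sketch of the flow only: the PowerBIPS cmdlet parameters, Application Setting names, dataset/table names and the sensor JSON properties below are assumptions to verify against the module’s help and your own environment.

# Sketch only - PowerBIPS parameters and the sensor JSON properties are assumptions
Import-Module 'D:\home\site\wwwroot\<YourFunctionName>\bin\PowerBIPS\PowerBIPS.psm1'

# Credentials and ClientID come from the Application Settings created earlier
$secPwd = ConvertTo-SecureString $env:PowerBIUserPassword -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential ($env:PowerBIUserName, $secPwd)
$token  = Get-PBIAuthToken -ClientId $env:PowerBIClientID -Credential $cred

# Get the current readings from the IoT sensor RestAPIs (URLs held in Application Settings)
$inside  = Invoke-RestMethod -Uri $env:InsideSensorURL -Method Get
$outside = Invoke-RestMethod -Uri $env:OutsideSensorURL -Method Get

# Shape a row and push it into the existing Power BI dataset table
$row = @{
    DateTime    = (Get-Date).ToString('s')
    InsideTemp  = $inside.celsius_degree     # property name depends on your sensor API
    OutsideTemp = $outside.celsius_degree
}
$dataSet = Get-PBIDataSet -AuthToken $token -Name 'IoTEnvironment'   # dataset name is a placeholder
Add-PBITableRows -AuthToken $token -DataSetId $dataSet.id -TableName 'Readings' -Rows @($row)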

Viewing the data in Power BI

In Power BI it is then quick and easy to select our Inside and Outside temperature readings referenced against time. This timescale is overnight so both sensors are reading quite close to each other.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways. In this example a visualization of IoT data into Power BI. The input could easily be business data from an API and the output a real time reporting dashboard.

Follow Darren on Twitter @darrenjrobinson

Get Users/Groups/Objects from Microsoft/Forefront Identity Manager with Azure Functions and the Lithnet Resource Management Powershell Module

Introduction

As an Identity Management consultant, if I had $1 for every time I’ve been asked “what is user x’s current status in IDAM”, “is user x active?”, “does user x have an account in y?”, “what is user x’s primary email address?”, particularly after Go Live of an IDAM solution, my holidays would be a lot more exotic.

From a Service Desk perspective IDAM implementations are often a black box in the middle of the network that for the most part do what they were designed and implemented to do. However as soon as something doesn’t look normal for a user the Service Desk are inclined to point their finger at that black box (IDAM Solution). And the “what is the current value of ..”, “does user x ..” type questions start flying.

What if we could give the Service Desk a simple query interface into FIM/MIM without needing to give them access to another complicated application?

This is the first (of potentially a series) of blog posts on leveraging community libraries and Azure PaaS services to provide visibility of FIM/MIM information. This first post really just introduces the concept, with a working example that is easy to understand and replicate. It is not intended for production implementation without additional security and optimisation.

Overview

The following graphic shows the concept of using Azure Functions to take requests from a client (web app, PowerShell, some other script), query the FIM/MIM Service and return the result. This post details the setup and configuration for the section in the yellow shaded box, with the process outlined in the numbers 1, 2 & 3. This post assumes you already have your FIM/MIM implementation set up and configured for your connected/integrated applications and services such as Active Directory. In my example my connected datasource is actually Twitter.

Prerequisites

The prerequisites for this solution are;

 

Creating your Azure App Service

First up you’ll need to create an Azure App Service in your Azure Tenant. To keep everything logically structured for this example I created an Azure App Service in the same resource group that contains my MIM IaaS infrastructure (MIM Sync Server, MIM Service Server, SQL Server, AD Domain Controller etc).

In the Azure Marketplace select New (+) and search for Function App. Select the Function App item from the results and select Create.

Give your Azure App Service a name and choose the Resource Group where you want to locate it. Choose Dynamic for the Hosting Plan. This means you don’t have to worry about resource management for your Web App, and you only pay for execution time which, unless you put this into production and go crazy with it, should mean your costs are zero as they will (should) be well under the free grant tier. Put the application in an appropriate location, such as close to the FIM/MIM resources it’ll be interacting with, and select Create.

Now that you have your Azure App Service setup, you need to create your Azure Function.

Create an HTTP Trigger Powershell Function App

In the Azure Portal locate your App Services Blade and select the Function App created in the steps above. Mine was named MIMMetaverseSearch in the example above. Select PowerShell as the Language and HttpTrigger-Powershell as the Function type.

Give your Function a name. I’ve kept it simple in this example and named it the same as my App Service Plan. Select Create.

Adding the Lithnet Powershell Module into your Function App.

As you’d expect, the Powershell Function App by default only has a handful of core Powershell Modules. As we’re using something pretty specific, we’ll need to put the module into our Function App so we can load it and use the library.

Download and save the Lithnet Resource Management Powershell Module to your local machine. Something like the Powershell command below will do that.
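Assuming the module’s PowerShell Gallery name is LithnetRMA, that would be:

# Save the Lithnet Resource Management PowerShell Module locally ready for upload
Save-Module -Name LithnetRMA -Path C:\temp\lithnet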

Next follow this great blog post here from Tao to upload the Lithnet RMA PS Module you downloaded earlier into your function directory. I used WinSCP as my FTP client as I’ve shown below to upload the Lithnet RMA PS Module.

FTP to the host for your App Service and navigate to the /site/wwwroot/

Create a bin folder and upload via FTP the Lithnet RMA PS Module.

Using Kudu navigate to the path and version of the Lithnet RMA PS Module.
I’m using v1.0.6088 and my app is named MIMMetaverseSearch, so my path is D:\home\site\wwwroot\MIMMetaverseSearch\bin\LithnetRMA\1.0.6088

Note: the Lithnet RMA PS Module is 64-bit so you’ll need to configure your Web App for 64-bit as per the info in the same blog you followed to upload the module here.

Test loading the Lithnet RMA PS Module in your Function App

In your Function App select </> Develop. Remove the sample script and in your first line import the Lithnet RMA PS Module using the path from the previous step. Then, to check that it loads add a line that references a cmdlet in the module. I used Help Get-Resource. Select Save then Run.
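Using the path noted in the previous step, that amounts to something like the following (the .psd1 filename within the module folder is an assumption):

# Import the Lithnet RMA PS Module from the bin folder (module filename assumed)
Import-Module 'D:\home\site\wwwroot\MIMMetaverseSearch\bin\LithnetRMA\1.0.6088\LithnetRMA.psd1'
# Verify it loaded by pulling help for one of its cmdlets
Help Get-Resource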

If you’ve done everything correctly, when you select Run and look at the Logs you’ll see the module was loaded and the Help Get-Resource command was run.

Allow your Function App to access your FIM/MIM Service Server

Even though you have logically placed your Function App in the same Resource Group (if you did it like I have) you’ll need to actually allow the Function App that is running in a shared PaaS environment to connect to your FIM/MIM Service Server.

Create an inbound rule in your Network Security Group to allow access to your FIM/MIM Service Server. The example below isn’t as secure as it could (and should) be as it allows access from anywhere. You should restrict the source of the request(s) accordingly. I’m just showing how to quickly get a working example. TCP Port 5725 is required to access your MIM Service Server. Enter the details as per below and select Ok.

Using an Azure Function to query FIM/MIM Service

Note: Again, this is an example to quickly show the concept. In the script below your credentials are in the script in clear text (and of course those below are not valid). For anything other than validating the concept you must protect your credentials. A great example is available here in Tao’s post.

The PS Azure Function gets the incoming request and converts it from JSON. In my request which you’ll see in the next step I’m passing in “displayName” and “objectType”.

In this example I’m using Get-Resource from the Lithnet RMA to get an object from the FIM/MIM Service. First you need to open a connection to the FIM/MIM Service Server. On my Azure IaaS MIM Service Server I’ve configured a DNS name so you can see I’m using that name in line 17 to connect to it using the unsecured credentials from earlier in the script. If you haven’t set up a DNS name for your FIM/MIM Service Server you can use the Public IP Address instead.

Line 20 queries for the ObjectType and DisplayName passed into the Function (see calling the Function in the next step) and returns the response in line 22. Again this is just an example. There is no error checking, validation or anything. I’m just introducing the concept in this post.
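For readers who can’t see the embedded script, a hedged sketch of that flow is below. It is not the exact script with the line numbers referenced above, and the Lithnet RMA parameters, DNS name and credentials are placeholders.

# Sketch of the HTTP-triggered query function - not the exact script described above
# Credentials are in clear text purely to illustrate the concept; protect them for real use
Import-Module 'D:\home\site\wwwroot\MIMMetaverseSearch\bin\LithnetRMA\1.0.6088\LithnetRMA.psd1'

# The incoming request supplies displayName and objectType
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$displayName = $requestBody.displayName
$objectType  = $requestBody.objectType

# Open a connection to the MIM Service Server (DNS name and credentials are placeholders)
$secPwd = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
$cred   = New-Object System.Management.Automation.PSCredential ('MIMDOMAIN\svc-mimquery', $secPwd)
Set-ResourceManagementClient -BaseAddress 'http://mimservice.yourdomain.com:5725' -Credentials $cred

# Query for the object and return it to the caller
$resource = Get-Resource -ObjectType $objectType -AttributeName 'DisplayName' -AttributeValue $displayName
Out-File -Encoding Ascii -FilePath $res -InputObject ($resource | ConvertTo-Json -Depth 4)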

Testing your Function App

Now that you have the function script saved, you can test it from the Function App itself. Select Test from the options up in the right from your function. Change the Request Body for what the Function is expecting. In my case displayname and objectType. Select Run and in the Logs if you’ve got everything configured correctly (like inbound network rules, DNS name, your FIM/MIM Service Server is online, your query is for a valid resource) you should see an object returned.

Calling the Function App from a Client

Now that we have our Function App all setup and configured (and tested in the Function App) let’s send a request to the Azure Function using the Powershell Invoke-RestMethod function. The following call I did from my laptop. It is important to note that there is no authN in this example and the function app will be using whatever credentials you gave it to execute the request. In a deployed solution you’ll need to scope who can make the requests, limit on the inbound network rules who can submit requests and of course further protect the account credentials used to connect to your FIM/MIM Service Server.
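A minimal version of that client call (the function URL and key are placeholders) is:

# Call the Function App from a client - the URL and function key are placeholders
$uri  = 'https://mimmetaversesearch.azurewebsites.net/api/MIMMetaverseSearch?code=<your-function-key>'
$body = @{ displayName = 'Darren Robinson'; objectType = 'Person' } | ConvertTo-Json

$result = Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType 'application/json'
$result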

Successful Response

The following screenshot shows calling the Function App and getting the responding object. Success. In a couple of lines I created a hashtable for the request, converted it to JSON and submitted it and got a response. How powerful is that!?

Summary

Using the awesome Lithnet Resource Management PowerShell Module with Azure Functions it is pretty quick and flexible to access a wealth of information we may want to expose for business benefit.

Now if only there was an affiliation program for Azure Functions that could deposit funds for each IDAM request to an Azure Functions App into my holiday fund.

Stay tuned for more posts on taking this concept to the next level.

Follow Darren on Twitter @darrenjrobinson

Getting Users, Groups & Contacts via the Azure Graph API using Differential Query & PowerShell

This is the final post in a series detailing using PowerShell to leverage the Azure AD Graph API. For those catching up it started here introducing using PowerShell to access the Azure AD via the Graph API, licensing users in Azure AD via Powershell and the Graph API, and returning all objects using paging via Powershell and the Graph API.

In this post I show how to;

  • enumerate objects from Azure AD via Powershell and the Graph API, and set a delta change cookie
  • enumerate changes in Azure AD since the last query
  • return objects that have changed since the last query
  • return just the changed attributes on objects that have changed since the last query
  • get a differential sync from now delta change link

Searching through MSDN and other resources while working this out, I stumbled upon a reference to changes in the API that detail the search filters. v1.5 and later of the API require filters using the context ‘Microsoft.DirectoryServices.User|Group|Contact’ etc. instead of ‘Microsoft.WindowsAzure.ActiveDirectory.User|Group|Contact’, which you’ll find in the few examples around. If you don’t want to return all these object types, update the filter on line 21 in the script below.

Here is the script to return all Users, Groups and Contacts from the tenant along with all the other options I detail in this post. Update the following for your tenant;

  • line 6 for your tenant URI
  • line 10 for your account in the tenant
  • line 11 for the password associated with your account from line 10

Here is a sample output showing the Users, Groups, Contacts and DirectoryLinkChange objects. Note: if you have a large tenant that has been in place for a period of time it may take a while to enumerate.  In this instance you can use the Differential Sync from now option. More on that later.

Running the query again using the Differential DeltaLink from the first run now returns no results. This is as expected as no changes have been made in the tenant on the objects in our query.

Now if I make a change in the tenant and run the query again using the Differential DeltaLink I get 1 result. And I get the full object.

What if I just wanted to know the change that was made?

If we add ‘&ocp-aad-dq-include-only-changed-properties=true’ to the URI that’s exactly what we get. The object and what changed. In my case the Department attribute.

Finally as alluded to earlier there is the Differential Sync from now option. Very useful on large tenants where you can query and get all users, contacts, groups etc without using differential sync, then get the Differential Delta token for future sync queries. So I’ve used the same URI that I used as the beginning of this blog post but in the header specified ‘ocp-aad-dq-include-only-delta-token’ = “true” and as you can see I returned no results but I got the important Differential Query DeltaLink.
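To make the moving parts concrete, here is a cut-down sketch of those calls. It assumes $headers already contains a valid Authorization bearer token for the Azure AD Graph API (from the earlier posts in this series) and uses a placeholder tenant name.

# Differential query sketch - assumes $headers holds a valid Graph API bearer token
$tenant  = 'yourtenant.onmicrosoft.com'   # placeholder
$filter  = "isof('Microsoft.DirectoryServices.User') or isof('Microsoft.DirectoryServices.Group') or isof('Microsoft.DirectoryServices.Contact')"
$baseUri = "https://graph.windows.net/$tenant/directoryObjects?api-version=1.6&deltaLink=&`$filter=$filter"

# Initial differential query - returns the objects plus a deltaLink for the next run
$result    = Invoke-RestMethod -Uri $baseUri -Headers $headers -Method Get
$deltaLink = $result.'aad.deltaLink'

# Subsequent runs use the deltaLink and return only what changed since last time
$changes = Invoke-RestMethod -Uri $deltaLink -Headers $headers -Method Get

# Append '&ocp-aad-dq-include-only-changed-properties=true' to the URI to get just the changed attributes

# Or request only the delta token (no objects) to start differential sync 'from now'
$tokenOnly = Invoke-RestMethod -Uri $baseUri -Method Get `
    -Headers ($headers + @{ 'ocp-aad-dq-include-only-delta-token' = 'true' })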

Summary

Using Powershell we can leverage the Azure AD Graph RestAPI and use the Differential Sync functions to efficiently query Azure AD for changes rather than needing to enumerate an entire tenant each time. Brilliant.

Follow Darren on Twitter @darrenjrobinson

Enumerating all Users/Groups/Contacts in an Azure tenant using PowerShell and the Azure Graph API ‘odata.nextLink’ paging function

Recently I posted about using PowerShell and the Azure Active Directory Authentication Library to connect to Azure AD here. Whilst that post detailed performing simple tasks like updating an attribute on a user, in this post I’ll use the same method to connect to Azure AD via PowerShell but cover;

  • enumerate users, contacts or groups
  • where the number of objects is greater than the maximum results per page, get all remaining pages of results
  • limit results based on filters

The premise of my script was one that could just be executed without prompts. As such the script contains the ‘username’ and ‘password’ that are used to perform the query. No special access is required for this script. Any standard user account will have ‘read’ permissions to Azure AD and will return results.

Here is the base script to return all objects of a given type from a tenant. For your environment;

  • change line 7 for your tenant name
  • change line 11 for your account in your tenant
  • change line 12 for the password associated with the account specified in line 11
  • change line 18 for the object type (eg. Users, Groups, Contacts)

I’ve hardcoded the number of results to return per page in both lines 39 and 64 to the maximum of 999. The default is 100. I wanted to return all objects as quickly as possible.

The first query along with returning 999 query results also returns a value for $query.’odata.nextLink’ if there are more than 999 results. The .nextLink value we then use in subsequent API calls to return the remaining pages until we have returned all objects.
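Boiled down, that paging loop looks something like the sketch below (it assumes $headers already contains your Authorization bearer token; the full script also handles authentication and the object-type choice):

# Paging sketch - assumes $headers contains a valid Graph API bearer token
$tenant  = 'yourtenant.onmicrosoft.com'   # placeholder
$objects = @()
$uri     = "https://graph.windows.net/$tenant/users?api-version=1.6&`$top=999"

$query    = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
$objects += $query.value

# Keep following odata.nextLink until there are no more pages
while ($query.'odata.nextLink') {
    $uri      = "https://graph.windows.net/$tenant/$($query.'odata.nextLink')&api-version=1.6&`$top=999"
    $query    = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
    $objects += $query.value
}

"Returned $($objects.Count) objects"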

Brilliant. So we can now simply change line 18 for different object types (Users, Groups, Contacts) if required. But what if we want to filter on other criteria such as attribute values?

Here is a slightly modified version (to the URI) to include a query filter. Lines 19-24 have a couple of examples of query filters.

So there you have the basics on getting started returning large numbers of objects from Azure AD via the Azure Graph API from PowerShell. Hopefully the time I spent working out the syntax for the URIs helps someone else out, as there aren’t many examples around that I could find whilst working this out.

Follow Darren on Twitter @darrenjrobinson

Automating the simultaneous deployment of AzureRM Virtual Machines for a development environment

This post details my method for automating the creation of AzureRM virtual machines for use in a development environment. I’m using this process to quickly stand up an environment for testing configurations.

In summary this process;

  • creates the AzureRM Virtual Machines in parallel
  • gives all machines the same configuration
    • NIC, disks etc.
  • creates all machines in a new Resource Group, with an associated Virtual Network

Simultaneously Creating the AzureRM Virtual Machines for MIM 2016

For my MIM 2016 Lab I’m going to create 5 Virtual Machines. Their roles are;

  • Active Directory Domain Controllers (2)
  • MIM 2016 Portal Servers (2)
  • MIM 2016 Synchronization Server & collocated SQL Server (1)

In order to stand up the Virtual Machines as quickly as possible I spawn the VM create process in parallel, leveraging the brilliant Invoke-Parallel Powershell Script from Cookie.Monster, just as I did for my Simultaneous Start|Stop all AzureRM VMs script detailed here.

VM Creation Script

In my script at the bottom of this post I haven’t included the ‘invoke-parallel.ps1’. The link for it is in the paragraph above. You’ll need to either reference it at the start of your script, or include it in your script. If you want to keep it all together in a single script include it like I have in the screenshot below.

Parts you’ll need to edit

The first part of the script defines what, where and names associated with the development environment. Where will the lab be deployed (‘Australia East’), what will be the Resource Group name (‘MIM2016-Dev10’), the virtual network name in the resource group (‘MIM-Net10’), the storage account name (‘mimdev16021610’). Update each of these for the name you’ll be using for your environment.

#Global Variables
# Where do we want to put the VM's
$global:locName = 'Australia East'
# Resource Group name
$global:rgName = 'MIM2016-Dev10'
# Virtual Network Name
$global:virtNetwork = 'MIM-Net10'
# Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only
$global:stName = 'mimdev16021610'
# VMName
$global:NewVM = $null

The VM’s that will be deployed are added to an array. The names used here will become the ComputerName for each VM. Change for the number of machines and the names you want.

# MIM Servers to Auto Deploy
$VMRole = @()
$VMRole += ,('MIMPortal1')
$VMRole += ,('MIMPortal2')
$VMRole += ,('MIMSync')
$VMRole += ,('ADDC1')
$VMRole += ,('ADDC2')

The script will then prompt you to authN to AzureRM.

# Authenticate to the Azure Portal
Add-AzureRmAccount

The script will then prompt you provide a username and password that will be associated with each of the VM’s.

# Get the UserID and Password info that we want associated with the new VM's.
$global:cred = Get-Credential -Message "Type the name and password for the local administrator account that will be created for your new VM(s)."

The Resource Group will be created. The environment subnet and virtual network will be created in the resource group.

# Create Resource Group
New-AzureRmResourceGroup -Name $rgName -Location $locName
# Create RG Storage Account
$storageAcc = New-AzureRmStorageAccount -ResourceGroupName $rgName -Name $stName -Type "Standard_GRS" -Location $locName
# Create RG Subnet
$singleSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name singleSubnet -AddressPrefix 10.0.0.0/24
# Create RG Network
$global:vnet = New-AzureRmVirtualNetwork -Name $virtNetwork -ResourceGroupName $rgName -Location $locName -AddressPrefix 10.0.0.0/16 -Subnet $singleSubnet

A configuration for each of the VMs that we listed in the array earlier will be created and added to a new array, $VMConfig.

Update -VMSize "Standard_A1" for a different Azure VM spec and DiskSizeInGB for a different data disk size.

# VM Config for each VM
$VMConfig = @()
# Create VMConfigs and add to an array
foreach ($NewVM in $VMRole) {
    # ******** Create IP and Network for the VM ***************************************
    # *****We do this upfront before the bulk create of the VM**************
    $pip = New-AzureRmPublicIpAddress -Name "$NewVM-IP1" -ResourceGroupName $rgName -Location $locName -AllocationMethod Dynamic
    $nic = New-AzureRmNetworkInterface -Name "$NewVM-NIC1" -ResourceGroupName $rgName -Location $locName -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
    $vm = New-AzureRmVMConfig -VMName $NewVM -VMSize "Standard_A1"
    $vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $NewVM -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
    $vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version "latest"
    $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
    # VM Disks. Deploying an OS and a Data Disk for each
    $osDiskUri = $storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/WindowsVMosDisk$NewVM.vhd"
    $DataDiskUri = $storageAcc.PrimaryEndpoints.Blob.ToString() + "vhds/WindowsVMDataDisk$NewVM.vhd"
    $vm = Set-AzureRmVMOSDisk -VM $vm -Name "windowsvmosdisk" -VhdUri $osDiskUri -CreateOption fromImage
    $vm = Add-AzureRmVMDataDisk -VM $vm -Name "windowsvmdatadisk" -VhdUri $DataDiskUri -CreateOption Empty -Caching 'ReadOnly' -DiskSizeInGB 10 -Lun 0

    # Add the Config to an Array
    $VMConfig += ,($vm)
    # ******** End NEW VM ***************************************
}

And finally BAM, we let it loose. Using the Invoke-Parallel script each of the VM’s are created in AzureRM simultaneously.

# In Parallel Create all the VM's
$VMConfig | Invoke-Parallel -ImportVariables -ScriptBlock { New-AzureRmVM -ResourceGroupName $rgName -Location $locName -VM $_ }

The screenshot below is near the end of the create VM process. The MIMPortal1 VM that was first in the $VMConfig array is finished and the others are close behind.

The screenshot below shows all resources in the resource group. The Storage Account, the Network, each VM, its associated NIC and Public IP Address.

For me all that (including authN to AzureRM and providing the ‘username’ and ‘password’ for the admin account associated with each VM) executed in just under 13 minutes. Nice.

The full script? Sure thing, here it is.

Follow Darren Robinson on Twitter

Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group

Problem

How many times have you wanted to Start or Stop all Virtual Machines in an Azure Resource Group? For me it seems to be quite often, especially for development environment resource groups. It’s not that difficult though. You can just enumerate the VMs then cycle through them and call ‘Start-AzureRMVM’ or ‘Stop-AzureRMVM’. However, the more VMs you have, the longer that approach takes, because PowerShell runs the requests serially. Go to the Portal and right-click each VM to start|stop it?

There has to be a way of starting/shutting down all VMs in a Resource Group in parallel via PowerShell, right?

Some searching and it seems common to use Azure Automation and Workflows to accomplish it. But I don’t want to run this on a schedule or necessarily mess around with Azure Automation for development environments, or have to connect to the portal and kick off the workflow.

What I wanted was a script that was portable. That led me to messing around with ‘ScriptBlocks’ and ‘Start-Job’ functions in PowerShell. Passing variables in for locally hosted jobs running against Azure though was painful. So I found a quick, clean way of doing it, which I detail in this post.

Solution

I’m using the brilliant Invoke-Parallel Powershell Script from Cookie.Monster, to in essence multi-thread and run in parallel the Virtual Machine ‘start’ and ‘stop’ requests.

In my script at the bottom of this post I haven’t included the ‘invoke-parallel.ps1’. The link for it is in the paragraph above. You’ll need to either reference it at the start of your script, or include it in your script. If you want to keep it all together in a single script include it like I have in the screenshot below.

My rudimentary PowerShell script takes two parameters;

  1. Power state. Either ‘Start’ or ‘Stop’
  2. Resource Group. The name of the Azure Resource Group containing the Virtual Machines you are wanting to start/stop. eg. ‘RG01’

Example: .\AzureRGVMPowerGo.ps1 -power 'Start' -azureResourceGroup 'RG01' or PowerShell .\AzureRGVMPowerGo.ps1 -power 'Start' -azureResourceGroup 'RG01'

Note: If you don’t have a session to Azure in your current environment, you’ll be prompted to authenticate.

Your VMs will simultaneously start/stop.

What’s it actually doing ?

It’s pretty simple. The script enumerates the VMs in the Resource Group you’ve specified. It looks at the status of each VM (Running or Deallocated) and acts on those whose state is the inverse of the ‘Power’ state you’ve specified when running the script. It’ll start the stopped VMs in the Resource Group when you run it with ‘Start’, or stop all the running VMs when you run it with ‘Stop’. Simples.
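Stripped right back, the core logic is a sketch like the one below; the full script at the end of this post adds the parameters and wraps the start/stop calls with Invoke-Parallel.

# Trimmed sketch of the core logic - the full script adds Invoke-Parallel for the parallelism
param(
    [ValidateSet('Start','Stop')][string]$power = 'Start',
    [string]$azureResourceGroup = 'RG01'
)

$vms = Get-AzureRmVM -ResourceGroupName $azureResourceGroup

foreach ($vm in $vms) {
    # Current power state of this VM (e.g. 'PowerState/running' or 'PowerState/deallocated')
    $state = ((Get-AzureRmVM -ResourceGroupName $azureResourceGroup -Name $vm.Name -Status).Statuses |
        Where-Object { $_.Code -like 'PowerState/*' }).Code

    if ($power -eq 'Start' -and $state -eq 'PowerState/deallocated') {
        Start-AzureRmVM -ResourceGroupName $azureResourceGroup -Name $vm.Name
    }
    elseif ($power -eq 'Stop' -and $state -eq 'PowerState/running') {
        Stop-AzureRmVM -ResourceGroupName $azureResourceGroup -Name $vm.Name -Force
    }
}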

This script could also easily be updated to do other similar tasks. Like, delete all VM’s in a Resource Group.

Here it is

Enjoy.

Follow Darren Robinson on Twitter