Why and how I rebuilt my home network with Ubiquiti UniFi Networking

Remember the good old days of working from home, or checking your email and doing research for whatever you were working on, when you had to plug the phone line into the modem and dial up your ISP or employer to access the internet? The upgrade to ISDN and quick dial-on-demand access? Then the consumerisation of WiFi and DSL brought always-on connectivity to the internet from home.

Now in 2017, with the ubiquity of WiFi and the typical house renovations and extensions, you end up with a myriad of devices providing connectivity for entertainment, home automation and everything in-between. That is where my house was at (until a month ago). Add to that two high school students leveraging the interwebs for study, social and gaming, and the heterogeneous, organically grown network environment no longer holds up.

Something needed to change, but what? During my research I stumbled across others investigating the same predicament. I didn't want another stop-gap, band-aid solution. I wanted enterprise-grade services that were reliable and affordable. Inspired by Troy Hunt's posts here and here, and similar conversations with colleagues, I sat in front of the TV and YouTube for a day over a recent long weekend and devised a plan that would work for my house and my family.

What did I want?

With my consultant hat on I defined my requirements:

  1. Centralised administration
    • i.e. not having to log into three different Wi-Fi access points to change configuration, find connected devices, etc.
  2. Maximise throughput across Wi-Fi and ethernet to the internet
    • 802.11ac for Wi-Fi
    • Full 5GHz coverage
    • Gbit ethernet for connected wired devices
  3. Enterprise Grade VPN for integration with IaaS Services
  4. Security integration (video monitoring). Tactical first, ubiquitous coverage later
  5. Reporting and Auditing
    • Who’s connecting
    • What devices are doing what

Ubiquiti UniFi ticked all the boxes and based on positive local reviews I jumped in.

The Plan

What replaces what?

  1. Telstra cable broadband all-in-one modem, router, firewall, access point
    • The modem function stays enabled (as it is HFC) but the unit is put into Bridge Mode, with the other functions (firewall and routing) now provided by the UniFi Security Gateway
  2. Billion switch and access point
    • At the other end of the house from the incoming Telstra connection I had a Billion all-in-one device. That was decommissioned and replaced with a Ubiquiti in-wall 802.11ac unit
  3. Cisco 8 port Gb Switch
    • This was replaced with the Ubiquiti 8 port managed switch (which includes 4 POE switch ports)
  4. Apple Airport Expresses (x2)
    • These were located in two dead zones wired in via ethernet but also acting as access points. They provided coverage for the outdoor entertaining areas, the pool, the downstairs lounge and the brewhouse. They were replaced with Ubiquiti 802.11ac Lite WiFi access points

In addition I also purchased the following:

  1. Ubiquiti CloudKey to control everything and store the configuration
  2. I added an additional Ubiquiti in-wall 802.11ac unit in my son's bedroom. This gives maximum Wi-Fi coverage for him and his sister, but also an ethernet outlet to connect to his docking station for his MacBook and online gaming
  3. UniFi Video G3 IR HD Camera. Only one to start with to test a few scenarios out. If it works out it will be supplemented with additional units
  4. I had two Apple TVs. One wired and one on Wi-Fi only. These were the 3rd Gen units and whilst they worked they really were Apple-centric. So streaming music to all units (including the Airport Express) reliably meant using iTunes on a computer. I replaced these with Chromecast units (a combo of Audio and Video)
  5. Google Home to play nicely with the Chromecast and introduce a home automation assistant to the house through my existing Hue lights.

Putting it all together

Having invested the time in research, I’d watched a number of videos/tutorials from Chris and Willie. Check out this quick start guide from Chris and Willie’s tutorials here.

I took note of the configuration I had on my existing Telstra unit as I had some static address leases configured and a couple of ports enabled on the firewall. I installed the Ubiquiti Discovery Tool Chrome Extension on the laptop.

Essentially the process went like this:

  • Telstra Cable Broadband Ethernet connection to UniFi Security Gateway WAN Port
  • LAN Port of the UniFi Security Gateway connected to UniFi Switch
  • CloudKey connected to a UniFi POE Switch Port
  • Laptop connected via Ethernet to a UniFi Switch Port
  • UniFi AP-AC-Lite POE injector LAN Port connected to a UniFi Switch Port. UniFi AP-AC-Lite POE injector POE port connected to a UniFi AP-AC-Lite
  • UniFi Switch POE Port connected to the UniFi AC AP InWall unit
  • Via WiFi I connected to the existing Telstra modem configuration page and changed WAN port to Bridged Mode
  • I then turned off WiFi on the Telstra Modem using the button on the front
  • Powered on all the UniFi equipment and waited 5 mins

That all looked pretty similar to this

  • In the browser on the Ethernet connected laptop I went to 192.168.1.1 and could see the UniFi Gateway had got an address and the configuration looked good
  • I started the Device Discovery Tool Chrome Extension, clicked on the UniFi Family button in the top right and let it find the devices on the network. It found all the network devices except the CloudKey. After clicking Find CloudKey it found the CloudKey.

I then followed this quick start guide from Chris. It was pretty straightforward and I followed my nose once I'd got through the first few steps. Pretty much everything had an update, which I performed. I re-used the existing WiFi name I had previously to make reconfiguration easier to start with, but I did use a different subnet. As everything was pretty much DHCP enabled this didn't cause any major problems; just a couple of manual updates for the devices with static addresses.

Having just gotten off a 17hr flight from Dallas to Sydney, and having been up for 14 hours before that, I forced myself back into the local timezone by performing the above. I had it all up and running within a couple of hours. Over the next few days I familiarised myself with the equipment and the configs before physically locating all the components in their final resting spots.

Did I get what I wanted?

In fact it was the start of the school holidays as I set everything up. As part of the initial configuration I enabled Deep Packet Inspection (DPI). The very next day, looking into some of the features, I noticed the following in the Statistics. Yes, my son had updated his PlayStation games and had enjoyed a solid multi-hour gaming session with his mates online.

A couple of days later I noticed his sibling was keeping occupied with YouTube and iTunes whilst keeping in touch with her friends via Instagram and Snapchat. Combined with the Clients view, I now had visibility of what was connected and what each device was doing on the home network. Even better, with Cloud Access enabled I can do this at any time from anywhere.

WiFi Network Coverage

This was one of the big things I wanted to fix. The footprint of the property where I require full coverage is just over 350m2 (3767 sq ft), and ideally that coverage should be 5GHz. Using the Map functionality I uploaded the house/property plan, placed the units where I installed them and configured the map for the appropriate dimensions. I started out with all the Access Points configured with full power and Auto channel. The 2.4GHz coverage map looks like this. That's some pretty good coverage.

The 5GHz coverage map looks like this. There is a small spot where coverage isn't anticipated to be full, but I haven't encountered any issues with connectivity, possibly because the devices used there aren't on 5GHz (they can't be, or have dropped back to 2.4GHz). I'll keep an eye on it, but another AP in the pantry to cover the kitchen with full 5GHz may be a future option.

Configuration Updates

There was a series of firmware updates for all the devices about a week after the initial setup. After updating the CloudKey, USG and Switch I spotted the "managed rolling upgrade" option and used that for the Access Points. Nice and simple. The Group Config option is also very nice, allowing the selection of multiple devices and making the same config change to all of them.

Camera Setup

The camera setup was so simple I was second-guessing myself as to whether I'd done it correctly. I'd purchased the G3 HD Camera but held off on the NVR, as I wanted to have a play and make sure I was happy with it first; I've had many unfavourable experiences with video cameras in the past. In doing my pre-purchase research I'd identified that Ubiquiti provide the NVR software for free for Windows and Linux.

I had an old laptop doing nothing, so I put Ubuntu on it, installed the NVR software and bam, it discovered my camera. Integration with my Ubiquiti account means I can also access the camera and recordings from anywhere. I have the camera permanently fixed and configured to record on motion. It just works. Nighttime IR motion recording also works well.

The Timeline feature is very nice. Quickly catch up on the guard dog's irregular patrols of his backyard 🙂

VPN Setup

This is one piece I haven't finished exploring and getting to where I want yet. I've attempted to create a site-to-site VPN from home to Azure, which works but doesn't appear to hold the connection. More testing and configuration required.

Summary

Three weeks on and I'm extremely happy with the equipment and its performance. I took the opportunity to also check each fly-lead attached to each device in the house, and I biffed anything that wasn't rated or was less than Cat 5E.

With school holidays coming to an end, and the network being performance tested daily with YouTube, Netflix, Sony Entertainment Network, FaceTime, Skype for Business etc., it hasn't missed a beat. In fact I haven't heard a peep about poor internet access, lagging online game performance, long ping times etc., which I'm sure are common phrases other parents of millennial teenagers would know.


Getting started with Azure Cloud Shell

A few weeks back I noticed that I now had the option for the Azure Cloud Shell in the Azure Portal.

What is Azure Cloud Shell?

Essentially, rather than having the Azure CLI installed on your local workstation, you can now initiate it from the Portal, with 5 GB of storage automatically assigned (initiated as part of the setup) and associated with it. So you can now create, manage and delete Azure resources using a centrally hosted CLI session. Each time you start your shell, your home drive is mounted and your profile, scripts and whatever else you've stored in it are available to you. Nice. Let's do it.

Getting Started

Login to the Azure Portal and click on the Cloud Shell icon.

As this is the first time you’ve accessed it, you will not have any storage associated with your Azure Cloud Shell. You will be prompted for storage information.

Azure Files must reside in the same region as the machine being mounted to. Cloud Shell machines currently (July 2017) exist in the following regions:

  • Americas: East US, South Central US, West US
  • Europe: North Europe, West Europe
  • Asia Pacific: India Central, Southeast Asia

I hit the Advanced Settings to specify creation of a new Resource Group, Storage Account and File Share.

The UI doesn't check the uniqueness of the configuration settings until they are written, so you might need a couple of attempts at naming your storage account. As you can see below, it isn't surprising that my attempt to use azcloudshell as a Storage Account Name was already taken.

Providing unique values for these options let the initial creation go through nicely. I now had a home drive created for my profile and any files I create and store for my sessions.

As for the commands you can use with the Azure CLI, have a look here for the full list you can use to create, manage and delete your Azure resources.

Personally I’m currently doing a lot with Azure Functions. A list of the full range of Azure Functions CLI commands is available here.
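To give you a feel for it, here are a few illustrative Azure CLI 2.0 commands to try in your Cloud Shell session:

# Show the subscription your shell session is bound to
az account show --output table
# List your resource groups
az group list --output table
# List your Function Apps
az functionapp list --output table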

The next thing I looked to do was put my scripts etc. into the clouddrive. I just navigated to the new Storage Account that I created as part of this and uploaded them via the browser.

Below you can see the file I uploaded on the right which appears in the directory in the middle pane.

Using the Azure CLI I changed directories and could see my uploaded file.
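In the shell that's simply:

# The File Share is mounted as clouddrive in your home directory
cd clouddrive
ls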

And that is pretty much it. Continue as you would with the CLI, but just now with it all centrally stored. Sweet.

How to build and deploy an Azure NodeJS WebApp using Visual Studio Code

Introduction

This week I had the need to build a small web application with a reasonably simple front end that will later be integrated inside a Portal. The web application isn't going to be high use and didn't necessitate deployment of infrastructure (VMs). I'd messed with NodeJS a while back in this post where I configured a UI for Microsoft Identity Manager and Azure based functions.

In the back of my mind I knew I didn't want to have to go for a full Visual Studio Project Solution for this, and with the recent updates to Visual Studio Code I figured it must be possible to do it using it. There wasn't much around on doing it, so I dived in and worked it out for myself. Here I share the end-to-end process to make it easy for you to get started.

Overview

What you will need on your development workstation before you start are the following components. Download and install them on your dev machine.

You will also need an Azure Subscription to where you will publish your NodeJS site.

This post details setting up Visual Studio Code to build a shell NodeJS site and deploy it to Azure using a local GIT Repository. Let’s get started.

Visual Studio Code Extensions

A really smart and handy extension for VS Code is Azure Tools for VS Code. Released a few months ago (January 2017), this extension allows you to quickly create a Web App (Resource Group, App Service, Application Service Plan etc.) from within VS Code. With VS Code on your development machine from the prerequisites above, click on the Extensions icon (bottom left) in the VS Code menu and type Azure Tools. Click the green Install button.

Azure Tools for VS Code

Creating the NodeJS Site in VS Code

I had a couple of attempts at doing this before I found a quick, neat and repeatable method of getting started. In order to get the Web App deployed and accessible correctly in Azure I found it easiest to use the Sample Azure NodeJS Hello World example from here. Download that sample and extract the contents to a new folder on your workstation. I created a new path on mine named …\NodeJS\nodejssite and dropped the sample in there so it looked like below.

After creating the folder structure and putting the sample in it, whilst in the sub-directory type:

code .

That will startup Visual Studio Code in the newly created folder with the starter sample.

Install Express for NodeJS

To that base sample site we’ll install Express. From the Terminal tab in the lower pane type:

npm install -g express-generator

Express App

With Express now on our machine, let's add the Express App to our new NodeJS site. Type express in the Terminal window.

express

Accept the prompt that the destination directory is not empty

This will create the folder structure for Express.

Now, to get all the files and modules for our site configured for our app, run npm install

Now type npm start in the terminal window to start our new site.

The NodeJS site will start. Open a Web Browser and go to http://localhost:3000 and you should see the Express empty site.

Navigate to views => index.jade and update the text like I have below.

Refresh your browser window and you should see the text updated.

In the terminal window press Ctrl + C to stop NodeJS.

Test Deploy to Azure

Now let’s do a test deploy of our shell site as an Azure WebApp.

Press Ctrl + Shift + P, or from the View menu select Command Palette.

Type Azure: Login 

This will generate a code and give you a link to open in your browser to log in

Paste in the code from the clipboard and select continue

Then log in with your account for the Tenant where you want to deploy the WebApp to. You'll then be authorized.

From the Command Palette type azure sub and choose Azure: List Azure Subscriptions and choose the subscription where you will create and deploy the WebApp

Now from the Command Palette type Azure Create a Web App (Simple).

Give the WebApp a name. This will become the WebApp Name, and the basis for the all the associated WebApp components. Use Create a Web App (Advanced) if you want to be more specific about the name of the App Resources etc.

If you watch the bottom VS Code Status bar you will see the Azure Tools extension create the new Resource Group, Web App and Web App Plan.

Login to the Azure Portal, select the new Web App.

Select Deployment Options and then Local Git Repository. Select OK.

Select Deployment credentials and provide a username and password. You’ll need this shortly to publish your site.

Click Overview. Copy the Git clone url.

Back in VS Code, select the GIT icon (under the magnifying glass) and from the top choose Initialize Repository.

Then in the terminal window type git remote add azure <git clone url> obtained from the step above.

Type Initial Commit as the message and click the tick icon in the Source Control menu bar.

Select the … (More Actions) menu in the Source Control pane and select Publish

Select azure as the remote target we setup earlier.

You’ll be prompted to authenticate. Use the account you created above in Deployment Credentials.

Back in the Azure Portal under the Web App under Deployment Options you will see the initial commit.

Click on Overview and you should see that it is running. Click on URL and the site will open in a new tab in your browser.
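For reference, the raw Git commands behind those steps look something like this (substitute the Git clone url you copied from the Portal):

git init
git remote add azure https://<user>@<webappname>.scm.azurewebsites.net:443/<webappname>.git
git add -A
git commit -m "Initial Commit"
git push azure master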

Updating our WebApp

Now, let’s make a change to our WebApp.

Back in VS Code, click on the files and folder icon in the top left corner, navigate to views => index.jade and update the title. Hit Ctrl + S (or select Save from the File menu). In the Terminal below type npm start to start our NodeJS site locally.

Check out the update locally. In a browser navigate to the local NodeJS site on localhost:3000. You’ll see the changed page.

Select the Git icon on the left menu, give the update some text, e.g. 'updated page text', and select the tick from the top menu.

Select the … (More Actions) menu and choose Push to publish the changes to our Azure WebApp.

Go back to your browser which was on the Azure WebApp URL and reload. Our change has been pushed and is reflected in the WebApp.

Summary

Very quickly and easily, using Visual Studio Code (with NodeJS and Git installed locally) we have:

  • Created an Azure WebApp
  • Created a base NodeJS site
  • Got a local NodeJS site we can develop
  • Published it to Azure

Now go create something awesome.

How to access Microsoft Identity Manager Hybrid Report data using PowerShell, the Graph API and OAuth2

Hybrid Reporting is a great little feature of Microsoft Identity Manager. A small agent installed on the MIM Sync Server will send reporting data to Azure for MIM SSPR and MIM Group activities. See how to install and configure it here.

But what if you want to get the reporting data without going to the Azure Portal and looking at the Audit Reports? Enter the Azure AD Reports and Events REST API, which is currently in preview. It took me a couple of cracks to get this working, because the documentation is a little vague, especially around accessing it via PowerShell and OAuth2. So I've written it up and hope it helps anyone else looking to go down this route.

Gotchas

Accessing the Reports via the API has a couple of caveats that I had to work through:

  • Having the correct permissions to access the report data. Pretty much everything you read tells you that you need to be a Global Admin. Once I had my OAuth tokens I messed around a little and was also able to get the following back from the API when purposely using an identity that didn't have the right permissions. The key piece is "Api request is not from global admin or security admin or security reader role". I authorized the WebApp using an account that is in the Security Reader role, and can successfully access the report data.

  • Reading the documentation here on MSDN, I incorrectly assumed each category was the report name. Only when I called "https://graph.windows.net//$metadata?api-version=beta" and looked at the list of reports did I notice each report name was plural. The three that I wanted to access (and report on) are obviously the MIM Hybrid Reports:
"Name":  "mimSsgmGroupActivityEvents",
"Name":  "mimSsprActivityEvents",
"Name":  "mimSsprRegistrationActivityEvents",

Here is the full list of Reports available as of 24 May 2017.

Name                                                 LicenseRequired
----                                                 ---------------
b2cAuthenticationCountSummary                        False
b2cMfaRequestCount                                   False
b2cMfaRequestEvent                                   False
b2cAuthenticationEvent                               False
b2cAuthenticationCount                               False
b2cMfaRequestCountSummary                            False
tenantUserCount                                      False
applicationUsageDetailEvents                         False
applicationUsageSummaryEvents                        True
b2cUserJourneySummaryEvents                          False
b2cUserJourneyEvents                                 False
cloudAppDiscoveryEvents                              False
mimSsgmGroupActivityEvents                           True
ssgmGroupActivityEvents                              True
mimSsprActivityEvents                                True
ssprActivityEvents                                   True
mimSsprRegistrationActivityEvents                    True
ssprRegistrationActivityEvents                       True
threatenedCredentials                                False
compromisedCredentials                               False
auditEvents                                          False
accountProvisioningEvents                            False
signInsFromUnknownSourcesEvents                      False
signInsFromIPAddressesWithSuspiciousActivityEvents   True
signInsFromMultipleGeographiesEvents                 False
signInsFromPossiblyInfectedDevicesEvents             True
irregularSignInActivityEvents                        True
allUsersWithAnomalousSignInActivityEvents            True
signInsAfterMultipleFailuresEvents                   False
applicationUsageSummary                              True
userActivitySummary                                  False
groupActivitySummary                                 True

How to Access the Reporting API using PowerShell

What you need to do is:

  • Register a WebApp
    • Assign a Reply URL of https://localhost
    • Assign it the Read directory data permission
  • Get an OAuth2 Authentication Code using an account that is either a Global Admin or in the Security Admin or Security Reader Azure roles
  • Use your Bearer and Refresh tokens to query for the reports you’re interested in

Register your WebApp

In the Azure Portal create a new Web app/API app and assign it https://localhost as the Reply URL. Record the Application ID for use in the PowerShell script.

Assign the Read Directory data permission as shown below

Obtain a key from the Keys option on your new Web App.  Record it for use in the PowerShell script.

Generate an Authentication Code, get a Bearer and Refresh Token

Update the following script, changing Lines 5 & 6 for the ApplicationID/ClientId and Client Secret for the WebApp you created above.

Run the script and you will be prompted to authenticate. Use an account in the tenant where you created the Web App that is a Global Admin or in the Security Admin or Security Reader Azure Roles. You will need to change the location where you want the refresh.token stored (line 18).

If you've done everything correctly you will have authenticated and got an AuthCode, which was then used to get your Authorization Tokens. The value of the $Authorization variable should look similar to this;

Now you can use the Refresh token to generate new Authorization Tokens when they time out, simply by calling the Get-NewTokens function included in the script above.
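The core of it is redeeming the AuthCode for tokens against the common OAuth2 token endpoint. As a minimal sketch (the ClientId, Client Secret and the token file path are placeholders for your own values, and $authCode is the code returned by the login prompt):

$clientId     = "<your WebApp ApplicationID/ClientId>"
$clientSecret = "<your WebApp key>"
$body = @{
    grant_type    = "authorization_code"
    client_id     = $clientId
    client_secret = $clientSecret
    code          = $authCode
    redirect_uri  = "https://localhost"
    resource      = "https://graph.windows.net"
}
# Redeem the AuthCode for the Bearer and Refresh tokens
$Authorization = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/common/oauth2/token" -Body $body
# Store the refresh token for re-use (the refresh.token location mentioned above)
$Authorization.refresh_token | Out-File "c:\temp\refresh.token"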

Querying the Reporting API

Now that you have the necessary prerequisites sorted you can query the Reporting API.

Here are a couple of simple queries to return some data to get you started. Update the script with the tenant name of your AzureAD. With the $Authorization values from the script above you can get data for the MIM Hybrid Reports.
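As a minimal sketch, a query for one of the MIM Hybrid Reports looks something like this (the tenant name is a placeholder and $Authorization comes from the token script above):

$tenant  = "<yourtenant>.onmicrosoft.com"
$headers = @{ Authorization = "$($Authorization.token_type) $($Authorization.access_token)" }
# Query the MIM SSPR activity report; swap the report name for the others listed earlier
$report  = Invoke-RestMethod -Method Get -Headers $headers -Uri "https://graph.windows.net/$tenant/reports/mimSsprActivityEvents?api-version=beta"
$report.value | Format-List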

Synchronizing Exchange Online/Office 365 User Profile Photos with FIM/MIM

Introduction

This is Part Two in the two-part blog post on managing users' profile photos with Microsoft FIM/MIM. Part one here detailed managing users' Azure AD/Active Directory profile photos. This post delves deeper into photos, specifically around Office 365, and the reasons why you may want to manage these via FIM/MIM.

Background

User profile photos should be simple to manage. But in a rapidly moving hybrid cloud world it can be a lot more complex than it needs to be. The best summary I’ve found of this evolving moving target is from Paul Ryan here.

Using Paul's sound advice, we too are advising our customers to let users manage their profile photo (within corporate guidelines) via Exchange Online. However, as described in this article, photos managed in on-premises Active Directory are synchronized to Azure AD and on to other Office365 services only once. And of course we want them to be consistent across AD DS, Azure AD, Exchange Online and all other Office365 services.

This post details synchronizing user profile photos from Exchange Online to MIM for further synchronization to other systems. The approach uses a combination of Azure GraphAPI and Exchange Remote PowerShell to manage Exchange Online User Profile Photos.

The following graphic depicts what the end goal is:

Current State

  • Users historically had a photo in Active Directory. DirSync/ADSync/AzureADConnect then synchronized that to Azure AD (and once only into Office 365).
  • Users update their photo in Office365 (via Exchange Online and Outlook Web Access)
    • the photo is synchronized across Office365 Services

Desired State

  • An extension of the Current State is the requirement to be able to take the image uploaded by users in Exchange Online, and synchronize it back to the on-premises AD, and any other relevant services that leverage a profile photo
  • Have AzureADConnect keep AzureAD consistent with the new photo obtained from Office365 that is synchronized to the OnPrem Active Directory
  • Sync the current photo to the MIM Portal

Synchronizing Office365 Profile Photos

Whilst Part one dealt with the AzureAD side of profile photos, as an extension to an existing AzureAD PowerShell Management Agent for FIM/MIM, I've separated out the Office365 side to streamline it and make it as efficient as possible. More on that later. As such I've created a new PowerShell Management Agent specifically for Office365 User Profile Photos.

I'm storing the Exchange Online photo in the MIM Metaverse as a binary object, just as I did for the AzureAD photo (but in a different attribute). I'm also storing a checksum of the photos (as I did for the AzureAD photo, but also in a different attribute) to make it easier to compare what is in Azure AD and Exchange Online, which can then be used to determine if changes have been made (e.g. a user updated their profile photo).

Photo Checksum

For generating the hash of the profile photos I'm using Get-Hash from the PowerShell Community Extensions. Whilst PowerShell has Get-FileHash, I don't want to write the profile photos out to disk and read them back in just to get the checksum; that slows the process down by 25%. You can get the checksum using a number of different methods and algorithms. Just be consistent and use the same method across both profile photos and you'll be comparing apples with apples and the comparison logic will work.
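If you'd rather avoid the PSCX dependency, an equivalent in-memory checksum using .NET's MD5 directly looks like this sketch ($photoBytes is assumed to already hold the photo as a byte array):

# Hash the photo bytes in memory; no round trip to disk required
$md5       = [System.Security.Cryptography.MD5]::Create()
$hashBytes = $md5.ComputeHash([byte[]]$photoBytes)
$checksum  = [System.BitConverter]::ToString($hashBytes) -replace '-',''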

Some notes on Photos and Exchange Online (and MFA)

This is where things went off on a number of tangents. Initially I tried accessing the photos using Exchange Online Remote PowerShell.

CAVEAT 1: If your Office365 tenant is enabled for Multi-Factor Authentication (which it should be), you will need to get the Exchange Online Remote PowerShell Module as detailed here. Chances are you won't have full Office365 Admin access though, so as long as the account you will be using is in the Recipient Management role you should be able to go to the Exchange Control Panel using a URL like https://outlook.office365.com/ecp/?realm=<tenantname>&wa=wsignin1.0, where tenantname is something like customer.com.au. From the Hybrid menu in the right-hand pane you will then be able to download the Microsoft.Online.CSE.PSModule.Client.application. I had to use Internet Explorer to download the file and get it installed successfully. Once installed, I used a few lines from this script here to load the function and start my RPS session from within PowerShell ISE during solution development.

CAVEAT 2: The EXO RPS MFA PS Function doesn’t allow you to pass it your account password. You can pass it the identity you want to use, but not the password. That makes scheduled process automation with it impossible.

CAVEAT 3: The RPS session exposes the Get-UserPhoto cmdlet, which is great. But the RPS session leverages the GraphAPI, and the RPS PS Module doesn't refresh its tokens, so if the import takes longer than 60 minutes then using this method you're a bit stuffed.

CAVEAT 4: Using the Get-UserPhoto cmdlet detailed above, the syncing of photos is slow. As in I was only getting ~4 profile photos per minute slow. This also goes back to the token refresh issue as for pretty much any environment of the size I deal with, this is too slow and will timeout.

CAVEAT 5: You can whitelist the IP address (or subnet) of your host so MFA is not required, using Contextual IP Addressing Whitelisting. At that point there isn't really a need to use the MFA-enabled PREVIEW EXO RPS function anyway. That said, I still needed to whitelist my MIM Sync Server(s) from MFA to allow integration into the Graph API. I configured just the single host. The whitelist takes CIDR format, so a single host looks like /32 (e.g. 11.2.33.4/32).

Performance Considerations

As I mentioned above,

  • using the Get-UserPhoto cmdlet was slow. ~4 per minute slow
  • using the GraphAPI into Exchange Online, looking at each user, determining if they had a photo and then downloading it, was also slow. Slow because at this customer only ~50% of their users have a photo on their mailbox. As such I was only able to retrieve ~145 photos in 25 minutes. *Note: all timings listed above were taken during development while actually outputting the images to disk to verify functionality.

Implemented Solution

After all my trial and error on this, here is my final approach and working solution;

  1. Use the Exchange Online Remote PowerShell (non-MFA version) to query and return a collection of all mailboxes with an image. *Note: add an exception for your MIM Sync host to the whitelisted hosts for MFA (if your Office365 tenant is enabled for MFA) so the process can be automated
  2. Use the Graph API to obtain those photos
    • with this I was able to retrieve ~1100 profile photos in ~17* minutes (after ~2 minutes to query and get the list of mailboxes with a profile photo)

Pre-requisites

There's a lot of info above, so let me summarize the pre-requisites:

  • The Granfeldt PowerShell MA
  • Whitelist your FIM/MIM Sync Server from MFA (if your Office 365 environment is enabled for MFA)
  • Add the account you will run the MA as (which will in turn connect to EXO via RPS) to the Recipient Management role
  • Create a WebApp for the PS MA to use to access users Profile Photos via the Graph API (fastest method)
  • Powershell Community Extensions to generate the image checksum

Creating the WebApp to access Office365 User Profile Photos

Go to your Azure Portal and select the Azure Active Directory blade from the Resource menu bar on the left. Then select App Registrations from the Manage section of the Azure Active Directory menu, and finally, from the top of the main pane, select "New Application Registration".

Give it a name and select Web app/API as the type of app. Make the sign-in URL https://localhost and then select Create.

Record the ApplicationID that you see in the Registered App Essentials window. You’ll need this soon.

Now select All Settings => Required Permissions. Select Read all users' basic profiles in addition to Sign in and read user profile. Select Save.

Under Required Permissions select Add, then 1 Select an API, choose Office 365 Exchange Online, and click Select.

Choose 2 Select Permissions and then select Read user profiles and Read all users’ basic profiles. Click Select.

Select Grant Permissions

From Settings select Keys, give your key a Description, choose a key lifetime and select Save. RECORD the key value. You’ll need this along with the WebApp ApplicationID/ClientID for the Import.ps1 script.

Using the information from your newly registered WebApp, we need to perform the first authentication (and authorization of the WebApp) to the Graph API. Take your ApplicationID, Key (Client Secret) and the account you will use on the Management Agent (and that you have assigned the Recipient Management role in Exchange Online) and run the script detailed in this post here. It will authenticate you to your new WebApp via the GraphAPI, after asking you to provide the account you will use on the MA and authorizing the permissions you selected when registering the app. It will also create a refresh.token file which we will give to the MA to automate our connection. The Authorization dialog looks like this.

Creating the Management Agent

Now we can create our Management Agent using the Granfeldt PowerShell Management Agent. If you haven't created one before, check out a post like this one, which further down shows the creation of a Granfeldt PSMA. Don't forget to provide blank export.ps1 and password.ps1 files in the directory where you place the PSMA scripts.

PowerShell Management Agent Schema.ps1
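A minimal Schema.ps1 for this MA, following the standard Granfeldt PSMA "AttributeName|Type" convention, looks something like this sketch (the choice of UserPrincipalName as the anchor is illustrative):

$obj = New-Object -Type PSCustomObject
# Anchor for the connector space object
$obj | Add-Member -Type NoteProperty -Name "Anchor-UserPrincipalName|String" -Value "user@domain.com"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "EXOUser"
# The photo itself and its checksum, flowed to the Metaverse attributes created below
$obj | Add-Member -Type NoteProperty -Name "EXOPhoto|Binary" -Value 0x10
$obj | Add-Member -Type NoteProperty -Name "EXOPhotoChecksum|String" -Value "checksum"
$obj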

PowerShell Management Agent Import.ps1

As detailed above the PSMA will leverage the WebApp to read users Exchange Profile Photos via the Graph API. The Import script also leverages Remote Powershell into Exchange Online (for reasons also detailed above). The account you run the Management Agent as will need to be added to the Recipient Management Role Group in order to use Remote PowerShell into Exchange Online and get the information required.

Take the Import.ps1 script below and update:

  • Update lines 11, 24 and 42 for the path to where you have put your PSMA. Mine is under the Extensions directory in a directory named EXOPhotos.
  • copy the refresh.token generated when authenticating and authorizing the WebApp earlier into the directory you specified in line 42 above.
  • Create a Debug directory under the directory you specified in lines 11,24 and 42 above so you can see what the MA is doing as you implement and debug it the first few times.
  • I’ve written the Import to use Paged Imports, so make sure you tick the Paged Imports checkbox on the configuration of the MA
  •  Update Lines 79 and 80 with your ApplicationID and Client Secret that you recorded when creating your WebApp


Running the Exchange User Profile Photos MA

Now that you have created the MA, select the EXOUser ObjectClass and the attributes defined in the schema. You should also create the EXOPhoto (as Binary) and EXOPhotoChecksum (as String) attributes in the Metaverse on the person ObjectType (assuming you are using the built-in person ObjectType).

Configure your flow rules to flow the EXOPhoto and EXOPhotoChecksum on the MA to their respective attributes in the MV.

Create a Stage Only run profile and run it. If you have done everything correctly you will see photos come into the Connector Space.

Looking at the Connector Space, I can see EXOPhoto and EXOPhotoChecksum have been imported.

After performing a Synchronization to get the data from the Connector Space into the Metaverse it is time to test the image that lands in the Metaverse. That is quick and easy via PowerShell and the Lithnet MIIS Automation PowerShell Module.

# Get the person object from the Metaverse (Lithnet MIIS Automation module)
$me = Get-MVObject -ObjectType person -Attribute accountName -Value "drobinson"
# Inspect the binary photo attribute
$me.Attributes.EXOPhoto.Values.ValueBinary
# Write the photo out to a file so it can be viewed
[System.Io.File]::WriteAllBytes("c:\temp\myOutlookphoto.jpg", $me.Attributes.EXOPhoto.Values.ValueBinary)

The file is output to the directory with the filename specified.

Opening the file reveals correctly my Profile Photo.

Summary

In Part one we got the AzureAD/Active Directory photo. In this post we got the Office365 photo.

Now that we have the images from Office365, we need to synchronize any photo updates to Active Directory (and in turn, via AADConnect, to Azure AD). Keep in mind the image size limits for Active Directory, and that we retrieved the largest photo available from Office365 when synchronizing the photo. There are a number of PowerShell modules for photo manipulation that will allow you to resize accordingly.


Getting started configuring the latest Microsoft Identity Manager IBM Notes Management Agent with Domino v9.x

Lotus Notes. My old nemesis as both a user and as an Administrator is back to haunt me again.

There's a reasonable amount written by others on the trials and tribulations of getting the FIM/MIM Notes MA configured and working. However, they are all referencing older versions of the MA and older versions of Domino. (If you are looking for details on the previous versions, check out Michael's great post here.) The info on permissions is still valid, so make sure you're on top of that.

The latest Notes MA (March 2017) is available from here and supports v9.x of Domino and v9.x of the Notes Client.

Here are a couple of the differences straight up:

  • The v9 Lotus Notes Client does not carry the v8 client's requirement of setting up the MA using the x86 architecture. Use "Process" as the MA Architecture as per the documentation
  • I managed to get the v9.0.0 and v9.0.1 FP4 clients to work. The latest is v9.0.1 FP8 available from here
  • There are two technical references from Microsoft for the Notes MA. This is the old version. After some hunting I located the documentation for the latest release here. Yes, the new version of the MA and the associated documentation only became available in mid-March 2017

Here are a couple of quick tips for getting up and running with the latest MIM Notes MA:

  • Install the v9.x Lotus Notes Client on the MIM Sync Server
    • Select Single User Install
  • Open the names.nsf and schema.nsf databases using the Notes ID you will be supplying to the Notes MA configuration

Enable Logging

Like anything, knowing what is going on helps diagnose issues. This is how I enabled event log logging for the latest IBM Notes Management Agent:

  • Edit the miiserver.exe.config file in the BIN folder. More than likely that will be in this path C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Bin
  • Add the following lines as shown in the graphic below

<source name="ConnectorsLog" switchValue="Information">
   <listeners>
     <add name="ConnectorsLogListener" 
     type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
     initializeData="ForefrontIdentityManager.ManagementAgent"
     traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
  </listeners>
</source>

Restart the Forefront Identity Manager Synchronization Service in services and when you run the Notes MA you will then see output in the Event Log under Forefront Identity Manager Management Agent.

Change the following line, substituting Error, Warning, Information or Verbose for what you want logged.

<source name="ConnectorsLog" switchValue="Information">

If you want it to log to the Application Log then change the initializeData value to whatever you want. I used IBM Domino MA;

<source name="ConnectorsLog" switchValue="Information">
  <listeners>
    <add name="ConnectorsLogListener" 
    type="System.Diagnostics.EventLogTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
    initializeData="IBM Domino MA"
    traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" />
  </listeners>
</source>

… restart the Forefront Identity Manager Synchronization Service in services and when you run the Notes MA the logged data will land in the Application Log.

If you’ve used the Notes MA before, this should get you up and running with the latest.

An Azure Timer Function App to retrieve files via FTP and Remote PowerShell

Introduction

In an age of Web Services and APIs, it's almost a forgotten world where FTP servers exist. However, most recently I've had to travel back in time and interact with an FTP server to get a set of files that are produced by other systems on a daily basis. These files are needed for some flat-file imports into Microsoft Identity Manager.

Getting files off an FTP server is pretty simple. But needing to do it across a number of different environments (Development, Staging and Production) meant I was looking for an easy approach that I could also replicate quickly across multiple environments. As I already had Remote PowerShell set up on my MIM Servers for other Azure Function Apps, I figured I'd use an Azure Function for obtaining the FTP files as well.

Overview

My PowerShell Timer Function App performs the following:

  • Starts a Remote PowerShell session to my MIM Sync Server
  • Imports the PSFTP PowerShell Module
  • Creates a local directory to put the files into
  • Connects to the FTP Server
  • Gets the files and puts them into the local directory
  • Ends the session

Pre-requisites

From the overview above there are a number of pre-requisites that other blog posts I've written detail nicely, including the steps involved to appropriately set up and configure them. So I'm going to link to those. Namely:

  • Configure your Function App for your timezone so the schedule is correct for when you want it to run. Check out the WEBSITE_TIME_ZONE note in this post.


  • You’ll need to configure your Server that you are going to put the files onto for Remote PowerShell. Follow the Enable Powershell Remoting on the FIM/MIM Sync Server section of this blogpost.
  • The credentials used to connect to the MIM Server are secured as detailed in the Using an Azure Function to query FIM/MIM Service section of this blog post.
  • Create a Timer PowerShell Function App. Follow the Creating your Azure App Service section of this post but choose a Timer Trigger PowerShell App.
    • I configured my Schedule for 1030 every day using the following CRON configuration
      0 30 10 * * *
  • On the Server you’ll be connecting to in order to run the FTP processes you’ll need to copy the PSFTP Module and files to the following directories. I unzipped the PSFTP files and copied the PSFTP folder and its contents to;
    • C:\Program Files\WindowsPowerShell\Modules
    • C:\Windows\System32\WindowsPowerShell\v1.0\Modules

Configuring the Timer Trigger Function App

With all the pre-requisites in place it’s time to configure the Timer Function App that you created in the pre-requisites.

The following settings are configured in the Function App Application Settings;

  • FTPServer (the server you will be connecting to, to retrieve files)
  • FTPUsername (username to connect to the FTP Server with)
  • FTPPassword (password for the username above)
  • FTPSourceDirectory (FTP directory to get the files from)
  • FTPTargetDirectory (the root directory under which the files will be put)

ApplicationSettings

You'll also need Application Settings for a Username and Password associated with a user that exists on the server that you'll be connecting to with Remote PowerShell. In my script below these Application Settings are MIMSyncCredUser and MIMSyncCredPassword.

Function App Script

Finally, here is a raw script. You'll need to add appropriate error handling for your environment. You'll also want to change lines 48 and 51 for the naming of the files you are looking to acquire, and line 59 for the server name you'll be executing the process on.
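Below is a minimal sketch of the approach (not the full script; the server name is a placeholder and the PSFTP calls are illustrative, so adjust file filters and paths to suit your environment):

# Read settings from the Function App Application Settings
$ftpServer = $env:FTPServer
$ftpUser   = $env:FTPUsername
$ftpPass   = $env:FTPPassword
$ftpSource = $env:FTPSourceDirectory
$ftpTarget = $env:FTPTargetDirectory

# Credentials for Remote PowerShell to the MIM Sync Server
$secPass = ConvertTo-SecureString $env:MIMSyncCredPassword -AsPlainText -Force
$creds   = New-Object System.Management.Automation.PSCredential ($env:MIMSyncCredUser, $secPass)

# "mimsyncserver.yourdomain.com" is a placeholder for your MIM Sync host
$session = New-PSSession -ComputerName "mimsyncserver.yourdomain.com" -Credential $creds

Invoke-Command -Session $session -ScriptBlock {
    Import-Module PSFTP

    # Create a dated local directory for today's files
    $target = Join-Path $using:ftpTarget (Get-Date -Format yyyyMMdd)
    New-Item -ItemType Directory -Path $target -Force | Out-Null

    # Connect to the FTP Server and pull the files down
    $ftpSec  = ConvertTo-SecureString $using:ftpPass -AsPlainText -Force
    $ftpCred = New-Object System.Management.Automation.PSCredential ($using:ftpUser, $ftpSec)
    Set-FTPConnection -Server $using:ftpServer -Credentials $ftpCred -Session FTP | Out-Null
    foreach ($file in (Get-FTPChildItem -Session FTP -Path $using:ftpSource)) {
        # Filter on $file.Name here for the specific extracts you need
        Get-FTPItem -Session FTP -Path $file.FullName -LocalPath $target | Out-Null
    }
}

Remove-PSSession $session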

Summary

A pretty quick and simple little Azure Function App that will run each day and obtain daily/nightly extracts from an FTP Server. Cleanup of the resulting folders and files I’m doing with other on-box processes.

This post is cross-blogged on both the Kloud Blog and Darren’s Blog.

Leveraging the Microsoft Graph API with PowerShell and OAuth 2.0

Background

Microsoft Graph is the evolution of APIs into Microsoft Cloud Services. For me, not being a developer, a key difference is interacting with the Graph API using OAuth 2.0 via PowerShell. Through a number of my previous posts I've interacted with the Graph API using client libraries such as the Microsoft.IdentityModel.Clients.ActiveDirectory library. This post details using PowerShell to talk directly to the Graph API, managing Authentication and Authorization using OAuth 2.0 and an Azure WebApp.

Leveraging the Graph API opens up access to the continually evolving Azure services as shown in the graphic below. Source graph.microsoft.io

Getting started with the Graph API, PowerShell and OAuth 2.0

The key difference between using a client library and going direct is you need to register and configure an Azure WebApp. It is super simple to do. Jump on over to the Office 365 App Registration Tool here. Sign in with an account associated with the Azure Tenant you are going to interact with. Depending on what you’re doing you’ll need to select the appropriate access.

Here are the settings I selected for access to user profiles. Give the WebApp a name; this is the name that you'll see in the OAuth Authorization step later on. I've highlighted the other key settings. Don't worry about the SignIn and Redirect URLs other than configuring HTTPS, as we'll be using PowerShell to access the WebApp.

Once you’ve registered the WebApp successfully you’ll get a Client ID and Client Secret. RECORD/WRITE THESE DOWN NOW as you won’t get access to your Client Secret again.

If you want to change what your WebApp has access to after creating it you can access it via the Classic Azure Portal. Select your Active Directory => Select Applications => Select the WebApp you created earlier  => Select Configure => (scroll to the bottom) Select Add Application. Depending on what you have in your subscription you can add access to your services as shown below.

Authenticating & Authorizing

In order to access the Graph API we need to get our Authorization Code. We only need to do this the first time.

This little script (modify with your Client ID and your Client Secret obtained earlier when we registered our WebApp) will prompt you to authenticate.

Authenticate using the account associated with the Web App you registered in the previous step.

You’ll be requested to authorize your application.

You will then have your AuthCode.

One thing to note above is admin_consent. This is from the URL passed to get the Authorization Code. &prompt=admin_consent grants Admin Consent for all entities configured on the WebApp, rather than just access for a single user.
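As a sketch, building and launching that authorization URL looks something like this (the Client ID is a placeholder; after signing in and authorizing, copy the ?code= value from the address bar):

Add-Type -AssemblyName System.Web
$clientId    = "<your Client ID>"
$redirectUri = [System.Web.HttpUtility]::UrlEncode("https://localhost")
$resource    = [System.Web.HttpUtility]::UrlEncode("https://graph.microsoft.com")
# Request an Authorization Code, with admin_consent as discussed above
$authUrl = "https://login.microsoftonline.com/common/oauth2/authorize?response_type=code" +
           "&client_id=$clientId&redirect_uri=$redirectUri&resource=$resource&prompt=admin_consent"
Start-Process $authUrl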

Accessing the Graph API with OAuth 2.0 Access Token

Using your ClientID, ClientSecret and AuthCode you can now get your access token. I tripped up here and was getting Invoke-RestMethod : {"error":"invalid_client","error_description":"AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided." Tracing back my steps and looking at my Client Secret, I noticed the special characters in it that I hadn't URL encoded. After doing that I was successfully able to get my access token.
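Here's a sketch of that token request with the Client Secret URL encoded (Client ID, Client Secret and AuthCode are your own values):

Add-Type -AssemblyName System.Web
$encodedSecret = [System.Web.HttpUtility]::UrlEncode($clientSecret)
$body = "grant_type=authorization_code&client_id=$clientId" +
        "&client_secret=$encodedSecret&code=$authCode" +
        "&redirect_uri=$([System.Web.HttpUtility]::UrlEncode('https://localhost'))" +
        "&resource=$([System.Web.HttpUtility]::UrlEncode('https://graph.microsoft.com'))"
# Redeem the AuthCode for the Access (Bearer) and Refresh tokens
$Authorization = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/common/oauth2/token" -Body $body -ContentType "application/x-www-form-urlencoded"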

Looking at our AuthZ we can see that the Scope is what was selected when registering the WebApp.

Now using the Access Token I can query the Graph API. Here is getting my AAD Object.
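As a sketch, that call looks like this ($Authorization being the token response from above):

# Query Microsoft Graph for the signed-in user's object
$headers = @{ Authorization = "Bearer $($Authorization.access_token)" }
Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/me" -Headers $headers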

Summary

In my case I can now access all users via the API. Here is what's available. Using the Access Token and modifying the Invoke-RestMethod URI and Method (including -Body if you are doing a Post/Patch action), you are ready to rock and roll, all via PowerShell.

Your Access Token is valid for an hour. Before then you will need to refresh it. Just run the $Authorization = Invoke-RestMethod ….. line again. 

Follow Darren on Twitter @darrenjrobinson