Using AutoRest for PowerShell to generate PowerShell Modules

AutoRest for PowerShell

Recently the Beta of the AutoRest for PowerShell Generator was made available. At the recent Microsoft MVP Summit in Seattle, Garrett Serack gave those who were interested a one-hour corridor session on getting started with AutoRest for PowerShell.

AutoRest is a tool that generates client libraries for accessing RESTful web services. Microsoft is moving towards using AutoRest to generate SDKs for its APIs in each of the standard languages it provides SDKs for. In addition, the AutoRest for PowerShell generator aims to automate the generation of PowerShell modules for Azure APIs.

In this post I’ll give an intro to getting started with AutoRest to generate PowerShell modules, first using an AutoRest example and then an Azure Function based API.

AutoRest for PowerShell Prerequisites

AutoRest is designed to be cross-platform. As such its dependencies are NodeJS, PowerShell Core and Dotnet Core.

AutoRest for PowerShell Installation

Installation is done via the command line once you have NodeJS installed and configured.

npm install -g autorest@beta

If you have previously installed AutoRest, you will want to update to the latest version, as updates and fixes are coming regularly.

autorest --reset

AutoRest Update.PNG

Building your first AutoRest for PowerShell Module

The AutoRest documentation has a couple of examples that are worth using initially to get to know the process and the expected output from AutoRest.

From a PowerShell Core command prompt (type pwsh in a command window), make the following Invoke-WebRequest (iwr) call to retrieve the XKCD Swagger/OpenAPI file. I pre-created a sub-directory named XKCD under the directory where I ran the following command.

iwr https://raw.githubusercontent.com/Azure/autorest/master/docs/powershell/samples/xkcd/xkcd.yaml -outfile ./xkcd/xkcd.yaml

Download XKCD Yaml.PNG

We can now start the process to generate the PowerShell module. Change directory to the directory where you put the xkcd.yaml file and run the following command.

autorest --powershell --input-file:./xkcd.yaml

AutoRest Generate Libs.PNG

Following a successful run, a generated folder will be created. We can now build the PowerShell module. We can add the -test flag so that at the end of generating the module, Pester will be used to test it. First run PowerShell Core (pwsh), then build-module:

pwsh 
./generated/build-module.ps1 -test

AutoRest - Build Module.PNG

With the module built and the tests passed we can run it. We load the module and list the cmdlets that have been built for the API.

.\generated\run-module.ps1 xkcd.psm1
get-command -module xkcd

AutoRest - Load Module.PNG

Using the Get-XkcdComicForToday cmdlet we can query the XKCD API and get the Comic of the Day.

Get-XkcdComicForToday | fl

AutoRest - Comic of the Day.PNG

Taking it one step further, we can download the comic of the day and open it with this PowerShell one-liner.

invoke-webrequest (Get-XkcdComicForToday).img -outfile image.png ; & ./image.png

Download and Display XKCD Comic of the Day.PNG

Let’s Create a Simple API – BOFH Excuse Generator

I created an HTTP Trigger based PowerShell API Function that takes GET requests and returns a BOFH – Bastard Operator from Hell (warning: link has audio) style excuse. For those who haven’t been in the industry for 20+ years or aren’t familiar with the Bastard Operator from Hell, it is essentially a set of (semi) fictional transcripts of user interactions from a Service Desk Operator’s (from Hell) perspective, set in a University environment. Part of the schtick is an Excuse of the Day. My API, when queried, returns a semi-plausible Excuse of the Day.

Here is my HTTP Trigger PowerShell Azure Function (v1): a simple random excuse generator.
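The Function code was embedded in the post. As a minimal sketch, a v1 PowerShell HTTP trigger along these lines does the job ($req and $res are the default v1 binding names, and the excuse list is illustrative):

# HTTP Trigger PowerShell Azure Function (v1), a minimal sketch
# $req and $res are the default input/output binding paths in Functions v1
$excuses = @(
    'solar flares interfering with the network card',
    'static from your nylon underwear',
    'the router has run out of hamsters',
    'electromagnetic interference from satellite debris',
    'someone unplugged the server to charge their phone'
)

# Pick a random excuse and return it as the HTTP response body
$excuse = Get-Random -InputObject $excuses
Out-File -Encoding Ascii -FilePath $res -InputObject $excuse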

I configured the Function for anonymous access, GET operations only, and the route to be excuse.

Testing the Function from my local machine showed it was working and returning an excuse.

Now it’s time to generate the OpenAPI spec. Select the Azure Function App => Platform Features => API definition.

Select Generate API definition template and the basics of the OpenAPI spec for the new API will be generated. I then updated it with the application/json output and what the API returns, as shown in the highlighted sections below. Select Save.

Now we can test it out and we see that we have success.

Taking the OpenAPI spec (YAML format file) for our BOFH Excuse API, we go through the same steps as before to generate the PowerShell module using AutoRest. Running the freshly baked cmdlet from the BOFH PowerShell module returns a BOFH excuse. Awesome.

Summary

Using AutoRest it is possible to generate PowerShell modules for our APIs. Key to successful generation is the definition of the OpenAPI spec for the API. As my example above shows, if you don’t define what the API call returns, the resulting cmdlet will query the API but won’t return the result.

Empowering your long running PowerShell Automation Scripts with SMS/Text Notifications

18 months ago I wrote this post that detailed integrating Push notifications into your scripts. That still works great, but it requires the associated Push Bullet application installed in your browser or on your devices. More recently I wrote about using Burnt Toast for progress dialogs for long running scripts. That too is great, if you are present on the host running those scripts. But what if you want something a little more native and ubiquitous: notifications for those autonomous or long running scripts where you aren’t active on the host running them, without the hassle of another application specific to that purpose? How about SMS/Text notifications?

This post details how to use Twilio (a virtual telco) from your PowerShell scripts to send SMS/Text alerts, so you can receive notifications like this;

Everything is on Fire.PNG

Twilio

Twilio is a virtual telco (amongst other products) that allows you to use services such as the mobile network from your application. For their SMS/Text service they even give you a credit to get started. Sending messages from Australia costs AU$0.0550 per message.

Sign up for a Twilio trial account, then enable and verify your account. Take note of the following items, as you will need them for your script;

  • Service mobile number (initially a Trial Number)
  • Account SID
  • Auth Token

Using these pieces of information, via the Twilio API we can send SMS/Text notifications from PowerShell scripts (well, any language, but I’m showing you how with PowerShell). You can get your Service Number, Account SID and Auth Token from the Dashboard after registering for a trial account.

Trial Account Dashboard.PNG

To enable SMS/TXT go to the Twilio Programmable SMS Dashboard here and create a New Messaging Service.

For my use (notifications from scripts) I selected Notifications, Outbound Only.

Create New Messaging Service.PNG

Once created you will see the following on the Programmable SMS Dashboard. That’s it, you’ve activated SMS/TXT in Trial Mode.

SMS Dashboard.PNG

The Script

Here is a Send-TextNotification PowerShell function (a sketch of such a function follows the line notes below) that takes;

  • the Mobile Number to send the notification to
  • the Mobile Number the message is coming from
  • the Message to send
  • AuthN info (Account SID and Auth Token)

Line 46 sends a SMS/TXT notification using my Send-TextNotification script. Update;

  • Line 32 for your Twilio Account SID
  • Line 34 for your Twilio Account Token
  • Line 40 for the mobile number to send the message to
  • Line 42 for the authorised number you verified to send from
  • Line 44 for your notification message
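The original function was embedded in the post. Here is a minimal sketch of an equivalent, using the standard Twilio Messages REST endpoint; all parameter values shown are placeholders:

# Minimal sketch of a Twilio SMS notification function
function Send-TextNotification {
    param(
        [Parameter(Mandatory)][string]$AccountSid,
        [Parameter(Mandatory)][string]$AuthToken,
        [Parameter(Mandatory)][string]$From,    # your Twilio service number
        [Parameter(Mandatory)][string]$To,      # the verified number to notify
        [Parameter(Mandatory)][string]$Message
    )
    # Twilio uses Basic Authentication with AccountSid:AuthToken
    $pair = "$($AccountSid):$($AuthToken)"
    $encodedAuth = [Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
    $uri = "https://api.twilio.com/2010-04-01/Accounts/$($AccountSid)/Messages.json"
    # Invoke-RestMethod posts a hashtable body as form-encoded, which is what Twilio expects
    $body = @{From = $From; To = $To; Body = $Message}
    Invoke-RestMethod -Method Post -Uri $uri -Headers @{Authorization = "Basic $($encodedAuth)"} -Body $body
}

# Example usage (placeholder values)
Send-TextNotification -AccountSid 'ACxxxxxxxx' -AuthToken 'yourAuthToken' -From '+61400000000' -To '+61411111111' -Message 'Everything is on fire!'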

Summary

Using the Twilio service, my small function and a few parameters, you can quickly add SMS/Text notifications to your PowerShell scripts. Once you have it up and running I encourage you to upgrade your account and pay the few dollars for use of the service, which also removes the “Sent from your Twilio trial account” text from the messages. Twilio also has a WhatsApp Notification Service.


Recovering from USB device driver is still in memory / USB Composite Device has error (Code 38)

Something you don’t often think about is how many devices you plug into your computer … until you plug in a device and it doesn’t show up or interact as expected. This post details how I recovered from such a situation, so I can find it next time, and hopefully it helps others recover quickly rather than going through the numerous dead-ends I did to fix the problem.

My Situation

I’ve been evaluating a number of different YubiKeys recently and out of the blue they stopped being recognised. Below is a message from the YubiKey Manager indicating that there is no device inserted (when in actual fact there is).

Insert YubiKey

To prove the point, plugging in two YubiKeys informs me I should only have one at a time.

Insert only one YubiKey

Going into Device Manager and selecting Show hidden devices reveals the plethora of USB devices I’ve previously inserted into my computer.

Yubikey Device Manager Properties

Looking into the USB Composite Devices with the warning symbols reveals;

Windows cannot load the device driver for this hardware because a previous instance of the device driver is still in memory. (Code 38)

USB Composite Device Manager

The Fix

After trying numerous different avenues, this is how I repaired the issue and got up and running again.

Navigate to Windows => Settings => Update and Security => Troubleshoot => Run the troubleshooter

Hardware Troubleshooter.PNG

The troubleshooter will detect the “USB Composite Device has error” condition for the problematic devices. Select one and continue.

USB Composite Devices has error

Select Apply this fix

Apply Fix

Repeat for the other “USB Composite Device has error” entries, then restart the computer.

Restart PC

After Restart you should be back in action like I was.

YubiKey Manager - Success.PNG

Hope this helps someone else, and will help me find it next time too.


Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 3

Docker Virtual PowerShell Desktop Env to Internet - SaaS

This is the third and likely last post in this series. In Part 1 I introduced the capability to have virtual PowerShell environments using Docker and the full Windows 10 / Server 2019 Build 1809 container images. In Part 2 I detailed remotely accessing the Azure RM Windows 10 / Server 2019 host that contains the Docker container with our full Windows 1809 environment (and therefore PowerShell Desktop).

In this post I’ll detail building a Docker image based off the Windows 1809 container image. The resulting Docker image will;

  • contain a base PowerShell environment with the necessary PowerShell modules for performing common Azure based administrative activities
  • be usable for administrative functions
  • be able to be started using SSH from other SSH clients (e.g. Putty)

Building a Docker Image based off Windows 1809 Container

Using the capabilities I showed in Part 2 of this series, I’m going to build the image from Azure Cloud Shell, which I’ll use to SSH into the Windows 10 Azure RM hosted Virtual Machine.

Having logged in to Azure Cloud Shell and connected to my Azure VM via SSH as detailed in Part 2, I created a new CMD file named NewImage1809.cmd containing the following command.

docker run -it --name psNov2018 mcr.microsoft.com/windows:1809 powershell

New Image Start

Running the commands hostname and dir "c:\program files\WindowsPowerShell\Modules" shows that we are inside the Windows 1809 Container Image.

Hostname and Existing Modules.PNG

I used the Cloud Shell Upload option to upload a New Env Setup.ps1 script that contains the PowerShell commands to install a bunch of PowerShell Modules. Using the Cloud Shell Editor I opened the file.

New Env Setup Script

Here is that series of commands.
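The series of commands was embedded in the post. As a sketch, it would look something like the following; the module list here is illustrative of common Azure administration modules of the time, not necessarily the exact list:

# Install the NuGet package provider so modules can install non-interactively
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force

# Install a set of PowerShell modules for common Azure administrative tasks
Install-Module -Name AzureRM -Force
Install-Module -Name AzureAD -Force
Install-Module -Name MSOnline -Force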

I can then select the block of commands, paste it into the PowerShell terminal console below, and hit Enter to execute them.

Excute

One by one the modules are installed.

Modules Installed

When completed enter exit.

Exit after Module Install

Now we can stop our Container Image.

docker stop psNov2018

Docker stop.PNG

and commit our changes to a new image named ‘powershell-env-image-nov18’

docker commit psNov2018 powershell-env-image-nov18

Docker Commit.PNG

Listing the docker images with

docker image ls

shows our new Image.

Docker Image List.PNG

We can now run our new image with

docker run -it powershell-env-image-nov18:latest powershell

Run Image.PNG

We can see the modules we installed previously.

Image Module List.PNG

and we can import them.

Import Modules.PNG

Putty to PowerShell Virtual Environment

As good as Azure Cloud Shell is, and as convenient as it is for quick tasks and execution, you’re going to want to use an SSH client. I’ll show Putty, but you can use whatever your favourite client is. To connect to the environment I;

  • using the Putty Key Generator, loaded the OpenSSH private key generated in Part 1 and saved it in Putty ppk format
  • using Putty Pageant, used the ppk formatted key for my SSH session to the Windows 10 1809 host
    • Note: WinSCP can also utilise the ppk key for authentication, which makes getting files onto the host very easy
  • if you find you don’t automatically get your elevated session that allows you to start the Docker Container/Image, create the following registry key on the Windows 10/Server 2019 host and reconnect: a DWORD (32-bit) value of 1 for LocalAccountTokenFilterPolicy (a PowerShell equivalent is shown after the registry snippet)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001

I can then connect with Putty using my key, and run the DockerPS.cmd file I showed in Part 2 which outputs the version of PowerShell.

Summary

In this post I’ve shown how to customise a Windows 1809 Container for a Virtual PowerShell environment, along with using client based SSH and SCP tools to connect to and manage the base Host.

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 2

27 Nov 18: Part 3 is available here. It details customizing an image and accessing it via other SSH clients with elevated access.

In Part-1 of this series posted yesterday I showed that with Windows 10/Windows Server 2019 we can now have isolated virtual environments for PowerShell Desktop in Azure through containerization.

In this post I’ll show how I plan to leverage this capability from a mobility perspective. What we need to do first is enable elevated (privileged) access to our VM. My Client will be Azure Cloud Shell. My target/host is the Windows 10 1809 Virtual Machine I deployed in the last post.

Enabling SSH Key Based Privileged Authentication to our Windows 10 VM

To set up key based access (over password access), which is required for elevated access, we need to configure the SSH Server and our client.

SSH Server

On the Windows 10 Azure VM where we installed OpenSSH as per the first post here, we need to start the SSH-Agent. By default it is set to Disabled, so change the Startup Type, start it, and test it by adding the local user’s key to the agent. Using an elevated PowerShell session on the Azure Windows VM run;

Set-Service ssh-agent -StartupType Automatic
Start-Service ssh-agent
cd ~\
ssh-add .\.ssh\id_rsa

Add SSH Key to SSH-Agent on Server.PNG

SSH Client

As I’m using Azure Cloud Shell as my client, I started a Cloud Shell Session in my browser.

  • In Azure Cloud Shell generate an SSH key using ssh-keygen
    • Remember your passphrase as this will be required for accessing the Windows 10 Azure VM

Client SSH Keygen.PNG

  • Copy the key to the Windows 10 Azure VM
    • Run the command below (after changing it for your username and Windows VM IP Address) and provide your password to copy up the file
cd ~/
scp ./.ssh/id_rsa.pub username@Win10ServerIPAddress:C:\Users\userprofilename\.ssh\authorized_keys

Copy Public Key from Client to Server.PNG

  • On the Server, if C:\ProgramData\ssh\administrators_authorized_keys exists, add the public key that you copied into your home folder above into it. If C:\ProgramData\ssh\administrators_authorized_keys doesn’t exist, then copy the authorized_keys file from your .ssh home directory (e.g. c:\users\darrenjrobinson\.ssh) to C:\ProgramData\ssh\administrators_authorized_keys
  • Edit the permissions on the administrators_authorized_keys file.
    • Right-click the file => Properties => Security => Advanced => Disable Inheritance => Choose “Convert inherited permissions into explicit permissions on this object”
    • Remove Authenticated Users so that only System and Administrators remain, as per the screenshot below. Then select Apply and then OK.

Administrators Authorized Keys.PNG

Testing SSH with Key Access

From Azure Cloud Shell, SSH to your Windows 10 host;

ssh username@ipaddress

SSH Key Access.PNG

You will be prompted for the passphrase you gave when you generated the SSH key. Enter that and you will be authenticated using SSH to the Windows 10 VM.

SSH to Windows 10.PNG

Docker Access from Azure Cloud Shell in Browser

Now that we have Privileged Access to our Windows 10 VM, let’s try running a Windows 10 1809 Container and executing a PowerShell command to query the version of PowerShell available.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

Run Docker.PNG

Wait a few seconds (maybe longer depending on the spec of your VM) and

PowerShell Desktop via Docker.PNG

Fantastic, we have a Container with PowerShell Desktop that we have accessed via Cloud Shell in a Browser.

Docker Access from Azure Cloud Shell in iOS Azure App

Using the Azure iOS App on my iPhone I started a Cloud Shell session and changed to my home directory (cd ~\) where I had put a file named Connect-Win10.ps1 which contains

ssh username@ipaddressOfWin10Host

IMG-8441

I executed it and it prompted me for the passphrase for my SSH Key which I entered

IMG-8444

and I was then SSH’d into the Windows 10 VM.

IMG-8445

I did a dir d* and saw the DockerPS.cmd file I’d previously created. It contains the following command.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

IMG-8446

Running that file

IMG-8447

starts the Docker Windows 1809 Container with the PowerShell command

IMG-8448

and I can see from my phone that I have access to a PowerShell Desktop via Azure Cloud Shell and Docker, from inside a Windows 10 VM based in Azure.

IMG-8449

Summary

This post has demonstrated that it is possible to get an elevated privileged session into a Windows 10 host using SSH, from which Docker Containers can be orchestrated and executed. By doing this from Azure Cloud Shell, it means that I can essentially login to a browser or app from anywhere in the world and access my Virtual PowerShell environments that in turn will allow world domination. Muwahahahah.

Got thoughts or feedback on this? Twitter || Blog

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 1

22 Nov 18: Part 2 is available here. It details accessing the Docker Image via Azure Cloud Shell / SSH.
27 Nov 18: Part 3 is available here. It details customizing an image and accessing it via other SSH clients with elevated access.

PowerShell Desktop Virtual Environments

If you’ve been working with PowerShell for any length of time you know that through its flexibility there can come challenges when using disparate PowerShell Modules and often their version dependencies. This isn’t just a PowerShell thing; Python can also trip you up in a similar manner.

Python however has Virtual Environments (virtualenv) capabilities, which provide functionality to create an environment that contains all the necessary binaries required for the packages/libraries that a Python project would need. I’ve found this very useful and I’ve wondered why I couldn’t do the same for PowerShell Desktop (not Core). PowerShell Desktop, PowerShell Core?

PowerShell Desktop vs PowerShell Core

As of August 2016 there are two PowerShell versions;

  • PowerShell Desktop
    • PowerShell 5.1 that runs on Windows and on top of the full .NET Framework stack
  • PowerShell Core
    • PowerShell Core 6.x that is cross platform (Windows, MacOSX, Linux)
      • Doesn’t run on the full .NET Framework

If you are a Windows/Directory Services admin, the likelihood of many of the PowerShell modules you use running on PowerShell Core is slim. That’s because a lot of the modules you use require the full .NET Framework, and that isn’t available in PowerShell Core.

A Virtual PowerShell Desktop Env? Why is this only possible now?

In July this year Microsoft started providing Windows Container Images for the Insider releases (over and above the Nano and Core OS builds). This was great, but it meant you needed to be on the Insider builds and were restricted to environments on physical hardware or VMs migrated to Azure, as there wasn’t an Azure Marketplace OS version (Windows 10 or Server 2019 Preview) that met the minimum host requirements for the Insider container images.

We’ve had to wait until Build 1809 became available in the Azure Marketplace which it did at the end of last week (w/e 18 November 2018). The Windows Container Version History shows that there was no 1803 Windows Image. But that’s all bygones now, as 1809 is finally here.

PowerShell Desktop Virtual Environments through Nested Virtualization

The screenshot below on first glance just looks like any command window in a virtual machine. But look a little closer;

  • Remote Desktop Session to an Azure Windows 10 1809 Virtual Machine (host.region.cloudapp.azure.com)
  • Docker Run Windows 1809 PowerShell $psversiontable
    • PowerShell Desktop 5.1 via Docker inside a Virtual Machine in Azure
      • BOOM!!

PowerShell Desktop Virt Env Nested Virtualization.PNG

Ok, so that is a single Docker Container with a full Windows 10 1809 environment running inside a Windows 10 Virtual Machine. But that means we can also add more containers and have multiple isolated PowerShell environments. Something like ….

Nested Virtual PowerShell Desktop Env.png

Wait, what, how? – The Overview

The high-level process is;

  • Provision a Windows 10 Virtual Machine (Build 1809 or later).
    • I recommend deploying it in Azure, but you could do it in other virtualization environments that support Nested Virtualization
    • NOTE: As I write this Windows Server 2019 Build 1809 hasn’t hit the Azure Marketplace. When it does, as it has a common code-base it should work exactly the same.
  • Enable the OpenSSH Feature (I’ll be using this a little in this post but more in a future post)
  • Enable the Containers and Hyper-V Features
  • Install and configure Docker
  • Pull the Windows Build 1809 Container Image

Windows 10 Build 1809 Virtual Machine

I’m not going to give step-by-step details for deploying a Windows VM in Azure. If you’re looking to set up virtual PowerShell Desktop environments with Docker, you should be able to deploy a Windows VM. That said, you need to choose a VM size and version that supports Nested Virtualization; the Azure RM Dv3 and Ev3 Series VMs do. If you get an error similar to this when running a Docker image, change your VM series to Dv3. I went with;

  • The Azure Marketplace has an image for Windows 10 Build 1809. Search for Windows 10 Pro, Version 1809
    • In order to run this VM as economically as practical I chose the following size and configuration for my VM initially
      • Standard D2_v3 (2 vCPUs, 8 GB memory)
      • HDD over SSD
      • Un-managed disks
    • Enable SSH and RDP in the NSG configuration
      • initially we’ll need RDP to connect to the workstation
      • moving forward we’ll be using SSH

OpenSSH Server

The OpenSSH Client and Server have been available for Windows for a while. Build 1809 though has streamlined the install process considerably. The base install and setup is now just a couple of commands away. The commands below will install the latest version of OpenSSH Server via PowerShell;

# Find OpenSSH Server
$openSSH = Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# Install OpenSSH Server
$sshServer = $openSSH | Select-Object name | Where-Object {$_.name -like "OpenSSH.Server*"}
$sshServer

Add-WindowsCapability -Online -Name $sshServer.Name

which when executed via VSCode looks like;

Install OpenSSH Server on Windows 10.PNG

By default the SSH Server service is configured for Manual startup. To configure it for Automatic Startup use the Set-Service cmdlet.

# Set SSH Server for Auto Startup
Get-Service sshd
Set-Service sshd -StartupType Automatic

ssh Server Startup Automatic.PNG

Finally we need to increase the ClientAliveInterval setting in the sshd_config configuration file located in the %programdata%\ssh directory. I’ve made mine 3600 seconds (1 hour).
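If you’d rather script that change, a sketch from an elevated PowerShell session (assuming the default config location) would be:

# Set ClientAliveInterval to 3600 seconds in sshd_config, then restart sshd
$sshdConfig = "$($env:programdata)\ssh\sshd_config"
(Get-Content $sshdConfig) -replace '^#?ClientAliveInterval.*', 'ClientAliveInterval 3600' | Set-Content $sshdConfig
Restart-Service sshd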

sshd ClientAliveInterval.PNG

Windows Containers / Docker Dependencies

# Install Containers / Docker Dependencies
Enable-WindowsOptionalFeature -Online -FeatureName containers -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Then restart the computer

Install Docker

Head on over to Docker and login (or create an account if you don’t already have one). Get Docker CE for Windows. I’m running 18.06.1-ce-win73.

Download and Install Docker.PNG

As we want a full Windows environment for PowerShell (not PowerShell Core on Linux) select “Use Windows containers” when installing Docker.

Use Windows Containers.PNG

At the end of the Docker install there is a reboot required.

Docker Install Complete.PNG

Get the Windows 1809 Base Container Image

We’re almost there. We need to get the recently released full Windows Image that will be the basis for our containers that will allow us to run full PowerShell environments. Don’t be confused by the Nano and Core images that have been available for quite some time. This is the FULL WINDOWS Build 1809 IMAGE.

As future Windows updates increment the version, the version you want to pull needs to be no greater than that of the host it is running on. Unlike the Insider images, the release versions follow the Release Number, not the Build Number. Looking at the repository we can see that the image name is 1809, whereas its Build Number is 10.0.17763.134.

Docker Windows Image Registry.PNG

With the workstation restarted, we SSH into it and pull the Windows 1809 Docker image. I’ve given my Windows 10 VM a DNS name so I don’t need to figure out the IP address each time I start it up. From a Windows command prompt, to access your new VM (via IP address) use;

ssh username@IPAddressofWin10VM

Once we have a console on our Windows 10 VM we can pull the Windows 10 Docker Image.

docker pull mcr.microsoft.com/windows:1809

The image will be retrieved.

Pull Windows 1809 Base Image.PNG

After pulling the image it will be extracted. Depending on the spec of your VM this may take 10-20 minutes.

Extracting Windows Base Docker Image.PNG

After Extraction we have our base Container Image.

Completed Docker Image.PNG

In order to create a container from the command console via SSH we need to be elevated. I’ll cover that in the next post. So to validate we are able to create a container based on the full Windows 10 1809 image, RDP into the Windows 10 VM and open an elevated command prompt. Then type the command;

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

which will start a container using the Windows 10 1809 Image and run PowerShell with the command $PSVersionTable that will return the version of PowerShell.

PowerShell Desktop Virt Env Nested Virtualization

Summary

As you can see from the screenshot above, we have Nested Virtualization in an Azure Resource Manager Windows 10 Virtual Machine running a Docker Windows 10 1809 Container Image that allows us to run PowerShell Desktop 5.1. BOOM!!

That’s it for the first post, where I introduced the concept of Full Windows Docker Images supporting PowerShell Desktop in Azure. Stay tuned for the next post that starts putting this new functionality to good use.

Got thoughts or feedback on this? Twitter || Blog

An Azure PowerShell Trigger Function for MAC Address Vendor / Manufacturer Lookup

Recently I started working on another side IoT project. As part of that I needed to identify the Vendor / Manufacturer of networking equipment. As you are probably aware, each network device has a unique MAC Address. A MAC Address looks like this: 60:5b:b4:f9:63:05. The first 24 bits (6 hex characters) identify the vendor / manufacturer.

There are a number of online lookup tools to determine who the vendor is from the MAC address, and some, like that one, have an API to allow lookups too. If you are only doing small volumes of lookups that is all good, but beyond that you get into subscription fee costs. I needed more than 1000 per day, but I also had a good idea of what the vendors were likely to be for a lot of my requests. So I rolled my own using an Azure Trigger Function.

Overview

The IEEE standards body maintains a list of the manufacturers assigned the 24 bit identifiers. A full list can be found here which is updated regularly. I downloaded this list and wrote a simple parser that created a PowerShell Object with the Hex, Base16 and Name of each Manufacturer.

I then extracted the manufacturers I expect to need to reference/lookup into a PSObject that is easily exportable and importable (Export-Clixml / Import-Clixml) and use that locally in my application. The full list is too large to keep locally, so I exported the full list (again using Export-Clixml) and implemented the lookup as an Azure Function that reads in the full list as a PSObject (which takes ~1.7 seconds for 25,000+ records) and can be queried with either Hex or Base16, as per the format in the IEEE list, returning the vendor name.

Converting the IEEE List to a PowerShell Object

This little script will download the latest version of the OUI list and convert it to a PowerShell Object. The resulting object looks like this:

vendor base16 hex
------ ------ ---
Apple, Inc. F0766F F0-76-6F
Apple, Inc. 40CBC0 40-CB-C0
Apple, Inc. 4098AD 40-98-AD

Update:

  • Line 4 for the local location to output the OUI List to
  • Line 39 for the PSObject file to create (a cut-down sketch of such a parser follows)
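The embedded script isn’t reproduced here; a cut-down sketch, assuming the standard oui.txt format and with placeholder paths, might look like this:

# Download the latest IEEE OUI list (placeholder output path)
$ouiFile = 'C:\temp\oui.txt'
Invoke-WebRequest -Uri 'http://standards-oui.ieee.org/oui.txt' -OutFile $ouiFile

# Parse the paired (hex) / (base 16) lines into vendor objects
$lines = Get-Content $ouiFile
$vendors = for ($i = 0; $i -lt $lines.Count; $i++) {
    if ($lines[$i] -match '^(?<hex>([0-9A-F]{2}-){2}[0-9A-F]{2})\s+\(hex\)\s+(?<vendor>.+)$') {
        $hex = $Matches.hex
        $vendor = $Matches.vendor.Trim()
        # the following line in the file carries the base16 form
        if ($lines[$i + 1] -match '^(?<b16>[0-9A-F]{6})\s+\(base 16\)') {
            [pscustomobject]@{vendor = $vendor; base16 = $Matches.b16; hex = $hex}
        }
    }
}

# Export for later re-import (placeholder path)
$vendors | Export-Clixml -Path 'C:\temp\Vendors.xml'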

If you want to query the file locally using PowerShell you can like this:

$query = "64-70-33"
$result = $vendors | Select-Object | Where-Object {$_.hex -like $query}
$result

which will output:

vendor base16 hex
------ ------ ---
Apple, Inc. 647033 64-70-33

If you want to extract all entries associated with a hardware vendor (e.g Apple) you can like this;

$apple = $vendors | Select-Object | Where-Object {$_.vendor -like "Apple*"}

and FYI, Apple have 671 registrations. Yes they make a LOT of equipment.

Azure Function

Here is the Azure Trigger PowerShell Function that takes a JSON object with a query containing the Base16 or Hex value for the 24-bit Vendor / Manufacturer prefix and returns the Vendor / Manufacturer, e.g.

{"query": "0A-00-27"}

Don’t forget to upload the Vendors.xml exported above to your Azure Function (you can drag and drop using Kudu) and update the path in Line 7.
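The Function body itself was embedded in the post. A sketch of a v1 PowerShell HTTP trigger doing the lookup (the Vendors.xml path and folder name here are assumptions, and $req/$res are the default v1 binding names) might look like:

# Read the query from the POSTed JSON body
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$query = $requestBody.query

# Import the full OUI list uploaded alongside the Function (placeholder path)
$vendors = Import-Clixml 'D:\home\site\wwwroot\MACVendorLookup\Vendors.xml'

# Match on either the hex (xx-xx-xx) or base16 (xxxxxx) form and return the vendor name
$match = $vendors | Where-Object {$_.hex -eq $query -or $_.base16 -eq $query} | Select-Object -First 1
Out-File -Encoding Ascii -FilePath $res -InputObject $match.vendor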

An example PowerShell script to query would be similar to the following. Update $queryURI with the URI to your Azure Function.

$queryURI = "https://FUNCTIONAPP.azurewebsites.net/api/AZUREFUNCTION?code=12345678/uiEx6kse6NujQG0b4OIcjx6B2wHLhZepBD/8Jy6fFawg=="
$query = "0A-00-27"
$body = @{"query" = $query} | ConvertTo-Json
$result=Invoke-RestMethod-Method Post -Uri $queryURI-Body $body
$result
The output will then return the manufacturer name. e.g
Microsoft Corporation

To look up all MAC addresses from your local Windows computer, the following snippet will do that after updating $queryURI for your Azure Function.

# Query MAC Address
$queryURI = "https://FUNCTIONAPP.azurewebsites.net/api/AZUREFUNCTION?code=12345678/uiEx6kse6NujQG0b4OIcjx6B2wHLhZepBD/8Jy6fFawg=="
$netAdaptors = Get-NetAdapter

foreach ($adaptor in $netAdaptors){
    $mac = $adaptor.MacAddress
    $macV = $mac.Split("-")
    $macLookup = "$($macV[0])$($macV[1])$($macV[2])"
    $body = @{"query" = $macLookup} | ConvertTo-Json
    $result = Invoke-RestMethod -Method Post -Uri $queryURI -Body $body -Headers @{"content-type" = "application/text"}
    Write-Host -ForegroundColor Blue $result
}

Summary

With the power of PowerShell it is quick to take a large amount of information and transform it into a usable collection that can then also be quickly exported and re-imported. It is also quickly searchable, and thanks to Azure Functions supporting PowerShell, it’s simple to stand up the collection and query it as required programmatically.


Lifecycle Management of Identities in SailPoint IdentityNow via API and PowerShell

Introduction

If you’ve been following along, I’ve been posting about leveraging the SailPoint IdentityNow API for Searching/Reporting and Authoring.

Now that I’ve covered Searching and Authoring, all that is left is lifecycle management, and that’s what I’ll cover in this post: updating and deleting entities via the API.

Updating SailPoint IdentityNow Entities

If you have not read the first post in this series, start there as ‘updating’ builds on top of Search/Reporting. It also covers enabling the API.

My quick start guide to updating IdentityNow Entities starts with searching to find the Entities (probably Users) you want to update. In my example below I’m searching for all objects on a Source. Then I iterate through the results and update them. I’m updating the Country attribute.

When updating an entity (e.g User) you need to perform a PATCH webrequest specifying the underlying ID (objectID) of the object. The URI format looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808365bd1f010165caf761625bcd?org=orgName

Example Script

Here is an example script. As per the previous two posts, change all the lines for your tenant and your API details. A cut-down sketch of the core PATCH call follows the notes below.

  • Line 16 is the query for objects to update
  • Lines 39-41 are the attribute to update
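The full example script was embedded in the post. As a sketch of the core PATCH call, with the account ID a placeholder and the attribute payload format an assumption:

# Build the Basic Auth header from your API Client ID / Secret (placeholders shown)
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$orgName = 'yourOrgName'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# PATCH the account, specifying its underlying ID (objectID); the ID here is a placeholder
$accountID = '2c91808365bd1f010165caf761625bcd'
$updateURI = "https://$($orgName).api.identitynow.com/v2/accounts/$($accountID)?org=$($orgName)"
$body = @{"country" = "Australia"} | ConvertTo-Json
Invoke-RestMethod -Method Patch -Uri $updateURI -Headers @{Authorization = "Basic $($encodedAuth)"} -ContentType 'application/json' -Body $body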

Updating Manager

For manager, the attribute is a reference on the IdentityNow Source to the Manager. On my “External Entities” Source I locate the object representing the Manager and obtain their accountId (which in my case is firstname.lastname) and set that as the ManagerID. I then find the users that I want to update for this manager and update them as we did in the previous example, but with a reference to accountId of the Manager for the Manager attribute.

NOTE: When querying IdentityNow via the API, the syntax is very important, especially when also incorporating variables. If I have a variable $manager with a displayName value, that value will normally contain a space, so we need to capture the whole string. Here is an example of doing that, in order to query for $manager = “Rick Sanchez” in PowerShell:

$queryManager = "attributes.displayName:"+'"'+"$($manager)"+'"'

which will give us attributes.displayName:”Rick Sanchez”, which in my case returns the single object for Rick Sanchez, not a list of references to Rick Sanchez.

Deleting SailPoint IdentityNow Entities

Deleting is very similar to Updating. Again the easiest method is to search and obtain the object(s) to be deleted and then delete via a DELETE webrequest specifying the underlying ID (objectID) of the object to be deleted. The URI looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808565bd1f110165cb628d1a702f?org=orgName

Example Script

Here is an example script (the core delete call is sketched below). It searches IdentityNow based on object naming (see line 14), then finds the Source that the object we wish to delete is connected to. In this example the Source is the one I created in the last post, “External Entities”. Update for the name of your Source (line 25).
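The embedded script isn’t reproduced here either; with the auth header built as in the update sketch above, the core delete call is simply:

# DELETE the account, specifying its underlying ID (objectID); the ID here is a placeholder
$accountID = '2c91808565bd1f110165cb628d1a702f'
$deleteURI = "https://$($orgName).api.identitynow.com/v2/accounts/$($accountID)?org=$($orgName)"
Invoke-RestMethod -Method Delete -Uri $deleteURI -Headers @{Authorization = "Basic $($encodedAuth)"}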

Summary

Using the API we can Search for Identities, Author, Update and Delete them.

Authoring Identities in SailPoint IdentityNow via the API and PowerShell

Introduction

A key aspect of any Identity Management project is having an authoritative source for identity. Typically this is a Human Resources system. But what about identity types that aren’t in the authoritative source? External vendors, contingent contractors, and the identities used by End User Computing systems, such as privileged accounts, service accounts and training accounts.

Now, some Identity Management solutions allow you to author identity through their portals, providing a nice GUI to create a user/training/service account. SailPoint IdentityNow doesn’t have that functionality. It does however have an API, and I’ll show you in this post how you can use it to author identity into IdentityNow.

Overview

So, now you’re thinking great, I can author Identity into IdentityNow via the API. But, am I supposed to get managers to interface with an API to kick off a workflow to create identities? Um, no. I don’t think we want to be giving them API access into our Identity Management solution.

The concept is this: an Identity Request WebApp would collect the necessary information for the identities to be authored and facilitate their creation in IdentityNow via the API. SailPoint kindly provide a sample project that does just that. It is available on Github here. Through looking at this project and the IdentityNow API, I worked out how to author identity via the API using PowerShell. There were a few gotchas I had to work through, so I’m providing a working example for you to base a solution around.

Getting Started

There are a couple of things to note.

  • Obviously you’ll need API access
  • You’ll want to create a Source that is of the Flat File type (Generic or Delimited File)
    • We can’t create accounts against Directly Connected Sources
  • There are a few attributes that are mandatory for the creation
    • At a minimum I supply id, name, givenName, familyName, displayName and e-mail
    • At an absolute bare minimum you need the following. Otherwise you will end up with an account in IdentityNow that will show as “Identity Exception”
      • id, name, givenName, familyName, e-mail*

* see note below on e-mail/email attribute format based on Source type

Creating a Flat File Source to be used for Identity Authoring

In the IdentityNow Portal under Admin => Connections => Sources select New.

Create New Source.PNG

I’m using Generic as the Source Type. Give it a name and description. Select Continue

New Generic Source.PNG

Assign an Owner for the Source and check the Accounts checkbox. Select Save.

New Source Properties.PNG

At the end of the URL of the now saved new Source, get and record the SourceID. This will be required so that when we create users via the API, they will be created against this Source.

SourceID.PNG

If we look at the Accounts on this Source we see there are obviously none.

Accounts.PNG

We’d better create some. But first you need to complete the configuration for this Source. Go and create an Identity Profile for this Source, and configure your Identity Mappings as per your requirements. This is the same as you would for any other IdentityNow Source.

Authoring Identities in IdentityNow with PowerShell

The following script is the bare minimum to use PowerShell to create an account in IdentityNow (a sketch along those lines follows the list below). Change;

  • line 2 for your Client ID
  • line 4 for your Client Secret
  • line 8 for your Tenant Org Name
  • line 12 for your Source ID
  • the body of the request for the account to be created (lines 16-21)
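The full script was embedded in the post. As a sketch, with the create endpoint and body format per my reading of the v2 accounts API (all values below are placeholders), it looks something like:

# Build the Basic Auth header from your API Client ID / Secret
$clientID = 'yourClientID'
$clientSecret = 'yourClientSecret'
$orgName = 'yourOrgName'
$sourceID = '12345'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth = [Convert]::ToBase64String($Bytes)

# Create the account against the Flat File Source
$createURI = "https://$($orgName).api.identitynow.com/v2/accounts?sourceId=$($sourceID)&org=$($orgName)"
$body = @{
    "id"          = "rick.sanchez"
    "name"        = "rick.sanchez"
    "givenName"   = "Rick"
    "familyName"  = "Sanchez"
    "displayName" = "Rick Sanchez"
    "email"       = "rick.sanchez@customer.com.au"   # 'e-mail' on a Delimited Source; check the Account Schema
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $createURI -Headers @{Authorization = "Basic $($encodedAuth)"} -ContentType 'application/json' -Body $body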

NOTE: By default on the Generic Source the email attribute is ’email’. By default on the Delimited Source the email attribute is ‘e-mail’. If your identities after executing the script and a correlation are showing as ‘Identity Exception’ then it’s probably because of this field being incorrect for the Source type. If in doubt check the Account Schema on the Source.

Execute the script and refresh the Accounts page. You’ll see we now have an account for Rick.

Rick Sanchez.PNG

Expanding Rick’s account we can see the full details.

Rick Full Details.PNG

Testing it out for a Bulk Creation

A few weeks ago I wrote this post about generating user data from public datasets. I’m going to take that and generate 50 accounts. I’ve added additional attributes to the Account Schema (for suburb, state, postcode, street). Here is a script combining the two.

Running the script creates our 50 users in addition to the couple I already had present.

Bulk Accounts Created.PNG

Summary

We can quickly leverage the IdentityNow API to author identity into SailPoint IdentityNow. That’s the easy bit sorted. Now to come up with a pretty UI and a UX that passes the end-user usability tests. I’ll leave that with you.

Reporting on SailPoint IdentityNow Identities using the ‘Search’ (Beta) API and PowerShell

Introduction

SailPoint recently made available in BETA their new Search functionality. There’s some great documentation around using the Search functions through the IdentityNow Portal on Compass^. Specifically;

^ Compass Access Required

Each of those articles is great, but they are centered around performing the search via the Portal. For some of my needs I need to do it via the API, and that’s what I’ll cover in this post.

*NOTE: Search is currently in BETA. There is a chance some functionality may change. SailPoint advise not to use this functionality in Production whilst it is in Beta.

Enabling API Access

Under Admin => Global => Security Settings => API Management select New and give the API Account a Description.

New API Client.PNG

Client ID and Client Secret

ClientID & Secret.PNG

In the script to access the API we will take the Client ID and Client Secret and encode them for Basic Authentication to the IdentityNow Search API. To do that in PowerShell use the following example replacing ClientID and ClientSecret with yours.

$clientID = 'abcd1234567'
$clientSecret = 'abcd12345sdkslslfjahd'
$Bytes = [System.Text.Encoding]::utf8.GetBytes("$($clientID):$($clientSecret)")
$encodedAuth =[Convert]::ToBase64String($Bytes)

Searching

With API access now enabled we can start building some queries. There are two methods I’ve found: using query strings on the URL, and using JSON payloads in an HTTP POST body. I’ll give examples of both.

PowerShell Setup

Here is the base of all my scripts for using PowerShell to access the IdentityNow Search.

Change;

  • line 3 for your Client ID
  • line 5 for your Client Secret
  • line 10 for your IdentityNow Tenant Organisation name (by default the host portion of the URL e.g https://orgname.identitynow.com )

Searching via URL Query String

First we will start with searching by having the query string in the URL.

Single attribute search via URL

$query = 'firstname EQ Darren'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Single Attribute URL Search.PNG

Multiple attribute search via URL

Multiple criteria queries need to be constructed carefully. The query below just looks wrong, yet if you place the quotes where you think they should go, you don’t get the expected results. The following works.

$query = 'attributes.firstname"="Darren" AND attributes.lastname"="Robinson"'

and it works whether you Encode the URL or not

$queryEncoded = [System.Web.HttpUtility]::UrlEncode($query)
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($queryEncoded)" -Headers @{Authorization = "Basic $($encodedAuth)" 

Multiple Attribute Query Search.PNG

Here is another search, based on identities having a connection to a source containing the word ‘Directory’ AND having fewer than 5 entitlements.

$URI = "https://$($org).api.identitynow.com/v2/search/identities?"
$query = '@access(source.name:*Directory*) AND entitlementCount:<5'
$Accounts = Invoke-RestMethod -Method Get -Uri "$($URI)limit=$($searchLimit)&query=$($query)" -Headers @{Authorization = "Basic $($encodedAuth)" }

Multiple Attribute Query Search2.PNG

Searching via HTTP Post and JSON Body

Now we will perform similar searches, but with the search strings in the body of the HTTP Request.

Single attribute search via POST and JSON Based Body Query

$body = @{"match"=@{"attributes.firstname"="Darren"}}
$body = $body | convertto-json 
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body

Single Attribute JSON Search.PNG

Multiple attribute search via POST and JSON Based Body Query

If you want to have multiple criteria and submit it via a POST request, this is how I got it working. For each part I construct it and convert it to JSON and build up the body with each search element.

$body1 = @{"match"=@{"attributes.firstname"="Darren"}}
$body2 = @{"match"=@{"attributes.lastname"="Robinson"}}
$body = $body1 | ConvertTo-Json
$body += $body2 | ConvertTo-Json
$Accounts = Invoke-RestMethod -Method POST -Uri "$($URI)limit=$($searchLimit)" -Headers @{Authorization = "Basic $($encodedAuth)" } -ContentType 'application/json' -Body $body

Multiple Attribute JSON Search.PNG

Getting Full Identity Objects based off Search

Lastly, now that we’ve been able to build queries via two different methods and we have the results we’re looking for, let’s output some relevant information about them. We will iterate through each of the returned results and output some specifics about their sources and entitlements. Same as above, update for your ClientID, ClientSecret, Orgname and search criteria.

Extended Information.PNG

Summary

Once you’ve enabled API access and understood the query format it is super easy to get access to the identity data in your IdentityNow tenant.

My recommendation is to use the IdentityNow Search function in the Portal to refine your searches for what you are looking to return programmatically, and then use the API to get the data for whatever purpose you need.