Microsoft Identity Manager PowerShell Management Agent for Oracle Internet Directory

Why a FIM/MIM PowerShell Management Agent for Oracle Internet Directory? Why not just use the Generic LDAP Connector for Microsoft Identity Manager? I needed an integration solution that could update an Oracle Database sitting behind Oracle Internet Directory. That meant I required a solution that could use LDAP to get visibility of who/what was in OID, but then make updates into an Oracle DB. I wanted that functionality contained in a single Management Agent, not one MA for the Database and another for LDAP. Another perfect fit for the Granfeldt PowerShell Management Agent. This post details an LDAP Forefront / Microsoft Identity Manager PowerShell Management Agent for Oracle Internet Directory, with a working example to discover/import OID LDAP objects.

If you haven’t used the Granfeldt PowerShell Management Agent (PSMA) before, see the Getting Started with the Granfeldt PowerShell Management Agent section of my Identity Manager Management Agents page here.

Schema Script

Below is my Schema Script for Oracle Internet Directory for the Person/inetOrgPerson objectclass. Depending on what you are using OID for and what the requirements for the OID Management Agent are, you may need to add additional attributes or remove any superfluous ones. I’m using the OID Guid as the anchor.
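
A minimal sketch of what that Schema script can look like is below, assuming the standard Granfeldt PSMA schema convention of a prototype object whose property names declare the anchor, objectclass and attribute types. The attribute list and sample values are illustrative only; trim or extend them to suit your OID schema.

# Prototype object describing the schema to the PSMA
$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-orclguid|String" -Value "cefa5a0e4b3e49dc9d4ab5b5d4e0f4ab"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "inetOrgPerson"
$obj | Add-Member -Type NoteProperty -Name "cn|String" -Value "Jane Citizen"
$obj | Add-Member -Type NoteProperty -Name "givenName|String" -Value "Jane"
$obj | Add-Member -Type NoteProperty -Name "sn|String" -Value "Citizen"
$obj | Add-Member -Type NoteProperty -Name "uid|String" -Value "jcitizen"
$obj | Add-Member -Type NoteProperty -Name "mail|String" -Value "jane.citizen@customer.com.au"
$obj | Add-Member -Type NoteProperty -Name "telephoneNumber|String" -Value "+61 2 9999 9999"
$obj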

Import Script

Key functions of the Import Script are;

  • Delta Sync (using OID Change Log)
  • Full Sync (based off an LDAP Filter)
  • Paging of Results through the MA

Authentication

Authentication credentials are provided from the Management Agent through to the Import script via the Connectivity tab Username and Password configuration items.

Microsoft Identity Manager Oracle Internet Directory Management Agent Credentials

Delta Sync

The Import Script uses the OID Change Log to determine objects of interest that have changed since the last sync. The import script writes a watermark file that contains the last changenumber used so it knows on the next sync what to look for. This post here has more details around Changelog.
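
A simplified sketch of that watermark handling is shown below; the file path, filter and variable names are illustrative only.

# Read the last changenumber processed (watermark); default to 0 on the first run
$watermarkFile = "C:\PSMA\OID\lastChangeNumber.txt"   # hypothetical path; set via the Import script configuration
[int]$lastChangeNumber = 0
if (Test-Path $watermarkFile) { [int]$lastChangeNumber = Get-Content $watermarkFile }

# Only return changelog entries newer than the watermark
$changeFilter = "(&(objectclass=changelogentry)(changenumber>=$($lastChangeNumber + 1)))"

# ... query cn=changelog with $changeFilter and process the entries through the MA ...

# Persist the highest changenumber processed so the next Delta Sync starts from there
$highestChangeNumber | Out-File $watermarkFile -Force   # $highestChangeNumber comes from the processed entries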

Full Sync

Full Sync is performing an LDAP query against OID based on an LDAP Filter and bringing through to the Management Agent attributes specified on the MA Configuration. Essentially it is a Management Agent version of the PowerShell LDAP query I detailed here.

Paging of Results

If you have a large OID it's always a good idea to page the results through the MA. The Import Script below utilises Paging on the Management Agent to process the objects. The method I've used in this example is a little different from what I've previously posted here and here. Objects returned from OID as per your LDAPFilter (line 207) are split into groups based on the PageSize you have configured for your Run Profile, using the technique shown here for splitting a large collection into manageable chunks.
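
The chunking looks something like the sketch below (illustrative only), where $ldapResponse.Entries is the full result set from the LDAP search earlier in the Import script and $pagesize is assumed to be the page size the PSMA passes to the script for paged imports.

# Split the objects returned from OID into groups sized to the Run Profile PageSize
$pagedGroups = @()
$groupCount = [math]::Ceiling($ldapResponse.Entries.Count / $pagesize)
for ($i = 0; $i -lt $groupCount; $i++) {
    $pagedGroups += ,($ldapResponse.Entries | Select-Object -Skip ($i * $pagesize) -First $pagesize)
}
# Each paged import pass then returns the next group to the MA until no groups remain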

Configuration Updates

Using the sample Import.ps1 script below, update;

  • Line 10 for the Debug Output Log location
  • Line 12 for the Delta Sync OID Change Log watermark file location
  • Line 161 for the OID Server Name
  • Line 162 for the OID Server LDAP Port
  • Line 179 for the BaseDN to search from
  • Line 207 for the LDAP Filter for OID Objects of interest for the MA

Password and Export Scripts

As per the Getting Started with the Granfeldt PowerShell Management Agent section of my Identity Manager Management Agents page here, these need to be present (and can be empty). What your MA needs to do will determine whether you need to implement them, and with what. For example, I have implemented Password Sync to an Oracle DB using this method.

Summary

Using the flexibility of the Granfeldt PowerShell Management Agent for Microsoft Identity Manager we can integrate with diverse systems in bespoke ways. Hopefully this post gives you a leg up if you need to integrate with Oracle Internet Directory, keeping in mind that Exports or Password Sync could be made to Oracle DBs rather than just to OID using LDAPModify.

Querying Oracle Internet Directory (LDAP) with PowerShell

If you are an IT Professional it is highly likely you are very familiar with Microsoft Active Directory and, in turn, PowerShell and LDAP. At some point though you may need to integrate with another LDAP directory such as Oracle Internet Directory, and you'll find it isn't as straightforward as Active Directory with the rich tooling it comes with. I've had to create interfaces with numerous LDAP directories over the years, but it's been quite a long time since I had to integrate with Oracle Internet Directory. That changed recently (as also seen in this post) and I had to get up to speed again with it, and work through the gotchas.

This post details a few steps to discovering and integrating with Oracle Internet Directory using PowerShell and the .NET System.DirectoryServices.Protocols.LdapConnection class. We start by connecting using LDP, validating our connectivity and credentials, before translating that to PowerShell. You will need:

  • LDAP Servername (or IP Address)
    • check to see you have connectivity to it by being able to resolve the DNS name
  • LDAP Server Port
    • 389 and 636 are default ports for Standard and SSL connections. Chances are OID is on a different Port though
  • Username
    • e.g. cn=ldapUser
  • Password
    • password for the ldapUser Account
  • Bind DN
    • the namespace of the LDAP Directory. e.g. dc=customer,dc=com,dc=au

Testing Connectivity to Oracle Internet Directory using Microsoft LDP

Using Microsoft LDP (which comes with the Remote Server Administration Tools (RSAT) for Windows operating systems) is the best way to start when connecting to a foreign LDAP Directory such as Oracle Internet Directory.

Using the Connection => Connect function and providing the LDAP Server and Port returns the RootDSE information. As shown below, that immediately tells us two important pieces of information: the version of Oracle Internet Directory (11.1.1.5.0) and where the ChangeLog is (more on that later).

LDP Connect to OID

With a connection to Oracle Internet Directory now established we can Bind (connect with credentials). From the Connection menu select Bind.

Simple BIND to OID

With the credentials correct we can then go to the View menu and select Tree.

Tree View of OID

Now we can see the OU Structure of Oracle Internet Directory.


Connecting to Oracle Internet Directory with PowerShell

Now that we have verified that the information we have for the LDAP Server, Directory and connection information is all correct we can try connecting using PowerShell.

Sample PowerShell LDAP Connection Script

Below is a sample PowerShell script to connect to Oracle Internet Directory. Change;

  • Line 3 for your LDAP Username
  • Line 4 for your LDAP Account password
  • Line 5 for the LDAP Servername
  • Line 6 for the LDAP Port
  • Line 9 for the base OU to start searching for users from

Line 18 is configured to only search one level under the base OU. If you have a complex OU structure you may need to change this to Subtree.

The script will then connect to Oracle Internet Directory and find the account we connected with displaying the values of its attributes.
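
For reference, here is a minimal sketch of such a script, laid out so the line numbers referenced above roughly line up; the server, port, credentials and BaseDN are placeholders.

# Load the System.DirectoryServices.Protocols assembly
Add-Type -AssemblyName System.DirectoryServices.Protocols
$username = "cn=ldapUser"
$password = "ldapUserPassword"
$ldapServer = "oid.customer.com.au"
$ldapPort = 5678

# Base OU to start searching for users from
$ldapSearchBase = "cn=Users,dc=customer,dc=com,dc=au"
$ldapSearchFilter = "(&(objectclass=inetOrgPerson)(cn=ldapUser))"

# Connect and Bind
$creds = New-Object System.Net.NetworkCredential($username, $password)
$ldapID = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier($ldapServer, $ldapPort)
$ldapConnection = New-Object System.DirectoryServices.Protocols.LdapConnection($ldapID, $creds, [System.DirectoryServices.Protocols.AuthType]::Basic)
$ldapConnection.SessionOptions.ProtocolVersion = 3
$ldapConnection.Bind()
$scope = [System.DirectoryServices.Protocols.SearchScope]::OneLevel   # change to Subtree for complex OU structures

# Search for the account we connected with and return all of its attributes
$searchRequest = New-Object System.DirectoryServices.Protocols.SearchRequest($ldapSearchBase, $ldapSearchFilter, $scope, $null)
$ldapResponse = $ldapConnection.SendRequest($searchRequest)
$ldapResponse.Entries[0].Attributes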

Below shows running the script and returning the details of the account we connected with.

LDAP PowerShell Connection to Oracle Internet Directory

Immediately you can see the first problem connecting to OID. The attribute values are returned as a Byte Array. This isn’t ideal.

LDAP Helper Functions

From the PowerShell Gallery and this LDAP Module we can leverage the Get-LdapObject and Expand-Collection functions that will convert the LDAP responses for us. Put these two functions at the top of your script (or in a separate .ps1 script) and load them at the beginning of your PowerShell LDAP script.

The LDAP Request and Response change a little to use these functions but still leverage the same LDAP Connection. The timeout is specified against our connection and we call Get-LdapObject using that connection and our previous Filter and SearchBase. Scope is still OneLevel but can be changed to SubTree if required.

# Connect and Search
$ldapConnection.Timeout = new-timespan -Seconds 60
$ldapResponse = Get-LdapObject -LdapConnection $ldapConnection -LdapFilter $ldapSearchFilter -SearchBase $ldapSearchBase -Scope OneLevel
$ldapResponse

PowerShell Get-Ldap Object Script

Here is the full script with the two helper Functions.

The screenshot below shows the output in text rather than Byte Array. Excellent.

LDAP PowerShell Result in Text from Oracle Internet Directory

Oracle Internet Directory Change Log

Change Log is a function of many LDAP directories. It is especially useful when we are synchronising an LDAP Directory to another system as it means we don’t have to return all objects in it each time, but we can get the incremental changes (hence Change Log).

To query the Change Log in Oracle Internet Directory there are a couple of gotchas. You will need;

  • to query OID to see the latest Changenumber
  • only use OneLevel as the Scope for queries. Anything else and OID won’t return the info
  • The Base DN is what is shown when we initially connected using LDP (e.g. cn=changelog)

Again starting with LDP we can query and get the Changenumbers. Using LDP Search;

  • use cn=changelog for the Base DN
  • objectclass = * for the Filter
  • One Level for the Scope
  • * for Attributes

And the last of the results will have the most recent changenumber.

Change Number from Oracle Internet Directory

Change Log with PowerShell

Now that we know we can get the ChangeLog with LDP, lets do it with PowerShell.

The following script re-uses our previous connection and all you should need to do is update the changeNumber in line 2 to suit your environment.
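
Here is a minimal sketch along those lines, re-using $ldapConnection from the earlier script; the changenumber and filter values are illustrative.

# Changelog entries of interest - from this changenumber onwards
$changeNumber = 4923578
$changeLogBase = "cn=changelog"
$changeFilter = "(&(objectclass=changelogentry)(changenumber>=$changeNumber)(targetdn=*$($ldapSearchBase)))"

# OID only returns changelog entries when the Scope is OneLevel
$changeScope = [System.DirectoryServices.Protocols.SearchScope]::OneLevel
$changeRequest = New-Object System.DirectoryServices.Protocols.SearchRequest($changeLogBase, $changeFilter, $changeScope, $null)
$changeResponse = $ldapConnection.SendRequest($changeRequest)
$changeResponse.Entries.Count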

The screenshot below shows retrieving Change Log entries that are equal to or newer than the Changenumber we used in our Filter. The changeFilter scopes the changes down to users in the searchBase so that we see changes for users rather than other operational changes.

Change Log Results from Oracle Internet Directory

The individual users that have changed can be identified using the following one-liner.

$uniqueUsers = $changeResponse.Entries | ForEach-Object { $_.attributes["targetdn"][0] } | Get-Unique
$uniqueUsers

The Last Changenumber present in Change Log can be found with;

$lastChangeNumber = [int]$ChangeResponse.Entries[($ChangeResponse.Entries.Count-1)].Attributes["changeNumber"][0]

Summary

I recently had to reacquaint myself with OID. I’ve written it up so that next time isn’t as painful.

 

SailPoint IdentityNow to ServiceNow Ticketing Integration


SailPoint IdentityNow comes with many connectors to allow provisioning and lifecycle management of entities in connected systems. However there will always be those systems that require some manual tasks/input. In those instances SailPoint IdentityNow to ServiceNow Ticketing Integration can create a ticket in ServiceNow that can then be tracked whilst those manual steps are fulfilled.

Integration of IdentityNow with ServiceNow doesn't use a connector in the same sense as the other Sources do in IdentityNow. It uses an Integration Module. The SailPoint ServiceNow Integration Module (SIM) is configured using the SailPoint IdentityNow integration APIs. The Integration Module Configuration Guide on Compass here provides the basis of what is required to List, Create, Update and Delete Integrations. However, I had a few difficulties completing this due to a couple of configuration items that are ambiguous in the sample documentation. This post details how I got it configured so I can find it next time.

All the following API calls leverage authentication using the v3 API AuthN method I detail in this post here.

List Integrations

This call does exactly what it says: it lists any integrations such as the IdentityNow to ServiceNow Ticketing Integration. If you haven't configured any yet it will return nothing; otherwise you will get the full configuration for each integration. To list integrations the /integration/listSimIntegrations API is called using a GET operation.

$orgName = 'yourIdentityNowOrgName'
$integrationBaseURI = "https://$($orgName).api.identitynow.com/cc/api/integration"
$listIntegrations = Invoke-RestMethod -Method GET -Uri "$($integrationBaseURI)/listSimIntegrations" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"}
The output below is from an integration for an Application from IdentityNow to ServiceNow that also brings through details for the request. More details on that below under Create an Integration.

Create an Integration

To create an integration the /integration/createSimIntegration API is called using a POST request with a JSON Body containing the Integration configuration.

$createIntegration = Invoke-RestMethod -Method Post -Uri "https://$($orgName).api.identitynow.com/cc/api/integration/createSimIntegration" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; "Content-Type" = "application/json"} -Body $createBody

Create ServiceNow Integration Configuration Document

A lot of the configuration is prescriptive as per the IdentityNow documentation. However there are a few items that aren’t always obvious.

The configuration object further below is for integration from IdentityNow to ServiceNow using Basic authentication.

  • Line 4 is the ServiceNow Service Account created for IdentityNow with the permissions detailed in the IdentityNow documentation
  • Line 5 is the password for the Service Account
  • Line 7 is a piece that isn’t (or wasn’t) in the documentation when we configured this.
Important
In order for IdentityNow to pass through all the details for the account the request is for, you need to also have a ServiceNow Source configured. Make sure you have your Correlation Rules set up so that accounts in ServiceNow match/join to IdentityNow. Essentially this will match the ServiceNow Record for who the request is for and populate the Service Request with all their details (from ServiceNow). The Source is required in order to pass the ServiceNow Account ID of the identity associated with the IdentityNow request.

The Source Configuration screenshot below shows the basic ServiceNow Source configured using Basic Auth. Make sure your Correlation configuration is set up to appropriately join Accounts. Take note of the name you give the Source and the Source ID (visible in the Browser URL when configuring the Source).

  • Line 9 is the mapping from the IdentityNow Source (Flat File/Generic) that you will be sending Service Requests through to ServiceNow for, and the ServiceNow Catalog Item. The IdentityNow Source ID is the externalID. You will need to get the Source Configuration via API to get this as detailed in this post.
  • Line 12 is the Virtual Appliance Cluster that the Integration will be configured for. The clusterExternalId can be retrieved via API as detailed in this post. It can be found under Configuration on a VA Cluster object.

  • Lines 13 – 23 are what you want to pass to ServiceNow for the Service Request. Modify accordingly, but this example will pass through the details of the request from IdentityNow (e.g. Create or Update x, y, z etc.).
  • Line 26 is the IdentityNow Source ID of the Generic/Flat file source you are configuring for integration with ServiceNow. It’s the same as you used on Line 9 for the IdentityNow to ServiceNow Catalog Item mapping.
  • Lines 29 – 34 are the status mappings for the requests. You can configure how often ServiceNow is polled for status updates through the integration/setStatusCheckDetails API. Send a POST request to the API with the provisioningStatusCheckIntervalMinutes and provisioningMaxStatusCheckDays as shown below to check every 15 minutes for a maximum of 90 days (dev environment type settings).
# Schedule for Status Checks
$schConfig = '{"provisioningStatusCheckIntervalMinutes":15,"provisioningMaxStatusCheckDays":90}'

$scheduleIntegration = Invoke-RestMethod -Method Post -Uri "https://$($orgName).identitynow.com/cc/api/integration/setStatusCheckDetails" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; "Content-Type" = "application/json"} -Body $schConfig

ServiceNow Integration Configuration Document

Below is a sample IdentityNow to ServiceNow integration configuration.

Example Request in ServiceNow

With all that detail and how-to, this is what you actually get. Here is an example of a request that has been generated in ServiceNow from IdentityNow via the ServiceNow Integration.

Get an Integration

If you know the ID of an integration you can get it directly using the /getSimIntegration/{ID} Get API call. The ID can be retrieved using List Integrations as detailed at the beginning of this post.

# Get Integration
$getIntegration = Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/cc/api/integration/getSimIntegration/2c9180846a6a22c8016a75adafake" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; "Content-Type" = "application/json"}

Delete an Integration

Deleting an integration is similar to the Get Integration call, except the API endpoint is /deleteSimIntegration/{ID} and the operation is a DELETE rather than a GET.

# Delete Integration
$deleteIntegration = Invoke-RestMethod -Method Delete -Uri "https://$($orgName).api.identitynow.com/cc/api/integration/deleteSimIntegration/2c9180856a6a22d0016a6ec2a3fake" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; "Content-Type" = "application/json"}

Summary

Rather a long post, but hopefully it will give anyone else trying to do this integration the leg up on how to get it operational a lot quicker than it took us.

Using PowerShell to query Oracle DB’s without using the Oracle Client – Oracle Data Provider for .NET

With every Identity and Access Management project comes the often tactical integration with heritage/legacy systems that can assist with their decommissioning. That is exactly what I was having to do a couple of weeks ago with Oracle. My public frustration with installing the Oracle Client on a Windows Server 2016 host to allow me to integrate Microsoft Identity Manager with Oracle saw me rewarded with an unsolicited but fantastic response from Sylvan Laurence. The suggestion was to use the Oracle Data Provider for .NET. The key benefit here is NO Oracle Client install required, and I can leverage the library with PowerShell.

Oracle Data Provider for .NET Tweet

Being pointed in the right direction by Sylvan got me out of the rabbit hole I was in (although I did have the Oracle Client installed, configured and working with MIM) and had me investigating how I could achieve the custom integration I required more easily and quickly. This post details how to quickly use PowerShell to connect to an Oracle database without requiring the Oracle Client by leveraging the Oracle Data Provider for .NET library.

Installing the Oracle Data Provider for .NET

The Oracle Data Provider for .NET can be obtained from Oracle here. But for a version with an installer that we can then leverage with PowerShell, the latest 32-bit installer package is ODAC 12.2c Release 1 (12.2.0.1.0), available here, and the 64-bit installer package is ODAC 12.2c Release 1 (12.2.0.1.0), available here. You will need to register a free Oracle account to be able to download it. I'm using the 64-bit ODP.NET Managed ODAC122cR1.zip version. Expand the archive to a temp directory.

As per Sylvan’s guidance the installation process is;

  • from an elevated command prompt
    • install_odm.bat c:\OracleDAC x64 true
Oracle Data Provider for .NET Installation

Configuring the Oracle Data Provider for .NET

Just as if you’d installed the full Oracle Client, the Oracle Data Provider for .NET leverages the SQLNET.ora and TNSNames.ora configuration files. Samples are provided under the yourInstallPath\network\admin\sample directory. Your configuration files need to be in the yourInstallPath\network\admin directory.

Here are examples of mine.

sqlnet.ora

Check with the DBA team maintaining the environment but chances are no changes are required here.

# sqlnet.ora Network Configuration File: 

# This file is actually generated by netca. But if customers choose to 
# install "Software Only", this file wont exist and without the native 
# authentication, they will not be able to connect to the database on NT.

SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

tnsnames.ora

Update it with the Alias of the Oracle environment you are connecting to (available from the DBA team maintaining the environment); mine is named IDM below. Likewise update the FQDN of the Host and the Port it is configured to listen on.

# tnsnames.ora Network Configuration File
IDM =
   (DESCRIPTION =
       (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-db1.customer.com.au)(PORT = 16001))
       )
       (CONNECT_DATA =
          (SERVICE_NAME = IDM)
       )
   )

Hello World Query

Below is a Hello World query using PowerShell and ODP.NET. The query in line 12 should work against any Oracle SQL DB because it uses the built-in table "dual".

Update:

  • Line 2 if you installed ODP.NET to a different path
  • Line 6 for your SQL DB Username
  • Line 7 for your SQL DB Users Password
  • Line 8 for your SQL DB Alias (that matches what you have in your TNSNames.ora configuration file)
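
Here is a minimal sketch of such a query, laid out so the line numbers referenced above roughly line up; the install path, credentials and alias are placeholders.

# Load the ODP.NET Managed library from the install path
Add-Type -Path 'C:\OracleDAC\odp.net\managed\common\Oracle.ManagedDataAccess.dll'

# Connection details for the Oracle DB
# (username, password and the TNSNames.ora alias)
$username = 'oracleDBUser'
$password = 'oracleDBPassword'
$dataSource = 'IDM'

# Hello World query against the built-in dual table
$connectionString = "User Id=$($username);Password=$($password);Data Source=$($dataSource)"
$queryStatement = "SELECT 'Hello world!' Greeting FROM dual"

# Connect, execute and read the result
$connection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($connectionString)
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = $queryStatement
$reader = $command.ExecuteReader()
while ($reader.Read()) { $reader.GetString(0) }
$connection.Close()
$connection.Dispose()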

The screenshot below shows the output from running the Hello World Query.

Oracle Data Provider for .NET Query Result

Calling a Stored Procedure

The example above uses a simple SQL query ("Select 'Hello world!' Greeting from dual"), but what if you wanted to call a Stored Procedure? If you are just calling a Stored Procedure with no input or output you can update the CommandType to 'Stored Procedure'

$command.CommandType = 'Stored Procedure'

and change the execution from ExecuteReader() to ExecuteNonQuery()

$reader = $command.ExecuteNonQuery()

Your QueryStatement is the name of your Stored Procedure. Successful execution of the Stored Procedure will return -1

If however you need to do something more complex you need to fall back to CommandType = 'Text', and define your SQL call including the Stored Procedure like this;

$queryStatment = @"
DECLARE
   result varchar2(100);
   error varchar2(100);
BEGIN
   your.stored.procedure('$($any)', '$($variables)', '$($tobepassed)', result, error);
END;
"@

Successful execution of the Stored Procedure will return -1

A full example of that is shown below.
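
Reusing the open $connection from the Hello World script, a condensed sketch of that execution looks like the following; the PL/SQL block is the $queryStatment shown above, with the procedure and parameter names being placeholders from that example.

# Execute the anonymous PL/SQL block that wraps the Stored Procedure call
$command = $connection.CreateCommand()
$command.CommandType = 'Text'
$command.CommandText = $queryStatment
$result = $command.ExecuteNonQuery()
$result   # -1 indicates the Stored Procedure executed successfully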

The screenshot below shows successful execution of a Stored Procedure as per the script above.

Oracle Data Provider for .NET Stored Procedure Execution Result

Summary

Using the Oracle Data Provider for .NET (ODP.NET) library we can use PowerShell to integrate with Oracle Databases, executing queries and stored procedures. If you are looking to do something similar, also check out this Microsoft Devblog post.

Get/Update SailPoint IdentityNow Global Reminders and Escalation Policies

SailPoint IdentityNow Access Requests for Roles or Applications usually require approvals, which are configured on the associated Role or Application. The Approval could be by the Role/Application Owner, a Governance Group or the Requestor's Manager. However, for reminders and escalation policies the configuration can only be retrieved and set via the API. The SailPoint IdentityNow api/v2/org API is used to configure these Global Reminders and Escalation Policies.

This post details how to get the configuration of your IdentityNow Org along with updating the Global Reminders and Escalation Policies.

The PowerShell script below uses the v3 API Authentication process detailed here.

Update the script below for;

  • line 2 for your IdentityNow Orgname
  • line 5 for your IdentityNow Admin ID
  • line 6 for your IdentityNow Admin Password
  • line 16 for your Org v3 ClientID (obtained from SailPoint)
  • line 17 for your Org v3 ClientSecret (obtained from SailPoint)

When executing the script, Line 35 will return the current configuration for your SailPoint IdentityNow Org.

$listOrgConfig = Invoke-RestMethod -Method GET -Uri "https://$($orgName).identitynow.com/api/v2/org" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"}
  • lines 39-43 specify the configuration values for
    • daysBetweenReminders – Number of days between reminders or escalations
    • daysTillEscalation – Number of days from when the request is created to when the reminder/escalation process begins
    • maxReminders – Maximum number of reminders sent before starting the escalation process
    • fallbackApprover – The alias of the identity that will review the request if no one else reviews it
  • lines 46-50 build the configuration to write back to IdentityNow

and finally Line 53 updates the configuration in IdentityNow

$updateOrgConfig = Invoke-RestMethod -Method Patch -Uri "https://$($orgName).identitynow.com/api/v2/org" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; 'Content-Type' = 'application/json'} -Body ($approvalConfigBody | convertto-json)

The updated configuration is returned in the $updateOrgConfig variable. The following snippet shows the written config for Reminders and Escalations.

SailPoint IdentityNow Global Reminders and Escalation Policies

The Script

With all the details described above, here is the script.
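
A condensed sketch of it is shown below (line numbers will differ from those referenced above). It assumes the v3 token acquisition from the referenced post has populated $v3Token, and the approvalConfig body structure is an assumption based on the property names listed above.

# IdentityNow Org
$orgName = "yourIdentityNowOrgName"

# ... authenticate using the v3 AuthN method referenced above to populate $v3Token ...

# Get the current Org configuration
$listOrgConfig = Invoke-RestMethod -Method GET -Uri "https://$($orgName).identitynow.com/api/v2/org" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"}

# Reminder and Escalation values
$daysBetweenReminders = 3
$daysTillEscalation = 5
$maxReminders = 3
$fallbackApprover = "identityAliasOfFallbackApprover"

# Build the configuration to write back (assumed structure)
$approvalConfigBody = @{approvalConfig = @{daysBetweenReminders = $daysBetweenReminders; daysTillEscalation = $daysTillEscalation; maxReminders = $maxReminders; fallbackApprover = $fallbackApprover}}

# Update the Org configuration
$updateOrgConfig = Invoke-RestMethod -Method Patch -Uri "https://$($orgName).identitynow.com/api/v2/org" -Headers @{Authorization = "$($v3Token.token_type) $($v3Token.access_token)"; 'Content-Type' = 'application/json'} -Body ($approvalConfigBody | convertto-json)
$updateOrgConfig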

Summary

Using PowerShell with the v3 Authentication method and the v2 IdentityNow Org API we can quickly get the Organisation configuration and then update the Global Reminders and Escalation Policies. With a few changes, other customer-configurable (the majority are read-only) configuration options on the Org can also be updated.

Getting started with PowerShell IoT on Raspbian (Raspberry Pi)

During a cleanup over the weekend I found a Raspberry Pi that wasn't doing anything. I figured that now PowerShell Core had reached version 6.2.1 I should power it up and have a play around with some sensors and outputs using PowerShell IoT. I ran into a couple of gotchas that took me some searching and head scratching to figure out, so I'm documenting them here for next time.

Installing PowerShell Core was straight forward. I followed the Microsoft Guide here and pretty quickly I was up and running with PowerShell Core 6.2.1.

PowerShell Core 6.2.1 on Raspbian.png

Enabling SPI and I2C Pins

By default on the latest Raspbian Stretch Lite build (that I installed – April 2019) you have to go into Preferences => Raspberry Pi Configuration and enable SPI and I2C if you are using displays etc.

GPIO Enable on Raspberry Pi for PowerShell IoT.png

GPIO Pins

Having built a number of projects with IoT devices I thought I knew what GPIO Pins were and how to interact with them. However I quickly failed when trying to flip the outputs on GPIO pins using the Set-GpioPin cmdlet from the Microsoft.PowerShell.IoT PowerShell Module. It turns out that the Microsoft PowerShell IoT Module utilises the WiringPi scheme for Pin numbering. Having not done any IoT with a Raspberry Pi before, I was not familiar with this underlying C library. Therefore whilst I was trying to use GPIO number 8, for example, what I really needed to be addressing was Pin 10.

The schematic below shows the 40 Pin GPIO connector with the WiringPi Pin numbers.


With that knowledge and a few Red Green Blue (RGB) LEDs wired to multiple GPIO outputs held high you can quickly get some nice colours.
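
For example, something along these lines (a sketch assuming the Microsoft.PowerShell.IoT module is installed, with illustrative WiringPi pin numbers for an RGB LED wired to three GPIO outputs) lights the LED by holding the outputs high.

Import-Module Microsoft.PowerShell.IoT

# WiringPi pin numbers wired to the Red, Green and Blue legs of the LED (illustrative)
$redPin = 0
$greenPin = 2
$bluePin = 3

# Hold the outputs high to light the LED
Set-GpioPin -Id $redPin -Value High
Set-GpioPin -Id $greenPin -Value High
Set-GpioPin -Id $bluePin -Value High

# Read back the current state of the pins
Get-GpioPin -Id $redPin, $greenPin, $bluePin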


With the SSD1306 library you can quickly and easily output some text to an OLED display.

Note: After connecting the display you will need to reboot your Raspberry Pi to get the display showing your text. I didn't spend too much time on it, but I did notice that it doesn't have many of the capabilities the Adafruit SSD1306 library for Python does, such as fonts, text size, text position etc.

New-OledDisplay | Set-OledText -Value "pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1      pwsh Core 6.2.1"


Summary

The basics are there to set GPIO pins high and low which in turn could then trigger a relay that activates something else. The available libraries and examples only extend to a few other sensors and output devices. It was fun to have a quick mess around and understand the capabilities, and I’m sure it would be possible to develop some interesting projects with it. With Winter now here, maybe there will be a rainy weekend soon and I’ll be inspired to build something.

Azure AD Log Analytics KQL queries via API with PowerShell

Log Analytics is a fantastic tool in the Azure Portal for querying Azure Monitor events, providing the ability to quickly create queries using KQL (Kusto Query Language). Once you've created a query, however, you may want to run it through automation, negating the need to use the Azure Portal every time you want the associated report data.

In this post I detail;

  • creating a Log Analytic Workspace
  • enabling API Access
  • querying Log Analytics using the REST API with PowerShell
  • outputting data to CSV

Create a Workspace

We want to create a Workspace for our logs and queries. I created mine using the Azure Cloud Shell in the Azure Portal. I’m using an existing Resource Group. If you want it in a new Resource Group either create the RG through the portal or via the CLI using New-AzResourceGroup

$rgName = 'MYLogAnalytics-REPORTING-RG'
$location = 'australiaeast'
New-AzOperationalInsightsWorkspace -ResourceGroupName $rgName -Name Azure-Active-Directory-Logs -Location $location -Sku free

The Workspace will be created.

Create LogAnalytics Workspace.PNG

Next we need to get the logs into our Workspace. In the Azure Portal under Azure Active Directory => Monitoring => Diagnostic settings select + Add Diagnostic Setting and configure your Workspace to get the SignInLogs and AuditLogs.

API Access

In order to access the Log Analytics Workspace via API we need to create an Azure AD Application and assign it permissions to the Log Analytics API. I already had an Application I was using to query the Audit Logs so I added the Log Analytics to it.

On your Azure AD Application select Add a permission => APIs my organization uses and type Log Analytics => select Log Analytics API => Application permissions => Data.Read => Add permissions

Finally select Grant admin consent (for your Subscription) and take note of the API URI for your Log Analytics API endpoint (westus2.api.loganalytics.io for me, as shown below).

API Access to Log Analytics with KQL

Under Certificates and secrets for your Azure AD Application create a Client Secret and record the secret for use in your script.

Azure AD Application Secret.PNG

Link Log Analytics Workspace to Azure AD Application

On the Log Analytics Workspace that we created earlier we need to link our Azure AD App so that it has permissions to read data from Log Analytics.

On your Log Analytics Workspace select Access Control (IAM) => Add => Role = Reader and select your Azure AD App => save

Link Log Analytics Workspace to Azure AD Application.PNG

I actually went back and also assigned Log Analytics Reader access to my Azure AD Application as I encountered a couple of instances of "InsufficientAccessError – The provided credentials have insufficient access to perform the requested operation".

API Access to Log Analytics with KQL - Log Analytics Reader.PNG

Workspace ID

In order to query Log Analytics using KQL via REST API you will need your Log Analytics Workspace ID. In the Azure Portal search for Log Analytics then select your Log Analytics Workspace you want to query via the REST API and select Properties and copy the Workspace ID.

WorkspaceID for REST API Query.PNG

Querying Log Analytics via REST API

With the setup and configuration all done, we can now query Log Analytics via the REST API. I’m using my oAuth2 quick start method to make the requests. For the first Authentication request use the Get-AzureAuthN function to authenticate and authorise the application. Subsequent authentication events can use the stored refresh token to get a new access token using the Get-NewTokens function. The script further below has the parameters for the oAuth AuthN/AuthZ process.

#Functions
Function Get-AuthCode {
...
}
function Get-AzureAuthN ($resource) {
...
}
function Get-NewTokens {
...
}

#AuthN
Get-AzureAuthN ($resource)
# Future calls can just refresh the token with the Get-NewTokens Function
Get-NewTokens

To call the REST API we use our Workspace ID we got earlier, our URI for our Log Analytics API endpoint, a KQL Query which we convert to JSON and we can then call and get our data.

$logAnalyticsWorkspace = "d03e10fc-d2a5-4c43-b128-a067efake"
$logAnalyticsBaseURI = "https://westus2.api.loganalytics.io/v1/workspaces"
$logQuery = "AuditLogs | where SourceSystem == `"Azure AD`" | project Identity, TimeGenerated, ResultDescription | limit 50"
$logQueryBody = @{"query" = $logQuery} | convertTo-Json

$result = invoke-RestMethod -method POST -uri "$($logAnalyticsBaseURI)/$($logAnalyticsWorkspace)/query" -Headers @{Authorization = "Bearer $($Global:accesstoken)"; "Content-Type" = "application/json"} -Body $logQueryBody

Here is a sample script that authenticates to Azure as the Application queries Log Analytics and then outputs the data to CSV.
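
The CSV output portion of that looks like the sketch below, which assumes the result shape returned by the Log Analytics query API (a tables collection containing columns and rows); the output path is illustrative.

# Convert the Log Analytics REST API result (tables -> columns/rows) into objects
$columns = $result.tables[0].columns
$logEntries = foreach ($row in $result.tables[0].rows) {
    $entry = [ordered]@{}
    for ($i = 0; $i -lt $columns.Count; $i++) {
        $entry[$columns[$i].name] = $row[$i]
    }
    [pscustomobject]$entry
}

# Output to CSV
$logEntries | Export-Csv -Path .\AzureADAuditLogs.csv -NoTypeInformation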

Summary

If you need to use the power of KQL to obtain data from Log Analytics programmatically, leveraging the REST API is a great approach. And with a little PowerShell magic we can output the resulting data to CSV. If you are just getting started with KQL queries this document is a good place to start.

Output Log Analytics to CSV.PNG

VSCode Configuration Sync between environments

With every new project comes another development environment. Another installation of Visual Studio Code and the inevitable loss of productivity whilst you get all the necessary extensions installed and configured. If only there was a quick and painless way to perform a VSCode Configuration Sync between environments.

A quick search today for a method to sync configuration between VSCode environments revealed a VSCode Extension from Shan Ali Khan.

The VSCode Settings Sync Extension does everything the name implies, leveraging GitHub Gists as the configuration store. The process is super simple too. Simply;

  • Install the Settings Sync Extension
  • Generate a GitHub Access Token to allow it to create a configuration Gist (one time task)
  • Upload your VSCode Configuration from your VSCode instance that has your Gold Config
  • Install the Settings Sync Extension on the target machine
  • Download your config and watch as it retrieves your configuration and installs all your extensions

The last two steps can then be repeated in each new environment. And if you enable Auto-upload (disabled by default) on your Master Configuration machine your configuration stored in the associated Gist will always be the latest. Likewise you can Auto Download your configuration to target machines (disabled by default).

The documentation for Settings Sync is super easy to follow and will have your environments in sync super quick.

The screenshot below shows Settings Sync downloading the 36 extensions I have configured that I use for the different projects I write into a new environment.

VSCode Configuration Settings Sync between computers

Thanks for your work Shan Ali Khan. I’ve donated to your great extension.

Windows Terminal with Tabs, on Steroids


At Microsoft Build last week, one of the many announcements was a new Windows Terminal.

If you spend any time in an IT Support / DevOps type role and you check out that second link above, you'll be mightily keen for this new Terminal.

Tabs in a Terminal Window, YES (heck, I remember paying for a product to provide that to me in a browser 15+ years ago); a Terminal Window that is a standard command prompt (with Unicode support), YES; a Terminal Window for cross-platform CMD, PowerShell, PowerShell Core and Windows Subsystem for Linux, DAMN YES. And of course you don't want to have to wait for this, you want it now.

So did I, so I built the Preview Alpha Release. This post details how I did it.

Windows 10 Tabbed Terminal with icons

Prerequisites

There are a few hoops you need to jump through to get on this right now, as it isn’t available as a download. It will be coming to Windows 10 in a few months, but let’s get it now.

  • Become a Windows Insider by registering for a Windows Insider Account here
  • Have a Windows 10 v 1903 build (via registering for Windows Insiders above)
    • the process to do this I show below
  • Inside your Windows 10 machine you will then need;
    • Windows 10 SDK v 1903
    • Visual Studio 2017 (I use 2019)
      • Choose the following Workloads
        • Desktop Development with C++
        • Universal Windows Platform Development
        • For Visual Studio 2019, you’ll also need to install the “v141 Toolset” and “Visual C++ ATL for x86 and x64”
    • Git for Windows command-line

Windows 10 Test Machine Version 1903

I built a Windows 10 1709 Virtual Machine in Azure from the Azure MarketPlace. Having connected to it, I needed to enable the Windows Insider Program on it. To do that select;

Windows => Settings => Update & Security => Windows Insider Program => Get Started

Enable Windows Insider.PNG

Select Link an account and provide the account you used to sign up for Windows Insiders.

Link an Account.PNG

If, when you attempt to link an account, you get a blank login window/page when prompted for your Windows Insider Account, you may need to make a couple of changes to the Windows 10 Local Security Policy Security Options. Below is the configuration of my test Windows Insider Windows 10 Virtual Machine. I've highlighted a few options I needed to update.

Local Security Policy.PNG

Select Skip ahead to the next Windows release to update Windows 10.

Skip ahead to the next Windows Release.PNG

If you are doing this like I am on a Windows 10 Virtual Machine in Azure, you’ll first go from build 1709 to 1803.

Windows 1709 to 1803.PNG

After Windows 10 has updated to 1803, log back in, go back to Windows Insider Program and choose Skip ahead to the next Windows release.

Skip ahead to the next Windows Release - 1903.PNG

Under Settings => Update & Security => Windows Update select Check for Updates and you will see Windows 10 version 1903 become available.

1803 to 1903.PNG

Under Windows => Settings => Update & Security => Enable Developer Mode

Enable Developer Mode.PNG

Terminal Application

With the other dependencies detailed in the prerequisites above (Windows 10 1903 SDK, Visual Studio etc) downloaded and installed on your Windows 10 machine we can get on to the fun bit of building the new Terminal. Create a folder where you want to put the source to build the terminal and from a command prompt change directory into it and run the following commands;

git clone https://github.com/microsoft/Terminal.git
cd Terminal
git submodule update --init --recursive

Git Clone.PNG

Then in Visual Studio select Open a project or solution and open the Terminal Visual Studio Solution. Select SDK Version 10.0.18362.0 and No Upgrade for the Platform Toolset

Open the Solution in VS -1.PNG

Select Release and x64 and then from the Build menu Build Solution.

Build Release x64.PNG
Finally, right click on CascadiaPackage and select Deploy.

Deploy.PNG

Terminal (Dev) will then be available through the Start Menu.

Windows Terminal Dev.PNG

Opening the Windows Terminal will give you a single Terminal Window. Press Ctrl + T to open an additional tab.

Use the drop-down menu to select Settings and you will be presented with the JSON configuration document. See mine below (under Icons), which enables CMD, PWSH, PowerShell, WSL – Ubuntu and WSL – SUSE.

Icons

To have icons for your terminal tabs obtain some 32×32 pixel icons for your different terminals and drop them into the RoamingState directory under the Windows Terminal App. For me that is

C:\Users\darrenjrobinson\AppData\Local\Packages\WindowsTerminalDev_8wekyb3d8bbwe\RoamingState

Then update your profiles.json configuration file located in the same directory and add the name of the appropriate icon for each terminal.

Summary

As much as we use nice UIs for a lot of what we do as Devs/IT Pros, there are still numerous tasks we perform using terminal shells. A tabbed experience for these, complete with customisation, brings them into the 21st century. Now to wait another month or two for it to be delivered as part of the next Windows 10 Build.