Granfeldt PowerShell Management Agent Schema HRESULT: 0x80231343 Error

Yesterday I was modifying the Schema configuration on a Granfeldt PowerShell Management Agent on a Microsoft Identity Manager 2016 SP1 Server.

I was changing the Anchor attribute to a different attribute, and on attempting to refresh the schema or view the configuration I got the following error;

Unable to retrieve schema. Error: Exception from HRESULT 0x80231343

Unable to retreive Schema.PNG

I knew I’d seen this before, but nothing was jumping to mind. And this was a particularly large Schema script.

After some debugging I realized it was because the attribute I had changed the Anchor to was also listed as a regular attribute in the Schema script. It wasn’t obvious as the attribute entry was multiple pages deep in the script.

Essentially if you are seeing the error Unable to retrieve schema. Error: Exception from HRESULT 0x80231343 there are two likely causes (a minimal example of a valid declaration follows the list below);

  • you haven’t declared an Object Class e.g. User
    • $obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "User"
  • the attribute you have as your anchor is also listed as an attribute in the schema script e.g.
    • $obj | Add-Member -Type NoteProperty -Name "Anchor-ObjectId|String" -Value "333a7e07-e321-42ea-b0a5-820598f2adee"
    • $obj | Add-Member -Type NoteProperty -Name "ObjectId|String" -Value "333a7e07-e321-42ea-b0a5-820598f2adee"
      • you don’t need this entry, just the Anchor entry
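
For reference, here is a minimal sketch of a valid Schema script (the attribute names and sample values are hypothetical): the anchor is declared once with the Anchor- prefix and is not repeated as a plain attribute.

# Minimal Granfeldt PSMA Schema script sketch - attribute names/values are hypothetical
$obj = New-Object -Type PSCustomObject
# Anchor declared once, with the Anchor- prefix
$obj | Add-Member -Type NoteProperty -Name "Anchor-ObjectId|String" -Value "333a7e07-e321-42ea-b0a5-820598f2adee"
# Object Class must be declared
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "User"
# Other attributes follow, but must not re-declare the anchor attribute
$obj | Add-Member -Type NoteProperty -Name "DisplayName|String" -Value "Jane Doe"
$obj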

Hopefully this helps me quickly find the reason next time I make a simple mistake like this in the Schema script.


Using SailPoint IdentityNow v3 API’s with PowerShell

The SailPoint IdentityNow SaaS product is evolving. I’ve previously posted about integrating with the IdentityNow API’s using PowerShell;

IdentityNow now has v3 API’s which are essentially the v2 and non-Published API’s with the added benefit of being able to obtain an oAuth token from a new oAuth Token endpoint. Unlike the v2 process for enabling API integration, v3 currently requires that SailPoint generate and provide you with the ClientID and Secret. This Compass document (at the very bottom) indicates that this will be the preferred method for API access moving forward.

The process to get an oAuth Token is;

  • Generate Credentials using your IdentityNow Admin Username and Password as detailed in my v1 Private API post
    • Lines 1-12
  • Use the credentials from the step above in the oAuth token request
  • Use the ClientID and Secret as Basic AuthN to the Token endpoint
  • Obtain an oAuth Token contained in the resulting $Global:v3Token variable

Authentication Script

Update (a sketch of the flow follows this list):

  • Line 2 with your Org name
  • Line 5 with your Admin Login Name
  • Line 6 with your Admin Password
  • Line 15 with your SailPoint supplied ClientID
  • Line 16 with your SailPoint supplied Secret
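
The line numbers above refer to the full script; as a rough sketch of the same flow (all values below are placeholders, the token endpoint is an assumption, and the admin credentials are the API-format ones generated per the v1 Private API post):

# Sketch only - placeholder values throughout
$orgName = "yourOrgName"

# API-format admin credentials generated per the v1 Private API post
$adminUSR = "yourAdminAPIUsername"
$adminPWD = "yourAdminAPIPasswordHash"

# SailPoint supplied v3 ClientID and Secret, Base64 encoded for Basic AuthN to the Token endpoint
$clientID = "yourSailPointSuppliedClientID"
$clientSecret = "yourSailPointSuppliedSecret"
$basicAuth = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("$($clientID):$($clientSecret)"))

# Request the oAuth Token
$tokenURI = "https://$($orgName).api.identitynow.com/oauth/token?grant_type=password&username=$($adminUSR)&password=$($adminPWD)"
$Global:v3Token = Invoke-RestMethod -Method Post -Uri $tokenURI -Headers @{Authorization = "Basic $($basicAuth)"}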

Your resulting token will then look like this;

SailPoint v3 API oAuth Token.PNG

Using the v3 oAuth Access Token

So far I’ve found that I can use the oAuth Token to leverage the v2 and non-published API’s simply by using the JWT oAuth Token in the Header of the web request e.g.

@{Authorization = "Bearer $($Global:v3Token.access_token)"}

Depending on which API you are interacting with you may also require Content-Type e.g

@{Authorization = "Bearer $($Global:v3Token.access_token)"; "Content-Type" = "application/json"}
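
For example, calling a v2 API with that header looks like this (the endpoint and variables here are illustrative, not from the original post):

# Illustrative v2 API call using the v3 oAuth Token
$headers = @{Authorization = "Bearer $($Global:v3Token.access_token)"; "Content-Type" = "application/json"}
$response = Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/v2/accounts?sourceId=$($sourceID)" -Headers $headers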

Summary

Talk to your friendly SailPoint Support Rep and get your v3 API ClientID and Secret, and discard the previous hack of scraping the Admin Portal for the oAuth Token, saving a few hundred lines of code.

Enabling Requestable Roles in SailPoint IdentityNow using PowerShell

Recently I wrote this post about Retrieving, Creating, and Managing SailPoint IdentityNow Roles using PowerShell.

Last week SailPoint enhanced Roles with the ability to request them. The details are located on Compass here.

I had a number of Roles that we wanted to make requestable, so rather than opening each and using the Portal UI to enable them, I did it via the API using PowerShell.

As per my other Roles post, a JWT Bearer Token is required to leverage the Roles API’s. That is still the same. I covered how to obtain a JWT Bearer Token specifically for interacting with these API’s in this post here. I’m not going to cover that here so read that post to get up to speed with that process.

Enabling Roles to be Requestable

The following script queries IdentityNow to return all Roles, then iterates through them to make them requestable; a sketch follows the list below. Update;

  • Line 2 for your IdentityNow Org Name
  • after Line 9 you can refine the roles you wish to make requestable
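
A sketch of that flow (the role list/update endpoint paths and response shape here are assumptions based on the private Role API’s covered in the earlier Roles post):

# Sketch only - endpoint paths are assumptions based on the private Role API's
$orgName = "yourOrgName"
# JWT Bearer Token obtained as per the earlier post
$headers = @{Authorization = "Bearer $($token)"; "Content-Type" = "application/json"}

# Return all Roles
$roles = Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/cc/api/role/list" -Headers $headers

# Refine the collection here if only some Roles should become requestable
foreach ($role in $roles.items) {
    # Flag the Role as requestable and update it
    $update = @{id = $role.id; requestable = $true} | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri "https://$($orgName).api.identitynow.com/cc/api/role/update" -Headers $headers -Body $update
}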

Summary

Using the API we can quickly enable existing IdentityNow roles to be requestable.  When creating new Roles we can add in the attribute Requestable with the value True if we want them to be requestable.

Using Invoke-WebRequest calls within a Granfeldt PowerShell MA for Microsoft Identity Manager


If you use PowerShell extensively you should be familiar with the Invoke-RestMethod cmdlet and the ability for PowerShell to call API’s and receive information. The great thing about Invoke-RestMethod is the inbuilt conversion of the results to PowerShell Objects. However there are times when you need the raw response (probably because you are trying to bend things in directions they aren’t supposed to be; story of many of my integrations).

Granfeldt PowerShell Management Agent scripts that use Invoke-WebRequest calls will in turn leverage the Internet Explorer COM API on the local machine (unless -UseBasicParsing is specified). That means the account performing those tasks will need the necessary permissions / Internet Explorer configuration to do so.
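
Where the parsed HTML DOM isn’t required, one alternative is to skip the IE dependency altogether (the URL below is illustrative):

# -UseBasicParsing avoids the Internet Explorer COM dependency entirely
$response = Invoke-WebRequest -Uri "https://api.example.com/users" -UseBasicParsing
$response.StatusCode   # raw HTTP status code
$response.Content      # raw response body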

Enabling Invoke-WebRequest for the FIM/MIM Synchronization Server Service Account

Logon to your Microsoft Identity Manager Synchronization Server using the Service Account associated with the Forefront Identity Manager Synchronization Service. If you’ve secured the service account properly you will need to temporarily allow that service account to Log on Locally.

FIM Sync Service Account.PNG

Open Internet Explorer and open Internet Options. Select the Security Tab, the Internet zone and select Custom Level.

Locate the Scripting section and set Active scripting to Enable. Set the Allow websites to prompt for information using scripted windows to Disable. Select Ok and Ok.

MIM Sync Service Account IE Security Settings.PNG

With the configuration completed, go back and change the Forefront Identity Manager Service Account’s local permissions back so it can’t log on locally.

Your Import / Export Granfeldt PowerShell Management Agent Scripts can now use Invoke-WebRequest where the requests/responses use the Internet Explorer COM API and respect the Internet Explorer security settings for the user profile that the requests are being made by.

Hopefully this helps someone else that is wondering why the scripts that work standalone fail when operating under the FIM/MIM Sync Service Account by the Granfeldt PowerShell Management Agent.

PowerShell – The underlying connection was closed: An unexpected error occurred on a send.

What should have been just another quick script as a WebRequest to get some data turned into a debugging session when both Invoke-RestMethod and Invoke-WebRequest returned The underlying connection was closed: An unexpected error occurred on a send.

Invoke-RestMethod

Invoke-RestMethod : The underlying connection was closed: An unexpected error occurred on a send.
At line:1 char:15
+ ... postdata2 = Invoke-RestMethod -Uri $post.URL -Method Get -UserAgent $ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-RestMethod], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand

Invoke-WebRequest

Invoke-WebRequest : The underlying connection was closed: An unexpected error occurred on a send.
At line:3 char:21
+ ... $postdata = Invoke-WebRequest -Uri $post.URL -Method Get -UserAgent $ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

It’s not unusual for PowerShell defaults to have issues with TLS, and the ambiguous nature of the error made me jump to the conclusion that I probably just needed to enforce TLS 1.2 using this one-liner.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

But, no joy. Same error.

Looking at the certificate for the URL my script was connecting to showed that the certificate was valid.

Certificate is Valid.PNG

After a bunch of searching, and finding a couple of working scenarios (here and here) that weren’t ideal, the resolution was to allow TLS, TLS 1.1 and TLS 1.2 by using the following line before invoking the web request.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls -bor [Net.SecurityProtocolType]::Tls11 -bor [Net.SecurityProtocolType]::Tls12

Posting this as I know I’ll need it again in the future, and it will allow me to find it quickly.


Speeding up PowerShell lookups across large Collections

This week I needed to create a report based on information returned from two queries. The query results were contained in two separate collections (50k+ objects each). Taking the smaller filtered collection and looking up the other collection for the additional information using PowerShell like this proved frustratingly slow:

foreach ($obj in $collection1){ $extraData = $collection2 | Where-Object {$_.UserPrincipalName -eq $obj.UserPrincipalName} }

An alternative then was to query directly (via an API) for the additional information whilst iterating through the main collection rather than searching for it in the other collection e.g.

foreach ($obj in $collection1){ $extraData = Invoke-RestMethod -method GET ...... }

That too was way too slow, and wasn’t really being a nice net citizen for the API on the end of 50k+ queries.

Solution

My solution was to join the two collections of objects and then build my report based off just one collection. Step in the Join-Object function from Warren F.

Join-Object provides a lot of flexibility on how and what to join between collections. For my requirements I just needed to use Join-Object to join based on a common key and bring in all the data from the other collection. That then looked like this in PowerShell;

$reportData = Join-Object -Left $collection1 -Right $collection2 -LeftJoinProperty UserPrincipalName -RightJoinProperty UserPrincipalName -Type AllInLeft

Whilst this one line to join my two collections takes just over an hour to execute, my entire report now completes in less than 90 minutes vs the 5 1/2 hours it was previously taking to run. Thx psCookieMonster.

Microsoft Graph and the $whatIf option

What we know today as the Microsoft Graph has evolved over the last few years from a number of different API’s that were developed by different product teams within Microsoft (e.g Azure AD, Office 365, Outlook). That doesn’t mean the old ones have gone away, but it does mean that we can connect to the Microsoft Graph API and leverage the API’s we used to interface with independently.

What this means is that where information is actually coming from is obfuscated. But there is a way to find out.

The $whatIf option is a mostly undocumented switch that provides visibility on the source of information returned by Microsoft Graph from differing back-end API’s. The easiest way to use this is via the Graph Explorer.

Running a query using Graph Explorer for /me returns a bunch of information.

Graph me base request.PNG

Running the same query with the $whatIf option returns what the /me call actually performs on the API.

Graph me base request with whatIf.PNG

In this case it returns the base user information as shown below in the $select query.

{"Description":"Execute HTTP request","Uri":"https://graph.windows.net/v2/dcd219dd-bc68-4b9b-bf0b-4a33a796be35/users('48d31887-5fad-4d73-a9f5-3c356e68a038')?$select=businessPhones,displayName,givenName,jobTitle,mail,mobilePhone,officeLocation,preferredLanguage,surname,userPrincipalName,id","HttpMethod":"GET"}

If I change the query to a specific user, request information that comes from different products behind the Microsoft Graph, and use the $whatIf option, we can see where the data is coming from.

https://graph.microsoft.com/v1.0/users/MeganB@M365x214355.onmicrosoft.com?$select=mail,aboutMe&whatIf

The screenshot below shows that the request actually executed 2 requests in parallel and merged the entity responses.

Data from multiple APIs.PNG

The result then shows where the two queries went. In this case graph.windows.net and sharepoint.com.

{"Description":"Execute 2 request in parallel and merge the entity responses","Request1":{"Description":"Execute HTTP request","Uri":"https://graph.windows.net/v2/dcd219dd-bc68-4b9b-bf0b-4a33a796be35/users('MeganB@M365x214355.onmicrosoft.com')?$select=mail","HttpMethod":"GET"},"Request2":{"Description":"Execute HTTP request","Uri":"https://m365x214355.sharepoint.com/_api/SP.Directory.DirectorySession/GetSharePointDataForUser('48d31887-5fad-4d73-a9f5-3c356e68a038')?$select=aboutMe","HttpMethod":"GET"}}

Summary

The next time you’re trying to work out where information is coming from when using the Microsoft Graph API, try adding the $whatIf switch to your query and have a look at the Response.

Searching and Retrieving your GitHub Gists using PowerShell

GitHub Gists, I love them. I treat them as my personal snippet storage as well as the repository for many posts on this blog. If you are new to them, it is important to know that you can have public and secret Gists. Secret Gists thereby give you your private little snippet storage environment.

At some point though you’re going to want to retrieve a snippet based on a fragment of knowledge, such as an API, keyword or similar. Through the Gist web page you can search, and it will give you a list of your Gists where your keyword exists; hitting enter will give you the full list of matching Gists.

But what if you want to search your Gists based on other or more complex criteria (e.g. public vs secret)? Step in the Gist API, and my scripting tool of choice. This post will quickly detail authenticating to GitHub, retrieving all your Gists and then narrowing down the one(s) you’re after. Of course there are PowerShell Modules that can do some of this, but I needed something light and portable without needing to install another module.

If nothing more this is an easy post for me to find, to retrieve my script to then find my Gists.

Obtaining a Personal Access Token

Log in to GitHub using your account and head to https://github.com/settings/tokens and select Generate new token. Give it a description and select the gist scope.

GistToken.PNG

After selecting Generate Token you will be shown your access key. Copy it and store it safely. It won’t be displayed again and is your password to access GitHub.

Simple retrieval and Search Script

Update the following script;

  • Line 2 for your username
  • Line 4 for your token
  • Line 9 for your search term

As it stands the script is looking for Secret Gists (as per line 21). Change to $true for Public. It will find all that meet the search query (line 9) in the Description field. It will then get the Raw Gist contents and output it to the console. Update accordingly for what you want to happen after retrieval.
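
As a rough guide, a sketch of the same approach (placeholder values; the authenticated /gists endpoint returns both public and secret Gists, and paging is omitted for brevity):

# Sketch only - username, token and search term are placeholders
$githubUser = "yourGitHubUsername"
$token = "yourPersonalAccessToken"
$searchTerm = "IdentityNow"

# Basic AuthN header for the GitHub API
$auth = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("$($githubUser):$($token)"))
$headers = @{Authorization = "Basic $($auth)"}

# Retrieve the authenticated user's Gists (public and secret)
$gists = Invoke-RestMethod -Method Get -Uri "https://api.github.com/gists" -Headers $headers

# Narrow down to Secret Gists whose Description contains the search term
$found = $gists | Where-Object {$_.public -eq $false -and $_.description -match $searchTerm}

# Get the Raw contents of each matching Gist file and output to the console
foreach ($gist in $found) {
    foreach ($file in $gist.files.PSObject.Properties) {
        Invoke-RestMethod -Method Get -Uri $file.Value.raw_url -Headers $headers
    }
}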


Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 3


This is the third and likely last post in this series. In Part 1 I introduced the capability to have Virtual PowerShell Environments using Docker and the full Windows 10 / Server 2019 Build 1809 container images. In Part 2 I detailed remotely accessing the Azure RM Windows 10 / Server 2019 host that contains the Docker Container with our full Windows 1809 environment (and therefore PowerShell Desktop).

In this post I’ll detail building a Docker Image based off of the Windows 1809 Container image. The resulting Docker Image will;

  • create a base PowerShell environment with the necessary PowerShell Modules for performing common Azure based administrative activities
  • be usable for administrative functions
  • be able to be started using SSH from other SSH Clients (e.g. Putty)

Building a Docker Image based off Windows 1809 Container

Using the capabilities I showed in Part 2 of this series I’m going to build the image from Azure Cloud Shell that I’ll use to SSH into the Windows 10 AzureRM hosted Virtual Machine.

Having logged in to Azure Cloud Shell and connected to my Azure VM via SSH as detailed in Part 2, I then created a new CMD file named NewImage1809.cmd containing the following command.

docker run -it --name psNov2018 mcr.microsoft.com/windows:1809 powershell

New Image Start

Running the commands hostname and dir "c:\program files\WindowsPowerShell\Modules" shows that we are inside the Windows 1809 Container Image.

Hostname and Existing Modules.PNG

I used the Cloud Shell Upload option to upload a New Env Setup.ps1 script that contains the PowerShell commands to install a bunch of PowerShell Modules. Using the Cloud Shell Editor I opened the file.

New Env Setup Script

Here is that series of commands.
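
An illustrative set of those commands is below (the exact module list in the original script is an assumption):

# Illustrative only - the original script installs a similar set of Azure administration modules
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name AzureRM -Force
Install-Module -Name AzureAD -Force
Install-Module -Name MSOnline -Force
Install-Module -Name CredentialManager -Force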

I can then select the block of commands and paste it into the PowerShell terminal console below and hit enter for it to execute them.

Execute

One by one the modules are installed

Modules Installed

When completed, enter exit.

Exit after Module Install

Now we can stop our Container.

docker stop psNov2018

Docker stop.PNG

and commit our changes to a new image named ‘powershell-env-image-nov18’

docker commit psNov2018 powershell-env-image-nov18

Docker Commit.PNG

Listing the docker images with

docker image ls

shows our new Image.

Docker Image List.PNG

We can now Run our new image with

docker run -it powershell-env-image-nov18:latest powershell

Run Image.PNG

We can see the modules we installed previously.

Image Module List.PNG

and we can import them.

Import Modules.PNG

Putty to PowerShell Virtual Environment

As good as Azure Cloud Shell is, and as convenient as it is for quick tasks and execution, you’re going to want to use an SSH Client. I’ll show using Putty, but you can use whatever your favourite client is. To connect to the environment I;

  • using the Putty Key Generator I loaded the OpenSSH Private Key generated in Part 1 and saved it in Putty ppk format
  •  using Putty Pageant I can use the ppk formatted key for my SSH session to the Windows 10 1809 host
    • Note: WinSCP can also utilise the ppk key for authentication which makes getting files onto the Host very easy
  • if you find you don’t automatically get an elevated session that allows you to start the Docker Container/Image, create the following registry key on the Windows 10/Server 2019 host and reconnect: a DWORD (32-bit) value of 1 for LocalAccountTokenFilterPolicy (a PowerShell equivalent follows the snippet)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001

I can then connect with Putty using my key, and run the DockerPS.cmd file I showed in Part 2 which outputs the version of PowerShell.

Summary

In this post I’ve shown how to customise a Windows 1809 Container for a Virtual PowerShell environment, along with using client based SSH and SCP tools to connect to and manage the base Host.

Searching & Returning all Objects/Users from a SailPoint IdentityNow Source

There are times when you need to get an extract of all objects on an IdentityNow Source. Just a particular Source, not the objects from the Identity Cube with attributes contributed from multiple sources.

I’ll cover how I do that in this post, which in turn also handles paging the results from IdentityNow as the SearchLimit is 2500 objects.

The basis of the logic (sketched in code after this list) is;

  • Define the Source to retrieve objects from
  • Define the number of results you wish to return per page (maximum is 2500)
  • Page results until you return the base object for all objects on the Source
  • Retrieve the Full Object details for each object
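
A sketch of that logic (the v2/accounts endpoint is per the Summary below; the limit/offset parameter names and the Basic AuthN with the API ClientID/Secret are assumptions):

# Sketch only - tenant, credentials and Source ID are placeholders
$orgName = "yourOrgName"
$sourceID = "12345"
$searchLimit = 2500

# v2 API Basic AuthN using the API ClientID / ClientSecret
$auth = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("$($clientID):$($clientSecret)"))
$headers = @{Authorization = "Basic $($auth)"}

# Page through the base objects on the Source
$baseObjects = @()
$offset = 0
do {
    $uri = "https://$($orgName).api.identitynow.com/v2/accounts?sourceId=$($sourceID)&limit=$($searchLimit)&offset=$($offset)"
    $page = @(Invoke-RestMethod -Method Get -Uri $uri -Headers $headers)
    $baseObjects += $page
    $offset += $searchLimit
} while ($page.Count -eq $searchLimit)

# Retrieve the Full Object details for each base object
$fullObjects = foreach ($obj in $baseObjects) {
    Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/v2/accounts/$($obj.id)" -Headers $headers
}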

The Script

The following script has been written to run in VS Code and provide a Progress bar using the psInlineProgress PowerShell Module available from the PowerShell Gallery and here. If you are also running this via VSCode, after obtaining psInlineProgress update the psInlineProgress.psd1 file to change Line 36 as shown below. You should be able to find it in C:\Program Files\WindowsPowerShell\Modules\psInlineProgress\1.1

#PowerShellHostName = 'ConsoleHost'
PowerShellHostName = 'Visual Studio Code Host'

Update;

  • Line 3 for your IdentityNow API ClientID
  • Line 5 for your IdentityNow API ClientSecret
  • Line 9 for your IdentityNow Tenant name
  • Line 13 for the ID of the IdentityNow Source you want to retrieve entities from
  • Line 17 for the number of entries to return per page (2500 is the maximum)

Example

The output below shows using the script to return 2591 objects from an IdentityNow Source.

Search and return all objects on an IdentityNow Source

Summary

Using the v2/accounts IdentityNow API we can retrieve the base objects associated with an IdentityNow Source and then call it again with each objectID to retrieve the full object record. This can be useful if you want to programmatically extract and process the information rather than downloading a CSV via the IdentityNow Portal, say for example for ingestion into another system or Identity Management tool. But that’s a post for another time.