Azure AD Log Analytics KQL queries via API with PowerShell

Log Analytics is a fantastic tool in the Azure Portal for querying Azure Monitor events, letting you quickly build queries using KQL (Kusto Query Language). Once you’ve created a query, however, you may want to run it through automation, negating the need to use the Azure Portal every time you want the associated report data.

In this post I detail;

  • creating a Log Analytic Workspace
  • enabling API Access
  • querying Log Analytics using the REST API with PowerShell
  • outputting data to CSV

Create a Workspace

We want to create a Workspace for our logs and queries. I created mine using the Azure Cloud Shell in the Azure Portal. I’m using an existing Resource Group. If you want it in a new Resource Group, either create the RG through the portal or via the CLI using New-AzResourceGroup.

$rgName = 'MYLogAnalytics-REPORTING-RG'
$location = 'australiaeast'
New-AzOperationalInsightsWorkspace -ResourceGroupName $rgName -Name Azure-Active-Directory-Logs -Location $location -Sku free

The Workspace will be created.

Create LogAnalytics Workspace.PNG

Next we need to get the logs into our Workspace. In the Azure Portal under Azure Active Directory => Monitoring => Diagnostic settings select + Add Diagnostic Setting and configure your Workspace to get the SignInLogs and AuditLogs.

API Access

In order to access the Log Analytics Workspace via API we need to create an Azure AD Application and assign it permissions to the Log Analytics API. I already had an Application I was using to query the Audit Logs, so I added the Log Analytics API permissions to it.

On your Azure AD Application select Add a permission => APIs my organization uses and type Log Analytics => select Log Analytics API => Application permissions => Data.Read => Add permissions

Finally select Grant admin consent (for your Subscription) and take note of the URI for your Log Analytics API endpoint (westus2.api.loganalytics.io for me), as shown below.

API Access to Log Analytics with KQL

Under Certificates and secrets for your Azure AD Application create a Client Secret and record the secret for use in your script.

Azure AD Application Secret.PNG

Link Log Analytics Workspace to Azure AD Application

On the Log Analytics Workspace that we created earlier we need to link our Azure AD App so that it has permissions to read data from Log Analytics.

On your Log Analytics Workspace select Access Control (IAM) => Add => Role = Reader and select your Azure AD App => save

Link Log Analytics Workspace to Azure AD Application.PNG

I actually went back and also assigned Log Analytics Reader access to my Azure AD Application, as I encountered a couple of instances of InsufficientAccessError – “The provided credentials have insufficient access to perform the requested operation”.

API Access to Log Analytics with KQL - Log Analytics Reader.PNG

Workspace ID

In order to query Log Analytics using KQL via the REST API you will need your Log Analytics Workspace ID. In the Azure Portal search for Log Analytics, select the Log Analytics Workspace you want to query via the REST API, then select Properties and copy the Workspace ID.

WorkspaceID for REST API Query.PNG

Querying Log Analytics via REST API

With the setup and configuration all done, we can now query Log Analytics via the REST API. I’m using my oAuth2 quick start method to make the requests. For the first Authentication request use the Get-AzureAuthN function to authenticate and authorise the application. Subsequent authentication events can use the stored refresh token to get a new access token using the Get-NewTokens function. The script further below has the parameters for the oAuth AuthN/AuthZ process.

#Functions
Function Get-AuthCode {
...
}
function Get-AzureAuthN ($resource) {
...
}
function Get-NewTokens {
...
}

#AuthN
Get-AzureAuthN ($resource)
# Future calls can just refresh the token with the Get-NewTokens Function
Get-NewTokens

To call the REST API we use the Workspace ID we recorded earlier, the URI for our Log Analytics API endpoint, and a KQL query which we convert to JSON. We can then make the call and get our data.

$logAnalyticsWorkspace = "d03e10fc-d2a5-4c43-b128-a067efake"
$logAnalyticsBaseURI = "https://westus2.api.loganalytics.io/v1/workspaces"
$logQuery = "AuditLogs | where SourceSystem == `"Azure AD`" | project Identity, TimeGenerated, ResultDescription | limit 50"
$logQueryBody = @{"query" = $logQuery} | ConvertTo-Json

$result = Invoke-RestMethod -Method POST -Uri "$($logAnalyticsBaseURI)/$($logAnalyticsWorkspace)/query" -Headers @{Authorization = "Bearer $($Global:accesstoken)"; "Content-Type" = "application/json"} -Body $logQueryBody

Here is a sample script that authenticates to Azure as the Application, queries Log Analytics, and then outputs the data to CSV.
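
That script isn’t reproduced here, but the CSV step is straightforward once you know the response shape. The query API returns tables of columns and rows rather than objects, so we pair them up before exporting. Below is a minimal sketch, assuming the $result variable from the Invoke-RestMethod call above (the output path is illustrative).

# Minimal sketch, assuming $result from the Invoke-RestMethod call above
$table = $result.tables[0]

$report = foreach ($row in $table.rows) {
    $obj = [ordered]@{}
    for ($i = 0; $i -lt $table.columns.Count; $i++) {
        # pair each column name with the matching value in the row
        $obj[$table.columns[$i].name] = $row[$i]
    }
    [pscustomobject]$obj
}

$report | Export-Csv -Path .\AuditLogs.csv -NoTypeInformation   # path is illustrative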

Summary

If you need to use the power of KQL to obtain data from Log Analytics programmatically, leveraging the REST API is a great approach. And with a little PowerShell magic we can output the resulting data to CSV. If you are just getting started with KQL queries this document is a good place to start.

Output Log Analytics to CSV.PNG

Windows Terminal with Tabs, on Steroids


At Microsoft Build last week, one of the many announcements was a new Windows Terminal.

If you spend any time in an IT Support / DevOps type role and you check out that second link above, you’ll be mightily keen for this new Terminal.

Tabs in a Terminal Window? YES (heck, I remember paying for a product that provided that in a browser 15+ years ago); a Terminal Window that is a standard command prompt (with Unicode support)? YES; a Terminal Window for cross-platform CMD, PowerShell, PowerShell Core and Windows Subsystem for Linux? DAMN YES. And of course you don’t want to have to wait for this, you want it now.

So did I, so I built it from the Preview Alpha release source. This post details how I did it.

Windows 10 Tabbed Terminal with icons

Prerequisites

There are a few hoops you need to jump through to get on this right now, as it isn’t available as a download. It will be coming to Windows 10 in a few months, but let’s get it now.

  • Become a Windows Insider by registering for a Windows Insider Account here
  • Have a Windows 10 v 1903 build (via registering for Windows Insiders above)
    • the process to do this I show below
  • Inside your Windows 10 machine you will then need;
    • Windows 10 SDK v 1903
    • Visual Studio 2017 (I use 2019)
      • Choose the following Workloads
        • Desktop Development with C++
        • Universal Windows Platform Development
        • For Visual Studio 2019, you’ll also need to install the “v141 Toolset” and “Visual C++ ATL for x86 and x64”
    • Git for Windows command-line

Windows 10 Test Machine Version 1903

I built a Windows 10 1709 Virtual Machine in Azure from the Azure Marketplace. Having connected to it, I needed to enable the Windows Insider Program on it. To do that select;

Windows => Settings => Update & Security => Windows Insider Program => Get Started

Enable Windows Insider.PNG

Select Link an account and provide the account you used to sign up for Windows Insiders.

Link an Account.PNG

If, when you attempt to link an account you get a blank login window/page when being prompted for your Windows Insider Account you may need to make a couple of changes to the Windows 10 Local Security Policy Security Options. Below is the configuration of my test Windows Insider Windows 10 Virtual Machine. I’ve highlighted a few options I needed to update.

Local Security Policy.PNG

Select Skip ahead to the next Windows release to update Windows 10.

Skip ahead to the next Windows Release.PNG

If you are doing this like I am on a Windows 10 Virtual Machine in Azure, you’ll first go from build 1709 to 1803.

Windows 1709 to 1803.PNG

After Windows 10 has updated to 1803 log back in, go back to Windows Insider Program and choose Skip ahead to the next Windows release.

Skip ahead to the next Windows Release - 1903.PNG

Under Settings => Update & Security => Windows Update select Check for Updates and you will see Windows 10 version 1903 become available.

1803 to 1903.PNG

Under Windows => Settings => Update & Security => Enable Developer Mode

Enable Developer Mode.PNG

Terminal Application

With the other dependencies detailed in the prerequisites above (Windows 10 1903 SDK, Visual Studio etc) downloaded and installed on your Windows 10 machine we can get on to the fun bit of building the new Terminal. Create a folder where you want to put the source to build the terminal and from a command prompt change directory into it and run the following commands;

git clone https://github.com/microsoft/Terminal.git
cd Terminal
git submodule update --init --recursive

Git Clone.PNG

Then in Visual Studio select Open a project or solution and open the Terminal Visual Studio Solution. Select SDK Version 10.0.18362.0 and No Upgrade for the Platform Toolset

Open the Solution in VS -1.PNG

Select Release and x64 and then from the Build menu Build Solution.

Build Release x64.PNG

Finally, right click on CascadiaPackage and select Deploy

Deploy.PNG

Terminal (Dev) will then be available through the Start Menu.

Windows Terminal Dev.PNG

Opening the Windows Terminal will give you a single Terminal Window. Press Ctrl + T to open an additional tab.

Use the drop down menu to select Settings and you will be presented with the JSON configuration document. See below under Icons for mine, which enables CMD, PWSH, PowerShell, WSL – Ubuntu and WSL – Suse.

Icons

To have icons for your terminal tabs obtain some 32×32 pixel icons for your different terminals and drop them into the RoamingState directory under the Windows Terminal App. For me that is

C:\Users\darrenjrobinson\AppData\Local\Packages\WindowsTerminalDev_8wekyb3d8bbwe\RoamingState

Then update your profiles.json configuration file located in the same directory and add the name of the appropriate icon for each terminal.
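
For reference, here is a hedged fragment of what a profile entry with an icon looked like in these early builds (the schema may change between builds); icons dropped into RoamingState are referenced via the ms-appdata scheme.

{
    "name" : "PowerShell",
    "commandline" : "powershell.exe",
    "icon" : "ms-appdata:///roaming/powershell-32.png"
}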

Summary

As much as we use nice UIs for a lot of what we do as Devs/IT Pros, there are still numerous tasks we perform using terminal shells. A tabbed experience for these, complete with customisation, brings them into the 21st century. Now to wait another month or two to have it delivered as part of the next Windows 10 build.

Building Apps in Azure with only Scripting Skills – Global Azure Bootcamp 2019

Today (27 April 2019) is Global Azure Bootcamp day. It is the 6th year for the free event that is run in communities all over the world to teach Azure Cloud technologies. This year is my second. Last year I presented on Creating the Internet of YOUR Things and today I’m presenting Building Apps in Azure with only Scripting Skills.

In this session I gave a beginners’ guide to building an Azure Web App using VSCode and NodeJS. In the demos we only needed to visit the Azure Portal for a small Azure Function for our Web App to integrate with.

The objective of the presentation was to show IT Pros who have never written a Web App before how to use our modern tools (VSCode) and Azure with Serverless infrastructure to provide a streamlined process for writing a Web App, with a very small amount of JavaScript and CSS and of course PowerShell.

Here is the GitHub Repo for the small Web App that I used for the demos, a Bastard Operator from Hell Excuse Generator (that I also used in my AutoRest post here), and the PowerPoint presentation.

Building Apps in Azure with only Scripting Skills – April 2019

GitHub Repo

Below is a screenshot of an excuse and a suggested remediation beer.

What is today's excuse.PNG

See you next year for Global Azure Bootcamp 2020.

Querying SailPoint IdentityNow Virtual Appliance Clusters with PowerShell

Today I was configuring an Integration Module for SailPoint IdentityNow. As part of that integration I needed the ID of an IdentityNow Virtual Appliance Cluster. It seemed I hadn’t previously documented doing this and I couldn’t find my previous script. So here’s a quick post for the next time I need to get info about the SailPoint IdentityNow Virtual Appliance Clusters in my environment.

The following script uses v3 Authentication as detailed in this post; a hedged sketch of it is shown after the list below.

Update;

  • line 2 with your IdentityNow Orgname
  • line 5 with your Admin Account Name
  • line 6 with your Admin Password
  • line 16 with your IdentityNow v3 API Client ID
  • line 17 with your IdentityNow v3 API Client Secret
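
The embedded script isn’t shown here, but below is a minimal sketch of the flow. The token request is simplified (see the v3 authentication post linked above for constructing $adminUSR, $adminPWD and the Basic header $encodedAuth from your v3 API Client ID/Secret), and the /cc/api/cluster/list endpoint is the private API in use at the time of writing; treat both as assumptions.

$orgName = "yourIdentityNowOrg"

# v3 token request (simplified; credential values built as per the linked post)
$v3Token = Invoke-RestMethod -Method Post -Uri "https://$($orgName).api.identitynow.com/oauth/token?grant_type=password&username=$($adminUSR)&password=$($adminPWD)" -Headers @{Authorization = "Basic $($encodedAuth)"}

# Virtual Appliance Clusters (private API endpoint; an assumption on my part)
$vaClusters = Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/cc/api/cluster/list" -Headers @{Authorization = "Bearer $($v3Token.access_token)"}

$vaClusters | Select-Object -Property id, name, description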


Outputting data from an Azure Function to Power BI with PowerShell

PowerShell Azure Function to Power BI via Event Hub and Stream Analytics

Last week I wrote this post that detailed how to use the Azure Table Storage output binding in an Azure PowerShell Function. As part of the same solution I’m working on, I also need to get data/events into Power BI for reporting dashboards. An Azure Function (PowerShell) has the ability to obtain the data but the path to Power BI requires a number of steps that start with using the Azure Function Event Hub output binding.

The end to end process is;

  • an Azure Function obtains information from an API (BreweryDB) and processes the data
  • the Azure Function sends the data to an Azure Event Hub
  • a Stream Analytics Job picks up the data and puts it into a Power BI Dataset

Azure Function to Event Hub to Power BI.png

This post details how to configure this pipeline, using the same Beer Styles example from the Azure Table Storage Azure Output Binding post here, and PowerShell, as the documentation doesn’t give a working example for PowerShell.

The inputs and Azure Function are the same. Just the output from the Azure Function to Azure Event Hub is added. You can have multiple output bindings, so this example will add the Event Hub output whilst keeping the Azure Table Service output as well.

Azure Event Hub

Before we can add and configure an Azure Event Hub Output Binding on an Azure Function we need to create an Azure Event Hub Namespace.

From the Azure Portal, create a resource (Event Hub).

Created Event Hub Namespace.PNG

Once the Event Hub Namespace has been created, we need to create an Event Hub. Select the Event Hub Namespace => Event Hubs => + Event Hub and give it a name.

Created Event Hub.PNG
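
If you prefer to stay on the command line, a hedged equivalent using the Az.EventHub module looks like the following (parameter names can vary between module versions, and the names and location are illustrative).

# Assumes the Az.EventHub module; names and location are illustrative
$rgName = 'MyEventHub-RG'
New-AzEventHubNamespace -ResourceGroupName $rgName -Name 'my-eh-namespace' -Location 'australiaeast'
New-AzEventHub -ResourceGroupName $rgName -NamespaceName 'my-eh-namespace' -Name 'my-eventhub'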

Power BI

From the Azure Portal create a resource Power BI Embedded and create a Workspace if you don’t already have one.

Azure Stream Analytics

Now we need to create the Stream Analytics Job that will take the event data from the Event Hub and put it into a Power BI Dataset.

From the Azure Portal create a resource Stream Analytics Job. Give it a name and select create.

Stream Analytics Job.PNG

Once created select your Stream Analytics Job => Inputs => Add stream input => Event Hub. Provide an input Alias, select your Azure Subscription, the Event Hub Namespace and Event Hub created earlier and select Create.

Stream Analytics Input.PNG

Select Outputs from your Stream Analytics Job => + Add => Power BI => Authorize to authorise access to Power BI. Provide an output Alias, select your Group workspace and provide a Dataset name and Table name => Save.

Stream Analytics Output.PNG

Select Query and update the query to copy the input to the output. Select Save. If you weren’t filtering what you wanted into the Power BI Dataset on the Azure Function (or other input to the Event Hub) you could filter it with a query here.

Stream Analytics Query.PNG
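
For reference, the pass-through query is a single Stream Analytics SQL statement; the aliases below are placeholders for the input and output aliases you created above.

SELECT
    *
INTO
    [your-output-alias]
FROM
    [your-input-alias]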

Select the Job Overview and select Start => Now.

Stream Analytics Job Overview Start.PNG

Azure Function App Event Hub Output

With all the wiring in place to get our data out to Power BI we can now configure our Azure Function App to output to our Event Hub.

Select your Azure Function => Integrate => Outputs => New Output => Event Hub => New Connection => Event Hub and select your Event Hub Namespace, Event Hub => Select.

Azure Function Output Event Hub.PNG

Update the Event Hub name to the name of your Event Hub and select Save.

Output Event Hub Name.PNG

Taking our Azure Function from the previous blog post, adding the following line will also copy our output object to our Event Hub Output Binding.

$outputArray | ConvertTo-Json | Out-File $outputEventHubMessage

Az Func Output to Event Hub.PNG

Processing our Events

Executing our Azure Function sends the events to our Output Bindings.

Azure Function Run.PNG

We can see the progress by looking at our Event Hub Metrics and Stream Analytics Overview (a spike in events as they are processed).

Job Processing.PNG

After a minute we can see the dataset has been created in Power BI.

Power BI Dataset.PNG

We can then create a report from the dataset

New Report.PNG

using the data that was ingested from our Azure Function.

Create Report Info.PNG

Summary

Using the Azure Function Event Hub Output Binding in conjunction with Stream Analytics we can easily get data to Power BI for reporting and dashboards.

Leveraging the Azure Functions Table Storage Output Binding with PowerShell

Recently I wrote this post on using PowerShell to bulk load data into Azure Table Service. Whilst this method works great, it relies on the AzureRM PowerShell module to batch ingest data into Azure Table Service.

I’m working on a solution that requires levels of automation to obtain data from events from Microsoft Graph and ingest that data into Azure Table Service. That doesn’t work with the AzureRM PowerShell Module.

Azure Functions provide additional Bindings for Input and Output, but I’d never had the need to spend the time working out how to output to Azure Table Storage (with PowerShell). The documentation covers examples for C#, JavaScript, Java and Python. But what about PowerShell? Nothing. In this post I cover how to use the Azure Table Storage Azure Function Output Binding.

Azure Function Configuration

If you’re creating a new Azure Function App in 2019 and wanting to use PowerShell, after creating your Azure Function App you need to configure the Function app settings for Runtime version 1. This can only be configured prior to creating a Function.

Set Azure Function to v1.PNG

Using Azure Storage Explorer select your Storage Account associated with the Azure Function Plan and under Tables create the table you will be putting data into.

Azure Table Service myEvents Table.PNG

After creating your Azure PowerShell Function select Integrate and under Outputs add Azure Table Storage. Provide the Azure Storage Account Table that you created above.

Azure Function Table Service Output Binding.PNG

Example PowerShell Azure Function

Here is an example Azure PowerShell Function that connects to the BreweryDB API to obtain the 175 Beer Styles.

It then creates a PowerShell Object for each style and adds it to an Array of Beer Styles. The array is then converted to JSON and passed to the Azure Table Service Output Binding outputTable configured earlier.

As this is just an example there is no error handling etc, but a working example of obtaining data, transforming it and sending it to Azure Table Service.
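
The function itself isn’t reproduced here, but below is a minimal sketch of the pattern. The BreweryDB call and property names are assumptions based on their v2 API; the key parts are that each entity needs PartitionKey and RowKey properties, and that a v1 PowerShell function writes to the output binding by piping JSON to the binding path ($outputTable).

# Minimal sketch of the v1 PowerShell Azure Function (no error handling)
$apiKey = $env:breweryDbApiKey   # assumption: API key stored as an App Setting
$styles = Invoke-RestMethod -Uri "https://api.brewerydb.com/v2/styles?key=$($apiKey)"

$outputArray = foreach ($style in $styles.data) {
    [pscustomobject]@{
        PartitionKey = 'BeerStyles'   # required by Azure Table Service
        RowKey       = $style.id      # must be unique within the partition
        Name         = $style.name
        Description  = $style.description
    }
}

# Write the JSON to the Table Storage output binding configured as outputTable
$outputArray | ConvertTo-Json | Out-File -Encoding ascii $outputTable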

Looking at the Azure Table Service Table with Azure Storage Explorer after executing the Azure Function all the Beer Styles have been added.

Beer Styles Added to Azure Table Service.PNG

Summary

The Azure Table Service Output Binding for Azure Functions provides a quick and simple method to allow data to be ingested into Azure Table Service. An added benefit over my previous integration is that the data doesn’t need to be split into batches of 100 records.

Forefront/Microsoft Identity Manager – Attempted to access an unloaded AppDomain

This post is more a note-to-self for future me in case I’m in this scenario again. Today I encountered the error Attempted to access an unloaded AppDomain.

I have a custom Forefront/Microsoft Identity Manager Management Agent that requires multiple credentials for the Web Service it is integrating with. In order to secure parts of the credentials that cannot be provided as part of the Connectivity configuration tab on the Management Agent Properties, I have generated them and exported them using Export-Clixml as detailed in this post here.

Today I was migrating a Management Agent from one environment to another and was sure I’d regenerated the credentials correctly. However the Management Agent wasn’t working as expected. Looking into the Applications and Services Logs => Forefront Identity Manager Management Agent Log ….

Forfront Identity Manager Management Agent.PNG

I found the following error ….

Unhandled exception, the CLR will not terminate: System.AppDomainUnloadedException: Attempted to access an unloaded AppDomain

Attempted to access an unloaded AppDomain.PNG

Retracing my steps, I had logged on to the Synchronisation Server with the incorrect credentials to generate my protected credentials XML file.

When the Synchronisation Server ran the Management Agent and attempted to run Import-Clixml “credentialFilename”, the account that had exported the credentials did not match the service account that the Synchronisation Server was running as, and the error listed above was thrown.
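
The fix is to regenerate the credential file in a session running as the same service account the Synchronisation Server runs under. A minimal sketch (paths and account names are illustrative);

# Start PowerShell as the MIM Sync service account, e.g.
# runas /user:DOMAIN\svc-mimsync powershell.exe
# Export-Clixml protects the password with DPAPI, scoped to the exporting user and machine
Get-Credential | Export-Clixml -Path C:\MIM\webServiceCred.xml   # path is illustrative

# At runtime the Management Agent (running as that same service account) re-hydrates it
$credential = Import-Clixml -Path C:\MIM\webServiceCred.xml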

Summary

Import-Clixml and Export-Clixml do exactly what they are supposed to and respect the context under which the credentials were exported; they will only be able to access them when imported under that same context. The error doesn’t really tell you that, but does hint at it with Attempted to access an unloaded AppDomain if you know what you are trying to do.


Using AutoRest for PowerShell to generate PowerShell Modules

AutoRest for PowerShell

Recently the Beta of the AutoRest for PowerShell Generator has been made available. At the recent Microsoft MVP Summit in Seattle Garrett Serack gave those that were interested a 1 hr corridor session on getting started with AutoRest for PowerShell.

AutoRest is a tool that generates client libraries for accessing RESTful web services. Microsoft is moving towards using AutoRest to generate SDKs for their APIs in the standard languages they provide SDKs for. In addition, the AutoRest for PowerShell generator aims to automate the generation of PowerShell Modules for Azure APIs.

In this post I’ll give an intro to getting started with AutoRest to generate PowerShell modules using an AutoRest example and then an Azure Function based API.

AutoRest for PowerShell Prerequisites

AutoRest is designed to be cross-platform. As such its dependencies are NodeJS, PowerShell Core and Dotnet Core.

AutoRest for PowerShell Installation

Installation is done via the command line once you have NodeJS installed and configured.

npm install -g autorest@beta

If you have previously installed AutoRest you will want to update to the latest version, as updates and fixes are coming regularly.

autorest --reset

AutoRest Update.PNG

Building your first AutoRest for PowerShell Module

The AutoRest documentation has a couple of examples that are worth using initially to get to know the process and the expected output from AutoRest.

From a PowerShell Core command prompt (type pwsh in a command window) make the following Invoke-WebRequest (IWR) to retrieve the XKCD Swagger/OpenAPI file. I pre-created a sub-directory named XKCD under the directory where I ran the following command.

iwr https://raw.githubusercontent.com/Azure/autorest/master/docs/powershell/samples/xkcd/xkcd.yaml -outfile ./xkcd/xkcd.yaml

Download XKCD Yaml.PNG

We can now start the process to generate the PowerShell module. Change directory to the directory where you put the xkcd.yaml file and run the following command.

autorest --powershell --input-file:./xkcd.yaml

AutoRest Generate Libs.PNG

Following a successful run a generated folder will be created. We can now build the PowerShell Module. We can add the -test flag so that at the end of generating the Module, Pester will be used to test it. First run PowerShell Core (pwsh), then build-module.

pwsh 
./generated/build-module.ps1 -test

AutoRest - Build Module.PNG

With the module built and the tests passed we can run it. We need to load the module and look to see the cmdlets that have been built for the API.

.\generated\run-module.ps1 xkcd.psm1
get-command -module xkcd

AutoRest - Load Module.PNG

Using the Get-XkcdComicForToday cmdlet we can query the XKCD API and get the Comic of the Day.

Get-XkcdComicForToday | fl

AutoRest - Comic of the Day.PNG

Taking it one step further we can download the comic of the day and open it, with this PowerShell one-liner.

invoke-webrequest (Get-XkcdComicForToday).img -outfile image.png ; & ./image.png

Download and Display XKCD Comic of the Day.PNG

Let’s Create a Simple API – BOFH Excuse Generator

I created an HTTP Trigger based PowerShell API Function that takes GET requests and returns a BOFH – Bastard Operator from Hell (warning: link has audio) style excuse. For those that haven’t been in the industry for 20+ years or aren’t familiar with the Bastard Operator from Hell, it is essentially a set of (semi) fictional transcripts of user interactions from a Service Desk Operator’s (from Hell) perspective, set in a University environment. Part of the schtick is an Excuse of the Day. My API, when queried, returns a semi-plausible Excuse of the Day.

Here is my HTTP Trigger PowerShell Azure Function (v1). A simple random excuse generator.
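
The function isn’t reproduced here, but below is a minimal sketch of the v1 pattern; the excuse list and output shape are illustrative assumptions, not the original.

# Minimal sketch of a v1 HTTP Trigger PowerShell Function
$excuses = @(
    'solar flares interfering with the network cabling',
    'electromagnetic interference from satellite debris',
    'the vending machine is drawing too much power from the server room'
)

$excuse = @{excuse = (Get-Random -InputObject $excuses)} | ConvertTo-Json

# v1 PowerShell Functions return the response by writing to the $res binding path
Out-File -Encoding ascii -FilePath $res -InputObject $excuse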

I configured the function for anonymous, GET operations only and the route to be excuse

Testing the Function from my local machine showed it was working and returning an excuse.

Now it’s time to generate the OpenAPI Spec. Select the Azure Function App => Platform Features => API definition

Select Generate API definition template and the basics of the OpenAPI spec for the new API will be generated. I then updated it with the output application/json and what the API returns as shown in the highlighted sections below. Select Save 

Now we can test it out and we see that we have success.

Taking the OpenAPI Spec (YAML format file) for our BOFH Excuse API we go through the steps to successfully generate the PowerShell Module using AutoRest. Running the freshly baked cmdlet from the BOFH PowerShell Module returns a BOFH Excuse. Awesome.
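
For completeness, the commands mirror the XKCD example earlier; the file and cmdlet names here are assumptions based on my API definition.

autorest --powershell --input-file:./bofh.yaml
pwsh
./generated/build-module.ps1 -test
./generated/run-module.ps1 bofh.psm1
Get-BofhExcuse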

Summary

Using AutoRest it is possible to generate PowerShell Modules for our API’s. Key to the successful generation is the definition of the OpenAPI Spec for our API. In my example above, obviously if you don’t define what the API call returns then the resulting cmdlet will query the API, but won’t return the result.

VSCode Virtual Environments using your Browser

It’s no secret I’m a huge fan of virtual environments and PowerShell. Late last year I wrote this series on Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure. A lot of the back story for that three post series was having virtual environments for PowerShell.

Moving forward six months and I’m at the beginning of the journey towards migrating from PowerShell Desktop to PowerShell Core. The quickest way to get started with PowerShell Core is to use the Windows 10 feature of Windows Subsystem for Linux. Of course you should also have made (or be making) the migration from PowerShell ISE to VSCode.

So what if you could have Virtual VSCode environments accessible via a browser as your IDE for building and using PowerShell Core? Well you can.

Here is VSCode running a PowerShell Core command via a browser from an Ubuntu-based Windows Subsystem for Linux environment in a Windows 10 Virtual Machine running in Azure.

Powershell Core on Linux on Windows via VSCode in Browser.PNG

Prerequisites

A key component of this magic is Code Server. Code Server is an open source self-contained environment for VSCode designed to offload the IDE from lower powered environments (tablets, Chromebooks etc) and is currently supported on Linux and Mac (OSX) with Windows coming soon.

If you have an environment with either of those then all good, as you were. If you only have Windows then on Windows 10 install Windows Subsystem for Linux via the Microsoft Store App. My Windows Subsystem for Linux flavour is Ubuntu.

WSL Ubuntu.PNG

Code Server

Download Code Server from the GitHub Releases page here and save it to your local machine. Running Ubuntu, I downloaded the Linux binary. Untar it using

tar -xf yourDownloadedversion.tar.gz

Untar Code Server

Change into the extracted directory containing the code-server executable and run code-server

./code-server

Take note of the password as you will need this when you connect to Code Server via your browser.

Start Code Server

If the host you are running Code Server on is remote (as mine is, in Azure) make sure you have networking configured for the default port (8443). I had to update my NSG for the inbound port of 8443.

Note: this isn’t a normal configuration, allowing source any for incoming ports. I have this locked down to the necessary source IPs (but I’m not going to show them here).

Inbound Security Rules.PNG

Then in your browser you can hit your host on port 8443 and provide the password that was displayed when starting Code Server. You will then have VSCode running in essentially a Virtual Environment accessible via a browser.

VSCode running in a Browser

PowerShell Core

My Windows Subsystem for Linux Ubuntu version is currently 18.04, and I had already downloaded and installed PowerShell Core for Ubuntu 18.04.

lsb_release -a

Installation Instructions are here

Ubuntu Windows Subsystem for Linux Version.PNG

Install the PowerShell VSCode Extension along with any of the other extensions you regularly use. BOOM, you now have a Virtual VSCode environment for use with PowerShell Core on (and in my case on Ubuntu under Windows 10).

PowerShell VSCode Extension

Summary

What a crazy time to be in the IT industry. Nested disparate operating systems running IDEs in browsers executing cross-platform scripting languages. Wow.

To run multiple sessions on the same host use the -p (port) option to specify a different port for each instance.
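
For example (a hedged one-liner; check code-server --help for your version);

./code-server -p 8444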

Code Server specifying Port.PNG

Empowering your long running PowerShell Automation Scripts with SMS/Text Notifications

18 months ago I wrote this post that detailed integrating Push Notifications into your scripts. That still works great, but does require that you have the associated Push Bullet application installed in your browser or on your devices. More recently I wrote about using Burnt Toast for Progress Dialogs for long running scripts. That too is all great if you are present on the host running those scripts. But what if you want something a little more native and ubiquitous? Notifications for those autonomous or long running scripts where you aren’t active on the host running them, and you don’t want the hassle of another application specific for that purpose? How about SMS/Text notifications?

This post details how to use Twilio (a virtual telco) from your PowerShell scripts to send SMS/Text alerts from your scripts so you can receive notifications like this;

Everything is on Fire.PNG

Twilio

Twilio is a virtual telco (amongst other products) that allows you to use services such as the mobile network from your application. For their SMS/TXT service they even give you a credit to get started. Sending SMS/TXT messages using their service from Australia costs AU$0.0550 per message.

Sign up for a Twilio trial account, then enable and verify your account. Take note of the following items as you will need them for your script;

  • service mobile number (initially Trial Number)
  • Account SID
  • your Auth Token

Using these pieces of information via the Twilio API we can send SMS/Text Notifications from your PowerShell scripts (well any language, but I’m showing you how with PowerShell). You can get your Service Number, Account SID and Auth Token from the Dashboard after registering for a Trial Account.

Trial Account Dashboard.PNG

To enable SMS/TXT go to the Twilio Programmable SMS Dashboard here and create a New Messaging Service.

For my use (notifications from scripts) I selected Notifications, Outbound Only.

Create New Messaging Service.PNG

Once created you will see the following on the Programmable SMS Dashboard. That’s it, you’ve activated SMS/TXT in Trial Mode.

SMS Dashboard.PNG

The Script

Here is a Send-TextNotification PowerShell Function (a hedged sketch is shown after the list) that takes;

  • Mobile Number to send the notification to
  • Mobile Number the message is coming from
  • Message to send
  • AuthN info (Account SID and Account Token)
  • Account SID
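
The original gist isn’t reproduced here, but a minimal sketch of such a function follows; the function shape and parameter names are mine to illustrate, while the endpoint is Twilio’s standard 2010-04-01 Messages resource. The phone numbers in the example are placeholders.

# Minimal sketch; function and parameter names are illustrative
function Send-TextNotification {
    param (
        [string]$AccountSid,
        [string]$AuthToken,
        [string]$From,      # your Twilio service number
        [string]$To,        # the number to notify
        [string]$Message
    )
    $pair = "{0}:{1}" -f $AccountSid, $AuthToken
    $headers = @{Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))}
    $body = @{From = $From; To = $To; Body = $Message}
    Invoke-RestMethod -Method Post -Uri "https://api.twilio.com/2010-04-01/Accounts/$($AccountSid)/Messages.json" -Headers $headers -Body $body
}

# Example
Send-TextNotification -AccountSid $sid -AuthToken $token -From '+15005550006' -To '+61400000000' -Message 'Everything is on fire!'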

Line 46 sends an SMS/TXT notification using my Send-TextNotification script. Update;

  • Line 32 for your Twilio Account SID
  • Line 34 for your Twilio Account Token
  • Line 40 for the mobile number to send the message to
  • Line 42 for the authorised number you verified to send from
  • Line 44 for your notification message

Summary

Using the Twilio service, my small function and a few parameters you can quickly add SMS/TXT notifications to your PowerShell scripts. Once you have it up and running I encourage you to upgrade your account and pay the few dollars for use of the service which also removes the “Sent from your Twilio trial account” text from the messages. Twilio also has a WhatsApp Notification Service.
