Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 3

Docker Virtual PowerShell Desktop Env to Internet - SaaS

This is the third and likely last post in this series. In Part 1 I introduced the capability to have Virtual PowerShell Environments using Docker and the full Windows 10 / Server 2019 Build 1809 container images. In Part 2 I detailed how to remotely access the Azure RM Windows 10 / Server 2019 host that contains the Docker Container with our full Windows 1809 environment (and therefore PowerShell Desktop).

In this post I’ll detail building a Docker Image based off of the Windows 1809 Container image. The resulting Docker Image will;

  • contain a base PowerShell environment with the necessary PowerShell Modules for performing common Azure based administrative activities
  • be usable for those administrative functions
  • be able to be started over SSH from other SSH clients (e.g. Putty)

Building a Docker Image based off Windows 1809 Container

Using the capabilities I showed in Part 2 of this series, I’m going to build the image from Azure Cloud Shell, which I’ll use to SSH into the Windows 10 AzureRM hosted Virtual Machine.

Having logged in to Azure Cloud Shell and connected to my Azure VM via SSH as detailed in Part 2, I created a new CMD file named NewImage1809.cmd containing the following command.

docker run -it --name psNov2018 mcr.microsoft.com/windows:1809 powershell

New Image Start

Running the commands hostname and dir 'C:\Program Files\WindowsPowerShell\Modules' shows that we are inside the Windows 1809 Container Image.

Hostname and Existing Modules.PNG

I used the Cloud Shell Upload option to upload a New Env Setup.ps1 script that contains the PowerShell commands to install a bunch of PowerShell Modules. Using the Cloud Shell Editor I opened the file.

New Env Setup Script

Here is that series of commands.
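The original script was embedded as a gist. As a representative sketch (the author’s exact module list may differ), the commands look something like this;

# Bootstrap the PowerShell Gallery inside the fresh container
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

# Install common Azure administration modules
Install-Module -Name AzureRM -Force
Install-Module -Name AzureAD -Force
Install-Module -Name MSOnline -Force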

I can then select the block of commands, paste it into the PowerShell terminal below, and hit Enter to execute them.

Execute

One by one the modules are installed.

Modules Installed

When the installation has completed, enter exit.

Exit after Module Install

Now we can stop our Container.

docker stop psNov2018

Docker stop.PNG

and commit our changes to a new image named ‘powershell-env-image-nov18’

docker commit psNov2018 powershell-env-image-nov18

Docker Commit.PNG

Listing the docker images with

docker image ls

shows our new Image.

Docker Image List.PNG

We can now Run our new image with

docker run -it powershell-env-image-nov18:latest powershell

Run Image.PNG

We can see the modules we installed previously.

Image Module List.PNG

and we can import them.

Import Modules.PNG
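For example (assuming the modules installed earlier);

Import-Module AzureRM
Import-Module AzureAD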

Putty to PowerShell Virtual Environment

As good as Azure Cloud Shell is, and as convenient as it is for quick tasks and execution, you’re going to want to use an SSH Client. I’ll show using Putty, but you can use whatever your favourite client is. To connect to the environment I;

  • using the Putty Key Generator I loaded the OpenSSH Private Key generated in Part 1 and saved it in Putty ppk format
  • using Putty Pageant I can use the ppk formatted key for my SSH session to the Windows 10 1809 host
    • Note: WinSCP can also utilise the ppk key for authentication, which makes getting files onto the Host very easy
  • if you find you don’t automatically get the elevated session that allows you to start the Docker Container/Image, create the following registry key on the Windows 10/Server 2019 host and reconnect: a DWORD (32-bit) value of 1 for LocalAccountTokenFilterPolicy
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001

I can then connect with Putty using my key, and run the DockerPS.cmd file I showed in Part 2 which outputs the version of PowerShell.

Summary

In this post I’ve shown how to customise a Windows 1809 Container for a Virtual PowerShell environment, along with using client based SSH and SCP tools to connect to and manage the base Host.

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 1

22 Nov 18: Part 2 is available here, detailing accessing the Docker Image via Azure Cloud Shell / SSH.
27 Nov 18: Part 3 is available here, detailing customising an image and accessing it via other SSH clients with elevated access.

PowerShell Desktop Virtual Environments

If you’ve been working with PowerShell for any length of time you know that with its flexibility come challenges when using disparate PowerShell Modules and, often, their version dependencies. This isn’t just a PowerShell thing; Python can also trip you up in a similar manner.

Python however has Virtual Environments (virtualenv) capabilities which provide the functionality to create an environment that contains all the necessary binaries required for the packages/libraries that a Python project would need. I’ve found this very useful and I’ve wondered why I couldn’t do the same for PowerShell Desktop (not Core). PowerShell Desktop, PowerShell Core?

PowerShell Desktop vs PowerShell Core

As of August 2016 there are two PowerShell versions;

  • PowerShell Desktop
    • PowerShell 5.1 that runs on Windows and on top of the full .NET Framework stack
  • PowerShell Core
    • PowerShell Core 6.x that is cross platform (Windows, MacOSX, Linux)
      • Doesn’t run on the full .NET Framework

If you are a Windows/Directory Services Admin, the likelihood of many of the PowerShell Modules you use running on PowerShell Core is slim. That’s because a lot of the modules you use require the full .NET Framework, which isn’t available in PowerShell Core.
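A quick way to check which edition a given session is running is the PSEdition property;

# Returns 'Desktop' on PowerShell 5.1 and 'Core' on PowerShell Core 6.x
$PSVersionTable.PSEdition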

A Virtual PowerShell Desktop Env? Why is this only possible now?

In July this year Microsoft started providing Windows Container Images for the Insider releases (over and above the Nano and Core OS builds). This was great, but it meant you needed to be on the Insider Builds and were restricted to environments on physical hardware or VMs migrated to Azure, as there wasn’t an Azure Marketplace OS Version (Windows 10 or Server 2019 Preview) that met the minimum host requirements for the Insider Container images.

We’ve had to wait until Build 1809 became available in the Azure Marketplace, which it did at the end of last week (w/e 18 November 2018). The Windows Container Version History shows that there was no 1803 Windows Image. But that’s all in the past now, as 1809 is finally here.

PowerShell Desktop Virtual Environments through Nested Virtualization

The screenshot below at first glance just looks like any command window in a virtual machine. But look a little closer;

  • Remote Desktop Session to an Azure Windows 10 1809 Virtual Machine (host.region.cloudapp.azure.com)
  • Docker Run Windows 1809 PowerShell $psversiontable
    • PowerShell Desktop 5.1 via Docker inside a Virtual Machine in Azure
      • BOOM!!

PowerShell Desktop Virt Env Nested Virtualization.PNG

Ok, so that is a single Docker Container with a full Windows 10 1809 environment running inside a Windows 10 Virtual Machine. But that means we can also add more containers and have multiple isolated PowerShell environments. Something like ….

Nested Virtual PowerShell Desktop Env.png

Wait, what, how? – The Overview

The high-level process is;

  • Provision a Windows 10 Virtual Machine (Build 1809 or later).
    • I recommend deploying it in Azure, but you could do it in other virtualization environments that support Nested Virtualization
    • NOTE: As I write this Windows Server 2019 Build 1809 hasn’t hit the Azure Marketplace. When it does, as it has a common code-base it should work exactly the same.
  • Enable the OpenSSH Feature (I’ll be using this a little in this post but more in a future post)
  • Enable the Containers and Hyper-V Features
  • Install and configure Docker
  • Pull the Windows Build 1809 Container Image

Windows 10 Build 1809 Virtual Machine

I’m not going to give step-by-step details for deploying a Windows VM in Azure; if you’re looking to set up Virtual PowerShell Desktop Environments with Docker you should be able to deploy a Windows VM. That said, you need to choose a VM Size and Version that supports “Nested Virtualization”. The Azure RM Dv3 and Ev3 Series VMs do. If you get an error similar to the CreateContainer (0xc0370102) error described later in this series when running a Docker Image, change your VM Series to Dv3. I went with the following (for a scripted alternative, see the sketch after this list);

  • The Azure Marketplace has an image for Windows 10 Build 1809. Search for Windows 10 Pro, Version 1809
    • To run this VM as economically as practical I initially chose the following size and configuration
      • Standard D2_v3 (2 vCPUs, 8 GB memory)
      • HDD over SSD
      • Unmanaged disks
    • Enable SSH and RDP in the NSG configuration
      • initially we’ll need RDP to connect to the workstation
      • moving forward we’ll be using SSH
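If you’d rather script the deployment, a minimal sketch with the AzureRM module looks like the following. The resource names are mine, and I’m assuming rs5-pro is the Marketplace SKU for Windows 10 Pro 1809;

# Deploy a Windows 10 1809 Pro VM sized for Nested Virtualization
$cred = Get-Credential
New-AzureRmVM -ResourceGroupName 'Win10-1809-RG' -Name 'Win10Docker' `
    -Location 'eastus' -Size 'Standard_D2_v3' `
    -ImageName 'MicrosoftWindowsDesktop:Windows-10:rs5-pro:latest' `
    -Credential $cred -OpenPorts 3389,22   # RDP initially, SSH moving forward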

OpenSSH Server

OpenSSH Client and Server have been available for Windows for a while, but Build 1809 has streamlined the install process considerably. The base install and setup is now just a couple of commands away. The commands below will install the latest version of OpenSSH Server via PowerShell;

# Find the OpenSSH Server capability
$sshServer = Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Server*'
$sshServer

# Install OpenSSH Server
Add-WindowsCapability -Online -Name $sshServer.Name

which when executed via VSCode looks like;

Install OpenSSH Server on Windows 10.PNG

By default the SSH Server service is configured for Manual startup. To configure it for Automatic Startup use the Set-Service cmdlet.

# Set SSH Server for Automatic startup
Get-Service sshd
Set-Service sshd -StartupType Automatic

ssh Server Startup Automatic.PNG

Finally we need to increase the ClientAliveInterval setting in the sshd_config configuration file located in the %programdata%\ssh directory. I’ve made mine 3600 seconds (1 hour).
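In sshd_config that’s a one-line change (uncomment the setting if necessary);

ClientAliveInterval 3600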

sshd ClientAliveInterval.PNG

Windows Containers / Docker Dependencies

# Install Containers / Docker dependencies
Enable-WindowsOptionalFeature -Online -FeatureName containers -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
# Restart the computer to finish
Restart-Computer

Install Docker

Head on over to Docker and log in (or create an account if you don’t already have one), then get Docker CE for Windows. I’m running 18.06.1-ce-win73.

Download and Install Docker.PNG

As we want a full Windows environment for PowerShell (not PowerShell Core on Linux) select “Use Windows containers” when installing Docker.

Use Windows Containers.PNG

At the end of the Docker install there is a reboot required.

Docker Install Complete.PNG

Get the Windows 1809 Base Container Image

We’re almost there. We need to get the recently released full Windows Image that will be the basis for our containers that will allow us to run full PowerShell environments. Don’t be confused by the Nano and Core images that have been available for quite some time. This is the FULL WINDOWS Build 1809 IMAGE.

As future Windows updates increment the version, the version you pull must be no greater than that of the host it runs on. Unlike the Insider Images, the release versions follow the Release Number, not the Build Number. Looking at the repository we can see that the image name is 1809 whereas its Build Number is 10.0.17763.134.

Docker Windows Image Registry.PNG
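You can confirm your host meets that requirement before pulling;

# ReleaseId should be 1809 and the build 10.0.17763 or later
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').ReleaseId
[System.Environment]::OSVersion.Version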

With the workstation restarted, we SSH into it and pull the Windows 1809 Docker Image. I’ve given my Windows 10 VM a DNS name so I don’t need to figure out the IP Address each time I start it up. To access your new VM from a Windows command prompt (via IP address or DNS name) use;

ssh username@IPAddressofWin10VM

Once we have a console on our Windows 10 VM we can pull the Windows 10 Docker Image.

docker pull mcr.microsoft.com/windows:1809

The image will be retrieved.

Pull Windows 1809 Base Image.PNG

After pulling the image it will be extracted. Depending on the spec of your VM this may take 10-20 minutes.

Extracting Windows Base Docker Image.PNG

After Extraction we have our base Container Image.

Completed Docker Image.PNG

In order to create a container from the command console via SSH we need to be elevated. I’ll cover that in the next post. So to validate we are able to create a container based on the full Windows 10 1809 image, RDP into the Windows 10 VM and open an elevated command prompt. Then type the command;

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

which will start a container using the Windows 10 1809 Image and run PowerShell with the command $PSVersionTable that will return the version of PowerShell.

PowerShell Desktop Virt Env Nested Virtualization

Summary

As you can see from the screenshot above, we have Nested Virtualization in an Azure Resource Manager Windows 10 Virtual Machine running a Docker Windows 10 1809 Container Image that allows us to run PowerShell Desktop 5.1. BOOM!!

That’s it for the first post, where I introduced the concept of Full Windows Docker Images supporting PowerShell Desktop in Azure. Stay tuned for the next post that starts putting this new functionality to good use.

Got thoughts or feedback on this? Twitter || Blog

Azure VM Docker CreateContainer Error (0xc0370102)

A real quick post as I’ve just managed to figure out what was causing this error and why my Docker container wasn’t being created.

I’d just deployed a brand new Windows 10 VM in Azure Resource Manager and installed Docker. But when I attempted to create a new container I was getting the error below in both the Docker console and in the Application Event Log.

Handler for POST /v1.38/containers/6f6e2c37142a63db7a19e2533a97b8affad52b735d7cfcbbd2b69ce87ef4c36c/start returned error:
container 6f6e2c37142a63db7a19e2533a97b8affad52b735d7cfcbbd2b69ce87ef4c36c encountered an error during CreateContainer:
failure in a Windows system call: The virtual machine could not be started because a required feature is not installed. (0xc0370102)
extra info: {"SystemType":"Container","Name":"6f6e2c37142a63db7a19e2533a97b8affad52b735d7cfcbbd2b69ce87ef4c36c","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\6f6e2c37142a63db7a19e2533a97b8affad52b735d7cfcbbd2b69ce87ef4c36c","Layers":[{"ID":"4457d5ac-9bda-5212-b74c-

The key words out of all that are:

CreateContainer: failure in a Windows system call: The virtual machine could not be started because a required feature is not installed. (0xc0370102)

…. but search as you might, you can’t find anything anywhere. Through my troubleshooting process I tried creating a Hyper-V VM, which I could, but it wouldn’t start, with an error that a Hyper-V process was not installed/running.

Thinking through what I was trying to do and the errors from both Hyper-V and Docker, I realized it needed to be something to do with Nested Virtualization.

Light-bulb Moment

Support for Nested Virtualization in Azure VM’s is dependent on the generation of the underlying host. When Microsoft started supporting the Intel Broadwell processors, they also enabled Nested Virtualization on some VM Sizes.

Changing my VM Size from a DS_v2 to a DS_v3 was all that I had to do. My Docker Containers are now running on a Windows 10 Virtual Machine in Azure Resource Manager.
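For reference, the resize via PowerShell is just a property change and an update (resource names here are hypothetical);

# Resize a VM to a Nested Virtualization capable size
$vm = Get-AzureRmVM -ResourceGroupName 'myRG' -Name 'myVM'
$vm.HardwareProfile.VmSize = 'Standard_D2s_v3'
Update-AzureRmVM -ResourceGroupName 'myRG' -VM $vm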

FYI, Azure Dv3 and Ev3 VMs support Nested Virtualization and I’m sure there are now others. Hopefully I’ve saved someone else the two hours I’ve just lost.

Deploying a SailPoint IdentityNow Virtual Appliance in Azure

UPDATE: 10 October 2018
SailPoint now support and provide guidance on deploying 
IdentityNow Virtual Appliances in Azure. 
See this document on Compass for more details

Introduction

The CentOS image that SailPoint provide for the IdentityNow Virtual Appliance that performs integration between ‘Sources’ and IdentityNow is VMWare based. I don’t have any VMWare Infrastructure to run it on and really didn’t want to run up any VMWare environments for this component. All my other infrastructure is in Azure. I’d love to run my VA(s) in Azure too.

In discussions with SailPoint I understand it is simply a case that they haven’t certified their CentOS image on Azure. So I figured I’d convert the VM, get it into Azure and see if it works from my Sandpit environment. This blog post details how I got it working.

Disclaimer: If you use this for more than a Sandpit/Test environment let your SailPoint CSM know. This isn’t an approved process or a supported configuration. That said, it works for me.

Overview

This is the high-level process I threw together that worked for me.

  1. Obtain the CentOS Image from the IdentityNow Virtual Appliance Setup
  2. Convert the VMWare VMDK image to Hyper-V VHD format using VirtualBox vboxmanage (free)
  3. From the Azure MarketPlace create a Seed VM based on CentOS (with new Resource Group, Storage Account, Virtual Network etc)
  4. Upload the VHD to the Azure Storage Account (associated with VM from Step 3) using Azure Storage Explorer
  5. Create a new VM based off the VM from Step 3 to use the disk from Step 4 as the Operating System disk
  6. Log in and configure the Virtual Appliance

Convert VMWare VM to Hyper-V.png

Prerequisites

  1. Virtual Box (for the disk image converter). You could probably do it with other tools but I’ve used this before and it just works.
  2. Enough hard disk space for the VA image and the converted image. The base image is ~2.8 GB and when converted to a fixed disk image it becomes ~128 GB (which can compress to ~3 GB for the initial upload).
  3. Azure Storage Explorer. We’ll need this to upload the converted virtual disk to Azure.

SailPoint Virtual Appliance CentOS VMWare Image

To download the CentOS VMWare Image, log in to the Admin section of your IdentityNow Tenant. Under Admin => Connections => Virtual Appliances create a New Cluster. Select that Cluster, then Virtual Appliances => New, and download the Appliance Package.

Create New VA.PNG

Converting the CentOS VMWare Virtual Disk to a Fixed Hyper-V Virtual Disk

I already had Virtual Box installed on my computer. I had to give the full path to VBoxManage (as shown below) and called it with the switches to convert the image;

vboxmanage clonehd --format VHD --variant Fixed

The --variant Fixed switch takes the dynamic image and converts it to Fixed, as this is a requirement in Azure.
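A complete invocation looks something like this (paths are hypothetical);

vboxmanage clonehd "C:\VA\sailpoint-va-disk1.vmdk" "C:\VA\sailpoint-va-disk1.vhd" --format VHD --variant Fixed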

ConvertVADisk 1.PNG

The image conversion started and completed in under ten minutes.

Converted Fixed.PNG

Creating an Azure CentOS VM

In the Azure Portal I created a New Resource and chose CoreOS. (The seed VM mainly provides the Resource Group, Storage Account and Virtual Network; it gets deleted later once the Virtual Appliance VM is running.)

NewCoreOS 1

I gave it a name, chose HDD as the disk type and gave it a Username and Password.

NewCoreOS 2

I chose sizing in line with the recommendations for a Virtual Appliance.

NewCoreOS 3

And kept everything else simple (for my sandpit environment).

NewCoreOS 4

After the VM had deployed I had a Resource Group with the necessary Virtual Network, Storage Account etc.

Resource Group.PNG

Upload the Converted Disk to Azure Storage

I created a vhd container (in the Storage Account associated with the VM I just created) to hold the new VHD. Using Azure Storage Explorer I then uploaded the converted image, selecting Page Blob as the blob type.
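If you’d rather script the upload, Add-AzureRmVhd can also upload a fixed VHD as a Page Blob (the URL and path here are hypothetical);

# Upload the converted fixed VHD to the Storage Account
Add-AzureRmVhd -ResourceGroupName 'IdentityNowRG' `
    -Destination 'https://<storageaccount>.blob.core.windows.net/vhds/sailpoint-va-disk1.vhd' `
    -LocalFilePath 'C:\VA\sailpoint-va-disk1.vhd'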

Upload VHD

You’ll want a decent internet connection to do this. I converted the SailPoint image on an Azure VM (to which I added a 256 GB data disk). I then uploaded the new 128 GB VHD disk image from within Azure to the target Resource Group in about 75 minutes.

Upload VHD 2

Below I show the SailPoint Virtual Appliance CentOS OS converted disk image uploaded to Azure Storage Account Blob Storage.

Upload VHD 3.PNG

Generate SAS Token / Get Blob URI

We won’t use a SAS Token, but this gives easy access to the Storage Blob URL. Right click on the VHD Blob and select Generate Shared Access Signature, then select Create.

Right Click - Get Shared Access Signature

Copy the URL. We’ll need parts of this for the script to create a new CentOS VM with our VA Disk Image.

Get VHD and BLOB Details

Create the new VM for our Virtual Appliance

Update the script below for:

  • The Resource Group you created the Seed VM in (line 2)
  • The Seed VM Name (line 4)
  • The Seed VM Subnet Name (line 6)
  • The Disk Blob details (lines 8 and 10) as copied earlier

Each of those is easily obtained from the Seed VM Summary as highlighted below. A sketch of the kind of script involved follows.
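The script itself was embedded as a gist and isn’t reproduced here, but a minimal sketch of the approach using the AzureRM module (all names hypothetical) is;

# Sketch: create a VM that attaches the uploaded VHD as its OS disk
$resourceGroup = 'IdentityNowRG'   # Seed VM Resource Group (line 2 in the original script)
$seedVMName    = 'SeedVM'          # Seed VM Name (line 4)
$subnetName    = 'default'         # Seed VM Subnet Name (line 6)
$osDiskUri     = 'https://<storageaccount>.blob.core.windows.net/vhds/sailpoint-va-disk1.vhd'  # Blob URI (lines 8/10)

$seedVM = Get-AzureRmVM -ResourceGroupName $resourceGroup -Name $seedVMName
$vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName $resourceGroup
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet

# New public IP and NIC for the Virtual Appliance VM
$pip = New-AzureRmPublicIpAddress -Name 'VA-pip' -ResourceGroupName $resourceGroup `
    -Location $seedVM.Location -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name 'VA-nic' -ResourceGroupName $resourceGroup `
    -Location $seedVM.Location -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# Attach the uploaded VHD as an existing Linux OS disk
$vmConfig = New-AzureRmVMConfig -VMName 'IdentityNowVA' -VMSize $seedVM.HardwareProfile.VmSize
$vmConfig = Set-AzureRmVMOSDisk -VM $vmConfig -Name 'IdentityNowVA-osdisk' -VhdUri $osDiskUri -CreateOption Attach -Linux
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

New-AzureRmVM -ResourceGroupName $resourceGroup -Location $seedVM.Location -VM $vmConfig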

After stepping through the script to create the new VM, and happy with the new name etc, I executed the New-AzureRMVM command.

Create New VM

And the VM was created in a couple of minutes.

Create VM Initiated

Accessing the new VM

Getting the IP address from the new VM Summary I SSH’d into it.

VM Started

And logged in with the default credentials from SailPoint. (Windows Subsystem for Linux is awesome).

SSH In to VA

Next Steps

  1. Change the password on your Virtual Appliance (passwd)
  2. Create a DNS Name, update the configuration as per SailPoint VA Configuration tasks
  3. Create the VA and Test the Connection from the IdentityNow Portal
  4. Delete your original SeedVM as it is no longer required
  5. Add an NSG to the new VM
  6. Create another VM in a different location for High Availability and configure it in IdentityNow

Below shows my Azure based Virtual Appliance connected and all setup.

Cluster Up and Running.PNG

Summary

Whilst not officially supported, it is possible to convert the SailPoint Virtual Appliance VMWare based image to an Azure compatible Hyper-V image and assign it as the Operating System disk on an Azure Linux (CoreOS) Virtual Machine. If you need to do something similar I hope my approach gives you some ideas.

If you then need to create another Virtual Appliance in Azure you have a Data Disk you can assign to a VM and upload to wherever it needs to be for creation of another Virtual Appliance VM.

How to quickly recover from a FAILED AzureRM Virtual Machine using PowerShell

Problem

I have a development sandpit in Azure which I use quite a lot to test and mess with different ideas and concepts. This week when shutting it down things didn’t go that smoothly. All but one virtual machine stopped and de-allocated; one virtual machine just didn’t make it. I tried resizing the VM, I tried changing its configuration, and I obviously tried starting it up many times via the Portal and PowerShell, all without any success.

Failed VM.png

I realised I was at the point where I just needed to build a new VM but I wanted to attach the OS vhd from the failed VM to it so I could recover my work and not have to re-install the applications and tools on the OS drive.

Solution

I created a little script that does the basics of what I needed to recover a simple single HDD VM and keep most of the core settings. If your VM has multiple disks you can probably attach the other HDDs manually after you have a working VM again.

Via the AzureRM Portal select your busted VM and get the values highlighted in the screenshot below for VM Name, VM Resource Group and VM VNet Subnet.

InputsForScript.png

The script then (see the sketch after this list):

  • Takes the values you gathered above as variables for the;
    • Broken VM Name
    • Broken VM Resource Group
    • Broken VM VNet Subnet
  • Queries the Failed VM and obtains the
    • VM Name
    • VM Location
    • VM Sizing
    • VM HDD information (location etc)
    • VM Networking
  • Copies the VHD from the failed virtual machine to a new VHD file so you don’t have to bite the bullet and kill the failed VM off straight away to release the vhd from the VM.
    • Names the copied VHD <BustedVMName-TodaysDate(YYYYMMDD)>
  • Creates a new Virtual Machine with;
    • Original Name suffixed by ‘2’
    • Same VM Sizing as the failed VM
    • Same VM Location as the failed VM
    • Assigns the copy of the failed VM OS Disk to the new VM
    • Creates a new Public IP for the VM
    • Creates a new NIC for the VM
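The recovery script itself was embedded as a gist; a minimal sketch of the same approach, assuming the AzureRM module and unmanaged disks (names hypothetical), is;

# Sketch: clone a failed VM's OS disk and attach it to a new VM
$rgName     = 'myRG'        # broken VM Resource Group (line 2 in the original script)
$vmName     = 'brokenVM'    # broken VM Name (line 4)
$subnetName = 'default'     # broken VM VNet Subnet (line 6)

$vm        = Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName
$osDiskUri = $vm.StorageProfile.OsDisk.Vhd.Uri

# Copy the OS VHD to <VMName>-<YYYYMMDD>.vhd so the failed VM can be kept for now
$storageAccount = Get-AzureRmStorageAccount -ResourceGroupName $rgName   # assumes one account in the RG
$ctx        = $storageAccount.Context
$newVhdName = "$vmName-$(Get-Date -Format 'yyyyMMdd').vhd"
Start-AzureStorageBlobCopy -AbsoluteUri $osDiskUri -DestContainer 'vhds' -DestBlob $newVhdName -DestContext $ctx
Get-AzureStorageBlobCopyState -Container 'vhds' -Blob $newVhdName -Context $ctx -WaitForComplete

# New VM: original name suffixed with '2', same size and location, new Public IP and NIC
$newVMName = "$($vmName)2"
$vnet   = Get-AzureRmVirtualNetwork -ResourceGroupName $rgName
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet
$pip = New-AzureRmPublicIpAddress -Name "$newVMName-pip" -ResourceGroupName $rgName -Location $vm.Location -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name "$newVMName-nic" -ResourceGroupName $rgName -Location $vm.Location -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

$newVM = New-AzureRmVMConfig -VMName $newVMName -VMSize $vm.HardwareProfile.VmSize
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id
$newVhdUri = "https://$($storageAccount.StorageAccountName).blob.core.windows.net/vhds/$newVhdName"
$newVM = Set-AzureRmVMOSDisk -VM $newVM -Name $newVhdName -VhdUri $newVhdUri -CreateOption Attach -Windows

New-AzureRmVM -ResourceGroupName $rgName -Location $vm.Location -VM $newVM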

Summary

With only 3 inputs this script lets you quickly recover your failed AzureRM virtual machine and be back up and running. Once you’ve verified your VM is all good, delete the failed one.

VMList.png

Take this script and change lines 2, 4 and 6 to reflect the Resource Group, Virtual Machine Name and Virtual Machine Subnet of your failed VM (or the VM you want to clone). Step through it and let it loose.

Follow Darren on Twitter @darrenjrobinson