Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 2

27 Nov 18: Part 3 is available here. It details customizing an image and accessing it via other SSH clients with elevated access.

In Part 1 of this series, posted yesterday, I showed that with Windows 10/Windows Server 2019 we can now have isolated virtual environments for PowerShell Desktop in Azure through containerization.

In this post I’ll show how I plan to leverage this capability from a mobility perspective. First we need to enable elevated (privileged) access to our VM. My client will be Azure Cloud Shell; my target/host is the Windows 10 1809 Virtual Machine I deployed in the last post.

Enabling SSH Key Based Privileged Authentication to our Windows 10 VM

To set up key-based access (rather than password access; key-based authentication is required for elevated access) we need to configure both the SSH server and our client.

SSH Server

On the Windows 10 Azure VM where we installed OpenSSH (as per the first post here), we need to start the SSH Agent. By default it is set to Disabled, so change its startup type, start it, and test it by adding the local user’s key to the agent. From an elevated PowerShell session on the Azure Windows VM run;

# Set the SSH Agent service to start automatically and start it
Set-Service ssh-agent -StartupType Automatic
Start-Service ssh-agent

# Add your private key to the agent
cd ~\
ssh-add .\.ssh\id_rsa

Add SSH Key to SSH-Agent on Server.PNG

SSH Client

As I’m using Azure Cloud Shell as my client, I started a Cloud Shell Session in my browser.

  • In Azure Cloud Shell generate an SSH key using ssh-keygen (a minimal example follows)
    • Remember your passphrase, as it will be required for accessing the Windows 10 Azure VM
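If you haven’t generated a key before, a minimal example is below. The key type and length here are my own choice rather than a requirement, and you can accept the default file location when prompted;

ssh-keygen -t rsa -b 4096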

Client SSH Keygen.PNG

  • Copy the key to the Windows 10 Azure VM
    • Run the commands below (after changing them for your username and Windows VM IP address) and provide your password to copy up the file
cd ~/
scp ./.ssh/id_rsa.pub username@Win10ServerIPAddress:C:\Users\userprofilename\.ssh\authorized_keys

Copy Public Key from Client to Server.PNG

  • On the server, if C:\ProgramData\ssh\administrators_authorized_keys exists, append the public key you copied into your home folder above to it. If C:\ProgramData\ssh\administrators_authorized_keys doesn’t exist, copy the authorized_keys file from your .ssh home directory (e.g. c:\users\darrenjrobinson\.ssh) to C:\ProgramData\ssh\administrators_authorized_keys
  • Edit the permissions on the administrators_authorized_keys file. (A PowerShell alternative for these steps follows the screenshot below.)
    • Right-click the file => Properties => Security => Advanced => Disable Inheritance => choose “Convert inherited permissions into explicit permissions on this object”
    • Remove Authenticated Users so that only SYSTEM and Administrators remain, as per the screenshot below. Then select Apply and then OK.

Administrators Authorized Keys.PNG
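If you’d rather do the copy and permissions steps from an elevated PowerShell prompt than the GUI, the sketch below should achieve the same result. The group names assume an English-language install (use the well-known SIDs on localized systems), so verify the resulting ACL matches the screenshot above;

# Seed administrators_authorized_keys from your user's authorized_keys if it doesn't already exist
Copy-Item "$env:USERPROFILE\.ssh\authorized_keys" "$env:ProgramData\ssh\administrators_authorized_keys"

# Remove inherited permissions, leaving full control for SYSTEM and Administrators only
icacls "$env:ProgramData\ssh\administrators_authorized_keys" /inheritance:r /grant "SYSTEM:F" /grant "Administrators:F"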

Testing SSH with Key Access

From Azure Cloud Shell, SSH to your Windows 10 host;

ssh username@ipaddress

SSH Key Access.PNG

You will be prompted for the passphrase you set when you generated the SSH key. Enter it and you will be authenticated to the Windows 10 VM via SSH.

SSH to Windows 10.PNG

Docker Access from Azure Cloud Shell in Browser

Now that we have Privileged Access to our Windows 10 VM, let’s try running a Windows 10 1809 Container and executing a PowerShell command to query the version of PowerShell available.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

Run Docker.PNG

Wait a few seconds (maybe longer depending on the spec of your VM) and

PowerShell Desktop via Docker.PNG

Fantastic, we have a Container with PowerShell Desktop that we have accessed via Cloud Shell in a Browser.

Docker Access from Azure Cloud Shell in iOS Azure App

Using the Azure iOS app on my iPhone I started a Cloud Shell session and changed to my home directory (cd ~) where I had put a file named Connect-Win10.ps1 which contains;

ssh username@ipaddressOfWin10Host


I executed it and it prompted me for the passphrase for my SSH Key which I entered


and I was then SSH’d into the Windows 10 VM.


I did a dir d* and saw the DockerPS.cmd file I’d previously created. It contains the following command.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable


Running that file


starts the Docker Windows 1809 Container with the PowerShell command


and I can see from my phone that I have access to a PowerShell Desktop via Azure Cloud Shell and Docker from inside a Windows 10 VM based in Azure.


Summary

This post has demonstrated that it is possible to get an elevated privileged session into a Windows 10 host using SSH, from which Docker Containers can be orchestrated and executed. By doing this from Azure Cloud Shell, it means that I can essentially login to a browser or app from anywhere in the world and access my Virtual PowerShell environments that in turn will allow world domination. Muwahahahah.

Got thoughts or feedback on this? Twitter || Blog

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 1

22 Nov 18: Part 2 is available here. It details accessing the Docker image via Azure Cloud Shell / SSH.
27 Nov 18: Part 3 is available here. It details customizing an image and accessing it via other SSH clients with elevated access.

PowerShell Desktop Virtual Environments

If you’ve been working with PowerShell for any length of time you know that with its flexibility come challenges, often around using disparate PowerShell modules and their version dependencies. This isn’t just a PowerShell thing; Python can trip you up in a similar manner.

Python, however, has Virtual Environments (virtualenv), which provide the ability to create an environment containing all the binaries required for the packages/libraries a Python project needs. I’ve found this very useful, and I’ve wondered why I couldn’t do the same for PowerShell Desktop (not Core).

PowerShell Desktop vs PowerShell Core

As of August 2016 there are two PowerShell versions;

  • PowerShell Desktop
    • PowerShell 5.1 that runs on Windows and on top of the full .NET Framework stack
  • PowerShell Core
    • PowerShell Core 6.x that is cross-platform (Windows, macOS, Linux)
      • Doesn’t run on the full .NET Framework

If you are a Windows/Directory Services admin, the likelihood of many of the PowerShell modules you use running on PowerShell Core is slim. That’s because a lot of those modules require the full .NET Framework, and that isn’t available in PowerShell Core.
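A quick way to check which edition a given session is running; PowerShell 5.1 and later expose a PSEdition property;

# Returns 'Desktop' on PowerShell 5.1, 'Core' on PowerShell Core 6.x
$PSVersionTable.PSEdition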

A Virtual PowerShell Desktop Env? Why is this only possible now?

In July this year Microsoft started providing Windows Container Images for the Insider releases (over and above the Nano and Server Core OS builds). This was great, but it meant you needed to be on Insider builds and were restricted to physical hardware or VMs migrated to Azure, as there wasn’t an Azure Marketplace OS version (Windows 10 or Server 2019 Preview) that met the minimum host requirements for the Insider container images.

We’ve had to wait until Build 1809 became available in the Azure Marketplace which it did at the end of last week (w/e 18 November 2018). The Windows Container Version History shows that there was no 1803 Windows Image. But that’s all bygones now, as 1809 is finally here.

PowerShell Desktop Virtual Environments through Nested Virtualization

At first glance the screenshot below just looks like any command window in a virtual machine. But look a little closer;

  • Remote Desktop Session to an Azure Windows 10 1809 Virtual Machine (host.region.cloudapp.azure.com)
  • Docker Run Windows 1809 PowerShell $psversiontable
    • PowerShell Desktop 5.1 via Docker inside a Virtual Machine in Azure
      • BOOM!!

PowerShell Desktop Virt Env Nested Virtualization.PNG

Ok, so that is a single Docker Container with a full Windows 10 1809 environment running inside a Windows 10 Virtual Machine. But that means we can also add more containers and have multiple isolated PowerShell environments. Something like ….

Nested Virtual PowerShell Desktop Env.png
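As a concrete example, once the setup below is done you could start two independent containers from separate sessions, each with its own isolated set of modules. The container names here are just illustrative;

docker run -it --name psenv-modulesetA mcr.microsoft.com/windows:1809 powershell
docker run -it --name psenv-modulesetB mcr.microsoft.com/windows:1809 powershell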

Wait, what, how? – The Overview

The high-level process is;

  • Provision a Windows 10 Virtual Machine (Build 1809 or later).
    • I recommend deploying it in Azure, but you could do it in other virtualization environments that support Nested Virtualization
    • NOTE: As I write this, Windows Server 2019 Build 1809 hasn’t hit the Azure Marketplace. When it does, as it shares a common code-base, it should work exactly the same.
  • Enable the OpenSSH Feature (I’ll be using this a little in this post but more in a future post)
  • Enable the Containers and Hyper-V Features
  • Install and configure Docker
  • Pull the Windows Build 1809 Container Image

Windows 10 Build 1809 Virtual Machine

I’m not going to give step-by-step details for deploying a Windows VM in Azure; if you’re looking to set up Virtual PowerShell Desktop Environments with Docker, you should already be able to deploy a Windows VM. That said, you need to choose a VM size and version that supports “Nested Virtualization”. The Azure RM Dv3 and Ev3 Series VMs do. If you get a virtualization error when running a Docker image, change your VM series to Dv3. I went with;

  • The Azure Marketplace has an image for Windows 10 Build 1809. Search for Windows 10 Pro, Version 1809
    • In order to run this VM as economically as practical I chose the following size and configuration for my VM initially
      • Standard D2_v3 (2 vCPUs, 8 GB memory)
      • HDD over SSD
      • Unmanaged disks
    • Enable SSH and RDP in the NSG configuration
      • Initially we’ll need RDP to connect to the workstation
      • Moving forward we’ll be using SSH

OpenSSH Server

The OpenSSH Client and Server have been available for Windows for a while, but Build 1809 has streamlined the install process considerably. The base install and setup are now just a couple of commands away. The commands below install the latest version of OpenSSH Server via PowerShell;

# Find the OpenSSH Server capability
$openSSH = Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# Install OpenSSH Server
$sshServer = $openSSH | Where-Object Name -like 'OpenSSH.Server*'
$sshServer

Add-WindowsCapability -Online -Name $sshServer.Name

which when executed via VSCode looks like;

Install OpenSSH Server on Windows 10.PNG

By default the SSH Server service (sshd) is configured for Manual startup. To configure it for Automatic startup use the Set-Service cmdlet.

# Check the service, then set it to start automatically
Get-Service sshd
Set-Service sshd -StartupType Automatic

ssh Server Startup Automatic.PNG

Finally, we need to increase the ClientAliveInterval setting in the sshd_config file located in the %programdata%\ssh directory. I’ve made mine 3600 seconds (1 hour).

sshd ClientAliveInterval.PNG
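If you’d rather make the change from an elevated PowerShell session than a text editor, something like the following works. Note it simply appends the setting, which assumes ClientAliveInterval isn’t already actively set earlier in the file;

# Append the keep-alive setting and restart sshd to pick it up
Add-Content -Path "$env:ProgramData\ssh\sshd_config" -Value 'ClientAliveInterval 3600'
Restart-Service sshd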

Windows Containers / Docker Dependencies

# Install the Containers and Hyper-V features required for Docker
Enable-WindowsOptionalFeature -Online -FeatureName containers -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Restart the computer to complete the feature installation
Restart-Computer

Install Docker

Head on over to Docker and login (or create an account if you don’t already have one). Get Docker CE for Windows. I’m running 18.06.1-ce-win73.

Download and Install Docker.PNG

As we want a full Windows environment for PowerShell (not PowerShell Core on Linux) select “Use Windows containers” when installing Docker.

Use Windows Containers.PNG

At the end of the Docker install there is a reboot required.

Docker Install Complete.PNG

Get the Windows 1809 Base Container Image

We’re almost there. We need to get the recently released full Windows image that will be the basis for the containers running our full PowerShell environments. Don’t be confused by the Nano and Core images that have been available for quite some time. This is the FULL WINDOWS Build 1809 IMAGE.

As future Windows updates increment the version, the version of the image you pull must be no greater than that of the host it runs on. Unlike the Insider images, the release versions follow the Release Number, not the Build Number. Looking at the repository we can see that the image name is 1809, whereas its Build Number is 10.0.17763.134.

Docker Windows Image Registry.PNG
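To confirm your host is at least as new as the image before you pull it, you can check the host’s build number from PowerShell;

# Host OS build number, e.g. 10.0.17763.x for Build 1809
[System.Environment]::OSVersion.Version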

With the workstation restarted, we SSH into it and pull the Windows 1809 Docker image. I’ve given my Windows 10 VM a DNS name so I don’t need to figure out its IP address each time I start it up. From a Windows command prompt, access your new VM (via its IP address or DNS name) using;

ssh username@IPAddressofWin10VM

Once we have a console on our Windows 10 VM we can pull the Windows 10 Docker Image.

docker pull mcr.microsoft.com/windows:1809

The image will be retrieved.

Pull Windows 1809 Base Image.PNG

After pulling the image it will be extracted. Depending on the spec of your VM this may take 10-20 minutes.

Extracting Windows Base Docker Image.PNG

After Extraction we have our base Container Image.

Completed Docker Image.PNG

In order to create a container from the command console via SSH we need to be elevated. I’ll cover that in the next post. So, to validate that we can create a container based on the full Windows 10 1809 image, RDP into the Windows 10 VM, open an elevated command prompt and type the command;

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

which will start a container using the Windows 10 1809 Image and run PowerShell with the command $PSVersionTable that will return the version of PowerShell.

PowerShell Desktop Virt Env Nested Virtualization

Summary

As you can see from the screenshot above, we have Nested Virtualization in an Azure Resource Manager Windows 10 Virtual Machine running a Docker Windows 10 1809 Container Image that allows us to run PowerShell Desktop 5.1. BOOM!!

That’s it for the first post, where I introduced the concept of Full Windows Docker Images supporting PowerShell Desktop in Azure. Stay tuned for the next post that starts putting this new functionality to good use.

Got thoughts or feedback on this? Twitter || Blog

Enabling and Scripting Azure Virtual Machine Just-In-Time Access

Last week (19 July 2017) one of Azure Security Center’s latest features went from Private Preview to Public Preview. The feature is Azure Just-in-time Virtual Machine Access.

What is Just in time Virtual Machine access?

Essentially, JIT VM Access is a wrapper that automates an Azure Network Security Group rule set, opening access to an Azure VM (or VMs) for a limited time window on a defined set of network ports, restricted to a source IP address/network.

Personally, I’d done something a little similar earlier in the year by automating the update of an NSG inbound rule to allow RDP only from my current public IP address. Details on that are here. But that approach is now essentially redundant.
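To make that concrete, the rule JIT puts in place on request is conceptually just a high-priority inbound NSG allow rule scoped to the requester. Expressed with the AzureRM cmdlets, it would look something like the sketch below; the rule name, priority and source address are illustrative only;

# Illustrative only: the style of inbound rule JIT VM Access creates on request
Add-AzureRmNetworkSecurityRuleConfig -Name 'JITRule-RDP' -NetworkSecurityGroup $nsg `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix '203.0.113.50/32' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 3389
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg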

Enabling Just in time VM Access

In the Azure Portal Select the Security Center icon.

In the central pane you will find an option to Enable Just in time VM Access. Select that link.

In the right hand pane you will then see a link for Try Just in time VM Access. Select that.

If you have not previously enabled the Security Center you will need to select a Pricing Tier. The Free Tier does not include the JIT VM Access, but you should get an option for a 60 day trial for the Standard Tier that does.

With everything enabled you can select Recommended to see a list of VMs that JIT VM Access can be enabled for.

I’ve selected one of mine from the list and then selected Enable JIT on 1 VM.

In the Enable JIT VM Config you can add and remove ports as required, and set the maximum timeframe for access. The Per-Request source IP option will enable the rule for the requester’s current IP. Select Ok.

With the rule configured you can now Request access.

When requesting access we can tailor it based on what is in the rule: select the ports we want from within the policy, choose IP Range or Current IP, and reduce the timeframe if required. Then select Open Ports.

For the VM we can now see that JIT VM Access has been requested and is currently active.

Looking at the Network Security Group that is associated with the VM we can see the rules that JIT VM Access has put in place. We can also see that the rules are against my current IP Address.

Automating JIT VM Access Requests via PowerShell

Now that we have Just-in-time VM Access configured for our VM, the reality is I just want to invoke the access request via PowerShell, start up my VM (as it would normally be stopped unless in use) and utilise the resource.

The script below is a simplified version of my previous script to automate NSG rules, detailed here. It assumes you enabled JIT VM Access as per the manual process above, that your VM would normally be in an off state, and that you’re about to enable access, start it up and connect.

You will need to have the AzureRM and the new Azure-Security-Center PowerShell Modules. If you are running PowerShell 5.1 or later you can install them by un-remarking lines 3 and 5.

Update lines 13, 15 and 19 for your Resource Group name, Virtual Machine name and the link to your RDP file. Update line 21 for the number of hours to request access (in line with your policy).

Line 28 uses the new Invoke-ASCJITAccess cmdlet from the Azure-Security-Center Powershell module to request access. 
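The script itself was embedded from a gist, and the line numbers above refer to that gist. As a stand-in, here is a minimal sketch of the same flow; the resource group, VM name, RDP file path and duration are placeholders, and the parameter names on Invoke-ASCJITAccess are my assumption, so check Get-Help Invoke-ASCJITAccess for the actual signature;

# Un-remark these to install the required modules (PowerShell 5.1 or later)
# Install-Module AzureRM
# Install-Module Azure-Security-Center

# Placeholders: update for your environment
$resourceGroup = 'myResourceGroup'
$vmName        = 'myVM'
$rdpFile       = 'C:\RDP\myVM.rdp'
$hours         = 2     # must be within your JIT policy's maximum

# Request JIT access for the VM (parameter names are assumptions)
Invoke-ASCJITAccess -ResourceGroupName $resourceGroup -VM $vmName -Hours $hours

# Start the VM and launch the RDP session once it's running
Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $vmName
& mstsc $rdpFile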

Summary

This simplifies the management of NSG rules for access to VMs and reduces the exposure of VMs to brute force attacks. It also simplifies access to the bunch of VMs I only have running on an ad-hoc basis.

Looking into the Azure-Security-Center PowerShell module, there are also cmdlets to manage the JIT policies.

How to quickly recover from a FAILED AzureRM Virtual Machine using Powershell

Problem

I have a development sandpit in Azure which I use quite a lot to test and mess with different ideas and concepts. This week when shutting it down things didn’t go that smoothly. All but one virtual machine stopped and de-allocated; one virtual machine just didn’t make it. I tried resizing the VM, I tried changing its configuration, and obviously I tried starting it up many times via the portal and PowerShell, all without any success.

Failed VM.png

I realised I was at the point where I just needed to build a new VM, but I wanted to attach the OS VHD from the failed VM to it so I could recover my work and not have to re-install the applications and tools on the OS drive.

Solution

I created a little script that does the basics of what I needed to recover a simple single-HDD VM and keep most of the core settings. If your VM has multiple disks you can probably attach the other HDDs manually after you have a working VM again.

Via the AzureRM Portal select your busted VM and get the values highlighted in the screenshot below for VM Name, VM Resource Group and VM VNet Subnet.

InputsForScript.png

The script then (a condensed sketch of it follows the list):

  • Takes the three values you gathered above as variables for the;
    • Broken VM Name
    • Broken VM Resource Group
    • Broken VM VNet Subnet
  • Queries the Failed VM and obtains the
    • VM Name
    • VM Location
    • VM Sizing
    • VM HDD information (location etc)
    • VM Networking
  • Copies the VHD from the failed virtual machine to a new VHD file, so you don’t have to bite the bullet and kill the failed VM off straight away to release the VHD.
    • Names the copied VHD <BustedVMName-TodaysDate(YYYYMMDD)>
  • Creates a new Virtual Machine with;
    • Original Name suffixed by ‘2’
    • Same VM Sizing as the failed VM
    • Same VM Location as the failed VM
    • Assigns the copy of the failed VM OS Disk to the new VM
    • Creates a new Public IP for the new VM
    • Creates a new NIC for the new VM
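The script itself was an embedded gist, and the line numbers mentioned below refer to that gist. The condensed sketch below shows the same flow using the AzureRM cmdlets of the era. It is a sketch under stated assumptions, not the original script: it assumes an unmanaged (storage account based) OS disk, that the storage account sits in the same resource group, a 'vhds' container, and that you update the three input variables at the top;

# Inputs: update these three for your failed VM
$rg     = 'myResourceGroup'    # Broken VM Resource Group
$vmName = 'myBrokenVM'         # Broken VM Name
$subnet = 'mySubnet'           # Broken VM VNet Subnet

# Query the failed VM for its location, size, networking and OS disk
$vm        = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
$location  = $vm.Location
$osDiskUri = $vm.StorageProfile.OsDisk.Vhd.Uri

# Copy the OS VHD to a new blob named <BustedVMName>-<YYYYMMDD>.vhd
$storageAccount = (($osDiskUri -split '\.')[0]) -replace 'https://',''
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $rg -Name $storageAccount)[0].Value
$ctx = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $key
$newVhd = "$vmName-$(Get-Date -Format 'yyyyMMdd').vhd"
Start-AzureStorageBlobCopy -AbsoluteUri $osDiskUri -DestContainer 'vhds' -DestBlob $newVhd -DestContext $ctx
Get-AzureStorageBlobCopyState -Container 'vhds' -Blob $newVhd -Context $ctx -WaitForComplete

# New Public IP and NIC for the replacement VM
$newVMName = "${vmName}2"
$vnet      = Get-AzureRmVirtualNetwork -ResourceGroupName $rg | Select-Object -First 1
$subnetId  = ($vnet.Subnets | Where-Object Name -eq $subnet).Id
$pip = New-AzureRmPublicIpAddress -Name "$newVMName-pip" -ResourceGroupName $rg -Location $location -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -Name "$newVMName-nic" -ResourceGroupName $rg -Location $location -SubnetId $subnetId -PublicIpAddressId $pip.Id

# Build the new VM: same size and location, attaching the copied VHD as its OS disk
$newVM = New-AzureRmVMConfig -VMName $newVMName -VMSize $vm.HardwareProfile.VmSize
$newVM = Add-AzureRmVMNetworkInterface -VM $newVM -Id $nic.Id
$newVhdUri = "https://$storageAccount.blob.core.windows.net/vhds/$newVhd"
$newVM = Set-AzureRmVMOSDisk -VM $newVM -Name $newVhd -VhdUri $newVhdUri -CreateOption Attach -Windows
New-AzureRmVM -ResourceGroupName $rg -Location $location -VM $newVM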

Summary

With only 3 inputs this script lets you quickly recover your busted and broken failed AzureRM virtual machine and be back up and running. Once you’ve verified your VM is all good, delete the failed one.

VMList.png

Take this script, change lines 2, 4 and 6 for your failed VM (or the VM you want to clone) to reflect your Resource Group, Virtual Machine Name and Virtual Machine Subnet. Step through it and let it loose.

Follow Darren on Twitter @darrenjrobinson