A quick start guide for Deploying and Configuring Node-RED as an Azure WebApp

 

Introduction

I’ve been experimenting and messing around with IoT devices for well over 10 years. Back then it wasn’t called IoT, and it was very much a build-it-and-write-it-yourself approach.

Fast forward to 2017 and you can buy a microprocessor for a couple of dollars that includes WiFi. Environmental sensors are available for another couple of dollars and we can start to publish environmental telemetry without having to build circuitry and develop code. And rather than having to design and deploy a database to store the telemetry (as I was doing 10+ years ago) we can send it to SaaS/PaaS services and build dashboards very quickly.

This post provides a quick start guide to those last few points: visualising data from IoT devices using Azure Platform-as-a-Service offerings. Here is a rudimentary environment dashboard I put together very quickly.

NodeRedDashboardUI.PNG

Overview

Having played with numerous services recently for my already API-integrated IoT devices, I knew I wanted a solution to visualise the data, but I didn’t want to deploy dedicated infrastructure and I wanted to keep the number of moving parts to a minimum. I looked at getting my devices to publish their telemetry via MQTT, which is a great solution for scale or rapidly changing data, but when you are only dealing with a handful of sensors and data that isn’t highly dynamic it is overkill. I wanted to simply poll the devices as required, obtain the current readings and visualise them. Think temperature, pressure and humidity.

Through my research I liked the look of Node-RED for its quick and simple approach to obtaining data and manipulating it for presentation. Node-RED runs on NodeJS, which I figured I could deploy as an Azure WebApp (similar to what I did here). Sure enough I could. However, not long after I got it working I discovered this project: a Node-RED-enabled NodeJS Web App you can deploy straight from GitHub. Awesome work, Juan Manuel Servera.

Prerequisites

The quickest way to start then is to use Juan’s Azure WebApp wrapper for Node-RED. Follow his instructions to get that deployed to your Azure Subscription. Once deployed you can navigate to your Node-RED WebApp and you should see something similar to the image below.

DefaultNode-Red.PNG

The first thing you need to do is secure the app. From your WebApp Application Settings in the Azure Portal, use Kudu to navigate to the WebApp files. Under your wwwroot/WebApp directory you will find the settings.js file. Select the file and select the Edit (pencil) icon.

Node-Red-Settings

Uncomment the adminAuth section (around lines 93-100). To generate the encrypted password, on a local install of NodeJS I ran the following command and copied the hash. Replace ‘whatisagoodpassword’ with your desired password.

node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" whatisagoodpassword
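Once uncommented and updated, the adminAuth section of settings.js should look something like the sketch below. The username is your choice, and the password value is the bcrypt hash generated by the command above.

adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "<paste-the-bcrypt-hash-generated-above>",
        permissions: "*"
    }]
},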

Select Save, then Restart your application.

Node-Red-Password

On loading your WebApp again you will be prompted to login. Log in and let’s get started.

Login

Configuring Node-RED

Now it is time to pull in some data and visualise it. I’m not going to go into too much detail, as what you want to do is probably quite different to what I’m doing.

At its simplest though, I want a timer to trigger a call to my sensor APIs, return the values, and display them as either text, a graph or a gauge. The image below shows the configuration for the dashboard shown above.

ConfigureNodeRED.PNG

For each entity on the dashboard there is an input. For me this is a trigger every 15 minutes. That looks like this.

Get-15mins

Next is the API to get the data. The API I’m calling is an open GET with the API key in the URL, so it looks like this.

HTTPRequest

With the JSON response from the API I retrieve the temperature value and return it as msg for use in the UI.

Function
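As a sketch, assuming the HTTP Request node is set to return a parsed JSON object and the response contains a temperature property (adjust the property path for your API), the Function node contains something like:

// Extract the temperature from the parsed JSON response and pass it on for the UI
msg.payload = msg.payload.temperature;
return msg;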

I then have the Gauge for Temperature. I’ve set the minimum and maximum values and gone with the defaults for the colours.

Gauge

I’m also outputting debug info during setup for the raw response from the Function…

DebugParsedOutput

…and from the parsed function.

DebugHTTPRequest

These appear in the Debug pane on the right hand side.

DebugWindow

After each configuration change simply select Deploy and then switch over to your Node-RED WebApp. That will be the URI of your WebApp with /ui on the end, e.g. http://<yourwebapp>.azurewebsites.net/ui

Conclusion

Thanks to Azure PaaS services and the ability to use a graphical IoT tool like Node-RED, we can quickly deploy a solution to visualise IoT data without having to deploy any backend infrastructure. The only hardware is the IoT sensors; everything else is serverless.

Getting started developing Custom Actions for the Google Assistant (Home)

Introduction

Whilst I was in the USA recently I bought myself a Google Home. My home already had Hue Lights, Chromecast on a couple of TVs, and I’m a big user of Spotify (Premium). It was very quick to get it up and running and doing simple tasks, but I started thinking about what custom things I could get it to do. Could I get it to call a custom/private API to get some information and tell me the result? The answer is yes, and that is what this post will cover. I have a few temperature sensors that have a RestAPI that can be used to query the temperature. I got it working pretty quickly (one evening) for not having messed with the Google App Engine for many years. But there were a number of steps required, and as this functionality will form the basis for more complex functionality, I’ve documented the step-by-step process I used.

But here is the result: a command, a query and a custom response.

Prerequisites

  • First up you are going to need to have a Google Account. Seeing as you are probably reading this because you want to do something similar, you will have a Google Home and thereby a Google Account, so you’re already covered on that point. The account associated with your Google Home is the one you NEED to use with the additional services, as it is what will tie everything together.
  • You will need to register for an api.ai account. Sign in with the Google Account you have linked to your Google Home.
  • You will need to also register for a Google Cloud account. Sign in with the Google Account you have linked to your Google Home.
  • Download and install the Google Cloud CLI. Even though I did this on a Windows machine, I actually installed the CLI in Ubuntu via the Windows Subsystem for Linux. If you want that option, here is how to get started with the Windows Subsystem for Linux.
    • Change into your home directory, e.g. cd ~/ and then run curl https://sdk.cloud.google.com | bash

Getting Started

Now it is time to start our Project. In your gCloud CLI window create a project directory. I named mine insidetemp as I will be calling an IoT Temperature sensor to get the temperature inside one of my beer brewing sheds.

inside temp.PNG

We will now create a small Javascript script that will call a RestAPI to retrieve the current temperature from an IoT Temperature Sensor. Modify it for whatever unauthenticated API call you want to make: you’ll need to change the host and path used to build the URL you will call to get data, as well as the exported Function name. Mine is called insideTemp. If you name yours differently, then everywhere I use ‘insideTemp’ (note the case) substitute your function name.
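The original script isn’t reproduced here, but a minimal sketch of the idea looks like the following. It assumes a sensor API that returns JSON containing a temperature property; the host, path and access token are placeholders, and the response is shaped for an api.ai (v1) webhook.

var https = require('https');

// Placeholders - change these to build the URL for your own sensor's RestAPI
var host = 'your-iot-api.example.com';
var path = '/v1/node/temperature?access_token=YOUR_API_KEY';

// The exported function name - referenced again when deploying the Function
exports.insideTemp = function (req, res) {
  https.get({ host: host, path: path }, function (apiRes) {
    var body = '';
    apiRes.on('data', function (chunk) { body += chunk; });
    apiRes.on('end', function () {
      // Adjust the property name to match your API's JSON response
      var temp = Math.round(JSON.parse(body).temperature);
      var msg = 'The inside temperature is ' + temp + ' degrees';
      // api.ai v1 webhook response format: speech is spoken, displayText is shown
      res.json({ speech: msg, displayText: msg });
    });
  });
};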

With your modified script (I’ve changed the API key in mine so it won’t work as is), in your application directory create a new file named index.js and paste in your script, e.g. in Linux run nano index.js

After pasting in your script, use Ctrl + O to save as index.js and Ctrl + X to quit.

custom action function

Run gcloud auth login and go through the authentication process using the same Google Account you used for api.ai and Google Cloud. The CLI will give you a URL you need to paste into your browser and authorize access to Google Cloud for your Google Account. Once completed, copy the code generated from the authorization and paste it back into the CLI.

gcloud authentication

Now we will create a Google Cloud Storage Bucket for our script. I named mine the same as the Function in the script, but all in lowercase (lower case is required).
gsutil mb gs://insidetemp

GCloud Storage Bucket.PNG

You should then be able to browse to your storage. My URL is below; update it for your project name.

https://console.cloud.google.com/storage/browser/insidetemp?project=insidetemp

Google Cloud Storage

We will now upload our script and create the Function that we will call from Google Home. Change the command below for your project and script function name.

gcloud beta functions deploy insideTemp --stage-bucket insidetemp --trigger-http

Upload and create function

Once uploaded and the Function has been created you will get an HTTP Trigger URL. Copy this as you’ll need it shortly. If you need to update or change the script, do that locally and run the gcloud beta functions deploy command again.

Function Created

Validating the Function

Using a browser you should be able to browse to your Function. The URL should look something like this. Change for your project name. https://console.cloud.google.com/functions/details/us-central1/insideTemp?project=insidetemp&tab=general&duration=PT1H

If the Storage account is different (because you already had one) then navigate to https://console.cloud.google.com/functions and select your project from the menu in the top left next to Google Cloud Platform.

Select the Testing menu then click on the Test the function button. If your script is all good it will execute and get the value back from the API. You can see mine returned 21 degrees from my IoT Temp Sensor.

Test the Function

Wiring up our Project to Google Home

Now that we have an HTTP Function that can retrieve info from a RestAPI, let’s have that as an action in Google Home. Head on over to the Actions API here;

https://console.cloud.google.com/apis/api/actions.googleapis.com/overview

At the top of the middle pane click Enable if it isn’t already enabled.

Enable Google Actions API

Now head over to the api.ai Agents Console https://console.api.ai/api-client/#/agents and select Create Agent.

Create Agent

Give it a name, choose your timezone and language and select Save.

New Agent Details

Select Create Intent.

Create Intent

Provide a couple of phrases you will speak to Google Home. Don’t worry about any other settings for now. Select Save.

Create Intent Detail

From the Left Menu select Fulfillment.

Fulfillment

Enable Webhook and paste in the URL that you got after uploading the Function Script via the gCloud cli. Select Save from the bottom of the page.

Fulfillment Webhook

Back in Intents, scroll to the bottom, click Fulfillment and enable Use webhook. Select Save.

Intent Webhook

From the left menu select Integrations.

Integrations

Enable Actions on Google.

Actions on Google

Open Settings and select Update Draft.

Actions Trigger

Then select Test.

Actions Test

Update: I also removed the Default Welcome Intent and used one of my Intent phrases.

Test will now be active. Select Close.

Actions Test

Head back to api.ai and select Intents. Under Response choose the Actions on Google menu and enable it. Select Save.

Intent Actions on Google

Testing the Custom Action via the API Console

Within the api.ai console, in the Try it now box in the right-hand pane, type in your Intent. One of mine is ‘what is the inside temp’. This tests our integration and successfully returns the result from querying the API via our Function.

Test Success

If you select Show JSON from the bottom right you can see the processing that went on. We can see that the Agent received the query, used the Webhook for Fulfillment, called the API via the Function, and provided the response.

Success JSON.PNG
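The screenshot isn’t reproduced here, but the api.ai v1 response JSON has roughly the shape below; the values are illustrative.

{
  "id": "...",
  "timestamp": "2017-07-30T10:21:00.000Z",
  "result": {
    "source": "agent",
    "resolvedQuery": "what is the inside temp",
    "fulfillment": {
      "speech": "The inside temperature is 21 degrees"
    },
    "metadata": {
      "intentName": "InsideTemp",
      "webhookUsed": "true"
    }
  },
  "status": {
    "code": 200,
    "errorType": "success"
  }
}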

Testing the Custom Action via Google Home

Go to Actions on Google https://console.actions.google.com/u/0/ and select Add/Import Project. You should see a project named NewAgent. Select it.

New Actions Project

Then select Import Project.

Import Project

Step through to provide info for the Agent, including images etc., as if you are going to publicly publish this function. Select Save.

App Info

Finally select Test Draft. DO NOT SELECT SUBMIT DRAFT FOR REVIEW.

Test Draft

As we are in ‘test’ mode, we follow the Ok Google wake command with Talk to <your agent name>. So for my project it is Ok Google, talk to Inside Temperature.

And we are done.

Summary

Very quickly we’ve created a custom command that queries our own API to get data and have the response spoken to us. This can then be expanded to do any number of different tasks as long as you have something to query or update and the desire to do it from your Google Home.

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a Powershell Azure Function App to get information from a RestAPI and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI. Essentially the difference in this post is outputting the manipulated data to social media (Twitter), whilst still using a TimerTrigger Powershell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution;

  • A Twitter account, and a Twitter App registered under it (covered below)
  • The InvokeTwitterAPIs Powershell Module
  • An Azure Subscription to host the Function App

Create a folder on your local machine for the Powershell Module, then save the module to your local machine using the Powershell command Save-Module as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan, create one by searching for Function App in the Azure Management Portal. Give it a Name, select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don’t worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function. For my app I’m changing from the default of a 5 min schedule to hourly, on the top of the hour. I did this after I’d already created the Function App as shown below. To update the schedule I edited the Function.json file and changed the schedule to “0 0 * * * *”.
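With that edit in place, the Function.json should look something like the sketch below (the binding name may differ in your app).

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ],
  "disabled": false
}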

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select “Configure app settings”. Create new App Settings for your Twitter Account’s AccessToken, AccessTokenSecret, APIKey and APISecret using the values from when you created your Twitter App earlier.

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function Plan do that and take note of them so you can upload the Twitter Powershell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.

AppDevSettings

Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter Powershell Module using an FTP client. I’m using WinSCP.

From the Application Settings option start Kudu.

Traverse the folder structure to get the path to the Twitter Powershell Module and note it.


Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function, to import the Twitter Powershell Module as shown below. Include the get-help line for the module so we can see in the logs that the module was imported and the cmdlets it contains. Select Save and Run.
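A minimal sketch of that validation code is below. The module path and version are examples (use the path you noted in Kudu), and listing the module’s commands serves the same purpose as the get-help line: confirming in the log that the module loaded and what cmdlets it contains.

# Import the Twitter Powershell Module uploaded to the bin directory (example path)
Import-Module 'D:\home\site\wwwroot\MyFunctionName\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1'

# List the module's cmdlets so they appear in the Function log
Get-Command -Module InvokeTwitterAPIs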

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc. so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and sending a tweet out to Twitter.
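The script itself isn’t reproduced here; the sketch below shows the approach. It assumes the InvokeTwitterAPIs module exposes Invoke-TwitterRestMethod (confirm the cmdlet names and parameters from the Get-Command output above), and the App Settings names and sensor URL are placeholders to match to your own configuration.

# Import the Twitter Powershell Module (example path - use yours from Kudu)
Import-Module 'D:\home\site\wwwroot\MyFunctionName\bin\InvokeTwitterAPIs\2.1\InvokeTwitterAPIs.psm1'

# Get the current reading from the IoT sensor's RestAPI (URL held in App Settings)
$reading = Invoke-RestMethod -Uri $env:InsideSensorURL
$tweet = "Current inside temperature is $($reading.temperature) degrees"

# OAuth values from the App Settings created earlier
$oauth = @{
    ApiKey            = $env:TwitterAPIKey
    ApiSecret         = $env:TwitterAPISecret
    AccessToken       = $env:TwitterAccessToken
    AccessTokenSecret = $env:TwitterAccessTokenSecret
}

# Post the tweet via the Twitter statuses/update endpoint
Invoke-TwitterRestMethod -ResourceURL 'https://api.twitter.com/1.1/statuses/update.json' `
    -RestVerb 'POST' -Parameters @{ 'status' = $tweet } -OAuthSettings $oauth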

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways; in this example, a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

How to use a Powershell Azure Function App to get RestAPI IoT data into Power BI for Visualization

Overview

This blog post details using a Powershell Azure Function App to get IoT data from a RestAPI and update a table in Power BI with that data for visualization.

The data can come from anywhere, however in the case of this post I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI. Essentially the major change is to use a TimerTrigger Azure Function to perform the work and leverage the “serverless” Azure Functions model. No need for a reporting server or messing around with Windows scheduled tasks.

Azure Function PowerBI

Prerequisites

The following are the prerequisites for this solution;

  • The Power BI Powershell Module
  • Register an application for RestAPI Access to Power BI
  • A Power BI Dataset ready for the data to go into
  • AzureADPreview Powershell Module

Create a folder on your local machine for the Powershell Modules, then save the modules to your local machine using the Powershell command Save-Module as per below.

Save-Module -Name PowerBIPS -Path C:\temp\PowerBI
Save-Module -Name AzureADPreview -Path c:\temp\AzureAD 

Create a Function App Plan

If you don’t already have a Function App Plan, create one by searching for Function App in the Azure Management Portal. Give it a Name, select Consumption Plan for the Hosting Plan so you only pay for what you use, and select an appropriate location and Storage Account.

CreateFunctionApp2

Register a Power BI Application

Register a Power BI App if you haven’t already using the link and instructions in the prerequisites. Take a note of the ClientID. You’ll need this in the next step.

Configure Azure Function App Application Settings

In this example I’m using Azure Functions Application Settings for the Azure AD AccountName, Password and the Power BI ClientID. In your Azure Function App select “Configure app settings”. Create new App Settings for your UserID and Password for Azure (to access Power BI) and your Power BI Application Client ID. Select Save.

Not shown here, I’ve also placed the URLs for the RestAPIs that I’m calling to get the IoT environment data as Application Settings variables.

UIDPWD&PBIClientID

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function App. The default of a 5 min schedule should be perfect. Give it a name and select Create.

CreateAzureFunction

Upload the Powershell Modules to the Azure Function App

Now that we have created the base of our Function App we’re going to need to upload the Powershell Modules we’ll be using that are detailed in the prerequisites. In order to upload them to your Azure Function App, go to App Service Settings => Deployment Credentials and set a Username and Password as shown below. Select Save.

Deployment Credentials

Take note of your Deployment Username and FTP Hostname.

AppDevSettings

Create a sub-directory under your Function App named bin and upload the Power BI Powershell Module using an FTP client. I’m using WinSCP.

UploadPB PS Module

To make sure you get the correct path to the Powershell module, start Kudu from the Application Settings option.

Traverse the folder structure to get the path to the Power BI Powershell Module and note the path and the name of the psm1 file.

PBIPSM

Now upload the Azure AD Preview Powershell Module in the same way as you did the Power BI Powershell Module.

UploadAAD PS Module

Again using Kudu, validate the path to the Azure AD Preview Powershell Module. The file you are looking for is Microsoft.IdentityModel.Clients.ActiveDirectory.dll. My file after uploading is located at “D:\home\site\wwwroot\MyAzureFunction\bin\AzureADPreview\2.0.0.33\Microsoft.IdentityModel.Clients.ActiveDirectory.dll”

AAD Kudu

This library is used by the Power BI Powershell Module.

Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function, to import the Power BI Powershell Module as shown below. Include the get-help line for the module so we can see in the logs that the module was imported and the cmdlets it contains. Select Save and Run.
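A minimal sketch of that validation code is below, with an example module path (check the exact version folder in Kudu); listing the module’s commands confirms in the log that it loaded and shows the cmdlets it contains.

# Import the Power BI Powershell Module uploaded to the bin directory (example path)
Import-Module 'D:\home\site\wwwroot\MyAzureFunction\bin\PowerBIPS\1.0.0\PowerBIPS.psm1'

# List the module's cmdlets so they appear in the Function log
Get-Command -Module PowerBIPS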

CheckModule

Below is my output. I can see the output from the Power BI Module get-help command. I can see that the module was successfully loaded.

ModuleLoaded

Function Application Script

Below is my sample script. It has no error handling etc. so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and putting the data directly into Power BI.
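Again, the script itself isn’t reproduced here; the sketch below shows the approach. It assumes the PowerBIPS cmdlets Get-PBIAuthToken, Get-PBIDataSet and Add-PBITableRows (confirm via the validation step above), and the App Settings, dataset and table names are placeholders to match to your own configuration.

# Import the Power BI Powershell Module (example path - use yours from Kudu)
Import-Module 'D:\home\site\wwwroot\MyAzureFunction\bin\PowerBIPS\1.0.0\PowerBIPS.psm1'

# Authenticate to Power BI with the credentials and ClientID held in App Settings
$password = ConvertTo-SecureString $env:PBIPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:PBIUserName, $password)
$authToken = Get-PBIAuthToken -ClientId $env:PBIClientID -Credential $credential

# Get the current readings from the IoT sensors' RestAPIs (URLs held in App Settings)
$inside  = Invoke-RestMethod -Uri $env:InsideSensorURL
$outside = Invoke-RestMethod -Uri $env:OutsideSensorURL

# Shape a row and push it into the existing Power BI dataset table
$row = @{
    DateTime = Get-Date -Format s
    Inside   = $inside.temperature
    Outside  = $outside.temperature
}
$dataSet = Get-PBIDataSet -AuthToken $authToken -Name 'IoTEnvironment'
Add-PBITableRows -AuthToken $authToken -DataSetId $dataSet.id -TableName 'Temperatures' -Rows @($row)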

Viewing the data in Power BI

In Power BI it is then quick and easy to select our Inside and Outside temperature readings referenced against time. This timescale is overnight so both sensors are reading quite close to each other.

Graph

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways; in this example, a visualization of IoT data in Power BI. The input could easily be business data from an API and the output a real-time reporting dashboard.

Follow Darren on Twitter @darrenjrobinson