Sunday, 28 April 2019

Whose Turn Is it? An OpenHAB / Google Home / Zeit Now Hack (part 3)

In this third part of my mini-series on life-automation via hacking home-automation, I want to show how I was able to ask our Google Home whose turn it was for movie night, and have "her" respond with an English sentence.

First a quick refresher on what we have so far. In part 1, I set up an incredibly-basic text-file-munging "persistence" system for recording the current person in a rota via OpenHAB. We can query and rotate (both backwards and forwards) the current person, and there's also a cron-like task that rotates the person automatically once a week. The basic pattern can be (and has been!) repeated for multiple weekly events.

In part 2, I exposed the state of the MovieNight item to the "outside world" via the MyOpenHAB RESTful endpoint, and then wrote a lambda function that translates a Google Dialogflow webhook POST into a MyOpenHAB GET for any given "intent"; resulting in the following architecture:

Here are the pertinent screens in Dialogflow where things are set up.

First, the "training phrases" which guide Google's machine-learning into picking the correct Intent:

On the Fulfillment tab is where I specify the URL of the webhook handler and feed in the necessary auth credentials (which it proxies through to OpenHAB):
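As a reminder of what actually flows through that webhook: the only part of Dialogflow's (much larger) POST body that my lambda cares about is the intent's display name, and the reply is a single field too. Trimmed down to just those fields, the request looks like:

{
  "queryResult": {
    "intent": {
      "displayName": "MovieNight"
    }
  }
}

... and the response going back:

{
  "fulfillmentText": "It's Charlotte's turn."
}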

Integrations -> Google Assistant -> Integration Settings is where I "export" the Intents I want to be usable from the Google Home:

The final piece of the puzzle is invoking this abomination via a voice command. Within the Dialogflow console it is very straightforward to test your "fulfillment" (i.e. your webhook functionality) by typing into the test panel on the side, but actually "going live" so you can talk with real hardware requires digging a little deeper. There's a slightly-odd relationship between the Google Actions console (which is primarily concerned with getting an Action into the Actions Directory) and the Dialogflow console (which is all about having conversations with "agents"). They are aware of each other to a pretty-good extent (as you'd hope for two sibling Google products), but they are also a little confusing to get straight in your head and/or working together.

You need to head over to the Actions Console to actually "release" your helper so that a real-life device can use it. An "Alpha" release makes sure random people on the internet can't start using your private life automation software!

I really wanted to be able to ask the Google Assistant in a conversational style: "Hey Google, whose turn is it for Movie Night this week?" - in the same way one can request a Spotify playlist. But it turns out to be effectively impossible to have a non-publicly-released "app" work in this way.

Instead the human needs to explicitly request to talk to your app. So I renamed my app "The Marshall Family Helper" to make it as natural-sounding as it can be. A typical conversation will now look like this:

Human: "Hey Google, talk to The Marshall Family Helper"

Google: "Okay, loading the test version of The Marshall Family Helper"

(Short pause)

{beep} "You can ask about Movie Night or Take-Away"

"Whose turn is it for movie night?"

(Long pause)

"It's Charlotte's turn"


Some things to note. The sentence after the first {beep} is what I've called my "Table of Contents" intent - it is automatically invoked when the Marshall Family Helper loads, as discovery is otherwise a little difficult. The "short pause" is usually less than a second, and the "long pause" around 3-4 seconds - a function of the various latencies you can see in the system diagram above, and something I'm going to work on tuning. At the moment Zeit Now automatically selects the Sydney point-of-presence as the host for my webhook lambda, which would normally be excellent; but since it's being called from Google and making a call out to MyOpenHAB, I might spend some time finding out where those endpoints actually live geographically, and locating the lambda more appropriately.

But, it works!

Saturday, 30 March 2019

Whose Turn Is it? An OpenHAB / Google Home / Zeit Now Hack (part 2)

So in the first part of this "life-automation" mini-series, we set up some OpenHAB items that kept track of whose turn it was to do a chore or make a decision. That's fine, but not super-accessible for the whole family, which is where our Google Home Mini comes in.

First (assuming you've already configured and enabled the OpenHAB Cloud service to expose your OpenHAB installation at myopenhab.org), we add our MovieNight item to those going out to the MyOpenHAB site. To do this, use the PaperUI to go to Services -> MyOpenHAB and add MovieNight to the list of exposed items. Note that it won't actually appear on the myopenhab.org side until its state changes ...

Next, using an HTTP client such as Postman, we hit https://home.myopenhab.org/rest/items/MovieNight/state (sending our email address and password in a Basic Auth header) and sure enough, we get back Charlotte.
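For the curious, here's the same request in a few lines of Node - this uses the bent HTTP client that the lambda below is also built on, and the credentials are obviously placeholders for your own MyOpenHAB email and password:

const bent = require('bent');

// Build the same Basic Auth header that Postman sends
const auth = 'Basic ' + Buffer.from('you@example.com:secret').toString('base64');
const getString = bent('string', { 'Authorization': auth });

getString('https://home.myopenhab.org/rest/items/MovieNight/state')
  .then((who) => console.log(`It's ${who}'s turn`));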

Unfortunately, as awesome as it would be, the Google Home Assistant can't "natively" call a RESTful API like the one at MyOpenHAB, but it *can* if we set up a custom Action to do it, via a system called Dialogflow. This can get very involved, as it is capable of amazing levels of "conversation", but here's how I solved it for my simple interaction needs:

So over in the Dialogflow console, we set up a new project which will use a webhook for "fulfillment", so that saying "OK Google, whose turn is it for movie night?"* results in the MovieNight "Intent" firing, making a webhook call over to a lambda, which in turn makes the RESTful request to the MyOpenHAB API. Phew!

I've mentioned Zeit Now before as the next-generation Heroku - and until now I'd just used it as a React-app-serving mechanism - but it also has sleek backend deployment automation (like Serverless minus the tricksy configuration file) that was just begging to be used for a job like this.

The execution environment inside a lambda is super-simple. Define a function that takes a Node request and response, and do with them what you will. While I really like lambdas, I think they are best used in the most straightforward way possible - no decision-making, no state - just a pure function of the inputs, which can be reasoned about for all values over all time at once (a really nice way of thinking about the modern "declarative" approach to writing software that I've stolen from the amazing Dan Abramov).
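To make that concrete, an entire deployable lambda can be as small as this (a sketch, not my actual function):

// index.js - the whole programming model: export one (request, response) handler
module.exports = (request, response) => {
  response.end('Hello from the lambda');
};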

This particular one is a little gem - it basically proxies the POSTed webhook call from Google into a GET of the OpenHAB API. Almost everything this lambda needs is given to it: the Basic authentication header from Google is passed straight through to the OpenHAB REST call, the URL is constructed directly from the name of the intent in the webhook request, and the response from OpenHAB gets plopped into an English sentence for the Google Assistant to say. The only real snag is that the body of the POST request is not made directly available to us, so I had to add a little helper to provide that:

'use strict';

const bent = require('bent');

// Helper function to get the body from a POST
function processPost(request, response, callback) {
  var queryData = "";
  if (typeof callback !== 'function') return null;

  if (request.method == 'POST') {
    request.on('data', function (data) {
      queryData += data;
      if (queryData.length > 1e6) {
        // Request body too large - reject it
        queryData = "";
        response.writeHead(413, { 'Content-Type': 'text/plain' });
        response.end();
        request.connection.destroy();
      }
    });

    request.on('end', function () {
      // All chunks received - hand the complete body to the callback
      callback(queryData);
    });

  } else {
    response.writeHead(405, { 'Content-Type': 'text/plain' });
    response.end();
  }
}

// Proxy a Dialogflow webhook request to an OpenHAB REST call
module.exports = async (request, response) => {
  processPost(request, response, async (bodyString) => {
    const requestBody = JSON.parse(bodyString);
    const intent = requestBody.queryResult.intent.displayName;
    const uri = `https://home.myopenhab.org/rest/items/${intent}/state`;
    const auth = request.headers['authorization'];

    console.log(`About to hit OpenHAB endpoint: ${uri}`);
    const getString = bent('string', { 'Authorization': auth });
    const body = await getString(uri);

    console.log(`OpenHAB response: ${body}`);

    const json = {
      fulfillmentText: `It's ${body}'s turn.`,
    };
    const jsonString = JSON.stringify(json, null, 2);
    response.setHeader('Content-Type', 'application/json');
    response.setHeader('Content-Length', jsonString.length);
    response.end(jsonString);
  });
};
It returns the smallest valid JSON response to a Dialogflow webhook request - I did spend some time with the various client libraries available to do this, but they seemed like overkill when all that is needed is grabbing one field from the request and sending back one line of JSON!
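If you want to poke at a handler like this before deploying it, it can be smoke-tested locally by mounting it on a plain Node HTTP server - a throwaway sketch, assuming the lambda lives in index.js:

// test-server.js - run `node test-server.js`, then POST a sample
// Dialogflow body (with an Authorization header) to localhost:3000
const http = require('http');
const handler = require('./index');

http.createServer(handler).listen(3000, () =>
  console.log('Listening on http://localhost:3000'));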

We're almost there! Now to wire up this thing so we can voice-command it ...

(*) That's the theory at least - see Part 3 for the reality ...

Thursday, 28 February 2019

Whose Turn Is it? An OpenHAB Hack (part 1)

As my young family grows up, we have our little routines - one of which is the weekly Movie Night. On a rotating basis, each family-member gets to choose the movie that we'll watch, as a family, on a Saturday night. Looking at other screens is not allowed during this time - it's a Compulsory Family Fun Night if you like. The thing is, maybe I'm getting too old, but it frequently seems very difficult to remember whose turn it is. Maybe we skipped a week due to some other activity, or nobody can remember exactly because it was a group decision. Anyway, something that computers are especially good at is remembering things, so I decided to extend my existing OpenHAB home (device) automation to include home process automation too!

Unlike the similarly-named Amazon Alexa "skill" which appears to a) be totally random and b) not actually work very well, I wanted something that would intelligently rotate the "turn" on a given schedule (weekly being my primary requirement). I also wanted to keep the essentials running locally, on the Raspberry Pi that runs my OpenHAB setup. I'm sure you could move this entirely into the cloud should you wish, but doing it this way has allowed me to start with the basics and scale up.

First step; create a simple text file with one participant name per line: ${OPENHAB_USERDATA}/movienight.txt (i.e. /var/lib/openhab2/movienight.txt on my system):
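Charlotte
Mummy
Daddy

(Apart from Charlotte, those names are placeholders, of course - one line per person in the rotation.)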

Make sure that the openhab user can read and write it.

Now we use the exec binding to create a Thing that reads the first line of this file via the head command-line tool, once every 6 hours (21600 seconds). Unfortunately as you'll see in all the snippets below, there seems to be no way to access environment variables when defining these file locations; so while I'd love to write ${OPENHAB_USERDATA}/movienight.txt, I have to use the hard-coded path: /var/lib/openhab2/movienight.txt.


Thing exec:command:movienight "Movie Night" @ "Living Room" 
  [command="head -1 /var/lib/openhab2/movienight.txt", 
   interval=21600]

Here are the items that fetch, display and adjust the current movie night, respectively. It's useful to be able to adjust the rotation, for example if we skipped a week and need to back out the automatically-changed value.

Switch FetchMovieNight {channel="exec:command:movienight:run"}

String MovieNight "Whose turn is it?" 
       {channel="exec:command:movienight:output"}

Switch AdjustMovieNight

We expose the items in the sitemap:

  Frame label="Household rotas" {
    Text item=MovieNight label="Whose Movie Night is it?"
    Switch item=AdjustMovieNight
           label="Adjust Movie Night"
           mappings=[ON="Rotate", OFF="Unrotate"]
  }
Which results in the following in Basic UI:

Now for the weekly-rotation part. First, a simple Bash script to rotate the lines of a text file such as the one above - I'll call it rotate.sh here. That is, after running ./rotate.sh movienight.txt, the topmost line becomes the bottom-most:


#!/bin/bash
# rotate.sh - rotate the lines of a text file.
# With two arguments (-r <file>), rotate in the reverse direction.

TMPFILE=$(mktemp)

if [[ $# -eq 2 ]]; then
        # Assume a -r flag provided: Reverse mode;
        # the bottom-most line becomes the topmost
        TAIL=`tail -n 1 $2`
        echo ${TAIL} > $TMPFILE
        head -n -1 $2 >> $TMPFILE 
        mv $TMPFILE $2
else
        # Normal mode: the topmost line becomes the bottom-most
        HEAD=`head -1 $1`
        tail -n +2 $1 > $TMPFILE 
        echo ${HEAD} >> $TMPFILE
        mv $TMPFILE $1
fi
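A quick sanity-check of the script in action, using the example file from above:

$ cat movienight.txt
Charlotte
Mummy
Daddy
$ ./rotate.sh movienight.txt
$ cat movienight.txt
Mummy
Daddy
Charlotte
$ ./rotate.sh -r movienight.txt
$ cat movienight.txt
Charlotte
Mummy
Daddy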

And now we can automate it using a time-based rule in OpenHAB - each Saturday night at 9pm, as well as supporting rotation "by hand":

rule "Rotate Movie Night - weekly"
when
  Time cron "0 0 21 ? * SAT *"    
then
  logInfo("cron", "Rotating movie night...")
  executeCommandLine(
    "/home/pi/rotate.sh /var/lib/openhab2/movienight.txt")
end

rule "Adjust Movie Night"
when
  Item AdjustMovieNight received command
then
  val reverseFlag = if (receivedCommand == ON) "" else "-r"

  val results = executeCommandLine(
    "/home/pi/rotate.sh " + 
    reverseFlag + 
    " /var/lib/openhab2/movienight.txt", 5000)

  // If anything went wrong it will be displayed in the log:
  logInfo("AdjustMovieNight", "Results: " + results)
end

Now this is fine, but believe me when I tell you that having a text field available in a web page somewhere is simply not enough to achieve a winning SAF (Spousal Acceptance Factor). So onwards we must plunge into being able to ask the Google Home whose turn it is ...

Thursday, 31 January 2019

New Year, New Side Project

Launching into the new year with a clean-slate restart of an old side-project that I got stuck on a few years back. I'd built it using Scala and the Play Framework, as was my preference at that time. The problem was, this "purely server-side" solution was simply wrong for this particular problem, which would be far better served being entirely client-side (this will make more sense when I release it!).

Thus, the 2019 reboot of this project will be featuring and utilising all the front-end tech I've been enjoying in the last couple of years:

  • React - for me, the most enjoyable way to build dynamic websites; and with the introduction of Hooks, it's just getting better and better
  • Next.js - a framework that elegantly combines hand-in-glove with React, doing all the hard work to get code-splitting, server-side rendering and routing to be as easy as it can be
  • Zeit Now - from the makers of Next.js, this is like the next evolution of Heroku - push code and it's deployed. To a global CDN. Awesome
  • TypeScript - 2018's surprise packet was the explosive arrival of TypeScript as a serious way to write, build and ship JavaScript code. As a types fan and Babel-phobe from way back, this is a win for me in a couple of ways
  • Styled-Components with Styled-System - Although I could argue that CSS-in-JS isn't really needed on a one-man project, the power of Styled-Components combined with the control of Styled-System is just too nice to ignore

The output of all this tech will be a responsive PWA that you can "install" to your device and use completely offline. No network connection, no data, no native Android/iOS code. This is the future!

Thursday, 6 December 2018

Green Millhouse: OK Google, turn on the living room air conditioner

My Broadlink RM3 IR blaster has been working pretty well, so I thought I'd share how I've been using it with a couple of IR-controlled air conditioners in my house, to control them with the Google Home Assistant via OpenHAB.

The RM3 sits in a little niche that has line-of-sight to both these devices (a Daikin in the living room, and a Panasonic in the dining area). Using the RM-Bridge Android app, the fun2code website, and the method I've documented over on the OpenHAB forums, I've learnt the ON and OFF codes for each device, and put them into a map file under transforms/.
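The map file ends up looking something like this - the values are the long hex code strings captured from each remote, elided here for brevity:

PANASONIC_AIRCON_ON=2600500000012...
PANASONIC_AIRCON_OFF=2600500000012...
DAIKIN_AIRCON_ON=2600580000012...
DAIKIN_AIRCON_OFF=2600580000012...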

The "basic" way of invoking the commands is by using a String Item for your IR-blaster device, and a Switch in your sitemap, like this:

String RM3_MINI {channel="broadlink:rm3:34-ea-34-58-9d-5b:command"}

sitemap default label="My house" {
  Frame label="Aircon" {
    Switch item=RM3_MINI
           label="Dining Area"
           mappings=[PANASONIC_AIRCON_ON="On", PANASONIC_AIRCON_OFF="Off"]
  }
}
Which gives you this:

... which is completely fine for basic UI control. But if you want the Google Home Assistant (aka "OK Google") to be able to operate it, it won't work. The reason for this is that the Switchable trait that you have to give the item can only take the simple values ON and OFF, not a string like PANASONIC_AIRCON_ON. So while it *might* work if you named your remote control commands ON and OFF, you're hosed if you want to add a second switchable device.
The best solution I could find was to set up a second Item, which is literally the most basic Switch you can have. You can also give it a label that makes it easier to remember and say when commanding your Google Home device. You then use a rule to issue the "command" to match the desired item's state. I'll demonstrate the difference by configuring the Living Room aircon in this Google-friendly way:

String RM3_MINI {channel="broadlink:rm3:34-ea-34-58-9d-5b:command"}
Switch AC_LIVING_ONOFF "Living Room Air Conditioner" [ "Switchable" ] 

Notice the "label" in the Switch is the one that will be used for Google voice commands.

rule "Translate Living Room ON/OFF to aircon state"
when
  Item AC_LIVING_ONOFF changed 
then
  val isOn = (AC_LIVING_ONOFF.state.toString() == "ON")
  val daikinState = if (isOn) "DAIKIN_AIRCON_ON" else "DAIKIN_AIRCON_OFF"
  RM3_MINI.sendCommand(daikinState)
end
This rule keeps the controlling channel (RM3_MINI) in line with the human-input channel (the OpenHAB UI or the Google Home Assistant input). Finally the sitemap:

sitemap default label="My house" {
  Frame label="Aircon" {
    Switch item=AC_LIVING_ONOFF label="Living Room On/Off" 
  }
}

I quite like the fact that the gory detail of which command to send (the DAIKIN_AIRCON_ON stuff) is not exposed in the sitemap by doing it this way. You also get a nicer toggle switch as a result:

The final step is to ensure the AC_LIVING_ONOFF item is being exposed via the MyOpenHAB connector service, which in turn links to the Google Assistant integration. And now I can say "Hey Google, turn on the living room air conditioner" and within 5 seconds it's spinning up.

Wednesday, 7 November 2018

Green Millhouse - The Broadlink Binding part 3

Apologies to people who were interested in my Broadlink binding for OpenHAB 2.0 ... I migrated my code to the 2.4.0 family and things apparently stopped working. I was short on spare time, so I assumed something major had changed in the OpenHAB binding API and didn't investigate further. It turns out it was actually my own stupid fault: while refactoring to clean up the code, I'd managed to cut out the "happy path" that actually updated OpenHAB with values from the device <facepalm />.

Anyway, back into it now, with a couple of extra things rectified as well:

This version also keeps track of the ScheduledFuture that is used to periodically poll the Broadlink device, and correctly calls cancel() on it if the Broadlink binding is getting disposed. This should put an end to those "...the handler was already disposed" errors flooding the logs.

This version finally addresses the "reconnect" issues; i.e. when a device unexpectedly falls off the network, re-authentication with it would, more often than not, fail. The fix was to reset most of the state associated with the device once we lose contact with it. In particular, the packet counter, deviceId and deviceKey variables that we obtained from the previous authentication response.

As a result of this, my A1 Environmental Sensor seems to be working in all scenarios. Next up is long-term robustness testing while sorting out device discovery, making sure my RM3 Mini works, and checking that multiple heterogeneous devices all play nicely together.

This version reflects my testing with my RM3 Mini IR blaster. This device seems more sensitive to certain parameters in the authentication procedure - and it turned out that resetting the packet counter (as introduced in BETA-3) is not needed and can actually cause re-authentication to fail for this device. So there's actually less code in this release, but more functionality - which is always nice.
Device discovery is also working now. From the OpenHAB Paper UI, go to Configuration -> Things -> '+' -> Broadlink Binding and wait 10 seconds for the network broadcast to complete. If you have several Broadlink devices active on your network, it may take a couple of scans to pick them all up.

Small refactors for code cleanliness, e.g. grouping all the networking functions together in a single utility class. Also added specific polling code for the SP3 smart switch device; prior to now this was using the same polling code as an SP2 device, but based on comments on this post from Jorg, it has its own function now, with extra diagnostic logging to hopefully pinpoint what is going on. I suspect it doesn't reply in the same way as an SP2, so this is an "investigative" release; expect BETA-6 to follow with (hopefully) fixed support for the SP3. I may have to resort to buying one myself if we can't sort it out over the internet ;-)

Fixed misidentification of an SP3 switch as an SP2 (appears to have been a typo in the original JAR). After investigating Jorg's incorrect polling issue, I took a look over at the python-broadlink module code for inspiration, and lo and behold, there was more going on there. This module seems to have had a lot of love (42 contributors!) and seems to have addressed all the quirks of these devices, so I had no hesitations in implementing the changes; namely that testing payload[4] == 1 is not sufficient. The Python code also checks against 3 and 0xFD. Looks like the LS bit of that byte is the one that matters, but if this works, that's fine!
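If that hunch about the least-significant bit is right, the test presumably collapses to something like this (my sketch of the idea, not the binding's actual code):

// payload[4] has been observed as 1, 3 and 0xFD for "on";
// all three have the least-significant bit set
boolean isOn = (payload[4] & 0x01) == 0x01;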

Added a ton more weird-and-wonderful variants of the RM2, courtesy of the aforementioned Python library. Broadlink seem to be constantly updating their device names (RM2, RM2 Pro Plus, RM2 Pro Plus R1, RM2 Pro Plus 2, etc etc), giving a new identification code to each one. I can't fathom a method to their coding scheme, so for the moment, we just have to play catch-up. In the case where we can't identify what seems to be a Broadlink device, I'm now logging (at ERROR level) the identification code we found, so that people out there can feature-request the support for their device.

With thanks to the excellent GitHub contributor FreddyFox, querying SP2/SP3 switch state should now be working. The old code attempted to decode the state of the switch from the encrypted payload, when of course it needs to be decrypted first. Thanks again, FreddyFox!

The main feature of this version is support for Dynamic IP Addresses.
There is a new switch available under the advanced Thing properties (open the SHOW MORE area in PaperUI) - it’s marked “Static IP” and it defaults to ON. If you are on a network where your Broadlink device might periodically be issued a different IP address, move it to the OFF position.
Now, if we lose communications with this Thing, rather than mark it OFFLINE, we’ll go into a mini-Discovery mode, and attempt to find the device on the network again, using its MAC address (which never changes). If we find the device, we update its current IP address in its Thing config, and everything continues without a hiccup. If we fail to find it, it goes OFFLINE as normal.
I should give credit to the LIFX binding which uses a very similar scheme - by chance I happened to come across it and realised it would be great for Broadlink devices too. As an extra bonus, it prompted me to redesign the entire discovery system and make it asynchronous; as a result it is now MUCH more reliable at finding devices (particularly if you have multiple Broadlink devices on your network) without having to scan repeatedly.

This version fixes a few small issues:
  • RM3 devices can now have a polling frequency specified (this was an omission from the thing-types.xml configuration file)
  • Thing logging extracted to its own class and improved: Device status ONLINE/OFFLINE/undetermined shown in logs as ^/v/?
  • Each ThingHandler's network socket is now explicitly closed when we lose contact with the device. This seems to help subsequent reconnections.

This version attempts to improve general reliability. The Broadlink network protocol always acknowledges every command sent to a device, so logically, the sendPacket() and receivePacket() functions have been coalesced to sendAndReceivePacket(). This in turn allowed for a simple retry mechanism to be implemented. If we time out waiting for a response from the device, we immediately retry, sending the packet again. Together with some improved logging, this should hopefully be enough to fix (or at least understand) devices prematurely being marked "offline" on unreliable networks.
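Conceptually, the retry is nothing fancier than this (a sketch; sendOnce() is a hypothetical stand-in for the real send/receive code):

// One immediate retry on timeout, leaning on the fact that the
// Broadlink protocol acknowledges every command sent to a device
private byte[] sendAndReceivePacket(byte[] packet) throws IOException {
    try {
        return sendOnce(packet);
    } catch (SocketTimeoutException e) {
        logger.warn("No response from device; retrying once");
        return sendOnce(packet);
    }
}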

Here's the latest JAR anyway, put it into your addons directory and let me know in the comments how it goes for you:

Friday, 24 August 2018

Building a retro game with React.js. Part 5 - Fill 'er up

By now I had a good portion of the "player's part" of the game complete. Movement around the game area, drawing lines, detecting intersection with existing lines and preventing illegal movement were all done, using the div-based "drawing" that I was familiar with. But now it was time to draw the filled colour block that results when a closed area has been completed. And I was looking at a wall of complexity that I didn't know how to scale.

The answer was to challenge my own comfort zone. Allow me to quote myself:
I could do that with an HTML canvas, but I'm (possibly/probably wrongly) not going to use a canvas. I'm drawing divs dammit!
After all, if ever there was a time and place to learn how to do something completely new, isn't it a personal "fun" project? After a quick scan around the canvas-in-React landscape I settled upon react-konva, which has turned out to be completely awesome for my needs, as evidenced by how few code changes were actually needed to go from my div-based way of drawing a line to the Konva way:
The old way:

const Line = styled('div')`
  position: absolute;
  border: 1px solid ${FrenzyColors.CYAN};
  box-sizing: border-box;
  padding: 0;
`;

renderLine = (line, lineIndex) => {
  const [top, bottom] = minMax(line[2], line[4]);
  const [left, right] = minMax(line[1], line[3]);
  const height = (bottom - top) + 1;
  const width = (right - left) + 1;
  return <Line key={`line-${lineIndex}`}
                  style={{ top: `${top}px`, 
                           left: `${left}px`,
                           width: `${width}px`,
                           height: `${height}px` }} />;
};
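(minMax isn't shown in the post - presumably it's just a tuple-ordering one-liner along these lines:)

// Hypothetical reconstruction: return the two arguments in ascending order
const minMax = (a, b) => (a < b ? [a, b] : [b, a]);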

The new way:

import { Line } from 'react-konva';

renderLine = (line, lineIndex) => {
  return (
    // Reconstructed return: the tail of my line tuple is already in
    // Konva's flat [x1, y1, x2, y2] points format
    <Line key={`line-${lineIndex}`}
          points={line.slice(1)}
          stroke={FrenzyColors.CYAN}
          strokeWidth={1} />
  );
};

... and the jewel in the crown? Here's how simple it is to draw a closed polygon with the correct fill colour; by total fluke, my "model" for lines and polygons is virtually a one-to-one match with the Konva Line object, making this amazingly straightforward:
renderFilledArea = (filledPolygon) => {
  const { polygonLines, color } = filledPolygon;
  const flatPointsList = polygonLines.reduce((acc, l) => {
    const firstPair = l.slice(1, 3);
    return acc.concat(firstPair);
  }, []);

  return (
    // Reconstructed return: a closed, filled Konva Line
    <Line points={flatPointsList}
          closed={true}
          fill={color} />
  );
};
Time Tracking
Probably at around 20 hours of work now.