Saturday 28 December 2019

SOTSOG 2019

It's been quite a while since I last did a SOTSOG awards ceremony but there have definitely been some standouts for Standing On The Shoulders Of Giants this year. So without any further ado:

React.js

I wouldn't consider building a front-end with anything else. Hooks have been a game-changer, taking the declarative approach to the next level.
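To illustrate the appeal - a minimal sketch (the component is invented, but the shape is exactly what hook-based React looks like):

import React, { useState } from 'react';

// A trivial counter: local state declared inline, no class boilerplate
const Counter = () => {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
};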

Apollo GraphQL

Version 2.x of the reference GraphQL client library brought vast improvements, and the React Hooks support fits in beautifully. Combine it with top-notch documentation and you've got a stone-cold winner for efficient and elegant data communications.
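As a taste of that hooks support, here's a minimal sketch (the query and component are invented for illustration):

import React from 'react';
import { useQuery } from '@apollo/react-hooks';
import gql from 'graphql-tag';

// An invented query - the shape and names are illustrative only
const GET_MOVIES = gql`
  query GetMovies {
    movies {
      id
      title
    }
  }
`;

const MovieList = () => {
  // useQuery handles fetching, caching, and loading/error states
  const { loading, error, data } = useQuery(GET_MOVIES);
  if (loading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong.</p>;
  return (
    <ul>
      {data.movies.map(movie => (
        <li key={movie.id}>{movie.title}</li>
      ))}
    </ul>
  );
};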

Next.js and now.sh

The dynamic duo from Zeit have almost-instantly become my go-to for insanely-quick prototyping and deployment. The built-in Lambda deployment system is ridiculously good. Just try it - it's free.

Honourable Mentions

TypeScript and ES6

For making JavaScript feel like a grown-up language: real compile-time type safety and protection from the dreaded undefined is not a function, plus the functional goodness of map(), reduce() and Promises, without pain.
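For instance, this kind of pipeline is now utterly painless (a trivial, invented example):

const scores = [3, 5, 8];

// map() and reduce() compose cleanly ...
const total = scores.map(s => s * 2).reduce((sum, s) => sum + s, 0); // 32

// ... and Promises (with async/await) tame asynchrony without callback hell
const fetchTotalLater = () =>
  new Promise(resolve => setTimeout(() => resolve(total), 100));

fetchTotalLater().then(t => console.log(`Total: ${t}`));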

Visual Studio Code

Two Microsoft products getting a nod?!? Amazing to see - a stunning turnaround this decade from Redmond. VSCode has simply exploded onto most developers' launch bars thanks to its speed, flexibility, incredible rate of feature development and enormous plugin community. At this price point, it's very hard to look further for an IDE.

Friday 22 November 2019

Micro-optimisations

I finally got around to setting up some aliases for git commands that I issue many, many times a day. Can't believe it's taken me this long to do it. I've also placed them in a file in my Dropbox so I'll always be able to add them to any machine I work on regularly.

alias gs="git status"
alias gcm="git checkout master"
alias gp="git pull"
alias gd="git diff"
alias gcam="git commit -am"

Although I have a few other oft-used and favourite git commands, namely:

  • git push - to push code to the server
  • git merge master - to merge the code from master with the code on this branch
  • git checkout - - to switch to the previously-used branch (analogous to cd -)

I consider them either too powerful to be made dangerously accessible (in the first two cases) or, conversely (and perhaps counter-intuitively), too valuable to "forget" behind an alias (in the last case). I only recently discovered the - option to git checkout, but it's so good I'm deliberately trying to burn it into my brain.

This actually harks back to my first ever job as a professional engineer; we were using the mighty and fearsome ClearCase version control system, and I was tempted to shortcut some of the arcane commands required, but my manager (very wisely) cautioned similarly against aliasing away complexity. Don't underestimate the power of repetition for both muscle- and conventional memory!

Sunday 27 October 2019

What's in a name?

You've probably seen the old joke.
There are only 2 hard problems in Computer Science:

  1. Naming things
  2. Cache invalidation
  3. Off-by-one errors

But like every good joke there's a substantial core of truth in it. And there's a good reason why Naming Things is top of that list. It's hard. And with most geeks being complexity-addicts, we like to ladle additional difficulties on top of the naming problem; for example:

  • Compiler restrictions - Java insists on one-class-per-file, with a match between filename and class name
  • Linter rules - Requiring interfaces to start with I or implementations with Impl
  • Usability problems - if all your files are called index.js (because they live in different directories), navigating your text editor's tabs becomes a challenge
  • Readability problems - Spending too long with AbstractInjectedDAOFactoryDecorator and AbstractInjectedDAOFactoryDecoratorBuilder-type names will wear out your sanity even faster than your keyboard

But the thing is, it's worth it. Especially in the increasingly-decomposed, microservicey, lambda-esque landscape, being able to latch onto a variable and/or class because its name is well-chosen and descriptive, can be the difference between a laser-like open file by name, make change, test, commit, DONE and a flailing find in all files, open a dozen of them, trace execution path, make speculative change, get it wrong, repeat until somehow fixed loop.

Better yet, as your team grows in experience, your shared vocabulary grows with it. Those well-named classes become a shorthand; you can use them as examples or references and everyone knows what you're on about - much like the Gang of Four Design Patterns are/were all about. I say "were" because of the 23 original GoF patterns, I would argue only a handful are in common usage today and have achieved the universality of understanding the authors were hoping for: Singleton, Iterator, Observer (although probably referred to as publisher-subscriber), Adapter/Proxy (often used interchangeably) and Factory Method.

I note that the design patterns that have truly "taken off" are, without exception, the "simple" ones. I mean, look at Adapter (diagrams from the Black Wasp site also linked above):


... and now compare with Abstract Factory:

All those classes. All of them need names. And in Java at least, each one living in its own file. Possibly in different directories too. Is it any wonder that people have shied away from these complex patterns?

Sunday 8 September 2019

I like to watch

Remember how computers were supposed to do the work, so people had time to think?

Do you ever catch yourself operating in a tight loop, something like:

  10 Edit file
  20 Invoke [some process] on file
  30 Read output of [some process]
  40 GOTO 10
Feels a bit mechanical, no? Annoying having to switch from the text editor to your terminal window, yes?

Let's break down what's actually happening in your meatspace loop*:

  10 Edit file in text editor
  15 Save file (e.g. Ctrl-S)
  17 Change active window to Terminal (e.g. Cmd/Alt+Tab)
  20 Press [up-arrow] once to recall the last command
  22 Visually confirm it's the expected command
  25 Hit [Enter] to execute the command
  30 Wait for the command to complete
  35 Interpret the output of the command
  40 Cmd/Alt+Tab back to the editor
  45 GOTO 10
That's even more blatant! Most of the work is telling the computer what to do next, even though it's exactly the same as the last iteration.
(*) Props to anyone recognising the BASIC line-numbering reference 😂

Wouldn't something like this be better?

  10 Edit file in text editor
  15 Save file (e.g. Ctrl-S)
  20 Swivel eyeballs to terminal window
  30 Wait for the command to complete
  35 Interpret the output of the command
  40 Swivel eyeballs back to editor
  45 GOTO 10

It can be better

If you've got npm on your system somewhere it's as simple as:

  $ npm install -g watch
and arranging your UI windows suitably. Having multiple monitors is awesome for this. Now by invoking:
  $ watch '[processing command with arguments]' dir1 dir2 ... dirN
you have the machine on your side. As soon as you save any file in dir1, dir2 etc, the command will be run for you. Here are some examples:

Validate a CircleCI build configuration
You're editing the .circleci/config.yml of a CircleCI continuously-integrated project. These YAML files are notoriously tricky to get right (whitespace matters...🙄) - so you can get the circleci command-line tool to check your work each time you save the file:
  $ brew install circleci
  $ watch 'circleci config validate' .circleci
Validate a Terraform configuration
You're working on a Terraform infrastructure-as-code configuration. These .TF files can have complex interrelationships - so you can get the terraform command-line tool to check your work each time you save the file:
  $ brew install terraform
  $ watch 'terraform validate' .
Auto-word-count an entire directory of files
You're working on a collection of files that will eventually be collated together into a single document. There's a word-limit applicable to this end result. How about running wc to give you a word-count whenever you save any file in the working directory?
  $ watch 'wc -w *.txt' .

Power tip

Sometimes, the command in your watch expression is so quick (and/or its output so terse) that you can't tell whether you're seeing the most-recent output. One way of solving this is to prefix the time-of-day to the output - a quick swivel of the eyeballs to the system clock will confirm which execution you're looking at:

  $ watch 'echo `date '+%X'` `terraform validate`' .
  > Watching .
  13:31:59 Success! The configuration is valid. 
  13:32:23 Success! The configuration is valid.
  13:34:41 Success! The configuration is valid.
  

Saturday 31 August 2019

Serviio 2.0 on Raspbian "Buster"

After a recent SD card corruption (a whole-house power outage while the Raspberry Pi Model 3B must have been writing to the SD card), I have been forced to rebuild the little guy. It's been a good opportunity to get the latest-and-greatest, even if it means the various instructional guides are sometimes slightly out-of-date. Here's what I did to get the Serviio Media Streamer, version 2.0, to work on Raspbian Buster Lite (*):

Install dependencies

Coming from the "lite" install, we'll need a JDK, plus the various encode/decode tools Serviio uses to do the heavy lifting:

$ sudo su
# apt-get update
# apt-get install ffmpeg x264 dcraw
# apt-get install --no-install-recommends openjdk-11-jdk

Download and unpack Serviio 2.0

This will result in serviio being located in /opt/serviio-2.0:
# wget http://download.serviio.org/releases/serviio-2.0-linux.tar.gz
# tar -xvzf serviio-2.0-linux.tar.gz -C /opt

Set up the Serviio service in systemd

Create a serviio user, and give them ownership of the install directory:
# useradd -U -s /sbin/nologin serviio
# chown -R serviio:serviio /opt/serviio-2.0
Create a file serviio.service with the following contents:
[Unit]
Description=Serviio media Server
After=syslog.target network.target

[Service]
User=serviio
ExecStart=/opt/serviio-2.0/bin/serviio.sh
ExecStop=/opt/serviio-2.0/bin/serviio.sh -stop

[Install]
WantedBy=multi-user.target
Copy it into position, enable the service, and reboot:
# cp serviio.service /etc/systemd/system
# systemctl enable serviio.service
# reboot

Verify, Configure, Enjoy

After reboot, check things are happy:
$ sudo systemctl status serviio.service 
● serviio.service - Serviio media Server
   Loaded: loaded (/etc/systemd/system/serviio.service; enabled;)
   Active: active (running) since Sat 2019-08-31 16:54:48 AEST; 7min ago
   ...
It's also very handy to watch the logs while using Serviio:
$ tail -f /opt/serviio-2.0/logs/serviio.log
This is also a good time to set up the filesystem mount of the video media directory from my NAS, which the NAS user naspi has been given read-only access to:
 
  $ sudo apt-get install cifs-utils
  $ sudo vim /etc/fstab

(add line:)

//mynas/video /mnt/NAS cifs username=naspi,password=naspi,auto
Next, use a client app (I enjoy the Serviidroid app for Android devices) to locate and configure the instance, remembering that paths to media directories are always from the Pi's point of view, e.g. /mnt/NAS/movies if using the above example mount.

* This guide is very much based on the linuxconfig.org guide for Serviio 1.9, with updates as needed.

Saturday 27 July 2019

React: 24 years' experience ;-)

I wrote my first HTML page in 1995. Unfortunately I don't have a copy, but given the era you can be pretty confident it would have consisted of a wall of Times New Roman text in sizes h1, h2 and p broken up with some <hr>s, a repeated background image in a tiled style, and, down at the bottom of the page, an animated Under Construction GIF.

My mind turns to that first img tag I placed on the page. I would have Yahoo-ed or AltaVista-ed (we're well before the Googles, remember) for a reference on HTML syntax, and awkwardly vi-ed this line into existence:

  <img src="images/undercon.gif" width="240" height="80">

Undoubtedly I would have forgotten a double-quote or two, got the filename wrong and/or messed up the sizing at first, but eventually, my frustrated bash of the [F5] key would have shown the desired result in Netscape Navigator, and delivered me a nice little dopamine hit to boot. I learnt something, I built something, and it worked. Addictive.

24 years later while reflecting on how far we've come in many areas, I've just realised one of the many reasons why I enjoy developing React.js apps so much. Here's a random snippet of JSX, the XML-like syntax extension that is a key part of React:

  <CancelButton bg="red" color="white" width="6em" />

Seem a little familiar?

One of the most successful parts of the React world is the "component model" which strives to break pages down into small, composable, testable elements, combined with a "declarative style" where, to quote React demigod Dan Abramov:

"we can describe the process at all points in time simultaneously"

If you've written any HTML before, writing JSX feels totally frictionless, because the <img> tag has been encapsulating complex behaviour (namely, converting a simple string URL into a web request, dealing with errors, and rendering the successfully-fetched data as a graphical image with the desired attributes) in a declarative way, since 1993.
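And the encapsulation carries straight over. Here's a minimal sketch of how a component like the CancelButton above might be defined (the implementation is invented - the real thing could do anything behind the same declarative facade):

import React from 'react';

// The props (bg, color, width) map declaratively onto rendered output,
// just as src, width and height do for <img>:
const CancelButton = ({ bg, color, width }) => (
  <button style={{ backgroundColor: bg, color, width }}>
    Cancel
  </button>
);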

And it turns out to have been another web god, Marc Andreessen, who originally suggested the basic form of the <img> tag. Truly we are standing on the shoulders of giants.

Saturday 29 June 2019

Things I will never do again

Again reflecting on twenty years in the software development industry, there are a number of things I can be pretty certain I will never do again. Either because "we don't do things like that any more" or because I will simply refuse. Sadly, it's often the latter!

In no particular order:

  • Work on a codebase with tens/hundreds of thousands of lines of code and no unit tests
  • Deploy production code to a Windows Server that faces the Internet
  • Build production code on my development machine
  • Work alongside someone whose title is XML Architect
  • Use JBoss, or indeed any "Application Server"
  • Do meaningful work on a project without signing a contract and/or being paid for at least some portion of it
  • Have to wear a suit and tie
  • Have to log into Jenkins slave machines to delete files because they've run out of disk
  • Configure builds in Jenkins by tediously pointing and clicking
  • Work in a building in the centre of a big city, where an entire floor is devoted to hosting and running racks of servers, storage, switches etc
  • Have to ensure that a website works on a browser with JavaScript disabled because "accessibility"
  • Write code that has no tangible benefit to users, but will "game" an executive's KPIs in order to achieve a bonus

Tuesday 28 May 2019

Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 4 - The Rethink)

The "whose turn is it?" system was working great, and the kids loved it, but the SAF (Spousal Acceptance Factor) was lower than optimal, because she didn't trust that it was being kept up-to-date. We had a number of "unusual" weekends where we didn't have a Movie Night, and she was concerned that the "roll back" (which of course, has to be manually performed) was not being done. The net result of which being, a human still had to cast their mind back to when the last movie night was, whose turn it was, and what they chose! FAIL.

Version 2 of this system takes these human factors into account, and leverages the truly "conversational" aspect of Dialogflow to actually extract NOUNS from a conversation and store them in OpenHAB. Instead of an automated weekly rotation scheme that you ASK for information, the system has morphed into a TELL interaction. When it IS a Movie Night, a human TELLS the Google Home Mini somewhat like this:

Hey Google, for Movie Night tonight we watched Movie Name. It was Person's choice.

or;

Hey Google, last Friday it was Person's turn for Movie Night. We watched Movie Name.

To do this, we use the "parameters" feature of Dialogflow to punch the nouns out of a templated phrase. It's not quite as rigid as it sounds, thanks to the machine-learning magic that Google runs on your phrases when you save them in Dialogflow. Here's how it's set up, starting with the training phrases:

Kudos to Google for the UI and UX of this tricky stuff - it's extremely intuitive to set up, and easy to spot errors thanks to the use of coloured regions. Here's where the parameters get massaged into a suitable state for my webhook Lambda. Note the conversion into a single pipe-separated variable (requestBody) which is then PUT into the OpenHAB state for the item that has the same name as this Intent, e.g. LastMovieNight.
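In lambda form, that massaging amounts to something like this sketch (the Dialogflow parameter names person, date and movieName are my illustrative assumptions; the endpoint is OpenHAB's standard REST API):

const bent = require('bent');

async function storeMovieNight(webhookBody, authHeader) {
  // The intent name doubles as the OpenHAB item name, e.g. LastMovieNight
  const intent = webhookBody.queryResult.intent.displayName;
  const { person, date, movieName } = webhookBody.queryResult.parameters;

  // Collapse the extracted nouns into a single pipe-separated string ...
  const requestBody = `${person}|${date}|${movieName}`;

  // ... and PUT it into the item's state via the MyOpenHAB REST API
  const put = bent('PUT', 202, {
    'Authorization': authHeader,
    'Content-Type': 'text/plain',
  });
  await put(`https://myopenhab.org/rest/items/${intent}/state`, requestBody);
}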

Within OpenHAB, almost all of the complexity in working out "who has the next turn" is now gone. There's just a tiny rule that, when the item called LastMovieNight is updated (i.e. by the REST interface), appends it to a "log" file for persistence purposes:

rule "Append Last Movie Night"
when
    Item LastMovieNight received update
then 
    executeCommandLine(
      "/home/pi/writelog.sh /var/lib/openhab2/movienight-logs.txt " + 
      LastMovieNight.state, 
      5000) 
end

(writelog.sh is just a script that effectively does echo ${2} >> $1 - it seems OpenHAB's executeCommandLine should really be called executeScript, because you can't use shell features like redirection directly).

The flip side is being able to query the last entry. In this case the querying side is very straightforward, but the trick is splitting out the |-separated data into something that the Google Home can speak intelligibly. I've seen this called "having a good VUI" (Voice User Interface) so let's call it that.

Given that querying the MyOpenHAB interface for /rest/items/LastMovieNight/state returns:

Sophie|2019-05-26T19:00:00+10:00|Toy Story 2

I needed to be able to "slice" up the pipe-separated string into parts, in order to form a nice sentence. Here's what I came up with in the webhook lambda:

...
const { restItem, responseForm, responseSlices } = 
  webhookBody.queryResult.parameters;
...
// omitted - make the REST call to /rest/items/${restItem}/state,
// and put the resulting string into "body"
...
if (responseSlices) {
   const expectedSlices = responseSlices.split('|');
   const bodySlices = body.split('|');
   if (expectedSlices.length !== bodySlices.length) {
     fulfillmentText = `Didn't get ${expectedSlices.length} slices`;       
   } else {
     const responseMap = expectedSlices.map((es, i) => {
       return { name: es, value: bodySlices[i] } 
     });

      fulfillmentText = responseMap.reduce((accum, pair) => {
        // Substitute each $NAME placeholder in responseForm with its value
        const regex = new RegExp(`\\\$${pair.name}`);
        let replacementValue = pair.value;
        if (pair.name === 'RELATIVE_DATE') {
          // Special case: speak dates conversationally, e.g. "3 days ago"
          replacementValue = moment(pair.value).fromNow();
        }
        return accum.replace(regex, replacementValue);
      }, responseForm);
   }
}

Before I try and explain that, take a look at how it's used:
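In essence, the intent supplies three parameters to the lambda - something like this (values reconstructed for illustration):

const exampleParameters = {
  restItem: 'LastMovieNight',
  responseSlices: 'NAME|RELATIVE_DATE|MOVIE',
  responseForm:
    'The last movie night was $RELATIVE_DATE, when $NAME chose $MOVIE',
};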

The whole thing hinges on the pipe-separators. By supplying a responseSlices string, the caller sets up a mapping of variable names to array slices, the corresponding values of which are then substituted into the responseForm. It's completely neutral about what the variable names are, with the one exception: if it finds a variable named RELATIVE_DATE it will treat the corresponding value as a date, and apply the fromNow() function from moment.js to give a nicely VUI-able string like "3 days ago". The result of applying these transformations to the above pipe-separated string is thus:

"The last movie night was 3 days ago, when Sophie chose Toy Story 2"

Job done!

Sunday 28 April 2019

Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 3)

In this third part of my mini-series on life-automation via hacking home-automation, I want to show how I was able to ask our Google Home whose turn it was for movie night, and have "her" respond with an English sentence.

First a quick refresher on what we have so far. In part 1, I set up an incredibly-basic text-file-munging "persistence" system for recording the current person in a rota via OpenHAB. We can query and rotate (both backwards and forwards) the current person, and there's also a cron-like task that rotates the person automatically once a week. The basic pattern can be (and has been!) repeated for multiple weekly events.

In part 2, I exposed the state of the MovieNight item to the "outside world" via the MyOpenHAB RESTful endpoint, and then wrote a lambda function that translates a Google Dialogflow webhook POST into a MyOpenHAB GET for any given "intent"; resulting in the following architecture:

Here are the pertinent screens in Dialogflow where things are set up.

First, the "training phrases" which guide Google's machine-learning into picking the correct Intent:

On the Fulfillment tab, I specify the URL of the now.sh webhook handler and feed in the necessary auth credentials (which it proxies through to OpenHAB):

Under Integrations -> Google Assistant -> Integration Settings, I "export" the Intents I want to be usable from the Google Home:

The final piece of the puzzle is invoking this abomination via a voice command. Within the Dialogflow console it is very straightforward to test your 'fulfillment' (i.e. your webhook functionality) via typing into the test panel on the side, but actually "going live" so you can talk with real hardware requires digging in a little deeper. There's a slightly-odd relationship between the Google Actions console (which is primarily concerned with getting an Action into the Actions Directory) and the Dialogflow console (which is all about having conversations with "agents"). They are aware of each other to a pretty-good extent (as you'd hope for two sibling Google products) but they are also a little confusing to get straight in your head and/or working together.

You need to head over to the Actions Console to actually "release" your helper so that a real-life device can use it. An "Alpha" release makes sure random people on the internet can't start using your private life automation software!

I really wanted to be able to ask the Google Assistant in a conversational style; "Hey Google, whose turn is it for Movie Night this week?" - in the same way one can request a Spotify playlist. But it turns out to be effectively-impossible to have a non-publicly-released "app" work in this way.

Instead the human needs to explicitly request to talk to your app. So I renamed my app "The Marshall Family Helper" to make it as natural-sounding as it can be. A typical conversation will now look like this:

Human: "Hey Google, talk to The Marshall Family Helper"

Google: "Okay, loading the test version of The Marshall Family Helper"

(Short pause)

{beep} "You can ask about Movie Night or Take-Away"

"Whose turn is it for movie night?"

(Long pause)

"It's Charlotte's turn"

{beep}

Some things to note. The sentence after the first {beep} is what I've called my "Table of Contents" intent - it is automatically invoked when the Marshall Family Helper is loaded - as discovery is otherwise a little difficult. The "short pause" is usually less than a second, and the "long pause" around 3-4 seconds - this is a function of the various latencies as you can see in the system diagram above - it's something I'm going to work on tuning. At the moment now.sh automatically selects the Sydney point-of-presence as the host for my webhook lambda, which would normally be excellent, but as it's being called from Google and making a call to MyOpenHAB, I might spend some time finding out where geographically those endpoints are and locating the lambda more appropriately.

But, it works!

Saturday 30 March 2019

Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 2)

So in the first part of this "life-automation" mini-series, we set up some OpenHAB items that kept track of whose turn it was to do a chore or make a decision. That's fine, but not super-accessible for the whole family, which is where our Google Home Mini comes in.

First (assuming you've already configured and enabled the OpenHAB Cloud service to expose your OpenHAB installation at myopenhab.org), we add MovieNight to the items exposed to the MyOpenHAB site. To do this, use the PaperUI to go to Services -> MyOpenHAB and add MovieNight to the list. Note that it won't actually appear at myopenhab.org until the state changes ...

Next, using an HTTP client such as Postman, we hit https://myopenhab.org/rest/items/MovieNight/state (sending our email address and password in a Basic Auth header) and sure enough, we get back Charlotte.

Unfortunately, as awesome as it would be, the Google Home Assistant can't "natively" call a RESTful API like the one at MyOpenHAB, but it CAN if we set up a custom Action to do it, via a system called Dialogflow. This can get very involved, as it is capable of amazing levels of "conversation", but here's how I solved this for my simple interaction needs:

So over in the Dialogflow console, we set up a new project, which will use a webhook for "fulfillment", so that saying "OK Google, whose turn is it for movie night?"* will result in the MovieNight "Intent" firing, making a webhook call over to a now.sh lambda, which in turn makes the RESTful request to the MyOpenHAB API. Phew!

I've mentioned now.sh before as the next-generation Heroku - and until now have just used it as a React App serving mechanism - but it also has sleek backend deployment automation (that's like Serverless minus the tricksy configuration file) that was just begging to be used for a job like this.

The execution environment inside a now.sh lambda is super-simple. Define a function that takes a Node request and response, and do with them what you will. While I really like lambdas, I think they are best used in the most straightforward way possible - no decision-making, no state - a pure function of its inputs that can be reasoned about for all values over all time at once (a really nice way of thinking about the modern "declarative" approach to writing software that I've stolen from the amazing Dan Abramov).

This particular one is a little gem - basically proxying the POSTed webhook call from Google, to a GET of the OpenHAB API. Almost everything this lambda needs is given to it - the Basic authentication header from Google is passed straight through to the OpenHAB REST call, the URL is directly constructed from the name of the intent in the webhook request, and the response from OpenHAB gets plopped into an English sentence for the Google Assistant to say. The only real snag is that the body of the POST request is not made directly available to us, so I had to add a little helper to provide that:

'use strict';

const bent = require('bent');

// Helper function to get the body from a POST
function processPost(request, response, callback) {
  var queryData = "";
  if (typeof callback !== 'function') return null;

  if (request.method == 'POST') {
    request.on('data', function (data) {
      queryData += data;
      if (queryData.length > 1e6) {
        queryData = "";
        response.writeHead(413, { 'Content-Type': 'text/plain' });
        response.end();
        request.connection.destroy();
      }
    });

    request.on('end', function () {
      callback(queryData);
    });

  } else {
    response.writeHead(405, { 'Content-Type': 'text/plain' });
    response.end();
  }
}

// Proxy a Dialogflow webhook request to an OpenHAB REST call
module.exports = async (request, response) => {
  processPost(request, response, async (bodyString) => {
    const requestBody = JSON.parse(bodyString);
    const intent = requestBody.queryResult.intent.displayName;
    const uri = `https://myopenhab.org/rest/items/${intent}/state`;
    const auth = request.headers['authorization'];

    console.log(`About to hit OpenHAB endpoint: ${uri}`);
    
    const getString = bent('string', { 'Authorization': auth });    
    const body = await getString(uri);

    console.log(`OpenHAB response: ${body}`);

    const json = {
      fulfillmentText: `It's ${body}'s turn.`,
    };
    const jsonString = JSON.stringify(json, null, 2);
    response.setHeader('Content-Type', 'application/json'); 
    response.setHeader('Content-Length', jsonString.length); 
    response.end(jsonString); 
  });
};

It returns the smallest valid JSON response to a Dialogflow webhook request - I did spend some time with the various client libraries available to do this, but they seemed like overkill when all that is needed is grabbing one field from the request and sending back one line of JSON!
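For reference, the one field read and the one field returned look like this (values invented, field names as per the code above):

// The single field the lambda reads from the Dialogflow webhook request:
const exampleRequest = {
  queryResult: {
    intent: { displayName: 'MovieNight' }, // becomes the OpenHAB item name
  },
};

// ... and the single-field JSON response it sends back:
const exampleResponse = {
  fulfillmentText: "It's Charlotte's turn.",
};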

We're almost there! Now to wire up this thing so we can voice-command it ...


(*) That's the theory at least - see Part 3 for the reality ...

Thursday 28 February 2019

Whose Turn Is it? An OpenHAB Hack (part 1)

As my young family grows up, we have our little routines - one of which is the weekly Movie Night. On a rotating basis, each family-member gets to choose the movie that we'll watch, as a family, on a Saturday night. Looking at other screens is not allowed during this time - it's a Compulsory Family Fun Night if you like. The thing is, maybe I'm getting too old, but it frequently seems very difficult to remember whose turn it is. Maybe we skipped a week due to some other activity, or nobody can remember exactly because it was a group decision. Anyway, something that computers are especially good at is remembering things, so I decided to extend my existing OpenHAB home (device) automation to include home process automation too!

Unlike the similarly-named Amazon Alexa "skill" which appears to a) be totally random and b) not actually work very well, I wanted something that would intelligently rotate the "turn" on a given schedule (weekly being my primary requirement). I also wanted to keep the essentials running locally, on the Raspberry Pi that runs my OpenHAB setup. I'm sure you could move this entirely into the cloud should you wish, but doing it this way has allowed me to start with the basics and scale up.

First step: create a simple text file with one participant name per line: ${OPENHAB_USERDATA}/movienight.txt (i.e. /var/lib/openhab2/movienight.txt on my system):

Charlotte
Mummy
Daddy
Sophie
Make sure that the openhab user can read and write it.

Now we use the exec binding to create a Thing that reads the first line of this file via the head command-line tool, once every 6 hours (21600 seconds). Unfortunately as you'll see in all the snippets below, there seems to be no way to access environment variables when defining these file locations; so while I'd love to write ${OPENHAB_USERDATA}/movienight.txt, I have to use the hard-coded path: /var/lib/openhab2/movienight.txt.

$OPENHAB_CONF/things/householdrota.things:

Thing exec:command:movienight "Movie Night" @ "Living Room" 
  [command="head -1 /var/lib/openhab2/movienight.txt", 
   interval=21600,
   timeout=5, 
   autorun=true
]

Here are the items that fetch, display and adjust the current movie night, respectively. It's useful to be able to adjust the rotation - for example, if we skipped a week, we need to back out the automatically-changed value.

$OPENHAB_CONF/items/householdrota.items:
Switch FetchMovieNight {channel="exec:command:movienight:run"}

String MovieNight "Whose turn is it?" 
  {channel="exec:command:movienight:output"}

Switch AdjustMovieNight

We expose the items in the sitemap:

$OPENHAB_CONF/sitemaps/default.sitemap:
  ...
  Frame label="Household rotas" {
    Text item=MovieNight label="Whose Movie Night is it?"
    Switch item=AdjustMovieNight
           label="Adjust Movie Night"
           mappings=[ON="Rotate", OFF="Unrotate"]
  }
  ...
Which results in the following in Basic UI:

Now for the weekly-rotation part. First, a simple Bash script to rotate the lines of a text file such as the one above. That is, after running ./rotate.sh movienight.txt, the topmost line becomes the bottom-most:

Mummy
Daddy
Sophie
Charlotte
/home/pi/rotate.sh:
#!/bin/bash

TMPFILE=$(mktemp)
if [[ $# -eq 2 ]] 
then
        # Assume a -r flag provided: Reverse mode
        TAIL=`tail -n 1 $2`
        echo ${TAIL} > $TMPFILE
        head -n -1 $2 >> $TMPFILE 
        mv $TMPFILE $2
else
        HEAD=`head -1 $1`
        tail -n +2 $1 > $TMPFILE 
        echo ${HEAD} >> $TMPFILE
        mv $TMPFILE $1
fi

And now we can automate it using a time-based rule in OpenHAB - each Saturday night at 9pm, as well as supporting rotation "by hand":


$OPENHAB_CONF/rules/householdrota.rules:
rule "Rotate Movie Night - weekly"
when
  Time cron "0 0 21 ? * SAT *"    
then 
  logInfo("cron", "Rotating movie night...")
  executeCommandLine(
    "/home/pi/rotate.sh /var/lib/openhab2/movienight.txt"
  )
  FetchMovieNight.sendCommand(ON);
end


rule "Adjust Movie Night"
when
  Item AdjustMovieNight received command
then 

  val reverseFlag = if (receivedCommand == ON) "" else "-r"

  val results = executeCommandLine(
    "/home/pi/rotate.sh " + 
    reverseFlag + 
    " /var/lib/openhab2/movienight.txt", 5000)

  // If anything went wrong it will be displayed in the log:
  logInfo("AdjustMovieNight", "Results: " + results)
  FetchMovieNight.sendCommand(ON);
end

Now this is fine, but believe me when I tell you that having a text field available in a web page somewhere is simply not enough to achieve a winning SAF (Spousal Acceptance Factor). So onwards we must plunge into being able to ask the Google Home whose turn it is ...

Thursday 31 January 2019

New Year, New Side Project

Launching into the new year with a clean-slate restart of an old side-project that I got stuck on a few years back. I'd built it using Scala and the Play Framework, as was my preference at that time. The problem was, this "purely server-side" solution was simply wrong for this particular problem, which would be far better served being entirely client-side (this will make more sense when I release it!).

Thus, the 2019 reboot of this project will be featuring and utilising all the front-end tech I've been enjoying in the last couple of years:

  • React - for me, the most enjoyable way to build dynamic websites; and with the introduction of Hooks, it's just getting better and better
  • Next.js - a framework that elegantly combines hand-in-glove with React, doing all the hard work to get code-splitting, server-side rendering and routing to be as easy as it can be
  • now.sh - from the makers of Next, this is like the next evolution of Heroku - push code and it's deployed. To a global CDN. Awesome
  • TypeScript - 2018's surprise packet was the explosive arrival of TypeScript as a serious way to write, build and ship JavaScript code. As a types fan and Babel-phobe from way back, this is a win for me in a couple of ways
  • Styled-Components with Styled-System - Although I could argue that CSS-in-JS isn't really needed on a one-man project, the power of Styled-Components combined with the control of Styled-System is just too nice to ignore

The output of all this tech will be a responsive PWA that you can "install" to your device and use completely offline. No network connection, no data, no native Android/iOS code. This is the future!