Thursday 31 December 2020

Quick 2020 A-Z Wrapup

So 2020 was a thing, huh?

In between all the incredibly bad stuff, there were a few cool techy things worth remembering:

  • Apple released its M1 Apple Silicon chips, ushering in a new level of performance and power efficiency
  • OpenHAB Version 3.0 has rolled out and fixed a lot of quirks and clunky behaviours, improving the WAF of this automation platform still further
  • Tesla shipped its millionth electric car and became the world's most valuable carmaker
  • Zeit rebranded as Vercel and continued to build Next.js as the best framework for React apps

Stay safe!

Sunday 29 November 2020

Micro-optimisation #1874: NPM script targets

These days I spend most of my working day writing TypeScript/Node/React apps. I still love (and work with) Scala but in the main, it's the faster-moving JavaScript world where most of the changes are taking place. One of the best things about the NPM/Yarn workflow that these apps all share is the ability to declare "scripts" to shortcut common development tasks. It's not new (make has entered the chat) but it's very flexible and powerful. The only downside: there's no definitive convention for naming the tasks.

One project might use start (i.e. yarn start) to launch the application in development mode (e.g. with hot-reload and suchlike) while another might use run:local (i.e. yarn run:local) for a similar thing. The upshot being, a developer ends up opening package.json in some way, scrolling down to the scripts stanza and looking for their desired task, before carefully typing it in at the command prompt. Can we do better?

Phase 1: The 's' alias

Utilising the wonderful jq, we can very easily get a very nice first pass at streamlining the flow:
alias s='cat package.json | jq .scripts'
This eliminates scrolling past all the unwanted noise of the package.json (dependencies, jest configuration, etc etc) and just gives a nice list of the scripts:
john$ s
  "build": "rm -rf dist && yarn compile && node scripts/build.js ",
  "compile": "tsc -p .",
  "compile:watch": "tsc --watch",
  "lint": "yarn eslint . --ext .ts",
  "start:dev": "source ./scripts/ && concurrently \"yarn compile:watch\" \"nodemon\"",
  "start": "source ./scripts/ && yarn compile && node dist/index",
  "test": "NODE_ENV=test jest --runInBand",
  "test:watch": "yarn test --watch",
  "test:coverage": "yarn test --coverage"
A nice start. But now while you can see the list of targets, you've still got to (ugh) type one in.
What if ...

Phase 2: Menu-driven

TIL about the select BASH built-in command which will make an interactive menu out of a list of options. So let's do it!

# Show the scripts in alphabetical order, so as to match the
# numbered options shown later
cat package.json | jq '.scripts | to_entries | sort_by(.key) | from_entries'

SCRIPTS=$(cat package.json | jq '.scripts | keys | .[]' --raw-output)

select script in $SCRIPTS; do
  yarn $script
  break
done
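If `select` is new to you, here's a tiny standalone demo (nothing project-specific assumed). It even works non-interactively if the choice arrives on stdin:

```shell
#!/bin/bash
# select prints a numbered menu (to stderr) and reads the choice from stdin.
# Here the choice "2" is supplied via a here-string, picking "banana".
select fruit in apple banana cherry; do
  echo "You chose: $fruit"   # prints: You chose: banana
  break   # without this, select loops and re-prompts forever
done <<< "2"
```

The `break` matters: `select` is a loop, so the script-menu version above runs exactly one yarn target and then exits.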
I've got that aliased to sm (for "script menu") so here's what the flow looks like now:
john$ sm
  "build": "rm -rf dist && yarn compile && node scripts/build.js ",
  "compile": "tsc -p .",
  "compile:watch": "tsc --watch",
  "lint": "yarn eslint . --ext .ts",
  "start": "source ./scripts/ && yarn compile && node dist/index",
  "start:dev": "source ./scripts/ && concurrently \"yarn compile:watch\" \"nodemon\"",
  "test": "NODE_ENV=test jest --runInBand",
  "test:coverage": "yarn test --coverage",
  "test:watch": "yarn test --watch"
1) build	  4) lint	    7) test
2) compile	  5) start	    8) test:coverage
3) compile:watch  6) start:dev	    9) test:watch
#? 9
yarn run v1.21.1
$ yarn test --watch
$ NODE_ENV=test jest --runInBand --watch
... and away it goes. For a typical command like yarn test:watch I've gone from 15 keystrokes plus [Enter] to sm[Enter]9[Enter] => five keystrokes, and that's not even including the time/keystroke saving of showing the potential targets in the first place instead of opening package.json in some way and scrolling. For something I might do tens of times a day, I call that a win!

Saturday 31 October 2020

Micro-optimisation #392: Log-macros!

Something I find myself doing a lot in the Javascript/Node/TypeScript world is logging out an object to the console. But of course if you're not careful you end up logging the oh-so-useful [Object object], so you need to wrap your thing in JSON.stringify() to get something readable.

I got heartily sick of doing this so created a couple of custom keybindings for VS Code to automate things.

Wrap in JSON.stringify - [ Cmd + Shift + J ]

Takes the selected text and wraps it in a call to JSON.stringify() with null, 2 as the second and third args to make it nicely indented (because why not given it's a macro?); e.g.:

console.log(`Received backEndResponse`)
console.log(`Received ${JSON.stringify(backEndResponse, null, 2)}`)

Label and Wrap in JSON.stringify - [ Cmd + Shift + Alt + J ]

As the previous macro, but repeats the name of the variable with a colon followed by the JSON, for clarity as to what's being logged; e.g.:

console.log(`New localState`)
console.log(`New localState: ${JSON.stringify(localState, null, 2)}`)

How do I set these?

On the Mac you can use ⌘-K-S to see the pretty shortcut list, then hit the "Open Keyboard Shortcuts (JSON)" icon in the top-right to get the text editor to show the contents of keybindings.json. Then paste away!

// Place your key bindings in this file to override the defaults
[
  {
    "key": "cmd+shift+j",
    "command": "editor.action.insertSnippet",
    "when": "editorTextFocus",
    "args": {
      "snippet": "JSON.stringify(${TM_SELECTED_TEXT}$1, null, 2)"
    }
  },
  {
    "key": "cmd+shift+alt+j",
    "command": "editor.action.insertSnippet",
    "when": "editorTextFocus",
    "args": {
      "snippet": "${TM_SELECTED_TEXT}: ${JSON.stringify(${TM_SELECTED_TEXT}$1, null, 2)}"
    }
  }
]
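If you'd rather skip the UI entirely, you can open the file directly; on macOS the standard per-user VS Code location is shown below (an assumption for non-default installs):

```shell
#!/bin/bash
# Standard VS Code user-config location on macOS (assumed default install):
KEYBINDINGS="$HOME/Library/Application Support/Code/User/keybindings.json"
echo "Edit: $KEYBINDINGS"
```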

Sunday 13 September 2020

Micro-optimisation #9725: Checkout the mainline

Very soon (October 1, 2020) Github will be making main the default branch of all new repositories instead of master. While you make the transition over to the new naming convention, it's handy to have an abstraction over the top for frequently-issued commands. For me, git checkout master is one of my faves, so much so that I've already aliased it to gcm. Which actually makes this easier - main and master start with the same letter...

Now when I issue the gcm command, it'll check if main exists, and if not, try master and remind me that this repo needs to be migrated. Here's the script:


# Try main, else master and warn about outdated branch name

MAIN_BRANCH=`git branch -l | grep main`

if [[ ! -z ${MAIN_BRANCH} ]]; then
  git checkout main
else
  echo "No main branch found, using master... please fix this repo!"
  git checkout master
fi

I run it using this alias:

alias gcm='~/bin/'

So a typical execution looks like this:

mymac:foo john$ gcm
No main branch found, using master... please fix this repo!
Switched to branch 'master'
Your branch is up to date with 'origin/master'.       
mymac:foo john$ 
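The branch-detection at the heart of the script is easy to try in isolation against a scratch repo (the repo setup below is purely for demonstration):

```shell
#!/bin/bash
set -e
# Build a scratch repo whose default branch is definitely called 'main'
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git branch -m main   # rename whatever the default branch was to 'main'

# The same test the gcm script uses:
MAIN_BRANCH=$(git branch -l | grep main)
if [[ ! -z ${MAIN_BRANCH} ]]; then
  echo "main exists - would run: git checkout main"
else
  echo "No main branch found, using master... please fix this repo!"
fi
```

One caveat: `grep main` would also match a branch named `maintenance`; `grep -w main` (or `git branch --list main`) is stricter if that bites you.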

Monday 24 August 2020

Micro-optimisation #6587: Git push to Github

I've said it before; sometimes the best automations are the tiny ones that save a few knob-twirls, keystrokes or (as in this case) a drag-copy-paste, each and every day.

It's just a tiny thing, but I like it when a workflow gets streamlined. If you work on a modern Github-hosted codebase with a Pull-Request-based flow, you'll spend more than a few seconds a week looking at this kind of output, which happens the first time you try to git push to a remote that doesn't have your branch:

mymac:foo john$ git push
fatal: The current branch red-text has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin red-text

mymac:foo john$ git push --set-upstream origin red-text
Counting objects: 24, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (16/16), done.
Writing objects: 100% (24/24), 2.79 KiB | 953.00 KiB/s, done.
Total 24 (delta 9), reused 0 (delta 0)
remote: Resolving deltas: 100% (9/9), completed with 9 local objects.
remote: Create a pull request for 'red-text' on GitHub by visiting:
 * [new branch]        red-text -> red-text
Branch 'red-text' set up to track remote branch 'red-text' from 'origin'.

The desired workflow is clear and simple:

  • Push and set the upstream as suggested
  • Grab the suggested PR URL
  • Use the Mac open command to launch a new browser window for that URL
... so let's automate it!


# Try and push, catch the suggested action if there is one:

SUGG=`git push --porcelain 2>&1 | grep "git push --set-upstream origin"`

if [[ ! -z ${SUGG} ]]; then
  echo "Doing suggested: ${SUGG}"

  URL=`${SUGG} --porcelain 2>&1 | grep remote | grep new | grep -o "https.*"`

  if [[ ! -z ${URL} ]]; then
    echo "Opening URL ${URL}"
    open $URL
  else
    echo "No PR URL found, doing nothing"
  fi
fi
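The extraction step can be exercised against canned `git push` output (copied from the example at the top of this post), no remote needed:

```shell
#!/bin/bash
# Canned output, as git prints it when the branch has no upstream:
output='fatal: The current branch red-text has no upstream branch.
To push the current branch and set the remote as upstream, use

    git push --set-upstream origin red-text'

# The same grep the script uses, picking out the suggested command:
SUGG=$(echo "$output" | grep "git push --set-upstream origin")
echo "Doing suggested: ${SUGG}"
```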

I run it using this alias:

alias gpgh='~/bin/'

So a typical execution looks like this:

mymac:foo john$ gpgh
Doing suggested:     git push --set-upstream origin mybranch
Opening URL        
mymac:foo john$ 

Wednesday 29 July 2020

TASCAM FireOne on MacOS High Sierra: finally dead

I suppose it had to happen, but today, my TASCAM FireOne Firewire audio interface just ceased to work properly - namely, the audio input has a constant clicking sound making it unusable.

I suppose I should feel fortunate that it has lasted this long; I mean, look at the MacOS compatibility chart:

- yep 10.4 and 10.5 only, yet here I am on High Sierra (10.13) and it's only just turned up its toes.

It's even less whelming on the Windows side:

... XP only (!)

So now I'm on the hunt for a good interface that will last as long as this one did. Firewire seems to have been effectively killed by Apple, and Thunderbolt interfaces are incredibly expensive, so it'll be back to good ol' USB I guess. I'm thinking that with the rise of podcasts etc, Apple will be obligated to ensure USB audio interfaces that use no additional drivers (aka "Class-Compliant devices") work really well for the foreseeable future...

Thursday 25 June 2020

Home-Grown Mesh Networking Part 2 - When It Doesn't Work ...

A few months ago I shared some tips on using existing Wifi gear to make your own mesh network.

Turns out though, it might not be as easy as I initially made out. In particular, I was noticing the expected switchover as I walked down the corridor in the middle of my house:

... was simply not happening. I would be "stuck" on Channel 3 (red) or Channel 9 (green) based on whatever my Mac had woken up with.

Lots of Googling later, and the simplest diagnostic tool on the Mac turns out to be Wireless Diagnostics -> Info - take a snapshot, turn off that AP, and wait for the UI to update. Then stick them side by side and eyeball them:

I wasted quite some time on a wild-goose chase because of the differing Country Codes - it's not really something you can change in most consumer AP/routers so I thought I was in trouble, until I discovered that "X1" really just means "not broadcast" so I decided to ignore it, which turns out to be fine.

Now the other thing that was most definitely not fine was the different Security policies. I thought I had these set up to match perfectly, but as you can see, MacOS thought different (pun not intended).

When is WPA2 Not WPA2?

The D-Link AP (on Channel 9, on the right in the above screenshot) was supposedly in "WPA2 Personal" mode but the Mac was diagnosing it as just WPA v1. This is most definitely enough of a difference for it to NOT seamlessly switch channels. Even more confusingly, some parts of the MacOS network stack will report this as WPA2. It's quite tricky to sort out, especially when you have access points of different vintages, from different manufacturers, who use different terminology, but what worked for me (on the problematic D-Link) was using Security Mode WPA-Personal together with WPA Mode WPA2 plus explicitly setting the Cipher Type to AES and not the default TKIP and AES:

This last change did the trick for me, and I was able to get some automatic channel-switching, but the Mac was still holding on to the Channel 3 network for much longer than I would have liked. I could even stand in the same room as the Channel 9 AP (i.e. in the bottom-left corner of the heat map above) and not switch to Channel 9.

Performance Anxiety

The clue, yet again, is in the Info window above. In particular, the Tx Rate field. It would seem that rather than just naïvely choosing an AP based on signal strength, MacOS instead (or perhaps also) checks the network performance of the candidate AP. And look at the difference between the newer dual-band Linksys on Channel 3 (145 Mbps) and the D-Link on Channel 9 (26 Mbps)!

There are plenty of ways to increase your wireless data rate, the most effective being switching to 802.11n exclusively, as supporting -b and -g devices slows down the whole network, but (as usual) I hit a snag - my Brother 2130w wireless (-only) laser printer needs to have an 802.11g network. As it lives in the study, a couple of metres from the D-Link AP, I'd had the D-Link running "mixed mode" to support it.

Printers were sent from hell to make us miserable

As it turns out, the mixed-mode signal from the Linksys at the other end of the house is good enough for the printer (being mains-powered it's probably got very robust WiFi) and so I could move the D-Link to "n-only". But there was a trick. There's always a trick ...
You need to make sure the printer powers up onto the 802.11g network - it doesn't seem to be able to "roam" - which, again, makes sense - it's a printer. It knows to join a network called MILLHOUSE and will attempt to do so - and if the "n-only" network is there, it'll try to join it and never succeed.
So powering down the n-only AP, rebooting the printer, checking it's online (ping it), then powering the n-only AP back up again should do the trick.


Moving the D-Link AP to n-only doubled the typical Tx Rate at the front of the house to around 50 Mbps, the result being that the Mac now considers Channel 9 to be good enough to switch to as I move towards that end of the house. It still doesn't switch quite as fast as I'd like, but it gets there, and doesn't drop any connections while it switches, which is great.

Here's the summary of the setup now:

                Living Room        Study
Device          Linksys X6200      D-Link DIR-655
IP Address      10.
Band            2.4 GHz            2.4 GHz
Bandwidth       20 MHz             20 MHz
Security        WPA2 Personal      WPA Personal (sic)
WPA Mode        n/a                WPA2 Only
Cipher Type     n/a                AES Only

Stand by for even more excruciating detail about my home network in future updates!

Sunday 17 May 2020

Home Automation In The Small; Part 2

Continuing on the theme of home automation in the small, here's another tiny but pleasing hack that leverages the Chromecast and Yamaha receiver bindings in OpenHAB.

To conclude a happy Spotify listening session, we like to tell the Google Home to "stop the music and turn off the Living Room TV" - "Living Room TV" being the name of the Chromecast attached to HDMI2 of the Yamaha receiver.

While this does stop the music and turn off the television, the amplifier remains powered up. Probably another weird HDMI control thing. It's just a small detail, but power wastage annoys me, so here's the fix.

The trick with this one is ensuring we catch the correct state transition; namely, that the Chromecast's running "app" is the Backdrop and the state is "idling". If those conditions are true, but the amp is still listening to HDMI2, there's obviously nothing else interesting being routed through the amp so it's safe to shut it down. Note that the type of LivingRoomTV_Idling.state is an OnOffType so we don't compare to "ON", it has to be ON (i.e. it's an enumerated value) - some fun Java legacy there ...


rule "Ensure Yamaha amp turns off when Chromecast does"
when
  Item LivingRoomTV_App changed
then
  logInfo("RULE.CCP", "Chromecast app: " + LivingRoomTV_App.state)
  logInfo("RULE.CCP", "Chromecast idle: " + LivingRoomTV_Idling.state)
  logInfo("RULE.CCP", "Yamaha input: " + Yamaha_Input.state)

  if (LivingRoomTV_App.state == "Backdrop") {
    if (LivingRoomTV_Idling.state == ON) {
      if (Yamaha_Input.state == "HDMI2") {
        logInfo("RULE.CCP", "Forcing Yamaha to power off")
        Yamaha_Power.sendCommand(OFF)
      }
    }
  }
end

Sunday 26 April 2020

Home Automation In The Small

When you say "Home Automation" to many people they picture some kind of futuristic Iron-Man-esque fully-automatic robot home, but often, the best things are really very small. Tiny optimisations that make things just a little bit nicer - like my "Family Helper" that remembers things for us. It's not for everyone, and it's not going to change the world, but it's been good for us.

In that vein, here's another little optimisation that streamlines out a little annoyance we've had since getting a Google Chromecast Ultra. We love being able to ask the Google Home to play something on Spotify, and with the Chromecast plugged directly into the back of my Yamaha AV receiver via HDMI, it sounds fantastic too. There's just one snag, and fixing it means walking over to the AV receiver and changing the input to HDMI2 ("Chromecast") manually, which (#firstworldproblems) kinda undoes the pleasure of being able to use voice commands.

It comes down to the HDMI CEC protocol, which is how the AV receiver is able to turn on the TV, and how the Chromecast turns on the AV receiver. It's cool, handy, and most of the time it works well. However, when all the involved devices are in standby/idle mode, and a voice command to play music on Spotify is issued, here's what seems to be happening:

Time  Chromecast               AV receiver               Television
1     Woken via network
2     Sends CEC "ON" to AVR
4                              Switches to HDMI2
5     AV stream starts
6                              Detects video
7                              Sends CEC "ON" to TV
9                              Routes video to TV
10                                                       "Burps" via analog audio out
11                             Hears the burp on AV4
12                             Switches to AV4

Yes, my TV (a Sony Bravia from 2009) does NOT have HDMI ARC (Audio Return Channel) which may or may not address this. However, I'm totally happy with this TV (not-"smart" TVs actually seem superior to so-called "smart" TVs in many ways).

The net effect is you get a few seconds of music from the Chromecast, before the accompanying video (i.e. the album art image that the Chromecast Spotify app displays) causes the TV to wake up, which makes the amp change to it, which then silences the music. It's extremely annoying, especially when a small child has requested a song, and they have to semi-randomly twiddle the amp's INPUT knob until they get back to the Chromecast input.

But, using the power of the Chromecast and Yamaha Receiver OpenHAB bindings, and OpenHAB's scripting and transformation abilities, I've been able to "fix" this little issue, such that there is less than a second of interrupted sound in the above scenario.

The approach

The basic approach to solve this issue is:

  • When the Chromecast switches to the Spotify app
  • Start polling (every second) the Yamaha amp
  • If the amp input changes from HDMI2, force it back
  • Once 30s has elapsed or the input has been forced back, stop polling

Easy right? Of course, there are some smaller issues along the way that need to be solved, namely:
  • The Yamaha amp already has a polling frequency (10 minutes) which should be restored
  • There's no way to (easily) change the polling frequency

The solution


First of all, we need to write a JavaScript transform function, because in order to change the Yamaha polling frequency, we'll need to download the Thing's configuration as JSON, alter it, then upload it back into the Thing:


(function(newRefreshValuePipeJsonString) {
  var logger = Java.type("org.slf4j.LoggerFactory").getLogger("rri"); 
  logger.warn("JS got " + newRefreshValuePipeJsonString);
  var parts = newRefreshValuePipeJsonString.split('|');
  logger.warn("JS parts: " + parts.length);
  var newRefreshInterval = parts[0];
  logger.warn("JS new refresh interval: " + newRefreshInterval);
  var entireJsonString = parts[1];
  logger.warn("JS JSON: " + entireJsonString);
  var entireThing = JSON.parse(entireJsonString);
  var config = entireThing.configuration;
  logger.warn("JS config:" + JSON.stringify(config, null, 2));

  // Remove the huge and noisy album art thing:
  config.albumUrl = "";
  config.refreshInterval = newRefreshInterval;
  logger.warn("JS modded config:" + JSON.stringify(config, null, 2));

  return JSON.stringify(config);
})(input)
Apologies for the verbose logging, but this is a tricky thing to debug. The signature of an OpenHAB JS transform is effectively (string) => string so if you need to get multiple arguments in there, you've got to come up with a string encoding scheme - I've gone with pipe-separation, and more than half of the function is thus spent extracting the args back out again!
Basically this function takes in [new refresh interval in seconds]|[existing Yamaha item config JSON], does the replacement of the necessary field, and returns the new config JSON, ready to be uploaded back to OpenHAB.
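The encoding scheme itself is trivial to test outside OpenHAB; here's the same first-pipe split sketched in bash (the JSON payload is a made-up stand-in):

```shell
#!/bin/bash
# Pipe-separated args: [refresh interval]|[config JSON] - split at the FIRST
# pipe only, so any pipes inside the JSON payload survive intact.
encoded='1|{"refreshInterval":600,"albumUrl":"http://art|with|pipes"}'
interval="${encoded%%|*}"   # longest suffix matching '|*' removed: text before first |
json="${encoded#*|}"        # shortest prefix matching '*|' removed: text after first |
echo "interval=$interval"
echo "json=$json"
```

Note that splitting at the first pipe is slightly more defensive than the transform's `split('|')` followed by `parts[1]`, which would truncate a config JSON that happened to contain a pipe character.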


Some preconditions:

  • A Chromecast Thing is set up in OpenHAB
    • With #appName channel configured as item LivingRoomTV_App
  • A Yamaha AVReceiver Thing is set up in OpenHAB
    • With (main zone) #power channel configured as item Yamaha_Power
    • and
    • (Main zone) #input channel configured as item Yamaha_Input


val AMP_THING_TYPE="yamahareceiver:yamahaAV"
val AMP_ID="5f9ec1b3_ed59_1900_4530_00a0dea54f93"
val AMP_URL = "http://localhost:8080/rest/things/" + AMP_THING_TYPE + ":" + AMP_ID

var Timer yamahaWatchTimer = null

rule "Ensure AVR is on HDMI2 when Chromecast starts playing music"
when
  Item LivingRoomTV_App changed
then
  logInfo("RULE.CCAST", "Chromecast app is: " + LivingRoomTV_App.state)

  if(yamahaWatchTimer !== null) {
    logInfo("RULE.CCAST", "Yamaha is already being watched - ignoring")
    return;
  }

  if (LivingRoomTV_App.state == "Spotify") {
    logInfo("RULE.CCAST", "Forcing Yamaha to power on")
    Yamaha_Power.sendCommand(ON)

    // Fetch the Yamaha thing's configuration:
    var yamahaThingJson = sendHttpGetRequest(AMP_URL)
    logInfo("RULE.CCAST", "Existing config is: " + yamahaThingJson)

    // Replace the refresh interval field with 1 second:
    var newYamahaConfig = transform(
      "JS", "replaceRefreshInterval.js",
      "1|" + yamahaThingJson
    )
    logInfo("RULE.CCAST", "New config is: " + newYamahaConfig)

    // PUT it back using things/config:
    sendHttpPutRequest(
      AMP_URL + "/config",
      "application/json",
      newYamahaConfig
    )

    logInfo("RULE.CCAST", "Forcing Yamaha to HDMI2")
    Yamaha_Input.sendCommand("HDMI2")
    logInfo("RULE.CCAST", "Forced Yamaha to HDMI2")

    logInfo("RULE.CCAST", "Will now watch the Yamaha for the next 30")
    logInfo("RULE.CCAST", "sec & force it back to HDMI2 if it wavers")
    val DateTimeType ceasePollingTime = now.plusMillis(30000)

    yamahaWatchTimer = createTimer(now, [ |
      if(now < ceasePollingTime){
        logInfo("RULE.CCAST", "Yamaha input: " + Yamaha_Input.state)
        if (Yamaha_Input.state.toString() != "HDMI2") {
          logInfo("RULE.CCAST", "Force PUSH")
          Yamaha_Input.sendCommand("HDMI2")
        }
        // Self-schedule another check in a second's time:
        yamahaWatchTimer.reschedule(now.plusMillis(1000))
      }
      else {
        logInfo("RULE.CCAST", "Polling time has expired.")
        logInfo("RULE.CCAST", "Will not self-schedule again.")
        // Revert the Yamaha polling frequency to the default 10 minutes:
        var revertedYamahaConfig = transform(
          "JS", "replaceRefreshInterval.js",
          "600|" + yamahaThingJson
        )
        sendHttpPutRequest(
          AMP_URL + "/config",
          "application/json",
          revertedYamahaConfig
        )
        logInfo("RULE.CCAST", "Yamaha polling reverted to 10 minutes.")
        yamahaWatchTimer = null
      }
    ])
  }
end
Some things to note. This uses the "self-triggering-timer" pattern outlined in the OpenHAB community forums, reads the configuration of a Thing using the REST interface as described here, and is written in the Xtend dialect which is documented here.

Monday 30 March 2020

Home-grown mesh networking

With many people now working from home every day, there's a lot more interest in improving your home WiFi coverage; and a lot of people's default answer to this question is "get a mesh network". The thing is, these things are expensive, and if you've upgraded your home network and/or WAN connection in the last 10 years (and have the bits left in a drawer somewhere) you probably actually have everything you need to build your own mesh network already.
Here's what you need to do (presented in the order that should cause minimal disruption to your home network):
Establish which router you want to be the "master"
This may be the only router currently running, the best-positioned Wifi-wise, the one with the WAN connection, all of the aforementioned, or something else.
Configure the master AP

  • We'll reflect this router's status with its static IP address; ending in .1
  • If you rely on a router to provide DHCP, make it this one
  • Set your Wifi channel to 1, 2 or 3 (for non-US locations) and do not allow it to "hop" automatically
  • I'll refer to this channel as CM
  • If possible, set the Wifi transmit power to LOW

Configure your (first) slave AP

  • Give it a static IP address ending in .2 (or .n for the nth device)
  • Disable DHCP
  • Set your Wifi channel to CM +5 (for non-US locations) (e.g. 6 if CM is 1) and do not allow it to "hop" automatically
  • The logic behind this is to avoid overlapping frequencies
  • Let's call this channel CS
  • If possible, set the Wifi transmit power to LOW
  • Set your SSID, WPA scheme and password exactly as per the master

Connect master and slave via wired Ethernet
Oh and if neither of those devices is your WAN connection device, then that needs to be wired to this "backbone" too. This is super-important for good performance. If an AP can only get to the internet via Wifi, it'll be battling its own clients for every internet conversation. The Googleable name for this is "wired backhaul" or "Ethernet backhaul" and it's well worth drilling some holes and fishing some cable to get it. Don't skimp on this cable either - go for Cat6, even if your devices only (currently) go to 100Mbps.
Tune it
Grab a Wifi analyser app for your phone - IP Tools and Farproc's Wifi Analyser work well on Android. Your best option on iOS is called Speed Test - Wifi Signal Strength by Xiaoyan Huang.
Using the signal strength view, start walking from your master device towards your first slave. You should see the signal strength on channel CM start dropping and the strength of CS increase. Now if you've got some control over Wifi transmit strength, this is where you can "tune" the point at which your portable Wifi devices will start looking around for a "better option" - typically at around -70 to -75dBm. Remember, you actually want them to start getting "uncomfortable" quite quickly, so that they begin scanning earlier, and find the better option before you even notice any glitch. That's why we dropped our signal strength when we set the APs up - we don't want them to be too "sticky" to any given AP.
A real-life example
Prior warning - I'm a geek, so my network configuration might be a little more involved than yours, but the basics remain the same.
I have 4 devices of interest:
  • WAN Modem - a TP-Link Archer v1600v that has a broken* Wifi implementation, so is just being used as a WAN Modem
  • DHCP Server - a Raspberry Pi running dnsmasq - a bit more flexible than what's in most domestic routers
  • Living area AP - a Linksys X6200 router/AP
  • Home office AP - a D-link DIR-655 router/AP
You'll note that those APs are most definitely not state-of-the-art. When you use wired backhaul, you really don't need anything very fancy to get a strong mesh network!
Here's how they are physically laid out:

Pink lines are Gigabit Ethernet running on Cat6 cables. The red arrow is the WAN connection, which arrives at the front of the house and is terminated in the home office. That long curved pink line is the "backhaul" - it leaves the home office through a neat RJ45 panel in the skirting board, runs under the house, and surfaces through another RJ45 panel in the back of a closet in the bathroom - a little unusual, but there is power available and it is excellently positioned to cover the living area of the house as you can probably see.
Here's the configuration:
  • WAN Modem - Static IP
  • DHCP Server - Static IP, hands out addresses with network gateway set to
  • Living area AP - Static IP, Wifi channel 3, transmit power LOW
  • Home office AP - Static IP, Wifi channel 9, transmit power LOW
And that's it!
I've done a little visualisation of the signal strength using my pet project react-chromakeyed-image (more on that in another post):

You can see that the whole house is now bathed in a good strong signal, from either the living area (red) AP or the home office (green) and the only questionable area is on one side of that other front room (bottom of image), which is a playroom and doesn't need strong Wifi anyway.
(*) It actually seems to be that IPv6 advertisements can't be turned off and it advertises the "wrong" DNS addresses.

Thursday 27 February 2020

The Open-Topped Box

I'd always thought that my Preferred Working Arrangement™ was the classic "Developer In A Box" - keep them insulated from all possible sources of "noise", politics or distraction, feed them work to be done (and coffee) and results come out. It turns out that's not quite true. There's actually an even better arrangement; The Open-Topped Box.

I coined the phrase during a one-on-one with my boss; I was praising the way we'd been all working to see what was coming up next, and agreeing on priorities and how best to get maximum return for the lowest effort.

In a company-wide meeting, we'd been shown a 2-dimensional graph plotting (developer) effort versus (overall company) reward, something I'd never seen before (I don't know if it has a formal name or inventor):

You can intuitively see where you'd like every project or feature to sit ...

On the next slide it had been populated with various possible future initiatives marked appropriately; something like this:

The Big Boss then spent a good amount of time explaining his (very astute) reasoning about the potential business value of each proposed feature, and then delegated where necessary to get an explanation of the development effort involved. In the end, it made it crystal clear why (to use the example chart above) Project E made the most sense to run with - providing the (equal-) most business value at the (equal-) lowest effort.

Although we were completely free to challenge the placement of anything on the two-dimensional chart, we couldn't fault it, and instead came away feeling both informed and energised for the next development push. So by all means keep your devs in a box; but let them see what's coming down next.

Friday 31 January 2020

OpenHAB Broadlink Binding situation report

After #dadlife, #newjob and #otherstuff got in the way for a while last year, I got back into my role as maintainer of the OpenHAB Broadlink device binding. My first priority was to create a binding JAR that would actually work with the newly-published OpenHAB version 2.5. As the Broadlink binding is still not part of the official OpenHAB binding repository, it doesn't "automagically" get the necessary changes applied when the upstream APIs change. Luckily, it wasn't too much work.

My priorities for 2020 for this binding remain unchanged; get it to a high-quality state, make it pass the (extremely strict) linter guidelines for OpenHAB code, and get it merged into the official codebase. For me to consider it high-quality, there are still the following tasks to do:

  • Get a solid chunk of it covered by unit tests to prevent regressions; and
  • Redesign the device-discovery and identification areas of the code, to make adding new devices easier
Unit Tests

Prior to OpenHAB 2.5, bindings/addons that wished to define tests had to create an entire second project that shadowed the production code, and could only be run via a strange incantation to Maven which did horrible OSGi things to run integration-style tests. Essentially, OpenHAB addons were not unit-testable by conventional means. Which, given most addons are developed by unpaid volunteers, naturally meant that hardly any addons had tests.

Fortunately, one of the major changes in the 2.5 architecture has been a move towards more Java-idiomatic unit testing. Finally, classic JUnit-style unit testing with Mockito mocking will be available for fast, reliable testing within the binding. I'll be shooting for at least 60% test coverage before I'll consider submitting a PR to OpenHAB.

Discovery redesign

I've been told that new versions of some popular Broadlink devices will be arriving in 2020. In anticipation of that, I want to make it much easier to add a new device. At the moment it requires defining a new subclass of BroadlinkBaseThingHandler (which is par-for-the-course for OpenHAB, being a pretty standard Java app), but also adding "magic numbers" in a number of places to assist in looking-up and identifying devices during "discovery" and also when they boot up. I want to consolidate this such that everything needed to support a device is located within one .java file - i.e. adding support for a new device will require exactly two changes in Git:

  • The new .java file containing all the required information to support the new device; and
  • Adding a reference to this class somewhere to "pick it up".
I see no technical reason why this can't happen, and consider it only fair if maintenance of the binding will (at least partly) be a burden on the core OpenHAB team. So again, I won't be submitting this binding to become official until that work is complete.

Thanks for all the kind words from users/testers of this binding - it's very rewarding to hear people using it with great success!