Sunday, 20 October 2024
Google Home Minis (1st Gen) bricked
Sunday, 30 October 2022
Dispatchables Part 3; Make It So
In the previous part of this series about implementing a "dispatchable" for solar-efficient charging of (AA and AAA) batteries, I'd worked out that a combination of the Google Assistant's Energy Storage trait (made visible through the openHAB Google Assistant Charger integration) and a small amount of local state should, in theory, achieve my aim of a voice-commanded (and -queryable) system that allows efficient charging for a precise amount of time. Let's now see if we can turn theory into practice.
The first step is to copy all the configuration from the openHAB Charger device type into an items file:
$OPENHAB_CONF/items/dispatchable.items
Group chargerGroup
{ ga="Charger" [ isRechargeable=true, unit="SECONDS" ] }
Switch chargingItem (chargerGroup)
{ ga="chargerCharging" }
Switch pluggedInItem (chargerGroup)
{ ga="chargerPluggedIn" }
Number capacityRemainSecondsItem (chargerGroup)
{ ga="chargerCapacityRemaining" }
Number capacityFullSecondsItem (chargerGroup)
{ ga="chargerCapacityUntilFull" }
You'll note the only alterations I made were to change the unit to SECONDS, as that's the best fit for our timing system, and a couple of renames for clarity. Here's what they each represent:
- chargingItem: are the batteries being charged at this instant?
- pluggedInItem: has a human requested that batteries be charged?
- capacityRemainSecondsItem: how many seconds the batteries have been charging for
- capacityFullSecondsItem: how many seconds of charging remain
If we look at the openHAB UI at this point we'll just have a pile of NULL values for all these items:
Now it's time to write some rules that will get sensible values into them all. There are four in total, and I'll explain each one in turn rather than dumping a wall of code.
Rule 1: Only charge if it's wanted, AND if we have power to spare
$OPENHAB_CONF/rules/dispatchable.rules
rule "Make charging flag true if wanted and in power surplus"
when
    Item currentPowerUsage changed
then
    if (pluggedInItem.state == ON) {
        if (currentPowerUsage.state > 0|W) {
            logInfo("dispatchable", "[CPU] Non-zero power usage");
            chargingItem.postUpdate(OFF);
        } else {
            logInfo("dispatchable", "[CPU] Zero power usage");
            chargingItem.postUpdate(ON);
        }
    }
end
This one looks pretty similar to the old naïve rule we had way back in version 1.0.0, and it pretty-much is. We've just wrapped it with the "intent" check (pluggedInItem) to make sure we actually need to do something, and offloaded the hardware control elsewhere. Which brings us to...
Rule 2: Make the hardware track the state of chargingItem
$OPENHAB_CONF/rules/dispatchable.rules
rule "Charge control toggled - drive hardware"
when
    Item chargingItem changed to ON or
    Item chargingItem changed to OFF
then
    logInfo("dispatchable", "[HW] Charger: " + chargingItem.state);
    SP2_Power.sendCommand(chargingItem.state.toString());
end
The simplest rule of all, it's a little redundant but it does prevent hardware control "commands" getting mixed up with software state "updates".
Rule 3: Allow charging to be requested and cancelled
$OPENHAB_CONF/rules/dispatchable.rules
rule "Charge intent toggled (pluggedIn)"
when
    Item pluggedInItem changed
then
    if (pluggedInItem.state == ON) {
        // Human has requested charging
        logInfo("dispatchable", "[PIN] charge desired for: ");
        logInfo("dispatchable", capacityFullSecondsItem.state + "s");
        capacityRemainSecondsItem.postUpdate(0);
        // If possible, begin charging immediately:
        if (currentPowerUsage.state > 0|W) {
            logInfo("dispatchable", "[PIN] Awaiting power-neutrality");
        } else {
            logInfo("dispatchable", "[PIN] Beginning charging NOW");
            chargingItem.postUpdate(ON);
        }
    } else {
        logInfo("dispatchable", "[PIN] Cancelling charging");
        // Clear out all state
        capacityFullSecondsItem.postUpdate(0);
        capacityRemainSecondsItem.postUpdate(0);
        chargingItem.postUpdate(OFF);
    }
end
This rule is where things start to get a little more involved, but it's still pretty straightforward. The key thing is setting or resetting the three other variables to reflect the user's intent. If charging is desired, we assume the "how long for" variable has already been set correctly and zero the "how long have you been charging for" counter. Then, if the house is already power-neutral, we start. Otherwise we wait for conditions to be right (Rule 1).
If charging has been cancelled we can just clear out all our state. The hardware will turn off almost-immediately because of Rule 2.
Rule 4: Keep timers up-to-date
$OPENHAB_CONF/rules/dispatchable.rules
rule "Update charging timers"
when
    Time cron "0 0/1 * * * ?"
then
    if (pluggedInItem.state == ON) {
        // Charging has been requested
        if (chargingItem.state == ON) {
            // We're currently charging
            var secLeft = (capacityFullSecondsItem.state as Number) - 60;
            capacityFullSecondsItem.postUpdate(secLeft);
            logInfo("dispatchable", "[CRON] " + secLeft + "s left");
            var inc = (capacityRemainSecondsItem.state as Number) + 60;
            capacityRemainSecondsItem.postUpdate(inc);
            // Check for end-charging condition:
            if (secLeft <= 0) {
                // Same as if user hit cancel:
                logInfo("dispatchable", "[CRON] Reached target.");
                pluggedInItem.postUpdate(OFF);
            }
        }
    }
end
This last rule runs once a minute, but only does anything if the user asked for charging AND we're actually doing so. If that's the case, we decrement the "time left" counter by 60 seconds, and conversely increase the "how long have they been charging for" counter by 60 seconds. Yes, I know this might not be strictly accurate, but it's good enough for my needs. The innermost if statement checks for the happy-path termination condition - we've hit zero time left! - and lowers the intent flag, thus causing Rule 3 to fire, which in turn will cause Rule 2 to fire and turn off the hardware.
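The bookkeeping in that rule is easy to model outside of openHAB. Here's a minimal JavaScript sketch of a single minute-tick (the state object stands in for the openHAB items; this is an illustration of the logic, not real openHAB code):

```javascript
// Minimal model of Rule 4's once-a-minute bookkeeping.
// The state fields mirror the openHAB item names used above.
function minuteTick(state) {
  // Only do anything if charging was requested AND is happening
  if (!state.pluggedIn || !state.charging) return state;
  const secLeft = state.capacityFullSeconds - 60;
  const elapsed = state.capacityRemainSeconds + 60;
  return {
    ...state,
    capacityFullSeconds: secLeft,
    capacityRemainSeconds: elapsed,
    // Hitting zero is the same as the user cancelling:
    pluggedIn: secLeft <= 0 ? false : state.pluggedIn,
  };
}
```

So with a two-minute request, the first tick leaves 60 seconds on the clock, and the second tick lowers the intent flag, exactly as the cron rule does.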
UI Setup
This has ended up being quite the journey, and we haven't even got the Google integration going yet! The last thing for this installment is to knock up a quick control/status UI so that we can see that it actually works correctly. Here's what I've got in my openHAB "Overview" page:
The slider is wired to capacityFullSecondsItem, with a range of 0 - 21600 (6 hours) in 60-second increments, and 6 "steps" marked on the slider corresponding to whole hours for convenience. The toggle is wired to pluggedInItem. When I want to charge some batteries, I pull the slider to my desired charge time and flip the switch. Here's a typical example of what I get in the logs if I do this during a sunny day:
[PIN] charge desired for:
420s
[PIN] Beginning charging NOW
[HW] Charger: ON
[CRON] 360s left
...
[CRON] 120s left
[CRON] 60s left
[CRON] 0s left
[CRON] Reached target.
[PIN] Cancelling charging
[HW] Charger: OFF
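For reference, the slider described above can be expressed as a Main UI card along these lines - a sketch only, as component and property names may vary between openHAB versions:

```yaml
component: oh-slider-card
config:
  item: capacityFullSecondsItem
  title: Charge time (seconds)
  min: 0
  max: 21600   # 6 hours
  step: 60
```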
Saturday, 17 September 2022
Dispatchables Part 2; Computer, enhance!
As usual with software, Dispatchables v1.0.0 wasn't ideal. In fact, it didn't really capture the "Dispatchable" idea at all. What if I don't have any batteries that need charging? Wouldn't it be better to only enable the charger if there was actually charging work to be done? And for how long? We need a way to specify intent.
Here's what I'd like to be able to tell the charging system:
- I have flat batteries in the charger
- I want them to be charged for a total of {x} hours
To me, that looks like a perfect job for a voice-powered Google Assistant integration. Let's go!
Googlification phase 1
First, let's equip our Broadlink smart power socket item with the required ga attribute so we can control it via the openHAB Google Assistant Action.
$OPENHAB_CONF/items/powerpoints.items:
Switch SP2_Power "Battery Charger Power" {
channel="broadlink:sp2:34-ea-34-84-86-d1:powerOn",
ga="Switch"
}
If I go through the setup steps in the Google Assistant app on my phone, I can now see "Battery Charger Power" as a controllable device. And sure enough, I can say "Hey Google, turn on the battery charger" and it all works. Great!
Now, we need to add something to record the intent to perform battery-charging when solar conditions allow, and something else that will track the number of minutes the charger has been on for, since the request was made. Note that this may well be over multiple distinct periods, for example if I ask for 6 hours of charging but there's only one hour of quality daylight left in the day, I would expect the "dispatch" to be resumed the next day once conditions were favourable again. Once we've hit the desired amount of charging, the charger should be shut off and the "intent" marker reset to OFF. Hmmm... 🤔
Less state === Better state
Well, my first optimisation on the way to solving this is to streamline the state. I absolutely do not need to hold multiple distinct but highly-related bits of information:
- Intent to charge
- Desired charge duration
- Amount of time remaining in this dispatch
We can actually do it all with one variable, using the Dead Timer "pattern" (if you can call it such a thing) that I learnt from an embedded C developer almost 20 years ago:
unsigned int warning_led_timer = 0;
/* Inside the main loop, executed once per second */
if (warning_led_timer > 0) {
    warning_led_timer--;
    /* Enable the LED, or turn it off if no longer needed */
    enable_led(WARNING_LED, warning_led_timer > 0);
}
/* ...
* Somewhere else in the code that needs to show
* the warning LED for 3 seconds
*/
warning_led_timer = 3;
It encapsulates:
- intent - anyone setting the timer to a non-zero value
- desired duration - the initial non-zero value
- duration remaining - whatever value the variable is currently holding; and
- termination - when the variable hits zero
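The same idea translates directly to JavaScript, which is what the rest of this system is built in. A minimal sketch, with enableLed standing in for real hardware control and tick() assumed to be called once per second:

```javascript
// The "dead timer" pattern in JavaScript. One variable encapsulates
// intent, desired duration, duration remaining, and termination.
let warningLedTimer = 0;
let ledState = false;

// Stand-in for real hardware control
function enableLed(on) {
  ledState = on;
}

// Assumed to be called once per second
function tick() {
  if (warningLedTimer > 0) {
    warningLedTimer--;
    // Enable the LED, or turn it off if no longer needed
    enableLed(warningLedTimer > 0);
  }
}

// Somewhere else: show the warning LED for 3 seconds
warningLedTimer = 3;
```

After three ticks the timer has drained to zero and the LED is off again, with no separate "is the LED wanted?" flag anywhere.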
Objectives
What I'd like to be able to do is have this conversation with the Google Assistant:
Hey Google, charge the batteries for five hours
"Okay, I'll charge the batteries for five hours"
... with all the underlying "dispatchable" stuff I've talked about being done transparently. And for bonus points:
Hey Google, how much charge time remaining?
"There are three hours and 14 minutes remaining"
So as it turns out, the Google Assistant has an Energy Storage trait which should allow the above voice commands (or similar) to work, as it can be mapped into the openHAB Charger Device Type. It's all starting to come together - I don't have a "smart charger" (i.e. for an electric vehicle) but I think I can simulate having one using my "dead timer"!
Tuesday, 28 May 2019
Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 4 - The Rethink)
The "whose turn is it?" system was working great, and the kids loved it, but the SAF (Spousal Acceptance Factor) was lower than optimal, because she didn't trust that it was being kept up-to-date. We had a number of "unusual" weekends where we didn't have a Movie Night, and she was concerned that the "roll back" (which, of course, has to be performed manually) was not being done. The net result: a human still had to cast their mind back to when the last movie night was, whose turn it was, and what they chose! FAIL.
Version 2 of this system takes these human factors into account, and leverages the truly "conversational" aspect of using DialogFlow, to actually extract NOUNS from a conversation and store them in OpenHAB. Instead of an automated weekly rotation scheme which you ASK for information, the system has morphed to a TELL interaction. When it IS a Movie Night, a human TELLS the Google Home Mini somewhat like this:
Hey Google, for Movie Night tonight we watched Movie Name. It was Person's choice.
or:
Hey Google, last Friday it was Person's turn for Movie Night. We watched Movie Name.
To do this, we use the "parameters" feature of DialogFlow to punch the nouns out of a templated phrase. It's not quite as rigid as it sounds due to the machine-learning magic that Google runs on your phrases when you save them in DialogFlow. Here's how it's set up; with the training phrases:
Kudos to Google for the UI and UX of this tricky stuff - it's extremely intuitive to set up, and easy to spot errors thanks to the use of coloured regions. Here's where the parameters get massaged into a suitable state for my webhook Lambda. Note the conversion into a single pipe-separated variable (requestBody) which is then PUT into the OpenHAB state for the item that has the same name as this Intent, e.g. LastMovieNight.
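In code terms, that massaging step amounts to something like the following - the parameter names here are illustrative stand-ins for the actual Dialogflow parameters:

```javascript
// Sketch of the Dialogflow "massaging" step: join the extracted
// parameters into one pipe-separated string destined for the
// LastMovieNight item. Parameter names are illustrative.
function buildRequestBody({ person, date, movieName }) {
  return [person, date, movieName].join('|');
}
```

For the example used later in this post, that would produce Sophie|2019-05-26T19:00:00+10:00|Toy Story 2.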
Within OpenHAB, almost all of the complexity in working out "who has the next turn" is now gone. There's just a tiny rule that, when the item called LastMovieNight is updated (i.e. by the REST interface), appends it to a "log" file for persistence purposes:
rule "Append Last Movie Night"
when
    Item LastMovieNight received update
then
    executeCommandLine(
        "/home/pi/writelog.sh /var/lib/openhab2/movienight-logs.txt " +
        LastMovieNight.state,
        5000)
end
(writelog.sh is just a script that effectively does echo ${2} >> $1 - it seems like OpenHAB's executeCommandLine really should be called executeScript, because you can't do anything directly).
The flip side is being able to query the last entry. In this case the querying side is very straightforward, but the trick is splitting out the |-separated data into something that the Google Home can speak intelligibly. I've seen this called "having a good VUI" (Voice User Interface) so let's call it that.
Given that querying the MyOpenHAB interface for /rest/items/LastMovieNight/state will return:
Sophie|2019-05-26T19:00:00+10:00|Toy Story 2
I needed to be able to "slice" up the pipe-separated string into parts, in order to form a nice sentence. Here's what I came up with in the webhook lambda:
...
const { restItem, responseForm, responseSlices } =
  webhookBody.queryResult.parameters;
...
// omitted - make the REST call to /rest/items/${restItem}/state,
// and put the resulting string into "body"
...
if (responseSlices) {
  const expectedSlices = responseSlices.split('|');
  const bodySlices = body.split('|');
  if (expectedSlices.length !== bodySlices.length) {
    fulfillmentText = `Didn't get ${expectedSlices.length} slices`;
  } else {
    const responseMap = expectedSlices.map((es, i) => {
      return { name: es, value: bodySlices[i] };
    });
    fulfillmentText = responseMap.reduce((accum, pair) => {
      const regex = new RegExp(`\\\$${pair.name}`);
      let replacementValue = pair.value;
      if (pair.name === 'RELATIVE_DATE') {
        replacementValue = moment(pair.value).fromNow();
      }
      return accum.replace(regex, replacementValue);
    }, responseForm);
  }
}
Before I try and explain that, take a look at how it's used:
The whole thing hinges on the pipe-separators. By supplying a responseSlices string, the caller sets up a mapping of variable names to array slices, the corresponding values of which are then substituted into the responseForm. It's completely neutral about what the variable names are, with the one exception: if it finds a variable named RELATIVE_DATE it will treat the corresponding value as a date, and apply the fromNow() function from moment.js to give a nicely VUI-able string like "3 days ago". The result of applying these transformations to the above pipe-separated string is thus:
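To make the mechanism concrete, here's the same logic distilled into a self-contained sketch. moment's fromNow() is swapped for an injected relativeDate function, and the slice names (CHOOSER, MOVIE) are illustrative - the screenshot showing my actual responseSlices values isn't reproduced here:

```javascript
// Self-contained version of the slice-and-substitute logic above.
// relativeDate stands in for moment.js's fromNow().
function fillTemplate(body, responseSlices, responseForm, relativeDate) {
  const expectedSlices = responseSlices.split('|');
  const bodySlices = body.split('|');
  if (expectedSlices.length !== bodySlices.length) {
    return `Didn't get ${expectedSlices.length} slices`;
  }
  // Pair up each variable name with its corresponding body slice
  const responseMap = expectedSlices.map((name, i) => ({
    name,
    value: bodySlices[i],
  }));
  // Substitute each $NAME in the response form with its value
  return responseMap.reduce((accum, pair) => {
    const regex = new RegExp(`\\$${pair.name}`);
    let replacementValue = pair.value;
    if (pair.name === 'RELATIVE_DATE') {
      replacementValue = relativeDate(pair.value);
    }
    return accum.replace(regex, replacementValue);
  }, responseForm);
}
```

Feeding it the pipe-separated state string from above, a slice spec of CHOOSER|RELATIVE_DATE|MOVIE, and a response form of "The last movie night was $RELATIVE_DATE, when $CHOOSER chose $MOVIE" yields the sentence quoted below.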
"The last movie night was 3 days ago, when Sophie chose Toy Story 2"
Job done!
Sunday, 28 April 2019
Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 3)
In this third part of my mini-series on life-automation via hacking home-automation, I want to show how I was able to ask our Google Home whose turn it was for movie night, and have "her" respond with an English sentence.
First a quick refresher on what we have so far. In part 1, I set up an incredibly-basic text-file-munging "persistence" system for recording the current person in a rota via OpenHAB. We can query and rotate (both backwards and forwards) the current person, and there's also a cron-like task that rotates the person automatically once a week. The basic pattern can be (and has been!) repeated for multiple weekly events.
In part 2, I exposed the state of the MovieNight item to the "outside world" via the MyOpenHAB RESTful endpoint, and then wrote a lambda function that translates a Google Dialogflow webhook POST into a MyOpenHAB GET for any given "intent"; resulting in the following architecture:
Here are the pertinent screens in Dialogflow where things are set up.
First, the "training phrases" which guide Google's machine-learning into picking the correct Intent:
On the Fulfillment tab is where I specify the URL of the now.sh webhook handler and feed in the necessary auth credentials (which it proxies through to OpenHAB):
From Integrations -> Google Assistant -> Integration Settings is where I "export" the Intents I want to be usable from the Google Home:
The final piece of the puzzle is invoking this abomination via a voice command. Within the Dialogflow console it is very straightforward to test your 'fulfillment' (i.e. your webhook functionality) via typing into the test panel on the side, but actually "going live" so you can talk with real hardware requires digging in a little deeper. There's a slightly-odd relationship between the Google Actions console (which is primarily concerned with getting an Action into the Actions Directory) and the Dialogflow console (which is all about having conversations with "agents"). They are aware of each other to a pretty-good extent (as you'd hope for two sibling Google products) but they are also a little confusing to get straight in your head and/or working together.
You need to head over to the Actions Console to actually "release" your helper so that a real-life device can use it. An "Alpha" release makes sure random people on the internet can't start using your private life automation software!
I really wanted to be able to ask the Google Assistant in a conversational style; "Hey Google, whose turn is it for Movie Night this week?" - in the same way one can request a Spotify playlist. But it turns out to be effectively-impossible to have a non-publicly-released "app" work in this way.
Instead the human needs to explicitly request to talk to your app. So I renamed my app "The Marshall Family Helper" to make it as natural-sounding as it can be. A typical conversation will now look like this:
Human: "Hey Google, talk to The Marshall Family Helper"
Google: "Okay, loading the test version of The Marshall Family Helper"
(Short pause)
{beep} "You can ask about Movie Night or Take-Away"
"Whose turn is it for movie night?"
(Long pause)
"It's Charlotte's turn"
{beep}
Some things to note. The sentence after the first {beep} is what I've called my "Table of Contents" intent - it is automatically invoked when the Marshall Family Helper is loaded - as discovery is otherwise a little difficult. The "short pause" is usually less than a second, and the "long pause" around 3-4 seconds - this is a function of the various latencies as you can see in the system diagram above - it's something I'm going to work on tuning. At the moment now.sh automatically selects the Sydney point-of-presence as the host for my webhook lambda, which would normally be excellent, but as it's being called from Google and making a call to MyOpenHAB, I might spend some time finding out where geographically those endpoints are and locating the lambda more appropriately.
But, it works!
Saturday, 30 March 2019
Whose Turn Is it? An OpenHAB / Google Home / now.sh Hack (part 2)
So in the first part of this "life-automation" mini-series, we set up some OpenHAB items that kept track of whose turn it was to do a chore or make a decision. That's fine, but not super-accessible for the whole family, which is where our Google Home Mini comes in.
First, (assuming you've already configured and enabled the OpenHAB Cloud service to expose your OpenHAB installation at myopenhab.org) we add our MovieNight to our exposed items going out to the MyOpenHAB site. To do this, use the PaperUI to go to Services -> MyOpenHAB and add MovieNight to the list. Note that it won't actually appear at myopenhab.org until the state changes ...
Next, using an HTTP client such as Postman, we hit https://myopenhab.org/rest/items/MovieNight/state (sending our email address and password in a Basic Auth header) and sure enough, we get back Charlotte.
Unfortunately, as awesome as it would be, the Google Home Assistant can't "natively" call a RESTful API like the one at MyOpenHAB, but it *can* if we set up a custom Action to do it, via a system called Dialogflow. This can get very involved as it is capable of amazing levels of "conversation" but here's how I solved this for my simple interaction needs:
So over in the Dialogflow console, we set up a new project, which will use a webhook for "fulfillment", so that saying "OK Google, whose turn is it for movie night?"* will result in the MovieNight "Intent" firing, making a webhook call over to a now.sh lambda, which in turn makes the RESTful request to the MyOpenHAB API. Phew!
I've mentioned now.sh before as the next-generation Heroku - and until now have just used it as a React App serving mechanism - but it also has sleek backend deployment automation (that's like Serverless minus the tricksy configuration file) that was just begging to be used for a job like this.
The execution environment inside a now.sh lambda is super-simple. Define a function that takes a Node request and response, and do with them what you will. While I really like lambdas, I think they are best used in the most straightforward way possible - no decision-making, no state - a pure function of its inputs that can be reasoned about for all values over all time at once (a really nice way of thinking about the modern "declarative" approach to writing software that I've stolen from the amazing Dan Abramov).
This particular one is a little gem - basically proxying the POSTed webhook call from Google, to a GET of the OpenHAB API. Almost everything this lambda needs is given to it - the Basic authentication header from Google is passed straight through to the OpenHAB REST call, the URL is directly constructed from the name of the intent in the webhook request, and the response from OpenHAB gets plopped into an English sentence for the Google Assistant to say. The only real snag is that the body of the POST request is not made directly available to us, so I had to add a little helper to provide that:
'use strict';
const bent = require('bent');

// Helper function to get the body from a POST
function processPost(request, response, callback) {
  var queryData = "";
  if (typeof callback !== 'function') return null;
  if (request.method == 'POST') {
    request.on('data', function (data) {
      queryData += data;
      if (queryData.length > 1e6) {
        queryData = "";
        response.writeHead(413, { 'Content-Type': 'text/plain' });
        response.end();
        request.connection.destroy();
      }
    });
    request.on('end', function () {
      callback(queryData);
    });
  } else {
    response.writeHead(405, { 'Content-Type': 'text/plain' });
    response.end();
  }
}

// Proxy a Dialogflow webhook request to an OpenHAB REST call
module.exports = async (request, response) => {
  processPost(request, response, async (bodyString) => {
    const requestBody = JSON.parse(bodyString);
    const intent = requestBody.queryResult.intent.displayName;
    const uri = `https://myopenhab.org/rest/items/${intent}/state`;
    const auth = request.headers['authorization'];
    console.log(`About to hit OpenHAB endpoint: ${uri}`);
    const getString = bent('string', { 'Authorization': auth });
    const body = await getString(uri);
    console.log(`OpenHAB response: ${body}`);
    const json = {
      fulfillmentText: `It's ${body}'s turn.`,
    };
    const jsonString = JSON.stringify(json, null, 2);
    response.setHeader('Content-Type', 'application/json');
    response.setHeader('Content-Length', jsonString.length);
    response.end(jsonString);
  });
};
It returns the smallest valid JSON response to a Dialogflow webhook request - I did spend some time with the various client libraries available to do this, but they seemed like overkill when all that is needed is grabbing one field from the request and sending back one line of JSON!
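For reference, here's a sketch of the only parts of the two payloads this lambda actually touches - the field names come from the code above; everything else in the real Dialogflow webhook schema is simply ignored:

```javascript
// The single field read from Dialogflow's webhook POST body...
const exampleWebhookRequest = {
  queryResult: {
    intent: { displayName: 'MovieNight' },
  },
};

// ...which determines the OpenHAB REST URL to hit:
const intent = exampleWebhookRequest.queryResult.intent.displayName;
const uri = `https://myopenhab.org/rest/items/${intent}/state`;

// ...and the minimal valid response sent back to Dialogflow:
const exampleWebhookResponse = {
  fulfillmentText: "It's Charlotte's turn.",
};
```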
We're almost there! Now to wire up this thing so we can voice-command it ...
(*) That's the theory at least - see Part 3 for the reality ...