Monday, 27 February 2023

Stepping up, and back, with the new Next.js "app" directory

I'm toying around with a new web-based side project and I thought it was time to give the latest Next.js version a spin. Although I've used Create-React-App (generally hosted on Netlify) more recently, I've dabbled with Next.js in one capacity or another since 2018, and this time some server-side requirements made it a better choice.

The killer feature of the current Next.js beta - the new app directory, introduced in Next.js 13 - takes Next's already-excellent filesystem-based routing (i.e. if you create a file called bar.tsx in a directory called foo, you'll find it served up at /foo/bar without writing a line of code) and amps it up. A lot.

I won't try to reiterate their excellent documentation, but their nested layouts feature looks like an absolute winner from where I'm sitting, and I'd like to explain why by taking you back in time. I've done this before when talking about React, joking that the HTML <img> tag was like a proto-React component. And I still stand by that comparison; I think this instant familiarity is one of the fundamental reasons why React has "won" the webapp developer mindshare battle.

Let me take you back to 1998. The Web is pretty raw, pretty wild, and mostly static pages. My Dad's website is absolutely no exception. I've meticulously hand-coded it in vi as a series of stand-alone HTML pages which get FTP'ed into position on his ISP's web server. Although I'm dimly aware of CSS, it's mainly still used for small hacks like removing the underlines from links (I still remember being shown the way to do this with an inline style attribute and thinking it would never take off) - and I'm certainly not writing a separate .css file to be included by every HTML file. As a result, everything is styled "inline" so-to-speak, but not even in the CSS way; just mountains of widths and heights and font faces all over the place.

It sucked, but HTML was getting better all the time, so we just put up with it, used all that it offered, and automated what we could. Which was exactly what I did. If you dare to inspect the source of the above Wayback Machine page, you'll see that it uses HTML frames (ugh), which were a primitive way of maintaining a certain amount of UI consistency while navigating around the site.

The other thing I did to improve UI consistency was a primitive form of templating. Probably more akin to concatenation, but I definitely had a header.htm which was crammed together with (for example) order-body.htm to end up with order.htm, using a DOS batch file that I ran to "pre-process" everything prior to doing an FTP upload - a monthly occurrence, as my Dad liked to keep his "new arrivals" page genuinely fresh. Now header.htm definitely wasn't valid HTML - it would have had unclosed tags galore - but it was re-used for several pages that needed to look the same, and that made me feel efficient.

And this brings me to Next.js and the nested layouts functionality I mentioned before. To achieve what took me a pile of HTML frames, some malformed HTML documents and a hacky batch file, all I have to do is add a layout.tsx and put all the pages that should use that UI alongside it. I can add a layout.tsx in any subdirectory and it will apply from there "down". Consistency via convention over configuration, while still nodding to the hierarchical filesystem structures we've been using since Before The Web. It's just really well thought-out, and a telling example of how much thought is going into Next.js right now. I am on board, and will be digging deeper this year for sure.
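To make the nesting concrete, here's a rough sketch of the idea. The file names follow Next.js conventions, but the route names are invented, and I've modelled layouts as plain string-producing functions (rather than real JSX components) so the sketch stands alone:

```typescript
// Hypothetical app-directory tree:
//   app/layout.tsx             → wraps every route
//   app/shop/layout.tsx        → additionally wraps /shop and everything below
//   app/shop/orders/page.tsx   → served at /shop/orders
//
// Conceptually, each layout is a component that receives the content
// below it as `children`:
const rootLayout = (children: string) => `<html><body>${children}</body></html>`;
const shopLayout = (children: string) => `<section class="shop">${children}</section>`;
const ordersPage = () => `<main>Order history</main>`;

// For /shop/orders, Next.js composes the layouts outer-to-inner:
const page = rootLayout(shopLayout(ordersPage()));
```

Add another layout.tsx deeper in the tree and it slots into the middle of that composition chain - no wiring required.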

Sunday, 29 January 2023

Sneaking through the Analog Hole

I perhaps-foolishly agreed recently to perform a media-archiving task. A series of books-on-tape (yes, on physical audio cassettes), almost unplayable at this point in the century, needed to be moved onto a playable medium. For this particular client, that meant Audio CDs (OK, so we're moving forward, but not too far!). I didn't have a suitable playback device myself, but quickly located a bargain-priced solution, second-hand on eBay (of course) - an AWA E-F34U that appears to be exclusively distributed by the Big W retail chain here in Australia:

This device purports to be a one-USB-cable solution for digitising the contents of analogue cassettes. Unfortunately, the example I purchased had severe issues with its USB implementation. The audio coming straight off the USB cable would jump from perfectly fine for a few seconds, to glitchy, stuttering and repeating short sections, to half-speed slooooow with the attendant drop in pitch. Unusable.

I can only hope the problem is isolated to my unit (which was cheap and described as "sold untested", so I have no-one to blame but myself) - if not, someone's done a really bad job of their USB audio implementation. Luckily, the USB power delivery works absolutely fine, so the tape transport could still run while I resorted to the old "Analog Hole" solution via my existing (rather nice) USB audio interface, a Native Instruments Komplete Audio 1, which I picked up after my previous interface, a TASCAM FireOne, finally kicked the bucket.

In the following picture, you can see my digitising solution. AWA tape transport (powered by USB) to 3.5mm headphone socket, through a 1/4" adaptor to a short guitar lead and into the Komplete Audio 1's Line In. From there, it goes in via the KA1's (fully-working!) USB connection to GarageBand on the Mac. A noise gate and a little compression are applied, and once each side of each tape has been captured, it gets exported directly to an MP3 file. I intend to present the client with not only the Audio CDs but also a data CD containing these MP3s so that future media formats can hopefully be more easily accommodated.

What if I didn't already have a USB audio interface? Would the client have given up, with their media stuck in the analog era, never to be heard again?

It amused me that analog technology was both the cause of this work - in that this medium and the ability to play it has gone from ubiquitous in the 1980s to virtually extinct - and its solution, using an analog interface to get around a deficient digital one.

Sunday, 18 December 2022

Three D's of 3D Printing in 2022

I've been somewhat fascinated with 3D printing ever since becoming aware of it a decade ago, but it was prohibitively expensive to get into when machines were in the four-digit USD$ range, and it seemed likely to be limited to somewhat-unreliably producing useless tchotchkes at vastly higher cost. Things have changed. A lot.

Declining costs

Cost of entry
My new(ish) printer is the Cocoon Create Modelmaker, a respin/reskin of the Wanhao i3 Mini - which if you follow the links, you'll note is USD$199 brand new, but I got mine second-hand on eBay for AUD$100. I'm a sucker for an eBay bargain. When I picked it up, the seller (who was upgrading to a model with a larger print bed) also gave me a crash course in printing and then threw in an almost-full 300m spool of filament to get me started - another AUD$20 saved. So I'm already at the f*@k it point as far as up-front investment goes.

Cost of materials
I'm picking up 300m rolls of PLA filament from eBay for AUD$20-$24 delivered, and I'm choosing local suppliers so they typically get delivered within 3 days. I could go even cheaper if I used Chinese suppliers. The biggest thing I've printed so far was a case for my Raspberry Pi 3B+, part of a 19" rack mount setup (I'm also a sucker for anything rackmounted) - that took 21 hours and used about 95c of filament. So really, it's starting to approach "free to make" as long as you don't place too much value on your own time...

Damn Fine Software

Seven years ago, Scott Hanselman documented his early experiences with 3D printing; there was a lot of rage and frustration. Maybe I've just been lucky, maybe buying a printer that had already been used, tweaked, and enhanced (with 3d-printed upgrade parts) was a galaxy-brain genius move, but honestly, I've had very little trouble, and I'd estimate less than 50c of material has ended up in the bin. Happy with that. I think the tools have moved on supremely in that time, and awesomely, they're all FREE and most are also Open-Source.

Cura
Ultimaker Cura takes an STL file and "slices" it into something your actual hardware can print, via a G-Code file. It's analogous to the JVM taking generic Java bytecodes and translating them to x86 machine language or whatever. Anyway, it does a great job, and it's free.

OctoPrint
My first 3D printing "toolchain" consisted of me slicing in Cura on my Mac, saving the file to a micro SD card (via an adapter), then turning around to place the (unsheathed) micro SD card into my printer's front-panel slot and instructing it to print. This was fine, but the "sneakernet"-like experience was annoying (I kept losing the SD adapter) and the printer made a huge racket being on a table in the middle of the room. Then I discovered OctoPrint, an open-source masterpiece that network-enables any compatible 3D printer with a USB port. I pressed my otherwise-idle 5-series Intel NUC into service and it's been flawless, allowing me to wirelessly submit jobs to the printer, which now resides in a cupboard, reducing noise and increasing temperature stability (which is good for print quality).

Tinkercad
It didn't take long for me to want a little more than what Thingiverse et al could provide. Thingiverse's "Remix" culture is just awesome - a hardware equivalent to open-sourced software - but my experience of CAD was limited to a semester in university bashing up against Autodesk's AutoCAD, so I figured it would just be too hard for a hobbyist like me to create new things. Then I discovered Tinkercad, a free web application by, of all companies, Autodesk! This app features one of the best tutorial introductions I've ever seen; truly, I could get my 10-year-old daughter productive in this software thanks to that tutorial. And the whole thing being on the web makes it portable and flexible. Massive kudos to Autodesk for this one.

Do It

The useless tchotchke era is over; I've been using my printer to replace lost board game tokens, organise cables, rackmount loose devices, and create LEGO parts that don't exist yet. As far as I'm concerned it's virtually paid for itself already, and I'm still getting better as a designer and operator of the machine. If you've been waiting for the right time to pounce, I strongly recommend picking up a used 3D printer and giving it a whirl.

Saturday, 26 November 2022

AWS Step Functions - a pretty-good v1.0

I've been using Amazon's Step Functions service a fair bit at work recently, as a way to orchestrate and visualise a migration process that involves some Extract-Transform-Load steps and various other bits, each one being an AWS Lambda.

On the whole, it's been pretty good - it's fun to watch the process chug along with the flowchart-like UI automagically updating (I can't show you any screenshots unfortunately, but it's neat). There have been a couple of reminders however that this is a version 1.0 product, namely:

Why can't I resume where I failed before?

With our ETL process, frequently we'll detect a source data problem in the Extract or Transform stages. It would be nice if after fixing the data in place, we could go back to the failed execution and just ask it to resume at the failed step, with all of the other "state" from the execution intact.

Similarly, if we find a bug in our Extract or Transform lambdas themselves, it's super-nice to be able to monkey-patch them right there and then (remembering, of course, to update the source code in Git as well) - but it's only half as nice as it could be. If we could fix the broken lambda code and then re-run the execution that uncovered the bug, the cycle time would be outstanding.

Why can't you remember things for me?

Possibly related to the first point is the disappointing discovery that Step Functions have no "memory" (or "context", if you prefer) where you can stash a variable for use later in the pipeline. That is, you might expect to be able to declare three steps like this:

    Extract Lambda
      Inputs:
        accountId
      Outputs:
        pathToExtractedDataBucket

    Transform Lambda
      Inputs:
        pathToExtractedDataBucket
      Outputs:
        pathToTransformedDataBucket

    Load Lambda
      Inputs:
        accountId
        pathToTransformedDataBucket
      Outputs:
        isSuccessful
But unfortunately that simply will not work (at time of writing, November 2022). The above pipeline will fail at runtime because accountId has not been passed through the Transform lambda in order for the Load lambda to receive it!

For me, this really makes a bit of a mockery of the reusability and composability of lambdas within Step Functions. To fix the situation above, we have to make the Extract Lambda emit the accountId, and make the Transform Lambda aware of accountId and pass it through, even though it has no interest in or need for it! That is:

    Extract Lambda
      Inputs:
        accountId
      Outputs:
        accountId
        pathToExtractedDataBucket

    Transform Lambda
      Inputs:
        accountId
        pathToExtractedDataBucket
      Outputs:
        accountId
        pathToTransformedDataBucket

    Load Lambda
      Inputs:
        accountId
        pathToTransformedDataBucket
      Outputs:
        isSuccessful
That's really not good in my opinion, and makes for a lot of unwanted cluttering-up of otherwise reusable lambdas, dealing with arguments that they don't care about, just because some other entity needs them. Fingers crossed this will be rectified soon, as I'm sure I'm not the first person to have been very aggravated by this design.
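The pass-through can at least be made mechanical inside each Lambda: merge the incoming event into the result, so unrelated fields survive the hop. A minimal sketch (the handler shape and field names follow the example above; this is not a real AWS API):

```typescript
type TransformInput = {
  accountId: string;
  pathToExtractedDataBucket: string;
};

// Hypothetical Transform Lambda: it only cares about
// pathToExtractedDataBucket, but spreads the whole input into its
// output so accountId (and anything else) passes through untouched.
function transformHandler(event: TransformInput) {
  const pathToTransformedDataBucket =
    event.pathToExtractedDataBucket.replace("extracted", "transformed");
  return { ...event, pathToTransformedDataBucket };
}
```

It's still clutter, but at least the clutter is one spread operator rather than a hand-maintained list of fields the lambda doesn't care about.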

Sunday, 30 October 2022

Dispatchables Part 3; Make It So

In the previous part of this series about implementing a "dispatchable" for solar-efficient charging of (AA and AAA) batteries, I'd worked out that with a combination of the Google Assistant's Energy Storage trait (made visible through the openHAB Google Assistant Charger integration) and a small amount of local state, it looked like, in theory, I could achieve my aim of a voice-commanded (and -queryable) system that would allow efficient charging for a precise amount of time. Let's now see if we can turn theory into practice.

The first step is to copy all the configuration from the openHAB Charger device type into an items file:

$OPENHAB_CONF/items/dispatchable.items
Group  chargerGroup 
{ ga="Charger" [ isRechargeable=true, unit="SECONDS" ] }

Switch chargingItem         (chargerGroup) 
{ ga="chargerCharging" }

Switch pluggedInItem        (chargerGroup) 
{ ga="chargerPluggedIn" }

Number capacityRemainSecondsItem   (chargerGroup) 
{ ga="chargerCapacityRemaining" }

Number capacityFullSecondsItem     (chargerGroup) 
{ ga="chargerCapacityUntilFull" }

You'll note the only alterations I made were to change the unit to SECONDS, as that's the best fit for our timing system, and a couple of renames for clarity. Here's what they each represent:

  • chargingItem: are the batteries being charged at this instant?
  • pluggedInItem: has a human requested that batteries be charged?
  • capacityRemainSecondsItem: how many seconds the batteries have been charging for
  • capacityFullSecondsItem: how many seconds of charging remain

I could have used the "proper" dead timer pattern of saying "any non-zero capacityFullSecondsItem indicates intent", but given the Charger type requires all four variables to be implemented anyway, I went for a crisper definition. It also helps with the rule-writing, as we'll shortly see.

If we look at the openHAB UI at this point we'll just have a pile of NULL values for all these items:

Now it's time to write some rules that will get sensible values into them all. There are four in total, and I'll explain each one in turn rather than dumping a wall of code.


Rule 1: Only charge if it's wanted, AND if we have power to spare

$OPENHAB_CONF/rules/dispatchable.rules
rule "Make charging flag true if wanted and in power surplus"
when
  Item currentPowerUsage changed 
then
  if (pluggedInItem.state == ON) {
    if (currentPowerUsage.state > 0|W) {
      logInfo("dispatchable", "[CPU] Non-zero power usage");
      chargingItem.postUpdate(OFF);
    } else {
      logInfo("dispatchable", "[CPU] Zero power usage");
      chargingItem.postUpdate(ON);
    }
  } 
end
This one looks pretty similar to the old naïve rule we had way back in version 1.0.0, and it pretty much is. We've just wrapped it with the "intent" check (pluggedInItem) to make sure we actually need to do something, and offloaded the hardware control elsewhere. Which brings us to...


Rule 2: Make the hardware track the state of chargingItem

$OPENHAB_CONF/rules/dispatchable.rules
rule "Charge control toggled - drive hardware" 
when
  Item chargingItem changed to ON or
  Item chargingItem changed to OFF
then
  logInfo("dispatchable", "[HW] Charger: " + chargingItem.state);
  SP2_Power.sendCommand(chargingItem.state.toString());
end
The simplest rule of all; it's a little redundant, but it does prevent hardware-control "commands" getting mixed up with software-state "updates".


Rule 3: Allow charging to be requested and cancelled

$OPENHAB_CONF/rules/dispatchable.rules
rule "Charge intent toggled (pluggedIn)" 
when
  Item pluggedInItem changed
then
  if (pluggedInItem.state == ON) {
    // Human has requested charging 
    logInfo("dispatchable", "[PIN] charge desired for: ");
    logInfo("dispatchable", capacityFullSecondsItem.state + "s");
    capacityRemainSecondsItem.postUpdate(0);
    // If possible, begin charging immediately:
    if (currentPowerUsage.state > 0|W) {
      logInfo("dispatchable", "[PIN] Awaiting power-neutrality");
    } else {
      logInfo("dispatchable", "[PIN] Beginning charging NOW");
      chargingItem.postUpdate(ON);
    }
  } else {
    logInfo("dispatchable", "[PIN] Cancelling charging");
    // Clear out all state
    capacityFullSecondsItem.postUpdate(0);
    capacityRemainSecondsItem.postUpdate(0);
    chargingItem.postUpdate(OFF);
  }
end
This rule is where things start to get a little trickier, but it's still fairly straightforward. The key thing is setting or resetting the three other variables to reflect the user's intent.

  • If charging is desired, we assume that the "how long for" variable has already been set correctly, and we zero the "how long have you been charging for" counter. Then, if the house is already power-neutral, we start. Otherwise we wait for conditions to be right (Rule 1).
  • If charging has been cancelled, we can just clear out all our state. The hardware will turn off almost immediately because of Rule 2.


Rule 4: Keep timers up-to-date

$OPENHAB_CONF/rules/dispatchable.rules
rule "Update charging timers"
when
  Time cron "0 0/1 * * * ?"
then
  if (pluggedInItem.state == ON) {
    // Charging has been requested
    if (chargingItem.state == ON) {
      // We're currently charging
      var secLeft = capacityFullSecondsItem.state as Number - 60; 
      capacityFullSecondsItem.postUpdate(secLeft);
      logInfo("dispatchable", "[CRON] " + secLeft + "s left");
      
      var inc = capacityRemainSecondsItem.state as Number + 60; 
      capacityRemainSecondsItem.postUpdate(inc);

      // Check for end-charging condition:
      if (secLeft <= 0) {
        // Same as if user hit cancel:
        logInfo("dispatchable", "[CRON] Reached target.");
        pluggedInItem.postUpdate(OFF);
      }
    } 
  } 
end
This last rule runs once a minute, but only does anything if the user asked for charging AND we're currently doing so. If that's the case, we decrement the "time left" by 60 seconds, and conversely increase the "how long have they been charging for" by 60 seconds. Yes, I know this might not be strictly accurate, but it's good enough for my needs.

The innermost if statement checks for the happy-path termination condition - we've hit zero time left! - and lowers the intent flag (pluggedInItem), thus causing Rule 3 to fire, which in turn causes Rule 2 to fire and turn off the hardware.

UI Setup

This has ended up being quite the journey, and we haven't even got the Google integration going yet! The last thing for this installment is to knock up a quick control/status UI so that we can see that it actually works correctly. Here's what I've got in my openHAB "Overview" page:

The slider is wired to capacityFullSecondsItem, with a range of 0 - 21600 (6 hours) in 60-second increments, and 6 "steps" marked on the slider corresponding to integer numbers of hours for convenience. The toggle is wired to pluggedInItem. When I want to charge some batteries, I pull the slider to my desired charge time and flip the switch. Here's a typical example of what I get in the logs if I do this during a sunny day:
[PIN] charge desired for: 
420s
[PIN] Beginning charging NOW
[HW] Charger: ON
[CRON] 360s left
...
[CRON] 120s left
[CRON] 60s left
[CRON] 0s left
[CRON] Reached target.
[PIN] Cancelling charging
[HW] Charger: OFF

Saturday, 17 September 2022

Dispatchables Part 2; Computer, enhance!

As usual with software, Dispatchables v1.0.0 wasn't ideal. In fact, it didn't really capture the "Dispatchable" idea at all. What if I don't have any batteries that need charging? Wouldn't it be better to only enable the charger if there was actually charging work to be done? And for how long? We need a way to specify intent.

Here's what I'd like to be able to tell the charging system:

  • I have flat batteries in the charger
  • I want them to be charged for a total of {x} hours

To me, that looks like a perfect job for a voice-powered Google Assistant integration. Let's go!

Googlification phase 1

First, let's equip our Broadlink smart power socket item with the required ga attribute so we can control it via the openHAB Google Assistant Action.

$OPENHAB_CONF/items/powerpoints.items:
Switch SP2_Power "Battery Charger Power" { 
  channel="broadlink:sp2:34-ea-34-84-86-d1:powerOn", 
  ga="Switch" 
}

If I go through the setup steps in the Google Assistant app on my phone, I can now see "Battery Charger Power" as a controllable device. And sure enough, I can say "Hey Google, turn on the battery charger" and it all works. Great!

Now, we need to add something to record the intent to perform battery-charging when solar conditions allow, and something else that will track the number of minutes the charger has been on for, since the request was made. Note that this may well be over multiple distinct periods, for example if I ask for 6 hours of charging but there's only one hour of quality daylight left in the day, I would expect the "dispatch" to be resumed the next day once conditions were favourable again. Once we've hit the desired amount of charging, the charger should be shut off and the "intent" marker reset to OFF. Hmmm... 🤔

Less state === Better state

Well, my first optimisation on the way to solving this is to streamline the state. I absolutely do not need to hold multiple distinct but highly-related bits of information:

  • Intent to charge
  • Desired charge duration
  • Amount of time remaining in this dispatch

... that just looks like an OOP beginner's first try at a domain object. Huh. Remember Java Beans? Ugh.

We can actually do it all with one variable, via the Dead Timer "pattern" (if you can call it such a thing) that I learnt from an embedded C developer almost 20 years ago:


  unsigned int warning_led_timer = 0;

  /* Inside the main loop, executed once per second: */

  if (warning_led_timer > 0) {
    warning_led_timer--;

    /* Enable the LED, or turn it off if no longer needed */
    enable_led(WARNING_LED, warning_led_timer > 0);
  }

  /* ...
   * Somewhere else, in code that needs to show
   * the warning LED for 3 seconds:
   */
  warning_led_timer = 3;
It encapsulates:
  • intent - anyone setting the timer to a non-zero value
  • desired duration - the initial non-zero value
  • duration remaining - whatever value the variable is currently holding; and
  • termination - when the variable hits zero

Funny that a single well-utilised variable in C (of all things) can actually achieve one of the stated goals of OO - encapsulation - isn't it? It all depends on your point of view, I guess. Okay, let's step back a little and see what we can do here.
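For what it's worth, the dead timer translates directly to TypeScript. This is just a sketch with invented names, anticipating the battery-charging use case:

```typescript
// One variable carries intent, duration, remaining time and termination:
let chargeTimerSeconds = 0;        // 0 = nobody wants charging

// "Somewhere else in the code": express intent by setting the timer.
function requestCharging(seconds: number) {
  chargeTimerSeconds = seconds;
}

// Called once per "tick" (e.g. once a second) by the main loop;
// returns whether the charger should currently be on.
function tick(): boolean {
  if (chargeTimerSeconds > 0) {
    chargeTimerSeconds--;
  }
  return chargeTimerSeconds > 0;
}
```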

Objectives

What I'd like to be able to do is have this conversation with the Google Assistant:

Hey Google, charge the batteries for five hours
"Okay, I'll charge the batteries for five hours"

... with all the underlying "dispatchable" stuff I've talked about being done transparently. And for bonus points:

Hey Google, how much charge time remaining?
"There are three hours and 14 minutes remaining"

So as it turns out, the Google Assistant has an Energy Storage trait which should allow the above voice commands (or similar) to work, as it can be mapped into the openHAB Charger Device Type. It's all starting to come together - I don't have a "smart charger" (i.e. for an electric vehicle) but I think I can simulate having one using my "dead timer"!

Sunday, 28 August 2022

"Dispatchables" with OpenHAB and PowerPal

I read a while back about the concept of "dispatchable" energy sources - namely, ones that can be brought on- or off-stream at virtually no notice, at a desired output level. As an enthusiastic solar-power owner/operator, the idea of tuning my energy consumption to also be dispatchable, suited to the output of my rooftop solar cells, makes a lot of sense.

My first tiny exploration into this field will use openHAB to automate "dispatch" of a non-time-critical task - recharging some batteries - at a time that makes best use of the "free" solar energy coming from my roof.

Just to be clear, I'm referring to charging domestic AA and AAA batteries here; I'm not trying to run a PowerWall!

OMG PPRO REST API FTW

To get the necessary insight into whether my house is running at a power surplus, I'm using my PowerPal PRO, which offers a simple RESTful API. If you send off a GET with suitable credentials to

https://readings.powerpal.net/api/v1/device/{{SERIAL_NUMBER}}
you get something like:

{
    "serial_number": "000abcde",
    "total_meter_reading_count": 443693,
    "pruned_meter_reading_count": 0,
    "total_watt_hours": 4246285,
    "total_cost": 1380.9539,
    "first_reading_timestamp": 1627948800,
    "last_reading_timestamp": 1659495300,
    "last_reading_watt_hours": 0,
    "last_reading_cost": 0.00062791666,
    "available_days": 364,
    "first_archived_date": "2021-04-13",
    "last_archived_date": "2022-08-02"
}

It's pretty straightforward to translate that into an openHAB Thing definition using the HTTP Binding that will get us the current watt-hours reading every 60 seconds (which is how often the device phones home).

$OPENHAB_CONF/things/powerpal.thing:
Thing http:url:powerpal "PowerPal" [
  baseURL="https://readings.powerpal.net",
  headers="Authorization=MyPowerPalAPIKey", 
    "Accept=application/json",
  timeout=2000,
  bufferSize=1024,
  refresh=60] {
    Channels:
      Type number : powerUsage "Newest Power Usage" 
      [ stateExtension="/api/v1/device/000abcde", 
      stateTransformation="JSONPATH:$.last_reading_watt_hours", 
      mode="READONLY" ]
}
You can get MyPowerPalAPIKey as used above by opening the PowerPal mobile app and going to Guidance -> Generate an API Key.
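That stateTransformation is doing no more than plucking one field out of the JSON payload - the equivalent of this little TypeScript helper (sample values trimmed from the response shown earlier):

```typescript
// Equivalent of JSONPATH:$.last_reading_watt_hours over the API response:
function extractWattHours(json: string): number {
  return JSON.parse(json).last_reading_watt_hours as number;
}

// Trimmed-down sample of the PowerPal response:
const sample = `{
  "serial_number": "000abcde",
  "last_reading_watt_hours": 0,
  "last_reading_cost": 0.00062791666
}`;

const watts = extractWattHours(sample);
```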

That's it for the "physical" (Thing) layer. Let's move up the stack and define an Item that we can work with in a Rule.

$OPENHAB_CONF/items/powerpal.items:
Number:Power currentPowerUsage "Current Power Usage [%d W]" 
  {channel="http:url:powerpal:powerUsage"}

... and if you're me, nothing will happen, and you will curse openHAB and its constant changes. Make sure you've actually got the HTTP Binding installed, or it'll all just silently fail. I wasn't able to see the list of official bindings because of some weird internal issue, so I had to do a full sudo apt-get update && sudo apt-get upgrade openhab before I could get it.

Then, fun times ensued because the PowerPal API uses a slightly-strange way of providing authentication, which didn't fit very well with how the HTTP binding wants to do it. I had to go spelunking through the binding's source code to figure out how to specify the Authorization header myself.

Now we can finally get to the "home automation bus" bit of openHAB ... we define a rule that's watching for power usage changes, and triggers my Broadlink SP2 smart power switch on or off depending on whether we're net-zero.

$OPENHAB_CONF/rules/dispatchable.rules:
rule "Charge batteries if in power surplus"
when
  Item currentPowerUsage changed 
then
  logInfo("dispatchable", "Power: " + currentPowerUsage.state);

  if (SP2_Power.state == ON && currentPowerUsage.state > 0|W) {
    logInfo("dispatchable", "Charger -> OFF");
    SP2_Power.sendCommand(OFF);
  }
  if (SP2_Power.state == OFF && currentPowerUsage.state == 0|W) {
    logInfo("dispatchable", "Charger -> ON");
    SP2_Power.sendCommand(ON);
  }
end

And we're all done!

What's that weird |W stuff? That's an inline conversion to a Number:Power value, so that comparisons can be performed - a necessary, if slightly awkward, aspect of openHAB's relatively-new "Units of Measurement" feature.

What does it look like? Here's the logs from just after 9am:

09:06:37 [dispatchable] - Power: 3 W
09:07:37 [dispatchable] - Power: 2 W
09:08:37 [dispatchable] - Power: 3 W
09:09:37 [dispatchable] - Power: 2 W
09:12:37 [dispatchable] - Power: 3 W
09:13:37 [dispatchable] - Power: 2 W
09:16:37 [dispatchable] - Power: 1 W
09:18:37 [dispatchable] - Power: 0 W
09:18:37 [dispatchable] - Charger -> ON

So the query to PowerPal is obviously running on the 37th second of each minute. There are "missing" entries because we only log when the power figure has changed. You can see the panels gradually generating more power as the sun's incident angle improves, until finally, at 9:18, we hit power neutrality and the charger is turned on. Not bad.