# The Millhouse Group Blog
*Software Development in the 21st Century, by John*

# My Perfect AWS Console
*2024-02-24*

<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvPtxY36OUy1MQBAgyvFhmlevZWDaPOr2ZgVqOYcAocr7JGU0mkCD_T2u1kN1xiR7ySU-jruTfNDRqjtFchzpqCua1q3W97h1gKKtmKewA9JOGxj0VB4ru1mTjOlhYgvY-rhx5Ysk4LpyJFfzzexQ_BZgocaBEqu4LWf1Qv7NzBnsqAnEoh_5RS63neeM/s258/aws-console.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="91" data-original-width="258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvPtxY36OUy1MQBAgyvFhmlevZWDaPOr2ZgVqOYcAocr7JGU0mkCD_T2u1kN1xiR7ySU-jruTfNDRqjtFchzpqCua1q3W97h1gKKtmKewA9JOGxj0VB4ru1mTjOlhYgvY-rhx5Ysk4LpyJFfzzexQ_BZgocaBEqu4LWf1Qv7NzBnsqAnEoh_5RS63neeM/s400/aws-console.png"/></a></div>
Yeah that's literally it.
I love AWS and use a decent portion of their offerings, but honestly I could get by with just two of the OG AWS features, and one relative newcomer.
## AWS S3
The performance these days is absolutely top-notch (without even going down the [Directory Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html) route). It's cheap enough that, with a well-designed path structure, you can put just about any workflow that can be represented with JSON into it. As in, you probably don't need [Step Functions](https://aws.amazon.com/step-functions/).
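To make that concrete, here's a hedged sketch of the kind of key scheme I mean — the stage names and `jobs/` prefix are purely illustrative, not from any real system:

```typescript
// Hypothetical key scheme: one JSON object per workflow stage, so a
// prefix listing tells you exactly how far a given job has progressed.
type WorkflowStage = "received" | "validated" | "transformed" | "loaded";

function workflowKey(jobId: string, stage: WorkflowStage): string {
  return `jobs/${jobId}/${stage}.json`;
}
```

A `ListObjectsV2` call with `Prefix` set to the job's directory then becomes a cheap "where is this job up to?" query, with no state machine in sight.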
## AWS Lambda
I can't remember the last time I needed a server that hangs around all the time, whether for work or side-gigs. Lambdas just fit _so well_ with modern request-response patterns that it's difficult to justify anything else. Add some [Provisioned Concurrency](https://docs.aws.amazon.com/lambda/latest/operatorguide/provisioned-scaling.html) if you really need nice warm caches and connections, but you still get the super-fast deployment and observability of functions-in-the-cloud. And you're not limited to 30-second execution time any more either (it's currently [up to 15 minutes](https://blog.awsfundamentals.com/lambda-limitations)), so you can wait for those slow 3rd-party APIs.
*Protip:* The Lambda Test Console allows you to store (and share!) test JSON payloads for each lambda. This can be a superb way to perform ad-hoc jobs, or re-process things that didn't quite work right the first time. Add a `dryRun?: boolean` option to the input shape and pass it through your lambda code to check things before opening the taps.
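A minimal sketch of that pattern — the event shape and handler are illustrative, not from a real codebase:

```typescript
// The stored test payload opts in to real side effects by omitting
// dryRun; with dryRun: true the handler only reports what it would do.
type ReprocessEvent = { orderId: string; dryRun?: boolean };

export const handler = async (event: ReprocessEvent) => {
  const planned = [`re-process order ${event.orderId}`];
  if (event.dryRun) {
    // Safe to fire from the Lambda test console against production
    return { performed: false, planned };
  }
  // ... the real side effects would happen here ...
  return { performed: true, planned };
};
```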
## AWS AppSync
Sure, the web console is a little clunky and bug-ridden (it won't reauthenticate its own IAM session, so your queries will eventually just ... die), but if you've got a GraphQL interface deep inside some WAF-protected VPN, this is a great way to give it a poke.

# My Apps, 2024
*2024-01-28*

Following a bit of a blogger trend, here's the stuff I use on the daily.
I've omitted things I simply don't use, like custom launchers, podcast listeners, RSS, tracking and/or Mastodon clients:
- Mail service: __GMail__
- Tasks: __Drafts in GMail__
- Cloud storage: __Dropbox__
- Web browser: __Chrome__
- Calendar: __Google Calendar__
- Weather: __BOM__ (Melbourne, Australia) app
- Video: __Netflix, Disney Plus, Amazon Prime Video__
- Music (Listening): __Spotify__, Spotify via Google Home and/or Chromecast
- Music (Creation): __GarageBand__
- Passwords: __1Password__
- Notes: __Drafts in GMail__
- Code: __Visual Studio Code__
- Terminal: __Terminal.app__
- Search: __Google__
This list has shown me how much I depend on Google:
a) not being evil; and
b) not just giving up on a product because they're bored of it
... which concerns me a little.
While it still exists though 😉, I *do* highly recommend the use of __Drafts in GMail__ as your general-purpose, cross-platform notes/todo app. You can attach files of arbitrary size, they sync to everything/everywhere *fast* (faster than Dropbox), and it's free (free-r than Dropbox... hmmm 🤔)

# 2023 End-of-year wrapup
*2023-12-30*

Another year over, with some satisfying accomplishments:
- [Frenzy.js](https://frenzyjs.themillhousegroup.com/) is now fully-playable, albeit lacking a few things from the original
- I've been [checking out Svelte](https://blog.themillhousegroup.com/2023/08/searching-for-next-spa.html) and enjoying learning that refreshing approach to front-end dev
- and in work-related stuff, I've learnt heaps about migrations, ETL, AWS tech like Step Functions, Terraform and TypeScript
In 2024, my goals will be:
- Finish Frenzy.js completely and [document it](https://blog.themillhousegroup.com/search/label/frenzy.js)
- Build [something cool](https://blog.themillhousegroup.com/2023/08/searching-for-next-spa.html) in Svelte and show the world
- Harness _The Biggest Buzzword since The Blockchain_ (AI/ML) to do something useful
That last one is a biggie, but it does seem like an idea whose time has finally come (unlike a certain "let's make money off suckers using a slow database" technology one could mention!)

# Mermaid.js is incredibly cool
*2023-11-26*

As mentioned [earlier](https://blog.themillhousegroup.com/2023/10/markdown-and-mermaid-on-blogger-in-2023.html), I'm now able to embed [Mermaid.js](https://mermaid.js.org/) diagrams directly into this blog, and I'm going to post *exclusively* about just how great Mermaid is. If you've __ever__ drawn a technical diagram in __anything__ from MS Paint through [Draw.io](https://www.drawio.com/) and even [Excalidraw](https://excalidraw.com/), and then had to convert the diagram into a PNG or JPEG in order to embed it into a document, knowing full well that the slightest change is going to involve re-doing a significant chunk of that work, **get Mermaid into your life right now**.
### Mmmm Pie
I mean, look at this:
```mermaid
pie title Pets adopted by volunteers
"Dogs" : 386
"Cats" : 85
"Rats" : 15
```
That's *four* lines of simple text:
```
pie title Pets adopted by volunteers
"Dogs" : 386
"Cats" : 85
"Rats" : 15
```
### Flowcharts with flair
```mermaid
flowchart TD
subgraph First Approach
A[Start] --> B{Does it work?}
B -->|Yes| C[OK]
B --->|No| D[Rethink]
end
subgraph Second Approach
F[Start] --> G{Does it work?}
G -->|Yes| H[OK]
G -->|No| J[Oh dear]
end
D --> F
```
Is just:
```
flowchart TD
subgraph First Approach
A[Start] --> B{Does it work?}
B -->|Yes| C[OK]
B --->|No| D[Rethink]
end
subgraph Second Approach
F[Start] --> G{Does it work?}
G -->|Yes| H[OK]
G -->|No| J[Oh dear]
end
D --> F
```
I really like how easy it is to declare those `subgraph` elements for visual grouping. And did you notice how a different-length "arrow" hints to Mermaid to render a longer arc? Delicious.
Expect to see a lot more diagrams like these around here in the future!

# Markdown (and Mermaid) on Blogger in 2023
*2023-10-23*

Building on the fine work of [cs905s](https://github.com/cs905s/md-in-blogger) on GitHub, I wanted to Markdown-enable my own blog and thought the process could be brought up to date - seven years have passed, after all, and _that_ work was in turn based on something from [2011](http://blog.chukhang.com/2011/09/markdown-in-blogger.html)!
Blogger has, slowly but surely, changed since those instructions were written, so at the very least, here's an updated guide on how to do it.
### Extra Goals
I had a couple of extra requirements over the previous solution, however.
- I want to write **my whole post** in [GitHub-flavoured Markdown](https://github.github.com/gfm/); I use it all day (for PR descriptions, in Slack, writing documentation) and I'm pretty sick of the verbosity of `<strong>BOO</strong>` compared to `**BOO**`!
- I also _never want to have to type `&lt;` ever again_, so I want the script to perform that escaping for me.
- I have fallen in love with [Mermaid](https://mermaid.js.org/) for Markdown-inspired/embedded diagrams and want them to _Just Work_ in my blog in the same way GitHub does it, with a ` ```mermaid ... ``` ` code fence
For a fairly techy blog, those changes, plus being able to use backticks and other speedups _just like it's GitHub/Slack_, are really valuable to me.
Here's an example embedded Mermaid flowchart, just because I can now:
```mermaid
flowchart TD
A[Have Blogger-hosted Blog] -->|Configure Markdown| B(Write blog post)
B --> C{Has
markdown-enabled
label
?}
C -->|Yes| D[Render Markdown to HTML post body]
D --> E[Hide original Markdown area]
C -->|No| F[Leave post body untouched]
F --> G{Has
pre with class markdown
?}
G -->|Yes| H[Render pre to HTML]
E --> I{Has
pre with class mermaid
?}
H --> I
I --> |Yes| J[Render with Mermaid.js]
I --> |No| K[Done]
```
So once you've added a `markdown-enabled` **Label** on your post, the entire **blog post body** will be considered the Markdown source. I decided to "opt-in" like this as I've got a couple-of-hundred non-Markdown-annotated blog posts that I didn't really fancy going back and opting-out of. Well, actually I did try to automate this but lost data in the process, so aborted that little yak-shaving side-mission.
The script will also remove that particular label from the DOM so nobody will see that "load-bearing label" 😉.
The source is [here on GitHub](https://github.com/themillhousegroup/md-in-blogger), and I'll endeavour to keep it working well on the Blogger platform over time. Check the README for the step-by-step instructions if you want to Markdown-enable your own blog!
<h3>Frenzy.js - the saga of flood-fill</h3>
<p><i>2023-09-30</i></p>
<p>
It may surprise you to know that yes, I am back working on Frenzy.js after a multi-year hiatus. It's a bit surreal working on an "old-skool" (pre-hooks) React app, but the end is actually in sight. As of September 2023 I have (by my reckoning) about 80% of the game done:
<ul>
<li>Basic geometry (it's all scaled 2x)</li>
<li>Levels with increasing numbers of Leptons with increasing speed</li>
<li>Reliable collision detection</li>
<li>High-score table that persists to a cookie</li>
<li><i>Mostly</i>-reliable calculation of the area to be filled</li>
<li>Accurate emulation of the game state, particularly when the game "pauses"</li>
</ul>
</p>
<p>
The big-ticket items I still need to complete are:
<ul>
<li>Implement "chasers" on higher levels</li>
<li>Fine-tune the filled-area calculation (it still gets it wrong sometimes)</li>
<li><b>Animated flood-fill</b></li>
<li>Player-start, player-death and Lepton-death animations</li>
<li>(unsure) Sound</li>
</ul>
</p>
<h5>It all comes flooding back</h5>
<p>
To remind you (if you were an 80s Acorn kid) or (far more likely) educate you on what I mean by <i>"flood-fill"</i>, here's <a href="https://en.wikipedia.org/wiki/Frenzy_%281984_video_game%29" target="_blank">Frenzy</a> doing its thing in an <a href="https://github.com/TomHarte/CLK" target="_blank">emulator</a>; I've just completed drawing the long vertical line, and now the game is flood-filling the smaller area:
</p>
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='400' height='322' src='https://www.blogger.com/video.g?token=AD6v5dyRel_vp4OS4F4xUBMG7mKFEjm9w2aBMrwI11wkOkKY3SuqZNxQRBPdmSoVjoSWcj7JqkezSq5TAGOyJd1FdQ' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<p>
I wanted to replicate this distinctive style of flood-fill <strong>exactly</strong> in my browser-based version, and it's been quite the labour of love. My first attempt (that actually <i>worked</i>; there were many iterations that <i>did not</i>) was so comically slow that I almost gave up on the whole idea. Then I took a concrete pill and decided that if I couldn't get a multiple-GHz, multi-cored MONSTER of a machine to replicate a single-cored 2MHz (<a href="https://en.wikipedia.org/wiki/Acorn_Electron#Hardware" target="_blank">optimistically</a>) 8-bit grot-box from the early 1980s, I may as well just give up...
</p>
<p>
The basic concept for this is:
<pre>
Given a polygonal area A that needs to be flood-filled;
Determine bottom-rightmost inner point P within A.
The "frontier pixels" is now the array [P]
On each game update "<a href="https://www.npmjs.com/package/react-game-kit#loop-" target="_blank">tick</a>":
Expand each of the "frontier pixels" to the N,S,E and W; but
Discard an expansion if it hits a boundary of A
Also discard it if the pixel has already been filled
The new "frontier pixels" is all the undiscarded pixels
Stop when the "frontier pixels" array is empty
</pre>
I got this to be pretty efficient using a bit-field "sparse array" to quickly check for already-filled pixels. In the browser, I could perform the per-tick operations in less than 0.1 milliseconds for any size of <tt>A</tt>. Not too surprising given the entire game area is only 240x180 pixels, and the maximum possible polygonal area could only ever be half that big: 21,600 pixels.
</p>
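<p>
In TypeScript, one tick of that expansion sketches out something like the following — a minimal, hedged re-creation of the pseudocode above, using a <tt>Set</tt> of string keys in place of the real bit-field sparse array:
</p>

```typescript
// One game-tick's worth of flood fill: expand every frontier pixel
// N, S, E and W, discarding out-of-area and already-filled pixels.
type Px = [number, number];

function expandFrontier(
  frontier: Px[],
  filled: Set<string>,        // stands in for the bit-field sparse array
  inArea: (p: Px) => boolean  // true if p is an inner point of area A
): Px[] {
  const next: Px[] = [];
  for (const [x, y] of frontier) {
    const candidates: Px[] = [[x, y - 1], [x, y + 1], [x + 1, y], [x - 1, y]];
    for (const p of candidates) {
      const key = `${p[0]},${p[1]}`;
      if (!inArea(p) || filled.has(key)) continue;
      filled.add(key);
      next.push(p);
    }
  }
  return next; // the frontier for the next tick; empty means we're done
}
```

<p>
Looping until the frontier comes back empty fills the whole area; in the game, each iteration happens on a separate tick, which is what produces the distinctive creeping fill.
</p>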
<p>
The problem now became efficiently shifting the big pile'o'filled-pixels from the algorithm onto the HTML5 canvas that is the main gameplay area. I'm using the excellent <a href="https://konvajs.org/docs/react/index.html" target="_blank">React Konva</a> library as a nice abstraction over the canvas, but the principal problem is that a canvas <a href="https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API" target="_blank">doesn't expose per-pixel operations in its API</a>, and nor does Konva. The Konva team has done an admirable job making their code as performant as possible, but my first cut (instantiating a pile of tiny 1x1 <tt>Rect</tt>s on each tick) simply couldn't cope once the number of pixels got significant:
</p>
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='400' height='322' src='https://www.blogger.com/video.g?token=AD6v5dx2TeVuUT8euhaQNimBLYLnENWEx7HW7f_E87SrDBACW-1nF6ONv7SgTIKc5w3KfDodsrCX_eSH4AoWLbWllg' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<p>
This has led me down a quite-interesting rabbit-hole at the intersection of <a href="https://stackoverflow.com/questions/4899799/whats-the-best-way-to-set-a-single-pixel-in-an-html5-canvas" target="_blank">HTML5 Canvas</a>, React, React-Konva, and general "performance" stuff which is familiar-yet-different. There's an <a href="https://www.measurethat.net/Benchmarks/Show/1664/1" target="_blank">interesting benchmark</a> set up for this, and the results are all over the shop depending on browser and platform. Mobile results are predictably terrible but I'm deliberately not targeting them. This was a game for a "desktop" (before we called them that) and it needs keyboard input. I contemplated some kind of gestural control but it's just not good enough I think, so I'd rather omit it.
</p>
<p>
What I need to do, is find a way to automagically go from a big dumb pile of individual filled pixels into a suitable collection of optimally-shaped polygons, implemented as Konva <tt><a href="https://konvajs.org/api/Konva.Line.html" target="_blank">Line</a></tt>s.
</p>
<h5>The baseline</h5>
<p>
In code, what I first naïvely <i>had</i> was:
<pre class="prettyprint typescript">
type Point = [number, number]
// get the latest flood fill result as array of points
const filledPixels:Array<Point> = toPointArray(sparseMap);
// Simplified a little - we use some extra options
// on Rect for performance...
return filledPixels.map((fp) =>
<Rect x={fp[0]}
y={fp[1]}
width={1}
height={1}
lineCap="square"
        fill="red"
/>
);
</pre>
With the above code, the worst-case render time while filling the worst-case shape (a box 120x180px) was <strong>123ms</strong>. Unacceptable.
What I <i>want</i> is:
<pre class="prettyprint typescript">
// Konva just wants a flat list of x1,y1,x2,y2,x3,y3
type Poly = Array<number>;
// get the latest flood fill result as array of polys
const polys:Array<Poly> = toOptimalPolyArray(sparseMap);
// far-fewer, much-larger polygons
return polys.map((poly) =>
<Line points={poly}
lineCap="square"
        fill="red"
closed
/>
);
</pre>
</p>
<p>
So how the hell do I write <tt>toOptimalPolyArray()</tt>?
</p>
<h5>Optimisation step 1: RLE FTW</h5>
<p>
My Googling for "pixel-to-polygon" and "pixel vectorisation" failed me, so I just went from first principles and tried a <a href="https://en.wikipedia.org/wiki/Run-length_encoding" target="_blank">Run-Length-Encoding</a> on each line of the area to be filled. As a first cut, this should dramatically reduce the number of Konva objects required.
Here's the worst-case render time while filling the worst-case shape (a box 120x180px): <strong>4.4ms</strong>
</p>
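<p>
As a hedged sketch (the function and type names here are mine, not the actual Frenzy.js source), the run-length encoding step looks something like:
</p>

```typescript
// Collapse each row of filled pixels into horizontal runs; each run
// becomes a single Konva shape instead of dozens of 1x1 Rects.
type Run = { y: number; x0: number; x1: number };

function rowsToRuns(pixels: Array<[number, number]>): Run[] {
  const byRow = new Map<number, number[]>();
  for (const [x, y] of pixels) {
    const xs = byRow.get(y) ?? [];
    xs.push(x);
    byRow.set(y, xs);
  }
  const runs: Run[] = [];
  for (const [y, xs] of byRow) {
    xs.sort((a, b) => a - b);
    let x0 = xs[0];
    let x1 = xs[0];
    for (const x of xs.slice(1)) {
      if (x === x1 + 1) {
        x1 = x; // contiguous: extend the current run
      } else {
        runs.push({ y, x0, x1 }); // gap: close off and start a new run
        x0 = x1 = x;
      }
    }
    runs.push({ y, x0, x1 });
  }
  return runs;
}
```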
<h5>Optimisation step 2: Boxy, but good</h5>
<p>
I'd consider this to be a kind of <i>half-vectorisation</i>. Each row of the area is optimally vectorised into a line with a start and end point. The next step would be to iterate over the lines, and simply merge lines that are "stacked" directly on top of each other. Given the nature of the shapes being filled is typically highly rectilinear, this felt like it would "win" quite often. Worst-case render time now became: <strong>1.9ms</strong>
</p>
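<p>
Sketched in the same hypothetical terms, the "stacking" pass merges runs that share an identical horizontal extent and sit directly on top of one another:
</p>

```typescript
type Run = { y: number; x0: number; x1: number };
type Box = { x0: number; x1: number; y0: number; y1: number };

function stackRuns(runs: Run[]): Box[] {
  // Process rows top-to-bottom so each run can only extend a box that
  // ends on the row directly above it.
  const sorted = [...runs].sort((a, b) => a.y - b.y);
  const boxes: Box[] = [];
  for (const r of sorted) {
    const above = boxes.find(
      (b) => b.x0 === r.x0 && b.x1 === r.x1 && b.y1 === r.y - 1
    );
    if (above) {
      above.y1 = r.y; // identical extent, directly stacked: merge
    } else {
      boxes.push({ x0: r.x0, x1: r.x1, y0: r.y, y1: r.y });
    }
  }
  return boxes;
}
```

<p>
For the highly rectilinear shapes the game produces, this collapses most of the fill into a handful of rectangles.
</p>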
<h5>Optimisation step 3: Know your enemy</h5>
<p>
I felt there was still <i>one more</i> optimisation possible, and that is to exploit the fact that the game <strong>always</strong> picks the bottom-right-hand corner in which to start filling. Thus there is a very heavy bias towards the fill at any instant looking something like this:
<pre>
----------------
| |
| |
| P|
| LL|
| LLL|
| LLLL|
| LLLLL|
| LLLLLL|
| LLLLLLL|
| LLLLLLLL|
| LLLLLLLLL|
| LLLLLLLLLL|
| LLLLLLLLLLL|
| LLLLLLLLLLLL|
| LLLLLLLLLLLLL|
|SSSSSSSSSSSSSS|
|SSSSSSSSSSSSSS|
|SSSSSSSSSSSSSS|
----------------
</pre>
where
<ul>
<li><tt>P</tt> is an unoptimised pixel</li>
<li><tt>L</tt> is a part line, that can be fairly efficiently represented by my "half-vectorisation", and</li>
<li><tt>S</tt> is an optimal block from the "stacked vectorisation" approach</li>
</ul>
You can see there are still a large number of lines (the <tt>L</tt>s and the <tt>P</tt>) bogging down the canvas. They all share a common right-hand edge, and then form a perfect right-triangle. I started implementing this change but ended up aborting that code. Worst-case render time is already significantly below the "tick" rate, and the code was getting pretty complex. Okay, it's not <i>optimal</i> optimal, but it's Good Enough. Whew.
</p>
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='400' height='322' src='https://www.blogger.com/video.g?token=AD6v5dweiFwkRTpi-5m7O7Cj3ci0QCt7hNrbqQ45DQ7Xx_RaYtZRR_q3W9SUwdwrGamu0F66DOi5-wZTynTEyPmjZg' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>

<h3>Searching for the next SPA</h3>
<p><i>2023-08-27</i></p>
<p>I've been quite taken with a particular style of casual, "clever" game that rose to prominence during The COVID Years but still has a charm that keeps me visiting almost daily:
<ul>
<li><strong><a href="https://www.nytimes.com/games/wordle/index.html" target="_blank">Wordle</a></strong> (the original, and the "rags to riches" ideal)</li>
<li><strong><a href="https://www.merriam-webster.com/games/quordle/#/" target="_blank">Quordle</a></strong> (a beautiful initial implementation, albeit reduced now)</li>
<li><strong><a href="https://heardle-wordle.com/" target="_blank">Heardle</a></strong> (recently escaped from the clutches of Spotify)</li>
</ul>
and most-recently:
<ul>
<li><strong><a href="https://www.nytimes.com/games/connections" target="_blank">Connections</a></strong> at the New York Times</li>
</ul>
</p>
<p>
There are a heap of common factors amongst these games (and I'll optimistically include <a href="https://blog.themillhousegroup.com/2022/06/introducing-cardle.html" target="_blank">my</a> own <a href="https://cardle.themillhousegroup.com" target="_blank">Cardle</a> here too) that I think make them feel so "nice":
<ul>
<li>Rejection of <strong>obvious monetization</strong> strategies</li>
<li>Feel resolutely <strong>mobile-first</strong> in UI/UX (large elements, zero scrolling!)</li>
<li>Delightful levels of <strong>polish</strong> (micro-interactions, animation etc)</li>
<li><strong>Focused</strong>; not just a Single *Page* App, but almost a <strong>Single *Pane* App</strong></li>
</ul>
</p>
<p>
Of course there's also the little matter of having a great idea with suitably nice mechanics and, frequently, creativity (<strong><a href="https://www.nytimes.com/games/connections" target="_blank">Connections</a></strong>, I think, excels in having creative, challenging content by Wyna Liu that is pitched just so <i>*chef's kiss*</i>), but I think there are still areas of the word-association, letter-oriented game landscape to be explored.
</p>
<p>
For this next one I also will be trying out <strong><a href="https://svelte.dev/" target="_blank">Svelte</a></strong> after reading the superb <a href="https://joshcollinsworth.com/blog/antiquated-react" target="_blank">"Things you forgot (or never knew) because of React"</a> by Josh Collinsworth which very nicely articulated what feels a little "ick" about React development these days, and paints a very nice picture on what's on the other side of the fence. The scoped-styling and in-built animation abilities in particular seem like a perfect fit for this kind of app.
</p>
<p>Now I just need an idea...</p>

<h3>Can you handle the truth?</h3>
<p><i>2023-07-30</i></p>
<p>
JavaScript/ECMAScript/TypeScript are officially everywhere these days and with them comes the <a href="https://github.com/rwaldron/idiomatic.js/#cond" target="_blank">idiomatic use of truthiness checking</a>.
</p>
<p>
At work recently, I had to fix a nasty bug where the truthiness of an optional value was used to determine what "mode" to be in, instead of using a perfectly good enumerated type located nearby. Let me extrapolate this into a worked example that might show how dangerous this is:
</p>
<pre class="prettyprint typescript">
type VehicleParameters = {
roadSpeed: number;
engineRPM: number;
...
cruiseControlOn: boolean;
cruiseControlSpeed: number | undefined;
}
</pre>
and imagine, running a few times a second, we had a function:
<pre class="prettyprint typescript">
function maintainCruiseSpeed(vp: VehicleParameters) {
const { roadSpeed, cruiseControlSpeed } = vp;
  if (cruiseControlSpeed && roadSpeed < cruiseControlSpeed) {
accelerate();
}
}
</pre>
<p>
Let's suppose the driver of this vehicle hits "SET" on their cruise control stalk to lock in their current speed of 100km/h as their desired automatically-maintained speed. The control module sets the <tt>cruiseControlOn</tt> boolean to <tt>true</tt>, and copies the current value of <tt>roadSpeed</tt> (being <tt>100</tt>) into <tt>cruiseControlSpeed</tt>.
</p>
<p>
Now imagine the driver disengages cruise control, and the boolean is correctly set to <tt>false</tt>, but the <tt>cruiseControlSpeed</tt> is retained, as it is very common for a cruise system to have a RESUME feature that goes back to the previously-stored speed.
</p>
<p>
And all of a sudden we have an <strong>Unintended Acceleration</strong> situation. Yikes.
</p>
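<p>
To see the failure concretely, here's a self-contained sketch of the example, with <tt>accelerate</tt> injected so the effect is observable:
</p>

```typescript
// Cruise is OFF, but the retained speed is truthy - so we accelerate.
type VehicleParams = {
  roadSpeed: number;
  cruiseControlOn: boolean;
  cruiseControlSpeed: number | undefined;
};

function maintainCruiseSpeed(vp: VehicleParams, accelerate: () => void) {
  const { roadSpeed, cruiseControlSpeed } = vp;
  // Truthiness is standing in for "is cruise engaged?" - that's the bug
  if (cruiseControlSpeed && roadSpeed < cruiseControlSpeed) {
    accelerate();
  }
}
```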
<h5>As simple as can be, but no simpler</h5>
<p>
Don't get me wrong, I <i>like</i> terse code; one of the reasons I liked Scala so much was the succinctness after escaping from the famously long-winded <a href="http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html" target="_blank">Kingdom of Nouns</a>. I also loathe redundant and/or underperforming fields, in particular Booleans that shadow another bit of state, e.g.:
</p>
<pre class="prettyprint typescript">
const [isLoggedIn] = useState(false);
const [loggedInUser] = useState(undefined);
</pre>
<p>
That kind of stuff drives me insane. What I definitely <i>really like</i> is when we can be Javascript-idiomatic AND use the power of TypeScript to <strong>prevent</strong> <i>combinations of things that should not be</i>. How?
</p>
<h5>Typescript Unions have entered the chat</h5>
<p>Let's define some types that model the behaviour we want:
<ul>
<li>When cruise is turned <strong>on</strong> we need target speed, there's no resume speed</li>
<li>When cruise is turned <strong>off</strong> we zero the target speed, and the resume speed</li>
<li>When cruise is set to <strong>coast</strong> (or the brake pedal is pressed) we zero the target speed, but store a resume speed</li>
<li>When cruise is set to <strong>resume</strong> we need a target speed to get back to, and there's no resume speed</li>
</ul>
</p>
<pre class="prettyprint typescript">
enum CruiseMode {
  CruiseOn,
  CruiseOff,
  CruiseCoast,
  CruiseResume,
}
type VehicleParameters = {
roadSpeed: number;
engineRPM: number;
cruiseControlSettings: CruiseControlSettings;
}
type CruiseControlSettings =
CruiseOnSettings |
CruiseOffSettings |
CruiseCoastSettings |
CruiseResumeSettings
type CruiseOnSettings = {
mode: CruiseMode.CruiseOn
targetSpeedKmh: number;
resumeSpeedKmh: 0;
}
type CruiseOffSettings = {
mode: CruiseMode.CruiseOff
targetSpeedKmh: 0;
resumeSpeedKmh: 0;
}
type CruiseCoastSettings = {
mode: CruiseMode.CruiseCoast
targetSpeedKmh: 0;
resumeSpeedKmh: number;
}
type CruiseResumeSettings = {
mode: CruiseMode.CruiseResume
targetSpeedKmh: number;
resumeSpeedKmh: 0;
}
</pre>
<p>
Let's also write a new version of <tt>maintainCruiseSpeed</tt>, still in idiomatic ECMAScript (i.e. using truthiness):
</p>
<pre class="prettyprint typescript">
function maintainCruiseSpeed(vp: VehicleParameters) {
const { roadSpeed, cruiseControlSettings } = vp;
  if (cruiseControlSettings.targetSpeedKmh && roadSpeed < cruiseControlSettings.targetSpeedKmh) {
accelerate();
}
}
</pre>
<p>
And finally, let's try and update the cruise settings to an illegal combination:
</p>
<pre class="prettyprint typescript">
function illegallyUpdateCruiseSettings():CruiseControlSettings {
return {
mode: CruiseMode.CruiseOff,
targetSpeedKmh: 120,
resumeSpeedKmh: 99,
}
}
</pre>
... but notice now, you can't; you get a TypeScript error:
<pre class="prettyprint typescript">
Type
'{ mode: CruiseMode.CruiseOff; targetSpeedKmh: 120;
resumeSpeedKmh: number; }'
is not assignable to type 'CruiseControlSettings'.
Types of property 'targetSpeedKmh' are incompatible.
Type '120' is not assignable to type '0'
</pre>
<p>
I'm not suggesting that TypeScript types will unequivocally save your critical code from endangering human life, but a little thought expended on sensibly modelling conditions <i>just might help</i>.
</p>

<h3>In praise of ETL, part three: Love-you-Load-time</h3>
<p><i>2023-06-25</i></p>
<p>
Finishing off my <a href="https://blog.themillhousegroup.com/search/label/etl" target="_blank">three-part series</a> as a newly-minted ETL fanboi, we get to the <strong>Load</strong> stage. One could be forgiven for thinking there is not a great deal to get excited about at this point; throw data at the new system and it's Job Done. But as usual with ETL, there are hidden pleasures lurking that you might not have considered until you've got your feet wet.
</p>
<h5>Mechanical Sympathy</h5>
<p>
In the "customer migration" ETL project I worked on, the output of the Transform stage for a given customer was dumped into a JSON file in an AWS S3 bucket. At Load time, the content of a bucket was scooped out and fed into the target system. Something we noticed quite quickly as the migration project ramped up was that the new system did not "like" being hit with too many "create new customer" API calls per second. It was pretty simple to implement a rate-limiting system in the Load stage (only!) to ensure we were being mechanically sympathetic to the new system, while still being able to go as fast as possible in the other stages.
</p>
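<p>
A rate limiter like that can be sketched in a few lines — this is an illustrative token bucket, not our actual implementation, with the clock injected for testability:
</p>

```typescript
// Refills `perSecond` tokens per second up to a cap of `perSecond`;
// each API call to the target system costs one token.
class RateLimiter {
  private tokens: number;
  private last: number;

  constructor(private perSecond: number, private now: () => number = Date.now) {
    this.tokens = perSecond;
    this.last = now();
  }

  tryAcquire(): boolean {
    const t = this.now();
    this.tokens = Math.min(
      this.perSecond,
      this.tokens + ((t - this.last) / 1000) * this.perSecond
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // ok to hit the target system's API
    }
    return false; // caller should back off and retry
  }
}
```

<p>
The Load worker would call <tt>tryAcquire()</tt> before each "create new customer" request and back off whenever it returns <tt>false</tt>.
</p>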
<h5>Optimal Throughput</h5>
<p>
Indeed, we had a similar rate-limit in our <strong>Extract</strong> for the benefit of our <i>source</i> system(s) - albeit at a higher rate, as its API seemed to be able to handle reading a fair bit faster than the new system's API could write. And there's another benefit - we weren't being throttled by the speed of the slower system; we could still extract as fast as the source would allow, transform and buffer into S3, then load at the optimal speed for the new system. You could get fancy and call it <i>Elastic Scaling</i> or somesuch, but really, if we'd used some monolithic process to try and do these customer migrations, we wouldn't have had this fine-grained control.
</p>
<h5>Idempotency is Imperative</h5>
<p>
One last tip: strive to ensure your Load stage does not alter the output of the Transform in <i>any</i> way, or you'll lose one of the key advantages of the whole ETL architecture. If you can't look at a transform file (e.g. a JSON blob in an S3 bucket in our case) and know that it's <i>exactly</i> what was sent to the target system, then your debugging just got a whole lot harder. Even something as innocent as populating a <tt>createdAt</tt> field with <tt>new Date()</tt> could well bite you (for example if the date has to be in a particular format). If you've got to do something like that, consider passing the date in, in the correct format, as an additional parameter to the Load stage, so there's at least some evidence of what the field was actually set to. There's really nothing worse than not being able to say with confidence what you actually sent to the target system.
</p>
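<p>
A sketch of that suggestion (names illustrative): the Load stage receives the pre-formatted date alongside the untouched transform output, rather than computing anything itself:
</p>

```typescript
type LoadRequest = { payload: Record<string, unknown>; createdAt: string };

// The transform file plus this one argument fully determine what the
// target system sees - nothing is invented at Load time.
function buildLoadRequest(
  transformOutput: Record<string, unknown>,
  createdAt: string // pre-formatted upstream, e.g. "2023-06-25"
): LoadRequest {
  return { payload: transformOutput, createdAt };
}
```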
<p>
We didn't do this, but if there was a "next time" I'd also store a copy of this payload in an S3 bucket as well, just for quick verification purposes.
</p>
<h3>In praise of ETL, part two; Trouble-free Transforms</h3>
<p><i>2023-05-06</i></p>
<p>
Continuing with my series about the <a href="https://blog.themillhousegroup.com/2023/03/in-praise-of-etl-part-one-es-are-good.html" target="_blank">unexpected pleasures of ETL</a>, we get to the <strong>Transform</strong> stage.
</p>
<p>
I've already mentioned in the previous post the huge benefit of separating the extraction and transformation stages, giving an almost-limitless source of test fixtures for transform unit tests. On the subject of testing, it strikes me that there's a parallel in ETL with the classic "cost of failure" model that is typically used to justify adoption of unit tests to dinosaur-era organisations that don't use them:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibmaoPBM46teRC6Mpx8oFPgk4w186icB8-CxW8mJOoEwBJUv7U-JeNwqoyPkISU0No0qwHmeyLTuABnmWLGYj5eehaCsu8-8Nn1blJ4P_UPLInLPY611Y9kczmrIyADXjlniUsCnex9UC3yNh-J2nuqQIcwf6tBZujQuu4dcSusRO0NV1aOU0stQGP/s2456/sw-cost-chart.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1360" data-original-width="2456" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibmaoPBM46teRC6Mpx8oFPgk4w186icB8-CxW8mJOoEwBJUv7U-JeNwqoyPkISU0No0qwHmeyLTuABnmWLGYj5eehaCsu8-8Nn1blJ4P_UPLInLPY611Y9kczmrIyADXjlniUsCnex9UC3yNh-J2nuqQIcwf6tBZujQuu4dcSusRO0NV1aOU0stQGP/s400/sw-cost-chart.png"/></a></div>
(Graph from <a href="https://deepsource.com/blog/exponential-cost-of-fixing-bugs/" target="_blank">DeepSource.com</a>)
</p>
<p>
My contention is that failure in each E/T/L stage has a similar cost profile (of course YMMV, but this held in our customer-migration scenario):
</p>
<h5>An error during <strong>Extract</strong></h5>
<ul>
<li>(Assuming an absolute minimum of logic exists in the extraction code)</li>
<li>Most likely due to a source system being overloaded/overwhelmed by sheer number of extraction requests occurring at once</li>
<li>Throttle them, and retry the process</li>
<li>Easy and cheap to restart</li>
<li>Overall <strong>INEXPENSIVE</strong></li>
</ul>
<h5>An error during <strong>Transform</strong></h5>
<ul>
<li>Easy to reproduce via unit tests/fixtures</li>
<li>Rebuild, redeploy, re-run</li>
<li>Overall <strong>MEDIUM EXPENSE</strong></li>
</ul>
<h5>An error during <strong>Load</strong></h5>
<ul>
<li>Most likely in "somebody else's code"</li>
<li>Investigation/rectification may require cross-functional/cross-team/cross-company communications</li>
<li>Re-run for this scenario may be blocked if target system needs to be cleared down</li>
<li>Overall <strong>HIGH EXPENSE</strong></li>
</ul>
<p>
Thus it behooves us (such a great vintage phrase, that!) to get our transforms nice and tight; heavy unit-testing is an obvious solution here, but so is careful consideration of what the approach to transforming questionable data should be. In our case, our initial transform attempts took the pragmatic, <a href="https://devopedia.org/postel-s-law" target="_blank">Postel</a>-esque "accept garbage, don't throw an error, return something sensible" approach. So upon encountering invalid data, for example, we'd log a warning and transform it to an <tt>undefined</tt> object or empty array as appropriate.
<br/>
<br/>
This turned out to be a problem, as we weren't getting enough feedback about the <strong>sheer amount</strong> of bad input data we were simply skimming over, resulting in gaps in the data being loaded into the new system.
<br/>
<br/>
So in the next phase of development, we became willingly, brutally "fragile", throwing an error as soon as we encountered input data that wasn't ideal. This would obviously result in a lot of failed ETL jobs, but it <strong>alerted us to the problems</strong> which we could then mitigate in the source system or with code fixes (and unit tests) as needed.
<br/>
<br/>
Interestingly, it turned out that in the "long tail" of the customer migration project, we had to return (somewhat) to "permissive mode" in order to get particularly difficult customer accounts migrated. The approach at that point was to migrate them with known holes in their data, and fix them in the TARGET system.
</p>
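<p>
As a sketch of how that dial between "brutally fragile" and "permissive" might look in code (all names here are invented for illustration, not our actual transform):
</p>

```typescript
// Illustrative sketch of a field-level transform with a strictness dial.
type TransformMode = "strict" | "permissive";

function parseEmail(raw: unknown, mode: TransformMode): string | undefined {
  const value = typeof raw === "string" ? raw.trim() : "";
  const looksValid = /^[^@\s]+@[^@\s]+$/.test(value);
  if (looksValid) return value;
  if (mode === "strict") {
    // Fail fast: surface the bad source data as a failed ETL job.
    throw new Error(`Invalid email in source data: ${JSON.stringify(raw)}`);
  }
  // Permissive: log a warning and carry on with a known hole in the data.
  console.warn(`Skipping invalid email: ${JSON.stringify(raw)}`);
  return undefined;
}
```

<p>
The point is that the two behaviours share all of their parsing logic; only the response to garbage changes, so moving between phases of the migration is a one-flag affair.
</p>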
<p>
Here's my crude visualisation of it. I don't know if this mode of code evolution has a name but I found it interesting.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi04NhEl9Ggj4XvS5EeuN5mkIhFuV3os9bjaSix4eNJqjQOkdpGPcIBD1mi6bbBm_Bwscqxx9_Wbsjp6gL9IcInGDecT675utONOFjtLu4P7Lm7BvKgtq1cbAi2h-nWrBaA4R_BdBQQo9S1Jr0Y8OjAKWyJdHv_Tt2L74JjcP7LUwhPrAYmMjqT_xkm/s1482/bellcurve.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="763" data-original-width="1482" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi04NhEl9Ggj4XvS5EeuN5mkIhFuV3os9bjaSix4eNJqjQOkdpGPcIBD1mi6bbBm_Bwscqxx9_Wbsjp6gL9IcInGDecT675utONOFjtLu4P7Lm7BvKgtq1cbAi2h-nWrBaA4R_BdBQQo9S1Jr0Y8OjAKWyJdHv_Tt2L74JjcP7LUwhPrAYmMjqT_xkm/s400/bellcurve.jpg"/></a></div>
</p>
<h3>Micro-Optimisation #393: More Log Macros! <i>(2023-04-16)</i></h3>
<p>
I've posted some of my <a href="https://blog.themillhousegroup.com/2020/10/micro-optimisation-392-log-macros.html" target="_blank">VSCode Log Macros</a> previously, but wherever there is repetitive typing, there are further efficiencies to be gleaned!
</p>
<h5>Log, Label and Prettify a variable - [ Ctrl + Option + Command + J ]</h5>
<p>
You know what's better than having the <i>contents</i> of your <tt>console.log()</tt> autogenerated?
</p>
<p>
Having the <i>whole thing</i> inserted for you!
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD2s6OIf1osOpc5fhXCsiMDhmE2_UojqMlpOaBQy7mtVVCgthjKniWuMPrHmkglp_ANFVtHWXBLCEgB3lT7bPMtj-K-PmjEdwbwsy6eDn-MgdHVF_pVjmNRPgqlAXb8WruJCz_ISNHwvUmBspRknYpEtVSTAbFo0q8VNfqiavs6JkXJWYZfQ0lHGF6/s1272/log-and-label.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="302" data-original-width="1272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD2s6OIf1osOpc5fhXCsiMDhmE2_UojqMlpOaBQy7mtVVCgthjKniWuMPrHmkglp_ANFVtHWXBLCEgB3lT7bPMtj-K-PmjEdwbwsy6eDn-MgdHVF_pVjmNRPgqlAXb8WruJCz_ISNHwvUmBspRknYpEtVSTAbFo0q8VNfqiavs6JkXJWYZfQ0lHGF6/s400/log-and-label.gif"/></a></div>
<h5>How do I add this?</h5>
<p>
On the Mac you can use ⌘K ⌘S to see the pretty shortcut list, then hit the "Open Keyboard Shortcuts (JSON)" icon in the top-right to get the text editor to show the contents of <tt>keybindings.json</tt>. And by the way, execute the command <strong>Developer: Toggle Keyboard Shortcuts Troubleshooting</strong> to get diagnostic output on what various special keystrokes map to in VSCode-speak (e.g. on a Mac, what Ctrl, Option and Command actually do).
</p>
<h6><tt>keybindings.json</tt></h6>
<pre class="prettyprint json">
// Place your key bindings in this file to override the defaults
[
  {
    "key": "ctrl+meta+alt+j",
    "when": "editorTextFocus",
    "command": "runCommands",
    "args": {
      "commands": [
        { "command": "editor.action.copyLinesDownAction" },
        {
          "command": "editor.action.insertSnippet",
          "args": {
            "snippet": "\nconsole.log(`${TM_SELECTED_TEXT}: ${JSON.stringify(${TM_SELECTED_TEXT}$1, null, 2)}`);\n"
          }
        },
        { "command": "cursorUp" },
        { "command": "editor.action.deleteLines" },
        { "command": "cursorDown" },
        { "command": "editor.action.deleteLines" }
      ]
    }
  }
]
</pre>
<p>
This one uses the new <i>(for April 2023, VSCode v1.77.3)</i> <tt>runCommands</tt> command, which, as you might infer, allows commands to be chained together in a keybinding. A really nice property of this is that you can Command-Z your way back out of the individual commands; very helpful for debugging the keybinding, but also potentially just nice-to-have.
</p>
<p>
The trick here is to retain the text selection so that <tt>${TM_SELECTED_TEXT}</tt> can continue to contain the right thing, without clobbering whatever might be in the editor clipboard at this moment. We do this by copying the line down. This helpfully keeps the selection right on the variable where we want it. We then blast over the top of the selection with the logging line, but by sneakily inserting <tt>\n</tt> symbols at each end, we break up the old line into 3 lines, where the middle one is the only one we want to keep. So we delete the above and below.
</p>
<h3>In praise of ETL, part one; E's are good <i>(2023-03-25)</i></h3>
<p>I've <a href="https://blog.themillhousegroup.com/2022/11/aws-step-functions-pretty-good-v10.html" target="_blank">written previously</a> about how at work I've been using an <a href="https://en.wikipedia.org/wiki/Extract,_transform,_load" target="_blank">ETL</a> (Extract, Transform, Load) process for customer migrations, but that was mostly in the context of a use case for <a href="https://docs.aws.amazon.com/step-functions/index.html" target="_blank">AWS Step Functions</a>.
</p>
<p>
Now I want to talk about ETL itself, and how good it's been as an approach. It's been around for a while so one would expect it to have merits, but I've found some aspects to be particularly neat and wanted to call them out specifically. So here we go.
</p>
<h3>An <strong>Extract</strong> is a perfect test fixture</h3>
<p>
I'd never realised this before, but the very act of storing the data you plan on Transforming, and Loading, is <strong>tremendously</strong> powerful. Firstly, it lets you see <i>exactly</i> what data your Transform was acting upon; secondly, it gives you replay-ability using that exact data (if that's what you want/need) and thirdly, you've got an instant source of test fixture data for checking how your transform code handles <i>that one weird bug</i> that you just came across in production.
</p>
<p>
My workflow for fixing transform-stage bugs literally became:
<ul>
<li>Locate JSON extract file for the process that failed</li>
<li>Save as local JSON file in test fixtures directory of the transform code</li>
<li>Write a test to attempt to transform this fixture (or sub-component of it)</li>
<li>Test should fail as the production code does</li>
<li>Fix transform code, test should now pass</li>
<li>Commit fixed code, new test(s) and fixture</li>
<li>Release to production</li>
<li>Re-run ETL process; bug is gone</li>
</ul>
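<p>
Here's a sketch of what one of those fixture-driven tests looked like in spirit; <tt>transformAccount</tt> and the fixture content are invented stand-ins for our real transform code and a captured JSON extract file:
</p>

```typescript
// Illustrative: the fixture below stands in for a JSON file captured by the
// Extract stage (in reality, loaded from the test fixtures directory).
interface AccountExtract {
  name?: string;
  email?: string;
}

function transformAccount(extract: AccountExtract) {
  if (!extract.name) {
    // The "fragile" behaviour: surface bad source data immediately.
    throw new Error("Account name missing in extract");
  }
  return { displayName: extract.name.trim() };
}

// The exact production data that triggered a bug becomes the test input.
const fixture: AccountExtract = { name: "  Acme Pty Ltd  " };
const result = transformAccount(fixture);
```

<p>
In a real suite this would sit inside a Jest/Vitest <tt>test(...)</tt> block, but the shape is the same: production extract in, assertion on the transformed output.
</p>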
</p>
<h3>Stepping up, and back, with the new Next.js "app" directory <i>(2023-02-27)</i></h3>
<p>
I'm toying around with a new web-based side project and I thought it was time to give the latest <b><a href="https://nextjs.org/" target="_blank">Next.js</a></b> version a spin. Although I've used <a href="https://create-react-app.dev/" target="_blank">Create-React-App</a> (generally hosted on <a href="https://www.netlify.com/" target="_blank">Netlify</a>) more recently, I've dabbled with Next.js in one capacity or another since 2018, and this time some server-side requirements made it a better choice.
</p>
<p>
The killer feature of the 2023 "beta version" of Next.js (which I assume will eventually be named <b>Next.js 14</b>) is the <tt>app</tt> directory, which takes Next's already-excellent filesystem-based routing (i.e. if you create a file called <tt>bar.tsx</tt> in a directory called <tt>foo</tt>, you'll find it served up at <tt>/foo/bar</tt> without writing a line of code) and amps it up. A lot.
</p>
<p>
I won't try to reiterate their excellent documentation, but their <a href="https://beta.nextjs.org/docs/routing/pages-and-layouts" target="_blank">nested layouts</a> feature is looking like an absolute winner from where I'm sitting, and I'd like to explain why by taking you back in time. I've done this before when talking about React-related stuff, when I joked that <a href="https://blog.themillhousegroup.com/2019/07/react-24-years-experience.html" target="_blank">the HTML <tt>&lt;img&gt;</tt> tag was like a proto-React component</a>. And I still stand by that comparison; I think this instant familiarity is one of the fundamental reasons why React has "won" the webapp developer mindshare battle.
</p>
<p>
Let me take you back to 1998. The Web is pretty raw, pretty wild, and mostly static pages. <a href="https://web.archive.org/web/19981212022644/http://www.alices.com.au/" target="_blank">My Dad's website</a> is absolutely no exception. I've meticulously hand-coded it in <tt>vi</tt> as a series of stand-alone HTML pages which get FTP'ed into position on his ISP's web server. Although I'm dimly aware of CSS, it's mainly still used for small hacks like removing the underlines from links (I still remember being shown the way to do this with an inline <tt>style</tt> tag and thinking it would never take off) - and I'm certainly not writing a separate <tt>.css</tt> file to be included by every HTML file. As a result, <i>everything</i> is styled "inline" so-to-speak, but not even in the CSS way; just mountains of <tt>width</tt>s and <tt>height</tt>s and <tt>font face</tt>s all over the place. It sucked, but HTML was getting better all the time so we just put up with it, used all that it offered, and automated what we could. Which was exactly what I did. If you dare to inspect the source of the above Wayback Machine page, you'll see that it uses HTML frames (ugh), which was a primitive way of maintaining a certain amount of UI consistency while navigating around the site.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEje0hluU72njO0Hf7q-ujLS_i7bBWwn4Eg_wjHt_BAQyNAahlkpNR6bPR0-28BtxxtjC2538GJEnoagJq_lb6BfHaPSzh1tsunCcQqioejIRPG9vVL8T38aJ4NNsxQdAo2XSuOdi0iQIENEfzyAol1OoZo8VnQKnpdoefaqlaE6wdo6pDWuQur_VcNO/s2002/Screen%20Shot%202023-02-28%20at%2010.52.42%20am.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1214" data-original-width="2002" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEje0hluU72njO0Hf7q-ujLS_i7bBWwn4Eg_wjHt_BAQyNAahlkpNR6bPR0-28BtxxtjC2538GJEnoagJq_lb6BfHaPSzh1tsunCcQqioejIRPG9vVL8T38aJ4NNsxQdAo2XSuOdi0iQIENEfzyAol1OoZo8VnQKnpdoefaqlaE6wdo6pDWuQur_VcNO/s400/Screen%20Shot%202023-02-28%20at%2010.52.42%20am.png"/></a></div>
<p>
The other thing I did to improve UI consistency, was a primitive form of templating. Probably more akin to concatenation, but I definitely had a <tt>header.htm</tt> which was crammed together with (for-example) <tt>order-body.htm</tt> to end up with <tt>order.htm</tt> using a DOS batch file that I ran to "pre-process" everything prior to doing an FTP upload - a monthly occurrence as my Dad liked to keep his "new arrivals" page genuinely fresh. Now <tt>header.htm</tt> definitely wasn't valid HTML as it would have had unclosed tags galore, but it <i>was</i> re-used for several pages that needed to look the same, and that made me <i>feel</i> efficient.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdoegEmKzSuZMvkwYRwfSYAa_Oupa-HBDBmjEhuPQUyPBZBnMUSG12KvVoBib9_D8ShbezfEucyWQ8YkiUaf2OOO_6daQVy-D9xn-pa7JdJc7tEPoMievB9g6Ng1BJbMMfrl72jnAbu9DNqiJIDs0DUT9BFyWPdEu1RNZF4nm7BhUzbC8zL4r5DlYC/s2000/Screen%20Shot%202023-02-28%20at%2010.54.13%20am.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1210" data-original-width="2000" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdoegEmKzSuZMvkwYRwfSYAa_Oupa-HBDBmjEhuPQUyPBZBnMUSG12KvVoBib9_D8ShbezfEucyWQ8YkiUaf2OOO_6daQVy-D9xn-pa7JdJc7tEPoMievB9g6Ng1BJbMMfrl72jnAbu9DNqiJIDs0DUT9BFyWPdEu1RNZF4nm7BhUzbC8zL4r5DlYC/s400/Screen%20Shot%202023-02-28%20at%2010.54.13%20am.png"/></a></div>
<p>
And this brings me to Next.js and the <a href="https://beta.nextjs.org/docs/routing/pages-and-layouts#nesting-layouts" target="_blank">nesting layouts</a> functionality I mentioned before. To achieve what took me a pile of HTML frames, some malformed HTML documents and a hacky batch file, all I have to do is add a <tt>layout.tsx</tt> and put all the pages that should use that UI alongside it. I can add a <tt>layout.tsx</tt> in <i>any</i> subdirectory and it will apply from there "down". Consistency via convention over configuration, while still nodding to the hierarchical filesystem structures we've been using since Before The Web. It's just really well thought-out, and a telling example of how much thought is going into Next.js right now. I am on board, and will be digging deeper this year for sure.
</p>
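<p>
To illustrate that "from there down" behaviour, here's a toy TypeScript model of the resolution rule; to be clear, this is my own sketch of the concept, not Next.js's actual implementation:
</p>

```typescript
// Toy sketch: given a page's path and the set of directories that contain a
// layout.tsx, list the layouts that wrap that page, outermost first.
function layoutsFor(pagePath: string, dirsWithLayout: Set<string>): string[] {
  const dirs = pagePath.split("/").slice(0, -1); // drop the page file itself
  const layouts: string[] = [];
  let current = "";
  for (const dir of dirs) {
    current = current ? `${current}/${dir}` : dir;
    if (dirsWithLayout.has(current)) {
      layouts.push(`${current}/layout.tsx`);
    }
  }
  return layouts;
}
```

<p>
So a page at <tt>app/shop/orders/page.tsx</tt>, with layouts defined in <tt>app</tt> and <tt>app/shop</tt>, gets both applied in nesting order, with zero wiring code; that's the whole convention-over-configuration trick.
</p>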
<h3>Sneaking through the Analog Hole <i>(2023-01-29)</i></h3>
<p>
I perhaps-foolishly recently agreed to perform a media-archiving task. A series of books-on-tape (yes, on physical audio cassettes), almost unplayable at this point in the century, needed to be moved onto a playable medium. For this particular client, that meant onto Audio CDs (OK, so we're moving forward, but not <i>too</i> far!). I myself didn't have a suitable playback device, but quickly located a bargain-priced solution, second-hand on eBay (of course): an <b>AWA <i>E-F34U</i></b> that appears to be exclusively distributed by the <a href="https://www.bigw.com.au" target="_blank">Big W</a> retail chain here in Australia:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgODhKE7OSwT6BILuKNeDmUQeB-mjGRRZS4pYvNVVn66GcJ-u96PuqYalNfw_S9jqOGUNK92_mzgvlAwoVvXmK9ELwfblgQa34UrYiHBjHC0Lhi7EgDEUTmkCV-C5Y1tlKLgs0gxp7SorjfATphKRro446mzxMrqIH18bF6g2FrT1-yGB8f4bvOekCD4M/s2934/IMG_6940.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="2934" data-original-width="2829" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgODhKE7OSwT6BILuKNeDmUQeB-mjGRRZS4pYvNVVn66GcJ-u96PuqYalNfw_S9jqOGUNK92_mzgvlAwoVvXmK9ELwfblgQa34UrYiHBjHC0Lhi7EgDEUTmkCV-C5Y1tlKLgs0gxp7SorjfATphKRro446mzxMrqIH18bF6g2FrT1-yGB8f4bvOekCD4M/s400/IMG_6940.jpg"/></a></div>
<p>
This device purports to be a one-USB-cable solution for digitising the contents of analogue cassettes. Unfortunately, the example I'd just purchased had extremely severe issues with its USB implementation. The audio coming straight off the USB cable would jump from perfectly fine for a few seconds, to glitchy, stuttering and repeating short sections, to half-speed slooooow with the attendant drop in pitch. Unusable.
</p>
<p>
I only hope that the problem is isolated to my unit (which was <i>cheap</i> and described as "sold untested", so I have no-one to blame but myself); if not, someone's done a really bad job at their USB Audio implementation. Luckily, the USB power works absolutely fine, so I had to resort to the old "Analog Hole" solution via my existing (rather nice) USB audio interface, a <b>Native Instruments <i>Komplete Audio 1</i></b>, which I picked up after my previous interface, a <a href="https://blog.themillhousegroup.com/2020/07/tascam-fireone-on-macos-high-sierra.html" target="_blank"><b>TASCAM <i>FireOne</i></b>, finally kicked the bucket</a>.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMfxY-cODwol15rj_hXAFzou3GzS7zMgL6qx1EEosEAmphvH3UY67rHvNEHoRfYCzZyj46xdy0IiDBQ28M9rbMem4jj1OnHtgIBgB9ovUiIAbjn9-AHb9EqpW4hupgtbGRNqwvhaMxsrdiYRZCloNAR5QpOfa3eg7bbzlFdw9TPUWUoTMdVJQmtpvEjN0/s4032/IMG_6939.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="3024" data-original-width="4032" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMfxY-cODwol15rj_hXAFzou3GzS7zMgL6qx1EEosEAmphvH3UY67rHvNEHoRfYCzZyj46xdy0IiDBQ28M9rbMem4jj1OnHtgIBgB9ovUiIAbjn9-AHb9EqpW4hupgtbGRNqwvhaMxsrdiYRZCloNAR5QpOfa3eg7bbzlFdw9TPUWUoTMdVJQmtpvEjN0/s400/IMG_6939.jpg"/></a></div>
<p>
In the following picture, you can see my digitising solution: AWA tape transport (powered by USB) to 3.5mm headphone socket, through a 1/4" adaptor to a short guitar lead, and into the <i>Komplete Audio 1</i>'s Line In. From there, it goes via the <i>KA1</i>'s (fully-working!) USB connection into GarageBand on the Mac. A noise gate and a little compression are applied, and once each side of each tape has been captured, it gets exported directly to an MP3 file. I intend to present the client with not only the Audio CDs but also a data CD containing these MP3s, so that future media formats can hopefully be more easily accommodated.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguwCBnnIdyMAIq6_QLbBio76yJODTgHsLQQziQLLMAbK8cNvMrCdi7YAam3eFZJgfs9jbauEiZn-UZ6eKGyhnLaZE7qSGrILgnMaxV0MJ4z00ZLsF2ZhCfIQQhclJSERXwXVdEHdrIT74BtVUfbM4dwrq4s5QcXRvu8875vwcvi61TlJDWOR1VqwaqRS0/s4028/IMG_6938.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="4028" data-original-width="2495" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguwCBnnIdyMAIq6_QLbBio76yJODTgHsLQQziQLLMAbK8cNvMrCdi7YAam3eFZJgfs9jbauEiZn-UZ6eKGyhnLaZE7qSGrILgnMaxV0MJ4z00ZLsF2ZhCfIQQhclJSERXwXVdEHdrIT74BtVUfbM4dwrq4s5QcXRvu8875vwcvi61TlJDWOR1VqwaqRS0/s400/IMG_6938.jpg"/></a></div>
<p>
What if I <i>didn't</i> already have a USB audio interface? Would the client have given up, with their media stuck in the analog era, never to be heard again?
</p>
<p>
It amused me that analog technology was both the <b>cause of</b> this work - in that this medium and the ability to play it have gone from ubiquitous in the 1980s to virtually extinct - <b>and its solution</b>, using an analog interface to get around a deficient digital one.
</p>
<h3>Three D's of 3D Printing in 2022 <i>(2022-12-18)</i></h3>
<p>
I've been somewhat fascinated with 3D printing ever since becoming aware of it a decade ago, but it was prohibitively expensive to get into it when machines were in the four-digit USD$ range and seemed likely to be limited to somewhat-unreliably producing <a href="https://blogs.scientificamerican.com/talking-back/3d-printing-the-great-american-tchotchke-machine/" target="_blank">useless tchotchkes</a> at vastly higher cost. Things have changed. A lot.
</p>
<h5>Declining costs</h5>
<p>
<b>Cost of entry</b>
<br/>
My new(ish) printer is the <a href="https://cocoonproducts.com.au/shop/3dprinting/3dprinters/balco-model-maker-3d-printer/" target="_blank">Cocoon Create Modelmaker</a>, a respin/reskin of the <a href="https://wanhao.store/products/wanaho-i3-mini" target="_blank">Wanhao i3 Mini</a> - which if you follow the links, you'll note is <b>USD$199</b> <i>brand new</i>, but I got mine second-hand on eBay for <b>AUD$100</b>. I'm a sucker for an eBay bargain. When I picked it up, the seller (who was upgrading to a model with a larger print bed) also gave me a crash course in printing and then threw in an almost-full 300m spool of filament to get me started - another AUD$20 saved. So I'm already at the <b>f*@k it point</b> as far as up-front investment goes.
</p>
<p>
<b>Cost of materials</b>
<br/>
I'm picking up 300m rolls of PLA filament from eBay for AUD$20-$24 <b>delivered</b>, and I'm choosing local suppliers so they typically get delivered within 3 days. I could go even cheaper if I used Chinese suppliers. The biggest thing I've printed so far was a case for my Raspberry Pi 3B+, part of a 19" rack mount setup (I'm also a sucker for <i>anything</i> rackmounted) - that took 21 hours and used about 95c of filament. So really, it's starting to approach "free to make" as long as you don't place too much value on your own time...
</p>
<h5>Damn Fine Software</h5>
<p>
Seven years ago, <a href="https://www.hanselman.com/blog/the-basics-of-3d-printing-in-2015-from-someone-with-16-whole-hours-experience" target="_blank">Scott Hanselman</a> documented his early experiences with 3D printing; there was a lot of rage and frustration. Maybe I've just been lucky, maybe buying a printer that had already been used, tweaked, and enhanced (with 3d-printed upgrade parts) was a galaxy-brain genius move, but honestly, I've had very little trouble, and I'd estimate less than 50c of material has ended up in the bin. Happy with that. I think the tools have moved on supremely in that time, and awesomely, they're all <b>FREE</b> and most are also <b>Open-Source</b>.
</p>
<p>
<b>Cura</b>
<br/>
Ultimaker <a href="https://ultimaker.com/software/ultimaker-cura" target="_blank">Cura</a> takes an <a href="https://en.wikipedia.org/wiki/STL_%28file_format%29" target="_blank">STL</a> file and "slices" it into something your actual hardware can print, via a <a href="https://en.wikipedia.org/wiki/G-code" target="_blank">G-Code</a> file. It's analogous to the JVM taking generic Java bytecodes and translating them to x86 machine language or whatever. Anyway, it does a great job, and it's free.
</p>
<p>
<b>OctoPrint</b>
<br/>
My first 3D printing "toolchain" consisted of me slicing in Cura on my Mac, followed by saving the file to a micro SD card (via an adapter), then turning around to place the (unsheathed) micro SD card into my printer's front-panel slot, and instructing it to print. This was <i>fine</i>, but the "sneakernet"-like experience was annoying (I kept losing the SD adapter) and the printer made a huge racket being on a table in the middle of the room. Then I discovered <a href="https://octoprint.org/" target="_blank">OctoPrint</a>, an open-source masterpiece that network-enables any compatible 3D printer with a USB port. I pressed my otherwise-idle <a href="https://www.intel.com.au/content/www/au/en/products/sku/88065/intel-nuc-kit-nuc5pgyh/specifications.html" target="_blank">5-series Intel NUC</a> into service and it's been flawless, allowing me to wirelessly submit jobs to the printer, which now resides in a cupboard, reducing noise and increasing temperature stability (which is good for print quality).
</p>
<p>
<b>Tinkercad</b>
<br/>
It didn't take long for me to want a little more than what <a href="https://www.thingiverse.com/" target="_blank">Thingiverse</a> et al could provide. Thingiverse's "Remix" culture is just awesome - a hardware equivalent to open-sourced software - but my experience of CAD was limited to a semester in university bashing up against Autodesk's AutoCAD, so I figured it would just be <b>too hard</b> for a hobbyist like me to create new things. Then I discovered <a href="https://www.tinkercad.com/" target="_blank">Tinkercad</a>, a free web application by, of all companies, <b>Autodesk</b>! This app features one of the best tutorial introductions I've ever seen; truly, I could get my 10-year old daughter productive in this software thanks to that tutorial. And the whole thing being on the web makes it portable and flexible. Massive kudos to Autodesk for this one.
</p>
<h5>Do It</h5>
<p>
The useless tchotchke era is over; I've been using my printer to replace lost board game tokens, organise cables, rackmount loose devices, and create LEGO parts that don't exist yet. As far as I'm concerned it's virtually paid for itself already, and I'm still getting better as a designer and operator of the machine. If you've been waiting for the right time to pounce, I strongly recommend picking up a used 3D printer and giving it a whirl.
</p>
<h3>AWS Step Functions - a pretty-good v1.0 <i>(2022-11-26)</i></h3>
<p>
I've been using <a href="https://docs.aws.amazon.com/step-functions/index.html" target="_blank">Amazon's Step Functions</a> functionality a fair bit at work recently, as a way to orchestrate and visualise a migration process that involves some <a href="https://en.wikipedia.org/wiki/Extract,_transform,_load" target="_blank">Extract-Transform-Load</a> steps and various other bits, each one being an AWS Lambda.
</p>
<p>
On the whole, it's been pretty good - it's fun to watch the process chug along with the flowchart-like UI automagically updating (I can't show you any screenshots unfortunately, but it's neat). There have been a couple of reminders however that this is a <b>version 1.0</b> product, namely:
</p>
<h5>Why can't I resume where I failed before?</h5>
<p>With our ETL process, frequently we'll detect a source data problem in the Extract or Transform stages. It would be nice if after fixing the data in place, we could go back to the failed execution and just ask it to resume at the failed step, with all of the other "state" from the execution intact.
<br/><br/>
Similarly, if we find a bug in our Extract or Transform lambdas themselves, it's super-nice to be able to monkey-patch them <i>right there and then</i> (remembering of course to update the source code in Git as well) - but it's only half as nice as it could be. If we could fix the broken lambda code and then re-run the execution that uncovered the bug, the cycle time would be <i>outstanding</i>.
</p>
<h5>Why can't you remember things for me?</h5>
<p>Possibly related to the first point is the disappointing discovery that Step Functions have no "memory" (or "context", if you prefer) where you can stash a variable for use later in the pipeline. That is, you might expect to be able to declare 3 steps like this:
<pre>
<i>Extract Lambda</i>
  Inputs:
    accountId
  Outputs:
    pathToExtractedDataBucket

<i>Transform Lambda</i>
  Inputs:
    pathToExtractedDataBucket
  Outputs:
    pathToTransformedDataBucket

<i>Load Lambda</i>
  Inputs:
    accountId
    pathToTransformedDataBucket
  Outputs:
    isSuccessful
</pre>
But unfortunately that <b>simply will not work</b> (at time of writing, November 2022). The above pipeline will <b>fail at runtime</b> because <tt>accountId</tt> has not been <i>passed through the Transform lambda</i> in order for the Load lambda to receive it!
</p>
<p>
For me, this really makes a bit of a mockery of the reusability and composability of lambdas within step functions. To fix the situation above, we have to make the Extract Lambda emit the <tt>accountId</tt>, and make the Transform Lambda <i>aware of</i> and <i>pass through</i> <tt>accountId</tt>, even though <b>it has no interest in, or need for, it!</b> That is:
<pre>
<i>Extract Lambda</i>
  Inputs:
    accountId
  Outputs:
    <b>accountId</b>
    pathToExtractedDataBucket

<i>Transform Lambda</i>
  Inputs:
    <b>accountId</b>
    pathToExtractedDataBucket
  Outputs:
    <b>accountId</b>
    pathToTransformedDataBucket

<i>Load Lambda</i>
  Inputs:
    accountId
    pathToTransformedDataBucket
  Outputs:
    isSuccessful
</pre>
That's really <b>not good</b> in my opinion, and makes for <b>a lot</b> of unwanted cluttering-up of otherwise reusable lambdas, dealing with arguments that they don't care about, just because some other entity needs them. Fingers crossed this will be rectified soon, as I'm sure I'm not the first person to have been very aggravated by this design.
</p>
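<p>
Here's roughly what that clutter looks like inside a Transform lambda; this is a sketch with invented names and a trivial "transform", not our production code or the full AWS Lambda handler signature:
</p>

```typescript
// Sketch of the workaround: the Transform step must accept and echo
// accountId purely so the downstream Load step can receive it.
interface TransformEvent {
  accountId: string; // not needed by the transform; passed through for Load
  pathToExtractedDataBucket: string;
}

function transformHandler(event: TransformEvent) {
  // The real transform work would happen here; we just derive an output path.
  const pathToTransformedDataBucket = event.pathToExtractedDataBucket.replace(
    "extracted",
    "transformed"
  );
  return {
    accountId: event.accountId, // the unwanted clutter, echoed verbatim
    pathToTransformedDataBucket,
  };
}
```

<p>
Every step between the producer and the consumer of a value has to grow this kind of pass-through plumbing, which is exactly the coupling that a shared execution context would avoid.
</p>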
<h3>Dispatchables Part 3; Make It So <i>(2022-10-30)</i></h3>
<p>
In the <a href="https://blog.themillhousegroup.com/2022/09/dispatchables-part-2-computer-enhance.html" target="_blank">previous part</a> of this series about implementing a "dispatchable" for solar-efficient charging of (AA and AAA) batteries, I'd worked out that with a combination of the Google Assistant's <a href="https://developers.google.com/assistant/smarthome/traits/energystorage" target="_blank">Energy Storage trait</a> (made visible through the openHAB <a href="https://www.openhab.org/docs/ecosystem/google-assistant/#charger" target="_blank">Google Assistant <tt>Charger</tt> integration</a>) and a small amount of local state, it looked like in theory, I could achieve my aim of a voice-commanded (and -queryable) system that would allow efficient charging for a precise amount of time. Let's now see if we can turn theory into practice.
</p>
<p>
First step is to copy all the configuration from the <a href="https://www.openhab.org/docs/ecosystem/google-assistant/#charger" target="_blank">openHAB <tt>Charger</tt> device type</a> into an <tt>items</tt> file:
<h5><tt>$OPENHAB_CONF/items/dispatchable.items</tt></h5>
<pre>
Group chargerGroup
{ ga="Charger" [ isRechargeable=true, unit="SECONDS" ] }
Switch chargingItem (chargerGroup)
{ ga="chargerCharging" }
Switch pluggedInItem (chargerGroup)
{ ga="chargerPluggedIn" }
Number capacityRemainSecondsItem (chargerGroup)
       { ga="chargerCapacityRemaining" }
Number capacityFullSecondsItem (chargerGroup)
       { ga="chargerCapacityUntilFull" }
</pre>
</p>
<p>
You'll note the only alterations I made were to change the <tt>unit</tt> to <tt>SECONDS</tt>, as that's the best fit for our timing system,
and a couple of renames for clarity. Here's what each of them represents:
<ul>
<li><b><tt>chargingItem</tt></b>: are the batteries being charged at this instant?</li>
<li><b><tt>pluggedInItem</tt></b>: has a human requested that batteries be charged?</li>
<li><b><tt>capacityRemainSecondsItem</tt></b>: how many seconds the batteries have been charging for</li>
<li><b><tt>capacityFullSecondsItem</tt></b>: how many seconds of charging remain</li>
</ul>
I <i>could</i> have used the "proper" <a href="https://blog.themillhousegroup.com/2022/09/dispatchables-part-2-computer-enhance.html#dead-timer" target="_blank">dead timer pattern</a> of saying "any non-zero <tt>capacityFullSecondsItem</tt> indicates intent" but given the <tt>Charger</tt> type requires all four variables to be implemented anyway, I went for a crisper definition. It also helps with the rule-writing as we'll shortly see.
</p>
<p>
If we look at the openHAB UI at this point we'll just have a pile of <tt>NULL</tt> values for all these items:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1o2bo4Eazk4tCbG34JcsSCyDWFVq8hpnM8cIomfqmARQpWT49krs4drBU6wyd0RNfPNplVkvQIMcd3n3PsvUBQ6MFxfEmA_PSYJw_8UBUAqsdVmgUI1kAnEPFWroC3RE-vCF650DxHEIQKebAxWu00JekNJIiwGxE5VRTkBFQm7OJC5xgnBlHX7zO/s1460/Screen%20Shot%202022-10-30%20at%203.13.19%20pm.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1204" data-original-width="1460" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1o2bo4Eazk4tCbG34JcsSCyDWFVq8hpnM8cIomfqmARQpWT49krs4drBU6wyd0RNfPNplVkvQIMcd3n3PsvUBQ6MFxfEmA_PSYJw_8UBUAqsdVmgUI1kAnEPFWroC3RE-vCF650DxHEIQKebAxWu00JekNJIiwGxE5VRTkBFQm7OJC5xgnBlHX7zO/s400/Screen%20Shot%202022-10-30%20at%203.13.19%20pm.png"/></a></div>
Now it's time to write some <tt>rule</tt>s that will get sensible values into them all. There are four in total, and I'll explain each one in turn rather than dumping a wall of code.
</p>
<br/>
<h3>Rule 1: Only charge if it's wanted, AND if we have power to spare</h3>
<p>
<h5><tt>$OPENHAB_CONF/rules/dispatchable.rules</tt></h5>
<pre>
rule "Make charging flag true if wanted and in power surplus"
when
Item currentPowerUsage changed
then
if (pluggedInItem.state == ON) {
if (currentPowerUsage.state > 0|W) {
logInfo("dispatchable", "[CPU] Non-zero power usage");
chargingItem.postUpdate(OFF);
} else {
logInfo("dispatchable", "[CPU] Zero power usage");
chargingItem.postUpdate(ON);
}
}
end
</pre>
This one looks pretty similar to <a href="https://blog.themillhousegroup.com/2022/08/dispatchables-with-openhab-and-powerpal.html#rule" target="_blank">the old naïve rule</a> we had way back in version 1.0.0, and it pretty-much is. We've just wrapped it with the "intent" check (<tt>pluggedInItem</tt>) to make sure we actually need to do something, and offloaded the hardware control elsewhere. Which brings us to...
</p>
<br/>
<h3>Rule 2: Make the hardware track the state of <tt>chargingItem</tt></h3>
<p>
<h5><tt>$OPENHAB_CONF/rules/dispatchable.rules</tt></h5>
<pre>
rule "Charge control toggled - drive hardware"
when
Item chargingItem changed to ON or
Item chargingItem changed to OFF
then
logInfo("dispatchable", "[HW] Charger: " + chargingItem.state);
SP2_Power.sendCommand(chargingItem.state.toString());
end
</pre>
The simplest rule of all, it's a little redundant but it does prevent hardware control "commands" getting mixed up with software state "updates".
</p>
<br/>
<h3>Rule 3: Allow charging to be requested and cancelled</h3>
<p>
<h5><tt>$OPENHAB_CONF/rules/dispatchable.rules</tt></h5>
<pre>
rule "Charge intent toggled (pluggedIn)"
when
Item pluggedInItem changed
then
if (pluggedInItem.state == ON) {
// Human has requested charging
    logInfo("dispatchable",
      "[PIN] charge desired for: " + capacityFullSecondsItem.state + "s");
capacityRemainSecondsItem.postUpdate(0);
// If possible, begin charging immediately:
if (currentPowerUsage.state > 0|W) {
logInfo("dispatchable", "[PIN] Awaiting power-neutrality");
} else {
logInfo("dispatchable", "[PIN] Beginning charging NOW");
chargingItem.postUpdate(ON);
}
} else {
logInfo("dispatchable", "[PIN] Cancelling charging");
// Clear out all state
capacityFullSecondsItem.postUpdate(0);
capacityRemainSecondsItem.postUpdate(0);
chargingItem.postUpdate(OFF);
}
end
</pre>
This rule is where things start to get a little trickier, but it's still fairly straightforward. The key thing is setting or resetting the three other variables to reflect the user's <i>intent</i>. <br/><br/>
If <b>charging is desired</b> we assume that the "how long for" variable has already been set correctly and zero the "how long have you been charging for" counter. Then, if the house is <i>already</i> power-neutral, we start. Otherwise we wait for conditions to be right (<i>Rule 1</i>). <br/>
If <b>charging has been cancelled</b> we can just clear out all our state. The hardware will turn off almost-immediately because of <i>Rule 2</i>.<br/>
</p>
<br/>
<h3>Rule 4: Keep timers up-to-date</h3>
<p>
<h5><tt>$OPENHAB_CONF/rules/dispatchable.rules</tt></h5>
<pre>
rule "Update charging timers"
when
Time cron "0 0/1 * * * ?"
then
if (pluggedInItem.state == ON) {
// Charging has been requested
if (chargingItem.state == ON) {
// We're currently charging
var secLeft = capacityFullSecondsItem.state as Number - 60;
capacityFullSecondsItem.postUpdate(secLeft);
logInfo("dispatchable", "[CRON] " + secLeft + "s left");
var inc = capacityRemainSecondsItem.state as Number + 60;
capacityRemainSecondsItem.postUpdate(inc);
// Check for end-charging condition:
if (secLeft <= 0) {
// Same as if user hit cancel:
logInfo("dispatchable", "[CRON] Reached target.");
pluggedInItem.postUpdate(OFF);
}
}
}
end
</pre>
This last rule runs once a minute, but only does anything if the user asked for charging AND we're actually doing so. If that's the case, we decrement the "time left" counter by 60 seconds, and conversely increase the "how long have they been charging for" counter by 60 seconds. Yes, I know this might not be strictly accurate, but it's good enough for my needs.<br/><br/>
The innermost <tt>if</tt> statement checks for the happy-path termination condition - we've hit zero time left! - and lowers the intent flag, causing <i>Rule 3</i> to fire, which in turn causes <i>Rule 2</i> to fire and turn off the hardware.</p>
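The interplay of the four rules can be sanity-checked outside openHAB. Here's a hypothetical Python simulation (my own sketch, not generated from the rules) of a tick-per-minute run on a sunny day:

```python
# Hypothetical Python simulation (not openHAB code) of the four rules
# interacting: intent (plugged_in), hardware state (charging), and the
# two second-counters, driven once per simulated minute.

def tick(state, surplus):
    """One cron tick; `surplus` is True when the house is power-neutral."""
    # Rule 1: only charge if charging is wanted AND there's power to spare
    if state["plugged_in"]:
        state["charging"] = surplus
    # Rule 4: update the timers while actually charging
    if state["plugged_in"] and state["charging"]:
        state["seconds_left"] -= 60
        state["seconds_charged"] += 60
        if state["seconds_left"] <= 0:
            # Happy-path termination: same effect as the user cancelling
            state["seconds_left"] = 0
            state["plugged_in"] = False
            state["charging"] = False  # Rule 2 would drive the hardware OFF
    return state

# A 3-minute charge request on a sunny (always-in-surplus) day:
state = {"plugged_in": True, "charging": False,
         "seconds_left": 180, "seconds_charged": 0}
for _ in range(5):
    tick(state, surplus=True)
```

After three simulated sunny minutes the target is reached, the intent flag is lowered, and the remaining ticks do nothing - exactly the cascade described above.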
<h3>UI Setup</h3>
<p>
This has ended up being quite the journey, and we haven't even got the Google integration going yet! The last thing for this installment is to knock up a quick control/status UI so that we can see that it actually works correctly. Here's what I've got in my openHAB "Overview" page:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji0-ZlnUGOD8-cscd4utOhGNsCmj5vGQvbUk__25sk7X1qpH-eCQUWtcDgchDyFQMmjcSwjB11mz3kqURD-AWtnIXdRj1WY2ihdMThKr54TbvwICMYfDc3I37W-KLKixNdBVlHDE7iICD5YHnnQs1SYihl6PZkxHut7t1Dd9sbheVCehfil6lNljbN/s908/Screen%20Shot%202022-11-04%20at%205.08.21%20pm.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="320" data-original-width="908" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEji0-ZlnUGOD8-cscd4utOhGNsCmj5vGQvbUk__25sk7X1qpH-eCQUWtcDgchDyFQMmjcSwjB11mz3kqURD-AWtnIXdRj1WY2ihdMThKr54TbvwICMYfDc3I37W-KLKixNdBVlHDE7iICD5YHnnQs1SYihl6PZkxHut7t1Dd9sbheVCehfil6lNljbN/s400/Screen%20Shot%202022-11-04%20at%205.08.21%20pm.png"/></a></div>
The slider is wired to <tt>capacityFullSecondsItem</tt>, with a range of <tt>0 - 21600</tt> (6 hours) in 60-second increments, and 6 "steps" marked on the slider corresponding to integer numbers of hours for convenience.
The toggle is wired to <tt>pluggedInItem</tt>.
When I want to charge some batteries, I pull the slider to my desired charge time and flip the switch. Here's a typical example of what I get in the logs if I do this during a sunny day:
<pre>
[PIN] charge desired for: 420s
[PIN] Beginning charging NOW
[HW] Charger: ON
[CRON] 360s left
...
[CRON] 120s left
[CRON] 60s left
[CRON] 0s left
[CRON] Reached target.
[PIN] Cancelling charging
[HW] Charger: OFF
</pre>
</p>
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-16010644351008167232022-09-17T15:46:00.005+10:002022-11-04T15:53:57.713+11:00Dispatchables Part 2; Computer, enhance!<p>
As usual with software, <a href="https://blog.themillhousegroup.com/2022/08/dispatchables-with-openhab-and-powerpal.html" target="_blank">Dispatchables v1.0.0</a> wasn't ideal. In fact, it didn't really capture the "Dispatchable" idea <strong>at all</strong>. What if I don't have any batteries that need charging? Wouldn't it be better to only enable the charger if there was actually charging work to be done? And for how long? We need a way to specify <b>intent</b>. <br/><br/>Here's what I'd like to be able to tell the charging system:
<ul>
<li>I have flat batteries in the charger</li>
<li>I want them to be charged for a total of <i>{x}</i> hours</li>
</ul>
</p>
<p>
To me, that looks like a perfect job for a voice-powered Google Assistant integration. Let's go!
</p>
<h3>Googlification phase 1</h3>
<p>
First, let's equip our Broadlink smart power socket <tt>item</tt> with the required <tt>ga</tt> attribute so we can control it via the <a href="https://www.openhab.org/docs/ecosystem/google-assistant/" target="_blank">openHAB Google Assistant Action</a>.
<h5><tt>$OPENHAB_CONF/items/powerpoints.items</tt>:</h5>
<pre class="prettyprint">
Switch SP2_Power "Battery Charger Power" {
channel="broadlink:sp2:34-ea-34-84-86-d1:powerOn",
ga="Switch"
}
</pre>
</p>
<p>
If I go through the setup steps in the Google Assistant app on my phone, I can now see "Battery Charger Power" as a controllable device. And sure enough, I can say <strong>"Hey Google, turn on the battery charger"</strong> and it all works. Great!
</p>
<p>
Now, we need to add something to record the <strong>intent</strong> to perform battery-charging when solar conditions allow, and something else that will track the number of minutes the charger has been on for, since the request was made. Note that this may well be over multiple distinct periods, for example if I ask for 6 hours of charging but there's only one hour of quality daylight left in the day, I would expect the "dispatch" to be resumed the next day once conditions were favourable again. Once we've hit the desired amount of charging, the charger should be shut off and the "intent" marker reset to <tt>OFF</tt>. Hmmm... 🤔
</p>
<a name="dead-timer"><h3>Less state === Better state</h3></a>
<p>
Well, my first optimisation on the way to solving this is to <a href="https://blog.themillhousegroup.com/2022/03/things-people-have-trouble-with-in.html" target="_blank">streamline the state</a>. I absolutely <strong>do not need</strong> to hold multiple distinct but highly-related bits of information:
<ul>
<li>Intent to charge</li>
<li>Desired charge duration</li>
<li>Amount of time remaining in this dispatch</li>
</ul>
... that just looks like an OOP beginner's first try at a domain object. Huh. Remember <a href="https://en.wikipedia.org/wiki/JavaBeans" target="_blank">Java Beans</a>? Ugh.
</p>
<p>
We can actually do it all with one variable, the <i>Dead Timer</i> "pattern" (if you can call it such a thing) I learnt from an embedded developer (in C) almost 20 years ago:
<pre class="prettyprint c">
unsigned int warning_led_timer = 0;

/* Inside the main loop, executed once per second: */
if (warning_led_timer > 0) {
    warning_led_timer--;
    /* Keep the LED lit, or turn it off once the timer expires */
    enable_led(WARNING_LED, warning_led_timer > 0);
}

/* ...
 * Somewhere else in the code that needs to show
 * the warning LED for 3 seconds:
 */
warning_led_timer = 3;
</pre>
It encapsulates:
<ul>
<li><strong>intent</strong> - anyone setting the timer to a non-zero value</li>
<li><strong>desired duration</strong> - the initial non-zero value</li>
<li><strong>duration remaining</strong> - whatever value the variable is currently holding; <strong>and</strong></li>
<li><strong>termination</strong> - when the variable hits zero</li>
</ul>
Funny that a single well-utilised variable in C (of all things) can actually achieve one of the stated goals of OO (encapsulation), isn't it? It all depends on your point of view, I guess. Okay - let's step back a little and see what we can do here.
</p>
<h3>Objectives</h3>
<p>
What I'd like to be able to do is have this conversation with the Google Assistant:
<br/>
<br/>
<strong>Hey Google, charge the batteries for five hours</strong><br/>
<i>"Okay, I'll charge the batteries for five hours"</i>
<br/>
<br/>... with all the underlying "dispatchable" stuff I've talked about being done transparently. And for bonus points:
<br/>
<br/>
<strong>Hey Google, how much charge time remaining?</strong>
<br/>
<i>"There are three hours and 14 minutes remaining"</i>
</p>
<p>
So as it turns out, the Google Assistant has an <a href="https://developers.google.com/assistant/smarthome/traits/energystorage" target="_blank">Energy Storage</a> trait which should allow the above voice commands (or similar) to work, as it can be mapped into the <a href="https://www.openhab.org/docs/ecosystem/google-assistant/#charger" target="_blank">openHAB <tt>Charger</tt> Device Type</a>. It's all starting to come together - I don't have a "smart charger" (i.e. for an electric vehicle) but I think I can simulate having one using my "dead timer"!
</p>
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-83827653953656479152022-08-28T21:37:00.011+10:002022-11-04T16:00:46.180+11:00"Dispatchables" with OpenHAB and PowerPal <p>
I read a <a href="https://www.energymatters.com.au/renewable-news/dispatchable-renewable-energy/" target="_blank">while back</a> about the concept of "<a href="https://en.wikipedia.org/wiki/Dispatchable_generation" target="_blank">dispatchable</a>" energy sources - namely, ones that can be brought on- or off-stream at virtually no notice, at a desired output level. As an enthusiastic solar-power owner/operator, the idea of tuning my energy <i>consumption</i> to also be dispatchable, suited to the output of my rooftop solar cells, makes a lot of sense.
</p>
<p>
My first tiny exploration into this field will use OpenHAB to automate "dispatch" of a non-time-critical task: recharging some batteries, to a time that makes best use of the "free" solar energy coming from my roof.
</p>
<p>
Just to be clear, I'm referring to charging domestic AA and AAA batteries here; I'm not trying to run a <a href="https://www.tesla.com/en_AU/powerwall">PowerWall</a>!</p>
<h5>OMG PPRO REST API FTW</h5>
<p>
To get the necessary insight into whether my house is running "in surplus" power, I'm using my <a href="https://blog.themillhousegroup.com/2022/02/new-toy.html" target="_blank">PowerPal PRO</a> which offers a simple <a href="https://readings.powerpal.net/documentation" target="_blank">RESTful API</a>.
If you send off a <tt>GET</tt> with suitable credentials to
<pre class="prettyprint">https://readings.powerpal.net/api/v1/device/{{SERIAL_NUMBER}}</pre>
you get something like:
</p>
<pre class="prettyprint json">
{
"serial_number": "000abcde",
"total_meter_reading_count": 443693,
"pruned_meter_reading_count": 0,
"total_watt_hours": 4246285,
"total_cost": 1380.9539,
"first_reading_timestamp": 1627948800,
"last_reading_timestamp": 1659495300,
"last_reading_watt_hours": 0,
"last_reading_cost": 0.00062791666,
"available_days": 364,
"first_archived_date": "2021-04-13",
"last_archived_date": "2022-08-02"
}
</pre>
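Before wiring this into openHAB, it's worth sanity-checking the call outside it. Here's a hypothetical standalone sketch using only Python's standard library (<tt>SERIAL</tt> and <tt>API_KEY</tt> are placeholders for your own values, not real credentials):

```python
import json
import urllib.request

SERIAL = "000abcde"           # placeholder: your device's serial number
API_KEY = "MyPowerPalAPIKey"  # placeholder: from Guidance -> Generate an API Key

def parse_last_reading(payload):
    """Pull out the field the openHAB channel extracts via JSONPATH."""
    return json.loads(payload)["last_reading_watt_hours"]

def fetch_last_reading():
    """GET the device endpoint with the same headers the binding will send."""
    req = urllib.request.Request(
        f"https://readings.powerpal.net/api/v1/device/{SERIAL}",
        headers={"Authorization": API_KEY, "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return parse_last_reading(resp.read())
```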
<p>
It's pretty straightforward to translate that into an openHAB <tt>Thing</tt> definition using the <a href="https://www.openhab.org/addons/bindings/http/" target="_blank">HTTP Binding</a> that will get us the current watt-hours reading every 60 seconds (which is how often the device phones home).
<h5><tt>$OPENHAB_CONF/things/powerpal.thing</tt>:</h5>
<pre>
Thing http:url:powerpal "PowerPal" [
baseURL="https://readings.powerpal.net",
headers="Authorization=MyPowerPalAPIKey",
"Accept=application/json",
timeout=2000,
bufferSize=1024,
refresh=60] {
Channels:
Type number : powerUsage "Newest Power Usage"
[ stateExtension="/api/v1/device/000abcde",
stateTransformation="JSONPATH:$.last_reading_watt_hours",
mode="READONLY" ]
}
</pre>
You can get <b><i><tt>MyPowerPalAPIKey</tt></i></b> as used above, by opening the PowerPal mobile app and going to <i>Guidance -> Generate an API Key</i>.
<br/><br/>
That's it for the "physical" (<tt>Thing</tt>) layer. Let's move up the stack and define an <tt>Item</tt> that we can work with in a Rule.
</p>
<h5><tt>$OPENHAB_CONF/items/powerpal.items</tt>:</h5>
<pre>
Number:Power currentPowerUsage "Current Power Usage [%d W]"
{channel="http:url:powerpal:powerUsage"}
</pre>
<p>
... and if you're me, nothing will happen, and you will curse openHAB and its constant changes.
Make sure you've actually got the HTTP Binding installed, or it will all just silently fail. I wasn't able to see the list of official bindings because of some weird internal issue, so I had to do a full <tt>sudo apt-get update && sudo apt-get upgrade openhab</tt> before I could get it.
</p>
<p>
Then, fun times ensued because the PowerPal API uses a slightly-strange way of providing authentication, which didn't fit very well with how the HTTP binding wants to do it. I had to go spelunking through the <a href="https://github.com/openhab/openhab-addons/blob/main/bundles/org.openhab.binding.http/src/main/java/org/openhab/binding/http/internal/HttpThingHandler.java" target="_blank">binding's source code</a> to figure out how to specify the <tt>Authorization</tt> header myself.
</p>
<p>
Now we can finally get to the "home automation bus" bit of openHAB ... we define a rule that's watching for power usage changes, and triggers my Broadlink SP2 smart power switch on or off depending on whether we're net-zero.
</p>
<a name="rule"><h5><tt>$OPENHAB_CONF/rules/dispatchable.rules</tt>:</h5></a>
<pre>
rule "Charge batteries if in power surplus"
when
    Item currentPowerUsage changed
then
    logInfo("dispatchable", "Power: " + currentPowerUsage.state);
    if (SP2_Power.state == ON && currentPowerUsage.state > 0|W) {
        logInfo("dispatchable", "Charger -> OFF");
        SP2_Power.sendCommand(OFF);
    }
    if (SP2_Power.state == OFF && currentPowerUsage.state == 0|W) {
        logInfo("dispatchable", "Charger -> ON");
        SP2_Power.sendCommand(ON);
    }
end
</pre>
<p>
And we're all done!
<br/><br/>
What's that weird <tt>|W</tt> stuff? That's an inline conversion to a <tt>Number:Power</tt> quantity, so that comparisons can be performed - a necessary, if slightly awkward, aspect of openHAB's relatively-new "<a href="https://www.openhab.org/docs/concepts/units-of-measurement.html" target="_blank">Units Of Measurement</a>" feature.
</p>
<p>
What does it look like? Here's the logs from just after 9am:
<pre>
09:06:37 [dispatchable] - Power: 3 W
09:07:37 [dispatchable] - Power: 2 W
09:08:37 [dispatchable] - Power: 3 W
09:09:37 [dispatchable] - Power: 2 W
09:12:37 [dispatchable] - Power: 3 W
09:13:37 [dispatchable] - Power: 2 W
09:16:37 [dispatchable] - Power: 1 W
09:18:37 [dispatchable] - Power: 0 W
09:18:37 [dispatchable] - Charger -> ON
</pre>
</p>
<p>
So the query to PowerPal is obviously running on the 37th second of each minute. There are "missing" entries because we're only logging anything when the power figure has <b><tt>changed</tt></b>. You can see the panels gradually creating more power as the sun's incident angle/power improves, until finally at 9:18, we hit power neutrality and the charger is turned on. Not bad.
</p>
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-77484443279998191152022-07-05T13:22:00.002+10:002022-07-05T13:27:35.558+10:00The bizarre world of cheap iPhone accessories<p>
Recently I purchased a couple of extremely-cheap <a href="https://www.ebay.com.au/itm/313656700091" target="_blank">Lightning-to-3.5mm headphone socket adaptors</a> on eBay, primarily so I can use a pair of quality over-ear headphones rather than the in-ear Apple buds which I find uncomfortable.
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsrbzJ6srXXM26kfWAAg9PQkiOhBb3uRL2tnx1KlROkEL4OOaEcFGXU5nS5kjl2NsGS9ULsA8r3Q4o2kFmFdxLUYfKxldLbzci9GDZu2_JeiLwqZqB0EvMVj6i_d3u50eJE8kpy1hfRqBRQ45sohriXGXu_TzIYy_Zp_7-JGtnKvl49KUYwP4XT5D6/s500/lightning-to-35mm.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="152" data-original-width="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsrbzJ6srXXM26kfWAAg9PQkiOhBb3uRL2tnx1KlROkEL4OOaEcFGXU5nS5kjl2NsGS9ULsA8r3Q4o2kFmFdxLUYfKxldLbzci9GDZu2_JeiLwqZqB0EvMVj6i_d3u50eJE8kpy1hfRqBRQ45sohriXGXu_TzIYy_Zp_7-JGtnKvl49KUYwP4XT5D6/s400/lightning-to-35mm.jpg"/></a></div>
<p>
These adaptors come in at under AUD$5 <i>including shipping</i>, putting them at one-third the cost of the <a href="https://www.apple.com/au/shop/product/MMX62FE/A/lightning-to-35mm-headphone-jack-adapter" target="_blank">genuine Apple accessory</a>. They arrived within 2 days and I was all set to put them to work and feel superior at my money-saving (smug-and-play?), except ... <b>they didn't work</b>.
</p>
<p>
The adaptor would chirpily announce <b><i>"Power on!"</i></b> in my headphones, but then there was no further indication that the iPhone had "seen" them at all. And this was the case for <b>both adaptors I'd purchased</b>.
</p>
<p>
I was all set to fire off an angry complaint to the eBay seller and get a refund, when I noticed something ... odd ... on the listing:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMULbxIvdlcqYKpwSOsW_VHvZD4-2ST7_goj0lOgq0-F4CcBS-b_lyscksoGGt_ThEiEZedBhkmDSS-NmX71rEg0Ms_DY9QoPpgYGGWSK_HDR5UmmFLCS-GgXP5dCDmMTmffWYKij2ZcfLc6p--aOVpOwSuha47ofCaxrQ6gp9OWCU8A7PbrbQyk-q/s1434/Screen%20Shot%202022-07-05%20at%2012.58.11%20pm.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="468" data-original-width="1434" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMULbxIvdlcqYKpwSOsW_VHvZD4-2ST7_goj0lOgq0-F4CcBS-b_lyscksoGGt_ThEiEZedBhkmDSS-NmX71rEg0Ms_DY9QoPpgYGGWSK_HDR5UmmFLCS-GgXP5dCDmMTmffWYKij2ZcfLc6p--aOVpOwSuha47ofCaxrQ6gp9OWCU8A7PbrbQyk-q/s400/Screen%20Shot%202022-07-05%20at%2012.58.11%20pm.png"/></a></div>
<p>
Why would these Lightning accessories "include Bluetooth support"? Just for fun, I turned on my iPhone's Bluetooth (which I usually leave turned off for battery-saving and anti-h@X0r reasons)...
</p>
<p><b><i>"Connected!"</i></b> says the chirpy voice.</p>
<h5>OH</h5>
<h5>MY</h5>
<h5>GOD</h5>
<p>So it turns out that these cheap cables are cheap because they don't bother getting certified as "Made for iPhone" by Apple. A "compliant" Lightning device must have some kind of ID in its handshake with the phone, which the <a href="https://appletoolbox.com/ios1031-cable-problem-iphone/" target="_blank">phone checks for legitimacy</a>.
</p>
<p>
So instead, the very clever, very sneaky makers of these cables just use the DC power provided on <a href="https://en.wikipedia.org/wiki/Lightning_%28connector%29" target="_blank">Lightning pins 1 and 5</a> to drive a Bluetooth audio interface chip, which doesn't have the same "Made for iPhone" hurdles. The phone doesn't even realise there's a device hanging off there, so there's no way it can check if it's compliant!
</p>
<p>
Full marks for ingenuity, but I think I'm going to go to an Apple Store and get the real deal. Audio over Bluetooth is quality-compromised, this solution uses much more power, and I prefer to leave my Bluetooth OFF for the aforementioned reasons. Still - I won't need to return them to the eBay seller - they <i>do</i> work, and I'll keep them around for backup purposes.
</p>
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-17098150198916149872022-06-12T13:02:00.008+10:002023-11-04T13:40:41.771+11:00Introducing ... Cardle!<strong>UPDATE [July 2023] - I've let the <tt><i>cardle.xyz</i></tt> domain expire after a year, but you can still play the game over at <a href="https://cardle.themillhousegroup.com">cardle.themillhousegroup.com</a></strong>
<br/>
<p>Yes, it's yet-another <a href="https://www.nytimes.com/games/wordle/index.html" target="_blank">Wordle</a> clone, this time about cars:</p>
<h3>
<a href="https://www.cardle.xyz" target="_blank">https://www.cardle.xyz</a>
</h3>
<p>
Like so many other fans of Wordle, I'd been wanting to try doing a nice self-contained client-side game like this, and after trying the Australian Rules player version of Wordle, <a href="https://playworpel.com/" target="_blank">Worpel</a> (named after a player), I saw a pattern that I could use. Worpel uses "attributes" of an AFL player like their height, playing position, and team, and uses the Wordle "yellow tile" convention to show if you're close in a certain attribute. For example, if the team is not correct, but it <i>is</i> from the correct Australian state. Or if the player's height is within 3 centimetres of the target player's.
</p>
<p>
After a bit of head-scratching I came up with the 5 categories that I figured would be challenging, but with enough possibilities for the "yellow tile" to be helpful - there's no point having a category that can only be right (green tile) or wrong (black tile). The least useful is probably the "model name" category, but of course that's vital to the game, and having played literally hundreds of times now, it has occasionally proved useful to know that a certain character appears in the target car's name (cars like the <b>Mazda <i>6</i></b> are hugely helpful here!)
</p>
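The per-attribute scoring can be sketched in a few lines. This is a hypothetical reconstruction (not the actual Cardle source) of the green/yellow/black convention described above:

```python
# Hypothetical scoring sketch, not the real Cardle implementation.

def score_attribute(guess, target, near=None):
    """'green' for an exact match, 'yellow' when `near` says the guess
    is close (e.g. right state, or height within tolerance), else 'black'."""
    if guess == target:
        return "green"
    if near is not None and near(guess, target):
        return "yellow"
    return "black"

# A numeric attribute where "within 3" of the target counts as close,
# like Worpel's height rule mentioned above:
def within_3(a, b):
    return abs(a - b) <= 3
```

Each guessed car then just gets one tile per attribute, with an appropriate closeness predicate for each category.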
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_s-gk140aa9s4ktHfcQlAeKn6Vmr2Cdh46K3sA66VKeME-MfoO6reLGwHYjG5VxqEsIGNk5Di5ppu7TbEuYdsXmbh7gcC6CpnRZFc3scuBXElc-gka-vQK8Jo3duvpDXhE4DD9rg1nvL6Qtkrf76p1-7k-faPY87M7Oet2Rae2l-efZS2duCNCEXz/s1160/Screen%20Shot%202022-06-14%20at%201.42.45%20pm.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="406" data-original-width="1160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_s-gk140aa9s4ktHfcQlAeKn6Vmr2Cdh46K3sA66VKeME-MfoO6reLGwHYjG5VxqEsIGNk5Di5ppu7TbEuYdsXmbh7gcC6CpnRZFc3scuBXElc-gka-vQK8Jo3duvpDXhE4DD9rg1nvL6Qtkrf76p1-7k-faPY87M7Oet2Rae2l-efZS2duCNCEXz/s400/Screen%20Shot%202022-06-14%20at%201.42.45%20pm.png"/></a></div>
<p>
It has been a while since I last did a publicly-visible web side-project, and I wanted to see what the landscape was like in 2022. The last time I published a dynamic website it was on the <a href="https://www.heroku.com/" target="_blank">Heroku</a> platform, which is still pretty good, but I think there are better options these days. After a bit of a look around I settled on <a href="https://www.netlify.com/" target="_blank">Netlify</a>, and so far they've delivered admirably - fast, easy-to-configure and free!
</p>
<p>
There has been some criticism bandied about for <a href="https://create-react-app.dev/docs/getting-started/" target="_blank">create-react-app</a> recently, saying it's a bad starting point, but for me it was a no-brainer. I figure not having to know how to optimally configure webpack just leaves me more brain-space to devote to making the game a bit better. So without any further ado, I'd like to showcase some of my favourite bits of the app.
</p>
<h5>Tile reveal animation</h5>
<p>Wordle is outstanding in its subtle but highly-effective animations that give it a really polished feel, but I <i>really</i> didn't want to have to use a Javascript animation library to get a few slick-looking animations. The few libraries I've tried in the past have been quite heavy in both bundle-size and intrusiveness into the code. I had a feeling I could get what I wanted with a suitable CSS <tt>keyframes</tt> animation of a couple of attributes, and after some experimenting, I was happy with this:
<pre class="prettyprint css">
@keyframes fade-in {
from {
opacity: 0;
transform:scale(0.5)
}
50% {
transform:scale(1.2);
opacity: 0.5;
}
to {
opacity: 1;
transform:scale(1.0)
}
}
</pre>
</p>
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='400' height='322' src='https://www.blogger.com/video.g?token=AD6v5dzHDsdqDzjkFdVyW1tMCdl_0PeKsxbnP478JlTR7bP7eEcFVJseWmk4uth2A30GJleymPb_zOxZwrDNBhkVAQ' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<p>
I really like the "over-bulge" effect before it settles back down to the correct size. The pure-CSS solution for a gradual left-to-right "reveal" once a guess has been entered worked out even better I think. Certainly a lot less fiddly than doing it in Javascript:
<pre class="prettyprint css">
.BoxRow :nth-child(1) {
animation: fade-in 200ms;
}
.BoxRow :nth-child(2) {
animation: fade-in 400ms;
}
.BoxRow :nth-child(3) {
animation: fade-in 600ms;
}
.BoxRow :nth-child(4) {
animation: fade-in 800ms;
}
.BoxRow :nth-child(5) {
animation: fade-in 1000ms;
}
</pre>
Those different times are the amount of time the animation should take to run - giving it the "sweeping" effect I was after:
</p>
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='400' height='322' src='https://www.blogger.com/video.g?token=AD6v5dz7KK7k3R0M6xxTWPA-KWqvcF78Dvn4C362KIcWHhpQToLFb7ozJ2DziMkf4NQbNl-4tTWrmtnZJK8cqsC1dg' class='b-hbp-video b-uploaded' frameborder='0'></iframe></div>
<h5>Mobile-first</h5>
<p>As developers we get far too used to working on our own, fully up-to-date, <b>desktop</b>, browser of choice. But a game like this is far more likely to be played on a mobile device. So I made a concerted effort to test as I went both with my desktop Chrome browser simulating various mobile screens <b>and</b> on my actual iPhone 8. Using an actual device threw up a number of subtle issues that the desktop simulation couldn't possibly hope to replicate (and nor should it try) like the <a href="https://stackoverflow.com/a/60420392/649048" target="_blank">extraordinarily quirky stuff you have to do</a> to share to the clipboard on iOS and subtleties of font sizing. It was worth it when my beta-testing crew complimented me on how well it looked and played on their phones.</p>
<h5 id="performance">Performance</h5>
<p>The site gets <b>98</b> for mobile performance (on slow 4G) and <b>100</b> for desktop from <a href="https://pagespeed.web.dev/report?url=https%3A%2F%2Fcardle.xyz%2F&form_factor=mobile" target="_blank">PageSpeed</a>, which I'm pretty chuffed about. I spent a <b>lot</b> of time messing around with <a href="https://fonts.google.com/" target="_blank">Google Fonts</a> and then <a href="https://github.com/fontsource/fontsource" target="_blank">FontSource</a> trying to get a custom sans-serif font to be performant, before just giving up and going with "whatever you've got", i.e.:
<pre class="prettyprint css">
font-family: 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
sans-serif;
</pre>
... sometimes it's just not worth the fight.
</p>
<p>The other "trick" I did was relocating a ton of CSS from the various <tt>ComponentName.css</tt> files in <tt>src</tt> right into a <tt><style></tt> block right in the <tt>head</tt> of <tt>index.html</tt>. This allows the browser to get busy rendering the header (which I also pulled out of React-land), bringing the "first contentful paint" time down appreciably. Obviously this is not something you'd want to be doing while you're in "active development" mode, but it doesn't have to be a nightmare - for example, in this project I made good use of <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties" target="_blank">CSS Variables</a> for the first time, and I define a whole bunch of them in that <tt>style</tt> block and then reference them in <tt>ComponentName.css</tt> to ensure consistency.
</p>
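<p>To make that concrete, here's a minimal sketch of the pattern (the property names are invented for illustration, not the actual Cardle stylesheet): a few custom properties declared once in the inlined <tt>style</tt> block, then referenced from any component stylesheet:</p>

```css
/* Inlined in the <style> block in the head of index.html */
:root {
  --tile-size: 3.5rem;
  --accent-color: #2e7d32;
}

/* In ComponentName.css - reference the variables, never redefine them */
.tile {
  width: var(--tile-size);
  height: var(--tile-size);
  border: 2px solid var(--accent-color);
}
```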
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-23351255199031437202022-05-29T15:03:00.002+10:002022-06-01T15:19:49.883+10:00Automating heating vents with openHAB, esp8266 and LEGO - Part 3; Firmware intro<p>
Continuing to work up the stack from my LEGO physical vent manipulator <a href="http://blog.themillhousegroup.com/2021/09/automating-heating-vents-with-openhab.html" target="_blank">(V1)</a>, <a href="https://blog.themillhousegroup.com/2022/04/automating-heating-vents-with-openhab.html" target="_blank">(V2)</a>, I decided to do something new for the embedded control software part of this solution and employ an Espressif ESP8266-based chipset with an accompanying L293D H-bridge daughterboard, primarily because they are just ridiculously cheap.
</p>
<p>It took a little bit of finessing to find out exactly what to search eBay for (try <tt>ESP-12E + L293D</tt>) but listings like <a href="https://www.ebay.com.au/itm/324681555875" target="_blank">this, for AUD$12.55 <i>including postage</i></a> are simply incredible value. That's an 80MHz processor, motor driver board, USB cable and motor cable, all for less than I probably paid for the serial cable I would have used for my primitive robotics exercises back in university. Absolutely extraordinary.
</p>
<p>
As this setup uses the "NodeMCU" framework, it can be developed in the <a href="https://www.arduino.cc/en/software" target="_blank">Arduino IDE</a> that I've used previously for Arduino experiments, in Arduino's C++-esque language that is simultaneously familiar, but also not ...
</p>
<div class="separator" style="clear: both;"><a href="https://1.bp.blogspot.com/-POoATsrU_lo/YU6blAl9fyI/AAAAAAAAFaw/Qb1HUPZHMYk_x38j6j0QoiMV_NnIcvvqQCLcBGAsYHQ/s808/IMG_5442.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="808" data-original-width="750" src="https://1.bp.blogspot.com/-POoATsrU_lo/YU6blAl9fyI/AAAAAAAAFaw/Qb1HUPZHMYk_x38j6j0QoiMV_NnIcvvqQCLcBGAsYHQ/s400/IMG_5442.jpg"/></a></div>
<p>
But I digress. The real trick with this board package is deducing, from the non-existent documentation, exactly what you have and how it's meant to be used. For this particular combination, it's a <b>"Node MCU 1.0 (ESP-12E Module)"</b> accessed via the CP210x "USB to UART" port driver available <a href="https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers" target="_blank">here</a>. Once you've got the board installed, you can browse example code that should work perfectly for your hardware under <b>File -> Examples -> Examples for NodeMCU 1.0</b>. There's a generous selection, ranging from "Blink" (which, as the "Hello World" of hardware, should <i>always</i> be the first sketch your hardware runs) all the way to "ESP8266WebServer" - which unsurprisingly ended up being the perfect jumping-off point for my own firmware.
</p>
<p>
After a frustrating and time-consuming detour getting the device to join my WiFi network (it transpires that the "Scan" sketch can <i>find</i> the SSIDs of 802.11b/g/n networks, but to actually <i>connect</i>, it's far better to be on an 802.11n-only network), it was time to drive some output. That meant more Googling to determine exactly how the L293D "Motor Driver Expansion Board" connects to the ESP's GPIO, and what that means in terms of software configuration.
</p>
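<p>For anyone attempting the same thing, the connection code itself is standard ESP8266 Arduino boilerplate - something like the following sketch (the SSID and password are placeholders, and this is illustrative boilerplate rather than my exact firmware):</p>

```cpp
#include <ESP8266WiFi.h>

const char* ssid = "MyNetwork";   // placeholder - your 802.11n-only SSID
const char* password = "secret";  // placeholder

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);  // station mode only - don't also advertise an access point
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println();
  Serial.println(WiFi.localIP());  // handy for finding your web server later
}

void loop() {}
```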
<p>Eventually I cobbled together the necessary knowledge from <a href="https://temperosystems.com.au/products/l293d-motor-driver-expansion-shield-esp-12e-esp8266-for-nodemcu/" target="_blank">this board datasheet</a> which talks about <tt>D1</tt>-<tt>D4</tt>, and the <a href="https://github.com/esp8266/Arduino/blob/master/doc/boards.rst#pin-mapping" target="_blank">Arduino documentation for NodeMCU</a> which indicates that these symbols should be magically available in my code. Then I took a tour through the <a href="https://github.com/esp8266/Arduino/blob/master/libraries/ESP8266WebServer/examples/PathArgServer/PathArgServer.ino" target="_blank">ESP8266WebServer example code</a> to find out what handler methods I had available. At last, I was ready to put it all together - as you'll see in the next blog post.
</p>
<p>
But before then, <b>a cautionary tale</b> - I fried both the motor shield board <b>and</b> an ESP board while developing this, and I suspect it was because I couldn't resist the temptation to run the whole thing off a single power supply. You can do this by moving a jumper to bridge the VIN (for the chip) pin to the VM (for the motor) pin, but I suspect the resulting exposure to back-EMF and all that grubby analogue stuff is good for neither the ESP chip nor the L293D motor driver on the shield board. You've been warned.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRG9nltMClPcQjj24PDUpwvDl6dbOBG5dn6JJ47-2bBCdyB84y0qtZbbwnX21C6LGdcGzkJBX5tjn7mLqOGWTytVib93QuNJyFxx-BJEq173nBEGTGZxIBGuzGxsPeFUozROKhwrPiM14g46sUkSDUxRRcz787IxgC7OokXroqESAtV2zG_guNT4Kl/s2575/power-in.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="2575" data-original-width="1852" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRG9nltMClPcQjj24PDUpwvDl6dbOBG5dn6JJ47-2bBCdyB84y0qtZbbwnX21C6LGdcGzkJBX5tjn7mLqOGWTytVib93QuNJyFxx-BJEq173nBEGTGZxIBGuzGxsPeFUozROKhwrPiM14g46sUkSDUxRRcz787IxgC7OokXroqESAtV2zG_guNT4Kl/s400/power-in.jpg"/></a></div>
<caption><i>Use 2 separate power supplies here, or just one, but beware ...</i></caption>
<hr>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1djP9poB2f5UDu1t64rAYvUbfZkUnZ0sFfNEhy8K5qAAR5P3iERr83W0OaSirP5KszxnC21kHp5XNJ6McXhn_n9LODuaaAQoyes9SGIk6lvzFA26_ORdbIQ7FgPT3fuFvwB7CgSXkZpb4G7yfvS542gdqOiQKaU1AQOkEfsOukccJZpH0FwXFoAKq/s2625/bridge-pins.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1463" data-original-width="2625" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1djP9poB2f5UDu1t64rAYvUbfZkUnZ0sFfNEhy8K5qAAR5P3iERr83W0OaSirP5KszxnC21kHp5XNJ6McXhn_n9LODuaaAQoyes9SGIk6lvzFA26_ORdbIQ7FgPT3fuFvwB7CgSXkZpb4G7yfvS542gdqOiQKaU1AQOkEfsOukccJZpH0FwXFoAKq/s400/bridge-pins.jpg"/></a></div>
<caption><i>Putting the jumper across here allows using just one of the Vx/GND input pairs ...</i></caption>
<hr>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghjfP-oXGyX6Uci1CLXbzAX_3cG9Wl0eOWLi2nrFgasmnwHbv89Cm9L351QBOslnQKOqLTwxyz0Ak1q99mzZiZnQxtnfZ2LkvqPQenYhB2UhhVZeWYeeY36lh6xKtYpeDir2MxvhGdiWr2VnBwIP3NoHCcxkhQ5-HCo4UUWutnomEPi7eZxe0HIDPo/s1987/burnt-l293d.heic" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="1987" data-original-width="1518" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghjfP-oXGyX6Uci1CLXbzAX_3cG9Wl0eOWLi2nrFgasmnwHbv89Cm9L351QBOslnQKOqLTwxyz0Ak1q99mzZiZnQxtnfZ2LkvqPQenYhB2UhhVZeWYeeY36lh6xKtYpeDir2MxvhGdiWr2VnBwIP3NoHCcxkhQ5-HCo4UUWutnomEPi7eZxe0HIDPo/s400/burnt-l293d.heic"/></a></div>
<caption><i>The L293D chip looking worse for wear, having overheated and/or died</i></caption>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjos9GgQWcPm1iWXHP2KCc5LVpyV0JAHMnp4pCY47wJBK90T6xs440FkfzuEHO9SQUs5I9yvOpNy0eFUNOcghHmYj7natEGP29eCyy4FyzA83kmszcFT9sIBQel5DM20xfj98buaQu2gbp4qgJXS9r8OpGpo5f4MmTttWDuEHlf4a6Clzy8ta93gtAM/s1482/Screen%20Shot%202022-05-06%20at%203.01.46%20pm.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="526" data-original-width="1482" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjos9GgQWcPm1iWXHP2KCc5LVpyV0JAHMnp4pCY47wJBK90T6xs440FkfzuEHO9SQUs5I9yvOpNy0eFUNOcghHmYj7natEGP29eCyy4FyzA83kmszcFT9sIBQel5DM20xfj98buaQu2gbp4qgJXS9r8OpGpo5f4MmTttWDuEHlf4a6Clzy8ta93gtAM/s400/Screen%20Shot%202022-05-06%20at%203.01.46%20pm.png"/></a></div>
<caption><i>(from the eBay listing) this should probably say <b>maximise</b> interference...</i></caption>
</p>Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-53010096140780003072022-04-30T16:27:00.001+10:002022-05-06T14:50:56.066+10:00Automating heating vents with openHAB, esp8266 and LEGO - Part 2.5; Hardware rework<p>
Working with hardware is fun; working with LEGO hardware is <i><a href="https://www.youtube.com/watch?v=vZRgpb4VS_Y" target="_blank">awesome</a></i>. So before proceeding with the next part of my <a href="https://blog.themillhousegroup.com/2021/07/automating-heating-vents-with-openhab.html" target="_blank">heating vent automation series</a>, I took the opportunity to refine my vent manipulator, with the following aims:
<ul>
<li><b>Quieter operation</b>; v1 sounded like a coffee-grinder</li>
<li><b>Faster movement</b>; to sound like a quieter coffee-grinder for less time</li>
<li><b>Lower stack height</b> above floor level; to avoid impeding the sofa-bed mechanism directly overhead</li>
</ul>
</p>
<h5>V1 Hardware</h5>
<p>
As a reminder, here's the first hardware revision featuring a LEGO Technic XL motor and an extremely over-engineered - and tall - chassis.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw1szZSNsmhAu4cQuNWC-ub5b9WNQz3iSzThg42btee6qZWsZnIGmhuZVcFh9VgyVbmpDuUxmxY6VYNfllm9KYQ_n9CfQPjYIdXexcrWAnnpJjKBHcpWmWrsZtWuE6sqfDQfPochlYcmEFzG77eXdVUbJed72HaXIHl3JUYfGvGl8uT3Ci964DtBs4/s2048/IMG_5308.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="1536" data-original-width="2048" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhw1szZSNsmhAu4cQuNWC-ub5b9WNQz3iSzThg42btee6qZWsZnIGmhuZVcFh9VgyVbmpDuUxmxY6VYNfllm9KYQ_n9CfQPjYIdXexcrWAnnpJjKBHcpWmWrsZtWuE6sqfDQfPochlYcmEFzG77eXdVUbJed72HaXIHl3JUYfGvGl8uT3Ci964DtBs4/s400/IMG_5308.jpg"/></a></div>
</p>
<h5>V2 Hardware</h5>
<p>
Here's the respun version, which works as well as, if not better than, the original.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgScOKW5ukg_7GuelZoHuwz5S5KKj-wH5J5-2r4QBp4nS9Rz-wxlZnCDnwdk_cKIcANZ_JQb2VWKeZAIG3Y1eAC5BFYrd3X1lWu8NSFWCZvZso0Y-dPLG7ohhWPihaV0H_t04ucqYyJpf7cGmAJMbt09UpQjZCiByiu3-SnxwH5hX7mF_wYghI9IjT/s4032/hardwarev2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="3024" data-original-width="4032" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgScOKW5ukg_7GuelZoHuwz5S5KKj-wH5J5-2r4QBp4nS9Rz-wxlZnCDnwdk_cKIcANZ_JQb2VWKeZAIG3Y1eAC5BFYrd3X1lWu8NSFWCZvZso0Y-dPLG7ohhWPihaV0H_t04ucqYyJpf7cGmAJMbt09UpQjZCiByiu3-SnxwH5hX7mF_wYghI9IjT/s400/hardwarev2.jpg"/></a></div>
</p>
<p>
The changes:
<ul>
<li>The chassis is <b>half as high</b> above the vent surface</li>
<li>The rack-and-pinion mechanism is <b>centered</b> in the chassis to reduce torque</li>
<li>The rack is situated lower to <b>reduce flex</b></li>
<li>The motor is <b>reduced in size</b> to a LEGO Technic "M" motor (quieter and faster)</li>
<li>The manipulator clamps to the vent with a Technic pulley instead of a brick, <b>further reducing height-above-floor</b></li>
</ul>
</p>
<p>Now we're in a really good position to get down-and-dirty with some firmware...</p>Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-34360091160331653512022-03-26T15:07:00.106+11:002022-06-09T13:16:19.399+10:00Things people have trouble with in React / Javascript, part 1: Too much React state<p>
I've been on a bit of a <a href="https://stackoverflow.com/users/649048/millhouse" target="_blank">Stack Overflow</a> rampage recently, most commonly swooping in and answering questions/fixing problems people are having with their React apps. I'm going to spend my next few blog posts going over what seems to be giving people the most trouble in 2022.
</p>
<h4>Episode 1: In which things are over<tt>state</tt>d</h4>
<p>
This problem typically manifests itself as a question like "why doesn't my UI update" or the inverse "my UI is always updating" (which is almost always related to <tt>useEffect</tt> - see later in this series). With <a href="https://reactjs.org/docs/hooks-state.html" target="_blank">the <tt>useState</tt> hook</a>, state management becomes so easy, it's tempting to just scatter state everywhere inside a component instead of thinking about whether it belongs there, or indeed if it's needed at all.</p>
<p>
If I see more than 3 <tt>useState</tt> hooks in one component, I get nervous, and start looking for ways to:
<ul>
<li>Derive the state rather than store it</li>
<li>Push it up</li>
<li>Pull it down</li>
</ul>
The <a href="https://reactjs.org/docs/lifting-state-up.html" target="_blank">React docs</a> (and <a href="https://overreacted.io/writing-resilient-components/#principle-4-keep-the-local-state-isolated" target="_blank">Dan Abramov himself</a>) talk a lot about pulling-up and pushing-down state, but I think <b>deriving state</b> may actually be more important than either of those.
</p>
<p>
What do I mean? Well, I see people doing this:
<pre class="prettyprint javascript">
const [cars, setCars] = useState([]);
const [preferredColor, setPreferredColor] = useState(undefined);
const [preferredMaker, setPreferredMaker] = useState(undefined);
// DON'T DO THIS:
const [filteredCars, setFilteredCars] = useState([]);
...
</pre>
Obviously I've left out tons of code where the list of cars is fetched and the UI is wired up, but honestly, you can already see the trouble brewing. The <tt>cars</tt> list and the <tt>filteredCars</tt> list are <i>both</i> held as React state. But <tt>filteredCars</tt> shouldn't be - it's the <b>result of applying the user's selections</b> (preferred color and maker) and so can be trivially calculated at render time. As soon as you realise this, all kinds of UI problems with staleness, flicker, and lag just melt away:
<pre class="prettyprint javascript">
const [cars, setCars] = useState([]);
const [preferredColor, setPreferredColor] = useState(undefined);
const [preferredMaker, setPreferredMaker] = useState(undefined);
// Derive the visible list based on what we have and what we know
const filteredCars = filterCars(cars, preferredColor, preferredMaker);
...
</pre>
</p>
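<p>To make this concrete, <tt>filterCars</tt> can be an ordinary pure function - no hooks required - which also makes it trivially testable outside React entirely. (This is a sketch; the car shape and field names are invented for illustration.)</p>

```javascript
// Pure helper: derive the visible list from the current state, at render time.
function filterCars(cars, preferredColor, preferredMaker) {
  return cars.filter(
    (car) =>
      (preferredColor === undefined || car.color === preferredColor) &&
      (preferredMaker === undefined || car.maker === preferredMaker)
  );
}

const cars = [
  { maker: "Mazda", color: "red" },
  { maker: "Mazda", color: "blue" },
  { maker: "Volvo", color: "red" },
];

console.log(filterCars(cars, undefined, undefined).length); // prints 3
console.log(filterCars(cars, "red", undefined).length);     // prints 2
console.log(filterCars(cars, "red", "Volvo").length);       // prints 1
```

<p>Because it's pure, the result is always consistent with the current state - there's no second copy of the list to drift out of sync.</p>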
<p>
I think some people have trouble with their mental model of functional components, and are afraid to have a "naked <tt>const</tt>" sitting there in their function - as if it's somehow a hack, or a lesser variable than one that <tt>useState</tt> handed you. Quite the reverse, I think.
</p>
<p>Another argument might be that it's "inefficient" to derive the data on each and every render. To that, I counter that if you are maintaining the <tt>cars</tt> and <tt>filteredCars</tt> lists <b>properly</b> (and this is certainly not guaranteed), the number of renders is exactly the same, and thus so is the amount of work being done. In fact there's a strong chance that deriving-on-the-fly will actually <b>save</b> you unnecessary renders. I might keep using this car-filtering analogy through the series to explain why.
</p>
Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0tag:blogger.com,1999:blog-8025834647965791999.post-66865511582217186532022-02-26T11:00:00.001+11:002022-03-02T11:05:26.754+11:00New Toy<p>
I just received this from the good folk at <a href="https://www.powerpal.net/" target="_blank">PowerPal</a>:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEj5bZQnPNfNdLEMgGYlYbbQkhjG2ugYUViBrqhETeGE-KTBPuZljfUmKtZfZ2K_AefxZ7Vqy5YEHGNH0_mBUx48uJhqOPCrZeBdR-IPEPliptLx_2gdG41gVELf5Mz8ji7Fekjc7eh8hHefFqQXzCtpgEoh9bhof8kN_7af_kUrvJ-8-hN0YecMFBjQ=s3334" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="2261" data-original-width="3334" src="https://blogger.googleusercontent.com/img/a/AVvXsEj5bZQnPNfNdLEMgGYlYbbQkhjG2ugYUViBrqhETeGE-KTBPuZljfUmKtZfZ2K_AefxZ7Vqy5YEHGNH0_mBUx48uJhqOPCrZeBdR-IPEPliptLx_2gdG41gVELf5Mz8ji7Fekjc7eh8hHefFqQXzCtpgEoh9bhof8kN_7af_kUrvJ-8-hN0YecMFBjQ=s400"/></a></div>
<p>
This is a cool device that should give me real-time, API-based access to my house's power usage. The next logical step is to bundle it into my openHAB setup. I'll probably begin with just the HTTP binding to get what I need, and maybe (and it's a BIG maybe) turn it into a genuine binding at some point in the future. My experience trying to get the Broadlink binding merged into the openHAB addons codebase has turned me off that process a little...
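If I do go the HTTP binding route, the Thing definition might look roughly like this (the URL, auth header and JSONPATH expression are all placeholders - I haven't explored the real API yet):

```
Thing http:url:powerpal "PowerPal" [
    baseURL="https://api.example.invalid/meter_reading",  // placeholder URL
    headers="Authorization=<my-token>",                   // placeholder auth
    refresh=60                                            // poll once a minute
] {
    Channels:
        Type number : watts "Current Power [%d W]" [
            stateTransformation="JSONPATH:$.watts"        // placeholder path
        ]
}
```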
</p>Johnhttp://www.blogger.com/profile/01728894916174854514noreply@blogger.com0