Showing posts with label functionalprogramming.

Friday, 24 August 2018

Building a retro game with React.js. Part 5 - Fill 'er up

By now I had a good portion of the "player's part" of the game complete. Movement around the game area, drawing lines, detecting intersection with existing lines and preventing illegal movement were all done, using the div-based "drawing" that I was familiar with. But now it was time to draw the filled colour block that results when a closed area has been completed. And I was looking at a wall of complexity that I didn't know how to scale.



The answer was to challenge my own comfort zone. Allow me to quote myself:
I could do that with an HTML canvas, but I'm (possibly/probably wrongly) not going to use a canvas. I'm drawing divs dammit!
After all, if ever there was a time and place to learn how to do something completely new, isn't it a personal "fun" project? After a quick scan around the canvas-in-React landscape I settled upon React-konva, which has turned out to be completely awesome for my needs, as evidenced by how little code actually needed to change to go from my div-based way of drawing a line to the Konva way:
The old way:

const Line = styled('div')`
  position: absolute;
  border: 1px solid ${FrenzyColors.CYAN};
  box-sizing: border-box;
  padding: 0;
`;

renderLine = (line, lineIndex) => {
  const [top, bottom] = minMax(line[2], line[4]);
  const [left, right] = minMax(line[1], line[3]);
  const height = (bottom - top) + 1;
  const width = (right - left) + 1;
  return <Line key={`line-${lineIndex}`}
                  style={{ top: `${top}px`, 
                           left: `${left}px`,
                           width: `${width}px`,
                           height: `${height}px` }} />;
}

The new way:

import { Line } from 'react-konva';

renderLine = (line, lineIndex) => {
  return (
    <Line
      key={`line-${lineIndex}`}
      points={line.slice(1)}
      stroke={FrenzyColors.CYAN}
      strokeWidth={2}
    />
  );
}

... and the jewel in the crown? Here's how simple it is to draw a closed polygon with the correct fill colour; by total fluke, my "model" for lines and polygons is virtually a one-to-one match with the Konva Line object, making this amazingly straightforward:
renderFilledArea = (filledPolygon) => {
  const { polygonLines, color } = filledPolygon;
  const flatPointsList = polygonLines.reduce((acc, l) => {
    const firstPair = l.slice(1, 3);
    return acc.concat(firstPair);
  }, []);

  return (
    <Line
      points={flatPointsList}
      stroke={FrenzyColors.CYAN}
      strokeWidth={2}
      closed
      fill={color}
    />
  );
}

Time Tracking
Probably at around 20 hours of work now.

Monday, 16 July 2018

Building a retro game with React.js. Part 4 - Drawing the line somewhere

In the previous instalment of this series, I was able to get the player's "sprite" (actually just a div with a border!) to move around the existing lines on the edge of the screen. The next logical step is to allow the player to draw their own lines, which, upon joining at both ends to existing lines, will become part of the "navigable" world the player can manoeuvre through.

Bugs Galore
It was at this point that I started being plagued by off-by-one errors; it seemed everywhere I turned I was encountering little one-pixel gaps when drawing lines, because:
  • My on-screen lines are actually 2px wide
  • My line-drawing function was doing an incorrect length calculation (had to do (right - left) + 1)
  • I was not updating my position at the right time, so was storing my "old" position as the current line's end point; and
  • I was naively using setState and expecting the new this.state to be visible immediately (it isn't - setState is asynchronous, so anything derived from the previous state needs the updater-function form)

My solution to almost all of these problems (with the exception of the UI line-drawing function) was to write a heap of unit tests; these generally flushed things out pretty quickly.

Writing the line-drawing function was a weird experience. Virtually every software development "environment" I've ever used before, from BBC Basic on my Acorn Electron on, has had a function like drawLine(startX, startY, endX, endY);. And I could do that with an HTML canvas, but I'm (possibly/probably wrongly) not going to use a canvas. I'm drawing divs dammit! Here's what my function looks like:
renderLine = (line, lineIndex) => {
  const [top, bottom] = minMax(line[2], line[4]);
  const [left, right] = minMax(line[1], line[3]);
  const height = (bottom - top) + 1;
  const width = (right - left) + 1;
  return <Line key={`line-${lineIndex}`}
                  style={{ top: `${top}px`, 
                           left: `${left}px`,
                           width: `${width}px`,
                           height: `${height}px` }} />;
}
Where minMax is a simple function that returns [min, max] of its inputs, and Line is a React-Emotion styled div:
const Line = styled('div')`
  position: absolute;
  border: 1px solid ${FrenzyColors.CYAN};
  box-sizing: border-box;
  padding: 0;
`;
Notice that I resisted the temptation to pass the top, left etc. into the Line wrapper. The reason is that doing so results in a whole new CSS class being created, and applied to the line, every time one of these computed values changes. This seems wasteful when most of the line's attributes remain the same! So I use an inline style to position the very-thin divs where I need them.
Time Tracking
Up to about 12 hours by my rough estimate.

Friday, 15 June 2018

Building a retro game with React.js. Part 3 - I Like To Move It

So with most of the graphical pieces in position, it's time to make things move around.

Again, starting with the easy stuff, I wanted the four directional keys to move the Player around. But in Frenzy, you can only move (as opposed to draw) along the boundaries of the game area and on lines you have already drawn. So if we look at my first iteration of the code in GameArea to handle a request to move the Player left, it's something like this:
 
update = () => {
  if (this.keyListener.isDown(this.keyListener.LEFT)) {
    this.moveLeft();
  }
};

moveLeft = () => {
  if (this.canMove(Direction.LEFT)) {
    this.setState({
      playerX: this.state.playerX - 1
    });
  }
}
I ended up bundling quite a lot of smarts into the Direction enumeration in order to make the logic less "iffy" and more declarative. That one Direction.LEFT key encapsulates everything that is needed to check whether a) the player is on a line that has the correct orientation (horizontal) and b) there is room on that line to go further to the left.
A line looks like this:
[Orientation.HORIZONTAL, 0, 0, 478, 0], // orientation, startX, startY, endX, endY
and Direction looks like this:
export const Direction = {
  LEFT: {
    orientation: Orientation.HORIZONTAL,
    primaryCoord: (x, y) => y,
    lineToPrimaryCoord: (line) => line[2],
    secondaryCoord: (x, y) => x,
    testSecondary: (c, line) => c > Math.min(line[1], line[3])
  },
  ...
}

My test for whether I can move in a certain direction is:
static canPlayerMoveOnExistingLine = (playerX, playerY, direction, lines) => {
  const candidates = lines.filter(line => {
    return (line[0] === direction.orientation)
  });
    
  const pri = direction.primaryCoord(playerX, playerY);
  const primaryLines = candidates.filter(candidateLine => {
    return direction.lineToPrimaryCoord(candidateLine) === pri;
  });

  if (primaryLines.length > 0) {
    const sec = direction.secondaryCoord(playerX, playerY);
    const found = primaryLines.find(line => {
      return direction.testSecondary(sec, line);
    });

    return typeof found !== 'undefined';
  }
  return false;
} 
I declared it static for ease of testing - cheap to do and well worth it for something like this, where verifying behaviour by actually moving the player around is time-consuming and tedious. It's working well as it stands, although as we all know, naming things is hard. It's pretty easy to follow the process though. At this point I'm holding a lines array in this.state and doing filter and find operations on it as you can see above. We'll have to wait and see whether that will be fast enough; it may well be a good idea to keep a currentLine in state, as most of the time it will be unchanged from the last player movement. Next up, it's time to start drawing some new lines on the screen!

Kudos
I am starting to build up some tremendous respect for the original author of this game; although often dismissed as "very simple", there are some tricky little elements to coding this game and I'm only just scratching the surface. To achieve the necessary performance on an 8-bit, 1MHz processor with RAM measured in the handfuls of kilobytes is super impressive. Assembly language would have been necessary for speed, making development and debugging a real pain. I haven't even started thinking about how to do the "fill" operation once a line has been drawn and it encloses some arbitrary space, but I suspect the original developer "sniffed" the graphics buffer to see what was at each pixel location - a "luxury" I don't think I'll have!
Time Tracking
Up to about 6 hours now.

Friday, 8 June 2018

A New Old Thing; Building a retro game with React.js. Part 1 - Background and Setup

I've blogged before about entering the fast-paced world of React.js, and after a couple of years I'm still (on the whole) enjoying my day job working with it. Over this period React has done a pretty good job of delivering on the "maintainable large JavaScript application" promise, but in the apps I've built we've seen a few problems that stemmed from our developers' differing levels of experience with design patterns, immutability concepts, higher-order functions and higher-order components.

At the risk of being immodest, I'm comfortable with those concepts - Design Patterns from waaaay back and the functional paradigms from my five-year (and counting) love affair with Scala. What I wanted to explore was: what would happen if I built a React app by myself, endeavouring to write the cleanest, purest software based upon the best starting-point we currently have? How productive could I be? How long would it take to build a full working app? How would maintenance go? How quickly could I add a new feature?

As my day job is basically building CRUD apps, I wanted to do something a lot more fun for this side-project. And what could be more fun than a game? (Mental note: ask people working at Electronic Arts...) There's also a pleasing circularity in building a game and documenting how I did it - back in my earliest days of having a computer, aged about 7, I would buy magazines with program listings and laboriously enter them, line-by-line, while marvelling at how anyone could really have been good enough to know how to do this.

The Game
I'll admit, I've never built a non-trivial game before, but I think attempting an 8-bit home computer game I remember fondly from my childhood, on top of 2018's best front-end technologies, should be about the right level of difficulty.

The game I'll be replicating is called Frenzy, a Micro Power publication for the BBC B, Acorn Electron and Commodore 64. My machine was the Electron - basically a low-cost little brother to the mighty Beeb. Being highly limited in RAM and CPU, titles for this platform usually needed substantial trimming from their BBC B donor games, despite using the same BBC BASIC language.

Check out the links above for more details and screenshots, but the game is basically a simplified version of "Qix" or "Kix" where the object is to fill in areas of the screen without being hit by one or more moving enemies.

Just for the hell of it, I'm going to place this game front-and-centre on my homepage at http://www.themillhousegroup.com, which I just nuked for this purpose. The page is now a React app being served off a Play Scala backend as per my new-era architecture; I'm sure more key technologies will join the stack as the game takes shape.

Initial Development
To develop the game, I decided to start from the start. The welcome page would need to be suitably old-skool but would force me to consider a few things:
  • What screen size should I be working to?
  • Can I get a suitably chunky, monospaced font?
  • "Press Space to start" sounds easy, but how do I make that actually work?
Decisions
The original Frenzy must have operated in the BBC's graphical MODE 1 because it used a whopping 4 colours and the pixels were square. So that means the native resolution was 320x256. While it would be tempting to stick to that screen size and thus have it fit on every smartphone screen, I've decided to double things and target a 640x512 effective canvas.
Some searching for 8-bit fonts led me to "Press Start 2P" which, while intended to honour Namco arcade machines, is near enough to the chunky fonts I remember fondly from my childhood home computer that I can't go past it.
As a tiny nod to the present, the "screen" is actually slightly transparent and has a drop shadow - showing how far we've come in 34 years!
The final piece of the welcome screen was achieved by mounting the FrenzyGame component in a React-Game-Kit Loop and using the KeyListener to subscribe to the keys I care about - a quick perusal of the demo game showed me how to use it:
class FrenzyGame extends Component {

  constructor(props) {
    super(props);
    this.keyListener = new KeyListener();

    this.state = {
      gameOver: true  
    };
  }

  componentDidMount() {
    this.loopSubscriptionId = this.context.loop.subscribe(this.update);
    this.keyListener.subscribe([
      this.keyListener.SPACE
    ]);
  }

  componentWillUnmount() {
    this.context.loop.unsubscribe(this.loopSubscriptionId);
    this.keyListener.unsubscribe();
  }

  update = () => {
    if (this.state.gameOver) {
      if (this.keyListener.isDown(this.keyListener.SPACE)) {
        this.setState({ gameOver: false });
      }
    }
  };

  ...

  render() {
    return this.state.gameOver 
      ? this.renderWelcomeScreen() 
      : this.renderGame();
  }
}

Saturday, 8 July 2017

The CRAP Stack, Part 3 - Front-End Routes with a Play server

As I continue to develop my React app that is hosted on a Play backend, I've come across the need to support "front-end routes"; that is, URLs that look like this:
  http://myapp.com/foo/bar
where there is no explicit entry for GET /foo/bar in Play's routes and nor is there a physical asset located in /public/foo/bar for the Assets controller to return to the client, as we set up in the last instalment:
  # Last of all, fall through to the React app
  GET /       controllers.Assets.at(path="/public",file="index.html")
  GET /*file  controllers.Assets.at(path="/public",file)
What we'd like is for the React application at index.html to be served up, so that it can then consume/inspect/route from the original URL via the Window.location API.

As it stands, the last line of routes will match, the Assets controller will fail to find the resource, and your configured "client error handler" will be called to deal with the 404. This is not what we want for a "front-end route"!

We want requests that don't correspond to a physical asset to be considered a request for a virtual asset - and hence given to the React app. And after a bit of fiddling around, I've come up with a FrontEndServingController that gives me the most efficient possible way of dealing with this. The Gist is available for your copy-paste-and-improve pleasure, but the key points are:

The fall-through cases at the bottom of routes become:
  GET /       controllers.FrontEndServingController.index
  GET /*file  controllers.FrontEndServingController.frontEndPath(file)
Those methods in FrontEndServingController just being:
  val index = serve(indexFile)

  def frontEndPath(path: String) = serve(path)

  private def serve(path: String) = {
    if (physicalAssets.contains(path)) {
      logger.debug(s"Serving physical resource: '$path'")
      assets.at(publicDirectory, path, true)
    } else {
      logger.debug(s"Serving virtual resource: '$path'")
      // It's some kind of "virtual resource" -
      // a front-end "route" most likely
      assets.at(publicDirectory, indexFile, true)
    }
  }


We're still using Play's excellent built-in AssetsController to do the hard work of caching, ETags, GZipping (all the classic webserver jobs) - we have injected it as assets using Dependency Injection - composition FTW. That true argument tells it to use "aggressive caching" which is ideal for this scenario where the bundle files we're serving up already have a cache-busting filename.
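For orientation, this is roughly how the controller class itself hangs together - a minimal sketch reconstructed from the snippets in this post (the constant values, the @Singleton annotation and the Controller base class are my assumptions; the Gist is the authoritative version):

import java.io.File
import javax.inject.{Inject, Singleton}

import play.api.Logger
import play.api.mvc.Controller

@Singleton
class FrontEndServingController @Inject()(assets: Assets) extends Controller {

  private val logger = Logger(this.getClass)

  // Path prefix the Assets controller uses to resolve bundled resources
  private val publicDirectory = "/public"
  private val indexFile = "index.html"

  // Where those same resources live on the local filesystem at runtime
  private val physicalPublicDirectory = "public/"

  // index, frontEndPath, serve, physicalAssets and deepList as shown above and below
}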
And now the "clever" bit being a recursive scan of the /public directory when we start up, assembling a definitive (and immutable!) Set[String] of what's actually a physical asset path:
  lazy val physicalAssets:Set[String] = {
    val startingDirectory = new File(physicalPublicDirectory)
    deepList(startingDirectory)
  }

  private def deepList(f: File): Set[String] = {
    val these = f.listFiles.toSet
    val inHere = these.filter(_.isFile).map { f =>
      f.getPath.replace(physicalPublicDirectory, "")
    }
    val belowHere = these.filter(_.isDirectory).flatMap(deepList)
    inHere ++ belowHere
  }

Thursday, 20 April 2017

Don't Bake That Cake!

I resisted using the old "XXX Considered Harmful" riff here, but the intent is the same; learn from my pain!

I recently revisited some Scala Play Framework code I'd written a while back (circa Play 2.3) and, as is so often the case, found myself horrified at the spaghetti I had excreted. My intention had been to add some quick features to the codebase after taking it through the 2.4 and then 2.5 upgrade processes, but it was such a mess that it ended up taking several weeks (in after-hours time) to get it done.

The main culprit? The Cake Pattern

Back in the days before Play had a first-class dependency-injection mechanism, layering in traits was considered best practice. However, I can tell you now, with the robust DI support available via Google Guice, the Cake Pattern is definitely not a good idea.

In particular, if you're trying to favour composition over inheritance, it's best not to even start drinking the trait Kool-Aid. It's very tempting early on in a project to define what seem to be neatly-encapsulated bits of functionality, and then mix them in. At first, it seems just as elegant, if not more so, than wiring in collaborators. The problem comes as you start to accumulate large numbers of these mixins: multiple-inheritance confusion sets in, compile times go through the roof, and testing becomes extremely awkward. Yuk. And then once you've decided you want out of the cake, you realise.

YOU CAN'T UNBAKE A CAKE

Once you have a teetering tower of inheritance, it's extremely difficult to carefully refactor it into a composed structure without the whole thing exploding. You really can't do it iteratively, and so end up with a big-bang rewrite, and your tests (if you had any) are all broken too because everything is so fundamentally different.

I was going to provide examples in this article but I'm too embarrassed and exhausted :-)

Friday, 28 October 2016

Configuring Jacoco4sbt for a Play application

Despite (or perhaps due to) my recent dalliances with React.js, I'm still really loving the Play Framework, in both pure-backend- (JSON back-and-forth) and full-stack (serving HTML) modes. It's had a tremendous amount of thought put into it, it's been rock-solid in every situation (both work- and side-project) I've deployed it, it's well-documented and there's a solid ecosystem of supporting plugins, frameworks and libraries available.

One such plugin is Jacoco4sbt, which wires the JaCoCo code-coverage tool into SBT (the build system for Play apps). Configuration is pretty straightforward, and the generated HTML report is a nice way to target untested corners of your code. The only downside (which I've finally got around to looking at fixing) was that by default, a lot of framework-generated code is included in your coverage stats.

So without further ado, here's a stanza you can add to your Play app's build.sbt to whittle down your coverage report to code you actually wrote:

jacoco.settings

jacoco.excludes in jacoco.Config := Seq(
    "views*",
    "*Routes*",
    "controllers*routes*",
    "controllers*Reverse*",
    "controllers*javascript*",
    "controller*ref*",
    "assets*"
)
I'll be putting this into all my Play projects from now on. Hope it helps someone.

Wednesday, 16 March 2016

Building reusable, type-safe Twirl components

I've been doing quite a lot of work on a Play Framework 2.4.x app recently, and hit upon a little problem that others have noted as well. I'm trying to make the view layer as nice a place to be as the "main" codebase - after all, it's all Scala - and so I'm extracting out anything re-usable into a components package.

Here's a simple example. I'm using Bootstrap (of course), and I'm using the table-striped class to add a little bit of interest to tabular data. The setup of an HTML table is quite verbose and definitely doesn't need to be repeated, so I started with the following basic structure:
@(items:Seq[_], headings:Seq[String] = Nil)
  <table class="table table-striped">
      @if(headings.nonEmpty) {
      <thead>
          <tr>
            @for(heading <- headings) {
                <th>@heading</th>
            }
          </tr>
      </thead>
      }
      <tbody>
        @for(item <- items) {
            <tr>
                ???
            </tr>
        }
      </tbody>
  </table>
Which neatens up the call-site from 20-odd lines to one:
  @stripedtable(userList, Seq("Name", "Age"))

Except. How do I render each row in the table body? That differs for every use case!
What I really wanted was to be able to map over each of the items, applying some client-provided function to render a load of <td>...</td> cells for each one. Basically, I wanted stripedtable to have this signature:
@(items:Seq[T], headings:Seq[String] = Nil)(fn: T => Html)
With the body simply being:
   @for(item <- items) {
      <tr>
        @fn(item)
      </tr>
   }
and client code looking like this:
  @stripedtable(userList, Seq("Name", "Age")) { user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
...aaaaand we have a big problem. At least at time of writing, Twirl templates cannot be given type arguments. So those [T]'s just won't work. Loosening off the types like this:
@(items:Seq[_], headings:Seq[String] = Nil)(fn: Any => Html)
will compile, but the call-site won't work because the compiler has no idea that the _ and the Any are referring to the same type. Workaround solutions? There are two, depending on how explosively you want type mismatches to fail:
Option 1: Supply a case as the row renderer
  @stripedtable(userList, Seq("Name", "Age")) { case user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
This works fine, as long as every item in userList is in fact a User - if not, you get a big fat MatchError.
Option 2: Supply a case as the row renderer, and accept a PartialFunction
The template signature becomes:
@(items:Seq[_],hdgs:Seq[String] = Nil)(fn: PartialFunction[Any, Html])
and we tweak the body slightly:
   @for(item <- items) {
      @if(fn.isDefinedAt(item)) {
        <tr>
          @fn(item)
        </tr>
      }
   }
In this scenario, we've protected ourselves against type mismatches, and simply skip anything that's not what we expect. Either way, I can't currently conceive of a more succinct, reusable, and obvious way to drop a consistently-built, styled table into a page than this:
  @stripedtable(userList, Seq("Name", "Age")) { case user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
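Putting the pieces together, here's a sketch of the complete template under the Option 2 approach - assuming it lives at views/components/stripedtable.scala.html (the location is my choice, matching the components package idea above):

@(items: Seq[_], headings: Seq[String] = Nil)(fn: PartialFunction[Any, Html])
  <table class="table table-striped">
      @if(headings.nonEmpty) {
      <thead>
          <tr>
            @for(heading <- headings) {
                <th>@heading</th>
            }
          </tr>
      </thead>
      }
      <tbody>
        @for(item <- items) {
          @if(fn.isDefinedAt(item)) {
            <tr>
              @fn(item)
            </tr>
          }
        }
      </tbody>
  </table>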

Friday, 17 July 2015

Strongly-Typed Time. Part 3. Application

In the previous instalments of this little series, I looked at the motivation for and design choices behind a Scala library to use the type system to eliminate (or at least massively reduce) the incidence of errors when dealing with times here on planet Earth.

As is often the case with these things, the motivator was a real-life project that would benefit from such a library. While I can't share the code of that project, the library is up on Github right now, and you can add the JAR as a dependency to your SBT-driven project in both Scala 2.10 and 2.11 flavours.

So what did I miss when going from an idealised, clean-slate design to something a real-life application can use?

Quite a lot. Let's have a look.

Once you're strongly-typed anywhere, you have to be strongly-typed everywhere

Previously, I was representing instants and times using a mixture of Long or org.joda.time.DateTime. I couldn't believe how quickly switching to TimeInZone[TZ] resulted in that [TZ] getting into everything - for better or worse.

Iteration 1; Naive, timezone-less DateTimes:

case class CarRace ( location: String, startTime:DateTime, endTime:DateTime  )

// We forgot a timezone - now we'll get whatever the server defaults to... 
val localRaceStart = new DateTime(2015, 7, 5, 13, 0) // July 5, 2015, 1pm
val localRaceEnd = new DateTime(2015, 7, 5, 15, 0)   // July 5, 2015, 3pm 

// Highly likely to be WRONG:
val silverstoneGrandPrix = CarRace( "Silverstone", localRaceStart, localRaceEnd )

Iteration 2; Let's try to strongly-type the times:

case class CarRace ( location: String, 
                     startTime:TimeInZone[_ <: TimeZone], 
                     endTime:TimeInZone[_ <: TimeZone] ) 
This instance happens to be correct, but CarRace doesn't enforce that races always start and end in the same timezone:
val britishGrandPrix = CarRace( "Silverstone", 
                                TimeInZone[London](localRaceStart), 
                                TimeInZone[London](localRaceEnd))
So we could end up with this (a half-length race):
val brokenGrandPrix = CarRace( "Silverstone", 
                               TimeInZone[London](localRaceStart), 
                               TimeInZone[Paris](localRaceEnd))

Iteration 3; We have to enforce the timezone in the parent object:

case class CarRace[TZ <: TimeZone] ( location: String, 
                                     startTime:TimeInZone[TZ], 
                                     endTime:TimeInZone[TZ] )
So now we get good compile-time safety:
// This won't compile now!
val brokenGrandPrix = CarRace( "Silverstone", 
                               TimeInZone[London](localRaceStart), 
                               TimeInZone[Paris](localRaceEnd)) // error: type mismatch;
... but now we have to lug [TZ] around everywhere...
val raceSeason = List[CarRace] // error: class CarRace takes type parameters
... and worse still, we often have to wildcard the "strong" type to actually Get Stuff Done:
val raceSeason = List[CarRace[_ <: TimeZone]] // So what was the point of all this again???

Types are great - if you know them at compile-time

Although I knew the timezones that some events would be occurring in, there's no way to know all of the timezones of all of the things:
// Seems reasonable enough ...
case class RaceWatcher[TZ <: TimeZone](name:String, watchingFrom:TZ)
OK so let's create and use a RaceWatcher to find out when somebody needs to tune into a race in their timezone:

// Assume this gets passed in from the user's browser, or maybe preferences
val tz = TimeZone("America/New_York")

val chuck = RaceWatcher("Chuck", tz)


// Let's find out when Chuck needs to turn on his TV:
val switchOnAt = britishGrandPrix.startTime.map[chuck.watchingFrom] 
// => error: type watchingFrom is not a member of RaceWatcher
 
We can't do that (without reflection). So in the end, we end up stringly-typed instead of strongly-typed; I had to add map(javaTimeZoneName:String) to the initial, "clean" map[TZ]:
val switchOnAt = britishGrandPrix.startTime.map(chuck.watchingFrom.name)
// => TimeInZone[New_York] UTC: '2015-07-05T12:00:00.000Z' UTCMillis: '1436097600000' Local: '2015-07-05T08:00:00.000-04:00'


Timezones are still hard

My final observation is more particular to the domain of time rather than the use of types. Timezones are still a mind-bender, and you still have to concentrate while working in this area. Types can prevent obvious mismatches in assignments or parameters, but at the end of the day, the developer still needs to build up a mental picture of what they need to get done.

I will regard my first outing of Arallon as a success though - most of the runtime problems I encountered in this first application were actually in the area of time ranges rather than point-in-time errors. Which is why the next focus of Arallon will be type-safe representations of the concept:
  • TimeSpanInZone - such as would be perfect for my car-race example above (see the sketch below); and
  • DayInZone - where a midnight-to-midnight 24-hour period in a timezone is the prime focus
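Purely as a speculative sketch of the shape I have in mind (this is not Arallon's actual API), TimeSpanInZone could let the car-race example collapse to a single timezone-safe field:

// Speculative sketch only - not Arallon's published API
case class TimeSpanInZone[TZ <: TimeZone](start: TimeInZone[TZ], end: TimeInZone[TZ])

// The CarRace example above then carries just one time-related field:
case class CarRace[TZ <: TimeZone](location: String, race: TimeSpanInZone[TZ])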

Wednesday, 27 May 2015

Post-Patterns Patterns Part 2 - Partial at my house

So in my sausagefactory library, my first attempt at adding an extension mechanism was very Java-esque.
trait FieldConverter {
  def convert(fieldType: Type, value: Any): Any
}
The companion object for the CaseClassConverter supplied a default FieldConverter if you didn't need one:
object CaseClassConverter {

  def apply(converter: => FieldConverter = defaultFieldConverter)

  class DefaultFieldConverter extends FieldConverter {
     def convert(fieldType: Type, value: Any): Any = {
       value
     }
  } 
}
Ye gads, look at the boilerplate! All to wrap a simple function! And yet there is huge scope to still get it wrong, because if you do supply a custom FieldConverter, it ends up being you who handles all conversions from then on, even though you really don't care about most of them. So you need an if for everything to work properly:
class MyFieldConverter extends FieldConverter {

  def convert(fieldType: Type, value: Any): Any = {
    if (specialCase) {
      ... // do the special conversion
    } else {
      // Do the normal conversion
      value
    }
  }
}

That sucks.

So, following a nice little nugget I found in Effective Scala, I refactored the whole extension mechanism into using PartialFunctions, like this:
Make FieldConverter a type alias
  type FieldConverter = PartialFunction[(Type, Any), Any]
Chain up a user's custom FieldConverter with the default one
  val exhaustiveConverter:FieldConverter = userConverter orElse defaultConverter
Scala will check whether the userConverter is defined at a given input - if not, it'll fall through to the defaultConverter - perfect.
Now custom converters are simple case blocks
  val alwaysMakeJavaLongsIntoInts: FieldConverter = {
    case (t: Type, v: Any) if (isInt(t) && isJLong(v.getClass)) => {
      v.asInstanceOf[Long].toInt
    }
  }
A userConverter only has to worry about converting one type of thing, and doesn't know (or care) about downstream converters. A simplified Chain of Responsibility.
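To make the chaining concrete, here's a small self-contained sketch - note that defaultConverter and the isInt/isJLong helpers here are simplified stand-ins of my own, not the actual sausagefactory internals:

import scala.reflect.runtime.universe._

object ConverterChainExample extends App {

  type FieldConverter = PartialFunction[(Type, Any), Any]

  // Simplified stand-in helpers, just for this sketch
  def isInt(t: Type): Boolean = t =:= typeOf[Int]
  def isJLong(clazz: Class[_]): Boolean = clazz == classOf[java.lang.Long]

  // The user's converter only handles the one case it cares about
  val alwaysMakeJavaLongsIntoInts: FieldConverter = {
    case (t, v) if isInt(t) && isJLong(v.getClass) => v.asInstanceOf[Long].toInt
  }

  // The default converter passes everything else straight through
  val defaultConverter: FieldConverter = { case (_, v) => v }

  // Chain them up - the user's converter is tried first, then we fall through
  val exhaustiveConverter: FieldConverter = alwaysMakeJavaLongsIntoInts orElse defaultConverter

  println(exhaustiveConverter((typeOf[Int], java.lang.Long.valueOf(42L)))) // 42 (now a scala Int)
  println(exhaustiveConverter((typeOf[String], "unchanged")))              // unchanged
}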

Wednesday, 10 September 2014

Why Clojure is fascinating me

So I've hopped aboard the Clojure boat, as it's the preferred implementation language for "new stuff" at work.

And I'm liking it. A lot. Possibly because of the way we're using it (microservices), but probably just intrinsically, it is a language that seems to fit in the head very nicely. Not encumbered by special cases, exceptions, implicit magic and overloads. (Don't worry, I still enjoy Scala, but it's a very different Kettle[Either[Throwable, Map[String, Fish]]]).

The succinctness and elegance of Clojure is also thrown into sharp relief by the other thing I seem to be spending a lot of time on at work - grinding through a multiple-hundred-thousand-line instant-legacy untested Java codebase. This thing might have been considered state-of-the-art ten years ago when it was all about 3-tiered systems putting messages on busses - iff it had been implemented nicely, but it wasn't. As a result, it's a monolithic proliferation of POJO-manipulation, with control flow by exceptions, mutable state throughout, and impossible to test in isolation.

It can take hours to find code that actually "does something", but you have to follow the path(s) all the way down from "the top" just in case there's a bug or "hidden feature" somewhere on the way through the myriad layers with methods that look like this (anonymised somewhat):
  public List getAllFoo(Integer primaryId, Short secondaryId, String detail, Locale locale,
      String timeZone, String category) {

    if (category != null) {
      Map foosMap = ParameterConstants.foosMap;
      if (foosMap != null) {
        category = (foosMap.get(category.toUpperCase()) != null) ? foosMap.get(category.toUpperCase()) : category;
      }
    }
    List values = new ArrayList();
    FooValue searchValue = new FooValue();
    List fooValues = null;
    searchValue.setPrimaryID(primaryId);
    searchValue.setSecondaryId(secondaryId);
    searchValue.setCategory(category);

    try {
      LOGGER.info(CommonAPILoggingConstants.INF_JOBTYPE_GETALL_VALIDATION_COMPLETED);
      fooValues = fooDAO.getFoos(searchValue, detail);
    } catch (FooValidationException e) {
      handleException(e.getErrorId(), e);
    } catch (Exception e) {
      throw new InternalAPIException(UNKNOWN_CODE, e);

    }
    if (FULL.equalsIgnoreCase(detail)) {
      for (FooValue fooValue : fooValues) {
        Bar bar = null;
        try {
          if (StringUtils.isNotBlank(fooValue.getBarID())) {
            bar = barDAO.getBarByBarId(fooValue.getBarID());
            fooValue.setBarName(bar.getBarName());
            fooValue.setBarShortName(bar.getShortName());

            LOGGER.debug(CommonAPILoggingConstants.DBG_JOBTYPE_GETALL_FETCH_BAR_BY_ID,
                                bar.getBarName(),fooValue.getBarID());
          }
        } catch (Exception e) {
          throw new InternalAPIException(UNKNOWN_CODE, e);
        }

        try {
          if (null != bar) {
            if (StringUtils.isNotBlank(bar.getBrandID())) {
              fooValue.setBazID(bar.getBazID());
                            Baz baz = bazDAO.getBazByBazId(fooValue.getBazID());
              LOGGER.debug(CommonAPILoggingConstants.DBG_JOBTYPE_GETALL_FETCH_BAZ,
                                    baz.getName(),fooValue.getBazID());
              fooValue.setBazName(baz.getName());
            }
          }
        } catch (Exception e) {
          throw new InternalAPIException(UNKNOWN_CODE, e);
        }

        FooValue value = filterFooDetails(fooValue);
        values.add(value);
      }
    } else if (BASIC.equalsIgnoreCase(detail)) {

      for (FooValue fooValue : fooValues) {
        FooValue value = new FooValue();
        value.setFooID(fooValue.getFooID());
        value.setJobName(fooValue.getJobName());
        value.setContentTypeName(fooValue.getContentTypeName());
        value.setCategory(fooValue.getCategory());
        value.setIsOneToMany(fooValue.getIsOneToMany());
        values.add(value);
      }
    } else {
      throw new CommonAPIException(INVALID_DETAIL_PARAM,"Detail parameter value invalid");
    }
    return values;
  }
This is everywhere. The lines that get me most annoyed are things like this:
            fooValue.setBarName(bar.getBarName());
            fooValue.setBarShortName(bar.getShortName());
These x.setFoo(y.getFoo()) stanzas can go on for tens of lines. I haven't come across a name for them, so I'll call them POJO Shuffles. They suck the will to live out of anyone who has to navigate them, as they frequently contain misalignments, micro-adjustments and hard-coding, e.g.:
            fooValue.setBarName(bar.getBazName());
            fooValue.setBarShortName("Shortname: " + bar.getShortName());
            fooValue.setBarLongName(bar.getShortName().toUpperCase());
Did you notice:
  • We're actually getting bazName from bar - almost certainly an autocomplete fail, but perhaps not?
  • The "short name" of fooValue will actually be longer than in the source object. Is that important to something?
  • There's a potential NullPointerException when we innocently try and set the "long name" of the fooValue


Then I read this gem of a paragraph from Rich Hickey, which is merely an introduction to the usage of defrecord in the official Clojure documentation, and yet reads like poetry when you've just come from code like the above:

It ends up that classes in most OO programs fall into two distinct categories: those classes that are artifacts of the implementation/programming domain, e.g. String or collection classes, or Clojure's reference types; and classes that represent application domain information, e.g. Employee, PurchaseOrder etc. It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.
Rich Hickey

Tuesday, 26 August 2014

Fun with Scala - Post-Patterns Patterns, Part 1 - Loan Star

Are the original Software Design Patterns dead?

Seriously, aside from perhaps Builder, the dreaded Singleton, Model-View-Controller or its hipster cousin Model-View-ViewModel, when was the last time you saw one of the Gang Of Four's patterns used in a new project? Even the direct use of an Iterator is borderline bad-practice nowadays!

I'm thinking that in these days of maximal code-avoidance (and these are great days - less code is always better code in my opinion), the sheer amount of overhead required to implement most of these patterns is a big turn-off. It's not quite "boilerplate", that word that implies so much burden these days, but it is definitely Not Fun to churn out all those interfaces and abstract classes that do very little aside from give you that apparently-vital level of indirection, which so often ends up being nothing more than a level of annoyance.

But I'm in no doubt that a new generation of post-Patterns design patterns has started to arrive, as more powerful, expressive languages enable formations of code that Gamma et al could only have dreamt of. Over the next little bit I'm going to explore a couple of nice ones that I've come across:

The Loan Pattern

... is actually the Strategy pattern but without the dreaded inheritance requirement - to refresh, here's a micro-Strategy example:
abstract class StrategySuperclass<T> {
  
  public T doSomethingIntricateInThreePartsWherePartTwoVaries() {
    T part1Result = doFirstPart();
    T part2Result = doSecondPart(part1Result);
    return doThirdPart(part2Result);
  }

  protected abstract T doSecondPart(T firstPartResult);
  ...
} 

public class ConcreteStrategyClass<T> extends StrategySuperclass<T> {
  protected T doSecondPart(T firstPartResult) {
    // Do stuff here
  }
}
The principal idea is to shield concrete classes from the complexity or intricate orchestration of resources required to do some "large" task, by allowing them to just "slot in" the specialisation or detail that they need for their solution.

The Loan Pattern does not mandate any inheritance structure at all - the two parts of the solution could be within the same file, mixed in as traits, inherited, or composed together. It is particularly excellent at protecting limited/valuable/scarce resources that have some kind of lifecycle where they should be closed/returned/de-allocated after use. Here's an example that I gave as an answer to a Stack Overflow problem related to closing resources:

Here's the loan "provider" for want of a better term:
import java.io.{File, FileWriter, PrintWriter}

def withPrintWriter(dir:String, name:String)(f: (PrintWriter) => Any) {
  val file = new File(dir, name)
  val writer = new FileWriter(file)
  val printWriter = new PrintWriter(writer)
  try {
    f(printWriter)
  }
  finally {
    printWriter.close()
  }
}
Which you use like this, as a "consumer":
withPrintWriter("/tmp", "myFile") { printWriter =>
  printWriter.write("all good")
}
Scala makes this kind of anonymous-function goodness really easy to both write and use. I've been using something similar in Specs2 tests recently for things like:
  • Database connections. Borrow one, give it back at the end, no matter what happened
  • Working directories. The provider makes sure the dir is empty, gives to the consumer, and then empties it out again at the end, just to be sure
  • System properties. This is a really nice pattern for this hard-to-unit-test situation. Set it, call the test function, then clear it out again. Just make sure your tests are both isolated and sequential to avoid unpleasant inter-test interference - see the sketch below
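For instance, a system-property "loan" might look something like this - a minimal sketch with an illustrative property name, rather than the exact code from my specs:

// Set the property, lend it to the test function, then restore whatever was there before
def withSystemProperty[T](key: String, value: String)(f: => T): T = {
  val previous = Option(System.getProperty(key))
  System.setProperty(key, value)
  try {
    f
  } finally {
    previous match {
      case Some(old) => System.setProperty(key, old)
      case None      => System.clearProperty(key)
    }
  }
}

// Usage in a test - "config.resource" is just an illustrative property name:
withSystemProperty("config.resource", "test.conf") {
  // ... exercise the code that reads the property ...
}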