Wednesday, 8 October 2014

Walking away from Run@Cloud. Part 1: Finding A Worthy Successor

In very disappointing news, last month CloudBees announced that they would be discontinuing their Run@Cloud service, a facility I have been happily using for a number of years.

I was using a number of CloudBees services, namely:
  • Dev@Cloud Repos - Git repositories
  • Dev@Cloud Builds - Jenkins in the cloud
  • Run@Cloud Apps - PaaS for hosted Java/Scala apps
  • Run@Cloud Database Service - for some MySQL instances
  • MongoHQ/Compose.io ecosystem service - MongoDB in the cloud

In short, a fair bit of stuff:
... the best bit being of course, that it was all free. So now I'm hunting for a new place for most, if not all, of this stuff to live, and run, for $0.00 or as close to it as conceivably possible. And before you mention it, I've done the build-it-yourself, host-it-yourself thing enough times to know that I do not ever want to do it again. It's most definitely not free for a start.

After a rather disappointing and fruitless tour around the block, it seemed there was only one solution that encompassed what I consider to be a true "run in the cloud" offering, for JVM-based apps, at zero cost. Heroku.

Wednesday, 10 September 2014

Why Clojure is fascinating me

So I've hopped aboard the Clojure boat, as it's the preferred implementation language for "new stuff" at work.

And I'm liking it. A lot. Possibly because of the way we're using it (microservices), but probably just intrinsically, it is a language that seems to fit in the head very nicely. Not encumbered by special cases, exceptions, implicit magic and overloads. (Don't worry, I still enjoy Scala, but it's a very different Kettle[Either[Throwable, Map[String, Fish]]]).

The succinctness and elegance of Clojure is also thrown into sharp relief by the other thing I seem to be spending a lot of time on at work - grinding through a multi-hundred-thousand-line, instant-legacy, untested Java codebase. This thing might have been considered state-of-the-art ten years ago, when it was all about 3-tiered systems putting messages on busses - iff it had been implemented nicely, but it wasn't. As a result, it's a monolithic proliferation of POJO-manipulation, with control flow by exceptions and mutable state throughout, and it's impossible to test in isolation.

It can take hours to find code that actually "does something", because you have to follow the path(s) all the way down from "the top", just in case there's a bug or "hidden feature" somewhere along the way, through myriad layers of methods that look like this (anonymised somewhat):
  public List getAllFoo(Integer primaryId, Short secondaryId, String detail, Locale locale,
      String timeZone, String category) {

    if (category != null) {
      Map foosMap = ParameterConstants.foosMap;
      if (foosMap != null) {
        category = (foosMap.get(category.toUpperCase()) != null) ? foosMap.get(category.toUpperCase()) : category;
      }
    }
    List values = new ArrayList();
    FooValue searchValue = new FooValue();
    List fooValues = null;
    searchValue.setPrimaryID(primaryId);
    searchValue.setSecondaryId(secondaryId);
    searchValue.setCategory(category);

    try {
      LOGGER.info(CommonAPILoggingConstants.INF_JOBTYPE_GETALL_VALIDATION_COMPLETED);
      fooValues = fooDAO.getFoos(searchValue, detail);
    } catch (FooValidationException e) {
      handleException(e.getErrorId(), e);
    } catch (Exception e) {
      throw new InternalAPIException(UNKNOWN_CODE, e);

    }
    if (FULL.equalsIgnoreCase(detail)) {
      for (FooValue fooValue : fooValues) {
        Bar bar = null;
        try {
          if (StringUtils.isNotBlank(fooValue.getBarID())) {
            bar = barDAO.getBarByBarId(fooValue.getBarID());
            fooValue.setBarName(bar.getBarName());
            fooValue.setBarShortName(bar.getShortName());

            LOGGER.debug(CommonAPILoggingConstants.DBG_JOBTYPE_GETALL_FETCH_BAR_BY_ID,
                                bar.getBarName(),fooValue.getBarID());
          }
        } catch (Exception e) {
          throw new InternalAPIException(UNKNOWN_CODE, e);
        }

        try {
          if (null != bar) {
            if (StringUtils.isNotBlank(bar.getBrandID())) {
              fooValue.setBazID(bar.getBazID());
                            Baz baz = bazDAO.getBazByBazId(fooValue.getBazID());
              LOGGER.debug(CommonAPILoggingConstants.DBG_JOBTYPE_GETALL_FETCH_BAZ,
                                    baz.getName(),fooValue.getBazID());
              fooValue.setBazName(baz.getName());
            }
          }
        } catch (Exception e) {
          throw new InternalAPIException(UNKNOWN_CODE, e);
        }

        FooValue value = filterFooDetails(fooValue);
        values.add(value);
      }
    } else if (BASIC.equalsIgnoreCase(detail)) {

      for (FooValue fooValue : fooValues) {
        FooValue value = new FooValue();
        value.setFooID(fooValue.getFooID());
        value.setJobName(fooValue.getJobName());
        value.setContentTypeName(fooValue.getContentTypeName());
        value.setCategory(fooValue.getCategory());
        value.setIsOneToMany(fooValue.getIsOneToMany());
        values.add(value);
      }
    } else {
      throw new CommonAPIException(INVALID_DETAIL_PARAM,"Detail parameter value invalid");
    }
    return values;
  }
This is everywhere. The lines that get me most annoyed are things like this:
            fooValue.setBarName(bar.getBarName());
            fooValue.setBarShortName(bar.getShortName());
These x.setFoo(y.getFoo()) stanzas can go on for tens of lines. I haven't come across a name for them, so I'll call them POJO Shuffles. They suck the will-to-live out of anyone who has to navigate them as they frequently contain misalignments, micro-adjustments and hard-coding e.g.:
            fooValue.setBarName(bar.getBazName());
            fooValue.setBarShortName("Shortname: " + bar.getShortName());
            fooValue.setBarLongName(bar.getShortName().toUpperCase());
Did you notice:
  • We're actually getting bazName from bar - almost certainly an autocomplete fail, but perhaps not?
  • The "short name" of fooValue will actually be longer than in the source object. Is that important to something?
  • There's a potential NullPointerException when we innocently try to set the "long name" of the fooValue
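
For contrast, here's roughly what that transfer looks like when the data lives in immutable value objects with named fields. The Bar/BarSummary shapes below are hypothetical - invented for illustration, not lifted from the real codebase - but they show that a typo like bar.bazName simply won't compile, and that an Option forces you to decide up-front what a missing short name means, instead of NPE-ing at runtime:

```scala
// Hypothetical case-class shapes, invented for illustration only
case class Bar(barName: String, shortName: Option[String])

case class BarSummary(barName: String, barShortName: String, barLongName: String)

// One expression, no mutation. Named arguments make any misalignment
// obvious, and a mistyped field name is a compile error, not a latent bug.
def summarise(bar: Bar): BarSummary =
  BarSummary(
    barName      = bar.barName,
    barShortName = bar.shortName.getOrElse(""),
    barLongName  = bar.shortName.map(_.toUpperCase).getOrElse("")
  )
```

The "short name longer than the long name" class of bug still needs a human reviewer, of course - but at least it's now the only thing left to review.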


Then I read this gem of a paragraph from Rich Hickey, which is merely an introduction to the usage of defrecord in the official Clojure documentation, and yet reads like poetry when you've just come from code like the above:

It ends up that classes in most OO programs fall into two distinct categories: those classes that are artifacts of the implementation/programming domain, e.g. String or collection classes, or Clojure's reference types; and classes that represent application domain information, e.g. Employee, PurchaseOrder etc. It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.
Rich Hickey

Tuesday, 26 August 2014

Fun with Scala - Post-Patterns Patterns, Part 1 - Loan Star

Are the original Software Design Patterns dead?

Seriously, aside from perhaps Builder, the dreaded Singleton, Model-View-Controller or its hipster cousin Model-View-ViewModel, when was the last time you saw one of the Gang of Four's patterns used in a new project? Even the direct use of an Iterator is borderline bad practice nowadays!

I'm thinking that in these days of maximal code-avoidance (and these are great days - less code is always better code in my opinion), the sheer amount of overhead required to implement most of these patterns is a big turn-off. It's not quite "boilerplate", that word that implies so much burden these days, but it is definitely Not Fun to churn out all those interfaces and abstract classes that do very little aside from giving you that apparently-vital level of indirection, which so often ends up being nothing more than a level of annoyance.

But I'm in no doubt that a new generation of post-Patterns design patterns has started to arrive, as more powerful, expressive languages enable formations of code that Gamma et al. could only have dreamt of. Over the next little bit I'm going to explore a couple of nice ones that I've come across:

The Loan Pattern

... is actually the Strategy pattern but without the dreaded inheritance requirement - to refresh, here's a micro-Strategy example:
abstract class StrategySuperclass<T> {
  
  public T doSomethingIntricateInThreePartsWherePartTwoVaries() {
    T part1Result = doFirstPart();
    T part2Result = doSecondPart(part1Result);
    return doThirdPart(part2Result);
  }

  protected abstract T doSecondPart(T firstPartResult);
  ...
} 

public class ConcreteStrategyClass<T> extends StrategySuperclass<T> {
  protected T doSecondPart(T firstPartResult) {
    // Do stuff here
  }
}
The principal idea is to shield concrete classes from the complexity or intricate orchestration of resources required to do some "large" task, by allowing them to just "slot in" the specialisation or detail that they need for their solution.

The Loan Pattern does not mandate any inheritance structure at all - the two parts of the solution could be within the same file, mixed in as traits, inherited, or composed together. It is particularly excellent at protecting limited/valuable/scarce resources that have some kind of lifecycle where they should be closed/returned/de-allocated after use. Here's an example that I gave as an answer to a Stack Overflow problem related to closing resources:

Here's the loan "provider" for want of a better term:
import java.io.{File, FileWriter, PrintWriter}

def withPrintWriter(dir: String, name: String)(f: (PrintWriter) => Any) {
  val file = new File(dir, name)
  val writer = new FileWriter(file)
  val printWriter = new PrintWriter(writer)
  try {
    f(printWriter)
  } finally {
    printWriter.close()
  }
}
Which you use like this, as a "consumer":
withPrintWriter("/tmp", "myFile") { printWriter =>
  printWriter.write("all good")
}
Scala makes this kind of anonymous-function goodness really easy to both write and use. I've been using something similar in Specs2 tests recently for things like:
  • Database connections. Borrow one, give it back at the end, no matter what happened
  • Working directories. The provider makes sure the dir is empty, gives to the consumer, and then empties it out again at the end, just to be sure
  • System properties This is a really nice pattern for this hard-to-unit-test situation. Set it, call the test function, then clear it out again. Just make sure your tests are both isolated and sequential to avoid unpleasant inter-test interference
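
That last one is worth sketching out, because the restore-on-exit behaviour is the whole point. The names below are my own (this isn't lifted from my actual Specs2 helpers), but the shape is the same as withPrintWriter above:

```scala
// A loan provider for a temporarily-set system property.
// Restores the previous value (or clears the key) no matter what f does.
def withSystemProperty[T](key: String, value: String)(f: => T): T = {
  val previous = Option(System.getProperty(key))
  System.setProperty(key, value)
  try {
    f
  } finally {
    previous match {
      case Some(old) => System.setProperty(key, old)
      case None      => System.clearProperty(key)
    }
  }
}
```

As before, the consumer just supplies a block: withSystemProperty("some.flag", "on") { runTheTest() } - and the property is guaranteed to be back the way it was afterwards, even if the test throws.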

Wednesday, 6 August 2014

Scala by Stealth part 2: Scala-powered tests

Testing from Scala

Now for the fun part - we get to write some Scala!

It may turn out that this ends up being the end of the Scala road at your shop, due to restrictive policies about production code. That's a shame, and it could take a very long time to change. I know of one large company where "Java for production code, Scala for tests" has been the standard for several years now. Sure, it's not perfect, but it's better than nothing, and developers who haven't yet caught the Scala bug can learn it in their day job.

The tests you write may eventually be the only unit tests for this code, so I would strive for complete coverage rather than merely a copy of the "legacy" Java-based tests. For the purposes of measuring this coverage I can highly recommend the jacoco4sbt plugin which is simple to get going, well-documented and produces excellent output that makes sense in Scala terms (some other Java-based coverage tools seem to struggle with some of the constructs the Scala compiler emits).
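Wiring jacoco4sbt in is only a couple of lines - one in project/plugins.sbt and one in build.sbt. The version number below is just the one from memory at the time of writing, so check the plugin's documentation for the current coordinates:

```scala
// project/plugins.sbt - version from memory; check the jacoco4sbt docs
addSbtPlugin("de.johoop" % "jacoco4sbt" % "2.1.6")

// build.sbt - pull the plugin's settings into the build, then run `sbt jacoco:cover`
import de.johoop.jacoco4sbt.JacocoPlugin.jacoco

jacoco.settings
```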

In addition to (possibly) getting introduced to Scala and also learning the basics of writing a specs2 test, you might even discover that your code under test is a little tricky to test from this new perspective. This is a good thing, and if it encourages a bit of mild refactoring (while keeping both Java- and Scala-based unit tests passing of course) then so much the better.

Once you've got some solid, measurable coverage from the Scala side (I like to shoot for 90+% line coverage), it's time to commit those changes again, and push them to your CI build. If you haven't already, install the JaCoCo Plugin for Jenkins so you can get pretty coverage graphs on the project page, and even automatically fail the build if coverage drops below your nominated threshold(s).

Switching your Jenkins build project to SBT

Speaking of which, you'll be wanting to adjust your Jenkins job (or equivalent) to push your new, somewhat-Scala-ish artifact to your Nexus (or equivalent). Firstly, for safety, I would duplicate the existing job and disable it, rather than getting all gung-ho with what is potentially a very carefully-configured, nay curated, Jenkins project configuration.

Luckily this should be pretty straightforward if you employ the Jenkins SBT Plugin - set the Actions to something like clean jacoco:cover publish to get the optimal blend of cleanliness, test-coverage visualisation, speed, and build traceability.

If for any reason you can't use the plugin, I'd recommend using your CI tool's Run script functionality, and including a dead-simple shell script in a suitable place in your repository; e.g.:

#!/bin/bash
echo "Running `which sbt`"
sbt -Dsbt.log.noformat=true -Dbuild.version=$BUILD_NUMBER clean jacoco:cover publish

Once you've got everything sorted out and artifacts uploading, you'll notice that your Nexus now has a new set of artifacts alongside your old Java ones, with a _2.10 (or whatever Scala version you're running) suffix. Scala in your corporate repo! Progress!

Wednesday, 18 June 2014

Scala By Stealth, Part 1: SBTifying your Mavenized Build

I was faced with updating and extending some old Java code of mine recently, and it seemed like much more of a chore than it used to. The code in question does a lot of collection manipulation, and I was looking at the Java code (which was, if I say so myself, not too bad - clean, thoroughly-tested and using nice libraries like Google Guava where at all possible) thinking "ugh - that would be a couple of lines in Scala and way more readable at the same time".

At this point I realised it would be a perfect candidate for a step-by-step guide for converting a simple Maveni[sz]ed Java library project (e.g. resulting in a JAR file artifact) to an SBT-based, Scala library.

Shortly after that I realised this could be a terrific way for a traditional "Java shop" where everything up until now has been delivered as JARs (and/or WARs) into a private Nexus to get its feet wet with Scala without having to go with a risky "big-bang" approach. An iterative migration, if you will. So let's get started!

A tiny bit of background first though - I'm not going to bother anonymising the library I'll be migrating, because I will almost certainly forget to do so somewhere in the example snippets I'll be including. So I'll say it here: the library is called brickhunter, and it's the "engine" behind the web-scraping LEGO search engine you can use at brickhunter.net. The site itself is a Java/Spring MVC/JQuery webapp that I launched in late 2012, and was the last significant bit of Java I ever wrote. It includes brickhunter.jar as a standard Maven dependency, pulling it from my private Maven repo hosted by CloudBees.

Step 0 (A Precondition): A Cared-For Maven Java Project

You need to be doing this migration for a library that has redeeming qualities, and not one that suffers from neglect, lack of test coverage, or a non-standard building process. Generally, using Maven will have made the latter difficult, but if, somehow, weird stuff is still going on, fix that. And make sure your tests are in order - comprehensive, relevant and not disabled!

Step 1: An SBTified Java Project

  • Create a new directory alongside the "legacy" project directory with a suitable name. For me, the obvious one was brickhunter-scala.
  • Now recursively copy everything under src from legacy to new. Hopefully that gets everything of importance; if not, see Step 0 and decide what should be done.
  • While a number of people have written helpers to automate the creation of a build.sbt from a pom.xml, unless you have a truly enormous number of dependencies, you're probably better off just writing it yourself. For one thing, it's the obvious entry point to the enormous world of SBT, and there's plenty to learn;
  • In a typical Maven shop you may have quite a stack of parent POMs bringing in various dependencies - I found the quickest way to get all of them into SBT style was by invoking mvn dependency:tree which for my project, gave me:
    [INFO] +- org.jsoup:jsoup:jar:1.6.1:compile
    [INFO] +- commons-lang:commons-lang:jar:2.6:compile
    [INFO] +- com.google.guava:guava:jar:11.0.1:compile
    [INFO] |  \- com.google.code.findbugs:jsr305:jar:1.3.9:compile
    [INFO] +- log4j:log4j:jar:1.2.16:compile
    [INFO] +- org.slf4j:slf4j-api:jar:1.6.4:compile
    [INFO] +- org.slf4j:slf4j-log4j12:jar:1.6.4:compile
    [INFO] +- com.themillhousegroup:argon:jar:1.1-SNAPSHOT:compile
    [INFO] +- org.testng:testng:jar:6.3.1:test
    [INFO] |  +- junit:junit:jar:3.8.1:test
    [INFO] |  +- org.beanshell:bsh:jar:2.0b4:test
    [INFO] |  +- com.beust:jcommander:jar:1.12:test
    [INFO] |  \- org.yaml:snakeyaml:jar:1.6:test
    [INFO] +- org.mockito:mockito-all:jar:1.9.0:test
    [INFO] \- org.hamcrest:hamcrest-all:jar:1.1:test
    
  • Anything transitive (i.e. indented once or more) can be omitted as SBT will work that out for us just as Maven did.
  • The eagle-eyed might notice an in-house dependency (argon) which clearly isn't going to be found in the usual public repos - it will need its own resolver entry in build.sbt.
  • Here's how mine looked at this point:
    name := "brickhunter-scala"
    
    organization := "com.themillhousegroup"
    
    version := "0.1"
    
    scalaVersion := "2.10.3"
    
    credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
    
    resolvers += "tmg-private-repo" at "https://repository-themillhousegroup.forge.cloudbees.com/private/"
    
    libraryDependencies ++= Seq(
      "org.jsoup"             % "jsoup"           % "1.6.1",
      "commons-lang"          % "commons-lang"    % "2.6",
      "com.google.guava"      % "guava"           % "11.0.1",
      "log4j"                 % "log4j"           % "1.2.16",
      "org.testng"            % "testng"          % "6.3.1"         % "test",
      "org.mockito"           % "mockito-all"     % "1.9.0"         % "test",
      "com.themillhousegroup" % "argon"           % "1.1-SNAPSHOT"  % "test"
    )
    
  • At this point, firing up SBT and giving it a compile command should be successful. If so, pat yourself on the back, and commit all pertinent files in source control. This is a good milestone!


Step 2: A Tested SBTified Java Project

  • Compiling is all very well but you can't really be sure your SBT-ification has been a success until all the tests are passing, just like they did in Maven. They did all pass in Maven, didn't they?
  • Here's where I hit my first snag, as my Java tests were written using the TestNG framework, which SBT has no idea how to invoke. And thus, the brickhunter-scala project gets its first plugin, the sbt-testng-interface.
  • But now when running sbt test, instead of "0 Tests Found", I get a big stack trace - the plugin is expecting to find a src/test/resources/testng.yaml and I don't have one, because Maven "just knows" how to run a load of TestNG-annotated tests it finds in src/test/java, and I've never needed to define what's in the default test suite.
  • The fix is to create the simplest possible testng.yaml that will pick up all the tests:
    name: BrickhunterSuite
    threadCount: 4
     
    tests:
      - name: All
        packages:
        - com.themillhousegroup.brickhunter
    
  • And now we should have the same number of tests running as under Maven, and all passing. Commit all the changes!
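
For reference, pulling in the TestNG support was itself a one-liner in project/plugins.sbt. The coordinates and version below are from memory, so double-check them against the sbt-testng-interface README:

```scala
// project/plugins.sbt - hypothetical version; see the plugin's README
addSbtPlugin("de.johoop" % "sbt-testng-plugin" % "3.0.2")
```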


Next time: Publishing the new artifact to your private repository.

Friday, 6 June 2014

Tascam FireOne hardware buttons and GarageBand

Another blatant Google-troll here but hopefully it'll help someone else out there.

As mentioned elsewhere I use a Tascam FireOne Firewire Audio Interface when I make music with GarageBand, and it works pretty well.
Side note for even more karma: There are times when it doesn't work well (particularly on OSX Mavericks) and I humbly present my fixes which seem to work - mostly variations on the classic "turn it off and on again" trick:
  • Mac doesn't "see" the FireOne - Check Thunderbolt-to-Firewire adaptor is snug, unplug-replug.
  • Mac sees FireOne, FireOne seems dead - Unplug-replug.
  • Mac sees FireOne, FireOne lights and meters working, no sound - Mash both PHANTOM buttons at the same time. This seems to (probably not by design!) cause a hardware soft-ish reset and audio should ensue.

But I digress. One of the nice things about the FireOne is the hardware control surface it offers. Now ideally you're running Pro Tools or some other very nice, very expensive DAW where the FireOne's buttons Just Work but if, like me, your needs are actually met quite nicely by GarageBand (not to mention its price), then you'll be wanting to get those buttons going in GB. Because they most certainly don't by default.

Sadly, you won't be able to map all the FireOne's buttons to GB functions, but the most important ones can be done. Firstly, download GarageRemote, a very simple, but nicely done System Preferences extension thingy. Install it, and turn on its "Listener" functionality so it can do its thing. Then, you'll need to customise the MIDI message mapping as follows:

[Screenshot: the GarageRemote MIDI message mapping settings]

I diagnosed the MIDI messages that the FireOne sends by using the free Snoize MIDI Monitor utility. Here's the full list, in case you want to tune your setup:
FireOne Hardware Control      MIDI Message Bytes
<<                            90 5B 7F
>>                            90 5C 7F
[]                            90 5D 7F
>                             90 5E 7F
O                             90 5F 7F

F1                            90 36 7F
F2                            90 37 7F
F3                            90 38 7F
F4                            90 39 7F
F5                            90 3A 7F
F6                            90 3B 7F
F7                            90 3C 7F
F8                            90 3D 7F

Jogwheel CW (Slowest)         90 3C 01
Jogwheel CW (Slow)            90 3C 02
Jogwheel CW (Medium)          90 3C 03
Jogwheel CW (Fast)            90 3C 04
Jogwheel CW (Fastest)         90 3C 05

Jogwheel CCW (Slowest)        90 3C 41
Jogwheel CCW (Slow)           90 3C 42
Jogwheel CCW (Medium)         90 3C 43
Jogwheel CCW (Fast)           90 3C 44
Jogwheel CCW (Fastest)        90 3C 45

SHIFT (on its own)            90 46 7F
Weirdly, using SHIFT + other keys doesn't actually change the MIDI message that is sent, making it pretty useless for our purposes. I'd sure love to get my hands on that GarageRemote source code and support more buttons!

Wednesday, 21 May 2014

Easy artifact uploads: MacOS to WebDAV Nexus

I posted a while back about using the WebDAV plugin for SBT to allow publishing of SBT build artifacts to the CloudBees repository system.

Well it turns out that this plugin is sadly not compatible with SBT 0.13, so it's back to the drawing board when publishing a project based on the latest SBT.

Luckily, all is not lost. Hinted at in a CloudBees post, your repositories are available via WebDAV at exactly the same location you use in your build.sbt to access them, but via https.

And the MacOS Finder can mount such a beast automagically via the Connect To Server (Command-K) dialog - supply your account name (i.e. the word in the top-right of your CloudBees Grand Central window) rather than your email address, and boing, you've got a new filesystem mounted, viz:

[Screenshot: the CloudBees repository mounted as a WebDAV volume in the Finder]

The only thing the WebDAV plugin actually did was create a new directory (e.g. 0.6) on-demand - so if you simply create the appropriately-named "folder" via the MacOS Finder, a subsequent, completely-standard SBT publish will work just fine.

You might even want to create a whole bunch of these empty directories (e.g. 0.7, 0.8, 0.9) while you're in there, so you don't get caught out if you decide to publish on a whim from somewhere else.