Friday, 16 October 2015

Cloudy Continuous Integration: BitBucket-to-Jenkins-to-Heroku[-to-Heroku] - Part 1

I can't quite believe it, but it's almost a year since I moved away from an entirely-CloudBees-based build-and-deploy chain to a far more higgledy-piggledy, yet much more satisfactory, best-of-breed chain.

In that time this setup has built a helluva lot of software, both open-source libraries and closed-source moonlighting apps, and I've learnt a helluva lot too. Time to share.

John's Continuous Integration Rules

No Polling

The pipeline/flow should kick off the instant something is pushed to master. Waiting 59 seconds because we just missed the poll is wasteful. If we're using a modern source-control system, there is absolutely no reason to poll it periodically for changes. It's the 21st century; last century's batch-processing techniques aren't useful here.
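As a concrete sketch of the "poke" (hostnames and repo URL are placeholders, and this assumes the Jenkins Git plugin): a BitBucket POST/webhook can hit the plugin's notifyCommit endpoint, and any job referencing that repo (with polling enabled but no schedule) builds immediately:

curl "https://jenkins.example.com/git/notifyCommit?url=git@bitbucket.org:myteam/myapp.git"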

Clean, Tagged Builds

The build must begin with a clean to ensure repeatability. Each and every successful build should be appropriately tagged so that the correlation between git commit ID and Jenkins build number is evident.
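A rough sketch of what this can look like in a Jenkins shell build step (BUILD_NUMBER and GIT_COMMIT are standard Jenkins/Git-plugin environment variables; the tag format is just an example):

git clean -fdx                       # start from a pristine workspace
# ... build and test ...
git tag -a "jenkins-build-${BUILD_NUMBER}" -m "Jenkins build ${BUILD_NUMBER}" "${GIT_COMMIT}"
git push origin "jenkins-build-${BUILD_NUMBER}"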

Versioning Includes Build Configuration

Things go wrong. Jenkins configurations get accidentally broken. It should be just as easy to roll back a misconfigured job as it is to roll back a bad code change.
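One low-tech way to get there (a sketch only - the URL and job name are placeholders, and a secured instance will need credentials) is to snapshot each job's config.xml into version control alongside the code it builds:

curl -s "https://jenkins.example.com/job/my-app-build/config.xml" -o ci/my-app-build.config.xml
git add ci/my-app-build.config.xml
git commit -m "Snapshot Jenkins config for my-app-build"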

If It Passes The Tests, It's In test

Yes, the test environment will be volatile, but as long as the tests are good, it will be good-volatile, aka latest-and-greatest. This puts the onus on developers to write comprehensive, meaningful tests. The test environment should be a glittering showcase of all the awesome that is about to hit prod.

Fully-automated push-button test/staging-to-prod

No manual funny-business allowed. Repeatable, reliable, and (ideally) rollback-able from the Jenkins UI.

Desired Setup

The simplest thing that delivers to the target environments and abides by the above rules:

            [User] 
              |
      commits to master 
              |
              v
         [BitBucket]
              |
            pokes
              |
              v
  [Secured Jenkins Instance] 
              |
            pushes to
              |
              v
        [Heroku TEST/STAGING Env]
               |
         (manual trigger)
              |
              v
       [Heroku PROD Env]       

How To Make It Happen

Sound good? Stand by for Part 2 where all is revealed...

Thursday, 27 August 2015

SSH Tunnels: The corporate developer's WD40 + Gaffer Tape



So at my current site the dreaded Authenticating Proxy policy has been instituted - one of those classic corporate network-management patterns that may make sense for the 90% of users with their locked-down Windows/Active Directory/whatever setups, but make life a miserable hell for those of us playing outside on our Ubuntu boxes.

In a nice display of classic software-developer passive-aggression we've been keeping track of the hours lost due to this change - we're up to 10 person-days since the policy came in 2 months ago. Ouch.

The problems are mainly due to bits of open-source software that simply haven't had to deal with such proxies; the result is that Jenkins build boxes and other "headless" (or at least "human-less") devices run into horrendous problems.

I got super-tied-up today trying to get one of these build boxes to install something via good old apt-get on Ubuntu. In the end I used one of my favourite old tricks to get the job done: an SSH tunnel backchannel through the proxy that my dev box has already authenticated with.

Here's how it goes:
Preconditions:
  • dev-box is my machine, which is happily using the authenticated proxy via some other mechanism (e.g. kinit)
  • build-box is a build slave that is unable to use apt-get due to proxy issues (e.g. 407 Proxy Authentication Required)
  • proxy-box is the authenticating proxy, listening on port 8080



 proxy-box            dev-box            build-box
    ---                 ---                ---
    | |                 | |                | |
    | |                _____               | |
    | 8080    < < <    _____    < < <   7777 |
    | |                 | |                | |
    | |                 | |                | |
    ---                 ---                ---
     
   


From dev-box (the -R flag asks build-box's sshd to listen on port 7777 and forward anything it receives back through dev-box to proxy-box:8080):
ssh build-box -R7777:proxy-box:8080

Welcome to build-box
> sudo vim /etc/apt/apt.conf
... and create/modify apt.conf as follows:
Acquire::http::proxy "http://localhost:7777/";
At which point, apt-get should start working, via your own machine (and your proxy credentials). Once you're done, you may want to revert your change to apt.conf, or you could leave it in place with a comment explaining how and why it has been set up like this (or just link to this post!).
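While the tunnel is up, the same trick can rescue other proxy-unaware tools on build-box - a sketch:

export http_proxy=http://localhost:7777
export https_proxy=http://localhost:7777
curl -I http://archive.ubuntu.com/ubuntu/     # quick sanity check that the backchannel works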

Friday, 17 July 2015

Strongly-Typed Time. Part 3: Application

In the previous instalments of this little series, I looked at the motivation for and design choices behind a Scala library to use the type system to eliminate (or at least massively reduce) the incidence of errors when dealing with times here on planet Earth.

As is often the case with these things, the motivator was a real-life project that would benefit from such a library. While I can't share the code of that project, the library is up on GitHub right now, and you can add the JAR as a dependency to your SBT-driven project in both Scala 2.10 and 2.11 flavours.

So what did I miss when going from an idealised, clean-slate design to something a real-life application can use?

Quite a lot. Let's have a look.

Once you're strongly-typed anywhere, you have to be strongly-typed everywhere

Previously, I was representing instants and times using a mixture of Long and org.joda.time.DateTime. I couldn't believe how quickly switching to TimeInZone[TZ] resulted in that [TZ] getting into everything - for better or worse.

Iteration 1 - Naive, timezone-less DateTimes:

case class CarRace ( location: String, startTime:DateTime, endTime:DateTime  )

// We forgot a timezone - now we'll get whatever the server defaults to... 
val localRaceStart = new DateTime(2015, 7, 5, 13, 0) // July 5, 2015, 1pm
val localRaceEnd = new DateTime(2015, 7, 5, 15, 0)   // July 5, 2015, 3pm 

// Highly likely to be WRONG:
val silverstoneGrandPrix = CarRace( "Silverstone", localRaceStart, localRaceEnd )

Iteration 2 - Let's try to strongly-type the times:

case class CarRace ( location: String, 
                     startTime:TimeInZone[_ <: TimeZone], 
                     endTime:TimeInZone[_ <: TimeZone] ) 
This instance happens to be correct, but CarRace doesn't enforce that races always start and end in the same timezone:
val britishGrandPrix = CarRace( "Silverstone", 
                                TimeInZone[London](localRaceStart), 
                                TimeInZone[London](localRaceEnd))
So we could end up with this (a half-length race):
val brokenGrandPrix = CarRace( "Silverstone", 
                               TimeInZone[London](localRaceStart), 
                               TimeInZone[Paris](localRaceEnd))

Iteration 3 - We have to enforce the timezone in the parent object:

case class CarRace[TZ <: TimeZone] ( location: String, 
                                     startTime:TimeInZone[TZ], 
                                     endTime:TimeInZone[TZ] )
So now we get good compile-time safety:
// This won't compile now!
val brokenGrandPrix = CarRace( "Silverstone", 
                               TimeInZone[London](localRaceStart), 
                               TimeInZone[Paris](localRaceEnd)) // error: type mismatch;
... but now we have to lug [TZ] around everywhere...
val raceSeason = List.empty[CarRace] // error: class CarRace takes type parameters
... and worse still, we often have to wildcard the "strong" type to actually Get Stuff Done:
val raceSeason = List.empty[CarRace[_ <: TimeZone]] // So what was the point of all this again???

Types are great - if you know them at compile-time

Although I knew the timezones that some events would be occurring in, there's no way to know all of the timezones of all of the things:
// Seems reasonable enough ...
case class RaceWatcher[TZ <: TimeZone](name:String, watchingFrom:TZ)
OK so let's create and use a RaceWatcher to find out when somebody needs to tune into a race in their timezone:

// Assume this gets passed in from the user's browser, or maybe preferences
val tz = TimeZone("America/New_York")

val chuck = RaceWatcher("Chuck", tz)


// Let's find out when Chuck needs to turn on his TV:
val switchOnAt = britishGrandPrix.startTime.map[chuck.watchingFrom] 
// => error: type watchingFrom is not a member of RaceWatcher
 
We can't do that (without reflection), so we end up stringly-typed instead of strongly-typed; I had to add map(javaTimeZoneName: String) alongside the initial, "clean" map[TZ]:
val switchOnAt = britishGrandPrix.startTime.map(chuck.watchingFrom.name)
// => TimeInZone[New_York] UTC: '2015-07-05T12:00:00.000Z' UTCMillis: '1436097600000' Local: '2015-07-05T08:00:00.000-04:00'


Timezones are still hard

My final observation relates more to the domain of time than to the use of types. Timezones are still a mind-bender, and you still have to concentrate while working in this area. Types can prevent obvious mismatches in assignments or parameters, but at the end of the day the developer still needs to build up a mental picture of what they need to get done.

I will regard my first outing of Arallon as a success though - most of the runtime problems I encountered in this first application were in the area of time ranges rather than point-in-time errors. Which is why the next focus of Arallon will be type-safe representations of the concept (there's a rough sketch after this list):
  • TimeSpanInZone - such as would be perfect for my car-race example above; and
  • DayInZone - where a midnight-to-midnight 24-hour period in a timezone is the prime focus
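To give a flavour of the first of those, here's a very rough, purely hypothetical sketch - none of this exists in Arallon yet, and the names are invented:

  // Hypothetical sketch only - not (yet) part of Arallon
  case class TimeSpanInZone[TZ <: TimeZone](start: TimeInZone[TZ], end: TimeInZone[TZ])

  // The CarRace example from above then collapses to a single, zone-consistent field:
  case class CarRace[TZ <: TimeZone](location: String, raceTime: TimeSpanInZone[TZ])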

Wednesday, 27 May 2015

Post-Patterns Patterns Part 2 - Partial at my house

So in my sausagefactory library, my first attempt at adding an extension mechanism was very Java-esque.
trait FieldConverter {
  def convert(targetType: Type, value: Any): Any
}
The companion object for CaseClassConverter supplied a default FieldConverter if you didn't provide your own:
object CaseClassConverter {

  def apply(converter: => FieldConverter = new DefaultFieldConverter)

  class DefaultFieldConverter extends FieldConverter {
    def convert(targetType: Type, value: Any): Any = value // just pass everything straight through
  }
}
Ye gads, look at the boilerplate! All to wrap a simple function! And there's still huge scope to get it wrong, because if you do supply a custom FieldConverter, it's you who handles all conversions from then on, even though you really don't care about most of them. So you need an if for everything to work properly:
class MyFieldConverter extends FieldConverter {

  def convert(targetType: Type, value: Any): Any = {
    if (specialCase) {
      ... // do special conversion
    } else {
      // Do the normal conversion
      value
    }
  }
}

That sucks.

So, following a nice little nugget I found in Effective Scala, I refactored the whole extension mechanism to use PartialFunctions, like this:
Make FieldConverter a type alias
  type FieldConverter = PartialFunction[(Type, Any), Any]
Chain up a user's custom FieldConverter with the default one
  val exhaustiveConverter:FieldConverter = userConverter orElse defaultConverter
Scala will check whether the userConverter is defined at a given input - if not, it'll fall through to the defaultConverter - perfect.
Now custom converters are simple case blocks
  val alwaysMakeJavaLongsIntoInts: FieldConverter = {
    case (t: Type, v: Any) if (isInt(t) && isJLong(v.getClass)) => {
      v.asInstanceOf[Long].toInt
    }
  }
A userConverter only has to worry about converting one type of thing, and doesn't know (or care) about downstream converters. A simplified Chain of Responsibility.
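To make the chaining concrete, here's a tiny, self-contained sketch (simplified: it keys on the runtime Class rather than the Type used above, and the object/val names are mine, not the library's):

  object ConverterChainDemo extends App {
    type FieldConverter = PartialFunction[(Class[_], Any), Any]

    // The fallback: hand every value back untouched
    val defaultConverter: FieldConverter = { case (_, v) => v }

    // A user converter that only cares about one case
    val longsToInts: FieldConverter = {
      case (target, v: java.lang.Long) if target == classOf[Int] => v.intValue
    }

    // orElse wires up the Chain of Responsibility
    val exhaustiveConverter: FieldConverter = longsToInts orElse defaultConverter

    println(exhaustiveConverter((classOf[Int], java.lang.Long.valueOf(42L)))) // 42
    println(exhaustiveConverter((classOf[String], "unchanged")))              // unchanged
  }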

Monday, 4 May 2015

Strongly-Typed Time. Part 2: Design

Following on from my lightbulb moment, I tried to sketch out what I wanted from a strongly-typed system for representing timezoned instants in time.

The Look

Ever since Martin Odersky gave us Generics in Java 5, we've become comfortable with reading parameterized types like Set<String> ("Set of String") for container classes. Unsurprisingly, with the move to Odersky's Scala has come further use of parameterization; for example Try[Double] and Future[User]. Essentially, I wanted a type that looked like this. Hence:
  val pierreTime: TimeInZone[Paris]

  val johnTime: TimeInZone[Melbourne]

  val canonicalTime: TimeInZone[UTC]

Behaviour

I want wrong code to look wrong, but more than that, I want the compiler to consider it wrong too:
  val pierreTime: TimeInZone[Paris]

  def doSomethingInMelbourne(mTime: TimeInZone[Melbourne]) ...

  
  // later...
  
  doSomethingInMelbourne(pierreTime)
                         ^
[error]  type mismatch;
[error]  found   : TimeInZone[Paris]
[error]  required: TimeInZone[Melbourne]

Functional Familiarity

I'm only a tiny way down the path to true functional-programming enlightenment. Hell, I've only just started looking at Scalaz, mainly thanks to eed3si9n's excellent tutorials.

But the following patterns seem pretty sensible to me:

map from one timezone to another

The instant-in-time is unchanged, but the type changes, and the local time changes too:
  val pierreTime: TimeInZone[Paris] 
  // Local: 2015-04-29T09:15:49.739+02:00
  // Millis (UTC): 1430291749739

  val johnTime: TimeInZone[Melbourne] = pierreTime.map[Melbourne]  
  // Local: 2015-04-29T17:15:49.739+10:00
  // Millis (UTC): 1430291749739

transform the time inside the container

I can make adjustments* to the DateTime contained within the TimeInZone[T] using any Joda-Time method that returns another DateTime:
 
  val pierreTime: TimeInZone[Paris]
  // Local: 2015-04-29T09:15:49.739+02:00 

  val pierreWakeUpTime: TimeInZone[Paris] = pierreTime.transform(_.withTime(7,0,0,0))
  // Local: 2015-04-29T07:00:00.000+02:00 

  val pierreLunchTime: TimeInZone[Paris] = pierreTime.transform(_.plusHours(4))
  // Local: 2015-04-29T13:15:49.739+02:00 


(*) Everything is immutable (including within Joda-Time) so "adjustments" naturally result in a new object being returned. Do we have a word yet for "A modified copy of an immutable thing"?

Companion-object for construction

I should be able to get a TimeInZone[T] via its companion object for every conceivable scenario:
  // "Now" in whatever TZ my JVM is running in: 
  val myLocalTime: TimeInZone[TimeZone] = TimeInZone.now
  // -> TimeInZone[Melbourne] UTC: '2015-04-28T12:37:00.000Z'

  // "Now" in the given TZ: 
  val myParisTime: TimeInZone[TimeZone] = TimeInZone.now("Europe/Paris")
  // -> TimeInZone[Paris] UTC: '2015-04-28T12:37:23.000Z'

  // "Now" in UTC: 
  val myUTCTime: TimeInZone[UTC] = TimeInZone.nowUTC
  // -> TimeInZone[UTC] UTC: '2015-04-28T12:37:28.000Z'

  // If I pass millis, UTC is implied: 
  val myUTCTime: TimeInZone[UTC] = TimeInZone.fromUTCMillis(1430703466430)
  // -> TimeInZone[UTC] UTC: '2015-05-04T01:37:46.430Z'

  // Reflective methods where the desired [TimeZone] affects the result:

  // Give me "now" on the West Coast:
  val paloAlto = TimeInZone[PST]
  // -> TimeInZone[PST] UTC: '2015-05-04T01:37:56.430Z'

  // Give me "then" on the West Coast:
  val paloAltoLastWeek = TimeInZone[PST](new DateTime().minusDays(7))
  // -> TimeInZone[PST] UTC: '2015-04-28T01:37:59.430Z'



Tuesday, 28 April 2015

Strongly-Typed Time. Part 1: Rationale

I've quite recently become involved in an after-hours project that has a strong temporal component to it. Basically every interaction with the system will need to be labelled with a time, and they will constantly need to be compared and converted. Add to this the fact that the first beta customers are located on opposite sides of the Pacific, and that events can occur in a further 3 European countries, and a way to safely and unambiguously represent the time of something happening in a time and a place seems paramount.

While Joda-Time has undoubtedly made date/calendar/timezone manipulation a happier task for the JVM developer, I'm looking for something stronger. I can pass around a DateTime all day long (no pun intended) but until I inspect its TimeZone I can't be sure where it originated from, or whether it is in fact the canonical UTC.

As a result, there is nothing at compile time to stop me doing something like:
  def displayAllEventsBefore(allEvents:Seq[Event], threshold:DateTime) = {
 
    // allEvents have been normalized to UTC. But there's no way of knowing this:
    allEvents.filter(_.isBefore(threshold)).display    
  } 

  // ... much later, miles away
  val myTime = new DateTime() // Happens to be in "Europe/Paris"

  displayAllEventsBefore(events, myTime)

Which will work just fine most of the time, except when there's been an event in the last hour, when we won't see it. Or is it the other way around? Tricky, isn't it?

There's nothing in the type system to prevent these kinds of runtime problems. It comes down to developer diligence in naming/commenting/testing all the things - literally, every thing that uses a representation of time - to ensure correctness.
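For contrast, here's roughly where this series (Parts 2 and 3, above) ends up - a sketch only, but the "normalized to UTC" assumption now lives in the signature, where the compiler can police it:

  def displayAllEventsBefore(allEvents: Seq[Event], threshold: TimeInZone[UTC]) = {
    allEvents.filter(_.isBefore(threshold)).display
  }

  // ... much later, miles away
  val myTime: TimeInZone[Paris]   // however it was obtained

  displayAllEventsBefore(events, myTime)
  // error: type mismatch;
  //  found   : TimeInZone[Paris]
  //  required: TimeInZone[UTC]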

But hang on, aren't compilers really, really good at ensuring correctness?

Friday, 12 December 2014

Walking away from CloudBees Part 5 - Publishing and Fine-Tuning

Publishing private artefacts to a private Nexus repository
As per my new world order diagram, I decided to use my third and final free OpenShift node as a Nexus box, and what a great move that turned out to be. Without a doubt the easiest setup of a Nexus box I've ever experienced:
  • Log in to OpenShift
  • Click the Add Application... button
  • Scroll down to the Code Anything heading, and paste http://nexuscartridge-openshiftci.rhcloud.com/ into the URL textbox
  • Click Next, nominate the URL for the box, and wait a few minutes
Wow. More detail (if you need it) from OpenShift.
Publishing open-source artefacts to a public repository
As all of my open-source efforts are now written in Scala with SBT as the build tool, it was a simple matter to add the bintray-sbt plugin to each of them, allowing publication to BinTray, or more specifically, The Millhouse Group's little corner of it.

The only trick here was SSHing into the Jenkins Build slave (one time) and adding an ${OPENSHIFT_DATA_DIR}/.bintray/.credentials file so that an sbt publish would succeed.
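For reference, the format that file needs (from memory - double-check against the bintray-sbt README; the values are obviously placeholders) is something like:

realm = Bintray API Realm
host = api.bintray.com
user = <bintray-username>
password = <bintray-api-key>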
Deployment of webapps to Heroku
As with most things open and/or free, someone has been here before - this blog post, together with the Heroku Jenkins Plugin README, was a very good starting point for getting this all working.

In brief, the steps are:
  • Install the Heroku and Git Publisher Jenkins plugins
  • Grab your Heroku API key from your Account Settings page, and put it into Manage Jenkins -> Configure System -> Heroku -> API Key
  • Grab the details of the Heroku remote from your .git/config in your local repo, or from the "Git URL" in the Info section of your app's Settings page on Heroku.
  • Set this up as an additional Git repo in your Jenkins build, and name it heroku. For safety, I like to name my other repo (i.e. the one holding the source that triggers builds) appropriately as well; it avoids confusion.
    • Actual example:
    • I name my source repo bitbucket
    • Thus my Branch Specifier is bitbucket/master
  • Add a new Git Publisher Post-Build Action, that pushes to heroku/master when the build succeeds
Fine-tuning the OpenShift build setup
Having to do "Layer-8" timezone conversion when reading build logs is just annoying, so put the slave node into your local time zone by navigating to (Manage Jenkins -> Manage Nodes -> Slave -> Configure icon -> Launch Method -> Advanced -> JVM Options) (phew!) and setting it to:
-Duser.home=${OPENSHIFT_DATA_DIR} -Duser.timezone="Australia/Melbourne" -XX:MaxPermSize=1M -Xmx2M -Xss128k
(You might need to consult the list of Java timezone ids)

The final pieces of the puzzle were configuring the "final destinations" of my artifacts - private ones go to the Nexus box, and my open-source work gets sent to BinTray courtesy of the bintray-sbt plugin - as detailed above.

After that, a little bit of futzing around got auto-triggered builds working from both GitHub and BitBucket, and I had everything back to normal - or possibly even better. I now have unlimited app slots on Heroku versus four on CloudBees, and I'm somewhat insulated from outages of a single provider. Happy!