Wednesday, 16 March 2016

Building reusable, type-safe Twirl components

I've been doing quite a lot of work on a Play Framework 2.4.x app recently, and hit upon a little problem that others have noted as well. I'm trying to make the view layer as nice a place to be as the "main" codebase - after all, it's all Scala - and so I'm extracting anything re-usable into a components package.

Here's a simple example. I'm using Bootstrap (of course), with its table-striped class to add a little bit of interest to tabular data. The markup for an HTML table is quite verbose and definitely doesn't need to be repeated, so I started with the following basic structure:
@(items:Seq[_], headings:Seq[String] = Nil)
  <table class="table table-striped">
      @if(headings.nonEmpty) {
      <thead>
          <tr>
            @for(heading <- headings) {
                <th>@heading</th>
            }
          </tr>
      </thead>
      }
      <tbody>
        @for(item <- items) {
            <tr>
                ???
            </tr>
        }
      </tbody>
  </table>
Which neatens up the call-site from 20-odd lines to one:
  @stripedtable(userList, Seq("Name", "Age"))

Except. How do I render each row in the table body? That differs for every use case!
What I really wanted was to be able to map over each of the items, applying some client-provided function to render a load of <td>...</td> cells for each one. Basically, I wanted stripedtable to have this signature:
@(items:Seq[T], headings:Seq[String] = Nil)(fn: T => Html)
With the body simply being:
   @for(item <- items) {
      <tr>
        @fn(item)
      </tr>
   }
and client code looking like this:
  @stripedtable(userList, Seq("Name", "Age")) { user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
...aaaaand we have a big problem. At least at the time of writing, Twirl templates cannot be given type arguments, so those [T]s just won't work. Loosening off the types like this:
@(items:Seq[_], headings:Seq[String] = Nil)(fn: Any => Html)
will compile, but the call-site won't work because the compiler has no idea that the _ and the Any are referring to the same type. Workaround solutions? There are two, depending on how explosively you want type mismatches to fail:
Option 1: Supply a case as the row renderer
  @stripedtable(userList, Seq("Name", "Age")) { case user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
This works fine, as long as every item in userList is in fact a User - if not, you get a big fat MatchError.
Option 2: Supply a case as the row renderer, and accept a PartialFunction
The template signature becomes:
@(items:Seq[_], headings:Seq[String] = Nil)(fn: PartialFunction[Any, Html])
and we tweak the body slightly:
   @for(item <- items) {
      @if(fn.isDefinedAt(item)) {
        <tr>
          @fn(item)
        </tr>
      }
   }
In this scenario, we've protected ourselves against type mismatches, and simply skip anything that's not what we expect. Either way, I can't currently conceive of a more succinct, reusable, and obvious way to drop a consistently-built, styled table into a page than this:
  @stripedtable(userList, Seq("Name", "Age")) { case user:User =>
    <td>@user.name</td><td>@user.age</td>
  }
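
Putting it all together, stripedtable.scala.html in its entirety looks like this (the complete PartialFunction variant, assembled from the pieces above):

@(items: Seq[_], headings: Seq[String] = Nil)(fn: PartialFunction[Any, Html])
<table class="table table-striped">
    @if(headings.nonEmpty) {
    <thead>
        <tr>
            @for(heading <- headings) {
                <th>@heading</th>
            }
        </tr>
    </thead>
    }
    <tbody>
        @for(item <- items) {
            @if(fn.isDefinedAt(item)) {
                <tr>
                    @fn(item)
                </tr>
            }
        }
    </tbody>
</table>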

Friday, 4 March 2016

Unbreaking the Heroku Jenkins Plugin

TL;DR: If you need a Heroku Jenkins Plugin that doesn't barf when you Set Configuration, here you go.

CI Indistinguishable From Magic

I'm extremely happy with my OpenShift-based Jenkins CI setup that deploys to Heroku. It really does do the business, and the price simply cannot be beaten.

Know Thy Release

Too many times, at too many workplaces, I have faced the problem of trying to determine "Is this the latest code?" from the front end. Determined not to have this problem in my own apps, I've been employing a couple of tricks for a few years now that give excellent traceability.

Firstly, I use the nifty sbt-buildinfo plugin that allows build-time values to be injected into source code. A perfect match for Jenkins builds, it creates a Scala object that can then be accessed as if it contained hard-coded values. Here's what I put in my build.sbt:

buildInfoSettings

sourceGenerators in Compile <+= buildInfo

buildInfoKeys := Seq[BuildInfoKey](name, version, scalaVersion, sbtVersion)

// Injected via Jenkins - these props are set at build time: 
buildInfoKeys ++= Seq[BuildInfoKey](
  "extraInfo" -> scala.util.Properties.envOrElse("EXTRA_INFO", "N/A"),
  "builtBy"   -> scala.util.Properties.envOrElse("NODE_NAME", "N/A"),
  "builtAt"   -> new java.util.Date().toString)

buildInfoPackage := "com.themillhousegroup.myproject.utils"
The Jenkins Wiki has a really useful list of available properties which you can plunder to your heart's content. It's definitely well worth creating a health or build-info page that exposes these.
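
For example, a minimal Play controller exposing the generated object might look something like this (a sketch - the controller name and route are hypothetical; the fields come straight from the buildInfoKeys above):

package controllers

import play.api.mvc._
import com.themillhousegroup.myproject.utils.BuildInfo

// Needs a route like: GET /build-info controllers.BuildInfoController.buildInfo
object BuildInfoController extends Controller {

  // Expose the build-time values as plain text:
  def buildInfo = Action {
    Ok(s"""name:      ${BuildInfo.name}
          |version:   ${BuildInfo.version}
          |extraInfo: ${BuildInfo.extraInfo}
          |builtBy:   ${BuildInfo.builtBy}
          |builtAt:   ${BuildInfo.builtAt}""".stripMargin)
  }
}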

Adding Value with the Heroku Jenkins Plugin

Although Heroku works spectacularly well with a simple git push, the Heroku Jenkins Plugin adds a couple of extra tricks that are very worthwhile, such as being able to place your app into/out-of "maintenance mode" - but the most pertinent here is the Heroku: Set Configuration build step. Adding this step to your build allows you to set any number of environment variables in the Heroku App that you are about to push to. You can imagine how useful this is when combined with the sbt-buildinfo plugin described above!

Here's what it looks like for one of my projects, where the built Play project is pushed to a test environment on Heroku:

Notice how I set HEROKU_ENV, which I then use in my app to determine whether key features (for example, Google Analytics) are enabled or not.

Here are a couple of helper classes that I've used repeatedly (ooh! time for a new library!) in my Heroku projects for this purpose:

import scala.util.Properties

object EnvNames {
  val DEV   = "dev"
  val TEST  = "test"
  val PROD  = "prod"
  val STAGE = "stage"
}

object HerokuApp {
  lazy val herokuEnv = Properties.envOrElse("HEROKU_ENV", EnvNames.DEV)
  lazy val isProd = (EnvNames.PROD == herokuEnv)
  lazy val isStage = (EnvNames.STAGE == herokuEnv)
  lazy val isDev = (EnvNames.DEV == herokuEnv)
 
  def ifProd[T](prod:T):Option[T] = if (isProd) Some(prod) else None

  def ifProdElse[T](prod:T, nonProd:T):T = {
    if (isProd) prod else nonProd
  }
}
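
Usage is then a one-liner. For instance, gating the Google Analytics feature mentioned above might look like this (googleAnalyticsSnippet being a hypothetical HTML fragment):

// Only include the analytics snippet when running in production:
val maybeAnalytics = HerokuApp.ifProd(googleAnalyticsSnippet)

// Or choose between two values depending on environment:
val logLevel = HerokuApp.ifProdElse("WARN", "DEBUG")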

... And then it all went pear-shaped

I had quite a number of Play 2.x apps using this Jenkins+Heroku+BuildInfo arrangement to great success. But then at some point (around September 2015 as far as I can tell) the Heroku Jenkins Plugin started throwing an exception while trying to Set Configuration. For the benefit of any desperate Google-trawlers, it looks like this:

  at com.heroku.api.parser.Json.parse(Json.java:73)
  at com.heroku.api.request.releases.ListReleases.getResponse(ListReleases.java:63)
  at com.heroku.api.request.releases.ListReleases.getResponse(ListReleases.java:22)
  at com.heroku.api.connection.JerseyClientAsyncConnection$1.handleResponse(JerseyClientAsyncConnection.java:79)
  at com.heroku.api.connection.JerseyClientAsyncConnection$1.get(JerseyClientAsyncConnection.java:71)
  at com.heroku.api.connection.JerseyClientAsyncConnection.execute(JerseyClientAsyncConnection.java:87)
  at com.heroku.api.HerokuAPI.listReleases(HerokuAPI.java:296)
  at com.heroku.ConfigAdd.perform(ConfigAdd.java:55)
  at com.heroku.AbstractHerokuBuildStep.perform(AbstractHerokuBuildStep.java:114)
  at com.heroku.ConfigAdd.perform(ConfigAdd.java:22)
  at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
  at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:761)
  at hudson.model.Build$BuildExecution.build(Build.java:203)
  at hudson.model.Build$BuildExecution.doRun(Build.java:160)
  at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:536)
  at hudson.model.Run.execute(Run.java:1741)
  at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
  at hudson.model.ResourceController.execute(ResourceController.java:98)
  at hudson.model.Executor.run(Executor.java:374)
Caused by: com.heroku.api.exception.ParseException: Unable to parse data.
  at com.heroku.api.parser.JerseyClientJsonParser.parse(JerseyClientJsonParser.java:24)
  at com.heroku.api.parser.Json.parse(Json.java:70)
  ... 18 more 
Caused by: org.codehaus.jackson.map.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token
 at [Source: [B@176e40b; line: 1, column: 473] (through reference chain: com.heroku.api.Release["pstable"])
  at org.codehaus.jackson.map.JsonMappingException.from(JsonMappingException.java:160)
  at org.codehaus.jackson.map.deser.StdDeserializationContext.mappingException(StdDeserializationContext.java:198)
  at org.codehaus.jackson.map.deser.StdDeserializer$StringDeserializer.deserialize(StdDeserializer.java:656)
  at org.codehaus.jackson.map.deser.StdDeserializer$StringDeserializer.deserialize(StdDeserializer.java:625)
  at org.codehaus.jackson.map.deser.MapDeserializer._readAndBind(MapDeserializer.java:235)
  at org.codehaus.jackson.map.deser.MapDeserializer.deserialize(MapDeserializer.java:165)
  at org.codehaus.jackson.map.deser.MapDeserializer.deserialize(MapDeserializer.java:25)
  at org.codehaus.jackson.map.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:230)
  at org.codehaus.jackson.map.deser.SettableBeanProperty$MethodProperty.deserializeAndSet(SettableBeanProperty.java:334)
  at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:495)
  at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:351)
  at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:116)
  at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:93)
  at org.codehaus.jackson.map.deser.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
  at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2131)
  at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1481)
  at com.heroku.api.parser.JerseyClientJsonParser.parse(JerseyClientJsonParser.java:22)
  ... 19 more 
Build step 'Heroku: Set Configuration' marked build as failure
Effectively, it looks like Heroku has changed the structure of their pstable object, and the definition baked into the API JAR (a Map<String, String> in Java) no longer matches it.

Open-Source to the rescue

Although the Java APIs for Heroku have been untouched since 2012 - and indeed the Jenkins Plugin itself was announced as deprecated (without a suggested replacement) only a week ago - fortunately the whole shebang is open-source on Github, so I took it upon myself to download the code and fix this thing. A lot of swearing, further downloading of increasingly-obscure Heroku libraries, and general hacking later, not only is the bug fixed:
- Map<String, String> pstable;
+ Map<String, Object> pstable;
But there are new tests to prove it, and a new Heroku Jenkins Plugin available here now. Grab this binary, and go to Manage Jenkins -> Manage Plugins -> Advanced -> Upload Plugin and drop it in. Reboot Jenkins, and you're all set.

Friday, 26 February 2016

Making better software with Github

The first time I extracted a library from a private project and open-sourced it to Github was a purely practical decision; the project was simply getting too large for the puny build box I was using to build it with (an OpenShift free node*). The library was Arallon - you can read a bit more about what it does in my blog series about Strongly-Typed Time.

This solved my problem, in that I no longer ran out of PermGen on my build slave. But the repercussions were far-reaching. Any decent public-facing library needs documentation, and Github's README.md is an incredibly convenient place to put it all. I've lost count of the number of times I've found myself reading my own documentation up there on Github; if Arallon was still a hodge-podge of classes within my application, I'd have spent hours trying to deduce my own functionality ...

Of course, a decent open-source library must also have excellent tests and test coverage. Splitting Arallon into its own library gave the tests a new-found focus, and the test-coverage numbers (measured with JaCoCo) became much more meaningful.

Since that first library split, I've peeled off many other utility libraries from private projects; almost always things to make Play2 app development a little quicker and/or easier.

As a shameless plug, I use yet another of my own projects (I love my own dogfood!), sbt-skeleton, to set up a brand new SBT project with tons of useful defaults like dependencies, repository locations and plugins, as well as a skeleton directory structure. This helps make the decision to extract a library a no-brainer; I can have a library up-and-building, from scratch, in minutes. This includes having it build and publish to BinTray, which is simply a matter of cloning an existing Jenkins job and changing the name of the source Github repo.

I've found the implied peer-pressure of having code "out there" for public scrutiny has a strong positive effect on my overall software quality. I'm sure I'm not the only one. I highly recommend going through the process of extracting something re-usable from private code and open-sourcing it into a library you are prepared to stand behind. It will make you a better software developer in many ways.

* This is not a criticism of OpenShift; I love them and would gladly pay them money if they would only take my puny Australian dollars :-(

Monday, 11 January 2016

*Facepalm* 2016

The newest entry in my (very) occasional series of career facepalm moments comes from this new year. My current project is using Scala, Play, MongoDB and the Pac4J library for authentication/authorization with social providers like Google, Facebook, Twitter etc. It's a good library that I've used successfully on a couple of previous projects, but purely in the "select a provider to auth with" mode. For this project, I needed to use the so-called HTTP Module to allow a traditional username/password form to also be used, for people who (for whatever reason) don't want to use social login. As an aside, this does actually seem to be a reasonably significant portion of users, even though it is actually placing more trust in an "unknown" website than delegating off to a well-known auth provider like Facebook. But users will be users; I digress.

Setup for Failure

The key integration point between your existing user-access code and pac4j's form handling is your implementation of the UsernamePasswordAuthenticator interface, which is where the credentials coming from the input form get checked and the go/no-go decision is made. Here's what it looks like:
public interface UsernamePasswordAuthenticator 
    extends Authenticator<UsernamePasswordCredentials> {

    /**
     * Validate the credentials. 
     * It should throw a CredentialsException in case of failure.
     *
     * @param credentials the given credentials.
     */
    @Override
    void validate(UsernamePasswordCredentials credentials);
}
An apparently super-simple interface, but slightly lacking in documentation, this little method cost me over a day of futzing around debugging, followed by a monstrous facepalm.

Side-effects for the lose

The reasons for this method being void are not apparent, but such things are not as generally frowned-upon in the Java world as they are in Scala-land. Here's what a basic working implementation (that just checks that the username is the same as the password) looks like as-is in Scala:
object MyUsernamePasswordAuthenticator 
    extends UsernamePasswordAuthenticator {

  val badCredsException = 
    new BadCredentialsException("Incorrect username/password")

  def validate(credentials: UsernamePasswordCredentials):Unit = {
    if (credentials.getUsername == credentials.getPassword) {
      credentials.setUserProfile(new EmailProfile(credentials.getUsername))
    } else {
      throw badCredsException
    }
  }
}
So straight away we see that on the happy path, there's an undocumented, incredibly-important side-effect that is needed for the whole login flow to work - the Authenticator must mutate the incoming credentials, populating them with a profile that can then be used to load a full user object. Whoa. That's three pretty-big no-nos just in that description! The only way I found out about this mutation requirement was by studying some test/throwaway code that also ships with the project.

Not great. I think a better Scala implementation might look more like this:
object MyUsernamePasswordAuthenticator 
    extends ScalaUsernamePasswordAuthenticator[EmailProfile] {

  val badCredsException = 
    new BadCredentialsException("Incorrect username/password")

  /** Return a Success containing an instance of EmailProfile if  
   * successful, otherwise a Failure around an appropriate 
   * Exception if invalid credentials were provided
   */
  def validate(credentials: UsernamePasswordCredentials):Try[EmailProfile] = {
    if (credentials.getUsername == credentials.getPassword) {
      Success(new EmailProfile(credentials.getUsername))
    } else {
      Failure(badCredsException)
    }
  }
}
We've added strong typing with a self-documenting return-type, and lost the object mutation side-effect. If I'd been coding to that interface, I wouldn't have needed to go spelunking through test code.

But this wasn't my facepalm.

Race to the bottom

Of course my real Authenticator instance is going to need to hit the database to verify the credentials. As a longtime Play Reactive-Mongo fan, I have a nice little asynchronous service layer to do that. My UserService offers the following method:
class UserService extends MongoService[User]("users") {
  ...

  def findByEmailAddress(emailAddress:String):Future[Option[User]] = {
    ...
  }
}

I've left out quite a lot of details, but you can probably imagine that plenty of boilerplate can be stuffed into the strongly-typed MongoService superclass (as well as providing the basic CRUD operations) and subclasses can just add handy extra methods appropriate to their domain object.
The signature of the findByEmailAddress method encapsulates the fact that the query both a) takes time and b) might not find anything. So let's see how I employed it:
def validate(credentials: UsernamePasswordCredentials):Unit = {
  userService.findByEmailAddress(credentials.getUsername).map { maybeUser =>

    maybeUser.fold(throw badCredsException) { u =>
      if (!User.isValidPassword(u, credentials.getPassword)) {
        logger.warn(s"Password for ${u.displayName} did not match!")
        throw badCredsException
      } else {
        logger.info(s"Credentials for ${u.displayName} OK!")
        credentials.setUserProfile(new EmailProfile(u.emailAddress))
      }
    }
  }
}
It all looks reasonable, right? Failure to find the user means an instant fail; finding the user but not matching the (BCrypted) passwords also results in an exception being thrown. Otherwise, we perform the necessary mutation and get out.

So here's what happened at runtime:
  • A valid username/password combo would appear to get accepted (log entries etc) but not actually be logged in
  • Invalid combos would be logged as such but the browser would not redisplay the login form with errors

Have you spotted the problem yet?

The signature of findByEmailAddress is Future[Option[User]] - but I've completely forgotten the Future part (probably because most of the time I'm writing code in Play controllers, where returning a Future is actually encouraged). And because the surrounding method returns Unit, Scala happily discards the value of my Future expression without a whisper of a type error. So my method returns almost-instantaneously, making pac4j think everything is good. It then tries to use the UserProfile of the passed-in object to actually load the user in question, but of course the mutation code hasn't run yet, so it's null - we're almost-certainly still waiting for the result to come back from Mongo!

**Facepalm**

An Await.ready() around the whole lot fixed this one for me. But I think I might need to offer a refactor to the pac4j team ;-)
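
For completeness, here's a sketch of that fix (the 10-second timeout is an arbitrary choice):

import scala.concurrent.Await
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def validate(credentials: UsernamePasswordCredentials):Unit = {
  // Block until the Mongo lookup - and crucially, the side-effecting
  // setUserProfile() mutation - has actually completed:
  Await.ready(
    userService.findByEmailAddress(credentials.getUsername).map { maybeUser =>
      maybeUser.fold(throw badCredsException) { u =>
        if (!User.isValidPassword(u, credentials.getPassword)) {
          throw badCredsException
        } else {
          credentials.setUserProfile(new EmailProfile(u.emailAddress))
        }
      }
    },
    10.seconds
  )
  // Note that Await.ready() does not rethrow a failed Future's exception;
  // Await.result() would, which may be preferable if pac4j should see
  // the BadCredentialsException directly.
}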

Monday, 2 November 2015

Green Millhouse - Hacking WiFi Power Points

I recently discovered Mr Money Mustache and his delightfully-entertaining take on frugality. Turns out I had independently come to some very similar conclusions (live close to work, ride a bike to work, minimise car ownership/usage, avoid borrowing) but he still offers a ton of new perspectives on saving money that I'm really enjoying.

One area in particular is running an efficient home. My first toe-dip is shutting down (not putting-into-standby, shutting down) as many things as possible for as much of the day as possible. Being a geek, I figured controlling some power-points via WiFi would be a great way to get started. Hence:
These $20 devices can be found on eBay and are controlled by an app that is only ever named in the user manual. This smells of "security by obscurity" and after further investigation I can see why...

My particular one (which was identical to the photo) was called a "Small-K" but you'll probably get the most Google-joy by searching for "Kankun" devices. There are a whole pile of different iOS and Android apps, with various brandings and/or success at translation into English.

There is an extremely troubling suggestion that the various Android apps do some "calling home" via some Chinese servers - this, combined with the fact that you have to tell your device your SSID and WiFi password, made me sufficiently uneasy that even though I have uninstalled the Android app (it was junk anyway), I'll be putting my smart plugs into a dedicated subnet that is unable to "get out of" my network. How?

Putting network devices in jail

I use DNSmasq on my NAS which gives me much more control over my DHCP than the usual on/off switch in consumer routers. The setup to make sure a particular device doesn't go wandering off out onto the interwebs just needs two lines of configuration in dnsmasq.conf, namely:
# Always give the host with ethernet address 11:22:33:44:55:66
# the name smartplug1,
# IP address 10.200.240.1,
# lease time 45 minutes, and
# assign it to the "jailed" group
dhcp-host=11:22:33:44:55:66,smartplug1,10.200.240.1,45m,net:jailed
Why 10.200.240.1? Well:
  • 10 because I like to have space ;-)
  • 200d is 11001000 which in my bitwise-masking permissions scheme means:
    • Firewalled FROM internet
    • Trusted WITHIN home network; and
    • Prevented from going TO internet (I've added that last bit flag since writing that old post)
  • 240d because it's 240-Volt mains power (geddit?); and
  • 1 because it's my first device of this type
Now for the part where inhabitants of the 200 subnet get jailed:
# Members of the "jailed" group don't get
# told about the default gateway
# ("router" in DHCP-speak) -
# this sends a zero-length value:
dhcp-option = net:jailed, option:router
And we can check this after rebooting the plug; in the DNSmasq logs:
dnsmasq-dhcp: DHCPDISCOVER(eth0) 11:22:33:44:55:66 
dnsmasq-dhcp: DHCPOFFER(eth0) 10.200.240.1 11:22:33:44:55:66 
dnsmasq-dhcp: DHCPREQUEST(eth0) 10.200.240.1 11:22:33:44:55:66 
dnsmasq-dhcp: DHCPACK(eth0) 10.200.240.1 11:22:33:44:55:66 smartplug1
.. and after logging in to root@smartplug1:
root@koven:~# ping www.google.com
PING www.google.com (220.244.136.54): 56 data bytes
ping: sendto: Network is unreachable


Avoiding The Chinese Entirely :-)

Now that I've been through the pain of the Android-app-setup dance (which sucked, and never correctly registered the presence of the WiFi power point in its own app), I can heartily recommend just SSH'ing into the plug and doing it manually. It was at this point that I came across the awesome OpenHAB project, which is an open-source universe of "bindings" for devices such as these, with the nice UI required for a high WAF (Wife Acceptance Factor). Much more OpenHAB-hacking to follow!

Friday, 16 October 2015

Cloudy Continuous Integration: BitBucket-to-Jenkins-to-Heroku[-to-Heroku] - Part 1

I can't quite believe it but it's actually almost a year since I moved away from an entirely-Cloudbees-based build-and-deploy chain to a far more higgledy-piggledy, yet much more satisfactory, best-of-breed chain.

In that time this setup has built a helluva lot of software, both open-source libraries and closed-source moonlighting apps, and I've learnt a helluva lot too. Time to share.

John's Continuous Integration Rules

No Polling

The pipeline/flow should kick off the instant something is pushed to master. Waiting 59 seconds because we just missed the poll is wasteful. If we're using a modern source-control system, there is absolutely no reason to be periodically polling it for changes. It's the 21st century, last century's batch processing techniques aren't useful here.

Clean, Tagged Builds

The build must begin with a clean to ensure repeatability. Each and every successful build should be appropriately tagged so that the correlation between git commit ID and Jenkins build number is evident.
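
As a sketch, that tagging can be a post-build shell step along these lines (the tag format is just a convention of mine; BUILD_NUMBER and GIT_COMMIT are standard variables provided by Jenkins and its Git plugin):

# Tag the exact commit that was built with the Jenkins build number:
git tag -a "jenkins-build-${BUILD_NUMBER}" ${GIT_COMMIT} -m "Jenkins build ${BUILD_NUMBER}"
git push origin "jenkins-build-${BUILD_NUMBER}"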

Versioning Includes Build Configuration

Things go wrong. Jenkins configurations get accidentally broken. It should be just as easy to roll back a misconfigured job as it is to roll back a bad code change.
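
One way of achieving this (a sketch, assuming a git repository has been initialised in $JENKINS_HOME) is to periodically snapshot the job definitions themselves into version control:

# From $JENKINS_HOME, commit all job configurations:
cd "$JENKINS_HOME"
git add jobs/*/config.xml *.xml
git commit -m "Jenkins config snapshot $(date)" && git push origin master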

If It Passes The Tests, It's In test

Yes the test environment will be volatile, but as long as the tests are good, it should be good-volatile; aka latest-and-greatest. This puts the onus on developers to write comprehensive, meaningful tests. The test environment should be a glittering showcase of all the awesome that is about to hit prod.

Fully-automated push-button test/staging-to-prod

No manual funny-business allowed. Repeatable, reliable, and (ideally) rollback-able from the Jenkins UI.

Desired Setup

The simplest thing that will deliver to the target environments, and abides by the above rules:

            [User] 
              |
      commits to master 
              |
              v
         [BitBucket]
              |
            pokes
              |
              v
  [Secured Jenkins Instance] 
              |
          commits to 
              |
              v
       [Heroku TEST/STAGING Env]
  
        (manual trigger)
              |
              v
       [Heroku PROD Env]       

How To Make It Happen

Sound good? Stand by for Part 2 where all is revealed...

Thursday, 27 August 2015

SSH Tunnels: The corporate developer's WD40 + Gaffer Tape



So at my current site the dreaded Authenticating Proxy policy has been instigated - one of those classic corporate network-management patterns that may make sense for the 90% of users with their locked-down Windows/Active Directory/whatever setups, but makes life a miserable hell for those of us playing outside on our Ubuntu boxes.

In a nice display of classic software-developer passive-aggression we've been keeping track of the hours lost due to this change - we're up to 10 person-days since the policy came in 2 months ago. Ouch.

Mainly the problems are due to bits of open-source software that simply haven't had to deal with such proxies - these generally cause things like Jenkins build boxes and other "headless" (or at least "human-less") devices to have horrendous problems.

I got super-tied-up today trying to get one of these build boxes to install something via good old apt-get on Ubuntu. In the end I used one of my old favourite tricks - an SSH tunnel back-channel that reuses the proxy my dev box has already authenticated with - to get the job done.

Here's how it goes:
Preconditions:
  • dev-box is my machine, which is happily using the authenticated proxy via some other mechanism (e.g. kinit)
  • build-box is a build slave that is unable to use apt-get due to proxy issues (e.g. 407 Proxy Authentication Required)
  • proxy-box is the authenticating proxy, listening on port 8080



 proxy-box            dev-box            build-box
    ---                 ---                ---
    | |                 | |                | |
    | |                _____               | |
    | 8080    < < <    _____    < < <   7777 |
    | |                 | |                | |
    | |                 | |                | |
    ---                 ---                ---
     
   


From dev-box
ssh build-box -R7777:proxy-box:8080

Welcome to build-box
> sudo vim /etc/apt/apt.conf
.. and create/modify apt.conf as follows:
Acquire::http::proxy "http://localhost:7777/";
At which point, apt-get should start working, via your own machine (and your proxy credentials). Once you're done, you may want to revert your change to apt.conf, or you could leave it there, with an explanatory comment of how and why it has been set up like this (or just link to this post!)
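
Alternatively, if you'd rather not touch apt.conf at all, the same proxy setting can be supplied for a single invocation via apt-get's -o option:

sudo apt-get -o Acquire::http::proxy="http://localhost:7777/" update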