Tuesday, 11 October 2011

Broken Toys

The keen might have noticed this site has been coming and going for the past fortnight :-(


The cause is yet to be determined - I attempted to log in on Monday and nothing was there; although I could SSH in to a different machine on my home network, the blog server was unpingable. When I got home I half-expected to find it smouldering from a particularly brutal DoS attack, but it was nothing so spectacular. It was just locked up.


I decided to take the opportunity and use this downtime to do some comprehensive upgrades I'd been wanting to perform for a while:

  • Move the server to Debian to match my other Debian/Ubuntu boxen
  • Upgrade my web platform to the latest-and-greatest Tomcat 7.0.22
  • Upgrade the blog software to Pebble 2.6.2 to get lots of fixes/improvements


Such big plans expose one to potentially big failures. And I had a lot of them. Firstly, my wonderful, faithful little NetVista simply didn't have the grunt to run anything much bigger than a "Hello World" webapp on the shiny new Tomcat/Debian stack - its 256Mb of RAM and puny 233MHz CPU had no chance against a full JEE application like Pebble on top of a beefcake OS like Debian (as opposed to the super-slender Puppy Linux it ran before). So I moved the whole thing to a VM on a much gruntier Ubuntu Server I have. It remains to be seen what will happen to the little IBM - it's not adding much to the party at this point ...


So after migrating to the new hardware, and getting the new Pebble going, I had to bring in the performance improvements I blogged about a while ago, which still don't exist in the official Pebble codebase. (Note to self: must contribute those fixes back!). Initially it seemed all good - my YSlow score of 99 is mighty pleasing - but there was an elephant in the room.


Why the hell was it taking 8.5 seconds to serve up the home page?


Extensive (frustrating) investigation has shown that something added to Pebble since v2.4 has pretty badly borked the performance when hitting /. Fully-specified URLs are fine. I'm suspecting the new-in-2.5 SEO-friendly URL generator, but that's pure speculation. So until the bad interaction between Tomcat 7 and Pebble > 2.4 is sorted out, I'm stuck with icky URLs and no OAuth logins :-(

Tuesday, 4 October 2011

Make Sure You're Testing The Right Things

I'm going to come right out and say it. I love Unit Tests.


I love mocking the collaborators with Mockito (especially the annotation-driven mode). I love the small but oh-so-valuable mini-refactors that are sometimes necessary to make a class testable. And I love watching the code coverage march inexorably to the 100% asymptote.


What makes me very sad is seeing a sorry set of unit tests, and/or a poor development environment for testing. What do I mean?


  • If you're not mocking your collaborators, you're not unit testing. And if you're not unit testing, your code is Instant Legacy Code™. You cannot properly test any non-trivial class without needing to mock its dependencies.

  • Tests should be named in terms of expected behaviour. Method name testParseIntNullString() says nothing that couldn't be gleaned from looking at the test code itself. Method name shouldThrowFormatExceptionWhenParsingNullString() tells you the intention of both the code under test and the test itself. It's documentation that won't get out of sync with the code.

  • Use the facilities provided by your testing package. I'm a TestNG fan, but recent JUnit versions are pretty good too. I use TestNG's groups option in the @Test annotation to specify the style and/or scope of each test method. Another good one is the expected (JUnit) / expectedExceptions (TestNG) annotation option that states what is going to be thrown - much, much tidier than having try..catch blocks in test methods. There's a sketch of all this in action after this list.

  • Code coverage is more than just a percentage. I use eCobertura, a Cobertura plugin for Eclipse, while I'm writing tests - to visually indicate which parts of the code I'm actually hitting. I've lost count of the number of times it's shown me that an if branch I thought I had covered is still "red" due to a missing precondition.
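
To make those bullets concrete, here's a rough sketch of the kind of test class I mean - GreetingService and NameRepository are invented names, purely for illustration:

import static org.mockito.Mockito.when;
import static org.testng.Assert.assertEquals;

import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class GreetingServiceTest {

    @Mock
    private NameRepository mockNameRepository; // the collaborator - mocked, never real

    @InjectMocks
    private GreetingService greetingService;   // the class under test

    @BeforeMethod
    public void setUp() {
        MockitoAnnotations.initMocks(this);    // wires up the @Mock/@InjectMocks fields
    }

    @Test(groups = "unit")
    public void shouldGreetUserByNameWhenNameIsKnown() {
        when(mockNameRepository.lookup("u1")).thenReturn("Jim");
        assertEquals(greetingService.greetingFor("u1"), "Hello, Jim!");
    }

    @Test(groups = "unit", expectedExceptions = IllegalArgumentException.class)
    public void shouldThrowIllegalArgumentExceptionWhenUserIdIsNull() {
        greetingService.greetingFor(null);
    }
}

// Invented supporting types, just so the sketch hangs together:
interface NameRepository {
    String lookup(String userId);
}

class GreetingService {
    private NameRepository nameRepository;

    public String greetingFor(String userId) {
        if (userId == null) {
            throw new IllegalArgumentException("userId is compulsory");
        }
        return "Hello, " + nameRepository.lookup(userId) + "!";
    }
}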

Tuesday, 27 September 2011

Spring Roo Experiment, part 2

Making it perty

Roo does a great job of generating a functional CRUD website, and that's perfect for an admin page. But public-facing websites, especially ones developed in 2011, need to look funky.


Enter Bootstrap, from the good people at Twitter. I'm not a Tweeter, but I have to extend maximum love and respect to these guys for turning out something to make things look so good, so easily.


Including the Bootstrap CSS in your webapp is like having the most web-2.0, rounded-cornered, missing-vowelled, fixie-riding front-end guy pairing with you, except he's actually completely finished the work, it works on all browsers and you don't have to listen to him talk about his iPhone all day :-)


Seriously though, Bootstrap does exactly what it says on the tin, with beautifully-clean, semantic markup, a wonderfully easy-to-use grid system and a contemporary, stylishly-minimal look.


I downloaded the Bootstrap CSS file and added it to my Spring Roo project by following these instructions - so now I can flip between the standard green Roo theme for my admin pages, and the super-funky Bootstrap theme for public pages. Lovely.

Tuesday, 20 September 2011

Spring Roo Experiment, part 1

Kicking the tyres

Wow - been getting a bit "10,000 foot view" recently. Back to good ol' Java. In particular, Spring Roo.


I had a bit of an idea for a basic database-backed web app, but the thought of all that XML-fiddling, boilerplate Java rubbish just to get something going was turning me right off. I'll admit, I seriously contemplated trying Ruby on Rails to get started ... then I remembered a colleague mentioning Roo as the Java answer to Rails. Java, Spring, Spring MVC and Maven best practices all driven from a console app?! Had to be worth a try!


I ran through the "pizza" sample, doing the setup for my app at the same time. Apart from a hiccup with my local Nexus instance (I always forget to add new repositories to the public Group!), it was super-smooth.


What struck me was how I was able to concentrate on my domain objects (aka entities in Roo-speak) and how they interacted. In four lines, I can make a typical tested, persisted POJO with a full CRUD web GUI:

roo> entity --class ~.DescribedThing --testAutomatically
roo> field string --fieldName name --notNull
roo> field string --fieldName description --notNull
roo> web mvc all --package ~.web

It's really fun to "meta-program" like this. Less boilerplate, fewer typos (thanks Tab auto-complete!) and more designing.
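
For the curious, the entity that falls out of those four lines is (from memory, so treat this as a sketch) little more than an annotated POJO - the getters, setters, toString(), id/version fields and finders all live in AspectJ ITD files that Roo maintains for you:

import javax.validation.constraints.NotNull;

import org.springframework.roo.addon.entity.RooEntity;
import org.springframework.roo.addon.javabean.RooJavaBean;
import org.springframework.roo.addon.tostring.RooToString;

@RooJavaBean   // getters/setters generated into an ITD
@RooToString   // toString() generated into an ITD
@RooEntity     // JPA plumbing and finders generated into an ITD
public class DescribedThing {

    @NotNull
    private String name;

    @NotNull
    private String description;
}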


I'm going to continue developing Roo-style for as far as it will take me and see how it goes.

Tuesday, 13 September 2011

Extending your SEP field

More anti-complexity ravings

Once you've got over your Complexity "Hump" things become a lot clearer.


The "good programmer == lazy programmer" thing is merely a starting point to a wider world of Using Other People's Stuff. In university this was A Bad Thing, but in the real world, call it "leveraging" and you're on your way to an MBA.


It can be succinctly summarised as Maximising Your SEP - nothing to do with getting a higher Google rank (that's SEO) - I'm talking about the Somebody Else's Problem field. The SEP field is the secret to a slender, simple IT solution. The whole "Cloud Computing" hype-bubble is actually the SEP field translated to the machine-room and its associated sysadmins.


With judicious use of SEP, your "in-house" component is the smallest possible intersection of other peoples' work - be it libraries, frameworks, or servers. Sprinkle in your domain model and you're done.


Moving your infrastructure "to the cloud" is totally hot right now - but how many development shops are actively trying to offload their code too?

Tuesday, 6 September 2011

Die Complexity! Die!

I've mentioned it before but I feel the need to do so again. What is it with developers and complexity?


You know when in a film or cartoon, a character opens an innocent-looking door and all manner of horror and noise belches out, so they quickly slam the door and the noise stops? I got another blast of that kinda thing recently - innocent-looking website, foul twisted complex horror beneath. Why does this happen?


My current theory boils down to a three-legged milking-stool of failures:


  1. Developers move on and can't/don't/won't pass on all their knowledge. Replacement developers have to read between the lines (or worse, read bad and/or out-of-date documentation) and are doomed to repeat the past, slightly worse each iteration. Rinse and repeat

  2. Developers love doing new stuff. At least, the good ones do. They love to cover a pristine whiteboard with boxes and arrows. Who doesn't love the purity of a codebase that consists only of interfaces? No implementation means no ugly real-life workaround warts! The trouble is, we can't all be developing new frameworks all the time

  3. Lastly, and most controversially, The Agile Process - or rather, blind adherence to some aspects of it - can be blamed. I'm talking Sprints here. It seems like in the effort to shoe-horn a large piece of work into a too-short sprint cycle, the lethal one-two punch of doing a half-a[rs]sed job and chalking up a load more technical debt (to actually do it the right way*) has become an acceptable outcome. It isn't

The solution? BAN Complexity in your development team. Stamp on it the instant it makes a first tentative push out of the soil. Make it a priority amongst the team members to simplify any new feature to the utmost. Exercise and extend the Boy-Scout Rule to make each and every code commit an incremental improvement in straightforwardness.

How will this help address the milking-stool of coding horror?

  1. Simpler code groks faster so new devs don't need all the handover baggage
  2. Developers can still show their design skills, but at de-baroquifying designs. Surely that's a whole lot harder than over-engineering?
  3. And finally, if you can't change your sprint durations (and if not, why not?) then lower complexity should go hand-in-hand with higher velocity. There must be no Tech Debt. Ever


(*) In some mythical far-off time when the schedule does allow for longer-term work to be completed...

Tuesday, 30 August 2011

I <3 WWW

As I'm not an iOS or Android developer I attended the August "Mobile Focused" Melbourne YOW! Night with a certain degree of reluctance.


I had visions of skinny-jeaned, black-plastic-spectacled iOS-developing hipsters telling me how, like, awesome their, like, app was because like, it totally like mashed up the tweeting paradigm by like ironically cross-pollenating the artisan tumblr mixtape vibe (thanks Hipster Ipsum!).


And while there was a certain amount of that, I was somewhat refreshed to hear from a couple of speakers (native mobile app developers, mind you) that a great deal of the time, when you strip away all of the hype, what the customer needs can be addressed with a mobile-focused website.


I've seen first-hand the hype-storm that (in particular) the iPad has created. One of the speakers at YOW! painted a pretty accurate picture:

  • CEO's wife buys CEO a new iPad because they're shiny
  • CEO can't really use iPad but notes these apps on the home screen have pretty icons
  • CEO sees other companies have apps
  • CEO decides his company needs an app
  • CEO doesn't exactly know what app is for, but knows that it's needed NOW


What follows (MyCompanyApp v1.0) is usually an app-ification of what the company's current website offers. It is a pointless, money-wasting exercise that gives the CEO an app icon to show off at the golf club, but has zero advantage over his website (negative advantage, actually, because it doesn't self-update like a web page).


If the iPad had a way to add a shortcut to a webpage as a home-screen icon, the whole thing could have been done in 30 seconds.


As mobile devices get increasingly capable, well-written, lightweight web pages with advanced features like HTML5 Geolocation can get pretty damn close to the "native app experience" - with none of the approval-process or old-installed-version hassles.


So that's the basket where I'm putting my eggs for the next little while; I just hope the egg doesn't end up on my face :-)

Monday, 22 August 2011

The CSS F(l)ail

I've written before about CSS and why it seems to be so hard for a Typical Java Developer (TJD) to craft "nice" HTML+CSS presentation layers.


Here's the behavioural pattern I've observed when a TJD has to do some front-end web-work:
  • Panic - rather than embrace the opportunity to learn a new paradigm/technology, as they would probably enjoy doing if it was a Java API

  • Copy-and-paste from the existing codebase - despite knowing that this leads to unmaintainable code no matter what the language

  • Copy-and-paste from the web - without understanding what is going on

  • Randomly tweak values in the browser using Firebug or Chrome Developer Tools until it "looks right" (I've started to call this CSS flailing)

  • Give Up And Use Tables (for non-tabular data). The ultimate fail

In posts-to-come I'm going to try and identify some CSS "patterns" that a software developer can actually understand (as opposed to copy-and-paste), which will hopefully put an end to the above behaviours - any of which would be regarded as unforgivable if perpetrated in the squeaky-clean corridors of Java-land ;-) but are somehow considered par-for-the-course in the unfamiliar yet forgiving world of UI...

Friday, 1 July 2011

The Battle {For | Against} Complexity

Rod Johnson:

"I probably shouldn’t say this in a Java community website, but I think that the Java community has almost a pathological desire for complexity at times."

Testify, brother! Although I must add that I suspect this is not exclusively a Java-specific syndrome.


I think it's more to do with the average experience level of developers on a given platform. I would expect the typical Rails developer to be more experienced than a Java dev (I have nothing to back this up, but a cursory inspection of the type of questions being asked on Stack Overflow would tend to confirm this). I have observed a distinct correlation between development experience and desire for over-engineered, complex solutions; it looks like this (pardon my ASCII):


D(comp)
|
|               C C C
|             C C C C
|           C C C C C C
|         C C C C C C C C
|       C C C C C C C C C C 
|     C C C C C C C C C C C C
|   C C C C C C C C C C C C C C 
| C C C C C C C C C C C C C C C C C C C C
|------------------------------------------- Exp
0   1   2   3   4   5   6   7   8   9   10            

The initial "I'm so green" fear of complicated stuff is quite quickly replaced by a desire to flex one's "architect" muscles, typically culminating in an extravaganza of ornate (often distributed) designs at the 4-5 year mark. Shortly afterwards, the folly of one's ways is realised: ornate designs require ornate maintenance, and that doesn't go down too well with anyone, least of all the prima-donna who came up with the design and is now saddled with babysitting it. The enthusiasm for system diagrams with many, many boxes quickly subsides. A pragmatic programmer emerges at the 8-year mark. DRY, YAGNI and anti-NIH are the order of the day.


And that seems to be the difference between the Typical Java Dev, who is probably sitting on top of the above mountain, and the grizzled Typical Rails Dev, who has scaled it and is coasting down the other side :-)

Tuesday, 28 June 2011

Debian on a NetVista N2200

While Puppy Linux has been a solid base for this web server for quite some time now, before bringing up a second NetVista N2200 unit I was looking to move the OS to something a little more familiar - a Debian-based one. I'd found myself wishing for the ease of deb package management and cursing the way the entire Puppy OS was copied into RAM on startup - very clever and fast, but I don't really want my precious 256Mb filled up with unused instances of /usr/bin/xeyes ...*


I wasted an entire weekend hacking on PusPus, which claimed to be able to get Debian Etch onto the NetVista. This claim was true, but the resulting OS was all but unusable - any attempt to apt-get install some useful components (like ssh), or even running apt-get update, was enough to cause a segmentation fault and complete system halt. Tweaking the PusPus Makefiles to fetch and build a Debian Lenny image resulted in exactly the same problem.


The Single Point of Truth on all things N2200 is, rather bizarrely, the comment thread at the foot of This Guy's Blog Post, but there is some great stuff there. My saviour was the replacement boot loader and OS image by der_odenwaelder (username: NetVista, password: N22008363). This guy has done a sterling job of papering over the nastiest deficiencies in the NetVista's quirky hardware.


Finally I have a (relatively) up-to-date Debian build (Lenny) on my new machine, and it goes beautifully. My local APT Cache is really starting to pay dividends now, and will be even more valuable once I migrate this machine to the same platform.


Now that I can run a more "conventional" Linux OS, I'm getting excited again about the server potential of this all-but-forgotten black box. You can pick them up from eBay's Workstations category all the time for a pittance, and with a suitable RAM and CF Card injection, you've got a handy-dandy general-purpose server that you can leave switched on 24/7 while it uses less power than your broadband router.


(*) Just an example - I removed all of the XWindows stuff from the Puppy image as soon as I got it working.

Friday, 17 June 2011

2011H2 Ponderables

What's on the cards and/or on my mind for the second half of 2011?


  • Going on holiday to Canada and the States in July. Woohoo!

  • The BRAIN (Blatantly Ridiculous Array of Inexpensive Netvistas) - utilising yesterday's ultra-low-power hardware, today!

  • Evaluator Chain - continuing my personal, hopefully fortune-making project

  • Continuing the epic Ubuntu Build Box story, that somebody, somewhere, may find useful...

  • Less Code - enjoying the luxury that in the Java world, it's almost certain that someone has solved your problem and published a good solution

  • More Tests - to make sure that what code I have written is absolutely rock solid

  • Enjoying Learning about HTML5, CSS3, Ruby, JPA2 and more

Friday, 10 June 2011

On Walkthroughs

At my current employer, as part of their flavour of Agile (no two are ever the same!), we developers are required to conduct a walkthrough of every story we finish. A tester and a BA must be present. All very well in theory.


But it seemed to us that some walkthroughs were uncovering bugs and/or oversights early, while other stories were breezing through their walkthrough but then exploding once the testers got their mitts on them. What was going on?


It turned out that in many cases, the developer, having spent possibly days neck-deep in the code, had a level of understanding of the problem domain far in excess of the BA and tester he was demoing to. As a result, the developer (consciously or not) would exude a confidence in his solution that would almost intimidate the "spectators" into not objecting to any deviations from the story specification.


A walkthrough conducted in this vein is almost useless, which led me to the formulation of the following rule:


The value of a story walkthrough or other such demonstration is directly proportional to the experience level of the audience

Tuesday, 3 May 2011

Ultimate Ubuntu Build Server Guide, Part 4

Sync Thine Clocks
Note from future self: Although this setup guide is now superseded by cloud-based tools, certain elements are still useful simply as good practice, such as the "groundwork" in the early stages of this guide. As such, this article has been spared the chop

This is Part 4 of my Ultimate Ubuntu Build Server Guide.


Although it's less critical these days than it was in the bad old days of makefiles, it's still non-negotiable: ALL your machines MUST be NTP-synced to a machine on the local network.


Think about how annoying it is when you login to a remote server to do some logfile-trawling but discover that the server's concept of "wall clock time" is actually 3 minutes, 37.234 seconds behind your local machine's. Ugh. I've noticed this is a particular issue with Virtual Machines - clock drift seems to be a perennial issue there.


Again, getting this stuff working doesn't have to be a big deal and certainly doesn't have to involve A Big Linux Box.


Your common-or-garden DSL router can almost certainly be "pimped" with a more-useful firmware image from one of the many open-source projects. On my NetGear DG834G, I'm using the DGTeam firmware image (unfortunately that project seems to have died, but a very kind soul is mirroring their excellent final versions here).


This offers a fully-working, webpage-configured implementation of OpenNTPD which is the exact same software Big Linux/BSD Servers run anyway.
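
For the record, the entire OpenNTPD configuration for this kind of setup is tiny - something like the following sketch (check your firmware's docs for exactly where the file lives):

  # ntpd.conf - serve time to the LAN, sync from the public pool
  listen on *
  servers au.pool.ntp.org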

Owners of DSL routers from other manufacturers (like Cisco/Linksys, D-Link, Buffalo and non-DG NetGear equipment) can get similar functionality boosts from:

  • dd-wrt - I'm personally using this with success on a Linksys wireless AP. Supported device list
  • OpenWRT - requires more powerful hardware than dd-wrt but has the potential to act as a full Linux server should you desire...

Friday, 29 April 2011

Ultimate Ubuntu Build Server Guide, Part 3

Groundwork, Phase 3: Calling Names
Note from future self: Although this setup guide is now superseded by cloud-based tools, certain elements are still useful simply as good practice, such as the "groundwork" in the early stages of this guide. As such, this article has been spared the chop

This is Part 3 of my Ultimate Ubuntu Build Server Guide.


A productive network needs a comprehensible naming convention and a reliable mechanism for dishing out and looking up these names.


Naming Thine Boxen

On my first day at my current workplace I asked what the URL of the Wiki was. "wiki" was the reply. That kind of smile-inducing, almost-discoverable name is exactly the kind of server name we are looking for. At the other end of the scale is the completely-unmemorable, partly-implemented corporate server-farm naming scheme that means your build box is psal04-vic-virt06-r2 and your wiki lives on 12dev_x64_v227a. Ugh.


DNS and DHCP

The cornerstone of being able to talk to machines "by name" is, of course, DNS. You need your own DNS server somewhere on the network. I know what you're thinking - that means a big, noisy, power-sucking Unix box and bind and resolv.conf and ... argh!


Not necessarily.


Unless you've been living under a rock for the last 10 years, you'll have noticed that there are now rather a lot of small networked devices around, all running some flavour of Linux. Your DSL router is almost certainly one of them, but while it probably offers DHCP services, it probably won't be able to serve up local DNS entries (aside from proxying DNS requests upstream). That's OK. There are other small Linux boxen that will do the job.


I speak of NAS devices. I'm personally using a Synology DS209, which is the kind of web-configured, one-box-solution, Linux-powered network überdevice I could have only dreamt about 10 years ago. In addition to storing mountains of media files and seamlessly acting as a Time Capsule for my MacBook, this neat little unit also runs SynDnsMasq, a port of the amazing dnsmasq DHCP/DNS server.


dnsmasq

A simple, elegant and functional tool that runs off one superbly-commented configuration file, dnsmasq will make your local network much more navigable thanks to local DNS addresses - ssh user@buildbox is much better than ssh user@10.12.14.16, don't you think?


Having full control of your DHCP server (as opposed to the primitive on/off on most domestic routers) also allows you to set up effectively-permanent address allocations based on MAC addresses. This gives you all of the advantages of static IP addresses for servers, but allows you to have a centralised repository of who-is-who, and even change things on the fly, if a server goes offline for example.
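
In dnsmasq terms, all of that boils down to a couple of lines of configuration. A sketch, with the MAC and addresses invented for the example:

  # dnsmasq.conf
  dhcp-range=10.224.0.50,10.224.0.150,12h         # pool for ordinary DHCP clients
  dhcp-host=00:11:22:33:44:55,cheetah,10.240.0.5  # effectively-permanent allocation by MAC
  dhcp-option=6,10.0.0.2,10.0.0.1                 # option 6 = DNS servers: the NAS first, the router second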


By running this software on my NAS, I get all of these features, plus I save scads of power as it's a low-power unit AND it switches itself on and off according to a Power Schedule so it's not burning any juice while I'm asleep. I've configured the DHCP server to actually tell clients about two DNS servers - the primary being the NAS itself, the secondary being my ADSL router. That way, if I start using a client at 10.55pm, I can keep surfing the web after the NAS goes to sleep at 11pm - the client will just "fail over" to the main gateway.


Name Your Poison

The actual names you use for servers and clients are of course a very personal choice. One of the best schemes I've used was based on animals, with increasing levels of maturity and/or decreasing domesticity based on their function. A table's worth a thousand words here I think!


Function          Species   Dev     Test    Prod
App Server        Canine    Puppy   Dog     Wolf
Web Server        Equine    Foal    Horse   Zebra
Database Server   Capra     Kid     Goat    Ibex

While this caused much hilarity amongst non-technical people ("Horse will be down until we bounce the Goat" is not something you hear in many offices!), it actually worked very well.


The scheme I'm using at home is a simple "big cat" scheme - lion, tiger, cheetah, leopard etc - but I've taken the opportunity to "overload" the names in my dnsmasq configuration - so buildbox currently resolves to the same machine as cheetah - but of course should that duty ever change, it's just a one-line change on the NAS to fix it.
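
The "overloading" itself is nothing fancier than an extra name on the relevant line of the NAS's /etc/hosts - dnsmasq hands out every name it finds there (addresses invented again):

  # /etc/hosts on the NAS
  10.240.0.5   cheetah buildbox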

Tuesday, 26 April 2011

Ultimate Ubuntu Build Server Guide, Part 2

Groundwork, Phase 2: Sensible IP Addresses
Note from future self: Although this setup guide is now superseded by cloud-based tools, certain elements are still useful simply as good practice, such as the "groundwork" in the early stages of this guide. As such, this article has been spared the chop

This is Part 2 of my Ultimate Ubuntu Build Server Guide.


First things first. If you're running a development network on a 192.168.x.y network, you're going to want to change that, stat. Why? Because you simply don't (or won't) have enough numbers to go around.

Yes, you're going to hit your very own IPv4 address exhaustion crisis. Maybe not today, maybe not tomorrow, but consider the proliferation of WiFi-enabled devices and virtualised machines in the last few years. If you've only got 250-odd addresses, minus servers (real and virtualised), minus workstations (real and virtualised), minus bits of networking equipment, and each developer has at least one WiFi device in her pocket, you're probably going to have to start getting pretty creative to fit everyone in. And I haven't even mentioned the possibility of being a mobile shop and having a cupboard full of test devices!


To me, it makes much more sense to move to the wide open spaces of the 10.a.b.c local network range. Then not only will you have practically-unlimited room for expansion, but you can also start encoding useful information into machine addresses. Allow me to demonstrate with a possible use of the bits in the a octet:

 7 6 5 4 3 2 1 0
 | | | |
 | | | \- "static IP"
 | | \--- "wired"
 | \----- "local resource access OK"
 \------- "firewalled from internet"


Which leads to addresses like:

Address      Meaning                           Example Machine Type
10.240.b.c   fully-trusted, wired, static-IP   Dev Servers
10.224.b.c   fully-trusted, wired, DHCP        Dev Workstations
10.192.b.c   fully-trusted, WiFi, DHCP         Known Wireless Devices
10.128.b.c   partly-trusted, WiFi, DHCP        Visitor Laptops etc
10.48.b.c    untrusted, wired, static-IP       DMZ


You've still got scads of room to create further subdivisions (dev/test/staging for example in the servers group) and access-control is as simple as applying a suitable netmask.


In the above case, sensitive resources could require a /10 (trusted, firewalled) IP address. Really private stuff might require access from a wired network - i.e. a /11. Basically, the more secure the resource, the more bits you need in your a octet.
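
Here's the netmask idea as a little Java sketch - the class name and addresses are mine, purely for illustration:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class TrustCheck {

    /** True if addr falls inside network/prefixLength, e.g. 10.224.0.0/11. */
    static boolean inRange(InetAddress addr, InetAddress network, int prefixLength) {
        int mask = (prefixLength == 0) ? 0 : -1 << (32 - prefixLength);
        return (toInt(addr) & mask) == (toInt(network) & mask);
    }

    private static int toInt(InetAddress addr) {
        byte[] b = addr.getAddress(); // 4 bytes for an IPv4 address
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }

    public static void main(String[] args) throws UnknownHostException {
        // 10.240.1.7 is firewalled + local-access + wired, so it's inside the /11 "wired" zone ...
        System.out.println(inRange(InetAddress.getByName("10.240.1.7"),
                InetAddress.getByName("10.224.0.0"), 11)); // true
        // ... but a visitor laptop on 10.128.b.c isn't
        System.out.println(inRange(InetAddress.getByName("10.128.0.9"),
                InetAddress.getByName("10.224.0.0"), 11)); // false
    }
}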


It might be a bit painful switching over from your old /24 network but I think in the long term it'll be well worth it.


Next time, we'll look at how to name all these machines.

Friday, 22 April 2011

Ultimate Ubuntu Build Server Guide, Part 1

Groundwork, Phase 1: A Quality Local Network
Note from future self: Although this setup guide is now superseded by cloud-based tools, certain elements are still useful simply as good practice, such as the "groundwork" in the early stages of this guide. As such, this article has been spared the chop

OK, so for the next I-don't-know-how-long I'm going to devote this blog to a comprehensive step-by-step guide to setting up an absolutely rock-solid Continuous-Integration/Continuous Delivery-style build server.
Fundamentally it'll be based around the latest release of Ubuntu Server, but there are a number of steps that we need to address before we even introduce an ISO to a CD-ROM.


Possibly the dullest, but most potentially useful is Getting Our Local Network Sorted Out. What do I mean by that?

  • Working out a sensible IP addressing scheme
  • Maintaining a comprehensible naming convention for machines
  • Dedicating an (almost-)always-on box to do DNS lookups and DHCP handouts
  • Having as few statically-addressed machines as feasible
  • Keeping DHCP clients on as stable an IP address as possible
  • Having good bandwidth where it's needed
  • Using the simplest-possible network infrastructure
  • Offering simple options for both backed-up and transient file storage


You might be surprised how many "proper" system-administered networks fall at one or more of these hurdles; so while we wait patiently for the Natty Narwhal, I'm going to take the next few weeks to go through them.

Tuesday, 19 April 2011

The Joy of CSS

Why it's hard to be a CSS Rockstar

Writing CSS is hard. Writing good CSS is really hard.


I've been using CSS in anger for about 4 years now, and I'd rate my skills at no more than 5 out of 10. I'm the first to admit I spend a lot of time Googling when I'm tweaking CSS, and I'm sure I'm not the only one. I've only ever come across one Java-world developer who could craft elegant, cross-browser CSS solutions with appropriate use of semantic HTML, and he was an exceptional front-end guy who could make PhotoShop walk around on its front paws, and write clean, performant JavaScript while balancing a ball on his head. Seriously.


So why do otherwise-competent software developers find it so hard to produce good CSS?


  • width: doesn't really mean width: The W3C's box model might be intuitive to some, but ask a typical software developer to draw a box with width: 100px; border: 5px; and I can virtually guarantee that the absolute width of the result will be 100 pixels while the internal (or "content" in W3C-speak) width will be 90 pixels. Given this, it becomes slightly easier to forgive Microsoft for their broken box model in IE5

  • Inconsistent inheritance: As OO developers, we've come to expect every property of an object to be inherited by its children. This is not the case in CSS, which can lead to a non-DRY sensation that is uncomfortable

  • It's a big API: Although there is a lot of repetition (e.g.: border; border-top; border-top-width; border-top-color; border-left; border-left-style; etc etc etc) there are also tons of tricky shortcuts which behave dramatically differently depending on the number of "arguments" used. Compare border-width: thin thick;

    to border-width: thin thin thick;

    to border-width: thin thin thin thick;

  • You can't debug CSS Selectors: The first move of most developers when they have to change some existing styling is to whack a background-color: red; into the selector they think should be "the one". And then have to hunt around a bit more when their target div doesn't turn red ...

  • Semantic, understandable and succinct?!?!: Most developers understand that using CSS classes with names like boldface is not cool, and nor is using identifiers called tabbedNavigationMenuElementLevelTwo - but getting the damn thing even working is hard enough without having to wonder if the Gods of HTML would sneer at your markup...

Friday, 15 April 2011

Safely Ignored

After attempting for almost two weeks to get through to the Australian Taxation Office's main phone line, I completed a 2-minute automated process and had the dubious satisfaction of being told that "future request letters can be safely ignored".


Which got me thinking about extending that metaphor to humans. So without further ado, allow me to present my list of People who can be Safely Ignored:


  • Search Engine Optimization "experts" - Let's be honest, by "Search Engine" you mean Google, and if you use said search engine to search for improve google pagerank you'll get Google's OWN ADVICE. Now use the money you saved on an SEO expert to make your content worth looking at.

  • Excel-holics - People who insist on copying the up-to-date data from a web page in order to have a local copy that will get stale, just so they can be in their familiar rows-and-columns walled garden. It's madness, and we all know the typical error rates in hacked-together spreadsheets ...

  • iPhone Bandwagoneers - The "Jesus Phone" as it's known over at The Register is a competent and capable smartphone. That's all. On several occasions I have been amused by a flustered hipster desperately asking "does someone have an iPhone I can borrow?" - meaning "I need to go to a website (but I'm scared to be seen using a non-Apple product)". Sad.

  • Microsoft - They've jumped the software shark: nothing worthwhile on the desktop since Windows XP, and a mobile OS that makes high-end hardware perform like a low-end $29 outright phone from the post office.

Tuesday, 12 April 2011

Sophistication via Behavioural Chaining

A Nice Pattern

A few years ago I had the pleasure of working with a truly excellent Java developer in the UK, Simon Morgan. I learnt a lot from looking at Simon's code, and he was a terrific guy to boot. One of the really eye-opening things he was doing was obtaining very sophisticated behaviour by stringing together simple evaluation modules.


This is just like how programmers have always solved problems - breaking them down into manageable chunks - but goes much further. We're not solving a problem here (well we are, but it sorta falls out the end rather than being explicit), rather, we are approximating the sophistication of an expert human being's behaviour when solving the problem. Wow.


Simon was using a hybrid of two patterns: Chain of Responsibility and Strategy. The basic approach was to iterate over an injected list of Strategy implementations, where the Strategy interface would normally be as simple as:

Operand applyTo(Operand op);

BUT instead of returning a possibly-modified Operand, he defined a Scorer interface that looked like this:

float determineScore(Scenario scenario);

Individual Scorers can be as simple or as complicated as required. For Simon's particular case, each one tended to inspect the database, looking for a particular situation/combination, and arrive at a score based on how close that was to the "ideal". For this, it made sense to have an AbstractDatabaseAccessingScorer which every Scorer extended.


The float that each scorer returned multiplied a running total, which started at 1.0. At the end of a scoring run, a possible Scenario would have a score - somewhere from 0.0 to 1.0. Some aspect of the Scenario would then be tweaked, and the score calculated again. At the end of the evaluation run, the highest-scoring Scenario would be selected as the optimal course of action.


While this worked very well, Simon realised that in developing his Scorers, he'd unwittingly assigned some of them lower importance, by getting them to return scores only {0.0, 0.5} for example. He went on to refactor this out, and instead each Scorer was required to provide a {0.0, 1.0} score, and assigned a weight multiplier, so that some Scorers could be given greater power in influencing the choice of Scenario. This really boosted the power and subtlety of the system - to the extent that he started logging his scoring runs profusely in order to get some understanding of how his home-grown "neural net" was coming up with some results.
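
Here's a minimal sketch of that final weighted arrangement - all the names are my own inventions for illustration, not Simon's originals:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EvaluatorChain {

    /** Opaque domain type - stands in for whatever is being evaluated. */
    public interface Scenario {
    }

    public interface Scorer {
        /** @return a score in the range 0.0 to 1.0 */
        float determineScore(Scenario scenario);
    }

    /** Scorers and their weight multipliers, applied in insertion order. */
    private final Map<Scorer, Float> weightedScorers = new LinkedHashMap<Scorer, Float>();

    public void addScorer(Scorer scorer, float weight) {
        weightedScorers.put(scorer, weight);
    }

    /** Multiplies each weighted score into a running total that starts at 1.0. */
    public float scoreOf(Scenario scenario) {
        float runningTotal = 1.0f;
        for (Map.Entry<Scorer, Float> entry : weightedScorers.entrySet()) {
            runningTotal *= entry.getKey().determineScore(scenario) * entry.getValue();
        }
        return runningTotal;
    }

    /** The tweak-and-rescore loop: the highest-scoring Scenario wins. */
    public Scenario selectBest(List<Scenario> candidateScenarios) {
        Scenario best = null;
        float bestScore = -1.0f;
        for (Scenario candidate : candidateScenarios) {
            float score = scoreOf(candidate);
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }
}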


Often, the choice of winning scenario was a matter of choosing between final scores of 0.00000123 versus 0.00000122 - when dealing with such close decisions, it was worthwhile flagging the situation to allow a human to examine it and possibly tweak some weight modifiers to get the optimal outcome. In time, this would lead to even better approximation to an expert human's selection behaviour.


We never came up with a name for this pattern, but it has always stuck in my mind as a nice one (albeit with limited applications). Evaluator Chain seems to sum it up fairly well, and I'm working on a library that will give a convenient, templated API for domain-specific implementations, the first of which will be the selection of a winning sports team based on past performance data.


So if this is my last blog post, you'll know I've cracked it and made my fortune in sports betting ...

Friday, 8 April 2011

NetVista Update

"Hey, how's that ultra-small, ultra-quiet, ultra-low-power app server going?" I don't hear you ask.


Awesome, thanks for not asking!

  root@netvista1 # uptime
    20:12:01 up 166 days, 3:11, load average: 0.10, 0.03, 0.02
  root@netvista1 #

Yep, that little black box has been a great success. Any effect on the household electricity bill has been negligible, and it's been a fantastic platform to host public-facing (if untrafficked) Java experiments.


I'm in the process of bringing up netvista2, emphatically not as a load-balanced production server but rather as a public-facing "staging" server so I can begin A/B production environment switching as part of a Continuous Delivery approach.


Something else I want to look at is using Java SE for Embedded to give a bit more RAM space for webapps. Stay tuned.

Tuesday, 5 April 2011

Whoooosh! Part 2

Making it happen

YSlow gave this site a solid 'B' for performance, but I knew it could be better.


The main culprit was the lack of expiry headers on resources, meaning the browser had to reload them each page visit. Dumb. I was shocked to find that Tomcat had no easy way to get this going, but this has now been rectified in Tomcat 7.
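
For the record, the Tomcat 7 mechanism is the new ExpiresFilter. A minimal setup looks something like this (the durations are my own choice, not gospel):

  (in conf/web.xml):

  <filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
      <param-name>ExpiresByType image</param-name>
      <param-value>access plus 1 month</param-value>
    </init-param>
    <init-param>
      <param-name>ExpiresDefault</param-name>
      <param-value>access plus 1 week</param-value>
    </init-param>
  </filter>
  <filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>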


Next up was optimising my images. YSlow can helpfully slurp your site through Yahoo's smush.it service which saved me a good 25% in download cruft. Win.


"Sprited" CSS images were something I knew I wanted to do, but had yet to find a nice tool to make it happen. Now that SmartSprites is a reality, I've been using it to both optimise the images and generate the CSS that controls them. I'm also planning on using JAWR (which utilises SmartSprites) as part of my standard toolchain for Java web development, pushing out minified, optimised, spritified HTML/CSS/JS content whenever the build target is not "dev".


Things were going well, and I was up to 'A' Grade, but my "favicon" was still being re-downloaded each-and-every page visit. I had to tell Tomcat that it was an image, and then the Expires Filter could deal with it as per all the others:

  (in conf/web.xml):

  <mime-mapping>
    <extension>ico</extension>
    <mime-type>image/x-icon</mime-type>
  </mime-mapping>


The final step was getting in under Pebble's covers and tweaking the page template, such that:

  • Javascript sits at the bottom of the page; and
  • Practically-useless files (like an empty print.css) are not mentioned


And with that, I'm currently scoring 95%. I'm losing the 5% for not gzipping my content (which will probably have been remedied by the time you read this), and having too many separate JavaScript files (a fact I need to investigate further - where is this blog even using JavaScript?) - but I'm happy; my little corner of the web is a snappier place.

Friday, 1 April 2011

Whoooosh! Part 1

When was the last time a computer really amazed you with its speed?


So much power, all those cores, so many Gigahertz, tens of megabits per second and yet we still spend a lot of our time watching spinners/throbbers/hourglasses/progress meters. Why?


I'd have to say the last bastion of impressive speed is Google. Nobody else is doing breathtaking displays of performance any more, and it saddens me.


Growing up in the golden age of 8-bittery, my first experiences of computers involved painful cassette-based loads (not to mention hugely-unreliable saves), that took tens of minutes. Next stop was an Apple IIe, and the speed and reliability of its (single-sided) 5.25" drive made the frequent PLEASE INSERT SIDE 2 operations seem worthwhile.


My first hard-disk experience was at the other end of a high-school Farallon PhoneNet and it was barely quicker than the floppy drive in the Mac Plus I was accessing it from, but it was definitely an improvement.


Next stop was my Dad's 486 with a whopping 120Mb of Western Digital Caviar on its VESA Local Bus. What a beast - that thing flew through Write on Windows 3.11! Moore's Law (or at least the layman's interpretation of it - that raw speed doubles every 18 months) seemed to be in full effect; the world was an exciting place.


But then it ran Windows 95 like a dog. My Pentium 60 was definitely better (especially after I upped its RAM from 8 to 40Mb) but that snappiness was gone again when Windows 98 got its claws into it.


Now the internet's come along, and we've gone through the whole Intel giveth, Microsoft taketh away cycle again, but this time ADSL connections and Apache give, and poor HTML compliance, bloated, inefficient Javascript and whopping images are making it feel like the dial-up days are still here.


When I first started hacking together web pages, I would copy them in their entirety onto a floppy disk (remember those?) and load them into the browser from there for a taste of dialup speed. It worked really well for spotting places where I could get content to the eyeball faster.


If you are putting anything on the web, please do the modern-day equivalent and run YSlow against your stuff. And let's get that whoooosh back on the web!

Tuesday, 29 March 2011

Links - Q1-11

Some links that have caught my eye in the last 3 months:


  • Steve Losh - a stunningly clean blog design and thoughtful posts that make me suspect he's one of those sickening characters that can both make it work and make it beautiful...

  • CSS: Specificity Wars - a great (if geeky) explanation of CSS specificity that leverages your existing Star Wars knowledge (!)

  • XKCD on Good Code - funny because it's absolutely true

  • Cucumber - yes, it's Ruby-based, and it's a bit slow, but it's nice

  • CSS Innovations - you'll need a cutting-edge browser, but oh the things these guys will make it do!

  • XKCD on Servers - how can one guy be so deeply knowledgeable in so many areas?

  • Uncle Bob's Transformation Priority Premise - some pretty deep thinking from Mr. Clean Code. A quite fascinating theory about test-driven refactorings that makes a whole lotta sense.

Friday, 25 March 2011

Hot Skills or Not Skills?

A colleague sent me an intriguing email last week - Mixtent is a hot-or-not website in the style of Mark Zuckerberg's first foray into social networking, as seen at the start of The Social Network.


Except Mixtent is aimed at rating your professional colleagues (via LinkedIn's API) on certain skillsets. It asks you to compare two of your LinkedIn connections based on a certain skill (e.g. Java Development), and then repeats the comparison process about twenty times.


The really clever bit is the Google PageRank-style weighting that is placed on each survey participant, based on how they have been rated against others. This elegant twist makes good common sense and should hopefully prevent the system from being "gamed" by "bots" or rooms full of 10c-a-click fleshbots.


I'll be keeping an eye on this great idea - as a "global top 5%" Java developer I'll also be looking out for recruiters eager to utilise a potentially-outstanding new tool!

Tuesday, 22 March 2011

Vale Nokia

Farewell Nokia, you have completed the final stage in your reverse-metamorphosis from elegant butterfly to wriggling larva.


You reigned in the 1990s, producing phones that everyone wanted and everyone could use.


You gave us phones with a consistent and ubiquitous AC adapter, saving waste and rescuing the countless millions of consumers who were able to borrow some electrons and stay contactable.


You gave us the first phone with no antenna to get caught on things and break off.


You gave us a "corporate phone" so good that people are still using it almost ten years later.


You even gave us the phone Neo used in The Matrix!


And then you lost the plot. Along came practicable touchscreens, and your lovingly-honed, real-button-oriented OS suddenly looked simplistic (even though that simplicity was the main reason you were so popular). So you desperately played catch-up, and failed. Other technologies snuck up on you and you failed there too (I had the misfortune to have to develop for/work on an early N97, which actually "poisoned" any WiFi network it touched).


And now, in desperation, you've jumped into bed with another giant that has lost its way. They will carry on, as their name is now synonymous with mediocrity, but this will be the final straw for you, Nokia.


End Call

Tuesday, 15 March 2011

Long Weekends ...

... Like the one just passed in Victoria, are great. The short week that follows a Monday off is also extra-good; who doesn't love having to insert a leap-day in one's mental work-week calendar?!


I think my dream Optimal Office™ would have to feature a 9-day fortnight system just to get that feeling, every second week :-)

Friday, 11 March 2011

Getting Edgy

Us software developers, we're used to living on the edge. We need to be constantly thinking about the "seams" where we can make our code testable, we have to expend considerable brain-fuel on "edge cases" in our algorithms, and, let's be honest, as far as society goes, we're generally typecast as being outliers there too ... :-)


But probably the most important edges are the ones we expose to our fellow weirdos developers - our APIs.


A good API is one you can learn fast and completely. It should be lean and mean, exposing the minimum number of objects and methods to get the job done. It should use a consistent vocabulary and have a consistent approach (for example static void methods, "fluent" methods that return "this" for chaining, or conventional POJO/Manager interactions)


Some good examples:

  • Google Guava's Lists, Iterables, Sets and Maps utility classes, which offer very consistent static methods for dealing with their namesake Collection classes
  • Mockito and Hamcrest offer very readable and discoverable fluent interfaces: Mockito.when(mockThing.isReady()).thenReturn(false).thenReturn(true);

And some bad ones:
  • I always find Commons Lang's StringUtils.isBlank() vs StringUtils.isEmpty() to be a bit confusing!
  • JUnit's Assert.assertEquals(String, String, String) always requires a sneak peek at the javadoc to get the arguments right. They fixed it with the Assert.assertThat(String, T, Matcher<T>) style though - see the sketch below!
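
To see the difference in practice, here's a throwaway JUnit 4 sketch (not from any real suite) contrasting the two styles:

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;

import org.junit.Test;

public class AssertStyleTest {

    @Test
    public void contrastTheTwoAssertStyles() {
        String expected = "open";
        String actual = "open";

        // assertEquals(message, expected, actual): which argument goes where, again?
        assertEquals("gate state", expected, actual);

        // assertThat(message, actual, matcher): reads left-to-right, no javadoc required
        assertThat("gate state", actual, is(expected));
    }
}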

Tuesday, 8 March 2011

Flip-The-Switch Deployment

The most recent website release at my current site can only be described as une grande débâcle.


The reason was an incompatibility between the newly-deployed webapp and an (in-house, but non-local) service it depended on. It wasn't until the webapp refused to start in production that the incompatibility was detected.


Now I'm a very big fan of service-oriented architectures, particularly if they are of a lightweight (i.e. no XML/Schemas/SOAP Envelopes/WSDLs) nature, but they also require a greater level of diligence than a traditional big-ball-of-code when it comes to environments.


The problem here was that each project made its way through development→test→staging→production while connecting to other projects doing exactly the same thing. So in staging, our webapp pointed to the staging version of that other service. And because our website released to prod on its own, the versions didn't line up and a frenzy of finger-pointing began.


So how do we simultaneously avoid future débâcles and lower the collective blood-pressure on deployment-day? Well, for the comprehensive answer I can only point you towards Continuous Delivery, but in short:


  • Make staging and production physically identical and interchangeable
  • Run up your environment (your code and everyone else's services) in staging in a code-frozen environment as release day approaches
  • Belt staging with regression tests, performance tests, whatever you've got
  • On release day, "flip the switch" on your router, making staging the "live" environment
  • Keep the old prod "warm", ready to flip it back if any problems are found

This strategy dissipates the stress and risk of releasing new software over the entire "we're in staging" period, rather than one horrible day.


Needless to say, I'll be recommending a Continuous-Delivery approach in our next retro!

Friday, 4 March 2011

Tidier Varargs, part 3

Tidying up the loose ends

Some of you might have been a little incredulous at my earlier statement that no existing library offers methods to neatly deal with possibly-null varargs.


I stand by my position - it's the possibly-null aspect that is the kicker. Google Guava offers methods that almost get over the line:

My varargs method                                                        Google's almost-the-same method
Iterable<T> VarargIterator.forArgs(T... optionalArgs)                    ArrayList<E> Lists.newArrayList(E... elements)
Iterable<T> VarargIterator.forCompulsoryAndOptionalArgs(T t, T... ts)    List<E> Lists.asList(E first, E[] rest)

BUT they explode when fed nulls - making them pointless for this particular wart-removing exercise!


So the only thing left now is to clean up the implementation a little, because I think it's failing two of the rules of clean code:

  1. Runs all the tests
  2. Contains no duplications
  3. Expresses the intent of the programmers
  4. Minimizes the number of classes and methods


Here's the entire class as it stands (imports removed):

public class VarargIterator {
    /**
     * @return an {@link Iterable} of a suitable type. 
     * Null-safe; passing a null varargs will simply return
     * an "empty" {@code Iterable}.
     */
    public static final <T> Iterable<T> forArgs(T... optionalArgs) {
        if (optionalArgs == null) {
            return Collections.emptyList();
        } else {
            return Arrays.asList(optionalArgs);
        }
    }

    /**
     * @return an {@link Iterable} of a suitable type.
     * Null-safe; passing a null {@code optionalArgs} will simply return
     * a single-element {@code Iterable}.
     * @throws IllegalArgumentException if {@code compulsoryArg} is null
     */
    public static final <T> Iterable<T> forCompulsoryAndOptionalArgs(
        final T compulsoryArg, final T... optionalArgs) {

        Validate.notNull(compulsoryArg,
            "the first argument to this method is compulsory");
        if (optionalArgs == null) {
            return Collections.singletonList(compulsoryArg);
        } else {
            List<T> iterable = new ArrayList<T>(optionalArgs.length + 1);
            iterable.add(compulsoryArg);
            iterable.addAll(Arrays.asList(optionalArgs));
            return iterable;
        }
    }

}

The bits that I don't like are the repetition of the optionalArgs null check in both methods, and the code in the else case of the second method. It's working at a "different level" to the other code in the method, losing the intent, and there's too much of it - it's made the whole method too long.


Of course I have a full suite of unit tests so I can be confident that I'm not breaking anything when I do this refactoring work. I use Cobertura in its Maven and Eclipse plugin forms to ensure I'm achieving 100% coverage from these tests.
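
Those tests aren't shown in the post, but a representative few, in the behaviour-named TestNG style I keep banging on about, would look like this:

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertFalse;

import java.util.Iterator;

import org.testng.annotations.Test;

public class VarargIteratorTest {

    @Test(groups = "unit")
    public void shouldReturnEmptyIterableWhenVarargsIsNull() {
        assertFalse(VarargIterator.forArgs((Object[]) null).iterator().hasNext());
    }

    @Test(groups = "unit")
    public void shouldPrependCompulsoryArgToOptionalArgs() {
        Iterator<String> it =
            VarargIterator.forCompulsoryAndOptionalArgs("one", "two").iterator();
        assertEquals(it.next(), "one");
        assertEquals(it.next(), "two");
        assertFalse(it.hasNext());
    }

    @Test(groups = "unit", expectedExceptions = IllegalArgumentException.class)
    public void shouldThrowIllegalArgumentExceptionWhenCompulsoryArgIsNull() {
        VarargIterator.forCompulsoryAndOptionalArgs(null, "anything");
    }
}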


The first step is easy - extract out a well-named method for the else case:

    /**
     * @return an {@link Iterable} of a suitable type.
     * Null-safe; passing a null {@code optionalArgs} will simply return
     * a single-element {@code Iterable}.
     * @throws IllegalArgumentException if {@code compulsoryArg} is null
     */
    public static final <T> Iterable<T> forCompulsoryAndOptionalArgs(
        T compulsoryArg, T... optionalArgs) {

        Validate.notNull(compulsoryArg,
            "the first argument to this method is compulsory");
        if (optionalArgs == null) {
            return Collections.singletonList(compulsoryArg);
        } else {
            return createListFrom(compulsoryArg, optionalArgs);
        }
    }

    private static <T> Iterable<T> createListFrom(
        T compulsoryArg, T... optionalArgs) {

        final List<T> list = new ArrayList<T>(optionalArgs.length + 1);
        list.add(compulsoryArg);
        list.addAll(Arrays.asList(optionalArgs));
        return list;
    }
    

I dithered a bit about the null check. Doing some kind of is-null/is-not-null closurey-interfacey thing seemed like pretty major overkill, so in the end I just extracted out the logic into a method. As a bonus, I realised I could also check for a zero-length (as opposed to null) array and save some cycles in that case. One last tweak was to drop the explicit else branches - because the methods are now so short there seems little point. So here's the final version - enjoy!

public class VarargIterator {
    /**
     * @return an {@link Iterable} of a suitable type. 
     * Null-safe; passing a null varargs will simply return
     * an "empty" {@code Iterable}.
     */
    public static final <T> Iterable<T> forArgs(T... optionalArgs) {
        if (isEmpty(optionalArgs)) {
            return Collections.emptyList();
        } 
        return Arrays.asList(optionalArgs);
    }

    /**
     * @return an {@link Iterable} of a suitable type.
     * Null-safe; passing a null {@code optionalArgs} will simply return
     * a single-element {@code Iterable}.
     * @throws IllegalArgumentException if {@code compulsoryArg} is null
     */
    public static final <T> Iterable<T> forCompulsoryAndOptionalArgs(
        T compulsoryArg, T... optionalArgs) {

        Validate.notNull(compulsoryArg,
            "the first argument to this method is compulsory");
        if (isEmpty(optionalArgs)) {
            return Collections.singletonList(compulsoryArg);
        } 
        return createListFrom(compulsoryArg, optionalArgs);
    }

    private static boolean isEmpty(Object... optionalArgs) {
        return (optionalArgs == null) || (optionalArgs.length == 0);
    }

    private static <T> Iterable<T> createListFrom(
        T compulsoryArg, T... optionalArgs) {

        final List<T> list = new ArrayList<T>(optionalArgs.length + 1);
        list.add(compulsoryArg);
        list.addAll(Arrays.asList(optionalArgs));
        return list;
    }
}

Monday, 28 February 2011

Tidier Varargs, part 2

Last time we looked at how we could tidy up code that deals with a single, possibly-null varargs parameter. But of course there are lots of cases where the business rule is an "at-least-one" scenario.


An example of a real API that operates like this is the mighty Mockito's thenReturn method, which looks like this:

OngoingStubbing<T> thenReturn(T value, T... values)

Which is typically used to define mock behaviour like this:

Mockito.when(
    mockCountdownAnnouncer.getCurrent()).thenReturn(
        "Three", "Two", "One", "Liftoff");

The nice thing about this is the "fluent" way you can add or remove parameters from the list, and it just keeps working - as long as you've specified at-least-one return value.


This kind of API is really pleasant to use, but it'd be doubly great to have clean code within methods like this. The VarargIterator needs some extension!

    
    /**
     * @return an {@link Iterable} of a suitable type.
     * Null-safe; passing a null {@code optionalArgs} will simply return
     * a single-element {@code Iterable}.
     * @throws IllegalArgumentException if {@code compulsoryArg} is null
     */
    public static final <T> Iterable<T> forCompulsoryAndOptionalArgs(
        final T compulsoryArg, final T... optionalArgs) {

        Validate.notNull(compulsoryArg,
            "the first argument to this method is compulsory");
        if (optionalArgs == null) {
            return Collections.singletonList(compulsoryArg);
        } else {
            List<T> list = new ArrayList<T>(optionalArgs.length + 1);
            list.add(compulsoryArg);
            list.addAll(Arrays.asList(optionalArgs));
            return list;
        }
    }

As you can see, we encapsulate null-checking and null-safety in this method, allowing users to simply write (with a static import in this example):

    public OngoingStubbing<T> thenReturn(T value, T... values) {

        for (T retValue : forCompulsoryAndOptionalArgs(value, values)) {
            // Do Mockito magic with each retValue
        }
        return this; // sketch only - the real method returns the ongoing stubbing
    }

Mmmm. Squeaky-clean!
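
If you want to pin that at-least-one contract down, a quick TestNG check might look something like this (a sketch of my own - it assumes the test sits in the same package as VarargIterator, with Commons Lang on the classpath):

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertFalse;

import java.util.Iterator;

import org.testng.annotations.Test;

public class VarargIteratorTest {

    @Test(expectedExceptions = IllegalArgumentException.class)
    public void shouldThrowIllegalArgumentExceptionForNullCompulsoryArg() {
        VarargIterator.forCompulsoryAndOptionalArgs(null, "optional");
    }

    @Test
    public void shouldIterateJustTheCompulsoryArgWhenOptionalArgsAreNull() {
        Iterator<String> it = VarargIterator
            .forCompulsoryAndOptionalArgs("compulsory", (String[]) null)
            .iterator();
        assertEquals(it.next(), "compulsory");
        assertFalse(it.hasNext());
    }
}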

Friday, 25 February 2011

Tidier Varargs, part 1

One of the less-appreciated of the many massive improvements made to Java in version 1.5 was varargs, aka those little dots that are syntactic sugar sprinkled onto a strongly-typed array:

    public static void printAllOfThese(String... things) {
        for (String thing : things) {
            System.out.println(thing);
        }
    }

It goes hand-in-glove with the enhanced for loop. Very tidy. Except what happens when I inadvertently do this?

    printAllOfThese(null);

You probably correctly guessed: NullPointerException on line 1 of printAllOfThese. (The compiler passes that bare null as the whole String[] parameter - it even warns about the ambiguity - so things arrives as null.) Yep - sadly, the enhanced for loop is not null-safe. What we really want is for a null argument to be silently ignored. So our previously oh-so-tidy, oh-so-readable varargs method ends up needing a null-check wart:

    public static void printAllOfThese(String... things) {
        if (things != null) {   
            for (String thing : things) {
                System.out.println(thing);
            }
        }
    }

Ew. That's ugly, and in turn requires more test cases. There has to be a better way, but amazingly, I couldn't find any solution in the usual common libraries. So, somewhat reluctantly (because if premature optimisation is the root of all evil, then wheel-reinvention is surely sudo), I present the VarargIterator helper class:

import java.util.Arrays;
import java.util.Collections;

public class VarargIterator {

    /**
     * @return an {@link Iterable} of a suitable type. 
     * Null-safe; passing a null varargs will simply return
     * an "empty" {@code Iterable}.
     */
    public static final <T> Iterable<T> forArgs(T... optionalArgs) {
        if (optionalArgs == null) {
            return Collections.emptyList();
        } else {
            return Arrays.asList(optionalArgs);
        }
    }
}

This allows us to rewrite our printAllOfThese method very neatly:

    public static void printAllOfThese(String... things) {
        for (String thing : VarargIterator.forArgs(things)) {
            System.out.println(thing);
        }
    }
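
As a quick sanity check (a throwaway main of my own, not from the original post):

    public static void main(String[] args) {
        printAllOfThese("Hello", "World"); // prints both lines
        printAllOfThese((String[]) null);  // prints nothing - and no NPE
    }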

Next time: Elegantly dealing with compulsory and optional elements.

Tuesday, 22 February 2011

In The Beginning ...

... There were packets. And They Were Good. Our data was ritualistically encoded into fields and offered unto the network, and with much latency and the occasional sacrifice, was reassembled in hopefully-the-right-order on the other side. We gave thanks to the Internet Elders for this.


And then Wise Hackers realised you could just memcpy() thine C data structure into thine transmit buffer. And it too was good. For marshalling and unmarshalling data to-and-from arbitrary formats is surely the devil's own drudgery.


But lo the Enterprise Architects did speak. And speak XML they did. And they spake that XML was all that any shall speak. And so (for fear of being struck down by these mighty warriors), we encoded our data once more, this time in a verbose form so that those heathens who should sniff the wire should sniff many an angle-bracket and naught of any interest.


But the hackers did once more rise again and thus the prophet Json was begat. And he was good, for he was once more a literal representation of The Word. And there was much rejoicing amongst those who had lived in the shadow of the Architects.


And so it is that what was once old is new again, and what was once bad is now good. And this cycle shall go on for eternity.


Like an Enterprise Architect.

Friday, 18 February 2011

Welcome to Stand Up

Ah dammit, I was going to do the Fight Club thing, but I've just remembered that the Secret Geek beat me to it.


Never mind. But there are a few additions I could make to that excellent list:


  • You really do have to stand up. It speeds up the process
  • By all means talk fast, but please talk loudly too - don't mumble!
  • If you worked with a team-mate yesterday, mention them; but:
  • Don't cover what somebody else already has. You don't HAVE to increment your awesome-quotient each and every day. Just "pass", optionally referring to the prior speaker
  • Use a "talking stick" or other such binary semaphore to ensure exactly one person is speaking at a time
  • Invoke the talking stick on anyone who violates the semaphore - either metaphorically (he who talks out of turn has to fix the next build breakage) or literally (you'll need a pretty chilled OH&S policy here!)
  • Change things up by throwing the talking stick randomly around the circle so people pay attention
  • ANYTHING controversial is taken "offline"
  • No Agendas, Minutes or Action Items

Tuesday, 15 February 2011

Zero-to-coding-hero in 5, 4, 3 ...

Improving the new-starter experience - Part 2

Continuing on from last time, let's assume you've managed to rustle up a desk, chair and network-connected PC for your new starter in software development. What's next?


In my experience, a good two days are typically wasted watching progress bars dawdle across screens - downloads, installations, setups, checkouts, plugins, pah! This has always seemed like a right-royal waste of time to me. Firstly, by installing "the latest version" of Java/Eclipse/whatever, the new starter's setup is almost certain to differ in some possibly-important way from the other developers' platform configurations. Secondly, downloading anything off the internet will always be slower than copying it from the local network.


Here's what I propose, which should have a newbie eyeballing the codebase within minutes of logging in:

  • Step 1. Install a virtuali[sz]ation environment. I'm a big fan of VirtualBox OSE on the Ubuntu desktop, but really, anything will do
  • Step 2. Copy a "developer's disk image" (aka "one I prepared earlier") from a network drive onto the local disk - anywhere you like really. This is a bit-for-bit copy of a fully-working desktop development environment:
    • OS
    • Java - the standard version for your site
    • Eclipse with all appropriate plugins
    • Tomcat
    • Database with JDBC drivers
    • A known-good (if old) source tree
    • etc
  • Step 3. Launch the image. On a modern PC, the performance hit of running under virtuali[sz]ation is so minimal as to be insignificant


Now wasn't that easy? Of course it will take a bit of work to set up that first "gold image", but the cost gets amortised over each new developer's immense time savings. The image can be brought up-to-date regularly (if infrequently) - say, as a post-release action. A quick source-control update, and the new starter is ready to actually start! Huzzah!

Friday, 11 February 2011

Zero-to-coding-hero in 5, 4, 3 ...

Improving the new-starter experience - Part 1

Another new contract, another interesting getting-set-up experience. Let's break down what a developer needs in order to be a productive team member at a new site - firstly, basic/physical items:


  • An Access Pass/Code that allows ingress/egress to the place of work.
  • A Desk and a Chair, preferably close to their new team, if not right in the middle of it
  • A PC with a Functional Network Connection - note an OS is not necessarily required, but a valid IP address certainly is
  • A network login with an email address - something that can at least get through the (inevitable) corporate proxy and receive internal emails

So what can go wrong? Well, the paperwork for the new starter might not have come through, or perhaps it's not clear what access the developer needs. A temporary access pass should be sufficient for the first week or so.


Desks and chairs can frequently be in very short supply - if a team is "skilling-up" there won't be a recently-vacated workstation, so a degree of flexibility is required in accommodating the newbie. It's extremely important that they can be sited physically close to their team, in particular the most experienced/technically-strongest member(s), so that questions can be fielded quickly and efficiently.


The PC is probably the most interesting point. At my new site they allow developers to set up their PCs any way they like - although practically, the OS choice seems to have boiled down to Windows or Ubuntu. This is impressively open - they are trusting that a developer will either not mangle their machine OR that they will be able to fix it if they do!


Of course it's also quite likely that a new starter will be taking over an ex-employee's PC. In that case it would be insane to blow away a known-good configuration.


The other likely (although least-developer-liked) option is a locked-down, corporate-imaged desktop PC - the same one given to sales drones, secretaries and Milton from Accounts. This can be a real pain if installing developer tools requires extra privileges! Which brings me to:


The network login - it can take a while to materialise from the bowels of IT. A nice advantage I had with my fresh Ubuntu installation was complete independence from the corporate Active Directory/Windows domain malarkey. My login arrived a good 24 hours after my PC - but I already had a full Java development environment configured in the meantime, complete with root access. When it finally arrived I just put the cherry on top: getting Evolution to access my email account.


So what would be the policy in my Optimal Office™?

  • There should be a pool of Guest Access Passes at the front desk. These will be replaced as soon as possible with properly-customised access cards for new starters
  • Each team "pod" should always have a spare desk, chair and (possibly-unconfigured) PC available. That way a new starter can hit the ground running. The next desk/chair/PC combo should be ordered/allocated as soon as the new hire is confirmed
  • I certainly wouldn't have a Windows-based network so I'd be advocating Ubuntu desktops all around. They're free, install quickly, support any hardware, you can set them up without knowing any network credentials and there's tons of help available on the net

Tuesday, 8 February 2011

This One Goes Up To '11

... back for another big year

Happy belated New Year!


As promised/threatened, I'm back in the blogging seat for another year. My new contract has started and I'm getting nicely into the thick of things, but there have been a few niggles that I've been thinking about - my OptimalOffice™ tag will be getting more of a workout soon!


At some point fairly soon (perhaps coinciding with the release of the next Ubuntu Server version in April) I plan on embarking on what I will modestly title The Ultimate Continuous-Integration Server Setup Guide - which will be pitched at the developer who wants a Really Nice Build Environment - no prior system admin experience required! No doubt it'll be a monster, with gotchas galore, but hopefully it'll save a few follicles out there somewhere.


Hope you'll join me for another exciting year in modern software development.