Thursday, 23 May 2013

nginx as a CORS-enabled HTTPS proxy

So you need a CORS frontend to your HTTPS target server that is completely unaware of CORS.

I tried doing this with Apache, but I couldn't get it to generate a response to the "preflight" HTTP OPTIONS request that CORS-compliant frameworks like jQuery send.

Nginx turned out to be just what I needed, and furthermore it felt better too - none of that module-enabling busywork, and the configuration file is more programmer-friendly, with indentation, curly braces for scope, and if statements that allow a certain feeling of control flow.

So without any further ado (on your Debian/Ubuntu box, natch):

Get Nginx:
sudo apt-get install nginx

Get the Nginx HttpHeadersMore module, which allows the CORS headers to be applied whether or not the request was successful (important!):
sudo apt-get install nginx-extras

Now for the all-important config (in /etc/nginx/sites-available/default ) - We'll go through the details after this:

# Act as a CORS proxy for the given HTTPS server(s)
server {
  listen 443 default_server ssl;
  server_name localhost;

  # Fake certs - fine for development purposes :-)
  ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
  ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;

  ssl_session_timeout 5m;

  # Make sure you specify all the methods and Headers 
  # you send with any request!
  more_set_headers 'Access-Control-Allow-Origin: *';
  more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
  # NB: browsers won't honour credentials combined with a wildcard
  # origin - echo the real Origin instead if you need cookies/auth
  more_set_headers 'Access-Control-Allow-Credentials: true';
  more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';

  location /server1/  {
    include sites-available/cors-options.conf;
    proxy_pass https://<actual server1 url>/;
  }

  location /server2/  {
    include sites-available/cors-options.conf;
    proxy_pass https://<actual server2 url>/;
  }
}
And alongside it, in /etc/nginx/sites-available/cors-options.conf:
    # Handle a CORS preflight OPTIONS request 
    # without passing it through to the proxied server 
    if ($request_method = OPTIONS ) {
      add_header Content-Length 0;
      add_header Content-Type text/plain;
      return 204;
    }
What I like about the Nginx config file format is how it almost feels like a (primitive, low-level, but powerful) controller definition in a typical web MVC framework.

We start with some "globals" to indicate we are using SSL. Note we are only listening on port 443, so you can have some other server running on port 80. Then we specify the standard CORS headers, which will be applied to EVERY request, whether handled locally or proxied through to the target server, and even if the proxied request results in a 404:
  more_set_headers 'Access-Control-Allow-Origin: *';
  more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
  more_set_headers 'Access-Control-Allow-Credentials: true';
  more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';

This last point can be important - your JavaScript client might need to inspect the body of an error response to work out what to do next - but if that response doesn't have the CORS headers applied, the client is not actually permitted to see it!
The little if statement that is included for each location provides functionality that I simply couldn't find in Apache: the explicit response to a preflight OPTIONS request:
    if ($request_method = OPTIONS ) {
      add_header Content-Length 0;
      add_header Content-Type text/plain;
      return 204;
    }
The target server remains blissfully unaware of the OPTIONS requests, so you can safely firewall them out from this point onwards if you want. (A 204 No Content is the technically-correct response code for a success with no body ("content"), if you were wondering.)
The last part(s) can be repeated for as many target servers as you want, and you can use whatever naming scheme you like:
  location /server1/  {
    include sites-available/cors-options.conf;
    proxy_pass https://<actual server1 url>/;
  }

This translates requests for https://localhost/server1/... into requests for https://<actual server1 url>/... - the trailing slash on the proxy_pass URL is what makes Nginx strip the /server1/ prefix before passing the request through.

This config has proven useful for running some Jasmine-based JavaScript integration tests - hopefully it'll be useful for someone else out there too.

Tuesday, 21 May 2013

2013 Nose-Wrinklers

You know what I'm talking about. A colleague starts at a new gig and you meet up after a few weeks; "So how is it?"

And they wrinkle up their nose and say "awwww, it's alright. The people are good..."

You press a bit more, and it's the technology that is making them sad. And it turns out that while it was there in the job description, your colleague thought that it wasn't as important as the other stuff. Or it was glossed over in the interview. Yeah.

So I hereby present my 2013 Nose-Wrinklers -  If you see these technologies in a position description, consider carefully whether you want to be working with them in 2013:

  • Struts - At the time, it was okay. That time has passed. The boilerplate code and design compromises required simply to serve up some pages are unacceptable today
  • CVS/SVN - I'm not saying that Git will solve everything. But it will get in the way a whole lot less than these antique attempts at version control
  • Ant - Managing your own dependencies should take exactly one line of configuration. Specifying how to build your project's deliverable should take zero.
  • IBM WebSphere - an application server for companies that feel like they need to pay money to someone for something you can get for free. Except the free offering is faster, more secure, more standards-compliant and more stable. Riiiight.
  • "J2EE" - It's not J2EE, it's Java EE and has been for seven years. Whatever it's called, it's a very long way from the lightweight components most developers actually enjoy building.
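To put a number on that Ant gripe, here's a hypothetical Gradle build file sketch (the artifact coordinates are examples only, not from any real project) where declaring a dependency really is one line, and describing how to build the deliverable takes zero:

```groovy
// build.gradle - a sketch; the dependency shown is just an example
apply plugin: 'java'

repositories { mavenCentral() }

dependencies {
    compile 'org.apache.commons:commons-lang3:3.1'
}
// No build logic needed: "gradle build" produces the JAR.
```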

Thursday, 9 May 2013

Really Grails, Really?

I'm a bit grumpy at Grails today. We're trying to get a Grails app into something like a production-ready state and Grails is repeatedly showing me that it really doesn't want to be there.

 A couple of weeks ago, our Ops guys requested (completely reasonably) that the app be configurable "externally" - i.e. tweak a properties file somewhere on the filesystem, restart the app, and bingo.

Rather than the godawful files-in-classpath mess that results in excessive trawling through unpacked WAR files in Tomcat's webapps directory, followed by nervous vi-ing and praying something doesn't come along and blow away your delicate changes. Nod sagely if you've done that crap about a bazillion times.

So the change was duly made - there are a million "externalise your Grails config" blogs out there, Google away if you care. But almost all of them are totally on the Groovy Kool-Aid: the externalised config files are .groovy files, curly brackets and all. Our guys not only don't like that (I don't blame them), they won't allow it for security reasons - you can put executable "stuff" in such a file and do who-knows-what to a system.
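To see why Ops get nervous, consider this hypothetical sketch of an externalised .groovy config file - it is parsed as real Groovy code, so anything in it runs at config-load time:

```groovy
// external-config.groovy - hypothetical sketch, NOT from our app.
// Looks like innocent configuration...
feature.enabled = false

// ...but this is a full Groovy script, so it can also do this:
"touch /tmp/pwned".execute()   // arbitrary command, run on startup
```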

So the Grails Doco on externalised config says .properties files are A-OK. Great! Let's try something then:

In our externalised property file:
config.feature.enabled = false
In a Grails controller, some time later:
if (config.feature.enabled) {
  println "Combobulating the Doodads"
} else {
  println "Enabled I Am Not"
}

Run it, and out comes "Combobulating the Doodads" regardless. A .properties file can only ever give you Strings, so config.feature.enabled is the String "false" - and under Groovy-truth, any non-empty String is true.

And get very, very VERY annoyed.
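The underlying gotcha can be reproduced outside Grails entirely - a minimal Java sketch, using nothing but the standard library, showing that a .properties file hands you back the String "false", never a boolean:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesGotcha {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // Same line as in the externalised property file above
        props.load(new StringReader("config.feature.enabled = false"));

        Object value = props.get("config.feature.enabled");
        // The value is a String, not a Boolean:
        System.out.println(value.getClass().getName());      // java.lang.String
        // ...so any truthiness-style check on it is misleading;
        // an explicit parse is required to get the intended false:
        System.out.println(Boolean.parseBoolean((String) value)); // false
    }
}
```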

Friday, 3 May 2013


Some cool stuff that makes building webapps in 2013 way easier:
  • Play! Framework 2.1 - Scala, Hit-refresh recompilation, built-in LESS, proper request routing (no annotations!) and no XML. Nuff said.
  • LESS CSS - Does a great job of de-repetitioning CSS.
  • Angular.JS - the most unobtrusive client-side framework I've ever used - just feels like "dynamic HTML"
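As a taste of that de-repetitioning, a tiny hypothetical LESS sketch (colour and class names invented for illustration) - one variable and some nesting replace the usual copy-paste:

```less
@brand: #0a7;

.button {
  color: @brand;
  border: 1px solid darken(@brand, 10%);
  &:hover { color: darken(@brand, 20%); }
}
```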
Also (not yet used, but fully intend to):
I note that in 2013, Standing on the Shoulders Of Giants is more like Standing on the Shoulders of Giants Standing on the Shoulders of Giants - all of the above technologies build on something a bit older, a bit cruftier, a bit trickier.
Have we finally hit the Industrial Revolution in software development?