
Sunday, 8 September 2019

I like to watch

Remember how computers were supposed to do the work, so people had time to think?

Do you ever catch yourself operating in a tight loop, something like:

  10 Edit file
  20 Invoke [some process] on file
  30 Read output of [some process]
  40 GOTO 10
Feels a bit mechanical, no? Annoying having to switch from the text editor to your terminal window, yes?

Let's break down what's actually happening in your meatspace loop*:

  10 Edit file in text editor
  15 Save file (e.g. Ctrl-S)
  17 Change active window to Terminal (e.g. Cmd/Alt+Tab)
  20 Press [up-arrow] once to recall the last command
  22 Visually confirm it's the expected command
  25 Hit [Enter] to execute the command
  30 Wait for the command to complete
  35 Interpret the output of the command
  40 Cmd/Alt+Tab back to the editor
  45 GOTO 10
That's even more blatant! Most of the work is telling the computer what to do next, even though it's exactly the same as the last iteration.
(*) Props to anyone recognising the BASIC line-numbering reference 😂

Wouldn't something like this be better?

  10 Edit file in text editor
  15 Save file (e.g. Ctrl-S)
  20 Swivel eyeballs to terminal window
  30 Wait for the command to complete
  35 Interpret the output of the command
  40 Swivel eyeballs back to editor
  45 GOTO 10

It can be better

If you've got npm on your system somewhere, it's as simple as:

  $ npm install -g watch
and arranging your UI windows suitably. Having multiple monitors is awesome for this. Now by invoking:
  $ watch '[processing command with arguments]' dir1 dir2 ... dirN
you have the machine on your side. As soon as you save any file in dir1, dir2 etc, the command will be run for you. Here are some examples:

Validate a CircleCI build configuration
You're editing the .circleci/config.yml of a CircleCI continuously-integrated project. These YAML files are notoriously tricky to get right (whitespace matters...🙄) - so you can get the circleci command-line tool to check your work each time you save the file:
  $ brew install circleci
  $ watch 'circleci config validate' .circleci
Validate a Terraform configuration
You're working on a Terraform infrastructure-as-code configuration. These .tf files can have complex interrelationships - so you can get the terraform command-line tool to check your work each time you save the file:
  $ brew install terraform
  $ watch 'terraform validate' .
Auto-word-count an entire directory of files
You're working on a collection of files that will eventually be collated together into a single document. There's a word-limit applicable to this end result. How about running wc to give you a word-count whenever you save any file in the working directory?
  $ watch 'wc -w *.txt' .
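And since watch accepts multiple directories, this scales to a whole document tree too - the directory names here are just hypothetical:
  $ watch 'wc -w chapters/*.txt appendices/*.txt' chapters appendices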

Power tip

Sometimes, the command in your watch expression is so quick (and/or its output so terse), you can't tell whether you're seeing the most-recent output. One way of solving this is to prefix the time-of-day to the output - a quick swivel of the eyeballs to the system clock will confirm which execution you're looking at:

  $ watch 'echo `date '+%X'` `terraform validate`' .
  > Watching .
  13:31:59 Success! The configuration is valid. 
  13:32:23 Success! The configuration is valid.
  13:34:41 Success! The configuration is valid.
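If the nested-backtick quoting in that one-liner makes you nervous, a tiny wrapper script does the same job - a sketch, assuming a POSIX shell (check.sh is a made-up name):

  $ cat check.sh
  #!/bin/sh
  # print a timestamp, then run the real validation command
  date '+%X'
  terraform validate
  $ chmod +x check.sh
  $ watch './check.sh' .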
  

Saturday, 30 July 2016

Vultr Jenkins Slave GO!

I was alerted to the existence of VULTR on Twitter - high-performance compute nodes at reasonable prices sounded like a winner for Jenkins build boxes. After the incredible flaming-hoop-jumping required to get OpenShift Jenkins slaves running (and able to complete builds without dying) it was a real pleasure to have the simplicity of root-access to a Debian (8.x/Jessie) box and far-higher limits on RAM.

I selected a "20Gb SSD / 1024Mb" instance located in "Silicon Valley" for my slave. Being on the opposite side of the US to my OpenShift boxes feels like a small, but important factor in preventing total catastrophe in the event of a datacenter outage.

Setup Steps

(All these steps should be performed as root):

User and access

Create a jenkins user:
addgroup jenkins
adduser jenkins --ingroup jenkins

Now grab the id_rsa.pub from your Jenkins master's .ssh directory and put it into /home/jenkins/.ssh/authorized_keys. In the Jenkins UI, set up a new set of credentials corresponding to this, using "use a file from the Jenkins master .ssh" (which, by the way, on OpenShift will be located at /var/lib/openshift/{userid}/app-root/data/.ssh/jenkins_id_rsa).
I like to keep things organised, so I made a vultr.com "domain" container and then created the credentials inside.
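For completeness, the key-wiring on the slave side is just the usual SSH dance - something like this, assuming you've copied the master's id_rsa.pub across to /tmp first:

mkdir -p /home/jenkins/.ssh
cat /tmp/id_rsa.pub >> /home/jenkins/.ssh/authorized_keys
chown -R jenkins:jenkins /home/jenkins/.ssh
chmod 700 /home/jenkins/.ssh
chmod 600 /home/jenkins/.ssh/authorized_keys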

Install Java

echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main" | tee /etc/apt/sources.list.d/webupd8team-java.list
echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
apt-get update
apt-get install oracle-java8-installer

Install SBT

apt-get install apt-transport-https
echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
apt-get update
apt-get install sbt

More useful bits

Git
apt-get install git
NodeJS
curl -sL https://deb.nodesource.com/setup_4.x | bash -
apt-get install nodejs
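A quick sanity check that everything landed (version numbers will obviously depend on when you run this):

java -version
sbt sbtVersion
git --version
node -v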

This machine is quite dramatically faster and has twice the RAM of my usual OpenShift nodes, making it extra-important to have those differences defined per-node instead of hard-coded into a job. One thing I was surprised to have to define was a memory limit for the JVM (via SBT's -mem argument) as I was getting "There is insufficient memory for the Java Runtime Environment to continue" errors when letting it choose its own upper limit. For posterity, here are the environment variables I have configured for my Vultr slave:

Friday, 12 December 2014

Walking away from CloudBees Part 5 - Publishing and Fine-Tuning

Publishing private artefacts to a private Nexus repository
As per my new world order diagram, I decided to use my third and final free OpenShift node as a Nexus box, and what a great move that turned out to be. Without a doubt the easiest setup of a Nexus box I've ever experienced:
  • Log in to OpenShift
  • Click the Add Application... button
  • Scroll down to the Code Anything heading, and paste http://nexuscartridge-openshiftci.rhcloud.com/ into the URL textbox
  • Click Next, nominate the URL for the box, and wait a few minutes
Wow. More detail (if you need it) from OpenShift.
Publishing open-source artefacts to a public repository
As all of my open-source efforts are now written in Scala with SBT as the build tool, it was a simple matter to add the bintray-sbt plugin to each of them, allowing publication to BinTray, or more specifically, The Millhouse Group's little corner of it.

The only trick here was SSHing into the Jenkins Build slave (one time) and adding an ${OPENSHIFT_DATA_DIR}/.bintray/.credentials file so that an sbt publish would succeed.
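From memory, that credentials file is just the standard bintray-sbt properties format - something along these lines, with obvious placeholders for the real values:

realm = Bintray API Realm
host = api.bintray.com
user = {bintray username}
password = {bintray API key}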
Deployment of webapps to Heroku
As with most things open and/or free, someone has been here before - this blog post, together with the Heroku Jenkins Plugin README were a very good starting point for getting this all working.

In brief, the steps are:
  • Install the Heroku and Git Publisher Jenkins plugins
  • Grab your Heroku API key from your Account Settings page, and put it into Manage Jenkins -> Configure System -> Heroku -> API Key
  • Grab the details of the Heroku remote from your .git/config in your local repo, or from the "Git URL" in the Info on your app's Settings page on Heroku (there's a sketch of that stanza just after this list).
  • Set this up as an additional Git repo in your Jenkins build, and name it heroku. For safety, I like to name my other repo (i.e. the one holding the source that triggers builds) appropriately as well; it avoids confusion.
    • Actual example:
    • I name my source repo bitbucket
    • Thus my Branch Specifier is bitbucket/master
  • Add a new Git Publisher Post-Build Action, that pushes to heroku/master when the build succeeds
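For reference, the heroku remote stanza in .git/config looks something like this - the app name is made up, and newer apps may show the https://git.heroku.com/... URL form instead:

[remote "heroku"]
    url = git@heroku.com:my-app.git
    fetch = +refs/heads/*:refs/remotes/heroku/*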
Fine-tuning the OpenShift build setup
Having to do "Layer-8" timezone conversion when reading build logs is just annoying, so put the slave node into your local time zone by navigating to (Manage Jenkins -> Manage Nodes -> Slave -> Configure icon -> Launch Method -> Advanced -> JVM Options) (phew!) and setting it to:
-Duser.home=${OPENSHIFT_DATA_DIR} -Duser.timezone="Australia/Melbourne" -XX:MaxPermSize=1M -Xmx2M -Xss128k
(You might need to consult the list of Java timezone ids)

The final pieces of the puzzle were configuring the "final destinations" of my private artifacts (my open-source artifacts get sent to BinTray courtesy of the bintray-sbt plugin), as detailed in the sections above.

After that, a little bit of futzing around to get auto-triggered builds working from both GitHub and BitBucket, and I had everything back to normal, or possibly, even better - I now have unlimited app slots on Heroku versus four on CloudBees - and I'm somewhat insulated from outages of a single provider. Happy!

Tuesday, 4 November 2014

Walking away from CloudBees Episode 4: A New Hope

With CloudBees leaving the free online Jenkins scene, I was unable to Google up any obvious successors. Everyone seems to want cash for builds-as-a-service. It was looking increasingly likely that I would have to press some of my own hardware into service as a Jenkins host. And then I had an idea. As it turns out, one of those cloudy providers that I had previously dismissed, OpenShift, actually is exactly what is needed here!

The OpenShift Free Tier gives you three Small "gears" (OpenShift-speak for "machine instance"), and there's even a "cartridge" (OpenShift-speak for "template") for a Jenkins master!

There are quite a few resources to help with setting up a Jenkins master on OpenShift, so I won't repeat them, but it was really very easy, and so far, I haven't had to tweak the configuration of that box/machine/gear/cartridge/whatever at all. Awesome stuff. The only trick was that setting up at least one build-slave is compulsory - the master won't build anything for you. Again, there are some good pages to help you with this, and it's nothing too different to setting up a build slave on your own physical hardware - sharing SSH keys etc.

The next bit was slightly trickier; installing SBT onto an OpenShift Jenkins build slave. This blog post gave me 95 percent of the solution, which I then tweaked to get SBT 0.13.6 from the official source. This also introduced me to the Git-driven configuration system of OpenShift, which is super-cool, and properly immutable unlike things like Puppet. The following goes in .openshift/action_hooks/start in the Git repository for your build slave, and once you git push, the box gets stopped, wiped, and restarted with the new start script. If you introduce an error in your push, it gets rejected. Bliss.
cd $OPENSHIFT_DATA_DIR
if [[ -d sbt ]]; then
  echo "SBT installed"
else
  SBT_VERSION=0.13.6
  SBT_URL="https://dl.bintray.com/sbt/native-packages/sbt/${SBT_VERSION}/sbt-${SBT_VERSION}.tgz"
  echo Fetching SBT ${SBT_VERSION} from $SBT_URL
  echo Installing SBT ${SBT_VERSION} to $OPENSHIFT_DATA_DIR
  curl -L $SBT_URL  -o sbt.tgz
  tar zxvf sbt.tgz sbt
  rm sbt.tgz
fi
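One gotcha worth remembering: the action hook needs to be executable once it lands on the gear, so make sure Git is tracking the executable bit before you push - something like:

chmod +x .openshift/action_hooks/start
git add .openshift/action_hooks/start   # stages the mode change too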

The next hurdle was getting SBT to not die because it can't write into $HOME on an OpenShift node, which was fixed by setting -Duser.home=${OPENSHIFT_DATA_DIR} when invoking SBT. (OPENSHIFT_DATA_DIR is the de-facto writeable place for persistent storage in OpenShift - you'll see it mentioned a few more times in this post)

But an "OpenShift Small gear" build slave is slow and severely RAM-restricted - so much so that at first, I was getting heaps of these during my builds:
...
Compiling 11 Scala sources to /var/lib/openshift//app-root/data/workspace//target/scala-2.11/test-classes... 
FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel
 at hudson.remoting.RequestAbortedException.wrapForRethrow(RequestAbortedException.java:41)
 at hudson.remoting.RequestAbortedException.wrapForRethrow(RequestAbortedException.java:34)
 at hudson.remoting.Request.call(Request.java:174)
 at hudson.remoting.Channel.call(Channel.java:742)
 at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:168)
 at com.sun.proxy.$Proxy45.join(Unknown Source)
 at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:956)
 at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:137)
 at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:97)
 at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
 at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
 at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:756)
 at hudson.model.Build$BuildExecution.build(Build.java:198)
 at hudson.model.Build$BuildExecution.doRun(Build.java:159)
 at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
 at hudson.model.Run.execute(Run.java:1706)
 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
 at hudson.model.ResourceController.execute(ResourceController.java:88)
 at hudson.model.Executor.run(Executor.java:232)
...
which is actually Jenkins losing contact with the build slave, because it has exceeded the 512Mb memory limit and been forcibly terminated. The fact that it did this while compiling Scala - specifically while compiling Specs2 tests - reminds me of an interesting investigation done about compile time that pointed out how Specs2's trait-heavy style blows compilation times (and I suspect, resources) out horrendously compared to other frameworks - but that is for another day!

If you are experiencing these errors on OpenShift, you can actually confirm that it is a "memory limit violation" by reading a special counter that increments when the violation occurs. Note this count never resets, even if the gear is restarted, so you just need to watch for changes.

A temporary fix for these issues seemed to be running sbt test rather than sbt clean test; obviously this was using just slightly less heap space and getting away with it, but I felt very nervous at the fragility of not just this "solution" but also of the resulting artifact - if I'm going to the trouble of using a CI tool to publish these things, it seems a bit stupid to not build off a clean foundation.

So after a lot of trawling around and trying things, I found a two-fold solution to keeping an OpenShift Jenkins build slave beneath the fatal 512Mb threshold.

Firstly, remember that while a build slave is executing a job, there are actually two Java processes running - the "slave communication channel" (for want of a better phrase) and the job itself. The JVM for the slave channel can safely be tuned to consume very few resources, leaving more for the "main job". So, in the Jenkins node configuration for the build slave, under the "Advanced..." button, set the "JVM Options" to:
-Duser.home=${OPENSHIFT_DATA_DIR} -XX:MaxPermSize=1M -Xmx2M -Xss128k

Secondly, set some more JVM options for SBT to use - for SBT > 0.12.0 this is most easily done by providing a -mem argument, which will force sensible values for -Xms, -Xmx and -XX:MaxPermSize. Also, because "total memory used by the JVM" can be fairly well approximated by the equation:
Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
it becomes apparent that it is very important to clamp down the Stack Size (-Xss) as a Scala build/test cycle can spin up a lot of them. So each of my OpenShift Jenkins jobs now does this in an "Execute Shell":
export SBT_OPTS="-Duser.home=${OPENSHIFT_DATA_DIR} -Dbuild.version=$BUILD_NUMBER"
export JAVA_OPTS="-Xss128k"

# the -mem option will set -Xmx and -Xms to this number and PermGen to 2* this number
../../sbt/bin/sbt -mem 128 clean test
This combination seems to work quite nicely in the 512Mb OpenShift Small gear.
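Plugging the settings above into that equation - and assuming, purely for illustration, a peak of around 100 threads during a Scala compile/test run - gives:
128 ([-Xmx]) + 256 ([-XX:MaxPermSize], i.e. 2 x 128) + 100 x 0.125 ([-Xss] of 128k) ≈ 397Mb
Add the couple of Mb used by the tuned-down slave channel JVM and there's still comfortable headroom under the 512Mb gear limit.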

Saturday, 1 November 2014

Walking away from Run@Cloud Part 3: Pause and Reflect

As a happy free-tier CloudBees user, my "build ecosystem" looked like this:


As CloudBees seem to have gone "Enterprise" in the worst possible way (from my perspective) and don't have any free offerings any more, I was now looking for:
  • Git repository hosting (for private repos - my open-source stuff is on GitHub)
  • A private Nexus instance to hold closed-source library artifacts
  • A public Nexus instance to hold open-source artifacts for public consumption
  • A "cloud" Jenkins instance to build both public- and private-repo-code when it changes;
    • pushing private webapps to Heroku
    • publishing private libs to the private Nexus
    • pushing open-source libs to the public Nexus
... and all for as close to $0 as possible. Whew!

I did a load of Googling, and the result of this is an ecosystem that is far more "diverse" (a charitable way to say "dog's breakfast") but still satisfies all of the above criteria, and it's all free. More detail in blog posts to come, but here's what I've come up with:

Tuesday, 28 October 2014

Walking away from Run@Cloud. Part 2: A Smooth Transition

So, having selected Heroku as my new runtime platform, how to move my stuff on there?

On the day of their announcement, CloudBees provided an FAQ and a Migration Guide for their current customers.

In addition, Heroku most considerately have a CloudBees-to-Heroku migration guide (updated on the day of the CloudBees announcement, nice).

Setting up on Heroku proved delightfully simple, and with a git push heroku master from my machine, my first app was "migrated". Up and running, and actually (according to my simple metrics) responding more quickly than when it was hosted on CloudBees. Epic win, amirite?

Well, not entirely. The git push deploy method is all very well, but I dislike the implied trust it puts in the "pusher". How does anybody know what is in that push? Does it pass the tests? Does it even compile? When CloudBees was my end-to-end platform, I had the whole CI/CD chain thing happening so only verified, test-passing code actually made it through the gate. But Heroku doesn't offer such a thing - they just run what you push to them.

Well, if CloudBees wants to become the cloud Jenkins instance, and they continue to have a free offering, I will continue to use it. So let's get CloudBees building and testing my stuff, and then fire it over to Heroku to run it, all from a Jenkins instance on CloudBees.

Oh dear. CloudBees are no longer offering a free Jenkins service.

Back to the drawing-board!

Wednesday, 8 October 2014

Walking away from Run@Cloud. Part 1: Finding A Worthy Successor

In very disappointing news, last month CloudBees announced that they would be discontinuing their Run@Cloud service, a facility I have been happily using for a number of years.

I was using a number of CloudBees services, namely:
  • Dev@Cloud Repos - Git repositories
  • Dev@Cloud Builds - Jenkins in the cloud
  • Run@Cloud Apps - PaaS for hosted Java/Scala apps
  • Run@Cloud Database Service - for some MySQL instances
  • MongoHQ/Compose.io ecosystem service - MongoDB in the cloud

In short, a fair bit of stuff:
... the best bit being of course, that it was all free. So now I'm hunting for a new place for most, if not all, of this stuff to live, and run, for $0.00 or as close to it as conceivably possible. And before you mention it, I've done the build-it-yourself, host-it-yourself thing enough times to know that I do not ever want to do it again. It's most definitely not free for a start.

After a rather disappointing and fruitless tour around the block, it seemed there was only one solution that encompassed what I consider to be a true "run in the cloud" offering, for JVM-based apps, at zero cost. Heroku.

Wednesday, 21 May 2014

Easy artifact uploads: MacOS to WebDAV Nexus

I posted a while back about using the WebDAV plugin for SBT to allow publishing of SBT build artifacts to the CloudBees repository system.

Well it turns out that this plugin is sadly not compatible with SBT 0.13, so it's back to the drawing board when publishing a project based on the latest SBT.

Luckily, all is not lost. As hinted at in a CloudBees post, your repositories are available via WebDAV at exactly the same location you use in your build.sbt to access them, but via https.

And the MacOS Finder can mount such a beast automagically via the Connect To Server (Command-K) dialog - supply your account name (i.e. the word in the top-right of your CloudBees Grand Central window) rather than your email address, and boing, you've got a new filesystem mounted, viz:





The only thing the WebDAV plugin actually did was create a new directory (e.g. 0.6) on-demand - so if you simply create the appropriately-named "folder" via the MacOS Finder, a subsequent, completely-standard SBT publish will work just fine.

You might even want to create a whole bunch of these empty directories (e.g. 0.7, 0.8, 0.9) while you're in there, so you don't get caught out if you decide to publish on a whim from somewhere else.
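If you'd rather stay in the terminal, the mounted share appears under /Volumes like any other volume, so (with made-up group/artifact names - ls /Volumes to confirm the actual mount point) something like this works too:

$ cd /Volumes/release/com/example/mylib_2.10
$ mkdir 0.7 0.8 0.9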

Tuesday, 25 June 2013

Publishing from SBT to CloudBees

As I'm a massive CloudBees fan, I'm starting to use more and more of their offerings. I've got a number of Play! Framework 2.1 apps (Scala flavour, natch) hosted up there and they Just Work, but I wanted to write a Scala library on Github and push it up to a CloudBees-hosted Maven repo for general consumption.

SBT is a nice tool for developing Scala, and will happily publish to a Nexus or other such Maven/Ivy repository, but CloudBees is a little trickier than that, because it's best suited to hosting CloudBees-built stuff (i.e. the output of their Jenkins-In-The-Cloud builds).

Their support for uploading "external" dependencies is limited to the WebDAV protocol only - a protocol that SBT doesn't natively speak. Luckily, some excellent person has minimised the yak-shaving required by writing a WebDAV SBT plugin - here's how I got it working for my build:

In project/plugins.sbt, add the WebDAV plugin:
resolvers += "DiversIT repo" at "http://repository-diversit.forge.cloudbees.com/release"

addSbtPlugin("eu.diversit.sbt.plugin" % "webdav4sbt" % "1.3")


To avoid plastering your CloudBees credentials all over Github, create a file in ~/.ivy2/.credentials:
realm={account} repository
host=repository-{account}.forge.cloudbees.com
user={account}
password=
Where {account} is the value you see in the drop-down at the top right when you log in to CloudBees Grand Central. Don't forget the Realm; it's just as important as the username/password!

Next, add this to the top of your build.sbt:
import eu.diversit.sbt.plugin.WebDavPlugin._
and this later on:
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")

seq(WebDav.globalSettings : _*)

publishTo := Some("Cloudbees releases" at "https://repository-{account}.forge.cloudbees.com/"+ "release")
What this does: The first line tells SBT where to find that credentials file we just added.
The second line makes the webdav:publish task replace the default publish task, which is most-likely what you want. If it's not, use WebDav.scopedSettings and invoke the task with webdav:publish.
The third line specifies where to publish to, replacing all other targets. I found if I used the notation in the WebDAV plugin documentation:
publishTo <<= version { v: String =>
  val cloudbees = "https://repository-diversit.forge.cloudbees.com/"
  if (v.trim.endsWith("SNAPSHOT"))
    Some("snapshots" at cloudbees + "snapshot")
  else
    Some("releases" at cloudbees + "release")
}
...SBT would attempt to upload my artifact not just to CloudBees, but to any other extra repository I had configured with the resolvers expression higher up in build.sbt, and hence try to upload to oss.sonatype.org, which I'm not ready for just yet! "Releases" is sufficient for me; I don't need the "snapshots" option.

And with that, it should just work like a charm:
> publish
[info] WebDav: Check whether (new) collection need to be created.
[info] WebDav: Found credentials for host: repository-{account}.forge.cloudbees.com
[info] WebDav: Creating collection 'https://repository-{account}.forge.cloudbees.com/release/net'
[info] WebDav: Creating collection 'https://repository-{account}.forge.cloudbees.com/release/net/foo'
[info] WebDav: Creating collection 'https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2'
[info] WebDav: Creating collection 'https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2/0.1'
[info] WebDav: Done.
...
[info]  published bar_2.9.2 to https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2/0.1/bar_2.9.2-0.1.pom
[info]  published bar_2.9.2 to https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2/0.1/bar_2.9.2-0.1.jar
[info]  published bar_2.9.2 to https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2/0.1/bar_2.9.2-0.1-sources.jar
[info]  published bar_2.9.2 to https://repository-{account}.forge.cloudbees.com/release/net/foo/bar_2.9.2/0.1/bar_2.9.2-0.1-javadoc.jar
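To pull the published artifact into another project, a resolver in the same style as the DiversIT one near the top of this post should do the trick - a sketch, using the same {account} placeholder, and assuming the consuming project is also on Scala 2.9.2 so that %% resolves to bar_2.9.2:

resolvers += "My CloudBees releases" at "https://repository-{account}.forge.cloudbees.com/release"

libraryDependencies += "net.foo" %% "bar" % "0.1"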