Docker infrastructure

Ever since I saw the five-minute video “The future of Linux containers” in April last year, in which Solomon Hykes presented Docker for the first time at PyCon, I have wanted to use Docker for running my server-side software.

The first thing I thought about was using it to run our time tracking product timr. It is currently deployed on multiple hosts with multiple Tomcat instances and a MySQL database. Docker would make it easy to spin up more instances and to deploy new versions.

We also host server applications for our clients, sometimes only for testing but in some cases also for production. Most of them are JVM based, and it is a tedious task to set up yet another Tomcat instance and Apache virtual host for every new project. Docker would help here too.

Another problem Docker could solve for us is creating test environments triggered by Jenkins builds. Currently we have Jenkins jobs that the QA team can run to deploy a new version on the test server, but sometimes different testers want different versions deployed. It would be nice to create a new test instance from a Jenkins build, accessible via an auto-generated URL.

Breaking it down

After thinking too long about how to tackle all those tasks at once, I decided to start small: with a Docker container for running our Sonatype Nexus artefact repository. Here is the resulting Dockerfile:
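The original file is not reproduced here; a sketch based on the steps described below might look roughly like this (the download URL, version glob and sed expressions are assumptions, not the original file):

```dockerfile
# Sketch of the Nexus Dockerfile — URL, version and sed
# expressions are assumptions reconstructed from the description
FROM troii/java7

# download the latest Sonatype Nexus into /usr/local and unpack it
RUN cd /usr/local && \
    wget http://www.sonatype.org/downloads/nexus-latest-bundle.tar.gz && \
    tar xzf nexus-latest-bundle.tar.gz && \
    rm nexus-latest-bundle.tar.gz && \
    ln -s nexus-2.* nexus

# run Nexus as root and serve it under / instead of /nexus
RUN sed -i 's/#RUN_AS_USER=/RUN_AS_USER=root/' /usr/local/nexus/bin/nexus && \
    sed -i 's|nexus-webapp-context-path=/nexus|nexus-webapp-context-path=/|' \
        /usr/local/nexus/conf/nexus.properties

EXPOSE 8081
VOLUME /usr/local/sonatype-work
CMD ["/usr/local/nexus/bin/nexus", "console"]
```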

As you can see, it is based on a Docker image called troii/java7, which I will show you later. To install Nexus inside the image it does the following:

  • downloads the latest Sonatype Nexus version into /usr/local
  • extracts the TAR file and deletes it
  • creates a symbolic link /usr/local/nexus pointing to the newly created folder
  • uses sed to adapt the nexus script to run as user root
  • changes the context path from /nexus to / in the configuration file

The image exposes port 8081 (the default port Nexus uses) and adds a volume /usr/local/sonatype-work to persist the working directory of the repository. With this Dockerfile it is now possible to start up my Nexus server wherever I want within seconds.

Currently it is running on an Ubuntu 14.04 root server at Hetzner. I made it available via an Apache reverse proxy with this config in a virtual host:
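A sketch of what that virtual host might contain (the hostname is a placeholder, and I am assuming nexusPort is set via Apache's Define directive):

```apache
<VirtualHost *:80>
    ServerName nexus.example.com

    ProxyPreserveHost On
    ProxyPass        / http://localhost:${nexusPort}/
    ProxyPassReverse / http://localhost:${nexusPort}/
</VirtualHost>
```

with something like `Define nexusPort 8080` in the central port configuration file.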

The nexusPort (in this case 8080) is defined in another Apache config file where I configure all the ports for my reverse proxies.

Data volumes

If I only ran the image created by the Dockerfile above, keeping the persistent data between stopping and starting containers would not be very intuitive. Though I defined a volume for the sonatype-work directory (which makes it persistent), I would need to start the next container with the --volumes-from option to reuse the volume from the last Nexus container I stopped.

Luckily there is an article called “Managing Data in Containers” that covers exactly this topic. What I did was create a so-called “data only” container that is passed in the --volumes-from option every time the Nexus container is started.
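A sketch of the two commands involved (the image and container names are made up):

```shell
# create the data-only container once; it only has to exist, not run
docker run --name nexus-data -v /usr/local/sonatype-work troii/nexus true

# start (and later restart) the actual Nexus container against that volume
docker run -d --name nexus --volumes-from nexus-data -p 8080:8081 troii/nexus
```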

This means two images/containers are involved in my Nexus Docker setup, which raises the question of how to manage the startup configuration for my Docker containers. Those command lines with all the correct options for starting a container can get very long – should I create separate scripts just to start the containers?

Base images

As I wrote before, the Nexus Docker image is based on the troii/java7 image, which looks like this:
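The original file is not shown here; a sketch of how such an image was typically built at the time (the webupd8team PPA approach and paths are assumptions):

```dockerfile
# Sketch — the PPA-based Oracle JDK 7 install is an assumption
FROM troii/base

RUN apt-get update && apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:webupd8team/java && apt-get update && \
    echo debconf shared/accepted-oracle-license-v1-1 select true \
        | debconf-set-selections && \
    apt-get install -y oracle-java7-installer

ENV JAVA_HOME /usr/lib/jvm/java-7-oracle
```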

It installs the Oracle JDK 7 and is based on the troii/base image, which looks like this:

The base image uses Ubuntu 14.04 Trusty and does the following on top of it:

  • configures a nearby Ubuntu mirror
  • sets some environment variables, timezone and locale
  • installs zsh, vim, git and other stuff I like to have on the command line
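Put together, the steps above might look like this (the mirror, timezone and locale values are assumptions):

```dockerfile
FROM ubuntu:14.04

# use a nearby Ubuntu mirror (mirror URL is a placeholder)
RUN sed -i 's|http://archive.ubuntu.com|http://at.archive.ubuntu.com|g' \
    /etc/apt/sources.list

# timezone and locale
RUN echo "Europe/Vienna" > /etc/timezone && \
    dpkg-reconfigure -f noninteractive tzdata && \
    locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8

# command line tools
RUN apt-get update && apt-get install -y zsh vim git curl wget
```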

At first I had one big Dockerfile, but over time I started to split it up this way. The base image is used in every Docker image I have created in the last weeks.

By creating a separate image with Java installed, it is easy to update the Java version for containers based on it. Simply rebuilding the troii/java7 image downloads the latest version, and when a container based on it is rebuilt, its Java version is automatically updated too.


So far one of the biggest benefits of “dockerizing” our infrastructure has been that the setup and configuration are stored reproducibly in Git repositories. Writing the Dockerfile really documents how the services are set up.

Another really nice thing about having Nexus run in a Docker container like this is that it takes just three simple steps (docker stop, docker build, docker run) to update Nexus to the newest available version.
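In shell terms the update boils down to something like this (container and image names are made up):

```shell
docker stop nexus && docker rm nexus   # stop and remove the old container
docker build -t troii/nexus .          # rebuild, pulling the latest Nexus
docker run -d --name nexus --volumes-from nexus-data \
    -p 8080:8081 troii/nexus           # start the new version
```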

What is next

I want this to be the first in a series of blog posts about my ongoing effort to move more and more of our infrastructure to Docker. There are still many questions left, but some have already been answered … stay tuned!

Grails 2.2.x and the problem with inner classes

We have been using Grails 2.2.x in some of our projects since it came out last year. Last week, when I tried to upgrade another project because it was time to develop some new features, I ran into a strange problem after upgrading from 2.1.3 to 2.2.1:

org.codehaus.groovy.grails.web.pages.exceptions.GroovyPagesException: Error processing GroovyPageView: Error executing tag : Error executing tag : java.lang.VerifyError: (class: com/troii/project/tags/SomeTag$Info, method: getSession signature: ()Ljavax/servlet/http/HttpSession;) Incompatible object argument for function call
	at com.googlecode.psiprobe.Tomcat70AgentValve.invoke(
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.codehaus.groovy.grails.web.taglib.exceptions.GrailsTagException: Error executing tag : Error executing tag : java.lang.VerifyError: (class: com/troii/project/tags/SomeTag$Info, method: getSession signature: ()Ljavax/servlet/http/HttpSession;) Incompatible object argument for function call

I had never seen a java.lang.VerifyError before, and the second strange thing was that the exception only occurred when deploying the WAR in a Tomcat, not when starting the app with grails run-app.

Searching the web brought up some JIRA issues:

GRAILS-9627 inner class or enum in domain class breaks unit/integration testing
GRAILS-9784 Using an anonymous inner class in Controller causes VerifyError
GRAILS-10068 inner class in controller class breaks unit testing

and a blog post from someone who experienced the same issue.

It seems the update from Groovy 2.0.6 to 2.1.0 caused a compilation problem for inner classes. I have had no chance to test with Grails 2.2.2, which was released three days ago, but currently the only workaround seems to be moving the inner classes out into top-level ones.
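As a sketch of that workaround (class, property and file names are made up):

```groovy
// Before: an inner class inside the tag library triggers the
// VerifyError when the WAR is compiled with Grails 2.2.x (Groovy 2.1)
class SomeTagLib {
    static class Info {
        String name
    }
}

// After: Info moved into its own top-level file,
// e.g. src/groovy/com/example/Info.groovy
class Info {
    String name
}
```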

UPDATE: Grails 2.2.2 seems to have improved things a lot; at least for our project it works.

What to ignore

With the latest improvements we made to our development workflow at troii, we discussed what should be committed to a source code repository and which files should be ignored. We already had a rule in place that nothing generated from other sources (typically .class files, .war, .jar, …) should be put into the repository. This rule is very common and almost every developer agrees with it.

How to handle another set of files usually leads to a lot of discussion: IDE settings (e.g. Eclipse’s .settings, .project and .classpath, or IntelliJ IDEA’s .iml files and .idea directory). Up until a year ago we mainly used Eclipse, and I used to store the IDE configuration files in the repository. With the switch to IntelliJ and working more with Git branches, I started to think about this again, because I had the feeling that those settings files changed more frequently.

I read some articles and posts online and came to the conclusion that it is better to exclude those settings files too. After looking at the dedicated GitHub repository for gitignore templates, it became even clearer. The template for IntelliJ looks like:
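At the time of writing, the upstream template looked roughly like this (the exact entries change over time):

```gitignore
*.iml
.idea/
out/
.idea_modules/
atlassian-ide-plugin.xml
```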


The reason I had for putting those IDE settings into the repository was that I want a new developer on the team to be able to check out the code and start developing as fast as possible. A new project member should not have to configure his IDE in a very specific way to get the project running or even compiling. With Maven, Gradle or Grails used in nearly every project this has gotten a lot easier, because most IDEs have import features that can interpret those build systems and configure the project.

After deciding to stick with those rules, I looked for an easy way to use those gitignore templates and found gitignore-boilerplates – or gibo for short. This is a nice little shell script that lets you put the right templates for your project together into the .gitignore file. For example, the line

gibo Eclipse IntelliJ Windows OSX Maven Java > .gitignore

creates a .gitignore file for a Java project that uses Maven and is developed with IntelliJ or Eclipse under Windows or OS X. You could put the Eclipse, IntelliJ, Windows and OSX ignore rules into your global gitignore file on your machine, but I like to put them into the project to make sure that all the other developers on the project use those rules too.

I made a fork of each of those projects because there are configuration files I want to add to the repository:

  • code style settings – so that every developer on the team formats the code in the same way
  • encoding settings – important when working on different operating systems

You can find those forks under and