CD: Continuous Delivery or Continuous Deployment?

I’m writing this post to contribute my take on the acronym CD, and the distinction (or perhaps, confusion) between the phrases Continuous Delivery and Continuous Deployment. These phrases are used interchangeably, and sometimes incorrectly, in various books, blogs and talks. And while these software engineering approaches (or disciplines) share a lot, there is one key difference.

Continuous Everything

Let’s first discuss two important cornerstones of CD: Continuous Integration (CI) and automation.

Continuous Integration has been around for a while. First coined in the nineties, the phrase was further refined and explained by Beck and Fowler. CI refers to the practice of integrating changes into a code base regularly, preferably multiple times per day. Each integration is verified by running some sort of build, which should be self-testing, automated, fast and repeatable. This enables quick detection of errors and conflicting changes, resulting in a more stable (and faster) development process.

Continuous Delivery expands on CI and adds a process to automatically deploy a build to an acceptance environment. There, the team can access and manually test the product. When the build is considered ready for general consumption, a developer can promote it to the production environment by a simple button click.

Now, one of the goals of CD (regardless of what the D means) is to achieve shorter development cycles, i.e. to reduce the time it takes for new features or product ideas to be put into production and validated. Having a manual gate somewhere in the release process means that the cycle time won’t be as short as it could be if everything were automated. Enter Continuous Deployment: automating every single step of the way, with no manual promotions. To me, this is true CD. Luckily, Wikipedia agrees with me.

Continuous Delivery vs. Continuous Deployment

In future blog posts I’ll discuss some of the requirements to effectively implement Continuous Deployment, and how to easily set up a proper pipeline in minutes, as code, with Jenkins 2. We’ve come a long way since CruiseControl.

Final thoughts

I’ve been thinking about the phrase “acceptance environment”. In true CD, there are no acceptance tests, at least not of the traditional, manual kind. So, with nothing to “accept”, the phrase seems to have lost some of its value. From an automated testing point of view, however, the environment itself is still useful: ideally equivalent to production, it is where we can run performance and load tests, and verify deployments before progressing to the production stage.

Pre-production? Functionally Equivalent To Production (FETP)? Let me know if you can think of something better!


The road to continuous deployment: a case study

Below are the slides of my talk “The road to continuous deployment: a case study”, as presented at the Dutch PHP Conference in June 2016. I’m planning to dedicate a series of blog posts to this particular topic later this year.

It’s a situation many of us are familiar with: a large legacy application, limited or no tests, a slow & manual release process, low velocity, no confidence… Oh, and management wants new features, fast.

But how to proceed? Using examples and lessons learned from a real-world case, I’ll show you how to strangle the legacy application with a modern service architecture and build a continuous deployment pipeline to deliver value from the first sprint. Along the way, we’ll take a look at testing strategies and various (possibly controversial!) tips and best practices.

Docker & PHP: beyond virtual machines

(note: this post was first published on the blog of the Dutch Web Alliance)

Docker is currently one of the hottest technologies around because it solves a very specific problem: the ability to easily package and deploy a (self-contained) application, without the overhead of traditional virtualization solutions.

In this post you’ll learn how to build, run and host Docker containers, integrate with other containers, and see how Vagrant interacts with Docker.

Background

The Linux kernel contains two containment features, cgroups and namespaces, that enable resource (CPU, network, memory, etc.) and process (user IDs, file systems, process trees) isolation on the same host, without the need for a virtual machine.

This operating system-level virtualization imposes far less overhead. User space instances (also called containers) run within the host OS, so a hypervisor and guest OS are not needed. This results in lightweight images and incredibly fast start/boot times: think seconds, not minutes. Containers are like VMs, without the associated weight.

Docker is a toolkit built around those containment features. Before version 0.9, Docker relied on LXC (LinuX Containers): a (userspace) interface to the kernel containment features. As of version 0.9, Docker has its own interface layer called libcontainer.

Installing Docker

Installing Docker is easy if you are using Ubuntu. To install the official package (may not be the absolute latest version):
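Assuming a recent Ubuntu, something like this should do (the distribution package was named “docker.io” at the time of writing):

    # Ubuntu's own package; may lag behind the latest Docker release
    sudo apt-get update
    sudo apt-get install docker.io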

Should you really want the latest version:
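Docker provides an installation script (as always, inspect scripts before piping them into a shell):

    # installs the latest Docker release
    curl -sSL https://get.docker.com/ | sh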

Note: because Docker relies on Linux-specific kernel features, it is not natively supported on OSX or Windows.
There are some workarounds: use Vagrant (see below), or refer to the Docker installation instructions ( https://docs.docker.com/installation/windows/ and https://docs.docker.com/installation/mac/) for details.

Creating and starting a container

Let’s start with a small PHP app, which we will package and deploy as a Docker container. The app will expose a simple socket server on a port (1337), and output a string to connections on that port. To build the app we’ll be using React, a library that adds event-driven, non-blocking I/O to PHP.

First, install composer (http://getcomposer.org), then run the following command:
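Something along these lines (add a version constraint if you want a repeatable install):

    composer require react/react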

Then save the following code as “app.php”:
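A minimal sketch, based on the classic React example (the exact API depends on the React version you installed):

    <?php
    // app.php - a tiny HTTP server on top of React's event loop
    require __DIR__ . '/vendor/autoload.php';

    $loop   = React\EventLoop\Factory::create();
    $socket = new React\Socket\Server($loop);
    $http   = new React\Http\Server($socket);

    $http->on('request', function ($request, $response) {
        $response->writeHead(200, array('Content-Type' => 'text/plain'));
        $response->end("Hello World\n");
    });

    // listen on all interfaces, so the port is reachable from outside a container
    $socket->listen(1337, '0.0.0.0');
    echo "Server running at http://0.0.0.0:1337\n";

    $loop->run();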

If we then run the following command:
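    php app.php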

we should see something like this:
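With the sketch above, that’s simply the line we echoed:

    Server running at http://0.0.0.0:1337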

Open your browser, point it to http://localhost:1337/, and you should see something like:

(screenshot: the React app’s response in the browser)

Now, to get our app to run inside a Docker container, we first need to build an image. The instructions that define how to build the image are saved in a Dockerfile, in the form of commands:
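A minimal version (the base image is an assumption; any image with a PHP CLI will do):

    # base image with a PHP CLI (assumption: the official php image)
    FROM php:5.6-cli

    # copy our app (code + vendor directory) into the image
    ADD . /app

    # command to run when the container starts
    CMD ["php", "/app/app.php"]

    # make the app's port accessible outside the container
    EXPOSE 1337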

In this Dockerfile we’ve used the following commands:

  • FROM: an existing image that our image should be based on
  • ADD: copies a tar file or existing directory (here, our app) into the image
  • CMD: command to run when starting the container
  • EXPOSE: ports that we want accessible outside of the container

The full list of commands is available in the Dockerfile reference in the Docker documentation.

Now that our Dockerfile is finished, let’s build the image. Run the following command:
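    docker build -t our_app_image .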

This builds a new image using the Dockerfile in the current directory, and will tag the resulting image with the string “our_app_image”. The output of the command should look like this:
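Abbreviated, with the image IDs elided (your output will differ):

    Sending build context to Docker daemon
    Step 0 : FROM php:5.6-cli
     ---> <image id>
    Step 1 : ADD . /app
     ---> <image id>
    Step 2 : CMD ["php", "/app/app.php"]
     ---> <image id>
    Step 3 : EXPOSE 1337
     ---> <image id>
    Successfully built <image id>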

The next step is creating and running a new container that uses our freshly baked image.
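One way to do that:

    docker run -d --name our_app_container -p 1337:1337 our_app_image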

The container will be named “our_app_container”, and will run in the background (the ‘-d’ option). Docker will also proxy port 1337 on the host to port 1337 in the container. After starting the container, ‘docker ps’ shows the list of running containers and their attributes, something like:
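    CONTAINER ID   IMAGE                  COMMAND            CREATED         STATUS         PORTS                    NAMES
    <id>           our_app_image:latest   php /app/app.php   2 seconds ago   Up 2 seconds   0.0.0.0:1337->1337/tcp   our_app_container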

To verify that the app is running and available, we can use ‘docker logs our_app_container’ to retrieve the stdout/stderr streams of the running container.

Dependencies: linking containers

Let’s extend our app with an external dependency. The requirement: increment a counter with every request to port 1337. The counter will live inside a Redis instance.

First, add predis-async to the list of composer dependencies:
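The Packagist name below is an assumption; check the predis-async README for the exact requirement:

    composer require predis/predis-async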

Then, insert the following lines in app.php:
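A sketch of the idea (the predis-async callback signature may differ between versions; connection setup and error handling are omitted for brevity):

    // connect to the Redis server; the hostname "redis" will be provided
    // by the container link we set up below
    $redis = new Predis\Async\Client('tcp://redis:6379', $loop);

    // replaces the previous request handler
    $http->on('request', function ($request, $response) use ($redis) {
        // increment the counter on every request
        $redis->incr('counter', function ($counter) use ($response) {
            $response->writeHead(200, array('Content-Type' => 'text/plain'));
            $response->end("Hello World! Counter: $counter\n");
        });
    });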

Now, how to add a Redis server to our app? We can connect directly to a specific hostname, or assume that whoever works on our app has Redis installed and running. Both are somewhat fragile, and make setting up a development environment more difficult. There is a third option: installing Redis directly into the Docker image of our app. However, this would pollute the image and make it less flexible.

Enter Fig, a tool to quickly build and start (isolated) development environments. Fig requires a single configuration file (default: fig.yml):
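Something along these lines:

    app:
      build: .
      ports:
        - "1337:1337"
      links:
        - redis
    redis:
      image: redis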

The above file defines two containers: app (which contains our PHP application) and redis. The former depends on the latter through the “links” section. Fig will make sure that, within the app container, the hostname “redis” resolves and points to that container.

Save the file, and run:
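    fig up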

This should produce the following output:
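Abbreviated; the container name prefix is derived from the directory name:

    Creating myapp_redis_1...
    Creating myapp_app_1...
    Attaching to myapp_redis_1, myapp_app_1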

By default, Fig will start the containers in the foreground, blocking the terminal. To start in the background, add ‘-d’ to the command line.

Docker & Vagrant

As of version 1.6, Vagrant directly supports Docker and Docker containers through a new provider. Additionally, on platforms where Docker is not natively supported (Mac, Windows), Vagrant automatically starts a Linux-based (VirtualBox) VM which can then run a Docker container.

Let’s create a Vagrantfile for our app. We can re-use the existing Dockerfile, and let Vagrant use that:
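A minimal sketch using the Docker provider:

    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
        # build the image from the Dockerfile in this directory
        d.build_dir = "."
        # proxy port 1337 on the host to the container
        d.ports = ["1337:1337"]
      end
    end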

Save the file, and run ‘vagrant up’:
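Abbreviated:

    Bringing machine 'default' up with 'docker' provider...
    ==> default: Building the container from a Dockerfile...
    ...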

And once again, our app is available on port 1337, proxied to the container that Vagrant built.

Conclusion

If you’ve made it this far, you now know how to start a Docker container that runs your app, and link that container to other dependencies. Of course, this is but a little intro into the exciting world of Docker, which is rapidly changing and growing, and has much more to offer!

Dutch Web Alliance, leaders in web technology

I’m proud to announce that I’ve joined forces with a group of very talented web developers in the Netherlands and Belgium: the Dutch Web Alliance.

Dutch Web Alliance

More information is available on our website; the full press release (translated from Dutch) is posted below:

APELDOORN: Freelance web developers join forces in the Dutch Web Alliance.
The Dutch Web Alliance (DWA) is an association for and by freelance web developers from the Netherlands and Belgium. Our mission is to improve the position of freelancers and the quality of web professionals through knowledge sharing, collaboration and the pooling of resources. A DWA member stands for knowledge and skill in all areas of web development, and can be recognized as such by clients and customers.

The DWA focuses on quality, not quantity. It is the members themselves who decide whether a new member may actually join the association. Within the association, members actively keep each other up to date on the latest developments, offer help and act as a sounding board for one another. DWA members also gladly step in for fellow members in situations such as illness, deadlines or the need for specific expertise. This gives DWA clients the flexibility of freelancers, combined with the advantages of a full-service internet agency.

The DWA is more than just a seal of approval and a quality guarantee. Most of our members are active in open source communities such as PHP, dev/ops and system administration. They can often be found speaking at national and international conferences where, as experts and “leaders of the field”, they help other developers with new techniques, best practices and general knowledge gained from the many complex projects they work on every day. The DWA adds to this by organizing workshops and meetups, not only for developers but also for team leads, project managers and CTOs/CIOs. In short: our association is committed to a better, more professional and more efficient climate in web development.

Using conditional build steps to speed up your Jenkins PHP builds

At my client Spotney, we have a pretty solid (and common) build infrastructure for our PHP projects; SVN commits are checked out by Jenkins, tests are run by PHPUnit, Sonar runs static code analysis, and artifacts are built and deployed to a staging environment by Phing. However, some of the code relies pretty heavily on (complex) db queries, adding the need for DbUnit style tests. The nature and quantity of the tests, combined with a slow VM (possibly related to this Xdebug issue), meant that our build times were becoming prohibitively long.

An interesting post by @pascaldevink triggered a conversation and sparked an idea. I started working on our build setup, eventually resulting in a 60-70% decrease in our build times!

Here’s how I did it.

Starting point

Let’s assume we have a fairly standard Jenkins job. The job checks out an SVN repository, and periodically scans that repository for changes, triggering a new build if any are detected.

Each build of the job performs three steps:

  • Run phpunit
  • Run phing (target “build”)
  • Invoke Sonar (using the Jenkins Sonar plugin – this plugin also allows invoking Sonar as a post-build action, but that option requires Maven)

After the build steps, the job publishes the test and code coverage reports, and archives the generated artifacts.

Disabling code coverage and Sonar for regular builds

Two of the most obvious optimizations (also suggested by Pascal) are disabling code coverage on all tests and disabling Sonar runs during regular Jenkins builds. We define a regular build as one started manually by a user, or triggered by an SCM change.

Disabling code coverage generation in PHPUnit is easy: simply remove the “coverage-xxx” elements from the logging section of your phpunit.xml configuration file (see this section of the PHPUnit manual). Disabling Sonar is trivial too: just remove the last build step from the job.
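For reference, the elements to remove look something like this (paths are illustrative):

    <logging>
        <!-- remove these to disable code coverage generation -->
        <log type="coverage-html" target="build/coverage"/>
        <log type="coverage-clover" target="build/logs/clover.xml"/>
        <!-- other logs, e.g. junit, can stay -->
        <log type="junit" target="build/logs/junit.xml"/>
    </logging>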

However, this is not an ideal solution: we do want to generate code coverage information and run Sonar at some point, such as during nightly builds, preferably without cloning our existing job. This means that we’ll need to skip code coverage and Sonar invocations on all but the scheduled (nightly) builds.

The Sonar plugin supports excluding SCM triggered builds (“Skip if triggered by SCM Changes”), but that only works if you use the post-build action. Additionally, we need to be able to change the PHPUnit configuration – one file to enable code coverage generation, one file to disable it.

Conditional build steps

The Conditional BuildStep plugin wraps one or more build steps in a conditional execution block. One of the possible conditions is the build cause, i.e. was the build triggered by a user, an upstream project, a timer, an SCM change, etc. etc.

First we define the steps that should be taken for each nightly build of our job. These steps should only be executed when the build is triggered by a timer.

We add a “Conditional steps (multiple)” build step, setting the condition to “Build Cause” and the Build Cause to “TimerTrigger”.

(screenshot: the Build Cause condition set to TimerTrigger)

Then we add our three (original) build steps:

(screenshot: the three original build steps inside the conditional block)

As stated before, regular builds are those that are triggered by a user or an SCM change.

We again add a “Conditional steps (multiple)” build step. The condition for this step is a little more interesting, as seen below. We combine two Build Cause conditions (one set to “UserCause”, the other to “SCMTrigger”) using the Or condition.

(screenshot: two Build Cause conditions, UserCause and SCMTrigger, combined with Or)

We then add two build steps: the first will run PHPUnit without code coverage (note that we are specifying a different configuration file here), the second one will run Phing.

(screenshot: the PHPUnit and Phing build steps for regular builds)
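Roughly, those shell steps boil down to this (the name of the coverage-free configuration file is hypothetical):

    # phpunit-nocoverage.xml: hypothetical config without the coverage-xxx log elements
    phpunit --configuration phpunit-nocoverage.xml
    phing build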

Note that in the above build steps we’re invoking Phing from the shell instead of using the Phing plugin. Unfortunately this plugin is currently not supported as a conditional build step (probably because of this JIRA issue).

Build schedule

As a final step we need to update our build schedule.

(screenshot: the job’s build schedule)
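In Jenkins cron syntax, something like:

    # H spreads jobs across the hash range; runs once between 00:00 and 02:59
    H H(0-2) * * *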

This will ensure our job runs somewhere after midnight (between 12:00 AM and 2:59 AM to be precise).

The end result:

  • A nightly scheduled build, with all the bells and whistles enabled
  • User and SCM triggered builds run (a lot) faster

Please let me know if you think this post is useful, or if you have any other Jenkins/PHP optimization tips!