Phing development update

A few years ago I pledged to publish regular updates on Phing’s development. I’m a little ashamed that I haven’t done that for nearly three years. Needless to say, a lot has happened, so it’s about time to post an update!

Recent releases

The latest version, 2.14.0, was released in March. Ever since the move of the code to GitHub, a steady flow of pull requests and contributions has resulted in a lot of new functionality, improved documentation and numerous squashed bugs.

Here’s a selection of interesting features that have recently been added:

  • Support for PHP 7 & HHVM.
  • A flurry of new tasks: stopwatch, switch, throw, retry and many more.
  • Better support for Composer and numerous dependencies (such as PHPUnit).
  • Additional selectors and conditions.
  • Remember the old HTML user guide? It has been replaced by (significantly improved!) DocBook 5 documentation.
  • Additional tests & test coverage.

Of course there’s much more and all the details can be found in the changelog.

Coming up

The next minor version, 2.15.0, should be released in a few weeks; the tickets assigned to that version can be found here.

Beyond Phing itself, there are a few planned changes to the website and the build process:

  • Migrating the ticket tracker (Trac) to GitHub issues.
  • Shipping a “full” .phar file with each release (containing recent versions of the most popular dependencies).
  • Testing the generated .phar file in the travis-ci builds.
  • Adding a documentation search.
  • Refreshing the website.

Phing 3.0

A serious refactor of Phing has been on my own wishlist for quite some time. Some work towards that goal has been committed to the 3.0 branch, and I try to keep this branch more or less synchronized with master (albeit with some delay). The branch is fully functional and requires PHP 5.6 or later.

Some of the work that’s still in progress or on the roadmap: integrating SPL, changing the way booleans are handled, a proper task autoloader, cleaning up the code, improving the test harness, etc. If you have suggestions, then please let me know! Time available to spend on a pull request is even better 🙂

CD: Continuous Delivery or Continuous Deployment?

I’m writing this post to contribute my take on the acronym CD, and the distinction (or perhaps, confusion) between the phrases Continuous Delivery and Continuous Deployment. These phrases are used interchangeably, and sometimes incorrectly, in various books, blogs and talks. And while these software engineering approaches (or disciplines) share a lot, there is one key difference.

Continuous Everything

Let’s first discuss two important cornerstones of CD: Continuous Integration (CI) and automation.

Continuous Integration has been around for a while. A phrase that was first coined in the nineties, it was further refined and explained by Beck and Fowler. CI refers to the practice of integrating changes to a code base regularly, preferably multiple times per day. Each integration is verified by running some sort of build, which should be self-testing, automated, fast and repeatable. This enables quick detection of errors and conflicting changes, resulting in a more stable (and faster) development process.

Continuous Delivery expands on CI and adds a process to automatically deploy a build to an acceptance environment. There, the team can access and manually test the product. When the build is considered ready for general consumption, a developer can promote it to the production environment by a simple button click.

Now, one of the goals of CD (regardless of what the D means) is to achieve shorter development cycles, i.e. reduce the time for new features or product ideas to be put in production and validated. Having a manual gate somewhere in the release process means that the cycle time won’t be as short as it could be, if everything were automated. Enter Continuous Deployment: automating every single step of the way, no manual promotions. To me, this is true CD. Luckily, Wikipedia agrees with me.
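To make the difference concrete, here is a minimal sketch of a Jenkins 2 declarative pipeline; the build and deploy scripts are hypothetical placeholders. With the input step in place, the pipeline implements Continuous Delivery; remove that single step and the very same pipeline does Continuous Deployment:

    pipeline {
        agent any
        stages {
            stage('Build & test') {
                steps { sh './build.sh' }             // placeholder build script
            }
            stage('Deploy to acceptance') {
                steps { sh './deploy.sh acceptance' } // placeholder deploy script
            }
            stage('Deploy to production') {
                steps {
                    // Continuous Delivery: a human presses the button here.
                    // Drop this input step and the pipeline becomes Continuous Deployment.
                    input message: 'Promote this build to production?'
                    sh './deploy.sh production'
                }
            }
        }
    }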

Continuous Delivery vs. Continuous Deployment

In future blog posts I’ll discuss some of the requirements to effectively implement Continuous Deployment, and how to easily set up a proper pipeline in minutes, as code, with Jenkins 2. We’ve come a long way since CruiseControl!

Final thoughts

I’ve been thinking about the phrase “acceptance environment”. In true CD, there are no acceptance tests, at least not of the traditional, manual kind. So, with nothing to “accept”, the phrase seems to have lost some of its value. From an automated test point of view, however, the environment itself is still useful: ideally it is equivalent to production, a place where we can run performance and load tests, and verify deployments before progressing to the production stage.

Pre-production? Functionally Equivalent To Production (FETP)? Let me know if you can think of something better!

 

The road to continuous deployment: a case study

Below are the slides of my talk “The road to continuous deployment: a case study”, as presented at the Dutch PHP Conference in June 2016. I’m planning to dedicate a series of blog posts to this particular topic later this year.

It’s a situation many of us are familiar with: a large legacy application, limited or no tests, a slow and manual release process, low velocity, no confidence… Oh, and management wants new features, fast.

But how to proceed? Using examples and lessons learned from a real-world case, I’ll show you how to strangle the legacy application with a modern service architecture and build a continuous deployment pipeline to deliver value from the first sprint. Along the way, we’ll take a look at testing strategies and various (possibly controversial!) tips and best practices.

Docker & PHP: beyond virtual machines

(note: this post was first published on the blog of the Dutch Web Alliance)

Docker is currently one of the hottest technologies around, because it solves a very specific problem: the ability to easily package and deploy a (self-contained) application, without the overhead of traditional virtualization solutions.

In this post you’ll learn how to build, run and host Docker containers, integrate with other containers, and see how Vagrant interacts with Docker.

Background

The Linux kernel contains a number of containment features that enable resource (CPU, network, memory, etc.) and process (user IDs, file systems, process trees) isolation on the same host, without the need for a virtual machine: cgroups and namespaces.

This operating system-level virtualization imposes far less overhead. User space instances (also called containers) run within the host OS, so a hypervisor and guest OS are not needed. This results in lightweight images and incredibly fast start/boot times: think seconds, not minutes. Containers are like VMs, without the associated weight.

Docker is a toolkit built around those containment features. Before version 0.9, Docker relied on LXC (LinuX Containers): a (userspace) interface to the kernel containment features. As of version 0.9, Docker has its own interface layer called libcontainer.

Installing Docker

Installing Docker is easy if you are using Ubuntu. To install the official package (may not be the absolute latest version):
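(The Ubuntu-packaged version is called docker.io:)

    $ sudo apt-get update
    $ sudo apt-get install docker.io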

Should you really want the latest version:
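(Docker provides an installation script that adds its own package repository; as always, review a script before piping it into a shell:)

    $ curl -sSL https://get.docker.com/ | sh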

Note: because Docker relies on Linux-specific kernel features, it is not natively supported on OS X or Windows. There are some workarounds: use Vagrant (see below), or refer to the Docker installation instructions (https://docs.docker.com/installation/windows/ and https://docs.docker.com/installation/mac/) for details.

Creating and starting a container

Let’s start with a small PHP app, which we will package and deploy as a Docker container. The app will expose a simple socket server on a port (1337), and output a string to connections on that port. To build the app we’ll be using React (also known as ReactPHP), a library that adds event-driven, non-blocking I/O to PHP.

First, install composer (http://getcomposer.org), then run the following command:
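(Assuming we pull in the react/react meta-package, which bundles the event loop, socket and HTTP components:)

    $ composer require react/react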

Then save the following code as “app.php”:
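(This sketch assumes the ReactPHP 0.4-era API; method names may differ in newer releases.)

    <?php
    // app.php - a tiny HTTP server built on ReactPHP.
    require __DIR__ . '/vendor/autoload.php';

    $loop   = React\EventLoop\Factory::create();
    $socket = new React\Socket\Server($loop);
    $http   = new React\Http\Server($socket);

    // Respond to every request with a simple string.
    $http->on('request', function ($request, $response) {
        $response->writeHead(200, array('Content-Type' => 'text/plain'));
        $response->end("Hello from our Dockerized app!\n");
    });

    echo "Server running at http://0.0.0.0:1337\n";

    // Bind to all interfaces, so the port can later be proxied by Docker.
    $socket->listen(1337, '0.0.0.0');
    $loop->run();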

If we then run the following command:
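(from the directory containing app.php and the vendor/ folder)

    $ php app.php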

we should see something like this:
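    Server running at http://0.0.0.0:1337

(That line is the echo from app.php; the server itself stays silent until it receives requests.)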

Open your browser, point it to http://localhost:1337/, and you should see something like:

[Screenshot: the app’s response, as rendered by the browser]

Now, to get our app to run inside a Docker container, we first need to build an image. The instructions that define how to build the image are saved in a Dockerfile, in the form of commands:
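(A minimal sketch; the php base image is an assumption, any image that ships a PHP CLI will do.)

    # Use an image that ships with the PHP CLI.
    FROM php:5.6-cli

    # Copy the application (app.php, composer.json, vendor/) into the image.
    ADD . /app

    # Start the app when the container starts.
    CMD ["php", "/app/app.php"]

    # The port our socket server listens on.
    EXPOSE 1337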

In this Dockerfile we’ve used the following commands:

  • FROM: an existing image that our image should be based on
  • ADD: copies files, directories or (automatically extracted) tar archives into the image; in our case, the app
  • CMD: command to run when starting the container
  • EXPOSE: ports that we want accessible outside of the container

The full list of commands is available in the Dockerfile reference in the Docker documentation.

Now that our Dockerfile is finished, let’s build the image. Run the following command:
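    $ docker build -t our_app_image .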

This builds a new image using the Dockerfile in the current directory, and tags the resulting image with the string “our_app_image”. The output of the command shows every step from the Dockerfile being executed, and ends with the ID of the newly built image.

The next step is creating and running a new container that uses our freshly baked image.
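A single ‘docker run’ invocation takes care of both:

    $ docker run -d -p 1337:1337 --name our_app_container our_app_image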

The container will be named “our_app_container”, and will run in the background (the ‘-d’ option). Docker will also proxy port 1337 on the host to port 1337 in the container. After starting the container, ‘docker ps’ shows the list of running containers and their attributes, something like:
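    CONTAINER ID   IMAGE           COMMAND              CREATED          STATUS          PORTS                    NAMES
    1a2b3c4d5e6f   our_app_image   "php /app/app.php"   10 seconds ago   Up 9 seconds    0.0.0.0:1337->1337/tcp   our_app_container

(The container ID and timestamps are of course illustrative and will differ on your machine.)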

To verify that the app is running and available, we can use ‘docker logs our_app_container’ to retrieve the stdout/stderr streams of the running container.

Dependencies: linking containers

Let’s extend our app with an external dependency. The requirement: increase a counter with every request to port 1337. The counter will live inside a Redis instance.

First, add predis-async to the list of composer dependencies:
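(Assuming the package is published on Packagist as predis/predis-async; being a young project, it may need a development version constraint.)

    $ composer require predis/predis-async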

Then, insert the following lines in app.php:
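(A sketch assuming the predis-async client API; it replaces the request handler from the earlier version, and expects Redis to be reachable under the hostname ‘redis’, which we’ll arrange in a moment.)

    // Asynchronous Redis client; 'redis' will resolve via the container link (see below).
    $redis = new Predis\Async\Client('tcp://redis:6379', $loop);

    $redis->connect(function ($client) {
        echo "Connected to Redis\n";
    });

    // Increment a counter on every request and report it back to the visitor.
    $http->on('request', function ($request, $response) use ($redis) {
        $redis->incr('counter', function ($counter) use ($response) {
            $response->writeHead(200, array('Content-Type' => 'text/plain'));
            $response->end("Hello! You are visitor number $counter.\n");
        });
    });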

Now, how do we add a Redis server to our app? We could connect directly to a specific hostname, or assume that whoever works on our app has Redis installed and running. Both are somewhat fragile, and make setting up a development environment more difficult. There is a third option: installing Redis directly into the Docker image of our app. However, this would pollute the image and make it less flexible.

Enter Fig, a tool to quickly build and start (isolated) development environments. Fig requires a single configuration file (default: fig.yml):
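(The stock redis image from the Docker Hub is assumed here.)

    app:
      build: .
      ports:
        - "1337:1337"
      links:
        - redis
    redis:
      image: redis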

The above file defines two containers: app (which contains our php application), and redis. The first container depends on the latter through the “links” section. Fig will make sure that, within the app container, “redis” resolves and points to that container.

Save the file, and run:
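    $ fig up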

This will build the image (if necessary), create both containers, and attach to their combined output.

By default, Fig will start the containers in the foreground, blocking the terminal. To start in the background, add ‘-d’ to the command line.

Docker & Vagrant

As of version 1.6, Vagrant directly supports Docker and Docker containers through a new provider. Additionally, on platforms where Docker is not natively supported (Mac, Windows), Vagrant automatically starts a Linux-based (VirtualBox) VM which can then run a Docker container.

Let’s create a Vagrantfile for our app. We can re-use the existing Dockerfile, and let Vagrant use that:
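(A minimal sketch using Vagrant’s Docker provider; only the essential options are shown.)

    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
        # Build the image from the Dockerfile in this directory...
        d.build_dir = "."
        # ...and expose the app's port on the (possibly virtualized) host.
        d.ports = ["1337:1337"]
        d.name  = "our_app_container"
      end
    end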

Save the file, and run ‘vagrant up’.

And once again, our app is available on port 1337, proxied to the container that Vagrant built.

Conclusion

If you’ve made it this far, you now know how to start a Docker container that runs your app, and link that container to other dependencies. Of course, this is only a brief introduction to the exciting world of Docker, which is rapidly changing and growing, and has much more to offer!

Dutch Web Alliance, leaders in web technology

I’m proud to announce that I’ve joined forces with a group of very talented web developers in the Netherlands and Belgium: the Dutch Web Alliance.

Dutch Web Alliance

More information is available on our website; the full press release (originally in Dutch) is posted below:

APELDOORN: Freelance web developers join forces in the Dutch Web Alliance.
The Dutch Web Alliance (DWA) is an association for and by freelance web developers from the Netherlands and Belgium. Our mission is to improve the position of the freelancer and the quality of web professionals through knowledge sharing, collaboration and the pooling of resources. A member of the DWA stands for knowledge and skill in every area of web development, and can distinguish themselves as such to clients and principals.

The DWA focuses on quality, not quantity. It is the members themselves who decide whether a new member can actually join the association. Within the association, members actively keep each other up to date on the latest developments, offer help and act as a sounding board. DWA members are also happy to step in for fellow members in situations such as illness, tight deadlines or the need for specific expertise. Clients of the DWA thus get the flexibility of freelancers, combined with the advantages of full-service internet agencies.

The DWA is more than just a seal of approval and a guarantee of quality. Most of our members are active in open source communities such as PHP, dev/ops and system administration. They can regularly be found speaking at national and international conferences where, as experts and “leaders of the field”, they help other developers with new techniques, best practices and the general knowledge gained from the many complex projects they work on every day. The DWA contributes further by organizing workshops and meetups, not only for developers but also for managers, project managers and CTOs/CIOs. In short: our association is committed to a better, more professional and more efficient climate in web development.