Test your code with Docker Cloud
Published on Thursday, December 13, 2018. Categorized under: php, testing, docker, docker cloud, laravel

Yesterday, I spent some time trying to understand the difference between Docker Cloud and Docker Hub. In the past, Docker Cloud used to offer a lot of services and features, which Docker deprecated for some reason; I think they moved them into the CC version of Docker.

They are basically the same, except that Docker Cloud has the ability to test your code too.

Basically, they left two services within Docker Cloud: building/hosting your images and testing them. I guess we are all familiar with how hosting works: you build your image locally (or somewhere else), then you push it to the registry to be hosted. But if you link your account with either GitHub or Bitbucket, you can enable auto-build functionality, so once you push your code, Docker Hub will pick up the changes and start the build process for you automatically.

What we might not know is that you can also add a test to your image, and once it passes, the image will be pushed to the registry. Yes, there are many other great options out there, but if you are already a paid user of Docker Hub/Docker Cloud, why not take it to the next level and use what they offer? After all, they don't charge you per minute or per build; they charge you per private image.

What I'll explain here also works with free accounts, except that free accounts have only one parallel process, so you can't start a new build until the previous one has finished.

Linking your GitHub account

From the documentation:

To automate building and testing of your images, you link your hosted source code service to Docker Cloud so that it can access your source code repositories. You can configure this link for user accounts or organizations.

If you only push pre-built images to Docker Cloud’s registry, you do not need to link your source code provider.

You can read more about how to link your account on the Link Docker Cloud to a source code provider page.

Automated builds

I'm not going to explain this one either, as it's straightforward and explained in detail on the Automated builds page. I should mention that the explanation there is a bit dated, and I'm not sure when they will update it; the main difference you might notice when using the system is that you no longer need to choose where to build your image, it will always be on Docker Cloud infrastructure.

Automated repository tests

Now, the documentation about the tests is not so extensive, and I'm not sure why, to be honest, but it's also a straightforward process which you can pick up fast if you are familiar with Docker Compose.

Basically, you will need a file called docker-compose.test.yml which should contain your service definitions and your test commands. The simplest form of it looks something like:

version: "3.6"

services:
  sut:
    build: .
    command: php ./vendor/bin/phpunit

Docker Cloud will pick up the content of the docker-compose.test.yml file and run everything within the service sut. I have no idea why they chose this name, but it is what it is. As you can see, it builds the image first and then runs the command we specified.

This is the simplest way to run a test, and while it may not be the most effective one, it works. The only note I want to add here is that instead of using build to rebuild your image again, you can use the image name:tag that you have pushed; I'll show you more about this in a bit.
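
If you want to reproduce roughly what Docker Cloud does on your own machine, you can run the sut service with Docker Compose yourself (the exact flags Docker Cloud uses internally are my assumption, but the result should be close):

# Build the image and run only the sut service, similar to what Docker Cloud does
docker-compose -f docker-compose.test.yml up --build sut

# Or run it as a one-off container that is removed when the tests finish
docker-compose -f docker-compose.test.yml run --rm sut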

In real life, we don't run tests on one simple image; we have many more services connected to each other which we need in order to run our tests. For example, you may need a database service or a Redis service, etc. So how can we do that within Docker Cloud?

After some trial and error, I can tell you that it is a simple process which can be achieved without any problem; in the end, we are just dealing with a docker-compose.yml file.

As a scenario, I'm going to use a fresh Laravel project and run the tests using Docker Cloud. You can find all the code in this GitHub repository.

I keep rearranging my project structure and I still don't have a standard way to do it. This is by no means a perfect structure, and you can use any layout you want, but for now let's assume that we have the following:

├── .dockerignore
├── bin                       // shell scripts
├── hooks                     // docker hooks
├── src                       // laravel code
│   ├── bootstrap
│   ├── config
│   ├── database
│   ├── public
│   ├── resources
│   ├── routes
│   ├── storage
│   └── tests
├── Dockerfile
├── docker-compose.test.yml
└── docker-compose.yml

I am not going to go into each and every file I have here, but I am going to explain the most important ones:

  1. Dockerfile
  2. hooks/build
  3. docker-compose.test.yml
  4. bin/run_tests

1. Dockerfile File

The content of this file is not complicated; I'm going to strip most of the non-essential data from it:

FROM zaherg/php-swoole:7.2

USER root

ADD ./src /var/www
ADD ./bin /var/test

WORKDIR /var/www

RUN composer global require hirak/prestissimo && \
    composer install --no-progress --no-suggest --prefer-dist --optimize-autoloader

CMD ["php", "artisan","swoole:http","start"]
  1. The image is based on my PHP Swoole Docker image.
  2. I add the content of the Laravel application to /var/www.
  3. I add the content of the bin directory, which holds some shell scripts, to the /var/test directory.
  4. I specify the working directory.
  5. I run Composer and install all the required packages.

2. hooks/build File

This file overrides the build command which Docker Cloud/Docker Hub runs to build our images. We are free to use it or ignore it; I like to use it so I can inject some of that information back into the label section of my Docker images.

#!/bin/bash

# $IMAGE_NAME var is injected into the build so the tag is correct.

echo "Build hook running"
docker build --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
             --build-arg VCS_REF=`git rev-parse --short HEAD` \
             --build-arg DOCKER_REPO=$DOCKER_REPO \
             --build-arg IMAGE_NAME=$IMAGE_NAME \
             -t $IMAGE_NAME .

You can read more about this in the custom build phase hooks section of the documentation.
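
Since I stripped the label section from the Dockerfile above, here is a rough sketch of what the receiving end of those --build-arg values could look like. The label keys below just follow the common org.label-schema convention and are my assumption, not necessarily the exact labels in my image:

# Declare the build args passed in by hooks/build...
ARG BUILD_DATE
ARG VCS_REF
ARG DOCKER_REPO
ARG IMAGE_NAME

# ...and surface them as image labels
LABEL org.label-schema.build-date=$BUILD_DATE \
      org.label-schema.vcs-ref=$VCS_REF \
      org.label-schema.name=$IMAGE_NAME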

3. docker-compose.test.yml File

As I said before, this is a normal docker-compose.yml file, so I'll not go too deep into explaining it, but I'll point out the issues I faced and how to solve them.

version: "3.6"

services:
  sut:
    image: zaherg/laravel-test:latest
    command: /var/test/run_tests
    environment:
      - "APP_ENV=testing"
      - "APP_KEY=base64:qvxL3ijR2uwH268mkoz7B8DyuJ9mFectDiQbUy5D6/E="
      - "APP_DEBUG=true"
      - "APP_URL=http://localhost"
      - "DB_CONNECTION=mysql"
      - "DB_HOST=database"
      - "DB_DATABASE=docker"
      - "DB_USERNAME=docker"
      - "DB_PASSWORD=secret"
    links:
      - database

  database:
    image: mysql:5.7
    hostname: database
    ports:
      - "3306:3306"
    environment:
      - "MYSQL_RANDOM_ROOT_PASSWORD=yes"
      - "MYSQL_DATABASE=docker"
      - "MYSQL_USER=docker"
      - "MYSQL_PASSWORD=secret"

As you can see in this file, I have two services: the first one is responsible for testing my code, and the second one is the database.

Other than naming my test service sut, you may have noticed that nothing is different from any other service, which is great, as this makes it easy for everyone to understand.

However, what I've found is that we need to explicitly specify the links between the two services (or any other services); relying on the network alone won't link them together. Yes, Docker Cloud will create the network, but sadly it won't inject the names of the other services into the /etc/hosts file; instead, it will always inject the IP address and the container ID, which we have no control over.

So you can use either links or depends_on to overcome this issue and connect your services together.
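
For example, if you prefer depends_on over links, the sut service definition could look like the following sketch (same image and command as above, just swapping links for depends_on):

services:
  sut:
    image: zaherg/laravel-test:latest
    command: /var/test/run_tests
    depends_on:
      - database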

You may have noticed that in sut I'm not using the build command; instead, I'm specifying the image name. That's because the tests always run after the build/tag step has finished.

So why waste time building the image again?

If your tests pass, you will get a green success message.

If not, you will get a red failed message.

Either way, you can read the logs to figure out what the issue is and fix it.

Please note that if your tests fail, Docker will not tag your image.

You will also get a nice chart showing the build activity you have had for the last 32 builds.

4. bin/run_tests File

This file should contain all the commands I want to run to test my code, including any specific instructions that I don't want to bake into my final image. Remember that these commands run inside your final built image, not on the build infrastructure.

I'm going to present two different versions of this script: one uses a simple shell script called wait-for-it, and the second uses a program called Dockerize. Both do the same thing: they wait until the other service is ready to accept connections.

The main reason we need them is that we have no control over when the other services will be ready, which means that even if our image is ready to run the tests, we can't be sure the database is. That's why we need some delay before we start; but we can't just use an arbitrary delay, we need to make sure the database is actually ready.

Wait-for-it version

#!/usr/bin/env sh

echo "Waiting for the database to be ready"

# We define the timeout (the default value is 10),
# then pass the name of the second service and the port as the parameter.
export TIMEOUT=50 && /var/test/wait-for-it database:3306

echo "Running tests"

php /var/www/vendor/bin/phpunit --testdox --colors=never

Dockerize version

#!/usr/bin/env sh

# Since it's not installed, we need to install Dockerize before we can use it.

echo "Installing Dockerize from github"

export DOCKERIZE_VERSION=v0.6.1

apk update && apk add --no-cache openssl wget && \
wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz

echo "Waiting for the database to be ready"

dockerize -wait tcp://database:3306 -timeout 50s

echo "Running tests"

php /var/www/vendor/bin/phpunit --testdox --colors=never
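
One small gotcha worth mentioning: since the sut service executes /var/test/run_tests directly, the scripts in the bin directory need to be executable before they are copied into the image (or you can chmod them in the Dockerfile). Something along these lines, assuming you keep the file names I used above:

# Make the test scripts executable before committing/pushing them
chmod +x bin/run_tests bin/wait-for-it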

Conclusion

In conclusion, as you can see, Docker Cloud has a nice feature which we might not have known about, and using it helps us keep everything in one place: with Docker Cloud you can host, build, and test your Docker images.
