Please be aware that this is an old post, more than one year old, so you might need to look for an updated version of this article either on this site or using your favourite search engine.
For the past few days I have been reading about Docker, trying to understand it better, seeing how I can use it and fit it into my workflow, and I still have a lot to learn.
In my learning journey I depend on reading articles (a lot of articles), checking the Docker documentation, and watching online courses and even YouTube videos, and trust me, not all of them are worth the time I spent, because they are either old or do not cover the situation I am trying to achieve. I am not saying they are bad, but everyone knows that the Docker (http://docker.com) release cycle is fast, and each new version introduces big breaking features that do not work with older versions. If you don't keep up quickly, you lose track, and that also applies to any article you read: if it talks about version 1.11 while you are working with 1.12, you may get lucky and still find the solution to the issue you are facing, or you may not.
Now, enough with this long (and debatable) talk; let's focus on what I have learned in the past few days and how I got what I wanted quickly.
Basically, I wanted to have a small API + database setup built from Docker images, like this:
To do so, you need to create a `docker-compose.yml` file to define each service, and here we have two: one for the API and one for the database. My API service depends on Node.js, so you can either build your own base image or pull someone else's; since I am learning, I chose to build my own image (please don't use it, it is built on an old Node.js version). I also wanted to keep the Dockerfile for the API somewhat cleaner, so that I would not need to rebuild the base image every time I build the API. So, rule number one: Always try to have a base image that you can use all the time.
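For illustration, a Node.js base image along those lines might be built from a Dockerfile like the following. This is a hypothetical sketch; the actual build of my `zaherg/node:0.10.33` image is not shown in this post.

```dockerfile
# Hypothetical base-image Dockerfile (the real zaherg/node:0.10.33 build
# is not shown in the post). It unpacks the old Node.js 0.10.33 runtime
# from the official binary tarball on top of a Debian base.
FROM debian:jessie

RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && curl -fsSL https://nodejs.org/dist/v0.10.33/node-v0.10.33-linux-x64.tar.gz \
    | tar -xz -C /usr/local --strip-components=1 \
 && rm -rf /var/lib/apt/lists/*
```

You would build and tag it once (for example with `docker build -t zaherg/node:0.10.33 .`) and then reuse the tag in every application Dockerfile, which is the whole point of rule number one.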
Now that we have the base image for my API, I needed to decide what to do for the database. Here I chose the other option and pulled the official base image.
Now that we have agreed on the base images, let's see how to build our own image, starting with the API. First, the content of my Dockerfile:
```dockerfile
FROM zaherg/node:0.10.33
EXPOSE 3000
WORKDIR /app
ENV NODE_ENV local
ADD ./src/package.json /app
RUN npm install
```
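One detail worth noting: the compose file later starts the API through `dockerize`, which this Dockerfile does not install, so presumably it comes from the base image. If you need to add it yourself, the dockerize README suggests downloading a release tarball; a sketch along those lines (the version number here is only an example) would be:

```dockerfile
# Install dockerize (used later by the compose "command" to wait for
# the database port). The version is an example; check the releases
# page at https://github.com/jwilder/dockerize for the latest.
ENV DOCKERIZE_VERSION v0.6.1
RUN curl -fsSL https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    | tar -xz -C /usr/local/bin
```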
- I exposed port `3000` from the image.
- I chose the directory `/app` to be the working directory.
- I created an environment variable, `NODE_ENV`, set to `local`.
- I copied `src/package.json` into `/app` and ran `npm install` to install all the packages inside the image.
As you can see, the content is simple and easy to understand. The first part of the `docker-compose.yml` looks like this:
```yaml
version: "2"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    command: dockerize -wait tcp://db:3306 -timeout 80s node server.js
    ports:
      - 3000:3000
    environment:
      - PORT=3000
    volumes:
      - ./src:/app
      - /app/node_modules
```
It is just a simple file where I define some other information, including the volumes shared between the image and my computer.
I am not going to explain much about it, but you should notice the `command` part, where I run `node server.js` to start my API, and the `ports` part, where I map port `3000` on my host to port `3000` in the container.
`docker-compose up --build` worked like a charm, but of course the API ran into problems since I hadn't set up the database image yet. Luckily, I had a Vagrant machine running the database, because I need to make sure each part works like a charm before I build the next one.
With `docker-compose` you can scale your service and run more than one container from the same image using the `scale` command, so I thought to myself: why not, let's try it and see how we can scale this simple API image. What I thought this would produce is something like this:
So I ran the command `docker-compose scale api=4`, and it failed. So, rule number two: Never map ports if you want to scale your service. The solution to my problem was to remove the `ports` mapping from the `docker-compose.yml` file, but then I had no idea which host port would be mapped to each container, because I would get a different random port every time. After searching and reading some more, I found that the only way to make it work is to put a load balancer in front of my containers, as the main communication channel between me and them, like this:
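For context, once the fixed host mapping is removed you can still publish the container port on its own, in which case Docker picks a free host port for each replica, which is exactly the unpredictability described above. An illustrative fragment showing the difference:

```yaml
services:
  api:
    # A fixed host mapping like "3000:3000" can only be bound once on
    # the host, so scaling beyond one container fails:
    # ports:
    #   - 3000:3000
    # Publishing the container port alone lets every replica start,
    # but each one receives a random host port:
    ports:
      - "3000"
```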
After some searching I found that I can use an HAProxy image as my load balancer entry point, so the `docker-compose.yml` file now looks like this:
```yaml
version: "2"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    command: dockerize -wait tcp://db:3306 -timeout 80s node server.js
    environment:
      - PORT=3000
      - TCP_PORTS=3000
    networks:
      - back-tier
    volumes:
      - ./src:/app
      - /app/node_modules
  lb:
    image: dockercloud/haproxy
    ports:
      - 80:3000
      - 443:3000
    networks:
      - back-tier
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  back-tier:
    driver: bridge
```
The things that have changed are:

- Added a new environment variable, `TCP_PORTS=3000`, which tells `haproxy` that communication with my `api` service will be over port `3000`.
- Mapped the local ports `80` and `443` to the container port `3000`, so any request over port `80` or `443` will be redirected to port `3000`, which is the port exposed by my `api` service.
- Mapped the file `/var/run/docker.sock` into the load balancer container so the containers can be discovered and managed by the load balancer.
- Created a new network so that both containers are on the same network and can communicate without any problem.
`docker-compose scale api=4` now worked like a charm and scaled the service as requested. So, rule number three: Read about the tools that you are using and check what options they provide.
The last part is to add the database service, which is not a big deal: it is just a new service added to the `docker-compose.yml` file, so our infrastructure looks like this.

If you look closely you will notice that I added the database to the load balancer too, and that is because, in my head, I didn't want to deal with the database directly. But then I remembered that I do need to access it via Sequel Pro, so the final `docker-compose.yml` file is the following:
```yaml
version: "2"
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    command: dockerize -wait tcp://db:3306 -timeout 80s node server.js
    environment:
      - PORT=3000
      - TCP_PORTS=3000
    networks:
      - back-tier
    volumes:
      - ./src:/app
      - /app/node_modules
    depends_on:
      - db
  lb:
    image: dockercloud/haproxy
    ports:
      - 80:3000
      - 443:3000
      - 3306:3306
    networks:
      - back-tier
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - api
      - db
  db:
    image: mysql:latest
    networks:
      - back-tier
    ports:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - TCP_PORTS=3306
networks:
  back-tier:
    driver: bridge
```
As you can see, we linked both the API and the database to the load balancer service.

Now, even if you try to scale the database it will not complain, but you should not do that. So, rule number four: Scaling database servers is not good if you didn't configure a cluster. Yes, you do need a cluster for your databases so that the data is distributed across all instances.
- Please note that this is not a production-grade infrastructure; it was for learning and experimenting only.
- Yes, I built my API with Node.js, but this does not mean you can't do it with PHP (for example a Laravel app); the API here is just a service and has nothing to do with the programming language.
- If you want to read more about dockerize, check https://github.com/jwilder/dockerize
- English is not my first language, so if you notice any language mistakes, please let me know so I can fix them.