Build a PI Cluster for Local Development - Part 2
Published 1 month and 1 day ago, Categorized under: development, docker, docker-swarm, coding, pi, raspberrypi

In the last post we talked about preparing your Pis to be part of a cluster, configuring them, and making sure they are up to date. In this article I'll cover installing the most essential software:

  1. Docker & setup Docker Swarm on both PIs
  2. Gluster, a free and open-source scalable network filesystem.

The main reason for using Gluster is to have shareable persistent storage between your Pis that Docker Swarm can use, since Docker Swarm (at least the CE version) does not support persistent storage out of the box (if you know otherwise, feel free to share).

Installing Docker:

The easiest way to install Docker is to use the installation script provided by the Docker team, which is available at get.docker.com.

To install Docker you run the following command as root on each PI:

$ curl -fsSL https://get.docker.com | sh

Once the script finishes installing Docker, you need to add your user to the docker group. This can be done with the following command on all your Pis:

$ sudo usermod -aG docker pi

Remember to replace pi with your username if you chose a user other than pi. You will need to log out and log in again for your user to get the correct permissions.
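After logging back in, you can confirm the group change took effect. A quick check (assuming the default pi user; adjust the name accordingly):

```shell
# "docker" should appear in your group list after a fresh login.
if id -nG "$USER" | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active"
else
  echo "log out and back in (or run: newgrp docker)"
fi
```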

That's it, now you have Docker installed on your Pi, and you can verify it by running the following command:

$ docker info

If everything went well, you will get output like the following:

Client:
 Debug Mode: false
 Plugins:
  app: Docker Application (Docker Inc., v0.8.0)
  buildx: Build with BuildKit (Docker Inc., v0.3.1-tp-docker)
  mutagen: Synchronize files with Docker Desktop (Docker Inc., testing)

Server:
 Containers: 4
  Running: 4
  Paused: 0
  Stopped: 0
 Images: 8
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.44-v8+
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: aarch64
 CPUs: 4
 Total Memory: 3.534GiB
 Name: main
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
 Live Restore Enabled: false

Things might look a bit different at your end from what I presented, but this is a good sign that everything is working as planned.

Docker on the Pi will produce a few warnings; you can ignore them, as they are harmless.
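On Raspberry Pi OS the warnings are typically about missing cgroup memory support, which means containers cannot have memory limits. If you do want memory limits, the usual fix is to append the cgroup flags to the kernel command line and reboot. A sketch, demonstrated on a scratch copy of the file so you can see the effect first (the real file is /boot/cmdline.txt, or /boot/firmware/cmdline.txt on newer releases):

```shell
# Demonstrated on a scratch copy; apply the same edit (with sudo) to
# the real /boot/cmdline.txt, which must stay a single line.
CMDLINE=$(mktemp)
echo 'console=serial0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait' > "$CMDLINE"

# Append the cgroup flags once, only if they are not already present.
grep -q 'cgroup_memory=1' "$CMDLINE" || \
  sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' "$CMDLINE"

cat "$CMDLINE"
# After editing the real file, run `sudo reboot` for it to take effect.
```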

Docker Swarm:

Now that you have Docker installed on your Pis, it is time to initiate the Swarm cluster and join the Pis together, but before we do that we need to change a few things.

Change the hostname & hosts file:

All your Pis will have raspberry as the default hostname, but we need to differentiate each one by giving it a unique name, so edit the /etc/hostname file and change the name to something of your choosing.

$ sudo nano /etc/hostname

You can call the first one main, the second one worker1, the third one worker2, etc. Since I only have two Pis, I've called them main & worker.

Then you need to edit the /etc/hosts file to reflect the change and to identify the Pis in your cluster, so the file should contain something like the following:

# /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

192.168.1.10    main
192.168.1.11    worker

As I mentioned, this is what the file should look like on all your Pis, with the IP addresses replaced by your Pis' actual addresses.
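With the hosts file in place, you can sanity-check that each name resolves before going any further:

```shell
# Each cluster hostname should resolve via /etc/hosts.
for host in main worker; do
  if getent hosts "$host" >/dev/null; then
    echo "$host resolves"
  else
    echo "$host is missing from /etc/hosts"
  fi
done
```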

Now we need to go to the main one, and run the following command:

$ docker swarm init

You will then be presented with something like:

Swarm initialized: current node (3yj9paq9hn1dp6rpjjhvxjs06) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4posux641n7yiqbniwf3sr6rjpqbuth0bnogebajajhqf4bpub-805veloekj7fkx6cd0h8x1tjw <manager-ip>:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As the message notes, we need to run the following command to get the token for adding a manager:

$ docker swarm join-token manager

It will return something like:

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4posux641n7yiqbniwf3sr6rjpqbuth0bnogebajajhqf4bpub-5hft39oha9rdtvrhvbwum2zkv <manager-ip>:2377

Now SSH into the second device and run the following command:

$ docker swarm join --token SWMTKN-1-4posux641n7yiqbniwf3sr6rjpqbuth0bnogebajajhqf4bpub-5hft39oha9rdtvrhvbwum2zkv <manager-ip>:2377

Now running docker node ls on any of the Pis should return something like:

ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
uty5oo1shl9dl7k119ov1xcnw *   main                Ready               Active              Leader              19.03.11
6a5v2sofsow9qsefybip8901z     worker              Ready               Active              Reachable           19.03.11

Please note that I am not following Docker's recommendations for building a swarm cluster, as I am running two managers while the minimum number of managers for redundancy is three. More info can be found in Docker's swarm administration documentation.
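To see the cluster in action, you can run a small throwaway replicated service and check where its tasks land. A quick smoke test (the service name and image are my own choice, not part of the original setup):

```shell
# Create a throwaway service with one replica per node.
docker service create --name swarm-test --replicas 2 nginx:alpine

# The NODE column should show the tasks spread across main and worker.
docker service ps swarm-test

# Clean up when done.
docker service rm swarm-test
```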


Persistent shared storage in Docker Swarm is a big topic, and since many have already covered it, I am not going to go into details. I'll just explain how to install Gluster, as it is the only solution I found that works with my Pi cluster.

By installing Gluster we will achieve the following:

[Image: Docker Swarm shared storage across nodes]

Installing Gluster:

To install Gluster, run the following commands on each Pi:

$ sudo apt update -y
$ sudo apt install -y glusterfs-server

These commands will install glusterfs-server version 5.5-3 for arm64. Sadly, this is the latest version published to the repository; at the time of writing the latest Gluster release is 7, but I am not going to install it from source. If you'd like to do so, you can check the Gluster documentation.

Configuring Gluster:

All commands should be run on each Pi unless mentioned otherwise.

First, enable the Gluster daemon so it starts on boot:

$ sudo systemctl enable glusterd

Then we need to start it by running

$ sudo systemctl start glusterd

To make sure everything is working as it should be, we run the following commands:

# Verify glusterd is enabled
$ sudo systemctl is-enabled glusterd

# Check the system service status of the glusterd
$ sudo systemctl status glusterd

The result of the last command should be something like:

● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/lib/systemd/system/glusterd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-06-10 10:44:32 +03; 1h 15min ago
     Docs: man:glusterd(8)
  Process: 467 ExecStart=/usr/sbin/glusterd -p /run/ --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 502 (glusterd)
    Tasks: 43 (limit: 4033)
   CGroup: /system.slice/glusterd.service
           ├─502 /usr/sbin/glusterd -p /run/ --log-level INFO
           ├─689 /usr/sbin/glusterfsd -s main --volfile-id staging-gfs.main.gluster-brick -p /var/run/gluster/vols/staging-gfs/mast
           └─771 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/run/gluster/glustershd/ -l /var

Jun 10 10:44:18 main systemd[1]: Starting GlusterFS, a clustered file-system server...
Jun 10 10:44:32 main systemd[1]: Started GlusterFS, a clustered file-system server.

For simplicity, I'll probe all nodes, so run the following command on each Pi (probing a node from itself is harmless; Gluster simply reports that the probe is not needed):

$ sudo gluster peer probe main; sudo gluster peer probe worker;

Running the following command on our main Pi should return something like:

$ sudo gluster pool list

# should return
UUID                    Hostname    State
715ab501-177d-4337-8b9b-e1405ff133c3    worker      Connected
0afa69d0-4484-4a6b-b500-d79f8c6eab2d    localhost   Connected

# to check the peer status run
$ sudo gluster peer status

# should return
Number of Peers: 1

Hostname: worker
Uuid: 715ab501-177d-4337-8b9b-e1405ff133c3
State: Peer in Cluster (Connected)

We are now ready to create the gluster storage directory and volume by running the following commands:

$ sudo mkdir -p /gluster/brick

# create a gluster volume across all nodes
$ sudo gluster volume create staging-gfs replica 2 main:/gluster/brick worker:/gluster/brick force

# Start the volume
$ sudo gluster volume start staging-gfs

# Check the volume status
$ sudo gluster volume info

# You will get something like

Volume Name: staging-gfs
Type: Replicate
Volume ID: 7004cb0a-0040-4b62-b4d4-79489c37ef68
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: main:/gluster/brick
Brick2: worker:/gluster/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet

If you have more than two nodes, the replica count should match the number of your nodes, so if for example you have five nodes the volume command should be something like:

$ sudo gluster volume create staging-gfs replica 5 main:/gluster/brick worker1:/gluster/brick worker2:/gluster/brick worker3:/gluster/brick worker4:/gluster/brick force

Now we need to specify a place to mount the Gluster volume. I prefer not to use /mnt directly, instead creating a directory inside /mnt called data or shared, because in my experience, when mounting on /mnt directly, Gluster fails to remount after a node restart and the cluster stops working.

So we run the following commands:

$ sudo mkdir -p /mnt/data
$ echo 'localhost:/staging-gfs /mnt/data glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
$ sudo mount.glusterfs localhost:/staging-gfs /mnt/data
$ sudo chown -R pi:pi /mnt/data

Let's validate that staging-gfs is listed under our mounted partitions:

# run the following command
$ sudo df -h

# should get something like
Filesystem              Size  Used Avail Use% Mounted on
/dev/root                59G  5.6G   51G  11% /
devtmpfs                1.7G     0  1.7G   0% /dev
tmpfs                   1.8G     0  1.8G   0% /dev/shm
tmpfs                   1.8G  480K  1.8G   1% /run
tmpfs                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                   1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/mmcblk0p1          253M   54M  199M  22% /boot
localhost:/staging-gfs   59G  6.2G   51G  11% /mnt/data
tmpfs                   362M     0  362M   0% /run/user/1000

As we can see localhost:/staging-gfs is mounted on /mnt/data.

Just a reminder: all the commands above should be run on all your nodes, not only the manager.

To validate that everything is working as it should, we create a file on our main node, and it should then appear in the worker's directory.
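Assuming SSH access to both nodes, the check looks like this:

```shell
# On main: create a file inside the shared mount.
touch /mnt/data/hello-from-main

# On worker (e.g. over SSH): the file should appear almost immediately.
ls -l /mnt/data/hello-from-main
```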

[Animation: example of using the shared directory]
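With the shared mount in place, swarm services can finally get persistent storage by bind-mounting a directory under /mnt/data; whichever node a replica lands on, it sees the same files. A sketch (the service name, image, and paths are illustrative, not from the original setup):

```shell
# Create a service-specific directory on the shared volume
# (visible on every node thanks to Gluster).
mkdir -p /mnt/data/web

# Bind-mount it into a replicated service.
docker service create --name web \
  --replicas 2 \
  --publish 8080:80 \
  --mount type=bind,source=/mnt/data/web,target=/usr/share/nginx/html \
  nginx:alpine
```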

Build a PI Cluster for Local Development:

  1. Part one: Preparing your PIs.
  2. Part two: Installing Docker, Docker swarm and Gluster.
  3. Part three: Create your Stack (MariaDB, PostgreSQL, Redis and Minio)