A Raspberry Pi based 'Bramble' Cluster for Docker Swarm
Introduction
This article describes all of the steps needed to create a 'Beowulf' compute cluster using Raspberry Pi 3s. In this example, 7 Raspberry Pis are used to give 28 cores of ARM processing, with Docker Swarm providing 28 load-balanced instances of the Nginx web server servicing HTTP requests from machines sitting outside of the Swarm. In effect there will be 28 Nginx web servers distributed across 7 Docker nodes running on 7 Raspberry Pis. One node will be a manager node; the other six will be worker nodes. All nodes will host Nginx servers.
The article touches briefly on different aspects of clustering and Docker; the aim is to provide enough information to build a working Docker Swarm cluster whilst encouraging the reader to discover the specific components in more detail for themselves.
Hardware
Full details of the cluster built for this project will be published at some point in the future (watch this space). However, for the purposes of this article a collection of Raspberry Pi 3s connected on the same LAN will suffice.
Arch Linux
All of the Raspberry Pis run Arch Linux. To install Arch Linux on a single node, follow the instructions in Installing Arch Linux on a Raspberry Pi 3; it is not necessary to include the optional or Wi-Fi elements of the installation.
Installing Docker
Docker needs the loop module on first usage. The following steps may be required before installation.
tee /etc/modules-load.d/loop.conf <<< "loop"
modprobe loop
Install Docker
pacman -S docker
Add the user to the docker group.
gpasswd -a john docker
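Note that group membership is only re-evaluated at login, so log out and back in (or start a new session) before using Docker as that user. Membership can be checked as follows, assuming the example user john:
groups john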
Start and Enable the service
systemctl start docker.service
systemctl enable docker.service
Configure the storage driver to be overlay2. The compatible option, devicemapper, offers sub-optimal performance and is not recommended in production. Modern Docker installations should already use overlay2 by default.
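Should the driver need to be set explicitly, it can be declared in /etc/docker/daemon.json; this is standard Docker daemon configuration, though the file may not exist yet and the daemon must be restarted for the change to take effect. A minimal sketch:
{
    "storage-driver": "overlay2"
}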
To see current storage driver, run
docker info | head
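Alternatively, the storage driver can be read directly using docker's Go-template formatting, which avoids scanning the full output:
docker info --format '{{.Driver}}'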
Test the installation by running an ARM Hello World example.
docker pull hypriot/armhf-hello-world
docker run -it hypriot/armhf-hello-world
To start the remote API with the docker daemon, create a drop-in snippet (/etc/systemd/system/docker.service.d/override.conf) with the following content; note that the directory will need to be created.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
The -H tcp://0.0.0.0:4243 part is for opening the Remote API and the -H unix:///var/run/docker.sock part is for host machine access via terminal.
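After creating the drop-in, reload the systemd configuration and restart the service for the change to take effect; the remote API can then be sanity-checked with curl (/version is a standard Docker Engine API endpoint):
systemctl daemon-reload
systemctl restart docker.service
curl http://localhost:4243/version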
Python
Python is needed by Ansible on every node; Ansible itself is an optional component (see below).
pacman -S python
RSync
RSync is used by the Ansible synchronize module and is useful for deploying files between nodes.
pacman -S rsync
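As an illustration of the sort of file deployment rsync enables, the following copies a directory to another node over SSH (the paths and user here are hypothetical examples):
rsync -av /srv/www/ john@192.168.1.81:/srv/www/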
Cloning the SD card for use with other nodes
Follow the article here to clone the SD Card.
Clone and boot each node in turn and change the IP Address and hostname (/etc/hostname) as required.
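For example, the hostname on a freshly cloned node can be set with systemd's hostnamectl (node-01 is a placeholder; the static IP is changed in whichever network configuration the installation uses):
hostnamectl set-hostname node-01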
Create a Serial Console on node-00
One node of the cluster can be designated the 'manager node'. It may be appropriate to add a serial console to this node. This step is not necessary but may be useful should something go wrong. An alternative is to connect a USB keyboard and HDMI monitor to the designated manager node.
Details of adding a serial port to a Raspberry Pi 3 can be found in the article Adding a Serial Console to a Raspberry Pi.
Install Ansible on node-00
Ansible is a simple method of managing multiple computers and is ideally suited for use on a cluster; however, it is purely an optional component.
Note: There is no installation required on the hosts that are to be administered using Ansible, as the product uses SSH to perform tasks. However, each node must have Python installed.
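Ansible expects key-based SSH access from the control node to each managed node. A minimal sketch, assuming the example user john and the node addresses used below:
ssh-keygen -t ed25519
ssh-copy-id john@192.168.1.81
ssh-copy-id john@192.168.1.82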
Install Ansible using the command
pacman -S ansible
Create an inventory file at /etc/ansible/hosts, e.g.
[control]
192.168.1.80

[managed]
192.168.1.81
192.168.1.82
You can check that all the nodes listed in the inventory are alive by running
ansible all -m ping
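Ad-hoc commands can be run against any inventory group in the same way; for example, using the built-in command module:
ansible managed -m command -a "uname -a"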
Create a playbook that will run a package update on each node and save it as syu.yml, e.g.
---
- name: All hosts up-to-date
  hosts: control:managed
  become: yes
  tasks:
    - name: full system upgrade
      pacman:
        update_cache: yes
        upgrade: yes
Execute the playbook with the following command
ansible-playbook --ask-become-pass syu.yml
Setting up the Docker Swarm (Swarm Mode)
Assuming the manager host is at 192.168.1.80, run the following command on that host to create a swarm
docker swarm init --advertise-addr 192.168.1.80
The response shows the command that can be run on the other hosts to allow them to join the swarm; this command includes a token (see below). The response can be re-displayed, if required, with the command
docker swarm join-token worker
Setting up the worker Nodes
Run the command returned from the docker swarm init command above, e.g.
docker swarm join --token <TOKEN> 192.168.1.80:2377
Details of the swarm can be displayed with the following commands
docker node ls
docker info
Creating the Web Server Services
To add web servers to the swarm, create a service. Alex Ellis (http://alexellis.io/) has an ARM-based image called alexellis2/nginx-arm that can be used on a Raspberry Pi cluster; Nginx images are also available for other platforms, including the official image (see http://hub.docker.com for details). The command to create 4 ARM-based Nginx servers is as follows; the service has been given the name 'nginx'.
docker service create --replicas 4 --publish 8080:80 --name nginx alexellis2/nginx-arm
Details of the service can be obtained with the following commands
docker service inspect --pretty nginx
docker service ls
docker service ps nginx
The following command will scale this up from 4 to 28 containers (tasks), one per core across the cluster
docker service scale nginx=28
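The scale subcommand is shorthand for updating the service's replica count, so the following is equivalent:
docker service update --replicas 28 nginx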
Accessing the Swarm Services
During the create command, port 8080 was published and bound to the internal container port 80. As Nginx listens on port 80 within the container, this means that from outside the container Nginx can be accessed on port 8080. This applies to every node in the swarm, even if there isn't an Nginx container running on that specific node. Accessing such a node is not an issue, as the Routing Mesh ensures that no matter which node you access on port 8080, the request is directed to a node with the service running.
From another machine outside of the cluster but on the same network, the Nginx server can be accessed with a browser using any node IP on port 8080, e.g.
http://192.168.1.80:8080
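To see the Routing Mesh in action, the same request can be sent to several node addresses; each should return HTTP 200 regardless of where the containers are actually running. A short sketch (adjust the IPs to match the cluster):
for ip in 192.168.1.80 192.168.1.81 192.168.1.82; do
    curl -s -o /dev/null -w "$ip: %{http_code}\n" http://$ip:8080
done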
Summary
This post describes how to use the in-built Docker functionality to create a swarm running 28 instances of the Nginx web server on a small cluster of Raspberry Pis.