Building an ARM-based Kubernetes Cluster


Using Mixtile Blade 3 single-board computers to build an ARM-based Kubernetes cluster.


Things used in this project

Hardware components: Mixtile Blade 3 ×2

Software apps and online services: Kubernetes



Recently, ARM-based single-board computers have become powerful enough to use for more complex computing tasks. With SoCs like the Rockchip RK3588, we can get single-board computers with 8 CPU cores and up to 32 GB of RAM. This is already respectable for a single computer, but what if we could combine the power of multiple boards into a computing cluster?

Here is where Kubernetes comes into the picture. Kubernetes is an open-source platform for managing compute clusters of containerized applications. It allows us to aggregate multiple compute nodes (such as SBCs) into a cluster that can then accommodate multiple applications.

Using multiple nodes to run our applications, instead of just a single PC, gives us advantages such as resilience and better scalability. More information on Kubernetes can be found on their website.

In this project I will show how to set up a Kubernetes cluster from a couple of Mixtile Blade 3 boards.


Mixtile Blade 3

The Mixtile Blade 3 is a high-performance single-board computer based on the Rockchip RK3588 chip.

The Blade 3 comes with:

  • up to 32 GB of memory and 256 GB of eMMC storage
  • 2 x 2.5 Gbps Ethernet ports
  • 2 x USB Type-C ports (with PD and DisplayPort support)
  • 2 x HDMI Ports (one input and one output)
  • various connectors for PCIe, GPIO and clustering support

For the cluster, the two boards need to be connected to the same network, as follows:

Next, we will go through a set of setup steps that need to be run on each board.


The Mixtile Blade 3 comes pre-installed with Ubuntu Desktop (22.04.4 LTS).

For the initial setup, a display (HDMI or USB Type-C), a keyboard, and a mouse are needed. Here we should:

  • set a host name – I used mixtile-blade-3-001 / 002
  • set a user name and password

At this point we should have SSH access, so the display and peripherals are no longer needed.

The pre-installed Ubuntu Desktop is a good choice for desktop use, but for server / compute setups we don’t really need a desktop environment such as GNOME.

Fortunately, we can easily convert our Ubuntu Desktop install into an Ubuntu Server one. To do this we just need to run the following three commands, with a restart between them:

# Install Ubuntu Server components
$ sudo apt install ubuntu-server

# Disable the desktop environment, and set console as default
$ sudo systemctl set-default multi-user.target

# Clean-up the Desktop components
$ sudo apt purge ubuntu-desktop -y && sudo apt autoremove -y && sudo apt autoclean

Next, we need to make a couple of adjustments for Kubernetes:

  • disable swap space
# Disable the SystemD swap related services
$ sudo systemctl mask swapfile.swap mkswap.service
$ sudo systemctl disable --now swapfile.swap
$ sudo rm /swapfile
  • enable some Kernel networking features
# Enable IP forwarding and IP tables for bridge interfaces
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system

# Enable the overlay and br_netfilter Kernel modules
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
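The swap and Kernel settings above can be sanity-checked with a few read-only commands (a sketch, using the standard Linux /proc locations):

```shell
# Quick, read-only sanity checks for the swap and networking settings:
cat /proc/swaps                      # only the header line means swap is fully disabled
cat /proc/sys/net/ipv4/ip_forward    # should print 1 once the sysctl settings are applied
grep -w br_netfilter /proc/modules || echo "br_netfilter not loaded"
```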

Installing a Container Runtime

As you probably know, Kubernetes works with containers. To run the containers, Kubernetes needs a container runtime. There are multiple supported options, but the most popular and straightforward to install is ContainerD.

To install ContainerD we just need to run:

$ sudo apt install containerd

after which we need to change a couple of settings as needed by Kubernetes:

# Create a default ContainerD configuration
$ sudo mkdir -p /etc/containerd/; sudo bash -c 'containerd config default > /etc/containerd/config.toml'

# Edit the configuration, and make the below changes
$ sudo vim /etc/containerd/config.toml
# use SystemD cgroups
SystemdCgroup = true
# set the Sandbox image to the one used in Kubernetes
sandbox_image = "registry.k8s.io/pause:3.9"

# Restart the ContainerD service
$ sudo systemctl restart containerd

Installing Kubernetes

At this point our Mixtile Blade 3 hosts should be ready to install Kubernetes on them.

To install Kubernetes, I decided to use the kubeadm tool. This, along with the other components, can be installed from a custom Kubernetes APT repository:

# Install prerequisites
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add apt repository
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install Kubernetes components
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo systemctl enable --now kubelet

Now, we can start setting up our Kubernetes cluster. For a Kubernetes cluster we need at least one Control Plane node. We can have multiple control plane nodes for redundancy, but for this article I will keep it simple and use just one.

So, on one of the nodes we should now be ready to initialize our Kubernetes cluster with kubeadm:

[user@mixtile-blade-3-001] $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join --token 1xp002.4fw0...
--discovery-token-ca-cert-hash sha256:26a61d7...

If everything went well, we will get instructions on how to configure kubectl, along with a kubeadm join command. If we then run kubectl get nodes, we should see our first node.

Adding Worker Nodes

When we initialized the Kubernetes control plane, we also got a command which can be used to join additional nodes to the cluster. These nodes will be Worker nodes.

To join our 2nd Blade 3 as a worker node, we should run:

[user@mixtile-blade-3-002] $ sudo kubeadm join --token 1xp002.4fw0... --discovery-token-ca-cert-hash sha256:26a61d7...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Now, if we run the get nodes command again, we should see that we have 2 nodes:

user@mixtile-blade-3-001:~$ kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
mixtile-blade-3-001   Ready    control-plane   1h    v1.29.4
mixtile-blade-3-002   Ready    <none>          1h    v1.29.4

As we have a small cluster with just two nodes, a good idea is to enable Pod scheduling on the control plane node too. To do this, we should remove a taint that is present by default:

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
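For context, the taint removed here is the standard one kubeadm puts on control plane nodes (a sketch of the relevant node spec excerpt; on older Kubernetes versions the key was node-role.kubernetes.io/master instead):

```yaml
# Excerpt from `kubectl get node mixtile-blade-3-001 -o yaml`:
spec:
  taints:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule   # keeps regular Pods off the node until removed
```

The trailing `-` in the taint command is what tells kubectl to remove the taint with that key, rather than add it.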

Cluster Networking

One other essential component needed for a fully functional Kubernetes cluster is a network fabric that enables network communication between the pods in the cluster.

There are many options to choose from. My choice was Flannel, which is a simple layer 3 network fabric. Installing it is pretty easy:

$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

At this point the pods in our cluster should be able to communicate with each other.

Kubernetes Dashboard

Until now we controlled and monitored our Kubernetes cluster with the kubectl tool. One way we can improve this is to install a Kubernetes Dashboard.

We can deploy Kubernetes Dashboard with helm:

# Install Helm
$ sudo snap install helm --classic

# Add kubernetes-dashboard repository
$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# Deploy a Kubernetes Dashboard with Helm
$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

To access the dashboard we first need to create a proxy to it:

$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy --address=0.0.0.0 8443:443

After this, we should be able to access the Kubernetes Dashboard at the mixtile-blade-3-001.local:8443 address. To log in, we will also need an access token. For testing purposes, I generated a token for the deployment-controller service account, as follows:

$ kubectl -n kube-system create token deployment-controller
eyJhbGciOiJSUzI1.... <-- use this access token to login

With this we should be able to log in. We will get an overview of the cluster, and we should be able to inspect deployments and pods, and create new resources:
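As a side note, instead of reusing the deployment-controller account, a cleaner approach is a dedicated admin account for the Dashboard. A sketch (the dashboard-admin name is my own choice, not something pre-existing):

```yaml
# dashboard-admin.yaml -- a dedicated admin ServiceAccount for the Dashboard
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
# Grant it the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
```

After applying it with kubectl apply -f dashboard-admin.yaml, a login token can be generated with kubectl -n kubernetes-dashboard create token dashboard-admin.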

Kubernetes Dashboard

Sample Application

Now, as the cluster is more or less stable, we can deploy something useful on it.

As in the future I want to experiment with some AI / ML workloads, I decided to install a Jupyter Lab deployment in the cluster.

Jupyter has some pre-built Docker stacks which can be used to deploy Jupyter Lab in a containerized environment. Based on those, I created and installed an example Jupyter Lab deployment:

$ vim jupyter-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-scipy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-scipy
  template:
    metadata:
      labels:
        app: jupyter-scipy
    spec:
      containers:
      - name: jupyter
        image: jupyter/scipy-notebook  # pre-built image from the Jupyter Docker Stacks
        ports:
        - containerPort: 8888
          hostPort: 8888

$ kubectl apply -f jupyter-deployment.yaml

In this example, the deployment exposes the port on which Jupyter runs (8888) as a host port. This means we can access the deployment by connecting to port 8888 on the node where the Jupyter pod runs:
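As a side note, a hostPort ties access to whichever node the pod happens to run on. An alternative is a NodePort Service, which exposes Jupyter on the same port on every node. A sketch (the 30888 port is an arbitrary pick from the default NodePort range):

```yaml
# jupyter-service.yaml -- NodePort alternative to the hostPort approach
apiVersion: v1
kind: Service
metadata:
  name: jupyter-scipy
spec:
  type: NodePort
  selector:
    app: jupyter-scipy        # matches the labels of the Jupyter deployment
  ports:
  - port: 8888
    targetPort: 8888          # the port Jupyter Lab listens on inside the pod
    nodePort: 30888           # exposed on this port on every node
```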

Jupyter Lab running in the Kubernetes cluster

Next Steps

This project was a good start, giving us a compute cluster for future projects. Here are some of my plans for the cluster:

  • add more nodes to the cluster
  • experiment with Apache Airflow and Kubeflow
  • experiment with the NPU from RK3588

Hope you enjoyed this project! 😎



Network Connections


Jupyter Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-scipy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-scipy
  template:
    metadata:
      labels:
        app: jupyter-scipy
    spec:
      containers:
      - name: jupyter
        image: jupyter/scipy-notebook  # pre-built image from the Jupyter Docker Stacks
        ports:
        - containerPort: 8888
          hostPort: 8888


Attila Tőkés

Software Engineer experimenting with hardware projects involving IoT, Computer Vision, ML & AI, FPGA, Crypto and other related technologies.