Building an ARM-based Kubernetes Cluster

 

Using Mixtile Blade 3 single-board computers to build an ARM-based Kubernetes cluster.

 

Things used in this project


Hardware components: Mixtile Blade 3 ×2

Software apps and online services: Kubernetes

 

Story


Recently, ARM-based single-board computers have become powerful enough that we can use them for more complex computing tasks. With SoCs like the Rockchip RK3588 we can get single-board computers with 8 CPU cores and up to 32 GB of RAM. This is already respectable for a single computer, but what if we could combine the power of multiple boards into a computing cluster?

This is where Kubernetes comes into the picture. Kubernetes is an open-source platform for managing compute clusters of containerized applications. It allows us to aggregate multiple compute nodes (such as SBCs) into a cluster that can then accommodate multiple applications.

Using multiple nodes to run our applications, instead of just a single PC, gives us advantages such as resilience and better scalability. More information on Kubernetes can be found on their website.

In this project I will show how to set up a Kubernetes cluster from a couple of Mixtile Blade 3 boards.

 

Mixtile Blade 3

The Mixtile Blade 3 is a high-performance single-board computer based on the Rockchip RK3588 chip.

The Blade 3 comes with:

  • up to 32 GB of memory and 256 GB of eMMC storage
  • 2 x 2.5 Gbps Ethernet ports
  • 2 x USB Type-C ports (with PD and DisplayPort support)
  • 2 x HDMI Ports (one input and one output)
  • various connectors for PCIe, GPIO and clustering support

For the cluster, the two boards need to be connected to the same network, as shown in the Network Connections schematic.
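
Once both boards are up (see the Preparation section below), it is worth confirming that they can actually reach each other. A minimal check from the first board could look like this; the 192.168.0.x address is only an example matching the network used later in this write-up, so substitute your own:

# List the IPv4 addresses assigned to the board's interfaces
$ ip -4 addr show

# Ping the other board (replace with its actual address)
$ ping -c 3 192.168.0.79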

Next, we will go through a set of setup steps that need to be run on each board.

Preparation

The Mixtile Blade 3 comes pre-installed with Ubuntu Desktop (22.04.4 LTS).

For the initial setup, a display (HDMI or USB Type-C), a keyboard and a mouse are needed. Here we should:

  • set a host name – I used mixtile-blade-3-001 / 002
  • set a user name and a password

At this point we should have SSH access, so the display and peripherals are no longer needed.
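
For example, logging in from another machine on the same network could look like this (assuming the mixtile-blade-3-001 host name set above and working mDNS / .local name resolution, which the stock Ubuntu Desktop image provides; replace user with your own user name):

# Log in to the first board over SSH
$ ssh user@mixtile-blade-3-001.local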

The pre-installed Ubuntu Desktop is a good choice for desktop use, but for server / compute setups we don't really need a desktop environment such as GNOME.

Fortunately, we can easily convert our Ubuntu Desktop installation into an Ubuntu Server one. To do this, we need to run the following three commands, followed by a restart:

# Install Ubuntu Server components
$ sudo apt install ubuntu-server

# Disable the desktop environment, and set console as default
$ sudo systemctl set-default multi-user.target

# Clean-up the Desktop components
$ sudo apt purge ubuntu-desktop -y && sudo apt autoremove -y && sudo apt autoclean

Next, we need to make a few adjustments required by Kubernetes (a quick verification sketch follows these steps):

  • disable the swap space
# Disable the SystemD swap related services
$ sudo systemctl mask swapfile.swap mkswap.service
$ sudo systemctl disable --now swapfile.swap
$ sudo rm /swapfile
  • enable the required Kernel networking features
# Enable IP forwarding and IP tables for bridge interfaces
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Enable the overlay, br_netfilter Kernel modules
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
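
These settings normally take full effect after a reboot. A quick sanity check, roughly along these lines, can confirm everything is in place:

# Apply the new sysctl settings without a reboot
$ sudo sysctl --system

# Swap should be completely off (no output expected)
$ swapon --show

# All three values should report 1
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

# Both kernel modules should be listed
$ lsmod | grep -E 'overlay|br_netfilter'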

Installing a Container Runtime

As we know, Kubernetes works with containers. To run containers, Kubernetes needs a container runtime. Multiple options are supported, but the most popular and easiest to install is ContainerD.

To install ContainerD, we just need to run:

$ sudo apt install containerd

After this, we need to change a couple of settings required by Kubernetes:

# Create a default ContainerD configuration
$ sudo mkdir -p /etc/containerd/; sudo bash -c 'containerd config default > /etc/containerd/config.toml'

# Edit the configuration, and make the changes below
$ sudo vim /etc/containerd/config.toml
...
# use SystemD cgroups
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
...
# set the Sandbox image to the one used in Kubernetes
sandbox_image = "registry.k8s.io/pause:3.9"
...

# Restart the ContainerD service
$ sudo systemctl restart containerd
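
If you prefer to script these edits instead of opening vim, the same two changes can be made with sed. This is only a sketch and assumes the layout of the default configuration generated above:

# Switch the runc runtime to SystemD cgroups
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Point the sandbox (pause) image to the one used by Kubernetes
$ sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml

# Restart ContainerD to pick up the changes
$ sudo systemctl restart containerd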

Installing Kubernetes

At this point, our Mixtile Blade 3 hosts should be ready for the Kubernetes installation.

To install Kubernetes, I decided to use the kubeadm tool. This, along with the other components, can be installed from the custom Kubernetes APT repository:

# Install prerequisites
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add apt repository
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install Kubernetes components
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo systemctl enable --now kubelet
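
To confirm the installation went through, we can check the versions of the installed tools (the exact versions will of course depend on when the packages are installed):

# Check the installed versions
$ kubeadm version
$ kubectl version --client
$ kubelet --version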

Now, let's start setting up the Kubernetes cluster. A Kubernetes cluster needs at least one control plane node. We could have multiple control plane nodes for redundancy, but to keep things simple we will use just one.

We are now ready to initialize the Kubernetes cluster with kubeadm:

[user@mixtile-blade-3-001] $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.78:6443 --token 1xp002.4fw0...
--discovery-token-ca-cert-hash sha256:26a61d7...

If everything went well, we get instructions on how to set up kubectl. After following them, a kubectl get nodes command should show our node.
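
Collecting the steps from the kubeadm output above, the kubectl setup and a first check look roughly like this:

# Configure kubectl for the regular user (commands taken from the kubeadm init output)
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The control plane node should show up (it may stay NotReady until the network fabric is installed)
$ kubectl get nodes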

Adding Worker Nodes

When we initialized the Kubernetes control plane, we also got a command for joining additional nodes to the cluster. These nodes will act as worker nodes.

To join the second Blade 3 board as a worker node, we should run:

[user@mixtile-blade-3-002] $ sudo kubeadm join 192.168.0.78:6443 --token 1xp002.4fw0... --discovery-token-ca-cert-hash sha256:26a61d7...

...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If we run the get nodes command again, we should see that we now have two nodes:

user@mixtile-blade-3-001:~$ kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
mixtile-blade-3-001   Ready    control-plane   1h    v1.29.4
mixtile-blade-3-002   Ready    <none>          1h    v1.29.4

Since this is a small cluster with only two nodes, we also want to enable Pod scheduling on the control plane node. To do this, we need to remove the taint that is present by default:

$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
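
To verify that the taint is gone, we can inspect the node description; the Taints field should now report <none>:

# Check the taints on the control plane node
$ kubectl describe node mixtile-blade-3-001 | grep -i taints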

Cluster Networking

There is one more important component needed for a fully functional Kubernetes cluster: a network fabric, which enables network communication between the Pods in the cluster.

There are many options to choose from. The one I picked for now is Flannel, which is a simple Layer 3 network fabric. Installing it is very easy:

$ kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

At this point, the Pods in our cluster should be able to communicate with each other.
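
A simple way to verify this is to list all pods across all namespaces; once the Flannel and system pods are Running and each has an IP assigned, the network fabric is doing its job:

# All pods should eventually be Running, each with an IP address
$ kubectl get pods --all-namespaces -o wide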

Kubernetes Dashboard

Up until now, we interacted with the Kubernetes cluster using the kubectl tool. One way to improve on this is to install the Kubernetes Dashboard.

We can install the Kubernetes Dashboard with helm:

# Install Helm
$ sudo snap install helm --classic

# Add kubernetes-dashboard repository
$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# Deploy the Kubernetes Dashboard Helm chart
$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

To access the dashboard we first need to create a proxy to it:

$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy --address=0.0.0.0 8443:443

After this, we should be able to access the Kubernetes Dashboard at the mixtile-blade-3-001.local:8443 address. To log in, we will also need an access token. For testing purposes, I generated a token for the deployment-controller service account, as follows:

$ kubectl -n kube-system create token deployment-controller
eyJhbGciOiJSUzI1.... <-- use this access token to login

With this we should be able to log in. We get an overview of the cluster, and we should be able to inspect deployments and pods, and create resources:

Kubernetes Dashboard
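
Note that reusing the deployment-controller service account is only suitable for quick testing. For anything longer-lived, a cleaner approach is a dedicated service account with its own role binding. A sketch of that (the dashboard-admin name is my own choice, and cluster-admin rights are broad, so tighten them as needed):

# Create a dedicated service account for the dashboard
$ kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin

# Bind it to the cluster-admin role
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

# Generate a login token for it
$ kubectl -n kubernetes-dashboard create token dashboard-admin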

Sample Application

Now that the cluster is more or less stable, we can deploy something useful on it.

Since I want to experiment with some AI / ML workloads in the future, I decided to install a Jupyter Lab deployment in the cluster.

Jupyter has some pre-built Docker stacks which can be used to deploy Jupyter Lab in a containerized environment. Based on those, I created and applied an example Jupyter Lab deployment:

$ vim jupyter-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-scipy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-scipy
  template:
    metadata:
      labels:
        app: jupyter-scipy
    spec:
      containers:
      - name: jupyter
        image: quay.io/jupyter/scipy-notebook
        ports:
        - containerPort: 8888
          hostPort: 8888

$ kubectl apply -f jupyter-deployment.yaml

In this example, the deployment exposes the port on which Jupyter runs (8888) as a host port. This means we can access the deployment by connecting to port 8888 on the node where our Jupyter pod runs:

Jupyter Lab running in the Kubernetes cluster
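
To find out which node the pod landed on, and to get the login token that Jupyter prints at startup, something along these lines should work (the label and deployment name match the manifest above):

# See which node the Jupyter pod is running on
$ kubectl get pods -l app=jupyter-scipy -o wide

# Grab the access token / URL from the pod's startup logs
$ kubectl logs deployment/jupyter-scipy | grep token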

Next Steps

This project was a good start: we now have a compute cluster ready for future projects. Here are some of my plans for the cluster:

  • add more nodes to the cluster
  • experiment with Apache Airflow and Kubeflow
  • experiment with the NPU of the RK3588

Hope you enjoyed this project! 😎


 

Schematics


Network Connections

Code


Jupyter Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-scipy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-scipy
  template:
    metadata:
      labels:
        app: jupyter-scipy
    spec:
      containers:
      - name: jupyter
        image: quay.io/jupyter/scipy-notebook
        ports:
        - containerPort: 8888
          hostPort: 8888

Credits


Attila Tőkés

Software Engineer experimenting with hardware projects involving IoT, Computer Vision, ML & AI, FPGA, Crypto and other related technologies.