Raspberry Pi cluster setup

- 3x Raspberry Pi 4B boards

- 3x 16 GB SD cards

- 3x Ethernet cables

- 1x router

- 3x USB-C power adapters

Cluster static IPs

- master: 192.168.10.3

- worker-01: 192.168.10.4

- worker-02: 192.168.10.5

 


Infrastructure setup

1. Static IP configuration

- sudo vi /etc/netplan/xxx.yaml

network:
    ethernets:
        eth0:
            addresses: [192.168.10.x/24]
            gateway4: 192.168.10.1
            nameservers:
                addresses: [8.8.8.8]
    version: 2

- sudo netplan apply
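
To confirm the address was applied (a quick check, assuming the interface is eth0 as in the config above):

ip -4 addr show eth0        # should list 192.168.10.x/24
ip route | grep default     # should show the 192.168.10.1 gateway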

2. Change the hostname

# change it with a command
sudo hostnamectl set-hostname xx
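
For example, using the node names that appear later in this post (run each command on the corresponding board):

sudo hostnamectl set-hostname master-node-01   # on the master
sudo hostnamectl set-hostname worker-node-01   # on worker 1
sudo hostnamectl set-hostname worker-node-02   # on worker 2
hostnamectl                                    # verify the new name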

 

3. Create an account and connect over SSH

- Add a user

sudo adduser newuser

 

- Add the new account to the sudo group

sudo usermod -aG sudo newuser

 

- SSH in as the new account and update the package index

ssh newuser@192.168.10.x

sudo apt update
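
Optionally, copying an SSH key to each node avoids typing the password on every connection (a sketch; substitute each node's IP):

ssh-keygen -t ed25519                 # generate a key pair (skip if one already exists)
ssh-copy-id newuser@192.168.10.3      # repeat for 192.168.10.4 and 192.168.10.5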

 

4. Kernel cgroup configuration

sudo vi /boot/firmware/nobtcmd.txt

# append the following to the end of the existing kernel command line (it is all one line)
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
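
The change takes effect after a reboot; the memory cgroup should then show 1 (enabled) in the last column of /proc/cgroups:

sudo reboot
cat /proc/cgroups | grep memory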

Build the k8s cluster with kubeadm

1. Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
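
The module can also be loaded immediately without rebooting, and the settings verified:

sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should be 1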

 

2. Install Docker

- Set up the repository

1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

 sudo apt-get update
 sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

2. Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

3. Use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the commands below. 

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

 

- Install Docker Engine

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker $USER # add the current user to the docker group

 

- Configure the Docker daemon to use systemd as the cgroup driver

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
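
To confirm Docker is running with the systemd cgroup driver (log out and back in first so the docker group membership takes effect):

docker info | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd
docker run --rm hello-world             # sanity check that containers run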

https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#%EB%8F%84%EC%BB%A4

 

3. Install kubeadm, kubelet, and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
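
A quick check that the tools are installed and held so apt upgrades do not move them:

kubeadm version
kubectl version --client
apt-mark showhold   # should list kubeadm, kubectl, kubelet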

https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

 

4. Initialize the master node

- sudo kubeadm init

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.3:6443 --token 64xvn4.a7wdy0u0y0iu7cwv \
	--discovery-token-ca-cert-hash sha256:4421c3cc7bd5011d38072d1c365b515dc43f6d3cd501213e12fc0c3f1a559fd0

 

5. Join the worker nodes to the cluster, running the following as root on each worker.

kubeadm join 192.168.10.3:6443 --token 64xvn4.a7wdy0u0y0iu7cwv \
	--discovery-token-ca-cert-hash sha256:4421c3cc7bd5011d38072d1c365b515dc43f6d3cd501213e12fc0c3f1a559fd0
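
The token printed by kubeadm init expires after 24 hours; if it has expired, a fresh join command can be generated on the master:

sudo kubeadm token create --print-join-command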

 

6. Setting up Flannel as the container network (on the master node)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

If the flannel pods do not run properly because each node's podCIDR has not been set:

kubectl logs -f kube-flannel-ds-88nvk -n kube-system
I0119 00:34:14.341794       1 main.go:218] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0119 00:34:14.342055       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0119 00:34:14.740596       1 kube.go:120] Waiting 10m0s for node controller to sync
I0119 00:34:14.740824       1 kube.go:378] Starting kube subnet manager
I0119 00:34:15.740911       1 kube.go:127] Node controller sync successful
I0119 00:34:15.740978       1 main.go:238] Created subnet manager: Kubernetes Subnet Manager - master-node-01
I0119 00:34:15.741037       1 main.go:241] Installing signal handlers
I0119 00:34:15.741551       1 main.go:460] Found network config - Backend type: vxlan
I0119 00:34:15.741619       1 main.go:652] Determining IP address of default interface
I0119 00:34:15.742772       1 main.go:699] Using interface with name eth0 and address 192.168.10.3
I0119 00:34:15.742871       1 main.go:721] Defaulting external address to interface address (192.168.10.3)
I0119 00:34:15.742890       1 main.go:734] Defaulting external v6 address to interface address (<nil>)
I0119 00:34:15.743068       1 vxlan.go:137] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0119 00:34:15.744236       1 main.go:326] Error registering network: failed to acquire lease: node "master-node-01" pod cidr not assigned
I0119 00:34:15.744432       1 main.go:440] Stopping shutdownHandler...

 

The podCIDRs have to be set manually as shown below; by default, this configuration seems to be something a cloud provider is expected to take care of.

master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node master-node-01 -p '{"spec":{"podCIDR":"10.244.0.0/24"}}'
node/master-node-01 patched
master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node worker-node-01 -p '{"spec":{"podCIDR":"10.244.3.0/24"}}'
node/worker-node-01 patched
master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node worker-node-02 -p '{"spec":{"podCIDR":"10.244.4.0/24"}}'
node/worker-node-02 patched
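
Alternatively, the manual patching can be avoided altogether by telling kubeadm the pod network range at init time (a sketch, assuming flannel's default 10.244.0.0/16); kubeadm then assigns a podCIDR to every node automatically:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16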

 

7. Verification

- kubectl get nodes

NAME             STATUS   ROLES                  AGE     VERSION
master-node-01   Ready    control-plane,master   8m40s   v1.21.2
worker-node-01   Ready    <none>                 4m17s   v1.21.2
worker-node-02   Ready    <none>                 3m48s   v1.21.2
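
It is also worth confirming that one flannel pod is Running on every node:

kubectl get pods -n kube-system -o wide | grep flannel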

Optional) NFS test on the k8s worker nodes

1. External NFS server configuration

1.1. Install the NFS server packages

apt-get install nfs-common nfs-kernel-server rpcbind portmap

1.2. Create the directory to share

mkdir /mnt/data
chmod -R 777 /mnt/data

1.3. Edit the NFS export configuration

The configuration file is /etc/exports.

The line below exports /mnt/data and opens it to everything in 172.31.0.0/16.

Set this to your Kubernetes node CIDR range.

/mnt/data 172.31.0.0/16(rw,sync,no_subtree_check)
  • rw: read and write operations
  • sync: write any change to the disc before applying it
  • no_subtree_check: prevent subtree checking

1.4. Apply the changes

exportfs -a
systemctl restart nfs-kernel-server
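
The active exports can be verified from the server itself:

exportfs -v              # list exported directories and their options
showmount -e localhost   # list what clients would see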

 

2. NFS client

2.1. Install the NFS client package

This must be installed on every Kubernetes node.

apt-get install nfs-common

 

2.2. Create a PV for the NFS share

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.10.x
    path: /mnt/k8s
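
The PV alone is not consumable by a pod; a PVC with a matching access mode and size binds to it. A minimal sketch (the claim name nfs-pvc is illustrative; storageClassName is left empty so the claim binds to the statically created PV):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: ""
EOF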

 

A Kubernetes StorageClass with an NFS dynamic provisioner can also be used:

- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

- https://sarc.io/index.php/os/1780-ubuntu-nfs-configuration

 

 

Optional) Mounting the NFS share directly

- Create the directory to mount the NFS share on

mkdir /public_data

- Mount

Assuming the NFS server IP is 172.31.2.2:

mount 172.31.2.2:/mnt/data /public_data
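
The mount can be verified, and made persistent across reboots via /etc/fstab (a sketch using the same example addresses):

df -h /public_data    # confirm the NFS share is mounted
# /etc/fstab entry for automatic mounting at boot:
# 172.31.2.2:/mnt/data  /public_data  nfs  defaults  0  0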

 


Raspberry Pi imager and OS images: https://www.raspberrypi.com/software/

Ubuntu 18.04.6 image for Raspberry Pi: https://cdimage.ubuntu.com/ubuntu/releases/18.04.6/release/
