While installing with kubeadm, I ran into the error below.

[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Mem]: the system RAM (924 MB) is less than the minimum 1700 MB
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

 

Setting aside the error about having less memory than the recommended minimum, why does it require swap to be disabled?

 

If physical memory (RAM) fills up, can't the swap space on the hard disk be used to guard against system failure?

 

As expected, there are records of many people commenting on this.

 

In particular, this is the comment that caught my eye.

 

srevenant commented on 3 Apr 2018

Not supporting swap as a default? I was surprised to hear this -- I thought Kubernetes was ready for the prime time? Swap is one of those features.

This is not really optional in most open use cases -- it is how the Unix ecosystem is designed to run, with the VMM switching out inactive pages.

If the choice is no swap or no memory limits, I'll choose to keep swap any day, and just spin up more hosts when I start paging, and I will still come out saving money.

Can somebody clarify -- is the problem with memory eviction only a problem if you are using memory limits in the pod definition, but otherwise, it is okay?

It'd be nice to work in a world where I have control over the way an application memory works so I don't have to worry about poor memory usage, but most applications have plenty of inactive memory space.

I honestly think this recent move to run servers without swap is driven by the PaaS providers trying to coerce people into larger memory instances--while disregarding ~40 years of memory management design. The reality is that the kernel is really good about knowing what memory pages are active or not--let it do its job.

 

In short, according to srevenant, the reason is more business-related than technical. The comment received over 210 likes..

 

Still, shouldn't we also hear the Kubernetes side of the story?

 

Again, let's look at the passage that personally caught my eye.

Swap Memory: The QoS proposal assumes that swap memory is disabled. If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can start allocating memory on swap space. Eventually, if there isn’t enough swap space, processes in the pods might get killed. TODO: ensure that swap space is disabled on our cluster setups scripts.

 

Here is another one.

Adding swap slows down jobs and introducing more bandwidth to disk and isolation issues. We don't manage disk io yet, and it is hard to manage too. Without better disk io management, simply enabling swap for container/pod is bad solution.
Disabling swap is a good approach. When you have multiple containers and multiple machines they could be scheduled on, it is better to kill one container, than to have all the containers on a machine operate at an unpredictable, probably slow, rate.

 

In short, swap does not fit the QoS strategy they have defined, and supporting it would require considering too many things.. The approach seems to be: rather kill the problematic container and keep the other containers on the machine healthy.
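
To make the QoS argument above concrete, here is a minimal sketch of a pod in the Guaranteed class (requests equal to limits); the pod name and image are just examples. The 256Mi guarantee only holds if the kernel cannot page the pod's memory out to swap.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical example pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:                  # requests == limits -> Guaranteed QoS class
        memory: "256Mi"
        cpu: "250m"
EOF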

 

Many other Kubernetes engineers have left a variety of comments as well.

 

How much effort must have gone into building this solid system? Their comments won me over, and since I currently understand only the tip of the iceberg, I will side with the engineers for now.

 

In the next post, I will look at Network Policy (weave container network).

References

- https://github.com/kubernetes/kubernetes/issues/7294#issuecomment-215637455

- https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/proposals/resource-qos.md

- https://www.evernote.com/shard/s360/client/snv?noteGuid=caa3d18e-4bda-4516-9ec9-1180999015e2&noteKey=46fa507ba5b78edc&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs360%2Fsh%2Fcaa3d18e-4bda-4516-9ec9-1180999015e2%2F46fa507ba5b78edc&title=191120%2Bwhy%2Bk8s%2Bdisable%2Bswap%253F 

- https://github.com/kubernetes/kubernetes/issues/53533

Master Node with Control Plane Components

- Master Node : manages, plans, schedules, and monitors the nodes using the control plane components

- It makes cluster-wide decisions, for example scheduling, and detecting and responding to cluster events.

- It is recommended to run the control plane components on the same machine, and not to run user containers on that machine.

- Below are the control plane components. Let's take a look.

- kube-apiserver : orchestrates all operations within the cluster
  - exposes a REST API so the cluster can be monitored from outside
  - provides the communication channel with the worker nodes
- kube-controller-manager : helps keep the cluster highly available
  - node-controller
  - replication-controller
  - ...
- etcd cluster : configuration storage in key-value format
- kube-scheduler : identifies the right node to place a container on, based on the container's resource requirements, the worker node's capacity, or other policies
- container runtime engine : docker, rkt ...
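
On a kubeadm-based cluster, a quick way to see these components is to list the kube-system pods; they run as static pods, so their manifests can also be found on the master node.

kubectl get pods -n kube-system -o wide

# The static pod manifests live on the master node
ls /etc/kubernetes/manifests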

 

Worker Nodes

- Worker Node : hosts applications as containers

- kube-proxy : provides the communication channel between containers across worker nodes
- kubelet : reports container status and other node information back to the master node and controls the containers on its node
  - listens for instructions from the kube-apiserver
  - deploys or destroys containers on the node
  - the kube-apiserver periodically fetches status reports from the kubelet
- container runtime engine : docker, rkt ...
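
On a worker node, a rough way to check these agents (systemd unit and container names as created by the kubeadm packages and Docker):

systemctl status kubelet                    # the kubelet runs as a systemd service
journalctl -u kubelet --since "10 min ago"  # recent kubelet logs
docker ps --filter name=kube                # containers the kubelet started on this node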

 

In the next post, I will look at why swap must be disabled when installing with kubeadm.


References

- https://v1-18.docs.kubernetes.io/docs/concepts/overview/components/

- https://kubernetes.io/docs/concepts/overview/components/

For the kubeadm hands-on practice, I prepared the hardware shown in the picture below.

- Switching hub

- 4 Raspberry Pis

- Power supply

- Wi-Fi

 

 

Now, let's start installing Raspbian on the Raspberry Pis!

 

Among the various methods, I used the Raspberry Pi Imager to install it quickly and easily.

 

https://www.raspberrypi.org/software/

 


 

First, download the Raspberry Pi Imager for macOS from the official Raspberry Pi website.

 

Install Raspbian onto the SD card using the Raspberry Pi Imager.

- Operating System : Raspberry Pi OS (32-bit)

- Storage : SD card

 

It took about 5 minutes to finish installing one Raspberry Pi..

 

The basic environment setup of Raspbian (language, keyboard, ...) can be completed without much trouble by following the guide Raspbian provides.

 

 

Second, let's connect over SSH and assign static IPs.

 

Since an IP assigned via DHCP can change every time you work, the four Raspberry Pis each need a static IP.

 

If the SSH server is disabled on Raspbian, enable it.

- The relevant settings can be changed with sudo raspi-config.
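
For reference, a sketch of enabling the SSH server without going through the menu (commands assumed for a systemd-based Raspbian):

sudo systemctl enable ssh && sudo systemctl start ssh
# or non-interactively through raspi-config (0 means enable)
sudo raspi-config nonint do_ssh 0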

 

Since all the Raspberry Pis will be connected to the internet over a wired connection, the network settings will be added for the eth0 interface.

 

If you don't know the name of your wired network interface, check the network information with the ifconfig command.

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.123.200  netmask 255.255.255.0  broadcast 192.168.123.255
        inet6 fe80::7876:9c60:88d4:ccde  prefixlen 64  scopeid 0x20<link>
        ether b8:27:eb:a4:4f:42  txqueuelen 1000  (Ethernet)
        RX packets 24722  bytes 15151148 (14.4 MiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 6056  bytes 637019 (622.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

Also, find the gateway address with the netstat -nr command.

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.123.254 0.0.0.0         UG        0 0          0 eth0
0.0.0.0         192.168.123.254 0.0.0.0         UG        0 0          0 wlan0
192.168.123.0   0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.123.0   0.0.0.0         255.255.255.0   U         0 0          0 wlan0

 

To set the static IP, I first tried putting the required settings in /etc/network/interfaces; the static IP was applied, but the internet connection did not work properly.

 

Note) the network address (network) here means the range 192.168.123.0 ~ 192.168.123.255.

# interfaces(5) file used by ifup(8) and ifdown(8)

# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'

auto eth0
iface eth0 inet static
address 192.168.123.200
netmask 255.255.255.0
gateway 192.168.123.254
network 192.168.123.0
broadcast 192.168.123.255
dns-nameservers 8.8.8.8   # requires the resolvconf package; otherwise set DNS in /etc/resolv.conf

 

The top of the file says that static IP settings should go into /etc/dhcpcd.conf. My impression was that /etc/dhcpcd.conf overrides /etc/network/interfaces.

 

Add the following to the network configuration file (/etc/dhcpcd.conf) of each Raspbian node:
interface eth0
static ip_address=192.168.123.200/24
static routers=192.168.123.254
static domain_name_servers=192.168.123.254 8.8.8.8
Apply the change with a reboot:
sudo reboot
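
After the reboot, a quick sanity check that the static address and default route were applied (addresses are the ones configured above):

ip addr show eth0 | grep "inet "   # should show 192.168.123.200/24
ip route | grep default            # should show 192.168.123.254 as the gateway
ping -c 3 8.8.8.8                  # confirm internet connectivity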

 

Once all of this is done, you can connect to Raspbian as follows and do whatever work you need.

 

 

Third, ssh pi@192.168.123.x and log in!

 

Connected to the four Raspbian machines from the Mac

 

Next week, I plan to look at installing kubeadm!


References

- https://d-tail.tistory.com/4

- https://dullwolf.tistory.com/18

- https://ng1004.tistory.com/105

- https://superuser.com/questions/99949/what-does-network-mean-in-the-etc-network-interfaces-file

- https://www.raspberrypi.org/software/

Kubernetes Installation by Kubeadm

Kubernetes Installation Overview

Raspberry Pi cluster setup

- 4x Raspberry Pi 3 B+

- 4x 16 GB SD card

- 4x ethernet cables

- 1x switching hub

- 1x USB power hub

- 4x Micro-USB cables

 

Cluster static IPs

- master: 192.168.123.200

- worker-01: 192.168.123.201

- worker-02: 192.168.123.202

- worker-03: 192.168.123.203

 

1. Install Docker, disable swap

# Install Docker
curl -sSL get.docker.com | sh

# Run docker without sudo
sudo usermod pi -aG docker

# Disable swap
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

# Enable the cgroups the kubelet needs by appending them to /boot/cmdline.txt
echo "Adding cgroup_enable=cpuset cgroup_enable=memory to /boot/cmdline.txt"
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory"
echo "$orig" | sudo tee /boot/cmdline.txt
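
After a reboot, the cgroups appended to /boot/cmdline.txt can be checked roughly like this (the kubelet needs the memory cgroup enabled):

sudo reboot
# ... once the node is back up:
grep -E "cpuset|memory" /proc/cgroups   # the last column should be 1 (enabled)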

 

2. Install kubeadm, kubelet, kubectl

# Add the Kubernetes apt repository and install kubeadm, kubelet, kubectl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update && \
sudo apt-get install -y kubelet kubeadm kubectl

# Pin the versions so apt upgrades don't break the cluster
sudo apt-mark hold kubelet kubeadm kubectl

 

Note) kubectl alias setup

- add the alias to ~/.bashrc (or ~/.bash_profile) and reload the file

alias k=kubectl
source ~/.bashrc

 

3. Initialize Master Node

Master Node HA

  • load balancer : distributes client access across the kube-apiserver instances on the master nodes
  • leader election : scheduler, controller manager
kube-controller-manager --leader-elect true [other options]
						--leader-elect-lease-duration 15s
						--leader-elect-renew-deadline 10s
						--leader-elect-retry-period 2s
  • two topologies for etcd : stacked, external

 

For later testing, add the following settings for the master node when running kubeadm init.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    pod-eviction-timeout: 10s
    node-monitor-grace-period: 10s

 


sudo kubeadm init --config kubeadm_conf.yaml

 

Two fatal errors occur.

[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Mem]: the system RAM (924 MB) is less than the minimum 1700 MB
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

 

The memory shortage is a known issue, but it seems the swap disable was not applied..?

sudo kubeadm init --config kubeadm_conf.yaml --ignore-preflight-errors=Mem --ignore-preflight-errors=Swap

 

Since I will only run simple Kubernetes tests, I decided to ignore these errors and proceed.

 

However, an error occurred because the kubelet was not running.

Looking at the logs, the reason was that swap had not been disabled.. In the end, disabling swap is mandatory.

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

 

Disable swap on the Raspberry Pi: sudo swapoff -a
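
A quick way to confirm swap is really off before retrying kubeadm init (swapon prints nothing when no swap is active):

sudo swapoff -a
swapon --show
free -h   # the Swap line should read 0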

 

Because some components are already running from the earlier kubeadm init, a reset is needed!

[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
	[WARNING Mem]: the system RAM (924 MB) is less than the minimum 1700 MB
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

 

Before re-initializing, check the apiserver advertise address!

Check apiserver advertise address. For example, ifconfig eth0
kubeadm init --apiserver-cert-extra-sans=controlplane --apiserver-advertise-address xx.xx.xx.xx --pod-network-cidr=xx.xx.xx.xx/16

Remove the earlier state with kubeadm reset and run the initialization again.

sudo kubeadm reset
sudo kubeadm init --config kubeadm_conf.yaml --ignore-preflight-errors=Mem

 

Finally, the master node initialization is complete!

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.123.200:6443 --token 3yy6m6.xhvwbsd6bpkmb0yv \
	--discovery-token-ca-cert-hash sha256:1c9bdf76dc892517a0b1b1dd32068b0f2b9b981c3b60a650d6c4d122aa68661b 
    

 

To control the cluster as a regular user, run the commands below as the guide suggests.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, check the master node status with kubectl get no.

 

NAME          STATUS     ROLES                  AGE     VERSION
raspberrypi   NotReady   control-plane,master   5m36s   v1.21.2

The master node is currently NotReady because a container network has not been installed yet.

 

Before installing the container network, let's set up the worker nodes.

 

On each worker node, likewise disable swap first, and then join it to the cluster with the command below.

kubeadm join 192.168.123.200:6443 --token 3yy6m6.xhvwbsd6bpkmb0yv \
	--discovery-token-ca-cert-hash sha256:1c9bdf76dc892517a0b1b1dd32068b0f2b9b981c3b60a650d6c4d122aa68661b
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

4. Setting up weave as the container network

 

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
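
A rough way to watch the CNI pods come up and the nodes turn Ready (the name=weave-net label is the one used by the weave-net DaemonSet):

kubectl get pods -n kube-system -l name=weave-net
kubectl get nodes -w   # wait for STATUS to change from NotReady to Ready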

 

I expected the cluster to be fully formed about five minutes later, but... no.

 

The reason was that every node had the same hostname (raspberrypi), so the master node could not tell them apart.

So I changed the hostnames to master, worker-01, worker-02, and worker-03 using raspi-config, as in the sketch below.
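
For reference, the same change can be made without the raspi-config menu, roughly like this (hostname values are examples; run on each node):

sudo hostnamectl set-hostname worker-01
sudo sed -i 's/raspberrypi/worker-01/g' /etc/hosts
sudo reboot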

 

After that, I ran kubeadm reset and kubeadm init on the master node again and re-joined the worker nodes, and

 

fi-nal-ly, I could see all the nodes.

 

kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
master      Ready    control-plane,master   8m40s   v1.21.2
worker-01   Ready    <none>                 4m17s   v1.21.2
worker-02   Ready    <none>                 3m48s   v1.21.2
worker-03   Ready    <none>                 3m13s   v1.21.2

 

Also, I confirmed that all of the Kubernetes components are healthy!

coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, weave-net

kubectl get po -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-bdmx6         1/1     Running   0          11m
kube-system   coredns-558bd4d5db-dfrwd         1/1     Running   0          11m
kube-system   etcd-master                      1/1     Running   0          11m
kube-system   kube-apiserver-master            1/1     Running   0          11m
kube-system   kube-controller-manager-master   1/1     Running   0          11m
kube-system   kube-proxy-7p95s                 1/1     Running   0          6m28s
kube-system   kube-proxy-c592m                 1/1     Running   0          7m31s
kube-system   kube-proxy-jdqv6                 1/1     Running   0          7m3s
kube-system   kube-proxy-ndljz                 1/1     Running   0          11m
kube-system   kube-scheduler-master            1/1     Running   0          11m
kube-system   weave-net-bj9bb                  2/2     Running   0          6m15s
kube-system   weave-net-fhb9l                  2/2     Running   0          6m15s
kube-system   weave-net-khtvg                  2/2     Running   0          6m15s
kube-system   weave-net-qcdqk                  2/2     Running   0          6m15s

Notes

  • Change docker cgroup driver
Put the following into /etc/docker/daemon.json:
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
    "storage-driver": "overlay2"
}

# Restart docker to load new configuration
sudo systemctl restart docker

# Add docker to start up programs
sudo systemctl enable docker
  • Disable swap on Ubuntu
# See if swap is enabled
swapon --show

# Turn off swap
sudo swapoff -a

# Disable swap completely
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  • Configure docker for kubeadm on Ubuntu
# Configure docker to use overlay2 storage and systemd
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
    "storage-driver": "overlay2"
}
EOF

# Restart docker to load new configuration
sudo systemctl restart docker

# Add docker to start up programs
sudo systemctl enable docker
  • Install kubeadm on Ubuntu
Installation

1. Update the apt package index and install the packages needed to use the Kubernetes apt repository.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

2. Download the Google Cloud public signing key.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

3. Add the Kubernetes apt repository.

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
  • Cluster setup on Ubuntu
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [test kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.12]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [test localhost] and IPs [192.168.0.12 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [test localhost] and IPs [192.168.0.12 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.004376 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node test as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i5s88s.flaarpikxddb7mpb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.12:6443 --token i5s88s.flaarpikxddb7mpb \
	--discovery-token-ca-cert-hash sha256:b6c2f15eaf56df98ca1c163142c1239ed53187ba4ad348bcd30cdf47242a77b5
  • Untaint the master node for a single-node cluster on Ubuntu
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Install the local path provisioner for a single-node cluster
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

$ kubectl -n local-path-storage get pod
NAME                                      READY   STATUS    RESTARTS   AGE
local-path-provisioner-556d4466c8-9xgcf   1/1     Running   0          5m48s

The directory /var/local-path-provisioner will be used across all the nodes as the path for provisioning (i.e., it is where the persistent volume data is stored).

 

* local path provisioner access modes : only ReadWriteOnce is supported

  • ReadWriteOnce -- the volume can be mounted as read-write by a single node
  • ReadOnlyMany -- the volume can be mounted read-only by many nodes
  • ReadWriteMany -- the volume can be mounted as read-write by many nodes
  • ReadWriteOncePod -- the volume can be mounted as read-write by a single Pod. Supported only for CSI volumes on Kubernetes 1.22 and later.
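
As a small sketch, a PVC bound to the local-path storage class installed above (ReadWriteOnce, the only mode it supports; the claim name is hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce             # the only access mode local-path supports
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF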

 

In the next post, I will go over the concepts that came up during today's installation and need further study.

- Why disable swap?

- weave container network?

- coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler


References

local-path-provisioner : https://github.com/rancher/local-path-provisioner

Kubeadm installation : https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

External IP : https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending

kubeadm on Single Node : https://blog.radwell.codes/2021/05/provisioning-single-node-kubernetes-cluster-using-kubeadm-on-ubuntu-20-04/

Others

- https://www.raspberrypi.org/forums/viewtopic.php?t=49925

- https://github.com/kubernetes/kubernetes/issues/61224

- https://www.edureka.co/community/85295/error-failed-kubelet-running-supported-please-disable-fail

- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/

- https://www.raspberrypi.org/forums/viewtopic.php?t=203128 

- https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

- https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/

- https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8
