While setting up a cluster with kubeadm, I ran into the error below.

[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Mem]: the system RAM (924 MB) is less than the minimum 1700 MB
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
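
Before asking why, here is the usual way past this error. This is only a sketch for a typical Linux node (exact commands vary by distribution); the check names passed to --ignore-preflight-errors come straight from the [ERROR ...] labels in the log above.

# Turn swap off immediately (does not survive a reboot)
sudo swapoff -a

# Keep it off across reboots by commenting out swap entries in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Verify: this prints nothing once swap is fully off
swapon --show

# Or, at your own risk, make the failed checks non-fatal as the log suggests
sudo kubeadm init --ignore-preflight-errors=Swap,Mem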

 

The error about having less memory than the recommended minimum is fair enough, but why does kubeadm insist that swap be disabled?

When physical memory (RAM) fills up, doesn't swap space on the hard disk act as a safety net against system failure?

Sure enough, many people have left comments on exactly this question.

One comment in particular caught my eye:

 

srevenant commented on 3 Apr 2018

Not supporting swap as a default? I was surprised to hear this -- I thought Kubernetes was ready for the prime time? Swap is one of those features.

This is not really optional in most open use cases -- it is how the Unix ecosystem is designed to run, with the VMM switching out inactive pages.

If the choice is no swap or no memory limits, I'll choose to keep swap any day, and just spin up more hosts when I start paging, and I will still come out saving money.

Can somebody clarify -- is the problem with memory eviction only a problem if you are using memory limits in the pod definition, but otherwise, it is okay?

It'd be nice to work in a world where I have control over the way an application memory works so I don't have to worry about poor memory usage, but most applications have plenty of inactive memory space.

I honestly think this recent move to run servers without swap is driven by the PaaS providers trying to coerce people into larger memory instances--while disregarding ~40 years of memory management design. The reality is that the kernel is really good about knowing what memory pages are active or not--let it do its job.

 

In short, according to srevenant, the reason is business-driven rather than technical. The comment has collected more than 210 thumbs-up reactions.

Still, shouldn't we hear the Kubernetes side of the story as well?

Again, let's look at the passages that personally stood out to me.

Swap Memory: The QoS proposal assumes that swap memory is disabled. If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can start allocating memory on swap space. Eventually, if there isn’t enough swap space, processes in the pods might get killed. TODO: ensure that swap space is disabled on our cluster setups scripts.
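
To unpack the "guaranteed pods" in that quote: in the QoS model, a pod whose containers set requests equal to limits is classified as Guaranteed. Below is a minimal sketch; the pod name and image are placeholders of my own, not from the proposal.

# A pod is assigned the Guaranteed QoS class when requests == limits for
# every container. With swap on, the kernel could page this container out
# past its memory guarantee, the failure mode the proposal describes.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx               # any image would do
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "250m"
EOF

# Confirm the assigned QoS class (prints "Guaranteed")
kubectl get pod guaranteed-demo -o jsonpath='{.status.qosClass}'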

 

Here is another passage:

Adding swap slows down jobs and introducing more bandwidth to disk and isolation issues. We don't manage disk io yet, and it is hard to manage too. Without better disk io management, simply enabling swap for container/pod is bad solution.
Disabling swap is a good approach. When you have multiple containers and multiple machines they could be scheduled on, it is better to kill one container, than to have all the containers on a machine operate at an unpredictable, probably slow, rate.

 

The bottom line: swap doesn't fit the QoS strategy they have defined, and supporting it would mean far too many extra things to consider. The approach seems to be to kill the problematic container instead and keep the rest alive.
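
For what it's worth, this policy is enforced by the kubelet itself: by default it refuses to start while swap is active. A hedged sketch, assuming a kubeadm-provisioned node where the kubelet config lives at /var/lib/kubelet/config.yaml:

# failSwapOn (default: true) makes the kubelet exit at startup if swap is on.
# Check whether the node sets it explicitly:
grep failSwapOn /var/lib/kubelet/config.yaml || echo "not set (defaults to true)"

# To run with swap anyway, edit that file to read "failSwapOn: false",
# then restart the kubelet, at the cost of the QoS guarantees quoted above:
sudo systemctl restart kubelet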

 

Beyond these, many Kubernetes engineers have left a wide range of other comments.

 

How much work must have gone into building such a solid system? Their comments won me over, and since I currently understand only the tip of the iceberg, I'll side with the engineers for now.

 

In the next post, I'll take a look at Network Policy (with the Weave container network).

Sources

- https://github.com/kubernetes/kubernetes/issues/7294#issuecomment-215637455

- https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/proposals/resource-qos.md

- https://www.evernote.com/shard/s360/client/snv?noteGuid=caa3d18e-4bda-4516-9ec9-1180999015e2&noteKey=46fa507ba5b78edc&sn=https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs360%2Fsh%2Fcaa3d18e-4bda-4516-9ec9-1180999015e2%2F46fa507ba5b78edc&title=191120%2Bwhy%2Bk8s%2Bdisable%2Bswap%253F 

- https://github.com/kubernetes/kubernetes/issues/53533
