By manually installing the Kubernetes control plane, I want to understand and learn the bootstrapping process.

 

Reference: https://github.com/kelseyhightower/kubernetes-the-hard-way/tree/master/docs

 

Two of the four Raspberry Pis will be used as master nodes to build the control plane; the remaining two will serve as worker nodes, where I will run a simple web page and check that it works.

Infrastructure Setting

Raspberry Pi cluster setup

- 4x Raspberry Pi 3 B+

- 4x 16 GB SD card

- 4x ethernet cables

- 1x switch hub

- 1x USB power hub

- 4x Micro-USB cables

 

Cluster Static IP

Host Name      IP
k8s-master-1   172.30.1.40
k8s-master-2   172.30.1.41
k8s-worker-1   172.30.1.42
k8s-worker-2   172.30.1.43

 


1. Infrastructure

The Ubuntu Server image ships with SSH enabled by default.

Default user/pwd : ubuntu/ubuntu

- Change the hostname

sudo hostnamectl set-hostname xxx

- Configure a static IP

vi /etc/netplan/xxx.yaml
network:
    ethernets:
        eth0:
            #dhcp4: true
            #optional: true
            addresses: [192.168.10.x/24]
            gateway4: 192.168.10.1
            nameservers:
                addresses: [8.8.8.8]
    version: 2

sudo netplan apply

- /etc/hosts

172.30.1.40       k8s-master-1
172.30.1.41       k8s-master-2
172.30.1.42       k8s-worker-1
172.30.1.43       k8s-worker-2

- /boot/firmware/nobtcmd.txt

Add the following:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
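
After a reboot these parameters take effect; a quick sanity check (assuming the stock Ubuntu kernel on the Pi):

cat /proc/cmdline             # should now contain cgroup_enable=memory cgroup_memory=1
grep memory /proc/cgroups     # the last (enabled) column should be 1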

2. Generating a CA and Certificates for the k8s cluster (PKI certificates)

- Certificate Authority

 

Client Certificates

- Admin Client

- Kubelet Client

- Kube Proxy Client

- Controller Manager Client

- Service Account key pair : Kube controller manager uses a key pair to generate and sign service account tokens

- Scheduler Client

 

Server Certificates

- API Server

 


Install cfssl on macOS to provision a PKI infrastructure

- brew install cfssl

 

# Certificate Authority

Reference: https://kubernetes.io/ko/docs/tasks/administer-cluster/certificates/

Write a ca-config.json

{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}

 

Create a CSR for your new CA (ca-csr.json)

- CN : the name of the user

- C : country

- L : city

- O : the group that this user will belong to

- OU : organization unit

- ST : state

{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "South Korea",
      "L": "Seoul",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Seoul"
    }
  ]
}

 

Initialize a CA

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
>> ca-key.pem, ca.csr, ca.pem
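
Optionally, inspect the generated CA certificate as a sanity check (either command works, assuming cfssl or openssl is on the PATH):

cfssl certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -issuer -dates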

 

# Admin Client Certificate

Write admin-csr.json

{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "South Korea",
      "L": "Seoul",
      "O": "system:masters",
      "OU": "Kubernetes the Hard Way",
      "ST": "Seoul"
    }
  ]
}

 

Generate an Admin Client Certificate

- client certificate for the kubernetes admin user

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
  
>> admin-key.pem, admin.csr, admin.pem

 

# Kubelet Client Certificate

Write k8s-node-x-csr.json

- Kubernetes uses a special-purpose authorization mode called Node Authorizer, which specifically authorizes API requests made by kubelets. In order to be authorized by the Node Authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>

{
    "CN": "system:node:k8s-worker-1",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "system:nodes",
        "OU": "Kubernetes The Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate a Kubelet Client Certificate for each Worker Node

export WORKER_IP=172.30.1.42
export WORKER_HOST=k8s-worker-1
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${WORKER_IP},${WORKER_HOST} \
  -profile=kubernetes \
  ${WORKER_HOST}-csr.json | cfssljson -bare ${WORKER_HOST}
  
>> k8s-worker-1-key.pem, k8s-worker-1.csr, k8s-worker-1.pem
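
The same command is run once per worker. A minimal loop sketch that covers both workers (it assumes a matching k8s-worker-2-csr.json with CN system:node:k8s-worker-2 has also been written):

for WORKER in k8s-worker-1:172.30.1.42 k8s-worker-2:172.30.1.43; do
  WORKER_HOST=${WORKER%%:*}
  WORKER_IP=${WORKER##*:}
  cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=${WORKER_IP},${WORKER_HOST} \
    -profile=kubernetes \
    ${WORKER_HOST}-csr.json | cfssljson -bare ${WORKER_HOST}
done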

 

# Kube Proxy Client Certificate

Write kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "system:node-proxier",
        "OU": "Kubernetes the Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate Kube Proxy Client certificate

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
  
>> kube-proxy.csr, kube-proxy.pem, kube-proxy-key.pem

 

# Controller Manager Client Certificate

Write kube-controller-manager-csr.json

{
    "CN": "system:kube-controller-manager",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "system:kube-controller-manager",
        "OU": "Kubernetes the Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate Controller Manager Client Certificate

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
  
>> kube-controller-manager.csr, kube-controller-manager.pem, kube-controller-manager-key.pem

 

# Service Account Key Pair

Write service-account-csr.json

{
    "CN": "service-accounts",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "Kubernetes",
        "OU": "Kubernetes the Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate Service Account Key pair

- The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens(https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/)

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account
  
>> service-account.csr, service-account.pem, service-account-key.pem

 

# Scheduler Client Certificate

Write kube-scheduler-csr.json

{
    "CN": "system:kube-scheduler",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "system:kube-scheduler",
        "OU": "Kubernetes the Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate Kube Scheduler Client certificate

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
  
>> kube-scheduler.csr, kube-scheduler.pem, kube-scheduler-key.pem

 

# Kubernetes API Server Certificate

Write kubernetes-csr.json

{
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "South Korea",
        "L": "Seoul",
        "O": "Kubernetes",
        "OU": "Kubernetes the Hard Way",
        "ST": "Seoul"
      }
    ]
}

 

Generate Kubernetes API Server certificate

- The Kubernetes API server is automatically assigned the kubernetes internal DNS name, which will be linked to the first IP address (10.32.0.1) from the address range (10.32.0.0/24) reserved for internal cluster services during control plane bootstrapping.

- The master node IP addresses will be included in the list of subject alternative names for the Kubernetes API Server certificate. This ensures the certificate can be validated by remote clients.

- When a load balancer is used, also add the LB hostname and IP to CERT_HOSTNAME.

CERT_HOSTNAME=10.32.0.1,172.30.1.40,k8s-master-1,172.30.1.41,k8s-master-2,127.0.0.1,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${CERT_HOSTNAME} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
  
>> kubernetes.csr, kubernetes.pem, kubernetes-key.pem
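
It is worth confirming that every entry in CERT_HOSTNAME ended up in the certificate's SAN list (a quick check, assuming openssl is installed):

openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"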

 


3. Generating Kubernetes Configuration Files for Authentication

Kubeconfigs enable Kubernetes clients to locate and authenticate to the Kubernetes API servers.

 

Worker nodes need

- ${worker_node}.kubeconfig (for Kubelet)

- kube-proxy.kubeconfig (for Kube-proxy)

 

# Generate Kubelet-kubeconfig

- When generating kubeconfig files for kubelets, the client certificate matching the kubelet's node name must be used. This ensures kubelets are properly authorized by the Kubernetes Node Authorizer.

- KUBERNETES_ADDRESS is set to a master node IP here; if a load balancer sits in front of the masters, use the LB's IP instead.

KUBERNETES_ADDRESS=172.30.1.40
INSTANCE=k8s-worker-2

kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=ca.pem \
        --embed-certs=true \
        --server=https://${KUBERNETES_ADDRESS}:6443 \
        --kubeconfig=${INSTANCE}.kubeconfig

kubectl config set-credentials system:node:${INSTANCE} \
    --client-certificate=${INSTANCE}.pem \
    --client-key=${INSTANCE}-key.pem \
    --embed-certs=true \
    --kubeconfig=${INSTANCE}.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${INSTANCE} \
    --kubeconfig=${INSTANCE}.kubeconfig
    
kubectl config use-context default --kubeconfig=${INSTANCE}.kubeconfig

>> k8s-worker-1.kubeconfig, k8s-worker-2.kubeconfig
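
The four commands above are repeated once per worker; a minimal loop sketch that produces both kubeconfigs:

KUBERNETES_ADDRESS=172.30.1.40
for INSTANCE in k8s-worker-1 k8s-worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_ADDRESS}:6443 \
    --kubeconfig=${INSTANCE}.kubeconfig
  kubectl config set-credentials system:node:${INSTANCE} \
    --client-certificate=${INSTANCE}.pem \
    --client-key=${INSTANCE}-key.pem \
    --embed-certs=true \
    --kubeconfig=${INSTANCE}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${INSTANCE} \
    --kubeconfig=${INSTANCE}.kubeconfig
  kubectl config use-context default --kubeconfig=${INSTANCE}.kubeconfig
done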

 

# Generate kubeproxy-kubeconfig

KUBERNETES_ADDRESS=172.30.1.40

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

>> kube-proxy.kubeconfig

 

Master nodes need

- admin.kubeconfig (for user admin)

- kube-controller-manager.kubeconfig (for kube-controller-manager)

- kube-scheduler.kubeconfig (for kube-scheduler)

 

# Generate admin.kubeconfig

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig

>> admin.kubeconfig

 

# Generate kube-controller-manager-kubeconfig

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

>> kube-controller-manager.kubeconfig

 

# Generate kubescheduler-kubeconfig

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

>> kube-scheduler.kubeconfig

 


4. Generating the Data Encryption Config and Key(Encrypt)

- Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.

- Generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.

 

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml << EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
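
Once etcd and the API server are up (sections 6 and 7), this config can be verified by creating a Secret and checking that it is stored encrypted rather than in plain text; a sketch, run on a master node (the k8s:enc:aescbc:v1:key1 prefix should appear in the dump):

kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata" --kubeconfig admin.kubeconfig

sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C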

5. Distribute the Client and Server Certificates

Copy certificate to worker nodes

sudo scp ca.pem k8s-worker-1-key.pem k8s-worker-1.pem k8s-worker-1.kubeconfig kube-proxy.kubeconfig pi@172.30.1.42:~/
sudo scp ca.pem k8s-worker-2-key.pem k8s-worker-2.pem k8s-worker-2.kubeconfig kube-proxy.kubeconfig pi@172.30.1.43:~/

 

Copy certificates to master nodes

sudo scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml kube-controller-manager.kubeconfig kube-scheduler.kubeconfig admin.kubeconfig pi@172.30.1.40:~/
sudo scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml kube-controller-manager.kubeconfig kube-scheduler.kubeconfig admin.kubeconfig pi@172.30.1.41:~/

 


6. Bootstrapping the ETCD cluster

Kubernetes components are stateless and store cluster state in etcd.

# Download and Install the etcd binaries

wget -q --show-progress --https-only --timestamping   "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-arm64.tar.gz"
tar -xvf etcd-v3.4.15-linux-arm64.tar.gz
sudo mv etcd-v3.4.15-linux-arm64/etcd* /usr/local/bin/

---
wget https://raw.githubusercontent.com/robertojrojas/kubernetes-the-hard-way-raspberry-pi/master/etcd/etcd-3.1.5-arm.tar.gz
tar -xvf etcd-3.1.5-arm.tar.gz

 

# Certs to their desired location

sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

 

# Create the etcd.service systemd unit file

Key parameters

- initial-advertise-peer-urls

- listen-peer-urls

- listen-client-urls

- advertise-client-urls

- initial-cluster

ETCD_NAME=k8s-master-1
INTERNAL_IP=172.30.1.40
INITIAL_CLUSTER=k8s-master-1=https://172.30.1.40:2380,k8s-master-2=https://172.30.1.41:2380
cat << EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${INITIAL_CLUSTER} \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
Environment="ETCD_UNSUPPORTED_ARCH=arm64"

[Install]
WantedBy=multi-user.target
EOF

 

# Start etcd

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
sudo systemctl status etcd

 

# Verification

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

 


7. Bootstrapping the Kubernetes Control Plane

The following components will be installed on each master node: Kubernetes API Server, Scheduler, and Controller Manager

Kubernetes API Server

# Download and move them to the right position

sudo mkdir -p /etc/kubernetes/config
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

 

# Certs to their desired location

sudo mkdir -p /var/lib/kubernetes/

sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
  service-account-key.pem service-account.pem \
  encryption-config.yaml /var/lib/kubernetes/

 

# Create the kube-apiserver systemd unit file

Key parameters

- advertise-address : ${INTERNAL_IP}

- apiserver-count : 2

- etcd-servers : ${CONTROLLER0_IP}:2379,${CONTROLLER1_IP}:2379

- service-account-issuer : https://${KUBERNETES_PUBLIC_ADDRESS}:6443

- service-cluster-ip-range : 10.32.0.0/24

INTERNAL_IP=172.30.1.40
KUBERNETES_PUBLIC_ADDRESS=172.30.1.40
CONTROLLER0_IP=172.30.1.40
CONTROLLER1_IP=172.30.1.41
cat << EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${CONTROLLER0_IP}:2379,https://${CONTROLLER1_IP}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

 

# Start kube-apiserver

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver

sudo systemctl status kube-apiserver

 

Controller Manager

# Certs to their desired location

sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/

 

# Create the kube-controller-manager systemd unit file

Key parameters

- cluster-cidr : 10.200.0.0/16

- service-cluster-ip-range : 10.32.0.0/24

cat << EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

 

# Start kube-controller-manager

sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager

sudo systemctl status kube-controller-manager

 

Kube Scheduler

# Certs to their desired location

sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/

 

# Create the kube-scheduler systemd unit file

 

cat << EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF


cat << EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

 

# Start kube-scheduler

sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler

sudo systemctl status kube-scheduler

 

RBAC for Kubelet Authorization

- Grant the kube-apiserver permission to request information about the worker nodes.

- Access to the kubelet API is required for retrieving metrics and logs, and for executing commands in pods.

# Create an Admin ClusterRole and bind it to kubeconfig

ClusterRole

cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

ClusterRoleBinding

cat << EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

 

# Verify All component working properly

kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}
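
The API server can also be checked directly over TLS from any machine that has ca.pem (for example the Mac used to generate the certificates):

curl --cacert ca.pem https://172.30.1.40:6443/version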

 

(Optional) Enable HTTP Health Checks

# Install nginx and expose health check over http

sudo apt-get update
sudo apt-get install -y nginx

 

# Config proxying health check

cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

sudo mv kubernetes.default.svc.cluster.local \
	/etc/nginx/sites-available/kubernetes.default.svc.cluster.local

sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/

 

# start nginx and verify health check

sudo systemctl restart nginx
sudo systemctl enable nginx

kubectl cluster-info --kubeconfig admin.kubeconfig
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

 


(Optional) 8. Kubectl remote access from local machine

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority='ca.pem' \
  --embed-certs=true \
  --server=https://172.30.1.40:6443

kubectl config set-credentials admin \
  --client-certificate='admin.pem' \
  --client-key='admin-key.pem'

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way

 


9. Bootstrapping the Kubernetes Worker Nodes

- The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy

# Install the OS dependencies & Disable Swap

sudo apt-get update
sudo apt-get -y install socat conntrack ipset

sudo swapoff -a

 

# Download binaries and move them to desired location

wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-arm64.tar.gz \
 https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.arm64 \
  https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-arm64-v0.9.1.tgz \
  https://github.com/containerd/containerd/releases/download/v1.6.1/containerd-1.6.1-linux-arm64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/arm64/kubelet

sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

mkdir containerd
tar -xvf crictl-v1.21.0-linux-arm64.tar.gz
tar -xvf containerd-1.6.1-linux-arm64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-arm64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.arm64 runc
chmod +x crictl kubectl kube-proxy kubelet runc 
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/

 

# Configure CNI Networking

- Create the bridge network and loopback network configuration

POD_CIDR=10.200.1.0/24

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.4.0",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.4.0",
    "name": "lo",
    "type": "loopback"
}
EOF

 

CNI

# Configure Containerd

sudo mkdir -p /etc/containerd/

cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF

 

# Create the containerd systemd unit file

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

 

# Start containerd 

sudo systemctl daemon-reload
sudo systemctl enable containerd
sudo systemctl start containerd

 

Kubelet

# Certs to their desired location

sudo cp k8s-worker-1-key.pem k8s-worker-1.pem /var/lib/kubelet/
sudo cp k8s-worker-1.kubeconfig /var/lib/kubelet/kubeconfig

 

# Configure the kubelet

- authorization mode : webhook

POD_CIDR=10.200.0.0/24

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

 

# Create the kubelet systemd unit file

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

 

# Start kubelet 

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet

* The message below disappears after a reboot.

"Failed to get the kubelet's cgroup. Kubelet system container metrics may be missing." err="mountpoint for memory not found"

 

Kube-proxy

# Certs to their desired location

sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

 

# Configuration & Systemd unit

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

 

# Start kubelet, kube-proxy

sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy

 

# Verification

kubectl get no --kubeconfig=admin.kubeconfig
NAME           STATUS   ROLES    AGE   VERSION
k8s-worker-1   Ready    <none>   40m   v1.21.0
k8s-worker-2   Ready    <none>   40m   v1.21.0

10. Networking

Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network routes.

We are going to install Flannel to implement the Kubernetes networking model.

 

Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --kubeconfig=admin.kubeconfig

 

- pod cidr not assigned

kubectl logs -f kube-flannel-ds-x56l5 -n kube-system --kubeconfig=admin.kubeconfig

I0316 05:37:58.801461       1 main.go:205] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0316 05:37:58.801882       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0316 05:37:59.907489       1 kube.go:120] Waiting 10m0s for node controller to sync
I0316 05:37:59.907871       1 kube.go:378] Starting kube subnet manager
I0316 05:38:00.908610       1 kube.go:127] Node controller sync successful
I0316 05:38:00.908732       1 main.go:225] Created subnet manager: Kubernetes Subnet Manager - k8s-worker-1
I0316 05:38:00.908762       1 main.go:228] Installing signal handlers
I0316 05:38:00.909447       1 main.go:454] Found network config - Backend type: vxlan
I0316 05:38:00.909557       1 match.go:189] Determining IP address of default interface
I0316 05:38:00.911032       1 match.go:242] Using interface with name eth0 and address 172.30.1.42
I0316 05:38:00.911156       1 match.go:264] Defaulting external address to interface address (172.30.1.42)
I0316 05:38:00.911399       1 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0316 05:38:00.912700       1 main.go:317] Error registering network: failed to acquire lease: node "k8s-worker-1" pod cidr not assigned
I0316 05:38:00.913055       1 main.go:434] Stopping shutdownHandler...
W0316 05:38:00.913524       1 reflector.go:436] github.com/flannel-io/flannel/subnet/kube/kube.go:379: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding

Resolved with the following commands:

kubectl patch node k8s-worker-1 -p '{"spec":{"podCIDR":"10.200.0.0/24"}}' --kubeconfig=admin.kubeconfig
kubectl patch node k8s-worker-2 -p '{"spec":{"podCIDR":"10.200.1.0/24"}}' --kubeconfig=admin.kubeconfig

 


11. Cluster Build Results and Tests

Check the worker nodes

kubectl get no --kubeconfig admin.kubeconfig
NAME           STATUS   ROLES    AGE    VERSION
k8s-worker-1   Ready    <none>   110m   v1.21.0
k8s-worker-2   Ready    <none>   53m    v1.21.0

Check the Flannel deployment

kubectl get po -n kube-system -o wide --kubeconfig admin.kubeconfig -w
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
kube-flannel-ds-hbk2w   1/1     Running   0          3m18s   172.30.1.43   k8s-worker-2   <none>           <none>
kube-flannel-ds-kmx9k   1/1     Running   0          3m18s   172.30.1.42   k8s-worker-1   <none>           <none>

Deploy Nginx and verify

cat << EOF | kubectl apply -f -
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: nginx
 spec:
   selector:
     matchLabels:
       run: nginx
   replicas: 2
   template:
     metadata:
       labels:
         run: nginx
     spec:
       containers:
       - name: my-nginx
         image: nginx
         ports:
         - containerPort: 80
EOF
deployment.apps/nginx created
kubectl get po -o wide  --kubeconfig admin.kubeconfig -w
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
nginx-7bddd8c596-6x7gn   1/1     Running   0          44s   10.200.0.10   k8s-worker-1   <none>           <none>
nginx-7bddd8c596-vptkl   1/1     Running   0          44s   10.200.1.2    k8s-worker-2   <none>           <none>

kubectl run curl-tester --image=nginx --kubeconfig admin.kubeconfig

kubectl exec -it curl-tester --kubeconfig admin.kubeconfig -- curl http://10.200.1.2 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

kubectl exec -it curl-tester --kubeconfig admin.kubeconfig -- curl http://10.200.0.10 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

12. Deploying the DNS Cluster Add-on

DNS add-on provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.

 

# Deploy the coredns cluster add-on

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
kubectl get svc -n kube-system --kubeconfig admin.kubeconfig
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   44m

kubectl get svc --kubeconfig admin.kubeconfig
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.32.0.1    <none>        443/TCP   6h19m
nginx        ClusterIP   10.32.0.37   <none>        80/TCP    40m

kubectl get po -n kube-system --kubeconfig admin.kubeconfig -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
coredns-8494f9c688-658c2   1/1     Running   0          69m   10.200.0.12   k8s-worker-1   <none>           <none>
coredns-8494f9c688-k8lql   1/1     Running   0          69m   10.200.1.4    k8s-worker-2   <none>           <none>

 

# Verification

kubectl exec -it busybox --kubeconfig admin.kubeconfig -- nslookup nginx
Server:		10.32.0.10
Address:	10.32.0.10:53

Name:	nginx.default.svc.cluster.local
Address: 10.32.0.37
kubectl exec -it curl-tester --kubeconfig admin.kubeconfig -- curl http://nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

# Debugging

kubectl exec -it busybox --kubeconfig admin.kubeconfig -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.32.0.10
options ndots:5
E0316 08:14:40.416650   23982 v3.go:79] EOF
[INFO] 10.200.0.11:48289 - 37888 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000881653s
[INFO] 10.200.0.11:48289 - 37888 "A IN kubernetes.svc.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.001551175s
[INFO] 10.200.0.11:48289 - 37888 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000719937s
[INFO] 10.200.0.11:48289 - 37888 "AAAA IN kubernetes.svc.cluster.local. udp 46 false 512" NXDOMAIN qr,aa,rd 139 0.001028995s
[INFO] 10.200.0.11:48289 - 37888 "A IN kubernetes.cluster.local. udp 42 false 512" NXDOMAIN qr,aa,rd 135 0.001927212s
[INFO] 10.200.0.11:48289 - 37888 "AAAA IN kubernetes.cluster.local. udp 42 false 512" NXDOMAIN qr,aa,rd 135 0.00215127s

 

kubectl edit -n kube-system configmaps coredns --kubeconfig admin.kubeconfig

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
        log
    }



Let's briefly go over the Corefile configuration.

  • errors : errors are sent to stdout.
  • health : CoreDNS health can be checked at http://localhost:8080/health.
  • ready : an HTTP request to http://localhost:8181/ready returns 200 OK once the plugin is ready to serve.
  • kubernetes : answers DNS queries based on Kubernetes Service domains and Pod IPs. The ttl setting controls the record timeout.
    • The pods option controls Pod-IP-based DNS queries. The default is disabled; the insecure value exists for backward compatibility with kube-dns (see the query example after this list).
    • With pods disabled, Pod-IP-based DNS queries are not answered. For example, if a Pod in the testbed namespace has the IP 10.244.2.16, a query for 10-244-2-16.testbed.pod.cluster.local will not return an A record.
    • The pods insecure option is documented as returning an A record only when a matching Pod IP exists in the same namespace. However, in a quick test with Pods in different namespaces calling each other, the traffic kept flowing; I am not sure whether I misunderstood the behavior or my test was flawed.
  • prometheus : metrics in Prometheus format are exposed on the given port (:9153). To send HTTP requests to CoreDNS, including the :8080 port used by health and the :8181 port used by ready above, the CoreDNS Service object must also bind ports :9153, :8080, and :8181.
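
For reference, the pods option can be exercised directly with a Pod record query; a small sketch, assuming the busybox pod used above and the nginx Pod IP 10.200.0.10 seen earlier (with pods insecure, CoreDNS answers with that IP):

kubectl exec -it busybox --kubeconfig admin.kubeconfig -- nslookup 10-200-0-10.default.pod.cluster.local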

 

 


13. Pod to Internet

Reference: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns

 

Add forward to the Corefile.

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
        log
        forward . 8.8.8.8
    }

Ping Google to verify

kubectl exec -it busybox --kubeconfig admin.kubeconfig -- ping www.google.com
PING www.google.com (172.217.175.36): 56 data bytes
64 bytes from 172.217.175.36: seq=0 ttl=113 time=31.225 ms
64 bytes from 172.217.175.36: seq=1 ttl=113 time=31.720 ms
64 bytes from 172.217.175.36: seq=2 ttl=113 time=32.036 ms
64 bytes from 172.217.175.36: seq=3 ttl=113 time=32.769 ms
64 bytes from 172.217.175.36: seq=4 ttl=113 time=31.890 ms
kubectl exec -it busybox --kubeconfig admin.kubeconfig -- nslookup nginx
Server:		10.32.0.10
Address:	10.32.0.10:53

Name:	nginx.default.svc.cluster.local
Address: 10.32.0.37

 

Debugging

[INFO] Reloading complete
[INFO] 127.0.0.1:51732 - 10283 "HINFO IN 790223070724377814.6189194672555963484. udp 56 false 512" NXDOMAIN qr,rd,ra,ad 131 0.033668927s
[INFO] 10.200.0.11:44705 - 5 "AAAA IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 74 0.034105169s
[INFO] 10.200.0.11:44224 - 6 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.034133246s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.000872899s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.001095134s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.001334348s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.001338932s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 96 0.002403233s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.00264s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 96 0.000500251s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.000283484s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.000317755s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.000322389s
[INFO] 10.200.0.11:41908 - 6656 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.00025812s
[INFO] 10.200.0.11:41908 - 6656 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.000913992s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 96 0.000680195s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.000718059s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.001300027s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.002253237s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.001340651s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.000766964s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 96 0.000423221s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.000411711s
[INFO] 10.200.0.11:56880 - 7936 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.000823578s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.001064718s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.001470023s
[INFO] 10.200.0.11:56880 - 7936 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.001802204s
[INFO] Reloading
W0316 08:37:06.872387       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0316 08:37:06.878515       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
[INFO] plugin/reload: Running configuration MD5 = 4507d64c02fd8d12322b2944d3f2f975
[INFO] Reloading complete
[INFO] 127.0.0.1:37902 - 16284 "HINFO IN 968904221266650254.5526836193363943015. udp 56 false 512" NXDOMAIN qr,rd,ra,ad 131 0.034053296s
[INFO] 10.200.0.0:56308 - 7 "A IN www.google.com. udp 32 false 512" NOERROR qr,rd,ra 62 0.03481522s
[INFO] 10.200.0.0:60813 - 9216 "A IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 96 0.001246092s
[INFO] 10.200.0.0:60813 - 9216 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.001978799s
[INFO] 10.200.0.0:60813 - 9216 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.002515881s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.003136974s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.004195045s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.004489419s
[INFO] 10.200.0.0:60813 - 9216 "A IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.000557082s
[INFO] 10.200.0.0:60813 - 9216 "A IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.000709165s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.svc.cluster.local. udp 41 false 512" NXDOMAIN qr,aa,rd 134 0.002432965s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.cluster.local. udp 37 false 512" NXDOMAIN qr,aa,rd 130 0.002911193s
[INFO] 10.200.0.0:60813 - 9216 "AAAA IN nginx.default.svc.cluster.local. udp 49 false 512" NOERROR qr,aa,rd 142 0.003832859s
W0316 08:45:26.938383       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice

- https://www.daveevans.us/posts/kubernetes-on-raspberry-pi-the-hard-way-part-1/

 


- https://github.com/cloudfoundry-incubator/kubo-deployment/issues/346

- https://github.com/robertojrojas/kubernetes-the-hard-way-raspberry-pi/blob/master/docs/04-kubernetes-controller.md

- https://phoenixnap.com/kb/enable-ssh-raspberry-pi

- https://github.com/Nek0trkstr/Kubernetes-The-Hard-Way-Raspberry-Pi/tree/master/00-before_you_begin

- https://github.com/kelseyhightower/kubernetes-the-hard-way/tree/master/docs

 

- k8s authentication : https://coffeewhale.com/kubernetes/authentication/x509/2020/05/02/auth01/

- coredns : https://jonnung.dev/kubernetes/2020/05/11/kubernetes-dns-about-coredns/

Raspberry Pi cluster setup

- 3x Raspberry Pi 4B

- 3x 16 GB SD card

- 3x ethernet cables

- 1x router

- 3x USB-C power adapters

Cluster Static IP

- master: 192.168.10.3

- worker-01: 192.168.10.4

- worker-02: 192.168.10.5

 


Infrastructure Setting

1. Configure a static IP

- vi /etc/netplan/xxx.yaml

network:
    ethernets:
        eth0:
           addresses: [192.168.10.x/24]
           gateway4: 192.168.10.1
           nameservers:
             addresses: [8.8.8.8]
    version: 2

- sudo netplan apply

2. Change the hostname

# Change it with a command
hostnamectl set-hostname xx

 

3. Create an account and connect via SSH

- Add an account

sudo adduser newuser

 

- Grant the new account sudo group membership

sudo usermod -aG sudo newuser

 

- SSH in with the new account and update

ssh newuser@192.168.0.10

sudo apt update

 

4. Configure kernel cgroups

sudo vi /boot/firmware/nobtcmd.txt

# Append the following
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Build k8s cluster through Kubeadm

1. Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system

 

2. Install Docker

- Set up the repository

1. Update the apt package index and install packages to allow apt to use a repository over HTTPS:

 sudo apt-get update
 sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

2. Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

3. Use the following command to set up the stable repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the commands below. 

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

 

- Install Docker Engine

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo usermod -a -G docker $USER # add this user to the docker group

 

- Configure the Docker daemon to use systemd as the cgroup driver

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#%EB%8F%84%EC%BB%A4

 

3. Install kubeadm, kubelet, and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

 

4. Initialize Master node

- sudo kubeadm init

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.3:6443 --token 64xvn4.a7wdy0u0y0iu7cwv \
	--discovery-token-ca-cert-hash sha256:4421c3cc7bd5011d38072d1c365b515dc43f6d3cd501213e12fc0c3f1a559fd0

 

5. Join the worker nodes to the cluster.

kubeadm join 192.168.10.3:6443 --token 64xvn4.a7wdy0u0y0iu7cwv \
	--discovery-token-ca-cert-hash sha256:4421c3cc7bd5011d38072d1c365b515dc43f6d3cd501213e12fc0c3f1a559fd0

 

6. Setting up Flannel as the container network on the Master node

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml

If the flannel Pod does not run correctly because podCIDR is not set on each node:

k logs -f kube-flannel-ds-88nvk -n kube-system
I0119 00:34:14.341794       1 main.go:218] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0119 00:34:14.342055       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0119 00:34:14.740596       1 kube.go:120] Waiting 10m0s for node controller to sync
I0119 00:34:14.740824       1 kube.go:378] Starting kube subnet manager
I0119 00:34:15.740911       1 kube.go:127] Node controller sync successful
I0119 00:34:15.740978       1 main.go:238] Created subnet manager: Kubernetes Subnet Manager - master-node-01
I0119 00:34:15.741037       1 main.go:241] Installing signal handlers
I0119 00:34:15.741551       1 main.go:460] Found network config - Backend type: vxlan
I0119 00:34:15.741619       1 main.go:652] Determining IP address of default interface
I0119 00:34:15.742772       1 main.go:699] Using interface with name eth0 and address 192.168.10.3
I0119 00:34:15.742871       1 main.go:721] Defaulting external address to interface address (192.168.10.3)
I0119 00:34:15.742890       1 main.go:734] Defaulting external v6 address to interface address (<nil>)
I0119 00:34:15.743068       1 vxlan.go:137] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0119 00:34:15.744236       1 main.go:326] Error registering network: failed to acquire lease: node "master-node-01" pod cidr not assigned
I0119 00:34:15.744432       1 main.go:440] Stopping shutdownHandler...

 

This has to be configured manually as shown below; by default it appears to be expected that a cloud provider sets this up.

master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node master-node-01 -p '{"spec":{"podCIDR":"10.244.0.0/24"}}'
node/master-node-01 patched
master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node worker-node-01 -p '{"spec":{"podCIDR":"10.244.3.0/24"}}'
node/worker-node-01 patched
master-01@master-node-01:/etc/kubernetes/manifests$ kubectl patch node worker-node-02 -p '{"spec":{"podCIDR":"10.244.4.0/24"}}'
node/worker-node-02 patched

 

7. Verification

- kubectl get nodes

NAME        STATUS   ROLES                  AGE     VERSION
master-node-01      Ready    control-plane,master   8m40s   v1.21.2
worker-node-01   Ready    <none>                 4m17s   v1.21.2
worker-node-02   Ready    <none>                 3m48s   v1.21.2

(Optional) NFS test on k8s worker nodes

1. Configure an external NFS server

1.1. Install packages for the NFS server

apt-get install nfs-common nfs-kernel-server rpcbind portmap

1.2. Create the folder to share

mkdir /mnt/data
chmod -R 777 /mnt/data

1.3. Edit the NFS configuration

The configuration file is /etc/exports.

The line below exports /mnt/data and opens it to the entire 172.31.0.0/16 range.

Set this to the Kubernetes node CIDR range.

/mnt/data 172.31.0.0/16(rw,sync,no_subtree_check)
  • rw: read and write operations
  • sync: write any change to the disc before applying it
  • no_subtree_check: prevent subtree checking

1.4. 반영

exportfs -a
systemctl restart nfs-kernel-server
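
반영 후에는 아래 명령으로 export 목록이 정상적으로 보이는지 확인할 수 있다.

exportfs -v
showmount -e localhost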

 

2. NFS 클라이언트

2.1. NFS 클라이언트를 위한 패키지 프로그램 설치

모든 Kubernetes Node에 설치를 해야함.

apt-get install nfs-common

 

2.2. NFS 연결을 위한 PV 생성

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pc
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.10.x
    path: /mnt/k8s
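
작성한 PV는 아래처럼 적용하고 상태를 확인한다. Pod에서 실제로 사용하려면 이 PV에 바인딩되는 PVC를 추가로 만들어야 한다. (파일명은 임의의 예시이다.)

kubectl apply -f nfs-pv.yaml
kubectl get pv nfs-pc    # STATUS가 Available(또는 Bound)인지 확인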

 

Kubernetes Storage Class - NFS dynamic provisioner도 활용 가능

- https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

- https://sarc.io/index.php/os/1780-ubuntu-nfs-configuration

 

 

Optional) NFS 마운트

- NFS 마운트할 폴더 생성

mkdir /public_data

- 마운트

NFS 서버 IP가 172.31.2.2라고 하면,

mount 172.31.2.2:/mnt/data /public_data
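
마운트가 정상적으로 되었는지는 아래와 같이 확인할 수 있다.

df -h /public_data
mount | grep public_data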

 


rpi image : https://www.raspberrypi.com/software/

ubuntu 18.04 image for raspberry pi : https://cdimage.ubuntu.com/ubuntu/releases/18.04.6/release/

Kubernetes Networking을 담당하는 컴포넌트는 CNI plugin(Weave Net, flannel 등), kube-proxy, coredns가 있다.

먼저 CNI 플러그인에 대해서 살펴보도록 하자. 

 

CNI 플러그인

쿠버네티스는 kubenet이라는 기본 네트워크 플러그인을 제공하지만, 크로스 노드 네트워킹이나 네트워크 정책 설정과 같은 고급 기능은 구현되어 있지 않다. 따라서 Pod 네트워킹 인터페이스로는 CNI 스펙을 준수하는(즉, Kubernetes Networking Model을 구현하는) 네트워크 플러그인을 사용해야 한다.

Kubernetes Networking Model
- Every Pod should have an IP Address
- Every Pod should be able to communicate with every other POD in the same node.
- Every Pod should be able to communicate with every other POD on other nodes without NAT.

 

CNI 플러그인의 주요 기능

1. Container Runtime must create network namespace

2. Identify network the container must attach to

3. Container Runtime to invoke Network Plugin(bridge) when container is Added

- create veth pair

- attach veth pair

- assign ip address

- bring up interface

4. Container Runtime to invoke Network Plugin(bridge) when container is Deleted

- delete veth pair

5. JSON format of the Network Configuration

 

CNI 플러그인의 주요 책임

  • Must support arguments ADD/DEL/CHECK
  • Must support parameters container id, network ns etc..
  • Must manage IP Address assignment to PODS
  • Must Return results in a specific format

 

참고. CNI 설정

kubelet 실행시 CNI 설정을 함께한다.

> ps -aux | grep kubelet

# kubelet.service
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--cni-bin-dir=/opt/cni/bin \\
--cni-conf-dir=/etc/cni/net.d \\
--register-node=true \\
--v=2

 

참고. CNI IPAM

ip 할당에 대한 관리(DHCP, host-local, ...)

cat /etc/cni/net.d/net-script.conf
{
	"cniVersion": "0.2.0",
	"name": "mynet",
	"type": “net-script",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.244.0.0/16",
		"routes": [
					{ "dst": "0.0.0.0/0" }
				]
	}
}
# cat /etc/cni/net.d/10-bridge.conf
{
	"cniVersion": "0.2.0",
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
        ]
	}
}

Pod Networking with CNI 플러그인(flannel)

Pod 내 컨테이너들은 가상 네트워크 인터페이스(veth)를 통해 서로 통신할 수 있고, 고유한 IP를 갖게 된다. 각 Pod는 위에서 설명한 CNI로 구성된 네트워크 인터페이스를 통하여 고유한 IP 주소로 서로 통신할 수 있다. 추가로 각기 다른 노드에 존재하는 Pod들은 서로 통신하기 위해 라우터를 거쳐야 한다.

출처 : https://medium.com/finda-tech/kubernetes-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%A0%95%EB%A6%AC-fccd4fd0ae6
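
실제 노드에서는 아래와 같은 명령으로 Pod IP, CNI가 만든 인터페이스(cni0, flannel.1, veth), 그리고 다른 노드의 Pod 대역으로 가는 경로를 확인해볼 수 있다. (출력과 대역은 환경에 따라 다르며, 10.244은 위에서 사용한 Pod CIDR 기준의 예시이다.)

kubectl get pods -A -o wide                  # Pod별 IP와 배치된 노드 확인
ip -br link | grep -E 'cni0|flannel|veth'    # 노드에 생성된 브리지/veth 인터페이스 확인
ip route | grep 10.244                       # 다른 노드의 Pod CIDR로 가는 라우팅 확인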

 

Service Networking with Kube-proxy

Service는 selector를 통해 전달받은 트래픽을 특정 Pod로 전달한다. Pod 네트워크와 동일하게 가상 IP 주소를 갖지만, ip addr나 route 명령으로는 조회할 수 없다는 특징이 있다. 대신 iptables 명령으로 NAT 테이블을 조회해보면 관련 설정이 있음을 확인할 수 있다.

 

쿠버네티스는 Service Networking을 위해 Kube-proxy를 이용한다. IP 네트워크(Layer 3)는 기본적으로 자신의 호스트에서 목적지를 찾지 못하면 상위 게이트웨이로 패킷을 전달하는데, Service IP로 향하는 트래픽은 게이트웨이(라우터)에 도달하기 전에 Kube-proxy가 만들어 둔 규칙에 의해 실제 목적지 Pod로 변환된다.

 

Kube-proxy는 현재(2021.11.28) iptables 모드가 기본 프록시 모드로 설정되어 있고, 쿠버네티스에 데몬셋으로 배포되기 때문에 모든 노드에 존재한다. 이때 kube-proxy는 직접 proxy 역할을 수행하지 않고, 그 역할을 전부 netfilter(service ip 발견 & Pod로 전달)에게 맡긴다. kube-proxy는 단순히 iptables를 통해 netfilter의 규칙을 수정한다.

 

--proxy-mode ProxyMode
사용할 프록시 모드: 'userspace' (이전) or 'iptables' (빠름) or 'ipvs' or 'kernelspace' (윈도우). 공백인 경우 가장 잘 사용할 수 있는 프록시(현재는 iptables)를 사용한다. iptables 프록시를 선택했지만, 시스템의 커널 또는 iptables 버전이 맞지 않으면, 항상 userspace 프록시로 변경된다.

 

참고. iptables, netfilter

- iptables : 유저 스페이스에 존재하는 인터페이스로 패킷 흐름을 제어. netfilter를 이용하여 규칙을 지정하여 패킷을 포워딩한다.

- netfilter : 커널 스페이스에 위치하여 모든 패킷의 생명주기를 관찰하고, 규칙에 매칭되는 패킷이 발생되면 정의된 액션을 수행한다.

출처 : proxying(https://medium.com/finda-tech/kubernetes-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%A0%95%EB%A6%AC-fccd4fd0ae6)

 

출처 : Nodeport(https://medium.com/finda-tech/kubernetes-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%A0%95%EB%A6%AC-fccd4fd0ae6)

 

sudo iptables -S 또는 sudo iptables -L -t nat 명령을 통해 노드에 설정되어 있는 NAT 테이블을 조회할 수 있다.

# netfilter의 체인룰
KUBE-SVC-XXX
KUBE-SERVICES
KUBE-SEP-XXX
KUBE-POSTROUTING
KUBE-NODEPORTS
KUBE-MARK-DROP
KUBE-MARK-MASQ
DOCKER
POSTROUTING
PREROUTING
OUTPUT
KUBE-PROXY-CANARY
KUBE-KUBELET-CANARY
등
# 특정 서비스의 체인룰 조회
iptables -L -t nat | grep db-service
KUBE-SVC-XA5OGUC7YRHOS3PU tcp -- anywhere 10.103.132.104 /* default/db-service: cluster IP */ tcp dpt:3306
DNAT tcp -- anywhere anywhere /* default/db-service: */ tcp to:10.244.1.2:3306
KUBE-SEP-JBWCWHHQM57V2WN7 all -- anywhere anywhere /* default/db-service: */

 

kube-proxy는 마스터 노드의 API server에서 정보를 수신하여 이러한 체인 룰들을 추가하거나 삭제한다. 이렇게 지속적으로 iptables를 업데이트하여 netfilter 규칙을 최신화하며 Service 네트워크를 관리하는 것이다.
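
kube-proxy가 실제로 어떤 모드로 동작 중인지, Service에 대한 체인이 만들어져 있는지는 아래처럼 확인해볼 수 있다. (kubeadm 기반 클러스터는 kube-proxy 설정을 kube-system 네임스페이스의 ConfigMap으로 관리한다는 가정의 예시이다.)

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
sudo iptables -t nat -L KUBE-SERVICES -n | head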

 


Ingress

Ingress는 리버스 프록시를 통해 클러스터 내부 Service로 패킷을 포워딩 시키는 방법을 명시한다. 대표적으로 많이 사용하는 nginx ingress controller는 ingress 리소스를 읽어서 그에 맞는 리버스 프록시를 구성한다.

 

Ingress 특징

  • using a single externally accessible URL that you can configure to route to different services Based on URL path and implementing SSL security as well
  • layer 7 load balancer
  • Ingress controller(Deploy) required - NOT deployed by default

Nginx Ingress Controller(Deployment) 예시

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

 

Ingress Resources(Configure)

Path types

Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:

  • ImplementationSpecific: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to Prefix or Exact path types.
  • Exact: Matches the URL path exactly and with case sensitivity.
  • Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path.
Note: If the last element of the path is a substring of the last element in request path, it is not a match (for example: /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz).

 

Rule example 1.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  backend:
    serviceName: wear-service
    servicePort: 80

 

Rule example 2(splitting traffic by URL).

  • no host is specified. The rule applies to all inbound HTTP traffic through the IP address specified.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
 rules:
 - http:
     paths:
     - path: /wear
       backend:
          serviceName: wear-service
          servicePort: 80
     - path: /watch
       backend:
          serviceName: watch-service
          servicePort: 80

 

Rule example 3(splitting by hostname).

  • If a host is provided (for example, foo.bar.com), the rules apply to that host
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
 rules:
 - host: wear.my-online-store.com
   http:
     paths:
     - backend:
         serviceName: wear-service
         servicePort: 80
 - host: watch.my-online-store.com
   http:
     paths:
     - backend:
         serviceName: watch-service
         servicePort: 80

 

참고) rewrite-target

Without the rewrite-target option, this is what would happen:

http://<ingress-service>:<ingress-port>/watch --> http://<watch-service>:<port>/watch

http://<ingress-service>:<ingress-port>/wear --> http://<wear-service>:<port>/wear

 

참고) replace("/something(/|$)(.*)", "/$2")

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /something(/|$)(.*)

 

Nginx Ingress Service(NodePort) 예시

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress

 


CoreDNS

쿠버네티스 클러스터의 DNS 역할을 수행할 수 있는 유연하고 확장 가능한 DNS 서버이고, 서비스 디스커버리에 사용된다.

 

FQDN(Fully Qualified Domain Name)

쿠버네티스에서 도메인으로 다양한 네임스페이스의 서비스 혹은 파드와 통신하고자 할 때 사용한다.

사용방법 : {service or host name}.{namespace name}.{svc or pod}.cluster.local

Hostname      Namespace   Type   Root            IP Address
web-service   apps        svc    cluster.local   10.107.37.188
10-244-2-5    default     pod    cluster.local   10.244.2.5
# Service
curl http://web-service.apps.svc.cluster.local

# POD
curl http://10-244-2-5.default.pod.cluster.local

 

coreDNS 설정

  • /etc/coredns/Corefile
$ cat /etc/coredns/Corefile
.:53 {
	errors
	health
	kubernetes cluster.local in-addr.arpa ip6.arpa {
		pods insecure
		upstream
		fallthrough in-addr.arpa ip6.arpa
	}
	prometheus :9153
	proxy . /etc/resolv.conf
	cache 30
	reload
}

coreDNS service

  • kube-dns
pi@master:~ $ kubectl get service -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   21d
  • kubelet config for coreDNS
root@master:/var/lib/kubelet# cat config.yaml 
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: cgroupfs
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
  • external DNS server

POD에서 요청한 도메인을 클러스터 내부(cluster.local)에서 찾을 수 없는 경우, 해당 쿼리는 coredns Pod의 /etc/resolv.conf에 지정된 외부 네임서버로 전달된다(Corefile의 proxy . /etc/resolv.conf 설정).

pi@master:~ $ cat /etc/resolv.conf
# Generated by resolvconf
nameserver 192.168.123.254
nameserver 8.8.8.8
nameserver fd51:42f8:caae:d92e::1
nameserver 61.41.153.2
nameserver 1.214.68.2
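
클러스터 내부 도메인과 외부 도메인이 모두 잘 풀리는지는 임시 Pod를 띄워 아래처럼 확인해볼 수 있다. (busybox 최신 이미지는 nslookup 동작이 다를 수 있어 1.28 태그를 쓰는 예시이다.)

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup google.com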

 


Quiz 1.

1. 네트워크 인터페이스/맥주소 조회

  • ifconfig -a : 전체 네트워크 인터페이스 조회
  • cat /etc/network/interfaces : 시스템의 네트워크 기본 정보 설정
  • ifconfig eth0 : 네트워크 인터페이스/맥주소 조회
  • ip link show eth0 : 네트워크 인터페이스/맥주소 조회
  • ip a | grep -B2 10.8.11.6 on controlplane

네트워크 인터페이스 : eth0, MAC address(ether) : 02:42:0a:08:0b:06

root@controlplane:~# ip a | grep -B2 10.8.11.6
4890: eth0@if4891: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 02:42:0a:08:0b:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.8.11.6/24 brd 10.8.11.255 scope global eth0

 

worker 노드의 네트워크 인터페이스 on controlplane

  • ssh node01 ifconfig eth0 로도 확인 가능!
  • arp node01
root@controlplane:~# arp node01
Address                  HWtype  HWaddress           Flags Mask            Iface
10.8.11.8                ether   02:42:0a:08:0b:07   C                     eth0

 

2. What is the interface/bridge created by Docker on this host?

  • ifconfig -a : docker0
  • ip addr
root@controlplane:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:8e:70:63:21 brd ff:ff:ff:ff:ff:ff
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/ether ae:8e:7a:c3:ed:cd brd ff:ff:ff:ff:ff:ff
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether ee:04:24:06:10:7d brd ff:ff:ff:ff:ff:ff
5: veth0cbf7e57@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 4e:9d:38:c6:36:3f brd ff:ff:ff:ff:ff:ff link-netnsid 2
6: vethe11f72d7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue master cni0 state UP mode DEFAULT group default 
    link/ether 26:8a:96:40:ad:fc brd ff:ff:ff:ff:ff:ff link-netnsid 3
4890: eth0@if4891: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:0a:08:0b:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4892: eth1@if4893: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:11:00:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1

what is the state of the interface docker0

  • state DOWN
root@controlplane:~# ip link show docker0
2: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:8e:70:63:21 brd ff:ff:ff:ff:ff:f

3. What is the default gateway

  • ip route show default
root@controlplane:~# ip route show default
default via 172.17.0.1 dev eth1

4. What is the port the kube-scheduler is listening on in the controlplane node?

  • netstat -nplt | grep scheduler
  • netstat -natulp | grep kube-scheduler
root@controlplane:~# netstat -nplt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      735/ttyd            
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      3716/kube-controlle 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      3776/kube-scheduler 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      636/systemd-resolve 
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      758/sshd            
tcp        0      0 127.0.0.1:36697         0.0.0.0:*               LISTEN      4974/kubelet        
tcp        0      0 127.0.0.11:45217        0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      4974/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      6178/kube-proxy     
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 10.8.11.6:2379          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 10.8.11.6:2380          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      3893/etcd           
tcp6       0      0 :::10256                :::*                    LISTEN      6178/kube-proxy     
tcp6       0      0 :::22                   :::*                    LISTEN      758/sshd            
tcp6       0      0 :::8888                 :::*                    LISTEN      5342/kubectl        
tcp6       0      0 :::10250                :::*                    LISTEN      4974/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      4003/kube-apiserver

 

5. Notice that ETCD is listening on two ports. Which of these have more client connections established?

  • netstat -natulp | grep etcd | grep LISTEN
  • netstat -anp | grep etcd | grep 2379 | wc -l
root@controlplane:~# netstat -anp | grep etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 10.8.11.6:2379          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 10.8.11.6:2380          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35234         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:33034         127.0.0.1:2379          ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34830         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35498         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35394         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35458         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35546         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35378         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35418         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35450         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35130         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:33034         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35148         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35250         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34916         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34812         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34758         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35232         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34846         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35060         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35566         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34872         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35368         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35442         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35356         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34724         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34972         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34794         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34618         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35528         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35208         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34862         ESTABLISHED 3893/etcd           
tcp        0      0 10.8.11.6:2379          10.8.11.6:37174         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35468         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35310         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35220         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35254         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35472         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35040         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34744         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35506         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35036         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34908         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35512         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34638         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35228         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35300         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35106         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35170         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35486         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35434         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35284         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34628         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35196         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35412         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35336         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34964         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35184         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35422         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34942         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34924         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35540         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35294         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35246         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35428         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34822         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35384         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35108         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34746         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35072         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34752         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35086         ESTABLISHED 3893/etcd           
tcp        0      0 10.8.11.6:37174         10.8.11.6:2379          ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35714         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35584         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:34730         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35324         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35268         ESTABLISHED 3893/etcd           
tcp        0      0 127.0.0.1:2379          127.0.0.1:35706         ESTABLISHED 3893/etcd

 


Quiz 2.

1. Inspect the kubelet service and identify the network plugin configured for Kubernetes.

  • ps -aux | grep kubelet | grep --color network-plugin=
root@controlplane:~# ps -aux | grep kubelet | grep --color network-plugin=   
root      4819  0.0  0.0 4003604 104604 ?      Ssl  04:51   1:12 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2

 

2. What is the path configured with all binaries of CNI supported plugins?

  • The CNI binaries are located under /opt/cni/bin by default.

 

Identify which of the below plugins is not available in the list of available CNI plugins on this host?

  • Run the command: ls /opt/cni/bin and identify the one not present at that directory.
$ls /opt/cni/bin/
bandwidth  bridge  dhcp  firewall  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan

 

3. What is the CNI plugin configured to be used on this kubernetes cluster?

  • ls /etc/cni/net.d/
  • 포드를 조회해보고 이름을 통해 알 수도 있음.
controlplane $ cat /etc/cni/net.d/10-weave.conflist 
{
    "cniVersion": "0.3.0",
    "name": "weave",
    "plugins": [
        {
            "name": "weave",
            "type": "weave-net",
            "hairpinMode": true
        },
        {
            "type": "portmap",
            "capabilities": {"portMappings": true},
            "snat": true
        }
    ]
}

 

4. What binary executable file will be run by kubelet after a container and its associated namespace are created.

  • Look at the type field in file /etc/cni/net.d/10-flannel.conflist. (flannel)
root@controlplane:~# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

 

참고) cni plugin 설치 안되었을시 Pod 에러(No Network configured)

Events:
  Type     Reason                  Age               From               Message
  ----     ------                  ----              ----               -------
  Normal   Scheduled               46s               default-scheduler  Successfully assigned default/app to node01
  Warning  FailedCreatePodSandBox  44s               kubelet, node01    Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6b3b648ea4a547e46b96ca4d23841e9acc0edc0d43f0e879af8f45ebb498c74e" network for pod "app": networkPlugin cni failed to set up pod "app_default" network: unable to allocate IP address: Post "http://127.0.0.1:6784/ip/6b3b648ea4a547e46b96ca4d23841e9acc0edc0d43f0e879af8f45ebb498c74e": dial tcp 127.0.0.1:6784: connect: connection refused, failed to clean up sandbox container "6b3b648ea4a547e46b96ca4d23841e9acc0edc0d43f0e879af8f45ebb498c74e" network for pod "app": networkPlugin cni failed to teardown pod "app_default" network: Delete "http://127.0.0.1:6784/ip/6b3b648ea4a547e46b96ca4d23841e9acc0edc0d43f0e879af8f45ebb498c74e": dial tcp 127.0.0.1:6784: connect: connection refused]

 

5. Identify the name of the bridge network/interface created by weave on each node

  • ifconfig : weave
  • ip link : weave
controlplane $ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 02:42:ac:11:00:10 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:da:b0:4f:0c brd ff:ff:ff:ff:ff:ff
6: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:f9:14:7b:9e:e8 brd ff:ff:ff:ff:ff:ff
8: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 06:a9:f6:0f:2d:d6 brd ff:ff:ff:ff:ff:ff
10: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP mode DEFAULT group default 
    link/ether fa:69:cd:fe:bd:65 brd ff:ff:ff:ff:ff:ff
11: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP mode DEFAULT group default 
    link/ether ea:09:2e:7d:f1:74 brd ff:ff:ff:ff:ff:ff
12: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0a:8f:c0:a8:02:9f brd ff:ff:ff:ff:ff:ff

 

6. What is the POD IP address range configured by weave?

  • ip addr show weave
controlplane $ ip addr show weave
8: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 06:a9:f6:0f:2d:d6 brd ff:ff:ff:ff:ff:ff
    inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
       valid_lft forever preferred_lft forever
    inet6 fe80::4a9:f6ff:fe0f:2dd6/64 scope link 
       valid_lft forever preferred_lft forever

 

7. What is the default gateway configured on the PODs scheduled on node03?

  • kubectl run busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  nodeName: node03 # node3!
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: busybox
  • kubectl exec -it busybox -- sh
  • ip route : 10.38.0.0
# ip r
default via 10.38.0.0 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.38.0.1
  • ssh node03 ip route : 10.46.0.0
controlplane $ ssh node03 ip route
default via 172.17.0.1 dev ens3 
10.32.0.0/12 dev weave proto kernel scope link src 10.46.0.0 
172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.22 
172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown

 


Quiz 3.

1. What network range are the nodes in the cluster part of?

  • ip addr
controlplane $ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:42:ac:11:00:0f brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.15/16 brd 172.17.255.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:f/64 scope link 
       valid_lft forever preferred_lft forever

2. What is the range of IP addresses configured for PODs on this cluster?

  • kubectl logs weave-net-xxx -c weave -n kube-system | grep ipalloc-range
  • weave configuration으로도 확인가능.
controlplane $ kubectl logs weave-net-4p55r -c weave -n kube-system | grep ipalloc-range
INFO: 2021/07/17 06:50:50.095797 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:06:a9:f6:0f:2d:d6 nickname:controlplane no-dns:true no-masq-local:true port:6783]

 

3. What is the IP Range configured for the services within the cluster?

  • cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep cluster-ip-range
controlplane $ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep cluster-ip-range
    - --service-cluster-ip-range=10.96.0.0/12

 

4. What type of proxy is the kube-proxy configured to use?

  • kubectl logs kube-proxy-xxx -n kube-system
  • assuming iptables proxy
controlplane $ k logs kube-proxy-fsn4s -n kube-system
I0717 06:51:30.922706       1 node.go:136] Successfully retrieved node IP: 172.17.0.17
I0717 06:51:30.922791       1 server_others.go:111] kube-proxy node IP is an IPv4 address (172.17.0.17), assume IPv4 operation
W0717 06:51:30.977974       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I0717 06:51:30.978240       1 server_others.go:186] Using iptables Proxier.
I0717 06:51:30.978587       1 server.go:650] Version: v1.19.0
I0717 06:51:30.979181       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0717 06:51:30.979710       1 config.go:315] Starting service config controller
I0717 06:51:30.979723       1 shared_informer.go:240] Waiting for caches to sync for service config
I0717 06:51:30.979739       1 config.go:224] Starting endpoint slice config controller
I0717 06:51:30.979742       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0717 06:51:31.079918       1 shared_informer.go:247] Caches are synced for service config 
I0717 06:51:31.079925       1 shared_informer.go:247] Caches are synced for endpoint slice config

 

 

참고) kube-proxy as daemonset

kubectl -n kube-system get ds 을 통해 kube-proxy가 데몬셋이라는 것을 알 수 있다. 따라서 kube-proxy는 모든 노드에 걸쳐 실행되고 있다.

pi@master:~ $ kubectl -n kube-system get ds
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   4         4         4       4            4           kubernetes.io/os=linux   21d
weave-net    4         4         4       4            4           <none>                   21d

 


Quiz 4.

1. What is the IP of the CoreDNS server that should be configured on PODs to resolve services?

  • kube-dns
controlplane $ k get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   60m

 

2. Where is the configuration file located for configuring the CoreDNS service?

  • Deployment configuration : /etc/coredns/Corefile
controlplane $ kubectl -n kube-system describe deployments.apps coredns | grep -A2 Args | grep Corefile
      /etc/coredns/Corefile

 

The Corefile is passed in to the CoreDNS POD by a ConfigMap object : mount 정보 확인!

pi@master:~ $ kubectl describe deploy coredns -n kube-system
Name:                   coredns
Namespace:              kube-system
CreationTimestamp:      Sat, 26 Jun 2021 18:21:35 +0900
Labels:                 k8s-app=kube-dns
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns
Replicas:               2 desired | 2 updated | 2 total | 0 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Service Account:  coredns
  Containers:
   coredns:
    Image:       k8s.gcr.io/coredns/coredns:v1.8.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
  Volumes:
   config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               coredns
    Optional:           false
  Priority Class Name:  system-cluster-critical

 

3. From the hr pod nslookup the mysql service and redirect the output to a file /root/CKA/nslookup.out

  • nslookup mysql.payroll
kubectl exec -it hr -- nslookup mysql.payroll > /root/CKA/nslookup.out


Server: 10.96.0.10
Address: 10.96.0.10

Name: mysql.payroll.svc.cluster.local
Address: 10.111.253.233

-> nslookup을 통해 fqdn 정보와 address를 알 수 있음.

 

참고) 서비스의 Selector는 Pod의 라벨 내용을 참고한다. kubectl get po xx --show-labels

 


Quiz 5.

1. 네임스페이스를 지정하여 ingress resource를 추가할 수 있다.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /pay
        backend:
          serviceName: pay-service
          servicePort: 8282

 

2. deploy ingress controller

2-0. namespace

  • kubectl create namespace(ns) ingress-space

2-1. configmap

  • kubectl create configmap(cm) nginx-configuration --namespace ingress-space
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-space

2-2. service account

  • kubectl create serviceaccount(sa) ingress-serviceaccount --namespace ingress-space
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-serviceaccount
  namespace: ingress-space

2-3. Role과 RoleBinding은 자동 생성

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2021-07-25T07:23:17Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: ingress-role-binding
  namespace: ingress-space
  resourceVersion: "1409"
  uid: abd25bb3-4ae5-45c0-b1aa-c5c98b49c351
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-role
subjects:
- kind: ServiceAccount
  name: ingress-serviceaccount
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2021-07-25T07:23:17Z"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: ingress-role
  namespace: ingress-space
  resourceVersion: "1408"
  uid: 535a7ee6-aabd-4a9f-9144-2799ab1327c6
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get

 

2-4. Ingress Controller 자동 생성

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
  namespace: ingress-space
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      serviceAccountName: ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --default-backend-service=app-space/default-http-backend
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

 

2-5. create service as nodePort

  • kubectl -n ingress-space expose deployment ingress-controller --name ingress --port 80 --target-port 80 --type NodePort --dry-run=client -o yaml > ingress-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress-space
spec:
  type: NodePort
  selector:
    name: nginx-ingress
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

 

2-6. create ingress resource at namespace :: app-space 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tt
  namespace: app-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /wear
        pathType: Prefix
        backend:
          service:
            name: wear-service
            port:
              number: 8080
      - path: /watch
        pathType: Prefix
        backend:
          service:
            name: video-service
            port:
              number: 8080
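
배포가 완료되면 아래처럼 NodePort(30080)를 통해 경로별 라우팅이 동작하는지 확인해볼 수 있다. (노드 IP는 환경에 맞게 바꾼다.)

curl http://<node-ip>:30080/wear
curl http://<node-ip>:30080/watch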

출처

- https://kubernetes.github.io/ingress-nginx/examples/

- https://kubernetes.github.io/ingress-nginx/examples/rewrite/

- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node

- https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model

- https://kubernetes.io/docs/concepts/cluster-administration/addons/

- https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports 

- https://github.com/kubernetes/dns/blob/master/docs/specification.md

- https://coredns.io/plugins/kubernetes/

- https://medium.com/finda-tech/kubernetes-%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC-%EC%A0%95%EB%A6%AC-fccd4fd0ae6

Docker Network를 살펴보기 전에 네트워크 기본 개념인 Switching, Routing, DNS에 대해서 먼저 살펴보고, 네트워크 환경 격리를 위한 네트워크 네임스페이스와 격리된 네트워크 간의 연결을 위한 브리지 네트워크에 대해서 살펴볼 것이다.

Network Basic

 

출처 : https://circuitglobe.com/difference-between-router-and-switch.html

Switching

같은 네트워크 대역의 여러 장치들이 필요할 때 스위치를 통해 연결되어 통신 할 수 있도록 한다.

예를들면, 192.168.1.10 <-> 192.168.1.0(switch) <-> 192.168.1.11

 

참고) 네트워크 디바이스에 ip를 설정하는 방법

# 호스트 A의 eth0 네트워크 디바이스에 ip 설정
ip addr add 192.168.1.10/24 dev eth0
# 호스트 B의 eth0 네트워크 디바이스에 ip 설정
ip addr add 192.168.1.11/24 dev eth0
$ ip link
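
같은 대역의 두 호스트가 스위치에 연결되어 있다면 아래처럼 서로 통신되는지 확인할 수 있다.

# 호스트 A(192.168.1.10)에서
ping 192.168.1.11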

 

Routing

둘 이상의 다른 대역의 네트워크 간 데이터 전송을 위한 경로를 설정해 주고 데이터가 해당 경로에 따라 통신할 수 있도록 한다.

(gateway : door to the outside)

 

예를들면,

192.168.1.10 <-> 192.168.1.0(switch) <-> 192.168.1.11

                              192.168.1.1

                                 (gateway)

                                 192.168.2.1

192.168.2.10 <-> 192.168.2.0(switch) <-> 192.168.2.11

 

참고) 라우팅을 추가하는 방법

ip route add 192.168.2.0/24 via 192.168.1.1
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.2.0    192.168.1.1     255.255.255.0   UG     0      0        0 eth0

 

Default Gateway

default 라우트는 따로 라우팅 규칙이 적용되지 않는 나머지 모든 ip에 대한 라우트를 처리한다.

 

예를들면,

192.168.1.10 <-> 192.168.1.0(switch) <-> 192.168.1.11

                              192.168.1.1

                              (gateway)  -----------------------------------Internet

                              192.168.2.1

192.168.2.10 <-> 192.168.2.0(switch) <-> 192.168.2.11

 

참고) default gateway를 추가하는 방법

ip route add default via 192.168.2.1
= ip route add 0.0.0.0/0 via 192.168.2.1

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 eth0
192.168.1.0     192.168.2.1     255.255.255.0   UG    0      0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

# 일반 서버를 라우터처럼 사용할 때
cat /proc/sys/net/ipv4/ip_forward
# /etc/sysctl.conf
net.ipv4.ip_forward = 1(0 : not forward)

 

DNS

각 서버의 별칭을 /etc/hosts에 작성할 수 있지만, 관리해야 하는 별칭이 많아지거나 IP가 변경됐을 때 수정해야 하는 포인트가 많아지기 때문에 한 곳에서 관리하고자 하는 필요성이 제기되었다. 이렇게 DNS Server가 탄생했다.

 

일반적인 사람들은 도메인 이름을 통해 온라인으로 정보에 엑세스한다. 이 도메인이름을 IP 주소로 변환해주는 것이 DNS 서버의 역할이다.

* Name Resolution : Translating Hostname to IP address

 

출처 : http://dailusia.blog.fc2.com/blog-entry-362.html

 

DNS 설정

모르는 host name인 경우 DNS Server에 요청(기본 설정 : /etc/hosts가 우선순위가 높다)

리눅스의 경우 /etc/resolv.conf에 DNS 서버를 여러개 정의할 수 있다.

#/etc/resolv.conf
nameserver 192.168.1.100
nameserver 8.8.8.8

Domain Names

top level domain : 웹사이트의 목적에 따라 .com, .net, .edu, .org, .io 등을 사용한다.

출처 : https://ko.wikipedia.org/wiki/%EB%8F%84%EB%A9%94%EC%9D%B8_%EB%84%A4%EC%9E%84

Search domain 

Organization DNS Server -> RootDNS -> .com DNS -> google DNS server

 

#/etc/resolv.conf
nameserver 192.168.1.100

# web 검색시 web.mycompany.com, web.prod.mycompany.com 이 검색됨
search mycompany.com prod.mycompany.com

 

Main Record Types

Type    설명                                             예시
A       주어진 호스트에 해당하는 IPv4 주소를 알려준다.     192.168.1.1
AAAA    주어진 호스트에 해당하는 IPv6 주소를 알려준다.     2001:0db8:85a3:0000:0000:8a2e:0370:7334
CNAME   도메인 이름의 별칭을 만드는 데 사용한다.           eat.web-server, hungry.web-server

 

Resolution Tool

ping, nslookup(only DNS), dig(only DNS) 

 


Network Namespaces / Bridge network

네트워크 네임스페이스는 프로세스간에 네트워크 환경을 격리할 수 있는 매우 강력한 기능을 제공한다. 

ip 명령은 네트워크 네임스페이스를 다루는 기능이 기본적으로 내장되어 있고, 네트워크 상태를 확인하고 제어하는 표준적인 명령어이다. ip 명령을 통해 네트워크 네임스페이스와 브리지 네트워크를 실습해 볼 것이다.

 

네트워크 디바이스 조회

  • ip -n {network namespace name} link
root@71bef26e2a21:/# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/tunnel6 :: brd ::
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0

 

네트워크 네임스페이스 관리

  • ip netns : 네트워크 네임스페이스 조회
  • ip netns add {name} : 네트워크 네임스페이스 생성
  • ip -n {network namespace} link set {virtual interface name} netns {name} : 가상 네트워크 인터페이스에 네임스페이스 지정
  • ip netns exec {name} {command} : 네트워크 네임스페이스에서 명령 실행
  • ip netns exec {name} ip --br link : 네트워크 네임스페이스 내의 네트워크 디바이스 조회
  • ip netns exec {name} ip link set dev {device name} up : 네트워크 네임스페이스 내에 네트워크 인터페이스 실행
root@71bef26e2a21:/# ip netns add red
root@71bef26e2a21:/# ip netns add blue
root@71bef26e2a21:/# ip netns
blue
red

# 특정 네트워크 네임스페이스의 네트워크 디바이스 조희
root@71bef26e2a21:/# ip netns exec red ip --br link
lo               DOWN           00:00:00:00:00:00 <LOOPBACK> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 

# 루프백 인터페이스 실행
$ ip netns exec red ip link set dev lo up
$ ip netns exec red ip -br link

root@71bef26e2a21:/# ip netns exec red ip --br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP>

 

Virtual Network Interface

  • ip link add {name1} type veth peer name {name2} : 가상 네트워크 인터페이스 생성
  • ip link del {pair 중 1개의 이름} : 페어 중 하나만 삭제해도 연결된 나머지 인터페이스도 함께 삭제됨.
  • ip a add xx.xx.xx.xx/xx dev {name} : 가상 네트워크 인터페이스에 ip 지정
  • = ip -n {network namespace} addr add xx.xx.xx.xx dev {name} (optional, not recommended)
  • ip -n {network namespace} link set dev {name} up : 가상 네트워크 인터페이스 실행
# veth0, veth1 (virtual interface)
$ ip link add veth0 type veth peer name veth1
$ root@71bef26e2a21:/# ip -br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 
veth1@veth0      DOWN           0e:7d:e8:e3:60:fd <BROADCAST,MULTICAST,M-DOWN>  <----
veth0@veth1      DOWN           36:e2:ec:f0:95:23 <BROADCAST,MULTICAST,M-DOWN>  <----
eth0@if10        UP             02:42:ac:11:00:03 <BROADCAST,MULTICAST,UP,LOWER_UP>

 

실습 1. 가상 네트워크 인터페이스를 이용하여 네트워크 네임스페이스와 호스트를 연결

첫째, 기존 가상 네트워크 인터페이스(veth1)에 네트워크 네임스페이스(red) 지정을 한다.

$ ip link set veth1 netns red
root@71bef26e2a21:/# ip netns exec red ip --br link 
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 
veth1@if5        DOWN           0e:7d:e8:e3:60:fd <BROADCAST,MULTICAST> <------ 네트워크 네임스페이스 virtual interface


root@71bef26e2a21:/# ip --br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 
veth0@if4        DOWN           36:e2:ec:f0:95:23 <BROADCAST,MULTICAST> <------- 호스트 virtual interface
eth0@if10        UP             02:42:ac:11:00:03 <BROADCAST,MULTICAST,UP,LOWER_UP>

둘째, 가상 네트워크 인터페이스(veth0, veth1) IP 지정 및 Up 상태로 변경 후 ping 테스트

root@71bef26e2a21:/# ip a add 10.200.0.2/24 dev veth0
root@71bef26e2a21:/# ip netns exec red ip a add 10.200.0.3/24 dev veth1
root@71bef26e2a21:/# ip link set dev veth0 up
root@71bef26e2a21:/# ip netns exec red ip link set dev veth1 up
root@71bef26e2a21:/# ip -br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 
veth0@if4        UP             36:e2:ec:f0:95:23 <BROADCAST,MULTICAST,UP,LOWER_UP> 
eth0@if10        UP             02:42:ac:11:00:03 <BROADCAST,MULTICAST,UP,LOWER_UP> 
root@71bef26e2a21:/# ip netns exec red ip -br link
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 
tunl0@NONE       DOWN           0.0.0.0 <NOARP> 
ip6tnl0@NONE     DOWN           :: <NOARP> 
veth1@if5        UP             0e:7d:e8:e3:60:fd <BROADCAST,MULTICAST,UP,LOWER_UP>


root@71bef26e2a21:/# ping 10.200.0.3
PING 10.200.0.3 (10.200.0.3) 56(84) bytes of data.
64 bytes from 10.200.0.3: icmp_seq=1 ttl=64 time=0.077 ms

root@71bef26e2a21:/# ip netns exec red ping 10.200.0.2
PING 10.200.0.2 (10.200.0.2) 56(84) bytes of data.
64 bytes from 10.200.0.2: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 10.200.0.2: icmp_seq=2 ttl=64 time=0.057 ms

 

Virtual Network Switch(Bridge)

브리지는 데이터링크(L2) 계층의 장비로 네트워크 세그먼트를 연결해주는 역할을 한다. 브리지는 물리 장비나 소프트웨어로 구성할 수 있다. ip 명령어를 사용하면 veth 가상 인터페이스 뿐만 아니라, 가상 브리지를 만드는 것도 가능하다. 

  • ip link add {name} type bridge : 브리지 생성
  • ip link set {name} up : 브리지 실행
  • ip link set {virtual interface name} master {name} : 브리지에 가상 네트워크 인터페이스를 연결
  • ip link set dev {name} up : 가상 네트워크 인터페이스 활성화
  • ip addr add 10.201.0.1/24 brd 10.201.0.255 dev {name} : 브리지에 ip와 브로드캐스트 ip 셋업(10.201.0.0/24 IP 대역이 bridge로 연결됨)
  • ip addr add 10.201.0.1/24 dev {name} 
  • ip a show {name}

 

실습 2. 브리지를 통해 서로 다른 네트워크 네임스페이스(컨테이너)를 연결

 

첫째, 브리지 생성 및 활성화

# 브리지 생성
root@71bef26e2a21:/# ip link add br0 type bridge
root@71bef26e2a21:/# ip link set br0 up

둘째, 가상 네트워크 인터페이스 생성 및 네트워크 네임스페이스 지정

# 네트워크 네임스페이스, virtual interface 1
root@71bef26e2a21:/# ip netns add container10 
root@71bef26e2a21:/# ip link add brid10 type veth peer name veth10
root@71bef26e2a21:/# ip link set veth10 netns container10


# 네트워크 네임스페이스, virtual interface 2
root@71bef26e2a21:/# ip netns add container11
root@71bef26e2a21:/# ip link add brid11 type veth peer name veth11
root@71bef26e2a21:/# ip link set veth11 netns container11

셋째, 가상 네트워크 인터페이스 ip 할당 및 활성화

# 네트워크 네임스페이스, IP 할당 및 실행
root@71bef26e2a21:/# ip netns exec container10 ip a add 10.201.0.4/24 dev veth10
root@71bef26e2a21:/# ip netns exec container10 ip link set dev veth10 up

root@71bef26e2a21:/# ip netns exec container11 ip a add 10.201.0.5/24 dev veth11
root@71bef26e2a21:/# ip netns exec container11 ip link set dev veth11 up

넷째, 가상 네트워크 인터페이스를 브리지에 연결 및 활성화

# virtual interface를 브리지에 연결 1
root@71bef26e2a21:/# ip link set brid10 master br0
root@71bef26e2a21:/# ip link set dev brid10 up

# virtual interface를 브리지에 연결 2
root@71bef26e2a21:/# ip link set brid11 master br0
root@71bef26e2a21:/# ip link set dev brid11 up

다섯째, ping 테스트

# 테스트
root@71bef26e2a21:/# ip netns exec container10 ping 10.201.0.5
PING 10.201.0.5 (10.201.0.5) 56(84) bytes of data.
64 bytes from 10.201.0.5: icmp_seq=1 ttl=64 time=0.142 ms

root@71bef26e2a21:/# ip netns exec container11 ping 10.201.0.4
PING 10.201.0.4 (10.201.0.4) 56(84) bytes of data.
64 bytes from 10.201.0.4: icmp_seq=1 ttl=64 time=0.260 ms

 

실습 3. 호스트와 네임스페이스를 연결하고 인터넷과 DNS를 사용할 수 있도록 셋업

호스트는 위의 container10, container11 네임스페이스와 연결되어 있지 않기 때문에 아직 통신을 할 수 없는 상태이다.

root@71bef26e2a21:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
10.200.0.0      0.0.0.0         255.255.255.0   U     0      0        0 veth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

 

첫째, 브리지(br0)에 ip와 브로드캐스트 ip를 셋업하자.(10.201.0.0/24 IP 대역이 br0로 연결됨)

root@71bef26e2a21:/# ip addr add 10.201.0.1/24 brd 10.201.0.255 dev br0
root@71bef26e2a21:/# ip a show br0
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:92:6c:c8:b5:28 brd ff:ff:ff:ff:ff:ff
    inet 10.201.0.1/24 brd 10.201.0.255 scope global br0
       valid_lft forever preferred_lft forever
       
       
root@71bef26e2a21:/# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
10.200.0.0      0.0.0.0         255.255.255.0   U     0      0        0 veth0
10.201.0.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

root@71bef26e2a21:/# ping 10.201.0.4
PING 10.201.0.4 (10.201.0.4) 56(84) bytes of data.
64 bytes from 10.201.0.4: icmp_seq=1 ttl=64 time=0.072 ms

 

둘째, 네트워크 네임스페이스에서 인터넷을 사용할 수 있도록 default, NAT, DNS 셋업

  • ip route add default via xx.xx.xx.xx
  • /etc/netns/{namespace name}/resolv.conf : DNS 서버 지정
root@71bef26e2a21:/# ip netns exec container10 ip route add default via 10.201.0.1
root@71bef26e2a21:/# ip netns exec container11 ip route add default via 10.201.0.1
root@71bef26e2a21:/# ip netns exec container10 route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.201.0.1      0.0.0.0         UG    0      0        0 veth10
10.201.0.0      0.0.0.0         255.255.255.0   U     0      0        0 veth10

# NAT 셋업 - linux IP 포워드 기능 활성화
$ sysctl -w net.ipv4.ip_forward=1
$ iptables -t nat -A POSTROUTING -s 10.201.0.0/24 -j MASQUERADE

root@71bef26e2a21:/# ip netns exec container10 ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=36 time=92.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=36 time=86.7 ms

# DNS 셋업 - 네트워크 네임스페이스 도메인 요청
root@71bef26e2a21:/# mkdir -p /etc/netns/container10/
root@71bef26e2a21:/# echo 'nameserver 8.8.8.8' > /etc/netns/container10/resolv.conf
root@71bef26e2a21:/# ip netns exec container10 curl www.naver.com
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center> NWS </center>
</body>
</html>
root@71bef26e2a21:/# ip netns exec container10 curl google.com   
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

 

실습 4. 네트워크 인터페이스에서 인터넷에 연결된 호스트를 통해 다른 네트워크로 통신할 수 있도록 셋업

호스트를 게이트웨이로 활용(192.168.15.2 > 192.168.1.3)

첫째, 네트워크 네임스페이스 셋업

$ ip netns exec {network_namespace} ip route add 192.168.1.0/24 via 192.168.15.5(bridge network ip address)

둘째, 인터넷 접근을 위한 네트워크 네임스페이스 default 셋업

# 인터넷 접근 from internal network
ip netns exec {network_namespace} ip route add default via 192.168.15.5
ip netns exec {network_namespace} ping 8.8.8.8

Third, configure NAT

# The packet's source address must be rewritten to the gateway's address; only then does the outside
# world see the traffic as coming from the gateway and reply to it.
iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE

Fourth, test

ip netns exec {network_namespace} ping 192.168.1.3

 

Lab 5. Reaching a virtual network interface from an external network

- Method 1. Use the host as a gateway, exactly as above.

- Method 2. Add a port-forwarding (DNAT) rule on the host.

iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.15.2:80
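
A minimal end-to-end sketch of this second method, assuming a web server is listening on 192.168.15.2:80 inside the namespace and eth0 is the host's externally reachable interface (both assumptions, not part of the original lab):

# Assumptions: the namespaced web server listens on 192.168.15.2:80, eth0 is the host's external NIC
sysctl -w net.ipv4.ip_forward=1                      # the host must forward packets
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.15.2:80           # rewrite the destination to the namespace IP
# from another machine on the external network (replace <host-ip> with the host's address):
curl http://<host-ip>/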

 

Notes

1. While testing the network namespaces, if you can't ping one namespace from the other, make sure you set the netmask when assigning the IP address, i.e. 192.168.1.10/24.

 

First, ip -n red addr add 192.168.1.10/24 dev veth-red

 

Second, also check firewalld / iptables rules. Either add rules to iptables to allow traffic from one namespace to another, or disable iptables altogether (only in a learning environment).

 

2. Common commands

- arp

- netstat -plnt

 


Docker Networking

Docker network types

- None

- Host network : Docker's network isolation is not applied; containers share the host's network stack, so two containers cannot use the same port.

- Bridge (default) : an internal private network on the host

pi@master:~ $ sudo docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
dd74c3b7034f   bridge    bridge    local
87b3579ecb45   host      host      local
ae52f9cf2344   none      null      local
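
To see the difference between these drivers, here is a rough example run; the nginx image is just an arbitrary test image:

docker run -d --network none nginx    # no interfaces except lo; unreachable from other hosts
docker run -d --network host nginx    # shares the host's network stack; listens directly on host port 80
docker run -d nginx                   # default: attached to the bridge (docker0) network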

 

Docker0

When Docker is installed on Linux, a virtual bridge network called docker0 is created and linked to the server's physical NIC (Network Interface Card). docker0 is created by default once the Docker daemon is running. The network namespace concept covered earlier is applied to each container, and so is the connection to the bridge.

When a Docker container starts, a private IP address from the 172.17.0.0/16 subnet is automatically assigned to its eth0 interface. This virtual NIC is a layer-2 (OSI reference model) virtual network interface that communicates with its paired NIC (the other end of the veth pair) as if through a tunnel.

Diagram: https://jonnung.dev/images/docker_network.png - veth (virtual NIC)
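
The bridge and the veth pairs can be observed directly on the host; a small sketch (the container name web is arbitrary):

ip addr show docker0                                     # the docker0 bridge, typically 172.17.0.1/16
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web   # the container's eth0 address on the 172.17.0.0/16 subnet
ip link | grep veth                                      # host-side ends of the veth pairs attached to docker0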

 

NAPT(Network Address Port Translation)

For a Docker container to communicate with an external network, something must forward packets between the virtual bridge docker0 and the host OS's physical NIC. Docker uses NAPT to make this connection.

NAPT (Network Address Port Translation) is a technique for sharing a single IP address among multiple machines by translating both IP addresses and port numbers. It transparently maps private IP addresses to a global IP address and dynamically rewrites TCP/IP port numbers as well, so many machines can connect to the outside at the same time through one global IP address. Docker implements NAPT using Linux iptables.

 

For example, when a container starts, port 80 used by the web server inside the container can be mapped to port 8080 on the host OS. When the external network then accesses port 8080 on the host OS, the traffic is forwarded to port 80 inside the container.

(Between docker0 and the physical NIC, the container's port and the host OS's port are translated using IP masquerade (NAPT).)

iptables \
  -t nat \
  -A DOCKER \
  -p tcp \
  --dport 8080 \
  -j DNAT \
  --to-destination 172.17.0.3:80
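
You can watch Docker create an equivalent rule when a port is published; the image and port numbers below are only examples:

docker run -d -p 8080:80 nginx
iptables -t nat -L DOCKER -n          # the DNAT rule mapping host port 8080 to <container-ip>:80
iptables -t nat -L POSTROUTING -n     # the MASQUERADE rule for the 172.17.0.0/16 subnet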

 

Notes

The difference between NAT and IP masquerade

NAT and NAPT (IP masquerade) are the techniques used to translate between private and global IP addresses so that machines with only private IP addresses can access the internet.

 

- NAT(Network Address Translation)

When a client with a private IP address accesses a server on the internet, the NAT router rewrites the client's private IP address to the router's own global IP address before sending the request. For the response, the NAT router rewrites the destination back to the client's private IP address and forwards it.

This address translation is what allows a computer on the private network to communicate with a server on the internet. However, because NAT maps global and private IP addresses 1:1, multiple clients cannot access the internet through it at the same time.

 

- NAPT(Network Address Port Translation)

NAPT translates the port number together with the private IP address. When a private IP address is translated to the global IP address, each private address is mapped to a different port number, and NAPT uses that port number to translate return traffic back to the right private IP address. This is how a single global IP address can be shared by many private IP addresses.

On Linux, setting up NAPT is called IP masquerade.
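
As a hedged illustration with made-up addresses, the two cases look like this in iptables:

# 1:1 NAT: rewrite the source of a single private host to a fixed global address
iptables -t nat -A POSTROUTING -s 192.168.0.10 -o eth0 -j SNAT --to-source 203.0.113.10
# NAPT (IP masquerade): a whole private subnet shares the outgoing interface's address,
# and port numbers are rewritten per connection
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE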

 

 

Container Network Interface(CNI)

Network namespaces, Docker, and similar runtimes all build roughly the same bridge network setup, which is why the CNI standard emerged.

- Docker does not implement CNI; it has its own standard called CNM (Container Network Model).

- When Kubernetes creates a docker container, it starts it on the none network and delegates the rest of the network setup to CNI plugins (see the config sketch below).
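
As a rough sketch of what such a plugin configuration might look like, assuming the standard bridge and host-local plugins are installed (the file name, bridge name, and subnet below are illustrative, not from the original):

cat > /etc/cni/net.d/10-bridge.conf <<EOF
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
EOF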

 

* Bridge network setup procedure (a consolidated sketch follows the list)

1. Create Network Namespace

2. Create Bridge Network/Interface

3. Create VETH Pairs(Pipe, Virtual Cable)

4. Attach vEth to Namespace

5. Attach Other vEth to Bridge

6. Assign IP addresses

7. Bring the interfaces up

8. Enable NAT - IP Masquerade
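
A consolidated sketch of the eight steps above, using made-up names and addresses (ns1, br0, the veth pair names, 10.22.0.0/24):

ip netns add ns1                                              # 1. create the network namespace
ip link add br0 type bridge && ip link set br0 up             # 2. create the bridge interface
ip link add veth1 type veth peer name veth1-br                # 3. create the veth pair (virtual cable)
ip link set veth1 netns ns1                                   # 4. attach one end to the namespace
ip link set veth1-br master br0 && ip link set veth1-br up    # 5. attach the other end to the bridge
ip addr add 10.22.0.1/24 dev br0                              # 6. assign IP addresses
ip netns exec ns1 ip addr add 10.22.0.2/24 dev veth1
ip netns exec ns1 ip link set veth1 up                        # 7. bring the interfaces up
ip netns exec ns1 ip link set lo up
ip netns exec ns1 ip route add default via 10.22.0.1
sysctl -w net.ipv4.ip_forward=1                               # 8. enable NAT (IP masquerade)
iptables -t nat -A POSTROUTING -s 10.22.0.0/24 -j MASQUERADE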

 


Sources

- [Book] 완벽한 IT 인프라 구축을 위한 Docker, 2nd edition

- Network namespaces and bridge networks : https://www.44bits.io/ko/post/container-network-2-ip-command-and-network-namespace
