Kubernetes Cluster Installation Step by Step Guide

Kevin W Tech Notes
10 min read · Jun 5, 2020

Topology

k8s-master01: 192.168.0.31

k8s-node01: 192.168.0.32

k8s-node02: 192.168.0.33

1. OS Installation

CentOS 7 minimal installation

2. Kubernetes Installation Preparation

2.1 Configure hostname and hosts file

[root@k8s-master01 ~]# hostnamectl set-hostname k8s-master01
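Run the same command on each worker node with its own name (hostnames taken from the topology above):

hostnamectl set-hostname k8s-node01   # on 192.168.0.32
hostnamectl set-hostname k8s-node02   # on 192.168.0.33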

Configure the /etc/hosts file on every node, or use a DNS server:

[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.31 k8s-master01
192.168.0.32 k8s-node01
192.168.0.33 k8s-node02
[root@k8s-master01 ~]# scp /etc/hosts root@k8s-node01:/etc/hosts
The authenticity of host 'k8s-node01 (192.168.0.32)' can't be established.
ECDSA key fingerprint is SHA256:c9KSc2pZiUWMeqQCru0qsfZS2n2ZlOZiTfW+XNAFYR4.
ECDSA key fingerprint is MD5:b6:26:e4:49:01:74:93:7f:04:0e:0f:21:37:df:e2:9e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node01,192.168.0.32' (ECDSA) to the list of known hosts.
root@k8s-node01's password:
hosts 100% 232 190.4KB/s 00:00
[root@k8s-master01 ~]# scp /etc/hosts root@k8s-node02:/etc/hosts
The authenticity of host 'k8s-node02 (192.168.0.33)' can't be established.
ECDSA key fingerprint is SHA256:FfWwqK5Xog5Xr4xtQhjKPXQEPE6lqWtmTbpR4WUq80w.
ECDSA key fingerprint is MD5:ad:1f:0d:40:da:18:ef:e2:4e:ab:52:46:5d:a9:91:02.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node02,192.168.0.33' (ECDSA) to the list of known hosts.
root@k8s-node02's password:
hosts 100% 232 160.6KB/s 00:00

2.2 Install dependency packages

Install on k8s-master01, k8s-node01 and k8s-node02:

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

2.3 Disable firewalld and enable iptables

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

2.4 Disable SELinux and turn off the swap partition. The kubelet will not start by default while swap is enabled.

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
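A quick sanity check that both changes took effect (a minimal check; exact output will vary):

free -m | grep -i swap   # the Swap line should show 0 total
getenforce               # prints Permissive now, and Disabled after the next reboot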

2.5 Optimize kernel for k8s

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1 #mandatory
net.bridge.bridge-nf-call-ip6tables=1 #mandatory
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1 #mandatory
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
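Note: if sysctl reports "No such file or directory" for the net.bridge.* keys, the br_netfilter module is not loaded yet (it is loaded again in section 3.1). Load it and re-apply:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf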

2.6 Configure the time zone if it was not set correctly earlier

timedatectl set-timezone America/Los_Angeles

timedatectl set-local-rtc 0

systemctl restart rsyslog
systemctl restart crond
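To double-check, timedatectl should now report the new time zone and that the RTC is kept in UTC:

timedatectl | grep -E 'Time zone|RTC in local TZ'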

2.7 Disable unnecessary services

systemctl stop postfix && systemctl disable postfix

2.8 Configure rsyslogd and systemd journald.

CentOS 7 ships two logging services (rsyslog and systemd-journald). This optional step makes journald the primary log store with persistent storage and stops forwarding to rsyslog.

mkdir /var/log/journal 
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
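To confirm journald switched to persistent storage (an optional check):

ls /var/log/journal        # a machine-id subdirectory appears once persistent storage is active
journalctl --disk-usage    # reports how much disk space the journals now use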

2.9 Upgrade the Linux kernel from 3.10 to 4.4

[root@k8s-master01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
warning: /var/tmp/rpm-tmp.nvok7s: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:elrepo-release-7.0-3.el7.elrepo ################################# [100%]


[root@k8s-master01 ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.sonic.net
* elrepo: repos.lax-noc.com
* elrepo-kernel: repos.lax-noc.com
* extras: mirror.sfo12.us.leaseweb.net
* updates: mirror.sfo12.us.leaseweb.net
elrepo | 2.9 kB 00:00:00
elrepo-kernel | 2.9 kB 00:00:00
(1/2): elrepo/primary_db | 438 kB 00:00:00
(2/2): elrepo-kernel/primary_db | 1.9 MB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package kernel-lt.x86_64 0:4.4.207-1.el7.elrepo will be installed
--> Finished Dependency Resolution
Dependencies Resolved
====================================================================================================
Package Arch Version Repository Size
====================================================================================================
Installing:
kernel-lt x86_64 4.4.207-1.el7.elrepo elrepo-kernel 39 M
Transaction Summary
====================================================================================================
Install 1 Package
Total download size: 39 M
Installed size: 180 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/elrepo-kernel/packages/kernel-lt-4.4.207-1.el7.elrepo.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Public key for kernel-lt-4.4.207-1.el7.elrepo.x86_64.rpm is not installed
kernel-lt-4.4.207-1.el7.elrepo.x86_64.rpm | 39 MB 00:00:07
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
Importing GPG key 0xBAADAE52:
Userid : "elrepo.org (RPM Signing Key for elrepo.org) <secure@elrepo.org>"
Fingerprint: 96c0 104f 6315 4731 1e0b b1ae 309b c305 baad ae52
Package : elrepo-release-7.0-3.el7.elrepo.noarch (installed)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : kernel-lt-4.4.207-1.el7.elrepo.x86_64 1/1
Verifying : kernel-lt-4.4.207-1.el7.elrepo.x86_64 1/1
Installed:
kernel-lt.x86_64 0:4.4.207-1.el7.elrepo
Complete!
[root@k8s-master01 ~]#

[root@k8s-master01 ~]# grub2-set-default 'CentOS Linux (4.4.207-1.el7.elrepo.x86_64) 7 (Core)'
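If the menu entry name differs on your system, list the installed entries first and pick the 4.4 one. This sketch assumes a BIOS install with the GRUB config at /boot/grub2/grub.cfg (UEFI systems keep it under /boot/efi/EFI/centos/):

[root@k8s-master01 ~]# awk -F\' '/^menuentry / {print $2}' /boot/grub2/grub.cfg
[root@k8s-master01 ~]# grub2-set-default 0   # alternatively select by index; 0 is the newest kernel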

Reboot and verify that the system boots with kernel 4.4:

[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.4.207-1.el7.elrepo.x86_64 #1 SMP Sat Dec 21 08:00:19 EST 2019 x86_64 x86_64 x86_64 GNU/Linux

3. Kubeadm Installation

3.1 Enable ipvs for kube-proxy

modprobe br_netfilter  # load netfilter module
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Run the commands above on k8s-master01, k8s-node01 and k8s-node02; the transcript below is from k8s-master01:

[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 20480 0
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 147456 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 114688 2 ip_vs,nf_conntrack_ipv4
libcrc32c 16384 2 xfs,ip_vs

3.2 Install Docker-ce

# Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Set docker repos
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
# Create the /etc/docker directory
mkdir /etc/docker
# Configure the Docker daemon: use the systemd cgroup driver and the json-file log driver
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Reload systemd, then start and enable the Docker service
systemctl daemon-reload && systemctl start docker && systemctl enable docker
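After Docker starts, verify it actually picked up the systemd cgroup driver from daemon.json (kubelet and Docker must use the same driver):

docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd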

3.3 Install Kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
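The command above installs whatever version is latest in the repository. To match the v1.17.0 release used in kubeadm-config.yaml later, you can pin the versions instead (the version strings here are an assumption based on that target release):

yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0 --disableexcludes=kubernetes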

3.4 Initialize the master node

On k8s-master01 only:

[root@k8s-master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml
W0104 23:39:13.663661 1675 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0104 23:39:13.663707 1675 validation.go:28] Cannot validate kubelet config - no validator is available
[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.31   # change to the master node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # add podSubnet; we use flannel's default subnet 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# New section added below: switch the default kube-proxy mode to ipvs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
#Verify the configuration
[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml
W0105 00:41:30.549924 8090 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0105 00:41:30.549974 8090 validation.go:28] Cannot validate kubelet config - no validator is available
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.31
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0105 00:36:36.316267 3617 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0105 00:36:36.316309 3617 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.31]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.31 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.31 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0105 00:36:39.134013 3617 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0105 00:36:39.134614 3617 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.501954 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3038dee127b90b18d79f39c3645ea3caf787578361ef7351439efc052eed2c79
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.31:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:6dd4f08582df786e2d8e3107342bf2b762d130cb6e1f6ff4deacc06bec0758e6

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# kubectl get node # flannel is not configured yet, so the master node status shows NotReady
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 2m55s v1.17.0

3.5 Configure flannel network

[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2020-01-04 23:53:50-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14416 (14K) [text/plain]
Saving to: ‘kube-flannel.yml’
100%[==========================================>] 14,416 --.-K/s in 0.007s

2020-01-04 23:53:50 (2.02 MB/s) - ‘kube-flannel.yml’ saved [14416/14416]
[root@k8s-master01 ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 6m23s v1.17.0
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-lq9d4 1/1 Running 0 3m33s
coredns-6955765f44-vwddr 1/1 Running 0 3m33s
etcd-k8s-master01 1/1 Running 0 3m29s
kube-apiserver-k8s-master01 1/1 Running 0 3m29s
kube-controller-manager-k8s-master01 1/1 Running 0 3m29s
kube-flannel-ds-amd64-bpcrp 1/1 Running 0 2m51s
kube-proxy-hh4nd 1/1 Running 0 3m33s
kube-scheduler-k8s-master01 1/1 Running 0 3m29s

3.6 Add nodes to the cluster

On k8s-node01 and k8s-node02 (the transcript below is from k8s-node02):

[root@k8s-node02 ~]# kubeadm join 192.168.0.31:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:6dd4f08582df786e2d8e3107342bf2b762d130cb6e1f6ff4deacc06bec0758e6
W0105 00:44:09.524618 13522 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-node02.es.equinix.com" could not be reached
[WARNING Hostname]: hostname "k8s-node02.es.equinix.com": lookup k8s-node02.es.equinix.com on 8.8.8.8:53: server misbehaving
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
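The '[WARNING Service-Kubelet]' line above can be cleared by enabling the kubelet service on the worker nodes, just as was done on the master in section 3.3:

[root@k8s-node02 ~]# systemctl enable kubelet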

3.7 Verification

[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 8m8s v1.17.0
k8s-node01.es.equinix.com Ready <none> 45s v1.17.0
k8s-node02.es.equinix.com Ready <none> 50s v1.17.0
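As a final check, confirm that kube-proxy is really running in IPVS mode. The pod name below comes from the 'kubectl get pod -n kube-system' output in section 3.5, and "ipvs Proxier" is the message upstream kube-proxy logs when the IPVS proxier is active:

[root@k8s-master01 ~]# ipvsadm -Ln   # lists the cluster-IP virtual servers, e.g. 10.96.0.1:443
[root@k8s-master01 ~]# kubectl -n kube-system logs kube-proxy-hh4nd | grep -i ipvs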
