Re-joining a k8s node to the master cluster


1. Delete the node

On the master, run kubectl delete node k8s-node02 to remove the node that will be rejoined.
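If the node still has workloads scheduled on it, you can optionally drain it before deleting so its pods are evicted gracefully. A minimal sketch, using the node name from this walkthrough (in v1.15 the flag is --delete-local-data; newer releases renamed it to --delete-emptydir-data):

kubectl drain k8s-node02 --ignore-daemonsets --delete-local-data
kubectl delete node k8s-node02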

2. If you run kubeadm join directly at this point, it fails with errors like the following:

[root@k8s-node02 pki]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef   --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8 
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
 [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
 [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
 [ERROR Port-10250]: Port 10250 is in use
 [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Solution:

The errors show that port 10250 is already in use and that the kubelet config files and the CA certificate left over from the previous join still exist. You need to remove those files and kill the process holding the port; it is recommended to back the files up before deleting them.

[root@k8s-node02 pki]# lsof -i:10250
COMMAND PID USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
kubelet 694 root  30u IPv6 26021   0t0 TCP *:10250 (LISTEN)
[root@k8s-node02 pki]# kill -9 694
[root@k8s-node02 pki]# cd /etc/kubernetes/
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.conf kubelet.conf manifests pki
[root@k8s-node02 kubernetes]# mv bootstrap-kubelet.conf bootstrap-kubelet.conf_bk
[root@k8s-node02 kubernetes]# mv kubelet.conf kubelet.conf_bk
[root@k8s-node02 kubernetes]# cd pki/
[root@k8s-node02 pki]# ls
ca.crt
[root@k8s-node02 pki]# rm -rf ca.crt 
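Instead of killing the kubelet process with kill -9, you can stop it through systemd, which is a little cleaner (kubeadm installs run kubelet as a systemd service by default):

systemctl stop kubelet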

3. Running the join again still fails, this time with a different error:

[root@k8s-node02 ~]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef   --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8 
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
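Before resetting anything, it can be worth checking the kubelet logs on the node to see why the TLS bootstrap timed out; on a systemd host:

journalctl -xeu kubelet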

Solution:

Run kubeadm reset on the node to reset it.

[root@k8s-node02 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0710 10:22:57.487306  31093 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
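Following the hints printed by kubeadm reset, you may also want to flush the iptables/IPVS rules and remove the leftover kubeconfig on the node before rejoining. Apply only what matches your environment:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear          # only if the cluster uses IPVS
rm -f $HOME/.kube/config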

4. Run the join once more; this time it succeeds.

[root@k8s-node02 ~]# kubeadm join 192.168.140.128:6443 --token abcdef.0123456789abcdef   --discovery-token-ca-cert-hash sha256:a3d9827be411208258aea7f3ee9aa396956c0a77c8b570503dd677aa3b6eb6d8 
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.12. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
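A side note: the bootstrap token used in the join command is only valid for 24 hours by default. If the join fails with a token error instead, a fresh join command (token plus CA cert hash) can be generated on the master with:

kubeadm token create --print-join-command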

5. On the master, confirm that the node has rejoined successfully.

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    master   120m   v1.15.1
k8s-node01     Ready    <none>   100m   v1.15.1
k8s-node02     Ready    <none>   83m    v1.15.1
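To confirm the rejoined node is actually healthy, you can also check that its system pods (kube-proxy, the CNI pod, and so on) are running; the grep here is just a convenience filter:

kubectl get pods -n kube-system -o wide | grep k8s-node02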
