How to fix 'connection refused' for Kubernetes or how to renew certificates
Posted on October 4, 2021 from Hillsboro, Oregon
One of the lesser-known causes of a connection refused error when connecting to a Kubernetes cluster is expired certificates. This post will outline the steps to determine whether the certs associated with your cluster have expired and, if so, how to renew them.
The Symptom
After reviving a cluster I hadn't used for months, I ran into an issue connecting to it. Running simple commands like kubectl get nodes would fail with something like the following message:
curl: (7) Failed to connect to 192.168.7.121 port 6443: Connection refused
Having exhausted some of the usual suspects, I finally determined that the certificates associated with my cluster had just expired. To determine if you have expired certificates, run the following from the master node:
sudo kubeadm alpha certs check-expiration
The result should look something like the following:
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 24, 2021 17:07 UTC   <invalid>                               no
apiserver                  Sep 24, 2021 17:07 UTC   <invalid>       ca                      no
apiserver-etcd-client      Sep 24, 2021 17:07 UTC   <invalid>       etcd-ca                 no
apiserver-kubelet-client   Sep 24, 2021 17:07 UTC   <invalid>       ca                      no
controller-manager.conf    Sep 24, 2021 17:07 UTC   <invalid>                               no
etcd-healthcheck-client    Sep 24, 2021 17:06 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Sep 24, 2021 17:06 UTC   <invalid>       etcd-ca                 no
etcd-server                Sep 24, 2021 17:06 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Sep 24, 2021 17:07 UTC   <invalid>       front-proxy-ca          no
scheduler.conf             Sep 24, 2021 17:07 UTC   <invalid>                               no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jun 20, 2030 19:19 UTC   8y              no
etcd-ca                 Jun 20, 2030 19:19 UTC   8y              no
front-proxy-ca          Jun 20, 2030 19:19 UTC   8y              no
Notice that all the values under the RESIDUAL TIME column are <invalid> :)
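Note: On kubeadm v1.20 and newer, the certs subcommands were promoted out of alpha, so the equivalent command there drops the alpha keyword:
sudo kubeadm certs check-expiration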
The Fix
Step 1: For each certificate that's expired, run the following command (with the name of the certificate as the last argument) to renew it:
sudo kubeadm alpha certs renew etcd-healthcheck-client
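If the whole set has lapsed, as it had here, you don't have to renew them one at a time; kubeadm can renew everything in one shot (again, drop alpha on v1.20+):
sudo kubeadm alpha certs renew all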
Step 2: Once you've renewed all the expired certificates, restart the kubelet and docker services so the control-plane containers come back up with the new certs:
sudo systemctl restart kubelet
sudo systemctl restart docker
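To confirm the API server actually came back up, you can check that something is listening on the API port again; a quick sketch, assuming the default port of 6443:
sudo ss -tlnp | grep 6443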
Now, if you attempt to use the kubectl command again, you'll get a different error message. This is because ~/.kube/config still holds the old client certificate, even though admin.conf on the master has been renewed:
error: You must be logged in to the server (Unauthorized)
Step 3: Update ~/.kube/config:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
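If you want to double-check that the copied config carries a fresh certificate, you can decode the embedded client cert and print its expiry. This assumes the cert is embedded as client-certificate-data (the kubeadm default) rather than referenced by file path:
grep client-certificate-data $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate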
Note: You may need to copy the kube config to the other nodes of the cluster as well.
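For example, something like the following would push it out to the other nodes (a sketch, assuming the same user on every node, working SSH between them, and an existing ~/.kube directory on each target):
for node in nuc2 nuc3 nuc4; do scp $HOME/.kube/config $node:~/.kube/config; done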
Finally, you should now be able to successfully connect to the cluster again:
$ kubectl get no
NAME   STATUS                     ROLES    AGE    VERSION
nuc1   Ready,SchedulingDisabled   master   468d   v1.19.2
nuc2   Ready                      <none>   468d   v1.19.2
nuc3   Ready                      <none>   468d   v1.19.2
nuc4   Ready                      <none>   468d   v1.19.2