Failed to list *v1.ConfigMap

I was setting up an OKD single node cluster and to test if it was up I ran:

openshift-install --dir=/opt/okd4/install_dir/ wait-for bootstrap-complete --log-level=debug

Then I got the following error messages:

https://api.lab.okd.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dbootstrap&limit=500&resourceVersion=0: EOF
 E0104 15:32:18.605736    1642 reflector.go:153] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to list *v1.ConfigMap: Get https://api.lab.okd.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dbootstrap&limit=500&resourceVersion=0: EOF
 E0104 15:32:19.607338    1642 reflector.go:153] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to list *v1.ConfigMap: Get https://api.lab.okd.local:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dbootstrap&limit=500&resourceVersion=0: EOF
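The EOF here means the API server at api.lab.okd.local:6443 accepted the connection and then dropped it without a response. A quick way to confirm whether the API endpoint itself is serving (a hypothetical helper, using the cluster address from the log above) is:

```shell
#!/usr/bin/env bash
# Probe the OKD API endpoint seen in the error log. An empty reply or EOF
# here means the API server is not (yet) serving, which is distinct from a
# DNS or routing failure. Set DRY_RUN=1 to print the command instead of
# running it.
check_api() {
  local url="https://api.lab.okd.local:6443/version"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "curl -k --connect-timeout 5 $url"
  else
    curl -k --connect-timeout 5 "$url"   # -k: the bootstrap cert is self-signed
  fi
}
```

During bootstrap this can fail intermittently while the control plane comes up, so a single failure is not conclusive on its own.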

I did a lot of searching and found others reporting the same problem, but no solution.

One solution that appeared to help was based on https://www.reddit.com/r/openshift/comments/kqxftm/openshift_46_authentication_fail_clobbers_web/.

In essence, you performed the following steps on the services node:

  • cordon
  • drain
  • restart
  • uncordon

Each of those steps is described in:

Understanding how to evacuate pods on nodes
https://docs.openshift.com/container-platform/4.6/nodes/nodes/nodes-nodes-working.html#nodes-nodes-working-evacuating_nodes-nodes-working
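The four steps above can be sketched as a small shell script. This is only a sketch: it assumes the oc client is logged in with cluster-admin rights, and the node name is hypothetical.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Evacuate and restart a node. The node name passed in is hypothetical --
# substitute your own services node (see `oc get nodes`).
# Set DRY_RUN=1 to print the commands instead of running them.
evacuate_and_restart() {
  local node="$1"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

  run oc adm cordon "$node"                        # 1. mark node unschedulable
  run oc adm drain "$node" \
      --ignore-daemonsets --delete-local-data      # 2. evacuate pods
  run ssh "core@$node" sudo systemctl reboot       # 3. restart the node
  # ...wait for the node to report Ready again before the next step...
  run oc adm uncordon "$node"                      # 4. allow scheduling again
}
```

Note that newer oc releases rename --delete-local-data to --delete-emptydir-data; check `oc adm drain --help` for your client version.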

But the real solution was…

In my particular case, the real cause appears to have been that I had not populated the "pullSecret" field in "install-config.yaml". Once I went to Red Hat at https://cloud.redhat.com/openshift/install/pull-secret, collected a copy of my pull secret, and rebuilt the bootstrap and control-plane nodes with it, the error no longer appeared.
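For reference, "pullSecret" is a top-level field in install-config.yaml. A minimal sketch follows; the baseDomain and cluster name are illustrative (chosen to match api.lab.okd.local from the error above), and the secret value is a placeholder for the JSON blob copied from the pull-secret page:

```yaml
apiVersion: v1
baseDomain: okd.local          # illustrative, matching api.lab.okd.local
metadata:
  name: lab                    # illustrative cluster name
# ... platform, networking, etc. ...
pullSecret: '{"auths": ...}'   # placeholder: paste the JSON from cloud.redhat.com
sshKey: 'ssh-rsa AAAA...'      # placeholder: your SSH public key
```

Remember that openshift-install consumes and removes install-config.yaml when generating manifests and ignition files, so keep a backup copy and regenerate the bootstrap and control-plane assets after changing it.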