We run our production Kubernetes clusters on CoreOS in AWS. Our architecture is set up so that all instances are hidden inside private subnets. To access the resources there for administration tasks, we use bastion hosts.
This poses a problem for kubectl, since we no longer have direct access to the API server. To access the cluster with
kubectl, we'll need to set up an SSH tunnel between our laptop and the bastion host.
The following example shows how to set up this tunnel, where
10.0.0.50 is the internal IP of my k8s API server and
22.214.171.124 is the IP of my bastion host:
ssh -L 9443:10.0.0.50:443 firstname.lastname@example.org -N
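If you bring the tunnel up often, the forwarding can also live in ~/.ssh/config so you don't have to retype the flags. This is a sketch: the host alias k8s-tunnel and the user bastion-user are made-up names, since the real bastion login is not shown above.

```
# ~/.ssh/config (sketch -- "k8s-tunnel" and "bastion-user" are placeholders)
Host k8s-tunnel
    HostName 22.214.171.124
    User bastion-user
    # forward local port 9443 to the API server's internal IP via the bastion
    LocalForward 9443 10.0.0.50:443
```

With this in place, `ssh -N k8s-tunnel` starts the same tunnel as the command above.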
Now that we have a tunnel up and running, we should be able to hit our API server via localhost (e.g.
curl https://localhost:9443), but we can't, because localhost isn't in the SAN of my API server certificates. I could add
localhost to the SAN, or use a different name that's already there, which is what I did (
k8s.stevesloka.com). To make that name resolve to the tunnel, we need to add an entry to our local hosts file, or, for simplicity, set up a CNAME in public DNS (that way additional users of the cluster don't need to mess with their hosts files).
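To see which names a certificate's SAN actually covers, openssl can print the extension directly. The sketch below generates a throwaway self-signed cert just for illustration (real cluster certs come from the cluster CA; the /tmp paths are made up, and -addext needs OpenSSL 1.1.1+). Against a real cluster you would run only the second command, pointing it at the API server cert.

```shell
# Generate a throwaway self-signed cert with SAN entries (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 \
  -subj "/CN=k8s.stevesloka.com" \
  -addext "subjectAltName=DNS:k8s.stevesloka.com,DNS:localhost"

# Print the Subject Alternative Name extension of the cert
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```

If localhost shows up in that output, the plain curl https://localhost:9443 approach would have worked without any hosts-file tricks.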
The following shows my hosts file:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
127.0.0.1       k8s.stevesloka.com
Now I just configure kubectl to talk to my cluster through the tunnel and I'm in!
$ kubectl config set-cluster default-cluster --server=https://k8s.stevesloka.com:9443 --certificate-authority=ca.pem
$ kubectl config set-credentials default-admin --client-key=apiserver-key.pem --client-certificate=apiserver.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
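For reference, those four commands write roughly the following into ~/.kube/config (a sketch of the resulting kubeconfig; since --embed-certs wasn't used, the cert entries stay as file paths relative to where the commands were run):

```
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: ca.pem
    server: https://k8s.stevesloka.com:9443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-system
current-context: default-system
users:
- name: default-admin
  user:
    client-certificate: apiserver.pem
    client-key: apiserver-key.pem
```

A quick `kubectl get nodes` confirms the whole chain: name resolution to 127.0.0.1, the tunnel to the bastion, and client-cert auth against the API server.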