SSH into Kubernetes pod without public IP access

In this guide we’ll demonstrate how to SSH into a Kubernetes pod without any external tool or service bridging between the pod and the internet. All that’s required is a locally installed kubectl configured to communicate with the cluster.


Generate SSH keys
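Since the pod will authenticate us against a public key, we need a key pair on the local machine. A minimal sketch, assuming OpenSSH is installed locally and using the default ed25519 file name (skip this if you already have a key you’d like to use):

```shell
# Create ~/.ssh if it doesn't exist yet, with the permissions sshd expects.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate an ed25519 key pair non-interactively, unless one already exists.
# -N "" sets an empty passphrase; drop it if you prefer to be prompted.
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N "" -q
```

This writes the private key to ~/.ssh/id_ed25519 and the public key to ~/.ssh/id_ed25519.pub; the public key is the only file we’ll copy to the pod.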

Install and configure openssh-server on the pod

First, open a shell inside the target container:

kubectl exec -it $pod -c $container -- bash

Then we’ll install openssh-server:

apt update && apt install -y openssh-server

Once that’s installed, we’ll set the SSH port to the one we’ll later expose with kubectl port-forward. Let’s say port 2300. To configure that, we’ll need to find the # Port 22 line within the default /etc/ssh/sshd_config file, and replace it with the chosen port number, in our case 2300. A convenient shortcut is to run the following sed command:

sed -i -e 's/#Port 22/Port 2300/g' /etc/ssh/sshd_config

Configure openssh-server on the pod to accept our local SSH key

First, create an .ssh directory on the pod and copy the public key file into it (only the public key; the private key should never leave your machine). The file name assumes the default ed25519 key, so adjust it if yours differs:

kubectl exec $pod -c $container -- mkdir -p /root/.ssh
kubectl cp ~/.ssh/id_ed25519.pub $pod:/root/.ssh/id_ed25519.pub -c $container

Then add it to ~/.ssh/authorized_keys on the pod:

kubectl exec -it $pod -c $container -- bash -c "cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

Finally… we can start the SSH server on the pod:

kubectl exec -it $pod -c $container -- bash -c "service ssh start"

The final piece — port forwarding

Forward local port 2300 to port 2300 on the pod:

kubectl port-forward $pod 2300:2300

kubectl port-forward keeps running in the foreground, so from another terminal you can now just run

ssh <pod-user>@localhost -p 2300

where <pod-user> is root in our setup, since that’s where we placed authorized_keys.

And you’re in.
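If you plan to reconnect often, an entry in your local ~/.ssh/config saves retyping the port and user. A sketch, where the host alias pod-via-forward and the key path are assumptions you can adjust to taste:

```
Host pod-via-forward
    HostName localhost
    Port 2300
    User root
    IdentityFile ~/.ssh/id_ed25519
```

With the port-forward running, ssh pod-via-forward then connects in one step.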
