Kubernetes: VCF 4.0 on VxRail 7. SSH into a pod

In the last blog post, I showed how to provision a pod running an SSH service. I did this so I can get basic dial-tone access into a pod and then test and view its networking. This will help me get a better understanding of what's going on with the container as it relates to networking.

To review, we created a Dockerfile, built the image, and pushed it to Harbor or Docker Hub. We then created a YAML file pointing to the image and applied it with kubectl to deploy the application.

From VMware vCenter, under Hosts and Clusters, we can see the sample pods under Namespaces. I created a namespace for demo2 and a namespace for demo3.


We then go to our Linux client and log in to the Kubernetes environment. You can install the client on a Windows or Mac system just as well. I created an alias from kubectl to 'k' for brevity.
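For reference, my alias and login setup looks roughly like this. This is a sketch, not my exact session: the Supervisor Cluster address and username are placeholders for my lab values, so substitute your own.

```shell
# Make 'k' a shorthand for kubectl (add to ~/.bashrc to persist)
alias k=kubectl

# Log in via the kubectl vsphere plugin used by vSphere with Kubernetes.
# The server address and username below are placeholders.
kubectl vsphere login --server=<supervisor-cluster-ip> \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify
```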


We can see the external IP is 172.28.84.2. The YAML file applied to create the deployment is published on GitHub; you can view and download it here: https://github.com/batx123/k8s-sshd
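The external IP comes from the Service object. Assuming the service is named egsshd as in the repo's YAML, it can be read back with:

```shell
# EXTERNAL-IP here is the NSX-T load balancer address (172.28.84.2 in my lab)
k get svc egsshd

# -o wide also shows each pod's internal IP and the node it runs on
k get pods -o wide
```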

The YAML file pulls the image from docker.io, so if your cluster has internet access it should work fine in your setup as well.

Now that we have the external IP info, let's SSH in as root using VMware1! for the password.
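From the client, that's simply:

```shell
# SSH to the pod through the load balancer IP; the password is VMware1!
ssh root@172.28.84.2
```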

OK, that works. Let's check demo3, or rather recreate it.

But first we need to switch context to demo3.
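Assuming the login plugin created contexts matching the namespace names, switching looks like this:

```shell
# See which contexts are available, then switch to demo3
k config get-contexts
k config use-context demo3
```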


OK, let's change it up a bit and create the deployment by applying the YAML file directly from GitHub.

$ k apply -f https://raw.githubusercontent.com/batx123/k8s-sshd/master/egsshd.yaml


As we can see, we created a new deployment in demo3 and we have an external IP of 172.28.84.5. Let's log into that pod.


So far nothing new: we know we can log into both pods in their respective namespaces. Now let's see if I can get more details from the pod in demo3. But first, I'll need to install some tools, as they are not present by default. We want to install ping and a tool to view IP addresses.

~# apt update && apt install -y iproute2 iputils-ping

Once we install those packages we can run ip address. Let’s install the packages and try to ping the other pod in namespace demo2 (172.28.84.2).
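Put together, the sequence inside the pod looks like this (the base image ships with an empty package cache, hence the update first):

```shell
# Refresh the package index, then install the ip and ping utilities
apt update
apt install -y iproute2 iputils-ping

# Show this pod's internal address (10.22.1.18 in my case)
ip address

# Ping the external IP of the pod in demo2
ping -c 3 172.28.84.2
```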

We can see the IP address, which is the internal pod network address of 10.22.1.18, and we can ping the external IP of the other pod in demo2. Can we SSH from one pod to the other? Yes we can. While I'm in this pod, I install the ping and ip tools here as well.


172.28.84.2 is demo2


172.28.84.5 is demo3

Let's try using the internal IPs assigned by Kubernetes instead of the load balancer / front-end IPs.

demo2 egssh pod (egsshd-7c78696976-sq46t) internal IP is 10.22.1.2/28

demo3 egssh pod (egsshd-7c78696976-twn2f) internal IP is 10.22.1.18/28
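Rather than logging into each pod, the internal IPs can also be read straight from kubectl:

```shell
# The IP column is the pod's internal (Kubernetes-assigned) address
k config use-context demo2 && k get pods -o wide
k config use-context demo3 && k get pods -o wide
```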


We cannot ping from one pod to the other using the internal IPs, as they are in different namespaces (demo2 and demo3).

The cluster IPs are not pingable either; a ClusterIP is a virtual address with no interface behind it to answer ICMP. So far we've only been able to ping the external IPs. Sounds reasonable so far.


What if we put two of these pods in the same namespace?

In order to do this, I edit my YAML file and change the name of the service from egsshd to egssh to make it unique. After I edit the YAML file, I apply it again against the same namespace (demo3).


After I edit the YAML file, I then apply it. For transparency, I'll show the YAML file I edited. (I removed the 'd' from the sshd app name, so the name is egssh, not egsshd.)
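To sketch the shape of the edit (this is not the exact file from the repo, just the relevant parts, with the image name as a placeholder): only the names change from egsshd to egssh, so the second Deployment and Service don't collide with the first.

```yaml
# Names changed from 'egsshd' to 'egssh' to avoid a collision
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egssh
spec:
  selector:
    matchLabels:
      app: egssh
  template:
    metadata:
      labels:
        app: egssh
    spec:
      containers:
      - name: egssh
        image: <your-sshd-image>   # placeholder -- use the image from the repo's yaml
        ports:
        - containerPort: 22
---
apiVersion: v1
kind: Service
metadata:
  name: egssh
spec:
  type: LoadBalancer
  selector:
    app: egssh
  ports:
  - port: 22
    targetPort: 22
```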


After the edit, we apply the YAML file. What's interesting is that we now have two cluster IPs. We'll have to circle back to that later; ignore the Cluster-IP for now.


After installing ping, I can now ping the other pod in this namespace.


Since I can ping, let's try SSH.


Here we showed how pod networking is isolated per namespace. If one namespace wants to talk to a different namespace, it must use the front-end / external IP, which happens to be an NSX-T load balancer IP.

Now to clean up: removing the SSH pods from demo2.
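Deleting with the same YAML that created the objects removes both the deployment and the service in one shot:

```shell
# In the demo2 context: remove everything egsshd.yaml created
k delete -f https://raw.githubusercontent.com/batx123/k8s-sshd/master/egsshd.yaml

# Verify nothing is left
k get pods
```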


Check VMware vCenter and it looks good. No pods in demo2 now.

Let’s switch context to demo3 and delete that pod as well.


Once again we log into vCenter, and demo3 has no pods.


Thanks,

Published by Ben

I do stuff in the datacenter.
