If you read my previous blog post, you’ll know that we stood up VMware vSphere 7 with Kubernetes, which runs Kubernetes workloads as first-class citizens in VMware, much like VMs. Now, what do we do? Well, we need to create or source a container image from somewhere. Folks often store images in public registries like docker.io, but what if you want to keep an image local rather than publishing it to a public site? The Harbor registry service lets DevOps folks push and pull images to a local registry and deploy pods using those images. Let’s see how this can be accomplished.
Log into vSphere, select Hosts and Clusters, navigate to the Namespaces section, and enable Harbor.
You’ll need to specify a vSAN storage policy for Harbor.
After a minute or two you’ll see a new namespace pop up on the left under Hosts and Clusters > Namespaces.
After all the pods and services start, we should see that the Image Registry is healthy and running, with a link to the Harbor UI.
Let’s log into Harbor using the firstname.lastname@example.org account.
You can also click on the More info… link to be directed to a nice README on GitHub: https://github.com/goharbor/harbor
Upon logging into Harbor, we see the namespace I previously created, called demo1. That’s interesting. We can click into demo1 to get more info about this namespace, but I’ll save that for the next demo.
Going back into vSphere menu Workload Management, we see the new registry namespace there as well.
I’m not going to click around too much on this namespace, but we can see that some persistent volume claims were created.
Let’s go back to the Namespace view, and make a new namespace.
We make a new namespace, grant permissions, and assign a storage policy to the new namespace, demo2. I’m going to skip over the permissions and storage selection since we already covered creating a namespace in the previous blog post.
Let’s flip back to Harbor and see what happened.
We now have a new namespace in Harbor called demo2.
Let’s publish something in Harbor. But first we need something to publish. Let’s build a simple SSH “dial tone” test image so we can verify access after we publish and deploy it into Kubernetes. You will need Docker running somewhere to build the sample app. Luckily, Docker has a great example of this at: https://docs.docker.com/engine/examples/running_ssh_service/
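For reference, the Dockerfile in that Docker example looks roughly like the sketch below. The base image and the throwaway root password here are assumptions adapted from the docs, not values from this post; adjust them to taste.

```shell
# Sketch of a Dockerfile adapted from Docker's running_ssh_service example.
# Base image and root password are assumptions; change them for your setup.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04

# Install the OpenSSH server and create its runtime directory
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd

# Set a throwaway root password for the dial tone test (not for production!)
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd

# Allow root login over SSH for this test image
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF

# Then build it, as in the post:
#   docker build -t egsshd .
```

The `EXPOSE 22` line is what lets us map the container’s SSH port to the host later on.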
Let’s build it following those instructions; you’ll see it work through the list of commands and output.
$ docker build -t egsshd .
For brevity, I’m jumping straight to the bottom of the docker build output.
We have a fresh image created, named egsshd.
Let’s run the docker image to make sure we can get into it as expected.
OK, it appears that we can SSH into this container. This gives us a basic dial tone service confirming that access works as designed. Exit out of the container.
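The run-and-verify step can be sketched like this. The host port (2222) and container name are assumptions I picked for illustration:

```shell
# Assumed host port and container name for the dial tone check
HOST_PORT=2222
NAME=egsshd_test

# Run the image detached, mapping the container's SSH port (22)
# to $HOST_PORT on the Docker host:
#   docker run -d -p ${HOST_PORT}:22 --name ${NAME} egsshd
#
# Then SSH in using the root password baked into the image:
echo "ssh root@localhost -p ${HOST_PORT}"

# When done testing, tear the container down:
#   docker rm -f ${NAME}
```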
In order to log into Harbor, we would ideally get a real SSL certificate, but I didn’t, so let’s configure Docker to accept an insecure or self-signed certificate: https://docs.docker.com/registry/insecure/
I opted to accept the self-signed certificate of my Harbor appliance. To do that, I added the IP address of Harbor to the /etc/docker/daemon.json file, then restarted the Docker service.
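A minimal sketch of that change is below. The Harbor IP (10.0.0.10) is a hypothetical placeholder; substitute the address shown in your vSphere UI. The file is written locally here for illustration, but on the real Docker host it lives at /etc/docker/daemon.json.

```shell
# Hypothetical Harbor address; use the one from your environment
HARBOR_IP=10.0.0.10

# Write the daemon config (to ./daemon.json here for illustration;
# the real path is /etc/docker/daemon.json)
cat > daemon.json <<EOF
{
  "insecure-registries": ["${HARBOR_IP}"]
}
EOF

# On the Docker host, restart Docker to pick up the change:
#   sudo systemctl restart docker
```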
Let’s log in to Harbor. Login Succeeded. Hooray! No change to the images yet, but let’s tag and push the image.
Next we need to tag the image, then push it to Harbor.
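The tag-and-push step can be sketched as follows. Harbor expects the full reference in the form `<registry>/<project>/<image>:<tag>`; the registry address here is a hypothetical placeholder, and the project matches the demo2 namespace from earlier.

```shell
# Assumed registry address; the project matches the demo2 namespace
REGISTRY=10.0.0.10     # hypothetical Harbor address
PROJECT=demo2
IMAGE=egsshd

# Build the full reference Harbor expects: <registry>/<project>/<image>:<tag>
FULL_TAG="${REGISTRY}/${PROJECT}/${IMAGE}:latest"
echo "${FULL_TAG}"

# Tag the local image and push it to Harbor:
#   docker tag egsshd "${FULL_TAG}"
#   docker push "${FULL_TAG}"
```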
What happened in Harbor?
We log in to Harbor using email@example.com and can see the repository count has incremented to reflect the push. If you are already in the Harbor UI, click refresh.
Click on demo2 to learn more about it. The image is taking up about 78MB of space.
Click on Logs and we can see the push operation.
Let’s log into Kubernetes now. Get the control plane IP from Workload Management.
Let’s create a yaml file for the deployment.
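A sketch of what such a manifest could look like is below: a Deployment running the sshd image plus a LoadBalancer Service in front of it, so we get an external IP to SSH to later. The registry address, object names, and filename are assumptions for illustration; the image reference should match whatever you pushed to Harbor.

```shell
# Sketch of a Deployment plus LoadBalancer Service for the sshd image.
# Registry address (10.0.0.10), names, and filename are assumptions.
cat > egsshd-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egsshd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: egsshd
  template:
    metadata:
      labels:
        app: egsshd
    spec:
      containers:
      - name: egsshd
        image: 10.0.0.10/demo2/egsshd:latest
        ports:
        - containerPort: 22
---
apiVersion: v1
kind: Service
metadata:
  name: egsshd-svc
spec:
  type: LoadBalancer
  selector:
    app: egsshd
  ports:
  - port: 22
    targetPort: 22
EOF
```

The `type: LoadBalancer` service is what provides the front-end service IP we fetch at the end.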
We log in to Kubernetes.
Switch context to the demo2 namespace.
Create the deployment.
Get the pod status. READY should be 1/1 and STATUS Running.
Now that we have a running pod, let’s get the front-end service IP.
OK.. moment of truth… Let’s SSH into it.
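The steps above can be sketched end to end like this. The control plane IP, manifest filename, and external IP are placeholders I introduced for illustration:

```shell
# End-to-end sketch of the kubectl steps. Control plane IP, manifest
# filename, and external IP below are placeholders, not real values.
CONTROL_PLANE_IP=10.0.0.11     # hypothetical; get yours from Workload Management
NAMESPACE=demo2

# Log in with the vSphere kubectl plugin:
#   kubectl vsphere login --server=${CONTROL_PLANE_IP} --insecure-skip-tls-verify
#
# Switch context to the demo2 namespace:
#   kubectl config use-context ${NAMESPACE}
#
# Create the deployment and service, then watch the pod come up:
#   kubectl apply -f egsshd-deployment.yaml
#   kubectl get pods
#
# Grab the external IP of the front-end service and SSH in:
#   kubectl get svc
#   ssh root@<EXTERNAL-IP>
echo "context: ${NAMESPACE}"
```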