Cross Cloud Kubernetes

Last year I wrote an article about setting up Docker Swarm on my local home network, Taming the Docker Swarm - Part 1. Since then I have been looking at Kubernetes and have now replaced my local cluster as well as created a cross cloud cluster on the Internet.

This post shows how I have got a cross cloud Kubernetes cluster up and running using a VPN to connect all the nodes together.


When Docker announced that they were going to support Kubernetes I thought it was time that I started to play around with it. Another reason was that, at Chef, we have been working to get Habitat working easily and efficiently in Kubernetes and I wanted to understand how it all worked.

One of the first things I did was read through the excellent Kubernetes Up & Running by Kelsey Hightower. This explains the different types of resources in Kubernetes and the concepts behind them. The appendix, which describes how to configure several Raspberry Pi computers into a Kubernetes cluster, was invaluable.

After this I replaced my Docker Swarm cluster at home and got all my services up and running again, e.g. Elasticsearch, Plex and RequestBin. Of course, as you get used to new things they start to snowball, and I found myself wanting to play with things like OpenFaaS, which was only possible once I had got Kubernetes working, both in my head and on computers.

I am in the fortunate position that I am able to tinker with lots of different cloud providers, and all of them (Azure, GCP and AWS) have a Kubernetes offering. However, as I had my own dedicated servers as well as accounts in AWS and Azure, I wondered if it was possible to create a cluster that worked as one across all these different hosts.

So why did I go through all of this? Two reasons mainly, the first being that I wanted to utilise the servers that I have running a bit more efficiently. I host websites for friends and family as well as this blog. Secondly I like to tinker with things and understand how they work. As I was playing with Kubernetes I wanted to see what it can do and how I can break it and then fix it. I have ended up with two Kubernetes clusters, one at home for “dev” work and one on the Internet for live websites.

This is the first of a few posts about my journey into Kubernetes. I have the following posts planned:

  • Blogging on the Blog that Habitat Built
  • Dynamic storage using GlusterFS
  • Kubernetes helper Cookbook



In order to complete the setup of this cross cloud cluster I needed to connect the nodes together without going over the public Internet. I have played around with ZeroTier VPN before and decided to use it again here. Although I have not tried it there is no reason that something like OpenVPN or similar could not be used.

If using ZeroTier then an account is required and a new network needs to be set up. This is all done through the ZeroTier VPN website.

For the purposes of this article the ID of the network that I created is referred to as <NETWORK_ID>.


As I have mentioned, the servers involved in my Kubernetes cluster are in different cloud platforms and geographic locations. I have four machines in the cluster:

Name           Role         OS             Location            VPN IP Address
k8s-ctrl-1     Controller   Ubuntu 18.04   AWS
k8s-worker-1   Worker       Ubuntu 18.04   Azure
k8s-worker-2   Worker       Ubuntu 18.04   Germany (Hetzner)
k8s-worker-3   Worker       Ubuntu 18.04   Germany (Hetzner)

All the machines are on the public Internet; crucially, however, none of the internal Kubernetes traffic runs over it. All of that traffic runs over a ZeroTier VPN network.


To ease the constant teardown and rebuild that I was doing, I wrote a Chef cookbook to do it all for me. I will be open sourcing it, but it needs a lot of tidying up to make it respectable.

All of the machines are installed with stock Ubuntu 18.04. Each one had Kubernetes, Docker and ZeroTier installed in the same way.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
curl -s https://install.zerotier.com | sudo bash
sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni

Kubernetes does not support running on machines with swap enabled, so it needs to be disabled. To do this, edit the /etc/fstab file so that the swap line is commented out. If the machine is not going to be rebooted at this point, also run the command swapoff -a.
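The two steps above can be sketched as follows (a minimal sketch; it assumes the swap entries in /etc/fstab contain the word "swap"):

```shell
# Comment out any uncommented swap entries so swap stays off after a reboot
sudo sed -i.bak '/\bswap\b/ s/^[^#]/#&/' /etc/fstab

# Turn swap off immediately, without a reboot
sudo swapoff -a

# The Swap line should now report zero
free -h
```

The kubelet will refuse to start (or kubeadm's preflight checks will fail) while swap is still active, so it is worth verifying this before going any further.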

Connect the machine to the ZeroTier VPN using the following command

sudo zerotier-cli join <NETWORK_ID>

More likely than not the network that has been created will be a private one, so the machine will need to be authorised in the ZeroTier portal.
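Once the node has been authorised in the portal, its membership and assigned VPN address can be checked from the node itself. A sketch using the standard zerotier-cli client:

```shell
# Confirm the node is online and list the networks it has joined;
# the network status should read OK once the node has been authorised
sudo zerotier-cli status
sudo zerotier-cli listnetworks

# ZeroTier creates a virtual interface whose name begins with "zt";
# the VPN IP address assigned to this node lives on that interface
iface=$(ls /sys/class/net | grep '^zt' | head -n 1)
ip -4 addr show dev "$iface"
```

The address shown on the zt interface is the one that will be used for all of the Kubernetes traffic later on.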


The first thing to do when setting up the cluster is to create the controller for the cluster. This will generate the joining key that worker nodes need to join the cluster and deploy the networking overlay.

The following command shows how to initialise the cluster. Note that the apiserver-advertise-address is the VPN IP address of the controller.

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address <CONTROLLER_VPN_IP>

The --pod-network-cidr has been left at 10.244.0.0/16, the default that is required if the Flannel network is used.

The output of this command will show the token that has been generated, which will be required to join the worker nodes to the cluster.
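Bootstrap tokens expire after 24 hours by default, so if the token is lost or has gone stale a fresh one can be generated on the controller. A sketch using kubeadm's token subcommands:

```shell
# List the bootstrap tokens currently known to the cluster
sudo kubeadm token list

# Create a fresh token and print the complete join command for worker nodes
sudo kubeadm token create --print-join-command
```

This saves having to tear down and re-initialise the controller just because a token has expired.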

Now is the time to add the pod network. In this case I will be using kube-router to do the proxying and provide the pod network. The reason for using it is that the out-of-the-box kube-proxy was not able to route traffic over the VPN; no matter what I did I could not convince it to handle the routing properly. I asked about it on Stack Overflow (How does DNS resolution work on Kubernetes with multiple networks?) and whilst the debugging information was very useful it did not solve my problem. After some searching I came across kube-router, which uses IP Virtual Server (IPVS) rather than iptables to manage the rules for services and containers. When I replaced kube-proxy with it, everything started to work as I wanted.

kube-router has the option of providing just a service proxy, or the pod network for the cluster as well. I used all of the options that are available, which are:

  • Service Proxy
  • Router
  • Firewall

This can be easily deployed using the YAML file from the kube-router GitHub repo and the kubectl command:

kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

After this has been deployed, the old kube-proxy needs to be removed, as well as the IPTables rules that have been configured. By doing this all now on the master, there is no cleanup work to do on the worker nodes as they have not joined the cluster yet.

kubectl -n kube-system delete ds kube-proxy
sudo docker run --privileged -v /lib/modules:/lib/modules --net=host k8s.gcr.io/kube-proxy-amd64:v1.11.2 kube-proxy --cleanup

Be sure to change the version of the kube-proxy image in the above command to match the version of Kubernetes that is running.

Now the cluster has a router and a pod network the worker nodes can join the cluster.

Worker Nodes

On each of the three worker nodes, the kubeadm join command is used. The key thing to note here is that the address for the API server is the controller's VPN address.

sudo kubeadm join <CONTROLLER_VPN_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>

That's it; that is all that is required to create a cross cloud Kubernetes cluster.

Is it working?

There are several ways to check that this is all working, however the two best methods are to check that kubectl lists all of the nodes and to perform a simple deployment to the cluster.

Displaying Cluster Nodes

When the following command is run the nodes in the cluster will be listed.

kubectl get nodes -o wide

k8s-ctrl-1       Ready     master    56d       v1.11.2   <none>        Ubuntu 18.04.1 LTS   4.15.0-1021-aws     docker://18.6.1
k8s-worker-1     Ready     <none>    56d       v1.11.2   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.6.1
k8s-worker-2     Ready     <none>    56d       v1.11.2   <none>        Ubuntu 18.04.1 LTS   4.15.0-34-generic   docker://18.6.1
k8s-worker-3     Ready     <none>    56d       v1.11.2   <none>        Ubuntu 18.04.1 LTS   4.15.0-34-generic   docker://18.6.1

My laptop is running the ZeroTier VPN client so I can run this command from my local machine. To make it work, just copy the configuration file /etc/kubernetes/admin.conf from the controller and place it in ~/.kube/config.
Now the kubectl command will reference this file and communicate with the controller over the VPN.
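A sketch of copying the kubeconfig; the user and host are placeholders for an account on the controller and its VPN address:

```shell
# Copy the admin kubeconfig from the controller over the VPN
mkdir -p ~/.kube
scp <USER>@<CONTROLLER_VPN_IP>:/etc/kubernetes/admin.conf ~/.kube/config

# kubectl now talks to the API server across the ZeroTier network
kubectl cluster-info
kubectl get nodes
```

This works because the admin.conf generated by kubeadm points at the apiserver-advertise-address given earlier, which is the controller's VPN IP.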

Deploying to the Cluster

A very simple way of testing that everything is working properly is to deploy a busybox container in interactive mode. This will show that the cluster is performing correctly and tests can be run within the container to ensure that DNS is working as expected.

kubectl run -i --tty debug --image busybox --restart Never -- sh
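From the shell inside the busybox pod, a couple of quick checks confirm that cluster DNS and service resolution work across the VPN:

```shell
# Inside the busybox container: resolve the Kubernetes API service.
# Service names follow the <service>.<namespace> pattern.
nslookup kubernetes.default

# Resolve the DNS service itself in the kube-system namespace
nslookup kube-dns.kube-system
```

If both names resolve to cluster service IPs then kube-dns and the kube-router pod network are doing their jobs.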



DNS Resolution

After I had set up the cluster for the first time I was not able to resolve services and containers within the cluster, even though kube-dns appeared to be working OK. I was stumped on this for ages, and it was really annoying. The answer turned out to be systemd.

systemd-resolved has a DNS cache built into it, so the /etc/resolv.conf file points to a local stub address (127.0.0.53) for DNS resolution. This all works OK when just running apps on the machine; however, when a container is created it pulls in the /etc/resolv.conf from the host machine. This means that the container looks to itself for a DNS server, which is not going to work. The solution is to change the symlink that is in place for /etc/resolv.conf, which reverts the machine to the old behaviour.

sudo rm /etc/resolv.conf
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
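After the symlink has been replaced, a quick check shows whether the change has taken effect; the systemd stub resolver address is 127.0.0.53, so that address should no longer appear:

```shell
# The link should now point at the full resolver configuration
readlink /etc/resolv.conf

# The nameserver entries should be real upstream servers, not 127.0.0.53
grep nameserver /etc/resolv.conf
```

Containers created after this change will inherit the real upstream nameservers from the host.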

Whilst I was investigating this issue I was lucky enough to chat about it with Ian Miell, who presented at the Chef Community Summit in London in October 2018.
He has written a series of articles about how DNS resolution works on Linux; the first of these is Anatomy of a Linux DNS Lookup - Part I.

Future Development

As you will no doubt have picked up, this is not exactly production grade yet, but I hope to make it so one day. Here are some of the things that I will be looking to sort out:

  • Multiple Kubernetes controllers
  • Managing the rules for inbound traffic in iptables
  • Exposing services without a cloud load-balancer