
Building Istio with Minikube-in-a-Container and Jenkins

Posted on Jan 23, 2018 by Andrew Jenkins

Aspen Mesh provides a supported distribution of Istio, which means that we need to be able to test and release bugfixes even if they are out-of-cadence with the upstream Istio project. To do this we’ve developed our own build and test infrastructure. Now that we’ve got many of these pieces up and running, we figured some parts might be useful if you are also interested in CI for Istio but not committed to CircleCI or GKE.

This post will show how we made an updated Minikube-in-a-Container and a Jenkins pipeline that uses it to build and test Istio. If you want, you can docker run the minikube container right now and get a functioning Kubernetes cluster inside the container that you can throw away when you’re done. The Jenkins bits will help you build Istio today and also give you a head start if you want to build containers inside of containers.

Minikube-in-a-Container

This part describes how we made a Minikube-in-a-Container that we use to run the Istio smoke tests during a build. This isn’t our idea - we started with localkube-dind. We couldn’t get it working out of the box (we think due to a little bit of drift between localkube and minikube), so this is a record of what we changed to get it working for us. We also added some options and tooling so that we can use Istio in the resulting container. Nothing too fancy, but we’re hoping it gives you a head start if you’re heading down a similar path.

Minikube may be familiar to you as a project to start up your own Kubernetes cluster in a VM that you can carry around on your laptop. This approach is very convenient, but there are some situations where you can’t or don’t want to provision a VM, like cloud providers that don’t offer nested virtualization. Since docker can now run inside of docker, we decided to try making our own Kubernetes cluster inside of a docker container. An ephemeral Kubernetes container is easy to start, run a few tests in, and throw away when you’re done, which makes it a good fit for CI.

In our model, the Kubernetes cluster creates child docker containers (not sibling containers, in the lingo of Jérôme Petazzoni’s consideration). We did this intentionally - we preferred the isolation of child containers over sharing the docker build cache. But you should check out Jérôme’s article before committing to DinD for your application - maybe DooD (Docker-outside-of-Docker) is better for you. FYI - we’ve avoided the “it gets worse” part, and it looks like the “bad” and “ugly” parts are fixed/avoidable for us.

When you start a docker container, you’re asking docker to create and set up a few namespaces in the kernel, and then start your container inside those namespaces. A namespace is a sandbox - when you’re inside the namespace, you can generally only see other things that are also inside it. Think of it as a chroot, but for more than just filesystems - PIDs, network interfaces, etc. If you start a docker container with --privileged, the namespaces that are created get extra privileges, like the ability to create more child namespaces. That’s the trick at the core of docker-in-docker. For more details, again, Jérôme’s the expert - check out his explanation (complete with Xzibit memes) here.
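
If you want to see that nested-container trick in isolation, before minikube enters the picture, the stock docker:dind image is enough. This is purely an illustration (it’s not our image, and the container name is arbitrary):

docker run --privileged -d --name dind-demo docker:dind   # privileged, so it can run its own dockerd
sleep 10                                                  # give the inner dockerd a moment to come up
docker exec dind-demo docker run --rm hello-world         # a container inside a container
docker rm -f dind-demo                                    # throw the whole thing away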

OK, so here’s the flow:

  1. Build a container that’s got docker, minikube, kubectl and dependencies installed.

  2. Add a “fake-systemctl” shim to trick Minikube into running without a real systemd installation.

  3. Start the container with --privileged.

  4. Have the container start its own “inner” dockerd - this is the DinD part.

  5. Have the container start minikube --vm-driver=none so that minikube (in the container) talks to the dockerd running right alongside it (steps 4 and 5 are sketched just after this list).
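
Steps 4 and 5 are where the interesting wiring happens. Here’s a rough sketch of what the container’s entrypoint does - the real script lives in the repo linked below, and the exact flags and environment variables here are illustrative guesses, not the production version:

#!/bin/bash
# Sketch of the entrypoint: start the inner dockerd, then point minikube at it.

dockerd --host=unix:///var/run/docker.sock --storage-driver=overlay2 &

# Wait for the inner daemon to answer before minikube needs it.
until docker info >/dev/null 2>&1; do sleep 1; done

# --vm-driver=none tells minikube to use the dockerd we just started instead of booting a VM.
export CHANGE_MINIKUBE_NONE_USER=true
minikube start --vm-driver=none

# Hand off to whatever command the container was started with.
exec "$@"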

All you have to do is docker run --privileged this container and you’re ready to go with kubectl. If you want, you can run kubectl inside the container and get a truly throw-away environment.

You can try it now (start the container in one terminal, exec in from another; the lines prefixed with # are run at the root prompt inside the container):

docker run --privileged --rm -it quay.io/aspenmesh/minikube-dind
docker exec -it <container> /bin/bash
# kubectl get nodes
<....>
# kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
# kubectl exec -it shell-demo -- /bin/bash

When you exit, the --rm flag means that docker will tear down and throw away everything for you.

For heavier usage, you’ll probably want to “docker cp” the kubeconfig file to your host and talk to Kubernetes inside the container over the exposed kube API port 8443.
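
Something along these lines works; the kubeconfig path inside the container and the TLS details are assumptions you may need to adjust for your setup:

# Publish the kube API port when starting the container.
docker run --privileged -d -p 8443:8443 --name minikube quay.io/aspenmesh/minikube-dind

# Copy out the kubeconfig that minikube wrote inside the container (path is an assumption).
docker cp minikube:/root/.kube/config ./kubeconfig

# The copied file may reference in-container certificate paths and the container's internal
# address; edit it (or copy the certs out too) so it points at localhost:8443 on the host.
kubectl --kubeconfig ./kubeconfig get nodes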

The Dockerfile that makes it go, along with the supporting scripts, is available to clone here.

Jenkins for Istio

Now that we’ve got Kubernetes-in-a-container, we can use it for our Istio builds. Dockerized build systems are nice because developers can quickly create higher-fidelity replicas of the CI build. Here’s an outline of our CI architecture for Istio builds:

  • Jenkins worker: This is a VM started by Jenkins for running builds. It may be shared by other builds at the same time. It’s important that any tooling we install on the worker is locally-scoped (so it doesn’t interfere with other builds) and ephemeral (we autoscale Jenkins workers to save costs).

  • Minikube container: The first thing we do is build and enter the Minikube container we talked about above. The rest of the build proceeds inside this container (or its children). The Jenkins workspace is mounted here. Jenkins’ docker plugin takes care of tearing this container down on success or failure, which is all we need to clean up all the running Kubernetes and Istio components.

  • Builder container: This is a container with build tools like the golang toolchain installed. It’s where we compile Istio and build containers for the Istio components. We test those components in the minikube container, and if they pass, declare the build a success and push the containers to our registry.

Most of the Jenkinsfile is about getting those pieces set up. After that, we run the same steps to build Istio that you would on your laptop: make depend, make build, make test.
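
As a rough illustration of what that boils down to inside the builder container (the image name, mount paths, and socket sharing below are placeholders and assumptions, not our actual Jenkinsfile values):

# Run the build in a throwaway builder container that shares the Jenkins workspace.
# Mounting the inner docker socket is one way to let image-building targets land
# their output in the same daemon minikube is using.
docker run --rm \
  -v "$(pwd)":/go/src/istio.io/istio \
  -w /go/src/istio.io/istio \
  -v /var/run/docker.sock:/var/run/docker.sock \
  istio-builder:latest \
  bash -c "make depend && make build && make test"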

Check out the Jenkinsfile here.

If you want to grab the files from this post and the supporting scripts, go here.
