AspenMesh provides a supported distribution of Istio, which means that we need to be able to test and release bugfixes even if they are out-of-cadence with the upstream Istio project. To do this we’ve developed our own build and test infrastructure. Now that we’ve got many of these pieces up and running, we figured some parts might be useful if you are also interested in CI for Istio but not committed to Circle CI or GKE.

This post will show how we made an updated Minikube-in-a-Container and a Jenkins pipeline that uses it to build and test Istio. If you want, you can docker run the minikube container right now and get a functioning Kubernetes cluster inside the container that you can throw away when you’re done. The Jenkins bits will help you build Istio today and also give you a head-start if you want to build containers inside of containers.

Minikube-in-a-Container

This part describes how we made a Minikube-in-a-container that we use to run the Istio smoke tests during a build. This isn’t our idea – we started with localkube-dind. We couldn’t get it working out-of-the-box, we think due to a little bit of drift between localkube and minikube, so this is a record of what we changed to get it working for us. We also added some options and tooling so that we can use Istio in the resulting container. Nothing too fancy but we’re hoping it gives you a head start if you’re heading down a similar path.

Minikube may be familiar to you as a project to start up your own Kubernetes cluster in a VM that you can carry around on your laptop. This approach is very convenient, but there are some situations where you can’t or don’t want to provision a VM, like cloud providers that don’t offer nested virtualization. Since docker can now run inside of docker, we decided to try making our own Kubernetes cluster inside a docker container. An ephemeral Kubernetes container is easy to start, run a few tests against, and throw away when you’re done, which makes it a good fit for CI.

In our model, the Kubernetes cluster creates child docker containers (not sibling containers, in the lingo of Jérôme Petazzoni’s analysis). We did this intentionally – we preferred the isolation of child containers over sharing the docker build cache. But you should check out Jérôme’s article before committing to DinD for your application – maybe DooD (Docker-outside-of-Docker) is better for you. FYI – we’ve avoided the “it gets worse” part, and it looks like the “bad” and “ugly” parts are fixed/avoidable for us.

When you start a docker container, you’re asking docker to create and set up a few namespaces in the kernel, and then start your container inside those namespaces. A namespace is a sandbox – when you’re inside the namespace, you can generally only see other things that are also inside it. It’s like a chroot, but for more than just filesystems – PIDs, network interfaces, etc. If you start a docker container with --privileged, the namespaces that are created get extra privileges, like the ability to create more child namespaces. That’s the trick at the core of docker-in-docker. For more details, Jérôme’s the expert – check out his explanation (complete with Xzibit memes) here.
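If you want to poke at this yourself, a process’s namespaces are visible under /proc, and the effect of --privileged is easy to see with plain docker on a Linux host. This is just an illustration; nothing in it is specific to our images, and the exact error text may vary with your docker version:

# Every process's namespaces show up as symlinks under /proc/<pid>/ns.
ls -l /proc/$$/ns

# Without --privileged, creating a child PID namespace is blocked...
docker run --rm debian:jessie unshare --pid --fork echo hello

# ...but with --privileged it succeeds, which is the capability DinD relies on.
docker run --rm --privileged debian:jessie unshare --pid --fork echo hello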

OK, so here’s the flow:

  1. Build a container that’s got docker, minikube, kubectl and dependencies installed.
  2. Add a “fake-systemctl” shim to trick Minikube into running without a real systemd installation.
  3. Start the container with --privileged.
  4. Have the container start its own “inner” dockerd – this is the DinD part.
  5. Have the container start minikube --vm-driver=none so that minikube (in the container) talks to the dockerd running right alongside it (a simplified sketch of such a start script follows this list).
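The real start.sh (included with the supporting scripts linked at the end of this post) also handles logging and a few readiness checks, but the heart of steps 4 and 5 is roughly the following. Treat it as a simplified sketch rather than the actual script; in particular, the kubeconfig copy at the end is our assumption about how /kubeconfig gets populated:

#!/bin/bash
# Simplified sketch of the container entrypoint.
set -e

# Step 4: start the inner docker daemon (the DinD part). It listens on the
# usual unix socket and on tcp/2375 so peer containers can reach it too.
dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 \
  > /var/log/docker.log 2>&1 &
until docker info > /dev/null 2>&1; do sleep 1; done

# Step 5: start minikube against that local daemon instead of booting a VM.
/minikube start --vm-driver=none > /var/log/minikube-start.log 2>&1

# Publish the kubeconfig where later steps look for it.
cp "$HOME/.kube/config" /kubeconfig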

All you have to do is docker run --privileged this container and you’re ready to go with kubectl. If you want, you can run kubectl inside the container and get a truly throw-away environment.

You can try it now:

docker run --privileged --rm -it quay.io/aspenmesh/minikube-dind
docker exec -it <container> /bin/bash
# kubectl get nodes
<....>
# kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
# kubectl exec -it shell-demo -- /bin/bash

When you exit, the --rm flag means that docker will tear down and throw away everything for you.

For heavier usage, you’ll probably want to “docker cp” the kubeconfig file to your host and talk to kubernetes inside the container over the exposed kube API port 8443.
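For example, something like this works from the host, assuming you publish the API port when starting the container. The /kubeconfig path and the 127.0.0.1 server address are what the image writes (see the Jenkinsfile below); the container name mk is just an example, and depending on your network setup you may need the same server-address rewrite the Jenkinsfile does for peer containers:

# Run detached this time, with the kube API port published to the host.
docker run --privileged --rm -d -p 8443:8443 --name mk quay.io/aspenmesh/minikube-dind

# Copy out the kubeconfig the container generates (give minikube a minute to come up).
docker cp mk:/kubeconfig ./kubeconfig

# Talk to the apiserver running inside the container.
KUBECONFIG=$PWD/kubeconfig kubectl get nodes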

Here’s the Dockerfile that makes it go (you can clone this and the supporting scripts here):

# Portions Copyright 2016 The Kubernetes Authors All rights reserved.
# Portions Copyright 2018 AspenMesh
#
# Licensed under the Apache License, Version 2.0 (the “License”);
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an “AS IS” BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# https://github.com/kubernetes/minikube/tree/master/deploy/docker/localkube-dind
FROM debian:jessie
# Install minikube dependencies
RUN DEBIAN_FRONTEND=noninteractive apt-get update -y && \
    DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
    iptables \
    ebtables \
    ethtool \
    ca-certificates \
    conntrack \
    socat \
    git \
    nfs-common \
    glusterfs-client \
    cifs-utils \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common \
    bridge-utils \
    ipcalc \
    aufs-tools \
    sudo \
    && DEBIAN_FRONTEND=noninteractive apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install docker
RUN \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    apt-key export "9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88" | gpg - && \
    echo "deb [arch=amd64] https://download.docker.com/linux/debian jessie stable" >> \
    /etc/apt/sources.list.d/docker.list && \
    DEBIAN_FRONTEND=noninteractive apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
    docker-ce \
    && DEBIAN_FRONTEND=noninteractive apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /var/lib/docker
EXPOSE 2375
# Install minikube
RUN curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64 && chmod +x minikube
ENV MINIKUBE_WANTUPDATENOTIFICATION=false
ENV MINIKUBE_WANTREPORTERRORPROMPT=false
ENV CHANGE_MINIKUBE_NONE_USER=true
# minikube --vm-driver=none checks systemctl before starting. Instead of
# setting up a real systemd environment, install this shim to tell minikube
# what it wants to know: localkube isn't started yet.
COPY fake-systemctl.sh /usr/local/bin/systemctl
EXPOSE 8443
# Install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.1/bin/linux/amd64/kubectl && \
chmod a+x kubectl && \
mv kubectl /usr/local/bin
# Copy local start.sh
COPY start.sh /start.sh
RUN chmod a+x /start.sh
# If nothing else specified, start up docker and kubernetes.
CMD /start.sh & sleep 4 && tail -F /var/log/docker.log /var/log/dind.log /var/log/minikube-start.log
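If you’d rather build the image yourself than pull ours, it’s a plain docker build; the only wrinkle is that fake-systemctl.sh and start.sh from the supporting scripts need to sit next to the Dockerfile in the build context (the image tag below is just an example):

# Build from the directory containing Dockerfile.minikube and the support scripts.
docker build -f Dockerfile.minikube -t minikube-dind .

# Then run it just like the published image.
docker run --privileged --rm -it minikube-dind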

Jenkins for Istio

Now that we’ve got Kubernetes-in-a-container, we can use it for our Istio builds. Dockerized build systems are nice because developers can quickly create higher-fidelity replicas of the CI build (a rough by-hand equivalent is sketched after the list below). Here’s an outline of our CI architecture for Istio builds:

  • Jenkins worker: This is a VM started by Jenkins for running builds. It may be shared by other builds at the same time. It’s important that any tooling we install on the worker is locally-scoped (so it doesn’t interfere with other builds) and ephemeral (we autoscale Jenkins workers to save costs).
  • Minikube container: The first thing we do is build and enter the Minikube container we talked about above. The rest of the build proceeds inside this container (or its children). The Jenkins workspace is mounted here. Jenkins’ docker plugin takes care of tearing this container down on success or failure, which is all it takes to clean up the running Kubernetes and Istio components.
  • Builder container: This is a container with build tools like the golang toolchain installed. It’s where we compile Istio and build containers for the Istio components. We test those components in the minikube container, and if they pass, declare the build a success and push the containers to our registry.
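If you want that higher-fidelity local replica, the moving pieces are just two linked containers. A rough by-hand equivalent of what the Jenkinsfile below automates might look like this; the image names and workspace path are illustrative, and the kubeconfig rewrite mirrors the one in the Jenkinsfile:

# 1. Start the minikube container; peers will reach it by the name "kubernetes".
docker run --privileged -d --name kubernetes minikube-dind

# 2. Copy out its kubeconfig (once minikube is up) and point it at that name.
docker cp kubernetes:/kubeconfig ./kubeconfig
sed -i'' -e 's;server: https://127.0.0.1:8443;server: https://kubernetes:8443;' kubeconfig

# 3. Enter a builder container linked to it, with the same env the CI job sets.
docker run --rm -it --link kubernetes:kubernetes \
  -v "$PWD:/workspace" -w /workspace \
  -e KUBECONFIG=/workspace/kubeconfig \
  -e DOCKER_HOST=tcp://kubernetes:2375 \
  istio-builder /bin/bash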

Most of the Jenkinsfile is about getting those pieces set up. After that, we run the same steps to build Istio that you would on your laptop: make depend, make build, make test.

Check out the Jenkinsfile here:

node('docker') {
  properties([disableConcurrentBuilds()])
  wkdir = "src/istio.io/istio"
  stage('Checkout') {
    checkout scm
  }
  // withRegistry writes to /home/ubuntu/.dockercfg outside of the container
  // (even if you run it inside the docker plugin) which won't be visible
  // inside the builder container, so copy them somewhere that will be
  // visible. We will symlink to .dockercfg only when needed to reduce
  // the chance of accidentally using the credentials outside of push
  docker.withRegistry('https://quay.io', 'name-of-your-credentials-in-jenkins') {
    stage('Load Push Credentials') {
      sh "cp ~/.dockercfg ${pwd()}/.dockercfg-quay-creds"
    }
  }
  k8sImage = docker.build(
    "k8s-${env.BUILD_TAG}",
    "-f $wkdir/.jenkins/Dockerfile.minikube " +
    "$wkdir/.jenkins/"
  )
  k8sImage.withRun('--privileged') { k8s ->
    stage('Get kubeconfig') {
      sh "docker exec ${k8s.id} /bin/bash -c \"while ! [ -e /kubeconfig ]; do echo waiting for kubeconfig; sleep 3; done\""
      sh "rm -f ${pwd()}/kubeconfig && docker cp ${k8s.id}:/kubeconfig ${pwd()}/kubeconfig"
      // Replace "127.0.0.1" with the path that peer containers can use to
      // get to minikube.
      // minikube will bake certs including the subject "kubernetes" so
      // the kube-api server needs to be reachable from the client's concept
      // of "https://kubernetes:8443" or kubectl will refuse to connect.
      sh "sed -i'' -e 's;server: https://127.0.0.1:8443;server: https://kubernetes:8443;' kubeconfig"
    }
    builder = docker.build(
      "istio-builder-${env.BUILD_TAG}",
      "-f $wkdir/.jenkins/Dockerfile.jenkins-build " +
      "--build-arg UID=`id -u` --build-arg GID=`id -g` " +
      "$wkdir/.jenkins"
    )
    builder.inside(
      "-e GOPATH=${pwd()} " +
      "-e HOME=${pwd()} " +
      "-e PATH=${pwd()}/bin:\$PATH " +
      "-e KUBECONFIG=${pwd()}/kubeconfig " +
      "-e DOCKER_HOST=\"tcp://kubernetes:2375\" " +
      "--link ${k8s.id}:kubernetes"
    ) {
      stage('Check') {
        sh "ls -al"
        // If there are old credentials from a previous build, destroy them --
        // we will only load them when needed in the push stage
        sh "rm -f ~/.dockercfg"
        sh "cd $wkdir && go get -u github.com/golang/lint/golint"
        sh "cd $wkdir && make check"
      }
      stage('Build') {
        sh "cd $wkdir && make depend"
        sh "cd $wkdir && make build"
      }
      stage('Test') {
        sh "cp kubeconfig $wkdir/pilot/platform/kube/config"
        sh """PROXYVERSION=\$(grep envoy-debug $wkdir/pilot/docker/Dockerfile.proxy_debug |cut -d: -f2) &&
              PROXY=debug-\$PROXYVERSION &&
              curl -Lo - https://storage.googleapis.com/istio-build/proxy/envoy-\$PROXY.tar.gz | tar xz &&
              mv usr/local/bin/envoy ${pwd()}/bin/envoy &&
              rm -r usr/"""
        sh "cd $wkdir && make test"
      }
      stage('Push') {
        sh "cd && ln -sf .dockercfg-quay-creds .dockercfg"
        sh "cd $wkdir && " +
           "make HUB=yourhub TAG=$BUILD_TAG push"
        gitTag = getTag(wkdir)
        if (gitTag) {
          sh "cd $wkdir && " +
             "make HUB=yourhub TAG=$gitTag push"
        }
        sh "cd && rm .dockercfg"
      }
    }
  }
}

String getTag(String wkdir) {
  return sh(
    script: "cd $wkdir && " +
            "git describe --exact-match --tags \$GIT_COMMIT || true",
    returnStdout: true
  ).trim()
}

If you want to grab the files from this post and the supporting scripts, go here.