
January 23, 2018

Building Istio with Minikube-in-a-Container and Jenkins

 

Aspen Mesh provides a supported distribution of Istio, which means we need to be able to test and release bugfixes even if they are out of cadence with the upstream Istio project. To do this we’ve developed our own build and test infrastructure. Now that we’ve got many of these pieces up and running, we figured some parts might be useful if you are also interested in CI for Istio but aren’t committed to CircleCI or GKE.

This post will show how we made an updated Minikube-in-a-Container and a Jenkins pipeline that uses it to build and test Istio. If you want, you can docker run the minikube container right now and get a functioning Kubernetes cluster inside the container that you can throw away when you’re done. The Jenkins bits will help you build Istio today and also give you a head-start if you want to build containers inside of containers.

Minikube-in-a-Container

This part describes how we made a Minikube-in-a-container that we use to run the Istio smoke tests during a build. This isn’t our idea – we started with localkube-dind. We couldn’t get it working out-of-the-box, we think due to a little bit of drift between localkube and minikube, so this is a record of what we changed to get it working for us. We also added some options and tooling so that we can use Istio in the resulting container. Nothing too fancy but we’re hoping it gives you a head start if you’re heading down a similar path.

Minikube may be familiar to you as a project that starts up your own Kubernetes cluster in a VM you can carry around on your laptop. That approach is very convenient, but there are situations where you can’t or don’t want to provision a VM, like cloud providers that don’t offer nested virtualization. Since docker can now run inside of docker, we decided to try making our own Kubernetes cluster inside a docker container. An ephemeral Kubernetes container is easy to start, run a few tests against, and throw away when you’re done, which makes it a good fit for CI.

In our model, the Kubernetes cluster creates child docker containers (not sibling containers, in the lingo of Jérôme Petazzoni’s consideration). We did this intentionally – we preferred the isolation of child containers over sharing the docker build cache. But you should check out Jérôme’s article before committing to DinD for your application – maybe DooD (Docker-outside-of-Docker) is better for you. FYI – we’ve avoided the “it gets worse” part, and it looks like the “bad” and “ugly” parts are fixed/avoidable for us.
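
For contrast, here’s a rough sketch of the two approaches (the DooD image and command are our illustration, not part of our setup): DinD gives the build its own isolated daemon, while DooD shares the host’s daemon and its build cache.

# DinD (what we chose): a privileged container runs its own dockerd, and
# any containers it starts are children that disappear along with it.
docker run --privileged --rm -it quay.io/aspenmesh/minikube-dind

# DooD (the alternative): mount the host's docker socket, so "inner"
# containers are really siblings created by the host's daemon.
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps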

When you start a docker container, you’re asking docker to create and set up a few namespaces in the kernel, and then start your container inside those namespaces. A namespace is a sandbox – when you’re inside the namespace, you can generally only see other things that are also inside it. Think of a chroot, but for more than just the filesystem: PIDs, network interfaces, etc. If you start a docker container with --privileged, the namespaces that are created get extra privileges, like the ability to create more child namespaces. That’s the trick at the core of docker-in-docker. For more details, again, Jérôme’s the expert – check out his explanation (complete with Xzibit memes) here.
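
If you want a feel for what a namespace sandbox looks like without docker in the mix, util-linux’s unshare can create one directly (this one-liner is our illustration, not part of the build):

# Run ps in fresh PID and mount namespaces with a private /proc.
# It reports only itself -- the rest of the host's processes are invisible.
sudo unshare --pid --fork --mount-proc ps aux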

OK, so here’s the flow:

  1. Build a container that’s got docker, minikube, kubectl and dependencies installed.
  2. Add a “fake-systemctl” shim to trick Minikube into running without a real systemd installation.
  3. Start the container with --privileged.
  4. Have the container start its own “inner” dockerd – this is the DinD part.
  5. Have the container start minikube --vm-driver=none so that minikube (in the container) talks to the dockerd running right alongside it (a sketch of this startup script follows the list).
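
The real start.sh lives in the repo linked below; a minimal sketch of steps 4 and 5, with the error handling stripped out, might look something like this:

#!/bin/bash
# Sketch only -- the actual start.sh handles more setup and edge cases.
# Step 4: start the inner dockerd (the DinD part), listening on the local
# socket and on TCP 2375 for peer containers.
dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 \
  > /var/log/docker.log 2>&1 &
# Wait until the inner daemon answers before continuing.
until docker info > /dev/null 2>&1; do sleep 1; done
# Step 5: start minikube with no VM driver so it drives the dockerd
# running right alongside it in this container.
/minikube start --vm-driver=none > /var/log/minikube-start.log 2>&1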

All you have to do is docker run --privileged this container and you’re ready to go with kubectl. If you want, you can run kubectl inside the container and get a truly throw-away environment.

You can try it now:

docker run --privileged --rm -it quay.io/aspenmesh/minikube-dind
docker exec -it <container> /bin/bash
# kubectl get nodes
<....>
# kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/shell-demo.yaml
# kubectl exec -it shell-demo -- /bin/bash

When you exit, the --rm flag means that docker will tear down and throw away everything for you.

For heavier usage, you’ll probably want to “docker cp” the kubeconfig file to your host and talk to Kubernetes inside the container over the exposed kube API port 8443.
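
For example (the container name and published port here are our own choices; the kubeconfig path and the wait loop mirror what the Jenkinsfile below does):

# Publish the kube API port when starting the cluster container.
docker run -d --privileged --name minikube -p 8443:8443 quay.io/aspenmesh/minikube-dind
# Wait for the cluster to come up; minikube writes /kubeconfig when it is ready.
docker exec minikube /bin/bash -c 'while ! [ -e /kubeconfig ]; do sleep 3; done'
# Copy the kubeconfig out to the host.
docker cp minikube:/kubeconfig ./kubeconfig
# The kubeconfig points at https://127.0.0.1:8443, which the published port
# makes reachable from the host.
kubectl --kubeconfig ./kubeconfig get nodes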

Here’s the Dockerfile that makes it go (you can clone this and support scripts here):

# Portions Copyright 2016 The Kubernetes Authors All rights reserved.
# Portions Copyright 2018 AspenMesh
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Based on:
# https://github.com/kubernetes/minikube/tree/master/deploy/docker/localkube-dind
FROM debian:jessie
# Install minikube dependencies
RUN DEBIAN_FRONTEND=noninteractive apt-get update -y && \
DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
iptables \
ebtables \
ethtool \
ca-certificates \
conntrack \
socat \
git \
nfs-common \
glusterfs-client \
cifs-utils \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
bridge-utils \
ipcalc \
aufs-tools \
sudo \
&& DEBIAN_FRONTEND=noninteractive apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install docker
RUN \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
apt-key export "9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88" | gpg - && \
echo "deb [arch=amd64] https://download.docker.com/linux/debian jessie stable" >> \
/etc/apt/sources.list.d/docker.list && \
DEBIAN_FRONTEND=noninteractive apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get -yy -q --no-install-recommends install \
docker-ce \
&& DEBIAN_FRONTEND=noninteractive apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
VOLUME /var/lib/docker
EXPOSE 2375
# Install minikube
RUN curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64 && chmod +x minikube
ENV MINIKUBE_WANTUPDATENOTIFICATION=false
ENV MINIKUBE_WANTREPORTERRORPROMPT=false
ENV CHANGE_MINIKUBE_NONE_USER=true
# minikube --vm-driver=none checks systemctl before starting. Instead of
# setting up a real systemd environment, install this shim to tell minikube
# what it wants to know: localkube isn't started yet.
COPY fake-systemctl.sh /usr/local/bin/systemctl
EXPOSE 8443
# Install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.1/bin/linux/amd64/kubectl && \
chmod a+x kubectl && \
mv kubectl /usr/local/bin
# Copy local start.sh
COPY start.sh /start.sh
RUN chmod a+x /start.sh
# If nothing else specified, start up docker and kubernetes.
CMD /start.sh & sleep 4 && tail -F /var/log/docker.log /var/log/dind.log /var/log/minikube-start.log

Jenkins for Istio

Now that we’ve got Kubernetes-in-a-container, we can use it for our Istio builds. Dockerized build systems are nice because developers can quickly create higher-fidelity replicas of the CI build. Here’s an outline of our CI architecture for Istio builds:

  • Jenkins worker: This is a VM started by Jenkins for running builds. It may be shared by other builds at the same time. It’s important that any tooling we install on the worker is locally-scoped (so it doesn’t interfere with other builds) and ephemeral (we autoscale Jenkins workers to save costs).
  • Minikube container: The first thing we do is build and enter the Minikube container we talked about above. The rest of the build proceeds inside this container (or its children). The Jenkins workspace is mounted here. Jenkins’ docker plugin takes care of tearing this container down on success or failure, which is all we need to do to clean up the running Kubernetes and Istio components.
  • Builder container: This is a container with build tools like the golang toolchain installed. It’s where we compile Istio and build containers for the Istio components. We test those components in the minikube container, and if they pass, declare the build a success and push the containers to our registry.

Most of the Jenkinsfile is about getting those pieces set up. After that, we run the same steps to build Istio that you would run on your laptop: make depend, make build, make test.
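
Inside the builder container, that boils down to what you’d type in a local checkout (the path is illustrative):

cd $GOPATH/src/istio.io/istio
make depend
make build
make test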

Check out the Jenkinsfile here:

node('docker') {
  properties([disableConcurrentBuilds()])
  wkdir = "src/istio.io/istio"
  stage('Checkout') {
    checkout scm
  }
  // withRegistry writes to /home/ubuntu/.dockercfg outside of the container
  // (even if you run it inside the docker plugin) which won't be visible
  // inside the builder container, so copy them somewhere that will be
  // visible. We will symlink to .dockercfg only when needed to reduce
  // the chance of accidentally using the credentials outside of push
  docker.withRegistry('https://quay.io', 'name-of-your-credentials-in-jenkins') {
    stage('Load Push Credentials') {
      sh "cp ~/.dockercfg ${pwd()}/.dockercfg-quay-creds"
    }
  }
  k8sImage = docker.build(
    "k8s-${env.BUILD_TAG}",
    "-f $wkdir/.jenkins/Dockerfile.minikube " +
    "$wkdir/.jenkins/"
  )
  k8sImage.withRun('--privileged') { k8s ->
    stage('Get kubeconfig') {
      sh "docker exec ${k8s.id} /bin/bash -c \"while ! [ -e /kubeconfig ]; do echo waiting for kubeconfig; sleep 3; done\""
      sh "rm -f ${pwd()}/kubeconfig && docker cp ${k8s.id}:/kubeconfig ${pwd()}/kubeconfig"
      // Replace "127.0.0.1" with the path that peer containers can use to
      // get to minikube.
      // minikube will bake certs including the subject "kubernetes" so
      // the kube-api server needs to be reachable from the client's concept
      // of "https://kubernetes:8443" or kubectl will refuse to connect.
      sh "sed -i'' -e 's;server: https://127.0.0.1:8443;server: https://kubernetes:8443;' kubeconfig"
    }
    builder = docker.build(
      "istio-builder-${env.BUILD_TAG}",
      "-f $wkdir/.jenkins/Dockerfile.jenkins-build " +
      "--build-arg UID=`id -u` --build-arg GID=`id -g` " +
      "$wkdir/.jenkins"
    )
    builder.inside(
      "-e GOPATH=${pwd()} " +
      "-e HOME=${pwd()} " +
      "-e PATH=${pwd()}/bin:\$PATH " +
      "-e KUBECONFIG=${pwd()}/kubeconfig " +
      "-e DOCKER_HOST=\"tcp://kubernetes:2375\" " +
      "--link ${k8s.id}:kubernetes"
    ) {
      stage('Check') {
        sh "ls -al"
        // If there are old credentials from a previous build, destroy them --
        // we will only load them when needed in the push stage
        sh "rm -f ~/.dockercfg"
        sh "cd $wkdir && go get -u github.com/golang/lint/golint"
        sh "cd $wkdir && make check"
      }
      stage('Build') {
        sh "cd $wkdir && make depend"
        sh "cd $wkdir && make build"
      }
      stage('Test') {
        sh "cp kubeconfig $wkdir/pilot/platform/kube/config"
        sh """PROXYVERSION=\$(grep envoy-debug $wkdir/pilot/docker/Dockerfile.proxy_debug |cut -d: -f2) &&
              PROXY=debug-\$PROXYVERSION &&
              curl -Lo - https://storage.googleapis.com/istio-build/proxy/envoy-\$PROXY.tar.gz | tar xz &&
              mv usr/local/bin/envoy ${pwd()}/bin/envoy &&
              rm -r usr/"""
        sh "cd $wkdir && make test"
      }
      stage('Push') {
        sh "cd && ln -sf .dockercfg-quay-creds .dockercfg"
        sh "cd $wkdir && " +
           "make HUB=yourhub TAG=$BUILD_TAG push"
        gitTag = getTag(wkdir)
        if (gitTag) {
          sh "cd $wkdir && " +
             "make HUB=yourhub TAG=$gitTag push"
        }
        sh "cd && rm .dockercfg"
      }
    }
  }
}

String getTag(String wkdir) {
  return sh(
    script: "cd $wkdir && " +
            "git describe --exact-match --tags \$GIT_COMMIT || true",
    returnStdout: true
  ).trim()
}

If you want to grab the files from this post and the supporting scripts, go here.

4 thoughts on “Building Istio with Minikube-in-a-Container and Jenkins”

  1. Hi,

    I am testing your image. I am trying to launch a single pod with “kubectl run mynginx --image=nginx:alpine”, but it does not work at all.

    I see that in the logs:

    ==> /var/log/docker.log <==
    time="2018-06-13T09:00:03.369568213Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/containers/create type="*events.ContainerCreate"
    time="2018-06-13T09:00:03Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/afc6128431a88c858e7563354ce01a4cc943aac3c05a5012ca0dc5a5d76bbd9d/shim.sock" debug=false module="containerd/tasks" pid=363
    time="2018-06-13T09:00:03.490519933Z" level=warning msg="unknown container" container=afc6128431a88c858e7563354ce01a4cc943aac3c05a5012ca0dc5a5d76bbd9d module=libcontainerd namespace=plugins.moby
    time="2018-06-13T09:00:03.515864791Z" level=warning msg="unknown container" container=afc6128431a88c858e7563354ce01a4cc943aac3c05a5012ca0dc5a5d76bbd9d module=libcontainerd namespace=plugins.moby
    time="2018-06-13T09:00:28.924742602Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/containers/create type="*events.ContainerCreate"
    time="2018-06-13T09:00:28Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/703d9311dd05da5e5558556bbde8a7e1c02a52c3bcabb9449e8a1709ddd07276/shim.sock" debug=false module="containerd/tasks" pid=470
    time="2018-06-13T09:00:29.021090994Z" level=warning msg="unknown container" container=703d9311dd05da5e5558556bbde8a7e1c02a52c3bcabb9449e8a1709ddd07276 module=libcontainerd namespace=plugins.moby
    time="2018-06-13T09:00:29.049912568Z" level=warning msg="unknown container" container=703d9311dd05da5e5558556bbde8a7e1c02a52c3bcabb9449e8a1709ddd07276 module=libcontainerd namespace=plugins.moby
    time="2018-06-13T09:00:30.122556354Z" level=info msg="Container 703d9311dd05da5e5558556bbde8a7e1c02a52c3bcabb9449e8a1709ddd07276 failed to exit within 0 seconds of signal 15 – using the force"
    time="2018-06-13T09:00:30.165355369Z" level=warning msg="unknown container" container=703d9311dd05da5e5558556bbde8a7e1c02a52c3bcabb9449e8a1709ddd07276 module=libcontainerd namespace=plugins.moby

    Any idea why this container dies?

    1. I’m seeing the same thing myself when I try to run the container now. I’m using docker for mac and I know I’ve upgraded docker several times since January. Maybe something changed there?

      The dind is capable of running other docker containers – I could successfully “docker run hello-world” and “docker run -it ubuntu bash”. Just not full Kubernetes via localkube.

      Sorry zoobab, I’ll have to dig deeper.

  2. Have you put any thought into whether it’s possible to run `minikube start..` in the docker build phase rather than the docker run phase? What you have described takes around 2 minutes to create a healthy 1-node kube cluster (with all the downloads done in the build phase). It would be great to reduce that by having the run phase of the container just start an already-configured minikube instance. I’ve tried it, but hit weird issues where pods don’t have network access to the api server via 10.96.0.x. I assume it’s some leftover configuration that has changed. Any ideas?

    1. That’s interesting.

      It should definitely be possible to do “minikube cache add …” in the docker build phase, which would mean you don’t have to download any of the kubernetes stuff during “docker run”, but you’d still have to run it.

      As far as “minikube start” I think you’re running into issues where Minikube wants to remember the IP address (it puts it into the .kubeconfig and probably other places) but this changes between when you “docker build” and “docker start” (and each instance of “docker start”).

      Even if you could work around that, I think there may be other side effects to “minikube start” during docker build. I think if you get it to work, one side effect would be that anyone with your docker container could connect to anyone else’s k8s cluster that was started from that same container, because the cert pair was baked in during “docker build”.

      That may not be a big deal since this is supposed to be run locally, but I’d hate to publicly host something with keys pre-baked. Might be just fine for your own local purposes.

      (You’re going to have to re-do at least part of “minikube start” during “docker run” because we at least have to start the associated kubernetes containers i.e. create the namespaces and start the processes)

      If I update this, I’ll definitely try the “minikube cache add …” pieces and I’m interested to hear if you have success with “minikube start” during docker build.
