Deis Router on Arm

Overview

If you've followed me on Twitter or come to any of my talks at CodeMash, Pittsburgh Tech Fest, or the upcoming Abstractions (here and here), you know that I'm into Kubernetes and deploying it to a Raspberry Pi cluster.

Steve Pi Cluster

Why DNS?

In prep for my upcoming talks at Abstractions, I wanted to make the demos a little more realistic by using real DNS names instead of NodePorts (because who is going to access a site on the public internet like this: https://foo.com:30243?).

If you're not familiar with Kubernetes, NodePorts are a simple way to expose containers running in a cluster to the outside world. You access one of the nodes in the cluster via this port and are magically routed to the container. You can read more about NodePorts here.
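As a rough sketch of what that looks like (the service name, labels, and ports here are made up for illustration), a NodePort service is just a regular service with `type: NodePort` set:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80          # port inside the cluster
    targetPort: 8080  # port the container listens on
    nodePort: 30243   # port exposed on every node (30000-32767 by default)
```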

Solutions

I tried a few solutions to get this DNS routing to work.

First I tried to implement Ingress routing. This is a Kubernetes-native solution which lets you route traffic to the cluster; a router inside the cluster then looks at the request and sends you to the appropriate container.

I struggled to get Ingress working in Minikube on my Mac laptop. There were some errors with my certs and the service account. I then tried deploying to my AWS cluster running on CoreOS. This time I got past the cert errors, but the Nginx controller wouldn't pick up the services.

I'm a big fan of the work the Deis team has been doing. They have built (and are building) some really cool things, and I encourage everyone to check them out.

They make a router which works similarly to Ingress, but instead of deploying Ingress resources to your cluster, you just attach labels and annotations to your services.

You just need to deploy the router, add a label (router.deis.io/routable: "true") and an annotation (router.deis.io/domains: foo,bar,www.foobar.com) to a service, and that's it!

Example:
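Here's a sketch of a service wired up for the router (the service name and ports are made up for illustration; the label and annotation are the ones described above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar
  labels:
    router.deis.io/routable: "true"
  annotations:
    router.deis.io/domains: foo,bar,www.foobar.com
spec:
  selector:
    app: foobar
  ports:
  - port: 80
    targetPort: 8080
```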

Compiling for ARM

I love me some Raspberry Pis and think it's totally cool that we can run Kubernetes (and lots of other cool stuff) on small, cheap hardware. I think building out a k8s cluster by hand is a great way to learn how Kubernetes works and how to troubleshoot it (because stuff will ultimately go wrong).

Since the Deis router isn't compiled for ARM, that's up to us to do.

Build the Router Binary
  1. On my Mac, I cloned the Deis router repo: git clone https://github.com/deis/router.git
  2. Run make bootstrap to get the environment running
  3. Edit the Makefile and change GOARCH from amd64 to arm
  4. Build the router binary for ARM: make binary-build
  5. Copy that file over to your Raspberry Pi
  6. Clone the router repo onto your Pi and move the router binary to ${GITREPO}/rootfs/opt/router/sbin/router
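The steps above can be sketched as a shell session (the Pi hostname and paths are placeholders; adjust for your setup):

```shell
# On the Mac: clone, bootstrap, and cross-compile for ARM
git clone https://github.com/deis/router.git
cd router
make bootstrap
# Edit the Makefile so GOARCH=arm, then:
make binary-build

# Copy the binary over to the Pi (hostname is a placeholder)
scp rootfs/opt/router/sbin/router pi@raspberrypi.local:/tmp/router

# On the Pi: clone the repo and drop the binary into place
git clone https://github.com/deis/router.git
mv /tmp/router router/rootfs/opt/router/sbin/router
```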
Build the Helper Containers

The Deis team uses some base images they built as a common starting point. First, I had to update the base image (quay.io/deis/base:0.2.0). I did this by changing the root image to hypriot/rpi-golang, which became: stevesloka/deis-base-arm.

Then just update the base image in rootfs/Dockerfile to be the one we created in the previous step.
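A rough sketch of the image builds (the tags are the ones from this post; the directory layout is an assumption based on the router repo, so adjust paths as needed):

```shell
# Build and push the ARM base image (after swapping its FROM line
# to hypriot/rpi-golang)
docker build -t stevesloka/deis-base-arm .
docker push stevesloka/deis-base-arm

# After pointing rootfs/Dockerfile's FROM line at that base image,
# build and push the router image itself
cd router/rootfs
docker build -t stevesloka/deis-router-arm:0.1.0 .
docker push stevesloka/deis-router-arm:0.1.0
```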

My final image is here: stevesloka/deis-router-arm

Replication Controller

Here's my replication controller which I used to deploy the router to my pi:

apiVersion: v1  
kind: ReplicationController  
metadata:  
  name: deis-router
  namespace: deis
  labels:
    heritage: deis
spec:  
  replicas: 1
  selector:
    app: deis-router
  template:
    metadata:
      labels:
        app: deis-router
    spec:
      containers:
      - name: deis-router
        image: stevesloka/deis-router-arm:0.1.0
        imagePullPolicy: Always
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          hostPort: 80
        - containerPort: 6443
          hostPort: 443
        - containerPort: 2222
          hostPort: 2222
        - containerPort: 9090
          hostPort: 9090
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9090
          initialDelaySeconds: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 9090
          initialDelaySeconds: 1
          timeoutSeconds: 1

Summary

Hope this helps anyone run the router on their Pi. I probably need to come up with a more automated way to build these images, but for now this should work fine.

I was pinged on Twitter that someone does have Ingress working on ARM here; I haven't taken a look yet, but will soon.

~ Steve

Remove all Docker Images

Use the following commands to stop all containers and remove all containers and images from your Docker client. WARNING: There's no going back! With great power...blah, blah... =)

#!/bin/bash
# Stop all running containers
docker stop $(docker ps -q)  
# Delete all containers
docker rm $(docker ps -aq)  
# Delete all images
docker rmi $(docker images -q)  

Force Update of CoreOS

sudo /usr/bin/systemctl unmask update-engine.service  
sudo /usr/bin/systemctl start update-engine.service  
sudo update_engine_client -update  
sudo /usr/bin/systemctl stop update-engine.service  
sudo /usr/bin/systemctl mask update-engine.service  
sudo reboot  

Create docker-machine 1.9

Docker Machine is a slick tool that lets you bring up a Docker instance on your machine; however, it defaults to the latest Docker version. I needed a way to test our Artifactory server against issues related to Docker version, so I wanted to build a Docker 1.9 instance.

Turns out this is pretty easy if you pass the right URLs. Here's the command I used:

docker-machine create -d virtualbox --virtualbox-boot2docker-url=https://github.com/boot2docker/boot2docker/releases/download/v1.9.1/boot2docker.iso old  
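Once the machine is up, you can point your local Docker client at it and confirm you got the older server version (standard docker-machine usage; "old" is the machine name from the command above):

```shell
# Point the local docker client at the new machine
eval $(docker-machine env old)
# The server version reported here should be 1.9.1
docker version
```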

Access Kubernetes API behind bastion host

For our production Kubernetes instances, we run them on CoreOS in AWS. Our architecture is set up so that all instances are hidden inside private subnets. To access the resources there for administration tasks, we utilize bastion hosts.

This is a problem since we no longer have direct access to the API server, so kubectl won't work. To access the cluster with kubectl, we'll need to set up an SSH tunnel between our laptop and the bastion host.

The following example shows how to set up this tunnel, where 10.0.0.50 is the internal IP of my k8s API server and 1.2.3.4 is the IP of my bastion host:

ssh -L 9443:10.0.0.50:443 ec2-user@1.2.3.4 -N  
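If you bring this tunnel up often, the same forward can live in your ~/.ssh/config (a sketch using the same example IPs; the Host alias is made up):

```
Host k8s-bastion
    HostName 1.2.3.4
    User ec2-user
    LocalForward 9443 10.0.0.50:443
```

Then `ssh -N k8s-bastion` brings the tunnel up.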

Now that we have a tunnel up and running, we should be able to hit our API server via localhost (e.g. curl https://localhost:9443), but we can't, because localhost isn't in the SAN of my certs. I could add localhost to the certs, or use a different name, which is what I did (k8s.stevesloka.com). To resolve that name, we need to add an entry to our local hosts file, or set up a CNAME in public DNS for simplicity (that way additional users of the cluster don't need to mess with their hosts files).

Here's my hosts file:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost  
255.255.255.255 broadcasthost  
::1             localhost
127.0.0.1       k8s.stevesloka.com  

Now just configure kubectl to talk to my cluster and I'm in!

$ kubectl config set-cluster default-cluster --server=https://k8s.stevesloka.com:9443 --certificate-authority=ca.pem
$ kubectl config set-credentials default-admin --certificate-authority=ca.pem --client-key=apiserver-key.pem --client-certificate=apiserver.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system