awsecr-creds minikube addon

Using Minikube for local Kubernetes development is the easiest path to getting a cluster spun up. You just run a few commands and within minutes you've got yourself a kube cluster ready to go!

But what if you want to test images you're going to use in a cloud environment? My company uses AWS, and we use ECR (Elastic Container Registry) as a private Docker registry. If you try this from Minikube, however, you'll be stuck because the cluster does not have IAM credentials to pull from the registry.

You could manually log in and pass those credentials to your Minikube VM, but who wants to do that? I wrote a post on how to do this with a project of mine called AWS ECR Creds, but that again required some manual steps to get running.

Now, with a PR I sent to the minikube project, you can simply create a secret with the IAM credentials needed to access ECR and then enable the addon.

Steps to enable the addon

Create Secret

Base64 encode your secrets and copy them into the sample secrets file. Do this for your AWS account ID, AWS access key ID, and AWS secret access key.

echo -n "awsAccount" | base64  

Copy those into the secret file and apply to your cluster:

kubectl create -f secret.yaml  
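
For reference, the secret file ends up looking something like this. The secret name, namespace, and data keys below are illustrative, so match them to whatever the sample secrets file that ships with the addon uses:

apiVersion: v1
kind: Secret
metadata:
  name: awsecr-creds      # illustrative; use the name the addon expects
  namespace: kube-system  # illustrative; use the namespace from the sample file
type: Opaque
data:
  AWS_ACCOUNT: <base64 of your account id>
  AWS_ACCESS_KEY_ID: <base64 of your access key>
  AWS_SECRET_ACCESS_KEY: <base64 of your secret key>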

Enable the addon:

minikube addons enable awsecr-creds  

Pull Images from AWS ECR on Minikube

I'm a big fan of Minikube for local Kubernetes development. If you haven't checked it out yet, I encourage you to do so; short of GKE, it's the easiest way to spin up a single node k8s cluster.

Minikube QuickStart

To get running on Minikube, first download the latest binary and put it somewhere on your $PATH:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.10.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/  

Next spin up a new cluster:

 minikube start --cpus=4 --memory=8096 --vm-driver=virtualbox --disk-size=40g
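
Once that finishes, a quick sanity check never hurts (kubectl talks to the new cluster through the context Minikube configures for you):

# verify the VM is up and the node registered
minikube status
kubectl get nodes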

Pulling Images

Pulling public images on a Kubernetes cluster is super easy; it just works! However, if you are pulling from a private repo, there may be some extra work to do. What exactly needs to be done depends on how you want to attack the problem. The Kubernetes docs cover the options for other registries here: http://kubernetes.io/docs/user-guide/images

For the rest of this article, I'm going to focus on AWS ECR as the registry to connect to. If there's interest, I can cover others later, but for now I want to address ECR.

Running in AWS

If your cluster is running in AWS and you have the correct cloud provider set, then there's nothing else to do; ECR is supported out of the box.
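
In practice, "correct cloud provider" means the Kubernetes components were started with the AWS cloud provider enabled. Depending on how you provision the cluster, that looks something like this (illustrative only, not a complete invocation):

# the kubelet (and controller manager) run with the AWS cloud provider enabled
kubelet --cloud-provider=aws <your other kubelet flags>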

Running in Minikube

Since Minikube doesn't run inside AWS (but on your local machine), we can't leverage the built-in cloud provider to help out. Before the cloud provider supported ECR natively, it was difficult to use ECR as a container registry, so I wrote a tool which automates the process.

You can find the github repo here which does all the work: https://github.com/upmc-enterprises/awsecr-creds

The tool leverages ImagePullSecrets: it first authenticates against ECR to get credentials for pulling images, then creates an ImagePullSecret so that when a pod gets created, those credentials are automatically available to it.
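
Under the hood this is the standard Kubernetes image pull secret mechanism. Whether the secret is referenced directly in the pod spec (as in the sketch below) or attached to a service account, the effect is the same; the secret name, image, and region here are made up for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: my-ecr-app
spec:
  containers:
  - name: app
    # an image in a private ECR repo (the account id and region are placeholders)
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
  imagePullSecrets:
  # the secret holding the ECR credentials (the name is illustrative)
  - name: awsecr-cred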

12 Hour Max

The only 'gotcha' of how ECR works is that credentials are only good for 12 hours, so every 11 hours and 55 minutes the credentials are refreshed.

Setup

So how do you get running with awsecr-creds on your Minikube cluster?

Simply edit the sample controller with credentials and account IDs matching your AWS environment and deploy!

kubectl create -f k8s/replicationController.yml  
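
Once it's deployed, a quick way to check that everything is wired up (this assumes the controller and the generated secret land in the default namespace; adjust the namespace flag if yours differs):

kubectl get pods | grep awsecr     # is the awsecr-creds controller running?
kubectl get secrets                # look for the generated image pull secret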

Why pull from ECR?

I utilize AWS for many cloud resources today, and letting AWS manage the registry is great. At the same time, it's a good way to validate things since I can tap into my CI system, which is already generating images for me. Now I can pull those images and quickly test out components of my app without having to rebuild them all locally!

VMWare Workstation on Fedora 24

I struggled to get VMware Workstation running on Fedora 24. With some Google searching I came across this site: http://vcojot.blogspot.com/2015/11/vmware-worksation-12-on-fedora-core-23.html

After getting Workstation installed, I couldn't compile the additional components, which just kept failing with errors.

After some more Google-fu, I ran across this post, which outlines some changes to how the networking pieces are compiled: https://communities.vmware.com/thread/536705?tstart=0

I've referenced the relevant bits here:

# sudo -i  
# cd /usr/lib/vmware/modules/source  
# tar xf vmmon.tar  
# mv vmmon.tar vmmon.old.tar  
# sed -i -e 's/get_user_pages/get_user_pages_remote/g' vmmon-only/linux/hostif.c  
# tar cf vmmon.tar vmmon-only  
# rm -r vmmon-only  

# tar xf vmnet.tar  
# mv vmnet.tar vmnet.old.tar  
# sed -i -e 's/get_user_pages/get_user_pages_remote/g' vmnet-only/userif.c  
# tar cf vmnet.tar vmnet-only  
# rm -r vmnet-only  

UPDATE: On kernel 4.7 you'll need this as well https://bbs.archlinux.org/viewtopic.php?id=215808:

# cd /usr/lib/vmware/modules/source
# tar xf vmnet.tar
# mv vmnet.tar vmnet.old.tar
# sed -i -e 's/dev->trans_start = jiffies/netif_trans_update(dev)/g' vmnet-only/netif.c
# tar cf vmnet.tar vmnet-only
# rm -r vmnet-only

# vmware-modconfig --console --install-all

Deis Router on Arm

Overview

If you've followed me on Twitter or come to any of my talks at CodeMash or Pittsburgh Tech Fest, or are coming to my upcoming talks at Abstractions, you know that I'm into Kubernetes and deploying it out to a Raspberry Pi cluster.

[Photo: Steve's Raspberry Pi cluster]

Why DNS?

In prep for my talks coming up at Abstractions, I wanted to make the demos a little bit more realistic by using real DNS names instead of NodePorts (because who is going to access a site on the public internet like this: https://foo.com:30243?).

If you're not familiar with Kubernetes, NodePorts are a simple way to expose containers running in a cluster to the outside world. You access one of the nodes in the cluster via this port and magically you are routed to the container. You can read more about NodePorts here.
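
For reference, a NodePort is just a normal Service with type: NodePort. A minimal sketch (the app name and ports are made up; 30243 matches the example URL above):

apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80          # port inside the cluster
    targetPort: 8080  # port the container listens on
    nodePort: 30243   # reachable on every node as http://<node-ip>:30243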

Solutions

I tried a few solutions to get this DNS routing to work.

First I tried to implement Ingress routing. This is a Kubernetes-native solution which lets you route traffic into the cluster: a router (the Ingress controller) inside the cluster looks at the request and sends you to the appropriate container.
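
For context, an Ingress resource from that era looked roughly like this (extensions/v1beta1 was the API version at the time; the host and service names are made up):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: foo.com          # route requests for this hostname...
    http:
      paths:
      - path: /
        backend:
          serviceName: foo # ...to this service
          servicePort: 80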

I struggled to get Ingress working in Minikube on my Mac laptop; there were some errors with my certs and the service account. I then tried to deploy to my AWS cluster running on CoreOS. This time I got past the cert errors, but the Nginx controller wouldn't pick up the services.

I'm a big fan of the work the Deis team has been doing. They have built (and are building) some really cool things, and I encourage everyone to check them out.

They make a router which works similarly to Ingress, but instead of deploying Ingress resources to your cluster, you just attach labels and annotations to your services.

You just need to deploy the router, add a label (router.deis.io/routable: "true") and an annotation (router.deis.io/domains: foo,bar,www.foobar.com) to a service and that's it!

Example:
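
Here's a sketch of a routable service using the label and annotation above (the app name, ports, and domains are made up):

apiVersion: v1
kind: Service
metadata:
  name: foo
  labels:
    app: foo
    router.deis.io/routable: "true"                 # tells the Deis router to pick this service up
  annotations:
    router.deis.io/domains: foo,bar,www.foobar.com  # hostnames to route to this service
spec:
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 8080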

Compiling for ARM

I love me some Raspberry Pis and think it's totally cool that we can run Kubernetes (and lots of other cool stuff) on small, cheap hardware. I think building out a k8s cluster by hand is a great learning path to understand how Kubernetes works and how to troubleshoot it (because stuff will ultimately go wrong).

Since the Deis router isn't compiled for ARM, that's up to us to do.

Build the Router Binary
  1. On my Mac I cloned the Deis router repo: git clone https://github.com/deis/router.git
  2. Run make bootstrap to get the environment set up
  3. Edit the Makefile and change GOARCH to be arm instead of amd64
  4. Build the router binary for ARM: make binary-build
  5. Copy that binary over to your Raspberry Pi
  6. Clone the router repo onto your Pi and move the router binary into ${GITREPO}/rootfs/opt/router/sbin/router (the full sequence is sketched as commands after this list)
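
Put together, the steps look roughly like this (the Pi hostname and the location of the built binary are assumptions; check where make binary-build actually drops the binary in your checkout):

# on the Mac: cross-compile the router binary for ARM
git clone https://github.com/deis/router.git && cd router
make bootstrap
# edit the Makefile so GOARCH=arm instead of amd64, then:
make binary-build

# copy the resulting binary over to the Pi (the hostname here is an example)
scp <path-to-built-router-binary> pi@raspberrypi.local:~/router-bin

# on the Pi: drop the binary into the repo layout
git clone https://github.com/deis/router.git
mv ~/router-bin router/rootfs/opt/router/sbin/router
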
Build the helper containers:

The Deis team uses some base images they built as a common starting point. First, I had to update the base image (quay.io/deis/base:0.2.0). I did this by changing the root image to hypriot/rpi-golang, which produced: stevesloka/deis-base-arm.

Then just update the base image in rootfs/Dockerfile to be the base image we created in the previous step.
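
In other words, the FROM line in rootfs/Dockerfile changes from the Deis base image to the ARM one. Something like this does it (the exact tag on the original FROM line may differ, so check the Dockerfile first):

# swap the base image in the router's Dockerfile (run from the repo root on the Pi)
sed -i 's|quay.io/deis/base:0.2.0|stevesloka/deis-base-arm|' rootfs/Dockerfile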

My final image is here: stevesloka/deis-router-arm

Replication Controller

Here's my replication controller which I used to deploy the router to my pi:

apiVersion: v1  
kind: ReplicationController  
metadata:  
  name: deis-router
  namespace: deis
  labels:
    heritage: deis
spec:  
  replicas: 1
  selector:
    app: deis-router
  template:
    metadata:
      labels:
        app: deis-router
    spec:
      containers:
      - name: deis-router
        image: stevesloka/deis-router-arm:0.1.0
        imagePullPolicy: Always
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8080
          hostPort: 80
        - containerPort: 6443
          hostPort: 443
        - containerPort: 2222
          hostPort: 2222
        - containerPort: 9090
          hostPort: 9090
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9090
          initialDelaySeconds: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 9090
          initialDelaySeconds: 1
          timeoutSeconds: 1

Summary

Hope this helps anyone run this on their Pi. I probably need to come up with a more automated way to build these images, but for now it should work out fine.

I was pinged on Twitter that someone does have Ingress working on ARM here, but I haven't taken a look yet; I will soon.

~ Steve

Remove all Docker Images

Use the following script to stop and remove all containers and remove all images from your Docker client. WARNING: There's no going back! With great power...blah, blah... =)

#!/bin/bash
# Stop all containers
docker stop $(docker ps -qa)  
# Delete all containers
docker rm $(docker ps -aq)  
# Delete all images
docker rmi $(docker images -q)