
Access Minikube services from a Linux Host

A few weeks back I did a blog post pulling together some information on how to set up Minikube with local access on an OSX machine. In my ongoing tinkering with Linux on the desktop, I am now just about switched over, but found I couldn't hit local IPs for k8s services for a new operator I'm developing on Minikube like I could on OSX.

This guide will set up your machine with Minikube; I tested it out on Fedora 25. Please let me know if it works out for you; I'd like to expand to other distros as well!

Set it up

Route all service IP traffic to Minikube:

sudo ip route delete 10/24 > /dev/null 2>&1 #Cleanup  
sudo ip route add 10.0.0.0/24 via $(minikube ip) #Create Route  
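
Optionally, verify the route is in place (a quick sanity check; the gateway shown will be whatever minikube ip returns on your machine):

# Should print a line like: 10.0.0.0/24 via <minikube ip> dev ...
ip route show 10.0.0.0/24
minikube ip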

Enable dnsmasq for lookups by editing the file /etc/NetworkManager/NetworkManager.conf and, under the [main] section, adding the following:

dns=dnsmasq  

Add a file named svc.cluster.local to /etc/NetworkManager/dnsmasq.d with the following contents:

server=/svc.cluster.local/10.0.0.10  
local=/svc.cluster.local/  
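
If you'd rather script those two edits, something like this should do it (a convenience sketch of the same changes; double-check the NetworkManager.conf result by hand afterwards):

# Add dns=dnsmasq right after the [main] header
sudo sed -i '/^\[main\]/a dns=dnsmasq' /etc/NetworkManager/NetworkManager.conf

# Forward *.svc.cluster.local lookups to kube-dns at 10.0.0.10
sudo tee /etc/NetworkManager/dnsmasq.d/svc.cluster.local <<'EOF'
server=/svc.cluster.local/10.0.0.10
local=/svc.cluster.local/
EOF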

Edit the file /etc/nsswitch.conf, find the hosts: line, and change it to look like this:

hosts:      files dns myhostname mdns4_minimal  

At this point I had to reboot my system for everything to take effect.
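
If you'd rather avoid a full reboot, restarting NetworkManager may be enough on some setups to pick up the dnsmasq and nsswitch changes (I haven't verified this):

sudo systemctl restart NetworkManager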

Test it out!

Start minikube & do all steps above first!

Create a deployment + service:

$ kubectl run nginx --image=nginx     
$ kubectl expose deployment nginx --port=80
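
Optionally, confirm the service was created and got a ClusterIP in the 10.0.0.0/24 range we routed earlier:

$ kubectl get svc nginx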

Find the IP of the service:

$ nslookup nginx.default.svc.cluster.local                                                                                    
Server:        127.0.0.1  
Address:    127.0.0.1#53

Name:    nginx.default.svc.cluster.local  
Address: 10.0.0.105  

Curl that IP to verify IP connectivity:

$ curl http://10.0.0.105                                                                                                      
<!DOCTYPE html>  
<html>  
<head>  
<title>Welcome to nginx!</title>  
<style>  
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>  
</head>  
<body>  
<h1>Welcome to nginx!</h1>  
<p>If you see this page, the nginx web server is successfully installed and  
working. Further configuration is required.</p>

<p>For online documentation and support please refer to  
<a href="http://nginx.org/">nginx.org</a>.<br/>  
Commercial support is available at  
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>  
</body>  
</html>  

Curl via DNS:

$ curl http://nginx.default.svc.cluster.local                                                                                                      
<!DOCTYPE html>  
<html>  
<head>  
<title>Welcome to nginx!</title>  
<style>  
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>  
</head>  
<body>  
<h1>Welcome to nginx!</h1>  
<p>If you see this page, the nginx web server is successfully installed and  
working. Further configuration is required.</p>

<p>For online documentation and support please refer to  
<a href="http://nginx.org/">nginx.org</a>.<br/>  
Commercial support is available at  
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>  
</body>  
</html>  

Access Minikube services from Host on OSX

Background

Deploying applications to Kubernetes has been getting better and better. Things like Deployments, Dynamic Volume Provisioning, and other features are letting folks focus on their apps more so than the infrastructure, which is great.

However, in a development model, you want more access to the cluster to allow for easier debugging and access.

I use Minikube all the time in my day-to-day job to test out new Operators I'm working on or to deploy an application locally to test out some features.

The one component that was always difficult was how to connect to services on Minikube. The only real way today has been to expose the service via a NodePort, which works, but it's awkward since there's now a different configuration to deal with: the service type has to be NodePort when I only want it to be ClusterIP (meaning no access from outside the cluster). Additionally, that port is dynamic, so I've got to look up the new port each time or set it statically.
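
To make the pain point concrete, here is roughly what that workaround looks like for an nginx deployment (the assigned port is random, so treat the output as illustrative):

$ kubectl expose deployment nginx --type=NodePort --port=80
# The nodePort is assigned dynamically and changes if the service is re-created
$ kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'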

Solution

In Slack the other day, I saw a post from Dale Hamel (@dalehamel-shopify on Slack) which outlined a way to route traffic from your host machine to the minikube cluster, allowing you to curl Kubernetes services via ClusterIP or DNS name.

There are just a few simple commands you need to run to set this up. A big thanks to Dale for sharing with everyone!

Implementation

NOTE: These instructions are built for an OSX system, but can be translated to Linux pretty easily as well.

Create a route to allow traffic from the host machine to route to the minikube IP:

# Remove any existing routes
$ sudo route -n delete 10/24 > /dev/null 2>&1

# Create route
$ sudo route -n add 10.0.0.0/24 $(minikube ip)
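
You can check that the route landed with netstat (a sanity check; the gateway will be your minikube ip):

# Should show 10.0.0/24 pointing at the minikube VM's IP
netstat -rn | grep "^10"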

Create a DNS resolver to allow traffic to use the minikube internal DNS server for resolution:

# /etc/resolver may not exist yet; tee runs the write with root privileges
$ sudo mkdir -p /etc/resolver
$ sudo tee /etc/resolver/svc.cluster.local <<EOF
nameserver 10.0.0.10  
domain svc.cluster.local  
search svc.cluster.local default.svc.cluster.local  
options ndots:5  
EOF  

Update the macOS DNS resolver, then REBOOT:

sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist AlwaysAppendSearchDomains -bool YES  
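
After the reboot, you can confirm macOS picked up the new resolver entry (a sanity check; output format varies between versions):

# Look for a resolver stanza for svc.cluster.local pointing at 10.0.0.10
scutil --dns | grep -B1 -A3 "svc.cluster.local"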

Get the list of interfaces:

# xhyve uses "bridge100" / virtualbox uses "bridge0"
$ ifconfig 'bridge100' | grep member | awk '{print $2}' 

Open up the firewall to allow access:

# Take all interfaces from the previous command and apply
$ sudo ifconfig bridge100 -hostfilter en5
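
If the earlier command listed more than one member interface, the same flag needs to be applied to each one; a small loop covers it (same commands as above, just iterated):

for iface in $(ifconfig bridge100 | grep member | awk '{print $2}'); do
  sudo ifconfig bridge100 -hostfilter "$iface"
done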

Test it out!

# --- Delete current route:
$ sudo route -n delete 10/24 > /dev/null 2>&1                                                                                                              
Password:

# --- Add new route:
$ sudo route -n add 10.0.0.0/24 $(minikube ip)                                                                                                             
add net 10.0.0.0: gateway 192.168.64.30

# --- Get interfaces:
$ ifconfig 'bridge100' | grep member | awk '{print $2}'
en5

# --- Set firewall:
$ sudo ifconfig bridge100 -hostfilter en5

# --- Get services:
$ kubectl get svc
NAME                      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE  
elasticsearch             10.0.0.181   <none>        9200/TCP   2m  
elasticsearch-discovery   10.0.0.88    <none>        9300/TCP   2m  
es-data-svc               10.0.0.55    <none>        9300/TCP   2m  
kubernetes                10.0.0.1     <none>        443/TCP    8m

# --- Curl service by IP:
$ curl -k https://10.0.0.181:9200
{
  "name" : "71b5524c-524c-4f4c-9621-e45c5c34f22c",
  "cluster_name" : "myesdb",
  "cluster_uuid" : "Y0D14nKKRTuPH8x2Yq3kuQ",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

# --- Curl service by DNS:
$ curl -k https://elasticsearch:9200
{
  "name" : "71b5524c-524c-4f4c-9621-e45c5c34f22c",
  "cluster_name" : "myesdb",
  "cluster_uuid" : "Y0D14nKKRTuPH8x2Yq3kuQ",
  "version" : {
    "number" : "5.3.1",
    "build_hash" : "5f9cf58",
    "build_date" : "2017-04-17T15:52:53.846Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.2"
  },
  "tagline" : "You Know, for Search"
}

Elasticsearch Operator on Kubernetes

Deploy an Elasticsearch cluster on Kubernetes and enable snapshots to AWS S3. Define which zones to deploy the data nodes to, and they will be spread evenly across those zones, with data persistence provided as well.

Happy for feedback and missing use-cases. It's very new, so there is much more to do to make it fully featured, including making it production ready; support for more clouds and a minikube version are coming soon!

https://github.com/upmc-enterprises/elasticsearch-operator

awsecr-creds minikube addon

Using Minikube for local Kubernetes development is the easiest path to getting a cluster spun up. You just run a few commands and within minutes you've got yourself a kube cluster ready to go!

But what if you want to test images you're going to use in a cloud environment? My company uses AWS, and we use ECR (Elastic Container Registry) as a private Docker registry. If you try this on Minikube, however, you'll be stuck, because you don't have IAM credentials to pull from the registry.

You could manually log in and pass those credentials to your minikube VM, but who wants to do that? I wrote a post on how to do this with a project I wrote called AWS ECR Creds, but it again required some manual steps to get running.

With a PR I sent to the minikube project, you can now just create a secret with the IAM credentials needed to access ECR and then enable the addon.

Steps to enable addon

Create Secret

Base64 encode your secrets and copy them into the sample secrets file. Do this for your AWS AccountId, AWS Secret, and AWS Key.

echo -n "awsAccount" | base64  
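
For example (these values are placeholders only; encode your real account ID, key, and secret the same way):

echo -n "123456789012" | base64               # AWS AccountId (placeholder)
echo -n "AKIAIOSFODNN7EXAMPLE" | base64       # AWS Key (placeholder access key ID)
echo -n "wJalrXUtnFEMI/EXAMPLEKEY" | base64   # AWS Secret (placeholder secret key)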

Copy those into the secret file and apply to your cluster:

kubectl create -f secret.yaml  

Enable the addon:

minikube addons enable awsecr-creds  

Now deploy a pod to your cluster referencing an image pushed to ECR and it should pull automatically! By default the addon will refresh the credentials every 11 hours and 55 minutes (the default expiry from AWS is 12 hours).

Pull Images from AWS ECR on Minikube

I'm a big fan of Minikube for local Kubernetes development. If you haven't checked it out yet, I encourage you to do so; short of GKE, it's the easiest way to spin up a single node k8s cluster.

Minikube QuickStart

To get running on minikube, first download the latest binary and put it somewhere in your $PATH:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.10.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/  

Next spin up a new cluster:

minikube start --cpus=4 --memory=8096 --vm-driver=virtualbox --disk-size=40g
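
Once that finishes, it's worth a quick sanity check that the cluster is up and kubectl can talk to it:

minikube status
kubectl get nodes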

Pulling Images

Pulling public images on a Kubernetes cluster is super easy; it just works! However, if you are pulling from a private repo, there may be some extra work to do. How you want to attack the problem determines what needs to be done. You can find docs here on how to use other registries: http://kubernetes.io/docs/user-guide/images

For the rest of this article, I'm going to focus on AWS ECR as the registry to connect to. If there's interest, I can add more; however, I want to address ECR right now.

Running in AWS

If your cluster is running in AWS and you have the correct CloudProvider set, then there's nothing else to do: ECR is supported out of the box.

Running in Minikube

Since Minikube doesn't run inside AWS (but on your local machine), we can't leverage the built-in cloud provider to help out. Before the cloud provider supported ECR natively, it was difficult to use ECR as a container registry so I wrote a tool which automates the process.

You can find the github repo here which does all the work: https://github.com/upmc-enterprises/awsecr-creds

The tool works by leveraging ImagePullSecrets on the pod: it first authenticates against ECR to get pull credentials, then creates an ImagePullSecret so that when a pod gets created, those credentials are automatically available for pulling its images.
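
For reference, this is roughly what a manually wired imagePullSecret looks like on a pod; the image path and secret name below are made-up placeholders, since awsecr-creds creates and refreshes the real secret for you:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ecr-test
spec:
  containers:
  - name: app
    # placeholder ECR image path; use your own <accountId>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
  imagePullSecrets:
  - name: my-ecr-pull-secret   # placeholder name; use the secret your tooling manages
EOF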

12 Hour Max

The only 'gotcha' with how ECR works is that credentials are only good for 12 hours, so every 11 hours and 55 minutes the credentials are refreshed.

Setup

So how do you get running with awsecr-creds on your Minikube cluster?

Simply edit the sample controller with credentials and account IDs matching your AWS environment and deploy!

kubectl create -f k8s/replicationController.yml  

Why pull from ECR?

I utilize AWS for many cloud resources today, and letting AWS manage the registry is great. At the same time it's a good way to validate things, since I can tap into my CI system, which is generating images for me. Now I can pull those images and quickly test out components of my app without having to rebuild them all locally!