Kubectl vs HTTP API

by Ivan Pedrazas 2016-06-09 kubernetes security api utils

One of the best things Kubernetes has is its API. However, I’ve seen a few tools that, instead of using the HTTP API, wrap kubectl. I tweeted about it and a discussion started around the differences between kubectl and the HTTP API.

One thing I hope is clear is that kubectl is designed to be used by people, and the HTTP API is designed to be used by code. In fact, if you look at the documentation you will see that there’s a list of different APIs and kubectl sits under “kubectl CLI”. This is the full list of Kubernetes APIs:

  • Kubernetes API
  • Extension API
  • Autoscaling API
  • Batch API
  • kubectl CLI

So, let’s see what these differences are!

In order to run this test, I’ve spun up a new cluster in AWS. To make remote calls to the API server you need either a token or the user and password for basic auth:

root@ip-172-20-0-9:~# cat /srv/kubernetes/basic_auth.csv
PFK15d8JoTczqb3T,admin,admin

# We need to base64 encode the user/password

echo -n "admin:PFK15d8JoTczqb3T" | base64

root@ip-172-20-0-9:~# cat /srv/kubernetes/known_tokens.csv
iHNCmbTQCM4DRoN66PwEDqTFo2RA7JtJ,admin,admin
59OcyDFhgBlqutaWzU7jxxMSAbVQs3oU,kubelet,kubelet
jCbn8QSROs0gE525I7MIl0G1V7TEsXT4,kube_proxy,kube_proxy

Token: iHNCmbTQCM4DRoN66PwEDqTFo2RA7JtJ

Basic Auth: YWRtaW46UEZLMTVkOEpvVGN6cWIzVA==

API_SERVER: https://52.51.69.149

Let’s export these values so we can re-use them easily:

export TOKEN=iHNCmbTQCM4DRoN66PwEDqTFo2RA7JtJ
export AUTH=YWRtaW46UEZLMTVkOEpvVGN6cWIzVA==
export API_SERVER=https://52.51.69.149

Let’s test that the authentication works:

-> % curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER
    {
      "paths": [
        "/api",
        "/api/v1",
        "/apis",
        "/apis/apps",
        "/apis/apps/v1alpha1",
        "/apis/autoscaling",
        "/apis/autoscaling/v1",
        "/apis/batch",
        "/apis/batch/v1",
        "/apis/batch/v2alpha1",
        "/apis/extensions",
        "/apis/extensions/v1beta1",
        "/apis/policy",
        "/apis/policy/v1alpha1",
        "/apis/rbac.authorization.k8s.io",
        "/apis/rbac.authorization.k8s.io/v1alpha1",
        "/healthz",
        "/healthz/ping",
        "/logs/",
        "/metrics",
        "/swaggerapi/",
        "/ui/",
        "/version"
      ]
    }

We will do basic auth as well, for completeness:

curl -k -X GET -H "Authorization: Basic $AUTH" $API_SERVER

List of Nodes

Kubectl:

kubectl get nodes

API:

curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/nodes

If you have jq this command will be more useful:

curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/nodes | jq '.items[].status.addresses'

Let’s get a list of pods:

kubectl get pods

This command returns nothing because we haven’t created a single object yet… but we can run:

kubectl get pods --all-namespaces


NAMESPACE     NAME                                                               READY     STATUS             RESTARTS   AGE
kube-system   elasticsearch-logging-v1-7n3fn                                     1/1       Running            0          1h
kube-system   elasticsearch-logging-v1-qcr9m                                     1/1       Running            0          1h
kube-system   fluentd-elasticsearch-ip-172-20-0-166.eu-west-1.compute.internal   1/1       Running            0          1h
kube-system   fluentd-elasticsearch-ip-172-20-0-167.eu-west-1.compute.internal   1/1       Running            0          1h
kube-system   heapster-v1.1.0.beta2-2783873945-91yz7                             4/4       Running            0          1h
kube-system   kibana-logging-v1-f4q1e                                            0/1       CrashLoopBackOff   16         1h
kube-system   kube-proxy-ip-172-20-0-166.eu-west-1.compute.internal              1/1       Running            0          1h
kube-system   kube-proxy-ip-172-20-0-167.eu-west-1.compute.internal              1/1       Running            0          1h
kube-system   kubernetes-dashboard-v1.1.0-beta1-imh41                            1/1       Running            0          1h
kube-system   monitoring-influxdb-grafana-v3-12tud                               2/2       Running            0          1h

Excellent, let’s do the HTTP query now:

curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/pods

The result is pretty large, so here’s just the beginning of it:

{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/pods",
    "resourceVersion": "1172"
  },
  "items": [
    {
     "metadata": {
     "name": "elasticsearch-logging-v1-7n3fn",
     "generateName": "elasticsearch-logging-v1-",
     "namespace": "kube-system",
     "selfLink": "/api/v1/namespaces/kube-system/pods/elasticsearch-logging-v1-7n3fn",
     "uid": "19617fb6-2e4d-11e6-a8cc-0a800fdf3429",
     ...
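
The API also lets you scope the query to a single namespace by putting it in the path. For example, to list only the pods in kube-system:

curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/namespaces/kube-system/pods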

OK, let’s create a namespace then. The namespace is defined in a file called new_namespace.yaml with this content:

apiVersion: v1
kind: Namespace
metadata:
  name: kubectl

All the files used in this post can be found in our GitHub repo. Now, let’s create the namespace:

kubectl create -f new_namespace.yaml

Now via the HTTP API, this time using api_ns.yaml, which defines a namespace called api:

curl -k -H "Content-Type: application/yaml" -H "Authorization: Bearer $TOKEN" -XPOST -d"$(cat api_ns.yaml)" $API_SERVER/api/v1/namespaces
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "api",
    "selfLink": "/api/v1/namespaces/api",
    "uid": "aee75ba2-2e5a-11e6-a8cc-0a800fdf3429",
    "resourceVersion": "1421",
    "creationTimestamp": "2016-06-09T15:56:09Z"
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }
}
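
To double-check, we can list the namespaces with both tools and make sure the new ones show up:

kubectl get namespaces
curl -k -X GET -H "Authorization: Bearer $TOKEN" $API_SERVER/api/v1/namespaces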

Now, let’s create some containers:

kubectl create -f kubectl_nginx.yaml --namespace=kubectl

With our API:

curl -k -H "Content-Type: application/yaml" -H "Authorization: Bearer $TOKEN" -XPOST -d"$(cat api_nginx.yaml)" $API_SERVER/api/v1/namespaces/api/replicationcontrollers

Now, if we pull all the pods we will see the expected results:

NAMESPACE     NAME                                                               READY     STATUS             RESTARTS   AGE
kube-system   elasticsearch-logging-v1-7n3fn                                     1/1       Running            0          1h
kube-system   elasticsearch-logging-v1-qcr9m                                     1/1       Running            0          1h
kube-system   fluentd-elasticsearch-ip-172-20-0-166.eu-west-1.compute.internal   1/1       Running            0          1h
kube-system   fluentd-elasticsearch-ip-172-20-0-167.eu-west-1.compute.internal   1/1       Running            0          1h
kube-system   heapster-v1.1.0.beta2-2783873945-91yz7                             4/4       Running            0          1h
kube-system   kibana-logging-v1-f4q1e                                            0/1       CrashLoopBackOff   25         1h
kube-system   kube-proxy-ip-172-20-0-166.eu-west-1.compute.internal              1/1       Running            0          1h
kube-system   kube-proxy-ip-172-20-0-167.eu-west-1.compute.internal              1/1       Running            0          1h
kube-system   kubernetes-dashboard-v1.1.0-beta1-imh41                            1/1       Running            0          1h
kube-system   monitoring-influxdb-grafana-v3-12tud                               2/2       Running            0          1h
kubectl       nginx-we1fb                                                        1/1       Running            0          6m
api           nginx-7nnad                                                        1/1       Running            0          46s

It’s clear that CRUD is covered by both the API and kubectl, so let’s scale pods up and down:

kubectl scale rc nginx --replicas=3 --namespace=kubectl
NAME          READY     STATUS    RESTARTS   AGE
nginx-onnme   1/1       Running   0          8s
nginx-p3zwa   1/1       Running   0          8s
nginx-we1fb   1/1       Running   0          8m

With the API we use PUT, which updates the object, so we have to re-submit the modified object:

curl -k -H "Content-Type: application/yaml" -H "Authorization: Bearer $TOKEN" -XPUT -d"$(cat api_nginx-3.yaml)" $API_SERVER/api/v1/namespaces/api/replicationcontrollers/nginx
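
If re-submitting the whole object feels heavy, the API also accepts a PATCH so you only send the fields you want to change. A minimal sketch, assuming your cluster version supports strategic merge patch:

curl -k -X PATCH -H "Content-Type: application/strategic-merge-patch+json" -H "Authorization: Bearer $TOKEN" -d '{"spec": {"replicas": 3}}' $API_SERVER/api/v1/namespaces/api/replicationcontrollers/nginx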

What about watching resources for changes?

watch kubectl get pods

With the API:

curl -k -X GET -H "Authorization: Bearer $TOKEN"   $API_SERVER/api/v1/pods?watch=true
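
The watch endpoint streams one JSON event per line, each with a type (ADDED, MODIFIED, DELETED) and the full object. If you have jq, a small filter makes the stream easier to follow; something like this should work:

curl -k -N -H "Authorization: Bearer $TOKEN" "$API_SERVER/api/v1/pods?watch=true" | jq -c '{type: .type, pod: .object.metadata.name}'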

In summary, kubectl is a great tool for interacting with Kubernetes, but remember that there’s also a fantastic API for interacting with the cluster.

What are the main differences? Well, a person should use kubectl, but a system or an application should use the API. In fact, if you’re building a tool that interacts with Kubernetes and you’re using kubectl under the hood, I bet it’s because you feel more comfortable with kubectl than with the REST API… But nothing comes for free: you trade the familiarity of the tool for having to parse text instead of the clearly defined objects the API returns.

Truth is, the API documentation is not great either, so… maybe we should ask for a bunch of examples that use the API so people can learn faster :)



Accessing Kubernetes Apiserver

by Ivan Pedrazas 2016-06-06 kubernetes security tokens

The process to access the API server is very simple. The apiserver has a flag that defines what type of authorization is desired:

  • --authorization-mode=AlwaysDeny blocks all requests (used in tests).
  • --authorization-mode=AlwaysAllow allows all requests; use if you don’t need authorization.
  • --authorization-mode=ABAC allows for user-configured authorization policy. ABAC stands for Attribute-Based Access Control.
  • --authorization-mode=Webhook allows for authorization to be driven by a remote service using REST.

To allow Basic Auth and/or tokens, we have to select ABAC.

Access

To access the API server via tokens there are 2 things that need to be defined: the token/user and what the user is allowed to do. Tokens are defined in a file, policies are defined in a different file.

These configuration files have to be passed to the kube-apiserver using the following parameters:

  • --authorization-mode=ABAC
  • --token-auth-file=/srv/kubernetes/auth_tokens.csv
  • --authorization-policy-file=/srv/kubernetes/auth-policy.json

If you want to allow Basic Auth, you have to specify the file containing the users and passwords:

  • --basic-auth-file=/srv/kubernetes/basic_auth.csv

Example of running the apiserver with those flags:

/bin/sh -c /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd-servers=http://127.0.0.1:4001
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota
--token-auth-file=/srv/kubernetes/auth_tokens.csv
--authorization-mode=ABAC
--authorization-policy-file=/srv/kubernetes/auth-policy.json
--basic-auth-file=/srv/kubernetes/basic_auth.csv

Here are examples of the files used by the apiserver:

Example of tokens in auth_tokens.csv:

Wx4WOTOmFoY5yXaoMPtHdnKeFLPYeBBL,admin,admin
jD34eFwNrJo9urd7QWLMALOjK7R58j1g,kubelet,kubelet
PxgQ4vSloVIhfFwx9WYaj8uke93JVBHh,kube_proxy,kube_proxy
i2TgpiZFZQNkIydDZzVkxmTHl3Q2hPNn,ivan,ivan

Example of user/password for Basic Auth basic_auth.csv:

GGIfwZn63i3NMWeN,admin,admin

Example of an authorization policy file, auth-policy.json:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"ivan", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kube_proxy", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubecfg", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"client", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}

Testing

If you want to test the access, you can try the following commands:

Access using Basic Auth:

curl -k -X GET -H   "Authorization: Basic YWRtaW46R0dJZndabjYzaTNOTVdlTg=="   https://$API_SERVER

Note that the string YWRtaW46R0dJZndabjYzaTNOTVdlTg== is the result of

echo -n "admin:GGIfwZn63i3NMWeN" | base64

Access using tokens:

curl -k -X GET -H "Authorization: Bearer i2TgpiZFZQNkIydDZzVkxmTHl3Q2hPNn"    https://$API_SERVER
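
Since the policy file above gives the ivan user access to every namespace and resource, the same token can hit resource paths too, not just the root. For example:

curl -k -X GET -H "Authorization: Bearer i2TgpiZFZQNkIydDZzVkxmTHl3Q2hPNn" https://$API_SERVER/api/v1/namespaces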

Policy File Format

For mode ABAC, also specify --authorization-policy-file=SOME_FILENAME. The file format is one JSON object per line. There should be no enclosing list or map, just one map per line. Each line is a “policy object”. A policy object is a map with the following properties:

  • Versioning properties:
    • apiVersion, type string; valid values are “abac.authorization.kubernetes.io/v1beta1”. Allows versioning and conversion of the policy format.
    • kind, type string: valid values are “Policy”. Allows versioning and conversion of the policy format.
  • spec property set to a map with the following properties:
    • Subject-matching properties:
      • user, type string; the user-string from --token-auth-file. If you specify user, it must match the username of the authenticated user. * matches all requests.
      • group, type string; if you specify group, it must match one of the groups of the authenticated user. * matches all requests.
    • readonly, type boolean, when true, means that the policy only applies to get, list, and watch operations.
    • Resource-matching properties:
      • apiGroup, type string; an API group, such as extensions. * matches all API groups.
      • namespace, type string; a namespace string. * matches all resource requests.
      • resource, type string; a resource, such as pods. * matches all resource requests.
    • Non-resource-matching properties:
      • nonResourcePath, type string; matches the non-resource request paths (like /version and /apis). * matches all non-resource requests. /foo/* matches /foo/ and all of its subpaths.
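
Putting it together, here is a sketch of a more restrictive policy line: a hypothetical user alice who can only read pods in the dev namespace (adjust the path to match your --authorization-policy-file):

echo '{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "dev", "resource": "pods", "readonly": true}}' >> /srv/kubernetes/auth-policy.json

Note that the apiserver reads this file at startup, so you will need to restart it to pick up the change.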



Kubernetes Play

by Ivan Pedrazas 2016-05-10 kubernetes kubernetes tools playground test

Recently we were talking about building a Kubernetes cluster to let people play with and familiarise themselves with Kubernetes. However, building such a cluster is not an easy task. Security was our main concern; not because Kubernetes is not secure enough, but because having a Kubernetes cluster exposed to the public could be dangerous.

While thinking about the different aspects we would have to cover and build, I remembered a website called “Katacoda” where you could learn Docker. I went there and, sure enough, Ben had built a Kubernetes playground!!!

So, if you want to familiarise yourself with Kubernetes, I strongly advise you to go there and follow the different tutorials. The great thing about Katacoda is that you do not need anything special, just a good old browser (not too old, please).



Using Kong with Kubernetes

by Álex González 2015-12-17 kubernetes kong microservices

If you don’t know about Kong yet, you should take a look. It’s an open source API gateway; they define themselves as “The open-source management layer for APIs, delivering high performance and reliability”, and they are quite right.

I was playing with Kong lately at work (jobandtalent.com, we are hiring!) and I think it could be pretty awesome as an entry layer to your microservices platform running on Kubernetes.

For the sake of simplicity I will not run Kong in Kubernetes, but it shouldn’t be difficult since they already provide Docker images. Also, by running Kong on the same cluster you will be able to use internal networking between pods: win-win.

So, what will I show?

  • I will deploy 2 pods (our 2 microservices) to a Kubernetes cluster, and
  • I will install Kong locally and configure it to point to these 2 services.

Go & packing

I’ve created a small Go app that will show the value of an environment variable when you GET /:

package main

import (
  "fmt"
  "log"
  "net/http"
  "os"
)

func main() {
  // Respond to every request with the value of the TEST_RESULT env var.
  http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, os.Getenv("TEST_RESULT"))
  })

  log.Fatal(http.ListenAndServe(":8080", nil))
}

Now we will build the application so we can later pack it into our image. Remember that if you are on a Mac you will need to cross-compile the app so it runs on Linux:

$ GOOS=linux go build -o app

We can pack it into our image now. To do so we need a Dockerfile. It’s a single binary, so the Dockerfile is not complex at all:

FROM scratch
ADD app /
ENTRYPOINT ["/app"]

Cool! What can we do now with our shiny image? Yes, you are right! Push it to the hub:

$ docker build -t agonzalezro/kong-test .
$ docker push agonzalezro/kong-test

k8s

We have our image on the registry and all we need to do now is run it on Kubernetes. I am using Google Container Engine to deploy this, but you can use whatever you prefer.

Let’s create our RCs & services:

# rc1.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: api1
spec:
  selector:
    name: api
    version: first
  template:
    metadata:
      labels:
        name: api
        version: first
    spec:
      containers:
        - name: app
          image: agonzalezro/kong-test
          env:
            - name: TEST_RESULT
              value: "This is the first app"

# rc2.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: api2
spec:
  selector:
    name: api
    version: second
  template:
    metadata:
      labels:
        name: api
        version: second
    spec:
      containers:
        - name: app
          image: agonzalezro/kong-test
          env:
            - name: TEST_RESULT
              value: "Second!"

# svc1.yml
apiVersion: v1
kind: Service
metadata:
  name: app1-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    name: api
    version: first

# svc2.yml
apiVersion: v1
kind: Service
metadata:
  name: app2-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    name: api
    version: second

And now run it:

$ kubectl create -f rc1.yml -f rc2.yml -f svc1.yml -f svc2.yml

Wait for the services and the pods to be ready and check their IPs:

$ kubectl get services
NAME         CLUSTER_IP      EXTERNAL_IP      PORT(S)   SELECTOR                  AGE
app1-svc     10.159.242.86   130.211.89.175   80/TCP    name=api,version=first    17m
app2-svc     10.159.246.93   104.155.53.175   80/TCP    name=api,version=second   17m
kubernetes   10.159.240.1    <none>           443/TCP   <none>                    1h
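
If you want to confirm which pods each service will pick up, you can filter by the same labels the services use as selectors:

$ kubectl get pods -l name=api,version=first
$ kubectl get pods -l name=api,version=second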

Kong

Follow the instructions here: https://getkong.org/install/docker/ to install Kong locally.

Yeah! We have it up & running, so let’s point it to our shiny cluster. We need to use the Kong admin API for that (port :8001):

$ http http://dockerhost:8001/apis/ name=first upstream_url=http://130.211.89.175 request_path=/first strip_request_path=true
$ http http://dockerhost:8001/apis/ name=second upstream_url=http://104.155.53.175 request_path=/second strip_request_path=true

What did we do here? We set up two new endpoints, /first & /second, that point to the two Kubernetes services previously created. We could have done it with DNS as well, using request_host instead.
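
For reference, a DNS-based registration would look roughly like this (a sketch with made-up hostnames, assuming your Kong version supports request_host):

$ http http://dockerhost:8001/apis/ name=first upstream_url=http://130.211.89.175 request_host=first.example.com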

Let’s call Kong on port :8000 to use them:

$ http http://dockerhost:8000/first
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 21
Content-Type: text/plain; charset=utf-8
Date: Thu, 17 Dec 2015 21:43:41 GMT
Via: kong/0.5.4

This is the first app

$ http http://dockerhost:8000/second
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 7
Content-Type: text/plain; charset=utf-8
Date: Thu, 17 Dec 2015 21:43:44 GMT
Via: kong/0.5.4

Second!

\o/ We did it!

Next steps

You have Kong pointed at your cluster; now it’s up to your imagination what to do next. I would say try configuring some rate limiting or auth, it’s dead simple. Check the plugins here: https://getkong.org/plugins/
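
As a taster, enabling the rate-limiting plugin on the first API is a single call to the admin port (a sketch; check the plugin docs for the exact config keys in your Kong version):

$ http http://dockerhost:8001/apis/first/plugins/ name=rate-limiting config.minute=100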

If you have any questions or want to discuss this further, let me know at @agonzalezro.



Deploy by Labels

by Ivan Pedrazas 2015-12-04 15:38 kubernetes management labels core

Labels in Kubernetes are the only way of grouping, filtering and selecting resources. In my experience, we do not use labels enough. Unlike AWS, Kubernetes lets you define as many labels as you want, so why are we not squeezing all the potential out of labels?

I’d say it’s because we’re not used to them, and because we lack clear use cases. Yes, it boils down to us getting used to organising and managing things in a specific way.

In my experience, deployment is one of the areas where being smart with your labels can take you a very long way.

Have you ever done a deployment where everything went well, but you had to roll back for a non-technical issue? Something like “Oh, God, that promotion cannot go live just yet… ROLLBACK! ROLLBACK!!”

Labels are key/value pairs. I usually tag or label my resources with:

  • state: production, test, lab…
  • version: the version of the app following SemVer
  • channel: “public”, “internal”, “canary”
  • owner: who owns the resource. This can make your life easier when doing internal billing, or when you need to know who to contact if something goes wrong with it.
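
Once your resources carry those labels, selecting things becomes trivial. For example, to see everything currently serving public production traffic:

    kubectl get pods -l state=production,channel=public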

So, how do you deploy a new version of your application?

Let’s assume the initial state is this:

  • Replication Controller: deep-frontend, replicas: 3
  • Pod: deep-frontend-38x6n (state: production, version: 1.0.7, channel: public, owner: ipedrazas)
  • Service: selector state=production, channel=public, version=1.0.7

Note that your app is available through the service, and the service binds to the pods that have the labels state=production, channel=public and version=1.0.7.

If you now deploy a new replication controller you will have the following scenario:

  • Replication Controller: deep-frontend, replicas: 3
  • Replication Controller: deep-frontend_new, replicas: 3
  • Pod: deep-frontend-38x6n (state: production, version: 1.0.7, channel: public, owner: ipedrazas)
  • Pod: deep-frontend_new-jag1p (state: production, version: 1.0.8, channel: public, owner: ipedrazas)
  • Service: selector state=production, channel=public, version=1.0.7

You can see where I’m going. You can then update the version selector in the service to version=1.0.8 and your service is repointed to the new app. As you can see, the old app is still in the system, so rolling back is just a matter of setting the selector back to its original value.

    kubectl label pods deep-frontend-38x6n version=1.0.8 --overwrite
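
If you prefer to flip the selector on the service itself rather than relabelling pods, something like this should do it (a sketch, assuming the service is called deep-frontend and your kubectl version supports patch):

    kubectl patch service deep-frontend -p '{"spec": {"selector": {"state": "production", "channel": "public", "version": "1.0.8"}}}'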

Production deployments cover a very wide range of situations. Trying to find a golden rule for deployments (standardisation, standardisation!) usually doesn’t work well, so it’s better to define certain patterns that you can adapt to your needs.

Labels are very powerful, and they’re usually not used to their full potential. This post doesn’t try to convince you to change your deployment strategy, but to illustrate different ways of doing the same thing, and different use cases of a Kubernetes artifact that can change the way you understand your landscape.

If you want to know a bit more about Labels, here’s the documentation.



Private Registry

by Ivan Pedrazas 2015-12-03 09:58 kubernetes registry core

The first time we tried to use a private registry in Kubernetes we got bitten by a weird bug: the format of the .dockercfg.

If you read the documentation you will see that you have to create a secret, and then use that secret in your pod definition.

What seems to be missing from the docs is that the format of the JSON that contains the registry auth info is important.

This is the example in the documentation:

$ echo $(cat ~/.dockercfg)
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }

But we need the base64-encoded text:

$ cat ~/.dockercfg | base64
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K

Next, we create the secret definition using the base64 output:

$ cat > /tmp/image-pull-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  .dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
type: kubernetes.io/dockercfg
EOF

Finally, we create the secret in the cluster:

$ kubectl create -f /tmp/image-pull-secret.yaml
secrets/myregistrykey

Once we have the secret, we can use it in our pods:

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey

This is all well and good, but the format of .dockercfg is the kind of detail that will have you running around calling people names. So, what’s the problem? If you execute the first command of the post:

$ echo $(cat ~/.dockercfg)

You will see that it returns one line. However, between catting the file and echoing the cat of the file there’s one little detail: the echo collapses the file contents into one single line.

Now, look at this:

-> % cat .dockercfg| base64
ewoKCSJodHRwczovL2luZGV4LmRvY2tlci5pby92MS8iOiB7CgkJImF2dGgiOiAiYVhCbFpISmhl
bUZ6T25Rd2REQnlNRMV5T21SdiIsCgkJImVtYWlsIjogImlwZWRyYXphc0BnbWFpbC5jb20iCgl9
Cn0=

-> % echo $(cat .dockercfg) | base64
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV2aCI6ICJhWEJsWkhKaGVtRnpP
blF3ZERCeU1ERXlPbVJ2KiwgImVtYWlsIjogImlwZWRyYXphc0BnbWFpbC5jb20iIH0gfQo=

I’m not sure why nobody has bothered adding this little note to the docs, but if you don’t echo the cat, the secret will be wrong. Anyway, from now on, remember that the format of the JSON is important… so make sure the JSON you’re using is on one single line.
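
A quick sanity check before creating the secret: decode the base64 value and make sure it comes back as a single line of valid JSON (on Linux the flag is --decode; on a Mac it may be -D):

$ echo $(cat ~/.dockercfg) | base64 | base64 --decode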



Jobs

by Ivan Pedrazas 2015-11-30 12:08 kubernetes job core

No, this post is not about recruitment, it’s about the Jobs artifact that was introduced with version 1.1.

Jobs are the Kubernetes way of running one-off containers (or pods). Until now, to run a container you would define a pod and schedule it without a replication controller. Now we have the concept of a “job”. In reality, “task” would be more appropriate but, as we can see in the roadmap, scheduled jobs will come in version 1.2.

If we read the definition in the docs it says:

A job creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the job tracks the successful completions. When a specified number of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the pods it created.

So, what can we do with jobs now? One of the things we do is run integration tests. You deploy your artifacts (pods, replication controllers and services) and then run a job to test that everything works as expected.

Building a CI pipeline is hard. Using Kubernetes simplifies certain aspects a bit (like the deployment) but makes other bits harder (like knowing when everything is ready). We’re preparing a big post about CI, so stay tuned.

A job definition is very similar to a Replication controller or a Pod:

apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: pi
spec:
  selector:
    matchLabels:
      app: pi
  template:
    metadata:
      name: pi
      labels:
        app: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Note that the apiVersion uses extensions. The schema of a job will change once the scheduling features land in version 1.2, but for now it’s very straightforward: define the artifact kind as Job, include the pod template definition and off you go. Another thing to note is the restartPolicy: for a job you can set it to OnFailure or Never, depending on what you think is appropriate (Always doesn’t make sense for a pod that is expected to finish).
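
To try it out, save the definition above to a file (pi-job.yaml is just the name I picked), create it, and read the result from the pod’s logs once the job completes:

kubectl create -f pi-job.yaml
kubectl get jobs
kubectl get pods -l app=pi
kubectl logs <name-of-the-pi-pod>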



Kubernetes, a blog

by Ivan Pedrazas 2015-11-26 11:57 kubernetes

One of the things that I really missed during my journey into Docker was having a place to put all the notes, posts and lessons learnt… which ended up scattered across gists, posts and text files all over the place.

Now with Kubernetes I thought… perhaps I should start being a bit more organised, and since I “had” to register this domain, I thought, you know what, it would be great if, apart from me, anyone else could contribute to it too, so I decided to go for a very open approach.

So, if you want to publish a post in this blog, just fork and send a PR!

Let’s make the Kubernetes community even more awesome!



k8s.uk is made with ♥ by @agonzalezro and @ipedrazas