Kubernetes Multi-Container Pods

In the earlier post, we took a look at single-container pods. In this post, we will explore the details of working with multi-container pods. This is where we really start to hit the good stuff. As we learn about multi-container pods, we will also cover namespaces and pod logs.

We will be using a sample application for demonstration. It’s a simple application that increments and prints a counter. It is split into four containers across three tiers.

  • The application tier includes the server container, a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the current value of the counter.
  • The counter is stored in redis, which comprises the data tier.
  • The support tier includes a poller and a counter. The poller container continually makes a GET request to the server and prints the value. The counter container continually makes a POST request to the server with random values.

All the containers use environment variables for configuration.

Also, the Docker images are public, so we can reuse them for this exercise. Let’s walk through modeling the application as a multi-container pod.

Namespaces

We’ll start by creating a namespace. A namespace separates different Kubernetes resources.

Namespaces may be used to isolate users, environments or applications.

You can also use Kubernetes role-based access control (RBAC) to manage users’ access rights to resources in a given namespace.
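For illustration (we won’t need it for this exercise), here’s a hedged sketch of a Role and RoleBinding that would grant a hypothetical user read-only access to pods in the microservice namespace we’re about to create. The pod-reader, read-pods, and alice names are all illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader             # illustrative name
  namespace: microservice
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods              # illustrative name
  namespace: microservice
subjects:
  - kind: User
    name: alice                # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io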

Using namespaces is a best practice. They’re created just like any other Kubernetes resource. Here’s an example of a simple namespace manifest.

apiVersion: v1
kind: Namespace
metadata:
  name: microservice
  labels:
    app: counter

Namespaces don’t require a spec. The main part is the name, which is set to microservice, and it’s a good idea to label it as well. Everything in this namespace will relate to the counter microservices app.

ubuntu@ip-10-0-128-5:~/src# kubectl create -f 3.1-namespace.yaml
namespace/microservice created
ubuntu@ip-10-0-128-5:~/src# kubectl get namespaces
NAME              STATUS   AGE
default           Active   35m
kube-node-lease   Active   35m
kube-public       Active   35m
kube-system       Active   35m
microservice      Active   6s
ubuntu@ip-10-0-128-5:~/src# kubectl describe namespace microservice | more
Name:         microservice
Labels:       app=counter
Annotations:  <none>
Status:       Active

No resource quota.

No resource limits.
ubuntu@ip-10-0-128-5:~/src#

You can also use kubectl create namespace microservice instead of kubectl create -f manifest.yaml, though the shorthand won’t apply the label from our manifest.

Future kubectl commands need to use the --namespace or -n option to specify the namespace; otherwise, the default namespace is used.
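If you’d rather not type -n every time, one option (purely a convenience; the rest of this post keeps the explicit option) is to change the default namespace recorded in your current kubectl context:

# Make microservice the default namespace for the current context
kubectl config set-context --current --namespace=microservice

# Check which namespace the current context now uses
kubectl config view --minify | grep namespace: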

Now on to the pod. In the manifest below, the Pod is named app. Right off the top, I want to mention that you can specify a namespace in the metadata, but that makes the manifest slightly less portable because the namespace can’t then be overridden at the command line.

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: redis
      image: redis:latest
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 6379

    - name: server
      image: lrakai/microservices:server-v1
      ports:
        - containerPort: 8080
      env:
        - name: REDIS_URL
          value: redis://localhost:6379

    - name: counter
      image: lrakai/microservices:counter-v1
      env:
        - name: API_URL
          value: http://localhost:8080

    - name: poller
      image: lrakai/microservices:poller-v1
      env:
        - name: API_URL
          value: http://localhost:8080

redis container: Moving down to the redis container, we’ll use the latest official redis image. The latest tag is chosen to illustrate a point: when you use the latest tag, Kubernetes will pull the image every time the pod is started. This can introduce bugs if a pod restarts and silently pulls a new latest version without you realizing it. To avoid always pulling and to use an existing image if one is present, you can set the imagePullPolicy field to IfNotPresent. It’s useful to know this, but in most situations you are better off specifying a specific tag rather than latest. When a specific tag is used, the default image pull policy is IfNotPresent. The standard redis port of 6379 is declared.
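A sketch of what pinning a version looks like (the 5.0.8 tag is illustrative; any explicit tag gets the IfNotPresent default):

containers:
  - name: redis
    image: redis:5.0.8   # explicit tag, so imagePullPolicy defaults to IfNotPresent
    ports:
      - containerPort: 6379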

server container: Now on to the server container, which is straightforward. The image is the public image from the sample application, and the tag indicates the microservice within the microservices repository: in lrakai/microservices:server-v1, server-v1 is the server microservice. The server runs on port 8080, so that port is declared. The server also requires a REDIS_URL environment variable to connect to the data tier, which we set in the env list.

How does the server container know where to find redis?

Because containers in a pod share the same network stack, they all share the same IP address and can reach one another on localhost at their declared container ports. The correct host:port for this example is localhost:6379. The image pull policy is omitted because Kubernetes defaults to IfNotPresent when an explicit tag is given. We can use the same approach for the counter and poller containers. These containers require the API_URL environment variable to reach the server in the application tier; the correct host:port combination for this example is localhost:8080.

Now let’s create the pod, this time adding the -n option to set the namespace. The pod will be created in our microservice namespace.

ubuntu@ip-10-0-128-5:~/src# kubectl create -f 3.2-multi_container.yaml -n microservice
pod/app created
ubuntu@ip-10-0-128-5:~/src#

Remember to include the same namespace option with all kubectl commands related to the pod, otherwise you will be targeting the default namespace.

Get the pod by entering kubectl get -n microservice pod app

ubuntu@ip-10-0-128-5:~/src# kubectl get -n microservice pod app
NAME   READY   STATUS    RESTARTS   AGE
app    4/4     Running   0          36s
ubuntu@ip-10-0-128-5:~/src#

The -n namespace option can be included anywhere after kubectl; it doesn’t have to come after get. When you have tab completion enabled, it makes sense to put it earlier so the completions target the right namespace. Observe the output shows 4/4 under READY since we have four containers in the pod and all four are ready.
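While we’re here, a quick way to verify the shared network stack described earlier: every container in the pod reports the same pod IP, and kubectl exec can target an individual container with the -c option. This sketch assumes the official redis image, which bundles the redis-cli client:

# All four containers share this single pod IP
kubectl get pod app -n microservice -o jsonpath='{.status.podIP}'

# Exec into the redis container specifically and ping redis over localhost
kubectl exec -n microservice app -c redis -- redis-cli -h localhost ping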

The STATUS column summarizes what is going on, but it’s best to describe the pod to see what’s going on in detail:

kubectl describe -n microservice pod app

ubuntu@ip-10-0-128-5:~/src# kubectl describe -n microservice pod app
Name:         app
Namespace:    microservice
Priority:     0
Node:         ip-10-0-0-66.us-west-2.compute.internal/10.0.0.66
Start Time:   Sun, 26 Apr 2020 22:29:02 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           192.168.169.65
Containers:
  redis:
    Container ID:   docker://981b637dcab1090850636bfb7c03a533ea14525c7f3bec58efd299d506896e40
    Image:          redis:latest
    Image ID:       docker-pullable://redis@sha256:157a95b41b0dca8c308a33489dfdb28019e033110320414b4b16fad7d28c0f9f
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 26 Apr 2020 22:29:09 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nlrmn (ro)
  server:
    Container ID:   docker://e5b4c5a4f2b1c592a0818cb9ad10129d59e95cd944edcd80a13752baa1d76c47
    Image:          lrakai/microservices:server-v1
    Image ID:       docker-pullable://lrakai/microservices@sha256:9e3e3c45bb9d950fe7a38ce5e4e63ace2b6ca9ba8e09240f138c5df39d7b7587
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 26 Apr 2020 22:29:13 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      REDIS_URL:  redis://localhost:6379
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nlrmn (ro)
  counter:
    Container ID:   docker://92ab2b8497344483052e2d218c2dace6addec12844dcc11607c4080ca8f8e7c4
    Image:          lrakai/microservices:counter-v1
    Image ID:       docker-pullable://lrakai/microservices@sha256:d8eeb8a11da056400c0e01c95025053e291b5bd973881430f9788c449d8457e8
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 26 Apr 2020 22:29:15 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      API_URL:  http://localhost:8080
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nlrmn (ro)
  poller:
    Container ID:   docker://c7049c3ed37c732d5602139ecadef9eb9318db06ee196968e22eb87ecd92777e
    Image:          lrakai/microservices:poller-v1
    Image ID:       docker-pullable://lrakai/microservices@sha256:aaa48e19ac4a3a6e21c07fb1ce0bc5688e1279fef912a9bf85258f77f78bd82f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 26 Apr 2020 22:29:17 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      API_URL:  http://localhost:8080
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nlrmn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-nlrmn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nlrmn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                              Message
  ----    ------     ----  ----                                              -------
  Normal  Scheduled  21m   default-scheduler                                 Successfully assigned microservice/app to ip-10-0-0-66.us-west-2.compute.internal
  Normal  Pulling    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Pulling image "redis:latest"
  Normal  Created    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Created container redis
  Normal  Pulled     21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Successfully pulled image "redis:latest"
  Normal  Started    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Started container redis
  Normal  Pulling    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Pulling image "lrakai/microservices:server-v1"
  Normal  Pulled     21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Successfully pulled image "lrakai/microservices:server-v1"
  Normal  Pulling    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Pulling image "lrakai/microservices:counter-v1"
  Normal  Created    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Created container server
  Normal  Started    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Started container server
  Normal  Pulled     21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Successfully pulled image "lrakai/microservices:counter-v1"
  Normal  Created    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Created container counter
  Normal  Started    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Started container counter
  Normal  Pulling    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Pulling image "lrakai/microservices:poller-v1"
  Normal  Pulled     21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Successfully pulled image "lrakai/microservices:poller-v1"
  Normal  Created    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Created container poller
  Normal  Started    21m   kubelet, ip-10-0-0-66.us-west-2.compute.internal  Started container poller
ubuntu@ip-10-0-128-5:~/src#

You’ll see the event log has more going on now that there are multiple containers. The same events are triggered for each container, from pulling the image to starting the container, just as for a single-container pod. If something goes wrong, check the event log to see what’s happening behind the scenes. Everything looks OK for us, though.
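If you want just the events without the rest of the describe output, they can also be listed directly:

# List events in the namespace, sorted oldest to newest
kubectl get events -n microservice --sort-by=.metadata.creationTimestamp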

View the logs

Once the containers are running we can look at the container logs to see what they are doing.

Logs are simply anything that is written to standard output or standard error in the container.

The containers need to write messages to standard output or standard error, otherwise nothing will appear in the logs. The containers in this example all follow that best practice, so we can see what they are doing. Kubernetes records the logs, and they can be viewed with the kubectl logs command, which retrieves the logs for a specific container in a given pod. It dumps all of the logs by default, or you can use the --tail option to limit the number of lines presented. Let’s see the 10 most recent log lines for the counter container in the app pod.

In general, you will run: kubectl logs -n <namespace> <pod> <container> --tail <lines>

kubectl logs -n microservice app counter --tail 10

ubuntu@ip-10-0-128-5:~/src# kubectl logs -n microservice app counter --tail 10
Incrementing counter by 5 ...
Incrementing counter by 4 ...
Incrementing counter by 8 ...
Incrementing counter by 8 ...
Incrementing counter by 6 ...
Incrementing counter by 4 ...
Incrementing counter by 7 ...
Incrementing counter by 3 ...
Incrementing counter by 8 ...
Incrementing counter by 3 ...
ubuntu@ip-10-0-128-5:~/src#

Here we can see the counter is incrementing the count by random numbers between 1 and 10.
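A couple of other handy variations on the logs command (both are standard kubectl logs flags):

# Prefix each line with its timestamp
kubectl logs -n microservice app counter --tail 10 --timestamps

# Only show lines logged in the last minute
kubectl logs -n microservice app counter --since 1m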

Let’s check the value of the count by inspecting the logs for the poller container. This time we’ll use the -f option to stream the logs in real time.

kubectl logs -n microservice app poller -f

ubuntu@ip-10-0-128-5:~/src# kubectl logs -n microservice app poller -f
Current counter: 12
Current counter: 23
Current counter: 43
Current counter: 60
Current counter: 75
Current counter: 86
Current counter: 90
...
...
Current counter: 2706
Current counter: 2721
Current counter: 2730
^C
ubuntu@ip-10-0-128-5:~/src#

We can see the count increasing every second as the counter container continues to increment it. That confirms it: our first multi-container application is up and running. Press Ctrl+C to stop following the logs.

Summary

  • We created a multi-container pod that implements a three-tier application, relying on the fact that containers in the same pod can communicate with one another over localhost.

  • We also saw how to get logs from containers running in Kubernetes using the kubectl logs command. Remember that logs record what the container writes to standard output and standard error. The logs allowed us to confirm the application is working as expected, continuously incrementing the count.

  • But there are some issues with the current implementation. Because pods are the smallest unit of work, Kubernetes can only scale out by increasing the number of pods, not the containers inside a pod. If we want to scale out the application tier with the current design, we’d have to scale out all the other containers proportionally as well. That also means there would be multiple redis containers running, each with its own copy of the counter. That’s certainly not what we’re going for. It is a much better approach to scale each service independently, which means breaking the application out into multiple pods and connecting them with services.

I’ll walk through that design in the next post. But before moving on, it’s worth noting that sometimes you do want the containers in a pod to scale together. It comes down to how tightly coupled the containers are and whether it makes sense to think of them as a single unit.

With that point out of the way, in the next post we will leverage services to break our tightly coupled pod design into multiple independent pods.