Need for Init Containers
Sometimes you need to perform tasks or check prerequisites before the main application container starts. Some examples include:
- waiting for a service to be created,
- downloading files the application depends on,
- or dynamically deciding which port the application should use.
The code that performs these tasks could be crammed into the main application image, but it is better to keep a clean separation between the main application and supporting functionality, keeping the image footprints as small as possible. However, the tasks are closely linked to the main application (same pod) and must run before the main application starts. Kubernetes provides init containers as a way to run tasks that must complete before the main containers start.
Pods may declare any number of init containers. They run sequentially, in the order they are declared, and each init container must run to completion before the next one begins. Once all of the init containers have completed, the main containers in the pod can start. Init containers can use different images from the main containers in a pod.
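As a minimal sketch of this ordering (the pod name, images, and commands here are purely illustrative, not part of this article's manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # hypothetical pod for illustration
spec:
  initContainers:            # run in declared order, each to completion
  - name: init-one
    image: busybox
    command: ['sh', '-c', 'echo first init step']
  - name: init-two
    image: busybox
    command: ['sh', '-c', 'echo second init step']
  containers:                # start only after all init containers succeed
  - name: app
    image: busybox
    command: ['sh', '-c', 'echo main app running; sleep 3600']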
Benefits of Init Containers
This provides some benefits:
- They can contain utilities that are not desirable to include in the actual application image for security reasons.
- They can also contain utilities or custom code for setup that is not present in the application image. For example, there is no need to include utilities like `sed`, `awk`, or `dig` in an application image if they are only used during setup.
- Init containers also provide an easy way to block or delay the startup of an application until some preconditions are met. They are similar to readiness probes in this sense, but they run only at pod startup and can perform other useful work.
All these features together make init containers a vital part of the Kubernetes toolbox.
There is one important thing to understand about init containers: they run every time a pod is created. This means they will run once for every replica in a deployment. If a pod restarts, say following a failure, the init containers run again as part of the restart. Thus you have to assume that init containers run at least once, and possibly more than once. This usually means init containers should be idempotent, meaning that running them more than once has no additional effect.
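To illustrate idempotency, here is a hypothetical init script sketch (the `/tmp/demo-init` path and marker file are made up for this example): every step is safe to repeat, so running the script twice leaves the same end state as running it once.

```shell
#!/bin/sh
# Hypothetical idempotent init script: repeated runs have no extra effect.
set -e

CONF_DIR="/tmp/demo-init"     # illustrative path, not from the article
mkdir -p "$CONF_DIR"          # -p makes this safe to repeat

# Only write the marker file if it does not already exist,
# so a second run does not duplicate or overwrite the work.
if [ ! -f "$CONF_DIR/ready" ]; then
  echo "initialized" > "$CONF_DIR/ready"
fi

cat "$CONF_DIR/ready"         # prints: initialized
```

Running this script any number of times produces the same marker file with the same contents.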
To start off, let's create the probes namespace. We will be making use of the manifests from the probes post as our starting point.
Create namespace, data-tier and app-tier
```shell
ubuntu@ip-10-0-128-5:~/src# kubectl create -f 7.1-namespace.yaml
namespace/probes created
ubuntu@ip-10-0-128-5:~/src# kubectl create -f 7.2-data_tier.yaml -n probes
service/data-tier created
deployment.apps/data-tier created
ubuntu@ip-10-0-128-5:~/src# kubectl create -f 7.3-app_tier.yaml -n probes
service/app-tier created
deployment.apps/app-tier created
ubuntu@ip-10-0-128-5:~/src# kubectl get -n probes deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
app-tier    1/1     1            1           2m43s
data-tier   1/1     1            1           2m54s
ubuntu@ip-10-0-128-5:~/src#
```
Declare the init container in the app-tier manifest
Let’s add an init container to our app-tier that will wait for Redis before starting any application servers.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-tier
  labels:
    app: microservices
spec:
  ports:
  - port: 8080
  selector:
    tier: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-tier
  labels:
    app: microservices
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: microservices
        tier: app
    spec:
      containers:
      - name: server
        image: lrakai/microservices:server-v1
        ports:
        - containerPort: 8080
          name: server
        env:
        - name: REDIS_URL
          # Environment variable service discovery
          # Naming pattern:
          #   IP address: <all_caps_service_name>_SERVICE_HOST
          #   Port: <all_caps_service_name>_SERVICE_PORT
          #   Named Port: <all_caps_service_name>_SERVICE_PORT_<all_caps_port_name>
          value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
          # In multi-container example value was
          # value: redis://localhost:6379
        - name: DEBUG
          value: express:*
        livenessProbe:
          httpGet:
            path: /probe/liveness
            port: server
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /probe/readiness
            port: server
          initialDelaySeconds: 3
      initContainers:
      - name: await-redis
        image: lrakai/microservices:server-v1
        env:
        - name: REDIS_URL
          value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
        command:
        - npm
        - run-script
        - await-redis
```
We see that initContainers have the same fields as regular containers in a pod spec, with one exception: initContainers do not support readiness probes, because they must run to completion before the pod can be considered ready. You will receive an error if you try to include a readiness probe in an init container.
You can see that the fields are the same as what we have seen with regular containers. I’ve used the same image as the main application for simplicity; it already has everything we need. The command field overrides the image’s default entrypoint command. For this init container we want to run a script that waits for a successful connection to Redis. The script is already included in the image and is executed with the `npm run-script await-redis` command, which blocks until a connection is established with the Redis URL provided in the REDIS_URL environment variable.
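If an application image did not bundle such a script, a common alternative pattern (sketched here with hypothetical values; the article's setup uses the bundled script instead) is a small generic init container that polls the service's TCP port until it accepts connections:

```yaml
# Hypothetical alternative init container using a busybox wait loop.
# data-tier and 6379 are taken from this article's Redis service;
# everything else is illustrative.
initContainers:
- name: await-redis
  image: busybox
  command:
  - sh
  - -c
  - until nc -z data-tier 6379; do echo waiting for redis; sleep 2; done
```

The loop exits, and the init container completes, only once `nc` can open a connection to the port, delaying the main containers until then.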
Apply the changes to existing deployments
Now let’s apply the changes to the existing deployment. After that, describe the deployment’s pod:
```shell
ubuntu@ip-10-0-128-5:~/src# kubectl apply -f 8.1-app_tier.yaml -n probes
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/app-tier configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/app-tier configured
ubuntu@ip-10-0-128-5:~/src# kubectl describe pod -n probes app-tier-6bf4d544c-kt9vd
Name:         app-tier-6bf4d544c-kt9vd
Namespace:    probes
Priority:     0
Node:         ip-10-0-18-215.us-west-2.compute.internal/10.0.18.215
Start Time:   Tue, 05 May 2020 18:26:40 +0000
Labels:       app=microservices
              pod-template-hash=6bf4d544c
              tier=app
Annotations:  <none>
Status:       Running
IP:           192.168.70.194
Controlled By:  ReplicaSet/app-tier-6bf4d544c
Init Containers:
  await-redis:
    Container ID:  docker://70833194504bd680bef2f272ebfa1b85ec4a9f0342ce29ab5a5f5cb737dc3c61
    Image:         lrakai/microservices:server-v1
    Image ID:      docker-pullable://lrakai/microservices@sha256:9e3e3c45bb9d950fe7a38ce5e4e63ace2b6ca9ba8e09240f138c5df39d7b7587
    Port:          <none>
    Host Port:     <none>
    Command:
      npm
      run-script
      await-redis
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 05 May 2020 18:26:46 +0000
      Finished:     Tue, 05 May 2020 18:26:46 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      REDIS_URL:  redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-66l8q (ro)
Containers:
  server:
    Container ID:   docker://b9fe1afb2de7b4e0d7a29270fdb373c72a296c4f9e95dad293bd861459720fab
    Image:          lrakai/microservices:server-v1
    Image ID:       docker-pullable://lrakai/microservices@sha256:9e3e3c45bb9d950fe7a38ce5e4e63ace2b6ca9ba8e09240f138c5df39d7b7587
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 05 May 2020 18:26:48 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:server/probe/liveness delay=5s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:server/probe/readiness delay=3s timeout=1s period=10s #success=1 #failure=3
    Environment:
      REDIS_URL:  redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
      DEBUG:      express:*
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-66l8q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-66l8q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-66l8q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age  From                                                Message
  ----    ------     ---- ----                                                -------
  Normal  Scheduled  36s  default-scheduler                                   Successfully assigned probes/app-tier-6bf4d544c-kt9vd to ip-10-0-18-215.us-west-2.compute.internal
  Normal  Pulling    35s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Pulling image "lrakai/microservices:server-v1"
  Normal  Pulled     31s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Successfully pulled image "lrakai/microservices:server-v1"
  Normal  Created    31s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Created container await-redis
  Normal  Started    30s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Started container await-redis
  Normal  Pulled     29s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Container image "lrakai/microservices:server-v1" already present on machine
  Normal  Created    29s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Created container server
  Normal  Started    28s  kubelet, ip-10-0-18-215.us-west-2.compute.internal  Started container server
ubuntu@ip-10-0-128-5:~/src#
```
Observe that the event log now shows the entire lifecycle, including init containers. The await-redis init container runs to completion before the server container is created. You can also view init container logs using the usual logs command, specifying the init container's name after the pod name. This is especially important when debugging an init container failure that prevents the main containers from ever being created.
View Init Container logs to debug pod startup issues
```shell
ubuntu@ip-10-0-128-5:~/src# kubectl logs -n probes app-tier-6bf4d544c-kt9vd await-redis
npm info it worked if it ends with ok
...
> node await.js

Connection ok
npm info ok
ubuntu@ip-10-0-128-5:~/src#
```
This concludes our tour of init containers. They give you another mechanism for controlling the lifecycle of pods. You can use them to perform tasks before the main containers have an opportunity to start. This can be useful for checking preconditions, such as verifying that services the application depends on have been created, or for preparing files the application depends on. The files use case requires knowledge of another Kubernetes concept to pull off, namely volumes, which can be used to share files between containers. We will see this next.