Kubernetes ConfigMaps and Secrets

Motivation for using ConfigMaps and Secrets

Up until now, the deployment template has included all of the configuration required by the pod's containers. That is already a big improvement over baking configuration into the binary or container image, which makes it difficult to reuse. However, keeping configuration in the pod spec still makes the manifest less portable, and if the configuration includes sensitive information such as passwords or API keys, it also presents a security issue.

Kubernetes provides the ConfigMap and Secret resource kinds to let you separate configuration from pod specs. This separation makes configuration easier to manage and change, and it makes for more portable manifests. ConfigMaps and Secrets are very similar and are used in the same way by pods. One difference is that Secrets are specifically for storing sensitive information, and they reduce the risk of that data being exposed.

However, the cluster administrator still needs to ensure that proper encryption and access control safeguards are in place before Secrets can really be considered safe.

Another difference is that Secrets have specialized types:

  • one for storing the credentials required to pull images from private registries,
  • and one for storing TLS private keys and certificates.

Refer to the official documentation when you need to make use of those capabilities.
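As a quick illustration of those specialized types, the sketch below shows their type values. The names and placeholder values are hypothetical; they are not part of this demo:

```yaml
# Sketch only: <...> placeholders must be replaced with real base64-encoded data
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
type: kubernetes.io/dockerconfigjson   # for image pull credentials
data:
  .dockerconfigjson: <base64-encoded Docker config JSON>
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-credentials
type: kubernetes.io/tls                # for TLS keys and certificates
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```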

ConfigMaps and Secrets store data as key-value pairs. Pods must reference a ConfigMap or Secret to use its data, and they can consume the data either by mounting it as files using a volume or by injecting it as environment variables.

ConfigMaps Demo

I will show examples of both in the demo. We'll use a ConfigMap to configure Redis by mounting a config file with a volume, and we'll use a Secret to inject a sensitive environment variable into the app-tier.

Create a namespace

First let’s create a config namespace for this demo. The manifest is shown here:

apiVersion: v1
kind: Namespace
metadata:
  name: config
  labels:
    app: counter
ubuntu@ip-10-0-128-5:~/src# kubectl create -f 10.1-namespace.yaml
namespace/config created
ubuntu@ip-10-0-128-5:~/src#

Create a ConfigMap

Now, let's look at the ConfigMap manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  config: | # YAML for multi-line string
    # Redis config file
    tcp-keepalive 240
    maxmemory 1mb

First, notice there is no spec; instead, the key-value pairs that the ConfigMap stores are under a mapping named data. Here we have a single key named config. You can have more than one, but one is enough for our purpose. The value of config is a multi-line string representing the contents of a Redis configuration file. The bar or pipe symbol (|) after config is YAML syntax for starting a multi-line string, and it causes all of the following indented lines, including the Redis config file comment, to become the value of config. The configuration values set the tcp-keepalive and maxmemory of Redis; they are arbitrarily chosen for this example.
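As a side note on the YAML itself, the literal block scalar (|) preserves newlines, while its folded cousin (>) joins lines with spaces. A minimal illustration, separate from the demo manifests:

```yaml
literal: |   # value is "line one\nline two\n" (newlines kept)
  line one
  line two
folded: >    # value is "line one line two\n" (lines folded into one)
  line one
  line two
```

The literal form is what we want here, since a Redis config file is line-oriented.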

Separating the configuration makes it easy to manage configuration separately from the pod spec. We will have to make some initial changes to the pod to make use of the ConfigMap but after that the two can be managed separately.

Data-tier manifest

Let's take a look at the updated data-tier. I'm comparing it with the data tier from our probes post, which doesn't include the persistent volume; that way we avoid the risk of being unable to satisfy the persistent volume claim (remember, that required a special setup where we needed to get the EBS volume ID).

Starting from the volumes, a new configMap type of volume is added, and it references the redis-config ConfigMap we just saw. The items field declares which key-value pairs we want to use from the ConfigMap. We only have one in our case, and that is config. If you had multiple environments, you could easily do things like referencing a dev configuration in one environment and a production configuration in another.

        command:
          - redis-server
          - /etc/redis/redis.conf
        volumeMounts:
          - mountPath: /etc/redis
            name: config
      volumes:
        - name: config
          configMap:
            name: redis-config
            items:
            - key: config
              path: redis.conf

The path sets the name of the file that will hold the config value, relative to the volume's mount point. Up above, in the container spec, the volumeMounts mapping declares the use of the config volume and mounts it at /etc/redis. So the full absolute path of the config file will be /etc/redis/redis.conf.

The last change we need is a custom command for the container so that Redis knows to load the config file when it starts. We do that by setting redis-server /etc/redis/redis.conf as the command. With this setup, we can now independently configure Redis without touching the deployment template.

As a quick sidenote before we create the resources: if we were dealing with a Secret rather than a ConfigMap, the volume type would be secret rather than configMap, and the name key would be replaced with secretName. Everything else would be the same.
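That sidenote can be sketched as follows. The app-secret Secret here is hypothetical and not part of this demo:

```yaml
      volumes:
        - name: config
          secret:                  # instead of configMap:
            secretName: app-secret # instead of name: (hypothetical Secret)
            items:
            - key: config
              path: app.conf
```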

Create the data-tier

Let's create the resources using the manifest:

apiVersion: v1
kind: Service
metadata:
  name: data-tier
  labels:
    app: microservices
spec:
  ports:
  - port: 6379
    protocol: TCP # default
    name: redis # optional when only 1 port
  selector:
    tier: data
  type: ClusterIP # default
---
apiVersion: apps/v1 # apps API group
kind: Deployment
metadata:
  name: data-tier
  labels:
    app: microservices
    tier: data
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: data
  template:
    metadata:
      labels:
        app: microservices
        tier: data
    spec: # Pod spec
      containers:
      - name: redis
        image: redis:latest
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 6379
            name: redis
        livenessProbe:
          tcpSocket:
            port: redis # named port
          initialDelaySeconds: 15
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
        command:
          - redis-server
          - /etc/redis/redis.conf
        volumeMounts:
          - mountPath: /etc/redis
            name: config
      volumes:
        - name: config
          configMap:
            name: redis-config
            items:
            - key: config
              path: redis.conf
ubuntu@ip-10-0-128-5:~/src# kubectl create -n config -f 10.2-data_tier_config.yaml -f 10.3-data_tier.yaml
configmap/redis-config created
service/data-tier created
deployment.apps/data-tier created
ubuntu@ip-10-0-128-5:~/src#

Inspect the effect of ConfigMap

Now let's start a shell in the container using kubectl exec to inspect the effect of our ConfigMap:

ubuntu@ip-10-0-128-5:~/src# kubectl exec -n config data-tier-dcf646d97-rk254 -it /bin/bash
root@data-tier-dcf646d97-rk254:/data# cat /etc/redis/redis.conf
# Redis config file
tcp-keepalive 240
maxmemory 1mb
root@data-tier-dcf646d97-rk254:/data#

See that the contents match the ConfigMap value we specified. Now, to prove that Redis actually loaded the config, we can output the tcp-keepalive configuration value and make sure it matches the 240 value in the file.

root@data-tier-dcf646d97-rk254:/data# redis-cli CONFIG GET tcp-keepalive
1) "tcp-keepalive"
2) "240"
root@data-tier-dcf646d97-rk254:/data#

How changes to ConfigMaps interact with volumes and deployments

And there we have it: separation of configuration and pod spec is complete. Let's exit out of the container. Before we move on, I want to highlight how changes to ConfigMaps interact with volumes and deployments. Let's use kubectl edit to update the ConfigMap, changing the tcp-keepalive value from 240 to 500, and then watch the Redis config file mounted in the container.

kubectl edit -n config configmaps redis-config

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config: |
    # Redis config file
    tcp-keepalive 500
    maxmemory 1mb
kind: ConfigMap
metadata:
  creationTimestamp: "2020-05-06T23:10:40Z"
  name: redis-config
  namespace: config
  resourceVersion: "4742"
  selfLink: /api/v1/namespaces/config/configmaps/redis-config
  uid: abadaab5-7d88-4f20-aeb8-5c3bae3598a8

Watch it with: watch kubectl exec -n config data-tier-dcf646d97-d9fr5 cat /etc/redis/redis.conf

Every 2.0s: kubectl exec -n config data-tier-dcf646d97-d9fr5 cat /etc/redis/redis.conf

# Redis config file
tcp-keepalive 240
maxmemory 1mb

Every 2.0s: kubectl exec -n config data-tier-dcf646d97-d9fr5 cat /etc/redis/redis.conf

# Redis config file
tcp-keepalive 500
maxmemory 1mb

Within around a minute, the volume will reflect the change we made to the ConfigMap. That is pretty slick, but Redis only loads the configuration file on startup, so the change won't impact the running Redis process. And because we never updated the deployment's template, we never triggered a rollout. Let's confirm that the tcp-keepalive value Redis is using hasn't been updated, using the redis-cli CONFIG GET tcp-keepalive command via kubectl exec as shown here:

ubuntu@ip-10-0-128-5:~/src# kubectl exec -n config data-tier-dcf646d97-d9fr5 redis-cli CONFIG GET tcp-keepalive
tcp-keepalive
240
ubuntu@ip-10-0-128-5:~/src#

Restart the deployment pod to apply the change

That is something to keep in mind when you separate the configuration from the pod spec. To cause the deployment's pods to restart and have Redis apply the new configuration, we can use kubectl rollout -n config restart deployment data-tier:

ubuntu@ip-10-0-128-5:~/src# kubectl rollout -n config restart deployment data-tier
deployment.extensions/data-tier restarted
ubuntu@ip-10-0-128-5:~/src#

This will cause a rollout using the current deployment template, and when the new pods start, the Redis containers will use the new configuration. We can verify that using the redis-cli with kubectl exec again:

ubuntu@ip-10-0-128-5:~/src# kubectl exec -n config data-tier-f4f867fb7-bf24d redis-cli CONFIG GET tcp-keepalive
tcp-keepalive
500
ubuntu@ip-10-0-128-5:~/src#

Secrets Demo

Now we can quickly see how Secrets work and the similarities they have with ConfigMaps. We will add a secret to the app tier using an environment variable. It won't have any functional impact, but it will illustrate the idea.

Here is our secret manifest.

apiVersion: v1
kind: Secret
metadata:
  name: app-tier-secret
stringData: # unencoded data
  api-key: LRcAmM1904ywzK3esX
  decoded: hello
data: #for base-64 encoded data
  encoded: aGVsbG8= # hello in base-64

# api-key secret (only) is equivalent to
# kubectl create secret generic app-tier-secret --from-literal=api-key=LRcAmM1904ywzK3esX

I'll mention up front that you usually don't want to check secrets into source control, given their sensitive nature. It makes more sense to manage secrets separately. You could still use manifest files, as we are here, or the secret could be created directly with kubectl. The command at the bottom of the file shows how to create the same secret without a manifest file.

Focusing on the manifest itself, we can see a similar structure to a ConfigMap, except that the kind is Secret rather than ConfigMap, and Secrets can use a stringData mapping in addition to the data mapping we used in our ConfigMap. As part of the effort to reduce the risk of secrets being exposed in plaintext, they are stored as base64-encoded strings, and Kubernetes automatically decodes them when they are used in a container. I have to point out that base64 encoding does not really offer any additional security. It is not encryption; anyone can decode a base64 string, so continue to treat the encoded strings as sensitive data. With that cautionary statement out of the way, the stringData mapping allows you to specify secrets without first encoding them, because Kubernetes encodes them for you. It is simply a convenience. If you use the data mapping, you must specify encoded values. The api-key secret is the one we will use in the app tier, but I've included the encoded and decoded key-value pairs to illustrate the base64 encoding. In the data mapping, the encoded value is hello, base64 encoded.
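You can reproduce the encoding yourself with the standard base64 utility. Note that printf is used instead of echo to avoid a trailing newline, which would change the encoded result:

```shell
# Encode plaintext values the way Kubernetes stores them in the data mapping
printf '%s' hello | base64                  # aGVsbG8=
printf '%s' LRcAmM1904ywzK3esX | base64     # TFJjQW1NMTkwNHl3ekszZXNY

# Decode what kubectl edit shows you back to plaintext
printf '%s' aGVsbG8= | base64 --decode      # hello
```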

Create the secret

Let’s create the secret to see this

ubuntu@ip-10-0-128-5:~/src# kubectl create -f 10.4-app_tier_secret.yaml -n config
secret/app-tier-secret created
ubuntu@ip-10-0-128-5:~/src# kubectl describe secrets -n config app-tier-secret
Name:         app-tier-secret
Namespace:    config
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
api-key:  18 bytes
decoded:  5 bytes
encoded:  5 bytes
ubuntu@ip-10-0-128-5:~/src#

The values are hidden as part of the effort to shield secret data. However, we can see the stored values with kubectl edit:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  api-key: TFJjQW1NMTkwNHl3ekszZXNY
  decoded: aGVsbG8=
  encoded: aGVsbG8=
kind: Secret
metadata:
  creationTimestamp: "2020-05-06T23:55:17Z"
  name: app-tier-secret
  namespace: config
  resourceVersion: "8687"
  selfLink: /api/v1/namespaces/config/secrets/app-tier-secret
  uid: acd69b89-924c-41ad-911d-b45ea4eb53b8
type: Opaque

From here we can see that the stringData mapping is not actually stored. The values are base64 encoded and added to the data mapping. The decoded value we entered in stringData was hello, but now it is the base64-encoded string beginning with aGV.

Create the app-tier

Shifting over to the app tier deployment, an API_KEY environment variable is added.

apiVersion: v1
kind: Service
metadata:
  name: app-tier
  labels:
    app: microservices
spec:
  ports:
  - port: 8080
  selector:
    tier: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-tier
  labels:
    app: microservices
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: microservices
        tier: app
    spec:
      containers:
      - name: server
        image: lrakai/microservices:server-v1
        ports:
          - containerPort: 8080
            name: server
        env:
          - name: REDIS_URL
            # Environment variable service discovery
            # Naming pattern:
            #   IP address: <all_caps_service_name>_SERVICE_HOST
            #   Port: <all_caps_service_name>_SERVICE_PORT
            #   Named Port: <all_caps_service_name>_SERVICE_PORT_<all_caps_port_name>
            value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
            # In multi-container example value was
            # value: redis://localhost:6379
          - name: DEBUG
            value: express:*
          - name: API_KEY
            valueFrom:
              secretKeyRef:
                name: app-tier-secret
                key: api-key
        livenessProbe:
          httpGet:
            path: /probe/liveness
            port: server
          initialDelaySeconds: 5
        readinessProbe:
          httpGet:
            path: /probe/readiness
            port: server
          initialDelaySeconds: 3
      initContainers:
        - name: await-redis
          image: lrakai/microservices:server-v1
          env:
          - name: REDIS_URL
            value: redis://$(DATA_TIER_SERVICE_HOST):$(DATA_TIER_SERVICE_PORT_REDIS)
          command:
            - npm
            - run-script
            - await-redis

A valueFrom mapping is used to reference a source for the value. Here the source is a Secret, so secretKeyRef is used. If you needed to get the environment variable value from a ConfigMap rather than a Secret, you would use configMapKeyRef instead of secretKeyRef. The name is the name of the Secret, and key is the key in the Secret whose value you want.
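For comparison, the ConfigMap flavor would look like the following sketch. The app-config ConfigMap and its log-level key are hypothetical and not part of this demo:

```yaml
        env:
          - name: LOG_LEVEL
            valueFrom:
              configMapKeyRef:      # instead of secretKeyRef
                name: app-config    # hypothetical ConfigMap
                key: log-level      # hypothetical key within it
```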

Dump the environment variables

Let’s create the app tier and dump the environment variables.

ubuntu@ip-10-0-128-5:~/src# kubectl create -f 10.5-app_tier.yaml -n config
service/app-tier created
deployment.apps/app-tier created
ubuntu@ip-10-0-128-5:~/src#

ubuntu@ip-10-0-128-5:~/src# kubectl exec -n config app-tier-8445f7bb4d-xdwrj env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=app-tier-8445f7bb4d-xdwrj
REDIS_URL=redis://10.100.137.210:6379
DEBUG=express:*
API_KEY=LRcAmM1904ywzK3esX
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
DATA_TIER_SERVICE_PORT=6379
DATA_TIER_PORT=tcp://10.100.137.210:6379
DATA_TIER_PORT_6379_TCP_PROTO=tcp
DATA_TIER_PORT_6379_TCP_ADDR=10.100.137.210
APP_TIER_SERVICE_PORT=8080
APP_TIER_PORT=tcp://10.108.198.154:8080
APP_TIER_PORT_8080_TCP_PORT=8080
DATA_TIER_SERVICE_HOST=10.100.137.210
DATA_TIER_PORT_6379_TCP=tcp://10.100.137.210:6379
DATA_TIER_PORT_6379_TCP_PORT=6379
APP_TIER_PORT_8080_TCP_PROTO=tcp
APP_TIER_PORT_8080_TCP_ADDR=10.108.198.154
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
DATA_TIER_SERVICE_PORT_REDIS=6379
APP_TIER_PORT_8080_TCP=tcp://10.108.198.154:8080
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
APP_TIER_SERVICE_HOST=10.108.198.154
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
NPM_CONFIG_LOGLEVEL=info
NODE_VERSION=6.11.0
YARN_VERSION=0.24.6
HOME=/root
ubuntu@ip-10-0-128-5:~/src#

We can find the API_KEY=LRcAmM1904ywzK3esX variable amid the wash of variables and observe that the value is the decoded value we entered in the stringData of our secret manifest, not an encoded value. There is no need to decode inside the container.

I'll just mention before wrapping up that, just as with volumes referencing Secrets or ConfigMaps, you should restart the rollout to have the deployment's pods pick up new environment variable values. In fact, environment variables do not even update on the fly the way volume contents did, so actively managing the rollout is really a must.

Conclusion

This concludes our lesson on ConfigMaps and Secrets. Let's recap what we learned:

  • ConfigMaps and Secrets separate configuration data from pod specs and from what would otherwise be baked into container images.
  • Both store groups of key-value data; Secrets should be used for sensitive data.
  • Both can be consumed by pod containers either by mounting them with volumes or by injecting them as environment variables.