A simple introduction to Docker
If your grandma asks you, "What is Docker?", then hug her and say this:
Docker is a container platform that allows you to separate your application from the underlying infrastructure by bundling up your code and all of its dependencies into a self-contained entity that will run the same on any supported system.
Deploying code is a challenge. Managing dependencies is a challenge. Handling rollbacks is a challenge. All of these things are made more challenging because the development environments are seldom identical to production, and this is where Docker containers can help. They allow you to take whatever software you need to run, bundle it up into a consistent format, and run it on any infrastructure that knows how to handle a container. The end result is kind of like having a single executable for all of your code. It’s similar to an app on your phone. All of the code is bundled up into a single unit, and it’s going to run the same way on my phone and yours.
With Docker, regardless of the programming language you use or the Linux distribution your code runs on, you can wrap it all up into one unit called a container; and the container knows how to run your app. If your app relies on a specific version of ImageMagick, then you can include it in the container.
Then any time you run that container, you know that you have the correct version. If later you need to update the version of ImageMagick, you create a new container with whatever version you need, and any time you run that container, it's going to run correctly because it has everything it needs inside.
Having your code run inside of a container means that it’s isolated from other processes and other containers. So each container can have its own process space, its own network stack, resource allocation, et cetera.
Once all of your code and dependencies are in a container, they’re going to run the same way anywhere because everything required to have the code run is inside the container. So if you use Docker containers for all of your applications, then that’s going to allow you to standardize on how you deploy all of your applications.
Okay, at a high level, Docker uses a client-server architecture.
The server part is that there’s a Docker daemon, which is responsible for managing Docker objects. By objects I mean things such as images, containers and networks. The daemon exposes a REST API that the client consumes over UNIX sockets or a network interface.
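Because the daemon exposes a REST API over that UNIX socket, you can even talk to it without the docker binary. A minimal sketch, assuming the daemon is running at the default socket path and you have curl 7.40 or newer:

```shell
# Ask the daemon for its version over the default UNIX socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers -- the same data `docker ps` shows.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

The docker client is essentially a friendlier wrapper around calls like these.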
The client is the Docker binary, so whenever you use the docker command, that's the client. Subcommands issued by the client are sent over to the daemon; for example, the docker pull command instructs the daemon to get an image from a registry.
The Docker daemon has a lot of different configuration options that you can pass in when you run it, and they change how the daemon operates. For example, if you want the daemon to listen on a different socket or network address, you can adjust the host (-H) option, and if you want debugging output you can pass in the -D flag. If you want to make changes to the runtime, you can do that too.
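For instance, on Linux you could start the daemon by hand with those flags, or put the same settings in its config file. A sketch, assuming the default config location /etc/docker/daemon.json:

```shell
# Start the daemon manually with debugging enabled and an explicit socket.
# (Normally your init system does this for you.)
sudo dockerd -D -H unix:///var/run/docker.sock

# Alternatively, set the equivalent options in /etc/docker/daemon.json:
#   {
#     "debug": true,
#     "hosts": ["unix:///var/run/docker.sock"]
#   }
# and restart the daemon to pick them up.
```

Use one mechanism or the other; passing a flag that is also set in daemon.json will cause the daemon to refuse to start.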
So the Docker daemon is in charge of managing Docker objects, and the client is the primary way that you’ll interact with the Docker API.
If you're new to Docker, you don't need to change anything about the daemon; the defaults will get you by just fine. So while you might not be customizing the daemon settings, knowing about that separation of the client and the daemon will help if you run into an error such as this one when you're using the Docker binary.
```
(base) shravan-docker$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
(base) shravan-docker$
```
In this example, the Docker binary was used to try to list the running containers. The client relies on the daemon to get that information, and since the daemon isn't running, the client can't do anything and throws this error. The solution is simply to make sure that the daemon is started on the OS you're running on.
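A quick way to confirm the daemon is reachable, and to start it if not, looks roughly like this (the exact commands vary by platform):

```shell
# docker info talks to the daemon; if it fails, the daemon is down.
docker info

# macOS: start Docker Desktop, which runs the daemon for you.
open -a Docker

# Linux: start the daemon through your init system.
sudo systemctl start docker
```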
Run Docker hello-world
Since I am using a Mac, I just downloaded Docker Desktop for Mac and started the Docker daemon.
```
(base) shravan-docker$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:f9dfddf63636d84ef479d645ab5885156ae030f611a56f3a7ac7f2fdd86d7e4e
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

(base) shravan-docker$
```
Creating and executing your first container using Docker
While we did run the hello-world container above, now, I’m going to actually explain what’s happening. I want to start by running another container. The command will be similar to the hello-world example, though it’s not exactly the same thing.
```
docker run -it ubuntu /bin/bash
```

```
(base) shravan:/# docker run -it ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
5bed26d33875: Pull complete
f11b29a9c730: Pull complete
930bda195c84: Pull complete
78bf9a5ad49e: Pull complete
Digest: sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2ba392b7546b43a051853a341d
Status: Downloaded newer image for ubuntu:latest
root@5fb1e46aceea:/#
```
We will now break this down and see how it all works. The command that I just ran instructs Docker to run a container based on the official Ubuntu image. I want you to focus on the Ubuntu part of the command for now. If you recall, I am running this on my Mac. So this is often the point where grandma will ask you: where is Ubuntu coming from? We've run two containers now, hello-world and Ubuntu, and from an outsider's perspective it's not really clear where those names come from and what they actually mean.
When we reviewed the Docker architecture, I very casually mentioned that Docker downloads images from a registry. That registry is where hello-world and Ubuntu come from. They're official images stored on either Docker Hub or the newer Docker Store, both of which serve as centralized locations for Docker images to be downloaded.
The first thing to notice about the output is: Unable to find image 'ubuntu:latest' locally. Docker looks locally on the Mac, can't find the image, so what it does is go and download it from Docker Hub.
When it downloads the image, it stores it in a subdirectory that lives inside of the /var/lib/docker directory. The next time you run the same image, Docker sees that it already exists locally and just uses that local version instead of downloading it again.
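You can see this caching behavior for yourself. A small sketch:

```shell
# List the images Docker has stored locally.
docker images

# Pulling an image you already have is fast: Docker compares digests
# and reports "Image is up to date" instead of re-downloading layers.
docker pull ubuntu:latest
```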
Let's look back at the command and see how we got here. Behind the scenes this ran docker pull to pull down the image because it didn't exist locally. Then the docker run command allows you to execute a command inside of the container. The command that we're running is the bash binary. The way we are able to get an interactive shell is that we're using the -i and -t flags.
The -i flag makes it interactive by keeping standard input open. The -t flag allocates a pseudo-TTY, which basically makes the terminal behave like a standard terminal. Because we told Docker to run a bash shell as the container process, if we exit out of this, then the container is going to stop.
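To see what the flags change, compare running a one-off command with running an interactive shell:

```shell
# No flags needed: run a single command, print its output, and exit.
docker run ubuntu echo "hello from inside the container"

# -i keeps STDIN open, -t allocates a pseudo-TTY; together they give
# you an interactive shell. Type `exit` to stop the container.
docker run -it ubuntu /bin/bash
```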
So to quickly summarize: First, both Docker Hub and the newer Docker Store serve as a registry of existing images that you can use as is, or to form the base for your own images. Second, when using the docker run command, behind the scenes it's going to download the image if it doesn't exist locally. Then it will run whatever command you specify.
Images vs Containers
Here we are going to look at the differences between images and containers. At a high level, the difference between the two is similar to the difference between an executable (image) and a running application (container). For example:
```
(base) shravan-docker: ls -rtl /bin/bash
-r-xr-xr-x  1 root  wheel  618416 May 17  2019 /bin/bash
(base) shravan-docker: ps -ef | grep bash | grep -v grep
  501   400   398   0 28Mar20 ttys000    0:00.06 -bash
  501   415   414   0 28Mar20 ttys001    0:00.08 -bash
  501   424   419   0 28Mar20 ttys002    0:00.18 -bash
  501   426   425   0 28Mar20 ttys003    0:00.33 -bash
(base) shravan-docker:
```
Each running application is its own instance and independent of the others. The running application is also independent from the executable in that changes to the app won’t impact the executable. In this analogy, the executable is like an image, and the running app is like a container. An image is a template that defines how the container will look once it creates an instance.
Images are built on the concept of layers. There is always a base layer, and then some number of additional layers that represent file changes. Each layer is stacked on top of the others and consists of the differences between it and the previous layer. Because Docker builds images from layers, and the different layers are file diffs, the layers usually don't take up too much space on disk. Let's demonstrate the difference between images and containers in a terminal.
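You can inspect the layers of any local image to see this for yourself:

```shell
# docker history shows each layer of the image, the instruction that
# created it, and how much space that layer's file diff takes up.
docker history ubuntu
```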
Containers Example: To illustrate the point of containers, let's do this exercise:
- Create a container using the ubuntu image and connect to it.
docker run -it ubuntu /bin/bash
- Create a temp file under your home directory called hello.txt, then exit out of this container.
- Create another container using ubuntu image and connect to it.
docker run -it ubuntu /bin/bash
- You will not find the hello.txt file under the home directory, because this is a new container. Ok, so exit out of this container.
- Run docker ps -a to check all of the containers, even the ones that are stopped.
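The steps above can be sketched as one terminal session (the container IDs and names will differ on your machine):

```shell
# First container: create a file, then exit (the container stops).
docker run -it ubuntu /bin/bash
#   root@<id1>:/# touch ~/hello.txt
#   root@<id1>:/# exit

# Second, separate container: the file is not there.
docker run -it ubuntu /bin/bash
#   root@<id2>:/# ls ~/hello.txt    # No such file or directory
#   root@<id2>:/# exit

# Both containers still exist, in the stopped state.
docker ps -a
```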
Suppose now you wanted to check the hello.txt file. The important part of the docker ps -a output is the NAMES column, which shows the random name assigned to each container. Identify the container where you created the hello.txt file so you can attach to it. But before we attach to the container, we must first start the container.
```
(base) shravan-docker% docker start sad_ishizaka
sad_ishizaka
```
At this point, we started the container and it is running in the background. In order to interact with it, we need to use the docker attach container-name command to attach to it.
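Putting the two commands together (sad_ishizaka stands in for whatever name docker ps -a showed you):

```shell
# Restart the stopped container; its original bash process
# runs again in the background.
docker start sad_ishizaka

# Attach your terminal to that process. You are back in the same
# filesystem, so ~/hello.txt is still there.
docker attach sad_ishizaka
#   root@<id1>:/# ls ~/hello.txt
```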
So, what did all of this prove? We created a container based on the Ubuntu image. And then we added a text file to that container. Then we started up a separate container, also based on Ubuntu, and the text file wasn’t there. It didn’t exist in that container. That’s because each time you create a container, it’s based on how the image looks at the moment the container is created.
What’s next in the docker series of posts: So far we have learnt the foundations of using Docker. Next we’ll learn about images and containers, port mapping, Docker networks, volumes, tagging, and more.
Learning Objectives for this series:
- You should understand what Docker is
- You should understand how to create Docker images
- You should understand how to map ports between Docker and the Host OS
- You should understand the basics of Docker networking
- You should understand how to use volumes for persistent storage
- You should be able to tag images