AWS ECS and ECR explained

Amazon EC2 Container Service

The Amazon EC2 Container Service, commonly known as Amazon ECS, is a service that allows you to run Docker-enabled applications packaged as containers across a cluster of EC2 instances, without requiring you to operate a complex and administratively heavy cluster management system. That burden is abstracted away by Amazon ECS, which passes the responsibility over to AWS, specifically through the use of AWS Fargate.

AWS Fargate is an engine used to enable ECS to run containers without having to manage and provision instances and clusters for containers.

Containers Summarized

A container holds everything that an application requires to run from within its isolated container package. This may include code, system libraries, system tools, runtime, and so on. It does not, however, include an operating system the way a virtual machine does, which reduces the overhead of the container itself.

Containers are decoupled from the underlying operating system, making Container applications very portable, lightweight, flexible, and scalable across a cloud environment.

This ensures that the application will always run as expected regardless of its deployment location.

With this in mind, if you are already using Docker, or have existing containerized applications packaged locally, then these will work seamlessly on Amazon ECS.


EC2 Container Service removes the need for you to manage your own cluster management system thanks to its interaction with AWS Fargate; you don't even have to specify which instance type to use. Running your own cluster management system can be very time consuming and requires a lot of overhead to monitor, maintain, and scale.

With Amazon ECS there is no need to install any management software for your cluster, neither is there a need to install any monitoring software either. All of this, and more, is taken care of by the service, allowing you to focus on building great applications and deploying them across your scalable cluster.

Launching ECS cluster

When launching your ECS cluster you have the option of two different deployment models:

  • a Fargate launch: The Fargate launch requires far less configuration; you simply specify the CPU and memory required, define the networking and IAM policies, and package your applications into containers.
  • an EC2 launch: With an EC2 launch, by contrast, you have a far greater scope of customization and configurable parameters. For example, you are responsible for patching and scaling your instances, and you can specify which instance types to use and how many containers a cluster should run.
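As a sketch of the difference, the hypothetical AWS CLI session below registers a small Fargate-compatible task definition and then runs it under each launch type. The cluster name, subnet ID, and container image are all illustrative placeholders, not values from this lecture.

```shell
# Register a task definition that is Fargate-compatible: CPU and memory
# must be declared at the task level (values here are illustrative).
aws ecs register-task-definition \
  --family web \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --container-definitions '[{"name":"web","image":"nginx:latest","essential":true}]'

# Fargate launch: no instances to manage; just point at the task definition
# and supply the networking configuration.
aws ecs run-task \
  --cluster demo-cluster \
  --launch-type FARGATE \
  --task-definition web \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],assignPublicIp=ENABLED}'

# EC2 launch: the task is placed on container instances you have registered
# in the cluster (and are responsible for patching and scaling).
aws ecs run-task \
  --cluster demo-cluster \
  --launch-type EC2 \
  --task-definition web
```

These commands require valid AWS credentials and existing resources, so treat them as an outline of the workflow rather than a copy-paste recipe.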

Monitoring ECS cluster using CloudWatch

There are use cases for both modes. You may need more granularity and control over some of your clusters due to security and compliance requirements. Monitoring is taken care of through Amazon CloudWatch, which monitors metrics against your containers and your cluster. Those of you who have used CloudWatch before will be aware that you can easily create alarms based on these metrics, notifying you when specific events occur, such as your cluster size scaling up or down.
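For example, a CloudWatch alarm on the cluster-level CPU metric might be created as sketched below. The cluster name and the SNS topic ARN are hypothetical placeholders.

```shell
# Alarm when average CPU across the cluster exceeds 80% for two
# consecutive 5-minute periods (names and the ARN are placeholders).
aws cloudwatch put-metric-alarm \
  --alarm-name demo-cluster-cpu-high \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=demo-cluster \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ecs-alerts
```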

ECS Cluster

An Amazon ECS cluster comprises a collection of EC2 instances. As such, some of the functionality and features that we are already familiar with can be used with these instances: for example, Security Groups to implement instance-level security at a port and protocol level, along with Elastic Load Balancing and Auto Scaling.

Although these EC2 instances form a cluster, they still operate in much the same way as a single EC2 instance. So again, for example, should you need to connect to one of your instances itself, you could still use the same familiar methods such as initiating an SSH connection.

More about the clusters: The clusters themselves act as a resource pool, aggregating resources such as CPU and memory. The cluster is dynamically scalable, meaning you can start your cluster as a single small instance, but it can dynamically scale to thousands of larger instances. Multiple instance types can be used within the cluster if required.

Although the cluster is dynamically scalable, it's important to point out that it can only scale within a single region. Amazon ECS is region-specific: a cluster can span multiple availability zones, but it cannot span multiple regions. With ECS you can schedule your containers to be deployed across your cluster based on different requirements, such as resource requirements or specific availability requirements, through the use of multiple availability zones.
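As an illustration of scheduling across availability zones, a service created with a spread placement strategy asks ECS to balance its tasks evenly across zones. This applies to the EC2 launch type, and the names below are invented placeholders.

```shell
# Spread the service's tasks evenly across availability zones
# (cluster, service, and task definition names are placeholders).
aws ecs create-service \
  --cluster demo-cluster \
  --service-name web-service \
  --task-definition web \
  --desired-count 4 \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone
```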

The instances within the Amazon ECS cluster also have a Docker daemon and an ECS agent installed. The agent communicates with the daemon, allowing Amazon ECS commands to be translated into Docker commands.


Amazon Elastic Container Registry

The Elastic Container Registry service, known as ECR, links closely with the previous service discussed, the EC2 Container Service, as it provides a secure location to store and manage your Docker images, which can then be distributed and deployed across your applications.

This is a fully managed service, and as a result, you do not need to provision any infrastructure to allow you to create this registry of docker images. This is all provisioned and managed by AWS. This service is primarily used by developers, allowing them to push, pull, and manage their library of docker images in a central and secure location.

Components of ECR

To understand the service better, let's look at some of its components: the registry, authorization token, repository, repository policy, and image.


The ECR registry is the object that allows you to host and store your Docker images in, as well as create image repositories. Within your AWS account, you will be provided with a default registry. When your registry is created, its default URL takes the following form:

https://aws_account_id.dkr.ecr.region.amazonaws.com

where you'll need to replace aws_account_id and region with the values applicable to your own account.
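As a small illustration, the registry hostname can be assembled from those two values. The account ID below is a made-up example, not a real account.

```shell
# Build the default ECR registry hostname from an (example) account ID
# and region, following the documented naming scheme.
AWS_ACCOUNT_ID=123456789012
REGION=us-east-1
REGISTRY_URL="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
echo "$REGISTRY_URL"
```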

Your account will have both read and write access by default to any images you create within the registry and any repositories. Access to your registry and images can be controlled via IAM policies in addition to repository policies as well, to enforce tighter and stricter security controls.

As the Docker command line interface doesn't support the AWS authentication methods, your Docker client must first be authenticated as an AWS user before it can access your registry to push and pull images. This is done using an authorization token.

Authorization Token

To begin the authorization process to allow your docker client to communicate with the default registry, you can run the get-login command using the AWS CLI, as shown:

aws ecr get-login --region region --no-include-email

where the region should be replaced with your own region. This will then produce an output response, which will be a docker login command.

docker login -u AWS -p password

You must then copy this command and paste it into your Docker terminal, which authenticates your client and associates the Docker CLI with your default registry. This process produces an authorization token that is valid for 12 hours, at which point you will need to re-authenticate by following the same process.
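In practice, the two steps are often combined into one line by letting the shell execute the docker login command that get-login prints. This assumes AWS CLI version 1, where the get-login command shown above is available, and the region is a placeholder.

```shell
# Evaluate the docker login command emitted by get-login directly,
# rather than copying and pasting it (AWS CLI v1; region is a placeholder).
$(aws ecr get-login --region us-east-1 --no-include-email)
```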


Repository

Repositories are objects within your registry that allow you to group together and secure different Docker images. You can create multiple repositories within the registry, allowing you to organize and manage your Docker images in different categories.
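For instance, a repository might be created and then listed with the AWS CLI as follows; the repository name is a hypothetical example.

```shell
# Create a repository in the default registry, then list repositories.
aws ecr create-repository --repository-name my-web-app
aws ecr describe-repositories
```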

Using both IAM policies and repository policies, you can assign permissions to each repository, allowing specific users to perform certain actions, such as the push or pull image API actions.

Repository Policy

As I just mentioned, you can control access to your repositories and images using both IAM policies and repository policies. There are a number of AWS managed IAM policies to help you control access to ECR: AmazonEC2ContainerRegistryFullAccess, AmazonEC2ContainerRegistryPowerUser, and AmazonEC2ContainerRegistryReadOnly.

Repository policies are resource-based policies, which means you need to add a principal to the policy to determine who has access and what permissions they have. It's important to be aware that for an AWS user to gain access to the registry, they will require permission to call the ecr:GetAuthorizationToken API. Once they have this access, repository policies can control what actions those users can perform on each of the repositories. These resource-based policies are created within ECR itself, on each of the repositories that you have.
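As a sketch, a repository policy granting pull-only access to a second account could be attached like this. The account ID and repository name are invented for illustration.

```shell
# Attach a resource-based policy that lets a second (placeholder)
# account pull images from this repository.
aws ecr set-repository-policy \
  --repository-name my-web-app \
  --policy-text '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::210987654321:root"},
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }]
  }'
```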


Once you have configured your registry, repositories, and security controls, and authenticated your docker client with ECR, you can then begin storing your docker images in the required repositories, ready to then pull down again as and when required.
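As a minimal sketch of storing an image, assuming a repository named my-web-app in a default registry (the account ID, region, and image name are placeholders), tagging and pushing a local image looks like this:

```shell
# Tag a local image with the full registry/repository path, then push it;
# pulling reverses the process (all names below are illustrative).
docker tag my-web-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest
```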

To push an image into ECR, you can use the docker push command, and to retrieve an image you can use the docker pull command. For more information on how to perform both a push and a pull of images, please see the following links. That now brings me to the end of this lecture covering the Elastic Container Registry service.

Coming up in the next lecture, I shall be looking at the Amazon Elastic Container Service for Kubernetes, known as EKS.