How to create an EKS cluster using eksctl?

Introduction

First, we will explore what EKS is and develop an understanding of the three tools used to interact with the service: eksctl, kubectl, and aws-iam-authenticator. Then I will demonstrate creating an EKS cluster using eksctl and connecting to it with kubectl and aws-iam-authenticator. Finally, we will destroy all the resources that we created.

Amazon Elastic Kubernetes Service (EKS)

With EKS, AWS provides a managed service that lets you run Kubernetes on your AWS infrastructure without having to provision and operate the Kubernetes management infrastructure, referred to as the control plane. You, the AWS account owner, only need to provision and maintain the worker nodes.

What is a control plane and what are worker nodes?

Control Plane

The control plane is made up of a number of components, including the Kubernetes API server, etcd, the scheduler and the controller manager, and these dictate how Kubernetes and your clusters communicate with each other. The control plane itself runs across the master nodes.

The control plane schedules containers onto nodes. The term scheduling does not refer to time in this context; it refers to the decision process of placing containers onto nodes in accordance with their declared compute requirements. The control plane also tracks the state of all Kubernetes objects by continually monitoring them. In EKS, AWS is responsible for provisioning, scaling and managing the control plane, and does so across multiple Availability Zones for additional resilience.
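To make "declared compute requirements" concrete, here is a minimal, hypothetical pod manifest (the name nginx-demo and the request values are illustrative, not part of this demo); its resources.requests section is what the scheduler looks at when deciding which node gets the container:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: "500m"      # half a CPU core
        memory: "256Mi"  # 256 MiB of RAM
EOF

The scheduler will only place this pod on a worker node that still has at least that much unreserved CPU and memory.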

Worker Nodes

Kubernetes clusters are composed of nodes, and the term cluster refers to the aggregate of all of the nodes. A node is a worker machine in Kubernetes; in EKS it runs as an on-demand EC2 instance and includes the software needed to run containers managed by the Kubernetes control plane. Each node is created from a specific EKS-optimized AMI, which ensures that Docker and the kubelet are installed, in addition to the AWS IAM Authenticator for security controls. These nodes are what we, as the customer, are responsible for managing within EKS. Once the worker nodes are provisioned, they connect to EKS using an endpoint.

The need for eksctl

Let me provide a brief overview of what’s required to start using the EKS service. Unlike other implementations, such as Google GKE (Google Kubernetes Engine), batteries are not necessarily included with EKS, so you cannot create a complete cluster with one single command. Doing it the hard way means provisioning the VPC, subnets, IAM roles, security groups and worker nodes yourself. Fortunately, there is eksctl, which lets us build a Kubernetes cluster with batteries included.

This is where eksctl comes in

Weaveworks created a tool called eksctl that closes this gap, allowing us to create our own cluster in a single command:

eksctl create cluster \
    --name=sk-eks-cluster \
    --region=us-west-2 \
    --ssh-public-key=CADemoKey.pub \
    --nodes=4 \
    --node-type=m5.large

The eksctl tool uses CloudFormation under the hood, creating one stack for the EKS control plane and another stack for the worker nodes.
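Instead of passing everything as flags, eksctl also accepts a cluster config file via -f. Below is a rough sketch of a config file roughly equivalent to the command above; the apiVersion and field names are my best approximation of the ClusterConfig schema for this eksctl release, so verify them against the eksctl documentation before relying on them:

cat > cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sk-eks-cluster
  region: us-west-2
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 4
    ssh:
      allow: true
      publicKeyPath: CADemoKey.pub
EOF

eksctl create cluster -f cluster.yaml

Keeping the definition in a file makes the cluster easy to recreate and to review in version control.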

Install EKS tools: kubectl, aws-iam-authenticator and eksctl

In this demonstration, we’re going to set up our tooling to allow us to communicate with and create our EKS clusters. There are three tools that we’re going to install:

  1. kubectl: a command line interface for running commands against Kubernetes clusters.
  2. aws-iam-authenticator: a tool that allows you to use AWS IAM credentials to authenticate to Kubernetes clusters.
  3. eksctl: a command line tool that provides a nice abstraction for creating clusters and a very simple method for bringing them up. As we’ll see, all you need to do to bring up an EKS cluster is run eksctl create cluster, and underneath it will take care of wiring up the various individual components.

Install kubectl on macOS

Use curl instead of Homebrew.

(base) shravan-learning_kubernetes# curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"

(base) shravan-learning_kubernetes# ls
README.md	kubectl		src
(base) shravan-learning_kubernetes# chmod +x kubectl
(base) shravan-learning_kubernetes# ls -rtl
total 98080
-rw-r--r--   1 shravan  staff        21 Apr 25 09:14 README.md
drwxr-xr-x  33 shravan  staff      1056 Apr 25 09:15 src
-rwxr-xr-x   1 shravan  staff  50164640 May  7 19:06 kubectl
(base) shravan-learning_kubernetes# sudo mv ./kubectl /usr/local/bin/kubectl
Password:
(base) shravan-learning_kubernetes# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}

(base) shravan-learning_kubernetes# kubectl version --client --short
Client Version: v1.18.2
(base) shravan-learning_kubernetes#

Install aws-iam-authenticator

(base) shravan-learning_kubernetes# curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/darwin/amd64/aws-iam-authenticator

(base) shravan-learning_kubernetes# ls
README.md		aws-iam-authenticator	src
(base) shravan-learning_kubernetes# curl -o aws-iam-authenticator.sha256 https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/darwin/amd64/aws-iam-authenticator.sha256

(base) shravan-learning_kubernetes# openssl sha1 -sha256 aws-iam-authenticator
SHA256(aws-iam-authenticator)= 8d986bcfe77003c739a143f937ffa8d5b5cc9cca27a72e9de33af2e90f19bf21
(base) shravan-learning_kubernetes# cat aws-iam-authenticator.sha256
8d986bcfe77003c739a143f937ffa8d5b5cc9cca27a72e9de33af2e90f19bf21 aws-iam-authenticator
(base) shravan-learning_kubernetes# chmod +x ./aws-iam-authenticator
(base) shravan-learning_kubernetes# mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
(base) shravan-learning_kubernetes# echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile
(base) shravan-learning_kubernetes# aws-iam-authenticator help
A tool to authenticate to Kubernetes using AWS IAM credentials

Usage:
  aws-iam-authenticator [command]

Available Commands:
  help        Help about any command
  init        Pre-generate certificate, private key, and kubeconfig files for the server.
  server      Run a webhook validation server suitable that validates tokens using AWS IAM
  token       Authenticate using AWS IAM and get token for Kubernetes
  verify      Verify a token for debugging purpose
  version     Version will output the current build information

Flags:
  -i, --cluster-id ID                 Specify the cluster ID, a unique-per-cluster identifier for your aws-iam-authenticator installation.
  -c, --config filename               Load configuration from filename
      --feature-gates mapStringBool   A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
                                      AllAlpha=true|false (ALPHA - default=false)
                                      IAMIdentityMappingCRD=true|false (ALPHA - default=false)
  -h, --help                          help for aws-iam-authenticator
  -l, --log-format string             Specify log format to use when logging to stderr [text or json] (default "text")

Use "aws-iam-authenticator [command] --help" for more information about a command.
(base) shravan-learning_kubernetes#
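With the authenticator on the PATH, you can see what kubectl will eventually do on our behalf: it asks aws-iam-authenticator for a short-lived token derived from your AWS credentials. The cluster name below is hypothetical at this point (we create sk-eks-cluster later), but this is the same invocation that ends up in the kubeconfig:

aws-iam-authenticator token -i sk-eks-cluster

This prints an ExecCredential JSON object whose token kubectl sends to the EKS API server, where it is verified against IAM.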

Install eksctl

(base) shravan-learning_kubernetes# brew install weaveworks/tap/eksctl
==> Installing eksctl from weaveworks/tap
==> Downloading https://github.com/weaveworks/eksctl/releases/download/0.18.0/eksctl_Darwin_amd64.tar.gz
==> Downloading from https://github-production-release-asset-2e65be.s3.amazonaws.com/134539560/60ea3100-88aa-11ea-8580-cf5137f45c16?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%
######################################################################## 100.0%
🍺  /usr/local/Cellar/eksctl/0.18.0: 3 files, 83.5MB, built in 4 seconds
(base) shravan-learning_kubernetes#

Check the installation

(base) shravan-learning_kubernetes# eksctl help
The official CLI for Amazon EKS

Usage: eksctl [command] [flags]

Commands:
  eksctl create                  Create resource(s)
  eksctl get                     Get resource(s)
  eksctl update                  Update resource(s)
  eksctl upgrade                 Upgrade resource(s)
  eksctl delete                  Delete resource(s)
  eksctl set                     Set values
  eksctl unset                   Unset values
  eksctl scale                   Scale resources(s)
  eksctl drain                   Drain resource(s)
  eksctl utils                   Various utils
  eksctl completion              Generates shell completion scripts for bash, zsh or fish
  eksctl version                 Output the version of eksctl
  eksctl help                    Help about any command

Common flags:
  -C, --color string   toggle colorized logs (valid options: true, false, fabulous) (default "true")
  -h, --help           help for this command
  -v, --verbose int    set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)

Use 'eksctl [command] --help' for more information about a command.

(base) shravan-learning_kubernetes#

Create EKS cluster

We’re going to create our first AWS managed Kubernetes cluster, using the eksctl CLI. Before we start, let’s quickly review how eksctl is used to create clusters. The available parameters are very well documented on the eksctl website. You can simply run eksctl create cluster, and the cluster will kick off with a number of defaults: it will provision two m5.large worker nodes, use the official AWS EKS AMI, and place the cluster in the us-west-2 (Oregon) region.

Beyond that, you can further customize the provisioning process for your cluster. For example, you can specify a custom name for your cluster and the number of worker nodes you want. Another interesting thing you can do is enable auto scaling for the worker nodes: setting --nodes-min to three and --nodes-max to five creates an Auto Scaling group for the worker nodes that scales in and out between three and five instances, as sketched below. With that covered, let’s jump into the terminal and begin the process.
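A sketch of such an auto-scaling create command (the cluster name is the one used in this demo; --nodes sets the initial size, while --nodes-min and --nodes-max bound the Auto Scaling group):

eksctl create cluster \
    --name=sk-eks-cluster \
    --region=us-west-2 \
    --node-type=m5.large \
    --nodes=3 \
    --nodes-min=3 \
    --nodes-max=5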

Before creating the cluster, let’s generate an SSH key pair that can be used to log in to the worker nodes; eksctl will import the public key into the nodegroup. Note that this key is only for SSH access to the nodes. It is separate from the cluster credentials that eksctl later writes into the .kube/config file, which we will inspect below.

(base) shravan-learning_kubernetes# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/shravan/.ssh/id_rsa): CADemoKey
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in CADemoKey.
Your public key has been saved in CADemoKey.pub.
The key fingerprint is:
SHA256:0leDxk8ukz8pfLlDR8PRu30jcj463Sdpn+QfpciS16I shravan@shravan-mbp.fios-router.home
The key's randomart image is:
+---[RSA 2048]----+
|               . |
|         . .  . .|
|          + +. ..|
|       . . * .+. |
|      . S = o. .+|
|       . o B.B.++|
|          =./.=+o|
|           B.B*.+|
|          E.+o.==|
+----[SHA256]-----+
(base) shravan-learning_kubernetes#

Time to create the EKS cluster

(base) shravan-learning_kubernetes# eksctl create cluster --name=sk-eks-cluster --region=us-west-2 --ssh-public-key=CADemoKey.pub --nodes=4 --node-type=m5.large
[ℹ]  eksctl version 0.18.0
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2c us-west-2b us-west-2d]
[ℹ]  subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-1ac43407" will use "ami-026522559b4f79cc8" [AmazonLinux2/1.15]
[ℹ]  using SSH public key "CADemoKey.pub" as "eksctl-sk-eks-cluster-nodegroup-ng-1ac43407-d8:ec:4a:0e:05:95:ba:f5:1c:5b:8b:af:32:2a:07:8b"
[ℹ]  using Kubernetes version 1.15
[ℹ]  creating EKS cluster "sk-eks-cluster" in "us-west-2" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=sk-eks-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "sk-eks-cluster" in "us-west-2"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --cluster=sk-eks-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "sk-eks-cluster" in "us-west-2"
[ℹ]  2 sequential tasks: { create cluster control plane "sk-eks-cluster", create nodegroup "ng-1ac43407" }
[ℹ]  building cluster stack "eksctl-sk-eks-cluster-cluster"
[ℹ]  deploying stack "eksctl-sk-eks-cluster-cluster"
[ℹ]  building nodegroup stack "eksctl-sk-eks-cluster-nodegroup-ng-1ac43407"
[ℹ]  --nodes-min=4 was set automatically for nodegroup ng-1ac43407
[ℹ]  --nodes-max=4 was set automatically for nodegroup ng-1ac43407
[ℹ]  deploying stack "eksctl-sk-eks-cluster-nodegroup-ng-1ac43407"
[✔]  all EKS cluster resources for "sk-eks-cluster" have been created
[✔]  saved kubeconfig as "/Users/shravan/.kube/config"
[ℹ]  adding identity "arn:aws:iam::506140549518:role/eksctl-sk-eks-cluster-nodegroup-n-NodeInstanceRole-X2DAA3FE2C6G" to auth ConfigMap
[ℹ]  nodegroup "ng-1ac43407" has 0 node(s)
[ℹ]  waiting for at least 4 node(s) to become ready in "ng-1ac43407"
[ℹ]  nodegroup "ng-1ac43407" has 4 node(s)
[ℹ]  node "ip-192-168-0-195.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-56-212.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-71-193.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-78-9.us-west-2.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/shravan/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "sk-eks-cluster" in "us-west-2" region is ready
(base) shravan-learning_kubernetes#

Verify in the AWS console that the cluster and its VPC got created. Notice how we didn’t create any of these manually: we didn’t have to create the VPC, we didn’t have to create all the subnets, and the EC2 worker instances were launched for us.

Also notice from the eksctl output that "/Users/shravan/.kube/config" got created. If you inspect the contents of this file:

(base) shravan-cloud# cat "/Users/shravan/.kube/config"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXdPREF3TVRJek9Gb1hEVE13TURVd05qQXdNVEl6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDFVCnhHMy82UzhXVHFMc0h1bjBkM053QlUwV254aFAyK1NiM09UMk9EMHNLZ3ZINHJHa2ZrWExUVHpsUERodmpCN1YKTm5IdFNHd2Y3bFBTNVBrR0xhOWxjU3lxWEkvNDNoa0JVQzhuUlA4QXZtTWIzOXN0RDN2ejFRaVo1NWxmMDA0UApnQlVCOGYwWXFaQk9KOFA4SHRQTTRTYjIwNUVMZVNBd2dTMVpvVmFYQTZkdXAzT1JYM0ZCb2lLMnNoVXFQaGNsCkl3THBIck1oeUw3Vmh0WjNmdWN5WWIzUE5tUnpnSVR3NzJiUEMwYkhWbTc2NnFDRTRiRFpKNXJyUzB1RlRheTQKTVZJYklIWkNDallvVzZyR3prSnJlMjV2OW40Z2FIK1ZPa1lKZVl6QTlpWnB5elRxSVJVaGJ3SG5FRmZhdWxXawpuM0dlNUdTMlN5a002Q3hHM2RFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKSU1TUFhDOXpTOWhlbm91RnlPbWdQdUR0UzMKZUlzT3RORm9pMTVTZEFneklORkhoY2lFZ093aUNpKzBiTFg3Y0d3dUNRaVhETkRhOGhTK2RPK1Q5eEFBby9hagpKZk5NQW5Nbk5YYnNhb3JRc2Y1aWREcG9ZVHdFMmtTeU9Tb0tkNktwblh0bUhhczc2bGJVbEgvSkljeGdlTU9HCisyNGhTSmJHVVgwT2NVMFdJeSs5eGZpUjFlVnVwQ1hXVFNSVjlnVGlnbXY4SWxzWDRMSjZ3RDBNYnpSSGdyb0MKdkRlbVdwVWNUdFJ5Vm5mUkdrT2FMNzhYMEhXOThqUTNyYkN2RDM0ZUoxeExBcVZhS3BVREt1MFh4MWljWTRKcQpZZGZpc0NZbS84RVhzOUZucGFFTUNKNThQQ1V6LzAybjBkNGdIV2pWZ2U0eDgwWWhvTUlwWDBxNUVldz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://EE4CC311076E6639199766E5BEAC7BB5.gr7.us-west-2.eks.amazonaws.com
  name: sk-eks-cluster.us-west-2.eksctl.io
contexts:
- context:
    cluster: sk-eks-cluster.us-west-2.eksctl.io
    user: sk_cloudformation@sk-eks-cluster.us-west-2.eksctl.io
  name: sk_cloudformation@sk-eks-cluster.us-west-2.eksctl.io
current-context: sk_cloudformation@sk-eks-cluster.us-west-2.eksctl.io
kind: Config
preferences: {}
users:
- name: sk_cloudformation@sk-eks-cluster.us-west-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - sk-eks-cluster
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: us-west-2
(base) shravan-cloud#

The beauty of eksctl and .kube/config file

If you inspect the EKS cluster in the AWS console, you will notice that the certificate-authority-data displayed for the cluster is the same as the one inside the .kube/config file. (This is the cluster’s certificate authority certificate, not the SSH key we generated earlier; that key is only used for SSH access to the worker nodes.) The real beauty of eksctl is that it makes using kubectl very easy by putting all of the cluster configuration inside the .kube/config file.
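If this file is ever deleted, or you want to manage the cluster from another machine, you don’t have to recreate anything by hand. Either of the following should regenerate the kubeconfig (double-check the flags against your eksctl and AWS CLI versions):

eksctl utils write-kubeconfig --cluster=sk-eks-cluster --region=us-west-2

aws eks update-kubeconfig --name sk-eks-cluster --region us-west-2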

At this point, we can just run kubectl commands, which will use the file above to connect to the cluster.

(base) shravan-cloud# kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   19m
(base) shravan-cloud# kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-0-195.us-west-2.compute.internal    Ready    <none>   13m   v1.15.11-eks-af3caf
ip-192-168-56-212.us-west-2.compute.internal   Ready    <none>   13m   v1.15.11-eks-af3caf
ip-192-168-71-193.us-west-2.compute.internal   Ready    <none>   13m   v1.15.11-eks-af3caf
ip-192-168-78-9.us-west-2.compute.internal     Ready    <none>   13m   v1.15.11-eks-af3caf
(base) shravan-cloud# kubectl get pods
No resources found in default namespace.
(base) shravan-cloud# kubectl get namespaces
NAME              STATUS   AGE
default           Active   20m
kube-node-lease   Active   20m
kube-public       Active   20m
kube-system       Active   20m
(base) shravan-cloud#

Side note about CloudFormation Templates that eksctl uses

In the AWS console, you can check the two CloudFormation stacks that were created. You can also find the names of the stacks used to create the cluster in the eksctl create output above. For the control plane, the stack is eksctl-sk-eks-cluster-cluster. Just glance through all the resources it created for us:

  • "ResourceType": "AWS::EC2::SecurityGroup"
  • "ResourceType": "AWS::EKS::Cluster"
  • "ResourceType": "AWS::EC2::SecurityGroupIngress"
  • "ResourceType": "AWS::EC2::InternetGateway"
  • "ResourceType": "AWS::EC2::NatGateway"
  • "ResourceType": "AWS::EC2::EIP"
  • "ResourceType": "AWS::EC2::Route"
  • "ResourceType": "AWS::EC2::Route"

and many more shown below. This saves us all the trouble of manually wiring them up.
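Before drilling into the individual resources with the AWS CLI, you can also describe both stacks with eksctl itself; this is the same command the create output suggested for troubleshooting:

eksctl utils describe-stacks --region=us-west-2 --cluster=sk-eks-cluster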

(base) shravan-cloud# aws cloudformation describe-stack-resources --stack-name eksctl-sk-eks-cluster-cluster --region us-west-2 | jq .
{
  "StackResources": [
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "ClusterSharedNodeSecurityGroup",
      "PhysicalResourceId": "sg-0e09e46770af867c3",
      "ResourceType": "AWS::EC2::SecurityGroup",
      "Timestamp": "2020-05-08T00:05:21.831Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "ControlPlane",
      "PhysicalResourceId": "sk-eks-cluster",
      "ResourceType": "AWS::EKS::Cluster",
      "Timestamp": "2020-05-08T00:15:21.186Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "ControlPlaneSecurityGroup",
      "PhysicalResourceId": "sg-0cd28411709395591",
      "ResourceType": "AWS::EC2::SecurityGroup",
      "Timestamp": "2020-05-08T00:05:22.056Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "IngressDefaultClusterToNodeSG",
      "PhysicalResourceId": "IngressDefaultClusterToNodeSG",
      "ResourceType": "AWS::EC2::SecurityGroupIngress",
      "Timestamp": "2020-05-08T00:15:24.184Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "IngressInterNodeGroupSG",
      "PhysicalResourceId": "IngressInterNodeGroupSG",
      "ResourceType": "AWS::EC2::SecurityGroupIngress",
      "Timestamp": "2020-05-08T00:05:24.175Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "IngressNodeToDefaultClusterSG",
      "PhysicalResourceId": "IngressNodeToDefaultClusterSG",
      "ResourceType": "AWS::EC2::SecurityGroupIngress",
      "Timestamp": "2020-05-08T00:15:24.133Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "InternetGateway",
      "PhysicalResourceId": "igw-047ecfbfad9f135aa",
      "ResourceType": "AWS::EC2::InternetGateway",
      "Timestamp": "2020-05-08T00:05:13.914Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "NATGateway",
      "PhysicalResourceId": "nat-0ac2988cf07c48ce5",
      "ResourceType": "AWS::EC2::NatGateway",
      "Timestamp": "2020-05-08T00:08:09.409Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "NATIP",
      "PhysicalResourceId": "52.36.168.131",
      "ResourceType": "AWS::EC2::EIP",
      "Timestamp": "2020-05-08T00:05:14.490Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "NATPrivateSubnetRouteUSWEST2B",
      "PhysicalResourceId": "eksct-NATPr-1WBVUP7XH1JM3",
      "ResourceType": "AWS::EC2::Route",
      "Timestamp": "2020-05-08T00:08:26.958Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "NATPrivateSubnetRouteUSWEST2C",
      "PhysicalResourceId": "eksct-NATPr-VDR0VYG6PJDM",
      "ResourceType": "AWS::EC2::Route",
      "Timestamp": "2020-05-08T00:08:27.276Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "NATPrivateSubnetRouteUSWEST2D",
      "PhysicalResourceId": "eksct-NATPr-SHUGIB5FZO5O",
      "ResourceType": "AWS::EC2::Route",
      "Timestamp": "2020-05-08T00:08:27.352Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PolicyCloudWatchMetrics",
      "PhysicalResourceId": "eksct-Poli-79IFL8KUUAFS",
      "ResourceType": "AWS::IAM::Policy",
      "Timestamp": "2020-05-08T00:05:58.890Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PolicyNLB",
      "PhysicalResourceId": "eksct-Poli-3VP1X21GWRZ9",
      "ResourceType": "AWS::IAM::Policy",
      "Timestamp": "2020-05-08T00:05:58.917Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PrivateRouteTableUSWEST2B",
      "PhysicalResourceId": "rtb-0a904564b76372f06",
      "ResourceType": "AWS::EC2::RouteTable",
      "Timestamp": "2020-05-08T00:05:17.874Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PrivateRouteTableUSWEST2C",
      "PhysicalResourceId": "rtb-0cbc85e1a340fb74f",
      "ResourceType": "AWS::EC2::RouteTable",
      "Timestamp": "2020-05-08T00:05:17.810Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PrivateRouteTableUSWEST2D",
      "PhysicalResourceId": "rtb-0d3ceab4a8ac6a39f",
      "ResourceType": "AWS::EC2::RouteTable",
      "Timestamp": "2020-05-08T00:05:18.149Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PublicRouteTable",
      "PhysicalResourceId": "rtb-078dd1cb7ebfa5593",
      "ResourceType": "AWS::EC2::RouteTable",
      "Timestamp": "2020-05-08T00:05:17.627Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "PublicSubnetRoute",
      "PhysicalResourceId": "eksct-Publi-1T0XVKYPW1L1Q",
      "ResourceType": "AWS::EC2::Route",
      "Timestamp": "2020-05-08T00:05:35.370Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPrivateUSWEST2B",
      "PhysicalResourceId": "rtbassoc-03cf1c56e34577f43",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:51.373Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPrivateUSWEST2C",
      "PhysicalResourceId": "rtbassoc-0c62a80d5d25d30f1",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:50.950Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPrivateUSWEST2D",
      "PhysicalResourceId": "rtbassoc-09a0c3543a4cc9af8",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:50.670Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPublicUSWEST2B",
      "PhysicalResourceId": "rtbassoc-0946ee72e308d3188",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:50.442Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPublicUSWEST2C",
      "PhysicalResourceId": "rtbassoc-0db4675ebf3521b1e",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:51.269Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "RouteTableAssociationPublicUSWEST2D",
      "PhysicalResourceId": "rtbassoc-05362b1ca3349e1f1",
      "ResourceType": "AWS::EC2::SubnetRouteTableAssociation",
      "Timestamp": "2020-05-08T00:05:50.804Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "ServiceRole",
      "PhysicalResourceId": "eksctl-sk-eks-cluster-cluster-ServiceRole-C9FB05RF08BS",
      "ResourceType": "AWS::IAM::Role",
      "Timestamp": "2020-05-08T00:05:27.679Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPrivateUSWEST2B",
      "PhysicalResourceId": "subnet-024e19d4e7fa33dfb",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:32.998Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPrivateUSWEST2C",
      "PhysicalResourceId": "subnet-0423b8e051294d6d8",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:33.027Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPrivateUSWEST2D",
      "PhysicalResourceId": "subnet-003fbf3694451f871",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:32.998Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPublicUSWEST2B",
      "PhysicalResourceId": "subnet-0e6ff790f78ba2a4d",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:32.711Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPublicUSWEST2C",
      "PhysicalResourceId": "subnet-0e5533e0f5e7da962",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:33.034Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "SubnetPublicUSWEST2D",
      "PhysicalResourceId": "subnet-0352b328558edbd68",
      "ResourceType": "AWS::EC2::Subnet",
      "Timestamp": "2020-05-08T00:05:32.995Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "VPC",
      "PhysicalResourceId": "vpc-014711e11e7590c6d",
      "ResourceType": "AWS::EC2::VPC",
      "Timestamp": "2020-05-08T00:05:14.725Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    },
    {
      "StackName": "eksctl-sk-eks-cluster-cluster",
      "StackId": "arn:aws:cloudformation:us-west-2:506140549518:stack/eksctl-sk-eks-cluster-cluster/7a073f30-90bf-11ea-a176-0ae3e770e3b8",
      "LogicalResourceId": "VPCGatewayAttachment",
      "PhysicalResourceId": "eksct-VPCGa-1X9XZVU4EP5ZT",
      "ResourceType": "AWS::EC2::VPCGatewayAttachment",
      "Timestamp": "2020-05-08T00:05:32.411Z",
      "ResourceStatus": "CREATE_COMPLETE",
      "DriftInformation": {
        "StackResourceDriftStatus": "NOT_CHECKED"
      }
    }
  ]
}
(base) shravan-cloud#

Deleting the resources

Finally, destroy the resources. Start by deleting all the services if you have any (this also cleans up any load balancers created for them). Then obliterate the cluster.

(base) shravan-cloud# kubectl delete services --all
service "kubernetes" deleted
(base) shravan-cloud# eksctl get cluster
NAME		REGION
sk-eks-cluster	us-west-2
(base) shravan-cloud# eksctl delete cluster sk-eks-cluster
[ℹ]  eksctl version 0.18.0
[ℹ]  using region us-west-2
[ℹ]  deleting EKS cluster "sk-eks-cluster"
[ℹ]  deleted 0 Fargate profile(s)
[✔]  kubeconfig has been updated
[ℹ]  cleaning up LoadBalancer services
[ℹ]  2 sequential tasks: { delete nodegroup "ng-1ac43407", delete cluster control plane "sk-eks-cluster" [async] }
[ℹ]  will delete stack "eksctl-sk-eks-cluster-nodegroup-ng-1ac43407"
[ℹ]  waiting for stack "eksctl-sk-eks-cluster-nodegroup-ng-1ac43407" to get deleted
[ℹ]  will delete stack "eksctl-sk-eks-cluster-cluster"
[✔]  all cluster resources were deleted
(base) shravan-cloud#
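Note that the control plane stack is deleted asynchronously (the [async] in the output above), so it can linger for a few minutes. A couple of ways to confirm everything is gone once deletion finishes:

eksctl get cluster --region=us-west-2

aws cloudformation describe-stacks --stack-name eksctl-sk-eks-cluster-cluster --region us-west-2

The first should report that no clusters were found, and the second should fail with a "does not exist" error once the stack has been removed.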