Deploying Kubernetes

Once you’ve decided to use Kubernetes, you can choose from a variety of deployment methods.

Single-Node Kubernetes Clusters

For development and test scenarios, you can run Kubernetes on a single machine.

  • Docker for Mac and Docker for Windows both support running Kubernetes on a local machine in a single-node configuration. Just make sure Kubernetes is enabled in the settings. This is the easiest way to get started if you are already using Docker.
  • Another option is Minikube, which supports Linux in addition to Mac and Windows.
  • Lastly, Linux systems can use kubeadm to set up a single-node cluster. Kubeadm is a building block for creating Kubernetes clusters, but it can effectively create a single-node cluster on its own. Beware that kubeadm installs Kubernetes on the system itself rather than inside a virtual machine, as the prior methods do.
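As a sketch of the kubeadm route, a single-node setup might start from a minimal configuration file like the following (the Kubernetes version and pod subnet are illustrative assumptions, not values from this post):

```yaml
# cluster.yaml — minimal kubeadm configuration (illustrative values)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0   # assumed version; pick a current release
networking:
  podSubnet: 10.244.0.0/16   # assumed pod network range for the CNI plugin
```

After running `kubeadm init --config cluster.yaml`, a single-node cluster also needs the control-plane taint removed so that regular workloads can schedule on it; in recent Kubernetes versions that is `kubectl taint nodes --all node-role.kubernetes.io/control-plane-`.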

Single-node Kubernetes clusters are also useful within continuous integration pipelines. In this use case, you want ephemeral clusters that start quickly and are in a pristine state, so that each time you check in code you can test your applications on a fresh Kubernetes cluster. Kubernetes in Docker, abbreviated kind, is made specifically for this use case.
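As a sketch (the cluster name and file name are assumptions), a CI pipeline can describe its ephemeral cluster in a small kind configuration file:

```yaml
# kind-ci.yaml — a single-node kind cluster for CI runs (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```

A CI job would then run `kind create cluster --name ci --config kind-ci.yaml`, execute its tests against that cluster, and finish with `kind delete cluster --name ci` so the next run starts from a pristine state.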

Multi-Node Kubernetes Clusters

For your production workloads, you want clusters with multiple nodes to take advantage of horizontal scaling and to tolerate node failures.

To decide what solution works best for you, you need to ask several key questions:

How much control do you want over the cluster versus how much effort are you willing to invest in maintaining it?

Fully managed solutions free you from routine maintenance, but they often lag the latest Kubernetes release by a couple of minor versions. New versions of Kubernetes are released every three months. Examples of fully managed Kubernetes-as-a-service solutions include Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). If you prefer to have full control over your cluster, you should check out kubespray, kops, and kubeadm.

Do you already have expertise with a particular cloud provider?

Cloud providers’ managed Kubernetes services integrate tightly with the other services in their clouds, such as how identity and access management is performed.

Do you need enterprise support?

If so, several vendors offer managed Kubernetes services and distributions backed by enterprise support agreements.
Are you concerned about vendor lock-in?

If you are, you should focus on open source solutions like kubespray and Rancher that can deploy Kubernetes clusters to a wide variety of platforms.

Do you want the cluster on-prem, in the cloud, or both?

Because Kubernetes provides users with an abstraction of a cluster of resources, the underlying nodes can run on different platforms. This makes Kubernetes a natural core for open source hybrid clouds, where a cluster spans on-premises and cloud infrastructure. Even the cloud vendors’ Kubernetes solutions allow using on-premises compute. For example, GKE On-Prem lets you run GKE on-premises, EKS allows you to add on-premises nodes to the cluster, and Azure Stack allows you to run AKS on-premises.

Do you want to run Linux containers, Windows containers, or both?

To support Linux containers, you need Linux nodes in your cluster; to support Windows containers, you need Windows nodes. Both node types can coexist in the same cluster, so a single cluster can run both kinds of containers.
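In a mixed cluster, you steer each pod to the right node type. A sketch using the standard `kubernetes.io/os` node label (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-example              # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows    # schedule only onto Windows nodes
  containers:
    - name: iis
      image: mcr.microsoft.com/windows/servercore/iis   # a Windows container image
```

Linux workloads can use `kubernetes.io/os: linux` the same way, so pods never land on a node with the wrong operating system.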

Cluster setup

For the subsequent posts, I will be using a multi-node Kubernetes cluster as shown here:

[diagram of the multi-node cluster]

Setup references for the platforms discussed in this post:

  • Docker Desktop for Mac and Windows
  • Kubernetes in Docker (kind)
  • Amazon EKS
  • AKS (Azure)
  • GKE (Google Cloud)
  • Pivotal Container Service
  • GKE On-Prem
  • Azure Stack
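If you want to follow along locally, a comparable multi-node cluster can be sketched with a kind configuration (the node counts are an illustrative assumption):

```yaml
# kind-multinode.yaml — one control-plane node and two workers (illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create it with `kind create cluster --config kind-multinode.yaml`, which runs each node as a separate Docker container on your machine.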