Getting started with Kubernetes: how to set up your first cluster

Kubernetes is an excellent tool for deploying and scaling complex distributed systems, but it’s no secret that getting started with Kubernetes is a challenge. Most Kubernetes tutorials use a tool like Minikube to get you started, which doesn’t teach you much about configuring production-ready clusters.

In this article, we’ll take a different approach and show you how to set up a real-world, production-ready Kubernetes cluster using Amazon Elastic Kubernetes Service (Amazon EKS) and Terraform.

Introducing Terraform

HashiCorp’s Terraform is an infrastructure as code (IaC) solution that allows you to declaratively define the desired configuration of your cloud infrastructure. Using the Terraform CLI, you can provision this configuration locally or as part of automated CI/CD pipelines.

Terraform is similar to configuration tools provided by cloud platforms such as AWS CloudFormation or Azure Resource Manager, but it has the advantage of being provider-agnostic. If you’re not familiar with Terraform, we recommend that you first go through their getting started with AWS guide to learn the most important concepts.

Defining infrastructure

Let’s build a Terraform configuration that provisions an Amazon EKS cluster and an AWS Virtual Private Cloud (VPC) step-by-step.

Create the main.tf file with the following content:

provider "aws" {
  region = "eu-west-1"
}

data "aws_availability_zones" "azs" {
  state = "available"
}

locals {
  cluster_name = "eks-circleci-cluster"
}

In this file, we first set up the AWS provider with the region set to eu-west-1. Feel free to change this to any other AWS region.

The AWS provider checks several locations for valid credentials, such as environment variables and the shared credentials file, so be sure to set these before continuing.
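For example, you can export credentials as environment variables in your shell before running Terraform. This is a minimal sketch; the placeholder values stand in for your own IAM user credentials:

$ export AWS_ACCESS_KEY_ID="<your-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"

A named profile in the shared credentials file (~/.aws/credentials) works just as well.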

We then fetch the availability zones we can use in this region, which makes it easier to switch regions without any other changes. We also define the cluster name as a local value because we’ll be referencing it multiple times; you can change this value as well.

Let’s configure the VPC. Add the following content to the file:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.48"

  name = "eks-circleci-vpc"
  cidr = "10.0.0.0/16"

  azs                  = slice(data.aws_availability_zones.azs.names, 0, 2)
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets       = ["10.0.3.0/24", "10.0.4.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}

By using the AWS VPC module, we greatly simplify VPC creation. We configure the VPC to use the first two AZs in the region where we deploy our Terraform configuration, which is the minimum number of AZs required by EKS.

To save costs, we only configure a single NAT gateway. The AWS VPC module will set up the routing tables properly to route everything through this single NAT gateway. We also enable DNS hostnames for the VPC as this is a requirement for EKS.

Finally, we set the tags required by EKS so that it can discover its subnets and know where to place public and private load balancers.

Next, let’s configure the EKS cluster using the AWS EKS module.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 12.2"

  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  worker_groups = [
    {
      instance_type = "t3.large"
      asg_max_size  = 1
    }
  ]
}

We configure the cluster to use the VPC we’ve created and define a single worker group with one t3.large instance. This will be enough to create a simple test resource in the cluster while minimizing costs.

Finally, add the following configuration to the file:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  version = "~> 1.9"

  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

output "kubectl_config" {
  description = "kubectl config that can be used to authenticate with the cluster"
  value       = module.eks.kubeconfig
}

We fetch some data from the Amazon EKS cluster configuration and configure the Terraform Kubernetes Provider to authenticate with the cluster. This provider is also used within the AWS EKS module when it creates a configmap in the cluster, which gives the currently authenticated AWS user admin permissions to the cluster. This is an EKS-specific method for granting an AWS entity access to a cluster.
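If you’re curious, you can inspect that configmap once the cluster is provisioned and kubectl is configured (covered in the next section). EKS stores it as aws-auth in the kube-system namespace:

$ kubectl -n kube-system get configmap aws-auth -o yaml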

Finally, the output block will display the information required to authenticate with the cluster – more on that after we have provisioned the cluster.

We now have a Terraform configuration completely ready for spinning up an EKS cluster. Let’s apply this configuration and create a test resource in our cluster.

Provisioning a Kubernetes cluster

Configure your shell to authenticate the Terraform AWS provider. With your working directory in the same directory where we just created the Terraform file, initialize the workspace:

$ terraform init
Initializing modules...
Downloading terraform-aws-modules/eks/aws 12.2.0 for eks...
- eks in .terraform/modules/eks/terraform-aws-eks-12.2.0
- eks.node_groups in .terraform/modules/eks/terraform-aws-eks-12.2.0/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.48.0 for vpc...
- vpc in .terraform/modules/vpc/terraform-aws-vpc-2.48.0

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "random" (hashicorp/random) 2.3.0...
- Downloading plugin for provider "aws" (hashicorp/aws) 3.3.0...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.12.0...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...

Terraform has been successfully initialized!

[...]

The terraform init command downloads all providers included in the config file. We can now apply and provision the VPC and the cluster.

Note: this will take 10 to 15 minutes to complete.

$ terraform apply

[...]

Plan: 40 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

[...]

Apply complete! Resources: 40 added, 0 changed, 0 destroyed.

Outputs:

[...]

That’s it: we now have an Amazon EKS cluster fully up and running.

Let’s deploy a pod and expose it through a load balancer to ensure our cluster works as expected.

To authenticate with the cluster, you need to have kubectl and the aws-iam-authenticator installed. The Outputs section at the end of terraform apply includes the kubeconfig details that you can add to your kubeconfig file (or save to a temporary new file).
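For example, you can save the output to a dedicated file and point kubectl at it. This is a minimal sketch; kubectl_config is the output name we defined above, and the file path is just an example:

$ terraform output kubectl_config > kubeconfig_eks
$ export KUBECONFIG=$PWD/kubeconfig_eks

Once kubectl can reach the cluster, let’s create a new deployment and expose it: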

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl expose deployment/nginx --port=80 --type=LoadBalancer
service/nginx exposed

$ kubectl get service nginx

Get the EXTERNAL-IP value from the final output – this is the DNS name of the AWS load balancer. It may take a few minutes for the DNS to propagate. Once it has, visiting that address in your browser will greet you with the “Welcome to nginx!” page. Success!
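You can also verify from the command line with curl. This assumes the load balancer hostname shown in the EXTERNAL-IP column; substitute your own value:

$ curl -s http://<EXTERNAL-IP> | grep '<title>'
<title>Welcome to nginx!</title>

Keep in mind that the cluster, worker node, and NAT gateway all incur costs while running; terraform destroy removes everything this configuration created.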

An easier way: the CircleCI aws-eks orb

You’ve learned one method for spinning up an Amazon EKS cluster. However, assembling the configuration took quite a bit of manual work, and keeping it up to date will take more still.

Fortunately, the CircleCI aws-eks orb can help. CircleCI orbs contain pre-packaged configuration code that makes it easier to integrate with other developer tools. This is just one of the many orbs available in the orb registry.

The aws-eks orb can automatically spin up, test, and tear down an Amazon EKS cluster. You can use it to create powerful workflows where applications are tested in a clean, fully isolated, temporary EKS cluster. Let’s try it.

A CircleCI account is the main prerequisite to get started with the orb. Sign up with CircleCI and connect a Git repository to your account in a CircleCI project.

In this project, go to the Environment Variables tab under Settings and add the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION variables so that the project can authenticate with AWS. Be sure that the AWS IAM user has the permissions required to create an EKS cluster and its dependencies. Set AWS_DEFAULT_REGION to the region where you want to provision the cluster.

Next, create the .circleci/config.yml file in your Git repository and add the following content:

version: 2.1

orbs:
  aws-eks: circleci/aws-eks@1.0.0
  kubernetes: circleci/kubernetes@0.11.1
  
jobs:
  test-cluster:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: |
          Name of the EKS cluster
        type: string
    steps:
      - kubernetes/install
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: << parameters.cluster-name >>
      - run:
          command: |
            kubectl get services
          name: Test cluster

workflows:
  deployment:
    jobs:
      - aws-eks/create-cluster:
          cluster-name: my-first-cluster
      - test-cluster:
          cluster-name: my-first-cluster
          requires:
            - aws-eks/create-cluster
      - aws-eks/delete-cluster:
          cluster-name: my-first-cluster
          requires:
            - test-cluster

This file configures a workflow with three jobs:
  1. Uses the create-cluster command from the aws-eks orb to create a cluster and its dependencies using the eksctl utility
  2. Runs a simple test to verify the cluster works as expected
  3. Destroys the cluster

Commit this file to your repository, and CircleCI will automatically start the workflow, which takes 15 to 20 minutes to complete. Of course, you can add any steps you need after cluster creation, such as provisioning resources and running tests.

Compared with the Terraform method of creating an Amazon EKS cluster that we discussed earlier, the aws-eks orb drastically simplifies and speeds up the process of managing the lifecycle of an EKS cluster. The complexity of the EKS cluster itself, as well as the configuration of its dependencies (such as the VPC), is completely abstracted away. This is a low-maintenance solution that allows you to focus your efforts on building valuable continuous integration workflows with automated tests.

Next steps

You’ve learned that creating your first Kubernetes cluster doesn’t have to be difficult or scary. Terraform provides an easy way to define the cluster infrastructure. Through simple CLI commands, you can easily provision the defined infrastructure.

The CircleCI orb simplifies the process even further. You can reach the same end result without writing your own Terraform code or running any commands.

The best way to learn this is to do it yourself. Start by creating your cluster manually, using Terraform code along with its CLI. Then, try CircleCI to see just how easy and fast it is to create a cluster using the aws-eks orb.


This article was originally posted on CircleCI.

Does your business work with Kubernetes or container-orchestration systems? If you’re interested in developing expert technical content that performs, let’s have a conversation today.
