Amazon EKS: Setting Up and Managing Kubernetes on AWS

Amazon Elastic Kubernetes Service (EKS) is a managed service that lets you run Kubernetes on AWS without installing and operating your own control plane. This guide covers the essential steps for setting up and managing an EKS cluster.

Introduction to Amazon EKS

Amazon EKS runs the Kubernetes management infrastructure across multiple AWS Availability Zones, automatically detects and replaces unhealthy control plane instances, and provides on-demand upgrades and patching. This guide walks you through setting up the necessary IAM roles, installing the required tools, and creating your first EKS cluster.

Prerequisites

Before you begin working with Amazon EKS, ensure you have the following:

  • An AWS account with appropriate permissions
  • AWS CLI installed and configured
  • kubectl command line tool installed
  • Basic understanding of Kubernetes concepts
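
Before proceeding, it is worth confirming that the CLI tools are installed and that your AWS credentials resolve to the expected account. A minimal sanity check (output will vary with your versions and account):

# Confirm tool versions and the active AWS identity
$ aws --version
$ aws sts get-caller-identity
$ kubectl version --client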

Setting Up IAM Permissions for EKS

Creating an EKS Admin User Policy

To manage EKS clusters, you need to create an IAM policy with appropriate permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}

Note: For production environments, it’s recommended to follow the principle of least privilege and restrict permissions to only what’s necessary. See the official EKS documentation for more granular policy examples.
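
If you prefer the CLI to the console, the same policy can be created with aws iam create-policy. A sketch, assuming the JSON above is saved as eks-admin-policy.json; the policy name EKSAdminPolicy and the placeholder user name are illustrative, not values from this guide:

# Create the policy from the JSON document above
$ aws iam create-policy \
  --policy-name EKSAdminPolicy \
  --policy-document file://eks-admin-policy.json

# Attach it to the IAM user that will manage EKS (replace the placeholders)
$ aws iam attach-user-policy \
  --user-name <your-user-name> \
  --policy-arn arn:aws:iam::<account-id>:policy/EKSAdminPolicy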

Creating the EKS Service Role

EKS requires a service role that allows it to manage resources on your behalf:

  1. Open the IAM console at https://console.aws.amazon.com/iam/
  2. Choose Roles, then Create role
  3. Select EKS from the list of services
  4. Choose "Allows Amazon EKS to manage your clusters on your behalf" for your use case
  5. Click Next: Permissions (the appropriate managed policy should be automatically attached)
  6. Click Next: Tags (add tags if needed) and then Next: Review
  7. For Role name, enter a descriptive name such as eksServiceRole
  8. Click Create role
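
The same role can also be created from the command line. A sketch, assuming the role name eksServiceRole from step 7; AmazonEKSClusterPolicy and AmazonEKSServicePolicy are the AWS managed policies the console attaches for this use case:

# Trust policy that lets the EKS service assume the role
$ cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed policies
$ aws iam create-role \
  --role-name eksServiceRole \
  --assume-role-policy-document file://eks-trust-policy.json
$ aws iam attach-role-policy \
  --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
$ aws iam attach-role-policy \
  --role-name eksServiceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy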

Installing Required Tools

AWS IAM Authenticator

The AWS IAM Authenticator allows you to use AWS IAM credentials to authenticate to a Kubernetes cluster:

# For macOS with Homebrew
$ brew install aws-iam-authenticator

# For Linux
$ curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/aws-iam-authenticator
$ chmod +x ./aws-iam-authenticator
$ sudo mv ./aws-iam-authenticator /usr/local/bin/
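
To verify the binary is on your PATH and executable:

$ aws-iam-authenticator help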

eksctl - The EKS CLI Tool

The eksctl command line tool simplifies the process of creating and managing EKS clusters:

# For macOS with Homebrew
$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl

# For Linux
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
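
Confirm the installation:

$ eksctl version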

Creating an EKS Cluster with eksctl

A single eksctl command handles cluster creation end to end, provisioning all required resources, including the VPC, subnets, security groups, and IAM roles:

$ eksctl create cluster \
  --name eks-qa \
  --region us-west-2 \
  --node-type m5.large \
  --node-volume-size 20 \
  --nodes 2 \
  --nodes-max 4 \
  --nodes-min 1 \
  --ssh-public-key ~/.ssh/id_rsa_ops.pub \
  --tags Owner=EKS \
  --vpc-cidr 10.30.0.0/16

Command Parameters Explained:

  • --name: The name of your EKS cluster
  • --region: AWS region to deploy the cluster in
  • --node-type: EC2 instance type for worker nodes
  • --node-volume-size: Size (in GiB) of the EBS root volume attached to each node
  • --nodes: Initial number of nodes to launch
  • --nodes-max: Maximum number of nodes in the Auto Scaling group
  • --nodes-min: Minimum number of nodes in the Auto Scaling group
  • --ssh-public-key: SSH key for remote access to worker nodes
  • --tags: AWS resource tags to apply to the cluster
  • --vpc-cidr: CIDR block for the VPC (eksctl will create a new VPC)

Example Cluster Creation Output

When you run the cluster creation command, you’ll see output similar to this:

2018-12-13T14:03:56-08:00 [ℹ]  using region us-west-2
2018-12-13T14:03:57-08:00 [ℹ]  setting availability zones to [us-west-2a us-west-2c us-west-2b]
2018-12-13T14:03:57-08:00 [ℹ]  subnets for us-west-2a - public:10.30.0.0/19 private:10.30.96.0/19
2018-12-13T14:03:57-08:00 [ℹ]  subnets for us-west-2c - public:10.30.32.0/19 private:10.30.128.0/19
2018-12-13T14:03:57-08:00 [ℹ]  subnets for us-west-2b - public:10.30.64.0/19 private:10.30.160.0/19
2018-12-13T14:03:57-08:00 [ℹ]  using "ami-0f54a2f7d2e9c88b3" for nodes
2018-12-13T14:03:57-08:00 [ℹ]  creating EKS cluster "eks-qa" in "us-west-2" region
2018-12-13T14:03:57-08:00 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-12-13T14:03:57-08:00 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=eks-qa'
2018-12-13T14:03:57-08:00 [ℹ]  creating cluster stack "eksctl-eks-qa-cluster"
2018-12-13T14:14:12-08:00 [ℹ]  creating nodegroup stack "eksctl-eks-qa-nodegroup-0"
2018-12-13T14:14:12-08:00 [ℹ]  as --nodes-min=1 and --nodes-max=4 were given, default value of --nodes=2 was kept as it is within the set range
2018-12-13T14:18:09-08:00 [✔]  all EKS cluster resource for "eks-qa" had been created
2018-12-13T14:18:09-08:00 [✔]  saved kubeconfig as "/Users/l/.kube/config"
2018-12-13T14:18:10-08:00 [ℹ]  the cluster has 0 nodes
2018-12-13T14:18:10-08:00 [ℹ]  waiting for at least 1 nodes to become ready
2018-12-13T14:18:54-08:00 [ℹ]  the cluster has 2 nodes
2018-12-13T14:18:54-08:00 [ℹ]  node "ip-10-30-60-11.us-west-2.compute.internal" is ready
2018-12-13T14:18:54-08:00 [ℹ]  node "ip-10-30-68-158.us-west-2.compute.internal" is ready
2018-12-13T14:18:55-08:00 [ℹ]  kubectl command should work with "/Users/l/.kube/config", try 'kubectl get nodes'
2018-12-13T14:18:55-08:00 [✔]  EKS cluster "eks-qa" in "us-west-2" region is ready

Verifying Cluster Creation

After the cluster is created, verify that you can connect to it:

# List the nodes in your cluster
$ kubectl get nodes

# View cluster information
$ kubectl cluster-info
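
As the creation output shows, eksctl saves a kubeconfig for you automatically. If you are connecting from a different machine, or the kubeconfig entry is missing, it can be regenerated with the AWS CLI (assuming the example cluster name and region used above):

# Regenerate the kubeconfig entry for the cluster
$ aws eks update-kubeconfig --region us-west-2 --name eks-qa

# Confirm eksctl can see the cluster
$ eksctl get cluster --region us-west-2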

Next Steps

After your EKS cluster is up and running, consider these next steps:

  1. Deploy applications: Use kubectl apply to deploy your applications to the cluster (a quick smoke test is sketched after this list)
  2. Set up monitoring: Install Prometheus and Grafana for monitoring
  3. Configure logging: Set up CloudWatch Logs for centralized logging
  4. Implement CI/CD: Set up a CI/CD pipeline for automated deployments
  5. Manage access: Configure Kubernetes RBAC for fine-grained access control
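
As a quick smoke test for step 1, you can deploy a stock container image and expose it behind an AWS load balancer. A minimal sketch using the public nginx image; the deployment name and port are arbitrary:

# Create an nginx deployment and scale it to two replicas
$ kubectl create deployment web --image=nginx
$ kubectl scale deployment web --replicas=2

# Expose it through an AWS-provisioned load balancer
$ kubectl expose deployment web --type=LoadBalancer --port=80

# Watch for the external address to appear
$ kubectl get service web --watch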

Troubleshooting

If you encounter issues with your EKS cluster:

  • Check the CloudFormation console for stack creation errors
  • Verify IAM permissions for both the EKS service role and your user
  • Ensure the AWS IAM Authenticator is properly configured
  • Check the AWS EKS console for cluster status and events
  • Run eksctl utils describe-stacks --region=us-west-2 --name=eks-qa for detailed stack information
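
The control plane status can also be inspected directly with the AWS CLI; a quick check, assuming the example cluster name and region used above:

# Show the cluster status (ACTIVE, CREATING, FAILED, ...)
$ aws eks describe-cluster --name eks-qa --region us-west-2 --query cluster.status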