How to Configure K8S Multi-Node Cluster over AWS Cloud via Ansible Role

Jayesh Kumar
3 min read · Apr 22, 2021

INTRODUCTION:

πŸ”… Create Ansible Playbook to launch 3 AWS EC2 Instance
πŸ”… Create Ansible Playbook to configure Docker over those instances.
πŸ”… Create Playbook to configure K8S Master, K8S Worker Nodes on the above created EC2 Instances using kubeadm.
πŸ”… Convert Playbook into roles and Upload those role on your Ansible Galaxy.

FIRST STEP

Launch three EC2 instances on the AWS cloud

In this task, we will deploy a Kubernetes cluster on the AWS cloud via Ansible roles.

I have made two different roles for this purpose:

aws_provision launches the three EC2 instances, and cluster_setup configures the Kubernetes cluster on the launched instances.

This is the main task file for the aws_provision role.
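The original screenshot of the task file is not reproduced here, but a minimal sketch of what tasks/main.yml for aws_provision could look like is shown below. The module choice (amazon.aws.ec2_instance) and all variable names are assumptions, so match them to your own vars file:

```yaml
# tasks/main.yml for the aws_provision role -- a sketch, not the article's
# exact code. Variable names (region, ami_id, etc.) are illustrative.
- name: Launch EC2 instances for the Kubernetes cluster
  amazon.aws.ec2_instance:
    name: "k8s-node"
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image_id: "{{ ami_id }}"
    region: "{{ region }}"
    security_group: "{{ sg_name }}"
    exact_count: 3                      # one master + two workers
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
  register: ec2_out
```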

This is the vars file that stores the variables for aws_provision.
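A vars file matching the task sketch above might look like this; every value here is an illustrative placeholder, not the article's actual configuration:

```yaml
# vars/main.yml for aws_provision -- placeholder values, adjust for your account
region: ap-south-1
instance_type: t2.micro
ami_id: ami-0123456789abcdef0   # placeholder AMI ID (e.g. Amazon Linux 2)
key_name: mykey
sg_name: k8s-cluster-sg
access_key: "{{ lookup('env', 'AWS_ACCESS_KEY_ID') }}"
secret_key: "{{ lookup('env', 'AWS_SECRET_ACCESS_KEY') }}"
```

Keeping the credentials out of the file (here read from environment variables, or better, an Ansible Vault) avoids committing secrets to the repository.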

Now we run ansible-playbook to execute the aws_provision role.
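The playbook that calls the role runs against localhost, since the EC2 API calls originate from the control node. A minimal sketch (the playbook name is an assumption):

```yaml
# setup.yml -- runs the aws_provision role from the control node
- hosts: localhost
  connection: local
  roles:
    - aws_provision
```

It is then executed with `ansible-playbook setup.yml`.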

We can now see the three EC2 instances launched on AWS.

SECOND STEP

Configure a Kubernetes cluster consisting of one master and two worker nodes.

The steps involved are:

ON BOTH MASTER AND WORKER NODES:

Configure the yum repository for Kubernetes

Install Docker

Install kubeadm, kubelet, and kubectl

Start the kubelet service

Edit daemon.json to set the systemd cgroup driver

Restart the Docker service

Install iproute-tc

ON THE MASTER ONLY:

Pull the required images with kubeadm

Initialize the cluster with kubeadm init

Copy the admin config file so kubectl can be used

ON THE WORKER NODES ONLY:

Run the kubeadm join token command printed by the master
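The daemon.json edit mentioned above switches Docker to the systemd cgroup driver, which kubeadm expects; without it, kubeadm init fails its preflight checks on Docker-based nodes. The file typically contains just:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Docker must be restarted after this change for it to take effect.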

Now, this is the task file for the cluster_setup role.
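Again the screenshot is not reproduced, but a condensed sketch of the common steps in tasks/main.yml could look like the following. The group name `master` and the flannel CIDR are assumptions; the repository URLs are the standard upstream Kubernetes yum repo of that period:

```yaml
# tasks/main.yml for the cluster_setup role -- a condensed sketch
- name: Configure the yum repository for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install Docker, kubeadm, kubelet, kubectl, and iproute-tc
  package:
    name: [docker, kubeadm, kubelet, kubectl, iproute-tc]
    state: present

- name: Start and enable the docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Set Docker's cgroup driver to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: '{"exec-opts": ["native.cgroupdriver=systemd"]}'
  notify: restart docker

# Master-only step; 'master' is an assumed inventory group name
- name: Initialize the cluster on the master
  command: kubeadm init --pod-network-cidr=10.244.0.0/16
  when: inventory_hostname in groups['master']
```

The worker nodes then run the `kubeadm join …` command that kubeadm init prints on the master.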

This is the folder structure for the cluster_setup role.
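A role generated with `ansible-galaxy init cluster_setup` follows the standard layout, roughly:

```
cluster_setup/
├── tasks/
│   └── main.yml      # the task file shown above
├── handlers/
│   └── main.yml      # e.g. the "restart docker" handler
├── vars/
│   └── main.yml
├── templates/
└── meta/
    └── main.yml
```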

Now we run ansible-playbook to configure the cluster.
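The playbook applying the role targets the EC2 instances by inventory group; the group names `master` and `workers` are assumptions here:

```yaml
# cluster.yml -- applies cluster_setup to both inventory groups
- hosts: master
  roles:
    - cluster_setup

- hosts: workers
  roles:
    - cluster_setup
```

It is run with `ansible-playbook cluster.yml`, using an inventory that lists the public IPs of the three instances under the two groups.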

Now we verify the setup by logging into the master node and running: kubectl get nodes

Finally!!! Our cluster is set up.

Refer to the code below for any confusion.

https://github.com/jayesh49/Task-19.git
