Multi-Cloud Setup of Kubernetes

What is Kubernetes?

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes provides an open-source API that controls how and where those containers will run.

Kubernetes orchestrates clusters of virtual machines and schedules containers to run on those virtual machines based on their available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit of Kubernetes, and those pods scale to your desired state.

Kubernetes also automatically manages service discovery, incorporates load balancing, tracks resource allocation and scales based on computing utilization. And, it checks the health of individual resources and enables apps to self-heal by automatically restarting or replicating containers.
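As a concrete illustration of this desired-state model, a minimal Deployment manifest (the names and image below are just examples, not part of the setup in this article) asks Kubernetes to keep three replicas of a pod running; if a container dies, Kubernetes replaces it to restore that state:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web          # illustrative name
spec:
  replicas: 3             # desired state: keep 3 pods running
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: httpd    # any container image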

Kubernetes uses Docker for creating containers.

What is Docker?

To know the ground-level details about containers and Docker, see: “Containers: The actual mechanism behind the technology and why Kubernetes deprecated Docker” by Gursimar Singh on Medium (May 2021).

What is a Multi-Node cluster in Kubernetes?

A multi-node cluster in Kubernetes is a setup with multiple nodes, one of which is the master node while the rest are worker nodes.

AWS (Amazon Web Services)

One of its services is EC2 (Elastic Compute Cloud), which provides virtual machines with support for many major operating systems and resources such as RAM, CPU, and networking.

We will configure the Kubernetes cluster over EC2 instances.

GCP(Google Cloud Platform)

GCP distributes its resources across regions and zones. This distribution provides several benefits, including redundancy in case of failure and reduced latency by locating resources closer to clients. It also introduces some rules about how resources can be used together.

Microsoft Azure

Microsoft Azure is Microsoft’s public cloud platform; we will launch one of our worker nodes on it.

Ansible

Ansible can manage powerful automation tasks and adapts to different workflows and environments. At the same time, new users can pick it up quickly and become productive.
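To manage the machines in this task, Ansible only needs SSH access and an inventory file listing them. A minimal sketch (the IPs, users, and key path below are placeholders, not values from this setup):

# inventory.ini -- hypothetical inventory for this multi-cloud cluster
[master]
13.233.x.x ansible_user=ec2-user ansible_ssh_private_key_file=~/ansiblekey.pem

[workers]
# Azure VM
20.102.x.x ansible_user=azureuser
# GCP VM
34.69.x.x ansible_user=ubuntu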

Let’s jump into the task,

First let us launch an EC2 instance. We will be configuring it as our master node.

Below are the resources we need to create on AWS using Ansible:

  1. Create a VPC (Virtual Private Cloud).
  2. Create a subnet in that VPC.
  3. Create an internet gateway.
  4. Create a routing table.
  5. Create a security group.
  6. Launch EC2 instances in that subnet of the respective VPC.

VPC Architecture

Creating VPC

- name: VPC for EC2
  ec2_vpc_net:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    name: "{{ vpc_name }}"
    cidr_block: "{{ vpcCidrBlock }}"
    region: "{{ region }}"
    # enable dns support
    dns_support: yes
    # enable dns hostnames
    dns_hostnames: yes
    tenancy: default
    state: "{{ state }}"
  register: ec2_vpc_net_result

Creating a subnet in the VPC

- name: Subnet for VPC
  ec2_vpc_subnet:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    az: "{{ zone }}" # az is the availability zone
    state: "{{ state }}"
    cidr: "{{ subNetCidrBlock }}"
    # enable public IP assignment
    map_public: yes
    resource_tags:
      Name: "{{ subnet_name }}"
  register: subnet_result

Creating internet gateway

# create an internet gateway for the vpc
- name: create ec2 vpc internet gateway
  ec2_vpc_igw:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    tags:
      Name: "{{ igw_name }}"
  register: igw_result

Creating routing table

- name: Creating routing table for EC2 VPC Public Subnet
  ec2_vpc_route_table:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    tags:
      Name: "{{ route_table_name }}"
    subnets: [ "{{ subnet_result.subnet.id }}" ]
    # create routes
    routes:
      - dest: "{{ destinationCidrBlock }}"
        gateway_id: "{{ igw_result.gateway_id }}"
  register: public_route_table

Creating security group

- name: Creating security group
  ec2_group:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    vpc_id: "{{ ec2_vpc_net_result.vpc.id }}"
    region: "{{ region }}"
    state: "{{ state }}"
    name: "{{ security_group_name }}"
    description: "{{ security_group_name }}"
    tags:
      Name: "{{ security_group_name }}"
    rules:
      - proto: all
        cidr_ip: "{{ port22CidrBlock }}"
        rule_desc: allow all traffic
  register: security_group_results

Launching EC2-instance

- name: "Provisioning OS on AWS using Ansible"
ec2:
key_name: "ansiblekey"
instance_type: "t2.micro"
image: "ami-08e0ca9924195beba"
wait: yes
count: 1
vpc_subnet_id: "{{ subnet_result.subnet.id }}"
assign_public_ip: yes
region: "ap-south-1"
state: present
group_id: "{{ security_group_results.group_id }}"
aws_access_key: "{{aws_access_key}}"
aws_secret_key: "{{aws_secret_key}}"
instance_tags:
Name: "{{ item }}"
loop: "{{ OS_Names }}"
  • Vars file of the EC2 playbook:

---
# vars file for EC2-launch
image: "ami-089c6f2e3866f0f14"
instance_type: "t2.micro"
region: "us-east-2"
key: testingkey
vpc_subnet_id: "subnet-2321516f"
security_group_id: "sg-07a58bacace819405"
OS_Names:
  - "K8S_Master"
aws_access_key: 'xxxxxxxxxxxxxx'
aws_secret_key: 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
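Assuming the tasks and vars above are saved as a playbook (the file name here is hypothetical), we run it with:

$ ansible-playbook k8s-master-ec2.yml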

We can confirm that the instance has launched after running the playbook by checking the AWS console.

Now let us configure it as our master node for the Kubernetes cluster.

Steps to be performed on all the nodes.

1. First, install Docker and start its services. Setting up Kubernetes requires Docker to use the systemd cgroup driver, which is not Docker's default.

2. Configure Docker to use the systemd driver (via /etc/docker/daemon.json).

3. Configure the Kubernetes repository for yum.

4. Install kubectl, kubelet, and kubeadm.

Kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

Kubelet: An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers that were not created by Kubernetes.

Kubeadm: A Kubernetes cluster management tool. It performs the actions necessary to create a Kubernetes cluster, and it is also useful for upgrading the cluster, joining multiple nodes, and managing Kubernetes certificates and external authentication for the cluster.

  • Now we can start the kubelet service
  • You also need to install iproute-tc to maintain the network inside the Kubernetes cluster

iproute-tc: The Traffic Control (tc) utility manages queueing disciplines, their classes, and attached filters and actions. It is the standard tool for configuring QoS in Linux; simply put, it manages the network traffic in the cluster.

- name: Install Docker
  package:
    name: "docker"
    state: present

- name: Changing the Docker driver
  copy:
    src: daemon.json
    dest: "/etc/docker/daemon.json"

- name: Starting Docker services
  service:
    name: docker
    state: restarted
    enabled: yes

- name: Configuring Kubernetes repository
  copy:
    dest: "/etc/yum.repos.d/kubernetes.repo"
    src: kubernetes.repo

- name: Installing the k8s applications
  command: "yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes"

- name: Starting kubelet services
  service:
    name: "kubelet"
    state: restarted

- name: Installing iproute-tc
  package:
    name: 'iproute-tc'
    state: present

- name: Pulling images
  shell: kubeadm config images pull

- name: Configuring network
  shell: "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
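The playbook above copies two local files whose contents are not shown. The snippets below are typical examples rather than the exact files used here: a daemon.json that switches Docker to the systemd cgroup driver, and a kubernetes.repo pointing at the standard Kubernetes yum repository of the time. Note how its exclude line pairs with the --disableexcludes=kubernetes flag in the install task.

daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl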

The above steps are common to the master and worker nodes; the differences appear as you move forward.

Setting up the Master Node

  • Configure the Kubernetes admin file
  • Start the kubeadm init service
  • kubeadm generates a token, which helps the worker nodes connect with the master node
  • Configure Flannel with Kubernetes

Flannel is a virtual networking layer designed specifically for containers. It keeps the pods and nodes connected to the master node, and it is configured automatically on every worker node.

- name: Initializing kubeadm service
  command: kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
  ignore_errors: true

- name: Creating .kube directory
  file:
    path: ~/.kube
    state: directory
    mode: 0755

- name: Link the admin.conf to ~/.kube/config
  file:
    src: /etc/kubernetes/admin.conf
    dest: ~/.kube/config
    state: link
    mode: 0644

- name: Generating a token
  command: kubeadm token create --print-join-command
  register: token

- name: Set the kubeadm join command globally
  set_fact:
    kubernetes_join_command: >
      {{ token.stdout }}
  when: token.stdout is defined
  delegate_to: "{{ item }}"
  delegate_facts: true
  with_items: "{{ groups['all'] }}"

- name: Transferring network file
  copy:
    src: kube-flannel.yml
    dest: /root/kube-flannel.yml

- name: Creating an overlay network to connect worker nodes
  command: kubectl apply -f /root/kube-flannel.yml

That’s it, we are done with setting up the master node.

Now, let us set up the worker nodes and connect them to complete the multi-cloud cluster.

First let us set up a virtual instance on Microsoft Azure,

  • Here, first we need to create a resource group
  • Then we need to create a virtual network
  • After that we need to add a subnet
  • Then we create a public IP address
  • We will enable SSH in order to configure the VM with Ansible later on
  • We need to create a virtual network interface card
  • Finally, we create the VM
# This playbook creates an Azure VM with a public IP, opens port 22 for SSH,
# and adds an SSH public key to the VM.
# Change the variables below to customize your VM deployment.
- name: Create Azure VM
  hosts: localhost
  connection: local
  vars:
    resource_group: "{{ resource_group_name }}"
    vm_name: testvm
    location: eastus
    ssh_key: "<KEY>"
  tasks:
    - name: Create a resource group
      azure_rm_resourcegroup:
        name: "{{ resource_group }}"
        location: "{{ location }}"

    - name: Create virtual network
      azure_rm_virtualnetwork:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        address_prefixes: "10.0.0.0/16"

    - name: Add subnet
      azure_rm_subnet:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        address_prefix: "10.0.1.0/24"
        virtual_network: "{{ vm_name }}"

    - name: Create public IP address
      azure_rm_publicipaddress:
        resource_group: "{{ resource_group }}"
        allocation_method: Static
        name: "{{ vm_name }}"

    - name: Create Network Security Group that allows SSH
      azure_rm_securitygroup:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        rules:
          - name: SSH
            protocol: Tcp
            destination_port_range: 22
            access: Allow
            priority: 1001
            direction: Inbound

    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        virtual_network: "{{ vm_name }}"
        subnet: "{{ vm_name }}"
        public_ip_name: "{{ vm_name }}"
        security_group: "{{ vm_name }}"

    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group }}"
        name: "{{ vm_name }}"
        vm_size: Standard_DS1_v2
        admin_username: azureuser
        ssh_password_enabled: false
        ssh_public_keys:
          - path: /home/azureuser/.ssh/authorized_keys
            key_data: "{{ ssh_key }}"
        network_interfaces: "{{ vm_name }}"
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.5'
          version: latest

This will launch a CentOS VM.

Now let us launch a VM over GCP,

  • First we need to create a compute disk
  • Then we need to create an address (this is for the IP address)
  • Finally we configure and create the instance
- name: Create an instance
  hosts: localhost
  gather_facts: no
  vars:
    gcp_project: my-project
    gcp_cred_kind: serviceaccount
    gcp_cred_file: /home/my_account.json
    zone: "us-central1-a"
    region: "us-central1"
  tasks:
    - name: Create a disk
      gcp_compute_disk:
        name: 'disk-instance'
        size_gb: 50
        source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: disk

    - name: Create an address
      gcp_compute_address:
        name: 'address-instance'
        region: "{{ region }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
        state: present
      register: address

    - name: Create an instance
      gcp_compute_instance:
        state: present
        name: test-vm
        machine_type: n1-standard-1
        disks:
          - auto_delete: true
            boot: true
            source: "{{ disk }}"
        network_interfaces:
          - network: null # use the default network
            access_configs:
              - name: 'External NAT'
                nat_ip: "{{ address }}"
                type: 'ONE_TO_ONE_NAT'
        zone: "{{ zone }}"
        project: "{{ gcp_project }}"
        auth_kind: "{{ gcp_cred_kind }}"
        service_account_file: "{{ gcp_cred_file }}"
        scopes:
          - https://www.googleapis.com/auth/compute
      register: instance

    - name: Wait for SSH to come up
      wait_for: host={{ address.address }} port=22 delay=10 timeout=60

    - name: Add host to groupname
      add_host: hostname={{ address.address }} groupname=new_instances

We are done with launching the instances. Now we need to install Docker and Kubernetes on them as described above.

Setting up as Worker Nodes

- name: Connecting to the master node
  shell: >
    {{ kubernetes_join_command }}
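For reference, the kubernetes_join_command fact set on the master node expands to a command of the following form (the IP address, token, and hash are placeholders that kubeadm generates):

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>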

That’s it, we are done setting up the cluster.

To check that everything is working fine,

  • Now, let’s check the status of the cluster by logging in to our EC2 master node.
  • The kubelet service is active and running. ($ systemctl status kubelet)
  • Docker is also active and running. ($ systemctl status docker)

To check the status of the nodes,

$ kubectl get nodes

We can create an application of our choice and deploy it in the Kubernetes cluster.

# deploy the app
$ kubectl create deployment myapp --image=vimal13/apache-webserver-php

# expose the deployment to the outside world
$ kubectl expose deployment myapp --port=80 --type=NodePort

# we will get a NodePort through which we can connect to the application
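To find the port that the NodePort service was assigned and reach the application, we can use the public IP of any node in the cluster (the service name myapp matches the deployment created above):

$ kubectl get svc myapp
$ curl http://<node-public-ip>:<node-port>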

We can also perform the above steps in the GUI console in a similar fashion for a better understanding of the process.
