Docker k8s With Ansible

The document discusses using Ansible to automate the installation of Docker and Kubernetes on Ubuntu VMs. It provides YAML playbooks to install Docker, create users, install Kubernetes components, initialize the Kubernetes master, and join worker nodes to the cluster.

Uploaded by Safwen Soker

Install Docker and Kubernetes on Ubuntu with Ansible

Automating the deployment of a Kubernetes cluster on Ubuntu VMs

I. Install ansible

Step 1 — Installing Ansible

From your control node, run the following commands to add the official project's PPA
(personal package archive) to your system's list of sources and install Ansible:

Commands
sudo apt-add-repository ppa:ansible/ansible
sudo apt update
sudo apt install ansible

II. Files preparation


Hosts

In Ansible, the hosts file is the inventory: a configuration file on the control machine that lists the servers Ansible will manage, grouped under bracketed group names.

Example:

[masters]
master ansible_connection=local                          # the local control node

[workers]
worker1 ansible_host=10.25.126.146 ansible_user=root     # external hosts
worker2 ansible_host=10.25.126.147 ansible_user=root
worker3 ansible_host=10.25.126.148 ansible_user=root
worker4 ansible_host=10.25.126.149 ansible_user=root
worker5 ansible_host=10.25.126.150 ansible_user=root
worker6 ansible_host=10.25.126.151 ansible_user=root
worker7 ansible_host=10.25.126.152 ansible_user=root
worker8 ansible_host=10.25.126.153 ansible_user=root

NB: Make sure that you have SSH access to the servers from your control node (you can verify it with: ansible -i hosts all -m ping).

NB: You don't have to install Ansible on every host; it is only needed on the control node.
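To see the structure concretely, this small sketch writes a trimmed copy of the example inventory to a temporary file and extracts the group headers; the bracketed names (`masters`, `workers`) are what playbooks target via `hosts:`. The file path and the shortened host list here are illustrative:

```shell
# Write a trimmed copy of the example inventory to a temp location.
cat > /tmp/hosts.example <<'EOF'
[masters]
master ansible_connection=local

[workers]
worker1 ansible_host=10.25.126.146 ansible_user=root
worker2 ansible_host=10.25.126.147 ansible_user=root
EOF

# Group headers are the bracketed section names:
grep -o '^\[[a-z]*\]' /tmp/hosts.example
# prints:
# [masters]
# [workers]
```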

Docker.yml

We will create an Ansible YAML playbook to install Docker.

First create a YAML file named docker.yml, then put this configuration in it:

---
- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1

  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true

    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true

    - name: Add Docker GPG apt key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu focal stable
        state: present

    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true

    - name: Install Docker module for Python
      pip:
        name: docker

    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull

    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}
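The last task loops with `with_sequence`, so with `container_count: 4` the container names become the value of `default_container_name` followed by the loop item, i.e. docker1 through docker4. A quick shell emulation of that naming:

```shell
# Emulate "{{ default_container_name }}{{ item }}" with with_sequence count=4
default_container_name=docker
for item in $(seq 1 4); do
  echo "${default_container_name}${item}"
done
# prints docker1, docker2, docker3, docker4 (one per line)
```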

Users.yml

Creating a Kubernetes user with an Ansible playbook

Our first task in setting up the Kubernetes cluster is to create a new user on each node. This will be a non-root user that has sudo privileges. It's a good idea not to use the root account for day-to-day operations, of course. We can use Ansible to set the account up on all nodes, quickly and easily. First, create a file named users.yml in the working directory:

- hosts: 'workers, masters'
  become: yes

  tasks:
    - name: create the kube user account
      user: name=kube append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'kube' to use sudo without needing a password
      lineinfile:
        dest: /etc/sudoers
        line: 'kube ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up authorized keys for the kube user
      authorized_key: user=kube key="{{ item }}"
      with_file:
        - ~/.ssh/id_rsa.pub
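The `authorized_key` task reads `~/.ssh/id_rsa.pub` on the control node, so a key pair must exist there before running the playbook. If you don't have one yet, something like the following generates it (shown here writing into a temporary directory rather than `~/.ssh`, so the paths are illustrative):

```shell
# Generate an RSA key pair with no passphrase (-N '') into a temp dir.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$keydir/id_rsa" -q
# The public half (id_rsa.pub) is what the playbook pushes to the kube user:
ls "$keydir"
```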

Kubernetes.yml

Install Kubernetes with an Ansible playbook

Now we're getting to the fun part! With our user created, we can move on to installing Kubernetes. Let's dive straight in and have a look at the playbook, which I have named kubernetes.yml:

---
- hosts: "masters, workers"
  remote_user: ubuntu
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Create containerd config file
      file:
        path: "/etc/modules-load.d/containerd.conf"
        state: "touch"

    - name: Add conf for containerd
      blockinfile:
        path: "/etc/modules-load.d/containerd.conf"
        block: |
          overlay
          br_netfilter

    - name: modprobe
      shell: |
        sudo modprobe overlay
        sudo modprobe br_netfilter

    - name: Set system configurations for Kubernetes networking
      file:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        state: "touch"

    - name: Add conf for Kubernetes networking
      blockinfile:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        block: |
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply new settings
      command: sudo sysctl --system

    - name: install containerd
      shell: |
        sudo apt-get update && sudo apt-get install -y containerd
        sudo mkdir -p /etc/containerd
        sudo containerd config default | sudo tee /etc/containerd/config.toml
        sudo systemctl restart containerd

    - name: disable swap
      shell: |
        sudo swapoff -a
        sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    - name: install and configure dependencies
      shell: |
        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    - name: Create kubernetes repo file
      file:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        state: "touch"

    - name: Add K8s source
      blockinfile:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        block: |
          deb https://apt.kubernetes.io/ kubernetes-xenial main

    - name: install kubernetes
      shell: |
        sudo apt-get update
        sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
        sudo apt-mark hold kubelet kubeadm kubectl
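The "disable swap" task does two things: it turns swap off for the running system (swapoff -a) and comments out swap entries in /etc/fstab so the change survives reboots. The effect of the sed expression can be demonstrated on a sample fstab copy (the sample file contents below are illustrative):

```shell
# Build a sample fstab and apply the same sed expression used in the playbook.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out every line containing " swap " by prefixing it with '#':
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample

cat /tmp/fstab.sample
# prints:
# UUID=abcd-1234 / ext4 defaults 0 1
# #/swapfile none swap sw 0 0
```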

Masters.yml

Creating the Kubernetes cluster master node with an Ansible playbook

Now we should have containerd and Kubernetes installed on all our nodes. The next step is to create the cluster on the master node. This is the masters.yml file, which will initialise the Kubernetes cluster on the master node and set up the pod network using Calico:

- hosts: masters
  become: yes

  tasks:
    - name: initialize the cluster
      # Log output to cluster_initialized.txt so the 'creates' guard makes the task idempotent
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: kube
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to the kube user's kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/kube/.kube/config
        remote_src: yes
        owner: kube

    - name: install Pod network
      become: yes
      become_user: kube
      shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      args:
        chdir: $HOME

    - name: Get the token for joining the worker nodes
      become: yes
      become_user: kube
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - debug:
        msg: "{{ kubernetes_join_command.stdout }}"

    - name: Copy join command to a local file
      become: yes
      local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
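The last task writes the captured join command to /tmp/kubernetes_join_command on the control node. The result is a one-line file shaped roughly like the sketch below; the address, token, and hash here are placeholders, not real values:

```shell
# Simulate the file the local_action task produces (placeholder values only;
# a real run writes the actual output of 'kubeadm token create --print-join-command').
printf '%s\n' 'kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>' > /tmp/kubernetes_join_command
chmod 0777 /tmp/kubernetes_join_command
cat /tmp/kubernetes_join_command
```

The workers.yml playbook then copies this file to each worker and executes it there.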

Workers.yml

Join worker nodes to the Kubernetes cluster using an Ansible playbook

Now that the Kubernetes cluster is initialised, the final step is to join our worker nodes to it. The final playbook, workers.yml, contains the following:

- hosts: workers
  become: yes
  gather_facts: yes

  tasks:
    - name: Copy the join command from the Ansible host to the worker nodes
      become: yes
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Join the worker nodes to the cluster
      become: yes
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not

III. Execute the YAML files

Run the Ansible command:

ansible-playbook -i hosts filename.yml

ansible-playbook: the command that runs an Ansible playbook

-i hosts: the file that carries the hosts (the inventory)

filename.yml: the playbook you want to run

NB: Make sure that there is no problem in the YAML file format (ansible-playbook --syntax-check -i hosts filename.yml catches most mistakes).

NB: Sometimes you have to run the command as root; just add sudo to the command: sudo ansible-playbook -i hosts filename.yml

NB: Run the playbooks in the order used in this documentation: docker.yml, users.yml, kubernetes.yml, masters.yml, workers.yml.

The playbooks will be available at git.pyxis.com.tn as soon as possible.

Thanks for your attention
