Bootstrap

The following lines document how to initialize a fresh cluster, either on a real cluster or using Vagrant. It assumes that kubespray is cloned into this folder; the clone is excluded via .gitignore, and all files live in this folder.

Pass the kubespray tag as parameter:

./init.sh "release-2.14"

See kubespray.io for detailed information about kubespray, though it seems to be a little bit outdated.

Vagrant

cd kubespray
vagrant up
# up and able to ssh
vagrant ssh k8s-1

Prod

Prepare server:

  • deactivate swap!
  • create a user moritz with the sudoers entry moritz ALL=(ALL) NOPASSWD:ALL
ssh centos@<ip>
# auth via pw
sudo su - root
adduser moritz
visudo # add as sudo user
su - moritz
sudo yum -y install vim python3
ssh-keygen
vim .ssh/authorized_keys # paste public key
chmod 644 .ssh/authorized_keys
# check whether login works with ssh key
sudo vim /etc/ssh/sshd_config # remove pw auth & root login
sudo yum upgrade -y && sudo reboot
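
For the sshd_config step, these are the relevant directives; option names are assumed from the standard sshd_config format, so verify them against your sshd version before restarting:

```
PasswordAuthentication no
PermitRootLogin no
```

Keep the existing SSH session open while testing key-based login in a second session, so a mistake does not lock you out.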

Install Kubernetes:

. ./init.sh
# follow the instructions from the output, something like:
cd kubespray
ansible-playbook -i inventory/prod/inventory.ini cluster.yml

And get credentials:

ssh <ip>
sudo su - root
cd
cp -r .kube /home/moritz/
chown -R moritz. /home/moritz/.kube
#ctrl + d
kubectl get ns # test connection
#ctrl + d
scp haumdaucher.de:/home/moritz/.kube/config .kube/config
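
Depending on the setup, the copied config may point the server field at an address that is only reachable from inside the cluster. A hypothetical fragment of .kube/config after adjusting it (hostname, port, and cluster name are assumptions, not taken from the actual config):

```yaml
clusters:
- cluster:
    certificate-authority-data: <base64>
    server: https://haumdaucher.de:6443   # must be reachable from your machine
  name: cluster.local
```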

Continue in the k8s-directory.

Upgrade cluster

Check the current default value of kube_version in the cloned repository.

cd kubespray
ansible-playbook -i inventory/prod/inventory.ini -e kube_version=v1.19.4 -e upgrade_cluster_setup=true cluster.yml
# or just upgrade to the default version of the checked-out branch
ansible-playbook -i inventory/prod/inventory.ini -e upgrade_cluster_setup=true cluster.yml
# upgrade to a specific calico version (did not trigger / failed)
ansible-playbook -i inventory/prod/inventory.ini -e upgrade_cluster_setup=true -e calico_version=v3.15.2 cluster.yml --tags=network

History:

  • 2020-04-18 kube_version=v1.16.8 kubespray_branch=release-2.12

Add node

See the kubespray documentation on adding nodes.

Note: This was more or less a trial-and-error approach. Running the different playbooks repeatedly got it right at some point.

ansible-playbook -i inventory/prod/inventory.ini --limit=ns3088070.ip-37-59-40.eu,ns3100058.ip-37-59-61.eu scale.yml
ansible-playbook -i inventory/prod/inventory.ini --limit=etcd,kube-master -e ignore_assert_errors=yes cluster.yml
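
For scale.yml to pick up a new node, it first has to appear in the inventory. A hypothetical inventory.ini fragment (hostname taken from the commands above, group names follow kubespray's sample inventory; the actual file may differ):

```ini
[all]
ns3100058.ip-37-59-61.eu ansible_host=<ip>

[kube-node]
ns3100058.ip-37-59-61.eu
```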

This runs everything and is kind of idempotent:

ansible-playbook -i inventory/prod/inventory.ini cluster.yml