k8s
This folder holds all the services required for my private infrastructure. The following constraints apply:
- Order of implementation is top down.
- Every namespace has a subfolder within this subdirectory.
- helm3 is used for all chart installations.
Operations
Clean up pods in Error state:
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
Redeploy a deployment:
DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$( date +%s )\"}}}}}"
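To verify that the forced rollout went through, the rollout status can be checked (same variables as above):
kubectl rollout status deployment/$DEPLOYMENT -n $NAMESPACE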
Deployment (non-persistent stuff)
ingress-nginx
Apply with helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install --create-namespace ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f ingress-nginx/ingress-nginx.yaml
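A quick sanity check after the install (the controller service name assumes the default naming of this release):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller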
cert-manager
Apply with helm. See the chart:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install --create-namespace cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# apply the two issuer classes
kubectl apply -f cert-manager/staging-issuer.yaml
kubectl apply -f cert-manager/production-issuer.yaml
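For reference, a minimal sketch of what such an ACME issuer looks like; the staging-issuer.yaml and production-issuer.yaml in this repo are authoritative, and the name and e-mail below are placeholders:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging        # placeholder name
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx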
To test all this you may use the kuard demo project:
$ kubectl apply -f kuard
# checkout: https://kuard.haumdaucher.de
$ kubectl delete -f kuard
openebs
Update with the following command. The chart can be found here.
Pitfall:
- On a fresh installation: activate ndmOperator so that the CRDs are installed correctly. It may be deactivated afterwards.
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm upgrade --install --create-namespace -f openebs/openebs.yml openebs --namespace openebs openebs/openebs
k apply -f openebs/storageclass.yml
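For reference, a sketch of a typical openebs local hostpath StorageClass; the real definition lives in openebs/storageclass.yml, and the name and BasePath below are assumptions:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath           # assumed name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local  # assumed path
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer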
minio
See chart on GitHub.
helm repo add minio https://helm.min.io/
helm repo update
helm upgrade --install -f minio/minio.secret.yaml --namespace minio --create-namespace minio minio/minio
# uninstall again:
helm delete minio -n minio
kubectl delete ns minio
velero
Backup tool. See chart README.
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace --namespace velero -f ./velero/velero.secret.yaml velero vmware-tanzu/velero
kubectl create secret generic rclone-config --from-file=./velero/rclone.secret
kubectl apply -f velero/dropbox_sync.yml
# uninstall again:
helm delete velero -n velero
kubectl delete ns velero
A manual backup may be created by executing the following command. Note: keep the backed-up namespaces in sync with the configuration in the helm chart!
DATE=$( date +%Y%m%d )
velero backup create $DATE --include-namespaces datalab,development,nextcloud,tt-rss,zebrium,mailu --wait
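The result of a backup run can then be inspected with the velero CLI:
velero backup describe $DATE --details
velero backup logs $DATE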
Add private docker registry
TODO: chart no longer exists. Check how to replace this someday.
# create the htpasswd entry and put it into the registry helm chart values
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# install the chart:
helm upgrade --install --create-namespace docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml
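The htpasswd line generated above goes into development/registry.secret.yaml; a minimal sketch, assuming the chart's secrets.htpasswd value:
secrets:
  htpasswd: |
    moritz:$2y$05$...              # paste the output of the htpasswd command here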
creating docker-pull-secret
Create the credentials secret according to the documentation:
namespaces="datalab web"
for i in $( echo $namespaces ) ; do
kubectl create secret docker-registry registry-haumdaucher-de \
-n $i \
--docker-server=registry.haumdaucher.de \
--docker-username=moritz \
--docker-password='xxx' \
--docker-email=moritz@moritzgraf.de \
--dry-run -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply
for i in $( echo $namespaces ) ; do
kubectl apply -f ${i}/docker-pull.yaml.secret
done
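Workloads in those namespaces then reference the secret via imagePullSecrets; a hypothetical pod spec for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: example                     # hypothetical pod
  namespace: datalab
spec:
  imagePullSecrets:
    - name: registry-haumdaucher-de
  containers:
    - name: app
      image: registry.haumdaucher.de/some/image:latest   # hypothetical image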
networking with calico
Install calicoctl in the cluster:
kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml
Then you may send commands like:
kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu
Or on the server directly:
sudo -E /usr/local/bin/calicoctl node checksystem
metrics
See this documentation.
kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: felix-metrics-svc
namespace: kube-system
spec:
selector:
k8s-app: calico-node
ports:
- port: 9091
targetPort: 9091
EOF
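To check that felix actually answers on that port, a quick port-forward works:
kubectl port-forward -n kube-system svc/felix-metrics-svc 9091:9091
curl -s localhost:9091/metrics | head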
metrics-server
Getting resources (already done):
cd kube-system
curl -L -o metrics-server.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# add parameters to deployment:
# - --kubelet-preferred-address-types=InternalIP
# - --v=2
# - --kubelet-insecure-tls
Implement metrics-server:
kubectl apply -n kube-system -f kube-system/metrics-server.yml
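Once the deployment is up, resource metrics should be available:
kubectl top nodes
kubectl top pods -A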
ameliegraf
Note: Not yet finished. Switched back to portfolio addresses.
The website redirection for ameliegraf.de.
k create ns ameliegraf
k apply -f ameliegraf/ameliegraf.yml
Tekton
Implementation as described in the docs.
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://github.com/tektoncd/dashboard/releases/latest/download/tekton-dashboard-release.yaml
#basic-auth, see https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
htpasswd -c ./tekton-pipelines/auth tekton
kubectl delete secret -n tekton-pipelines basic-auth
kubectl create secret -n tekton-pipelines generic basic-auth --from-file=tekton-pipelines/auth
kubectl apply -f tekton-pipelines/tekton-ingress.yml
rm tekton-pipelines/auth
Install client side tools:
brew tap tektoncd/tools
brew install tektoncd/tools/tektoncd-cli
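A quick check of the client and the server-side components:
tkn version
kubectl get pods -n tekton-pipelines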
Deployment (persistent stuff)
From here on, everything should be covered by the backup; creating these objects should already be handled by the velero restore.
rstudio
Currently only for one user:
kubectl apply -f datalab/rstudio.yaml
tt-rss
Includes the persistent data of the mariadb database tt-rss.
helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml
kubectl apply -f tt-rss/
monitoring
The prometheus-operator, now called kube-prometheus-stack, is used.
kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-operator.secret.yml
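Grafana can then be reached via port-forward (service name assumes the default naming for this release):
kubectl port-forward -n monitoring svc/prometheus-operator-grafana 3000:80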
gitea
Once my PRs have been accepted, this is no longer necessary:
git clone git@github.com:iptizer/gitea-chart.git
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml
# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml
backup & restore
See the backup cronjob in the /backup/ folder.
For backup & restore see gitea documentation.
Download the gitea-dump locally and proceed with the following commands:
❯ mkdir gitea_restore
❯ mv gitea-dump-1587901016.zip gitea_restore
❯ cd gitea_restore
❯ unzip gitea-dump-1587901016.zip
Archive: gitea-dump-1587901016.zip
inflating: gitea-repo.zip
creating: custom/
[...]
The SQL dump may be imported via phpmyadmin.
Copy to remote pod:
kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/
And finally unzip inside shell on pod:
cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/
Then login to git.moritzgraf.de and proceed with default values, or adjust them.
nextcloud
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud --version 2.0.2 -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml
backup & restore
#TODO with Velero
Jupyter
Using the project zero-to-jupyterhub. The helm chart can be found here.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm upgrade --cleanup-on-fail --install jupyter jupyterhub/jupyterhub --namespace datalab --values datalab/jupyter-values.yaml
mailu
Using the mailu helm chart.
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
# from the helm repo
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# from a local checkout of the chart
helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
# render the templates locally for inspection
helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
troubleshooting
Test imap from console:
openssl s_client -crlf -connect moritzgraf.de:993
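Once connected, the login can be verified with plain IMAP commands (placeholder credentials):
a1 LOGIN "user@moritzgraf.de" "password"
a2 LIST "" "*"
a3 LOGOUT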
migrate
# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --create-namespace --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# apply mailu and scale all to 0
kc mailu # shell alias: switch the current namespace to mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:man ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy
Checks:
- browser mail.moritzgraf.de & login
- browser mail.moritzgraf.de/admin
Add mopbot & corona & corona-api
kubectl apply -f datalab/
zebrium
zebrium.io has "AI powered" troubleshooting. This is a test use case for my cluster. It was installed using the following commands:
kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy
Benchmark with dbench
Usually not deployed, but dbench may be executed to check the speed of the local filesystem:
k create ns dbench
k apply -f dbench/
k delete -f dbench
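Results end up in the pod log (assuming the manifest creates a Job named dbench):
kubectl logs -n dbench -f job/dbench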
Web
kubectl create ns web
kubectl apply -f web/