# k8s
This folder holds all the services required for my private infrastructure. The following constraints apply:
* Order of implementation is top-down.
* Every namespace has a subfolder within this directory.
* Charts are managed with Helm 3.
# Operations
Clean up pods in `Error` state:
```sh
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
```
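The same cleanup across all namespaces, as a sketch (the `awk` columns assume default `kubectl get pods -A` output):
```sh
kubectl get pods -A --no-headers | awk '$4 == "Error" {print $1, $2}' \
  | while read ns pod ; do kubectl delete pod -n "$ns" "$pod" ; done
```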
Force a redeployment by patching a label into the pod template:
```sh
DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$( date +%s )\"}}}}}"
```
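On kubectl 1.15 and newer the same effect is available as a built-in that triggers an identical rolling update:
```sh
kubectl rollout restart deployment $DEPLOYMENT -n $NAMESPACE
```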
# Deployment
## namespaces
```sh
namespaces="flux cert-manager nginx-ingress infrapuzzle kuard auth nextcloud datalab web development tt-rss backup monitoring nextcloud mailu"
for i in $( echo $NAMESPACES ) ; do
k create ns $i
done
```
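A variant that stays idempotent on re-runs (assumes a kubectl version supporting `--dry-run=client`):
```sh
for i in $namespaces ; do
  kubectl create ns $i --dry-run=client -o yaml | kubectl apply -f -
done
```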
## helm repositories
```sh
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add k8s-land https://charts.k8s.land
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
```
## [ingress-controller](https://github.com/helm/charts/tree/master/stable/nginx-ingress)
Apply with helm-operator:
```bash
helm upgrade nginx-ingress stable/nginx-ingress -n nginx-ingress -f nginx-ingress/nginx-ingress.yaml
```
## [cert-manager](https://cert-manager.io/docs/tutorials/acme/ingress/)
Apply with helm-operator:
```bash
helm upgrade cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# probably not even needed:
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/master/deploy/manifests/00-crds.yaml
# this is required:
kubectl apply -f cert-manager/staging-issuer.yaml
kubectl apply -f cert-manager/production-issuer.yaml
```
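To check that the issuers became ready (covers both namespaced and cluster-wide issuers, depending on what the two manifests define):
```sh
kubectl get issuers -A
kubectl get clusterissuers
```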
To test all this you may use the kuard demo project:
```sh
kubectl apply -f kuard
# check out: https://kuard.haumdaucher.de
kubectl delete -f kuard
```
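If the kuard manifests request a certificate through cert-manager annotations, its status can be inspected with (assuming everything lands in the `kuard` namespace):
```sh
kubectl get certificate -n kuard
kubectl describe certificate -n kuard
```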
## Add private docker registry
```sh
# generate a bcrypt htpasswd entry and put it into the registry chart's htpasswd value
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# install/upgrade the registry chart with those values
helm upgrade --install docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
```
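To verify the generated credentials against the running registry (assuming it is exposed at registry.haumdaucher.de, as in the pull-secret below):
```sh
docker login registry.haumdaucher.de -u $USER -p $PASSWORD
```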
### creating docker-pull-secret
Create credentials secret [according to docu](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line):
```sh
namespaces="datalab"
for i in $( echo $namespaces ) ; do
kubectl create secret docker-registry registry-haumdaucher-de \
-n $i \
--docker-server=registry.haumdaucher.de \
--docker-username=moritz \
--docker-password='xxx' \
--docker-email=moritz@moritzgraf.de \
--dry-run -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply
for i in $( echo $namespaces ) ; do
kubectl apply -f ${i}/docker-pull.yaml.secret
done
```
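A minimal sketch of how a pod references the secret; the image path is a placeholder:
```sh
kubectl apply -n datalab -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pull-test
spec:
  imagePullSecrets:
    - name: registry-haumdaucher-de
  containers:
    - name: pull-test
      # placeholder image; replace with a real image from the registry
      image: registry.haumdaucher.de/some/image:latest
EOF
```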
## openebs
Formerly installed using the helm-operator; it may now be updated using the following command:
```sh
helm upgrade --install openebs stable/openebs --namespace openebs -f openebs/openebs.yml
```
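To confirm the openebs control plane is up and its storage classes exist:
```sh
kubectl get pods -n openebs
kubectl get storageclass
```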
## networking with calico
Install calicoctl in the cluster:
```sh
kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml
```
Then you can run commands like:
```sh
kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu
```
Or on the server directly:
```sh
sudo -E /usr/local/bin/calicoctl node checksystem
```
### metrics
See this [documentation](https://docs.projectcalico.org/maintenance/monitor-component-metrics).
```sh
kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  selector:
    k8s-app: calico-node
  ports:
    - port: 9091
      targetPort: 9091
EOF
```
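To verify Felix actually exposes metrics through the service defined above:
```sh
kubectl port-forward -n kube-system svc/felix-metrics-svc 9091:9091 &
curl -s localhost:9091/metrics | head
```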
## rstudio
Currently only for one user:
```sh
kubectl apply -f datalab/rstudio.yaml
```
## tt-rss
Includes *persistent data* in the mariadb database `tt-rss`.
```sh
helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml
kubectl apply -f tt-rss/
```
## monitoring
```sh
helm upgrade --install prometheus-operator stable/prometheus-operator -n monitoring -f monitoring/prometheus-operator.secret.yml
```
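To reach the bundled Grafana via port-forward, a sketch; the service name assumes the chart's default naming for a release called `prometheus-operator`:
```sh
kubectl port-forward -n monitoring svc/prometheus-operator-grafana 3000:80
# then open http://localhost:3000
```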
## gitea
Once my PRs have been accepted, this is no longer necessary:
```sh
git clone git@github.com:iptizer/gitea-chart.git
```
```sh
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml
# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml
```
### backup & restore
See the backup cronjob in the `/backup/` folder.
For backup & restore see [gitea documentation](https://docs.gitea.io/en-us/backup-and-restore/).
Download the `gitea-dump` locally and proceed with the following commands:
```sh
mkdir gitea_restore
mv gitea-dump-1587901016.zip gitea_restore
cd gitea_restore
unzip gitea-dump-1587901016.zip
# Archive:  gitea-dump-1587901016.zip
#   inflating: gitea-repo.zip
#    creating: custom/
# [...]
```
The SQL dump can be imported via phpMyAdmin.
Copy to remote pod:
```sh
kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/
```
And finally unzip inside a shell on the pod:
```sh
kubectl exec -it gitea-gitea-69cd9bc59b-q2b2f -- bash
cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/
```
Then log in to git.moritzgraf.de and proceed with the default values, or adjust them.
## nextcloud
```sh
helm upgrade --install nextcloud stable/nextcloud -n nextcloud -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml
```
### backup & restore
## mailu
Using the [mailu helm chart](https://github.com/Mailu/helm-charts/tree/master/mailu).
```sh
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
# from the chart repo
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# or from a local checkout
helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
# render only, for debugging
helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
```
### troubleshooting
Test IMAP from the console:
```sh
openssl s_client -crlf -connect moritzgraf.de:993
```
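The same style of check for SMTP submission with STARTTLS:
```sh
openssl s_client -starttls smtp -crlf -connect moritzgraf.de:587
```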
### migrate
```sh
# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# switch the current namespace to mailu (kc: personal alias) and scale all deployments to 0
kc mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:man ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy
```
Checks:
* open mail.moritzgraf.de in the browser & log in
* open mail.moritzgraf.de/admin and verify the admin UI
## minio
```sh
kubectl apply -f minio
```
## Add mopbot & corona & corona-api
```sh
kubectl apply -f datalab/
```
## zebrium
[zebrium.io](https://zebrium.io) offers "AI powered" troubleshooting. This is a test use-case for my cluster. It was installed using the following commands:
```sh
kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy
```
## Benchmark with dbench
Usually left disabled, but dbench may be executed to check the speed of the local filesystem:
```sh
k create ns dbench
k apply -f dbench/
# clean up when finished
k delete -f dbench/
```
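To read the benchmark results (assuming the manifests create a Job named `dbench`, as in the upstream example):
```sh
k logs -n dbench -f job/dbench
```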
## Web
```sh
kubectl apply -f web/
```