k8s
This folder holds all the services required for my private infrastructure. The following constraints apply:
- Order of implementation is top down.
- Every namespace has a subfolder within this subdirectory.
- helm3 is used for all chart installations.
Operations
Clean up pods in Error state:
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
Redeploy a deployment:
DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$( date +%s )\"}}}}}"
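With kubectl 1.15 or newer the same can be achieved with a rollout restart:
kubectl rollout restart deployment $DEPLOYMENT -n $NAMESPACE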
Deployment
namespaces
namespaces="flux cert-manager nginx-ingress infrapuzzle kuard auth nextcloud datalab web development tt-rss backup monitoring nextcloud mailu"
for i in $( echo $NAMESPACES ) ; do
k create ns $i
done
helm repositories
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add k8s-land https://charts.k8s.land
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
ingress-controller
Apply with helm-operator:
helm upgrade nginx-ingress stable/nginx-ingress -n nginx-ingress -f nginx-ingress/nginx-ingress.yaml
cert-manager
Apply with helm-operator:
helm upgrade cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# probably not even needed:
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/master/deploy/manifests/00-crds.yaml
# this is required:
$ kubectl apply -f cert-manager/staging-issuer.yaml
$ kubectl apply -f cert-manager/production-issuer.yaml
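A quick check that cert-manager accepted the issuers (assuming they are ClusterIssuers; use issuer -n <namespace> instead if they are namespaced):
# the READY column should turn True once the ACME account is registered
kubectl get clusterissuer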
To test all this you may use the kuard demo project:
$ kubectl apply -f kuard
# checkout: https://kuard.haumdaucher.de
$ kubectl delete -f kuard
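To confirm that the demo actually received a certificate (assuming the kuard manifests request one via cert-manager and live in the kuard namespace):
kubectl get certificate -n kuard
kubectl describe certificate -n kuard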
Add private docker registry
# create the htpasswd credentials and put them into the registry helm chart values
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
helm upgrade --install docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml
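For reference, the htpasswd output from the docker run above ends up in the chart values roughly like this (a sketch of development/registry.secret.yaml; the exact key names depend on the stable/docker-registry chart version):
# sketch only - replace the hash with the real htpasswd output
cat > development/registry.secret.yaml <<'EOF'
secrets:
  htpasswd: |
    moritz:$2y$05$...
EOF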
creating docker-pull-secret
Create the credentials secret according to the documentation:
namespaces="datalab"
for i in $( echo $namespaces ) ; do
kubectl create secret docker-registry registry-haumdaucher-de \
-n $i \
--docker-server=registry.haumdaucher.de \
--docker-username=moritz \
--docker-password='xxx' \
--docker-email=moritz@moritzgraf.de \
--dry-run -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply
for i in $( echo $namespaces ) ; do
kubectl apply -f ${i}/docker-pull.yaml.secret
done
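Optionally the pull secret can also be attached to each namespace's default service account, so pods do not have to list imagePullSecrets themselves:
for i in $( echo $namespaces ) ; do
  kubectl patch serviceaccount default -n $i \
    -p '{"imagePullSecrets": [{"name": "registry-haumdaucher-de"}]}'
done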
openebs
Formerly installed using the helm operator; it may now be updated using the following command:
helm upgrade --install openebs stable/openebs -n openebs -f openebs/openebs.yml
networking with calico
Install calicoctl in the cluster:
kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml
Then you may send commands like:
kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu
Or on the server directly:
sudo -E /usr/local/bin/calicoctl node checksystem
metrics
See the Calico documentation on monitoring component metrics.
kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  selector:
    k8s-app: calico-node
  ports:
  - port: 9091
    targetPort: 9091
EOF
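A quick smoke test that the endpoint answers (throwaway curl pod; the image is just an assumption, any image with curl works):
kubectl run curl-test --rm -i --restart=Never --image=curlimages/curl -- \
  curl -s felix-metrics-svc.kube-system:9091/metrics | head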
metrics-server
helm upgrade --install metrics-server -n kube-system stable/metrics-server
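Once the metrics-server pod is running, resource metrics should show up:
kubectl top nodes
kubectl top pods -A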
rstudio
Currently only for one user:
kubectl apply -f datalab/rstudio.yaml
tt-rss
Includes persistent data in the mariadb database tt-rss.
helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml
kubectl apply -f tt-rss/
monitoring
helm upgrade --install prometheus-operator stable/prometheus-operator -n monitoring -f monitoring/prometheus-operator.secret.yml
gitea
Once my PRs have been accepted this is no longer necessary:
git clone git@github.com:iptizer/gitea-chart.git
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml
# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml
backup & restore
See the backup cronjob in the /backup/ folder.
For backup & restore see gitea documentation.
Download the gitea-dump locally and proceed with the following commands:
❯ mkdir gitea_restore
❯ mv gitea-dump-1587901016.zip gitea_restore
❯ cd gitea_restore
❯ unzip gitea-dump-1587901016.zip
Archive: gitea-dump-1587901016.zip
inflating: gitea-repo.zip
creating: custom/
[...]
The SQL dump may be imported via phpmyadmin.
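Alternatively the dump can be imported directly in the mariadb pod (a sketch; pod name, namespace and database name are assumptions, adjust them to the actual release):
# gitea-db.sql is part of the gitea-dump archive
kubectl cp ./gitea-db.sql gitea-mariadb-0:/tmp/gitea-db.sql -n development
kubectl exec -it gitea-mariadb-0 -n development -- \
  bash -c 'mysql -u root -p"$MARIADB_ROOT_PASSWORD" gitea < /tmp/gitea-db.sql'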
Copy to remote pod:
kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/
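Open a shell on the pod (same pod name as in the cp command; look it up with kubectl get pods if it has changed):
kubectl exec -it gitea-gitea-69cd9bc59b-q2b2f -- bash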
And finally unzip and fix ownership inside the pod:
cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/
Then log in to git.moritzgraf.de and proceed with the default values, or adjust them.
nextcloud
helm upgrade --install nextcloud stable/nextcloud -n nextcloud -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml
backup & restore
mailu
Using the mailu helm chart.
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
# from the upstream chart repo
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# from a local checkout of the chart
helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
# render the manifests only
helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
troubleshooting
Test IMAP from the console:
openssl s_client -crlf -connect moritzgraf.de:993
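Once connected, a mailbox login can be tested with plain IMAP commands (user and password are placeholders):
a1 login "moritz@moritzgraf.de" "xxx"
a2 list "" "*"
a3 logout
# the same kind of check for SMTP submission:
openssl s_client -starttls smtp -connect moritzgraf.de:587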
migrate
# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# apply mailu and scale all to 0
kc mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:man ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy
Checks:
- browser mail.moritzgraf.de & login
- browser mail.moritzgraf.de/admin
minio
kubectl apply -f minio
Add mopbot & corona & corona-api
kubectl apply -f datalab/
zebrium
zebrium.io has "AI powered" troubleshooting. This is a test use-case for my cluster. It was installed using the following commands:
kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy
Benchmark with dbench
Usually not deployed, but dbench may be executed to check the speed of the local filesystem:
k create ns dbench
k apply -f dbench/
k delete -f dbench
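The benchmark results appear in the job log once the fio runs finish (assuming the manifests in dbench/ create a Job named dbench):
k logs -n dbench -f job/dbench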
Web
kubectl apply -f web/