k8s

This folder holds all the services required for my private infrastructure. The following constraints apply:

  • Order of implementation is top down.
  • Every namespace has a subfolder within this directory.
  • Everything is deployed with helm3.

Operations

Clean up pods in Error state:

kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
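
A variant that also removes Evicted pods across all namespaces (a sketch; the awk column positions assume the default kubectl output):

kubectl get pods -A --no-headers | awk '$4 == "Error" || $4 == "Evicted" { print "-n " $1 " pod/" $2 }' | xargs -r -L1 kubectl delete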

Redeploy a deployment:

DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": {  \"redeploy\": \"$( date +%s )\"}}}}}"
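
On newer kubectl (>= 1.15) the same effect can be achieved without patching labels:

kubectl rollout restart deployment "$DEPLOYMENT" -n "$NAMESPACE"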

Deployment (non-persistent stuff)

ingress-nginx

Apply with helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install --create-namespace ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f ingress-nginx/ingress-nginx.yaml
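
A quick check that the controller is running and got its external address (the service name follows the chart defaults for this release name):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller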

cert-manager

Apply with helm (see the chart):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install --create-namespace cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# apply the two issuer classes
kubectl apply -f cert-manager/staging-issuer.yaml
kubectl apply -f cert-manager/production-issuer.yaml
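
The issuer manifests are roughly of this shape (a sketch assuming an ACME HTTP-01 setup; the actual files in cert-manager/ are authoritative):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # staging endpoint; the production issuer points at acme-v02 instead
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com  # placeholder: the real address lives in the repo file
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx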

To test all this you may use the kuard demo project:

kubectl apply -f kuard
# check https://kuard.haumdaucher.de
kubectl delete -f kuard

openebs

Update with the following command. The chart can be found here.

Pitfall:

  • On a fresh installation: activate ndmOperator so that the CRDs are correctly installed. It may be deactivated afterwards.

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm upgrade --install --create-namespace -f openebs/openebs.yml openebs --namespace openebs openebs/openebs
k apply -f openebs/storageclass.yml
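
For reference, an openebs local-hostpath StorageClass looks roughly like this (a sketch; storageclass.yml in the repo is authoritative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer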

minio (bitnami)

Switched to the Bitnami chart, as the "normal" chart was just too big.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install -f minio/minio.secret.yaml --namespace minio --create-namespace minio bitnami/minio

velero

Backup tool. See chart README.

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace --namespace velero -f ./velero/velero.secret.yaml velero vmware-tanzu/velero
kubectl create secret generic rclone-config --from-file=./velero/rclone.secret
kubectl apply -f velero/dropbox_sync.yml
# teardown:
helm delete velero -n velero
kubectl delete ns velero

A manual backup may be created by executing the following command. Note: keep the backed-up namespaces in sync with the configuration in the helm chart!

DATE=$( date +%Y%m%d )
velero backup create $DATE --include-namespaces datalab,development,nextcloud,tt-rss,mailu --wait
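
A restore from such a backup uses the standard velero CLI (the restore name is free-form):

velero backup get
velero restore create restore-$DATE --from-backup $DATE --wait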

Add private docker registry

TODO: chart no longer exists. Check how to replace this someday.

# create the htpasswd entry and put it (base64-encoded) into the helm chart values
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# deploy the chart with the resulting values file
helm upgrade --install --create-namespace docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml
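
registry.secret.yaml then carries the generated hash, roughly like this (a sketch of the stable/docker-registry values; the bcrypt hash is truncated here):

secrets:
  htpasswd: "moritz:$2y$05$..."  # output of the htpasswd command above
ingress:
  enabled: true
  hosts:
    - registry.haumdaucher.de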

creating docker-pull-secret

Create the credentials secret according to the docs:

namespaces="datalab moritz web"
# the following is ONLY required to update the secret files!
for i in $namespaces ; do
  kubectl create secret docker-registry registry-haumdaucher-de \
    -n $i \
    --docker-server=registry.haumdaucher.de \
    --docker-username=moritz \
    --docker-password='xxx' \
    --docker-email=moritz@moritzgraf.de \
    --dry-run=client -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply (may be executed as needed)
for i in $namespaces ; do
  kubectl apply -f ${i}/docker-pull.yaml.secret -n $i
done
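
Workloads in those namespaces then reference the secret in their pod spec:

spec:
  imagePullSecrets:
    - name: registry-haumdaucher-de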

For kubeflow:

cat << EOF > config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "$( echo -n 'moritz:password' | base64 )"
        }
    }
}
EOF
kubectl create -n kubeflow configmap docker-config --from-file=config.json
rm config.json

networking with calico

Install calicoctl in cluster

kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml

Then you may send commands like:

kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu

Or on the server directly:

sudo -E /usr/local/bin/calicoctl node checksystem

metrics

See this documentation.

kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  selector:
    k8s-app: calico-node
  ports:
  - port: 9091
    targetPort: 9091
EOF
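
To have the kube-prometheus-stack (see monitoring below) scrape these metrics, a ServiceMonitor along these lines would be needed (a sketch; the labels are assumptions and must match the actual setup):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: felix-metrics
  namespace: kube-system
  labels:
    release: prometheus-operator  # assumption: matches the helm release name used below
spec:
  selector:
    matchLabels:
      k8s-app: calico-node  # assumption: this label must be added to felix-metrics-svc above
  endpoints:
    - targetPort: 9091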

metrics-server

Getting resources (already done):

cd kube-system
curl -L -o metrics-server.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# add parameters to deployment:
#           - --kubelet-preferred-address-types=InternalIP
#           - --v=2
#           - --kubelet-insecure-tls

Implement metrics-server:

kubectl apply -n kube-system -f kube-system/metrics-server.yml
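
Once the metrics API is up, resource usage can be queried:

kubectl top nodes
kubectl top pods -A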

ameliegraf

Note: Not yet finished. Switched back to the portfolio addresses.

The website redirection for ameliegraf.de.

k create ns ameliegraf
k apply -f ameliegraf/ameliegraf.yml

Deployment (persistent stuff)

From here on, everything should be covered by the backup; these objects should be recreated by a velero restore.

rstudio

DISABLED IN FAVOR OF KUBEFLOW

Currently only for one user:

kubectl apply -f datalab/rstudio.yaml
kubectl delete -f datalab/rstudio.yaml

tt-rss

Includes persistent data in the mariadb database tt-rss.

helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml

helm upgrade --install tt-rss k8s-at-home/tt-rss -n tt-rss -f tt-rss/tt-rss.helm.secret.yml

monitoring

The prometheus-operator, now called kube-prometheus-stack, is used.

kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-operator.secret.yml
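
To reach Grafana without an ingress (the service name is assumed from the release name above):

kubectl port-forward -n monitoring svc/prometheus-operator-grafana 3000:80
# then open http://localhost:3000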

gitea

Once my PRs have been accepted, this will no longer be necessary:

git clone git@github.com:iptizer/gitea-chart.git
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml

# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml

backup & restore

See the backup cronjob in the /backup/ folder.

For backup & restore see gitea documentation.

Download the gitea-dump locally and proceed with the following commands:

 mkdir gitea_restore
 mv gitea-dump-1587901016.zip gitea_restore
 cd gitea_restore
 unzip gitea-dump-1587901016.zip
Archive:  gitea-dump-1587901016.zip
  inflating: gitea-repo.zip          
   creating: custom/
[...]

The SQL import may be done via phpmyadmin.

Copy to remote pod:

kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/

And finally unzip inside shell on pod:

cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/

Then login to git.moritzgraf.de and proceed with default values, or adjust them.

octobot

Deployment instructions for Octobot. Dex is used for authentication.

kubectl create ns octobot
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
helm upgrade --install -n octobot oauth2-octobot oauth2-proxy/oauth2-proxy -f ./octobot/oauth2.secret.values
kubectl apply $(ls octobot/*.yaml | awk ' { print " -f " $1 } ')

octobot-fabi

Deployment instructions for Octobot. Dex is used for authentication.

kubectl create ns octobot-fabi
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
helm upgrade --install -n octobot-fabi oauth2-octobot oauth2-proxy/oauth2-proxy -f ./octobot-fabi/oauth2.secret.values
kubectl apply $(ls octobot-fabi/*.yaml | awk ' { print " -f " $1 } ')

nextcloud

The chart is on GitHub; see its "Configuring Nextcloud" documentation.

helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud --version 2.14.2 -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml

Execute occ in container:

runuser --user www-data -- /usr/local/bin/php /var/www/html/occ
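
From outside the pod, the same can be run through kubectl (the deployment name follows from the release above; occ status is just an example subcommand):

kubectl exec -n nextcloud deploy/nextcloud -- runuser --user www-data -- /usr/local/bin/php /var/www/html/occ status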

Delete stuff for an upgrade (sometimes necessary):

kubectl delete sts/nextcloud-mariadb -n nextcloud
kubectl delete sts/nextcloud-redis-master -n nextcloud
kubectl delete deployment nextcloud -n nextcloud

SyncURL for DavX5 => https://cloud.haumdaucher.de/remote.php/dav/principals/users/moritz/

It is unknown why the normal URL does not work. See https://help.nextcloud.com/t/davx5-couldnt-find-caldav-or-carddav-service/68669

backup & restore

TODO: implement with Velero.

fuel datalab

helm repo add influxdata https://helm.influxdata.com/
helm upgrade --install influxdb influxdata/influxdb --namespace datalab --values datalab/influxdb.yml
helm upgrade --install telegraf influxdata/telegraf --namespace datalab --values datalab/telegraf.yml
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql --namespace datalab --values datalab/postgres.yml.secret

Tear down:

helm uninstall influxdb --namespace datalab

timescaledb @datalab

git clone git@github.com:timescale/timescaledb-kubernetes.git ../../timescaledb-kubernetes
../../timescaledb-kubernetes/charts/timescaledb-single/generate_kustomization.sh timescaledb
cp -r "../../timescaledb-kubernetes/charts/timescaledb-single/kustomize/timescaledb" ./datalab/timescaledb.secret
kubectl apply -n datalab -k ./datalab/timescaledb.secret
helm repo add timescaledb 'https://raw.githubusercontent.com/timescale/timescaledb-kubernetes/master/charts/repo/'
helm install timescaledb timescaledb/timescaledb-single --namespace datalab --values datalab/timescaledb.yml

mailu

Using the mailu helm chart.

helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
#helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml

troubleshooting

Test IMAP from the console:

openssl s_client -crlf -connect moritzgraf.de:993
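
The same approach works for SMTP submission (STARTTLS on port 587):

openssl s_client -starttls smtp -crlf -connect moritzgraf.de:587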

migrate

# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --create-namespace --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# apply mailu and scale all to 0
kc mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:man ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy

Checks:

  • browser mail.moritzgraf.de & login
  • browser mail.moritzgraf.de/admin

Add mopbot & corona & corona-api

kubectl apply -f datalab/

zebrium

zebrium.io has "AI powered" troubleshooting. This is a test use case for my cluster. It was installed using the following commands:

kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy

Benchmark with dbench

Usually not deployed, but dbench may be executed to check the speed of the local filesystem:

k create ns dbench
k apply -f dbench/
k delete -f dbench

Web

kubectl create ns web
kubectl apply -n web ./re
kubectl apply -f web/

Kubeflow

The whole Kubeflow deployment is documented in a separate repository.

Archive

Deployments previously used.

Jupyter

DEPRECATED: Using Kubeflow instead. Moved to _archive.

Using the project zero-to-jupyterhub. The helm chart can be found here.

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm upgrade --cleanup-on-fail --install jupyter jupyterhub/jupyterhub --namespace datalab --values datalab/jupyter-values.yaml
helm delete jupyter --namespace datalab

Tekton

DEPRECATED: Using Argo from Kubeflow instead.

Implementation as described in the docs.

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://github.com/tektoncd/dashboard/releases/latest/download/tekton-dashboard-release.yaml
#basic-auth, see https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
htpasswd -c ./tekton-pipelines/auth tekton
kubectl delete secret -n tekton-pipelines basic-auth
kubectl create secret -n tekton-pipelines generic basic-auth --from-file=tekton-pipelines/auth
kubectl apply -f tekton-pipelines/tekton-ingress.yml
rm tekton-pipelines/auth

k delete ns tekton-pipelines

Install client side tools:

brew tap tektoncd/tools
brew install tektoncd/tools/tektoncd-cli