k8s

This folder holds all the services required for my private infrastructure. The following constraints apply:

  • Order of implementation is top down.
  • Every namespace has a subfolder within this subdirectory.
  • helm3 is used for all chart deployments.

Operations

Clean up pods in Error state:

kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
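The same can be done without parsing the table output; pods that ended in Error have phase Failed, which kubectl can select directly:

kubectl delete pods --field-selector status.phase=Failed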

Redeploy a deployment:

DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": {  \"redeploy\": \"$( date +%s )\"}}}}}"
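On newer kubectl versions (1.15+) a rollout restart achieves the same without hand-crafting a patch:

kubectl rollout restart deployment/$DEPLOYMENT -n $NAMESPACE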

Deployment (non-persistent stuff)

ingress-nginx

Apply with helm:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install --create-namespace ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f ingress-nginx/ingress-nginx.yaml

cert-manager

Apply with helm (see the chart):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install --create-namespace cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# apply the two issuers (staging & production)
kubectl apply -f cert-manager/staging-issuer.yaml
kubectl apply -f cert-manager/production-issuer.yaml
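A certificate can then be requested per Ingress via annotation. A minimal sketch, assuming the production issuer is a ClusterIssuer named letsencrypt-production and a service myapp exists (both names are hypothetical; check production-issuer.yaml for the real issuer name):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  tls:
  - hosts:
    - myapp.haumdaucher.de
    secretName: myapp-tls   # cert-manager stores the issued certificate here
  rules:
  - host: myapp.haumdaucher.de
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF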

To test all this you may use the kuard demo project:

kubectl apply -f kuard
# check https://kuard.haumdaucher.de
kubectl delete -f kuard

openebs

Update with the following command. The chart can be found here.

Pitfall:

  • On fresh installation: activate ndmOperator so that the CRDs are installed correctly (see the sketch below the commands). It may be deactivated afterwards.
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm upgrade --install --create-namespace -f openebs/openebs.yml openebs --namespace openebs openebs/openebs
k apply -f openebs/storageclass.yml
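For the fresh-install case, a minimal sketch that enables the operator inline; it assumes the chart exposes an ndmOperator.enabled value (verify with helm show values openebs/openebs):

# fresh install only: enable ndmOperator so the CRDs get installed
helm upgrade --install --create-namespace -f openebs/openebs.yml \
  --set ndmOperator.enabled=true \
  openebs --namespace openebs openebs/openebs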

minio

See chart on GitHub.

helm repo add minio https://helm.min.io/
helm repo update
helm upgrade --install -f minio/minio.secret.yaml --namespace minio --create-namespace minio minio/minio
# teardown:
helm delete minio -n minio
kubectl delete ns minio

velero

Backup tool. See chart README.

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace --namespace velero -f ./velero/velero.secret.yaml velero vmware-tanzu/velero
kubectl create secret generic rclone-config -n velero --from-file=./velero/rclone.secret
kubectl apply -f velero/dropbox_sync.yml
# teardown:
helm delete velero -n velero
kubectl delete ns velero

A manual backup may be created by executing the following commands. Note: keep the backed-up namespaces in sync with the configuration from the helm chart!

DATE=$( date +%Y%m%d )
velero backup create $DATE --include-namespaces datalab,development,nextcloud,tt-rss,zebrium,mailu --wait
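Restoring works with the velero CLI as well; a sketch, with the backup name assumed to be a date created as above:

# list available backups, then restore one
velero backup get
velero restore create --from-backup 20201115 --wait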

Add private docker registry

TODO: chart no longer exists. Check how to replace this someday.

# create the bcrypt htpasswd entry and put it into the helm chart values
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# install the chart with the secret values
helm upgrade --install --create-namespace docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml

creating docker-pull-secret

Create the credentials secret according to the documentation:

namespaces="datalab moritz web"
# the following is ONLY required to update the secret files!
for i in $namespaces ; do
  kubectl create secret docker-registry registry-haumdaucher-de \
    -n $i \
    --docker-server=registry.haumdaucher.de \
    --docker-username=moritz \
    --docker-password='xxx' \
    --docker-email=moritz@moritzgraf.de \
    --dry-run=client -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply (may be executed as needed)
for i in $namespaces ; do
  kubectl apply -f ./${i}/docker-pull.yaml.secret -n $i
done
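Pods then reference the secret via imagePullSecrets. A minimal sketch; pod name and image are hypothetical:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: registry-test
  namespace: datalab
spec:
  imagePullSecrets:
  - name: registry-haumdaucher-de
  containers:
  - name: test
    image: registry.haumdaucher.de/moritz/test:latest
EOF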

For kubeflow:

cat << EOF > config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "$( echo -n 'moritz:password' | base64 )"
        }
    }
}
EOF
kubectl create -n kubeflow configmap docker-config --from-file=config.json
rm config.json

networking with calico

Install calicoctl in the cluster:

kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml

Then you may send commands like:

kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu

Or on the server directly:

sudo -E /usr/local/bin/calicoctl node checksystem

metrics

See this documentation.

kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default  --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  selector:
    k8s-app: calico-node
  ports:
  - port: 9091
    targetPort: 9091
EOF

metrics-server

Getting resources (already done):

cd kube-system
curl -L -o metrics-server.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# add parameters to deployment:
#           - --kubelet-preferred-address-types=InternalIP
#           - --v=2
#           - --kubelet-insecure-tls

Implement metrics-server:

kubectl apply -n kube-system -f kube-system/metrics-server.yml
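Once the pod is running, the metrics API can be verified with kubectl top:

# should list CPU/memory usage per node and pod
kubectl top nodes
kubectl top pods -n kube-system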

ameliegraf

Note: Not yet finished. Switched back to portfolio addresses.

The website redirection for ameliegraf.de.

k create ns ameliegraf
k apply -f ameliegraf/ameliegraf.yml

Deployment (persistent stuff)

From here on, everything should be covered by the backup; restoring these objects should already be handled by the velero backup.

rstudio

DISABLED IN FAVOR OF KUBEFLOW

Currently only for one user:

kubectl apply -f datalab/rstudio.yaml
kubectl delete -f datalab/rstudio.yaml

tt-rss

Includes persistent data from the mariadb database tt-rss.

helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml
kubectl apply -f tt-rss/

monitoring

The prometheus-operator, now called kube-prometheus-stack, is used.

kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-operator.secret.yml

gitea

Once my PRs have been accepted, this is no longer necessary:

git clone git@github.com:iptizer/gitea-chart.git
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml

# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml

backup & restore

See the backup cronjob in the /backup/ folder.

For backup & restore see gitea documentation.

Download the gitea-dump locally and proceed with the following commands:

mkdir gitea_restore
mv gitea-dump-1587901016.zip gitea_restore
cd gitea_restore
unzip gitea-dump-1587901016.zip
# Archive:  gitea-dump-1587901016.zip
#   inflating: gitea-repo.zip
#    creating: custom/
# [...]

The SQL import may be done via phpmyadmin.
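A CLI alternative is piping the dump into the database pod directly. A sketch only; pod name, database, user, and password variable are hypothetical and depend on the actual mariadb deployment:

# gitea-db.sql is part of the gitea dump
kubectl exec -i -n development gitea-mariadb-0 -- \
  mysql -u gitea -p"$DB_PASSWORD" gitea < gitea-db.sql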

Copy to remote pod:

kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/ -n development

And finally unzip inside shell on pod:

cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/

Then login to git.moritzgraf.de and proceed with default values, or adjust them.

octobot

Deployment instructions for Octobot. Dex is used for authentication.

kubectl create ns octobot
helm repo add dex https://charts.dexidp.io
helm repo update
helm upgrade --install -n octobot dex-octobot dex/dex -f ./octobot/dex.secret.values
kubectl apply $(ls octobot/*.yaml | awk ' { print " -f " $1 } ')
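Since kubectl can apply a directory directly (it picks up .yaml, .yml, and .json files), the awk construction can presumably be simplified, assuming octobot/ contains only manifests meant to be applied:

kubectl apply -f octobot/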

nextcloud

Chart GitHub

helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud --version 2.0.2 -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml

backup & restore

#TODO with Velero

fuel datalab

helm repo add influxdata https://helm.influxdata.com/
helm upgrade --install influxdb influxdata/influxdb --namespace datalab --values datalab/influxdb.yml
helm upgrade --install telegraf influxdata/telegraf --namespace datalab --values datalab/telegraf.yml
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql --namespace datalab --values datalab/postgres.yml.secret

Tear down:

helm uninstall influxdb --namespace datalab

timescaledb @datalab

git clone git@github.com:timescale/timescaledb-kubernetes.git ../../timescaledb-kubernetes
../../timescaledb-kubernetes/charts/timescaledb-single/generate_kustomization.sh timescaledb
cp -r "../../timescaledb-kubernetes/charts/timescaledb-single/kustomize/timescaledb" ./datalab/timescaledb.secret
kubectl apply -n datalab -k ./datalab/timescaledb.secret
helm repo add timescaledb 'https://raw.githubusercontent.com/timescale/timescaledb-kubernetes/master/charts/repo/'
helm install timescaledb timescaledb/timescaledb-single --namespace datalab --values datalab/timescaledb.yml

mailu

Using the mailu helm chart.

helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
#helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml

troubleshooting

Test imap from console:

openssl s_client -crlf -connect moritzgraf.de:993
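The SMTP side can be tested the same way, assuming the standard submission (587) and smtps (465) ports are exposed:

# submission with STARTTLS
openssl s_client -starttls smtp -crlf -connect moritzgraf.de:587
# implicit TLS (smtps)
openssl s_client -crlf -connect moritzgraf.de:465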

migrate

# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --create-namespace --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# switch to the mailu namespace (kc: local alias) and scale all deployments to 0
kc mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:mail ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy

Checks:

  • browser mail.moritzgraf.de & login
  • browser mail.moritzgraf.de/admin
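Two quick CLI checks in addition (assuming dig and curl are available locally):

# MX record should point at the new host
dig +short MX moritzgraf.de
# webmail should answer over TLS
curl -sI https://mail.moritzgraf.de | head -n 1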

Add mopbot & corona & corona-api

kubectl apply -f datalab/

zebrium

zebrium.io has "AI powered" troubleshooting. This is a test use case for my cluster. It was installed using the following commands:

kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy

Benchmark with dbench

Usually commented out, but dbench may be executed to check the speed of the local filesystem:

k create ns dbench
k apply -f dbench/
k delete -f dbench

Web

kubectl create ns web
kubectl apply -f web/

Kubeflow

The whole Kubeflow deployment is documented in a separate repository.

Archive

Deployments previously used.

Jupyter

DEPRECATED: Using Kubeflow instead. Moved to _archive.

Using the project zero-to-jupyterhub. The Helm chart can be found here.

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm upgrade --cleanup-on-fail --install jupyter jupyterhub/jupyterhub --namespace datalab --values datalab/jupyter-values.yaml
helm delete jupyter --namespace datalab

Tekton

DEPRECATED: Using Argo from Kubeflow instead.

Implementation as described in the docs.

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://github.com/tektoncd/dashboard/releases/latest/download/tekton-dashboard-release.yaml
#basic-auth, see https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
htpasswd -c ./tekton-pipelines/auth tekton
kubectl delete secret -n tekton-pipelines basic-auth
kubectl create secret -n tekton-pipelines generic basic-auth --from-file=tekton-pipelines/auth
kubectl apply -f tekton-pipelines/tekton-ingress.yml
rm tekton-pipelines/auth

k delete ns tekton-pipelines

Install client side tools:

brew tap tektoncd/tools
brew install tektoncd/tools/tektoncd-cli