# k8s
This folder holds all the services required for my private infrastructure. The following constraints apply:
* Order of implementation is top down.
* Every namespace has a subfolder within this subdirectory.
* Deployments are managed with Helm 3.
# Operations
Clean up `Error` pods:
```sh
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
```
Redeploy a deployment:
```sh
DEPLOYMENT="rstudio"
NAMESPACE="datalab"
kubectl patch deployment $DEPLOYMENT -n $NAMESPACE -p "{\"spec\": {\"template\": {\"metadata\": { \"labels\": { \"redeploy\": \"$( date +%s )\"}}}}}"
```
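On newer kubectl versions (1.15+) the same effect can be achieved without patching labels:
```sh
# restart all pods of the deployment via a rolling restart
kubectl rollout restart deployment/$DEPLOYMENT -n $NAMESPACE
```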
## helm modify release data
```sh
# see https://gist.github.com/DzeryCZ/c4adf39d4a1a99ae6e594a183628eaee
kubectl get secret sh.helm.release.v1.mailu.v8 -n mailu -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d > tmp_v8_mailu
# edit as you like
kubectl edit -n mailu secret sh.helm.release.v1.mailu.v8
# replace the "release" data with the re-encoded payload and save :)
```
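After editing, the payload has to be encoded back the same way it was decoded — a minimal sketch, assuming GNU coreutils (`-w0` disables line wrapping so the value can be pasted as a single line):
```sh
# reverse of the decode pipeline above: gzip, then double base64
cat tmp_v8_mailu | gzip -c | base64 -w0 | base64 -w0
```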
# Deployment (non persistent stuff)
## [ingress-nginx](https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx)
Apply with helm:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install --create-namespace ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f ingress-nginx/ingress-nginx.yaml
```
## [cert-manager](https://cert-manager.io/docs/tutorials/acme/ingress/)
Apply with helm ([see chart](https://github.com/jetstack/cert-manager)):
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install --create-namespace cert-manager jetstack/cert-manager -n cert-manager -f cert-manager/cert-manager.yaml
# apply the two issuer classes
kubectl apply -f cert-manager/staging-issuer.yaml
kubectl apply -f cert-manager/production-issuer.yaml
```
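To check that the issuers and certificates are ready — a sketch; use `kubectl get issuer -A` instead if the manifests define namespaced Issuers:
```sh
kubectl get clusterissuer
kubectl get certificate -A
```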
To test all this you may use the kuard demo project:
```sh
kubectl apply -f kuard
# checkout: https://kuard.haumdaucher.de
kubectl delete -f kuard
```
## openebs
Update with the following command. The chart can be found [here](https://github.com/openebs/charts/tree/master/charts/openebs).
Pitfall:
* On fresh installation: activate *ndmOperator* so that the CRDs are correctly installed. It may be deactivated afterwards.
```sh
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm upgrade --install --create-namespace -f openebs/openebs.yml openebs --namespace openebs openebs/openebs
k apply -f openebs/storageclass.yml
```
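To verify that NDM discovered the disks and the storage class is in place:
```sh
kubectl get pods -n openebs
kubectl get blockdevices -n openebs  # disks discovered by NDM
kubectl get sc                       # should list the storage class applied above
```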
## minio (bitnami)
Switching to the [Bitnami chart](https://artifacthub.io/packages/helm/bitnami/minio), as the "normal" chart is just too big.
Links:
* [minio-console.haumdaucher.de](https://minio-console.haumdaucher.de)
* [minio.haumdaucher.de](https://minio.haumdaucher.de)
```sh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
# 11.02.2024: Removed and reinstalled due to upgrade problem
helm upgrade --install -f minio/minio.secret.yaml --namespace minio --create-namespace minio bitnami/minio --version 14.8.1
```
## velero
Backup tool. See chart [README](https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/README.md).
```sh
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace --namespace velero -f ./velero/velero.secret.yaml velero vmware-tanzu/velero --version 5.3.0
kubectl create secret generic rclone-config --from-file=./velero/rclone.secret
kubectl apply -f velero/dropbox_sync.yml
# teardown:
helm delete velero -n velero
kubectl delete ns velero
```
A manual backup may be created by executing the following command. **Note: Keep the backed-up namespaces in sync with the config from the helm chart!!!**
```sh
DATE=$( date +%Y%m%d )
velero backup create $DATE --include-namespaces datalab,development,nextcloud,tt-rss,mailu --wait
```
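The state of a backup can be inspected, and a restore triggered, with the velero CLI:
```sh
velero backup describe $DATE --details
velero backup logs $DATE
# restore from a given backup
velero restore create --from-backup $DATE
```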
## coder
Coder is a code server; see [https://coder.com/docs/v2/latest/install/kubernetes](https://coder.com/docs/v2/latest/install/kubernetes).
```sh
kubectl create namespace coder
# Install PostgreSQL
helm repo add bitnami https://charts.bitnami.com/bitnami
# create secret for postgresdb
kubectl apply -f coder/postgres_users.secret.yaml
helm upgrade --install coder-db bitnami/postgresql \
  -n coder \
  -f coder/bitnami_postgresql.secret.yaml \
  --version 15.2.5
# db url postgres://coder:<password>@coder-db-postgresql.coder.svc.cluster.local:5432/coder?sslmode=disable
# Uses Bitnami PostgreSQL example. If you have another database,
# change to the proper URL.
USERPASS=$( kubectl get secret -n coder coder-db-postgresql -o=jsonpath='{.data.password}' | base64 -D )  # -D is the macOS flag; use -d on Linux
kubectl create secret generic coder-db-url -n coder \
  --from-literal=url="postgres://coder:${USERPASS}@coder-db-postgresql.coder.svc.cluster.local:5432/coder?sslmode=disable"
helm repo add coder-v2 https://helm.coder.com/v2
#
helm upgrade --install coder coder-v2/coder \
  --namespace coder \
  --values coder/coder.secret.yaml \
  --version 2.10.0
```
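To verify the rollout — the deployment name is an assumption derived from the release name:
```sh
kubectl rollout status deployment/coder -n coder
kubectl get pods -n coder
```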
## llm
See [llm](./llm_hosting.md)
## Add private docker registry
**TODO: chart no longer exists. Check how to replace this someday.**
```sh
# create secret base64 encoded and put it in htpasswd helm chart
USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# #
helm upgrade --install --create-namespace docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml
```
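Once deployed, the registry can be smoke-tested with a login and a test push — a sketch, assuming the registry.haumdaucher.de host from above:
```sh
docker login registry.haumdaucher.de -u moritz
docker pull alpine:latest
docker tag alpine:latest registry.haumdaucher.de/moritz/alpine:test
docker push registry.haumdaucher.de/moritz/alpine:test
```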
### creating docker-pull-secret
Create credentials secret [according to docu](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line):
```sh
namespaces="datalab moritz web"
# the following is ONLY required to update the secret file!!
for i in $( echo $namespaces ) ; do
  kubectl create secret docker-registry registry-haumdaucher-de \
    -n $i \
    --docker-server=registry.haumdaucher.de \
    --docker-username=moritz \
    --docker-password='xxx' \
    --docker-email=moritz@moritzgraf.de \
    --dry-run=client -o yaml > ./${i}/docker-pull.yaml.secret
done
# apply (may be executed as needed)
for i in $( echo $namespaces ) ; do
  kubectl apply -f ./${i}/docker-pull.yaml.secret -n $i
done
```
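A workload then references the secret via `imagePullSecrets` — a hypothetical pod spec for illustration (the image name is made up):
```sh
kubectl apply -n datalab -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pull-secret-test
spec:
  imagePullSecrets:
    - name: registry-haumdaucher-de
  containers:
    - name: app
      image: registry.haumdaucher.de/moritz/app:latest
EOF
```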
For kubeflow:
```sh
cat << EOF > config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "$( echo -n 'moritz:password' | base64 )"
    }
  }
}
EOF
kubectl create -n kubeflow configmap docker-config --from-file=config.json
rm config.json
```
## networking with calico
Install calicoctl in the cluster:
```sh
kubectl apply -n kube-system -f https://docs.projectcalico.org/manifests/calicoctl.yaml
```
Then you may send commands like:
```sh
kubectl exec -ti -n kube-system calicoctl -- /calicoctl get workloadendpoints -n mailu
```
Or on the server directly:
```sh
sudo -E /usr/local/bin/calicoctl node checksystem
```
### metrics
See this [documentation](https://docs.projectcalico.org/maintenance/monitor-component-metrics).
```sh
kubectl exec -ti -n kube-system calicoctl -- /calicoctl patch felixConfiguration default --patch '{"spec":{"prometheusMetricsEnabled": true}}'
kubectl apply -n kube-system -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: kube-system
spec:
  selector:
    k8s-app: calico-node
  ports:
    - port: 9091
      targetPort: 9091
EOF
```
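The metrics endpoint can then be checked with a port-forward:
```sh
kubectl port-forward -n kube-system svc/felix-metrics-svc 9091:9091 &
curl -s http://localhost:9091/metrics | head
kill %1
```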
## metrics-server
Getting resources (already done):
```sh
cd kube-system
curl -L -o metrics-server.yml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# add parameters to deployment:
# - --kubelet-preferred-address-types=InternalIP
# - --v=2
# - --kubelet-insecure-tls
```
Deploy metrics-server:
```sh
kubectl apply -n kube-system -f kube-system/metrics-server.yml
```
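Once the metrics API is up, resource usage can be queried directly:
```sh
kubectl top nodes
kubectl top pods -A --sort-by=memory
```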
## ameliegraf
Note: Not yet finished. Switched back to portfolio addresses.
The website redirection for [ameliegraf.de](https://ameliegraf.de).
```sh
k create ns ameliegraf
k apply -f ameliegraf/ameliegraf.yml
```
# Deployment (persistent stuff)
From here on, everything should be covered by the backup. Re-creating these objects should already be handled by the velero backup.
## rstudio
**DISABLED IN FAVOR OF KUBEFLOW**
Currently only for one user:
```sh
kubectl apply -f datalab/rstudio.yaml
kubectl delete -f datalab/rstudio.yaml
```
## tt-rss
Includes *persistent data* from the mariadb database `tt-rss`.
```sh
helm upgrade --install tt-rss-mariadb bitnami/mariadb -n tt-rss -f tt-rss/tt-rss-mariadb.secret.yml
helm upgrade --install tt-rss-phpmyadmin bitnami/phpmyadmin -n tt-rss -f tt-rss/tt-rss-phpmyadmin.yml
helm upgrade --install tt-rss k8s-at-home/tt-rss -n tt-rss -f tt-rss/tt-rss.helm.secret.yml
```
## monitoring
The prometheus-operator, now called [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack), is used.
```sh
kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --create-namespace prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring -f monitoring/prometheus-operator.secret.yml --version 56.6.1
# alert configuration
NAMESPACES_TO_ALERT=( kube-system monitoring cert-manager datalab ingress-nginx mailu minio velero web openebs )
for i in "${NAMESPACES_TO_ALERT[@]}"; do
  kubectl apply -f monitoring/alertmanagerconfig.secret.yaml -n $i
done
```
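To check the applied alert configs and reach Grafana — the service name is an assumption derived from the release name:
```sh
kubectl get alertmanagerconfigs -A
kubectl port-forward -n monitoring svc/prometheus-operator-grafana 3000:80
```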
### influxdb
Used to store hass data long term.
```sh
helm repo add influxdata https://helm.influxdata.com/
helm upgrade --install influxdb -n influxdb influxdata/influxdb2 -f influxdb/influxdb2.secret.yml --create-namespace
```
### home-assistant hass
How to generate a token (not really required): https://github.com/hahn-th/homematicip-rest-api
Using this helm chart: [https://github.com/pajikos/home-assistant-helm-chart](https://github.com/pajikos/home-assistant-helm-chart)
Install chart:
```sh
# secret for auth in hass-code
k apply -f home-assistant/hass-code-auth.secret.yml
#
helm repo add pajikos http://pajikos.github.io/home-assistant-helm-chart/
helm repo update
#helm show values pajikos/home-assistant > ./home-assistant/home-assistant.yaml
k create ns home-assistant
helm upgrade --install home-assistant pajikos/home-assistant -n home-assistant -f ./home-assistant/home-assistant.yaml
```
### robusta
```sh
k create ns robusta
helm repo add robusta https://robusta-charts.storage.googleapis.com
helm repo update
helm upgrade --install robusta robusta/robusta -n robusta -f ./robusta/robusta.yaml
```
## gitea
Once my PRs have been accepted, this is no longer necessary:
```sh
git clone git@github.com:iptizer/gitea-chart.git
```
```sh
# from chart repo
helm upgrade --install gitea k8s-land/gitea -n development -f development/gitea.secret.yml
# from local folder
helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.secret.yml
# phpmyadmin
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml
```
### backup & restore
See the backup cronjob in the `/backup/` folder.
For backup & restore see [gitea documentation](https://docs.gitea.io/en-us/backup-and-restore/).
Download the `gitea-dump` locally and proceed with the following commands:
```sh
mkdir gitea_restore
mv gitea-dump-1587901016.zip gitea_restore
cd gitea_restore
unzip gitea-dump-1587901016.zip
# output:
#   Archive:  gitea-dump-1587901016.zip
#   inflating: gitea-repo.zip
#   creating: custom/
#   [...]
```
Import of the SQL dump may be done via phpMyAdmin.
Copy the repositories to the remote pod:
```sh
kubectl cp ./gitea-repo.zip gitea-gitea-69cd9bc59b-q2b2f:/data/git/
```
And finally unzip inside a shell on the pod:
```sh
cd /data/git/
unzip gitea-repo.zip
mv repositories/ gitea-repositories/
chown -R git. ./gitea-repositories/
```
Then log in to git.moritzgraf.de and proceed with default values, or adjust them.
## octobot
Deployment instructions for [Octobot](https://github.com/Drakkar-Software/OctoBot). Dex is used for authentication.
```sh
kubectl create ns octobot
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
helm upgrade --install -n octobot oauth2-octobot oauth2-proxy/oauth2-proxy -f ./octobot/oauth2.secret.values
kubectl apply $(ls octobot/*.yaml | awk ' { print " -f " $1 } ')
```
## octobot-fabi
Deployment instructions for [Octobot](https://github.com/Drakkar-Software/OctoBot). Dex is used for authentication.
```sh
kubectl create ns octobot-fabi
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
helm repo update
helm upgrade --install -n octobot-fabi oauth2-octobot oauth2-proxy/oauth2-proxy -f ./octobot-fabi/oauth2.secret.values
kubectl apply $(ls octobot-fabi/*.yaml | awk ' { print " -f " $1 } ')
```
## nextcloud
[Chart GitHub](https://github.com/nextcloud/helm/tree/master/charts/nextcloud)
[Configuring Nextcloud](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/index.html)
```sh
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm upgrade --install nextcloud nextcloud/nextcloud -n nextcloud --version 4.6.2 -f nextcloud/nextcloud.secret.yml
helm upgrade --install nextcloud-phpmyadmin bitnami/phpmyadmin -n nextcloud -f nextcloud/nextcloud-phpmyadmin.yml
```
Execute occ in container:
```sh
runuser --user www-data -- /usr/local/bin/php /var/www/html/occ
```
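A minimal wrapper to run occ from outside the pod — the label selector is an assumption based on the chart's common labels:
```sh
NEXTCLOUD_POD=$( kubectl get pods -n nextcloud -l app.kubernetes.io/name=nextcloud -o jsonpath='{.items[0].metadata.name}' )
kubectl exec -n nextcloud $NEXTCLOUD_POD -- runuser --user www-data -- /usr/local/bin/php /var/www/html/occ status
```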
Download nextcloud server and extract occ:
```sh
curl -L -o nextcloud-v27.1.6.tar.gz https://github.com/nextcloud/server/archive/refs/tags/v27.1.6.tar.gz
```
Delete stuff for upgrade (sometimes necessary):
```sh
kubectl delete sts/nextcloud-mariadb -n nextcloud
kubectl delete sts/nextcloud-redis-master -n nextcloud
kubectl delete deployment nextcloud -n nextcloud
```
SyncURL for DavX5: https://cloud.haumdaucher.de/remote.php/dav/principals/users/moritz/
It is unknown why the normal URL does not work. See https://help.nextcloud.com/t/davx5-couldnt-find-caldav-or-carddav-service/68669
### backup & restore
TODO: with Velero.
## datalab (influxdb & mosquitto)
```sh
helm repo add influxdata https://helm.influxdata.com/
# influx stuff
helm upgrade --install influxdb2 influxdata/influxdb2 --namespace datalab --values datalab/influxdb2.yml
helm upgrade --install telegraf-operator influxdata/telegraf-operator --namespace datalab --values datalab/telegraf-operator.yml
#
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql --namespace datalab --values datalab/postgres.yml.secret
#mqtt
helm repo add t3n https://storage.googleapis.com/t3n-helm-charts
helm repo update
helm upgrade --install mosquitto t3n/mosquitto --namespace datalab --values datalab/mosquitto.secret.yml
```
Work with it:
```sh
# retrieve admin password
echo $(kubectl get secret influxdb2-auth -o "jsonpath={.data['admin-password']}" --namespace datalab | base64 --decode)
```
Tear down:
```sh
helm uninstall influxdb2 --namespace datalab
```
## timescaledb @datalab
```sh
git clone git@github.com:timescale/timescaledb-kubernetes.git ../../timescaledb-kubernetes
../../timescaledb-kubernetes/charts/timescaledb-single/generate_kustomization.sh timescaledb
cp -r "../../timescaledb-kubernetes/charts/timescaledb-single/kustomize/timescaledb" ./datalab/timescaledb.secret
kubectl apply -n datalab -k ./datalab/timescaledb.secret
helm repo add timescaledb 'https://raw.githubusercontent.com/timescale/timescaledb-kubernetes/master/charts/repo/'
helm install timescaledb timescaledb/timescaledb-single --namespace datalab --values datalab/timescaledb.yml
```
## mailu
Using the [mailu helm chart](https://github.com/Mailu/helm-charts/tree/master/mailu).
```sh
helm repo add mailu https://mailu.github.io/helm-charts/
helm repo update
kubectl apply -f mailu/mailu.secret-key.secret.yml
helm upgrade --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml --version 0.3.5
#helm upgrade --install mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
helm template mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
#helm template mailu ../../mailu-helm-charts/mailu/ -n mailu -f mailu/mailu.secret.yml
```
### SECRET_KEY
There was an ERROR as follows:
```sh
ERROR:root:Can't read SECRET_KEY from file: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "/config.py", line 8, in <module>
    system.set_env()
  File "/app/venv/lib/python3.10/site-packages/socrate/system.py", line 35, in set_env
    raise exc
  File "/app/venv/lib/python3.10/site-packages/socrate/system.py", line 32, in set_env
    secret_key = open(os.environ.get("SECRET_KEY_FILE"), "r").read().strip()
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
The fix was to add the env variable as follows to the failing deployments:
```yaml
- name: SECRET_KEY
  value: "fa5faeD9aegietaesahbiequ5Pe9au"
```
### troubleshooting
Test imap from console:
```sh
openssl s_client -crlf -connect moritzgraf.de:993
```
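Once connected, a manual IMAP session can be driven with tagged commands (illustrative credentials):
```
a1 LOGIN moritz@moritzgraf.de <password>
a2 LIST "" "*"
a3 SELECT INBOX
a4 LOGOUT
```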
### migrate to GoogleWorkspace
Namespace `migrate` is used.
```sh
kubectl create ns migrate
```
### old migrate (from before GoogleWorkspace)
```sh
# backup on moritzgraf.de
ssh moritzgraf.de "sudo su - docker -c 'cd /home/docker/mailu && docker-compose stop' && sudo su - root -c 'cd /home/docker && tar czvf mailu.tar.gz ./mailu && mv mailu.tar.gz /home/moritz/ && chown moritz. /home/moritz/mailu.tar.gz' && scp mailu.tar.gz haumdaucher.de:/home/moritz/"
# terraform change
cd ../terraform && terraform apply
# helm apply
cd ../k8s
helm upgrade --create-namespace --install mailu mailu/mailu -n mailu -f mailu/mailu.secret.yml
# apply mailu and scale all to 0
kc mailu # alias: switch the default namespace to mailu
k scale --replicas=0 --all=true deploy
# apply restore pod
k apply -f mailu/restore.yaml
# copy tar gz
ssh one
k cp mailu.tar.gz restore:/data/
# exec to pod and arrange persistence
k exec -it restore -- bash
cd /data
tar xzvf mailu.tar.gz
mv mailu/data/* ./admin/
mv mailu/dkim/* ./dkim/
mv mailu/mail/* ./dovecotmail/
chown -R mail:man ./dovecotmail
# scale up
k scale --replicas=1 --all=true deploy
```
Checks:
* browser mail.moritzgraf.de & login
* browser mail.moritzgraf.de/admin
## mopbot
Mopbot deployment has been moved to the mopbot repository itself.
## zebrium
[zebrium.io](https://zebrium.io) has "AI powered" troubleshooting. This is a test use-case for my cluster. It was installed using the following command:
```sh
kubectl create namespace zebrium
helm install zlog-collector zlog-collector --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-kubernetes-collector/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy,zebrium.timezone=Europe/Berlin
helm install zstats-collector zstats --namespace zebrium --repo https://raw.githubusercontent.com/zebrium/ze-stats/master/charts --set zebrium.collectorUrl=https://zapi03.zebrium.com/stats/api/v1/zstats,zebrium.authToken=4CFDFC1B806869D78185776AD7D940B0B16254AC,zebrium.deployment=iptizer-zebrium-deploy
```
## Benchmark with dbench
Usually this is not deployed, but dbench may be executed to check the speed of the local filesystem:
```sh
k create ns dbench
k apply -f dbench/
k delete -f dbench
```
## Web
```sh
kubectl create ns web
kubectl apply -n web ./re
kubectl apply -f web/
```
## Kubeflow
The whole Kubeflow deployment is documented in a separate repository:
* [https://git.moritzgraf.de/moritz/datalab-kubeflow](https://git.moritzgraf.de/moritz/datalab-kubeflow)
# Archive
Deployments previously used.
## Jupyter
**DEPRECATED: Using Kubeflow instead. Moved to _archive.**
Using the project [zero-to-jupyterhub](https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub/setup-jupyterhub.html). Helm chart can be found [here](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/tree/master/jupyterhub).
```sh
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm upgrade --cleanup-on-fail --install jupyter jupyterhub/jupyterhub --namespace datalab --values datalab/jupyter-values.yaml
helm delete jupyter --namespace datalab
```
## Tekton
**DEPRECATED: Using Argo from Kubeflow instead.**
Implementation as described [in the docs](https://tekton.dev/docs/getting-started/).
```sh
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://github.com/tektoncd/dashboard/releases/latest/download/tekton-dashboard-release.yaml
#basic-auth, see https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
htpasswd -c ./tekton-pipelines/auth tekton
kubectl delete secret -n tekton-pipelines basic-auth
kubectl create secret -n tekton-pipelines generic basic-auth --from-file=tekton-pipelines/auth
kubectl apply -f tekton-pipelines/tekton-ingress.yml
rm tekton-pipelines/auth
k delete ns tekton-pipelines
```
Install client side tools:
```sh
brew tap tektoncd/tools
brew install tektoncd/tools/tektoncd-cli
```
## cnpg - Cloud Native Postgres
cnpg is a postgres operator that is recommended by n8n, so I installed it.
Instructions are [here](https://cloudnative-pg.io/documentation/1.26/installation_upgrade/).
Executed:
```sh
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.26/releases/cnpg-1.26.1.yaml
```
## n8n
Using helm chart from [https://github.com/8gears/n8n-helm-chart](https://github.com/8gears/n8n-helm-chart).
```sh
kubectl create ns n8n
helm upgrade --cleanup-on-fail --install mop-n8n \
  oci://8gears.container-registry.com/library/n8n \
  --namespace n8n --values n8n/n8n.secret.yml --version 1.0.15
```
To verify the installation was correct, use the following command:
```sh
helm get manifest mop-n8n -n n8n | less
```
Apply the Garth MCP server:
```sh
kubectl apply -f n8n/garmin-mcp.yaml
```
Generate token:
```sh
uvx garth login
# login with user+pw+token
# take the output and put it in garth_token.txt
kubectl create secret generic garth-token-secret --from-file=GARTH_TOKEN=./garth_token.txt -n n8n
```
## n8n-fabi
```sh
kubectl create ns n8n-fabi
helm upgrade --cleanup-on-fail --install fabi-n8n \
  oci://8gears.container-registry.com/library/n8n \
  --namespace n8n-fabi --values n8n-fabi/n8n-fabi.secret.yml --version 1.0.15
```