Compare commits


No commits in common. "cb36f4606c50a6a3faa9a03f824c981e358e469f" and "076cc96f656293fd5b8135427e5816c5e6b780a4" have entirely different histories.

7 changed files with 50 additions and 355 deletions


@@ -1,49 +0,0 @@
# AGENTS.md
> [!NOTE]
> This directory handles the **bootstrapping and provisioning** of the Haumdaucher Kubernetes cluster using **Kubespray**.
## Project Overview
* **Tool**: [Kubespray](https://github.com/kubernetes-sigs/kubespray) (Ansible-based).
* **Method**: The local `inventory/` is the source of truth, which is synced into a checked-out Kubespray repository.
* **Idempotency**: The process is designed to be repeatable. The `kubespray` folder is treated as ephemeral and is re-created by `init.sh`.
## Workflow & Scripts
The core workflow is encapsulated in `init.sh`.
### `init.sh`
**Purpose**: Prepares the environment and Kubespray for deployment.
**Actions**:
1. **Clean Slate**: Deletes existing `kubespray/` directory.
2. **Clone**: Clones Kubespray (version defined in variable `VERSION`, e.g., `release-2.27`).
3. **Environment**: Sets up Python virtualenv via `pyenv` and installs `requirements.txt`.
4. **Sync**: Copies local `./inventory/` configurations into `./kubespray/inventory/`.
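Steps 1 and 4 can be sketched as follows. This is an illustrative sketch only, not the real script; the demo paths under `/tmp/init-demo` stand in for the actual `./inventory/` and `./kubespray/` directories:

```shell
# Illustrative sketch of the clean-slate and sync steps (demo paths, not init.sh itself)
DEMO=/tmp/init-demo
rm -rf "$DEMO"                                   # 1. clean slate
mkdir -p "$DEMO/kubespray/inventory" "$DEMO/inventory/prod"
printf '[all]\nnode1\n' > "$DEMO/inventory/prod/inventory.ini"
# 4. sync the local inventory into the checked-out repo
cp -r "$DEMO/inventory/." "$DEMO/kubespray/inventory/"
cat "$DEMO/kubespray/inventory/prod/inventory.ini"
```

Because the `kubespray/` copy is overwritten on every run, edits made there are lost, which is why the local `inventory/` is the only place to make changes.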
### Usage
1. **Source the script**:
```bash
source init.sh
```
2. **Deploy / Upgrade**:
After sourcing, go to the `kubespray` directory and run the Ansible playbooks as instructed by the script output.
* **Standard Run**:
```bash
cd kubespray
ansible-playbook -i inventory/prod/inventory.ini cluster.yml
```
* **Forced Upgrade**:
```bash
cd kubespray
ansible-playbook -i inventory/prod/inventory.ini -e upgrade_cluster_setup=true cluster.yml
```
## Directory Structure
* `init.sh`: The entry point script. **Source of truth for Kubespray version.**
* `inventory/`: Contains cluster inventory configurations (hosts, variables). **Edit this, not the one in `kubespray/`**.
* `kubespray/`: (Ignored/Ephemeral) The checked-out Kubespray repository. **Do not edit files here directly**; they will be overwritten.
## Configuration Updates
To upgrade Kubespray or change cluster config:
1. **Version Upgrade**: Update `VERSION` in `init.sh` (e.g., to `release-2.28`).
2. **Config Changes**: Modify files in `./inventory/`.
3. **Apply**: Run `source init.sh` then execute the Ansible playbook.
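Step 1 is a one-line edit. As a hedged illustration (operating on a throwaway stand-in file rather than the real `init.sh`, and using GNU `sed`; on macOS `-i` needs an empty suffix argument):

```shell
# Demonstrate the version bump on a stand-in file (the real edit targets init.sh)
printf 'VERSION="release-2.27"\n' > /tmp/init-demo.sh
sed -i 's/^VERSION=.*/VERSION="release-2.28"/' /tmp/init-demo.sh
cat /tmp/init-demo.sh
# -> VERSION="release-2.28"
```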

bootstrap/GEMINI.md Normal file

@@ -0,0 +1,17 @@
# Purpose
This project configures a Kubernetes cluster utilizing Kubespray with a Vagrant-based development environment.
# Current task
Currently Kubespray `release-2.26` is used. I want you to:
* Read the changelog of Kubespray 2.27 here: https://github.com/kubernetes-sigs/kubespray/releases
* Analyze the changes in this new version 2.27.
* Modify the inventory files in "./inventory" to fit those changes.
* Modify the "init.sh" script and write "release-2.27" as the new version to be used.
# Folder structure
* `./init.sh` - Bootstrap script to set up the environment. The variable `release` defines the Kubespray version to be used.
* `./inventory/` - Directory containing inventory configurations for the Kubernetes cluster. It also contains variables for the Kubernetes version.
* `./kubespray/` - A checked-out clone of the Kubespray repository at a specific version. We do not edit files in this sub folder. The "inventory" folder from the current sub folder will be synced into this "kubespray" folder.


@@ -1,125 +0,0 @@
# AGENTS.md
> [!NOTE]
> This file describes the constraints and conventions for the `k8s` directory, which contains deployments for the **haumdaucher.de** Kubernetes cluster.
## Project Overview
This directory contains the Kubernetes manifests and Helm charts for a single-node Kubernetes cluster (Haumdaucher).
* **Domain**: `*.haumdaucher.de`
* **Orchestration**: Self-managed Kubernetes (single node).
* **Ingress**: `ingress-nginx`
* **SSL**: `cert-manager` (LetsEncrypt)
## Directory Structure
* **Top-level folders**: Each folder corresponds to a Kubernetes **namespace**.
* Example: `mailu/` contains resources for the `mailu` namespace.
* **Documentation**: `README.md` is the **authoritative source** for deployment commands. Always check it before running commands.
## Code Style & Conventions
* **Helm Version**: Helm 3 (`helm`) is used.
* **Implementation Order**: Top-down.
* **Naming**: Namespace names match folder names.
* **Formatting**: Standard YAML conventions.
## Security & Secrets
> [!IMPORTANT]
> **Git-Crypt is enforced.**
> Do not touch encrypted files unless you have the key and know how to unlock them.
**Encrypted File Patterns**:
* `*.secret`
* `*.secret.yaml`
* `*.secret.values`
* `*.secret.sh`
## Deployment Instructions
**Always consult `README.md` first.** Deployments vary between Helm charts and raw manifests.
### Common Patterns
* **Helm**:
```bash
helm upgrade --install <release> <chart> -n <namespace> -f <values-file>
```
* **Kubectl**:
```bash
kubectl apply -f <folder>/<file>.yaml
```
### Operational Tasks
* **Cleanup Error Pods**:
```bash
kubectl get pods | grep Error | cut -d' ' -f 1 | xargs kubectl delete pod
```
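The pipeline above extracts the first whitespace-delimited column (the pod name) from lines containing `Error`. On simulated `kubectl get pods` output (the pod names here are hypothetical), it behaves like this:

```shell
# Simulated `kubectl get pods` output; pod names are hypothetical
printf 'NAME READY STATUS RESTARTS AGE\nweb-0 1/1 Running 0 2d\njob-x 0/1 Error 3 1h\n' \
  | grep Error | cut -d' ' -f 1
# -> job-x
```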
## Ingress Configuration
Ingress resources **must** follow these strict conventions to work with the cluster's ingress controller (`nginx`) and certificate manager (`cert-manager`).
### Annotations
All Ingress resources must include:
```yaml
annotations:
  kubernetes.io/ingress.class: "nginx"
  cert-manager.io/cluster-issuer: "letsencrypt-prod"
  kubernetes.io/tls-acme: "true"
  # Standard nginx tweaks
  nginx.ingress.kubernetes.io/proxy-body-size: "0"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
```
### Hostnames & TLS
* **Domain**: Use a subdomain of `haumdaucher.de` or `moritzgraf.de`.
* **TLS Secret Name**: Must use **hyphens** instead of dots.
* Pattern: `<subdomain>-<domain>-<tld>`
* Example: `n8n.moritzgraf.de` -> `n8n-moritzgraf-de`
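The dot-to-hyphen mapping is mechanical and can be derived with `tr`:

```shell
# Derive the TLS secret name from a hostname (dots become hyphens)
echo "n8n.moritzgraf.de" | tr '.' '-'
# -> n8n-moritzgraf-de
```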
### Example
```yaml
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - n8n.moritzgraf.de
      secretName: n8n-moritzgraf-de
  rules:
    - host: n8n.moritzgraf.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n
                port:
                  number: 5678
```
## Storage / Persistence
The cluster uses **OpenEBS** for dynamic local storage provisioning.
### PersistentVolumeClaims (PVC)
* **Provisioner**: `openebs.io/local` (or similar, managed via `openebs-hostpath`).
* **StorageClass**: `openebs-hostpath`.
* **AccessMode**: Typically `ReadWriteOnce` (RWO) as it's local storage.
To request storage, simply create a PVC or configure Helm charts to use the default storage class (or explicitly `openebs-hostpath`).
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-data
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
## Deployment Constraints
* **Resources**: Always define `requests` and `limits` for CPU and Memory to ensure fair scheduling on the single node.
* **Namespaces**: Every application gets its own namespace.
* **Secrets**: Encrypt all secrets using `git-crypt`.
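A minimal container fragment satisfying the resource constraint might look like this (the container name, image, and values are purely illustrative):

```yaml
# Illustrative fragment: every container declares requests and limits
containers:
  - name: app                                    # hypothetical container name
    image: registry.haumdaucher.de/app:latest    # hypothetical image
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
```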


@@ -160,20 +160,8 @@ USER='moritz'
PASSWORD='xxx'
docker run --entrypoint htpasswd --rm registry:2 -Bbn $USER $PASSWORD
# #
# 1. Add the modern repo
helm repo add twuni https://twuni.github.io/docker-registry.helm
helm repo update
# 2. Install the new one
helm upgrade --install docker-registry twuni/docker-registry \
  --namespace development \
  --create-namespace \
  -f development/registry.secret.yaml
### 3. Verification
Once deployed, verify you can login from your local machine:
```bash
docker login registry.haumdaucher.de -u moritz
helm upgrade --install --create-namespace docker-registry stable/docker-registry -n development -f development/registry.secret.yaml
##kubectl apply -f development/registry.secret.yaml
```
### creating docker-pull-secret
@@ -181,14 +169,14 @@ docker login registry.haumdaucher.de -u moritz
Create credentials secret [according to docu](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line):
```sh
namespaces="datalab moritz web haumdaucher"
namespaces="datalab moritz web"
# the following is ONLY required to update the secret file!!
for i in $( echo $namespaces ) ; do
kubectl create secret docker-registry registry-haumdaucher-de \
-n $i \
--docker-server=registry.haumdaucher.de \
--docker-username=moritz \
--docker-password='xxxxxxx' \
--docker-password='xxx' \
--docker-email=moritz@moritzgraf.de \
--dry-run -o yaml > ./${i}/docker-pull.yaml.secret
done
@@ -364,7 +352,7 @@ helm repo update
helm upgrade --install robusta robusta/robusta -n robusta -f ./robusta/robusta.yaml
```
## gitea (old, no longer existent, do not use)
## gitea
In case my PRs have been accepted this is no longer necessary:
@@ -382,23 +370,6 @@ helm upgrade --install gitea ./gitea-chart -n development -f development/gitea.s
helm upgrade --install gitea-phpmyadmin bitnami/phpmyadmin -n development -f development/gitea-phpmyadmin.yml
```
## gitea (new setup)
Chart used: [https://gitea.com/gitea/helm-gitea](https://gitea.com/gitea/helm-gitea)
History: The old Git server was deleted manually and a fresh one deployed.
```sh
# 1. Add/Update Repo
helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update
# 2. Install
helm upgrade --install gitea gitea-charts/gitea \
  --namespace development \
  -f development/gitea.secret.yml
```
### backup & restore
See the backup cronjob in the `/backup/` folder.
@@ -734,7 +705,7 @@ Using helm chart from [https://github.com/8gears/n8n-helm-chart](https://github.
kubectl create ns n8n
helm upgrade --cleanup-on-fail --install mop-n8n \
oci://8gears.container-registry.com/library/n8n \
--namespace n8n --values n8n/n8n.secret.yml --version 2.0.1
--namespace n8n --values n8n/n8n.secret.yml --version 1.0.15
```
To verify the installation was correct, use the following command:


@@ -1,119 +1,42 @@
# --- Resource Optimization: Disable HA Clusters ---
postgresql-ha:
enabled: false
valkey-cluster:
enabled: false
# --- Lightweight Database (PostgreSQL) ---
postgresql:
enabled: true
global:
postgresql:
auth:
database: gitea
username: gitea
password: "eexai7ohHoameo3aefah" # <--- [1] DB Password
# Reduce DB resources for private use
primary:
resources:
requests:
cpu: 10m
memory: 128Mi
limits:
memory: 512Mi
persistence:
size: 5Gi
annotations:
"helm.sh/resource-policy": keep
enabled: true
storageClass: openebs-hostpath
accessMode: ReadWriteOnce
# --- Lightweight Cache (Valkey Standalone) ---
valkey:
enabled: true
architecture: standalone
global:
valkey:
password: "Aid0eiy1ohghoagahjo3" # <--- [2] Cache Password
master:
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
memory: 128Mi
persistence:
enabled: false # Ephemeral cache is fine for home use (saves disk I/O)
# --- Gitea Configuration ---
image:
tag: "1.21.5"
rootless: true
# Limit Gitea's own resources
resources:
gitea:
requests:
memory: 256Mi
memory: 200Mi
cpu: 100m
limits:
memory: 1Gi
cpu: 1000m
persistence:
mariadb:
enabled: true
storageClass: openebs-hostpath
size: 10Gi
accessModes:
- ReadWriteOnce
gitea:
admin:
username: "moritz"
password: "oongaeY9ohw4eith2Aiv" # <--- [3] Admin Password
email: "moritz@moritzgraf.de"
config:
security:
INSTALL_LOCK: true
SECRET_KEY: "eew5quoo3jeiPheeb7eereeTaik2Ieth" # <--- [4] Secret Key
server:
DOMAIN: git.moritzgraf.de
ROOT_URL: "https://git.moritzgraf.de/"
SSH_DOMAIN: git.moritzgraf.de
SSH_PORT: "2222" # External display port
SSH_LISTEN_PORT: "2222" # Internal container port
START_SSH_SERVER: true
# Connect to our standalone Valkey instance
# The default host for the subchart is usually: <release-name>-valkey-master
cache:
ADAPTER: redis
HOST: "redis://:Aid0eiy1ohghoagahjo3@gitea-valkey-master:6379/0" # <--- [2] Cache Password
session:
PROVIDER: redis
PROVIDER_CONFIG: "redis://:Aid0eiy1ohghoagahjo3@gitea-valkey-master:6379/0" # <--- [2] Cache Password
queue:
TYPE: redis
CONN_STR: "redis://:Aid0eiy1ohghoagahjo3@gitea-valkey-master:6379/0" # <--- [2] Cache Password
service:
ssh:
type: NodePort
port: 2222
targetPort: 2222
nodePort: 30222 # Open this port on your firewall/router if needed
rootUser:
password: chu6ohzat4zae2iPhuoy
db:
user: gitea
name: gitea
password: OohoX6vahsh1mahshujo
ingress:
enabled: true
className: nginx
certManager: true
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/proxy-body-size: "512m"
hosts:
- host: git.moritzgraf.de
paths:
- path: /
pathType: Prefix
- name: git.moritzgraf.de
tls:
- secretName: git-moritzgraf-de
hosts:
- git.moritzgraf.de
- hosts:
- "git.moritzgraf.de"
secretName: git-moritzgraf-de
service:
ssh:
serviceType: ClusterIP
port: 22
externalPort: 2222
externalHost: git.moritzgraf.de

Binary file not shown.


@@ -1,42 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docker-registry
  namespace: development
  annotations:
    # --- ADDED: Match the working configuration ---
    kubernetes.io/tls-acme: "true"
    # ----------------------------------------------
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: docker-registry
    meta.helm.sh/release-namespace: development
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  labels:
    app: docker-registry
    app.kubernetes.io/managed-by: Helm
    chart: docker-registry-1.9.2
    heritage: Helm
    release: docker-registry
spec:
  # --- ADDED: Critical for modern K8s ---
  ingressClassName: nginx
  # --------------------------------------
  rules:
    - host: registry.haumdaucher.de
      http:
        paths:
          - backend:
              service:
                name: docker-registry
                port:
                  number: 5000
            path: /
            # --- CHANGED: Recommended for consistency ---
            pathType: Prefix
            # --------------------------------------------
  tls:
    - hosts:
        - registry.haumdaucher.de
      secretName: registry-haumdaucher-de