AGENTS.md

[!NOTE] This file describes the constraints and conventions for the k8s directory, which contains deployments for the haumdaucher.de Kubernetes cluster.

Project Overview

This directory contains the Kubernetes manifests and Helm charts for a single-node Kubernetes cluster (Haumdaucher).

  • Domain: *.haumdaucher.de
  • Orchestration: Self-managed Kubernetes (single node).
  • Ingress: ingress-nginx
  • SSL: cert-manager (Let's Encrypt)

Directory Structure

  • Top-level folders: Each folder corresponds to a Kubernetes namespace.
    • Example: mailu/ contains resources for the mailu namespace.
  • Documentation: README.md is the authoritative source for deployment commands. Always check it before running commands.

Code Style & Conventions

  • Helm Version: Helm 3 (helm) is used.
  • Implementation Order: Top-down.
  • Naming: Namespace names match their folder names.
  • Formatting: Standard YAML conventions.

Security & Secrets

[!IMPORTANT] Git-Crypt is enforced. Do not touch encrypted files unless you have the key and know how to unlock them.

Encrypted File Patterns:

  • *.secret
  • *.secret.yaml
  • *.secret.values
  • *.secret.sh

Remote Access

It is possible to execute commands on the remote Linux node for information retrieval or troubleshooting.

  • Host: haumdaucher.de
  • User: moritz (local user)
  • Privileges: Use sudo to gain root privileges.

[!CAUTION] SSH Identity Required: The agent cannot enter an SSH passphrase. If SSH commands fail with authentication errors, request the user to run ssh-add locally to load their identity.

Command Execution

You can execute commands remotely via SSH. This is useful for checking node-level resources (memory, disk, etc.) that kubectl might not expose directly.

Example: Check Memory Usage

ssh moritz@haumdaucher.de "free -h"

Example: Check Disk Usage (with sudo)

ssh -t moritz@haumdaucher.de "sudo df -h"

Note: The -t flag forces pseudo-terminal allocation, which is often required for sudo prompts.

Deployment Instructions

Always consult README.md first. Deployments vary between Helm charts and raw manifests.

Common Patterns

  • Helm:
    helm upgrade --install <release> <chart> -n <namespace> -f <values-file>
    
  • Kubectl:
    kubectl apply -f <folder>/<file>.yaml
    

Post-Implementation Verification

[!IMPORTANT] Verification Workflow: After a new implementation or configuration change, always:

  1. Run kubectl apply -f <file>.yaml.
  2. Run kubectl rollout restart deployment <deployment-name> -n <namespace> if applying a ConfigMap/Secret that a deployment depends on.
  3. Wait for 30 seconds to allow pods to roll over.
  4. Check logs using kubectl logs -n <namespace> -l <label> --tail=100.

The agent must always ask the user for permission to execute this verification workflow after making changes.
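Once the user has approved, the workflow above can be run as a short sequence like this (app and label names are hypothetical placeholders):

```shell
# Hypothetical verification run for an app "myapp" in namespace "myapp"
kubectl apply -f myapp/configmap.yaml
kubectl rollout restart deployment myapp -n myapp
sleep 30   # allow pods to roll over
kubectl logs -n myapp -l app=myapp --tail=100
```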

Operational Tasks

  • Cleanup Error Pods:
    kubectl get pods -n <namespace> --no-headers | awk '$3 == "Error" {print $1}' | xargs -r kubectl delete pod -n <namespace>
    

Ingress Configuration

Ingress resources must follow these strict conventions to work with the cluster's ingress controller (nginx) and certificate manager (cert-manager).

Annotations

All Ingress resources must include:

annotations:
  kubernetes.io/ingress.class: "nginx"
  cert-manager.io/cluster-issuer: "letsencrypt-prod"
  kubernetes.io/tls-acme: "true"
  # Standard nginx tweaks
  nginx.ingress.kubernetes.io/proxy-body-size: "0"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"

Hostnames & TLS

  • Domain: Use a subdomain of haumdaucher.de or moritzgraf.de.
  • TLS Secret Name: Must use hyphens instead of dots.
    • Pattern: <subdomain>-<domain>-<tld>
    • Example: n8n.moritzgraf.de -> n8n-moritzgraf-de
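The secret name can be derived mechanically from the hostname by replacing dots with hyphens:

```shell
# Derive the TLS secret name from a hostname
host="n8n.moritzgraf.de"
echo "$host" | tr '.' '-'
# → n8n-moritzgraf-de
```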

Example

spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - n8n.moritzgraf.de
      secretName: n8n-moritzgraf-de
  rules:
    - host: n8n.moritzgraf.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n
                port:
                  number: 5678

Storage / Persistence

The cluster uses OpenEBS for dynamic local storage provisioning.

PersistentVolumeClaims (PVC)

  • Provisioner: openebs.io/local (used by the openebs-hostpath StorageClass).
  • StorageClass: openebs-hostpath.
  • AccessMode: Typically ReadWriteOnce (RWO) as it's local storage.

To request storage, create a PVC or configure the Helm chart to use the default storage class (or explicitly openebs-hostpath).

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-data
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Deployment Constraints

  • Resources: Always define requests and limits for CPU and Memory to ensure fair scheduling on the single node.
  • Namespaces: Every application gets its own namespace.
  • Secrets: Encrypt all secrets using git-crypt.
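For the resources constraint, a container spec would carry a block like the following (the values here are illustrative placeholders; size them per workload):

```yaml
# Hypothetical requests/limits for a small workload on the single node
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```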