Compare commits


No commits in common. "076cc96f656293fd5b8135427e5816c5e6b780a4" and "74f1b4e3396aa8b1894a5be71e69b7fed2ad33fc" have entirely different histories.

23 changed files with 398 additions and 333 deletions

View File

@@ -1,15 +0,0 @@
{
"mcpServers": {
"kubespray-changelog": {
"command": "./bin/get_kubespray_changelog.py",
"args": [],
"trust": true
}
},
"general": {
"preferredEditor": "vscode"
},
"ui": {
"theme": "Default"
}
}

View File

@@ -1,16 +0,0 @@
{
"theme": "Default",
"preferredEditor": "vscode",
"mcpServers": {
//======================================================================
// *** FOR YOUR KUBESPRAY GOAL ***
// This is the config for the custom Python script we discussed.
// This is the tool that will *actually* search changelogs.
//======================================================================
"kubespray-changelog": {
"command": "./bin/get_kubespray_changelog.py",
"args": [],
"trust": true
}
}
}

View File

@@ -1,17 +0,0 @@
# Purpose
This project configures a Kubernetes cluster utilizing Kubespray with a Vagrant-based development environment.
# Current task
Currently, Kubespray `release-2.26` is used. I want you to:
* Read the changelog of Kubespray 2.27 here: https://github.com/kubernetes-sigs/kubespray/releases
* Analyze the changes in the new version 2.27.
* Modify the inventory files in "./inventory" to fit those changes.
* Modify the "init.sh" script and set "release-2.27" as the new version to be used.
# Folder structure
* `./init.sh` - Bootstrap script to set up the environment. The variable `release` defines the Kubespray version to be used.
* `./inventory/` - Directory containing the inventory configuration for the Kubernetes cluster. It also contains variables for the Kubernetes version.
* `./kubespray/` - A checked-out clone of the Kubespray repository at a specific version. We do not edit files in this subfolder. The "inventory" folder from the current folder will be synced into this "kubespray" folder (a sketch of this bootstrap flow follows).
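
As a quick reference, the bootstrap flow described by these bullets can be sketched as follows. This is a hypothetical sketch only: the real `init.sh` appears later in this diff, and the exact command used to sync the inventory is an assumption.
```sh
#!/bin/bash
# Hypothetical bootstrap sketch (not the actual init.sh shown later in this diff).
release="release-2.26"   # bump to "release-2.27" per the task described above

# Re-clone Kubespray at the pinned release.
rm -rf kubespray
git clone --branch "$release" https://github.com/kubernetes-sigs/kubespray.git

# Sync the local inventory into the freshly cloned Kubespray tree
# (rsync is illustrative; a plain cp -r would work as well).
rsync -a inventory/ kubespray/inventory/
```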

View File

@@ -103,12 +103,3 @@ This runs everything and is kind of idempotent:
ansible-playbook -i inventory/prod/inventory.ini cluster.yml
```
## Upgrade to 2.31.3
This upgrade required executing:
```
ansible-playbook -i inventory/prod/inventory.ini -e upgrade_cluster_setup=true -e drain_nodes=false upgrade-cluster.yml
```
+ set a feature flag: https://github.com/kubernetes-sigs/kubespray/issues/11887

View File

@@ -1,65 +0,0 @@
# Understanding and Resolving Kubernetes Certificate Expiration in Kubespray
## Introduction: The Role of Certificates in Kubernetes
A Kubernetes cluster relies heavily on TLS certificates to secure communication between its various components. The API server, controller manager, scheduler, etcd, and kubelets all use certificates to authenticate and encrypt traffic. These certificates are issued with a specific validity period (usually one year) for security reasons. When they expire, components can no longer trust each other, leading to a cluster-wide failure.
The error message you encountered is a classic symptom of this problem:
```
E1116 13:47:01.271977 ... failed to verify certificate: x509: certificate has expired or is not yet valid
```
This indicates that `kubectl` (and other components) could not validate the certificate presented by the Kubernetes API server because the current date was past the certificate's expiration date.
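One way to confirm the symptom on the control plane node (a minimal check, assuming the standard kubeadm certificate layout under `/etc/kubernetes/pki`) is to inspect the API server certificate directly:
```sh
# Print the expiry date of the API server certificate (standard kubeadm path).
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
```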
## The Core Problem: A "Chicken-and-Egg" Deadlock
When the certificates expired, the initial and correct instinct was to use Kubespray's provided automation to fix it. In your version of Kubespray, the `upgrade-cluster.yml` playbook is the designated tool for this job, as it includes tasks to regenerate certificates.
However, this approach led to a deadlock, manifesting as a timeout during the "Create kubeadm token for joining nodes" task. Here's a breakdown of why this happened:
1. **API Server is Down:** The primary certificate for the Kubernetes API server (`apiserver.crt`) had expired. This prevented the API server from starting correctly and serving traffic on its secure port (6443).
2. **Playbook Needs the API Server:** The `upgrade-cluster.yml` playbook, specifically the `kubeadm` tasks within it, needs to communicate with a healthy Kubernetes API server to perform its functions. To create a join token for other nodes, `kubeadm` must make a request to the API server.
3. **The Deadlock:** The playbook was trying to connect to the API server to fix the certificates, but it couldn't connect precisely *because* the certificates were already expired and the API server was unhealthy. This created a "chicken-and-egg" scenario where the automated solution couldn't run because the problem it was meant to fix was preventing it from running.
## The Solution: Manual Intervention on the Control Plane
To break this deadlock, we had to manually restore the core health of the control plane on the master node (`haumdaucher`) *before* letting the automation take over again. The process involved SSHing into the master node and using the `kubeadm` command-line tool to regenerate the essential certificates and configuration files.
Here is a detailed look at the commands executed and why they were necessary (a condensed shell sketch follows the list):
1. **`sudo -i`**
* **What it does:** Switches to the `root` user.
* **Why:** Modifying files in `/etc/kubernetes/` requires root privileges.
2. **`mv /etc/kubernetes/pki /etc/kubernetes/pki.backup-...`**
* **What it does:** Backs up the directory containing all the existing (and expired) cluster certificates.
* **Why:** This is a critical safety measure. If the manual renewal process failed, we could restore the original state to diagnose the problem further.
3. **`kubeadm init phase certs all`**
* **What it does:** This is the core of the manual fix. It tells `kubeadm` to execute *only* the certificate generation phase of the cluster initialization process. It creates a new Certificate Authority (CA) and uses it to sign a fresh set of certificates for all control plane components (API server, controller-manager, scheduler, etcd).
* **Why:** This directly replaces the expired certificates with new, valid ones, allowing the API server and other components to trust each other again.
4. **`mv /etc/kubernetes/*.conf /etc/kubernetes/*.conf.backup-...`**
* **What it does:** Backs up the kubeconfig files used by the administrator (`admin.conf`) and the control plane components.
* **Why:** These files contain embedded client certificates and keys for authentication. Since we just created new certificates, we also need to generate new kubeconfig files that use them. Backing them up is a standard precaution.
5. **`kubeadm init phase kubeconfig all`**
* **What it does:** This command generates new kubeconfig files (`admin.conf`, `kubelet.conf`, etc.) that use the newly created certificates from the previous step.
* **Why:** This ensures that all components, as well as the administrator using `kubectl` on the node, can successfully authenticate to the now-healthy API server.
6. **`systemctl restart kubelet`**
* **What it does:** Restarts the kubelet service on the master node.
* **Why:** The kubelet is the agent that runs on each node and is responsible for managing pods. It needs to be restarted to load its new configuration (`kubelet.conf`) and re-establish a secure connection to the API server using the new certificates.
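For reference, the whole manual intervention condenses to the commands below, run interactively on the control plane node. This is a minimal sketch: the date-stamped backup suffix is an illustrative choice (the suffix used in the original session is elided above), and it assumes the standard kubeadm layout under `/etc/kubernetes`.
```sh
sudo -i                                     # step 1: become root
STAMP=$(date +%Y%m%d-%H%M%S)                # illustrative backup suffix

# Step 2: back up the expired certificates.
mv /etc/kubernetes/pki "/etc/kubernetes/pki.backup-${STAMP}"

# Step 3: regenerate the CA and all control plane certificates.
kubeadm init phase certs all

# Step 4: back up the kubeconfig files that embed the old client certificates.
for f in /etc/kubernetes/*.conf; do mv "$f" "$f.backup-${STAMP}"; done

# Step 5: regenerate admin.conf, kubelet.conf, etc. against the new certificates.
kubeadm init phase kubeconfig all

# Step 6: restart the kubelet so it picks up the new configuration.
systemctl restart kubelet
```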
## Why the Solution Works
By performing these manual steps, we effectively gave the control plane a "jump-start." We manually created the valid certificates and configuration files needed for the API server to start up successfully.
Once the API server was healthy and listening, the `upgrade-cluster.yml` playbook could be re-run. This time, when the `kubeadm` tasks within the playbook tried to connect to the API server to create join tokens, the connection succeeded. The playbook was then able to complete its remaining tasks, ensuring all nodes in the cluster were properly configured and joined.
## Future Prevention and Best Practices
1. **Monitor Certificate Expiration:** Use tools like `kubeadm certs check-expiration` (shown after this list) or monitoring solutions (e.g., Prometheus with `kube-state-metrics`) to track certificate expiry dates proactively.
2. **Consider Upgrading Kubespray:** Newer versions of Kubespray may include a dedicated `renew-certs.yml` playbook. This playbook is designed for certificate rotation specifically and is less disruptive than a full `upgrade-cluster.yml`, as it typically avoids rotating service account keys.
3. **Understand the Manual Process:** Keeping this guide handy will allow you to quickly resolve similar deadlocks in the future without extensive troubleshooting. The key is recognizing that automation sometimes needs a manual boost when the system it's trying to fix is too broken to respond.
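For point 1, the expiration check referenced above looks like this when run directly on a kubeadm-managed control plane node:
```sh
# Lists every kubeadm-managed certificate with its expiry date and signing CA.
sudo kubeadm certs check-expiration
```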

View File

@@ -6,7 +6,7 @@
echo "######################################################################################"
echo "## Reinit repository"
rm -rf kubespray
VERSION="release-2.27"
VERSION="release-2.26"
git clone --branch $VERSION https://github.com/kubernetes-sigs/kubespray.git
echo "######################################################################################"

View File

@@ -27,7 +27,7 @@ kube_kubeadm_apiserver_extra_args:
kube_api_anonymous_auth: true
## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.31.9
kube_version: v1.29.5
# kubernetes image repo define
#kube_image_repo: "k8s.gcr.io"

View File

@@ -0,0 +1,78 @@
# see roles/network_plugin/calico/defaults/main.yml
## With Calico it is possible to distribute routes with the border routers of the datacenter.
## Warning: enabling router peering will disable calico's default behavior ('node mesh').
## The subnets of each node will be distributed by the datacenter router
# peer_with_router: false
# Enables Internet connectivity from containers
# nat_outgoing: true
# add default ippool name
# calico_pool_name: "default-pool"
# add default ippool blockSize (defaults kube_network_node_prefix)
# calico_pool_blocksize: 24
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5
# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"
# You can set MTU value here. If left undefined or empty, it will
# not be specified in calico CNI config, so Calico will use built-in
# defaults. The value should be a number, not a string.
# calico_mtu: 1500
# Configure the MTU to use for workload interfaces and tunnels.
# - If Wireguard is enabled, set to your network MTU - 60
# - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50
# - Otherwise, if IPIP is enabled, set to your network MTU - 20
# - Otherwise, if not using any encapsulation, set to your network MTU.
# calico_veth_mtu: 1440
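# Worked example (illustrative, not part of the upstream defaults): on a standard
# 1500-byte Ethernet network with the default IPIP encapsulation this would be
# 1500 - 20 = 1480; with VXLAN it would be 1500 - 50 = 1450.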
# Advertise Cluster IPs
# calico_advertise_cluster_ips: true
# Choose data store type for calico: "etcd" or "kdd" (kubernetes datastore)
# calico_datastore: "etcd"
# Choose Calico iptables backend: "Legacy", "Auto" or "NFT"
#calico_iptables_backend: "NFT"
# Use typha (only with kdd)
# typha_enabled: false
# Generate TLS certs for secure typha<->calico-node communication
# typha_secure: false
# Scaling typha: 1 replica per 100 nodes is adequate
# Number of typha replicas
# typha_replicas: 1
# Set max typha connections
# typha_max_connections_lower_limit: 300
# Set calico network backend: "bird", "vxlan" or "none"
# bird enable BGP routing, required for ipip mode.
# calico_network_backend: bird
# IP in IP and VXLAN are mutually exclusive modes.
# set IP in IP encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_ipip_mode: 'Always'
# set VXLAN encapsulation mode: "Always", "CrossSubnet", "Never"
# calico_vxlan_mode: 'Never'
# If you want to use non default IP_AUTODETECTION_METHOD for calico node set this option to one of:
# * can-reach=DESTINATION
# * interface=INTERFACE-REGEX
# see https://docs.projectcalico.org/reference/node/configuration
# calico_ip_auto_method: "interface=eth.*"
# Choose the iptables insert mode for Calico: "Insert" or "Append".
# calico_felix_chaininsertmode: Insert
# If you want to use the default route interface when you use multiple interfaces with dynamic routes (iproute2)
# see https://docs.projectcalico.org/reference/node/configuration : FELIX_DEVICEROUTESOURCEADDRESS
# calico_use_default_route_src_ipaddr: false

View File

@@ -0,0 +1,10 @@
# see roles/network_plugin/canal/defaults/main.yml
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
# canal_iface: ""
# Whether or not to masquerade traffic to destinations not within
# the pod network.
# canal_masquerade: "true"

View File

@@ -0,0 +1 @@
# see roles/network_plugin/cilium/defaults/main.yml

View File

@@ -0,0 +1,20 @@
# see roles/network_plugin/contiv/defaults/main.yml
# Forwarding mode: bridge or routing
# contiv_fwd_mode: routing
## With contiv, L3 BGP mode is possible by setting contiv_fwd_mode to "routing".
## In this case, you may need to peer with an uplink
## NB: The hostvars must contain a key "contiv" whose value is a dict containing "router_ip", "as" (defaults to contiv_global_as), "neighbor_as" (defaults to contiv_global_neighbor_as), "neighbor"
# contiv_peer_with_uplink_leaf: false
# contiv_global_as: "65002"
# contiv_global_neighbor_as: "500"
# Fabric mode: aci, aci-opflex or default
# contiv_fabric_mode: default
# Default netmode: vxlan or vlan
# contiv_net_mode: vxlan
# Dataplane interface
# contiv_vlan_interface: ""

View File

@@ -0,0 +1,18 @@
# see roles/network_plugin/flannel/defaults/main.yml
## interface that should be used for flannel operations
## This is actually an inventory cluster-level item
# flannel_interface:
## Select interface that should be used for flannel operations by regexp on Name or IP
## This is actually an inventory cluster-level item
## example: select interface with ip from net 10.0.0.0/23
## single quote and escape backslashes
# flannel_interface_regexp: '10\\.0\\.[0-2]\\.\\d{1,3}'
# You can choose what type of flannel backend to use: 'vxlan' or 'host-gw'
# for experimental backend
# please refer to flannel's docs : https://github.com/coreos/flannel/blob/master/README.md
# flannel_backend_type: "vxlan"
# flannel_vxlan_vni: 1
# flannel_vxlan_port: 8472

View File

@@ -0,0 +1,61 @@
# See roles/network_plugin/kube-router/defaults/main.yml
# Enables Pod Networking -- Advertises and learns the routes to Pods via iBGP
# kube_router_run_router: true
# Enables Network Policy -- sets up iptables to provide ingress firewall for pods
# kube_router_run_firewall: true
# Enables Service Proxy -- sets up IPVS for Kubernetes Services
# see docs/kube-router.md "Caveats" section
# kube_router_run_service_proxy: false
# Add Cluster IP of the service to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_cluster_ip: false
# Add External IP of service to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_external_ip: false
# Add LoadBalancer IP of service status as set by the LB provider to the RIB so that it gets advertised to the BGP peers.
# kube_router_advertise_loadbalancer_ip: false
# Adjust manifest of kube-router daemonset template with DSR needed changes
# kube_router_enable_dsr: false
# Array of arbitrary extra arguments to kube-router, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md
# kube_router_extra_args: []
# ASN numbers of the BGP peer to which cluster nodes will advertise cluster ip and node's pod cidr.
# kube_router_peer_router_asns: ~
# The ip address of the external router to which all nodes will peer and advertise the cluster ip and pod cidr's.
# kube_router_peer_router_ips: ~
# The remote port of the external BGP to which all nodes will peer. If not set, default BGP port (179) will be used.
# kube_router_peer_router_ports: ~
# Sets up node CNI to allow hairpin mode, requires node reboots, see
# https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md#hairpin-mode
# kube_router_support_hairpin_mode: false
# Select DNS Policy ClusterFirstWithHostNet, ClusterFirst, etc.
# kube_router_dns_policy: ClusterFirstWithHostNet
# Array of annotations for master
# kube_router_annotations_master: []
# Array of annotations for every node
# kube_router_annotations_node: []
# Array of common annotations for every node
# kube_router_annotations_all: []
# Enables scraping kube-router metrics with Prometheus
# kube_router_enable_metrics: false
# Path to serve Prometheus metrics on
# kube_router_metrics_path: /metrics
# Prometheus metrics port to use
# kube_router_metrics_port: 9255

View File

@@ -0,0 +1,58 @@
# see roles/network_plugin/weave/defaults/main.yml
# Weave's network password for encryption, if null then no network encryption.
# weave_password: ~
# If set to 1, disable checking for new Weave Net versions (default is blank,
# i.e. check is enabled)
# weave_checkpoint_disable: false
# Soft limit on the number of connections between peers. Defaults to 100.
# weave_conn_limit: 100
# Weave Net defaults to enabling hairpin on the bridge side of the veth pair
# for containers attached. If you need to disable hairpin, e.g. your kernel is
# one of those that can panic if hairpin is enabled, then you can disable it by
# setting `HAIRPIN_MODE=false`.
# weave_hairpin_mode: true
# The range of IP addresses used by Weave Net and the subnet they are placed in
# (CIDR format; default 10.32.0.0/12)
# weave_ipalloc_range: "{{ kube_pods_subnet }}"
# Set to 0 to disable Network Policy Controller (default is on)
# weave_expect_npc: "{{ enable_network_policy }}"
# List of addresses of peers in the Kubernetes cluster (default is to fetch the
# list from the api-server)
# weave_kube_peers: ~
# Set the initialization mode of the IP Address Manager (defaults to consensus
# amongst the KUBE_PEERS)
# weave_ipalloc_init: ~
# Set the IP address used as a gateway from the Weave network to the host
# network - this is useful if you are configuring the addon as a static pod.
# weave_expose_ip: ~
# Address and port that the Weave Net daemon will serve Prometheus-style
# metrics on (defaults to 0.0.0.0:6782)
# weave_metrics_addr: ~
# Address and port that the Weave Net daemon will serve status requests on
# (defaults to disabled)
# weave_status_addr: ~
# Weave Net defaults to 1376 bytes, but you can set a smaller size if your
# underlying network has a tighter limit, or set a larger size for better
# performance if your network supports jumbo frames (e.g. 8916)
# weave_mtu: 1376
# Set to 1 to preserve the client source IP address when accessing Service
# annotated with `service.spec.externalTrafficPolicy=Local`. The feature works
# only with Weave IPAM (default).
# weave_no_masq_local: true
# Extra variables that are passed to launch.sh, useful for enabling seed mode, see
# https://www.weave.works/docs/net/latest/tasks/ipam/ipam/
# weave_extra_args: ~

View File

@@ -3,16 +3,35 @@
# ## We should set etcd_member_name for the etcd cluster. Nodes that are not etcd members do not need to set the value, or can set an empty string value.
[all]
haumdaucher
#jetson1.dyndns.moritzgraf.de
# container_manager=containerd resolvconf_mode=docker_dns
[kube_control_plane]
#ns3088070.ip-37-59-40.eu ansible_host=37.59.40.95 ansible_become=yes ansible_become_method=sudo ansible_python_interpreter=/usr/bin/python3
#ns3100058.ip-37-59-61.eu ansible_host=37.59.61.198 ansible_become=yes ansible_become_method=sudo ansible_python_interpreter=/usr/bin/python3
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
haumdaucher
#ns3088070.ip-37-59-40.eu
[etcd]
haumdaucher
[kube_node]
[kube-node]
haumdaucher
#jetson1.dyndns.moritzgraf.de
[k8s_cluster:children]
kube_control_plane
kube_node
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr

View File

@@ -710,31 +710,6 @@ oci://8gears.container-registry.com/library/n8n \
To verify the installation was correct, use the following command:
```sh
helm get manifest mop-n8n -n n8n | less
```
Apply the garth mcp server:
```sh
kubectl apply -f n8n/garmin-mcp.yaml
```
Generate token:
```sh
uvx garth login
#login with user + password + token
#take the output and put it in garth_token.txt
kubectl create secret generic garth-token-secret --from-file=GARTH_TOKEN=./garth_token.txt -n n8n
```
## n8n-fabi
```sh
kubectl create ns n8n-fabi
helm upgrade --cleanup-on-fail --install fabi-n8n \
oci://8gears.container-registry.com/library/n8n \
--namespace n8n-fabi --values n8n-fabi/n8n-fabi.secret.yml --version 1.0.15
```

View File

@@ -1,110 +0,0 @@
#small deployment with nodeport for local testing or small deployments
image:
repository: n8nio/n8n
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
tag: "stable"
main:
config:
generic:
timezone: Europe/Berlin
n8n:
editor_base_url: https://n8n-fabi.moritzgraf.de
webhook_url: https://n8n-fabi.moritzgraf.de
extra:
node_modules:
- axios
node:
function_allow_builtin: '*'
function_allow_external: '*'
db:
type: postgresdb
postgresdb:
host: db-rw
user: n8n
# password: password is read from cnpg db-app secretKeyRef
# Moritz: Assuming the db-app secret is created by cnpg operator
pool:
size: 10
ssl:
enabled: true
reject_Unauthorized: true
ca_file: "/home/ssl/certs/postgresql/ca.crt"
secret:
n8n:
encryption_key: "ephikoaloVeev7xaiz5sheig9ieZaNgeihaCaiTh5ahqua5Aelanu8eicooy"
extraEnv:
DB_POSTGRESDB_PASSWORD:
valueFrom:
secretKeyRef:
name: db-app
key: password
# Mount the CNPG CA Cert into N8N container
extraVolumeMounts:
- name: db-ca-cert
mountPath: /home/ssl/certs/postgresql
readOnly: true
extraVolumes:
- name: db-ca-cert
secret:
secretName: db-ca
items:
- key: ca.crt
path: ca.crt
resources:
limits:
memory: 2048Mi
requests:
memory: 512Mi
service:
type: NodePort
port: 5678
ingress:
# Enable ingress for home assistant
enabled: true
className: "nginx"
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-buffering: "off"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
hosts:
- host: n8n-fabi.moritzgraf.de
paths:
- /
tls:
- hosts:
- "n8n-fabi.moritzgraf.de"
secretName: n8n-fabi-moritzgraf-de
# cnpg DB cluster request
extraManifests:
- apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: db
spec:
instances: 1
bootstrap:
initdb:
database: n8n
owner: n8n
postgresql:
parameters:
shared_buffers: "64MB"
resources:
requests:
memory: "512Mi"
limits:
memory: "512Mi"
storage:
size: 1Gi

View File

@@ -0,0 +1,65 @@
# --- Garmin Token Generator Pod ---
# This pod is a temporary, interactive tool to generate Garmin Connect session tokens.
#
# --- WORKFLOW ---
# 1. Ensure the `garmin-credentials` secret is applied in the `n8n` namespace.
# 2. Apply this manifest: `kubectl apply -f token-generator-pod.yaml`
# 3. View the logs to see instructions: `kubectl logs -n n8n -f garmin-token-generator`
# 4. Exec into the pod and run the login command provided in the logs.
# 5. Retrieve the generated token data from within the pod.
# 6. Create the `garmin-tokens` Kubernetes secret.
# 7. Delete this pod: `kubectl delete pod -n n8n garmin-token-generator`
#
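# Hypothetical sketch of steps 5-7 above (the exact commands are an assumption;
# only the pod name, namespace, secret name and token path come from this file):
#   kubectl cp n8n/garmin-token-generator:/root/.garminconnect ./garminconnect-tokens
#   kubectl create secret generic garmin-tokens -n n8n --from-file=./garminconnect-tokens
#   kubectl delete pod -n n8n garmin-token-generator
#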
apiVersion: v1
kind: Pod
metadata:
name: garmin-token-generator
namespace: n8n
spec:
volumes:
# This is temporary storage that will be deleted when the pod is deleted.
# We use it to hold the generated token files.
- name: garmin-connect-storage
emptyDir: {}
containers:
- name: helper
image: python:3.12-slim
# Mount the garmin-credentials secret as environment variables
envFrom:
- secretRef:
name: garmin-credentials
# The main command installs dependencies, prints instructions, and then sleeps.
# This keeps the pod running so you can exec into it.
command: ["/bin/sh", "-c"]
args:
- |
set -e
echo "--- Garmin Token Generator Pod ---"
echo "[Step 1/3] Installing dependencies (git and uv)..."
# Set debconf to non-interactive mode to prevent installation prompts and fix warnings.
export DEBIAN_FRONTEND=noninteractive
# Redirect noisy output to /dev/null
apt-get update > /dev/null && apt-get install -y git --no-install-recommends > /dev/null && rm -rf /var/lib/apt/lists/*
pip install uv > /dev/null
echo "[Step 2/3] Setup complete. Pod is ready for interactive login."
echo "------------------------------------------------------------------"
echo "ACTION REQUIRED:"
echo "1. Ensure the 'garmin-credentials' secret has been applied to this namespace."
echo "2. Open a new terminal."
echo "3. Connect to this pod by running:"
echo " kubectl exec -it -n n8n garmin-token-generator -- /bin/sh"
echo ""
echo "4. Inside the pod's shell, run the following simplified command:"
echo " uvx --from garminconnect gcexport --username \"$GARMIN_EMAIL\" --password \"$GARMIN_PASSWORD\""
echo ""
echo "5. Follow the prompts to enter your MFA code."
echo "6. Once successful, the tokens will be saved to /root/.garminconnect"
echo "------------------------------------------------------------------"
echo "[Step 3/3] Sleeping indefinitely. This pod will remain running until you delete it."
sleep infinity
volumeMounts:
# Mount the temporary storage at the path where the library saves tokens.
- name: garmin-connect-storage
mountPath: "/root/.garminconnect"

View File

@@ -1,90 +1,84 @@
# --- 1. Secret to hold your Garmin Connect token ---
# You must create this secret before applying the rest of the manifest.
# Replace 'your_base64_encoded_token_here' with your actual token encoded in Base64.
# To encode your token, run: echo -n 'your_token_from_login' | base64
# apiVersion: v1
# kind: Secret
# metadata:
# name: garth-mcp-secret
# namespace: default
# type: Opaque
# data:
# # This key MUST be GARTH_TOKEN to match the application's environment variable
# GARTH_TOKEN: your_base64_encoded_token_here
---
# deployment.yaml
# --- Garmin MCP Server Deployment ---
# This manifest defines the desired state for the Garmin MCP server.
# It uses a standard Python image and installs the necessary tools and
# the application itself upon startup.
apiVersion: apps/v1
kind: Deployment
metadata:
name: garth-mcp-server
name: garmin-mcp-server
namespace: n8n
labels:
app: garth-mcp-server
app: garmin-mcp-server
spec:
replicas: 1
selector:
matchLabels:
app: garth-mcp-server
app: garmin-mcp-server
template:
metadata:
labels:
app: garth-mcp-server
app: garmin-mcp-server
spec:
containers:
- name: garth-mcp-server
# Use a Python image version >= 3.13 as requested.
image: python:3.13-slim
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
# This command now installs dependencies and directly executes the mounted script.
- name: server
# Use a standard, slim Python image as the base.
image: python:3.12-slim
# Override the default command to perform setup and then run the server.
command: ["/bin/sh", "-c"]
args: [
"pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir uv mcp-proxy && \
echo '--- Setup complete, starting server ---' && \
mcp-proxy --host=0.0.0.0 --port=8080 --pass-environment uvx garth-mcp-server"
]
args:
- |
set -e
echo "Step 1/4: Installing git client..."
# Update package lists and install git. --no-install-recommends keeps it minimal.
# rm -rf cleans up the apt cache to keep the running container lean.
apt-get update && apt-get install -y git --no-install-recommends && rm -rf /var/lib/apt/lists/*
echo "Step 2/4: Installing uv package manager..."
pip install uv
echo "Step 3/4: Launching server via uvx (will install from git)..."
# 'uvx' runs a command from a temporary environment.
# We specify the git repository to install the package from.
# The '--host 0.0.0.0' flag is crucial to ensure the server
# is accessible from outside its container within the cluster network.
uvx --from git+https://github.com/Taxuspt/garmin_mcp garmin-mcp --host 0.0.0.0 --port 8000
echo "Step 4/4: Server has been started."
ports:
- containerPort: 8080
name: http
# Inject the Garmin token securely from the Kubernetes Secret.
- name: http
containerPort: 8000
protocol: TCP
# Mount the Garmin credentials from the Kubernetes secret as environment variables.
envFrom:
- secretRef:
name: garth-token-secret
# # Health probes for Kubernetes to manage the pod's lifecycle.
# livenessProbe:
# tcpSocket:
# port: 8080
# initialDelaySeconds: 15
# periodSeconds: 20
# readinessProbe:
# tcpSocket:
# port: 8080
# initialDelaySeconds: 60
# periodSeconds: 10
name: garmin-credentials
# Basic readiness probe to ensure the service is not marked ready until the app is listening.
readinessProbe:
tcpSocket:
port: 8000
initialDelaySeconds: 25 # Increased delay to account for git installation
periodSeconds: 10
---
# --- 3. Service to expose the Deployment ---
# This creates a stable internal endpoint for the server.
# --- Garmin MCP Server Service ---
# This manifest creates a stable internal endpoint (ClusterIP service)
# for the Garmin MCP server deployment. Your n8n instance will use this
# service's DNS name to communicate with the server.
apiVersion: v1
kind: Service
metadata:
name: garth-mcp-service
name: garmin-mcp-service
namespace: n8n
spec:
# This service will be of type ClusterIP, only reachable from within the cluster.
type: ClusterIP
selector:
app: garth-mcp-server
# This selector must match the labels of the pods created by the Deployment.
app: garmin-mcp-server
ports:
- name: http
protocol: TCP
# The port the service will be available on within the cluster
# The port the service will be available on within the cluster.
port: 80
# The port on the container that the service will forward traffic to
targetPort: 8080
# ClusterIP is the default, but we're explicit here.
# This service is only reachable from within the Kubernetes cluster.
type: ClusterIP
# The port on the container that the service will forward traffic to.
targetPort: 8000

BIN
k8s/n8n/garmin.secret.yaml Normal file

Binary file not shown.

View File

@@ -1,10 +1,9 @@
#small deployment with nodeport for local testing or small deployments
image:
repository: n8nio/n8n
tag: 1.123.4
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
#tag: "stable"
tag: "stable"
main:
@@ -81,8 +80,7 @@ ingress:
hosts:
- host: n8n.moritzgraf.de
paths:
- path: /
pathType: Prefix
- /
tls:
- hosts:
- "n8n.moritzgraf.de"