# Lima k3s Staging Environment

## Overview
A local Lima VM running k3s validates the full cloud deployment workflow (GHCR image pulls, Kustomize overlays, non-Docker-Desktop storage) before moving to AWS EC2 in Phase 4. This catches provisioning issues at zero AWS cost.
Constraint: Existing Lima VMs (`node1`, `node2`, `node3`) must not be touched.
## VM Configuration
| Parameter | Value |
|---|---|
| VM name | galaxy-staging |
| OS | Ubuntu 22.04 LTS |
| CPUs | 4 |
| Memory | 4 GiB |
| Disk | 20 GiB |
| Architecture | native (amd64 or arm64) |
| Config file | lima/galaxy-staging.yaml |
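
The table above might translate into a Lima template along these lines. This is an illustrative sketch using Lima's standard schema, not the project's actual `lima/galaxy-staging.yaml`; the image URL and the provision wiring are assumptions:

```yaml
# Sketch of lima/galaxy-staging.yaml (assumed content)
images:
  - location: "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img"
    arch: "x86_64"
cpus: 4
memory: "4GiB"
disk: "20GiB"
provision:
  - mode: system
    script: |
      #!/bin/sh
      # Delegates to the reusable provisioning script described below.
      exec /path/to/scripts/provision-k3s.sh
```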
## k3s Provisioning

k3s is installed via the Lima `provision` block, which calls the reusable provisioning script `scripts/provision-k3s.sh`.
### Installation flags
| Flag | Purpose |
|---|---|
| `--disable=traefik` | Project uses NodePort, not Ingress |
| `--write-kubeconfig-mode=644` | Readable for extraction |
### Provisioning script

`scripts/provision-k3s.sh` installs k3s with a configurable version and server arguments, then waits for the node to report Ready. It is reusable for the Packer AMI build (Phase 4).
Environment variables:

- `K3S_VERSION` (optional): pin the k3s version (default: latest stable)
- `K3S_EXTRA_ARGS` (optional): additional k3s server flags
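
A minimal sketch of how the script might wire these together. The function names here are illustrative, not the script's actual contents; `INSTALL_K3S_VERSION` and `INSTALL_K3S_EXEC` are the official k3s install-script variables:

```shell
# Sketch of scripts/provision-k3s.sh (assumed structure)

# Compose the server flags from the table above plus any extras.
build_k3s_exec() {
  printf '%s' "server --disable=traefik --write-kubeconfig-mode=644${K3S_EXTRA_ARGS:+ $K3S_EXTRA_ARGS}"
}

provision_k3s() {
  # Empty INSTALL_K3S_VERSION means "latest stable" to the installer.
  curl -sfL https://get.k3s.io |
    INSTALL_K3S_VERSION="${K3S_VERSION:-}" \
    INSTALL_K3S_EXEC="$(build_k3s_exec)" sh -

  # Block until the single node reports Ready.
  until k3s kubectl get node | grep -q ' Ready '; do sleep 2; done
}
```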
## Port Forwarding
| Host Port | Guest Port | Service |
|---|---|---|
| 16443 | 6443 | k3s API server |
| 31000 | 31000 | Web client (NodePort) |
| 31001 | 31001 | Admin dashboard (NodePort) |
| 31002 | 31002 | API gateway (NodePort) |
| 31090 | 31090 | Prometheus (NodePort) |
| 31091 | 31091 | Grafana (NodePort) |
Note: k3s API uses host port 16443 (not 6443) to avoid conflict with Docker Desktop Kubernetes.
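
In Lima terms, the table might map to a `portForwards` section like this (a sketch following Lima's range syntax; it would live in the same `lima/galaxy-staging.yaml`):

```yaml
portForwards:
  - guestPort: 6443
    hostPort: 16443          # avoids Docker Desktop's 6443
  - guestPortRange: [31000, 31002]
    hostPortRange: [31000, 31002]
  - guestPortRange: [31090, 31091]
    hostPortRange: [31090, 31091]
```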
## Kubeconfig

`scripts/lima-kubeconfig.sh` extracts the k3s kubeconfig from the Lima VM:
- Copies `/etc/rancher/k3s/k3s.yaml` from the VM
- Rewrites the server URL to `https://127.0.0.1:16443`
- Renames the context, cluster, and user to `lima-galaxy`
- Writes to `~/.kube/config-lima-galaxy`
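
The rewrite step might look like this hypothetical filter (a sketch, not the actual script — k3s names its context, cluster, and user `default` and serves on `127.0.0.1:6443`):

```shell
rewrite_kubeconfig() {
  # Retarget the host-forwarded API port and apply the lima-galaxy rename.
  sed -e 's|https://127.0.0.1:6443|https://127.0.0.1:16443|' \
      -e 's|: default$|: lima-galaxy|'
}
```

In the real script the input would presumably come from `limactl shell` reading `/etc/rancher/k3s/k3s.yaml`, with the result written to `~/.kube/config-lima-galaxy`.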
Usage:

```sh
scripts/lima-kubeconfig.sh
export KUBECONFIG=~/.kube/config-lima-galaxy
kubectl get nodes
```
## Kustomize Overlay

Directory: `k8s/overlays/lima/`

Based on the staging overlay, with these differences:
| File | Purpose |
|---|---|
| `kustomization.yaml` | Namespace `galaxy-lima`, GHCR images, all patches |
| `configmaps.yaml` | CORS origins, frontend URLs (same ports as staging) |
| `services.yaml` | NodePort assignments 31000-31002 (same as staging) |
| `monitoring.yaml` | Prometheus scrape config targeting the `galaxy-lima` namespace |
| `storage-class.yaml` | JSON patches: `hostpath` to `local-path` for all PVCs/VCTs |
| `replicas.yaml` | Reduce players, web-client, admin-dashboard to 1 replica |
| `migration-job.yaml` | GHCR image ref, `imagePullPolicy: IfNotPresent` |
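
The overlay's `kustomization.yaml` might be shaped roughly like this. Everything below is an assumption for illustration — the base path, patch targets, and file wiring are not taken from the actual overlay:

```yaml
# Sketch of k8s/overlays/lima/kustomization.yaml (assumed content)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: galaxy-lima
resources:
  - ../staging          # assumed: builds on the staging overlay
patches:
  - path: storage-class.yaml   # JSON patch, so it needs an explicit target
    target:
      kind: StatefulSet
      name: postgres
  - path: replicas.yaml        # strategic-merge patch carrying its own metadata
```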
## Storage Class Patching

k3s provides `local-path` as its default StorageClass, while the base manifests use `hostpath` (the Docker Desktop default). The Lima overlay patches all storage references:
- PostgreSQL StatefulSet: `volumeClaimTemplates[0].spec.storageClassName`
- Redis StatefulSet: `volumeClaimTemplates[0].spec.storageClassName`
- PostgreSQL backup PVC: `spec.storageClassName`
Approach: JSON patches on StatefulSet volumeClaimTemplates (more reliable than strategic merge for VCTs).
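
A single entry of such a patch file would look like this — an RFC 6902 JSON patch expressed in YAML, matching the paths listed above:

```yaml
# Retarget the first volumeClaimTemplate to k3s's local-path class.
- op: replace
  path: /spec/volumeClaimTemplates/0/spec/storageClassName
  value: local-path
```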
## Resource Budget (4 GiB VM)
| Component | Memory Limit |
|---|---|
| postgres | 512 Mi |
| redis | 256 Mi |
| tick-engine | 256 Mi |
| physics | 256 Mi |
| players (1 replica) | 256 Mi |
| galaxy | 256 Mi |
| api-gateway | 256 Mi |
| web-client (1 replica) | 128 Mi |
| admin-dashboard (1 replica) | 128 Mi |
| prometheus | 256 Mi |
| grafana | 128 Mi |
| Total services | ~2.7 GiB |
| k3s overhead | ~400 MiB |
| Headroom | ~900 MiB |
## GHCR Authentication

Default (public repo): no auth needed. If the GitHub Packages images inherit public visibility from the repository, image pulls work without credentials.

Fallback (private/restricted): configure k3s's `registries.yaml` with a GHCR token. The provisioning script checks for the `GHCR_TOKEN` environment variable:
```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  ghcr.io:
    endpoint:
      - "https://ghcr.io"
configs:
  "ghcr.io":
    auth:
      username: ""
      password: "<GHCR_TOKEN>"
```
If `GHCR_TOKEN` is set during provisioning, this file is written before k3s starts.
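
The conditional write could be sketched as a small shell function (illustrative only — the function name and exact structure are assumptions about the provisioning script):

```shell
write_ghcr_registries() {
  # $1: destination path (normally /etc/rancher/k3s/registries.yaml)
  [ -n "${GHCR_TOKEN:-}" ] || return 0   # public repo: nothing to write
  mkdir -p "$(dirname "$1")"
  cat > "$1" <<EOF
mirrors:
  ghcr.io:
    endpoint:
      - "https://ghcr.io"
configs:
  "ghcr.io":
    auth:
      username: ""
      password: "${GHCR_TOKEN}"
EOF
}
```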
## Deployment Workflow

```sh
# 1. Start the Lima VM
limactl start lima/galaxy-staging.yaml

# 2. Extract kubeconfig
scripts/lima-kubeconfig.sh
export KUBECONFIG=~/.kube/config-lima-galaxy

# 3. Verify k3s is ready
kubectl get nodes   # Should show one Ready node
kubectl get sc      # Should show local-path as default

# 4. Create TLS secrets
scripts/setup-tls.sh galaxy-lima

# 5. Create application secrets
scripts/create-secrets.sh galaxy-lima

# 6. Run database migrations
kubectl apply -k k8s/overlays/lima/ -l app.kubernetes.io/name=db-migration
kubectl wait --for=condition=complete job/db-migration -n galaxy-lima --timeout=120s

# 7. Deploy all services
scripts/deploy-k8s.sh galaxy-lima

# 8. Verify
kubectl get pods -n galaxy-lima
curl -k https://localhost:31000              # Web client
curl -k https://localhost:31002/health/ready # API gateway
```
## Teardown

```sh
limactl stop galaxy-staging
limactl delete galaxy-staging   # Removes VM and disk
rm ~/.kube/config-lima-galaxy
```
## Relationship to Phase 4 (AWS EC2)

The k3s provisioning script (`scripts/provision-k3s.sh`) is designed to be reusable:

- Lima uses it via the `provision` block
- Packer will use it in the AMI build step
- Both share the same k3s flags and version pinning

The Lima Kustomize overlay validates that GHCR image pulls and `local-path` storage work before deploying to EC2, where the same patterns apply.