# Run Unbound environment in local K8S

This is a setup for running the Unbound environment in K8S using KinD.
## Prerequisites

### Tools
- KinD
- kubectl
- Buildtools
- Lastpass CLI
- GNU Base64
- Bash 4.4+
- direnv to manage environment variables per directory
## Setup
Create a GitLab Personal Access Token with (at least) the `read_registry` scope and set it as the environment variable `GITLAB_TOKEN`. This can be done with a `.envrc` file in a parent directory:

```shell
source_up .envrc
export GITLAB_TOKEN=<xyz>
```
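To sanity-check the token before proceeding, a registry login can be attempted. The registry host below is an assumption (a self-hosted GitLab instance would use its own host):

```shell
# Assumes registry.gitlab.com; adjust for a self-hosted GitLab instance.
# Any non-empty username works when authenticating with a PAT.
echo "$GITLAB_TOKEN" | docker login registry.gitlab.com \
  --username token --password-stdin
```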
Create a `.buildtools.yaml` file in the parent directory with the following content:

```yaml
targets:
  local:
    context: kind-unbound
    namespace: default
  staging:
    context: k8s.unbound.se
    namespace: staging
  prod:
    context: k8s.unbound.se
    namespace: default
```
## Creating the cluster

Just run the following:

```shell
./setup
```

Wait for the cluster to be ready. The K8S context should be set automatically. Check what's been deployed by running:

```shell
kubectl get pod -A
```
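Instead of polling manually, something like the following should block until the workloads are up. Note the caveats: the timeout is arbitrary, and pods from completed Jobs never become Ready, so this only works cleanly when all pods are long-running:

```shell
# Wait until all pods in all namespaces report Ready (timeout is arbitrary).
kubectl --context kind-unbound wait pod --all --all-namespaces \
  --for=condition=Ready --timeout=300s
```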
## Stopping/starting the cluster

If you need to stop the cluster to be able to use the exposed ports for other things, run:

```shell
docker stop unbound-control-plane
```

To start it again:

```shell
docker start unbound-control-plane
```
## Removing the cluster

To remove the cluster completely, run:

```shell
kind delete cluster --name unbound
```
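To confirm the cluster is actually gone:

```shell
# "unbound" should no longer appear in the list after deletion.
kind get clusters
```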
## Cleaning up retained data

The setup stores data for containers in the `data` directory. To start from scratch, stop the cluster, empty the directory, and start the cluster again.
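Taken together, a full reset might look like this sketch. The relative `data/` path is inferred from this README; double-check it from the repo root before deleting anything:

```shell
# Full reset: stop the node, drop retained container data, start again.
docker stop unbound-control-plane
rm -rf data/*            # removes retained state, e.g. Postgres data — verify the path first!
docker start unbound-control-plane
```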