Self-Hosted GitOps at Home: Managing 30+ Services with Komodo and a Proxmox Cluster

Running a handful of self-hosted services is easy. A single docker-compose up and you’re done. But at some point, “a handful” turns into 30+ stacks spread across multiple servers, and suddenly you need a real way to manage it all. That’s where my setup is today, and the most interesting part isn’t any single service: it’s the orchestration layer that ties everything together.

The problem

I run a lot of services at home. Photo management, home automation, file sync, NVR with AI detection, a personal CRM, RSS reader, document management, password manager, analytics, a Matrix chat server, LLM inference… the list goes on. Each one is a Docker Compose stack. Each one needs to be deployed, updated, monitored, and occasionally debugged.

For a while, I managed everything manually: SSH into a server, cd to the right directory, docker compose pull && docker compose up -d, check the logs. It works, but it doesn’t scale. When you’re managing services across six different hosts, you spend more time on logistics than on actually using the things you’ve built.

I needed a control plane.

The hardware: a Proxmox cluster

Everything runs on a Proxmox VE cluster built from repurposed enterprise hardware:

| Node | CPU | RAM | Role |
|---|---|---|---|
| Primary | Xeon E5-2640 (12 threads @ 2.5 GHz) | 32 GB | Main workloads |
| Secondary | Xeon E5-2430 (12 threads @ 2.2 GHz) | 24 GB | Secondary workloads |

These are old servers, the kind you can pick up for next to nothing. They’re loud, they draw power, and they have more compute than I’ll ever need. Perfect for a homelab.

The cluster runs about 40 LXC containers, and they're the real workhorses here: lighter than VMs, booting in seconds, and providing proper isolation without the overhead of full virtualization. Most of my Docker hosts are LXC containers with a few gigs of RAM each.
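Provisioning one of those Docker hosts is a one-liner on the Proxmox node. A sketch with placeholder VMID, template, and storage names; note that nesting has to be enabled for Docker to run inside LXC:

```shell
# Placeholder VMID, template, and storage names -- adjust for your node.
pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-host-1 \
  --cores 4 --memory 4096 --swap 512 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1 \
  --unprivileged 1
pct start 110
```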

The servers: who does what

Not every LXC container runs Docker. Some run standalone services (DNS, monitoring, reverse proxy, authentication). But the Docker hosts are grouped logically in Komodo as “servers”: each one is a lightweight LXC container dedicated to running a set of related stacks. Komodo sees six servers in total: a couple of general-purpose Docker hosts, a dedicated Home Assistant instance, a photo management host, an alerting container, and the vault.

The vault is the most interesting one. It’s actually a NAS running OpenMediaVault with an Intel Arc B580 GPU installed. That one card handles hardware video decoding for Frigate’s camera streams, AI object detection, and LLM inference via Ollama, all at the same time. I wrote about the Frigate side of this in a previous post.

Enter Komodo

Komodo is a self-hosted deployment manager. Think of it as a lightweight alternative to Portainer or Coolify, but with a focus on managing Docker Compose stacks across multiple servers. Here’s why I picked it:

  • Multi-server management from a single dashboard. One UI to see and control every stack on every server.
  • Git-based deployments. Stacks can pull their compose files from a Git repository. Push a change, and Komodo deploys it.
  • Webhook-triggered updates. My self-hosted Gitea instance sends webhooks to Komodo on every push. The stack redeploys automatically.
  • Auto-update for container images. Komodo can poll for new images and update containers without manual intervention.
  • Environment variable management. Secrets and config live in Komodo, not in the Git repo.

The architecture looks like this: a central Komodo Core instance runs on one server, and a lightweight Komodo Periphery agent runs on each remote host. The core talks to periphery agents to deploy and manage stacks. It’s simple, reliable, and doesn’t require Kubernetes.
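Installing the agent is itself just another compose stack on each host. A minimal sketch — the image path, tag, and port here are assumptions from memory, so check the current Komodo docs before using them:

```yaml
# Sketch of a Periphery agent; image path and port are assumptions.
services:
  periphery:
    image: ghcr.io/moghtech/komodo-periphery:latest
    ports:
      - "8120:8120"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # manage local containers
      - /etc/komodo/stacks:/etc/komodo/stacks      # where compose files live
    restart: unless-stopped
```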

The GitOps workflow

This is where it gets interesting. About half of my stacks are managed through Git repositories on my self-hosted Gitea instance. The workflow:

  1. I edit a docker-compose.yml in a Gitea repo
  2. I push the change
  3. Gitea fires a webhook to Komodo
  4. Komodo pulls the updated compose file
  5. Komodo redeploys the stack on the target server
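Komodo handles the receiving end itself, but it's worth seeing what step 3 amounts to: Gitea signs each delivery with an HMAC-SHA256 of the payload (sent in the X-Gitea-Signature header), and the receiver recomputes and compares it. A minimal sketch with a simulated delivery:

```python
import hashlib
import hmac

def verify_gitea_signature(secret: str, body: bytes, signature: str) -> bool:
    """Check a Gitea webhook's X-Gitea-Signature header (HMAC-SHA256 hex)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Simulated delivery: in production, body is the raw request payload.
body = b'{"ref": "refs/heads/main", "repository": {"name": "homelab-stacks"}}'
secret = "webhook-secret"
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_gitea_signature(secret, body, sig))        # True
print(verify_gitea_signature(secret, body, "bad-sig"))  # False
```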

For stacks that don’t need version-controlled compose files (simpler services), Komodo manages the files directly on the host. I can still edit them through the Komodo UI, but they aren’t backed by Git.

The split is intentional. Complex stacks with multiple services, custom configs, or frequent changes live in Git. Simple single-container services are managed inline. This avoids the overhead of Git for things that don’t need it, while giving me full version history and rollback for the things that do.
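An inline-managed stack is typically nothing more than a few lines. Uptime Kuma (part of the monitoring stack covered later) is a good example — this sketch pins a major tag and lets Komodo's image auto-update handle the rest:

```yaml
# Sketch: a simple single-container stack of the kind managed inline.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
    restart: unless-stopped

volumes:
  uptime-kuma-data:
```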

Home Assistant: a special case

Home Assistant deserves a mention because it’s managed differently. Its entire configuration is in a Gitea repo, and Komodo watches it with both polling and webhooks. When I push a config change, Home Assistant gets the update automatically. No more SSH-ing in to edit configuration.yaml.

The network: OPNsense and VLANs

Running dozens of services on a flat network would be a security nightmare. I use OPNsense as my firewall/router with multiple VLANs:

| Network segment | Purpose |
|---|---|
| Main | Servers and trusted devices |
| IoT | Smart home devices (cameras, sensors, ESPHome) |
| Lab | Experimental VMs and containers |
| WireGuard | VPN access from outside |

IoT devices can’t talk to each other or to the main network directly; they can only reach the services they need (Home Assistant, Frigate). The lab network is isolated for testing. WireGuard gives me secure remote access to everything.

DNS: Unbound + Technitium

DNS is handled by two layers working together:

  • Unbound runs on OPNsense itself as the primary recursive resolver. It handles upstream DNS resolution with DNSSEC validation, and it’s fast: queries are answered from cache most of the time.
  • Technitium DNS (running in two LXC containers for redundancy) handles internal DNS zones, so I can reach services by name instead of memorizing IPs. It also provides split-horizon DNS for services that need different answers internally vs. externally.
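Technitium manages its zones through a UI and HTTP API, but the internal zone itself is ordinary DNS data. In plain zone-file form it amounts to something like this (names and addresses are placeholders):

```
; Sketch: an internal zone in standard zone-file form.
$ORIGIN home.lan.
$TTL 300
@        IN SOA  ns1.home.lan. admin.home.lan. (2024010101 3600 600 86400 300)
@        IN NS   ns1.home.lan.
ns1      IN A    10.0.10.2
photos   IN A    10.0.10.21
ha       IN A    10.0.10.30
```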

NTP: GPS-disciplined time with Chrony

One detail I’m particularly happy with: the network’s time source is a USB GPS receiver (BN-808, u-blox M8N chipset) connected to a dedicated LXC container running Chrony. The GPS provides Stratum 1 time to the entire network.

The setup isn’t perfect: USB GPS doesn’t support PPS (Pulse Per Second), so precision is limited to about 40ms due to USB latency. Chrony compensates for this with a manual offset correction and falls back to internet NTP servers (Cloudflare, public pools) when needed. OPNsense distributes the time to all clients on the network.
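The relevant chrony configuration is only a few lines. A sketch, assuming gpsd feeds NMEA time into shared-memory segment 0; the offset value, network range, and fallback servers are illustrative:

```
# gpsd publishes GPS time via SHM segment 0; the offset compensates
# for USB serial latency (value is illustrative).
refclock SHM 0 refid GPS offset 0.040 delay 0.2
# Internet fallback when the GPS has no fix.
server time.cloudflare.com iburst
pool pool.ntp.org iburst maxsources 3
# Serve time to the rest of the network.
allow 10.0.0.0/8
```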

It’s overkill for a homelab, but there’s something satisfying about having your own GPS-disciplined time source rather than depending entirely on external NTP pools.

Auth: Authelia and Traefik

Authelia provides single sign-on with two-factor authentication in front of all web-facing services, with Traefik as the reverse proxy.

Storage: the vault NAS

The “vault” server is an OpenMediaVault NAS that does double duty as a compute node. It has 8 drives, one NVMe for the OS and 7 HDDs ranging from 2 TB to 16 TB:

| Drive | Model | Capacity |
|---|---|---|
| NVMe | WD BLACK SN770 | 500 GB (boot) |
| HDD 1 | Seagate IronWolf | 10 TB |
| HDD 2 | WD | 10 TB |
| HDD 3 | WD | 12 TB |
| HDD 4 | WD Green | 2 TB |
| HDD 5 | WD Red | 2 TB |
| HDD 6 | Seagate IronWolf Pro | 16 TB |
| HDD 7 | HGST Deskstar NAS | 6 TB |

A mix of whatever drives I had or found on sale. The beauty of my storage setup is that it doesn’t care about uniformity.

The nested mergerfs architecture

Storage is organized in layers using mergerfs pools:

  1. A parity-protected pool: a mergerfs pool combining 3 btrfs-formatted drives. These drives are protected by SnapRAID parity (the 16 TB Seagate acts as the parity drive).

  2. A main pool: a super-pool that merges the parity-protected pool with an additional direct drive into one large namespace. This is where Immich stores photos, Kavita stores books, and Frigate stores video clips.

  3. A read-only photos pool: a mergerfs view that aggregates all my photo directories (personal photos, DCIM camera imports) into a single mount point for easy access and import.
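The three layers map onto three mergerfs mounts, and the nesting works because the second pool simply lists the first pool's mount point as one of its branches. As fstab entries it looks roughly like this (mount points, branch paths, and option choices are placeholders):

```
# Layer 1: parity-protected pool over three btrfs data disks
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/protected  fuse.mergerfs  category.create=mfs,cache.files=off,dropcacheonclose=true,fsname=protected  0 0
# Layer 2: main pool = protected pool + one extra direct drive
/mnt/protected:/mnt/disk4  /mnt/pool  fuse.mergerfs  category.create=mfs,fsname=pool  0 0
# Layer 3: read-only aggregate of photo directories
/mnt/pool/photos:/mnt/pool/dcim  /mnt/photos  fuse.mergerfs  ro,fsname=photos  0 0
```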

SnapRAID does parity syncs on a schedule (not real-time like traditional RAID), which means:

  • There’s no write penalty; writes go directly to the underlying btrfs drives
  • If a drive fails, I can recover its contents from parity + the remaining drives
  • Drives can be different sizes (and they very much are)
  • Each drive is a standard filesystem you can read independently in an emergency

The trade-off is that data written between parity syncs is unprotected. For a homelab storing photos and media, that’s an acceptable risk.
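The SnapRAID side is a small config file plus a scheduled sync. A sketch with placeholder paths:

```
# snapraid.conf -- placeholder paths
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
exclude *.tmp
```

A nightly cron entry along the lines of `0 3 * * * snapraid sync && snapraid scrub -p 5` updates parity and scrubs 5% of the array; anything written after the last sync is exactly the unprotected window described above.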

Proxmox Backup Server also runs in Docker on this same NAS, providing VM and container backups from the Proxmox cluster. Both Proxmox nodes mount the vault’s PBS storage directly.

Monitoring and alerting

You can’t manage what you can’t see. The monitoring stack:

| Tool | Role |
|---|---|
| Prometheus | Metrics collection |
| Grafana | Dashboards and visualization |
| Uptime Kuma | Service uptime monitoring |
| cAdvisor | Container resource metrics |
| NetAlertX | Network device monitoring |
| ntfy | Push notifications to my phone |

Komodo itself integrates with ntfy for deployment alerts. If a stack fails to deploy or a container goes unhealthy, I get a push notification immediately. The ntfy-alerter stack maps Komodo alert severity to ntfy priority levels, so a critical alert gets a high-priority push that bypasses Do Not Disturb.
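The mapping itself is trivial. A sketch in Python — the severity names and topic are hypothetical (Komodo's actual alert payload fields may differ), and ntfy's priority scale runs from 1 (min) to 5 (max):

```python
import urllib.request

# Hypothetical severity names -- Komodo's real alert fields may differ;
# the severity-to-priority mapping is the point.
SEVERITY_TO_PRIORITY = {
    "OK": 2,        # low: resolved notices
    "WARNING": 3,   # ntfy default
    "CRITICAL": 5,  # max: bypasses Do Not Disturb on the phone
}

def ntfy_priority(severity: str) -> int:
    """Map an alert severity to an ntfy priority level (1=min .. 5=max)."""
    return SEVERITY_TO_PRIORITY.get(severity.upper(), 3)

def push_alert(topic: str, severity: str, message: str,
               server: str = "https://ntfy.sh"):
    """Publish an alert to an ntfy topic with a mapped priority."""
    req = urllib.request.Request(
        f"{server}/{topic}",
        data=message.encode(),
        headers={
            "Priority": str(ntfy_priority(severity)),
            "Title": f"Komodo: {severity}",
        },
    )
    return urllib.request.urlopen(req)

print(ntfy_priority("critical"))  # 5
print(ntfy_priority("unknown"))   # 3 (falls back to default)
```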

The whole thing runs 24/7, manages itself for the most part, and costs nothing beyond electricity and the initial hardware investment. When something does need attention, Komodo and ntfy make sure I know about it, and the GitOps workflow means I can fix most things with a git push from my phone.

If you’re running more than a handful of self-hosted services and you’re still managing them manually, give Komodo a look. It turned my homelab from a collection of Docker hosts into a proper managed platform.

Antoine Weill-Duflos
Head of Technology and Applications

My research interests include haptics, mechatronics, micro-robotics, and HCI.