<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenMediaVault | Antoine Weill--Duflos</title>
    <link>https://antoine.weill-duflos.fr/en/tag/openmediavault/</link>
      <atom:link href="https://antoine.weill-duflos.fr/en/tag/openmediavault/index.xml" rel="self" type="application/rss+xml" />
    <description>OpenMediaVault</description>
    <generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 13 Apr 2026 00:00:00 +0000</lastBuildDate>
    <image>
      <url>https://antoine.weill-duflos.fr/media/icon_hu_d686267daab28486.png</url>
      <title>OpenMediaVault</title>
      <link>https://antoine.weill-duflos.fr/en/tag/openmediavault/</link>
    </image>
    
    <item>
      <title>Self-Hosted GitOps at Home: Managing 30&#43; Services with Komodo and a Proxmox Cluster</title>
      <link>https://antoine.weill-duflos.fr/en/post/komodo/</link>
      <pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://antoine.weill-duflos.fr/en/post/komodo/</guid>
      <description>
&lt;p&gt;Running a handful of self-hosted services is easy. A single &lt;code&gt;docker-compose up&lt;/code&gt; and you&amp;rsquo;re done. But at some point, &amp;ldquo;a handful&amp;rdquo; turns into 30+ stacks spread across multiple servers, and suddenly you need a real way to manage it all. That&amp;rsquo;s where my setup is today, and the most interesting part isn&amp;rsquo;t any single service: it&amp;rsquo;s the orchestration layer that ties everything together.&lt;/p&gt;
&lt;h2 id=&#34;the-problem&#34;&gt;The problem&lt;/h2&gt;
&lt;p&gt;I run a lot of services at home. Photo management, home automation, file sync, NVR with AI detection, a personal CRM, RSS reader, document management, password manager, analytics, a Matrix chat server, LLM inference&amp;hellip; the list goes on. Each one is a Docker Compose stack. Each one needs to be deployed, updated, monitored, and occasionally debugged.&lt;/p&gt;
&lt;p&gt;For a while, I managed everything manually: SSH into a server, &lt;code&gt;cd&lt;/code&gt; to the right directory, &lt;code&gt;docker compose pull &amp;amp;&amp;amp; docker compose up -d&lt;/code&gt;, check the logs. It works, but it doesn&amp;rsquo;t scale. When you&amp;rsquo;re managing services across six different hosts, you spend more time on logistics than on actually using the things you&amp;rsquo;ve built.&lt;/p&gt;
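&lt;p&gt;Scripted naively, that routine is a loop over every host. A dry-run sketch (host names are placeholders, and the script prints the commands instead of executing them, so it shows the shape of the chore without needing SSH access):&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run sketch of the manual update routine; host names and the
# /opt/stacks path are placeholders. Printing instead of executing.
update_all() {
  for host in docker1 docker2 homeassistant photos alerter vault; do
    echo "ssh $host 'cd /opt/stacks; docker compose pull; docker compose up -d'"
  done
}
update_all
```

&lt;p&gt;Six hosts means six SSH sessions per update round, which is exactly the logistics problem a control plane removes.&lt;/p&gt;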
&lt;p&gt;I needed a control plane.&lt;/p&gt;
&lt;h2 id=&#34;the-hardware-a-proxmox-cluster&#34;&gt;The hardware: a Proxmox cluster&lt;/h2&gt;
&lt;p&gt;Everything runs on a &lt;strong&gt;Proxmox VE&lt;/strong&gt; cluster built from repurposed enterprise hardware:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Node&lt;/th&gt;
          &lt;th&gt;CPU&lt;/th&gt;
          &lt;th&gt;RAM&lt;/th&gt;
          &lt;th&gt;Role&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;Primary&lt;/td&gt;
          &lt;td&gt;Xeon E5-2640 (12 cores @ 2.5 GHz)&lt;/td&gt;
          &lt;td&gt;32 GB&lt;/td&gt;
          &lt;td&gt;Main workloads&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;Secondary&lt;/td&gt;
          &lt;td&gt;Xeon E5-2430 (12 cores @ 2.2 GHz)&lt;/td&gt;
          &lt;td&gt;24 GB&lt;/td&gt;
          &lt;td&gt;Secondary workloads&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;These are old servers, the kind you can pick up for next to nothing. They&amp;rsquo;re loud, they draw power, and they have more compute than I&amp;rsquo;ll ever need. Perfect for a homelab.&lt;/p&gt;
&lt;p&gt;The cluster runs about 40 LXC containers, and they&amp;rsquo;re the real workhorses here: lighter than VMs, booting in seconds, and properly isolated without the overhead of full virtualization. Most of my Docker hosts are LXC containers with a few gigs of RAM each.&lt;/p&gt;
&lt;h2 id=&#34;the-servers-who-does-what&#34;&gt;The servers: who does what&lt;/h2&gt;
&lt;p&gt;Not every LXC container runs Docker. Some run standalone services (DNS, monitoring, reverse proxy, authentication). But the Docker hosts are grouped logically in &lt;strong&gt;Komodo&lt;/strong&gt; as &amp;ldquo;servers&amp;rdquo;: each one is a lightweight LXC container dedicated to running a set of related stacks. Komodo sees six servers in total: a couple of general-purpose Docker hosts, a dedicated Home Assistant instance, a photo management host, an alerting container, and the vault.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;vault&lt;/strong&gt; is the most interesting one. It&amp;rsquo;s actually a NAS running &lt;strong&gt;OpenMediaVault&lt;/strong&gt; with an &lt;strong&gt;Intel Arc B580&lt;/strong&gt; GPU installed. That one card handles hardware video decoding for Frigate&amp;rsquo;s camera streams, AI object detection, &lt;em&gt;and&lt;/em&gt; LLM inference via Ollama, all at the same time. I wrote about the Frigate side of this in a &lt;a href=&#34;https://antoine.weill-duflos.fr/en/post/frigate/&#34;&gt;previous post&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id=&#34;enter-komodo&#34;&gt;Enter Komodo&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://komo.do/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Komodo&lt;/a&gt; is a self-hosted deployment manager. Think of it as a lightweight alternative to Portainer or Coolify, but with a focus on managing Docker Compose stacks across multiple servers. Here&amp;rsquo;s why I picked it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multi-server management from a single dashboard.&lt;/strong&gt; One UI to see and control every stack on every server.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Git-based deployments.&lt;/strong&gt; Stacks can pull their compose files from a Git repository. Push a change, and Komodo deploys it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Webhook-triggered updates.&lt;/strong&gt; My self-hosted Gitea instance sends webhooks to Komodo on every push. The stack redeploys automatically.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auto-update for container images.&lt;/strong&gt; Komodo can poll for new images and update containers without manual intervention.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Environment variable management.&lt;/strong&gt; Secrets and config live in Komodo, not in the Git repo.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The architecture looks like this: a central &lt;strong&gt;Komodo Core&lt;/strong&gt; instance runs on one server, and a lightweight &lt;strong&gt;Komodo Periphery&lt;/strong&gt; agent runs on each remote host. The core talks to periphery agents to deploy and manage stacks. It&amp;rsquo;s simple, reliable, and doesn&amp;rsquo;t require Kubernetes.&lt;/p&gt;
&lt;h2 id=&#34;the-gitops-workflow&#34;&gt;The GitOps workflow&lt;/h2&gt;
&lt;p&gt;This is where it gets interesting. About half of my stacks are managed through Git repositories on my self-hosted &lt;strong&gt;Gitea&lt;/strong&gt; instance. The workflow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I edit a &lt;code&gt;docker-compose.yml&lt;/code&gt; in a Gitea repo&lt;/li&gt;
&lt;li&gt;I push the change&lt;/li&gt;
&lt;li&gt;Gitea fires a webhook to Komodo&lt;/li&gt;
&lt;li&gt;Komodo pulls the updated compose file&lt;/li&gt;
&lt;li&gt;Komodo redeploys the stack on the target server&lt;/li&gt;
&lt;/ol&gt;
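&lt;p&gt;From my side, the whole path collapses to an ordinary Git interaction; steps 3&amp;ndash;5 happen without me. A dry-run sketch (the repo layout and stack name are made up), using a run() wrapper that prints each command instead of executing it:&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run: print each command instead of executing it.
run() { echo "+ $*"; }

# Hypothetical repo layout: one directory per stack in the Gitea repo.
run git add immich/docker-compose.yml
run git commit -m "immich: bump image tag"
run git push   # Gitea fires the webhook; Komodo pulls and redeploys
```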
&lt;p&gt;For stacks that don&amp;rsquo;t need version-controlled compose files (simpler services), Komodo manages the files directly on the host. I can still edit them through the Komodo UI, but they aren&amp;rsquo;t backed by Git.&lt;/p&gt;
&lt;p&gt;The split is intentional. Complex stacks with multiple services, custom configs, or frequent changes live in Git. Simple single-container services are managed inline. This avoids the overhead of Git for things that don&amp;rsquo;t need it, while giving me full version history and rollback for the things that do.&lt;/p&gt;
&lt;h3 id=&#34;home-assistant-a-special-case&#34;&gt;Home Assistant: a special case&lt;/h3&gt;
&lt;p&gt;Home Assistant deserves a mention because it&amp;rsquo;s managed differently. Its entire configuration is in a Gitea repo, and Komodo watches it with both polling and webhooks. When I push a config change, Home Assistant gets the update automatically. No more SSH-ing in to edit &lt;code&gt;configuration.yaml&lt;/code&gt;.&lt;/p&gt;
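&lt;p&gt;One habit worth pairing with this: validate the configuration before pushing. A dry-run sketch of what a pre-push Git hook could run (the container name is an assumption; check_config is Home Assistant&amp;rsquo;s built-in validator):&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run sketch: print the validation command a pre-push hook could execute.
# "homeassistant" is an assumed container name; check_config is HA's own validator.
check_cmd() {
  echo "docker exec homeassistant hass --script check_config -c /config"
}
check_cmd
```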
&lt;h2 id=&#34;the-network-opnsense-and-vlans&#34;&gt;The network: OPNsense and VLANs&lt;/h2&gt;
&lt;p&gt;Running dozens of services on a flat network would be a security nightmare. I use &lt;strong&gt;OPNsense&lt;/strong&gt; as my firewall/router with multiple VLANs:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Network segment&lt;/th&gt;
          &lt;th&gt;Purpose&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Main&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Servers and trusted devices&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;IoT&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Smart home devices (cameras, sensors, ESPHome)&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Lab&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Experimental VMs and containers&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;WireGuard&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;VPN access from outside&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;IoT devices can&amp;rsquo;t talk to each other or to the main network directly; they can only reach the services they need (Home Assistant, Frigate). The lab network is isolated for testing. WireGuard gives me secure remote access to everything.&lt;/p&gt;
&lt;h3 id=&#34;dns-unbound--technitium&#34;&gt;DNS: Unbound + Technitium&lt;/h3&gt;
&lt;p&gt;DNS is handled by two layers working together:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Unbound&lt;/strong&gt; runs on OPNsense itself as the primary recursive resolver. It handles upstream DNS resolution with DNSSEC validation, and it&amp;rsquo;s fast: queries are answered from cache most of the time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Technitium DNS&lt;/strong&gt; (running in two LXC containers for redundancy) handles internal DNS zones, so I can reach services by name instead of memorizing IPs. It also provides split-horizon DNS for services that need different answers internally vs. externally.&lt;/li&gt;
&lt;/ul&gt;
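&lt;p&gt;To see the two layers at work, query each resolver directly. A dry-run sketch with placeholder addresses and an illustrative internal zone (my real names differ):&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run sketch: 10.0.0.53 stands in for Technitium, 10.0.0.1 for Unbound
# on OPNsense, and home.arpa for my internal zone. Prints the queries only.
queries() {
  echo "dig @10.0.0.53 gitea.home.arpa +short   # internal zone, answered locally"
  echo "dig @10.0.0.1 example.com +short        # recursive resolution via Unbound"
}
queries
```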
&lt;h3 id=&#34;ntp-gps-disciplined-time-with-chrony&#34;&gt;NTP: GPS-disciplined time with Chrony&lt;/h3&gt;
&lt;p&gt;One detail I&amp;rsquo;m particularly happy with: the network&amp;rsquo;s time source is a &lt;strong&gt;USB GPS receiver&lt;/strong&gt; (BN-808, u-blox M8N chipset) connected to a dedicated LXC container running &lt;strong&gt;Chrony&lt;/strong&gt;. The GPS provides Stratum 1 time to the entire network.&lt;/p&gt;
&lt;p&gt;The setup isn&amp;rsquo;t perfect: USB GPS doesn&amp;rsquo;t support PPS (Pulse Per Second), so precision is limited to about 40 ms due to USB latency. Chrony compensates for this with a manual offset correction and falls back to internet NTP servers (Cloudflare, public pools) when needed. OPNsense distributes the time to all clients on the network.&lt;/p&gt;
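&lt;p&gt;The relevant chrony configuration is only a few lines. This is a sketch, assuming gpsd feeds NMEA time into shared-memory segment 0; the 0.040 s offset is the USB-latency correction mentioned above, and the subnet is a placeholder:&lt;/p&gt;

```
# /etc/chrony/chrony.conf (sketch)
refclock SHM 0 refid GPS offset 0.040 precision 1e-1  # gpsd NMEA via shared memory
server time.cloudflare.com iburst                     # internet fallback
pool 2.pool.ntp.org iburst maxsources 3
allow 10.0.0.0/8                                      # serve time to the LAN
```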
&lt;p&gt;It&amp;rsquo;s overkill for a homelab, but there&amp;rsquo;s something satisfying about having your own GPS-disciplined time source rather than depending entirely on external NTP pools.&lt;/p&gt;
&lt;h3 id=&#34;access-authelia--traefik&#34;&gt;Access: Authelia + Traefik&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Authelia&lt;/strong&gt; provides single sign-on with two-factor authentication in front of all web-facing services, with &lt;strong&gt;Traefik&lt;/strong&gt; as the reverse proxy.&lt;/p&gt;
&lt;h2 id=&#34;storage-the-vault-nas&#34;&gt;Storage: the vault NAS&lt;/h2&gt;
&lt;p&gt;The &amp;ldquo;vault&amp;rdquo; server is an &lt;strong&gt;OpenMediaVault&lt;/strong&gt; NAS that does double duty as a compute node. It has eight drives: one NVMe for the OS and seven HDDs ranging from 2 TB to 16 TB:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Drive&lt;/th&gt;
          &lt;th&gt;Model&lt;/th&gt;
          &lt;th&gt;Capacity&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;NVMe&lt;/td&gt;
          &lt;td&gt;WD BLACK SN770&lt;/td&gt;
          &lt;td&gt;500 GB (boot)&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 1&lt;/td&gt;
          &lt;td&gt;Seagate IronWolf&lt;/td&gt;
          &lt;td&gt;10 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 2&lt;/td&gt;
          &lt;td&gt;WD&lt;/td&gt;
          &lt;td&gt;10 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 3&lt;/td&gt;
          &lt;td&gt;WD&lt;/td&gt;
          &lt;td&gt;12 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 4&lt;/td&gt;
          &lt;td&gt;WD Green&lt;/td&gt;
          &lt;td&gt;2 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 5&lt;/td&gt;
          &lt;td&gt;WD Red&lt;/td&gt;
          &lt;td&gt;2 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 6&lt;/td&gt;
          &lt;td&gt;Seagate IronWolf Pro&lt;/td&gt;
          &lt;td&gt;16 TB&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;HDD 7&lt;/td&gt;
          &lt;td&gt;HGST Deskstar NAS&lt;/td&gt;
          &lt;td&gt;6 TB&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;A mix of whatever drives I had or found on sale. The beauty of my storage setup is that it doesn&amp;rsquo;t care about uniformity.&lt;/p&gt;
&lt;h3 id=&#34;the-nested-mergerfs-architecture&#34;&gt;The nested mergerfs architecture&lt;/h3&gt;
&lt;p&gt;Storage is organized in layers using &lt;strong&gt;mergerfs&lt;/strong&gt; pools:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A parity-protected pool&lt;/strong&gt;: a mergerfs pool combining 3 btrfs-formatted drives. These drives are protected by &lt;strong&gt;SnapRAID&lt;/strong&gt; parity (the 16 TB Seagate acts as the parity drive).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A main pool&lt;/strong&gt;: a super-pool that merges the parity-protected pool with an additional direct drive into one large namespace. This is where Immich stores photos, Kavita stores books, and Frigate stores video clips.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A read-only photos pool&lt;/strong&gt;: a mergerfs view that aggregates all my photo directories (personal photos, DCIM camera imports) into a single mount point for easy access and import.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
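&lt;p&gt;In fstab form the three layers look roughly like this (mount points are placeholders; my real paths differ). The key detail is that layer 2 mounts layer 1&amp;rsquo;s mount point as if it were just another branch:&lt;/p&gt;

```
# /etc/fstab (sketch) -- layer 1: parity-protected pool over three btrfs disks
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/protected  fuse.mergerfs  cache.files=off,category.create=mfs  0 0
# layer 2: super-pool = protected pool + one direct drive
/mnt/protected:/mnt/disk4         /mnt/pool       fuse.mergerfs  category.create=mfs                  0 0
# layer 3: read-only aggregate of the photo directories
/mnt/pool/photos:/mnt/pool/dcim   /mnt/photos     fuse.mergerfs  ro                                   0 0
```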
&lt;p&gt;SnapRAID does parity syncs on a schedule (not real-time like traditional RAID), which means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;There&amp;rsquo;s no write penalty; writes go directly to the underlying btrfs drives&lt;/li&gt;
&lt;li&gt;If a drive fails, I can recover its contents from parity + the remaining drives&lt;/li&gt;
&lt;li&gt;Drives can be different sizes (and they very much are)&lt;/li&gt;
&lt;li&gt;Each drive is a standard filesystem you can read independently in an emergency&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The trade-off is that data written between parity syncs is unprotected. For a homelab storing photos and media, that&amp;rsquo;s an acceptable risk.&lt;/p&gt;
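&lt;p&gt;&amp;ldquo;On a schedule&amp;rdquo; is literally a pair of cron entries. A sketch (the timings are mine, and a wrapper script like snapraid-runner is a common alternative):&lt;/p&gt;

```
# crontab sketch: nightly parity sync, weekly scrub of a slice of the array
0 3 * * *  snapraid sync
0 5 * * 0  snapraid scrub -p 10 -o 30
```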
&lt;p&gt;&lt;strong&gt;Proxmox Backup Server&lt;/strong&gt; also runs in Docker on this same NAS, providing VM and container backups from the Proxmox cluster. Both Proxmox nodes mount the vault&amp;rsquo;s PBS storage directly.&lt;/p&gt;
&lt;h2 id=&#34;monitoring-and-alerting&#34;&gt;Monitoring and alerting&lt;/h2&gt;
&lt;p&gt;You can&amp;rsquo;t manage what you can&amp;rsquo;t see. The monitoring stack:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Tool&lt;/th&gt;
          &lt;th&gt;Role&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Prometheus&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Metrics collection&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Dashboards and visualization&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;Uptime Kuma&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Service uptime monitoring&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;cAdvisor&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Container resource metrics&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;NetAlertX&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Network device monitoring&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;strong&gt;ntfy&lt;/strong&gt;&lt;/td&gt;
          &lt;td&gt;Push notifications to my phone&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Komodo itself integrates with &lt;strong&gt;ntfy&lt;/strong&gt; for deployment alerts. If a stack fails to deploy or a container goes unhealthy, I get a push notification immediately. The ntfy-alerter stack maps Komodo alert severity to ntfy priority levels, so a critical alert gets a high-priority push that bypasses Do Not Disturb.&lt;/p&gt;
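&lt;p&gt;The mapping itself is tiny. A sketch of the glue (the function and level names are mine, not Komodo&amp;rsquo;s; the ntfy priority names are real):&lt;/p&gt;

```shell
#!/bin/sh
# Map an alert level to an ntfy Priority header value. Level names here are
# illustrative; ntfy's priorities (urgent/high/default/low/min) are real.
priority_for() {
  case "$1" in
    CRITICAL) echo urgent ;;    # bypasses Do Not Disturb
    WARNING)  echo high ;;
    *)        echo default ;;   # OK / resolved alerts stay quiet
  esac
}
# The alert is then pushed with, e.g.:
#   curl -H "Priority: $(priority_for CRITICAL)" -d "stack deploy failed" https://ntfy.example.com/komodo
priority_for CRITICAL
```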
&lt;p&gt;The whole thing runs 24/7, manages itself for the most part, and costs nothing beyond electricity and the initial hardware investment. When something does need attention, Komodo and ntfy make sure I know about it, and the GitOps workflow means I can fix most things with a &lt;code&gt;git push&lt;/code&gt; from my phone.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re running more than a handful of self-hosted services and you&amp;rsquo;re still managing them manually, give &lt;a href=&#34;https://komo.do/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Komodo&lt;/a&gt; a look. It turned my homelab from a collection of Docker hosts into a proper managed platform.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
