Ditching Cloud Cameras: Self-Hosted Security with Frigate, Tapo, and an Intel Arc B580

I live in Canada. At some point I started noticing signs that someone (or something) had been visiting my garden. Could be an animal, could be a person. Hard to tell. So I figured it was time to put up some cameras. What started as a simple purchase ended up becoming a proper self-hosted setup with AI-based detection. Here’s the story.

I went with Amazon Blink cameras first. They’re cheap, battery-powered, motion-triggered, and they handle Canadian winters without issues. No wiring needed, no network setup. Just stick them up and go.

The problem? False positives. Constantly. A branch moving in the wind, a shadow, snowfall… everything triggers an alert. After a few weeks of this you either start ignoring all notifications (which defeats the whole point) or you waste time scrubbing through clips of nothing. Neither option is great.

That’s when I started reading about Frigate and RTSP cameras, and things got interesting.

The new setup

Tapo cameras

I swapped the Blinks for Tapo cameras. The big deal with these is that they support RTSP streaming natively and have a microSD card slot for local recording. So you get:

  • A local backup on the SD card even if your network goes down
  • A direct video stream over WiFi to your NVR, no cloud relay involved
  • Cameras that work 100% on your local network, no account or subscription required
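Before wiring anything into an NVR, it's worth checking that the stream actually works. A quick sketch with ffprobe — the IP address and the `camera-user`/`camera-pass` credentials are placeholders for the local account you create in the Tapo app (Tapo exposes the high-res feed at `/stream1` and a low-res substream at `/stream2`):

```shell
# Probe the camera's main RTSP stream over TCP and print the codec info.
# Replace the credentials and IP with your own camera account details.
ffprobe -rtsp_transport tcp \
  "rtsp://camera-user:camera-pass@192.168.1.50:554/stream1"
```

If ffprobe prints an H.264/H.265 video stream, the camera is ready for Frigate.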

Frigate

Frigate is an open-source NVR (network video recorder) and it has come a really long way. It gives you:

  • A single dashboard with live views from all your cameras
  • Motion detection as a first-pass filter
  • AI object detection that tells you what moved (person, animal, car…)
  • Full scene descriptions that explain what is actually happening in the clip

That last part is the game changer. Instead of getting “motion detected” you get something like this:

Person picks up bicycle wheel

A person enters the frame from the bottom right and walks toward the center of the area where a bicycle wheel is lying on the ground. The person bends down, picks up the bicycle wheel, and stands holding it. The individual then turns and walks back toward the camera, exiting the frame at the bottom.

With that kind of detail, I only get notified when something real is going on.
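For reference, a camera in Frigate is just a short entry in config.yml. Here's a minimal sketch with placeholder IP and names — Frigate substitutes `{FRIGATE_RTSP_PASSWORD}` from the container environment, and the full schema is in the Frigate docs:

```yaml
# Sketch of one camera entry: record from the high-res stream,
# run detection on the cheaper low-res substream.
cameras:
  garden:
    ffmpeg:
      inputs:
        - path: rtsp://camera-user:{FRIGATE_RTSP_PASSWORD}@192.168.1.50:554/stream1
          roles:
            - record
        - path: rtsp://camera-user:{FRIGATE_RTSP_PASSWORD}@192.168.1.50:554/stream2
          roles:
            - detect
    objects:
      track:
        - person
        - cat
        - dog
```

Using the substream for detection keeps the inference load light while recordings stay full resolution.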

Hardware: Intel Arc B580

I added an Intel Arc B580 (Battlemage) to my server. This one card does double duty:

  • Video decoding: hardware-accelerated decode of all the RTSP streams
  • AI inference: runs the detection and description models

The B580 support in Frigate is actually pretty solid. Both workloads run on the GPU at the same time, so the CPU barely notices. The descriptions aren’t always perfect but they’re more than good enough to filter out noise and only alert on real events.

One card, reasonable power draw, no need for a separate ML box. Works well.
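That split maps to two sections of Frigate's config.yml. A sketch, assuming Frigate's OpenVINO detector and its VA-API hwaccel preset (both target the Arc GPU exposed through /dev/dri; your device names may differ):

```yaml
# AI inference: run the object detection model on the GPU via OpenVINO
detectors:
  ov:
    type: openvino
    device: GPU

# Video decoding: hardware-accelerated decode of the RTSP streams
ffmpeg:
  hwaccel_args: preset-vaapi
```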

The scene description model: llama.cpp + Vulkan

The AI descriptions don’t come from the cloud either. I run a local LLM server using llama.cpp with Vulkan backend on the B580. Frigate sends snapshots to it via an OpenAI-compatible API endpoint, and the model returns scene descriptions.

Here’s the docker-compose for the inference server:

version: "3.9"
services:
  llama-server-intel:
    image: ghcr.io/ggml-org/llama.cpp:server-vulkan
    container_name: llama-server-intel
    restart: unless-stopped
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /var/models:/models
    ports:
      - 4040:8080
    environment:
      # Workaround for Mesa ANV driver fp16 compute bug on Intel iGPUs.
      # Safe to set even on dGPUs (B580), ignored if not needed.
      - GGML_VK_DISABLE_F16=${GGML_VK_DISABLE_F16:-0}
    command: >
      --model /models/Qwen3.5-9B-UD-Q4_K_XL.gguf
      --mmproj /models/mmproj/mmproj9B-BF16.gguf
      --n-gpu-layers 99
      --ctx-size ${LLAMA_CTX_SIZE:-131072}
      --host 0.0.0.0
      --port 8080

The model I’m using is Qwen3.5-9B with a vision projector, quantized to fit in 12 GB of VRAM. In my testing, Vulkan is 30-60% faster than SYCL on Intel Arc GPUs, so it’s the right backend for this card.
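You can sanity-check the endpoint by hand before pointing Frigate at it. A sketch using the OpenAI-compatible chat API, assuming the server is reachable on port 4040 (the host port mapped above) and `snapshot.jpg` is any test image:

```shell
# Base64-encode a snapshot (-w0 disables line wrapping; on macOS use
# "base64 -i snapshot.jpg" instead) and ask for a scene description.
IMG=$(base64 -w0 snapshot.jpg)
curl -s http://localhost:4040/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe what is happening in this image."},
        {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,'"$IMG"'"}}
      ]
    }]
  }'
```

If the JSON response contains a coherent description, the vision projector is loaded correctly.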

Docker setup for Frigate

Here’s the Frigate docker-compose:

version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: 512mb
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
      - ./model:/model
      - ./storage:/media/frigate
      - /dev/dri/renderD128:/dev/dri/renderD128
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - 5005:5000
      - 8971:8971
      - 8554:8554 # RTSP feeds
      - 8555:8555/tcp # WebRTC over tcp
      - 8555:8555/udp # WebRTC over udp
      - 1984:1984
    environment:
      PLUS_API_KEY: ${PLUS_API_KEY} # optional, for Frigate+
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
      OPENAI_BASE_URL: http://llama-server-intel:8080/v1 # points to the local LLM

A few things to note:

  • /dev/dri/renderD128 gives Frigate access to the B580 for hardware video decoding
  • The tmpfs mount is used as a fast cache for clips being processed
  • shm_size depends on how many camera streams you run (512 MB is fine for a few cameras)
  • OPENAI_BASE_URL points to the llama.cpp server for scene descriptions. Frigate uses the OpenAI-compatible API, so any local server that speaks that protocol works
  • Put your secrets in a .env file next to the docker-compose, not in the YAML
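The .env file is just key=value pairs that docker-compose picks up automatically. A sketch, with placeholder values:

```shell
# .env — lives next to docker-compose.yml, never committed to git
PLUS_API_KEY=your-frigate-plus-key   # optional, only if you use Frigate+
FRIGATE_RTSP_PASSWORD=change-me
```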

Home Assistant

Everything plugs into Home Assistant:

  • Camera feeds and Frigate events show up on the HA dashboard
  • Notifications go through HA’s automation engine
  • Remote access through HA’s built-in secure connection

Cameras detect, Frigate analyzes, Home Assistant notifies. All local.
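The notification leg can be a single HA automation listening on Frigate's MQTT events. A sketch for automations.yaml — the topic and payload fields follow Frigate's default MQTT schema, and `notify.mobile_app_phone` is a placeholder for your own notify service:

```yaml
# Push a phone notification when Frigate reports a new "person" event
- alias: Notify on person in garden
  trigger:
    - platform: mqtt
      topic: frigate/events
  condition:
    - condition: template
      value_template: >
        {{ trigger.payload_json['type'] == 'new'
           and trigger.payload_json['after']['label'] == 'person' }}
  action:
    - service: notify.mobile_app_phone
      data:
        message: "Person detected on camera {{ trigger.payload_json['after']['camera'] }}"
```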

Why bother self-hosting cameras?

  • Your data stays yours. No footage leaves your network. No cloud provider is storing or processing your video
  • No subscriptions. Cloud cameras love charging monthly for “premium” features like person detection. This costs nothing after the initial hardware
  • Way fewer false positives. AI detection vs. basic motion sensing is night and day
  • Actually useful alerts. You know what happened, not just that something moved
  • Works offline. SD card backup + local NVR means the system keeps recording even if your internet goes down

Summary

Component        What it does
Tapo cameras     RTSP stream + local SD card recording
Frigate          Open-source NVR with AI object detection
Intel Arc B580   Video decoding + AI inference on one card
Home Assistant   Dashboard, notifications, remote access
Local server     Runs Frigate and Home Assistant

Final thoughts

It took a bit of time to set up, but I’m really happy with how this turned out. No more cloud dependency, no subscriptions, and the false positive problem is basically solved. The AI descriptions are surprisingly good and the B580 handles everything without breaking a sweat.

If you’re fed up with cloud camera notifications about tree branches, give Frigate a look. The project has matured a lot and with Home Assistant it makes for a solid, reliable system.

Most importantly: I can finally stop checking my phone every time the wind blows.

Antoine Weill-Duflos
Head of Technology and Applications

My research interests include haptics, mechatronics, micro-robotics, and HCI.