Frigate is NVR software with motion detection, object detection, recording, and more. It has matured a lot over the past couple of years and I’m really happy with it.

I’ve been running Frigate for a while, but with version 0.17.0 it sounded like things had changed enough for me to update how I do things. I’m writing all of the following in case anyone else is in the same boat. There’s a lot to read, but hopefully it helps make sense of the options.

Keeping my camera feeds the same, I was interested in switching my object detector from a Google Coral to the integrated graphics in my 13th-gen Intel CPU. The main reason was that the Google Coral was flaky and I was having to reboot all the time. Maybe that’s because I run Frigate in a virtual machine on Proxmox, so the Coral has to be passed through to the VM? Not sure.

I also wanted to figure out how to get the camera streams to work better in Home Assistant.

Switching from Google Coral to OpenVINO

This was relatively straightforward. I mostly followed these directions and ended up with:

detectors:  
  ov:  
    type: openvino  
    device: GPU  

Switching from the default to YOLOv9

Frigate comes with a default model that detects objects such as person and car. I kept hearing that YOLOv9 was more accurate, and they even got YOLOv9 working with Google Coral devices, just with a limited set of objects. So I wanted to switch.

This took me a minute to wrap my head around since it’s not enabled out of the box.

I added the following to my config based on these directions:

model:  
  model_type: yolo-generic  
  width: 320 # <--- should match the imgsize set during model export  
  height: 320 # <--- should match the imgsize set during model export  
  input_tensor: nchw  
  input_dtype: float  
  path: /config/model_cache/yolo.onnx  
  labelmap_path: /labelmap/coco-80.txt  

… except for me the yolo file is called yolov9-t-320.onnx instead of yolo.onnx… but I could have just as easily renamed the file.

That brings us to the next part – how to get the yolo.onnx file. It’s a bit buried in the documentation, but I ran the commands provided here. I just copied the whole block of commands and ran them all at once. The result is an .onnx file in whatever folder you’re currently in.

The .onnx file needs to be copied to /config/model_cache/, wherever that might be based on your Docker Compose.
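A sketch of that copy step, assuming the export ran in the current directory and your compose maps a host-side ./config folder to the container’s /config (both paths are examples, not from my setup — adjust them to wherever your bind mount actually lives):

```shell
touch ./yolov9-t-320.onnx        # stand-in for the file the export step produced
mkdir -p ./config/model_cache    # host side of the container's /config bind mount (example path)
cp ./yolov9-t-320.onnx ./config/model_cache/
ls ./config/model_cache/
```

After a restart, the model path in the Frigate config (/config/model_cache/yolov9-t-320.onnx) should resolve inside the container.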

That made me wonder about the other file, coco-80.txt. Well, it turns out coco-80.txt is already included inside the container, so there’s nothing to do there. That file is handy, though, because it lists the 80 possible things you can track. Here’s the list on GitHub.
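For example, picking a few labels from coco-80.txt and putting them under a camera’s tracked-object list looks like this (the camera name here is just a placeholder):

```yaml
cameras:
  some_cam:
    objects:
      track:
        - person
        - car
        - dog
```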

I won’t go over the rest of the camera/motion configuration, because if you’re doing this then you definitely need to dive into the documentation for a bunch of other stuff.

Making the streams work in Home Assistant

I’ve had the Frigate integration running in Home Assistant for a long time, but clicking on the cameras only showed a still frame, and no video would play.

Home Assistant is not on the same host as Frigate, by the way. If it were, I’d have an easier time with this, but that’s not how mine is set up.

It turns out my problem was caused by using go2rtc in my Frigate setup. go2rtc is great and acts as a re-streamer, so viewers pull from go2rtc instead of each opening its own connection to the camera. This can reduce bandwidth, which is especially important for Wi-Fi cameras. But it’s optional, and I learned that I don’t want it.

go2rtc should work with Home Assistant if they’re both running on the same host (same IP address), or if you run the Docker stack with network_mode: host so it has full access to everything. I tried doing that, but for some reason Frigate got into a boot loop, so I changed it back to the bridge network that I had previously.
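For reference, the host-network variant is a one-line change in the compose file; a minimal sketch (service name and image tag are examples, not my exact stack):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    network_mode: host   # replaces any ports: mappings; the container shares all host ports
```

Note that with network_mode: host you remove the ports: section entirely, since every port the container opens is already on the host.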

The reason for this, apparently, is that go2rtc requires more than the published ports the Docker instructions say to open. Maybe it uses random ports (WebRTC, for example, negotiates ephemeral UDP ports) or some other network magic. I’m not sure.

The downside of not having go2rtc is that the camera feeds in the Frigate UI are limited to 720p. I can live with that. The feeds in Home Assistant are still full quality, and recordings are still full quality.

By removing go2rtc from my config, Home Assistant now streams directly from the cameras themselves instead of looking for the go2rtc restream. You may have to click “Reconfigure” in the Home Assistant integration for the API to catch up.

Hope this helps. If not, sorry you had to read all of this.

  • Kupi@sh.itjust.works · 33 minutes ago
    I’ve been trying to configure Frigate for a few days now, and I’ve got it all working by restreaming through go2rtc, because the Wi-Fi cameras I have only allow a limited number of connections, and I can view my cameras just fine in the portal. But I gave up trying to add them to Home Assistant because no matter what I did, I would only get a still image.

    My setup seems the same as yours (Frigate in Docker via a Proxmox LXC), but I don’t have any external devices, just the CPU of my server.

    Would it be possible to see your config file for this? I’m having a hard time understanding how you removed go2rtc. Also, are you using substreams at all?

    • walden@wetshav.ingOP · 4 minutes ago

      I don’t have an external GPU either; just the onboard Intel graphics is what I use now. Also worth mentioning: to use integrated graphics, your Docker Compose needs:

      devices:
        - /dev/dri/renderD128:/dev/dri/renderD128
      

      I’m not using substreams. I have 2 cameras and the motion detection doesn’t stress the CPU too much. If I add more cameras I’d consider using substreams for motion detection to reduce the load.
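      For what it’s worth, the usual substream pattern in Frigate is to give the low-resolution stream the detect role and the high-resolution stream the record role; a sketch with placeholder URLs (the /main and /sub paths vary by camera brand):

      ```yaml
      ffmpeg:
        inputs:
          - path: rtsp://user:pw@<ip-addr>:554/main   # high-res stream
            roles:
              - record
          - path: rtsp://user:pw@<ip-addr>:554/sub    # low-res stream
            roles:
              - detect
      ```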

      Your still frames in Home Assistant are the exact problem I was having. If your cameras really do need go2rtc to reduce connections (my wifi camera doesn’t seem to care), you might try changing your Docker container to network_mode: host and see if that fixes it.

      Here’s my config. Most of the comments were put there by Frigate, and I’ve de-identified everything. Notice at the bottom that go2rtc is all commented out, so if I want to add it back I can just remove the #s. Hope it helps.

      config.yaml
      mqtt:
        enabled: true
        host: <ip of Home Assistant>
        port: 1883
        topic_prefix: frigate
        client_id: frigate
        user: mqtt username
        password: mqtt password
        stats_interval: 60
        qos: 0
      
      cameras:     # No cameras defined, UI wizard should be used
        baby_cam:
          enabled: true
          friendly_name: Baby Cam
          ffmpeg:
            inputs:
              - path: 
                  rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
                roles:
                  - detect
                  - record
            hwaccel_args: preset-vaapi
          detect:
            enabled: true # <---- disable detection until you have a working camera feed
            width: 1920 # <---- update for your camera's resolution
            height: 1080 # <---- update for your camera's resolution
          record:
            enabled: true
            continuous:
              days: 150
            sync_recordings: true
            alerts:
              retain:
                days: 150
                mode: all
            detections:
              retain:
                days: 150
                mode: all
          snapshots:
            enabled: true
          motion:
            mask: 0.691,0.015,0.693,0.089,0.965,0.093,0.962,0.019
            threshold: 14
            contour_area: 20
            improve_contrast: true
          objects:
            track:
              - person
              - cat
              - dog
              - toothbrush
              - train
      
        front_cam:
          enabled: true
          friendly_name: Front Cam
          ffmpeg:
            inputs:
              - path: 
                  rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
                roles:
                  - detect
                  - record
            hwaccel_args: preset-vaapi
          detect:
            enabled: true # <---- disable detection until you have a working camera feed
            width: 2688 # <---- update for your camera's resolution
            height: 1512 # <---- update for your camera's resolution
          record:
            enabled: true
            continuous:
              days: 150
            sync_recordings: true
            alerts:
              retain:
                days: 150
                mode: all
            detections:
              retain:
                days: 150
                mode: all
          snapshots:
            enabled: true
          motion:
            mask:
              - 0.765,0.003,0.765,0.047,0.996,0.048,0.992,0.002
              - 0.627,0.998,0.619,0.853,0.649,0.763,0.713,0.69,0.767,0.676,0.819,0.707,0.839,0.766,0.869,0.825,0.889,0.87,0.89,0.956,0.882,1
              - 0.29,0,0.305,0.252,0.786,0.379,1,0.496,0.962,0.237,0.925,0.114,0.879,0
              - 0,0,0,0.33,0.295,0.259,0.289,0
            threshold: 30
            contour_area: 10
            improve_contrast: true
          objects:
            track:
              - person
              - cat
              - dog
              - car
              - bicycle
              - motorcycle
              - airplane
              - boat
              - bird
              - horse
              - sheep
              - cow
              - elephant
              - bear
              - zebra
              - giraffe
              - skis
              - sports ball
              - kite
              - baseball bat
              - skateboard
              - surfboard
              - tennis racket
            filters:
              car:
                mask:
                  - 0.308,0.254,0.516,0.363,0.69,0.445,0.769,0.522,0.903,0.614,1,0.507,1,0,0.294,0.003
                  - 0,0.381,0.29,0.377,0.284,0,0,0
          zones:
            Main_Zone:
              coordinates: 0,0,0,1,1,1,1,0
              loitering_time: 0
      
      detectors: # <---- add detectors
        ov:
          type: openvino
          device: GPU
      
      model:
        model_type: yolo-generic
        width: 320 # <--- should match the imgsize set during model export
        height: 320 # <--- should match the imgsize set during model export
        input_tensor: nchw
        input_dtype: float
        path: /config/model_cache/yolov9-t-320.onnx
        labelmap_path: /labelmap/coco-80.txt
      
      version: 0.17-0
      
      
      #go2rtc:
      #  streams:
      #    front_cam:
      #      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
      #    baby_cam:
      #      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
      
    • walden@wetshav.ingOP · 2 hours ago

      Sounds like LXC is the way to go to pass a Coral through. Not sure why it’s so flaky with the Debian VM.

  • jaschen306@sh.itjust.works · 3 hours ago

    I have used Frigate on bare metal with an E-key M.2 Google Coral, then I moved it to a Synology plus a USB Coral. On both systems I have never had any issues with the Coral. If I had to nitpick, it would be that Frigate would detect persons incorrectly, like a blanket or my couch.

    • non_burglar@lemmy.world · 2 hours ago

      I’m in a similar situation: I have a Coral TPU, but I’ve switched to OpenVINO, and I see fewer false positives as well.

      I suspect the Frigate devs aren’t working as hard on keeping the Coral working with their ML models. Also, that Coral driver is pretty stale; it’s from the 2014 era of Google Maps blurring car license plates.

    • walden@wetshav.ingOP · 2 hours ago

      That’s good to hear. That reinforces my suspicion that my problems were caused by passing it through to the virtual machine using Proxmox.

      You might be interested in trying to enable the YOLOv9 models. The developer claims they are more accurate, and so far I’m tempted to agree.

  • CmdrShepard49@sh.itjust.works · 10 hours ago

    I also have Frigate on Proxmox with a Google Coral, but mine has been rock solid. The only difference is that I use an LXC instead of a VM. I recall there being more issues passing hardware to VMs in Proxmox, since they don’t like to share.

  • frongt@lemmy.zip · 11 hours ago

    Yeah, you probably need to pass the TPU to the VM directly. But OpenVINO on CPU has been just fine for me.

    Although I’ve noticed in 0.17 it’s started complaining that ov takes a long time, with an absurdly large value in ms. Nothing seems to be broken, and restarting the container clears it.

    • walden@wetshav.ingOP · 2 hours ago

      I’ll keep an eye out for that. So far the inference speed is holding steady at 8.47 ms.

      Are you using OpenVINO with the onboard GPU, or the CPU? I think it works with both, so you want to make sure it’s using the GPU if possible.

      • frongt@lemmy.zip · 34 minutes ago

        I checked and technically it’s on the GPU, but it’s Intel integrated graphics (i7-11700T). I don’t have a separate GPU in that system. Everything seems to work fine, even when it’s complaining about speed.

        It might also be due to these being USB cameras (long story) and if the stream drops, ffmpeg crashes and restarts.

        • walden@wetshav.ingOP · 14 minutes ago

          That CPU has UHD Graphics 750, which is newer than mine, which has the 730. It should work quite nicely.

          Are you using Proxmox, too?

          • frongt@lemmy.zip · 8 minutes ago

            Yes, and for extra fun I’m running this in Docker on Debian in a VM, with the GPU function of the CPU passed through to Debian. I migrated from VMware years ago and never bothered trying Proxmox containers.

  • jake_jake_jake_@lemmy.world · 11 hours ago

    I use Frigate and Home Assistant; they are on different hosts, and the only port allowed from Frigate to Home Assistant is the non-auth API port. For normal users using Frigate, I use an OAuth2-proxy instance on the same host (same compose) as Frigate, tied to a third host running Keycloak. go2rtc is on the Frigate host, but it only talks to Frigate and the cameras themselves. You can also access go2rtc from outside if you want to reach the streams directly, but your Home Assistant does not need to. I find this better than hitting the cameras directly, since their hardware is not really meant to serve a whole bunch of streams at once.

    I followed the docs for the Home Assistant to Frigate stuff with the GIF notifications, and it is working fine. I also use the Frigate integration (via HACS), so maybe a lot is done for me.

    • walden@wetshav.ingOP · 3 hours ago

      You seem a bit more network savvy than me. All I could figure is the Frigate integration (also HACS for me) talks to Frigate and asks it where to get the video from. If go2rtc is enabled in Frigate, the integration tries to stream from go2rtc. Without my Docker stack being in host network mode, it wouldn’t work for me.

      With no go2rtc, the Frigate integration asks Frigate where to get the stream, and it’s told to get it from the camera from what I can tell.

      All just guesses on my end. Hopefully I don’t sound too sure of myself because I’m not really sure.