Replies: 51 comments · 232 replies
-
As we've mentioned in the past when asking for feedback, please don't open bug reports or support requests for development builds. At this time, what would be most helpful to us would be to keep all of your feedback here on this post. Thanks for helping test the new features!
-
Minor UI bug in build [fbf4388-tensorrt]: from the Review screen, select an object type, click on an item to review, then click "Yes" in response to "Is this object a []?" and "Submitted" appears. Now click on the Tracking Details tab and then the Snapshot tab: the Submit to Frigate+ yes/no options reappear. The behaviour seems consistent across all object types.
-
My questions about the tracked object panel from #20748 (comment) still stand. This screen is pretty awesome, but there is still some usability I am wondering about: I can press pause and then click the selector dots along the line-of-ants, but the bounding box is not shown at each timepoint, only at the first and last detection? Let me know if you want a more well-written explanation; I am being pretty vague, I think.
-
The Apple Silicon detector doesn't reconnect if the client process is stopped and restarted. Currently you have to start the client first or the detector won't work. It would be nice if the connection were more resilient, since there is no indication in Frigate that the client is not working.
-
213a1fb: Recording itself was definitely working, since the review snapshots and timeline previews were generated correctly; Frigate just refused to play the video segments from that timeframe. I restarted the Docker container, and everything recorded after the restart works perfectly (scrubbing, playback, reviews, etc.). However, the older recordings from before the restart still won't play at all. There's no reliable way to reproduce it right now, and it only affected this single camera. No errors showed up in the logs for that camera either, so it may have just been a one-off glitch.
-
I have a classification set up for dogs and get the following when training. It happens on both of the classifications (dog and car) I have set up.
-
Testing roles:
-
Minor issue, but when using the NPU as one of my detectors, the size of the NPU frame on the statistics page is slightly off and looks weird. Not sure if this is intentional or not.
-
In 213a1fb (2 days ago), the motion debug view seems to be missing (the window with ~8 switches for what to show), or I can't find it.
-
In the add camera wizard, the little help "tooltips" (clicking on the question marks that give you extra info) flash so quickly you can't actually read them. Using Safari on macOS 26.
-
Some feedback on inference times. I run an Intel Ultra 7 265K. Testing the NPU, I get inference times of 8-10ms and very low CPU usage (less than 1%) per detector. This compares well to the iGPU, which has a similar inference time but higher CPU usage (around 1-2% with low motion, rising to about 8% when there is a lot of motion).

Overall, the NPU has similar inference times but seems to be more scalable. If I increase the number of GPU detectors, my inference times slow down significantly: with up to 4 detectors I see inference times around 25-30ms each. With the NPU, I have added 4 detectors and inference times remain at 8-10ms (possibly due to the lower CPU usage? Not sure what the mechanisms are here, to be honest).

I will continue testing to see how the inference times hold up when the wind picks up again tomorrow and movement ramps up significantly. One big caveat is that movement seems to be a big factor in CPU usage and therefore inference speed, so some of the variation I am seeing may be due to different levels of wind/movement. But overall, it seems to me that this CPU is great for quite large installs with lots of movement. I have tried up to 4 iGPU and 4 NPU detectors together (for a total of 8) with good inference times (around 10-20ms), so this should be good enough for quite a lot of cameras with a lot of movement (400-800 detections per second by my rough maths).
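For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of mixing iGPU and NPU detectors via the OpenVINO detector type. The detector names are arbitrary, and `device: NPU` assumes the OpenVINO runtime in your Frigate build exposes the NPU on this platform; treat it as an assumption and check the detector docs for your hardware.

```yaml
# Hedged sketch only: detector names are arbitrary, and "device: NPU"
# assumes OpenVINO exposes the NPU device on this CPU.
detectors:
  ov_gpu_0:
    type: openvino
    device: GPU
  ov_npu_0:
    type: openvino
    device: NPU
  ov_npu_1:
    type: openvino
    device: NPU
```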
-
Any ideas why I can't see the tracking paths when I'm viewing the details page on iOS 26.1? It works fine on my laptop.
-
I changed my media location to /mnt/storage/frigate:/media/frigate and it's saving there, but the storage page isn't reflecting that; it still shows the old location. Camera Storage also shows the old location and 0% on every camera unless I do a manual recording, and then it shows just that.

I've had the new build (System 0.17.0-213a1fb) running for almost 24 hours now and the general states page is VERY slow to load or do anything. No other pages do this, and a quick restart fixes it. The memory bug seems to be fixed; if not, it's filling up very slowly now.

I did the auto-detect for my Amcrest cameras and it worked great! It would be nice if setup asked to limit the FPS of the cameras; I see the options for detect and record. Maybe after they are set up, offer an advanced config page with more options. That would make an easier setup for the less techy types.

After adding some objects to classification and it finds them, I can right-click to select multiple on the first page/batches it finds, but my only option is to delete. I wanted to mark entire folders for training, but I had to click into each one and do them one by one. Even after clicking in, being able to select more than one at a time would be great. I just added me, dogs, and chickens after a day of recording, so we will see how it goes.

For states, I have all outdoor cameras, so I have one set to see when my gate opens and closes. I wanted to set one for my dog bed to tell when a dog was in it or not, but it would never find it; in fact nothing ever came up. I just tried to do it again and I get "No sample images generated", no matter how big I make the square. Also, does it have to be that shape? Maybe allow a rectangle? I don't know if that's possible.

I added some triggers; let's see if they work tomorrow.

Can we move "Mark these items as reviewed" to the top? I have cameras facing the road and I get hundreds if not 1k a day. I know, I know, I don't have mine set up right, but I like it this way; it works for me for reasons no one on the internet cares about.

That's all I have for now. Thanks a lot for everything!
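For reference, the media location change described above is just the Docker volume mapping; a minimal docker-compose sketch with the paths taken from the comment (the rest of the service definition is omitted):

```yaml
# Sketch of the volume mapping mentioned above; only the relevant line is shown.
services:
  frigate:
    volumes:
      - /mnt/storage/frigate:/media/frigate  # host path : container media path
```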
-
Super weird edge case for classification. TL;DR: I've moved my camera, and now the suggested images when building a new classification are mostly from the camera's old position. I created a classification a few days ago. I then tweaked the camera angle today to better see the subject and deleted the classification with the intention of re-creating it with better data. Unfortunately, the images I've been presented with are mostly from the "old" camera position and are therefore useless. I'm guessing if I try again tomorrow, there will be fewer images from the old position...
-
Hi, I just tried the latest dev build (1b57fb1-rocm) on a system with an AMD Ryzen 9 6900HX and AMD Radeon 680M iGPU. My previously used dev build (224cbdc-rocm) on that system worked without any issues. I'm seeing the following logs:
-
I updated to the latest build (1b57fb1) to fix the issue with viewer and custom roles, but the issue still persists. To clarify: when logging in as a user with a custom role or the viewer role, no cameras display in the view where all cameras or grouped cameras should show; you see a black box and a spinning loading icon. When clicking on the camera (or where the camera should be), you can live view the camera fine, so it's the grouped views that don't work.
-
Some thoughts and questions on the /classification screen:
-
I use Home Assistant to create custom events in Frigate in some cases (e.g. when door sensors are triggered). Would it make sense to include this functionality directly in Frigate, since the classification data originates from Frigate in the first place?
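For context on the current workaround, here is a minimal sketch of driving this from Home Assistant, assuming Frigate's manual event endpoint (`POST /api/events/{camera}/{label}/create`); the host, camera, and label names below are placeholders, not anything from this thread.

```yaml
# Home Assistant configuration.yaml sketch (placeholder names).
# Assumes Frigate's manual event creation endpoint; adjust host/camera/label.
rest_command:
  frigate_door_event:
    url: "http://frigate:5000/api/events/front_door/door_opened/create"
    method: post
    content_type: "application/json"
    payload: '{"duration": 30}'
```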
-
I have previously reported this, but since 0.16.1, and still in the current dev build, cold loading of the Frigate UI is still VERY VERY slow: around 10s before the spinning wheels stop and show something. It takes almost twice as long as it did in 0.16.0 and earlier releases. Using something like the Advanced Camera Card in Home Assistant with all cameras configured, it takes about 4s for the first camera to load in, then they all swap in fast. It would be nice to get to the bottom of this. I am considering upgrading my host, but based on past discussion I think it might be something in go2rtc querying the cameras every time Frigate is opened.
-
Sometimes I notice Frigate shows a camera as offline even though it's online and the main/sub views work fine (those connect directly to go2rtc); Birdseye will just show black for the camera, which I think uses the detect stream. Looking at "ps aux", I noticed some "ffmpeg [defunct]" processes. I believe Frigate is unable to spawn a new ffmpeg process when this happens: it's stuck trying to kill the existing process, which is unkillable for whatever reason, and fails to spawn a new ffmpeg, so the camera cannot resume recording/detection until the Docker container is restarted. Can anything be done to prevent this? For example, if there are defunct processes, auto-restart Frigate, or have a timeout in the watchdog and spawn a new ffmpeg process if the old one fails to exit?
-
If I could train a model to detect whether my car on the driveway has its electric mirrors folded in or out, to determine lock state, that would be great! I've tried passing camera grabs to an LLM in Home Assistant, but they've been extremely unreliable. Fingers crossed!
-
In the latest build, I see the following rules for classification models: May I be so bold as to suggest you reduce the 100% requirement to something like 95%? My experience with the classification model is that if you have a scene that isn't completely binary (the states are binary, but the POV has subtle changes), the recent classifications list fills up with a lot of 95%-and-up classifications, but I don't want to retrain those; I only want the lower percentages. I do realise it is a slippery slope, and there is no "right" answer as to what is low enough to warrant submitting for additional training, but 100% feels like too lofty a goal.
-
I know it is off topic for this discussion (if there is a separate thread for this, let me know), but what is the point of "Audio Transcription" as implemented? What benefit is it supposed to provide? It can display temporary, real-time transcripts, or transcripts can be requested manually for events? Neither of the use cases I would have wanted it for seems possible with the current implementation.
-
I think auth with a proxy is broken in the 0.17 97b29d1-tensorrt builds, because whatever I do it forces me straight to the login page. My setup worked before on 0.16 tensorrt with this nginx proxy manager config:
}
And my logs:
-
Frigate 0.17 includes many new features, some of which require additional feedback to ensure a smooth user experience. Frigate 0.17 is still under active development and is not recommended for use in production systems (there may be large / breaking bugs).
As we get closer to a beta cycle we are looking to get additional feedback from users in some specific areas so that we may get a better understanding of how things are working and fix any bugs.
We are currently looking for feedback on everything in the release, but are specifically looking for feedback on the items below.
If you experience any bugs with config migration or with existing functionality, please give that feedback as well.
Please leave all feedback in this discussion, do not create separate bug reports or support discussions.
The docs are: https://deploy-preview-19787--frigate-docs.netlify.app/
Object Detectors
RockChip NPU: RockChip now supports automatic model conversion, which means Frigate+ and enrichment models are automatically converted to RKNN format and run on the NPU. We are looking for general feedback such as inference speed and any issues.
Apple Silicon: Apple Silicon is now supported. This requires setting up a plugin on the host so Frigate can communicate with the GPU/NPU. See the docs.
Classification Models
Frigate now has end-to-end support, including training, for classification models. These come in two types: state classification (e.g. gate open / closed) and object classification (e.g. dog recognition). You can use the UI to create and start training the model. We are specifically looking for examples of accuracy (good and bad) with images, any feedback on the setup wizard, bugs, etc.
https://deploy-preview-19787--frigate-docs.netlify.app/category/custom-classification
Triggers
Triggers utilize Semantic Search to automate actions when a tracked object matches a specified image or description. Triggers can be configured so that Frigate executes specific actions when a tracked object's image or description matches a predefined image or text, based on a similarity threshold. Triggers are managed per camera and can be configured via the Frigate UI on the Settings page under the Triggers tab.
https://deploy-preview-19787--frigate-docs.netlify.app/configuration/semantic_search#triggers
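For readers who prefer config over the UI, a purely illustrative sketch of what a per-camera trigger might look like. The key names and structure below are assumptions on my part, not a confirmed schema; treat the linked docs as the source of truth.

```yaml
# Hypothetical sketch only: key names are assumptions, not a confirmed schema.
# The concept shown is a per-camera trigger that fires an action when a
# tracked object's description is similar enough to a reference phrase.
cameras:
  front_door:
    semantic_search:
      triggers:
        person_with_package:
          type: description
          data: "a person carrying a package to the door"
          threshold: 0.75
          actions:
            - notification
```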
Review Item Descriptions
Frigate now supports using GenAI to summarize review items. Unlike object descriptions, which are searchable, review item descriptions are structured metadata that include suspicious activity categorization. This allows the Frigate UI and notifications to show the title/description along with indicators that something may require review, based on the GenAI's understanding of the activity.
https://deploy-preview-19787--frigate-docs.netlify.app/configuration/genai/genai_review/
UI Changes
Frigate 0.17 has many UI changes and additional features, some of which would be good to get additional feedback on for layout or behavior bugs: