What is the difference between frigate and frigate+ when i use face recognition? #19583
-
Key Differences Between Frigate and Frigate+ for Face RecognitionThe main differences between Frigate and Frigate+ when using face recognition relate to face detection efficiency, not the recognition process itself. Face Detection Model DifferencesWith Frigate+ models: When running a Frigate+ model (or any custom model that natively detects faces), face detection runs at the same time as object detection and is more efficient(1). You should ensure that With default COCO models: When running a default COCO model or another model that does not include Answers to Your Specific Questions1. Image Source for Recognition: Face recognition runs on the camera stream defined by the 2. Source of Truth for Face Identity: Yes, regardless of the model type, face identity assignment is handled through the Face Library UI. When Frigate detects and attempts to recognize a face, it will appear in the Train tab of the Face Library, along with its associated recognition confidence(1). From the Train tab, you can assign the face to a new or existing person to improve recognition accuracy for the future(1). 3. Scope of Frigate+ Model Training: Frigate+ models are object detection models trained to detect a more relevant set of objects for security cameras(2). The 4. Efficiency Gains Explanation: The efficiency gain means that with Frigate+ models, face detection happens simultaneously with object detection in a single inference pass, rather than requiring separate processing steps. This eliminates the need to run additional face detection processing after detecting a person. Configuration NoteBased on your configuration showing 📚 Sources: Have another question? Just tag @inkeep. |
-
For large-scale detection and face recognition with 10+ cameras, you'll need to consider both object detection and face recognition hardware requirements.

**Hardware Recommendations for Large-Scale Deployments**

For Commercial Installations (40+ cameras): Commercial installations, or scenarios where a substantial number of cameras capture activity, will necessitate robust object detection capabilities. You'll likely need a modern discrete GPU(1).

For your 10-camera setup: You fall into the "Moderate Simultaneous Activity" category: 6-12 cameras with some overlapping motion. This increases the load on your object detection hardware, requiring more processing power(1).

**GPU Options for Object Detection**

TensorRT - Nvidia GPU: Frigate can utilize an Nvidia GPU that supports the 12.x series of CUDA libraries(2); see the inference speeds listed in the documentation for model-by-model comparisons.

**Face Recognition Hardware Requirements**

Minimum for Enrichments: 8GB RAM. If you plan to utilize Frigate's enrichment features (e.g., facial recognition, license plate recognition, or other AI models that run alongside standard object detection), 8GB of RAM should be considered the minimum(1).

Recommended: 16GB RAM. For most users, especially those with many cameras (8+) or who plan to heavily leverage enrichments, 16GB of RAM is highly recommended(1).

**GPU Memory Considerations**

From the GitHub discussion, a user with a T400 (2GB VRAM) experienced memory allocation issues when running the large face recognition model alongside semantic search(4). The solution involved setting LPR to run on the CPU and using the small semantic search model to free up GPU memory(4). For your upgrade, consider GPUs with significantly more VRAM than the T400's 2GB to handle both object detection and face recognition efficiently.
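As a hedged sketch of that VRAM workaround (option names follow the Frigate enrichment docs; verify them against your version):

```yaml
# Move LPR to CPU and use the small semantic search model to free
# GPU memory for the large face recognition model, as described above.
lpr:
  enabled: true
  device: CPU
semantic_search:
  enabled: true
  model_size: small
face_recognition:
  enabled: true
  model_size: large
```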
-
Sorry for the frequent questions. When I use the face library, why are the face images much more blurred than the snapshots? I have increased the resolution of my camera. What's more, I found that when I applied Frigate+ and used the basic model path provided on the Frigate+ website, the images became blurred as well.
-
How can I solve the motion blur? I always capture some blurred images when a person is moving, which makes correct face recognition hard. Can I fix this by revising the configuration, or do I need to do something with my camera? @inkeep
-
Motion blur in face snapshots is usually caused by the person moving quickly, resulting in blurry frames in your camera's stream. The most effective way to reduce motion blur is to adjust your camera settings, specifically by increasing the shutter speed (sometimes called exposure time) so that each frame captures less movement. This setting is usually found in your camera's own firmware or web interface, not in Frigate's configuration(1). Increasing the detect stream's resolution in Frigate can help with detail, but it won't solve motion blur if the original video is already blurry due to camera exposure settings(1)(2). The Frigate documentation emphasizes that the quality of face recognition depends on the clarity and sharpness of the face in the image, and that blurry images will reduce recognition accuracy(1)(3).

Summary: fix motion blur at the camera by increasing the shutter speed; Frigate settings can add detail but cannot sharpen frames that arrive blurry.
-
There is no need to revise your Frigate configuration to address motion blur caused by a moving person. Motion blur is primarily determined by your camera's shutter speed or exposure settings, which must be adjusted in your camera's own firmware or web interface, not in Frigate's config(1). Increasing the detect stream's resolution in Frigate can improve detail but will not fix motion blur if the original camera video is already blurry due to fast movement(1)(2).
-
Frigate does not provide a built-in configuration option to directly set a maximum storage limit (for example, a maximum size or percentage of disk used) for clips or recordings. Instead, Frigate manages storage by retaining files for a specified number of days, which you can configure in your YAML under the retain section for clips, alerts, detections, and recordings. For example:

```yaml
record:
  enabled: True
  alerts:
    retain:
      days: 10  # Number of days to keep alert recordings
  detections:
    retain:
      days: 10  # Number of days to keep detection recordings
```

This sets how long segments overlapping with alerts and detections are kept. You can adjust the days value to control how much storage is used over time; reducing the number will cause Frigate to delete older files sooner, indirectly limiting storage usage(1). There is no official configuration for limiting storage by size or percentage, but there is community interest in such a feature(2). For now, you must manage storage by tuning retention days.
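Since retention is time-based, it helps to estimate how many days will fit on your disk. A back-of-envelope sketch (the per-camera figure is an assumption; measure yours from a real day of recordings):

```python
def max_retention_days(budget_gb: float, cameras: int, gb_per_cam_day: float) -> int:
    """Rough estimate of how many retention days fit in a storage budget."""
    return int(budget_gb / (cameras * gb_per_cam_day))

# e.g. a 2 TB budget, 12 cameras, ~20 GB/camera/day (roughly a 2 Mbps stream)
print(max_retention_days(2000, 12, 20))  # -> 8
```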
-
Based on your setup with an RTX 4070 Super, you should be using the GPU for object detection instead of the CPU detector. High CPU usage during detection typically means Frigate is still using the CPU detector rather than leveraging your GPU(1)(2).

**Configure ONNX Detector with TensorRT**

For Nvidia GPUs, you need to configure the ONNX detector, which will automatically use TensorRT in the -tensorrt image(2):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

**Use the Correct Docker Image**

Ensure you're using the stable-tensorrt image:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

**Enable Hardware Acceleration for Video Decoding**

Your CPU is also likely being used heavily for video decoding. With 12 cameras, you should enable hardware-accelerated decoding using your Nvidia GPU(3):

```yaml
ffmpeg:
  hwaccel_args: preset-nvidia
```

**Key Points**

Verify your GPU is actually being used by checking GPU utilization (for example with nvidia-smi) while detection is running.
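One quick way to confirm, assuming your container is named frigate and the NVIDIA container toolkit is installed:

```bash
# GPU utilization and the processes using the GPU; Frigate's detector
# and ffmpeg processes should appear here once offloading works.
docker exec frigate nvidia-smi
```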
-
The restart issue is likely caused by missing model files. For the ONNX detector with Nvidia GPUs, you need to download a supported ONNX model file(1).

**Downloading a YOLO-NAS Model**

YOLO-NAS models are supported but not included by default(1). You'll need to download the model manually and place it in your Frigate config directory. After placing the downloaded ONNX model in your config folder, use this configuration(1):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

**Alternative: Use YOLOv9 (Recommended for Nvidia GPUs)**

For Nvidia GPUs, YOLOv9 models are recommended(2). If you're a Frigate+ subscriber, you can use a Frigate+ YOLOv9 model instead(2):

```yaml
detectors:
  onnx:
    type: onnx

model:
  path: plus://your_model_id_here
```

When using a Frigate+ model, you should not define any other model parameters except for path.

**Important Notes**

Check your Frigate logs to see the specific error message about why it's restarting; this will help identify whether it's a missing model file or another configuration issue.
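For example, assuming the container is named frigate:

```bash
# Follow the logs to catch the startup error before the container restarts
docker logs -f frigate
```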
-
The folders you see in your config directory are created automatically by Frigate; the detection model itself is not included and must be downloaded separately.

**Download a YOLOv9 Model (Recommended for Nvidia GPUs)**

YOLOv9 models can be exported as ONNX by running this command(1):

```bash
docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
FROM scratch
ARG MODEL_SIZE
ARG IMG_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
EOF
```

This will create a file like yolov9-t-320.onnx in the current directory.

**Configure Frigate to Use the Model**

After placing the model file in your config directory, update your configuration(1):

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolo-generic
  width: 320
  height: 320
  input_tensor: nchw
  input_dtype: float
  path: /config/yolov9-t-320.onnx
  labelmap_path: /labelmap/coco-80.txt
```

**Alternative: Use a Frigate+ Model (Easier)**

If you're a Frigate+ subscriber, you can use a pre-trained model without a manual download(2); YOLOv9 models are available through Frigate+:

```yaml
detectors:
  onnx:
    type: onnx

model:
  path: plus://your_model_id_here
```

When using a Frigate+ model, you should not define any other model parameters except for path.

**Verify Your Docker Image**

Make sure you're using the stable-tensorrt image:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```

The ONNX detector will automatically use TensorRT in this image when configured(1).
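To finish up, a hedged usage sketch (host paths are placeholders for your actual config mount):

```bash
# Copy the exported model into Frigate's config volume, then restart
cp yolov9-t-320.onnx /opt/frigate/config/
docker restart frigate
```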
-
That's excellent news! Your configuration with the ONNX detector and RTX 4070 Super is now working efficiently(1). With detector CPU usage at 20% and GPU at 22%, you've successfully offloaded object detection from the CPU to your Nvidia GPU(1)(2). Your setup demonstrates the performance benefits of using the ONNX detector with TensorRT on Nvidia GPUs(1).

For your 12-camera setup, this configuration should provide the scalability you need(2). If you ever need to handle additional cameras and find that one detector isn't keeping up, you can define multiple detectors, assuming GPU resources are available(1):

```yaml
detectors:
  onnx_0:
    type: onnx
  onnx_1:
    type: onnx
```

Thanks for sharing your success and for spreading the word about Frigate!
-
Based on your recent switch to the ONNX detector, there are a few key configuration points to check for face detection(1)(2).

**Model Requirements for Face Detection**

When running a Frigate+ model (or any custom model that natively detects faces), you should ensure that `face` is included in your list of tracked objects. However, when running a default COCO model or another model that does not include `face` as a detectable label, face detection runs as a separate step after a person is detected.

**Configuration Check**

Since you switched to the ONNX detector, verify that `face` is still being tracked:

```yaml
objects:
  track:
    - person
    - face
```

**Face Recognition Configuration**

Ensure face recognition is still enabled in your config(1)(3):

```yaml
face_recognition:
  enabled: true
  model_size: large
```

**Hardware Considerations**

The large model runs on the GPU when available, while the small model runs on the CPU(3); with your RTX 4070 Super, large is the appropriate choice.

**Troubleshooting Steps**

If you're still experiencing issues, please share your current configuration and any relevant log entries so we can identify the specific problem.
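For reference, a consolidated sketch of the settings checked above (the `plus://` ID is a placeholder; values mirror this thread rather than general recommendations):

```yaml
detectors:
  onnx:
    type: onnx

model:
  path: plus://your_model_id_here  # Frigate+ model that detects faces natively

objects:
  track:
    - person
    - face

face_recognition:
  enabled: true
  model_size: large
```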
-
To record videos with 5 seconds before detection and 10 seconds after detection, you need to configure the pre_capture and post_capture settings under record(1)(2).

**Configuration for Pre and Post Capture**

Add this to your configuration file(1)(2):

```yaml
record:
  enabled: True
  alerts:
    pre_capture: 5
    post_capture: 10
    retain:
      days: 30
      mode: motion
  detections:
    pre_capture: 5
    post_capture: 10
    retain:
      days: 30
      mode: motion
```

**Important Notes**

Recording must be enabled: Recordings must be enabled in the config(1). If a camera has recordings disabled in the config, enabling via other methods will have no effect(1).

Pre/post capture requires recorded segments: For pre_capture and post_capture to work properly, you need to have some form of continuous or motion recording enabled(1)(3); Frigate can only prepend video that was actually being recorded before the event.

Retention mode matters: The mode setting determines which segments are kept (all, motion, or active_objects)(1).

**How It Works**

Frigate saves from the stream with the record role(1); the pre_capture and post_capture seconds are taken from those saved segments around each alert or detection (see the worked example below).
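A small sketch of the arithmetic, assuming an alert that lasts 30 seconds:

```python
# With pre_capture=5 and post_capture=10, a 30-second alert yields a
# 45-second clip: 5 s of lead-in, the event itself, and 10 s of tail.
event_start, event_end = 100.0, 130.0  # seconds on the recording timeline
pre_capture, post_capture = 5, 10
clip_start = event_start - pre_capture  # 95.0
clip_end = event_end + post_capture     # 140.0
print(clip_end - clip_start)            # 45.0
```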
-
This error means Frigate cannot move recording segments from the cache to disk fast enough(1). Recording segments are initially written to cache (/tmp/cache) and only moved to disk if they match the retention policy(1).

**Troubleshooting Steps**

Enable debug logging to see how long it takes to move recordings from cache to disk(1):

```yaml
logger:
  logs:
    frigate.record.maintainer: debug
```

This will log how long each segment copy takes.

**Common Causes and Solutions**

If copy times are greater than 1 second: the storage is too slow(1). Check the drive's health and raw write speed first (see the sketch below).
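A rough way to measure the drive's sequential write throughput (the path is an assumption; substitute your recordings mount):

```bash
# Write 1 GiB directly to the recordings disk, bypassing the page cache,
# then clean up. Compare the reported MB/s to your cameras' combined bitrate.
dd if=/dev/zero of=/media/frigate/ddtest bs=1M count=1024 oflag=direct
rm /media/frigate/ddtest
```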
You can also constrain the container's memory and disable swapping in Docker Compose(1):

```yaml
mem_limit: <MAXRAM>
memswap_limit: <MAXSWAP>
mem_swappiness: 0
```

If copy times are less than 1 second: the CPU load may be too high for Frigate to keep up(1). Try temporarily shutting down other services to see if the issue improves(1).

**Storage Configuration**

Frigate uses /tmp/cache as the temporary location for recording segments; you can back it with RAM using a tmpfs mount(1):

```yaml
services:
  frigate:
    volumes:
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
```
-
@inkeep How can I record 24/7 on all cameras?
-
To enable 24/7 continuous recording on all cameras, configure the record section at the global level(1)(2):

```yaml
record:
  enabled: True
  retain:
    days: 3
    mode: all
```

This configuration will save all recording segments regardless of activity(1)(2). The all retention mode keeps every segment for the configured number of days.

You can also configure separate retention for alerts and detections(1)(2):

```yaml
record:
  enabled: True
  retain:
    days: 3
    mode: all
  alerts:
    retain:
      days: 30
      mode: motion
  detections:
    retain:
      days: 30
      mode: motion
```

This example keeps all video for 3 days, then retains only motion segments overlapping with alerts or detections for up to 30 days(1).

Important: Continuous recording (mode: all) uses significantly more storage, so size your retention days to your available disk.

For each camera, ensure the record role is assigned to one of its inputs:

```yaml
cameras:
  your_camera:
    ffmpeg:
      inputs:
        - path: rtsp://your_camera_stream
          roles:
            - record
```
-
Based on your setup with 84TB of storage but inability to play back recordings, the issue is likely your shared memory (shm) allocation.

**The Problem: Insufficient Shared Memory**

Frigate uses shared memory to temporarily store decoded video frames before processing. With 12 cameras, 1.48 GiB is insufficient, which can prevent recordings from being written or played back properly(1).

**Solution: Increase Shared Memory Size**

Update your shm_size in Docker Compose(1):

```yaml
services:
  frigate:
    shm_size: '512mb'  # Increase this value based on your camera count
```

For 12 cameras, a recommended starting point would be at least 512MB to 1GB of shared memory, though you may need more depending on your camera resolutions(1).

**Verify Recording Configuration**

Ensure recording is properly enabled in your configuration(2)(1):

```yaml
record:
  enabled: True
  retain:
    days: 2  # Or your desired retention
    mode: all
```

**Additional Considerations**

Recordings are stored under /media/frigate/recordings(2). New recording segments are written from the camera stream to cache first, then moved to disk if they match the retention policy(2).

After increasing your shared memory size and restarting Frigate, check whether recordings are being created and can be played back.
-
Great to hear you resolved your recording issue! Moving the recording cache off NFS and onto local storage addresses exactly the bottleneck described earlier(1).

**Why This Fixed Your Issue**

The /tmp/cache directory is where Frigate initially writes recording segments before moving them to disk(1). When that cache sits on slow or high-latency storage such as NFS, segment copies cannot keep up. Moving it to local storage (or backing it with a tmpfs mount) lets the copies complete quickly:

```yaml
volumes:
  - type: tmpfs
    target: /tmp/cache
    tmpfs:
      size: 1000000000
```

**Troubleshooting Recording Issues**

If you had enabled debug logging as recommended in the troubleshooting guide, you would have seen copy times exceeding 1 second while the cache was on NFS(1):

```yaml
logger:
  logs:
    frigate.record.maintainer: debug
```

Copy times greater than 1 second indicate that storage is too slow to keep up with recordings(1). Your solution of moving the cache to local storage directly addresses this bottleneck.

Thanks for sharing your resolution; it may help others experiencing similar issues!
-
Great to hear that increasing your shared memory (shm) allocation to 8GB has resolved many of your issues(1). Shared memory is critical for Frigate's operation because it's used as an internal cache for raw decoded frames(1). The minimum shm size depends on your camera resolution and the number of cameras you're running(1). You can calculate the minimum shm size for each camera using this formula(1):

```bash
# Template for one camera without logs, replace <width> and <height>
$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 20 + 270480) / 1048576))'

# Example for 1280x720, including logs
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576 + 40))'
66.63MB

# Example for eight cameras detecting at 1280x720, including logs
$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 8 + 40))'
253MB
```
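Applying the same formula to this thread's setup, a worked example (assuming all 12 cameras detect at 1920x1080, per the config posted in the original question):

```bash
# Minimum shm for twelve 1920x1080 detect streams, including logs
$ python -c 'print("{:.2f}MB".format(((1920 * 1080 * 1.5 * 20 + 270480) / 1048576) * 12 + 40))'
755.01MB
```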
253MB With your 12 cameras, 8GB (8192MB) of shm provides substantial headroom, which explains why you're seeing improved stability(1). This is especially important when running multiple high-resolution cameras simultaneously(1). In Docker Compose, your shm configuration would look like this(1): services:
frigate:
container_name: frigate
shm_size: "8gb"Your current setup with 8GB shm, the ONNX detector on your RTX 4070 Super, and hardware-accelerated video decoding should provide excellent performance for your 12-camera installation(1)(2). 📚 Sources: Have another question? Just tag @inkeep. |
-
Hi everyone,
I'm currently using Frigate v0.16 and am a Frigate+ subscriber. I have a few questions to clarify the workflow between Frigate, Frigate+, and the face recognition process. I want to make sure my understanding is correct before I proceed further with training.
Here is my understanding and my questions:
Image Source for Recognition: Are the images displayed in the "Explore" UI, and the source images used for face recognition, generated from snapshots of the detect stream?
Source of Truth for Face Identity: Is it correct to assume that regardless of whether I use a Frigate model or a Frigate+ model, the process of assigning a specific name to a face (e.g., 'John Doe') is always handled through the 'Face Library' UI?
Scope of Frigate+ Model Training: When I upload images to Frigate+ and use 'face annotation' to then train a new model, is the sole purpose of this training to improve the model's ability to detect the face object? In other words, this Frigate+ training has no impact on the recognition of specific individuals.
Clarification on Efficiency Gains: The instruction mentions, "However, using a Frigate+ model (or a custom model that detects face natively) enhances efficiency by eliminating the need for separate face detection per person."
Could you please elaborate on what "eliminating the need for separate face detection per person" means technically?
Here is my config.yml:

```yaml
mqtt:
  enabled: true
  host:
  topic_prefix:
  user:
  password:

detectors:
  onnx:
    type: onnx

model:
  path: plus://717c77b1c548a9f5371b44e1ec8466fd

detect:
  enabled: true
  width: 1920
  height: 1080
  fps: 15
  stationary:
    interval: 10
    threshold: 50

objects:
  track:
    - person
    - face

face_recognition:
  enabled: true
  unknown_score: 0.6
  recognition_threshold: 0.7
  min_area: 500
  min_faces: 1
  save_attempts: 1000
  blur_confidence_filter: true
  model_size: large

cameras:
  testcamera1:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://admin:###
          roles:
            - detect
            - record
      hwaccel_args: preset-nvidia
    snapshots:
      enabled: true
      quality: 90
      clean_copy: true

version: 0.16-0
```
Thank you for your time and for this amazing project!