May I know how to deploy the arm64 version of frigate? #20882
Replies: 9 comments 1 reply
-
To deploy the arm64 version of Frigate on your aarch64 Ubuntu server using only the CPU, pull the appropriate image (as you already did) and create a minimal docker-compose.yml:

```yaml
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable-standard-arm64
    shm_size: "512mb"
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8971:8971"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
    environment:
      - FRIGATE_RTSP_PASSWORD=yourpassword
```

Then run: `docker compose up -d`

If you only want to use CPU for detection, you don't need to set up device passthroughs for GPUs or other accelerators. Adjust the config to match your cameras and storage.
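Frigate reads its configuration from ./config/config.yml (the ./config volume above). A minimal sketch with a single camera might look like the following; the camera name, IP address, and stream path are placeholders you would replace with your own:

```yaml
# ./config/config.yml -- minimal sketch; camera name, IP, and stream path are placeholders
mqtt:
  enabled: false

cameras:
  front_door:                  # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:{FRIGATE_RTSP_PASSWORD}@192.168.1.10:554/stream1   # placeholder RTSP URL
          roles:
            - detect
    detect:
      width: 1280
      height: 720
```

The {FRIGATE_RTSP_PASSWORD} placeholder is substituted from the environment variable set in the compose file.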
-
Wait a minute, why isn't the web interface port 5000 mapped? @inkeep
-
Port 5000 is Frigate's internal unauthenticated UI/API port, commonly used for direct (unauthenticated) access to the web interface and API, for example http://<server-ip>:5000/ (1). If you want to access the Frigate web UI without authentication, map and expose port 5000 in your docker-compose.yml like this:

```yaml
ports:
  - "5000:5000"
```

If you only expose port 8971, you will have access to the authenticated interface, which requires login and is generally recommended for security, especially if accessed from outside your local network (1). Port 5000 should not be exposed to untrusted networks, as it bypasses authentication (1), (3).

In summary: 8971 is the authenticated interface, 5000 is the unauthenticated one, and you can include both in your compose file if you need both access types (1), (2).
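A quick way to sanity-check the two ports once the container is up (a sketch, assuming you run it on the Frigate host and the default TLS setting on 8971):

```bash
# Unauthenticated API on 5000 should answer directly
curl http://localhost:5000/api/version

# 8971 serves the authenticated UI over TLS with a self-signed certificate by default,
# so -k is needed for a quick check; a browser would prompt for login here
curl -k https://localhost:8971/
```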
-
So why isn't there a `YOLO_MODELS` environment variable?
-
The environment variable YOLO_MODELS is only needed when you are using the TensorRT detector (for NVIDIA Jetson or CUDA GPUs) in Frigate, not when running on CPU only. On CPU (and most other detectors), model selection and paths are defined in the Frigate config file, in the detectors and model sections, not as an environment variable (1)(2)(3). For a CPU-only deployment, just exclude YOLO_MODELS from your docker-compose.yml. Instead, configure your detector and model in config.yml like this:

```yaml
detectors:
  cpu:
    type: cpu

model:
  path: /config/your_model.tflite   # or .onnx depending on your model type
  width: 320
  height: 320
```

If you use YOLO_MODELS in the environment section, it will have no effect on a CPU-only setup (1).
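For contrast, on hardware where the TensorRT detector does apply, the variable would typically be set in the compose environment section. This is illustrative only; the model name is an example value and nothing here is needed for your CPU-only arm64 setup:

```yaml
# Illustrative only: applies to TensorRT images on NVIDIA hardware, not this deployment
environment:
  - YOLO_MODELS=yolov7-320          # example model name for TensorRT model generation
  - FRIGATE_RTSP_PASSWORD=yourpassword
```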
-
This is the yml configuration file that I set up based on your settings:

```yaml
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.16.2-standard-arm64
    shm_size: "512mb"
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8971:8971"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
      - "18083:5000"
    environment:
      - FRIGATE_RTSP_PASSWORD=yourpassword
```

I have already started it:

```text
94a75da6ef99   ghcr.io/blakeblackshear/frigate:0.16.2-standard-arm64   "/init"   3 minutes ago   Up 3 minutes (health: starting)   0.0.0.0:8554-8555->8554-8555/tcp, 0.0.0.0:8971->8971/tcp, 0.0.0.0:8555->8555/udp, 0.0.0.0:18083->5000/tcp   frigate
```

But I can't access its UI at all, and the Docker logs keep showing it restarting. What should I do? @inkeep
-
Your Frigate container is stuck in a restart loop because NGINX is failing to start with the error shown in your logs.

How to fix: use only the standard volume and port mappings, with no custom NGINX config mounted into the container. Here's a minimal working example for arm64:

```yaml
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.16.2-standard-arm64
    shm_size: "512mb"
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8971:8971"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
      - "5000:5000"
    environment:
      - FRIGATE_RTSP_PASSWORD=yourpassword
```

Make sure you do not have any additional volume mount that overrides Frigate's internal NGINX configuration.

Next steps: remove any such mount, recreate the container, and check the logs again; the commands below sketch one way to do that.
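A sketch, assuming the compose file is in the current directory and the container is named frigate as above:

```bash
# Recreate the container after editing docker-compose.yml
docker compose up -d --force-recreate

# Follow the logs; NGINX and the web UI should come up without the previous error
docker logs -f frigate
```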
-
This is my yml:

```yaml
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.16.2-standard-arm64
    shm_size: "512mb"
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8971:8971"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
      - "18083:5000"
    environment:
      - FRIGATE_RTSP_PASSWORD=yourpassword
```

But it still keeps restarting with the same error.
-
Your Frigate container is continuously failing because NGINX cannot start, due to the error shown in your logs. This error is not part of a standard Frigate deployment: the official documentation does not require or mention mounting a custom NGINX config for normal operation. If you or a previous setup mounted a custom NGINX configuration into the container, that mount is what is breaking startup.

What to do: remove any custom NGINX config mount from your compose file, keep only the standard volumes (./config, ./storage, the tmpfs cache, and /etc/localtime), then recreate the container and check the logs again.

Summary: the stock image ships its own working NGINX configuration; once any extra mount is gone, NGINX and the web UI should start normally. A quick way to see what is actually mounted is sketched below.
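As a sketch (container name taken from your compose file), listing the container's mounts will show whether anything besides ./config, ./storage, the tmpfs cache, and /etc/localtime is bound into it:

```bash
# Print each mount as "source -> destination" to spot a stray NGINX config bind mount
docker inspect frigate --format '{{range .Mounts}}{{println .Source "->" .Destination}}{{end}}'
```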
-
Original question: I want to deploy the arm64 version of Frigate on my aarch64 Ubuntu server. What should I do?
I have already pulled the image.
How should I start it, and how should I write its startup yml file?
In short, just like deploying on AMD64 (x86): how do I deploy the arm64 version of Frigate when my server only has a CPU?