My experience with frigate as a noob, and also using ultralytics yolo12 on an A310 #20547
iambenmitchell started this conversation in Show and tell
Replies: 1 comment
-
Thank you for this write up! I have compiled everything, and it's ready to go... but fortunately/unfortunately I am running Frigate in an Unraid Docker container, so I don't think I can upgrade OpenVINO (I'm running an Arc A310 as well). So I will likely have to wait until 0.17 is released... patience is a virtue :-)
-
Wanted to share my experience using the ultralytics yolo12 model with an Intel Arc A310, and provide a reference for others who'd like to use other yolo models but are unsure how to compile them.
I am new to the world of ML models, and until this week I didn't have any security cameras aside from a Ring doorbell. Now I have a few, and I am using Frigate to manage them. So far my experience has been great. My only issue has been outdated dependencies in the latest version of Frigate, such as the OpenVINO build it ships with, which hasn't been updated in over a year. The Frigate 0.17 beta will update this, but after manually updating it myself via the container's terminal, I am having a great experience.
Here are some of the neighbourhood cats that have been picked up!





I have tried many different models; yolo12 seems to be the most accurate, but I still face the same issue where it cannot reliably differentiate between a cat and a dog. This isn't really a problem, as what I really care about is persons, cars and animals. Knowing whether it is a cat or a dog doesn't matter to me.
Next month I plan to subscribe to Frigate+ and give those models a go. I will be happy to support this project, as I've had a good experience with the support on this GitHub; the replies are quick and helpful.
It took me a while to understand how custom models work, but I figured it out last night and was able to get yolo12 working.
Here's how:
Step 1: Launch a Python virtual environment:
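For example (the environment name is arbitrary):

```sh
# Create and activate a fresh virtual environment for the export tooling
python3 -m venv yolo-export
source yolo-export/bin/activate
```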
Step 2: Install ultralytics:
pip3 install ultralytics
Step 3: Download the yolo12 model from the ultralytics website and place it in the directory where you created your virtual environment.
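As far as I can tell, ultralytics will also fetch the official weights for you if you just reference them by name, so something like this, run inside the venv, should drop yolo12l.pt into the working directory:

```python
# Referencing an official model by filename triggers an automatic download
# if the .pt file isn't already present (behaviour as I understand the ultralytics docs).
from ultralytics import YOLO

model = YOLO("yolo12l.pt")
```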
Step 4: Export the model as OpenVINO or ONNX.
OpenVINO:
yolo export model=yolo12l.pt format=openvino imgsz=320 half=false dynamic=false simplify=true opset=13 nms=false batch=1
ONNX:
yolo export model=yolo12l.pt format=onnx imgsz=320 half=false dynamic=false simplify=true opset=13 nms=false batch=1
Make sure to change the model=yolo12l.pt flag to match the model size you selected.
Step 5: Place the OpenVINO folder, or the ONNX file, in your /config directory in Frigate.
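Roughly, the copy step is just the following, with /path/to/frigate/config standing in for wherever your /config volume actually lives:

```sh
# The OpenVINO export produces a folder (yolo12l_openvino_model/ with .xml, .bin and metadata.yaml);
# copy the whole folder into the host path you map to /config:
cp -r yolo12l_openvino_model /path/to/frigate/config/

# The ONNX export is a single file:
cp yolo12l.onnx /path/to/frigate/config/
```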
Step 6: Adjust your config to use the model.
OpenVINO:
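A minimal sketch of the detector/model config for the OpenVINO route; the detector name, model path and labelmap path are assumptions on my part, and you should check the Frigate object detector docs for the exact model_type your version expects:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU                  # Arc A310

model:
  model_type: yolo-generic       # generic YOLO support; confirm against your Frigate version's docs
  width: 320                     # must match the imgsz used at export time
  height: 320
  input_tensor: nchw
  input_dtype: float
  path: /config/yolo12l_openvino_model/yolo12l.xml   # folder produced by the OpenVINO export
  labelmap_path: /labelmap/coco-80.txt               # 80-class COCO labelmap shipped with Frigate
```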
ONNX:
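The ONNX route looks almost the same; only the detector block and the model path change (again a sketch, with the same assumptions as above):

```yaml
detectors:
  onnx:
    type: onnx                   # Frigate picks a GPU execution provider automatically where available

model:
  model_type: yolo-generic
  width: 320
  height: 320
  input_tensor: nchw
  input_dtype: float
  path: /config/yolo12l.onnx
  labelmap_path: /labelmap/coco-80.txt
```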
If you are running on an Intel Arc A310 GPU like me, you will need to update the OpenVINO dependency. You will be modifying the Docker container, so I suspect any support going forward is out the window, so bear that in mind.
Step 1: Connect to the Docker container's terminal via bash.
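On a plain Docker setup that is just a docker exec; the container name frigate here is an assumption, so substitute whatever yours is called:

```sh
# Open an interactive shell inside the running Frigate container
docker exec -it frigate /bin/bash
```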
Step 2: Update OpenVINO:
python3 -m pip install -U openvino --break-system-packages

On an A310 I am getting around 10ms inference time on the small model, and around 16ms on the large one.
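(To double-check that the container actually picked up the newer package, a standard pip query will print the installed version:)

```sh
# Confirm which OpenVINO version is now installed inside the container
python3 -m pip show openvino
```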


I am also running plate recognition, but its inference time is terrible. This isn't really a problem, but I will poke around and see what changes I can make to improve it.
Hope this post helps anyone experiencing issues with the A310, or anyone who is unsure how to compile other yolo models such as the ultralytics ones :)