[HW Accel Support]: NVIDIA QUADRO K2200 GPU - Model / Configuration Need help #21006
Your Quadro K2200 has compute capability 5.0, which technically meets the minimum requirement for TensorRT (compute capability 5.0 or greater)(3). However, the specific error suggests the CUDA kernel binaries in the container aren't compatible with your older Maxwell-architecture GPU(2).

**Configuration Issues**

Your ONNX detector configuration is missing the device specification. Add this to your config(4):

```yaml
detectors:
  onnx:
    type: onnx
    device: '0'  # add this line
```
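For `device: '0'` to resolve to the K2200, the container itself also needs GPU access at the Docker level. Here is a minimal docker-compose sketch, assuming the NVIDIA Container Toolkit is installed on the host (the service name and omitted settings are illustrative, not your actual file):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            # expose one NVIDIA GPU (index 0) to the container
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```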
**Model Configuration Problem**

The YOLOx model configuration needs adjustments. Based on the documentation, here's the correct configuration for YOLOx(4):

```yaml
model:
  model_type: yolox
  width: 416   # <--- should match the imgsize set during model export
  height: 416  # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float_denorm
  path: /config/model_cache/yolox_m.onnx
  labelmap_path: /labelmap/coco-80.txt
```

**Alternative Approach**

Given your older GPU architecture, consider using YOLO-NAS instead, which may have better compatibility(4). After downloading the YOLO-NAS model, use this configuration(4):
```yaml
detectors:
  onnx:
    type: onnx
    device: '0'

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```

**Hardware Acceleration**

Your FFmpeg hardware acceleration configuration looks correct for NVIDIA GPUs(5):
```yaml
ffmpeg:
  hwaccel_args: preset-nvidia
```

The Quadro K2200 is quite old (Maxwell generation), and while it meets the minimum compute capability requirement, newer CUDA runtimes may have limited kernel support for this architecture(2). If GPU compatibility issues persist, you may need to fall back to CPU mode or consider upgrading to a more recent GPU.
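If you do need to fall back, a minimal CPU-detector configuration might look like the sketch below (the `num_threads` value is illustrative; tune it to your available cores):

```yaml
detectors:
  cpu:
    type: cpu
    num_threads: 3  # illustrative; adjust to available cores
```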
Have another question? Just tag @inkeep.
Describe the problem you are having
Hello folks,
I'm already running Frigate 0.16.2 with a USB Coral on an HPE MicroServer Gen8.
I just got an HPE ML30 Gen9, in which I installed an NVIDIA Quadro K2200.
==> I have realized that getting detection working with a GPU is not nearly as easy as with a Coral TPU.
I understand that there is no out-of-the-box configuration or model.
So I first pulled the Docker image ghcr.io/blakeblackshear/frigate:stable-tensorrt.
The NVIDIA GPU is found, but I get an error when I try to use the yolox_m.onnx model downloaded directly from https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ONNXRuntime, following the Frigate documentation.
I would first like to use a ready-made model, without having to build or convert anything myself.
I have installed the NVIDIA driver (NVIDIA-Linux-x86_64-580.105.08.run), and here is the nvidia-smi output:

You don't see any FFmpeg processes here because the container is stopped, but when it is up with the CPU as the detector, you can see an FFmpeg process for each of my cameras.
If somebody could guide me to get everything working, that would be great.
Thank you.
Version
0.16.2-4d58206
Frigate config file
docker-compose file or Docker CLI command
Relevant Frigate log output
Relevant go2rtc log output
FFprobe output from your camera
Install method
Docker Compose
Object Detector
TensorRT
Network connection
Wired
Camera make and model
REOLINK
Screenshots of the Frigate UI's System metrics pages
/
Any other information that may be helpful
No response