[HW Accel Support]: Is it possible to use both openvino and tensorrt simultaneously in a single container, or should I create separate containers? #18477
You can't mix and match the two detector types for object detection, but you could use one detector for object detection and the other for hardware-accelerated enrichments (LPR, face recognition, semantic search). Decide what your priority is and base the Docker image you use on that.
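As a rough illustration of the split the reply describes (TensorRT handling object detection while the enrichment features are simply enabled alongside it), a hedged config sketch might look like the following. The top-level keys follow the Frigate 0.16 config format, but the `device` value and the assumption about which accelerator the enrichments end up on should be verified against the docs for your image variant:

```yaml
# Sketch only — verify keys and values against the Frigate 0.16 config reference.
detectors:
  tensorrt:
    type: tensorrt
    device: 0          # first NVIDIA GPU

# Enrichments enabled separately from the detector; which hardware they use
# depends on the image and host devices passed into the container (assumption).
semantic_search:
  enabled: true
face_recognition:
  enabled: true
lpr:
  enabled: true
```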
Describe the problem you are having
I repurposed an old PC with a GTX 950 and have Frigate 0.16.0 running with TensorRT for object detection and video decode. I noticed that the Intel CPU is an i7-6700 with Intel® HD Graphics 530, which I have separately tested and which is supported in Frigate with OpenVINO.
Is there any advantage or benefit to adding OpenVINO support to a Frigate container that is already running TensorRT? If the NVIDIA GPU begins to hit memory or core limits, is there any method to offload cameras, object detection, or anything else so that OpenVINO can pick up some of the load within one container?
Or would the only solution be to run one Frigate TensorRT container with some number of cameras and, separately, a different Frigate container that uses OpenVINO for the remaining cameras?
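For the two-container option asked about above, a minimal docker-compose sketch could look like this. The image tags are the published Frigate 0.16 variants, but the service names, host ports, config paths, and the render-node path are assumptions to adapt to your setup:

```yaml
# Sketch only — two independent Frigate instances, each with its own config dir.
services:
  frigate-nvidia:
    image: ghcr.io/blakeblackshear/frigate:0.16.0-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]   # hand the GTX 950 to this instance
    volumes:
      - ./config-nvidia:/config
    ports:
      - "5000:5000"

  frigate-intel:
    image: ghcr.io/blakeblackshear/frigate:0.16.0
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128   # Intel iGPU for OpenVINO/VAAPI (path may differ)
    volumes:
      - ./config-intel:/config
    ports:
      - "5001:5000"   # second UI on a different host port
```

Each instance would list only its own subset of cameras, so the two detectors never need to share a process.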
Version
0.16.0
Frigate config file
tbd
docker-compose file or Docker CLI command
tbd
Relevant Frigate log output
Relevant go2rtc log output
FFprobe output from your camera
Install method
Docker Compose
Object Detector
TensorRT
Network connection
Wired
Camera make and model
tbd
Screenshots of the Frigate UI's System metrics pages
tbc
Any other information that may be helpful
No response