yolov8n onnx for object detection -> how to postprocess the output? #20712

Answered by glenn-jocher
dominicnoy asked this question in Q&A

Hi @dominicnoy,

Looking at your ONNX inference code, I see a few issues with how you're processing the model output:

  1. The output format for YOLOv8 ONNX models is different from what you're expecting. The raw output needs to be transposed from (1, 84, N) to (N, 84), which you're doing, but the data interpretation is incorrect.

  2. In YOLOv8's ONNX output, the boxes are already in xywh (center-x, center-y, width, height) format, and the class confidences don't need a softmax; the model has already applied the sigmoid activation.

  3. You're missing the non-maximum suppression (NMS) step, which is crucial for filtering out overlapping duplicate detections.

Here's how to correctly process the outputs:

import numpy as np
import cv2
from ultralytics.utils.ops import non_max_suppression
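A rough sketch of those steps using plain NumPy and OpenCV follows; the postprocess name, the conf_thres/iou_thres defaults, and the use of cv2.dnn.NMSBoxes in place of the imported ultralytics helper are illustrative choices here, assuming the raw session output has shape (1, 84, 8400) for a 640x640 input:

def postprocess(output, conf_thres=0.25, iou_thres=0.45):
    # (1, 84, N) -> (N, 84): one row per candidate box
    preds = np.squeeze(output, axis=0).T

    # Columns 0-3 are the box in xywh (center-x, center-y, width, height);
    # the remaining columns are per-class confidences (sigmoid already applied, no softmax)
    boxes_xywh = preds[:, :4]
    class_scores = preds[:, 4:]

    # Best class and its confidence for each candidate
    class_ids = np.argmax(class_scores, axis=1)
    confidences = class_scores.max(axis=1)

    # Drop low-confidence candidates before NMS
    keep = confidences > conf_thres
    boxes_xywh, confidences, class_ids = boxes_xywh[keep], confidences[keep], class_ids[keep]

    # cv2.dnn.NMSBoxes expects (x_top_left, y_top_left, width, height)
    boxes_tlwh = boxes_xywh.copy()
    boxes_tlwh[:, 0] -= boxes_xywh[:, 2] / 2
    boxes_tlwh[:, 1] -= boxes_xywh[:, 3] / 2

    indices = cv2.dnn.NMSBoxes(boxes_tlwh.tolist(), confidences.tolist(), conf_thres, iou_thres)

    detections = []
    for i in np.array(indices).flatten():
        x, y, w, h = boxes_tlwh[i]
        # Return (class_id, confidence, xyxy box) per kept detection
        detections.append((int(class_ids[i]), float(confidences[i]), (x, y, x + w, y + h)))
    return detections

Note that the boxes returned here are still in the 640x640 input space, so they need to be rescaled to the original image size. Alternatively, in recent ultralytics versions the imported non_max_suppression can take the raw (1, 84, N) output as a torch tensor and handle the transpose, confidence thresholding, and NMS in one call.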

Category: Q&A
Labels: question (Further information is requested), detect (Object Detection issues, PRs), exports (Model exports: ONNX, TensorRT, TFLite, etc.)
3 participants