modes/track/ #7906
-
Can I run two models simultaneously on one video? I want the two models to run at the same time and combine their results. Is that possible? Please let me know. Thanks in advance!!
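One possible approach (not a built-in Ultralytics feature) is to load two models and call each on the same frames, then merge the results yourself. A minimal sketch, where the weight files and video path are placeholders:

```python
import cv2
from ultralytics import YOLO

model_a = YOLO("yolov8n.pt")      # first model (placeholder weights)
model_b = YOLO("yolov8n-seg.pt")  # second model (placeholder weights)

cap = cv2.VideoCapture("path/to/video.mp4")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results_a = model_a.track(frame, persist=True)
    results_b = model_b.track(frame, persist=True)
    # Combine the two result sets however you need, e.g. merge the box lists
    combined_boxes = list(results_a[0].boxes) + list(results_b[0].boxes)
cap.release()
```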
-
Hi, first of all, I have been loving working with YOLOv8. Great tool! However, I have been having difficulty with a certain task. I want to use model.track on my videos with save_crop=True, but save the crops with a naming convention that lets me track each person's ID. Currently, save_crop just gives me the cropped images of the detected objects, but there is no way to know which frame of the video a crop came from, or which ID is attached to which cropped image. The visualization through cv2.imshow shows the IDs across the different frames, but I can't find a way to save them. The naming convention I am looking for is something like this: "frame_30_ID_1.jpg". My current code looks something like this:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # load model
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)
ret = True
while ret:
    ret, frame = cap.read()
    # ...tracking and cropping logic goes here...
cap.release()
```

Any help would be greatly appreciated! Thanks!
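Since save_crop does not encode the frame number or track ID in the filename, one workaround is to crop from the tracked boxes manually and choose the filename yourself. A minimal sketch, with paths as placeholders:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")
frame_idx = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True)
    boxes = results[0].boxes
    if boxes.id is not None:
        for xyxy, track_id in zip(boxes.xyxy.int().tolist(), boxes.id.int().tolist()):
            x1, y1, x2, y2 = xyxy
            crop = frame[y1:y2, x1:x2]
            cv2.imwrite(f"frame_{frame_idx}_ID_{track_id}.jpg", crop)  # e.g. frame_30_ID_1.jpg
    frame_idx += 1
cap.release()
```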
-
Hi @pderrenger. Can I run the models using my phone's camera? Can you please share code to invoke my mobile's camera to test the model? Thanks in advance.
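One common way (an assumption here, not the only option) is to expose the phone camera as a network stream with an IP-webcam style app and pass that stream URL as the source. The URL below is a placeholder for whatever your app shows:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

stream_url = "http://192.168.1.10:8080/video"  # placeholder: URL served by the phone app
for result in model.track(source=stream_url, show=True, stream=True):
    pass  # consume the generator frame by frame
```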
-
Hi, please help me understand why I get this error when tracking with a segmentation model. My ultimate goal is to use a custom car-plate segmentation model for tracking. Thank you very much.
-
YOLOv8 is very practical overall. Can I implement tracking with two cameras? I would like the ID of a car tracked by camera A to stay the same when it moves to camera B, but currently there is always an ID switch. Is that because of the model's accuracy?

```python
def cam2():
    cap = cam
    ...

a = threading.Thread(target=cam1)
a.start()
```
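Each model.track call keeps its own tracker, so IDs from camera A and camera B are assigned independently; keeping the same ID across cameras needs a separate cross-camera re-identification step on top. A minimal sketch of running one tracker per camera cleanly (camera indices are placeholders):

```python
import threading
from ultralytics import YOLO

def run_tracker(source, weights="yolov8n.pt"):
    # One model and one tracker per camera; track IDs are local to this tracker
    model = YOLO(weights)
    for result in model.track(source=source, stream=True, persist=True):
        pass  # per-camera processing / cross-camera matching would go here

t1 = threading.Thread(target=run_tracker, args=(0,), daemon=True)
t2 = threading.Thread(target=run_tracker, args=(1,), daemon=True)
t1.start(); t2.start()
t1.join(); t2.join()
```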
-
Hey there,
-
Hi, I saw that I can use an OpenVINO IR format model just like any other PyTorch model and then run tracking as normal. I was wondering how I would load the IR '.xml' and '.bin' files as arguments into YOLO(), or whether I should load my model using the openvino library? Thanks.
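After exporting with format="openvino", the exported directory (which contains the .xml and .bin) can be passed to YOLO() directly; there is no need to load it through the openvino library yourself. A short sketch, with paths as placeholders:

```python
from ultralytics import YOLO

# Export once; this creates e.g. "yolov8n_openvino_model/" with the .xml and .bin inside
YOLO("yolov8n.pt").export(format="openvino")

# Load the exported directory, not the individual .xml/.bin files
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model.track(source="path/to/video.mp4", show=True)
```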
-
Can I use a YOLOv8 model to track and re-identify a person, with the same ID assigned to them, across multiple camera feeds?
-
How can we track only moving objects in the "Plotting Tracks Over Time" code?

```python
from collections import defaultdict

import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    ...  # tracking and plotting logic

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
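One way to keep only moving objects, assuming the track_history dict from the snippet above holds per-ID centre points: measure the displacement over the stored history and drop tracks below a small pixel threshold. A sketch (the threshold is an assumption to tune for your footage):

```python
import numpy as np

MIN_DISPLACEMENT = 5  # pixels over the stored history

def is_moving(points, min_disp=MIN_DISPLACEMENT):
    """points: list of (x, y) centres collected in track_history[track_id]."""
    if len(points) < 2:
        return False
    pts = np.array(points, dtype=float)
    return np.linalg.norm(pts[-1] - pts[0]) > min_disp

# Inside the frame loop, after updating track_history:
# moving_ids = [tid for tid, pts in track_history.items() if is_moving(pts)]
# ...then only plot boxes/tracks whose id is in moving_ids.
```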
-
```python
import cv2
from ultralytics import YOLO

model = YOLO('yolov8_custom_train.engine', task="detect")

# Path to the input video file
input_video_path = '/content/gdrive/MyDrive/yolov8-tensorrt/inference/output_video.mp4'

# Path to the output video file
output_video_path = 'outputtest_video.mp4'

# Define the coordinates of the polygon
polygon_points = [(670, 66), (1237, 550), (514, 1054), (161, 295)]

# Open the input video file
cap = cv2.VideoCapture(input_video_path)

# Get video properties
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'mp4v')

# Function for finding the centroid
def calculate_centroid(box):
    ...

# Function to check if two bounding boxes overlap
def check_overlap(box1, box2):
    ...

# Read until video is completed
while cap.isOpened():
    ...

# Release video objects
cap.release()

# Close all OpenCV windows
cv2.destroyAllWindows()
```

In this, I am tracking the label "Person", but within the next 2 to 3 frames the IDs change. Is there any solution for this?
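ID switches over a few frames are usually a tracker-configuration issue rather than a code bug. Two things worth trying (a suggestion, not an official fix): call track with persist=True when feeding frames one at a time, and point it at a local copy of the stock ByteTrack config with a longer track buffer and slightly laxer matching. The fields below mirror the shipped bytetrack.yaml; check them against your installed version:

```yaml
# my_bytetrack.yaml -- local copy of bytetrack.yaml with adjusted values
tracker_type: bytetrack
track_high_thresh: 0.25
track_low_thresh: 0.1
new_track_thresh: 0.25
track_buffer: 60      # keep lost tracks alive longer before the ID is dropped
match_thresh: 0.85    # slightly laxer association
fuse_score: True
```

Then pass it with `model.track(frame, persist=True, tracker="my_bytetrack.yaml")`.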
-
What is the difference between these attributes of results[0].boxes:
-
Is it possible to use our own weights as the model to track, or must we use the included yolov8n.pt?
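Any trained detection weights work; yolov8n.pt is only the default example. A minimal sketch with the paths as placeholders:

```python
from ultralytics import YOLO

model = YOLO("path/to/your/best.pt")  # your own trained weights
results = model.track(source="path/to/video.mp4", show=True)
```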
-
So I am using YOLOv8 for my current project, and it's been a breeze so far. I do have a question about the tracking method provided by YOLOv8. When I am using the generic yolov8n model (or even a custom model trained on a few objects), I know I can filter out the classes that don't interest me by their class IDs, as below:
But when I have caught an object that I am interested in, can I, at that time or at that frame, issue a track command to start tracking it? If it can be done, can you tell me how? A short example would be even better! Thanks in advance.
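Tracking runs continuously for all detections of the allowed classes, so rather than issuing a separate track command mid-stream, one approach is to keep tracking everything and only start acting on a specific track ID once it appears. A sketch, with the class filter and selection rule as assumptions:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")
target_id = None  # the track we decide to follow once we spot it

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True, classes=[0])  # e.g. only class 0
    boxes = results[0].boxes
    if boxes.id is None:
        continue
    ids = boxes.id.int().tolist()
    if target_id is None and ids:
        target_id = ids[0]  # pick the object of interest by whatever rule you like
    if target_id in ids:
        box = boxes.xyxy[ids.index(target_id)]
        # ...do something with just this object's box...
cap.release()
```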
-
Hi, I want some detailed help and guidance on how to use custom tracker models with my custom YOLOv8 pose model. I am facing a re-identification problem when using bytetrack.yaml, so I think I should use StrongSORT or DeepSORT. I would like the Ultralytics team to help me select a tracker model (or use multiple tracker models) and guide me properly on how to use them with my custom-trained YOLOv8 model.
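Ultralytics ships ByteTrack and BoT-SORT; StrongSORT and DeepSORT are not built in, so they would have to be wired up outside the track() API. What does work out of the box is copying one of the stock configs, tuning its thresholds, and pointing track() at the copy. A sketch, where both file paths are placeholders:

```python
from ultralytics import YOLO

model = YOLO("path/to/custom_pose_best.pt")  # placeholder for your pose weights

# "my_tracker.yaml" is assumed to be a local copy of botsort.yaml or bytetrack.yaml
# with thresholds adjusted for your re-identification problem.
results = model.track(source="path/to/video.mp4", tracker="my_tracker.yaml")
```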
-
```python
import random

import cv2
from ultralytics import YOLO

# opening the file in read mode
my_file = open("utils/coco.txt", "r")

# reading the file
data = my_file.read()

# splitting the text when a newline ('\n') is seen
class_list = data.split("\n")

# Generate random colors for class list
detection_colors = []

# load a pretrained YOLOv8n model
model = YOLO("weights/yolov8n.pt", "v8")

# Vals to resize video frames | small frame optimises the run
frame_wid = 640


def CarBehaviour(frame, color_threshold=1100):
    ...


def detect_and_draw(frame, model, class_list, detection_colors):
    ...


# Open video capture
cap = cv2.VideoCapture("/home/opencv_env/Vehicle-rear-lights-analyser-master/testing_data/road_2.mp4")
if not cap.isOpened():
    ...

while True:
    ...

# When everything done, release the capture
cap.release()
```
-
Hi, can you please tell me whether the tracker works better on more recent versions of ultralytics, or is version 8.3.68 no different from 8.1.34? (When using botsort.yaml.)
-
Is it good to have more points in my label file? My dataset has 30k images with a total of 16 classes.
-
If I change the ByteTrack parameters, it always assigns new track IDs to the same object. It would be fine for me if the same track ID went to a different object. How can I implement this?
-
Hey, is it possible to run tracking on already classified data (i.e., pre-computed bounding boxes)? If so, would you be able to share a code snippet? I want to try different tracking parameters, and it would be nice if I didn't need to run the whole detection again. It should be faster, right? Thanks!
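The high-level track() API always re-runs the detector, but the trackers themselves live in ultralytics.trackers and can, in principle, be driven with cached boxes. A rough sketch is below; the argument names mirror bytetrack.yaml and the exact update() interface can differ between versions, so treat this as a starting point rather than a guaranteed recipe (cached_detections is your own saved per-frame data):

```python
from types import SimpleNamespace

import numpy as np
from ultralytics.trackers import BYTETracker

# Tracker arguments mirroring the fields of bytetrack.yaml (check your installed version)
args = SimpleNamespace(
    track_high_thresh=0.25, track_low_thresh=0.1, new_track_thresh=0.25,
    track_buffer=30, match_thresh=0.8, fuse_score=True,
)
tracker = BYTETracker(args, frame_rate=30)

for frame_boxes in cached_detections:  # e.g. dicts with "xywh", "conf", "cls" per frame
    dets = SimpleNamespace(
        xywh=np.asarray(frame_boxes["xywh"], dtype=float),  # (N, 4) centre-x, centre-y, w, h
        conf=np.asarray(frame_boxes["conf"], dtype=float),  # (N,)
        cls=np.asarray(frame_boxes["cls"], dtype=float),    # (N,)
    )
    tracks = tracker.update(dets)  # rows roughly: x1, y1, x2, y2, track_id, conf, cls, idx
```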
-
Hi guys, I am running a pipeline to track people. Sometimes during processing the model does not predict all the people: it detects only a few and misses some, even people right in front of the CCTV camera. If you have any ideas about this problem, please help.
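Missed detections are a detector issue rather than a tracker one; two quick things that sometimes help (a suggestion, not a guaranteed fix) are lowering the confidence threshold and raising the inference resolution. A minimal sketch with placeholder paths:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Lower conf and larger imgsz often recover small or partially occluded people
results = model.track(source="path/to/cctv.mp4", conf=0.15, imgsz=1280, classes=[0], stream=True)
for r in results:
    pass
```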
-
Hello! I have trained a custom detection model based on YOLO11n that does tiled inference using SAHI (https://docs.ultralytics.com/guides/sahi-tiled-inference/). The detection works quite well, but I am having trouble thinking of a way to combine SAHI's detections with the tracking implementations discussed in this article, since the "track" method belongs to "model" itself. Is there some solution or workaround I could try? Thank you! |
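Since model.track() re-runs the model's own inference, one workaround is to run SAHI per frame yourself and push its boxes into one of the ultralytics trackers directly, as in the cached-detections sketch earlier in the thread. The SAHI attribute names below come from its public ObjectPrediction API but should be double-checked against your installed version:

```python
import numpy as np
from sahi.predict import get_sliced_prediction

def sahi_to_xywh(object_predictions):
    """Convert SAHI ObjectPrediction boxes to (cx, cy, w, h), conf, cls arrays."""
    xywh, conf, cls = [], [], []
    for p in object_predictions:
        x1, y1, x2, y2 = p.bbox.to_xyxy()
        xywh.append([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1])
        conf.append(p.score.value)
        cls.append(p.category.id)
    return np.array(xywh), np.array(conf), np.array(cls)

# Per frame: run sliced prediction, convert, then feed the arrays to a standalone
# BYTETracker/BOTSORT instance (see the cached-detections sketch above), e.g.:
# result = get_sliced_prediction(frame, detection_model, slice_height=512, slice_width=512)
# xywh, conf, cls = sahi_to_xywh(result.object_prediction_list)
```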
-
I have created a YOLOv8l transfer-learned model for tracking cups as they move from one place to another; if a cup passes a line it is counted. I am using DeepSORT as the tracker. The issue I am facing is that the track ID is not stable: it changes when the cup passes the line, so the previous position of the cup is lost and a new tracking ID is assigned, which means that cup is never counted. Can you help me here? I want a tracker or some logic so that the tracking ID does not change, and a frame can contain multiple cups at a time.
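Counting tends to be more robust if it is keyed to a crossing event per track plus an "already counted" set, rather than relying on the ID staying stable forever. DeepSORT tuning is outside Ultralytics itself, but the counting logic looks the same either way. A sketch, with the line position as a placeholder:

```python
counted_ids = set()
last_side = {}   # track_id -> which side of the line the centroid was on
LINE_Y = 400     # placeholder: horizontal counting line in pixels
count = 0

def side_of_line(cy, line_y=LINE_Y):
    return "above" if cy < line_y else "below"

def update_count(track_id, cy):
    """Call once per frame for each tracked cup centroid (cx, cy)."""
    global count
    side = side_of_line(cy)
    if track_id in last_side and last_side[track_id] != side and track_id not in counted_ids:
        counted_ids.add(track_id)
        count += 1
    last_side[track_id] = side
```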
-
The YOLO tracker assigns IDs to objects starting with 1, 2, 3, ... I want it to assign random IDs to the objects and then track them, because I want to send the data to an LSTM and I don't want the LSTM to depend on the tracking_id number.
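The tracker's IDs themselves are sequential by design, but you can remap them to random values before feeding the LSTM. A small sketch of a stable one-to-one remapping:

```python
import random

id_map = {}  # tracker id -> randomised id

def randomized_id(track_id, low=1, high=10**6):
    """Map the tracker's sequential id to a stable random id the first time it is seen."""
    if track_id not in id_map:
        new_id = random.randint(low, high)
        while new_id in id_map.values():  # avoid collisions between remapped ids
            new_id = random.randint(low, high)
        id_map[track_id] = new_id
    return id_map[track_id]
```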
-
Hi YOLO Team,
model = YOLO(r"/home/best_pt_files/v1.0.2.pt")
I noticed that my GPU memory usage gradually increases while tracking, leading to an "out of memory" error before the video finishes processing. Previously, the Python script worked without any issues; the memory issue has only recently started occurring. Could you please help me understand the possible reasons for this memory increase and how to address it? I have experimented with persist=True and False, stream=True and False, and show=True and False, but the issue is still there.
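One thing worth checking (a suggestion, not a confirmed cause) is whether results objects or their tensors are being kept alive across frames, e.g. by appending results to a list. A frame-by-frame loop that copies out only the plain numbers it needs, with an occasional cache flush, keeps GPU usage flat in similar reports:

```python
import cv2
import torch
from ultralytics import YOLO

model = YOLO(r"/home/best_pt_files/v1.0.2.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path
frame_idx = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = model.track(frame, persist=True, verbose=False)
    boxes = results[0].boxes
    ids = boxes.id.int().tolist() if boxes.id is not None else []
    xyxy = boxes.xyxy.cpu().numpy()   # copy out what you need...
    del results, boxes                # ...and let the rest be freed
    if frame_idx % 500 == 0:
        torch.cuda.empty_cache()      # optional: release cached GPU blocks
    frame_idx += 1
cap.release()
```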
-
Hi,
-
Hello Grace, let's see your code.
The tracking algorithm is pretty straightforward; check model.track().
On Mon, Apr 7, 2025 at 9:52 AM grace-715 wrote:
Hi,
I want to detect the objects in my videos and assign IDs to each object, as I'm designing a counting algorithm. But when I run the ByteTrack tracker, I do not see tracks or IDs in the results video.
I would greatly appreciate your help.
Thank you
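For reference, a minimal way to get the track IDs drawn on an output video (weights and paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Easiest: let Ultralytics annotate and save the video; IDs are drawn on each box
model.track(source="path/to/video.mp4", tracker="bytetrack.yaml", save=True)

# Or, frame by frame, draw the annotated frames yourself:
# for result in model.track(source="path/to/video.mp4", stream=True, persist=True):
#     annotated = result.plot()  # includes the track id in the label when available
```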
-
Hi guys, I want to ask about the tracking algorithm. I converted the model to TensorRT format, but it still takes noticeable processing time on the Jetson: without detection I get 75 FPS, with detection it drops to 12-15 FPS. I think the tracking step runs on the CPU, which may be consuming the time. Is there any option to run the complete model.track pipeline, including the tracking algorithms, on the GPU?
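The trackers themselves (ByteTrack/BoT-SORT) are lightweight CPU code and Ultralytics has no GPU implementation of them, so the drop is more often pre/post-processing or I/O than the tracker. A few things worth trying (suggestions, file names are placeholders): export the engine with half precision, keep imgsz matched to the export size, use stream=True, and skip frames with vid_stride if your application allows it:

```python
from ultralytics import YOLO

model = YOLO("yolov8_custom.engine", task="detect")  # engine exported with half=True

results = model.track(
    source="path/to/video.mp4",
    imgsz=640,        # keep the engine's export size
    vid_stride=2,     # optionally process every 2nd frame
    stream=True,
    verbose=False,
)
for r in results:
    pass
```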
-
Hi, I'm detecting herring fish. I've retrained the model with my own data. While using track mode, I was able to tweak the BoT-SORT parameters to get pretty good tracking results. However, there is still one issue I don't understand: the ID sometimes skips, for example from 18 to 20:
0 0.214347 0.639162 0.20562 0.198323 0.937141 18
My understanding is that ID 19 might have been assigned to a low-confidence detection or false positive, so the model discarded it and it never appeared in the results/plots? Could you please point out some possibilities? If I want to count the fish based on the IDs, I might just count the total number of unique IDs at the end, instead of relying on the ID values themselves.
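Counting unique IDs at the end is indeed robust to skipped numbers, since gaps like a missing 19 simply never enter the set. A sketch, with the weights and video path as placeholders:

```python
import cv2
from ultralytics import YOLO

model = YOLO("path/to/herring_best.pt")  # placeholder for your retrained weights
cap = cv2.VideoCapture("path/to/video.mp4")
unique_ids = set()

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    boxes = model.track(frame, persist=True, tracker="botsort.yaml")[0].boxes
    if boxes.id is not None:
        unique_ids.update(boxes.id.int().tolist())
cap.release()

print(f"fish counted: {len(unique_ids)}")  # skipped ID numbers do not affect this count
```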
-
Hi! I am working on a project where we are using Ultralytics' YOLO implementation with BoT-SORT. There was just a new update that supports the use of ReID, so I am wondering how (and whether) I can edit the botsort.yaml file to activate the ReID functionality. I am using the PyPI package of the repo.
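In recent releases botsort.yaml gained ReID-related fields; a local copy with with_reid switched on can then be passed via the tracker= argument. The fields below follow the shipped config but should be checked against your installed version:

```yaml
# botsort_reid.yaml -- local copy of botsort.yaml with ReID enabled
tracker_type: botsort
track_high_thresh: 0.25
track_low_thresh: 0.1
new_track_thresh: 0.25
track_buffer: 30
match_thresh: 0.8
fuse_score: True
gmc_method: sparseOptFlow
proximity_thresh: 0.5
appearance_thresh: 0.25
with_reid: True
model: auto   # "auto" reuses the detector's own features for appearance matching
```

Use it with `model.track(source="video.mp4", tracker="botsort_reid.yaml")`.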
-
Suppose I am processing 3 videos and I want the track IDs not to reset but to stay continuous across the 3 videos; what parameter changes would you recommend? Second question: for ReID, which I have enabled in the botsort.yaml file, does it currently support the use of a custom ReID model? I have a custom OSNet model and would like to use it with BoT-SORT for the ReID.
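For the first question, a sketch of one approach: keep the same model instance and call track with persist=True for each video, so the tracker state (and therefore the ID counter) is not reset between calls. Whether a custom OSNet checkpoint can be plugged into the ReID model field of botsort.yaml depends on the installed version, so that part is not shown here:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights
videos = ["video1.mp4", "video2.mp4", "video3.mp4"]

# persist=True keeps the tracker state on the model between calls, so IDs
# continue counting instead of restarting at 1 for each video.
for path in videos:
    for result in model.track(source=path, stream=True, persist=True):
        pass  # your per-frame logic
```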
-
modes/track/
Learn how to use Ultralytics YOLO for object tracking in video streams. Guides to use different trackers and customise tracker configurations.
https://docs.ultralytics.com/modes/track/