# Ultralytics YOLO Interactive Object Tracking UI 🚀

A real-time object detection and tracking UI built with Ultralytics YOLO11 and OpenCV, designed for interactive demos and seamless integration of tracking overlays. Whether you're just getting started with object tracking or looking to enhance it with additional features, this project provides a solid foundation.

https://github.com/user-attachments/assets/723e919e-555b-4cca-8e60-18e711d4f3b2

## Features

## 🏗️ Project Structure

```
YOLO-Interactive-Tracking-UI/
├── interactive_tracker.py   # Main Python tracking UI script
└── README.md                # You're here!
```

## 💻 Hardware & Model Compatibility

| Platform | Model Format | Example Model | GPU Acceleration | Notes |
| --- | --- | --- | --- | --- |
| Raspberry Pi 4/5 | NCNN (`.param`/`.bin`) | `yolov8n_ncnn_model` | CPU only | Recommended format for Pi/ARM |
| Jetson Nano | PyTorch (`.pt`) | `yolov8n.pt` | CUDA | Real-time performance possible |
| Desktop w/ GPU | PyTorch (`.pt`) | `yolov8s.pt` | CUDA | Best performance |
| CPU-only laptops | NCNN (`.param`/`.bin`) | `yolov8n_ncnn_model` | CPU only | Decent performance (~10-15 FPS) |

Note: Performance may vary based on the specific hardware, model complexity, and input resolution.

## 🛠️ Installation

### Basic Dependencies

Install the core ultralytics package:

```bash
pip install ultralytics
```

Tip: Using a virtual environment such as `venv` or `conda` to manage dependencies is recommended.

GPU Support: Install PyTorch based on your system and CUDA version by following the official guide: https://pytorch.org/get-started/locally/

## 🚀 Quickstart

### Step 1: Download, Convert, or Specify Model

- For pre-trained Ultralytics YOLO models (e.g., `yolo11s.pt` or `yolov8s.pt`), simply specify the model name in the script parameters (`model_file`). These models will be automatically downloaded and cached. You can also manually download them from Ultralytics Assets Releases and place them in the project folder.
- If you're using a custom-trained YOLO model, ensure the model file is in the project folder or provide its relative path.
- For CPU-only devices, export your chosen model (e.g., `yolov8n.pt`) to the NCNN format using the Ultralytics export mode.
- Supported formats:
  - `yolo11s.pt` (for GPU with PyTorch)
  - `yolov8n_ncnn_model` (directory containing `.param` and `.bin` files for CPU with NCNN)
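The NCNN export for CPU-only devices takes only a couple of lines via the Ultralytics export mode. A minimal sketch (the helper names here are illustrative, not part of `interactive_tracker.py`; calling `export_to_ncnn()` downloads `yolov8n.pt` if it is not already present):

```python
def ncnn_model_dir(pt_name: str) -> str:
    """Directory name the NCNN export produces for a given .pt file."""
    return pt_name[:-3] + "_ncnn_model" if pt_name.endswith(".pt") else pt_name


def export_to_ncnn(pt_name: str = "yolov8n.pt") -> str:
    """Export a YOLO .pt checkpoint to NCNN format for CPU inference."""
    from ultralytics import YOLO  # imported lazily; the helper above stays dependency-free

    YOLO(pt_name).export(format="ncnn")  # writes e.g. yolov8n_ncnn_model/
    return ncnn_model_dir(pt_name)
```

Point `model_file` at the returned directory and keep `enable_gpu = False` when running the NCNN model.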

### Step 2: Configure the Script

Edit the global parameters at the top of interactive_tracker.py:

```python
# --- Configuration ---
enable_gpu = False  # Set True if running with CUDA and PyTorch model
model_file = "yolo11s.pt"  # Path to model file (.pt for GPU, _ncnn_model dir for CPU)
show_fps = True  # Display current FPS in the top-left corner
show_conf = False  # Display confidence score for each detection
save_video = False  # Set True to save the output video stream
video_output_path = "interactive_tracker_output.avi"  # Output video file name

# --- Detection & Tracking Parameters ---
conf = 0.3  # Minimum confidence threshold for object detection
iou = 0.3  # IoU threshold for Non-Maximum Suppression (NMS)
max_det = 20  # Maximum number of objects to detect per frame

tracker = "bytetrack.yaml"  # Tracker configuration: 'bytetrack.yaml' or 'botsort.yaml'
track_args = {
    "persist": True,  # Keep track history across frames
    "verbose": False,  # Suppress detailed tracker debug output
}

window_name = "Ultralytics YOLO Interactive Tracking"  # Name for the OpenCV display window
# --- End Configuration ---
```

- `enable_gpu`: Set to `True` if you have a CUDA-compatible GPU and are using a `.pt` model. Keep `False` for NCNN models or CPU-only execution.
- `model_file`: Ensure this points to the correct model file or directory based on `enable_gpu`.
- `conf`: Adjust the confidence threshold. Lower values detect more objects but may increase false positives.
- `iou`: Set the Intersection over Union (IoU) threshold for Non-Maximum Suppression (NMS). Higher values allow more overlapping boxes.
- `tracker`: Choose between the available tracker configuration files (ByteTrack, BoT-SORT).
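To make the `iou` parameter concrete: NMS decides whether two detections overlap "too much" by computing their Intersection over Union. A standalone IoU computation for two boxes in `(x1, y1, x2, y2)` format (not part of `interactive_tracker.py`, just an illustration of the metric):

```python
def iou_xyxy(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

With `iou = 0.3`, any pair of same-class boxes whose IoU exceeds 0.3 is treated as duplicates and the lower-confidence one is suppressed.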

### Step 3: Run Object Tracking

Execute the script from your terminal:

```bash
python interactive_tracker.py
```

### Controls

- 🖱️ Left-click on a detected object's bounding box to start tracking it.
- 🔄 Press the `c` key to cancel the current tracking and select a new object.
- Press the `q` key to quit the application.
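Under the hood, left-click selection boils down to a point-in-box test against the current detections. A minimal sketch with illustrative names (the real script's logic may differ); preferring the smallest containing box is one reasonable way to resolve clicks on overlapping detections:

```python
def box_under_click(x, y, boxes):
    """Return the index of the smallest box containing point (x, y), or None.

    boxes: sequence of (x1, y1, x2, y2) bounding boxes in pixel coordinates.
    """
    hit, best_area = None, float("inf")
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        if x1 <= x <= x2 and y1 <= y <= y2:
            area = (x2 - x1) * (y2 - y1)
            if area < best_area:  # prefer the tightest (innermost) box
                hit, best_area = i, area
    return hit
```

In an OpenCV UI, a function like this would be called from a `cv2.setMouseCallback` handler on `cv2.EVENT_LBUTTONDOWN`, passing the click coordinates and the latest detection boxes.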

### Saving Output Video (Optional)

If you want to record the tracking session, enable the `save_video` option in the configuration:

```python
save_video = True  # Enables video recording
video_output_path = "output.avi"  # Customize your output file name (e.g., .mp4, .avi)
```

The video file will be saved in the project's working directory when you quit the application by pressing `q`.

## 👤 Author

## 📜 License & Disclaimer

This project is released under the AGPL-3.0 license. For full licensing details, please refer to the Ultralytics Licensing page.

This software is provided "as is" for educational and demonstration purposes. Use it responsibly and at your own risk. The author assumes no liability for misuse or unintended consequences.

## 🤝 Contributing

Contributions, feedback, and bug reports are welcome! Feel free to open an issue or submit a pull request on the original repository if you have improvements or suggestions.