Package Summary
| Field | Value |
|---|---|
| Tags | No category tags. |
| Version | 0.1.0 |
| License | Apache License 2.0 |
| Build type | AMENT_CMAKE |
| Use | RECOMMENDED |
Repository Summary
| Field | Value |
|---|---|
| Checkout URI | https://github.com/ieiauto/autodrrt.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2024-09-19 |
| Dev Status | UNMAINTAINED |
| CI status | No Continuous Integration |
| Released | UNRELEASED |
| Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Package Description
Maintainers
- Daisuke Nishimatsu
tensorrt_yolo
Purpose
This package detects 2D bounding boxes for target objects (e.g., cars, trucks, bicycles, and pedestrians) in an image, based on the YOLO (You Only Look Once) model.
Inner-workings / Algorithms
Cite
yolov3
Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
yolov4
Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
yolov5
Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n ‘Nano’ models, Roboflow integration, TensorFlow export, OpenCV DNN support (v6.0). Zenodo. https://doi.org/10.5281/zenodo.5563715
Inputs / Outputs
Input
| Name | Type | Description |
|---|---|---|
| in/image | sensor_msgs/Image | The input image |
Output
| Name | Type | Description |
|---|---|---|
| out/objects | tier4_perception_msgs/DetectedObjectsWithFeature | The detected objects with 2D bounding boxes |
| out/image | sensor_msgs/Image | The image with 2D bounding boxes for visualization |
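The out/image topic carries the input image with the detected boxes drawn onto it. Purely as an illustration of that overlay (this is not the node's actual drawing code, and the function name is made up), a minimal NumPy sketch:

```python
import numpy as np

def draw_box(image, box, color=(0, 255, 0)):
    """Draw a 1-pixel rectangle (x_min, y_min, x_max, y_max) on an RGB image.

    Illustrative only; the real node renders boxes on the GPU/with OpenCV.
    """
    out = image.copy()
    x0, y0, x1, y1 = box
    out[y0, x0:x1 + 1] = color      # top edge
    out[y1, x0:x1 + 1] = color      # bottom edge
    out[y0:y1 + 1, x0] = color      # left edge
    out[y0:y1 + 1, x1] = color      # right edge
    return out

# A blank 640x480 frame with one detection overlaid.
img = np.zeros((480, 640, 3), dtype=np.uint8)
vis = draw_box(img, (100, 50, 200, 150))
```

The original image is left untouched; only the copy published for visualization is modified.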
Parameters
Core Parameters
| Name | Type | Default Value | Description |
|---|---|---|---|
| anchors | double array | [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0] | The anchors used to create bounding box candidates |
| scale_x_y | double array | [1.0, 1.0, 1.0] | The scale parameter to eliminate grid sensitivity |
| score_thresh | double | 0.1 | If the objectness score is less than this value, the object is ignored in the yolo layer. |
| iou_thresh | double | 0.45 | The IoU threshold for the NMS method |
| detections_per_im | int | 100 | The maximum number of detections per frame |
| use_darknet_layer | bool | true | The flag to use the yolo layer in darknet |
| ignore_thresh | double | 0.5 | If the output score is less than this value, the object is ignored. |
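score_thresh, iou_thresh, and detections_per_im act as a standard detection post-processing chain: drop low-score candidates, suppress overlapping boxes with NMS, and cap the result count. A simplified, framework-free sketch of that chain (this is not the package's actual TensorRT/CUDA implementation; the function names are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def postprocess(detections, score_thresh=0.1, iou_thresh=0.45, detections_per_im=100):
    """detections: list of (box, score) pairs. Returns the kept pairs."""
    # 1. Drop candidates below the objectness threshold, best score first.
    candidates = sorted((d for d in detections if d[1] >= score_thresh),
                        key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in candidates:
        # 2. Greedy NMS: keep a box only if it overlaps no kept box too much.
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
        # 3. Cap the number of detections per frame.
        if len(kept) == detections_per_im:
            break
    return kept
```

For example, two heavily overlapping candidates collapse to the higher-scoring one, and a candidate below score_thresh is dropped outright.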
Node Parameters
| Name | Type | Default Value | Description |
|---|---|---|---|
| data_path | string | "" | Packages data and artifacts directory path |
| onnx_file | string | "" | The ONNX file name for the yolo model |
| engine_file | string | "" | The TensorRT engine file name for the yolo model |
| label_file | string | "" | The label file containing the label names of detected objects |
| calib_image_directory | string | "" | The directory containing calibration images for INT8 inference |
| calib_cache_file | string | "" | The calibration cache file for INT8 inference |
| mode | string | "FP32" | The inference mode: "FP32", "FP16", or "INT8" |
| gpu_id | int | 0 | GPU device ID on which the model runs |
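These parameters can be supplied via a ROS 2 parameter file. The snippet below is an illustrative sketch only: the wildcard node matcher, the example values, and the mix of core and node parameters are assumptions, not taken from the package sources.

```yaml
# Illustrative only: parameter names follow the tables above; the
# wildcard node matcher and all values are assumptions.
/**:
  ros__parameters:
    onnx_file: yolov3.onnx
    engine_file: yolov3.engine
    label_file: coco.names
    mode: FP16
    gpu_id: 0
    score_thresh: 0.1
    iou_thresh: 0.45
    detections_per_im: 100
```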
Assumptions / Known limits
This package includes multiple licenses.
Onnx model
All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.
All models are downloaded during environment preparation by Ansible (as mentioned in the installation instructions). It is also possible to download them manually; see Manual downloading of artifacts. When the node is launched with a model for the first time, the model is automatically converted to a TensorRT engine, which may take some time.
YOLOv3
- YOLOv3: Converted from the darknet weight file and conf file.
- This code is used for converting the darknet weight file and conf file to onnx.
YOLOv4
- YOLOv4: Converted from the darknet weight file and conf file.
- YOLOv4-tiny: Converted from the darknet weight file and conf file.
- This code is used for converting the darknet weight file and conf file to onnx.
YOLOv5
Refer to this guide
Limitations
- If you want to run multiple instances of this node for multiple cameras using "yolo.launch.xml", first create a TensorRT engine by running the "tensorrt_yolo.launch.xml" launch file separately for each GPU. Otherwise, multiple node instances trying to create the same TensorRT engine at once can cause problems.
Reference repositories
Wiki Tutorials
Package Dependencies
| Name |
|---|
| ament_cmake_auto |
| autoware_cmake |
| ament_lint_auto |
| autoware_lint_common |
| autoware_auto_perception_msgs |
| cv_bridge |
| image_transport |
| rclcpp |
| rclcpp_components |
| sensor_msgs |
| tier4_perception_msgs |
System Dependencies
Dependent Packages
Launch files
- launch/tensorrt_yolo.launch.xml
  - yolo_type [default: yolov3]
  - label_file [default: coco.names]
  - input_topic [default: /image_raw]
  - output_topic [default: rois]
  - data_path [default: $(env HOME)/autoware_data]
  - engine_file [default: $(var data_path)/tensorrt_yolo/$(var yolo_type).engine]
  - calib_image_directory [default: $(find-pkg-share tensorrt_yolo)/calib_image/]
  - mode [default: FP32]
  - gpu_id [default: 0]
- launch/yolo.launch.xml
  - image_raw0 [default: /image_raw0]
  - gpu_id_image_raw0 [default: 0]
  - image_raw1 [default: ]
  - gpu_id_image_raw1 [default: 0]
  - image_raw2 [default: ]
  - gpu_id_image_raw2 [default: 0]
  - image_raw3 [default: ]
  - gpu_id_image_raw3 [default: 0]
  - image_raw4 [default: ]
  - gpu_id_image_raw4 [default: 0]
  - image_raw5 [default: ]
  - gpu_id_image_raw5 [default: 0]
  - image_raw6 [default: ]
  - gpu_id_image_raw6 [default: 0]
  - image_raw7 [default: ]
  - gpu_id_image_raw7 [default: 0]
  - image_number [default: 1]
  - output_topic [default: rois]