yolov5_isaac_ros package from yolov5-with-isaac-ros repo
Package Summary
Tags | No category tags. |
Version | 0.0.0 |
License | Apache-2.0 |
Build type | AMENT_PYTHON |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/nvidia-ai-iot/yolov5-with-isaac-ros.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2022-12-02 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Contributing | Help Wanted (0) Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- admin
Authors
YOLOv5 object detection with Isaac ROS
This is a sample showing how to integrate YOLOv5 with Nvidia Isaac ROS DNN Inference.
Requirements
Tested on a Jetson Orin running JetPack 5.0.2 with an Intel RealSense D435 camera.
Development Environment Setup
Use the Isaac ROS Dev Docker for development. This provides an environment with all dependencies installed to run Isaac ROS packages.
Usage
Refer to the license terms for the YOLOv5 project before using this software and ensure you are using YOLOv5 under license terms compatible with your project requirements.
Model preparation
- Download the YOLOv5 PyTorch model `yolov5s.pt` from the Ultralytics YOLOv5 project.
- Export it to ONNX following the steps here, and visualize the ONNX model using Netron. Note the input and output names - these will be used to run the node. For instance, `images` for the input and `output0` for the output. Also note the input dimensions, for instance, `(1x3x640x640)`. A sample export command is shown below.
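For reference, the export step with the Ultralytics repository typically looks like this (run from a clone of the Ultralytics yolov5 repo; exact flags can vary between YOLOv5 releases):

```bash
# Export yolov5s.pt to ONNX; this writes yolov5s.onnx next to the weights.
python export.py --weights yolov5s.pt --include onnx
# Optionally inspect the exported graph with Netron to confirm the
# input/output binding names and dimensions.
pip install netron
netron yolov5s.onnx
```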
Object Detection pipeline Setup
- Following the development environment setup above, you should have a ROS2 workspace named `workspaces/isaac_ros-dev`. Clone this repository and its dependencies under `workspaces/isaac_ros-dev/src`:
```bash
cd ~/workspaces/isaac_ros-dev/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
git clone https://github.com/NVIDIA-AI-IOT/YOLOv5-with-Isaac-ROS.git
```
- Download requirements.txt from the Ultralytics YOLOv5 project to `workspaces/isaac_ros-dev/src`.
- Copy your ONNX model from above (say, `yolov5s.onnx`) to `workspaces/isaac_ros-dev/src`.
- Follow the Isaac ROS RealSense Setup to set up the camera.
- Launch the Docker container using the `run_dev.sh` script:
```bash
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common
./scripts/run_dev.sh
```
- Inside the container, run the following:
```bash
pip install -r src/requirements.txt
```
- Install Torchvision: this project runs on a device with an Nvidia GPU, and the Isaac ROS Dev container uses the Nvidia-built PyTorch with CUDA acceleration. To keep CUDA acceleration, install a compatible Torchvision version from source. Specify the compatible version in place of `$torchvision_tag` below:
```bash
git clone https://github.com/pytorch/vision.git
cd vision
git checkout $torchvision_tag
pip install -v .
```
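To pick `$torchvision_tag`, check the PyTorch version shipped in the container and match it against the compatibility table in the torchvision README (the pairing in the comment below is only an example):

```bash
# Print the container's PyTorch version.
python -c "import torch; print(torch.__version__)"
# Example pairing only: PyTorch 1.13.x corresponds to torchvision v0.14.x,
# in which case you would run: git checkout v0.14.0
```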
- Download the utils folder from the Ultralytics YOLOv5 project and put it in the `yolov5_isaac_ros` folder of this repository. Your file structure should now look like this (not all files shown):
```
.
+- workspaces
   +- isaac_ros-dev
      +- src
         +- requirements.txt
         +- yolov5s.onnx
         +- isaac_ros_common
         +- YOLOv5-with-Isaac-ROS
            +- README
            +- launch
            +- images
            +- yolov5_isaac_ros
               +- utils
               +- Yolov5Decoder.py
               +- Yolov5DecoderUtils.py
```
- After downloading utils from the Ultralytics YOLOv5 project, make the following change to `utils/general.py`, `utils/torch_utils.py` and `utils/metrics.py`: in the import statements, add `yolov5_isaac_ros` before `utils`. For instance, change `from utils.metrics import box_iou` to `from yolov5_isaac_ros.utils.metrics import box_iou`. A scripted version of this edit is sketched below.
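If you prefer to script this edit, a substitution along these lines should work (a sketch; the path assumes the container layout from the steps above, so review the resulting diff before building):

```bash
# Prefix the utils imports with the package name in the three files.
cd /workspaces/isaac_ros-dev/src/YOLOv5-with-Isaac-ROS/yolov5_isaac_ros
sed -i 's/from utils\./from yolov5_isaac_ros.utils./g' \
  utils/general.py utils/torch_utils.py utils/metrics.py
```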
Running the pipeline with the TensorRT inference node
- Inside the container, build and source the workspace:
```bash
cd /workspaces/isaac_ros-dev
colcon build --symlink-install
source install/setup.bash
```
- Launch the RealSense camera node as per step 7 here:
```bash
ros2 launch realsense2_camera rs_launch.py
```
- Verify that images are being published on `/camera/color/image_raw`. You could use RQt or Foxglove for this, or use this command in another terminal inside the container:

```bash
ros2 topic echo /camera/color/image_raw
```
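You can also confirm the publishing rate:

```bash
ros2 topic hz /camera/color/image_raw
```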
- In another terminal inside the container, run the `isaac_ros_yolov5_tensor_rt` launch file. This launches the DNN image encoder node, the TensorRT inference node and the YOLOv5 decoder node, along with a visualization script that shows results in RQt. Use the names noted above in Model preparation as `input_binding_names` and `output_binding_names` (for example, `images` for `input_binding_names` and `output0` for `output_binding_names`). Similarly, use the input dimensions noted above as `network_image_width` and `network_image_height`:
```bash
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py model_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.onnx engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640
```
- For subsequent runs, use the following command, since the engine file `yolov5s.plan` is generated and saved in `workspaces/isaac_ros-dev/src/` after the first run:
```bash
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640
```
- You can also modify the parameters of the YOLOv5 decoder node (see Modifying detection parameters below).
- The workflow is as follows:
  - The DNN image encoder node subscribes to images from the RealSense camera node on topic `/camera/color/image_raw`.
  - It encodes each image into an isaac_ros_tensor_list_interfaces/TensorList message and publishes it on topic `tensor_pub`.
  - The TensorRT node uses the given ONNX model/TensorRT engine and performs inference on the tensors coming from the encoder node. It publishes results as an isaac_ros_tensor_list_interfaces/TensorList message on topic `tensor_sub`.
  - The YOLOv5 decoder node post-processes these tensors to extract the following information for each detection in the image: bounding box center X and Y coordinates, bounding box height and width, detection confidence score and object class. It publishes this information on topic `object_detections` as a Detection2DArray message.
  - `isaac_ros_yolov5_visualizer.py` subscribes to topics `camera/color/image_raw` from the camera node and `object_detections` from the decoder node, and publishes images with the resulting bounding boxes on topic `yolov5_processed_image`.
  - On running the pipeline, an RQt window will pop up where you can view `yolov5_processed_image`. These images show bounding boxes, object classes and detection scores around detected objects. You could also use Foxglove to view images on `yolov5_processed_image`.
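To inspect the decoder output directly without RQt or Foxglove, you can echo the detections topic from another terminal inside the container (assuming the default topic name):

```bash
# Each message is a Detection2DArray (vision_msgs) with one entry per detection.
ros2 topic echo /object_detections
```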
Using the Triton inference node with the TensorRT backend
- Convert the ONNX model (say, `yolov5s.onnx`) to a TensorRT plan file named `model.plan` using `trtexec`. To do this, run the following command from `/usr/src/tensorrt/bin` and save the generated file under `yolov5/1/` of this project:
```bash
cd /usr/src/tensorrt/bin
./trtexec --onnx=yolov5s.onnx --saveEngine=<absolute-path-to-save-location> --fp16
```
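For example, with the file layout shown below, the save location would be the `yolov5/1/` folder of the cloned repository (illustrative paths; adjust them to your checkout):

```bash
./trtexec --onnx=/workspaces/isaac_ros-dev/src/yolov5s.onnx \
          --saveEngine=/workspaces/isaac_ros-dev/src/YOLOv5-with-Isaac-ROS/yolov5/1/model.plan \
          --fp16
```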
- The file structure should look like this (not all files shown):
```
.
+- workspaces
   +- isaac_ros-dev
      +- src
         +- isaac_ros_common
         +- YOLOv5-with-Isaac-ROS
            +- README
            +- launch
            +- yolov5
               +- config.pbtxt
               +- 1
                  +- model.plan
```
- To launch the pipeline using Triton for inference (specify `network_image_width` and `network_image_height` as explained for the TensorRT node above):
```bash
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_triton.launch.py network_image_width:=640 network_image_height:=640
```
Visit Isaac ROS DNN Inference for more information about the Image encoder, TensorRT and Triton nodes.
Modifying detection parameters
Parameters like the confidence threshold can be specified in the `decoder_params.yaml` file under the `yolov5-isaac-ros-dnn/config` folder. Below is a description of each parameter:
- conf_thres: detection confidence threshold; detections scoring below it are discarded
- iou_thres: IoU threshold used during non-maximum suppression
- max_det: maximum number of detections per image
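As a sketch only, a `decoder_params.yaml` using YOLOv5's usual defaults might look like the following; the node name and values here are illustrative assumptions, so keep whatever the shipped config file uses:

```yaml
# Illustrative sketch - node name and values are assumptions, not the shipped config.
yolov5_decoder_node:
  ros__parameters:
    conf_thres: 0.25   # discard detections below this confidence score
    iou_thres: 0.45    # IoU threshold used during non-maximum suppression
    max_det: 1000      # maximum number of detections kept per image
```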
Support
Please reach out regarding issues and suggestions here.
Wiki Tutorials
Package Dependencies
System Dependencies
Name |
---|
python3-pytest |