
yolov5_isaac_ros package from yolov5-with-isaac-ros repo

yolov5_isaac_ros

Package Summary

Tags No category tags.
Version 0.0.0
License Apache-2.0
Build type AMENT_PYTHON
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/nvidia-ai-iot/yolov5-with-isaac-ros.git
VCS Type git
VCS Version main
Last Updated 2022-12-02
Dev Status UNMAINTAINED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

ROS2 package for YOLOv5 object detection to use with Nvidia Isaac ROS

Additional Links

No additional links.

Maintainers

  • admin

Authors

No additional authors.

YOLOv5 object detection with Isaac ROS

This is a sample showing how to integrate YOLOv5 with Nvidia Isaac ROS DNN Inference.

Requirements

Tested on a Jetson Orin running JetPack 5.0.2 with an Intel RealSense D435 camera.

Development Environment Setup

Use the Isaac ROS Dev Docker for development. This provides an environment with all dependencies installed to run Isaac ROS packages.

Usage

Refer to the license terms for the YOLOv5 project before using this software and ensure you are using YOLOv5 under license terms compatible with your project requirements.

Model preparation

  • Download the YOLOv5 PyTorch model yolov5s.pt from the Ultralytics YOLOv5 project.
  • Export the model to ONNX following the Ultralytics export instructions and visualize the resulting ONNX model using Netron. Note the input and output names (for instance, images for input and output0 for output) - these are used when running the node. Also note the input dimensions, for instance (1x3x640x640).
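The export step can be sketched with the Ultralytics export script; a typical invocation (run from the root of an Ultralytics YOLOv5 checkout, with weight paths assumed) looks like this:

```shell
# Run from the root of an Ultralytics YOLOv5 checkout.
# Produces yolov5s.onnx next to the weights file.
python export.py --weights yolov5s.pt --include onnx
```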

Object Detection pipeline Setup

  1. Following the development environment setup above, you should have a ROS2 workspace named workspaces/isaac_ros-dev. Clone this repository and its dependencies under workspaces/isaac_ros-dev/src:
cd ~/workspaces/isaac_ros-dev/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference.git
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
git clone https://github.com/NVIDIA-AI-IOT/YOLOv5-with-Isaac-ROS.git

  2. Download requirements.txt from the Ultralytics YOLOv5 project to workspaces/isaac_ros-dev/src.
  3. Copy your ONNX model from above (say, yolov5s.onnx) to workspaces/isaac_ros-dev/src.
  4. Follow the Isaac ROS RealSense Setup to set up the camera.
  5. Launch the Docker container using the run_dev.sh script:
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common
./scripts/run_dev.sh

  6. Inside the container, run the following:
pip install -r src/requirements.txt

  7. Install Torchvision: this project runs on a device with an Nvidia GPU, and the Isaac ROS Dev container ships the Nvidia-built, CUDA-accelerated PyTorch. Install a compatible Torchvision version from source so that it is also CUDA-accelerated, specifying the compatible version in place of $torchvision_tag below:
git clone https://github.com/pytorch/vision.git
cd vision
git checkout $torchvision_tag
pip install -v .

  8. Download the utils folder from the Ultralytics YOLOv5 project and put it in the yolov5_isaac_ros folder of this repository. Your file structure should now look like this (not all files shown):
.
+- workspaces
   +- isaac_ros-dev
      +- src
         +- requirements.txt
         +- yolov5s.onnx
         +- isaac_ros_common
         +- YOLOv5-with-Isaac-ROS
            +- README
            +- launch
            +- images
            +- yolov5_isaac_ros
               +- utils
               +- Yolov5Decoder.py  
               +- Yolov5DecoderUtils.py    


  9. After downloading utils from the Ultralytics YOLOv5 project, update the import statements in utils/general.py, utils/torch_utils.py and utils/metrics.py: add yolov5_isaac_ros before utils. For instance, change from utils.metrics import box_iou to from yolov5_isaac_ros.utils.metrics import box_iou.
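As a sketch, this import rewrite amounts to prefixing the module path with the package name. The helper below is illustrative only, not part of the repository:

```python
# Illustrative only: prefix Ultralytics-style imports with the package name
# so they resolve as yolov5_isaac_ros.utils.* inside the ROS 2 workspace.
def prefix_utils_imports(line: str) -> str:
    return line.replace("from utils.", "from yolov5_isaac_ros.utils.")

print(prefix_utils_imports("from utils.metrics import box_iou"))
# from yolov5_isaac_ros.utils.metrics import box_iou
```

Lines that do not import from utils pass through unchanged, so the helper can be applied to every line of the three files.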

Running the pipeline with TensorRT inference node

  1. Inside the container, build and source the workspace:
cd /workspaces/isaac_ros-dev
colcon build --symlink-install
source install/setup.bash

  2. Launch the RealSense camera node: ros2 launch realsense2_camera rs_launch.py
  3. Verify that images are being published on /camera/color/image_raw. You can use RQt or Foxglove for this, or run this command in another terminal inside the container: ros2 topic echo /camera/color/image_raw
  4. In another terminal inside the container, run the isaac_ros_yolov5_tensor_rt launch file. This launches the DNN image encoder node, the TensorRT inference node and the YOLOv5 decoder node, along with a visualization script that shows results in RQt. Use the names noted above in Model preparation as input_binding_names and output_binding_names (for example, images for input_binding_names and output0 for output_binding_names). Similarly, use the input dimensions noted above as network_image_width and network_image_height:
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py model_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.onnx engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640

  5. For subsequent runs, use the following command, since the engine file yolov5s.plan has already been generated and saved in workspaces/isaac_ros-dev/src/ by the command above:
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_tensor_rt.launch.py engine_file_path:=/workspaces/isaac_ros-dev/src/yolov5s.plan input_binding_names:=['images'] output_binding_names:=['output0'] network_image_width:=640 network_image_height:=640  

  6. You can also modify parameters of the YOLOv5 decoder node (see Modifying detection parameters below).
  7. The workflow is as follows:
    • The DNN image encoder node subscribes to images from the RealSense camera node on topic /camera/color/image_raw.
    • It encodes each image into an isaac_ros_tensor_list_interfaces/TensorList message and publishes on topic tensor_pub.
    • The TensorRT node uses the given ONNX model/TensorRT engine and performs inference on the tensors coming from the encoder node. It publishes results as an isaac_ros_tensor_list_interfaces/TensorList message on topic tensor_sub.
    • The YOLOv5 decoder node does post-processing on these tensors to extract the following information for each detection in the image: (bounding box center X and Y coordinates, bounding box height and width, detection confidence score and object class). It publishes this information on topic object_detections as a Detection2DArray message.
    • isaac_ros_yolov5_visualizer.py subscribes to topics camera/color/image_raw from the camera node and object_detections from the decoder node. It publishes images with the resulting bounding boxes on topic yolov5_processed_image.
    • On running the pipeline, an RQt window will pop up, where you can view yolov5_processed_image. These images will contain bounding boxes, object classes and detection scores around detected objects. You could also use Foxglove to view images on yolov5_processed_image.
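The decoder's per-detection output described above can be sketched in plain Python. The function below is an illustrative decode of one raw YOLOv5 output row (layout [cx, cy, w, h, objectness, class scores...]), not the package's actual implementation; the threshold default is an assumption:

```python
def decode_row(row, conf_thres=0.25):
    """Decode one YOLOv5 output row into (cx, cy, w, h, score, class_id).

    Row layout: [cx, cy, w, h, objectness, p_class0, p_class1, ...].
    Returns None when the combined score falls below conf_thres.
    """
    cx, cy, w, h, obj = row[:5]
    class_probs = row[5:]
    # Pick the most likely class; final score combines objectness and class prob.
    class_id = max(range(len(class_probs)), key=lambda i: class_probs[i])
    score = obj * class_probs[class_id]
    if score < conf_thres:
        return None
    return (cx, cy, w, h, score, class_id)

# Example: a confident "class 1" detection centered at (320, 240).
det = decode_row([320.0, 240.0, 100.0, 80.0, 0.9, 0.1, 0.8])
print(det)
```

In the real node, surviving detections are additionally filtered by non-maximum suppression before being published as a Detection2DArray.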

Using Triton inference node with TensorRT Backend

  1. Convert the ONNX model (say, yolov5s.onnx) to a TensorRT plan file named model.plan using trtexec. To do this, run the following command from /usr/src/tensorrt/bin and save the generated file under yolov5/1/ of this project.
cd /usr/src/tensorrt/bin
./trtexec --onnx=yolov5s.onnx --saveEngine=<absolute-path-to-save-location>  --fp16

  2. The file structure should look like this (not all files shown):
.
+- workspaces
   +- isaac_ros-dev
      +- src
         +- isaac_ros_common
         +- YOLOv5-with-Isaac-ROS
            +- README
            +- launch
            +- yolov5
               +- config.pbtxt
               +- 1
                  +- model.plan
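For reference, a Triton model configuration for a TensorRT plan might look like the sketch below. The binding names and output dimensions are assumptions (they depend on your exported model; output0 of shape 1x25200x85 corresponds to a stock 80-class yolov5s at 640x640) - check them against your own config.pbtxt:

```
name: "yolov5"
platform: "tensorrt_plan"
max_batch_size: 0
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 1, 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 1, 25200, 85 ]
  }
]
```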

  3. To launch the pipeline using Triton for inference (specify network_image_width and network_image_height as explained for the TensorRT node above):
ros2 launch yolov5_isaac_ros isaac_ros_yolov5_triton.launch.py network_image_width:=640 network_image_height:=640

Visit Isaac ROS DNN Inference for more information about the Image encoder, TensorRT and Triton nodes.

Modifying detection parameters

Parameters such as the confidence threshold can be set in the decoder_params.yaml file under the yolov5-isaac-ros-dnn/config folder. Each parameter is described below:

  • conf_thres: detection confidence threshold
  • iou_thres: IOU threshold for non-maximum suppression
  • max_det: maximum number of detections per image
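A decoder_params.yaml with these parameters might look like the following sketch. The node name and values are assumptions for illustration; use the file shipped in the repository's config folder as the authoritative source:

```yaml
yolov5_decoder_node:
  ros__parameters:
    conf_thres: 0.25   # detection confidence threshold
    iou_thres: 0.45    # IOU threshold for non-maximum suppression
    max_det: 1000      # maximum number of detections per image
```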

Support

Please reach out with issues and suggestions via the repository's GitHub issues page.

CHANGELOG
No CHANGELOG found.

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

No launch files found

Messages

No message files found.

Services

No service files found

Plugins

No plugins found.

Recent questions tagged yolov5_isaac_ros at Robotics Stack Exchange
