Package Summary

| | |
|---|---|
| Tags | No category tags. |
| Version | 1.12.0 |
| License | Apache 2.0 |
| Build type | CATKIN |
| Use | RECOMMENDED |
Repository Summary

| | |
|---|---|
| Description | autoware src learn and recode. |
| Checkout URI | https://github.com/is-whale/autoware_learn.git |
| VCS Type | git |
| VCS Version | 1.14 |
| Last Updated | 2025-03-14 |
| Dev Status | UNKNOWN |
| CI status | No Continuous Integration |
| Released | UNRELEASED |
| Tags | No category tags. |
Package Description
Maintainers
- Kosuke Murakami
Authors
Point Pillars for 3D Object Detection: ver. 1.0
Autoware package for Point Pillars, based on the paper "PointPillars: Fast Encoders for Object Detection from Point Clouds" (Lang et al., 2019).
This node can be compiled either with cuDNN and TensorRT or with TVM; the default is cuDNN and TensorRT.
Requirements

- CUDA Toolkit v9.0 or v10.0

To compile the node with cuDNN and TensorRT support, the requirements are:

- cuDNN: tested with v7.3.1
- TensorRT: tested with 5.0.2

To compile the node with TVM support, the requirements are:

- TVM runtime, TVM Python bindings, and dlpack headers
- tvm_utility package
How to setup

Setup the node with cuDNN and TensorRT support:

- Download the pretrained files:

```
$ git clone https://github.com/k0suke-murakami/kitti_pretrained_point_pillars.git
```
Setup the node with TVM support:

- Clone the model zoo repository for Autoware.
- Use the TVM-CLI to export the Point Pillars models to TVM (instructions are in the repository). The models are `perception/lidar_obstacle_detection/point_pillars_pfe/onnx_fp32_kitti` and `perception/lidar_obstacle_detection/point_pillars_rpn/onnx_fp32_kitti`.
- Copy the generated files into the `tvm_models/tvm_point_pillars_pfe` and `tvm_models/tvm_point_pillars_rpn` folders, respectively. With these files in place, the package will be built using TVM.
- Compile the node.
How to launch

- Launch file (cuDNN and TensorRT support):

```
roslaunch lidar_point_pillars lidar_point_pillars.launch pfe_onnx_file:=/PATH/TO/FILE.onnx rpn_onnx_file:=/PATH/TO/FILE.onnx input_topic:=/points_raw
```

- Launch file (TVM support):

```
roslaunch lidar_point_pillars lidar_point_pillars.launch
```

- You can also launch it through the Runtime Manager, in the Computing tab.
API

```cpp
/**
 * @brief Call PointPillars for inference.
 * @param[in] in_points_array Pointcloud array
 * @param[in] in_num_points Number of points
 * @param[out] out_detections Output bounding boxes from the network
 * @details This is an interface for the algorithm.
 */
void doInference(float* in_points_array, int in_num_points, std::vector<float>& out_detections);
```

Note that `out_detections` must be taken by reference; an `@param[out]` passed by value could not return the detections to the caller.
Parameters

| Parameter | Type | Description | Default |
|---|---|---|---|
| `input_topic` | String | Input pointcloud topic. | `/points_raw` |
| `baselink_support` | Bool | Whether to use base_link to adjust parameters. | `True` |
| `reproduce_result_mode` | Bool | Whether to enable reproducible-result mode at the cost of runtime. | `False` |
| `score_threshold` | Float | Minimum score required to include a result, in [0, 1]. | 0.5 |
| `nms_overlap_threshold` | Float | IoU threshold used when applying NMS, in [0, 1]. | 0.5 |
| `pfe_onnx_file` | String | Path to the PFE ONNX file; unused if the TVM build is chosen. | |
| `rpn_onnx_file` | String | Path to the RPN ONNX file; unused if the TVM build is chosen. | |
Outputs

| Topic | Type | Description |
|---|---|---|
| `/detection/lidar_detector/objects` | `autoware_msgs/DetectedObjectArray` | Array of detected objects in Autoware format |
Notes

- To display the results in RViz, the `objects_visualizer` node is required (the launch file launches this node automatically).
- Pretrained models are available, trained with the help of the KITTI dataset. For this reason, they are not suitable for commercial purposes. Derivative works are bound to the CC BY-NC-SA 3.0 License (https://creativecommons.org/licenses/by-nc-sa/3.0/).
Changelog for package lidar_point_pillars
1.11.0 (2019-03-21)
- [Feature] PointPillars (#2029)
- Contributors: Kosuke Murakami
Launch files

- launch/lidar_point_pillars.launch
  - input_topic [default: /points_raw]
  - baselink_support [default: true]
  - reproduce_result_mode [default: true]
  - score_threshold [default: 0.5]
  - nms_overlap_threshold [default: 0.5]
  - pfe_onnx_file
  - rpn_onnx_file