Package Summary

Tags | No category tags. |
---|---|
Version | 0.43.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary

Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
---|---|
VCS Type | git |
VCS Version | main |
Last Updated | 2025-04-04 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Maintainers
- Kenzo Lobos-Tsunekawa
- Amadeusz Szymko
- Satoshi Tanaka
autoware_lidar_centerpoint
Purpose
autoware_lidar_centerpoint is a package for detecting dynamic 3D objects.
Inner-workings / Algorithms
In this implementation, CenterPoint [1] uses a PointPillars-based [2] network and performs inference with TensorRT.
We trained the models using https://github.com/open-mmlab/mmdetection3d.
Inputs / Outputs
Input
Name | Type | Description |
---|---|---|
~/input/pointcloud | sensor_msgs::msg::PointCloud2 | input pointcloud |
Output
Name | Type | Description |
---|---|---|
~/output/objects | autoware_perception_msgs::msg::DetectedObjects | detected objects |
debug/cyclic_time_ms | autoware_internal_debug_msgs::msg::Float64Stamped | cyclic time (ms) |
debug/processing_time_ms | autoware_internal_debug_msgs::msg::Float64Stamped | processing time (ms) |
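For reference, below is a minimal sketch of a node that consumes the detection output. It assumes a workspace where `autoware_perception_msgs` is built, and that `~/output/objects` has been remapped to `/perception/objects` (a hypothetical name; adjust to your launch configuration).

```python
import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import DetectedObjects


class DetectionListener(Node):
    def __init__(self) -> None:
        super().__init__("detection_listener")
        # Subscribe to the detector output (the remapped name is hypothetical).
        self.create_subscription(
            DetectedObjects, "/perception/objects", self.on_objects, 10
        )

    def on_objects(self, msg: DetectedObjects) -> None:
        # existence_probability holds the DNN classification confidence,
        # not a calibrated probability (see Assumptions / Known limits).
        confident = [o for o in msg.objects if o.existence_probability >= 0.4]
        self.get_logger().info(
            f"{len(msg.objects)} objects, {len(confident)} above 0.4"
        )


def main() -> None:
    rclpy.init()
    rclpy.spin(DetectionListener())


if __name__ == "__main__":
    main()
```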
Parameters
ML Model Parameters
Note that these parameters are tied to the ONNX file and are fixed during the training phase. When changing any of these parameters, update the ONNX file accordingly; likewise, whenever you replace the ONNX file, verify these values.
Name | Type | Default Value | Description |
---|---|---|---|
model_params.class_names | list[string] | ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"] | list of class names for model outputs |
model_params.point_feature_size | int | 4 | number of features per point in the point cloud |
model_params.max_voxel_size | int | 40000 | maximum number of voxels |
model_params.point_cloud_range | list[double] | [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0] | detection range [min_x, min_y, min_z, max_x, max_y, max_z] [m] |
model_params.voxel_size | list[double] | [0.32, 0.32, 10.0] | size of each voxel [x, y, z] [m] |
model_params.downsample_factor | int | 1 | downsample factor for coordinates |
model_params.encoder_in_feature_size | int | 9 | number of input features to the encoder |
model_params.has_variance | bool | false | true if the model outputs pose variance as well as pose for each bbox |
model_params.has_twist | bool | false | true if the model outputs velocity as well as pose for each bbox |
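As a worked example of how these parameters relate, the following sketch derives the pseudo-image grid dimensions implied by the default values above; the detection range must be an integer multiple of the voxel size.

```python
# Derive the BEV grid dimensions from the default ML model parameters above.
point_cloud_range = [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0]  # [min_x, min_y, min_z, max_x, max_y, max_z] [m]
voxel_size = [0.32, 0.32, 10.0]  # [x, y, z] [m]


def grid_dim(lo: float, hi: float, size: float) -> int:
    n = (hi - lo) / size
    # The range must divide evenly into voxels; tolerate float rounding.
    assert abs(n - round(n)) < 1e-6, "range is not a multiple of voxel size"
    return round(n)


grid_x = grid_dim(point_cloud_range[0], point_cloud_range[3], voxel_size[0])  # 480
grid_y = grid_dim(point_cloud_range[1], point_cloud_range[4], voxel_size[1])  # 480
grid_z = grid_dim(point_cloud_range[2], point_cloud_range[5], voxel_size[2])  # 1 -> pillars, as in PointPillars
print(grid_x, grid_y, grid_z)
```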
Core Parameters
Name | Type | Default Value | Description |
---|---|---|---|
encoder_onnx_path | string | "" | path to VoxelFeatureEncoder ONNX file |
encoder_engine_path | string | "" | path to VoxelFeatureEncoder TensorRT Engine file |
head_onnx_path | string | "" | path to DetectionHead ONNX file |
head_engine_path | string | "" | path to DetectionHead TensorRT Engine file |
build_only | bool | false | shut down the node after the TensorRT engine file is built |
trt_precision | string | fp16 | TensorRT inference precision: fp32 or fp16 |
post_process_params.score_threshold | double | 0.4 | detected objects with a score below this threshold are ignored |
post_process_params.yaw_norm_thresholds | list[double] | [0.3, 0.3, 0.3, 0.3, 0.0] | per-class thresholds on the norm of the yaw vector [rad] |
post_process_params.iou_nms_search_distance_2d | double | - | if two objects are farther apart than this value, NMS is not applied |
post_process_params.iou_nms_threshold | double | - | IoU threshold for the IoU-based Non-Maximum Suppression |
post_process_params.has_twist | boolean | false | indicates whether the model outputs twist values |
densification_params.world_frame_id | string | map | the world frame id used to fuse multi-frame pointclouds |
densification_params.num_past_frames | int | 1 | the number of past frames fused with the current frame |
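To illustrate what the densification parameters control, here is a conceptual numpy sketch, not the package's actual CUDA implementation: past clouds are carried with their pose in the `world_frame_id` frame and re-projected into the current sensor frame before voxelization. The function name and the 4x4 pose inputs are illustrative; in the node these transforms come from TF, and with the default `num_past_frames: 1` the list holds a single past frame.

```python
import numpy as np


def densify(current_points, past_frames, T_world_from_current):
    """current_points: (N, 3) array in the current sensor frame.
    past_frames: list of (points (M, 3), T_world_from_past 4x4) pairs.
    T_world_from_current: 4x4 pose of the current sensor in the world frame.
    """
    T_current_from_world = np.linalg.inv(T_world_from_current)
    merged = [current_points]
    for points, T_world_from_past in past_frames:
        # Re-project the past frame: past -> world -> current sensor frame.
        T = T_current_from_world @ T_world_from_past
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        merged.append((homogeneous @ T.T)[:, :3])
    return np.vstack(merged)
```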
The `build_only` option

The `autoware_lidar_centerpoint` node has a `build_only` option to build the TensorRT engine file from the ONNX file. Although it is preferable to move all ROS parameters into the `.param.yaml` file in Autoware Universe, the `build_only` option has not been moved into the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:
```bash
ros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_tiny model_path:=/home/autoware/autoware_data/lidar_centerpoint model_param_path:=$(ros2 pkg prefix autoware_lidar_centerpoint --share)/config/centerpoint_tiny.param.yaml build_only:=true
```
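For instance, a deployment script could run the engine build as a pre-task before bringing up the full stack. The sketch below simply wraps the documented command with `subprocess` and relies on the launch file's default for `model_param_path`; the paths are examples, so adjust them to your installation.

```python
import subprocess

# Mirrors the documented build_only command above; the node exits once
# the TensorRT engine has been built.
subprocess.run(
    [
        "ros2", "launch", "autoware_lidar_centerpoint",
        "lidar_centerpoint.launch.xml",
        "model_name:=centerpoint_tiny",
        "model_path:=/home/autoware/autoware_data/lidar_centerpoint",
        "build_only:=true",
    ],
    check=True,  # fail the pre-task if the engine build fails
)
```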
Assumptions / Known limits
- The `object.existence_probability` stores the classification confidence of the DNN, not a probability.
Trained Models
You can download the trained models in ONNX format from the links below.
- Centerpoint: pts_voxel_encoder_centerpoint.onnx, pts_backbone_neck_head_centerpoint.onnx
- Centerpoint tiny: pts_voxel_encoder_centerpoint_tiny.onnx, pts_backbone_neck_head_centerpoint_tiny.onnx
Centerpoint was trained on nuScenes (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
Centerpoint tiny was trained on Argoverse 2 (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.
Training a CenterPoint Model and Deploying It to Autoware
Overview
This guide provides instructions on training a CenterPoint model using the mmdetection3d repository and seamlessly deploying it within Autoware.
Installation
Install prerequisites
Step 1. Download and install Miniconda from the official website.
Step 2. Create a conda virtual environment and activate it:

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```
Step 3. Install PyTorch
Please ensure that the PyTorch installation is compatible with CUDA 11.6, as this is a requirement of current Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
Install mmdetection3d
Step 1. Install MMEngine, MMCV, and MMDetection using MIM:

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
```
Step 2. Install the forked mmdetection3d repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository. Notably, we've made the PointPillars z voxel feature input optional to maintain compatibility with the original paper. In addition, we've integrated a PyTorch-to-ONNX converter and a T4 format reader for added functionality.

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```
Use Training Repository with Docker
Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.
Clone the fork of the mmdetection3d repository:

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
```

Build the Docker image by running the following command:

```bash
cd mmdetection3d
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run the Docker container:

```bash
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
Preparing NuScenes dataset for training
Step 1. Download the NuScenes dataset from the official website and extract the dataset to a folder of your choice.
Note: The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.
Step 2. Create a symbolic link to the dataset folder:

```bash
ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/
```

Step 3. Prepare the NuScenes data by running:

```bash
cd mmdetection3d
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```
Training CenterPoint with NuScenes Dataset
Prepare the config file
The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of this centerpoint configuration file from mmdetection3d.
In this custom configuration, the `use_voxel_center_z` parameter is set to `False` to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to `[32, 32]`.
The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file. This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
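As an illustration, such adjustments can also be made programmatically with mmengine's `Config` API. The field names overridden below (batch size, epoch count) are illustrative and should be checked against the actual keys in `centerpoint_custom.py` for your mmdetection3d version.

```python
from mmengine.config import Config

# Load the custom training config shipped in the fork.
cfg = Config.fromfile("projects/AutowareCenterPoint/configs/centerpoint_custom.py")

# Illustrative overrides -- verify these keys exist in your config version.
cfg.train_dataloader.batch_size = 2  # fit a smaller GPU
cfg.train_cfg.max_epochs = 20        # shorter training schedule

# Save the modified config and pass it to tools/train.py instead.
cfg.dump("projects/AutowareCenterPoint/configs/centerpoint_custom_local.py")
```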
Start training:

```bash
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
```
Evaluation of the trained model
For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames with 3D annotations for 5 distinct classes: 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks. Frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the T4 format.
Download the sample dataset:

```bash
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
# Extract the dataset to a folder of your choice
tar -xvf lidar_detection_sample_dataset.tar.gz
# Create a symbolic link to the dataset folder
ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
```
Prepare dataset and evaluate trained model
Create `.pkl` files for training, evaluation, and testing. The dataset is formatted according to the T4Dataset specification, with `sample_dataset` designated as one of its versions.

```bash
python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
```
Run evaluation:

```bash
python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT --task lidar_det
```
Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds referenced to the base link location of the vehicle.
Deploying CenterPoint model to Autoware
Convert CenterPoint PyTorch model to ONNX Format
The autoware_lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network, such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository, we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format. You can find it in the `mmdetection3d/projects/AutowareCenterPoint` directory.

```bash
python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
```
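After conversion, it can be useful to inspect the two exported files before wiring them into Autoware. The sketch below uses `onnxruntime` to print the input and output tensors of each model; the file names are hypothetical converter outputs, so adjust them to whatever appears in your `--work-dir`.

```python
import onnxruntime as ort

# Hypothetical output file names -- check your --work-dir for the real ones.
for path in [
    "work_dirs/onnx_models/pts_voxel_encoder_centerpoint_custom.onnx",
    "work_dirs/onnx_models/pts_backbone_neck_head_centerpoint_custom.onnx",
]:
    sess = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path)
    for t in sess.get_inputs():
        print("  in :", t.name, t.shape, t.type)
    for t in sess.get_outputs():
        print("  out:", t.name, t.shape, t.type)
```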
Create the config file for the custom model
Create a new config file named `centerpoint_custom.param.yaml` under the config directory of the autoware_lidar_centerpoint node. Set the parameters of the config file, such as `point_cloud_range`, `point_feature_size`, and `voxel_size`, according to the training config file.
```yaml
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    max_voxel_size: 40000
    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]
    voxel_size: [0.2, 0.2, 8.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
    # post-process params
    circle_nms_dist_threshold: 0.5
    iou_nms_search_distance_2d: 10.0
    iou_nms_threshold: 0.1
    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]
```
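Since a mismatch between the detection range and the voxel size breaks the model's grid assumptions, a quick sanity check of the new param file can catch mistakes before launch. This is a hypothetical helper script, not part of the package:

```python
import yaml

# Load the custom param file written above.
with open("centerpoint_custom.param.yaml") as f:
    params = yaml.safe_load(f)["/**"]["ros__parameters"]

rng = params["point_cloud_range"]
voxel = params["voxel_size"]
for axis, (lo, hi, size) in enumerate(zip(rng[:3], rng[3:], voxel)):
    n = (hi - lo) / size
    # Each axis of the range must be an integer multiple of the voxel size.
    assert abs(n - round(n)) < 1e-6, f"axis {axis}: range not divisible by voxel size"
    print(f"axis {axis}: {round(n)} voxels")
```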
Launch the lidar_centerpoint node
```bash
cd /YOUR/AUTOWARE/PATH/Autoware
source install/setup.bash
ros2 launch autoware_lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/
```
Changelog
v1 (2022/07/06)
Name | URLs | Description |
---|---|---|
centerpoint | pts_voxel_encoder pts_backbone_neck_head | There is a single change due to a limitation in the implementation of this package: num_filters=[32, 32] of PillarFeatureNet |
centerpoint_tiny | pts_voxel_encoder pts_backbone_neck_head | The same model as default of v0. |

These changes are compared with this configuration.
v0 (2021/12/03)
Name | URLs | Description |
---|---|---|
default | pts_voxel_encoder pts_backbone_neck_head | There are two changes from the original CenterPoint architecture: num_filters=[32] of PillarFeatureNet and ds_layer_strides=[2, 2, 2] of RPN |
(Optional) Error detection and handling
(Optional) Performance characterization
References/External links
[1] Yin, Tianwei, Xingyi Zhou, and Philipp Krähenbühl. “Center-based 3d object detection and tracking.” arXiv preprint arXiv:2006.11275 (2020).
[2] Lang, Alex H., et al. “PointPillars: Fast encoders for object detection from point clouds.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] https://github.com/tianweiy/CenterPoint
[4] https://github.com/open-mmlab/mmdetection3d
[5] https://github.com/open-mmlab/OpenPCDet
[6] https://github.com/yukkysaito/autoware_perception
[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars
[8] https://www.nuscenes.org/nuscenes
[9] https://www.argoverse.org/av2.html
(Optional) Future extensions / Unimplemented parts
Acknowledgment: deepen.ai’s 3D Annotation Tools Contribution
Special thanks to Deepen AI for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.
Legal Notice
The nuScenes dataset is released publicly for non-commercial use under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. Additional Terms of Use can be found at https://www.nuscenes.org/terms-of-use. To inquire about a commercial license please contact nuscenes@motional.com.
Changelog for package autoware_lidar_centerpoint
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from `autoware.universe` to `autoware_universe` (#10306)
- refactor: add autoware_cuda_dependency_meta (#10073)
- Contributors: Esteve Fernandez, Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- chore(autoware_lidar_centerpoint): add maintainer (#10076)
- Contributors: Amadeusz Szymko, Fumiya Watanabe, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)
  - refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
  - style(pre-commit): autofix
  - style(autoware_tensorrt_common): linting
  - style(autoware_lidar_centerpoint): typo
  - docs(autoware_tensorrt_common): grammar
  - fix(autoware_lidar_transfusion): reuse cast variable
  - fix(autoware_tensorrt_common): remove deprecated inference API
  - style(autoware_tensorrt_common): grammar
  - fix(autoware_tensorrt_common): const pointer
  - fix(autoware_tensorrt_common): remove unused method declaration
  - refactor(autoware_tensorrt_common): readability
  - fix(autoware_tensorrt_common): return if layer not registered
  - refactor(autoware_tensorrt_common): readability
  - fix(autoware_tensorrt_common): rename struct
  - style(pre-commit): autofix
- feat(lidar_centerpoint, pointpainting): add diag publisher for max voxel size (#9720)
- fix(autoware_lidar_centerpoint): fixed rounding errors that caused illegal memory access (#9795)
- feat(autoware_lidar_centerpoint): process front voxels first (#9608)
  - feat: optimize voxel indexing in preprocess_kernel.cu
  - fix: remove redundant index check
  - fix: modify voxel index for better memory access
- Contributors: Amadeusz Szymko, Fumiya Watanabe, Kenzo Lobos Tsunekawa, Taekjin LEE, kminoda
0.40.0 (2024-12-12)
- Merge branch 'main' into release-0.40.0
- Revert "chore(package.xml): bump version to 0.39.0 (#9587)" This reverts commit c9f0f2688c57b0f657f5c1f28f036a970682e7f5.
- fix(lidar_centerpoint): non-maximum suppression target decision logic (#9595)
  - refactor(lidar_centerpoint): optimize non-maximum suppression search distance calculation
  - feat(lidar_centerpoint): do not suppress if one side of the object is pedestrian
  - style(pre-commit): autofix
  - refactor(lidar_centerpoint): remove unused variables
  - refactor: remove unused variables
  - fix: implement non-maximum suppression logic to the transfusion
  - refactor: remove unused parameter iou_nms_target_class_names
  - Revert "fix: implement non-maximum suppression logic to the transfusion" This reverts commit b8017fc366ec7d67234445ef5869f8beca9b6f45.
  - fix: revert transfusion modification
- fix: fix ticket links in CHANGELOG.rst (#9588)
- chore(package.xml): bump version to 0.39.0 (#9587)
  - chore(package.xml): bump version to 0.39.0
  - fix: fix ticket links in CHANGELOG.rst
  - fix: remove unnecessary diff
- fix: fix ticket links in CHANGELOG.rst (#9588)
- fix(cpplint): include what you use - perception (#9569)
- fix(autoware_lidar_centerpoint): fix clang-diagnostic-delete-abstract-non-virtual-dtor (#9515)
- feat(autoware_lidar_centerpoint): added a check to notify if we are dropping pillars (#9488)
  - feat: added a check to notify if we are dropping pillars
  - chore: updated text
  - chore: throttled the message
- fix(autoware_lidar_centerpoint): fix clang-diagnostic-unused-private-field (#9471)
- 0.39.0
- update changelog
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0 (#9266) (#9284)
  - unify package.xml version to 0.37.0
  - remove system_monitor/CHANGELOG.rst
  - add changelog
  - 0.38.0
- Contributors: Esteve Fernandez, Fumiya Watanabe, Kenzo Lobos Tsunekawa, M. Fatih Cırıt, Ryohsuke Mitsudome, Taekjin LEE, Yutaka Kondo, kobayu858
0.39.0 (2024-11-25)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0 (#9266) (#9284)
  - unify package.xml version to 0.37.0
  - remove system_monitor/CHANGELOG.rst
  - add changelog
  - 0.38.0
- Contributors: Esteve Fernandez, Yutaka Kondo
0.38.0 (2024-11-08)
- unify package.xml version to 0.37.0
- refactor(tensorrt_common)!: fix namespace, directory structure & move to perception namespace (#9099)
  - refactor(tensorrt_common)!: fix namespace, directory structure & move to perception namespace
  - refactor(tensorrt_common): directory structure
  - style(pre-commit): autofix
  - fix(tensorrt_common): correct package name for logging
- refactor(object_recognition_utils): add autoware prefix to object_recognition_utils (#8946)
- fix(autoware_lidar_centerpoint): fix twist covariance orientation (#8996)
  - fix(autoware_lidar_centerpoint): fix covariance converter considering the twist covariance matrix is based on the object coordinate
  - fix: update test of box3DToDetectedObject function
- fix(autoware_lidar_centerpoint): convert object's velocity to follow its definition (#8980)
  - fix: convert object's velocity to follow its definition in box3DToDetectedObject function
- feat(autoware_lidar_centerpoint): shuffled points before feeding them to the model (#8814)
  - feat: shuffling points before feeding them into the model to achieve uniform sampling into the voxels
- refactor(autoware_lidar_centerpoint): use std::size_t instead of size_t (#8820)
  - refactor(autoware_lidar_centerpoint): use std::size_t instead of size_t
  - style(pre-commit): autofix
- chore(autoware_lidar_centerpoint): add centerpoint sigma parameter (#8731)
- fix(autoware_lidar_centerpoint): fix unusedFunction (#8572)
- fix(autoware_lidar_centerpoint): place device vector in CUDA device system (#8272)
- docs(centerpoint): add description for ml package params (#8187)
- chore(autoware_lidar_centerpoint): updated tests (#8158) The centerpoint tests are currently commented out, but they were not compiling (they were not updated when the new cloud capacity parameter was added)
- refactor(lidar_centerpoint)!: fix namespace and directory structure (#8049)
  - add prefix in lidar_centerpoint
  - add .gitignore
  - change include package name in image_projection_based fusion
  - fix
  - change in codeowner
  - delete package
  - style(pre-commit): autofix
  - solve conflict too
  - fix include file
  - fix typo in launch file
  - add prefix in README
  - fix bugs by conflict
  - change namespace
- Contributors: Amadeusz Szymko, Esteve Fernandez, Kenzo Lobos Tsunekawa, Masato Saeki, Taekjin LEE, Yoshi Ri, Yutaka Kondo, kminoda, kobayu858
0.26.0 (2024-04-03)
Launch files
- launch/lidar_centerpoint.launch.xml
  - input/pointcloud [default: /sensing/lidar/pointcloud]
  - output/objects [default: objects]
  - data_path [default: $(env HOME)/autoware_data]
  - model_name [default: centerpoint_tiny]
  - model_path [default: $(var data_path)/lidar_centerpoint]
  - model_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config/$(var model_name).param.yaml]
  - ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
  - class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
  - common_param_path [default: $(find-pkg-share autoware_lidar_centerpoint)/config/centerpoint_common.param.yaml]
  - build_only [default: false]
  - use_pointcloud_container [default: false]
  - pointcloud_container_name [default: pointcloud_container]