Package Summary
Tags | No category tags. |
Version | 0.43.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-04-03 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Dai Nguyen
- Kotaro Uetake
- Tao Zhong
autoware_image_projection_based_fusion
Purpose
The `autoware_image_projection_based_fusion` package enhances obstacle detection accuracy by integrating information from both image-based and LiDAR-based perception. It fuses 2D detections from images, such as bounding boxes or segmentation masks, with 3D representations such as point clouds, bounding boxes, or clusters. This fusion helps refine obstacle classification and detection in autonomous driving applications.
Fusion algorithms
The package provides multiple fusion algorithms, each designed for specific use cases. Below are the different fusion methods along with their descriptions and detailed documentation links:
Fusion Name | Description | Detail |
---|---|---|
roi_cluster_fusion | Assigns classification labels to LiDAR-detected clusters by matching them with regions of interest (ROIs) from a 2D object detector. | link |
roi_detected_object_fusion | Updates the classification labels of detected objects using ROI information from a 2D object detector. | link |
pointpainting_fusion | Augments the point cloud by painting each point with classification information from the ROIs of a 2D object detector; the enriched point cloud is then processed by a 3D object detector for improved accuracy. | link |
roi_pointcloud_fusion | Matches the point cloud with ROIs from a 2D object detector to detect unknown-labeled objects. | link |
segmentation_pointcloud_fusion | Filters out points that belong to less relevant regions, as defined by a 2D semantic or instance segmentation model. | link |
Inner Workings / Algorithms
The fusion process operates on two primary types of input data:
- Msg3d: This includes 3D data such as point clouds, bounding boxes, or clusters from LiDAR.
- RoIs (Regions of Interest): These are 2D detections or proposals from camera-based perception modules, such as object detection bounding boxes.
Both inputs come with timestamps, which are crucial for synchronization and fusion. Since sensors operate at different frequencies and may experience network delays, a systematic approach is needed to handle their arrival, align their timestamps, and ensure reliable fusion.
The following steps describe how the node processes these inputs, synchronizes them, and performs multi-sensor fusion.
Step 1: Matching and Creating a Collector
When a Msg3d or a set of RoIs arrives, its timestamp is checked, and an offset is subtracted to determine the reference timestamp. The node then searches for an existing collector with the same reference timestamp.
- If a matching collector is found, the incoming data is added to it.
- If no matching collector exists, a new collector is created and initialized with the reference timestamp.
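As a minimal sketch of this lookup, assuming a simple list of collectors and a fixed tolerance (hypothetical names; the actual node matches reference timestamps within configurable noise windows, as described under the advanced matching strategy below):

```python
from dataclasses import dataclass, field

@dataclass
class Collector:
    """Holds data awaiting fusion for one reference timestamp (illustrative)."""
    reference_timestamp: float
    msg3d: object = None
    rois: dict = field(default_factory=dict)

def find_or_create_collector(collectors, stamp, offset, tolerance=1e-3):
    """Return the collector matching the reference timestamp, or create one."""
    reference = stamp - offset  # subtract the configured offset first
    for c in collectors:
        if abs(c.reference_timestamp - reference) < tolerance:
            return c  # matching collector found: the incoming data joins it
    c = Collector(reference_timestamp=reference)  # no match: new collector
    collectors.append(c)
    return c
```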
Step 2: Triggering the Timer
Once a collector is created, a countdown timer is started. The timeout duration depends on which message type arrived first and is defined by `msg3d_timeout_sec` for Msg3d or `rois_timeout_sec` for RoIs.
The collector will attempt to fuse the collected 3D and 2D data either:
- When both Msg3d and RoI data are available, or
- When the timer expires.
If no Msg3d is received before the timer expires, the collector will discard the data without performing fusion.
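A rough sketch of this trigger logic, reusing the `Collector` above (hypothetical names; the timeout defaults here are placeholders, not the package defaults):

```python
import threading

def arm_timer(collector, first_was_msg3d, msg3d_timeout_sec=0.05, rois_timeout_sec=0.5):
    """Start the countdown; the first-arrived message type picks the duration."""
    timeout = msg3d_timeout_sec if first_was_msg3d else rois_timeout_sec

    def on_timeout():
        if collector.msg3d is not None:
            print("timeout: fusing msg3d with the RoIs collected so far")
        else:
            print("timeout: no msg3d received, discarding the collected RoIs")

    timer = threading.Timer(timeout, on_timeout)
    timer.start()
    return timer
```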
Step 3: Fusion Process
The fusion process consists of three main stages:
- Preprocessing – Preparing the input data for fusion.
- Fusion – Aligning and merging RoIs with the 3D point cloud.
- Postprocessing – Refining the fused output based on the algorithm’s requirements.
The specific operations performed during these stages may vary depending on the type of fusion being applied.
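The stage split can be pictured as a simple template method; this is a sketch of the structure only, not the node's actual class hierarchy:

```python
class FusionAlgorithm:
    """Illustrative three-stage pipeline; each fusion algorithm overrides the stages."""

    def run(self, msg3d, rois):
        data = self.preprocess(msg3d, rois)  # e.g. frame transforms, filtering
        fused = self.fuse(data)              # project RoIs onto the 3D data and merge
        return self.postprocess(fused)       # e.g. re-clustering, relabeling

    def preprocess(self, msg3d, rois):
        return (msg3d, rois)

    def fuse(self, data):
        raise NotImplementedError  # algorithm-specific (roi_cluster, pointpainting, ...)

    def postprocess(self, fused):
        return fused
```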
Step 4: Publishing the Fused Result
After the fusion process is completed, the fused output is published. The collector is then reset to an idle state, ready to process the next incoming message.
The figure below shows how the input data is fused in different scenarios.
Parameters
All fusion nodes share the common parameters described in the following:
{{ json_to_markdown("perception/autoware_image_projection_based_fusion/schema/fusion_common.schema.json") }}
Parameter Settings
Timeout
The order in which `RoIs` or the `msg3d` message arrives at the fusion node depends on your system and sensor configuration. Since the primary goal is to fuse 2D `RoIs` with `msg3d` data, `msg3d` is essential for processing.

If `RoIs` arrive earlier, they must wait until `msg3d` is received. You can adjust the waiting time using the `rois_timeout_sec` parameter.

If `msg3d` arrives first, the fusion process should proceed as quickly as possible, so its waiting time (`msg3d_timeout_sec`) should be kept minimal.
RoIs Offsets
The offset between each camera and the LiDAR is determined by their shutter timing. To ensure accurate fusion, you must know the timing offset between the `RoIs` and `msg3d`. Once this offset is known, specify it in the `rois_timestamp_offsets` parameter.

In the figure below, the LiDAR completes a full scan from the rear in 100 milliseconds. When the LiDAR scan reaches the area a camera is facing, that camera is triggered, capturing an image with a corresponding timestamp. Each element of `rois_timestamp_offsets` is calculated by subtracting the LiDAR header timestamp from the camera header timestamp. As a result, `rois_timestamp_offsets` would be `[0.059, 0.010, 0.026, 0.042, 0.076, 0.093]`.
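As a worked example, consider a LiDAR scan stamped at 10.000 s and six camera images stamped as below (illustrative values chosen to reproduce the offsets above; real values come from your sensor headers):

```python
# offsets = camera header stamp - LiDAR header stamp
lidar_stamp = 10.000
camera_stamps = [10.059, 10.010, 10.026, 10.042, 10.076, 10.093]

rois_timestamp_offsets = [round(c - lidar_stamp, 3) for c in camera_stamps]
print(rois_timestamp_offsets)  # [0.059, 0.01, 0.026, 0.042, 0.076, 0.093]
```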
To check the header timestamps of the `msg3d` and `RoIs` topics, you can run:

```bash
ros2 topic echo [topic] --field header
```
Matching Strategies
We provide two matching strategies for different scenarios:
Naive Mode
Use this mode if the concatenation node from the Autoware point cloud preprocessor is not being used. Set an appropriate `threshold` value to determine whether the time interval between the `msg3d` and `RoI` messages is within an acceptable range: if the interval is less than the threshold, the messages are considered matched.
Example usage:

```yaml
matching_strategy:
  type: naive
  threshold: 0.05
```
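Conceptually, the naive check reduces to the following (a sketch with hypothetical names; it assumes stamps are compared after subtracting the per-camera offset, as in Step 1 above):

```python
def naive_match(msg3d_stamp, roi_stamp, roi_offset, threshold=0.05):
    """Match if the offset-corrected RoI stamp is within `threshold` of msg3d."""
    return abs((roi_stamp - roi_offset) - msg3d_stamp) < threshold

# e.g. an RoI stamped 10.062 s with offset 0.059 s matches a msg3d stamped 10.000 s
print(naive_match(10.000, 10.062, 0.059))  # True (interval 0.003 < 0.05)
```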
Advanced Mode
If the concatenation node from the Autoware point cloud preprocessor is being used, enable this mode.
The advanced mode parses diagnostics from the concatenation node to verify whether all point clouds have been successfully concatenated. If concatenation is incomplete, it dynamically adjusts `rois_timestamp_offsets` based on the diagnostic messages.

Instead of using a fixed threshold, this mode requires setting two parameters:

- `msg3d_noise_window` (a single value)
- `rois_timestamp_noise_window` (a vector)

These parameters enforce stricter matching between the `RoI` messages and the `msg3d` input.
Example usage:

```yaml
matching_strategy:
  type: advanced
  msg3d_noise_window: 0.02
  rois_timestamp_noise_window: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
```
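The noise windows define a per-message acceptance interval around the reference timestamp, roughly as in this sketch (hypothetical names; the actual node additionally consults the concatenation diagnostics):

```python
def within_noise_window(stamp, reference, noise_window):
    """Accept a stamp inside [reference - noise_window, reference + noise_window]."""
    return reference - noise_window <= stamp <= reference + noise_window

reference = 10.000
print(within_noise_window(10.015, reference, noise_window=0.02))  # True  (msg3d window)
print(within_noise_window(10.015, reference, noise_window=0.01))  # False (an RoI window)
```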
Approximate camera projection
For algorithms like `pointpainting_fusion`, the computation required to project points onto an unrectified (raw) image can be substantial. To address this, an option is provided to reduce the computational load: set the `approximate_camera_projection` parameter to `true` for each camera (RoIs). If the corresponding `point_project_to_unrectified_image` parameter is also set to `true`, the projections will be pre-cached.

The cached projections are calculated using a grid, with the grid size specified by the `approximation_grid_width_size` and `approximation_grid_height_size` parameters in the configuration file. The centers of the grid cells are used as the projected points.
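The caching idea is to precompute one projection per grid cell and reuse it for every point that falls into that cell. A minimal sketch, with a hypothetical `project` function standing in for the camera's rectified-to-raw pixel mapping:

```python
def build_projection_cache(width, height, cell_w, cell_h, project):
    """Precompute the projection of every grid-cell center (illustrative)."""
    cache = {}
    for gy in range(0, height, cell_h):
        for gx in range(0, width, cell_w):
            cx, cy = gx + cell_w / 2.0, gy + cell_h / 2.0  # cell center
            cache[(gx // cell_w, gy // cell_h)] = project(cx, cy)
    return cache

def lookup(cache, x, y, cell_w, cell_h):
    """Approximate a point's projection by its cell center's cached value."""
    return cache[(int(x) // cell_w, int(y) // cell_h)]

# identity stand-in for the real camera model
cache = build_projection_cache(1920, 1080, 4, 4, project=lambda x, y: (x, y))
print(lookup(cache, 10.7, 5.2, 4, 4))  # (10.0, 6.0): center of cell (2, 1)
```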
Debug and Diagnostics
To verify whether the node has successfully fused the `msg3d` and `rois`, you can examine rqt or the `/diagnostics` topic using the following command:

```bash
ros2 topic echo /diagnostics
```
Below is an example output when fusion succeeds:

- `msg3d` has a value of `True`.
- Each `rois` has a value of `True`.
- `fusion_success` is `True`.
- The `level` value is `\0` (`diagnostic_msgs::msg::DiagnosticStatus::OK`).
```yaml
header:
  stamp:
    sec: 1722492015
    nanosec: 848508777
  frame_id: ''
status:
  - level: "\0"
    name: 'pointpainting: pointpainting_fusion_status'
    message: Fused output is published and include all rois and msg3d
    hardware_id: pointpainting_fusion_checker
    values:
      - key: msg3d/is_fused
        value: 'True'
      - key: fused_timestamp
        value: '1738725170.860273600'
      - key: reference_timestamp_min
        value: '1738725170.850771904'
      - key: reference_timestamp_max
        value: '1738725170.870771885'
      - key: /rois0/timestamp
        value: '1738725170.903310537'
      - key: /rois0/is_fused
        value: 'True'
      - key: /rois1/timestamp
        value: '1738725170.934378386'
      - key: /rois1/is_fused
        value: 'True'
      - key: /rois2/timestamp
        value: '1738725170.917550087'
      - key: /rois2/is_fused
        value: 'True'
      - key: fusion_success
        value: 'True'
```
Below is an example output when fusion fails:

- `msg3d` has a value of `True`.
- A `rois` that failed to fuse has a value of `False` (here `/rois0/is_fused`).
- `fusion_success` is `False`.
- The `level` value is `\x02` (`diagnostic_msgs::msg::DiagnosticStatus::ERROR`).
```yaml
header:
  stamp:
    sec: 1722492015
    nanosec: 848508777
  frame_id: ''
status:
  - level: "\x02"
    name: 'pointpainting: pointpainting_fusion_status'
    message: Fused output msg is published but misses some ROIs
    hardware_id: pointpainting_fusion_checker
    values:
      - key: msg3d/is_fused
        value: 'True'
      - key: fused_timestamp
        value: '1738725170.860273600'
      - key: reference_timestamp_min
        value: '1738725170.850771904'
      - key: reference_timestamp_max
        value: '1738725170.870771885'
      - key: /rois0/is_fused
        value: 'False'
      - key: /rois1/timestamp
        value: '1738725170.934378386'
      - key: /rois1/is_fused
        value: 'True'
      - key: /rois2/timestamp
        value: '1738725170.917550087'
      - key: /rois2/is_fused
        value: 'True'
      - key: fusion_success
        value: 'False'
```
The `build_only` option
The `pointpainting_fusion` node has a `build_only` option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all ROS parameters into the `.param.yaml` file in Autoware Universe, the `build_only` option has not been moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

```bash
ros2 launch autoware_image_projection_based_fusion pointpainting_fusion.launch.xml model_name:=pointpainting model_path:=/home/autoware/autoware_data/image_projection_based_fusion model_param_path:=$(ros2 pkg prefix autoware_image_projection_based_fusion --share)/config/pointpainting.param.yaml build_only:=true
```
Changelog for package autoware_image_projection_based_fusion
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- fix(segmentation_pointcloud_fusion): fix typo of defaut camera info topic (#10272) fix(segmentation_pointcloud_fusion): typo for defaut camera info topic
- refactor: add autoware_cuda_dependency_meta (#10073)
- fix(segmentation_pointcloud_fusion): set valid pointcloud field for output pointcloud (#10196) set valid pointcloud field
- feat(autoware_image_based_projection_fusion): redesign image based projection fusion node (#10016)
- Contributors: Esteve Fernandez, Hayato Mizushima, Kento Yabuuchi, Yi-Hsiang Fang (Vivid), Yutaka Kondo, badai nguyen
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- chore: refine maintainer list
(#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
- fix(autoware_image_projection_based_fusion): modify incorrect
index access in pointcloud filtering for out-of-range points
(#10087)
- fix(pointpainting): modify pointcloud index
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- Contributors: Fumiya Watanabe, Shunsuke Miura, keita1523, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_image_projection_based_fusion)!: tier4_debug-msgs changed to autoware_internal_debug_msgs in autoware_image_projection_based_fusion (#9877)
- fix(image_projection_based_fusion): revise message publishers
(#9865)
- refactor: fix condition for publishing painted pointcloud message
- fix: publish output revised
- feat: fix condition for publishing painted pointcloud message
- feat: roi-pointclout fusion - publish empty image even when there is no target roi
- fix: remap output topic for clusters in roi_pointcloud_fusion
- style(pre-commit): autofix
- feat: fix condition for publishing painted pointcloud message
- feat: Add debug publisher for internal debugging
* feat: remove !! pointer to bool conversion ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility &
tensorrt_common as unified lib for all perception components
(#9762)
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
- style(pre-commit): autofix
- style(autoware_tensorrt_common): linting
* style(autoware_lidar_centerpoint): typo Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * docs(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- fix(autoware_lidar_transfusion): reuse cast variable
- fix(autoware_tensorrt_common): remove deprecated inference API
* style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- fix(autoware_tensorrt_common): const pointer
- fix(autoware_tensorrt_common): remove unused method declaration
- style(pre-commit): autofix
* refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- fix(autoware_tensorrt_common): return if layer not registered
* refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- fix(autoware_tensorrt_common): rename struct
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- fix(image_projection_based_fusion): remove mutex (#9862) refactor: Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- refactor(autoware_image_projection_based_fusion): organize
2d-detection related members
(#9789)
- chore: input_camera_topics_ is only for debug
- feat: fuse main message with cached roi messages in fusion_node.cpp
- chore: add comments on each process step, organize methods
* feat: Export process method in fusion_node.cpp Export the [exportProcess()]{.title-ref} method in [fusion_node.cpp]{.title-ref} to handle the post-processing and publishing of the fused messages. This method cancels the timer, performs the necessary post-processing steps, publishes the output message, and resets the flags. It also adds processing time for debugging purposes. This change improves the organization and readability of the code. * feat: Refactor fusion_node.hpp and fusion_node.cpp Refactor the [fusion_node.hpp]{.title-ref} and [fusion_node.cpp]{.title-ref} files to improve code organization and readability. This includes exporting the [exportProcess()]{.title-ref} method in [fusion_node.cpp]{.title-ref} to handle post-processing and publishing of fused messages, adding comments on each process step, organizing methods, and fusing the main message with cached ROI messages. These changes enhance the overall quality of the codebase.
- Refactor fusion_node.cpp and fusion_node.hpp for improved code organization and readability
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- feat: Refactor fusion_node.cpp for improved code organization and readability
- Refactor fusion_node.cpp for improved code organization and readability
- feat: implement mutex per 2d detection process
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- revise template, inputs first and output at the last
- explicit in and out types 1
- clarify pointcloud message type
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- Refactor publisher types in fusion_node.hpp and node.hpp
- fix: resolve cppcheck issue shadowVariable
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- chore: rename Det2dManager to Det2dStatus
- revert mutex related changes
- refactor: review member and method's access
- fix: resolve shadowVariable of 'det2d'
- fix missing line
- refactor message postprocess and publish methods
- publish the main message is common
- fix: replace pointcloud message type by the typename
- review member access
- Refactor fusion_node.hpp and fusion_node.cpp for improved code organization and readability
- refactor: fix condition for publishing painted pointcloud message
* fix: remove unused variable ---------
- feat(lidar_centerpoint, pointpainting): add diag publisher for max voxel size (#9720)
- feat(pointpainting_fusion): enable cloud display on image (#9813)
- feat(image_projection_based_fusion): add cache for camera
projection
(#9635)
- add camera_projection class and projection cache
- style(pre-commit): autofix
- fix FOV filtering
- style(pre-commit): autofix
- remove unused includes
- update schema
- fix cpplint error
- apply suggestion: more simple and effcient computation
- correct terminology
- style(pre-commit): autofix
- fix comments
* fix var name Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> * fix variable names Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> * fix variable names Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> * fix variable names Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> * fix variable names Co-authored-by: Taekjin LEE <<technolojin@gmail.com>>
- fix variable names
* fix initialization Co-authored-by: badai nguyen <<94814556+badai-nguyen@users.noreply.github.com>> * add verification to point_project_to_unrectified_image when loading Co-authored-by: badai nguyen <<94814556+badai-nguyen@users.noreply.github.com>> * chore: add option description to project points to unrectified image ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Taekjin LEE <<technolojin@gmail.com>> Co-authored-by: badai nguyen <<94814556+badai-nguyen@users.noreply.github.com>> Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
- feat(image_projection_based_fusion): add timekeeper
(#9632)
- add timekeeper
- chore: refactor time-keeper position
- chore: bring back a missing comment
* chore: remove redundant timekeepers ---------Co-authored-by: Taekjin LEE <<taekjin.lee@tier4.jp>>
- Contributors: Amadeusz Szymko, Fumiya Watanabe, Masaki Baba, Taekjin LEE, Vishal Chauhan, Yi-Hsiang Fang (Vivid), kminoda
0.40.0 (2024-12-12)
- Merge branch 'main' into release-0.40.0
- Revert "chore(package.xml): bump version to 0.39.0 (#9587)" This reverts commit c9f0f2688c57b0f657f5c1f28f036a970682e7f5.
- fix(lidar_centerpoint): non-maximum suppression target decision
logic
(#9595)
- refactor(lidar_centerpoint): optimize non-maximum suppression search distance calculation
- feat(lidar_centerpoint): do not suppress if one side of the object is pedestrian
- style(pre-commit): autofix
- refactor(lidar_centerpoint): remove unused variables
* refactor: remove unused variables fix: implement non-maximum suppression logic to the transfusion refactor: remove unused parameter iou_nms_target_class_names Revert "fix: implement non-maximum suppression logic to the transfusion" This reverts commit b8017fc366ec7d67234445ef5869f8beca9b6f45. fix: revert transfusion modification ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- feat: remove max rois limit in the image projection based fusion (#9596) feat: remove max rois limit
- fix: fix ticket links in CHANGELOG.rst (#9588)
- fix(autoware_image_projection_based_fusion): detected object roi
box projection fix
(#9519)
* fix: detected object roi box projection fix
- eliminate misuse of std::numeric_limits<double>::min()
2. fix roi range up to the image edges * fix: fix roi range calculation in RoiDetectedObjectFusionNode Improve the calculation of the region of interest (ROI) in the RoiDetectedObjectFusionNode. The previous code had an issue where the ROI range was not correctly limited to the image edges. This fix ensures that the ROI is within the image boundaries by using the correct comparison operators for the x and y coordinates. ---------
- chore(package.xml): bump version to 0.39.0
(#9587)
- chore(package.xml): bump version to 0.39.0
- fix: fix ticket links in CHANGELOG.rst
* fix: remove unnecessary diff ---------Co-authored-by: Yutaka Kondo <<yutaka.kondo@youtalk.jp>>
- fix: fix ticket links in CHANGELOG.rst (#9588)
- ci(pre-commit): update cpplint to 2.0.0 (#9557)
- fix(cpplint): include what you use - perception (#9569)
- chore(image_projection_based_fusion): add debug for roi_pointcloud fusion (#9481)
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-inconsistent-missing-override (#9509)
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-unused-private-field (#9505)
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-inconsistent-missing-override (#9495)
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-inconsistent-missing-override (#9516) fix: clang-diagnostic-inconsistent-missing-override
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-inconsistent-missing-override (#9510)
- fix(autoware_image_projection_based_fusion): fix
clang-diagnostic-unused-private-field
(#9473)
- fix: clang-diagnostic-unused-private-field
* fix: build error ---------
- fix(autoware_image_projection_based_fusion): fix clang-diagnostic-inconsistent-missing-override (#9472)
- 0.39.0
- update changelog
- Merge commit '6a1ddbd08bd' into release-0.39.0
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- fix(autoware_image_projection_based_fusion): make optional to consider lens distortion in the point projection (#9233) chore: add point_project_to_unrectified_image parameter to fusion_common.param.yaml
- fix(autoware_image_projection_based_fusion): fix
bugprone-misplaced-widening-cast
(#9226)
- fix: bugprone-misplaced-widening-cast
* fix: clang-format ---------
- fix(autoware_image_projection_based_fusion): fix
bugprone-misplaced-widening-cast
(#9229)
- fix: bugprone-misplaced-widening-cast
* fix: clang-format ---------
- Contributors: Esteve Fernandez, Fumiya Watanabe, M. Fatih Cırıt, Ryohsuke Mitsudome, Taekjin LEE, Yoshi Ri, Yutaka Kondo, awf-autoware-bot[bot], badai nguyen, kobayu858
0.39.0 (2024-11-25)
- Merge commit '6a1ddbd08bd' into release-0.39.0
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- fix(autoware_image_projection_based_fusion): make optional to consider lens distortion in the point projection (#9233) chore: add point_project_to_unrectified_image parameter to fusion_common.param.yaml
- fix(autoware_image_projection_based_fusion): fix
bugprone-misplaced-widening-cast
(#9226)
- fix: bugprone-misplaced-widening-cast
* fix: clang-format ---------
- fix(autoware_image_projection_based_fusion): fix
bugprone-misplaced-widening-cast
(#9229)
- fix: bugprone-misplaced-widening-cast
* fix: clang-format ---------
- Contributors: Esteve Fernandez, Taekjin LEE, Yutaka Kondo, kobayu858
0.38.0 (2024-11-08)
- unify package.xml version to 0.37.0
- refactor(autoware_point_types): prefix namespace with autoware::point_types (#9169)
- fix(autoware_image_projection_based_fusion): pointpainting bug fix for point projection (#9150) fix: projected 2d point has 1.0 of depth
- refactor(object_recognition_utils): add autoware prefix to object_recognition_utils (#8946)
- fix(autoware_image_projection_based_fusion): roi cluster fusion has no existence probability update (#8864) fix: add existence probability update, refactoring
- fix(autoware_image_projection_based_fusion): resolve issue with segmentation pointcloud fusion node failing with multiple mask inputs (#8769)
- fix(image_projection_based_fusion): remove unused variable (#8634) fix: remove unused variable
- fix(autoware_image_projection_based_fusion): fix unusedFunction (#8567) fix:unusedFunction
- fix(image_projection_based_fusion): add run length decoding for
segmentation_pointcloud_fusion
(#7909)
- fix: add rle decompress
- style(pre-commit): autofix
- fix: use rld in tensorrt utils
- fix: rebase error
- fix: dependency
- fix: skip publish debug mask
* Update perception/autoware_image_projection_based_fusion/src/segmentation_pointcloud_fusion/node.cpp Co-authored-by: kminoda <<44218668+kminoda@users.noreply.github.com>>
- style(pre-commit): autofix
* Revert "fix: skip publish debug mask" This reverts commit 30fa9aed866a019705abde71e8f5c3f98960c19e. ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: kminoda <<44218668+kminoda@users.noreply.github.com>>
- fix(image_projection_based_fusion): handle projection errors in
image fusion nodes
(#7747)
- fix: add check for camera distortion model
- feat(utils): add const qualifier to local variables in checkCameraInfo function
- style(pre-commit): autofix
* chore(utils): update checkCameraInfo function to use RCLCPP_ERROR_STREAM for unsupported distortion model and coefficients size ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- fix(autoware_image_projection_based_fusion): fix passedByValue (#8234) fix:passedByValue
- refactor(image_projection_based_fusion)!: add package name prefix of autoware_ (#8162) refactor: rename image_projection_based_fusion to autoware_image_projection_based_fusion
- Contributors: Esteve Fernandez, Taekjin LEE, Yi-Hsiang Fang (Vivid), Yoshi Ri, Yutaka Kondo, badai nguyen, kobayu858
0.26.0 (2024-04-03)
Dependant Packages
Name | Deps |
---|---|
tier4_perception_launch |
Launch files
- launch/roi_detected_object_fusion.launch.xml
- input/rois_number [default: 6]
- input/rois0 [default: rois0]
- input/camera_info0 [default: /camera_info0]
- input/rois1 [default: rois1]
- input/camera_info1 [default: /camera_info1]
- input/rois2 [default: rois2]
- input/camera_info2 [default: /camera_info2]
- input/rois3 [default: rois3]
- input/camera_info3 [default: /camera_info3]
- input/rois4 [default: rois4]
- input/camera_info4 [default: /camera_info4]
- input/rois5 [default: rois5]
- input/camera_info5 [default: /camera_info5]
- input/rois6 [default: rois6]
- input/camera_info6 [default: /camera_info6]
- input/rois7 [default: rois7]
- input/camera_info7 [default: /camera_info7]
- input/objects [default: objects]
- output/objects [default: fused_objects]
- param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_detected_object_fusion.param.yaml]
- sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
- input_rois_number [default: $(var input/rois_number)]
- input/image0 [default: /image_raw0]
- input/image1 [default: /image_raw1]
- input/image2 [default: /image_raw2]
- input/image3 [default: /image_raw3]
- input/image4 [default: /image_raw4]
- input/image5 [default: /image_raw5]
- input/image6 [default: /image_raw6]
- input/image7 [default: /image_raw7]
- launch/roi_pointcloud_fusion.launch.xml
- pointcloud_container_name [default: pointcloud_container]
- input/rois_number [default: 6]
- input/rois0 [default: rois0]
- input/camera_info0 [default: /camera_info0]
- input/rois1 [default: rois1]
- input/camera_info1 [default: /camera_info1]
- input/rois2 [default: rois2]
- input/camera_info2 [default: /camera_info2]
- input/rois3 [default: rois3]
- input/camera_info3 [default: /camera_info3]
- input/rois4 [default: rois4]
- input/camera_info4 [default: /camera_info4]
- input/rois5 [default: rois5]
- input/camera_info5 [default: /camera_info5]
- input/rois6 [default: rois6]
- input/camera_info6 [default: /camera_info6]
- input/rois7 [default: rois7]
- input/camera_info7 [default: /camera_info7]
- input/pointcloud [default: /perception/object_recognition/detection/pointcloud_map_filtered/pointcloud]
- output/clusters [default: output/clusters]
- debug/clusters [default: roi_pointcloud_fusion/debug/clusters]
- param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_pointcloud_fusion.param.yaml]
- sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
- input_rois_number [default: $(var input/rois_number)]
- input/image0 [default: /image_raw0]
- input/image1 [default: /image_raw1]
- input/image2 [default: /image_raw2]
- input/image3 [default: /image_raw3]
- input/image4 [default: /image_raw4]
- input/image5 [default: /image_raw5]
- input/image6 [default: /image_raw6]
- input/image7 [default: /image_raw7]
- launch/segmentation_pointcloud_fusion.launch.xml
- input/camera_number [default: 1]
- input/mask0 [default: /perception/object_recognition/detection/mask0]
- input/camera_info0 [default: /sensing/camera/camera0/camera_info]
- input/mask1 [default: /perception/object_recognition/detection/mask1]
- input/camera_info1 [default: /sensing/camera/camera1/camera_info]
- input/mask2 [default: /perception/object_recognition/detection/mask2]
- input/camera_info2 [default: /sensing/camera/camera2/camera_info]
- input/mask3 [default: /perception/object_recognition/detection/mask3]
- input/camera_info3 [default: /sensing/camera/camera3/camera_info]
- input/mask4 [default: /perception/object_recognition/detection/mask4]
- input/camera_info4 [default: /sensing/camera/camera4/camera_info]
- input/mask5 [default: /perception/object_recognition/detection/mask5]
- input/camera_info5 [default: /sensing/camera/camera5/camera_info]
- input/mask6 [default: /perception/object_recognition/detection/mask6]
- input/camera_info6 [default: /sensing/camera/camera6/camera_info]
- input/mask7 [default: /perception/object_recognition/detection/mask7]
- input/camera_info7 [default: /sensing/camera/camera7/camera_info]
- input/pointcloud [default: /sensing/lidar/top/outlier_filtered/pointcloud]
- output/pointcloud [default: output/pointcloud]
- sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
- semantic_segmentation_based_filter_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/segmentation_pointcloud_fusion.param.yaml]
- input/image0 [default: /image_raw0]
- input/image1 [default: /image_raw1]
- input/image2 [default: /image_raw2]
- input/image3 [default: /image_raw3]
- input/image4 [default: /image_raw4]
- input/image5 [default: /image_raw5]
- input/image6 [default: /image_raw6]
- input/image7 [default: /image_raw7]
- launch/pointpainting_fusion.launch.xml
- input/rois_number [default: 6]
- input/rois0 [default: rois0]
- input/camera_info0 [default: /camera_info0]
- input/rois1 [default: rois1]
- input/camera_info1 [default: /camera_info1]
- input/rois2 [default: rois2]
- input/camera_info2 [default: /camera_info2]
- input/rois3 [default: rois3]
- input/camera_info3 [default: /camera_info3]
- input/rois4 [default: rois4]
- input/camera_info4 [default: /camera_info4]
- input/rois5 [default: rois5]
- input/camera_info5 [default: /camera_info5]
- input/rois6 [default: rois6]
- input/camera_info6 [default: /camera_info6]
- input/rois7 [default: rois7]
- input/camera_info7 [default: /camera_info7]
- input/pointcloud [default: /sensing/lidar/top/rectified/pointcloud]
- output/objects [default: objects]
- data_path [default: $(env HOME)/autoware_data]
- model_name [default: pointpainting]
- model_path [default: $(var data_path)/image_projection_based_fusion]
- model_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/pointpainting.param.yaml]
- ml_package_param_path [default: $(var model_path)/$(var model_name)_ml_package.param.yaml]
- class_remapper_param_path [default: $(var model_path)/detection_class_remapper.param.yaml]
- sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
- build_only [default: false]
- use_pointcloud_container [default: false]
- pointcloud_container_name [default: pointcloud_container]
- input_rois_number [default: $(var input/rois_number)]
- input/image0 [default: /image_raw0]
- input/image1 [default: /image_raw1]
- input/image2 [default: /image_raw2]
- input/image3 [default: /image_raw3]
- input/image4 [default: /image_raw4]
- input/image5 [default: /image_raw5]
- input/image6 [default: /image_raw6]
- input/image7 [default: /image_raw7]
- launch/roi_cluster_fusion.launch.xml
- input/rois_number [default: 6]
- input/rois0 [default: rois0]
- input/camera_info0 [default: /camera_info0]
- input/rois1 [default: rois1]
- input/camera_info1 [default: /camera_info1]
- input/rois2 [default: rois2]
- input/camera_info2 [default: /camera_info2]
- input/rois3 [default: rois3]
- input/camera_info3 [default: /camera_info3]
- input/rois4 [default: rois4]
- input/camera_info4 [default: /camera_info4]
- input/rois5 [default: rois5]
- input/camera_info5 [default: /camera_info5]
- input/rois6 [default: rois6]
- input/camera_info6 [default: /camera_info6]
- input/rois7 [default: rois7]
- input/camera_info7 [default: /camera_info7]
- input/clusters [default: clusters]
- output/clusters [default: labeled_clusters]
- param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/roi_cluster_fusion.param.yaml]
- sync_param_path [default: $(find-pkg-share autoware_image_projection_based_fusion)/config/fusion_common.param.yaml]
- input/image0 [default: /image_raw0]
- input/image1 [default: /image_raw1]
- input/image2 [default: /image_raw2]
- input/image3 [default: /image_raw3]
- input/image4 [default: /image_raw4]
- input/image5 [default: /image_raw5]
- input/image6 [default: /image_raw6]
- input/image7 [default: /image_raw7]