Package Summary
Tags | No category tags. |
Version | 0.43.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-04-04 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Maintainers
- Fumiya Watanabe
- Kosuke Takeuchi
- Kotaro Uetake
- Kyoichi Sugahara
- Yoshi Ri
- Junya Sasaki
Authors
- Kosuke Takeuchi
Perception Evaluator
A node for evaluating the output of perception systems.
Purpose
This module evaluates how accurately perception results are generated, without requiring annotations. It can verify perception performance online by evaluating results from a few seconds in the past.
Inner-workings / Algorithms
The evaluated metrics are as follows:
- predicted_path_deviation
- predicted_path_deviation_variance
- lateral_deviation
- yaw_deviation
- yaw_rate
- total_objects_count
- average_objects_count
- interval_objects_count
Predicted Path Deviation / Predicted Path Deviation Variance
Compare the predicted path of past objects with their actual traveled path to determine the deviation for MOVING OBJECTS. For each object, calculate the mean distance between the predicted path points and the corresponding points on the actual path, up to the specified time step. In other words, this calculates the Average Displacement Error (ADE). The target object to be evaluated is the object from $T_N$ seconds ago, where $T_N$ is the maximum value of the prediction time horizon $[T_1, T_2, …, T_N]$.
[!NOTE] The object from $T_N$ seconds ago is the target object for all metrics. This is to unify the time of the target object across metrics.
- $n_{points}$ : Number of points in the predicted path
- $T$ : Time horizon for prediction evaluation.
- $dt$ : Time interval of the predicted path
- $d_i$ : Distance between the predicted path and the actual traveled path at path point $i$
- $ADE$ : Mean deviation of the predicted path for the target object.
- $Var$ : Variance of the predicted path deviation for the target object.
The final predicted path deviation metrics are calculated by averaging the mean deviation of the predicted path for all objects of the same class, and then calculating the mean, maximum, and minimum values of the mean deviation.
- $n_{objects}$ : Number of objects
- $ADE_{mean}$ : Mean deviation of the predicted path through all objects
- $ADE_{max}$ : Maximum deviation of the predicted path through all objects
- $ADE_{min}$ : Minimum deviation of the predicted path through all objects
- $Var_{mean}$ : Mean variance of the predicted path deviation through all objects
- $Var_{max}$ : Maximum variance of the predicted path deviation through all objects
- $Var_{min}$ : Minimum variance of the predicted path deviation through all objects
The actual metric name is determined by the object class and time horizon, e.g. `predicted_path_deviation_variance_CAR_5.00`.
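As a rough illustration of the deviation metrics above, the following sketch computes the ADE and the variance of the displacement errors for one object, assuming the predicted path and the actually traveled path are given as `(x, y)` tuples sampled at matching timestamps (the function name and input format are hypothetical, not the node's API):

```python
import math


def path_deviation_stats(predicted_path, actual_path):
    """Point-wise deviation between a predicted path and the actually
    traveled path: returns (ADE, variance of the displacement errors)."""
    # Compare up to the shorter path's length (the specified time step).
    n_points = min(len(predicted_path), len(actual_path))
    dists = [math.dist(predicted_path[i], actual_path[i]) for i in range(n_points)]
    ade = sum(dists) / n_points
    var = sum((d - ade) ** 2 for d in dists) / n_points
    return ade, var
```

The per-class statistics (`ADE_mean`, `ADE_max`, `Var_min`, etc.) would then be aggregated over the per-object results.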
Lateral Deviation
Calculates the lateral deviation between the smoothed traveled trajectory and the perceived position to evaluate the stability of lateral position recognition for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter `smoothing_window_size`. The lateral deviation is calculated by comparing the smoothed traveled trajectory with the perceived position of the past object whose timestamp is $T=T_n$ seconds ago. For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
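The centered moving average filter mentioned above can be sketched as follows (a minimal version, assuming the trajectory is a list of `(x, y)` points; endpoint handling in the actual node may differ):

```python
def smooth_trajectory(points, window_size):
    """Centered moving average over (x, y) points; window_size must be odd,
    matching the smoothing_window_size parameter. Endpoints without a full
    window are dropped."""
    half = window_size // 2
    smoothed = []
    for i in range(half, len(points) - half):
        window = points[i - half:i + half + 1]
        smoothed.append((sum(p[0] for p in window) / window_size,
                         sum(p[1] for p in window) / window_size))
    return smoothed
```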
Yaw Deviation
Calculates the deviation between the recognized yaw angle of a past object and the yaw azimuth angle of the smoothed traveled trajectory for MOVING OBJECTS. The smoothed traveled trajectory is calculated by applying a centered moving average filter whose window size is specified by the parameter `smoothing_window_size`. The yaw deviation is calculated by comparing the yaw azimuth angle of the smoothed traveled trajectory with the perceived orientation of the past object whose timestamp is $T=T_n$ seconds ago.
For stopped objects, the smoothed traveled trajectory is unstable, so this metric is not calculated.
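A yaw deviation comparison needs an angle difference wrapped into $(-\pi, \pi]$; a minimal sketch (hypothetical helper, not the node's implementation):

```python
import math


def yaw_deviation(trajectory_yaw, perceived_yaw):
    """Signed angular difference between the trajectory azimuth and the
    perceived yaw, wrapped to (-pi, pi]."""
    diff = perceived_yaw - trajectory_yaw
    # atan2 of (sin, cos) wraps the difference without branching.
    return math.atan2(math.sin(diff), math.cos(diff))
```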
Yaw Rate
Calculates the yaw rate of an object based on the change in yaw angle from the previous time step. It is evaluated for STATIONARY OBJECTS and assesses the stability of yaw rate recognition. The yaw rate is calculated by comparing the yaw angle of the past object with the yaw angle of the object received in the previous cycle, where the past object's timestamp is $T_n$ seconds ago.
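The finite-difference calculation can be sketched as below, with the angle difference wrapped so that crossing $\pm\pi$ is not misread as a large rotation. Note that the actual node also ignores 180-degree orientation flips (see the changelog entry about reversal of orientation); this sketch omits that refinement:

```python
import math


def yaw_rate(current_yaw, previous_yaw, dt):
    """Absolute yaw rate from two consecutive yaw angles observed dt
    seconds apart; the difference is wrapped to (-pi, pi] first."""
    diff = current_yaw - previous_yaw
    diff = math.atan2(math.sin(diff), math.cos(diff))
    return abs(diff) / dt
```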
Object Counts
Counts the number of detections for each object class within the specified detection range. These metrics are measured for the most recent objects, not past objects.
In the provided illustration, the range $R$ is determined by a combination of lists of radii (e.g., $r_1, r_2, \ldots$) and heights (e.g., $h_1, h_2, \ldots$). For example,
- the number of CAR in range $R = (r_1, h_1)$ equals 1
- the number of CAR in range $R = (r_1, h_2)$ equals 2
- the number of CAR in range $R = (r_2, h_1)$ equals 3
- the number of CAR in range $R = (r_2, h_2)$ equals 4
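The counting for one $(r, h)$ combination can be sketched as follows. This is a minimal illustration assuming each object is an `(x, y, z)` position relative to ego and that the height bound applies to $|z|$; the node's exact range semantics may differ:

```python
import math


def count_objects_in_range(objects, radius, height):
    """Count objects whose planar distance from the ego origin is within
    `radius` and whose |z| is within `height` (assumed semantics)."""
    return sum(1 for (x, y, z) in objects
               if math.hypot(x, y) <= radius and abs(z) <= height)
```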
Total Object Count
Counts the number of unique objects for each class within the specified detection range. The total object count is calculated as follows:
\[\begin{align} \text{Total Object Count (Class, Range)} = \left| \bigcup_{t=0}^{T_{\text{now}}} \{ \text{uuid} \mid \text{class}(t, \text{uuid}) = C \wedge \text{position}(t, \text{uuid}) \in R \} \right| \end{align}\]where:
- $\bigcup$ represents the union across all frames from $t = 0$ to $T_{\text{now}}$, which ensures that each uuid is counted only once.
- $\text{class}(t, \text{uuid}) = C$ specifies that the object with uuid at time $t$ belongs to class $C$.
- $\text{position}(t, \text{uuid}) \in R$ indicates that the object with uuid at time $t$ is within the specified range $R$.
- $\left| \{ \ldots \} \right|$ denotes the cardinality of the set, which counts the number of unique uuids that meet the class and range criteria across all considered times.
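The union in the formula above amounts to collecting uuids into a set across frames; a minimal sketch, assuming a hypothetical frame format of dicts with `class`, `uuid`, and `position` keys:

```python
def total_object_count(frames, target_class, in_range):
    """Cardinality of the union of uuids over all frames, restricted to
    objects of target_class whose position satisfies in_range."""
    unique_uuids = set()
    for frame in frames:
        for obj in frame:
            if obj["class"] == target_class and in_range(obj["position"]):
                # A set deduplicates, so each uuid is counted only once.
                unique_uuids.add(obj["uuid"])
    return len(unique_uuids)
```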
Average Object Count
Counts the average number of objects for each class within the specified detection range. This metric measures how many objects were detected in one frame, without considering uuids. The average object count is calculated as follows:
\[\begin{align} \text{Average Object Count (Class, Range)} = \frac{1}{N} \sum_{t=0}^{T_{\text{now}}} \left| \{ \text{object} \mid \text{class}(t, \text{object}) = C \wedge \text{position}(t, \text{object}) \in R \} \right| \end{align}\]where:
- $N$ represents the total number of frames within the time period up to $T_{\text{now}}$ (it is precisely `detection_count_purge_seconds`).
- $\text{object}$ denotes an object that meets the class and range criteria at time $t$.
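In contrast to the total count, the average count sums per-frame counts without deduplicating uuids; a minimal sketch with the same hypothetical frame format:

```python
def average_object_count(frames, target_class, in_range):
    """Per-frame count of matching objects, averaged over all frames;
    uuids are ignored, so a persistent object counts in every frame."""
    if not frames:
        return 0.0
    total = sum(
        sum(1 for obj in frame
            if obj["class"] == target_class and in_range(obj["position"]))
        for frame in frames
    )
    return total / len(frames)
```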
Interval Object Count
Counts the average number of objects for each class within the specified detection range over the last `objects_count_window_seconds`. This metric measures how many objects were detected in one frame, without considering uuids. The interval object count is calculated as follows:
\[\begin{align} \text{Interval Object Count (Class, Range)} = \frac{1}{W} \sum_{t=T_{\text{now}} - T_W}^{T_{\text{now}}} \left| \{ \text{object} \mid \text{class}(t, \text{object}) = C \wedge \text{position}(t, \text{object}) \in R \} \right| \end{align}\]where:
- $W$ represents the total number of frames within the last `objects_count_window_seconds`.
- $T_W$ represents the time window `objects_count_window_seconds`.
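The interval variant is the same average restricted to recent frames; a minimal sketch, assuming timestamped frames as `(time, frame)` pairs (hypothetical format):

```python
def interval_object_count(stamped_frames, t_now, window_seconds,
                          target_class, in_range):
    """Average per-frame matching-object count over frames whose timestamp
    falls within the last window_seconds before t_now."""
    recent = [frame for (t, frame) in stamped_frames
              if t_now - t <= window_seconds]
    if not recent:
        return 0.0
    total = sum(
        sum(1 for obj in frame
            if obj["class"] == target_class and in_range(obj["position"]))
        for frame in recent
    )
    return total / len(recent)
```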
Inputs / Outputs
Name | Type | Description |
---|---|---|
`~/input/objects` | `autoware_perception_msgs::msg::PredictedObjects` | The predicted objects to evaluate. |
`~/metrics` | `tier4_metric_msgs::msg::MetricArray` | Metric information about perception accuracy. |
`~/markers` | `visualization_msgs::msg::MarkerArray` | Visual markers for debugging and visualization. |
Parameters
Name | Type | Description |
---|---|---|
`selected_metrics` | List | Metrics to be evaluated, such as lateral deviation, yaw deviation, and predicted path deviation. |
`smoothing_window_size` | Integer | Window size of the moving average filter used for path smoothing; must be an odd number. |
`prediction_time_horizons` | list[double] | Time horizons for prediction evaluation in seconds. |
`stopped_velocity_threshold` | double | Velocity threshold below which an object is considered stopped. |
`detection_radius_list` | list[double] | Detection radii for objects to be evaluated (used for object counts only). |
`detection_height_list` | list[double] | Detection heights for objects to be evaluated (used for object counts only). |
`detection_count_purge_seconds` | double | Time window for purging object detection counts. |
`objects_count_window_seconds` | double | Time window for keeping object detection counts. The number of object detections within this time window is stored in `detection_count_vector_`. |
`target_object.*.check_lateral_deviation` | bool | Whether to check lateral deviation for specific object types (car, truck, etc.). |
`target_object.*.check_yaw_deviation` | bool | Whether to check yaw deviation for specific object types (car, truck, etc.). |
`target_object.*.check_predicted_path_deviation` | bool | Whether to check predicted path deviation for specific object types (car, truck, etc.). |
`target_object.*.check_yaw_rate` | bool | Whether to check yaw rate for specific object types (car, truck, etc.). |
`target_object.*.check_total_objects_count` | bool | Whether to check total object count for specific object types (car, truck, etc.). |
`target_object.*.check_average_objects_count` | bool | Whether to check average object count for specific object types (car, truck, etc.). |
`target_object.*.check_interval_average_objects_count` | bool | Whether to check interval average object count for specific object types (car, truck, etc.). |
`debug_marker.*` | bool | Debugging parameters for marker visualization (history path, predicted path, etc.). |
Assumptions / Known limits
It is assumed that the current positions of PredictedObjects are reasonably accurate.
Future extensions / Unimplemented parts
- Increase rate in recognition per class
- Metrics for objects with strange physical behavior (e.g., going through a fence)
- Metrics for splitting objects
- Metrics for problems with objects that are normally stationary but move
- Disappearing object metrics
Changelog for package autoware_perception_online_evaluator
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- Contributors: Hayato Mizushima, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat(autoware_utils): replace autoware_universe_utils with autoware_utils (#10191)
- chore: refine maintainer list
(#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
- feat(autoware_vehicle_info_utils): replace autoware_universe_utils with autoware_utils (#10167)
- Contributors: Fumiya Watanabe, Ryohsuke Mitsudome, Shunsuke Miura, 心刚
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- feat: apply [autoware_]{.title-ref} prefix for
[perception_online_evaluator]{.title-ref}
(#9956)
- feat(perception_online_evaluator): apply [autoware_]{.title-ref} prefix (see below):
* In this commit, I did not organize a folder structure. The folder structure will be organized in the next some commits.
- The changes will follow the Autoware's guideline as below:
- https://autowarefoundation.github.io/autoware-documentation/main/contributing/coding-guidelines/ros-nodes/directory-structure/#package-folder
- bug(perception_online_evaluator): remove duplicated properties
- It seems the [motion_evaluator]{.title-ref} is defined and used in the [autoware_planning_evaluator]{.title-ref}
- rename(perception_online_evaluator): move headers under `include/autoware`:
- Fixes due to this changes for .hpp/.cpp files will be applied in the next commit
- fix(perception_online_evaluator): fix include paths
- To follow the previous commit
- rename: [perception_online_evaluator]{.title-ref} => [autoware_perception_online_evaluator]{.title-ref}
- style(pre-commit): autofix
- bug(autoware_perception_online_evaluator): revert wrongly updated copyright
- bug(autoware_perception_online_evaluator): [autoware_]{.title-ref} prefix is not needed here
- update: [CODEOWNERS]{.title-ref}
* bug(autoware_perception_online_evaluator): fix a wrong package name ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- Contributors: Fumiya Watanabe, Junya Sasaki
0.40.0 (2024-12-12)
- Merge branch 'main' into release-0.40.0
- Revert "chore(package.xml): bump version to 0.39.0 (#9587)" This reverts commit c9f0f2688c57b0f657f5c1f28f036a970682e7f5.
- fix: fix ticket links in CHANGELOG.rst (#9588)
- chore(package.xml): bump version to 0.39.0
(#9587)
- chore(package.xml): bump version to 0.39.0
- fix: fix ticket links in CHANGELOG.rst
* fix: remove unnecessary diff ---------Co-authored-by: Yutaka Kondo <<yutaka.kondo@youtalk.jp>>
- fix: fix ticket links in CHANGELOG.rst (#9588)
- fix(cpplint): include what you use - evaluator (#9566)
- refactor(perception_online_evaluator): use tier4_metric_msgs instead of diagnostic_msgs (#9485)
- refactor(evaluators, autoware_universe_utils): rename Stat class
to Accumulator and move it to autoware_universe_utils
(#9459)
- add Accumulator class to autoware_universe_utils
- use Accumulator on all evaluators.
- pre-commit
- found and fixed a bug. add more tests.
- pre-commit
* Update common/autoware_universe_utils/include/autoware/universe_utils/math/accumulator.hpp Co-authored-by: Kosuke Takeuchi <<kosuke.tnp@gmail.com>> ---------Co-authored-by: Kosuke Takeuchi <<kosuke.tnp@gmail.com>>
- 0.39.0
- update changelog
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix(evaluator): missing dependency in evaluator components (#9074)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- Contributors: Esteve Fernandez, Fumiya Watanabe, Kem (TiankuiXian), Kotaro Uetake, M. Fatih Cırıt, Ryohsuke Mitsudome, Yutaka Kondo, ぐるぐる
0.39.0 (2024-11-25)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- Contributors: Esteve Fernandez, Yutaka Kondo
0.38.0 (2024-11-08)
- unify package.xml version to 0.37.0
- refactor(object_recognition_utils): add autoware prefix to object_recognition_utils (#8946)
- fix(perception_online_evaluator): fix unusedFunction (#8559) fix:unusedFunction
- feat(evalautor): rename evaluator diag topics
(#8152)
- feat(evalautor): rename evaluator diag topics
* perception ---------
- fix(perception_online_evaluator): passedByValue (#8201) fix: passedByValue
- fix(perception_online_evaluator): fix shadowVariable
(#7933)
- fix:shadowVariable
- fix:clang-format
* fix:shadowVariable ---------
- feat: add [autoware_]{.title-ref} prefix to [lanelet2_extension]{.title-ref} (#7640)
- refactor(universe_utils/motion_utils)!: add autoware namespace (#7594)
- refactor(motion_utils)!: add autoware prefix and include dir (#7539) refactor(motion_utils): add autoware prefix and include dir
- feat(autoware_universe_utils)!: rename from tier4_autoware_utils (#7538) Co-authored-by: kosuke55 <<kosuke.tnp@gmail.com>>
- refactor(vehicle_info_utils)!: prefix package and namespace with
autoware
(#7353)
- chore(autoware_vehicle_info_utils): rename header
- chore(bpp-common): vehicle info
- chore(path_optimizer): vehicle info
- chore(velocity_smoother): vehicle info
- chore(bvp-common): vehicle info
- chore(static_centerline_generator): vehicle info
- chore(obstacle_cruise_planner): vehicle info
- chore(obstacle_velocity_limiter): vehicle info
- chore(mission_planner): vehicle info
- chore(obstacle_stop_planner): vehicle info
- chore(planning_validator): vehicle info
- chore(surround_obstacle_checker): vehicle info
- chore(goal_planner): vehicle info
- chore(start_planner): vehicle info
- chore(control_performance_analysis): vehicle info
- chore(lane_departure_checker): vehicle info
- chore(predicted_path_checker): vehicle info
- chore(vehicle_cmd_gate): vehicle info
- chore(obstacle_collision_checker): vehicle info
- chore(operation_mode_transition_manager): vehicle info
- chore(mpc): vehicle info
- chore(control): vehicle info
- chore(common): vehicle info
- chore(perception): vehicle info
- chore(evaluator): vehicle info
- chore(freespace): vehicle info
- chore(planning): vehicle info
- chore(vehicle): vehicle info
- chore(simulator): vehicle info
- chore(launch): vehicle info
- chore(system): vehicle info
- chore(sensing): vehicle info
* fix(autoware_joy_controller): remove unused deps ---------
- fix(perception_online_evaluator): add metric_value not only stat
(#7100)(#7118)
(revert of revert)
(#7167)
* Revert "fix(perception_online_evaluator): revert "add
metric_value not only s…" This reverts commit
d827b1bd1f4bbacf0333eb14a62ef42e56caef25.
- Update evaluator/perception_online_evaluator/include/perception_online_evaluator/perception_online_evaluator_node.hpp
- Update evaluator/perception_online_evaluator/src/perception_online_evaluator_node.cpp
* use emplace back ---------Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- feat!: replace autoware_auto_msgs with autoware_msgs for evaluator modules (#7241) Co-authored-by: Cynthia Liu <<cynthia.liu@autocore.ai>> Co-authored-by: NorahXiong <<norah.xiong@autocore.ai>> Co-authored-by: beginningfan <<beginning.fan@autocore.ai>>
- fix(perception_online_evaluator): revert "add metric_value not only stat (#7100)" (#7118)
- feat(perception_online_evaluator): add metric_value not only stat (#7100)
- fix(perception_online_evaluator): fix range resolution (#7115)
- chore(glog): add initialization check (#6792)
- fix(perception_online_evaluator): fix bug of constStatement (#6922)
- feat(perception_online_evaluator): imporve yaw rate metrics
considering flip
(#6881)
- feat(perception_online_evaluator): imporve yaw rate metrics considering flip
* fix test ---------
- feat(perception_evaluator): counts objects within detection range
(#6848)
* feat(perception_evaluator): counts objects within detection
range detection counter add enable option and refactoring fix update
document readme clean up
- fix from review
* use $ fix * fix include ---------
- docs(perception_online_evaluator): update metrics explanation (#6819)
- feat(perception_online_evaluator): better waitForDummyNode (#6827)
- feat(perception_online_evaluator): add predicted path variance
(#6793)
- feat(perception_online_evaluator): add predicted path variance
- add unit test
- update readme
* pre commit ---------
- feat(perception_online_evaluator): ignore reversal of orientation from yaw_rate calculation (#6748)
- docs(perception_online_evaluator): add description about yaw rate evaluation (#6737)
- Contributors: Esteve Fernandez, Fumiya Watanabe, Kosuke Takeuchi, Kyoichi Sugahara, Nagi70, Ryohsuke Mitsudome, Ryuta Kambe, Satoshi OTA, Takamasa Horibe, Takayuki Murooka, Yutaka Kondo, kobayu858
0.26.0 (2024-04-03)
- feat(perception_online_evaluator): extract moving object for deviation check (#6682) fix test
- feat(perception_online_evaluator): unify debug markers instead of
separating for each object
(#6681)
- feat(perception_online_evaluator): unify debug markers instead of separating for each object
* fix for
-
feat(perception_online_evaluator): add yaw rate metrics for stopped object (#6667) * feat(perception_online_evaluator): add yaw rate metrics for stopped object add add test * feat: add stopped vel parameter ---------
- fix(perception_online_evaluator): fix build error (#6595)
- build(perception_online_evaluator): add lanelet_extension dependency (#6592)
- feat(perception_online_evaluator): publish metrics of each object class (#6556)
- feat(perception_online_evaluator): add
perception_online_evaluator
(#6493)
* feat(perception_evaluator): add perception_evaluator tmp update
add add add update clean up change time horizon
- fix build werror
- fix topic name
- clean up
- rename to perception_online_evaluator
- refactor: remove timer
- feat: add test
* fix: ci check ---------
- Contributors: Esteve Fernandez, Kosuke Takeuchi, Satoshi OTA
Wiki Tutorials
Package Dependencies
System Dependencies
Name |
---|
eigen |
Dependant Packages
Launch files
- launch/perception_online_evaluator.launch.xml
  - input/objects [default: /perception/object_recognition/objects]