
autoware_traffic_light_classifier package from autoware_universe repo


Package Summary

Tags No category tags.
Version 0.43.0
License Apache License 2.0
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Checkout URI https://github.com/autowarefoundation/autoware_universe.git
VCS Type git
VCS Version main
Last Updated 2025-04-03
Dev Status UNMAINTAINED
CI status No Continuous Integration
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (0)
Good First Issues (0)
Pull Requests to Review (0)

Package Description

The autoware_traffic_light_classifier package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Yoshi Ri
  • Tao Zhong
  • Masato Saeki

Authors

No additional authors.

autoware_traffic_light_classifier

Purpose

autoware_traffic_light_classifier is a package for classifying traffic light labels using cropped images around traffic lights. This package provides two classifier models: cnn_classifier and hsv_classifier.

Inner-workings / Algorithms

If the height and width of a ~/input/rois entry are 0, the color, shape, and confidence of the corresponding ~/output/traffic_signals element become UNKNOWN, CIRCLE, and 0.0. If a ~/input/rois entry is judged to be backlit, the color, shape, and confidence become UNKNOWN, UNKNOWN, and 0.0.
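This fallback behavior can be sketched as follows. This is an illustrative Python rendering only; the node itself is implemented in C++, and all names here are hypothetical:

```python
# Illustrative sketch of the ROI fallback logic; names are hypothetical.
UNKNOWN = "unknown"
CIRCLE = "circle"

def classify_roi(width, height, is_backlit, classifier):
    """Return (color, shape, confidence) for a single ROI."""
    if width == 0 and height == 0:
        # Degenerate ROI: state unknown, shape defaults to circle
        return (UNKNOWN, CIRCLE, 0.0)
    if is_backlit:
        # Harsh backlight: neither color nor shape is reliable
        return (UNKNOWN, UNKNOWN, 0.0)
    # Otherwise defer to the actual classifier (cnn_classifier or hsv_classifier)
    return classifier(width, height)
```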

cnn_classifier

Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2.
We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, a total of 83,400 TIER IV internal images of Japanese traffic lights (58,600 for training, 14,800 for evaluation, and 10,000 for test) were used for fine-tuning.

Name Input Size Test Accuracy
EfficientNet-b1 128 x 128 99.76%
MobileNet-v2 224 x 224 99.81%

For pedestrian signals, a total of 21,199 TIER IV internal images of Japanese traffic lights (17,860 for training, 2,114 for evaluation, and 1,225 for test) were used for fine-tuning.
The model details are listed below:

Name Input Size Test Accuracy
EfficientNet-b1 128 x 128 97.89%
MobileNet-v2 224 x 224 99.10%

hsv_classifier

Traffic light colors (green, yellow, and red) are classified in the HSV color model.
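As a minimal sketch, per-pixel HSV range classification can be written as below. The ranges are illustrative examples, not the shipped defaults, and assume OpenCV-style H in [0, 179] with S and V in [0, 255] (note that red hue also wraps around near 179, which this sketch ignores):

```python
# Example HSV thresholds; the real values come from the hsv_classifier parameters.
RANGES = {
    "green":  ((50, 100, 100), (90, 255, 255)),
    "yellow": ((15, 100, 100), (35, 255, 255)),
    "red":    ((0, 100, 100), (10, 255, 255)),
}

def in_range(hsv, lo, hi):
    # True when every channel lies within its [lo, hi] bound
    return all(l <= c <= h for c, l, h in zip(hsv, lo, hi))

def classify_pixel(hsv):
    """Return the first color whose HSV range contains the pixel, else "unknown"."""
    for color, (lo, hi) in RANGES.items():
        if in_range(hsv, lo, hi):
            return color
    return "unknown"
```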

About Label

The message type is designed to comply with the unified road signs proposed in the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a comma-separated string of color-shape pairs: color1-shape1, color2-shape2.

For example, the label for a traffic light showing a red circle and a red cross must be expressed as “red-circle, red-cross”.
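A label string in this format can be parsed with a short helper; parse_label is a hypothetical function shown here purely for illustration:

```python
def parse_label(label):
    """Split "color1-shape1, color2-shape2" into [(color1, shape1), (color2, shape2)]."""
    return [tuple(part.strip().split("-", 1)) for part in label.split(",")]
```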

These colors and shapes are assigned to the message as shown in TrafficLightDataStructure.jpg.

Inputs / Outputs

Input

Name Type Description
~/input/image sensor_msgs::msg::Image input image
~/input/rois tier4_perception_msgs::msg::TrafficLightRoiArray rois of traffic lights

Output

Name Type Description
~/output/traffic_signals tier4_perception_msgs::msg::TrafficLightArray classified signals
~/output/debug/image sensor_msgs::msg::Image image for debugging

Parameters

Node Parameters

car_traffic_light_classifier

{{ json_to_markdown("perception/autoware_traffic_light_classifier/schema/car_traffic_light_classifier.schema.json") }}

pedestrian_traffic_light_classifier

{{ json_to_markdown("perception/autoware_traffic_light_classifier/schema/pedestrian_traffic_light_classifier.schema.json") }}

Core Parameters

cnn_classifier

The cnn_classifier parameters are included in the node parameter schemas above.

hsv_classifier

Name Type Description
green_min_h int the minimum hue of green color
green_min_s int the minimum saturation of green color
green_min_v int the minimum value (brightness) of green color
green_max_h int the maximum hue of green color
green_max_s int the maximum saturation of green color
green_max_v int the maximum value (brightness) of green color
yellow_min_h int the minimum hue of yellow color
yellow_min_s int the minimum saturation of yellow color
yellow_min_v int the minimum value (brightness) of yellow color
yellow_max_h int the maximum hue of yellow color
yellow_max_s int the maximum saturation of yellow color
yellow_max_v int the maximum value (brightness) of yellow color
red_min_h int the minimum hue of red color
red_min_s int the minimum saturation of red color
red_min_v int the minimum value (brightness) of red color
red_max_h int the maximum hue of red color
red_max_s int the maximum saturation of red color
red_max_v int the maximum value (brightness) of red color

Training Traffic Light Classifier Model

Overview

This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.

Data Preparation

Use Sample Dataset

Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.

Use Your Custom Dataset

To train a traffic light classifier, adopt a structured subfolder format where each subfolder represents a distinct class. Below is an illustrative dataset structure example:

DATASET_ROOT
├── TRAIN
│   ├── RED
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   ├── GREEN
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   ├── YELLOW
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   └── ...
├── VAL
│   └── ...
└── TEST
    └── ...



Installation

Prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it.

conda create --name tl-classifier python=3.8 -y
conda activate tl-classifier

Step 3. Install PyTorch

Please ensure you have PyTorch installed compatible with CUDA 11.6, as it is a requirement for current Autoware.

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia

Install mmlab/mmpretrain

Step 1. Install mmpretrain from source

cd ~/
git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
pip install -U openmim && mim install -e .

Training

MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.

In the example, we demonstrate the training steps on the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNetV2, EfficientNetV3, ResNet, and more.

Create a config file

Generate a configuration file for your preferred model within the configs folder

touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py

Open the configuration file in your preferred text editor and paste in the content provided below. Adjust the data_root variable to match the path of your dataset. You are welcome to customize the configuration parameters for the model, dataset, and scheduler to suit your preferences.

# Inherit model, schedule and default_runtime from base model
_base_ = [
    '../_base_/models/mobilenet_v2_1x.py',
    '../_base_/schedules/imagenet_bs256_epochstep.py',
    '../_base_/default_runtime.py'
]

# Set the number of classes to the model
# You can also change other model parameters here
# For detailed descriptions of model parameters, please refer to link below
# [Customize model](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html)
model = dict(head=dict(num_classes=3, topk=(1, 3)))

# Set max epochs and validation interval
train_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)

# Set optimizer and lr scheduler
optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.001, momentum=0.9))
param_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)

dataset_type = 'CustomDataset'
data_root = "/PATH/OF/YOUR/DATASET"

# Customize data preprocessing and dataloader pipeline for training set
# These parameters were calculated for the sample dataset
data_preprocessor = dict(
    mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],
    std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],
    num_classes=3,
    to_rgb=True,
)

# Customize data preprocessing and dataloader pipeline for train set
# For detailed descriptions of data pipeline, please refer to link below
# [Customize data pipeline](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=224),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]
train_dataloader = dict(
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='',
        data_prefix='train',
        with_label=True,
        pipeline=train_pipeline,
    ),
    num_workers=8,
    batch_size=32,
    sampler=dict(type='DefaultSampler', shuffle=True)
)

# Customize data preprocessing and dataloader pipeline for test set
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=224),
    dict(type='PackInputs'),
]

# Customize data preprocessing and dataloader pipeline for validation set
val_cfg = dict()
val_dataloader = dict(
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='',
        data_prefix='val',
        with_label=True,
        pipeline=test_pipeline,
    ),
    num_workers=8,
    batch_size=32,
    sampler=dict(type='DefaultSampler', shuffle=True)
)

val_evaluator = dict(topk=(1, 3,), type='Accuracy')

test_dataloader = val_dataloader
test_evaluator = val_evaluator

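The mean and std values in data_preprocessor above were computed for the sample dataset. As a sketch, equivalent per-channel statistics for your own dataset could be derived like this, assuming (hypothetically) that your images are stacked into a single uint8 NumPy array:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean/std scaled like the `0.2888 * 256` values in the config.

    images: uint8 array of shape (N, H, W, 3).
    """
    x = images.astype(np.float64) / 255.0   # normalize pixels to [0, 1]
    mean = x.mean(axis=(0, 1, 2)) * 256     # average over images and pixels
    std = x.std(axis=(0, 1, 2)) * 256
    return mean.tolist(), std.tolist()
```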

Start training

cd ~/mmpretrain
python tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py

Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom folder.

Convert PyTorch model to ONNX model

Install mmdeploy

The mmdeploy toolset is designed for deploying your trained model onto various target devices. With it, you can convert PyTorch models into the ONNX format.

# Activate your conda environment
conda activate tl-classifier

# Install mmengine and mmcv
mim install mmengine
mim install "mmcv>=2.0.0rc2"

# Install mmdeploy
pip install mmdeploy==1.2.0

# Support onnxruntime
pip install mmdeploy-runtime==1.2.0
pip install mmdeploy-runtime-gpu==1.2.0
pip install onnxruntime-gpu==1.8.1

# Clone mmdeploy repository
cd ~/
git clone -b main https://github.com/open-mmlab/mmdeploy.git

Convert PyTorch model to ONNX model

cd ~/mmdeploy

# Run deploy.py script
# The deploy.py script takes 5 main arguments in this order: deploy config file path,
# train config file path, checkpoint file path, demo image path, and work directory path
python tools/deploy.py \
~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py \
~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \
~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \
/SAMPLE/IMAGE/DIRECTORY \
--work-dir mmdeploy_model/mobilenet_v2

The converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2 folder.

After obtaining your ONNX model, update the parameters defined in the launch file (e.g. model_file_path, label_file_path, input_h, input_w…). Note that we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.

Assumptions / Known limits

(Optional) Error detection and handling

(Optional) Performance characterization

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

[2] Tan, Mingxing, and Quoc Le. “EfficientNet: Rethinking model scaling for convolutional neural networks.” International conference on machine learning. PMLR, 2019.

(Optional) Future extensions / Unimplemented parts

CHANGELOG

Changelog for package autoware_traffic_light_classifier

0.43.0 (2025-03-21)

  • Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
  • chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
  • feat(traffic_light_classifier): update diagnostics when harsh backlight is detected (#10218) feat: update diagnostics when harsh backlight is detected
  • chore(perception): refactor perception launch (#10186)
    • fundamental change
    • style(pre-commit): autofix
    • fix typo
    • fix params and modify some packages
    • pre-commit
    • fix
    • fix spell check
    • fix typo
    • integrate model and label path
    • style(pre-commit): autofix
    • for pre-commit
    • run pre-commit
    • for awsim
    • for simulatior
    • style(pre-commit): autofix
    • fix grammer in launcher
    • add schema for yolox_tlr
    • style(pre-commit): autofix
    • fix file name
    • fix
    • rename
    • modify arg name to
    • fix typo
    • change param name
    • style(pre-commit): autofix

    * chore

    Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Shintaro Tomie <<58775300+Shin-kyoto@users.noreply.github.com>> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>

  • refactor: add autoware_cuda_dependency_meta (#10073)
  • Contributors: Esteve Fernandez, Hayato Mizushima, Kotaro Uetake, Masato Saeki, Yutaka Kondo

0.42.0 (2025-03-03)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore: refine maintainer list (#10110)
    • chore: remove Miura from maintainer

    * chore: add Taekjin-san to perception_utils package maintainer ---------

  • feat(autoware_traffic_light_classifier): add traffic light classifier schema, README and car and ped launcher (#10048)
    • feat(autoware_traffic_light_classifier):Add traffic light classifier schema and README
    • add individual launcher
    • style(pre-commit): autofix
    • fix description
    • fix README and source code
    • separate schema in README
    • fix README
    • fix launcher
    • style(pre-commit): autofix

    * fix typo ---------Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>

  • Contributors: Fumiya Watanabe, Shunsuke Miura, Vishal Chauhan

0.41.2 (2025-02-19)

  • chore: bump version to 0.41.1 (#10088)
  • Contributors: Ryohsuke Mitsudome

0.41.1 (2025-02-10)

0.41.0 (2025-01-29)

  • Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
  • chore(autoware_traffic_light_classifier): modify docs (#9819)
    • modify docs
    • style(pre-commit): autofix

    * fix docs ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>

  • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components (#9762)
    • refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
    • style(pre-commit): autofix
    • style(autoware_tensorrt_common): linting

    * style(autoware_lidar_centerpoint): typo Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * docs(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>

    • fix(autoware_lidar_transfusion): reuse cast variable
    • fix(autoware_tensorrt_common): remove deprecated inference API

    * style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>

    • fix(autoware_tensorrt_common): const pointer
    • fix(autoware_tensorrt_common): remove unused method declaration
    • style(pre-commit): autofix

    * refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>

    • fix(autoware_tensorrt_common): return if layer not registered

    * refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>

    • fix(autoware_tensorrt_common): rename struct

    * style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>

  • Contributors: Amadeusz Szymko, Fumiya Watanabe, Masato Saeki

0.40.0 (2024-12-12)

  • Merge branch 'main' into release-0.40.0
  • Revert "chore(package.xml): bump version to 0.39.0 (#9587)" This reverts commit c9f0f2688c57b0f657f5c1f28f036a970682e7f5.
  • fix: fix ticket links in CHANGELOG.rst (#9588)
  • chore(package.xml): bump version to 0.39.0 (#9587)
    • chore(package.xml): bump version to 0.39.0
    • fix: fix ticket links in CHANGELOG.rst

    * fix: remove unnecessary diff ---------Co-authored-by: Yutaka Kondo <<yutaka.kondo@youtalk.jp>>

  • fix: fix ticket links in CHANGELOG.rst (#9588)
  • fix(autoware_traffic_light_classifier): fix clang-diagnostic-delete-abstract-non-virtual-dtor (#9497) fix: clang-diagnostic-delete-abstract-non-virtual-dtor
  • 0.39.0
  • update changelog
  • Merge commit '6a1ddbd08bd' into release-0.39.0
  • fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
  • fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
  • chore(autoware_traffic_light*): add maintainer (#9280)
    • add fundamental commit

    * add forgot package ---------

  • chore(package.xml): bump version to 0.38.0 (#9266) (#9284)
    • unify package.xml version to 0.37.0
    • remove system_monitor/CHANGELOG.rst
    • add changelog

    * 0.38.0

  • refactor(cuda_utils): prefix package and namespace with autoware (#9171)
  • Contributors: Esteve Fernandez, Fumiya Watanabe, Masato Saeki, Ryohsuke Mitsudome, Yutaka Kondo, kobayu858

0.39.0 (2024-11-25)

0.38.0 (2024-11-08)

  • unify package.xml version to 0.37.0
  • refactor(tensorrt_common)!: fix namespace, directory structure & move to perception namespace (#9099)
    • refactor(tensorrt_common)!: fix namespace, directory structure & move to perception namespace
    • refactor(tensorrt_common): directory structure
    • style(pre-commit): autofix

    * fix(tensorrt_common): correct package name for logging ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>

  • fix(traffic_light_classifier): fix traffic light monitor warning (#8412) fix traffic light monitor warning
  • fix(autoware_traffic_light_classifier): fix passedByValue (#8392) fix:passedByValue
  • fix(traffic_light_classifier): fix zero size roi bug (#7608)
    • fix: continue to process when input roi size is zero

    * fix: consider when roi size is zero, rois is empty fix * fix: use emplace_back instead of push_back for adding images and backlight indices The code changes in [traffic_light_classifier_node.cpp]{.title-ref} modify the way images and backlight indices are added to the respective vectors. Instead of using [push_back]{.title-ref}, the code now uses [emplace_back]{.title-ref}. This change improves performance and ensures proper object construction.

    • refactor: bring back for loop skim and output_msg filling
    • chore: refactor code to handle empty input ROIs in traffic_light_classifier_node.cpp

    * refactor: using index instead of vector length ---------

  • fix(traffic_light_classifier): fix funcArgNamesDifferent (#8153)
    • fix:funcArgNamesDifferent

    * fix:clang format ---------

  • refactor(traffic_light_*)!: add package name prefix of autoware_ (#8159)
    • chore: rename traffic_light_fine_detector to autoware_traffic_light_fine_detector
    • chore: rename traffic_light_multi_camera_fusion to autoware_traffic_light_multi_camera_fusion
    • chore: rename traffic_light_occlusion_predictor to autoware_traffic_light_occlusion_predictor
    • chore: rename traffic_light_classifier to autoware_traffic_light_classifier
    • chore: rename traffic_light_map_based_detector to autoware_traffic_light_map_based_detector

    * chore: rename traffic_light_visualization to autoware_traffic_light_visualization ---------

  • Contributors: Amadeusz Szymko, Sho Iwasawa, Taekjin LEE, Yutaka Kondo, kobayu858

0.26.0 (2024-04-03)

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

  • launch/car_traffic_light_classifier.launch.xml
      • data_path [default: $(env HOME)/autoware_data]
      • input/image [default: ~/image_raw]
      • input/rois [default: ~/rois]
      • output/traffic_signals [default: classified/traffic_signals]
      • param_path [default: $(find-pkg-share autoware_traffic_light_classifier)/config/car_traffic_light_classifier.param.yaml]
      • model_path [default: $(var data_path)/traffic_light_classifier/traffic_light_classifier_mobilenetv2_batch_6.onnx]
      • label_path [default: $(var data_path)/traffic_light_classifier/lamp_labels.txt]
      • build_only [default: false]
  • launch/pedestrian_traffic_light_classifier.launch.xml
      • data_path [default: $(env HOME)/autoware_data]
      • input/image [default: ~/image_raw]
      • input/rois [default: ~/rois]
      • output/traffic_signals [default: classified/traffic_signals]
      • param_path [default: $(find-pkg-share autoware_traffic_light_classifier)/config/pedestrian_traffic_light_classifier.param.yaml]
      • model_path [default: $(var data_path)/traffic_light_classifier/ped_traffic_light_classifier_mobilenetv2_batch_6.onnx]
      • label_path [default: $(var data_path)/traffic_light_classifier/lamp_labels_ped.txt]
      • build_only [default: false]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged autoware_traffic_light_classifier at Robotics Stack Exchange
