
traffic_light_classifier package from autodrrt repo


Package Summary

Tags: No category tags.
Version: 0.1.0
License: Apache License 2.0
Build type: AMENT_CMAKE
Use: RECOMMENDED

Repository Summary

Checkout URI: https://github.com/ieiauto/autodrrt.git
VCS Type: git
VCS Version: main
Last Updated: 2024-09-19
Dev Status: UNMAINTAINED
CI status: No Continuous Integration
Released: UNRELEASED
Tags: No category tags.
Contributing: Help Wanted (0), Good First Issues (0), Pull Requests to Review (0)

Package Description

The traffic_light_classifier package

Additional Links

No additional links.

Maintainers

  • Yukihiro Saito
  • Shunsuke Miura

Authors

No additional authors.

traffic_light_classifier

Purpose

traffic_light_classifier is a package for classifying traffic light labels from images cropped around each traffic light. The package provides two classifier models: cnn_classifier and hsv_classifier.

Inner-workings / Algorithms

cnn_classifier

Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2.
In total, 83,400 TIER IV internal images of Japanese traffic lights (58,600 for training, 14,800 for evaluation, and 10,000 for testing) were used for fine-tuning.
The models are summarized below:

Name            | Input Size | Test Accuracy
EfficientNet-b1 | 128 x 128  | 99.76%
MobileNet-v2    | 224 x 224  | 99.81%

hsv_classifier

Traffic light colors (green, yellow, and red) are classified in the HSV color model.
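For illustration, the idea behind HSV thresholding can be sketched with OpenCV in Python. The ranges below are placeholders only; the node's actual thresholds come from the green/yellow/red min/max parameters documented under Core Parameters, and this sketch is not the package's implementation.

import cv2
import numpy as np

# Placeholder HSV ranges; the real values come from the hsv_classifier
# parameters (green_min_h ... red_max_v) documented below.
HSV_RANGES = {
    "green": ((50, 100, 100), (120, 255, 255)),
    "yellow": ((20, 100, 100), (40, 255, 255)),
    "red": ((160, 100, 100), (180, 255, 255)),
}

def classify_hsv(bgr_roi):
    """Return the color whose HSV range matches the most pixels in the ROI."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    counts = {
        color: int(np.count_nonzero(cv2.inRange(hsv, np.array(lo), np.array(hi))))
        for color, (lo, hi) in HSV_RANGES.items()
    }
    return max(counts, key=counts.get)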

About Label

The message type is designed to comply with the unified road signs proposed in the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a comma-separated string of the form color1-shape1, color2-shape2.

For example, a traffic light showing a red circle and a red cross must be labeled “red-circle, red-cross”.
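As an illustration, such a label string splits into (color, shape) elements as follows; this helper is hypothetical and not part of the package API:

def parse_traffic_light_label(label):
    """Parse 'red-circle, red-cross' into [('red', 'circle'), ('red', 'cross')]."""
    return [tuple(part.strip().split("-")) for part in label.split(",")]

assert parse_traffic_light_label("red-circle, red-cross") == [("red", "circle"), ("red", "cross")]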

These colors and shapes are assigned to the message as follows (see TrafficLightDataStructure.jpg):

Inputs / Outputs

Input

Name          | Type                                             | Description
~/input/image | sensor_msgs::msg::Image                          | input image
~/input/rois  | tier4_perception_msgs::msg::TrafficLightRoiArray | ROIs of traffic lights

Output

Name                     | Type                                           | Description
~/output/traffic_signals | tier4_perception_msgs::msg::TrafficSignalArray | classified signals
~/output/debug/image     | sensor_msgs::msg::Image                        | image for debugging

Parameters

Node Parameters

Name            | Type | Description
classifier_type | int  | if the value is 1, cnn_classifier is used
data_path       | str  | the package's data and artifacts directory path

Core Parameters

cnn_classifier

Name                  | Type           | Description
classifier_label_path | str            | path to the label file
classifier_model_path | str            | path to the model file
classifier_precision  | str            | TensorRT precision, fp16 or int8
classifier_mean       | vector<double> | 3-channel input image mean
classifier_std        | vector<double> | 3-channel input image std
apply_softmax         | bool           | whether or not to apply softmax
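classifier_mean and classifier_std are applied per channel before inference. A minimal sketch of what that normalization amounts to, as an illustration only (assuming an HxWx3 image with 0-255 pixel values):

import numpy as np

def normalize(image, mean, std):
    """Per-channel normalization with classifier_mean / classifier_std."""
    mean = np.asarray(mean, dtype=np.float32)
    std = np.asarray(std, dtype=np.float32)
    return (image.astype(np.float32) - mean) / std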

hsv_classifier

Name         | Type | Description
green_min_h  | int  | the minimum hue of green color
green_min_s  | int  | the minimum saturation of green color
green_min_v  | int  | the minimum value (brightness) of green color
green_max_h  | int  | the maximum hue of green color
green_max_s  | int  | the maximum saturation of green color
green_max_v  | int  | the maximum value (brightness) of green color
yellow_min_h | int  | the minimum hue of yellow color
yellow_min_s | int  | the minimum saturation of yellow color
yellow_min_v | int  | the minimum value (brightness) of yellow color
yellow_max_h | int  | the maximum hue of yellow color
yellow_max_s | int  | the maximum saturation of yellow color
yellow_max_v | int  | the maximum value (brightness) of yellow color
red_min_h    | int  | the minimum hue of red color
red_min_s    | int  | the minimum saturation of red color
red_min_v    | int  | the minimum value (brightness) of red color
red_max_h    | int  | the maximum hue of red color
red_max_s    | int  | the maximum saturation of red color
red_max_v    | int  | the maximum value (brightness) of red color

Training Traffic Light Classifier Model

Overview

This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.

Data Preparation

Use Sample Dataset

Autoware offers a sample dataset that illustrates the training procedure for traffic light classification. The dataset comprises 1,045 images labeled red, green, or yellow. To use it, download it from the link and extract it to a folder of your choice.

Use Your Custom Dataset

To train a traffic light classifier, adopt a structured subfolder format in which each subfolder represents a distinct class. Below is an illustrative example of the dataset structure:

DATASET_ROOT
├── TRAIN
│   ├── RED
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   ├── GREEN
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   ├── YELLOW
│   │   ├── 001.png
│   │   ├── 002.png
│   │   └── ...
│   └── ...
├── VAL
│   └── ...
└── TEST
    └── ...



Installation

Prerequisites

Step 1. Download and install Miniconda from the official website.

Step 2. Create a conda virtual environment and activate it

conda create --name tl-classifier python=3.8 -y
conda activate tl-classifier

Step 3. Install PyTorch

Please ensure you have PyTorch installed and compatible with CUDA 11.6, as this is a requirement for the current version of Autoware:

conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
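As a quick sanity check (not part of the original steps), you can confirm the installed build can see your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"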

Install mmlab/mmpretrain

Step 1. Install mmpretrain from source

cd ~/
git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
pip install -U openmim && mim install -e .

Training

MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.

In this example, we demonstrate the training steps for the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNet, EfficientNetV2, ResNet, and more.

Create a config file

Generate a configuration file for your preferred model within the configs folder

touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py

Open the configuration file in your preferred text editor and copy in the content provided below. Adjust the data_root variable to match the path of your dataset. You can customize the configuration parameters for the model, dataset, and scheduler to suit your needs.

# Inherit model, schedule and default_runtime from base model
_base_ = [
    '../_base_/models/mobilenet_v2_1x.py',
    '../_base_/schedules/imagenet_bs256_epochstep.py',
    '../_base_/default_runtime.py'
]

# Set the number of classes to the model
# You can also change other model parameters here
# For detailed descriptions of model parameters, please refer to link below
# (Customize model)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html]
model = dict(head=dict(num_classes=3, topk=(1, 3)))

# Set max epochs and validation interval
train_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)

# Set optimizer and lr scheduler
optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.001, momentum=0.9))
param_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)

dataset_type = 'CustomDataset'
data_root = "/PATH/OF/YOUR/DATASET"

# Customize data preprocessing and dataloader pipeline for training set
# These parameters were calculated for the sample dataset
data_preprocessor = dict(
    mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],
    std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],
    num_classes=3,
    to_rgb=True,
)

# Customize data preprocessing and dataloader pipeline for train set
# For detailed descriptions of data pipeline, please refer to link below
# (Customize data pipeline)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html]
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=224),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackInputs'),
]
train_dataloader = dict(
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='',
        data_prefix='train',
        with_label=True,
        pipeline=train_pipeline,
    ),
    num_workers=8,
    batch_size=32,
    sampler=dict(type='DefaultSampler', shuffle=True)
)

# Customize data preprocessing and dataloader pipeline for test set
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=224),
    dict(type='PackInputs'),
]

# Customize data preprocessing and dataloader pipeline for validation set
val_cfg = dict()
val_dataloader = dict(
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='',
        data_prefix='val',
        with_label=True,
        pipeline=test_pipeline,
    ),
    num_workers=8,
    batch_size=32,
    sampler=dict(type='DefaultSampler', shuffle=True)
)

val_evaluator = dict(topk=(1, 3,), type='Accuracy')

test_dataloader = val_dataloader
test_evaluator = val_evaluator


Start training

cd ~/mmpretrain
python tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py

Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom folder.
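Optionally, you can evaluate a saved checkpoint on the test split with mmpretrain's test script; the checkpoint filename below is illustrative, so use whichever epoch_*.pth your run produced:

cd ~/mmpretrain
python tools/test.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py \
    work_dirs/mobilenet-v2_8xb32_custom/epoch_50.pth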

Convert PyTorch model to ONNX model

Install mmdeploy

The ‘mmdeploy’ toolset is designed for deploying your trained model onto various target devices. With its capabilities, you can seamlessly convert PyTorch models into the ONNX format.

# Activate your conda environment
conda activate tl-classifier

# Install mmengine and mmcv
mim install mmengine
mim install "mmcv>=2.0.0rc2"

# Install mmdeploy
pip install mmdeploy==1.2.0

# Support onnxruntime
pip install mmdeploy-runtime==1.2.0
pip install mmdeploy-runtime-gpu==1.2.0
pip install onnxruntime-gpu==1.8.1

# Clone the mmdeploy repository
cd ~/
git clone -b main https://github.com/open-mmlab/mmdeploy.git

Convert PyTorch model to ONNX model

cd ~/mmdeploy

# Run the deploy.py script
# deploy.py takes 5 main arguments, in this order: deploy config path, model config path,
# checkpoint path, demo image path, and work directory path
python tools/deploy.py \
~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py \
~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py \
~/mmpretrain/work_dirs/mobilenet-v2_8xb32_custom/epoch_50.pth \
/SAMPLE/IMAGE/DIRECTORY \
--work-dir mmdeploy_model/mobilenet_v2

Converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2 folder.
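Before updating the node configuration, it can be worth sanity-checking the exported model with onnxruntime (installed above). A minimal sketch, assuming a 224x224 input and mmdeploy's usual end2end.onnx output filename:

import numpy as np
import onnxruntime as ort

# "end2end.onnx" is mmdeploy's typical output filename; adjust if yours differs.
session = ort.InferenceSession("mmdeploy_model/mobilenet_v2/end2end.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
logits = session.run(None, {input_name: dummy})[0]
print("output shape:", logits.shape)  # expect (1, 3) for three classes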

After obtaining your ONNX model, update the parameters defined in the launch file (e.g. model_file_path, label_file_path, input_h, input_w…). Note that we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.
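For example, the defaults listed under “Launch files” below can be overridden at launch time; the paths here are illustrative:

ros2 launch traffic_light_classifier traffic_light_classifier.launch.xml \
    classifier_model_path:=$HOME/autoware_data/traffic_light_classifier/my_model.onnx \
    classifier_label_path:=$HOME/autoware_data/traffic_light_classifier/my_labels.txt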

Assumptions / Known limits

(Optional) Error detection and handling

(Optional) Performance characterization

References/External links

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

[2] Tan, Mingxing, and Quoc Le. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” International Conference on Machine Learning. PMLR, 2019.

(Optional) Future extensions / Unimplemented parts

CHANGELOG
No CHANGELOG found.

Wiki Tutorials

This package does not provide any links to tutorials in its rosindex metadata. You can check the ROS Wiki Tutorials page for the package.

Launch files

  • launch/traffic_light_classifier.launch.xml
      • input/image [default: ~/image_raw]
      • input/rois [default: ~/rois]
      • output/traffic_signals [default: classified/traffic_signals]
      • data_path [default: $(env HOME)/autoware_data]
      • classifier_label_path [default: $(var data_path)/traffic_light_classifier/lamp_labels.txt]
      • classifier_model_path [default: $(var data_path)/traffic_light_classifier/traffic_light_classifier_mobilenetv2_batch_6.onnx]
      • classifier_precision [default: fp16]
      • use_gpu [default: true]
      • classifier_type [default: 1]
      • classifier_type [default: 0]
      • build_only [default: false]

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.
