Package Summary

Field | Value |
---|---|
Tags | No category tags. |
Version | 0.43.0 |
License | Apache License 2.0 |
Build type | AMENT_CMAKE |
Use | RECOMMENDED |
Repository Summary

Field | Value |
---|---|
Checkout URI | https://github.com/autowarefoundation/autoware_universe.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2025-04-03 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Maintainers
- Yukihiro Saito
- Yoshi Ri
- Tao Zhong
- Masato Saeki
autoware_traffic_light_classifier
Purpose
`autoware_traffic_light_classifier` is a package for classifying traffic light labels using a cropped image around a traffic light. This package has two classifier models: `cnn_classifier` and `hsv_classifier`.
Inner-workings / Algorithms
If the height and width of `~/input/rois` are `0`, the color, shape, and confidence of `~/output/traffic_signals` become `UNKNOWN`, `CIRCLE`, and `0.0`.
If `~/input/rois` is judged as backlight, the color, shape, and confidence of `~/output/traffic_signals` become `UNKNOWN`, `UNKNOWN`, and `0.0`.
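The fallback rules above can be sketched as follows (a minimal illustration only; the constant names and the `classify_roi` helper are hypothetical, not the package's actual API):

```python
# Minimal sketch of the fallback behavior described above.
# The constants and the `classify_roi` helper are illustrative names,
# not the actual API of autoware_traffic_light_classifier.
UNKNOWN = "unknown"
CIRCLE = "circle"

def classify_roi(roi, is_backlight, classify):
    """Return (color, shape, confidence) for a single ROI."""
    if roi["height"] == 0 or roi["width"] == 0:
        # Empty ROI: color UNKNOWN, shape CIRCLE, confidence 0.0
        return (UNKNOWN, CIRCLE, 0.0)
    if is_backlight:
        # Harsh backlight: color and shape UNKNOWN, confidence 0.0
        return (UNKNOWN, UNKNOWN, 0.0)
    # Otherwise defer to the actual classifier (CNN or HSV)
    return classify(roi)
```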
cnn_classifier
Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2.
We trained classifiers for vehicular signals and pedestrian signals separately.
For vehicular signals, a total of 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning.
Name | Input Size | Test Accuracy |
---|---|---|
EfficientNet-b1 | 128 x 128 | 99.76% |
MobileNet-v2 | 224 x 224 | 99.81% |
For pedestrian signals, a total of 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning.
The model information is listed below:
Name | Input Size | Test Accuracy |
---|---|---|
EfficientNet-b1 | 128 x 128 | 97.89% |
MobileNet-v2 | 224 x 224 | 99.10% |
hsv_classifier
Traffic light colors (green, yellow, and red) are classified in the HSV color model.
About Label
The message type is designed to comply with the unified road signs proposed in the Vienna Convention. This idea has also been proposed in Autoware.Auto.
There are rules for naming the labels that nodes receive. One traffic light is represented by a comma-separated string of `color-shape` pairs: `color1-shape1, color2-shape2`.
For example, a traffic light showing a simple red circle and a red cross must be labeled `red-circle, red-cross`.
These colors and shapes are assigned to the message as follows:
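As an illustration of the naming rule, a label string can be split into `(color, shape)` pairs like this (a sketch only; the actual node works with message fields, and `parse_traffic_light_label` is a hypothetical helper):

```python
def parse_traffic_light_label(label: str):
    """Split a label such as 'red-circle, red-cross' into (color, shape) pairs."""
    elements = []
    for token in label.split(","):
        # Each token has the form "color-shape"; partition on the first hyphen.
        color, _, shape = token.strip().partition("-")
        elements.append((color, shape))
    return elements
```

For example, `parse_traffic_light_label("red-circle, red-cross")` yields `[("red", "circle"), ("red", "cross")]`.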
Inputs / Outputs
Input

Name | Type | Description |
---|---|---|
`~/input/image` | `sensor_msgs::msg::Image` | input image |
`~/input/rois` | `tier4_perception_msgs::msg::TrafficLightRoiArray` | rois of traffic lights |

Output

Name | Type | Description |
---|---|---|
`~/output/traffic_signals` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals |
`~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging |
Parameters
Node Parameters
car_traffic_light_classifier
{{ json_to_markdown("perception/autoware_traffic_light_classifier/schema/car_traffic_light_classifier.schema.json") }}
pedestrian_traffic_light_classifier
{{ json_to_markdown("perception/autoware_traffic_light_classifier/schema/pedestrian_traffic_light_classifier.schema.json") }}
Core Parameters
cnn_classifier
The `cnn_classifier` parameters are included in the node parameter schemas above.
hsv_classifier
Name | Type | Description |
---|---|---|
`green_min_h` | int | the minimum hue of green color |
`green_min_s` | int | the minimum saturation of green color |
`green_min_v` | int | the minimum value (brightness) of green color |
`green_max_h` | int | the maximum hue of green color |
`green_max_s` | int | the maximum saturation of green color |
`green_max_v` | int | the maximum value (brightness) of green color |
`yellow_min_h` | int | the minimum hue of yellow color |
`yellow_min_s` | int | the minimum saturation of yellow color |
`yellow_min_v` | int | the minimum value (brightness) of yellow color |
`yellow_max_h` | int | the maximum hue of yellow color |
`yellow_max_s` | int | the maximum saturation of yellow color |
`yellow_max_v` | int | the maximum value (brightness) of yellow color |
`red_min_h` | int | the minimum hue of red color |
`red_min_s` | int | the minimum saturation of red color |
`red_min_v` | int | the minimum value (brightness) of red color |
`red_max_h` | int | the maximum hue of red color |
`red_max_s` | int | the maximum saturation of red color |
`red_max_v` | int | the maximum value (brightness) of red color |
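As a rough illustration of how these per-color thresholds are applied, the check for a single HSV pixel might look like the following (a sketch only; the threshold values shown are placeholders, not the package defaults, and in OpenCV the hue channel actually spans 0-179, so red typically needs two hue ranges due to wraparound):

```python
def in_hsv_range(h, s, v, min_h, min_s, min_v, max_h, max_s, max_v):
    """Check whether one HSV pixel falls inside a color's threshold box."""
    return min_h <= h <= max_h and min_s <= s <= max_s and min_v <= v <= max_v

# Placeholder thresholds for green (illustrative values, not the package defaults).
GREEN = dict(min_h=50, min_s=100, min_v=100, max_h=120, max_s=255, max_v=255)

def is_green(h, s, v):
    return in_hsv_range(h, s, v, **GREEN)
```

In practice the classifier thresholds whole image regions rather than single pixels, then picks the color whose mask covers the most area.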
Training Traffic Light Classifier Model
Overview
This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.
Data Preparation
Use Sample Dataset
Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.
Use Your Custom Dataset
To train a traffic light classifier, adopt a structured subfolder format where each subfolder represents a distinct class. Below is an illustrative dataset structure:
DATASET_ROOT
├── TRAIN
│ ├── RED
│ │ ├── 001.png
│ │ ├── 002.png
│ │ └── ...
│ │
│ ├── GREEN
│ │ ├── 001.png
│ │ ├── 002.png
│ │ └──...
│ │
│ ├── YELLOW
│ │ ├── 001.png
│ │ ├── 002.png
│ │ └──...
│ └── ...
│
├── VAL
│ └──...
│
│
└── TEST
└── ...
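Before training, it can help to sanity-check the folder layout. The following stdlib-only sketch counts images per class for one split (the split and extension names are assumptions matching the structure above):

```python
from pathlib import Path

def count_images_per_class(dataset_root, split="TRAIN", exts=(".png", ".jpg")):
    """Return {class_name: image_count} for one split of the dataset."""
    counts = {}
    for class_dir in sorted(Path(dataset_root, split).iterdir()):
        if class_dir.is_dir():
            # Count files whose extension matches the expected image types.
            counts[class_dir.name] = sum(
                1 for p in class_dir.iterdir() if p.suffix.lower() in exts
            )
    return counts
```

Heavily imbalanced class counts here are worth fixing before training, since they skew the classifier.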
Installation
Prerequisites
Step 1. Download and install Miniconda from the official website.
Step 2. Create a conda virtual environment and activate it:
conda create --name tl-classifier python=3.8 -y
conda activate tl-classifier
Step 3. Install PyTorch. Ensure your PyTorch installation is compatible with CUDA 11.6, as current Autoware requires it:
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
Install mmlab/mmpretrain
Step 1. Install mmpretrain from source
cd ~/
git clone https://github.com/open-mmlab/mmpretrain.git
cd mmpretrain
pip install -U openmim && mim install -e .
Training
MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.
In the example, we demonstrate the training steps for the MobileNetV2 model, but you can employ alternative classification models such as EfficientNetV2, ResNet, and more.
Create a config file
Generate a configuration file for your preferred model within the `configs` folder:
touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py
Open the configuration file in your preferred text editor and copy in the content below. Adjust the `data_root` variable to match the path of your dataset. You can customize the configuration parameters for the model, dataset, and scheduler to suit your needs.
# Inherit model, schedule and default_runtime from base model
_base_ = [
'../_base_/models/mobilenet_v2_1x.py',
'../_base_/schedules/imagenet_bs256_epochstep.py',
'../_base_/default_runtime.py'
]
# Set the number of classes to the model
# You can also change other model parameters here
# For detailed descriptions of model parameters, please refer to the link below
# [Customize model](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html)
model = dict(head=dict(num_classes=3, topk=(1, 3)))
# Set max epochs and validation interval
train_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)
# Set optimizer and lr scheduler
optim_wrapper = dict(
optimizer=dict(type='SGD', lr=0.001, momentum=0.9))
param_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)
dataset_type = 'CustomDataset'
data_root = "/PATH/OF/YOUR/DATASET"
# Customize data preprocessing and dataloader pipeline for training set
# These parameters were calculated for the sample dataset
data_preprocessor = dict(
mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],
std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],
num_classes=3,
to_rgb=True,
)
# Customize data preprocessing and dataloader pipeline for train set
# For detailed descriptions of the data pipeline, please refer to the link below
# [Customize data pipeline](https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=224),
dict(type='RandomFlip', prob=0.5, direction='horizontal'),
dict(type='PackInputs'),
]
train_dataloader = dict(
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='',
data_prefix='train',
with_label=True,
pipeline=train_pipeline,
),
num_workers=8,
batch_size=32,
sampler=dict(type='DefaultSampler', shuffle=True)
)
# Customize data preprocessing and dataloader pipeline for test set
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=224),
dict(type='PackInputs'),
]
# Customize data preprocessing and dataloader pipeline for validation set
val_cfg = dict()
val_dataloader = dict(
dataset=dict(
type=dataset_type,
data_root=data_root,
ann_file='',
data_prefix='val',
with_label=True,
pipeline=test_pipeline,
),
num_workers=8,
batch_size=32,
sampler=dict(type='DefaultSampler', shuffle=True)
)
val_evaluator = dict(topk=(1, 3,), type='Accuracy')
test_dataloader = val_dataloader
test_evaluator = val_evaluator
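The `mean` and `std` values in `data_preprocessor` above are per-channel statistics of the dataset in the 0-255 pixel range. For a custom dataset they can be recomputed along these lines (a stdlib-only sketch that assumes images are already decoded into nested `[row][col][channel]` lists; real pipelines would use NumPy over the actual image files):

```python
import math

def channel_mean_std(images):
    """Per-channel mean and std over all pixels, in the 0-255 range.

    `images` is a list of images, each a nested [row][col][channel] list.
    """
    sums = [0.0, 0.0, 0.0]
    sq_sums = [0.0, 0.0, 0.0]
    n = 0
    for img in images:
        for row in img:
            for px in row:
                for c in range(3):
                    sums[c] += px[c]
                    sq_sums[c] += px[c] ** 2
                n += 1
    mean = [s / n for s in sums]
    # Population std via E[x^2] - E[x]^2
    std = [math.sqrt(sq / n - m ** 2) for sq, m in zip(sq_sums, mean)]
    return mean, std
```

The resulting lists can be dropped directly into the `mean` and `std` fields of `data_preprocessor`.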
Start training
cd ~/mmpretrain
python tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py
Training logs and weights will be saved in the `work_dirs/mobilenet-v2_8xb32_custom` folder.
Convert PyTorch model to ONNX model
Install mmdeploy
The `mmdeploy` toolset is designed for deploying your trained model onto various target devices. With it, you can convert PyTorch models into the ONNX format.
# Activate your conda environment
conda activate tl-classifier
# Install mmengine and mmcv
mim install mmengine
mim install "mmcv>=2.0.0rc2"
# Install mmdeploy
pip install mmdeploy==1.2.0
# Support onnxruntime
pip install mmdeploy-runtime==1.2.0
pip install mmdeploy-runtime-gpu==1.2.0
pip install onnxruntime-gpu==1.8.1
# Clone the mmdeploy repository
cd ~/
git clone -b main https://github.com/open-mmlab/mmdeploy.git
Convert PyTorch model to ONNX model
cd ~/mmdeploy
# Run deploy.py script
# deploy.py takes its main arguments in this order: deploy config path,
# model config path, checkpoint path, demo image path, and the work directory
python tools/deploy.py \
~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py \
~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \
~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \
/SAMPLE/IMAGE/DIRECTORY \
--work-dir mmdeploy_model/mobilenet_v2
The converted ONNX model will be saved in the `mmdeploy/mmdeploy_model/mobilenet_v2` folder.
After obtaining your ONNX model, update the parameters defined in the launch file (e.g. `model_file_path`, `label_file_path`, `input_h`, `input_w`, ...).
Note that we only support labels defined in `tier4_perception_msgs::msg::TrafficLightElement`.
Assumptions / Known limits
(Optional) Error detection and handling
(Optional) Performance characterization
References/External links
[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.
[2] Tan, Mingxing, and Quoc Le. “EfficientNet: Rethinking model scaling for convolutional neural networks.” International conference on machine learning. PMLR, 2019.
(Optional) Future extensions / Unimplemented parts
Changelog for package autoware_traffic_light_classifier
0.43.0 (2025-03-21)
- Merge remote-tracking branch 'origin/main' into chore/bump-version-0.43
- chore: rename from [autoware.universe]{.title-ref} to [autoware_universe]{.title-ref} (#10306)
- feat(traffic_light_classifier): update diagnostics when harsh backlight is detected (#10218) feat: update diagnostics when harsh backlight is detected
- chore(perception): refactor perception launch
(#10186)
- fundamental change
- style(pre-commit): autofix
- fix typo
- fix params and modify some packages
- pre-commit
- fix
- fix spell check
- fix typo
- integrate model and label path
- style(pre-commit): autofix
- for pre-commit
- run pre-commit
- for awsim
- for simulatior
- style(pre-commit): autofix
- fix grammer in launcher
- add schema for yolox_tlr
- style(pre-commit): autofix
- fix file name
- fix
- rename
- modify arg name to
- fix typo
- change param name
- style(pre-commit): autofix
* chore
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Shintaro Tomie <<58775300+Shin-kyoto@users.noreply.github.com>> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- refactor: add autoware_cuda_dependency_meta (#10073)
- Contributors: Esteve Fernandez, Hayato Mizushima, Kotaro Uetake, Masato Saeki, Yutaka Kondo
0.42.0 (2025-03-03)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- chore: refine maintainer list
(#10110)
- chore: remove Miura from maintainer
* chore: add Taekjin-san to perception_utils package maintainer ---------
- feat(autoware_traffic_light_classifier): add traffic light
classifier schema, README and car and ped launcher
(#10048)
- feat(autoware_traffic_light_classifier):Add traffic light classifier schema and README
- add individual launcher
- style(pre-commit): autofix
- fix description
- fix README and source code
- separate schema in README
- fix README
- fix launcher
- style(pre-commit): autofix
* fix typo ---------Co-authored-by: MasatoSaeki <<masato.saeki@tier4.jp>> Co-authored-by: Masato Saeki <<78376491+MasatoSaeki@users.noreply.github.com>> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- Contributors: Fumiya Watanabe, Shunsuke Miura, Vishal Chauhan
0.41.2 (2025-02-19)
- chore: bump version to 0.41.1 (#10088)
- Contributors: Ryohsuke Mitsudome
0.41.1 (2025-02-10)
0.41.0 (2025-01-29)
- Merge remote-tracking branch 'origin/main' into tmp/bot/bump_version_base
- chore(autoware_traffic_light_classifier): modify docs
(#9819)
- modify docs
- style(pre-commit): autofix
* fix docs ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com>
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility &
tensorrt_common as unified lib for all perception components
(#9762)
- refactor(autoware_tensorrt_common): multi-TensorRT compatibility & tensorrt_common as unified lib for all perception components
- style(pre-commit): autofix
- style(autoware_tensorrt_common): linting
* style(autoware_lidar_centerpoint): typo Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * docs(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- fix(autoware_lidar_transfusion): reuse cast variable
- fix(autoware_tensorrt_common): remove deprecated inference API
* style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> * style(autoware_tensorrt_common): grammar Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- fix(autoware_tensorrt_common): const pointer
- fix(autoware_tensorrt_common): remove unused method declaration
- style(pre-commit): autofix
* refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- fix(autoware_tensorrt_common): return if layer not registered
* refactor(autoware_tensorrt_common): readability Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- fix(autoware_tensorrt_common): rename struct
* style(pre-commit): autofix ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>> Co-authored-by: Kotaro Uetake <<60615504+ktro2828@users.noreply.github.com>>
- Contributors: Amadeusz Szymko, Fumiya Watanabe, Masato Saeki
0.40.0 (2024-12-12)
- Merge branch 'main' into release-0.40.0
- Revert "chore(package.xml): bump version to 0.39.0 (#9587)" This reverts commit c9f0f2688c57b0f657f5c1f28f036a970682e7f5.
- fix: fix ticket links in CHANGELOG.rst (#9588)
- chore(package.xml): bump version to 0.39.0
(#9587)
- chore(package.xml): bump version to 0.39.0
- fix: fix ticket links in CHANGELOG.rst
* fix: remove unnecessary diff ---------Co-authored-by: Yutaka Kondo <<yutaka.kondo@youtalk.jp>>
- fix: fix ticket links in CHANGELOG.rst (#9588)
- fix(autoware_traffic_light_classifier): fix clang-diagnostic-delete-abstract-non-virtual-dtor (#9497) fix: clang-diagnostic-delete-abstract-non-virtual-dtor
- 0.39.0
- update changelog
- Merge commit '6a1ddbd08bd' into release-0.39.0
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(autoware_traffic_light*): add maintainer
(#9280)
- add fundamental commit
* add forgot package ---------
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- refactor(cuda_utils): prefix package and namespace with autoware (#9171)
- Contributors: Esteve Fernandez, Fumiya Watanabe, Masato Saeki, Ryohsuke Mitsudome, Yutaka Kondo, kobayu858
0.39.0 (2024-11-25)
- Merge commit '6a1ddbd08bd' into release-0.39.0
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- fix: fix ticket links to point to https://github.com/autowarefoundation/autoware_universe (#9304)
- chore(autoware_traffic_light*): add maintainer
(#9280)
- add fundamental commit
* add forgot package ---------
- chore(package.xml): bump version to 0.38.0
(#9266)
(#9284)
- unify package.xml version to 0.37.0
- remove system_monitor/CHANGELOG.rst
- add changelog
* 0.38.0
- refactor(cuda_utils): prefix package and namespace with autoware (#9171)
- Contributors: Esteve Fernandez, Masato Saeki, Yutaka Kondo
0.38.0 (2024-11-08)
- unify package.xml version to 0.37.0
- refactor(tensorrt_common)!: fix namespace, directory structure &
move to perception namespace
(#9099)
- refactor(tensorrt_common)!: fix namespace, directory structure & move to perception namespace
- refactor(tensorrt_common): directory structure
- style(pre-commit): autofix
* fix(tensorrt_common): correct package name for logging ---------Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]\@users.noreply.github.com> Co-authored-by: Kenzo Lobos Tsunekawa <<kenzo.lobos@tier4.jp>>
- fix(traffic_light_classifier): fix traffic light monitor warning (#8412) fix traffic light monitor warning
- fix(autoware_traffic_light_classifier): fix passedByValue (#8392) fix:passedByValue
- fix(traffic_light_classifier): fix zero size roi bug
(#7608)
- fix: continue to process when input roi size is zero
* fix: consider when roi size is zero, rois is empty fix * fix: use emplace_back instead of push_back for adding images and backlight indices The code changes in [traffic_light_classifier_node.cpp]{.title-ref} modify the way images and backlight indices are added to the respective vectors. Instead of using [push_back]{.title-ref}, the code now uses [emplace_back]{.title-ref}. This change improves performance and ensures proper object construction.
- refactor: bring back for loop skim and output_msg filling
- chore: refactor code to handle empty input ROIs in traffic_light_classifier_node.cpp
* refactor: using index instead of vector length ---------
- fix(traffic_light_classifier): fix funcArgNamesDifferent
(#8153)
- fix:funcArgNamesDifferent
* fix:clang format ---------
- refactor(traffic_light_*)!: add package name prefix of autoware_
(#8159)
- chore: rename traffic_light_fine_detector to autoware_traffic_light_fine_detector
- chore: rename traffic_light_multi_camera_fusion to autoware_traffic_light_multi_camera_fusion
- chore: rename traffic_light_occlusion_predictor to autoware_traffic_light_occlusion_predictor
- chore: rename traffic_light_classifier to autoware_traffic_light_classifier
- chore: rename traffic_light_map_based_detector to autoware_traffic_light_map_based_detector
* chore: rename traffic_light_visualization to autoware_traffic_light_visualization ---------
- Contributors: Amadeusz Szymko, Sho Iwasawa, Taekjin LEE, Yutaka Kondo, kobayu858
0.26.0 (2024-04-03)
Dependant Packages
Name | Deps |
---|---|
tier4_perception_launch | |
Launch files
- launch/car_traffic_light_classifier.launch.xml
- data_path [default: $(env HOME)/autoware_data]
- input/image [default: ~/image_raw]
- input/rois [default: ~/rois]
- output/traffic_signals [default: classified/traffic_signals]
- param_path [default: $(find-pkg-share autoware_traffic_light_classifier)/config/car_traffic_light_classifier.param.yaml]
- model_path [default: $(var data_path)/traffic_light_classifier/traffic_light_classifier_mobilenetv2_batch_6.onnx]
- label_path [default: $(var data_path)/traffic_light_classifier/lamp_labels.txt]
- build_only [default: false]
- launch/pedestrian_traffic_light_classifier.launch.xml
- data_path [default: $(env HOME)/autoware_data]
- input/image [default: ~/image_raw]
- input/rois [default: ~/rois]
- output/traffic_signals [default: classified/traffic_signals]
- param_path [default: $(find-pkg-share autoware_traffic_light_classifier)/config/pedestrian_traffic_light_classifier.param.yaml]
- model_path [default: $(var data_path)/traffic_light_classifier/ped_traffic_light_classifier_mobilenetv2_batch_6.onnx]
- label_path [default: $(var data_path)/traffic_light_classifier/lamp_labels_ped.txt]
- build_only [default: false]