Repository Summary
Field | Value |
---|---|
Checkout URI | https://github.com/orangesodahub/crlfnet.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2023-03-25 |
Dev Status | UNMAINTAINED |
CI Status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
radar_plugin | 1.0.0 |
msgs | 0.0.0 |
per_msgs | 0.0.0 |
pkg | 0.0.0 |
point_cloud | 0.0.0 |
site_model | 1.0.0 |
velodyne_description | 1.0.9 |
velodyne_gazebo_plugins | 1.0.9 |
velodyne_simulator | 1.0.9 |
README
CRLFnet
The source code of the CRLFnet.
INSTALL & BUILD
Env: Ubuntu 20.04 + ROS Noetic + Python 3.x
- If using Google Colab, the recommended environment is CUDA 10.2 + PyTorch 1.6.
- Refer to INSTALL.md for the installation of OpenPCDet.
- Install the ros_numpy package manually: [Source code] [Install]
Absolute paths in the following files may need your attention:

File path | Line(s) |
---|---|
src/camera_info/get_cam_info.cpp | 26, 64, 102, 140, 178, 216, 254, 292, 330, 368 |
src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pointrcnn.yaml | 4, 5 |
src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pv_rcnn.yaml | 5, 6 |
Docker
Build the project from the Dockerfile:
docker build -t [name]:tag /docker/
or pull the image directly:
docker pull gzzyyxy/crlfnet:yxy
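To work inside the pulled image, a typical interactive run looks like the following; the container name is only an illustrative assumption and is not part of the original instructions.
# start an interactive shell in the pulled image (container name is illustrative)
docker run -it --name crlfnet gzzyyxy/crlfnet:yxy bash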
Launch the Site
This needs ROS to be installed.
cd /ROOT
# launch the site
roslaunch site_model spwan.launch
# launch the vehicles (optional)
roslaunch pkg racecar.launch
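As an optional sanity check (not part of the original steps), the site's topics should be visible once the launch files are up:
# list the active ROS topics published by the site
rostopic list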
Rad-Cam Fusion
This part applies a Kalman filter to real-time radar data.
Necessary Configurations on GPU and model data
- Set use_cuda to True in src/site_model/config/config.yaml to use the GPU.
- Download yolo_weights.pth from Jbox and move it to src/site_model/src/utils/yolo/model_data.
Run The Rad-Cam Fusion Model
The steps to run the radar-camera fusion are listed below.
For the last command, an optional parameter --save or -s is available if you need to save the tracks of vehicles as images. The --mode or -m parameter has three options: normal, off-yolo and from-save. The off-yolo and from-save modes enable the user to run YOLO separately to simulate a higher FPS.
#--- AFTER THE SITE LAUNCHED ---#
# run the radar message filter
rosrun site_model radar_listener.py
# run the rad-cam fusion program
cd src/site_model
python -m src.RadCamFusion.fusion [-m MODE] [-s]
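For example, a run that uses the off-yolo mode and saves the vehicle tracks as images (the flag values are just one combination of the options described above):
# run from src/site_model, as in the previous block
python -m src.RadCamFusion.fusion -m off-yolo -s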
Camera Calibration
The calibration parameters are needed for the camera-data transformations. Once the physical models are modified, update the camera calibration parameters:
#--- AFTER THE SITE LAUNCHED ---#
# get physical parameters of cameras
rosrun site_model get_cam_info
# generate calibration formula according to parameters of cameras
python src/site_model/src/utils/generate_calib.py
Lid-Cam Fusion
This part integrates OpenPCDet for real-time lidar object detection. Refer to CustomDataset.md to see how to work with a self-produced dataset using only raw lidar data.
Config Files
Configurations for model and dataset need to be specified:
- Model configs: tools/cfgs/custom_models/XXX.yaml
- Dataset configs: tools/cfgs/dataset_configs/custom_dataset.yaml

Currently pointrcnn.yaml and pv_rcnn.yaml are supported.
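For reference, the config files named above can be checked from the OpenPCDet root (a quick listing only; the paths are taken from the list above):
# model configs currently supported
ls tools/cfgs/custom_models/pointrcnn.yaml tools/cfgs/custom_models/pv_rcnn.yaml
# dataset config
ls tools/cfgs/dataset_configs/custom_dataset.yaml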
Datasets
Create dataset infos before training:
cd OpenPCDet/
python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
Files custom_infos_train.pkl, custom_dbinfos_train.pkl and custom_infos_test.pkl will be saved to data/custom.
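To confirm the info files were generated, a simple check can be run from the OpenPCDet directory (not part of the original steps):
# the three .pkl files listed above should appear here
ls data/custom/*.pkl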
Train
Specify the model using YAML files defined above.
cd tools/
python train.py --cfg_file path/to/config/file/
For example, if using PV_RCNN for training:
cd tools/
python train.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --batch_size 2 --workers 4 --epochs 80
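The same pattern applies to the other supported model; the batch size, worker count and epoch values below simply mirror the PV_RCNN example and may need tuning:
cd tools/
python train.py --cfg_file cfgs/custom_models/pointrcnn.yaml --batch_size 2 --workers 4 --epochs 80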
Pretrained Model
Download the pretrained models through these links:

Model | Time cost | URL |
---|---|---|
PointRCNN | ~3h | Google drive / Jbox |
PV_RCNN | ~6h | Google drive / Jbox |
Predict (Local)
Prediction on a local dataset helps to check the result of training. Prepare the input properly.
python pred.py --cfg_file path/to/config/file/ --ckpt path/to/checkpoint/ --data_path path/to/dataset/
For example:
python pred.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --ckpt ../output/custom_models/pv_rcnn/default/ckpt/checkpoint_epoch_80.pth --data_path ../data/custom/testing/velodyne/
Visualize the results in rviz; the white boxes represent the vehicles.
Lid-Cam Fusion
Follow these steps to run lidar-camera fusion only. Some of them need separate bash terminals. For the last command, the additional parameter --save_result is required if you need to save the fusion results as images.
#--- AFTER THE SITE LAUNCHED --#
# cameras around lidars start working
python src/site_model/src/LidCamFusion/camera_listener.py
# lidars start working
python src/site_model/src/LidCamFusion/pointcloud_listener.py
# combine all the point clouds and fix their coords
rosrun site_model pointcloud_combiner
# start camera-lidar fusion
cd src/site_model/
python -m src.LidCamFusion.fusion [--config] [--eval] [--re] [--disp] [--printl] [--printm]
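As a concrete illustration only (the exact meaning of each optional flag is not documented above, so this combination is an assumption):
# example run with the evaluation and display flags enabled
cd src/site_model/
python -m src.LidCamFusion.fusion --eval --disp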
TODO…
Issues
Some problems may occur during debugging.
- Set batch_size=1 but still out of memory: https://github.com/open-mmlab/OpenPCDet/issues/140
- Segmentation fault (core dumped) when running demo.py: https://github.com/open-mmlab/OpenPCDet/issues/846
- N > 0 assert faild. CUDA kernel launch blocks must be positive, but got N= 0 when training: https://github.com/open-mmlab/OpenPCDet/issues/945
- raise NotImplementedError, NaN or Inf found in input tensor when training: https://github.com/open-mmlab/OpenPCDet/issues/280
- fix recall calculation bug for empty scene: https://github.com/open-mmlab/OpenPCDet/pull/908
- installation Error “ fatal error: THC/THC.h: No such file or directory #include <THC/THC.h> “: https://github.com/open-mmlab/OpenPCDet/issues/1014
- …
- Feel free to report more issues!