Repository Summary
Field | Value
---|---|
Description | |
Checkout URI | https://github.com/weiaif/offlinerl-interaction.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2022-09-23 |
Dev Status | UNKNOWN |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
lanelet2 | 1.1.1 |
lanelet2_core | 1.1.1 |
lanelet2_examples | 1.1.1 |
lanelet2_io | 1.1.1 |
lanelet2_maps | 1.1.1 |
lanelet2_matching | 1.1.1 |
lanelet2_projection | 1.1.1 |
lanelet2_python | 1.1.1 |
lanelet2_routing | 1.1.1 |
lanelet2_validation | 1.1.1 |
README
OfflineRL-INTERACTION Dataset
This repo is the implementation of the paper “Offline Reinforcement Learning for Autonomous Driving with Real World Driving Data”. It contains I-Sim, a simulator that can replay scenarios from the INTERACTION dataset and can also be used to generate augmented data. It also contains the pipeline for processing real-world driving data, an autonomous driving offline training dataset, and a benchmark with four different algorithms.
Get the INTERACTION Dataset
Process the Real-World Driving Data
```shell
cd offlinedata
python create_demo.py
```
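`create_demo.py` converts the raw INTERACTION vehicle tracks into offline RL transitions. The exact state/action schema is not shown here, so the sketch below is a hedged illustration: it assumes positions sampled at a fixed rate and derives velocities and accelerations by finite differences, yielding `(state, action, next_state)` tuples. The function name and schema are stand-ins, not the repo's actual code.

```python
import numpy as np

def tracks_to_transitions(xy, dt=0.1):
    """Turn one vehicle's (T, 2) position track into transitions.

    Assumed schema (for illustration only):
      state  = (x, y, vx, vy)
      action = acceleration (ax, ay)
    """
    vel = np.diff(xy, axis=0) / dt            # (T-1, 2) finite-difference velocities
    acc = np.diff(vel, axis=0) / dt           # (T-2, 2) accelerations as "actions"
    states = np.hstack([xy[:-2], vel[:-1]])   # align positions and velocities to T-2 steps
    next_states = np.hstack([xy[1:-1], vel[1:]])
    return states, acc, next_states

# Toy straight-line track sampled at 10 Hz.
track = np.stack([np.linspace(0, 9, 10), np.zeros(10)], axis=1)
s, a, s2 = tracks_to_transitions(track)
print(s.shape, a.shape, s2.shape)  # (8, 4) (8, 2) (8, 4)
```

A constant-velocity track, as here, yields zero accelerations, which is a quick sanity check for the differencing.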
Deploy I-Sim
Install lanelet2 via Docker
```shell
cd Lanelet2-master
docker build -t #image_name# .
```
Run Docker with port mapping
```shell
docker run -it -e DISPLAY -p 5557-5561:5557-5561 -v $path for 'interaction-master'$:/home/developer/workspace/interaction-dataset-master -v /tmp/.X11-unix:/tmp/.X11-unix --user="$(id --user):$(id --group)" --name #container_name# #image_name#:latest bash
```
Software update

Inside the container started from #image_name#:
```shell
sudo apt update
sudo apt install python-tk  # python2
```
Start I-Sim
```shell
docker restart #container_name#
docker exec -it #container_name# bash
cd interaction-dataset-master/python/interaction_gym_merge/
export DISPLAY=:0
```
Test and run I-Sim
```shell
python interaction_env.py "DR_CHN_Merging_ZS"
```
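Given the `interaction_gym_merge` naming, `interaction_env.py` presumably exposes a Gym-style environment (reset/step loop). The sketch below shows that interaction pattern against a stub environment so it runs standalone; the class and method names are stand-ins, not the repo's actual API.

```python
class StubInteractionEnv:
    """Minimal stand-in for the I-Sim environment (assumed Gym-style API)."""
    def __init__(self, scenario):
        self.scenario = scenario
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0]                 # dummy observation

    def step(self, action):
        self.t += 1
        obs = [float(self.t), 0.0]
        reward = -abs(action)             # dummy shaping reward
        done = self.t >= 5                # fixed 5-step episode
        return obs, reward, done, {}

env = StubInteractionEnv("DR_CHN_Merging_ZS")
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, r, done, info = env.step(0.0)    # trivial zero-action policy
    total += r
print(total)  # 0.0 for the zero action
```

The same loop structure is what an offline RL evaluation would use against the real environment, with the stub swapped for the actual I-Sim class.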
Offline RL Training
We provide implementations of three offline RL algorithms and one imitation learning algorithm for evaluation:
| Offline RL method | Name | Paper |
|---|---|---|
| Behavior Cloning | bc | paper |
| BCQ | bcq | paper |
| TD3+BC | td3_bc | paper |
| CQL | cql | paper |
After processing the dataset, you can evaluate it with an offline RL method. For example, to run TD3+BC:
```shell
python train_offline.py --port 5557 --scenario_name DR_CHN_Merging_ZS --alog_name TD3_BC --buffer_name CHN_human_expert_0
```
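TD3+BC (Fujimoto & Gu, 2021) modifies the TD3 actor objective by adding a behavior-cloning term and normalizing the Q term: minimize `-lam * Q(s, pi(s)) + MSE(pi(s), a)` with `lam = alpha / mean(|Q|)` and `alpha = 2.5` by default. The NumPy sketch below computes that objective on a toy batch; it illustrates the published loss, not this repo's exact training code.

```python
import numpy as np

def td3_bc_actor_loss(q_values, pi_actions, data_actions, alpha=2.5):
    """TD3+BC actor loss: -lam * mean(Q) + MSE(pi(s), a),
    with lam = alpha / mean(|Q|) so the Q term is scale-invariant."""
    lam = alpha / np.mean(np.abs(q_values))
    bc_term = np.mean((pi_actions - data_actions) ** 2)
    return -lam * np.mean(q_values) + bc_term

# Toy batch: when pi matches the dataset actions, the BC term vanishes.
q = np.array([1.0, 2.0, 3.0])
a_pi = np.array([0.1, 0.2, 0.3])
loss = td3_bc_actor_loss(q, a_pi, a_pi)
print(round(loss, 6))  # -2.5
```

With all-positive Q values and a zero BC term the loss reduces to `-alpha * mean(Q) / mean(|Q|) = -alpha`, which is why the example prints -2.5.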
Visualization of the results
Buffer: offline_expert, algo: TD3+BC
Buffer: expert_exploratory, algo: TD3+BC