Repository Summary

| Field | Value |
|---|---|
| Description | [ICLR 2025 Spotlight] MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility |
| Checkout URI | https://github.com/metadriverse/metaurban.git |
| VCS Type | git |
| VCS Version | main |
| Last Updated | 2025-02-25 |
| Dev Status | UNKNOWN |
| CI status | No Continuous Integration |
| Released | UNRELEASED |
| Tags | No category tags. |
| Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages

| Name | Version |
|---|---|
| metaurban_example_bridge | 0.0.0 |
README
MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility
MetaUrban is a cutting-edge simulation platform designed for Embodied AI research in urban spaces. It offers:
- Infinite Urban Scene Generation: Create diverse, interactive city environments.
- High-Quality Urban Objects: Includes real-world infrastructure and clutter.
- Rigged Human Models: SMPL-compatible models with diverse motions.
- Urban Agents: Including delivery robots, cyclists, skateboarders, and more.
- Flexible User Interfaces: Compatible with mouse, keyboard, joystick, and racing wheel.
- Configurable Sensors: Supports RGB, depth, semantic map, and LiDAR.
- Rigid-Body Physics: Realistic mechanics for agents and environments.
- OpenAI Gym Interface: Seamless integration for AI and reinforcement learning tasks (see the sketch after this list).
- Framework Compatibility: Integrates with Ray, Stable Baselines, Torch, Imitation, etc.
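As a taste of the Gym interface, here is a minimal random-action rollout. The environment class, import path, and config key below are assumptions for illustration only; see metaurban/examples for the actual entry points and the exact reset/step signatures the simulator exposes.

```python
# Minimal Gym-style rollout sketch. SidewalkStaticMetaUrbanEnv, its import path,
# and the use_render key are assumed names for illustration; check
# metaurban/examples for the actual entry points.
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

env = SidewalkStaticMetaUrbanEnv(dict(use_render=False))  # assumed config key
obs, info = env.reset()                       # assumes a Gymnasium-style reset
for _ in range(100):
    action = env.action_space.sample()        # random actions, just to exercise the loop
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

Because the environment follows the Gym API, it can be wrapped directly by RL libraries such as Stable Baselines3 or Ray RLlib.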
Check out the MetaUrban Documentation to learn more!
Latest Updates
- [24/01/25] v0.1.0: The first official release of MetaUrban :wrench: [release notes]
Citation
If you find MetaUrban helpful for your research, please cite the following BibTeX entry.
@article{wu2025metaurban,
title={MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility},
author={Wu, Wayne and He, Honglin and He, Jack and Wang, Yiran and Duan, Chenda and Liu, Zhizheng and Li, Quanyi and Zhou, Bolei},
journal={ICLR},
year={2025}
}
Quick Start
Hardware Recommendations
To ensure the best experience with MetaUrban, please review the following hardware guidelines:
- Tested Platforms:
  - Linux: Supported and recommended (preferably Ubuntu).
  - Windows: Works with WSL2.
  - MacOS: Supported.
- Recommended Hardware:
  - GPU: Nvidia GPU with at least 8GB RAM and 3GB VRAM.
  - Storage: Minimum of 10GB free space.
- Performance Benchmarks:
  - Tested GPUs: Nvidia RTX-3090, RTX-4080, RTX-4090, RTX-A5000, Tesla V100.
  - Example benchmark: running metaurban/examples/drive_in_static_env.py achieves ~60 FPS with ~2GB GPU memory usage.
One-step Installation
git clone -b main --depth 1 https://github.com/metadriverse/metaurban.git
cd metaurban
bash install.sh
conda activate metaurban
Step-by-step Installation
If installation does not succeed via install.sh, try the step-by-step installation below.
Create conda environment and install metaurban
conda create -n metaurban python=3.9
conda activate metaurban
pip install -e .
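An optional sanity check confirms the editable install is importable before moving on; the ORCA extension in the next step is compiled separately.

```python
# Optional sanity check: confirm the editable install is importable.
# Run inside the activated "metaurban" conda environment.
import metaurban

print("MetaUrban imported from:", metaurban.__file__)
```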
Install ORCA algorithm for trajectory generation
conda install pybind11 -c conda-forge
cd metaurban/orca_algo && rm -rf build
bash compile.sh && cd ../..
Note that the required system dependencies must be installed before compiling ORCA; more details can be found in [FAQs](documentation/FAQs.md).
Then install the following libs to use MetaUrban for RL training and testing.
```bash
pip install stable_baselines3 imitation tensorboard wandb scikit-image pyyaml gdown
```
Quick Run
We provide a script to quickly run our simulator with a tiny subset of 3D assets. The assets (~500mb) will be downloaded automatically the first time you run the script:
python metaurban/examples/tiny_example.py
User Registration
To access the entire dataset and use the complete version, please register by filling out the registration form. This process will be triggered automatically when you attempt to pull the full set of assets.
Assets
The assets are compressed and password-protected. You'll be prompted to fill out a registration form the first time you run the script to download all assets. You'll receive the password once the form is completed.
Assets will be downloaded automatically when first running the script:
python metaurban/examples/drive_in_static_env.py
Or use the script
python metaurban/pull_asset.py --update
If you cannot download the assets with the Python scripts, please download them via the link in the Python file and organize the folder as:
-metaurban
 -metaurban
  -assets
  -assets_pedestrian
  -base_class
  -...
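If you placed the archives manually, a small sketch like the following can confirm the asset folders ended up where the simulator expects them; the folder names are taken from the layout above, and the script assumes it is run from the repository root.

```python
# Sketch: verify that manually downloaded assets sit in the expected folders.
# Folder names come from the layout above; run this from the repository root.
from pathlib import Path

expected = [Path("metaurban/assets"), Path("metaurban/assets_pedestrian")]
for folder in expected:
    print(f"{folder}: {'ok' if folder.is_dir() else 'MISSING'}")
```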
Docker Setup
We provide a Dockerfile for MetaUrban. This works on machines with an NVIDIA GPU. To set up MetaUrban using Docker, follow the steps below:
[sudo] docker -D build -t metaurban .
[sudo] docker run -it metaurban
cd metaurban/orca_algo && rm -rf build
bash compile.sh && cd ../..
Then you can run the simulator in docker.
Simulation Environment Roam
We provide examples to demonstrate the features and basic usage of MetaUrban after local installation.
Point Navigation Environment
In a point navigation environment, there will be only static objects besides the ego agent in the scenario.
Run the following command to launch a simple scenario with manual control. Press `W,S,A,D` to control the delivery robot.
python -m metaurban.examples.drive_in_static_env --density_obj 0.4
Press the reset key to load a new scenario. If there is no response when you press `W,S,A,D`, press `T` to enable manual control.
Social Navigation Environment
In a social navigation environment, there will be vehicles, pedestrians, and some other agents in the scenario.
Run the following command to launch a simple scenario with manual control. Press `W,S,A,D` to control the delivery robot.
```bash
python -m metaurban.examples.drive_in_dynamic_env --density_obj 0.4 --density_ped 1.0
```
Run a Pre-Trained (PPO) Model
We provide RL models trained on the task of navigation, which can be used to preview the performance of RL agents.
python -m metaurban.examples.drive_with_pretrained_policy
Model Training and Evaluation
Reinforcement Learning
Training
python RL/PointNav/train_ppo.py
This runs PPO training in the PointNav environment. You can change the parameters in the file.
python RL/SocialNav/train_ppo.py
This runs PPO training in the SocialNav environment. You can change the parameters in the file.
We have only tested training on Linux; there may be issues on Windows and MacOS.
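For orientation, PPO training on a MetaUrban environment with Stable Baselines3 boils down to the sketch below. The environment class and config key are assumptions for illustration; the actual setup and hyperparameters live in RL/PointNav/train_ppo.py.

```python
# Rough sketch of PPO training with Stable Baselines3 on a MetaUrban env.
# SidewalkStaticMetaUrbanEnv and use_render are assumed names; the real setup
# and hyperparameters live in RL/PointNav/train_ppo.py.
from stable_baselines3 import PPO
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

env = SidewalkStaticMetaUrbanEnv(dict(use_render=False))  # assumed config key
model = PPO("MlpPolicy", env, verbose=1, tensorboard_log="./ppo_pointnav_logs")
model.learn(total_timesteps=100_000)
model.save("ppo_pointnav")
env.close()
```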
Evaluation
We provide a script to evaluate the quantitative performance of the RL agent:
python RL/PointNav/eval_ppo.py --policy ./pretrained_policy_576k.zip
This evaluates the provided pre-trained policy as an example.
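Conceptually, evaluation is just loading the saved policy and rolling it out; a minimal sketch is below. The environment class and config key are assumptions, and the provided eval script reports the official metrics.

```python
# Minimal evaluation sketch: load a saved SB3 policy and report average return.
# SidewalkStaticMetaUrbanEnv and use_render are assumed names; the provided
# eval script computes the official metrics.
from stable_baselines3 import PPO
from metaurban.envs import SidewalkStaticMetaUrbanEnv  # assumed import path

env = SidewalkStaticMetaUrbanEnv(dict(use_render=False))  # assumed config key
model = PPO.load("./pretrained_policy_576k.zip")

returns = []
for _ in range(10):                          # 10 evaluation episodes
    obs, info = env.reset()
    done, ep_return = False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
        ep_return += reward
        done = terminated or truncated
    returns.append(ep_return)
print("Average return over 10 episodes:", sum(returns) / len(returns))
env.close()
```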
Imitation Learning
Data collection
python scripts/collect_data_in_custom_env.py
This collects the expert data used for IL. You can change the parameters in the file custom_metaurban_env.yaml to modify the environment.
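To see which parameters the collection script will use before launching it, you can peek at the YAML config. This assumes custom_metaurban_env.yaml is a plain key/value mapping and that you adjust the path to wherever the file lives in your checkout; the exact schema is defined by the collection script.

```python
# Sketch: inspect the environment parameters used for data collection.
# Assumes custom_metaurban_env.yaml is a plain key/value mapping; adjust the
# path to where the file lives in your checkout.
import yaml

with open("custom_metaurban_env.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```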
Training
python IL/PointNav/train_BC.py
For behavior cloning, you should change expert_data_path to point to your collected expert data.
python IL/PointNav/train_GAIL.py
For GAIL, you should likewise change expert_data_path.
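For reference, behavior cloning itself is plain supervised regression from observations to expert actions. The generic PyTorch sketch below is not the repo's train_BC.py; the random arrays stand in for data loaded from expert_data_path, and the dimensions are placeholders.

```python
# Generic behavior-cloning sketch in PyTorch (NOT the repo's IL/PointNav/train_BC.py).
# The random arrays stand in for data loaded from expert_data_path, and the
# observation/action dimensions are placeholders.
import numpy as np
import torch
import torch.nn as nn

obs_dim, act_dim = 270, 2                                   # placeholder dimensions
obs = torch.tensor(np.random.rand(1024, obs_dim), dtype=torch.float32)
acts = torch.tensor(np.random.uniform(-1, 1, (1024, act_dim)), dtype=torch.float32)

policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim), nn.Tanh(),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

for epoch in range(10):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, acts)               # imitate expert actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: BC loss = {loss.item():.4f}")
```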
Questions and Support
For frequently asked questions about installing, RL training and other modules, please refer to: FAQs
Can't find the answer to your question? Try asking the developers and community on our Discussions forum.
Acknowledgement
The simulator could not have been built without the help of the Panda3D community and the following open-source projects:
- panda3d-simplepbr: https://github.com/Moguri/panda3d-simplepbr
- panda3d-gltf: https://github.com/Moguri/panda3d-gltf
- RenderPipeline (RP): https://github.com/tobspr/RenderPipeline
- Water effect for RP: https://github.com/kergalym/RenderPipeline
- procedural_panda3d_model_primitives: https://github.com/Epihaius/procedural_panda3d_model_primitives
- DiamondSquare for terrain generation: https://github.com/buckinha/DiamondSquare
- KITSUNETSUKI-Asset-Tools: https://github.com/kitsune-ONE-team/KITSUNETSUKI-Asset-Tools
- Objaverse: https://github.com/allenai/objaverse-xl
- OmniObject3D: https://github.com/omniobject3d/OmniObject3D
- Synbody: https://github.com/SynBody/SynBody
- BEDLAM: https://github.com/pixelite1201/BEDLAM
- ORCA: https://gamma.cs.unc.edu/ORCA/