cyberrunner repository
Repository Summary
Field | Value |
---|---|
Checkout URI | https://github.com/thomasbi1/cyberrunner.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2024-10-21 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
cyberrunner_camera | 0.0.0 |
cyberrunner_dreamer | 0.0.0 |
cyberrunner_dynamixel | 0.0.0 |
cyberrunner_interfaces | 0.0.0 |
cyberrunner_state_estimation | 0.0.0 |
README
CyberRunner
CyberRunner is an AI robot whose task is to learn how to play the popular and widely accessible labyrinth marble game. It is able to beat the best human player with only 6 hours of practice.
This repository contains all necessary code and documentation to build your own CyberRunner robot, and let the robot learn to solve the maze!
Author: Thomas Bi
With contributions by: Ethan Marot, Tim Flückiger, Cara Koepele, Aswin Ramachandran
Overview
CyberRunner exploits recent advances in model-based reinforcement learning and its ability to make informed decisions about potentially successful behaviors by planning into the future. The robot learns by collecting experience. While playing the game, it captures observations and receives rewards based on its performance, all through the “eyes” of a camera looking down at the labyrinth. A memory is kept of the collected experience. Using this memory, the model-based reinforcement learning algorithm learns how the system behaves, and based on its understanding of the game it recognizes which strategies and behaviors are more promising. Consequently, the way the robot uses the two motors – its “hands” – to play the game is continuously improved. Importantly, the robot does not stop playing to learn; the algorithm runs concurrently with the robot playing the game. As a result, the robot keeps getting better, run after run.
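The loop described above can be pictured as two concurrent processes sharing one experience memory: one keeps playing the game, the other keeps learning from the stored experience. The sketch below is a minimal, hypothetical illustration of that structure only; the names (`LabyrinthEnv`, `WorldModelAgent`, `ReplayBuffer`) and all behavior are stand-in stubs and do not reflect the actual cyberrunner packages or their interfaces.

```python
# Hypothetical sketch of the concurrent "play and learn" loop described above.
# All classes are illustrative stubs, not the cyberrunner_dreamer / ROS 2 API.
import random
import threading
import time


class LabyrinthEnv:
    """Stub for the physical maze: camera observation in, two motor commands out."""
    def reset(self):
        return {"ball_xy": (0.0, 0.0)}

    def step(self, action):
        obs = {"ball_xy": (random.random(), random.random())}
        reward = random.random()  # placeholder for progress along the maze path
        return obs, reward


class WorldModelAgent:
    """Stub agent: acts from observations, improves from replayed experience."""
    def act(self, obs):
        return (random.uniform(-1, 1), random.uniform(-1, 1))  # two motor commands

    def update(self, batch):
        time.sleep(0.01)  # placeholder for a world-model / policy update step


class ReplayBuffer:
    """Memory of collected experience, shared between the two threads."""
    def __init__(self):
        self._steps, self._lock = [], threading.Lock()

    def add(self, step):
        with self._lock:
            self._steps.append(step)

    def sample(self, n):
        with self._lock:
            return self._steps[-n:]  # simplistic: most recent steps only


def play(env, agent, buffer, stop):
    """The robot never stops playing; every step is stored in memory."""
    obs = env.reset()
    while not stop.is_set():
        action = agent.act(obs)
        obs, reward = env.step(action)
        buffer.add((obs, action, reward))


def learn(agent, buffer, stop):
    """Learning runs concurrently with playing, using the shared memory."""
    while not stop.is_set():
        batch = buffer.sample(64)
        if batch:
            agent.update(batch)


if __name__ == "__main__":
    env, agent, buffer = LabyrinthEnv(), WorldModelAgent(), ReplayBuffer()
    stop = threading.Event()
    threading.Thread(target=learn, args=(agent, buffer, stop), daemon=True).start()
    threading.Timer(2.0, stop.set).start()  # run the demo for a couple of seconds
    play(env, agent, buffer, stop)
```

In the real system these roles are presumably split across the cyberrunner packages (camera capture, state estimation, motor control, and the learning algorithm) rather than two threads in a single script; see the Docs for the actual setup.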
Documentation
To get started with CyberRunner, please refer to the Docs.
Citing
If you use this work in an academic context, please cite the following publication:
T. Bi, R. D'Andrea, "Sample-Efficient Learning to Solve a Real-World Labyrinth Game Using Data-Augmented Model-Based Reinforcement Learning", 2023. (PDF)

@article{bi2023sample,
  title={Sample-Efficient Learning to Solve a Real-World Labyrinth Game Using Data-Augmented Model-Based Reinforcement Learning},
  author={Bi, Thomas and D'Andrea, Raffaello},
  journal={arXiv preprint arXiv:2312.09906},
  year={2023}
}
License
The source code is released under an AGPL-3.0 license.