Repository Summary
Checkout URI | https://github.com/aws-deepracer/aws-deepracer-follow-the-leader-sample-project.git |
VCS Type | git |
VCS Version | main |
Last Updated | 2022-06-06 |
Dev Status | UNMAINTAINED |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
ctrl_pkg | 0.0.1 |
deepracer_interfaces_pkg | 0.0.1 |
ftl_launcher | 0.0.1 |
ftl_navigation_pkg | 0.0.1 |
object_detection_pkg | 0.0.1 |
webserver_pkg | 0.0.1 |
README
AWS DeepRacer Follow the Leader (FTL) sample project
Overview
The AWS DeepRacer Follow the Leader (FTL) sample project is a sample application built on top of the existing AWS DeepRacer application. It uses an object-detection machine learning model that enables the AWS DeepRacer device to identify and follow a person. For detailed information about the FTL sample project, see the Getting started section.
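To make the idea concrete, the control loop can be sketched as a function that turns a detected person's bounding box into steering and throttle commands. This is a minimal illustrative sketch only: the function name, thresholds, and sign conventions below are hypothetical and are not taken from `ftl_navigation_pkg`.

```python
# Illustrative sketch only: maps a detected bounding box to (steering, throttle).
# The real object_detection_pkg / ftl_navigation_pkg logic differs; all names
# and constants here are hypothetical.

def follow_the_leader_action(box_center_x, box_width, frame_width, max_speed_pct=0.75):
    """Return (steering, throttle), each in [-1, 1], from a person's bounding box."""
    # Steer toward the horizontal offset of the box from the frame center.
    offset = (box_center_x - frame_width / 2) / (frame_width / 2)  # -1 .. 1
    steering = max(-1.0, min(1.0, -offset))  # leader to the left -> positive steering

    # Use the apparent box width as a crude distance proxy: a wide box means
    # the leader is close, so slow down; stop entirely when very close.
    closeness = box_width / frame_width  # 0 .. 1
    if closeness > 0.5:
        throttle = 0.0
    else:
        throttle = max_speed_pct * (1.0 - closeness)
    return steering, throttle
```

In the real application these decisions are split across nodes: `object_detection_pkg` produces the detection, and `ftl_navigation_pkg` converts it into drive messages.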
License
The source code is released under Apache 2.0.
Installation
Follow these steps to install the AWS DeepRacer Follow the Leader (FTL) sample project.
Prerequisites
The AWS DeepRacer device comes with all the prerequisite packages and libraries installed to run the FTL sample project. For more information about the preinstalled set of packages and libraries on the DeepRacer device, and about installing the required build systems, see Getting started with AWS DeepRacer OpenSource. The FTL sample project requires you to install the AWS DeepRacer application on the device, because it leverages most of the packages from the core application.
The following are additional software and hardware requirements for using the FTL sample project on the AWS DeepRacer device.
- Download and optimize the object-detection model: Follow the instructions to download and optimize the object-detection model and copy it to the required location on the AWS DeepRacer device.
- Calibrate the AWS DeepRacer (optional): Follow the instructions to calibrate the mechanics of your AWS DeepRacer vehicle so the vehicle performance is optimal and it behaves as expected.
- Set up the Intel Neural Compute Stick 2 (optional): The `object_detection_node` provides functionality to offload the inference to an Intel Neural Compute Stick 2 connected to the AWS DeepRacer device. This is an optional setting that enhances the inference performance of the object-detection model. For more details about running inference on the Movidius NCS (Neural Compute Stick) with the OpenVINO™ toolkit, see this video.
Attach the Neural Compute Stick 2 firmly in the back slot of the AWS DeepRacer, open a terminal, and run the following commands as the root user to install the dependencies of the Intel Neural Compute Stick 2 on the AWS DeepRacer device.
1. Switch to the root user: `sudo su`
2. Navigate to the OpenVINO installation directory: `cd /opt/intel/openvino_2021/install_dependencies`
3. Set the environment variables required to run the Intel OpenVINO scripts: `source /opt/intel/openvino_2021/bin/setupvars.sh`
4. Run the dependency installation script for the Intel Neural Compute Stick: `./install_NCS_udev_rules.sh`
Downloading and building
Open a terminal on the AWS DeepRacer device and run the following commands as the root user.
1. Switch to the root user before you source the ROS 2 installation: `sudo su`
2. Stop the `deepracer-core.service` that is currently running on the device: `systemctl stop deepracer-core`
3. Source the ROS 2 Foxy setup bash script: `source /opt/ros/foxy/setup.bash`
4. Set the environment variables required to run the Intel OpenVINO scripts: `source /opt/intel/openvino_2021/bin/setupvars.sh`
5. Create a workspace directory for the package: `mkdir -p ~/deepracer_ws && cd ~/deepracer_ws`
6. Clone the entire FTL sample project on the AWS DeepRacer device: `git clone https://github.com/aws-deepracer/aws-deepracer-follow-the-leader-sample-project.git && cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/`
7. Clone the `async_web_server_cpp`, `web_video_server`, and `rplidar_ros` dependency packages on the AWS DeepRacer device: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/ && ./install_dependencies.sh`
8. Fetch the unreleased dependencies: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/ && rosws update`
9. Resolve the dependencies: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/ && rosdep install -i --from-path . --rosdistro foxy -y`
10. Build the packages in the workspace: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/ && colcon build`
Using the FTL sample application
Follow this procedure to use the FTL sample application.
Running the node
To launch the FTL sample application, open another terminal on the AWS DeepRacer device and run the following commands as the root user.
1. Switch to the root user before you source the ROS 2 installation: `sudo su`
2. Source the ROS 2 Foxy setup bash script: `source /opt/ros/foxy/setup.bash`
3. Set the environment variables required to run the Intel OpenVINO scripts: `source /opt/intel/openvino_2021/bin/setupvars.sh`
4. Source the setup script for the installed packages: `source ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/install/setup.bash`
5. Launch the nodes required for the FTL sample project: `ros2 launch ftl_launcher ftl_launcher.py`
Once the FTL sample application is launched, you can follow the steps here to open the AWS DeepRacer vehicle's device console and check out the FTL mode tab, which helps you control the vehicle.
Enabling `followtheleader` mode using the CLI
Once the `ftl_launcher` has been started, open a new terminal as the root user.
1. Switch to the root user before you source the ROS 2 installation: `sudo su`
2. Navigate to the FTL workspace: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/`
3. Source the ROS 2 Foxy setup bash script: `source /opt/ros/foxy/setup.bash`
4. Source the setup script for the installed packages: `source ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/install/setup.bash`
5. Set the mode of the AWS DeepRacer via `ctrl_pkg` to `followtheleader` using the following ROS 2 service call: `ros2 service call /ctrl_pkg/vehicle_state deepracer_interfaces_pkg/srv/ActiveStateSrv "{state: 3}"`
6. Enable `followtheleader` mode using the following ROS 2 service call: `ros2 service call /ctrl_pkg/enable_state deepracer_interfaces_pkg/srv/EnableStateSrv "{is_active: True}"`
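For context, the `state` integer passed to `/ctrl_pkg/vehicle_state` selects the vehicle mode. Only `3` (`followtheleader`) is confirmed by the service call above; the other values in the sketch below follow the conventional ordering of modes in the core AWS DeepRacer application and should be treated as an assumption.

```python
# Assumed mapping of ctrl_pkg vehicle states. Only state 3 (followtheleader)
# is confirmed by the service call shown above; the rest are assumptions based
# on the core AWS DeepRacer application's modes.
VEHICLE_STATES = {
    0: "manual",
    1: "autonomous",
    2: "calibration",
    3: "followtheleader",
}

def vehicle_state_request(mode_name):
    """Return an ActiveStateSrv-style request payload for a mode name."""
    state = next(k for k, v in VEHICLE_STATES.items() if v == mode_name)
    return {"state": state}
```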
Changing the `MAX_SPEED` scale of the AWS DeepRacer
You can modify the `MAX_SPEED` scale of the AWS DeepRacer using a ROS 2 service call in case the car isn't moving as expected. This can occur because of the vehicle battery percentage, the surface on which the car is operating, or other reasons.
1. Switch to the root user before you source the ROS 2 installation: `sudo su`
2. Navigate to the FTL workspace: `cd ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/`
3. Source the ROS 2 Foxy setup bash script: `source /opt/ros/foxy/setup.bash`
4. Source the setup script for the installed packages: `source ~/deepracer_ws/aws-deepracer-follow-the-leader-sample-project/deepracer_follow_the_leader_ws/install/setup.bash`
5. Change the `MAX_SPEED` to xx% of the `MAX` scale: `ros2 service call /ftl_navigation_pkg/set_max_speed deepracer_interfaces_pkg/srv/SetMaxSpeedSrv "{max_speed_pct: 0.xx}"`

   Example: change the `MAX_SPEED` to 75% of the `MAX` scale: `ros2 service call /ftl_navigation_pkg/set_max_speed deepracer_interfaces_pkg/srv/SetMaxSpeedSrv "{max_speed_pct: 0.75}"`
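The effect of `max_speed_pct` can be pictured as a linear scale applied to the throttle the navigation node would otherwise command. The function below is an illustrative sketch under that assumption, not the actual `ftl_navigation_pkg` implementation.

```python
# Illustrative only: how a max-speed percentage might cap throttle output.
def scale_throttle(throttle, max_speed_pct):
    """Scale a throttle value in [-1, 1] by max_speed_pct in (0, 1]."""
    if not 0.0 < max_speed_pct <= 1.0:
        raise ValueError("max_speed_pct must be in (0, 1]")
    return max(-1.0, min(1.0, throttle * max_speed_pct))
```

For example, with `max_speed_pct: 0.75`, a full-forward throttle of `1.0` would be reduced to `0.75`.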
Launch files
The ftl_launcher.py
, included in this package, is the main launcher file that launches all the required nodes for the FTL sample project. This launcher file also includes the nodes from the AWS DeepRacer core application.
```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    ld = LaunchDescription()
    object_detection_node = Node(
        package='object_detection_pkg',
        namespace='object_detection_pkg',
        executable='object_detection_node',
        name='object_detection_node',
        parameters=[{
            'DEVICE': 'CPU',
            'PUBLISH_DISPLAY_OUTPUT': True
        }]
    )
    ftl_navigation_node = Node(
        package='ftl_navigation_pkg',
        namespace='ftl_navigation_pkg',
        executable='ftl_navigation_node',
        name='ftl_navigation_node'
    )
    camera_node = Node(
        package='camera_pkg',
        namespace='camera_pkg',
        executable='camera_node',
        name='camera_node',
        parameters=[
            {'resize_images': False}
        ]
    )
    ctrl_node = Node(
        package='ctrl_pkg',
        namespace='ctrl_pkg',
        executable='ctrl_node',
        name='ctrl_node'
    )
    deepracer_navigation_node = Node(
        package='deepracer_navigation_pkg',
        namespace='deepracer_navigation_pkg',
        executable='deepracer_navigation_node',
        name='deepracer_navigation_node'
    )
    software_update_node = Node(
        package='deepracer_systems_pkg',
        namespace='deepracer_systems_pkg',
        executable='software_update_node',
        name='software_update_node'
    )
    model_loader_node = Node(
        package='deepracer_systems_pkg',
        namespace='deepracer_systems_pkg',
        executable='model_loader_node',
        name='model_loader_node'
    )
    otg_control_node = Node(
        package='deepracer_systems_pkg',
        namespace='deepracer_systems_pkg',
        executable='otg_control_node',
        name='otg_control_node'
    )
    network_monitor_node = Node(
        package='deepracer_systems_pkg',
        namespace='deepracer_systems_pkg',
        executable='network_monitor_node',
        name='network_monitor_node'
    )
    deepracer_systems_scripts_node = Node(
        package='deepracer_systems_pkg',
        namespace='deepracer_systems_pkg',
        executable='deepracer_systems_scripts_node',
        name='deepracer_systems_scripts_node'
    )
    device_info_node = Node(
        package='device_info_pkg',
        namespace='device_info_pkg',
        executable='device_info_node',
        name='device_info_node'
    )
    battery_node = Node(
        package='i2c_pkg',
        namespace='i2c_pkg',
        executable='battery_node',
        name='battery_node'
    )
    inference_node = Node(
        package='inference_pkg',
        namespace='inference_pkg',
        executable='inference_node',
        name='inference_node'
    )
    model_optimizer_node = Node(
        package='model_optimizer_pkg',
        namespace='model_optimizer_pkg',
        executable='model_optimizer_node',
        name='model_optimizer_node'
    )
    rplidar_node = Node(
        package='rplidar_ros2',
        namespace='rplidar_ros',
        executable='rplidar_scan_publisher',
        name='rplidar_scan_publisher',
        parameters=[{
            'serial_port': '/dev/ttyUSB0',
            'serial_baudrate': 115200,
            'frame_id': 'laser',
            'inverted': False,
            'angle_compensate': True,
        }]
    )
    sensor_fusion_node = Node(
        package='sensor_fusion_pkg',
        namespace='sensor_fusion_pkg',
        executable='sensor_fusion_node',
        name='sensor_fusion_node'
    )
    servo_node = Node(
        package='servo_pkg',
        namespace='servo_pkg',
        executable='servo_node',
        name='servo_node'
    )
    status_led_node = Node(
        package='status_led_pkg',
        namespace='status_led_pkg',
        executable='status_led_node',
        name='status_led_node'
    )
    usb_monitor_node = Node(
        package='usb_monitor_pkg',
        namespace='usb_monitor_pkg',
        executable='usb_monitor_node',
        name='usb_monitor_node'
    )
    webserver_publisher_node = Node(
        package='webserver_pkg',
        namespace='webserver_pkg',
        executable='webserver_publisher_node',
        name='webserver_publisher_node'
    )
    web_video_server_node = Node(
        package='web_video_server',
        namespace='web_video_server',
        executable='web_video_server',
        name='web_video_server'
    )
    ld.add_action(object_detection_node)
    ld.add_action(ftl_navigation_node)
    ld.add_action(camera_node)
    ld.add_action(ctrl_node)
    ld.add_action(deepracer_navigation_node)
    ld.add_action(software_update_node)
    ld.add_action(model_loader_node)
    ld.add_action(otg_control_node)
    ld.add_action(network_monitor_node)
    ld.add_action(deepracer_systems_scripts_node)
    ld.add_action(device_info_node)
    ld.add_action(battery_node)
    ld.add_action(inference_node)
    ld.add_action(model_optimizer_node)
    ld.add_action(rplidar_node)
    ld.add_action(sensor_fusion_node)
    ld.add_action(servo_node)
    ld.add_action(status_led_node)
    ld.add_action(usb_monitor_node)
    ld.add_action(webserver_publisher_node)
    ld.add_action(web_video_server_node)
    return ld
```
Configuration file and parameters
Parameter name | Description |
---|---|
`DEVICE` (optional) | If set to `MYRIAD`, the Intel Neural Compute Stick 2 is used for inference; otherwise (including when the parameter is removed), the CPU is used for inference by default. |
`PUBLISH_DISPLAY_OUTPUT` | Set to `True` or `False` depending on whether the inference output images should be published to localhost using `web_video_server`. |
`resize_images` | Set to `True` or `False` depending on whether you want to resize the images in `camera_pkg`. |
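For example, to offload inference to the Compute Stick and turn off the display stream, the `object_detection_node` entry in `ftl_launcher.py` could be edited to read as follows (the parameter values shown are examples, not required settings):

```python
# In ftl_launcher.py: run object detection on the Intel Neural Compute
# Stick 2 and disable publishing of the display output.
object_detection_node = Node(
    package='object_detection_pkg',
    namespace='object_detection_pkg',
    executable='object_detection_node',
    name='object_detection_node',
    parameters=[{
        'DEVICE': 'MYRIAD',
        'PUBLISH_DISPLAY_OUTPUT': False
    }]
)
```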
Demo
Resources
CONTRIBUTING
Contributing guidelines
Thank you for your interest in contributing to our project. Whether it’s a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.
Reporting bugs and requesting features
Use the GitHub issue tracker to report bugs or suggest features.
When filing an issue, check existing open and recently closed issues to make sure someone else hasn’t already reported the issue. Try to include as much information as you can. Details like these are incredibly useful:
- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you’ve made relevant to the bug
- Anything unusual about your environment or deployment
Contributing through pull requests
Contributions made through pull requests are much appreciated. Before sending us a pull request, ensure that:
- You are working against the latest source on the `main` branch.
- You check existing open and recently merged pull requests to make sure someone else hasn't addressed the problem already.
- You open an issue to discuss any significant work; we would hate for your time to be wasted.
To send us a pull request:
- Fork the repository.
- Modify the source; focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
- Ensure local tests pass.
- Commit to your fork using clear commit messages.
- Send us a pull request, answering any default questions in the pull request interface.
- Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
GitHub provides additional documentation on forking a repository and creating a pull request.
Finding ways to contribute
Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (`enhancement`, `bug`, `duplicate`, `help wanted`, `invalid`, `question`, `wontfix`), looking at any `help wanted` issues is a great place to start.
Code of Conduct
This project has adopted the Amazon Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opensource-codeofconduct@amazon.com with any additional questions or comments.
Security issue notifications
If you discover a potential security issue in this project we ask that you notify Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.
Licensing
See the LICENSE file for our project’s licensing. We will ask you to confirm the licensing of your contribution.