stretch_deep_perception package from the stretch_ros2 repo
Package Summary
| Field | Value |
| --- | --- |
| Tags | No category tags. |
| Version | 0.2.0 |
| License | Apache License 2.0 |
| Build type | AMENT_PYTHON |
| Use | RECOMMENDED |
Repository Summary
| Field | Value |
| --- | --- |
| Checkout URI | https://github.com/hello-robot/stretch_ros2.git |
| VCS Type | git |
| VCS Version | humble |
| Last Updated | 2025-03-10 |
| Dev Status | UNMAINTAINED |
| CI Status | No Continuous Integration |
| Released | UNRELEASED |
| Tags | No category tags. |
| Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Maintainers
- Hello Robot Inc.
Overview
stretch_deep_perception provides demonstration code that uses open deep learning models to perceive the world.
This code depends on the stretch_deep_perception_models repository, which should be installed under ~/stretch_user/ on your Stretch robot.
Link to the stretch_deep_perception_models repository: https://github.com/hello-robot/stretch_deep_perception_models
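To confirm the models are in place before launching the demos, a quick check along the following lines may help. This is only a sketch: the directory name stretch_deep_perception_models under ~/stretch_user/ is assumed from the repository name, and check_models_installed is a hypothetical helper, not part of the package.

```python
from pathlib import Path

# Assumed install location: ~/stretch_user/stretch_deep_perception_models
# (the directory name is inferred from the repository name).
MODELS_ROOT = Path.home() / "stretch_user" / "stretch_deep_perception_models"

def check_models_installed(root: Path = MODELS_ROOT) -> bool:
    """Return True if the deep perception models appear to be installed."""
    if not root.is_dir():
        print(f"Models not found at {root}. Clone "
              "https://github.com/hello-robot/stretch_deep_perception_models there.")
        return False
    print(f"Found models directory: {root}")
    return True

if __name__ == "__main__":
    check_models_installed()
```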
Getting Started Demos
There are two demonstrations for you to try.
Face Estimation Demo
First, try running the face detection demonstration via the following command:
ros2 launch stretch_deep_perception stretch_detect_faces.launch.py
RViz should show you the robot, the point cloud from the camera, and information about detected faces. If it detects a face, it should show a 3D planar model of the face and 3D facial landmarks. These deep learning models come from OpenCV and the Open Model Zoo (https://github.com/opencv/open_model_zoo).
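If you would like to use the detections programmatically rather than only viewing them in RViz, a minimal rclpy subscriber sketch is shown below. The topic name /faces/marker_array and the node name are assumptions, not documented interfaces of this package; check the actual topics with `ros2 topic list` while the demo is running.

```python
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import MarkerArray

class FaceMarkerListener(Node):
    """Sketch: log the pose of each face-detection marker shown in RViz."""

    def __init__(self):
        super().__init__('face_marker_listener')
        # '/faces/marker_array' is an assumed topic name; verify it on the robot.
        self.subscription = self.create_subscription(
            MarkerArray, '/faces/marker_array', self.callback, 10)

    def callback(self, msg: MarkerArray):
        # Each marker corresponds to part of a detected face (planar face model,
        # landmarks, etc.); print where each one is in 3D.
        for marker in msg.markers:
            p = marker.pose.position
            self.get_logger().info(
                f'marker {marker.id}: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')

def main():
    rclpy.init()
    node = FaceMarkerListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```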
You can use the keyboard_teleop commands in the terminal where you ran the launch file to move the robot’s head around so that the camera can see your face (a programmatic alternative is sketched after the key list below).
- i: tilt up
- ,: tilt down
- j: pan left
- l: pan right
Pan left and pan right are in terms of the robot’s left and the robot’s right.
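If you prefer to point the head programmatically instead of using keyboard teleop, a sketch of a FollowJointTrajectory action client follows. The action name /stretch_controller/follow_joint_trajectory and the joint names joint_head_pan and joint_head_tilt are assumptions about the Stretch driver; verify them on your robot with `ros2 action list` and `ros2 topic echo /joint_states`.

```python
import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from control_msgs.action import FollowJointTrajectory
from trajectory_msgs.msg import JointTrajectoryPoint
from builtin_interfaces.msg import Duration

class HeadMover(Node):
    """Sketch: send a single pan/tilt goal for the head joints."""

    def __init__(self):
        super().__init__('head_mover')
        # Assumed action server name exposed by the Stretch driver.
        self.client = ActionClient(
            self, FollowJointTrajectory,
            '/stretch_controller/follow_joint_trajectory')

    def look(self, pan_rad: float, tilt_rad: float):
        goal = FollowJointTrajectory.Goal()
        goal.trajectory.joint_names = ['joint_head_pan', 'joint_head_tilt']
        point = JointTrajectoryPoint()
        point.positions = [pan_rad, tilt_rad]
        point.time_from_start = Duration(sec=2)
        goal.trajectory.points = [point]
        self.client.wait_for_server()
        return self.client.send_goal_async(goal)

def main():
    rclpy.init()
    node = HeadMover()
    future = node.look(0.0, -0.3)  # straight ahead, tilted slightly down (radians)
    rclpy.spin_until_future_complete(node, future)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```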
Now shut down everything that was launched by pressing q and Ctrl-C in the terminal.
Object Detection Demo
Second, try running the object detection demo, which uses the tiny YOLO v5 object detection network (https://pytorch.org/hub/ultralytics_yolov5/), via the following command:
ros2 launch stretch_deep_perception stretch_detect_objects.launch.py
RViz will display planar detection regions, and detection class labels will be printed to the terminal.
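To get a feel for what the network does outside of ROS, a minimal sketch using the PyTorch Hub entry point linked above is shown below. The yolov5s variant and the sample image URL come from the YOLOv5 documentation; the launch file's exact model variant and ROS 2 wiring may differ.

```python
import torch

# Load the small YOLOv5 variant from PyTorch Hub (downloads weights on first run).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run detection on a sample image from the YOLOv5 documentation.
results = model('https://ultralytics.com/images/zidane.jpg')

# Print class labels and confidences, similar to what the demo writes to the terminal.
for *_, confidence, class_id in results.xyxy[0].tolist():
    print(model.names[int(class_id)], round(confidence, 2))
```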
License
For license information, please see the LICENSE files.
Package Dependencies
| Name |
| --- |
| ament_copyright |
| ament_flake8 |
| ament_pep257 |
| actionlib_msgs |
| geometry_msgs |
| nav_msgs |
| control_msgs |
| trajectory_msgs |
| rclpy |
| std_msgs |
| sensor_msgs |
| sensor_msgs_py |
| std_srvs |
| tf2 |