Body Perception

This is the repository for Spring WP3 regarding Robot Body Perception.

Main partner: Inria Perception

This project is the ROS implementation of Robot Body Perception, i.e., detecting and tracking the humans in the scene and adding them to the HRI framework. The implementation runs in real time on the ARI robot from PAL Robotics.

Table of Contents
  1. Getting Started
  2. Usage
  3. Roadmap
  4. License
  5. Contact
  6. Acknowledgments

Getting Started

Prerequisites

To run this ROS implementation, you can either install the dependencies manually or build the Docker container image. For a manual installation you will need ROS on your local machine; for the container route you only need Docker.
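As a quick sanity check, the sketch below reports which prerequisite is available; it assumes ROS Noetic in its default install location (adapt the path for other distributions):

```shell
# Quick prerequisite check: manual installation needs ROS (Noetic assumed
# here), the container route only needs Docker.
have_ros=no
[ -f /opt/ros/noetic/setup.bash ] && have_ros=yes
have_docker=no
command -v docker >/dev/null 2>&1 && have_docker=yes
echo "ROS Noetic available: $have_ros"
echo "Docker available:     $have_docker"
```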

First, clone the Body Perception repository and initialize its submodules:

git clone https://gitlab.inria.fr/spring/wp3_av_perception/docker_body_perception.git
cd docker_body_perception
git checkout main
git pull
git submodule update --init
git submodule update --recursive --remote
cd modules/omnicam/
git checkout master
git pull
cd ../pygco/
git checkout main   
git pull
cd ../../src/body_3d_tracker
git checkout main
git pull
cd ../body_to_face_mapper
git checkout main
git pull
cd ../front_fisheye_2d_body_pose_detector/
git checkout main
git pull
cd ../group_detector/
git checkout main
git pull
cd ../hri_msgs
git checkout 0.8.0
git pull
cd ../hri_person_manager/
git checkout master
git pull
cd ../libhri/
git checkout main
git pull
cd ../ros_openpose/
git checkout master
git pull
cd ../skeleton-extrapolate/
git checkout main
git pull
cd ../py_spring_hri/
git checkout main
git pull
cd ../spring_msgs/
git checkout master
git pull
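The per-submodule checkout and pull sequence above can also be scripted. A minimal bash sketch (the path-to-branch mapping mirrors the commands above and should be verified against the repository):

```shell
#!/usr/bin/env bash
# Branch to track for each submodule (bash 4+ for associative arrays).
declare -A branches=(
  [modules/omnicam]=master
  [modules/pygco]=main
  [src/body_3d_tracker]=main
  [src/body_to_face_mapper]=main
  [src/front_fisheye_2d_body_pose_detector]=main
  [src/group_detector]=main
  [src/hri_msgs]=0.8.0
  [src/hri_person_manager]=master
  [src/libhri]=main
  [src/ros_openpose]=master
  [src/skeleton-extrapolate]=main
  [src/py_spring_hri]=main
  [src/spring_msgs]=master
)
for path in "${!branches[@]}"; do
  [ -d "$path" ] || continue   # skip submodules that are not checked out
  git -C "$path" checkout "${branches[$path]}"
  git -C "$path" pull
done
```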

Installation

  • Manual Installation (with Python, PyTorch, and OpenPose already installed):

    cd modules/pygco/
    pip3 install --global-option=build_ext .
    cd ../omnicam/
    pip3 install .
    cd ../../src/body_3d_tracker
    pip3 install --upgrade --force-reinstall -r requirements.txt
    source /opt/ros/noetic/setup.bash && catkin_make
    source devel/setup.bash

    A build and a devel folder should have been created in your ROS workspace. See the ROS workspace tutorial for more information.

  • Docker:

    export DOCKER_BUILDKIT=1
    docker build -t body_perception --target production .

    A docker image named body_perception should have been created with the tag latest.

    You can run a container based on this image by creating and running a docker_run.sh file, as in the Play Gestures repository:

    ./docker_run.sh

    In this bash file, set CONTAINER_NAME to the name you want for the container, DOCKER_IMAGE to the image name you chose above, and ROS_IP and ROS_MASTER_URI to the values for your network.
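A hypothetical sketch of such a docker_run.sh (the variable names follow the description above; the network values are placeholders). It assembles the command and prints it so you can inspect it before executing:

```shell
#!/usr/bin/env bash
# Hypothetical docker_run.sh sketch; adapt the four values below.
CONTAINER_NAME=body_perception           # any name for the container
DOCKER_IMAGE=body_perception:latest      # image built in the previous step
ROS_IP=192.168.1.100                     # placeholder: IP of this machine
ROS_MASTER_URI=http://192.168.1.1:11311  # placeholder: robot's ROS master

# Host networking keeps ROS topic traffic between robot and basestation simple.
run_cmd="docker run -it --rm --name $CONTAINER_NAME --network host -e ROS_IP=$ROS_IP -e ROS_MASTER_URI=$ROS_MASTER_URI $DOCKER_IMAGE"
echo "$run_cmd"   # inspect first; execute with: eval "$run_cmd"
```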

Usage

To run the whole Body Perception pipeline, follow the steps below.

ROS Modules Required

The Body Perception pipeline sits at the end of an even bigger pipeline (the tracking pipeline and the human-robot interaction (HRI) person manager pipeline). Therefore, several other ROS modules must be running before the Body Perception pipeline is started: the corresponding services of the docker-compose.yml file should be up.

Body Perception Pipeline

When all the required ROS modules are running, you can launch the Body Perception pipeline, which starts all of the following nodes:

  • body_3d_tracker: see body_3d_tracker repository
  • body_to_face_mapper: see the body_to_face_mapper repository
  • front_fisheye_2d_body_pose_detector_op: see the front_fisheye_2d_body_pose_detector repository
  • front_fisheye_basestation_node: relay node that republishes the robot's compressed front_fisheye image topic as decompressed image topics on the basestation
  • group_detector: see the group_detector repository
  • head_front_basestation_node: relay node that republishes the robot's compressed head_front image topic as decompressed image topics on the basestation
  • rear_fisheye_basestation_node: relay node that republishes the robot's compressed rear_fisheye image topic as decompressed image topics on the basestation
  • rosOpenpose: see the ros_openpose repository
  • skeleton_extrapolate: see the skeleton-extrapolate repository

To launch these nodes, follow the step matching your installation:

  • Manual Installation:
    roslaunch body_perception_pipeline node.launch
  • Docker: If you have not overridden the entrypoint when launching the docker container, as mentioned in the Installation section, you don't need to do anything: the launch file runs at container startup. Otherwise, run the same launch file inside the container:
    roslaunch body_perception_pipeline node.launch

Now that all the ROS modules required for Body Perception are running, the humans in the scene will be detected and tracked. You can check the following topics:

  • /humans/bodies/tracked
  • /tf (can be visualized in RViz)
  • /tracked_pose_2d/image_raw/compressed
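A quick way to inspect these topics from a sourced ROS shell (a sketch; the rostopic calls require a running master and are skipped when ROS is not installed):

```shell
# Topic names from the list above.
topics="/humans/bodies/tracked /tf /tracked_pose_2d/image_raw/compressed"
for t in $topics; do
  echo "== $t"
  # 'rostopic info' shows the message type and current publishers.
  command -v rostopic >/dev/null 2>&1 && rostopic info "$t" || true
done
```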

Roadmap

  • Feature 1
  • Feature 2
  • Feature 3
    • Nested Feature

See the open issues for a full list of proposed features (and known issues).

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Alex Auternaud - alex.auternaud@inria.fr - alex.auternaud07@gmail.com

Project Link: https://gitlab.inria.fr/spring/wp3_av_perception/docker_body_perception

Acknowledgments