# Tutorial 1 : Qolo and Cuybot in a Crowded Hangar

---------------

*Note: if you are totally new to Unity, you can explore the set of tutorials available [here](https://learn.unity.com/).*

---------------

<img src="uploads/eb13a2fbbdbabd8daf7349278c6426d8/image.png" width="320" height="216">

<img src="uploads/2bc3141ead90400162d904a83986ed26/image.png" width="320" height="216">

---------------
# Table of contents

[[_TOC_]]

---------------
# Objective

---------------

The objective of this tutorial is to recreate from scratch the scene "Tuto1", which you can find in the directory **Assets/Scene/Tutos/**.

In doing so, you will learn about the important components necessary to run a simulation.

First, create a new scene (File -> New Scene) and save it in the Assets/Scenes directory.

Remove everything that is in the scene (camera and light).

---------------
# Configuration files

The philosophy of the CrowdBot simulator is to set up everything with several layers of configuration files.

This standardizes the notion of scenario defined in the CrowdBot project.

It also allows you to keep control over the experiment when using a built version of the CrowdBot simulator.

More details are given in the section [Configuration files](Tutorials/ConfigFiles).

---------------
# Main manager

---------------

## Description

The MainManager is the prefab in charge of managing the experiment.

It reads the configuration files and loads the relevant assets into the simulation.

It is a mandatory asset and must have the tag "GameManager" in the inspector.

*Pro-tip: the main manager must have the highest priority in the script execution order, after default time.*

In the "Project" window, search for the **MainManager** prefab in the search bar, or in the directory *Assets/Prefabs/Simulation*, and drag and drop it into the Hierarchy window.

Then, in the inspector, fill in the path to the main config file you plan to use.

This path is concatenated to the current directory: ${path_to_CrowdBotUnity_project}${config_path}.

*Pro-tip: don't forget the '/' at the beginning of the path.*
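
As an illustration of how the path is resolved (the directory and file name below are hypothetical), the two strings are simply appended, which is why the leading '/' matters:

```python
# Hypothetical values, for illustration only.
project_dir = "/home/user/CrowdBotUnity"      # ${path_to_CrowdBotUnity_project}
config_path = "/Config/main_config.xml"       # value entered in the MainManager inspector

full_path = project_dir + config_path          # plain concatenation, no separator is added
print(full_path)   # /home/user/CrowdBotUnity/Config/main_config.xml

# Without the leading '/', the result would be
# "/home/user/CrowdBotUnityConfig/main_config.xml", which does not exist.
```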

![image](uploads/e84409eab29d0533e7dc0b103619bfaf/image.png)

---------------
# Stage

The stage is the asset that contains the objects composing your environment.

For instance, select the prefab "Hangar" in the directory "Assets/Stages/Tuto1/", and drag and drop it into the scene.

It contains the 3D elements of the scene: the floor, the walls and obstacles, and the lights.

It has a "CrowdBotSim_TrialManager" script with an obstacle container attribute: a child object of the stage that contains all the obstacles. An obstacle can be either a pillar or a wall, and in each case it has to be tagged (in the inspector) as such. Tagged obstacles are taken into account by the crowd simulator (for local avoidance).

If you create your own scene, you will need to create a NavMesh, a component used by Unity for global planning in the crowd simulation. To do so, select the floor and walls, and in the "Navigation" window, bake the NavMesh. More details about this procedure are available [here](https://docs.unity3d.com/Manual/nav-BuildingNavMesh.html).

For a more precise NavMesh, you might need to select "Height mesh" in the advanced options.

The stage also contains the lights of your scene. Automatic light generation is deactivated in this project, so you might need to generate the lighting in "Window > Rendering > Lighting Settings".

---------------
# Agents Models

In the scene, you will need to insert some animated characters to be controlled by the crowd simulation.

The CrowdBot simulation toolbox is the first robotics simulator that proposes a high level of crowd realism. The humans in this simulator use the same two components as a robot: a visual representation and a collider. We use animations in order to obtain realistic movements.

The prefab "PlasticMan" in "Prefabs/HumanModels" is a faceless character that is usable by default in the simulator. You can drag and drop it into the scene.

It has an animation controller script attached to it, and an animator with the "LocomotionFinal" controller.

You can use the more realistic set of characters "RocketBox", which is an addon to the project, for which you should request access [here](https://gitlab.inria.fr/crowdmp/addons/rocketboxtxtvar).

The RocketBox set provides more diversity: it is a set of 40 realistic characters (20 male and 20 female). For each character, we have different levels of resolution (i.e. more or less complex meshes). We also have textures to apply to the meshes, which give a realistic human visual representation, with variations in texture (hair, skin, clothes).

---------------
# "Player"
|
|
|
|
|
|
The CrowdBot Simulator is designed for user studies, which means that you can have an actual person interaction with the elements of the simulation.
|
|
|
|
|
|
For more details, please refers to [Tutorial 3](Tutorials/Tuto3) and [the Inria use case](UseCase/Inria).
|
|
|
|
|
|
In this tutorial, we are not interested into immersing someone. However, we still want a free camera that render the scene from were we want: we can use the prefab "CamPlayer" following the directory Assets/MainAssetS/Agents/Player.
|
|
|
|
|
|
A player should be tagged as player and have a "Head" component tagged as HeadPlayer.
|
|
|
|
|
|
The CamPlayer defines a camera which render on the display 1 and is tagged as MainCamera.
|
|
|
|
|
|
---------------
|
|
|
# Robots

---------------

The main robots available in the simulator are: Qolo_sensors, Cuybot_sensors, Pepper_sensors, Wheelchair_sensors.

Other robot models exist, which are partial versions of the previous ones.

Select the prefab Qolo_sensors (Assets/Robots/RobotsModels) and drag and drop it into the scene.

The root object of a robot is tagged "Robot". It contains two children: a RosConnector and a base_link.

The root of the object has a "URDF Robot" script which allows you to control the parameters of the base_link.

---------------
## ROS

---------------

The RosConnector is the script connecting the simulator to [ROS](https://www.ros.org/) using [rosbridge](http://wiki.ros.org/rosbridge_suite).

If you are unfamiliar with ROS, check the [tutorials](http://wiki.ros.org/ROS/Tutorials).

The simulator uses the library [ROS#](https://github.com/siemens/ros-sharp), which communicates with ROS.

This means that you can run the simulator on one computer or server and run your ROS nodes on another computer.

The RosConnector script has a "Ros bridge server url" field, which should point to the machine running rosbridge (typically the same machine as your $ROS_MASTER_URI).

*Important note: the RosConnector script is the only one of the scripts attached to the RosConnector that is enabled.*

![image](uploads/f1023c375831fec2ccf93ba4bb31de43/image.png)

You should run the command

```
roslaunch crowdbotsim unity_sim.launch
```

in a terminal on the computer running the ROS core.

---------------
## The Clock

---------------

The CrowdBot simulator gives you full control of the simulation time through an external clock.

To use an external clock, the RosConnector has to be tagged as "Clock" and must have a ClockSubscriber script attached to it, with the topic name "/clock".

On the computer running ROS, you can publish a default clock using the following command:

```
rosrun crowdbotsim clock_publisher.py ${sleep_time} ${delta_time}
```

This publishes a clock message every "sleep_time" seconds (first parameter), incremented by "delta_time" seconds (second parameter).
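
For reference, a minimal Python sketch of such a clock publisher could look like the following (the actual clock_publisher.py shipped with crowdbotsim may differ; this only illustrates the idea):

```python
#!/usr/bin/env python
# Sketch of a /clock publisher, assuming ROS 1 with rospy.
import sys
import time

import rospy
from rosgraph_msgs.msg import Clock

def main():
    sleep_time = float(sys.argv[1])   # wall-clock period between two /clock messages
    delta_time = float(sys.argv[2])   # simulated time added at each message

    rospy.init_node("clock_publisher_sketch")
    pub = rospy.Publisher("/clock", Clock, queue_size=1)

    sim_time = 0.0
    while not rospy.is_shutdown():
        msg = Clock()
        msg.clock = rospy.Time.from_sec(sim_time)
        pub.publish(msg)
        sim_time += delta_time
        time.sleep(sleep_time)        # wall-clock sleep, not simulated time

if __name__ == "__main__":
    main()
```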

*Important note: the clock controls the time step of the whole simulation: the crowd, the sensors, the physics... This means that a high delta time might give absurd results (the robot going through the floor, people not avoiding each other...), while a small one might run far slower than real time if it is smaller than the time required to compute a step of the simulation. These parameters depend on the simulation itself and must be tuned for each case. Generally speaking, we recommend a delta time between 0.01 s and 0.1 s.*

---------------
## The URDF Robot

---------------

The rest of the game object consists in virtually building the robot, following the URDF standard. It starts with the base_link.

*Important note: the naming convention base_link, as a child of the object tagged "Robot", is important for the crowd simulator: its position in the scene is considered to be the position of the robot.*

![image](uploads/a5728bf7864ece4d8cc1dff6ba31f182/image.png)

In the URDF standard, a link is a combination of visual elements and collision elements.

We can connect links together with joints by clicking on "add link with joint" to create a child link.

*Note: the physics engine limits the colliders (collision elements used to compute contacts) to convex shapes when using non-kinematic rigid bodies. To improve the precision of the collision elements, we divide the complex 3D models into smaller pieces and link them in a child/parent relationship in the Collision element. For instance, with the base_link of Qolo, the structure of the robot is divided into 20 pieces.*

![image](uploads/1488ad8ba7d78a2fa03d08e0f36ed5d9/image.png)

The rest of the Qolo hierarchy consists of the motorised wheels (see the Controllers section), which use continuous joints, and the caster wheels, which use, for the moment, a floating joint with translation constraints and a sphere as collider. The sensors are added as new links with fixed joints (see the Sensors section).

For Qolo, we added a dummy plastic man for the pilot.

![image](uploads/ca5314127da0d37f0e3a06c0bb8bcdcc/image.png)

![image](uploads/4bdbef8913d49cea66c60a09eeaf3acd/image.png)

---------------
## Controllers

---------------

The default way to control a robot is by publishing a [twist message](https://docs.ros.org/api/geometry_msgs/html/msg/Twist.html) through ROS.

In the RosConnector, you should use a TwistSubscriber script, which subscribes to the published twist topic with the corresponding name.
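
For example, a velocity command can be published from the ROS side with a few lines of rospy. The topic name below is only an assumption and must match the name configured in the TwistSubscriber:

```python
#!/usr/bin/env python
# Sketch: publish a constant velocity command to the simulated robot.
# The topic name is hypothetical; use the one set in the TwistSubscriber script.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("twist_command_sketch")
pub = rospy.Publisher("/qolo/twist", Twist, queue_size=1)  # hypothetical topic name

rate = rospy.Rate(10)  # 10 Hz command rate
cmd = Twist()
cmd.linear.x = 0.5     # forward velocity in m/s
cmd.angular.z = 0.2    # rotation velocity in rad/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```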

![image](uploads/7a53ae894a61d8d48f64e9a06fb687e0/image.png)

The subscribed transform should be the base_link of the robot.

You can select among several controller modes:

* Kinematic: This mode ignores physics, and thus moves the robot precisely but without realism. It is a good mode to begin with.

* Force based: This mode uses physics and applies Newton's second law F = m * dv/dt, where m is the mass of the base_link, dv is the difference between the base_link rigidbody velocity and the desired velocity, and dt is the time elapsed between two computations. A similar treatment is applied to the torque. Due to the approximation of the mass, plus the friction constraints on the wheels, this mode is not perfectly precise: the desired velocity will not be reached exactly. However, it is more realistic than the kinematic mode for contacts. (A minimal sketch of this update is given after this list.)

* Differential drive (fix in progress): *(this mode is only available for non-holonomic robots)* In this mode, we use the virtual motors on the motorised wheels, i.e. a continuous joint with a "Motor Writer" script, and the odometry of the robot (see the Sensors section). This mode requires tuning of the controller (PID) but provides the best compromise between precision and realism.
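
To illustrate the force-based mode described above, here is a minimal sketch of the update, written in plain Python with illustrative values (the simulator applies the equivalent logic to the base_link rigidbody, and a similar computation to the torque):

```python
# Sketch of the force-based update: F = m * dv / dt applied to the linear velocity.
def force_based_update(mass, current_velocity, desired_velocity, dt):
    """Return the force to apply so the rigidbody tends toward the desired velocity."""
    dv = desired_velocity - current_velocity
    return mass * dv / dt

# Example: a 100 kg base_link currently at 0.2 m/s, commanded at 0.5 m/s, step of 0.02 s.
force = force_based_update(mass=100.0, current_velocity=0.2, desired_velocity=0.5, dt=0.02)
print(force)  # 1500.0 N along the commanded direction
```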

![image](uploads/3bbca18f75bdb16f8e7e90dc62d71d77/image.png)

---------------
## Sensors

---------------

### Proximity sensors

Our virtual sensors give the same output as real ones.

Our simulated sensors offer the possibility to tune the usual parameters according to the real sensor properties, such as the maximum and minimum detection distance, the maximum incidence angle required for detection, the field of view, the update rate, the bit precision...

In order to provide a more realistic sensor simulation, we offer the possibility to add various types of noise to any type of sensor.

We implemented:

* a Gaussian noise generator

* an offset generator

* a peak generator

They are configurable and can be added to the output of any of the sensors described below.
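
As an illustration of these three noise models, a reading could be post-processed as follows (the parameter names and values are illustrative, not the simulator's actual settings):

```python
# Sketch of the three noise generators applied to a sensor reading (e.g. a distance in m).
import random

def gaussian_noise(value, std_dev=0.02):
    """Add zero-mean Gaussian noise."""
    return value + random.gauss(0.0, std_dev)

def offset_noise(value, offset=0.05):
    """Add a constant bias to every reading."""
    return value + offset

def peak_noise(value, probability=0.01, peak_value=10.0):
    """Occasionally replace the reading with an outlier (a 'peak')."""
    return peak_value if random.random() < probability else value

reading = 2.37  # metres
noisy = peak_noise(offset_noise(gaussian_noise(reading)))
print(noisy)
```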

![image](uploads/01b3d19ee2a81ad2b6fbd45bcd39ddb6/image.png)

Robot sensor data are generated from the 3D elements (their colliders and visual representations) of the simulation: the scene and the human avatars.

We implemented the following sensors:

* Infrared (IR) proximity sensors

* Ultrasound (US) proximity sensors

* 2D laser (LIDAR) sensors

* Camera (RGB)

* Depth sensor (RGB-D)

* Graphics-based Lidar and depth (work in progress)

For the US, IR, and Lidar, the simulated sensors cast a predefined number of rays in a given detection zone.

These rays are handled by the physics engine, which returns a collision point when a ray hits a collider.

The rays are cast evenly in the detection zone, according to the predefined number of rays.

For a more realistic result, we simulate the fact that such sensors fail to detect obstacles when the incidence angle is too high, by giving the possibility to specify a maximum incidence angle.

If a ray hits an object in its path and the hit angle is smaller than this value, then the object is detected and the distance between the sensor and the hit point is recorded.
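
A sketch of this detection test, working on the angle between the incoming ray and the surface normal returned by the ray cast (a simplified version of what the physics-engine-based sensors do; the names are illustrative):

```python
# Sketch of the incidence-angle test applied to one ray cast.
# 'ray_dir' is the unit direction of the ray, 'hit_normal' the unit surface normal
# returned by the physics engine at the hit point.
import math

def incidence_angle(ray_dir, hit_normal):
    """Angle (rad) between the incoming ray and the surface normal at the hit point."""
    # The ray travels toward the surface, so we compare against the reversed direction.
    dot = -(ray_dir[0] * hit_normal[0] + ray_dir[1] * hit_normal[1] + ray_dir[2] * hit_normal[2])
    return math.acos(max(-1.0, min(1.0, dot)))

def process_hit(ray_dir, hit_normal, hit_distance, max_incidence_deg=60.0):
    """Return the measured distance, or None if the surface is too slanted to be detected."""
    if math.degrees(incidence_angle(ray_dir, hit_normal)) <= max_incidence_deg:
        return hit_distance
    return None

# Example: a ray going straight down onto a flat floor (normal pointing up) is detected.
print(process_hit(ray_dir=(0.0, -1.0, 0.0), hit_normal=(0.0, 1.0, 0.0), hit_distance=1.2))  # 1.2
```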

US sensors have a specific detection zone, which we approximate by a teardrop shape centered on the position and orientation of the sensor. The simulated US sensor returns the smallest observed value from the set of rays.

We approximate the detection zone of an IR sensor by a cone centered on the sensor reference frame in the simulated world. We can configure the maximum detection distance, the angle of the cone and the level of sampling.

We simulate a Lidar sensor by casting rays over a partial sphere.

You can configure the angular resolution, the starting and ending angles, as well as the minimum and maximum range of the LIDAR.
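
A simplified 2D sketch of how the beam angles and range limits can be derived from these parameters (the actual sensor scripts run inside Unity; names and values here are illustrative):

```python
# Sketch: generate the beam angles of a 2D Lidar and clamp a raw measurement
# to the configured range.
import math

def lidar_angles(start_angle_deg, end_angle_deg, angular_resolution_deg):
    """List of beam angles (rad), spread evenly between the start and end angles."""
    count = int(round((end_angle_deg - start_angle_deg) / angular_resolution_deg)) + 1
    return [math.radians(start_angle_deg + i * angular_resolution_deg) for i in range(count)]

def clamp_range(distance, min_range, max_range):
    """Readings outside [min_range, max_range] are reported as invalid (here: inf)."""
    return distance if min_range <= distance <= max_range else float("inf")

angles = lidar_angles(start_angle_deg=-135.0, end_angle_deg=135.0, angular_resolution_deg=0.25)
print(len(angles))                   # 1081 beams for a 270-degree scan
print(clamp_range(0.02, 0.1, 30.0))  # inf: closer than the minimum range
```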

RGB cameras are simulated using simple images extracted from Unity.

The depth component of the RGB-D camera is simulated using Unity shaders, which are usually in charge of rendering textures. We use them to generate a depth texture, which gives us a grayscale image where each pixel value corresponds to a depth value. The simulated camera is also highly configurable (e.g. field of view, resolution...).

Generally speaking, a sensor is composed of two things: a sensor provider and a sensor publisher.

The sensor provider is a script attached to a GameObject in the robot hierarchy, which computes the synthetic data.

The sensor publisher is the script attached to the RosConnector which takes a sensor provider as input and communicates with ROS.

![image](uploads/3df931b43acfef7f959a784722ab70bb/image.png)
### Localization

We use odometry and JointState readers for localization.

To initialize the JointState publisher, you should first click on the "Generate" button of the URDF Robot script on the robot prefab.

Then, on the RosConnector, you should add a "JointState patcher", fill in the "Urdf robot" attribute by dragging and dropping your robot from the Hierarchy, and click on "Enable" to publish the joint states.

Disable the script (the RosConnector script will enable it at startup), add a proper topic name (for instance /qolo/joint_state) and change the frameId from "Unity" to "base_link".

This allows the joint states to be published on ROS, which can transform this information into "tf" messages and visualize it in rviz.

Now, we need to publish an odometry message. This is done with two components: an OdomPublisher on the RosConnector and an OdomProvider script on the base_link of the robot.

The OdomProvider uses two methods:

* (differential drive robots only) We can use inverse kinematics and the velocities of the motorised wheels to compute a linear and angular velocity. This method is subject to errors but is very similar to what is done in real life. It returns 0 if the robot moves with the kinematic controller. (A minimal sketch of this computation is given after this list.)

* (to fix) We read the velocity of the base_link rigidbody directly. This method returns 0 if the robot moves with the kinematic controller.

*Note: the OdomProvider needs joint state readers on the wheels, so we recommend using the JointState patcher.*
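
For the first method, here is a sketch of the inverse-kinematics computation for a differential-drive base (the wheel radius and axle length are illustrative values, not Qolo's real parameters):

```python
# Sketch: differential-drive odometry from the two motorised wheel velocities.
def diff_drive_odometry(left_wheel_rad_s, right_wheel_rad_s, wheel_radius=0.15, axle_length=0.5):
    """Return (linear velocity in m/s, angular velocity in rad/s) of the base_link."""
    v_left = left_wheel_rad_s * wheel_radius
    v_right = right_wheel_rad_s * wheel_radius
    linear = (v_right + v_left) / 2.0
    angular = (v_right - v_left) / axle_length
    return linear, angular

# Example: both wheels at 2 rad/s -> straight line at 0.3 m/s, no rotation.
print(diff_drive_odometry(2.0, 2.0))  # (0.3, 0.0)
```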

---------------

# Trial Generator

---------------

The trial generator is a useful tool that allows you to generate trial configurations very easily.

The picture below shows the different attributes that the trial generator needs to generate a trial configuration.

![image](uploads/4f06101f4b98873a1bee1ed2628ab986/image.png)

* Path: a relative path to the created file should be given. This file should also be referenced (by yourself) in the FileOrder of the main configuration file if you want to use it at runtime (cf. [configuration files](Tutorials/ConfigFiles)).

* Stage: this attribute is the GameObject described in the section "Stage" that you want to use in the simulation.

* RecordingFile: an optional tool to record some data. By default, nothing is recorded. See [tutorial 3](Tutorials/Tuto3). *Pro-tip: for more convenience, we suggest using rosbag for recording.*

* Player Generator: this attribute refers to a child of the trial generator GameObject. The "PlayerParams" object has a PlayerGen script which has the player GameObject as a component, together with details regarding its behaviour in the crowd simulation. By default, the player uses a camera controller and is not part of the simulation. You can give a control law to the player if you wish.

* Robot generator: an array of GameObjects. Each element of the array refers to a child of the TrialGenerator which contains a "RobotGen" script. This script refers to the game object of the robot you want in your simulation. You should specify a radius, a parameter used by the crowd simulation (local avoidance), and, if you wish, control law (global) and control sim (local) generators. By default, we consider that the robot is controlled through ROS messages.

* Spawners: an array of children of the TrialGenerator. A spawner is what allows you to generate a crowd. See the next section for an explanation of how to use it.

---------------
# Spawners: Generate your crowd

The Spawn block contains all the classes used by the TrialGenerator object, allowing you to spawn agents offline, modify them visually in the editor, and generate an XML trial file to run an XP with these agents.

* AgentGen.cs: Generates the agent part of the XML file.

* Spawn.cs: Spawns agents according to the linked AgentGen in a specific area.

* Goal.cs: Small script to handle sequences of goals/waypoints.

* SpawnEditorButtons.cs: Handles all the editor buttons for the spawns.

* ControlLawGen.cs: Interface to generate the control law part of the XML file.

* ControlLawGen_LawFileData.cs: Generates the XML for the law FileData.

* ControlLawGen_LawGoals.cs: Generates the XML for the law Goals.

* …: One class for each existing control law.

* ControlSimGen.cs: Interface to generate the control sim part of the XML file.

* ControlLawGen_RVO.cs: Generates the XML for the sim RVO.

* ControlLawGen_Helbing.cs: Generates the XML for the sim Helbing.

* …: One class for each existing simulator.