diff --git a/spring-architecture.md b/spring-architecture.md
index 1a11d052be3aa5c500ce4dd1b4eb7c191724211c..962dc13d8d87de150f031749650ad361d01acf4b 100644
--- a/spring-architecture.md
+++ b/spring-architecture.md
@@ -1,6 +1,6 @@
 # SPRING architecture
 
-**Version:** 0.5.0
+**Version:** 1.0.0
 
 EU H2020 SPRING architecture
 
@@ -8,51 +8,65 @@ EU H2020 SPRING architecture
 
 | **Node** | **id** | **Partner** | **Status** |
 |----------|--------|-------------|------------|
-| [Person re-identification](#personreidentification) | personreidentification | UNITN | mock-up |
-| [Non-verbal behaviours](#nonverbalbehaviours) | nonverbalbehaviours | UNITN | mock-up |
-| [torso_rgbd_camera](#torso_rgbd_camera) | torso_rgbd_camera | PAL | mock-up |
-| [ InLoc Server](#inlocserver) | inlocserver | CVUT | mock-up |
-| [ spring_msgs](#spring_msgs) | spring_msgs | PAL | released (version 0.0.2) |
-| [User attention estimation](#userattentionestimation) | userattentionestimation | UNITN | mock-up |
-| [ interaction_manager_msgs](#interaction_manager_msgs) | interaction_manager_msgs | HWU | released (version spring_dev) (dependency) |
-| [interaction_manager](#interaction_manager) | interaction_manager | HWU | released (version spring_dev) |
-| [ hri_msgs](#hri_msgs) | hri_msgs | PAL | released (version 0.1.1) (dependency) |
-| [Object detection/identification/localisation](#objectdetectionidentificationlocalisation) | objectdetectionidentificationlocalisation | CVUT | mock-up |
-| [Voice speech matching](#voicespeechmatching) | voicespeechmatching | BIU | mock-up |
-| [Speaker identification](#speakeridentification) | speakeridentification | BIU | mock-up |
-| [mask_detector](#mask_detector) | mask_detector | UNITN | released (version master) |
-| [hri_person_manager](#hri_person_manager) | hri_person_manager | PAL | released (version master) |
-| [soft_biometrics_estimator](#soft_biometrics_estimator) | soft_biometrics_estimator | UNITN | released (version master) |
-| [ROS mediapipe/openpose](#rosmediapipeopenpose) | rosmediapipeopenpose | INRIA | mock-up |
-| [People 3D tracker](#people3dtracker) | people3dtracker | INRIA | mock-up |
 | [F-formation](#fformation) | fformation | UNITN | mock-up |
-| [ robot_behaviour_msgs](#robot_behaviour_msgs) | robot_behaviour_msgs | HWU | released (version spring_dev) (dependency) |
+| [social_scene_context_understanding](#social_scene_context_understanding) | social_scene_context_understanding | HWU | released (version spring_dev) |
+| [Activity reco](#activityreco) | activityreco | UNITN | mock-up |
 | [respeaker_ros](#respeaker_ros) | respeaker_ros | PAL | released (version master) |
+| [mask_detector](#mask_detector) | mask_detector | UNITN | released (version master) |
 | [fisheye](#fisheye) | fisheye | PAL | mock-up |
-| [dialogue arbiter](#dialoguearbiter) | dialoguearbiter | HWU | released (version spring_dev) |
-| [Face detection](#facedetection) | facedetection | UNITN | mock-up |
-| [Depth estimation from monocular](#depthestimationfrommonocular) | depthestimationfrommonocular | UNITN | mock-up |
-| [Robot functional layer](#robotfunctionallayer) | robotfunctionallayer | PAL | mock-up |
+| [soft_biometrics_estimator](#soft_biometrics_estimator) | soft_biometrics_estimator | UNITN | released (version master) |
+| [robot_behaviour_msgs](#robot_behaviour_msgs) | robot_behaviour_msgs | HWU | released (version spring_dev) (dependency) |
+| [hri_msgs](#hri_msgs) | hri_msgs | PAL | released (version 0.1.1) (dependency) |
+| [tracker](#tracker) | tracker | INRIA | released (version devel) |
+| [people_facts](#people_facts) | people_facts | PAL | mock-up |
+| [raspicam](#raspicam) | raspicam | PAL | mock-up |
+| [google_translate](#google_translate) | google_translate | HWU | mock-up |
+| [front_fisheye_basestation_node](#front_fisheye_basestation_node) | front_fisheye_basestation_node | INRIA | mock-up |
+| [controller_node](#controller_node) | controller_node | INRIA | released (version devel) |
+| [interaction_manager](#interaction_manager) | interaction_manager | HWU | released (version spring_dev) |
+| [go_towards_action_server](#go_towards_action_server) | go_towards_action_server | HWU | mock-up |
+| [republish_from_rtab_map_node](#republish_from_rtab_map_node) | republish_from_rtab_map_node | INRIA | released (version 0.0.1) |
+| [track_frame_node](#track_frame_node) | track_frame_node | INRIA | released (version 0.0.1) |
 | [ audio_msgs](#audio_msgs) | audio_msgs | HWU | released (version spring_dev) (dependency) |
-| [FairMOT Multi-people body tracker](#fairmotmultipeoplebodytracker) | fairmotmultipeoplebodytracker | INRIA | released (version devel) |
-| [MoDE](#mode) | mode | BIU | released (version BIU_dev) |
-| [Speech synthesis](#speechsynthesis) | speechsynthesis | PAL | mock-up |
-| [ social_scene_msgs](#social_scene_msgs) | social_scene_msgs | HWU | released (version spring_dev) (dependency) |
-| [User visual focus](#uservisualfocus) | uservisualfocus | UNITN | mock-up |
-| [ORB SLAM](#orbslam) | orbslam | PAL | mock-up |
-| [robot_behavior](#robot_behavior) | robot_behavior | INRIA | released (version devel) |
+| [knowledge_core](#knowledge_core) | knowledge_core | PAL | mock-up |
 | [Semantic mapping](#semanticmapping) | semanticmapping | CVUT | mock-up |
-| [social_scene_context_understanding](#social_scene_context_understanding) | social_scene_context_understanding | HWU | released (version spring_dev) |
+| [interaction_manager_msgs](#interaction_manager_msgs) | interaction_manager_msgs | HWU | released (version spring_dev) (dependency) |
+| [social_strategy_supervisor](#social_strategy_supervisor) | social_strategy_supervisor | HWU | mock-up |
+| [User attention estimation](#userattentionestimation) | userattentionestimation | UNITN | mock-up |
+| [spring_msgs](#spring_msgs) | spring_msgs | PAL | released (version 0.0.2) |
+| [speech](#speech) | speech | HWU | released (version BIU_dev) |
+| [gaze_estimation](#gaze_estimation) | gaze_estimation | UNITN | mock-up |
+| [Non-verbal behaviours](#nonverbalbehaviours) | nonverbalbehaviours | UNITN | mock-up |
+| [alana_node](#alana_node) | alana_node | HWU | mock-up |
+| [torso_rgbd_camera](#torso_rgbd_camera) | torso_rgbd_camera | PAL | mock-up |
+| [Object detection/identification/localisation](#objectdetectionidentificationlocalisation) | objectdetectionidentificationlocalisation | CVUT | mock-up |
+| [InLoc Server](#inlocserver) | inlocserver | CVUT | mock-up |
+| [social_scene_msgs](#social_scene_msgs) | social_scene_msgs | HWU | released (version spring_dev) (dependency) |
+| [InLoc-ROS](#inlocros) | inlocros | CVUT | mock-up |
+| [Voice-body matching](#voicebodymatching) | voicebodymatching | BIU | mock-up |
+| [depth_estimation](#depth_estimation) | depth_estimation | UNITN | mock-up |
+| [rtabmap](#rtabmap) | rtabmap | INRIA | mock-up |
+| [Robot functional layer](#robotfunctionallayer) | robotfunctionallayer | PAL | mock-up |
+| [ros_petri_net_node](#ros_petri_net_node) | ros_petri_net_node | HWU | mock-up |
+| [social_state_analyzer](#social_state_analyzer) | social_state_analyzer | HWU | mock-up |
+| [audio_processing_mode](#audio_processing_mode) | audio_processing_mode | BIU | released (version BIU_dev) |
+| [dialogue_say](#dialogue_say) | dialogue_say | PAL | mock-up |
+| [Speaker identification](#speakeridentification) | speakeridentification | BIU | mock-up |
 | [Robot GUI](#robotgui) | robotgui | ERM | mock-up |
-| [ros_openpose](#ros_openpose) | ros_openpose | UNITN | released (version 0.0.1) |
-| [Activity reco](#activityreco) | activityreco | UNITN | mock-up |
+| [ros_mediapipe_node](#ros_mediapipe_node) | ros_mediapipe_node | INRIA | released (version 0.0.1) |
+| [Speaker separation/diarization](#speakerseparationdiarization) | speakerseparationdiarization | BIU | mock-up |
+| [look_at_action_server](#look_at_action_server) | look_at_action_server | HWU | mock-up |
+| [navigate_action_server](#navigate_action_server) | navigate_action_server | HWU | mock-up |
+| [sound source localisation](#soundsourcelocalisation) | soundsourcelocalisation | BIU | mock-up |
+| [ORB SLAM](#orbslam) | orbslam | PAL | mock-up |
+| [dialogue_speech](#dialogue_speech) | dialogue_speech | HWU | mock-up |
+| [Voice speech matching](#voicespeechmatching) | voicespeechmatching | BIU | mock-up |
 | [ wp4_msgs](#wp4_msgs) | wp4_msgs | UNITN | released (version master) (dependency) |
-| [robot_behaviour_plan_actions](#robot_behaviour_plan_actions) | robot_behaviour_plan_actions | HWU | released (version spring_dev) |
-| [Multi-person audio tracking](#multipersonaudiotracking) | multipersonaudiotracking | BIU | mock-up |
-| [InLoc-ROS](#inlocros) | inlocros | CVUT | mock-up |
-| [google_asr](#google_asr) | google_asr | BIU | released (version BIU_dev) |
+| [hri_person_manager](#hri_person_manager) | hri_person_manager | PAL | released (version master) |
 | [Occupancy map](#occupancymap) | occupancymap | CVUT | mock-up |
-| [raspicam](#raspicam) | raspicam | PAL | mock-up |
+| [dialogue_arbiter](#dialogue_arbiter) | dialogue_arbiter | HWU | released (version spring_dev) |
+| [User visual focus](#uservisualfocus) | uservisualfocus | UNITN | mock-up |
+| [recipe_planner](#recipe_planner) | recipe_planner | HWU | released (version spring_dev) |
 
 ## Detailed description
 
@@ -60,11 +74,11 @@ EU H2020 SPRING architecture
 
 ---
 
-### personreidentification
+### fformation
+
+Node *F-formation* (id: `fformation`) is overseen by UNITN.
 
-Node *Person re-identification* (id: `personreidentification`) is overseen by UNITN.
 
-MOCK: Person re-identification is...
 
 #### Status
 
@@ -72,21 +86,47 @@ MOCK: Person re-identification is...
 
 #### Inputs/outputs
 
- - Input: `/h/f/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Input: `tf: /person_id` (tf)
+ - Input: `/h/i/gaze [hri_msgs/Gaze]` (topic)
 
- - Output: `person_id_candidate [hri_msgs/IdsMatch]` (undefined)
+ - Output: `/h/i/groups [hri_msgs/Group]` (topic)
+
+#### Dependencies
+
+- `tf/transform_listener`
+- `hri_msgs/Group`
+- `hri_msgs/Gaze`
+
+
+---
+
+### social_scene_context_understanding
+
+Node *social_scene_context_understanding* (id: `social_scene_context_understanding`) is overseen by HWU.
+
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/interaction.git
+SUBFOLDER:social_scene_context_understanding
+
+#### Status
+
+**Current release: spring_dev** 
+
+#### Inputs/outputs
+
+ - Input: `scene graph` (undefined)
+
+ - Output: `semantic description` (undefined)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `sensor_msgs/RegionOfInterest`
 
 
 ---
 
-### nonverbalbehaviours
+### activityreco
 
-Node *Non-verbal behaviours* (id: `nonverbalbehaviours`) is overseen by UNITN.
+Node *Activity reco* (id: `activityreco`) is overseen by UNITN.
 
 
 
@@ -96,23 +136,80 @@ Node *Non-verbal behaviours* (id: `nonverbalbehaviours`) is overseen by UNITN.
 
 #### Inputs/outputs
 
- - Input: `/h/f/*/roi [hri_msgs/RegionOfInterest]` (topic)
- - Input: `/h/v/*/audio [audio_common_msgs/AudioData]` (topic)
+ - Input: `/h/b/*/skeleton_2d [hri_msgs/Skeleton2D]` (topic)
 
- - Output: `/h/f/*/expression [hri_msgs/Expression]` (topic)
+ - Output: `[?] output` (undefined)
 
 #### Dependencies
 
-- `hri_msgs/RegionOfInterest`
-- `hri_msgs/Expression`
+- `hri_msgs/Skeleton2D`
+- `std_msgs/Empty`
+
+
+---
+
+### respeaker_ros
+
+Node *respeaker_ros* (id: `respeaker_ros`) is overseen by PAL.
+
+REPO:git@gitlab.inria.fr:spring/wp7_ari/respeaker_ros.git BIN:respeaker_multichan_node.py
+
+#### Status
+
+**Current release: master** 
+
+#### Inputs/outputs
+
+
+ - Output: `/audio/raw_audio [respeaker_ros/RawAudioData]` (topic)
+ - Output: `/audio/ego_audio [audio_common_msgs/AudioData]` (topic)
+
+#### Dependencies
+
+- `respeaker_ros/RawAudioData`
 - `audio_common_msgs/AudioData`
 
 
 ---
 
-### torso_rgbd_camera
+### mask_detector
 
-Node *torso_rgbd_camera* (id: `torso_rgbd_camera`) is overseen by PAL.
+Node *mask_detector* (id: `mask_detector`) is overseen by UNITN.
+
+Detects presence of a facial mask
+REPO:git@gitlab.inria.fr:spring/wp4_behavior/wp4_behavior_understanding.git
+SUBFOLDER:wp4_people_characteristics
+BIN:mask_detector.py
+
+#### Status
+
+**Current release: master** 
+
+#### Inputs/outputs
+
+ - Input: `/humans/bodies/tracked [hri_msgs/IdsList]` (topic)
+ - Input: `/h/b/*/cropped [sensor_msgs/Image]` (topic)
+
+ - Output: `/h/f/*/cropped [sensor_msgs/Image]` (topic)
+ - Output: `/humans/faces/tracked [hri_msgs/IdsList]` (topic)
+ - Output: `/h/f/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Output: `/h/f/*/has_mask [std_msgs/Bool]` (topic)
+ - Output: `/humans/candidate_matches [hri_msgs/IdsMatch] [face <-> body]` (topic)
+
+#### Dependencies
+
+- `sensor_msgs/Image`
+- `hri_msgs/IdsList`
+- `sensor_msgs/RegionOfInterest`
+- `std_msgs/Bool`
+- `hri_msgs/IdsMatch`
+
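
A note on the `/h/f/*/…` shorthand above: `*` stands for one tracked face id from `/humans/faces/tracked`, i.e. the node publishes one concrete topic per face. A minimal sketch of that expansion (plain Python, no ROS needed; `expand_face_topics` is an illustrative helper, not part of the node):

```python
def expand_face_topics(face_ids, patterns):
    """Expand wildcard topic patterns: one concrete topic per tracked face id."""
    return [p.replace("*", fid) for fid in face_ids for p in patterns]

# e.g. two tracked faces, two of the mask_detector output patterns
topics = expand_face_topics(
    ["f001", "f002"],
    ["/h/f/*/roi", "/h/f/*/has_mask"],
)
```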
+
+---
+
+### fisheye
+
+Node *fisheye* (id: `fisheye`) is overseen by PAL.
 
 
 
@@ -123,7 +220,7 @@ Node *torso_rgbd_camera* (id: `torso_rgbd_camera`) is overseen by PAL.
 #### Inputs/outputs
 
 
- - Output: `/torso_rgbd_camera/color/image_raw [sensor_msgs/Image]` (topic)
+ - Output: `/torso_front_camera/color/image_raw [sensor_msgs/Image]` (topic)
 
 #### Dependencies
 
@@ -132,9 +229,71 @@ Node *torso_rgbd_camera* (id: `torso_rgbd_camera`) is overseen by PAL.
 
 ---
 
-### inlocserver
+### soft_biometrics_estimator
 
-Node * InLoc Server* (id: `inlocserver`) is overseen by CVUT.
+Node *soft_biometrics_estimator* (id: `soft_biometrics_estimator`) is overseen by UNITN.
+
+Detects age/gender
+REPO:git@gitlab.inria.fr:spring/wp4_behavior/wp4_behavior_understanding.git
+SUBFOLDER:wp4_people_characteristics
+BIN:soft_biometrics_estimator.py
+
+#### Status
+
+**Current release: master** 
+
+#### Inputs/outputs
+
+ - Input: `/humans/faces/tracked [hri_msgs/IdsList]` (topic)
+ - Input: `/head_front_camera/color/image_raw/compressed [sensor_msgs/CompressedImage]` (topic)
+
+ - Output: `/humans/candidate_matches [hri_msgs/IdsMatch]` (topic)
+ - Output: `/h/f/*/softbiometrics [hri_msgs/SoftBiometrics]` (topic)
+
+#### Dependencies
+
+- `hri_msgs/IdsList`
+- `hri_msgs/IdsMatch`
+- `hri_msgs/SoftBiometrics`
+- `sensor_msgs/CompressedImage`
+
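
Several nodes above publish `/humans/candidate_matches [hri_msgs/IdsMatch]` pairs across modalities (face↔person here, face↔body in `mask_detector`). A consumer such as `hri_person_manager` conceptually merges these pairwise matches into connected groups of ids belonging to one person. A minimal union-find sketch of that merging (plain Python; illustrative only, not the actual implementation):

```python
def merge_candidate_matches(pairs):
    """Union pairwise id matches (id1, id2) into connected groups of ids."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two components

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())
```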
+
+---
+
+### tracker
+
+Node *tracker* (id: `tracker`) is overseen by INRIA.
+
+This code is primarily developed at INRIA by Luis Gomez Camara.
+REPO: https://gitlab.inria.fr/spring/wp3_av_perception/multi-person_visual_tracker/
+
+#### Status
+
+**Current release: devel** 
+
+#### Inputs/outputs
+
+ - Input: `/front_camera_basestation/fisheye/image_raw/compressed [sensor_msgs/CompressedImage]` (topic)
+
+ - Output: `/h/b/*/cropped [sensor_msgs/Image]` (topic)
+ - Output: `/h/b/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Output: `/humans/bodies/tracked [hri_msgs/IdsList]` (topic)
+ - Output: `/tracker/tracker_output [std_msgs/String]` (topic)
+
+#### Dependencies
+
+- `sensor_msgs/CompressedImage`
+- `sensor_msgs/Image`
+- `std_msgs/Empty`
+- `hri_msgs/IdsList`
+- `std_msgs/String`
+
+
+---
+
+### people_facts
+
+Node *people_facts* (id: `people_facts`) is overseen by PAL.
 
 
 
@@ -144,37 +303,43 @@ Node * InLoc Server* (id: `inlocserver`) is overseen by CVUT.
 
 #### Inputs/outputs
 
+ - Input: `/h/p/...` (undefined)
 
+ - Output: `/kb/add_fact [std_msgs/String]` (topic)
 
 #### Dependencies
 
+- `std_msgs/Empty`
+- `std_msgs/String`
 
 
 ---
 
-### spring_msgs
+### raspicam
+
+Node *raspicam* (id: `raspicam`) is overseen by PAL.
 
-Node * spring_msgs* (id: `spring_msgs`) is overseen by PAL.
 
-REPO:git@gitlab.inria.fr:spring/wp7_ari/spring_msgs.git NOT EXECUTABLE
 
 #### Status
 
-**Current release: 0.0.2** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
 
+ - Output: `/head_front_camera/color/image_raw [sensor_msgs/Image]` (topic)
 
 #### Dependencies
 
+- `sensor_msgs/Image`
 
 
 ---
 
-### userattentionestimation
+### google_translate
 
-Node *User attention estimation* (id: `userattentionestimation`) is overseen by UNITN.
+Node *google_translate* (id: `google_translate`) is overseen by HWU.
 
 
 
@@ -184,17 +349,86 @@ Node *User attention estimation* (id: `userattentionestimation`) is overseen by
 
 #### Inputs/outputs
 
- - Input: `TF (faces)` (undefined)
- - Input: `/h/f/*/roi [hri_msgs/RegionOfInterest]` (topic)
+ - Input: `/get_answer` (undefined)
 
- - Output: `tf: /face_id_gaze` (tf)
- - Output: `x,y + attention heatmap` (undefined)
+ - Output: `/response` (undefined)
 
 #### Dependencies
 
-- `tf/transform_broadcaster`
 - `std_msgs/Empty`
-- `hri_msgs/RegionOfInterest`
+
+
+---
+
+### front_fisheye_basestation_node
+
+Node *front_fisheye_basestation_node* (id: `front_fisheye_basestation_node`) is overseen by INRIA.
+
+
+
+#### Status
+
+**This node is currently auto-generated (mock-up)** 
+
+#### Inputs/outputs
+
+ - Input: `/front_camera/fisheye/image_raw/compressed [sensor_msgs/CompressedImage]` (topic)
+ - Input: `/head_front_camera/color/image_raw/compressed` (undefined)
+ - Input: `/torso_front_camera/aligned_depth_to_color/image_raw/theora` (undefined)
+ - Input: `/torso_front_camera/color/image_raw/theora` (undefined)
+
+ - Output: `/*_basestation/...` (undefined)
+
+#### Dependencies
+
+- `std_msgs/Empty`
+- `sensor_msgs/CompressedImage`
+
+
+---
+
+### controller_node
+
+Node *controller_node* (id: `controller_node`) is overseen by INRIA.
+
+The code is primarily developed at INRIA by Timothée Wintz.
+REPO: https://gitlab.inria.fr/spring/wp6_robot_behavior/robot_behavior
+SUBFOLDER:src/robot_behavior
+
+#### Status
+
+**Current release: devel** 
+
+#### Inputs/outputs
+
+ - Input: `/go_towards [GoTowards]` (topic)
+ - Input: `/joint_states` (undefined)
+ - Input: `tf: /person_id` (tf)
+ - Input: `/humans/persons/tracked` (undefined)
+ - Input: `/look_at [LookAt]` (topic)
+ - Input: `/rtabmap/proj_map [OccupancyGrid]` (topic)
+ - Input: `/rtabmap/rtabmap_local_map_rectified [OccupancyGrid]` (topic)
+ - Input: `/navigate [Navigate]` (topic)
+ - Input: `status` (undefined)
+
+ - Output: `/h/i/groups [hri_msgs/Group]` (topic)
+ - Output: `tf: ?` (tf)
+ - Output: `/nav_vel [Twist]` (topic)
+ - Output: `status` (undefined)
+ - Output: `/head_controller/command [JointTrajectory]` (topic)
+
+#### Dependencies
+
+- `hri_msgs/Group`
+- `GoTowards/GoTowards`
+- `std_msgs/Empty`
+- `tf/transform_listener`
+- `tf/transform_broadcaster`
+- `LookAt/LookAt`
+- `OccupancyGrid/OccupancyGrid`
+- `Twist/Twist`
+- `Navigate/Navigate`
+- `JointTrajectory/JointTrajectory`
 
 
 ---
@@ -212,16 +446,13 @@ SUBFOLDER:interaction_manager
 
 #### Inputs/outputs
 
- - Input: `semantic scene description` (undefined)
- - Input: `/h/p/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
  - Input: `TF` (undefined)
  - Input: `robot state` (undefined)
+ - Input: `/h/p/*/softbiometrics [hri_msgs/SoftBiometrics]` (topic)
  - Input: `dialogue state` (undefined)
+ - Input: `input` (undefined)
+ - Input: `semantic scene description` (undefined)
 
- - Output: `verbal command` (undefined)
- - Output: `nav goals` (undefined)
- - Output: `who to look at` (undefined)
- - Output: `active personID` (undefined)
  - Output: `gestures` (undefined)
 
 #### Dependencies
@@ -232,9 +463,9 @@ SUBFOLDER:interaction_manager
 
 ---
 
-### objectdetectionidentificationlocalisation
+### go_towards_action_server
 
-Node *Object detection/identification/localisation* (id: `objectdetectionidentificationlocalisation`) is overseen by CVUT.
+Node *go_towards_action_server* (id: `go_towards_action_server`) is overseen by HWU.
 
 
 
@@ -244,21 +475,85 @@ Node *Object detection/identification/localisation* (id: `objectdetectionidentif
 
 #### Inputs/outputs
 
- - Input: `/camera_head/color/image_raw [sensor_msgs/Image]` (topic)
+ - Input: `goal` (undefined)
+ - Input: `/controller_status [ControllerStatus]` (topic)
 
- - Output: `/detected_objects [spring_msgs/DetectedObjectArray]` (topic)
+ - Output: `/go_towards [GoTowards]` (topic)
 
 #### Dependencies
 
-- `spring_msgs/DetectedObjectArray`
-- `sensor_msgs/Image`
+- `GoTowards/GoTowards`
+- `std_msgs/Empty`
+- `ControllerStatus/ControllerStatus`
 
 
 ---
 
-### voicespeechmatching
+### republish_from_rtab_map_node
 
-Node *Voice speech matching* (id: `voicespeechmatching`) is overseen by BIU.
+Node *republish_from_rtab_map_node* (id: `republish_from_rtab_map_node`) is overseen by INRIA.
+
+Purpose: provide a local cost map from the global map.
+
+#### Status
+
+**Current release: 0.0.1** 
+
+#### Inputs/outputs
+
+ - Input: `/rtabmap/proj_map [OccupancyGrid]` (topic)
+ - Input: `/tracked_pose_2df/frame [openpose/Frame]` (topic)
+
+ - Output: `tf: ?` (tf)
+ - Output: `/rtabmap/rtabmap_local_map_rectified [OccupancyGrid]` (topic)
+
+#### Dependencies
+
+- `OccupancyGrid/OccupancyGrid`
+- `tf/transform_broadcaster`
+- `openpose/Frame`
+
+
+---
+
+### track_frame_node
+
+Node *track_frame_node* (id: `track_frame_node`) is overseen by INRIA.
+
+Tracks people in 3D (position/orientation).
+
+#### Status
+
+**Current release: 0.0.1** 
+
+#### Inputs/outputs
+
+ - Input: `/tracked_pose_2df/frame [openpose/Frame] [feet position]` (topic)
+
+ - Output: `/h/b/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Output: `/humans/bodies/tracked [hri_msgs/IdsList]` (topic)
+ - Output: `/h/b/*/cropped [sensor_msgs/Image]` (topic)
+ - Output: `/humans/candidate_matches [hri_msgs/IdsMatch] [anonymous bodies]` (topic)
+ - Output: `tf: /body_id` (tf)
+ - Output: `/h/b/*/skeleton_2d [hri_msgs/Skeleton2D]` (topic)
+
+#### Dependencies
+
+- `std_msgs/Empty`
+- `hri_msgs/IdsList`
+- `sensor_msgs/Image`
+- `hri_msgs/IdsMatch`
+- `tf/transform_broadcaster`
+- `hri_msgs/Skeleton2D`
+
+
+---
+
+### knowledge_core
+
+Node *knowledge_core* (id: `knowledge_core`) is overseen by PAL.
 
 
 
@@ -268,26 +563,23 @@ Node *Voice speech matching* (id: `voicespeechmatching`) is overseen by BIU.
 
 #### Inputs/outputs
 
- - Input: `/audio/speech_streams [std_msgs/String]` (topic)
- - Input: `/h/v/*/audio [audio_common_msgs/AudioData]` (topic)
+ - Input: `/kb/add_fact [std_msgs/String]` (topic)
 
- - Output: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
+ - Output: `service: /kb/query` (undefined)
 
 #### Dependencies
 
-- `hri_msgs/LiveSpeech`
+- `std_msgs/Empty`
 - `std_msgs/String`
-- `audio_common_msgs/AudioData`
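
The `people_facts` → `knowledge_core` path above moves facts as plain strings on `/kb/add_fact`, with retrieval through the `/kb/query` service. A minimal sketch of such a string-based fact store (plain Python; the `"subject predicate object"` triple format is an assumption for illustration, not specified by this document):

```python
class FactStore:
    """Toy knowledge base: facts are 'subject predicate object' strings."""

    def __init__(self):
        self.facts = set()

    def add_fact(self, fact):
        # mirrors publishing a std_msgs/String on /kb/add_fact
        self.facts.add(tuple(fact.split()))

    def query(self, pattern):
        # mirrors a call to the /kb/query service; '*' matches any term
        p = pattern.split()
        return [f for f in self.facts
                if len(f) == len(p)
                and all(t == "*" or t == ft for t, ft in zip(p, f))]
```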
 
 
 ---
 
-### speakeridentification
+### semanticmapping
+
+Node *Semantic mapping* (id: `semanticmapping`) is overseen by CVUT.
 
-Node *Speaker identification* (id: `speakeridentification`) is overseen by BIU.
 
-- online services
-- not started yet
 
 #### Status
 
@@ -295,112 +587,169 @@ Node *Speaker identification* (id: `speakeridentification`) is overseen by BIU.
 
 #### Inputs/outputs
 
- - Input: `tracking information` (undefined)
- - Input: `/audio/postprocess_audio_streams [audio_common_msgs/AudioData]` (topic)
+ - Input: `dense 3d map` (undefined)
+ - Input: `/detected_objects [spring_msgs/DetectedObjectArray]` (topic)
 
- - Output: `person_id_candidate [hri_msgs/IdsMatch]` (undefined)
- - Output: `/h/v/*/audio [audio_common_msgs/AudioData]` (topic)
+ - Output: `scene graph` (undefined)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `audio_common_msgs/AudioData`
+- `spring_msgs/DetectedObjectArray`
 
 
 ---
 
-### mask_detector
+### social_strategy_supervisor
+
+Node *social_strategy_supervisor* (id: `social_strategy_supervisor`) is overseen by HWU.
 
-Node *mask_detector* (id: `mask_detector`) is overseen by UNITN.
 
-Detects presence of a facial mask
-REPO:git@gitlab.inria.fr:spring/wp4_behavior/wp4_behavior_understanding.git
-SUBFOLDER:wp4_people_characteristics
-BIN:mask_detector.py
 
 #### Status
 
-**Current release: master** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `/h/f/*/roi [sensor_msgs/RegionOfInterest]` (topic)
 
- - Output: `/h/f/*/mask [std_msgs/Bool]` (topic)
+ - Output: `output?` (undefined)
 
 #### Dependencies
 
-- `sensor_msgs/RegionOfInterest`
-- `std_msgs/Bool`
+- `std_msgs/Empty`
 
 
 ---
 
-### hri_person_manager
+### userattentionestimation
+
+Node *User attention estimation* (id: `userattentionestimation`) is overseen by UNITN.
 
-Node *hri_person_manager* (id: `hri_person_manager`) is overseen by PAL.
 
-REPO:git@gitlab.inria.fr:spring/wp7_ari/hri_person_manager.git
 
 #### Status
 
-**Current release: master** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
  - Input: `TF (faces)` (undefined)
- - Input: `/h/f/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
- - Input: `candidate_matches [hri_msgs/IdsMatch]` (undefined)
- - Input: `TF (voices)` (undefined)
+ - Input: `/h/f/*/roi [hri_msgs/RegionOfInterest]` (topic)
 
- - Output: `/h/p/*/voice_id [std_msgs/String]` (topic)
- - Output: `/h/p/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
- - Output: `/h/p/*/face_id [std_msgs/String]` (topic)
- - Output: `/humans/persons/*/body_id [std_msgs/String]` (topic)
- - Output: `tf: /person_id` (tf)
+ - Output: `x,y + attention heatmap` (undefined)
+ - Output: `tf: /face_id_gaze` (tf)
 
 #### Dependencies
 
-- `std_msgs/String`
 - `std_msgs/Empty`
-- `hri_msgs/Softbiometrics`
+- `hri_msgs/RegionOfInterest`
 - `tf/transform_broadcaster`
 
 
 ---
 
-### soft_biometrics_estimator
+### spring_msgs
 
-Node *soft_biometrics_estimator* (id: `soft_biometrics_estimator`) is overseen by UNITN.
+Node *spring_msgs* (id: `spring_msgs`) is overseen by PAL.
 
-Detects age/gender
-REPO:git@gitlab.inria.fr:spring/wp4_behavior/wp4_behavior_understanding.git
-SUBFOLDER:wp4_people_characteristics
-BIN:soft_biometrics_estimator.py
+REPO:git@gitlab.inria.fr:spring/wp7_ari/spring_msgs.git NOT EXECUTABLE
 
 #### Status
 
-**Current release: master** 
+**Current release: 0.0.2** 
+
+#### Inputs/outputs
+
+
+
+#### Dependencies
+
+
+
+---
+
+### speech
+
+Node *speech* (id: `speech`) is overseen by HWU.
+
+REPO:https://gitlab.inria.fr/spring/wp5_spoken_conversations/asr
+SUBFOLDER:google_asr/google_asr
+
+#### Status
+
+**Current release: BIU_dev** 
+
+#### Inputs/outputs
+
+ - Input: `/audio/postprocess_audio_streams [audio_common_msgs/AudioData]` (topic)
+
+ - Output: `/audio/speech_streams [std_msgs/String]` (topic)
+
+#### Dependencies
+
+- `audio_common_msgs/AudioData`
+- `std_msgs/String`
+
+
+---
+
+### gaze_estimation
+
+Node *gaze_estimation* (id: `gaze_estimation`) is overseen by UNITN.
+
+
+
+#### Status
+
+**This node is currently auto-generated (mock-up)** 
+
+#### Inputs/outputs
+
+ - Input: `/h/f/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Input: `vision_msgs/depth_estimation [DepthFrame]` (undefined)
+
+ - Output: `GazeFrame [2D point in heatmap]` (undefined)
+
+#### Dependencies
+
+- `sensor_msgs/RegionOfInterest`
+- `std_msgs/Empty`
+
+
+---
+
+### nonverbalbehaviours
+
+Node *Non-verbal behaviours* (id: `nonverbalbehaviours`) is overseen by UNITN.
+
+
+
+#### Status
+
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
+ - Input: `/h/v/*/raw_audio [spring_msgs/RawAudioData]` (topic)
  - Input: `/h/f/*/roi [hri_msgs/RegionOfInterest]` (topic)
 
- - Output: `/h/f/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
+ - Output: `/h/f/*/expression [hri_msgs/Expression]` (topic)
 
 #### Dependencies
 
+- `spring_msgs/RawAudioData`
+- `hri_msgs/Expression`
 - `hri_msgs/RegionOfInterest`
-- `hri_msgs/Softbiometrics`
 
 
 ---
 
-### rosmediapipeopenpose
+### alana_node
+
+Node *alana_node* (id: `alana_node`) is overseen by HWU.
 
-Node *ROS mediapipe/openpose* (id: `rosmediapipeopenpose`) is overseen by INRIA.
 
-MOCK: ROS mediapipe/openpose is...
 
 #### Status
 
@@ -408,9 +757,9 @@ MOCK: ROS mediapipe/openpose is...
 
 #### Inputs/outputs
 
- - Input: `input` (undefined)
+ - Input: `/get_answer` (undefined)
 
- - Output: `output` (undefined)
+ - Output: `/response` (undefined)
 
 #### Dependencies
 
@@ -419,11 +768,11 @@ MOCK: ROS mediapipe/openpose is...
 
 ---
 
-### people3dtracker
+### torso_rgbd_camera
+
+Node *torso_rgbd_camera* (id: `torso_rgbd_camera`) is overseen by PAL.
 
-Node *People 3D tracker* (id: `people3dtracker`) is overseen by INRIA.
 
-MOCK: People 3D tracker [position/orientation] is...
 
 #### Status
 
@@ -431,23 +780,22 @@ MOCK: People 3D tracker [position/orientation] is...
 
 #### Inputs/outputs
 
- - Input: `ground plane` (undefined)
- - Input: `people RoIs` (undefined)
- - Input: `feet position` (undefined)
 
- - Output: `tf: /body_id` (tf)
+ - Output: `/torso_front_camera/imu` (undefined)
+ - Output: `/torso_front_camera/infra*/*` (undefined)
+ - Output: `/torso_front_camera/color/image_raw [sensor_msgs/Image]` (topic)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `tf/transform_broadcaster`
+- `sensor_msgs/Image`
 
 
 ---
 
-### fformation
+### objectdetectionidentificationlocalisation
 
-Node *F-formation* (id: `fformation`) is overseen by UNITN.
+Node *Object detection/identification/localisation* (id: `objectdetectionidentificationlocalisation`) is overseen by CVUT.
 
 
 
@@ -457,47 +805,69 @@ Node *F-formation* (id: `fformation`) is overseen by UNITN.
 
 #### Inputs/outputs
 
- - Input: `/h/i/gaze [hri_msgs/Gaze]` (topic)
- - Input: `tf: /person_id` (tf)
+ - Input: `/camera_head/color/image_raw [sensor_msgs/Image]` (topic)
 
- - Output: `/h/i/groups [hri_msgs/Group]` (topic)
+ - Output: `/detected_objects [spring_msgs/DetectedObjectArray]` (topic)
 
 #### Dependencies
 
-- `hri_msgs/Group`
-- `hri_msgs/Gaze`
-- `tf/transform_listener`
+- `sensor_msgs/Image`
+- `spring_msgs/DetectedObjectArray`
 
 
 ---
 
-### respeaker_ros
+### inlocserver
+
+Node *InLoc Server* (id: `inlocserver`) is overseen by CVUT.
 
-Node *respeaker_ros* (id: `respeaker_ros`) is overseen by PAL.
 
-REPO:git@gitlab.inria.fr:spring/wp7_ari/respeaker_ros.git BIN:respeaker_multichan_node.py
 
 #### Status
 
-**Current release: master** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
 
- - Output: `/audio/ego_audio [audio_common_msgs/AudioData]` (topic)
- - Output: `/audio/raw_audio [respeaker_ros/RawAudioData]` (topic)
 
 #### Dependencies
 
-- `audio_common_msgs/AudioData`
-- `respeaker_ros/RawAudioData`
 
 
 ---
 
-### fisheye
+### inlocros
+
+Node *InLoc-ROS* (id: `inlocros`) is overseen by CVUT.
+
+
+
+#### Status
+
+**This node is currently auto-generated (mock-up)** 
+
+#### Inputs/outputs
+
+ - Input: `localisation prior` (undefined)
+ - Input: `/camera_torso/color/image_torso [sensor_msgs/Image]` (topic)
+ - Input: `/camera_head/color/image_head [sensor_msgs/Image]` (topic)
+
+ - Output: `dense 3d map` (undefined)
+ - Output: `tf: /odom` (tf)
+
+#### Dependencies
+
+- `std_msgs/Empty`
+- `sensor_msgs/Image`
+- `tf/transform_broadcaster`
+
+
+---
+
+### voicebodymatching
 
-Node *fisheye* (id: `fisheye`) is overseen by PAL.
+Node *Voice-body matching* (id: `voicebodymatching`) is overseen by BIU.
 
 
 
@@ -507,46 +877,46 @@ Node *fisheye* (id: `fisheye`) is overseen by PAL.
 
 #### Inputs/outputs
 
+ - Input: `tf: /voice_id` (tf)
+ - Input: `tf: /body_id` (tf)
 
- - Output: `/torso_front_camera/color/image_raw [sensor_msgs/Image]` (topic)
+ - Output: `/humans/candidate_matches [hri_msgs/IdsMatch] [body<->voice]` (topic)
 
 #### Dependencies
 
-- `sensor_msgs/Image`
+- `tf/transform_listener`
+- `hri_msgs/IdsMatch`
 
 
 ---
 
-### dialoguearbiter
+### depth_estimation
+
+Node *depth_estimation* (id: `depth_estimation`) is overseen by UNITN.
 
-Node *dialogue arbiter* (id: `dialoguearbiter`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/dialogue.git
-SUBFOLDER:dialogue_arbiter
 
 #### Status
 
-**Current release: spring_dev** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `interaction messages` (undefined)
- - Input: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
+ - Input: `/head_front_camera/color/image_raw/compressed [sensor_msgs/CompressedImage]` (topic)
 
- - Output: `next utterance` (undefined)
- - Output: `DialogueState` (undefined)
+ - Output: `vision_msgs/depth_estimation [DepthFrame]` (undefined)
 
 #### Dependencies
 
+- `sensor_msgs/CompressedImage`
 - `std_msgs/Empty`
-- `hri_msgs/LiveSpeech`
 
 
 ---
 
-### facedetection
+### rtabmap
 
-Node *Face detection* (id: `facedetection`) is overseen by UNITN.
+Node *rtabmap* (id: `rtabmap`) is overseen by INRIA.
 
 
 
@@ -556,23 +926,24 @@ Node *Face detection* (id: `facedetection`) is overseen by UNITN.
 
 #### Inputs/outputs
 
- - Input: `/camera_head/color/image_raw [sensor_msgs/Image]` (topic)
+ - Input: `/torso_front_camera/imu` (undefined)
+ - Input: `torso_front_camera/infra*/*` (undefined)
 
- - Output: `/h/f/*/cropped [sensor_msg/Image]` (topic)
- - Output: `/h/f/*/roi [sensor_msgs/RegionOfInterest]` (topic)
+ - Output: `/rtabmap/proj_map [OccupancyGrid]` (topic)
+ - Output: `tf: /odom` (tf)
 
 #### Dependencies
 
-- `sensor_msg/Image`
-- `sensor_msgs/RegionOfInterest`
-- `sensor_msgs/Image`
+- `OccupancyGrid/OccupancyGrid`
+- `std_msgs/Empty`
+- `tf/transform_broadcaster`
 
 
 ---
 
-### depthestimationfrommonocular
+### robotfunctionallayer
 
-Node *Depth estimation from monocular* (id: `depthestimationfrommonocular`) is overseen by UNITN.
+Node *Robot functional layer* (id: `robotfunctionallayer`) is overseen by PAL.
 
 
 
@@ -584,7 +955,7 @@ Node *Depth estimation from monocular* (id: `depthestimationfrommonocular`) is o
 
  - Input: `input` (undefined)
 
- - Output: `depth` (undefined)
+ - Output: `/joint_states` (undefined)
 
 #### Dependencies
 
@@ -593,9 +964,9 @@ Node *Depth estimation from monocular* (id: `depthestimationfrommonocular`) is o
 
 ---
 
-### robotfunctionallayer
+### ros_petri_net_node
 
-Node *Robot functional layer* (id: `robotfunctionallayer`) is overseen by PAL.
+Node *ros_petri_net_node* (id: `ros_petri_net_node`) is overseen by HWU.
 
 
 
@@ -605,8 +976,11 @@ Node *Robot functional layer* (id: `robotfunctionallayer`) is overseen by PAL.
 
 #### Inputs/outputs
 
- - Input: `input` (undefined)
+ - Input: `plan` (undefined)
 
+ - Output: `look_at goals` (undefined)
+ - Output: `nav goals` (undefined)
+ - Output: `go_towards goals` (undefined)
 
 #### Dependencies
 
@@ -615,43 +989,39 @@ Node *Robot functional layer* (id: `robotfunctionallayer`) is overseen by PAL.
 
 ---
 
-### fairmotmultipeoplebodytracker
+### social_state_analyzer
+
+Node *social_state_analyzer* (id: `social_state_analyzer`) is overseen by HWU.
 
-Node *FairMOT Multi-people body tracker* (id: `fairmotmultipeoplebodytracker`) is overseen by INRIA.
 
-This code is primarily developed at INRIA by Luis Gomez Camara.
-REPO: https://gitlab.inria.fr/spring/wp3_av_perception/multi-person_visual_tracker/
 
 #### Status
 
-**Current release: devel** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `/camera_head/color/image_raw [sensor_msgs/Image]` (topic)
+ - Input: `/humans/persons/tracked` (undefined)
 
- - Output: `/h/b/*/cropped [sensor_msg/Image]` (topic)
- - Output: `/h/b/*/roi [sensor_msgs/RegionOfInterest` (undefined)
+ - Output: `output?` (undefined)
 
 #### Dependencies
 
-- `sensor_msg/Image`
-- `sensor_msgs/Image`
 - `std_msgs/Empty`
 
 
 ---
 
-### mode
+### audio_processing_mode
 
-Node *MoDE* (id: `mode`) is overseen by BIU.
+Node *audio_processing_mode* (id: `audio_processing_mode`) is overseen by BIU.
+
+[update 05/22]
+- single microphone improvement
 
 This node does:
-- speech echo cancelation,
-- speech enhancement,
-- speech separation and diarization
-REPO:https://gitlab.inria.fr/spring/wp3_av_perception/speech-enhancement
-SUBFOLDER:audio_processing 
+
+- speech echo cancellation,
+- speech enhancement,
+- speech separation and diarization
+
+REPO:https://gitlab.inria.fr/spring/wp3_av_perception/speech-enhancement
+SUBFOLDER:audio_processing
 
 #### Status
 
@@ -659,24 +1029,24 @@ SUBFOLDER:audio_processing
 
 #### Inputs/outputs
 
- - Input: `sound localization` (undefined)
  - Input: `/audio/ego_audio [audio_common_msgs/AudioData]` (topic)
- - Input: `/audio/raw_audio [respeaker_ros/RawAudioData]` (topic)
+ - Input: `sound localization` (undefined)
+ - Input: `/audio/raw_audio [spring_msgs/RawAudioData]` (topic)
 
- - Output: `/audio/postprocess_audio_streams [audio_common_msgs/AudioData]` (topic)
+ - Output: `/audio/enh_audio [spring_msgs/RawAudioData]` (topic)
 
 #### Dependencies
 
+- `spring_msgs/RawAudioData`
 - `audio_common_msgs/AudioData`
 - `std_msgs/Empty`
-- `respeaker_ros/RawAudioData`
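The node's three functions are listed above but not specified. As a rough, illustrative sketch of the first one only, echo cancellation is classically done with a normalized LMS (NLMS) adaptive filter that learns the echo path from the far-end reference and subtracts the estimated echo from the microphone signal. The code below is a minimal stdlib demonstration under assumed parameters (8 taps, step size 0.05), not the SPRING implementation:

```python
import random

def nlms_echo_cancel(reference, mic, taps=8, mu=0.05):
    """Subtract an adaptively estimated echo of `reference` from `mic`.

    Returns the residual signal (ideally the near-end speech only).
    """
    w = [0.0] * taps              # adaptive FIR weights
    buf = [0.0] * taps            # recent reference samples, newest first
    residual = []
    for x, d in zip(reference, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))       # echo estimate
        e = d - y                                        # cancellation residual
        norm = sum(xi * xi for xi in buf) + 1e-8
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]  # NLMS update
        residual.append(e)
    return residual

# Simulate an echo path: the far-end signal leaks into the mic
# as a 0.6-gain copy delayed by 2 samples.
random.seed(0)
far_end = [random.uniform(-1, 1) for _ in range(5000)]
mic = [0.0, 0.0] + [0.6 * x for x in far_end[:-2]]
residual = nlms_echo_cancel(far_end, mic)

# After convergence, the residual energy is far below the echo energy.
echo_tail = sum(x * x for x in mic[-1000:])
res_tail = sum(x * x for x in residual[-1000:])
print(res_tail < 0.01 * echo_tail)
```

In the node itself the reference would come from `/audio/ego_audio` and the mic signal from `/audio/raw_audio`; the simulated arrays here stand in for both.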
 
 
 ---
 
-### speechsynthesis
+### dialogue_say
 
-Node *Speech synthesis* (id: `speechsynthesis`) is overseen by PAL.
+Node *dialogue_say* (id: `dialogue_say`) is overseen by PAL.
 
 
 
@@ -697,11 +1067,12 @@ Node *Speech synthesis* (id: `speechsynthesis`) is overseen by PAL.
 
 ---
 
-### uservisualfocus
-
-Node *User visual focus* (id: `uservisualfocus`) is overseen by UNITN.
+### speakeridentification
 
+Node *Speaker identification* (id: `speakeridentification`) is overseen by BIU.
 
+- online services
+- not started yet
 
 #### Status
 
@@ -709,25 +1080,28 @@ Node *User visual focus* (id: `uservisualfocus`) is overseen by UNITN.
 
 #### Inputs/outputs
 
- - Input: `scene` (undefined)
- - Input: `attention` (undefined)
- - Input: `depth` (undefined)
- - Input: `gaze direction` (undefined)
+ - Input: `/audio/audio_sources  [spring_msgs/RawAudioData[]]` (topic)
+ - Input: `tracking information` (undefined)
 
- - Output: `who's looking at what?` (undefined)
- - Output: `/h/i/gaze [hri_msgs/Gaze]` (topic)
+ - Output: `/h/v/*/doa` (undefined)
+ - Output: `/humans/candidate_matches [hri_msgs/IdsMatch] [person<->voice]` (topic)
+ - Output: `tf: /voice_id` (tf)
+ - Output: `/h/v/*/raw_audio [spring_msgs/RawAudioData]` (topic)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `hri_msgs/Gaze`
+- `person<->voice/person<->voice`
+- `tf/transform_broadcaster`
+- `spring_msgs/RawAudioData[]`
+- `spring_msgs/RawAudioData`
 
 
 ---
 
-### orbslam
+### robotgui
 
-Node *ORB SLAM* (id: `orbslam`) is overseen by PAL.
+Node *Robot GUI* (id: `robotgui`) is overseen by ERM.
 
 
 
@@ -737,54 +1111,51 @@ Node *ORB SLAM* (id: `orbslam`) is overseen by PAL.
 
 #### Inputs/outputs
 
- - Input: `/camera_torso/color/image_raw [sensor_msgs/Image]` (topic)
+ - Input: `speech output` (undefined)
+ - Input: `/tts/feedback` (undefined)
+ - Input: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
+ - Input: `additional support material` (undefined)
 
- - Output: `/map [nav_msgs/OccupancyGrid]` (topic)
- - Output: `tf: /odom` (tf)
 
 #### Dependencies
 
-- `nav_msgs/OccupancyGrid`
-- `tf/transform_broadcaster`
-- `sensor_msgs/Image`
+- `std_msgs/Empty`
+- `hri_msgs/LiveSpeech`
 
 
 ---
 
-### robot_behavior
+### ros_mediapipe_node
 
-Node *robot_behavior* (id: `robot_behavior`) is overseen by INRIA.
+Node *ros_mediapipe_node* (id: `ros_mediapipe_node`) is overseen by INRIA.
 
-The code is primarily developed at INRIA by Timothée Wintz.
-REPO: https://gitlab.inria.fr/spring/wp6_robot_behavior/robot_behavior
-SUBFOLDER:src/robot_behavior
+MOCK: ROS mediapipe/openpose is...
 
 #### Status
 
-**Current release: devel** 
+**Current release: 0.0.1** 
 
 #### Inputs/outputs
 
- - Input: `following/nav goals` (undefined)
- - Input: `status` (undefined)
- - Input: `/h/i/groups [hri_msgs/Group]` (topic)
- - Input: `look at` (undefined)
- - Input: `occupancy map` (undefined)
+ - Input: `/front_camera_basetation/fisheye/image_raw/compressed [sensor_msgs/CompressedImage]` (topic)
+ - Input: `/tracker/tracker_output [std_msgs/String]` (topic)
 
- - Output: `status` (undefined)
- - Output: `low-level actions` (undefined)
+ - Output: `/tracked_pose_2df/image_raw/compressed` (undefined)
+ - Output: `/tracked_pose_2df/frame  [openpose/Frame]` (topic)
 
 #### Dependencies
 
+- `sensor_msgs/CompressedImage`
 - `std_msgs/Empty`
-- `hri_msgs/Group`
+- `openpose/Frame`
+- `std_msgs/String`
 
 
 ---
 
-### semanticmapping
+### speakerseparationdiarization
 
-Node *Semantic mapping* (id: `semanticmapping`) is overseen by CVUT.
+Node *Speaker separation/diarization* (id: `speakerseparationdiarization`) is overseen by BIU.
 
 
 
@@ -794,46 +1165,47 @@ Node *Semantic mapping* (id: `semanticmapping`) is overseen by CVUT.
 
 #### Inputs/outputs
 
- - Input: `dense 3d map` (undefined)
- - Input: `/detected_objects [spring_msgs/DetectedObjectArray]` (topic)
+ - Input: `/audio/enh_audio [spring_msgs/RawAudioData]` (topic)
 
- - Output: `scene graph` (undefined)
+ - Output: `/audio/postprocess_audio_streams [audio_common_msgs/AudioData[]]` (topic)
 
 #### Dependencies
 
-- `std_msgs/Empty`
-- `spring_msgs/DetectedObjectArray`
+- `spring_msgs/RawAudioData`
+- `audio_common_msgs/AudioData[]`
 
 
 ---
 
-### social_scene_context_understanding
+### look_at_action_server
+
+Node *look_at_action_server* (id: `look_at_action_server`) is overseen by HWU.
 
-Node *social_scene_context_understanding* (id: `social_scene_context_understanding`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/interaction.git
-SUBFOLDER:social_scene_context_understanding
 
 #### Status
 
-**Current release: spring_dev** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `scene graph` (undefined)
+ - Input: `goal` (undefined)
+ - Input: `/controller_status [ControllerStatus]` (topic)
 
- - Output: `semantic description` (undefined)
+ - Output: `/look_at [LookAt]` (topic)
 
 #### Dependencies
 
 - `std_msgs/Empty`
+- `LookAt/LookAt`
+- `ControllerStatus/ControllerStatus`
 
 
 ---
 
-### robotgui
+### navigate_action_server
 
-Node *Robot GUI* (id: `robotgui`) is overseen by ERM.
+Node *navigate_action_server* (id: `navigate_action_server`) is overseen by HWU.
 
 
 
@@ -843,48 +1215,47 @@ Node *Robot GUI* (id: `robotgui`) is overseen by ERM.
 
 #### Inputs/outputs
 
- - Input: `speech output` (undefined)
- - Input: `additional support material` (undefined)
- - Input: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
- - Input: `/tts/feedback` (undefined)
+ - Input: `goal` (undefined)
+ - Input: `/controller_status [ControllerStatus]` (topic)
 
+ - Output: `/navigate [Navigate]` (topic)
 
 #### Dependencies
 
+- `Navigate/Navigate`
 - `std_msgs/Empty`
-- `hri_msgs/LiveSpeech`
+- `ControllerStatus/ControllerStatus`
 
 
 ---
 
-### ros_openpose
+### soundsourcelocalisation
+
+Node *sound source localisation* (id: `soundsourcelocalisation`) is overseen by BIU.
 
-Node *ros_openpose* (id: `ros_openpose`) is overseen by UNITN.
 
-REPO:git@gitlab.inria.fr:spring/wp4_behavior/wp4_behavior_understanding.git
-SUBFOLDER:ros_openpose
 
 #### Status
 
-**Current release: 0.0.1** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `/h/b/*/cropped [sensor_msg/Image]` (topic)
+ - Input: `/audio/raw_audio [spring_msgs/RawAudioData]` (topic)
 
- - Output: `/h/b/*/skeleton2d [hri_msgs/Skeleton2D]` (topic)
+ - Output: `/audio/audio_sources  [spring_msgs/RawAudioData[]]` (topic)
 
 #### Dependencies
 
-- `sensor_msg/Image`
-- `hri_msgs/Skeleton2D`
+- `spring_msgs/RawAudioData`
+- `spring_msgs/RawAudioData[]`
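The mock-up does not name an algorithm for turning raw multi-channel audio into localized sources. A classic baseline is time-difference-of-arrival (TDOA): pick the inter-channel lag with maximal cross-correlation and map the delay to an angle. The sketch below assumes a two-microphone pair with 0.1 m spacing at 16 kHz; the geometry, names, and method are illustrative, not this node's implementation:

```python
import math
import random

def estimate_doa(left, right, fs=16000, d=0.1, c=343.0):
    """Estimate a sound source's bearing from a two-microphone pair.

    Picks the inter-channel lag with maximal cross-correlation (TDOA),
    then maps the delay to an angle via sin(theta) = c * tau / d.
    A positive lag means the wavefront reaches the left mic first.
    """
    n = min(len(left), len(right))
    max_lag = int(fs * d / c)          # largest physically possible lag

    def score(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(max(0, -lag), n - max(0, lag)))

    best_lag = max(range(-max_lag, max_lag + 1), key=score)
    tau = best_lag / fs                # delay in seconds
    return math.degrees(math.asin(max(-1.0, min(1.0, c * tau / d))))

# Simulate a source whose wavefront hits the left mic 3 samples early.
random.seed(1)
sig = [random.uniform(-1, 1) for _ in range(2000)]
left = sig
right = [0.0] * 3 + sig[:-3]           # right channel delayed by 3 samples
angle = estimate_doa(left, right)
print(round(angle, 1))
```

A 3-sample delay at 16 kHz over 0.1 m corresponds to roughly a 40-degree bearing; real deployments typically use more robust correlation (e.g. GCC-PHAT) and more microphones.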
 
 
 ---
 
-### activityreco
+### orbslam
 
-Node *Activity reco* (id: `activityreco`) is overseen by UNITN.
+Node *ORB SLAM* (id: `orbslam`) is overseen by PAL.
 
 
 
@@ -894,51 +1265,48 @@ Node *Activity reco* (id: `activityreco`) is overseen by UNITN.
 
 #### Inputs/outputs
 
- - Input: `TF (bodies)` (undefined)
- - Input: `gaze direction` (undefined)
+ - Input: `/camera_torso/color/image_raw [sensor_msgs/Image]` (topic)
 
- - Output: `[?] output` (undefined)
+ - Output: `tf: /odom` (tf)
+ - Output: `/map [nav_msgs/OccupancyGrid]` (topic)
 
 #### Dependencies
 
-- `std_msgs/Empty`
+- `tf/transform_broadcaster`
+- `sensor_msgs/Image`
+- `nav_msgs/OccupancyGrid`
 
 
 ---
 
-### robot_behaviour_plan_actions
+### dialogue_speech
+
+Node *dialogue_speech* (id: `dialogue_speech`) is overseen by HWU.
 
-Node *robot_behaviour_plan_actions* (id: `robot_behaviour_plan_actions`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/plan_actions.git
-SUBFOLDER:robot_behaviour_plan_actions
 
 #### Status
 
-**Current release: spring_dev** 
+**This node is currently auto-generated (mock-up)** 
 
 #### Inputs/outputs
 
- - Input: `TF (persons)` (undefined)
- - Input: `semantic scene description` (undefined)
- - Input: `dialogue state` (undefined)
- - Input: `/h/p/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
+ - Input: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
 
- - Output: `nav goals` (undefined)
- - Output: `interaction state` (undefined)
- - Output: `output` (undefined)
+ - Output: `/eos` (undefined)
+ - Output: `/dialogue_speech` (undefined)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `hri_msgs/Softbiometrics`
+- `hri_msgs/LiveSpeech`
 
 
 ---
 
-### multipersonaudiotracking
+### voicespeechmatching
 
-Node *Multi-person audio tracking* (id: `multipersonaudiotracking`) is overseen by BIU.
+Node *Voice speech matching* (id: `voicespeechmatching`) is overseen by BIU.
 
 
 
@@ -948,25 +1316,50 @@ Node *Multi-person audio tracking* (id: `multipersonaudiotracking`) is overseen
 
 #### Inputs/outputs
 
- - Input: `/h/v/*/audio [audio_common_msgs/AudioData]` (topic)
+ - Input: `/h/v/*/raw_audio [spring_msgs/RawAudioData]` (topic)
+ - Input: `/audio/speech_streams [std_msgs/String]` (topic)
 
- - Output: `tracking information` (undefined)
- - Output: `output` (undefined)
- - Output: `source angle` (undefined)
- - Output: `tf: /voice_id` (tf)
+ - Output: `/h/v/*/speech [hri_msgs/LiveSpeech]` (topic)
+
+#### Dependencies
+
+- `spring_msgs/RawAudioData`
+- `hri_msgs/LiveSpeech`
+- `std_msgs/String`
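How the matching is done is not specified here. One simple, hypothetical strategy is temporal overlap: assign each timestamped ASR segment to the voice whose activity intervals overlap it most. All names and data shapes below are illustrative, not the node's actual interface:

```python
def match_speech_to_voices(voice_activity, speech_segments):
    """Assign each ASR speech segment to the voice that overlaps it most.

    voice_activity: {voice_id: [(start, end), ...]}  per-voice speaking intervals
    speech_segments: [(start, end, text), ...]       timestamped transcripts
    Returns [(voice_id or None, text), ...].
    """
    def overlap(a, b):
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    out = []
    for s0, s1, text in speech_segments:
        scores = {
            vid: sum(overlap((s0, s1), iv) for iv in ivs)
            for vid, ivs in voice_activity.items()
        }
        best = max(scores, key=scores.get)
        out.append((best if scores[best] > 0 else None, text))
    return out

activity = {
    "voice_a": [(0.0, 2.0), (5.0, 7.0)],
    "voice_b": [(2.5, 4.5)],
}
matched = match_speech_to_voices(
    activity, [(0.2, 1.8, "hello"), (2.6, 4.0, "hi there")]
)
print(matched)
# [('voice_a', 'hello'), ('voice_b', 'hi there')]
```

Each matched pair would then be published as a `hri_msgs/LiveSpeech` message on the corresponding `/h/v/*/speech` topic.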
+
+
+---
+
+### hri_person_manager
+
+Node *hri_person_manager* (id: `hri_person_manager`) is overseen by PAL.
+
+REPO:git@gitlab.inria.fr:spring/wp7_ari/hri_person_manager.git
+
+#### Status
+
+**Current release: master** 
+
+#### Inputs/outputs
+
+ - Input: `candidate_matches [hri_msgs/IdsMatch]` (undefined)
+
+ - Output: `tf: /person_id` (tf)
+ - Output: `/h/p/...` (undefined)
+ - Output: `/h/p/tracked [hri_msgs/IdsList]` (topic)
 
 #### Dependencies
 
-- `audio_common_msgs/AudioData`
 - `std_msgs/Empty`
 - `tf/transform_broadcaster`
+- `hri_msgs/IdsList`
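The node's job, as the interface above suggests, is to fuse pairwise `candidate_matches` between face, body, and voice ids into stable person ids. As a hypothetical sketch of that association step (not PAL's actual implementation), a union-find over matches above a confidence threshold groups linked ids into one person cluster:

```python
def group_ids(matches, threshold=0.5):
    """Group feature ids (face/body/voice) into candidate persons.

    `matches` is a list of (id1, id2, confidence) tuples, loosely modelled
    on hri_msgs/IdsMatch. Ids linked by a match above `threshold` end up in
    the same cluster (union-find with path compression).
    """
    parent = {}

    def find(i):
        parent.setdefault(i, i)
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for id1, id2, conf in matches:
        if conf >= threshold:
            parent[find(id1)] = find(id2)   # union the two clusters

    clusters = {}
    for i in parent:
        clusters.setdefault(find(i), set()).add(i)
    return sorted(sorted(c) for c in clusters.values())

matches = [
    ("face_a", "body_1", 0.9),   # same person
    ("body_1", "voice_x", 0.8),  # links a voice to that person
    ("face_b", "body_2", 0.7),   # a second person
    ("face_b", "voice_x", 0.2),  # too weak, ignored
]
print(group_ids(matches))
# [['body_1', 'face_a', 'voice_x'], ['body_2', 'face_b']]
```

Each resulting cluster would map to one person id, published on `/h/p/tracked` as a `hri_msgs/IdsList` and broadcast as a `/person_id` tf frame.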
 
 
 ---
 
-### inlocros
+### occupancymap
 
-Node *InLoc-ROS* (id: `inlocros`) is overseen by CVUT.
+Node *Occupancy map* (id: `occupancymap`) is overseen by CVUT.
 
 
 
@@ -976,50 +1369,49 @@ Node *InLoc-ROS* (id: `inlocros`) is overseen by CVUT.
 
 #### Inputs/outputs
 
- - Input: `/camera_torso/color/image_torso [sensor_msgs/Image]` (topic)
- - Input: `/camera_head/color/image_head [sensor_msgs/Image]` (topic)
- - Input: `localisation prior` (undefined)
+ - Input: `dense 3d map` (undefined)
+ - Input: `TF (bodies)` (undefined)
 
- - Output: `dense 3d map` (undefined)
- - Output: `tf: /odom` (tf)
+ - Output: `/map_refined [nav_msgs/OccupancyGrid]` (topic)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `tf/transform_broadcaster`
-- `sensor_msgs/Image`
+- `nav_msgs/OccupancyGrid`
 
 
 ---
 
-### google_asr
+### dialogue_arbiter
 
-Node *google_asr* (id: `google_asr`) is overseen by BIU.
+Node *dialogue_arbiter* (id: `dialogue_arbiter`) is overseen by HWU.
 
-REPO:https://gitlab.inria.fr/spring/wp5_spoken_conversations/asr
-SUBFOLDER:google_asr/google_asr
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/dialogue.git
+SUBFOLDER:dialogue_arbiter
 
 #### Status
 
-**Current release: BIU_dev** 
+**Current release: spring_dev** 
 
 #### Inputs/outputs
 
- - Input: `/audio/postprocess_audio_streams [audio_common_msgs/AudioData]` (topic)
+ - Input: `/eos` (undefined)
+ - Input: `/dialogue_speech` (undefined)
+ - Input: `interaction messages` (undefined)
 
- - Output: `/audio/speech_streams -- array of hri_msgs/LiveSpeech` (undefined)
+ - Output: `next utterance` (undefined)
+ - Output: `DialogueState` (undefined)
 
 #### Dependencies
 
-- `audio_common_msgs/AudioData`
 - `std_msgs/Empty`
 
 
 ---
 
-### occupancymap
+### uservisualfocus
 
-Node *Occupancy map* (id: `occupancymap`) is overseen by CVUT.
+Node *User visual focus* (id: `uservisualfocus`) is overseen by UNITN.
 
 
 
@@ -1029,46 +1421,55 @@ Node *Occupancy map* (id: `occupancymap`) is overseen by CVUT.
 
 #### Inputs/outputs
 
- - Input: `dense 3d map` (undefined)
- - Input: `TF (bodies)` (undefined)
+ - Input: `gaze direction` (undefined)
+ - Input: `depth` (undefined)
+ - Input: `attention` (undefined)
+ - Input: `scene` (undefined)
 
- - Output: `/map_refined [nav_msgs/OccupancyGrid]` (topic)
+ - Output: `/h/i/gaze [hri_msgs/Gaze]` (topic)
+ - Output: `who's looking at what?` (undefined)
 
 #### Dependencies
 
 - `std_msgs/Empty`
-- `nav_msgs/OccupancyGrid`
+- `hri_msgs/Gaze`
 
 
 ---
 
-### raspicam
-
-Node *raspicam* (id: `raspicam`) is overseen by PAL.
+### recipe_planner
 
+Node *recipe_planner* (id: `recipe_planner`) is overseen by HWU.
 
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/plan_actions.git
+SUBFOLDER:robot_behaviour_plan_actions
 
 #### Status
 
-**This node is currently auto-generated (mock-up)** 
+**Current release: spring_dev** 
 
 #### Inputs/outputs
 
+ - Input: `dialogue state` (undefined)
+ - Input: `/h/p/*/softbiometrics [hri_msgs/Softbiometrics]` (topic)
+ - Input: `semantic scene description` (undefined)
 
- - Output: `/head_front_camera/color/image_raw [sensor_msgs/Image]` (topic)
+ - Output: `plan` (undefined)
+ - Output: `interaction state` (undefined)
 
 #### Dependencies
 
-- `sensor_msgs/Image`
+- `std_msgs/Empty`
+- `hri_msgs/Softbiometrics`
 
 
 
-### Non-executable dependency:  interaction_manager_msgs
+### Non-executable dependency:  robot_behaviour_msgs
 
-Module  interaction_manager_msgs (id: `interaction_manager_msgs`) is overseen by HWU.
+Module  robot_behaviour_msgs (id: `robot_behaviour_msgs`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/interaction.git
-SUBFOLDER:interaction_manager_msgs
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/plan_actions.git
+SUBFOLDER:robot_behaviour_msgs
 NOT EXECUTABLE
 
 
@@ -1100,12 +1501,12 @@ NOT EXECUTABLE
 
 
 
-### Non-executable dependency:  robot_behaviour_msgs
+### Non-executable dependency:  audio_msgs
 
-Module  robot_behaviour_msgs (id: `robot_behaviour_msgs`) is overseen by HWU.
+Module  audio_msgs (id: `audio_msgs`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/plan_actions.git
-SUBFOLDER:robot_behaviour_msgs
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/asr.git
+SUBFOLDER:audio_msgs
 NOT EXECUTABLE
 
 
@@ -1119,12 +1520,12 @@ NOT EXECUTABLE
 
 
 
-### Non-executable dependency:  audio_msgs
+### Non-executable dependency:  interaction_manager_msgs
 
-Module  audio_msgs (id: `audio_msgs`) is overseen by HWU.
+Module  interaction_manager_msgs (id: `interaction_manager_msgs`) is overseen by HWU.
 
-REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/asr.git
-SUBFOLDER:audio_msgs
+REPO:git@gitlab.inria.fr:spring/wp5_spoken_conversations/interaction.git
+SUBFOLDER:interaction_manager_msgs
 NOT EXECUTABLE