diff --git a/g5k/07_fault_injection_on_processes.ipynb b/g5k/07_fault_injection_on_processes.ipynb
index d5d03db4ca0902c75ae072f1be546f26b6b1d43b..71289825b431479020b331391b5ef314d41d21ad 100644
--- a/g5k/07_fault_injection_on_processes.ipynb
+++ b/g5k/07_fault_injection_on_processes.ipynb
@@ -61,7 +61,6 @@
     "import signal\n",
     "import os\n",
     "from datetime import datetime, timedelta\n",
-    "from pathlib import Path\n",
     "\n",
     "import enoslib as en\n",
     "\n",
@@ -133,6 +132,17 @@
     "#### Common node's configuration"
    ]
   },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each node must have this minimal configuration :\n",
+    "- having python and pip\n",
+    "- having procps (to use pgrep)\n",
+    "- having pika (for the rabbitmq connection)"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -161,6 +171,14 @@
     "#### Server configuration"
    ]
   },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we does not launch anything yet, we just setup the server node to accept all our producer(s) and consumer(s). We also add a new administrator in order to have access to the management interface, the default one being blocked by the remote configuration."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -217,6 +235,14 @@
     "#### Producers' node configuration"
    ]
   },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The producers' node has to be configured such that it contains the script that is gonna be used."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -239,6 +265,14 @@
     "#### Consumers' node configuration"
    ]
   },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The consumers' node has to be configured such that it contains the script that is gonna be used."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -261,6 +295,18 @@
     "#### Utility functions"
    ]
   },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The only purpose of these functions is to facilitate and to make more readable this experiment. Their objectives are :\n",
+    "- to gather and show general statistics about the current state of the experiment (timestamp, number of received and processed messages, queue depth, number of consumer(s))\n",
+    "- to clear the experiment (kill all instances of producer(s)/consumer(s) if any, delete all output files if any, purge the rabbitmq queue)\n",
+    "- to launch all producer(s) and consumer(s)\n",
+    "- to reset the experiment by going back to its initial state (clean + launch) "
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -438,6 +484,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "# does not work, it is just an example on how to build one \n",
     "registry = en.ProcessRegistry()\n",
     "registry.build(\"regexp\", roles)"
    ]
@@ -447,7 +494,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "One ```Process``` is defined by its pid, its host and its command."
+    "One ```Process``` is defined by its pid, its host and its command, a state (ALIVE or DEAD) is also attributed to each instance."
    ]
   },
   {
@@ -456,7 +503,9 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "process = en.Process(1234, roles, \"cmd\")"
+    "host = en.Host(\"192.168.0.3\", alias=\"one_alias\", user=\"foo\")\n",
+    "process = en.Process(1234, host, \"cmd\")\n",
+    "print(process)"
    ]
   },
   {
@@ -468,7 +517,7 @@
     "\n",
     "When acting on consumers, we will observe an increase of the queue depth, meaning that the messages are not processed as fast as they are produced.\n",
     "\n",
-    "When acting on consumers, we will observe no evolution regarding the number of processed messages."
+    "When acting on producers, we will observe no evolution regarding the number of processed messages."
    ]
   },
   {
@@ -484,7 +533,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Using this type of kill, the user must wait for the end before doing anything else."
+    "Using this type of kill, the user must wait for the end before doing anything else.\n",
+    "\n",
+    "We can specify the sended signal."
    ]
   },
   {
@@ -531,180 +582,18 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 51,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/html": [
-       "<div>\n",
-       "<style scoped>\n",
-       "    .dataframe tbody tr th:only-of-type {\n",
-       "        vertical-align: middle;\n",
-       "    }\n",
-       "\n",
-       "    .dataframe tbody tr th {\n",
-       "        vertical-align: top;\n",
-       "    }\n",
-       "\n",
-       "    .dataframe thead th {\n",
-       "        text-align: right;\n",
-       "    }\n",
-       "</style>\n",
-       "<table border=\"1\" class=\"dataframe\">\n",
-       "  <thead>\n",
-       "    <tr style=\"text-align: right;\">\n",
-       "      <th></th>\n",
-       "      <th>Time</th>\n",
-       "      <th>nb_received_messages</th>\n",
-       "      <th>queue_depth</th>\n",
-       "      <th>nb_consumer</th>\n",
-       "    </tr>\n",
-       "  </thead>\n",
-       "  <tbody>\n",
-       "    <tr>\n",
-       "      <th>0</th>\n",
-       "      <td>2023-07-13 16:36:14.166267</td>\n",
-       "      <td>15</td>\n",
-       "      <td>3</td>\n",
-       "      <td>3</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>1</th>\n",
-       "      <td>2023-07-13 16:36:17.374099</td>\n",
-       "      <td>25</td>\n",
-       "      <td>3</td>\n",
-       "      <td>3</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>2</th>\n",
-       "      <td>2023-07-13 16:36:20.649940</td>\n",
-       "      <td>35</td>\n",
-       "      <td>3</td>\n",
-       "      <td>3</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>3</th>\n",
-       "      <td>2023-07-13 16:36:23.882714</td>\n",
-       "      <td>44</td>\n",
-       "      <td>3</td>\n",
-       "      <td>3</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>4</th>\n",
-       "      <td>2023-07-13 16:36:27.090849</td>\n",
-       "      <td>54</td>\n",
-       "      <td>3</td>\n",
-       "      <td>3</td>\n",
-       "    </tr>\n",
-       "  </tbody>\n",
-       "</table>\n",
-       "</div>"
-      ],
-      "text/plain": [
-       "                         Time  nb_received_messages  queue_depth  nb_consumer\n",
-       "0  2023-07-13 16:36:14.166267                    15            3            3\n",
-       "1  2023-07-13 16:36:17.374099                    25            3            3\n",
-       "2  2023-07-13 16:36:20.649940                    35            3            3\n",
-       "3  2023-07-13 16:36:23.882714                    44            3            3\n",
-       "4  2023-07-13 16:36:27.090849                    54            3            3"
-      ]
-     },
-     "execution_count": 51,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
    "source": [
     "results_before_kill"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 52,
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/html": [
-       "<div>\n",
-       "<style scoped>\n",
-       "    .dataframe tbody tr th:only-of-type {\n",
-       "        vertical-align: middle;\n",
-       "    }\n",
-       "\n",
-       "    .dataframe tbody tr th {\n",
-       "        vertical-align: top;\n",
-       "    }\n",
-       "\n",
-       "    .dataframe thead th {\n",
-       "        text-align: right;\n",
-       "    }\n",
-       "</style>\n",
-       "<table border=\"1\" class=\"dataframe\">\n",
-       "  <thead>\n",
-       "    <tr style=\"text-align: right;\">\n",
-       "      <th></th>\n",
-       "      <th>Time</th>\n",
-       "      <th>nb_received_messages</th>\n",
-       "      <th>queue_depth</th>\n",
-       "      <th>nb_consumer</th>\n",
-       "    </tr>\n",
-       "  </thead>\n",
-       "  <tbody>\n",
-       "    <tr>\n",
-       "      <th>0</th>\n",
-       "      <td>2023-07-13 16:36:31.781609</td>\n",
-       "      <td>64</td>\n",
-       "      <td>13</td>\n",
-       "      <td>0</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>1</th>\n",
-       "      <td>2023-07-13 16:36:35.061763</td>\n",
-       "      <td>64</td>\n",
-       "      <td>22</td>\n",
-       "      <td>0</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>2</th>\n",
-       "      <td>2023-07-13 16:36:38.278555</td>\n",
-       "      <td>64</td>\n",
-       "      <td>32</td>\n",
-       "      <td>0</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>3</th>\n",
-       "      <td>2023-07-13 16:36:41.566426</td>\n",
-       "      <td>64</td>\n",
-       "      <td>42</td>\n",
-       "      <td>0</td>\n",
-       "    </tr>\n",
-       "    <tr>\n",
-       "      <th>4</th>\n",
-       "      <td>2023-07-13 16:36:44.790659</td>\n",
-       "      <td>64</td>\n",
-       "      <td>52</td>\n",
-       "      <td>0</td>\n",
-       "    </tr>\n",
-       "  </tbody>\n",
-       "</table>\n",
-       "</div>"
-      ],
-      "text/plain": [
-       "                         Time  nb_received_messages  queue_depth  nb_consumer\n",
-       "0  2023-07-13 16:36:31.781609                    64           13            0\n",
-       "1  2023-07-13 16:36:35.061763                    64           22            0\n",
-       "2  2023-07-13 16:36:38.278555                    64           32            0\n",
-       "3  2023-07-13 16:36:41.566426                    64           42            0\n",
-       "4  2023-07-13 16:36:44.790659                    64           52            0"
-      ]
-     },
-     "execution_count": 52,
-     "metadata": {},
-     "output_type": "execute_result"
-    }
-   ],
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
    "source": [
     "results_after_kill"
    ]
@@ -1004,7 +893,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Here, each kill is scheduled, we have to specify the number, the beginning and the interval between each one of them.\n",
+    "Here, each kill is scheduled and totally random, we have to specify how many processes we want to kill, the beginning (date of the first kill) and the interval between each of them.\n",
     "\n",
     "The beginning is a ```datetime.datetime``` object.\n",
     "\n",
@@ -1074,7 +963,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can also restart all killed processes without acting on the others. In a way we go back to the registry's initial state."
+    "We can also restart all killed processes without acting on the others. In a way we go back to the registry's initial state. This ```registry.reset()``` has nothing to do with the ```reset()``` specificly implemented for this experiment ! "
    ]
   },
   {
@@ -1102,7 +991,7 @@
    "source": [
     "We can restart an entire registry, this means killing all of them (if alive) and starting them again.\n",
     "\n",
-    "It can be done either synchronously with '''registry.restart()''', either asynchronously in the same way as the kills previously shown."
+    "It can be done either synchronously with ```registry.restart()```, either asynchronously in the same way as the kills previously shown. If asynchronous, the date of the kill(s) (or the time delta before they happen) and its interval with the start(s) of the docker container(s) can be specified."
    ]
   },
   {
@@ -1197,7 +1086,6 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "clean()\n",
     "provider.destroy()"
    ]
   }
diff --git a/g5k/08_fault_injection_on_docker_containers.ipynb b/g5k/08_fault_injection_on_docker_containers.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..4bcb29b8c2e791c2bd309c14077840dc6ed33114
--- /dev/null
+++ b/g5k/08_fault_injection_on_docker_containers.ipynb
@@ -0,0 +1,1115 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Fault-injection on Docker containers\n",
+    "\n",
+    "---\n",
+    "\n",
+    "- Website: https://discovery.gitlabpages.inria.fr/enoslib/index.html\n",
+    "- Instant chat: https://framateam.org/enoslib\n",
+    "- Source code: https://gitlab.inria.fr/discovery/enoslib\n",
+    "\n",
+    "---\n",
+    "\n",
+    "## Prerequisites\n",
+    "\n",
+    "<div class=\"alert alert-block alert-warning\">\n",
+    "    Make sure you've run the one time setup for your environment\n",
+    "</div>"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Docker / Rabbitmq / Cron \n",
+    "\n",
+    "[Docker](https://www.docker.com/) is an open-source platform that allows you to automate the deployment and management of software applications inside lightweight, isolated containers. These containers bundle the application code along with all its dependencies, enabling consistent and efficient deployment across different computing environments. Docker simplifies software development, testing, and deployment by providing a standardized and portable way to package and run applications, making them highly scalable, reproducible, and easy to maintain.\n",
+    "\n",
+    "[RabbitMQ](https://www.rabbitmq.com/) is an open-source message broker that enables different software applications to communicate and exchange data in a reliable and scalable manner. It follows the Advanced Message Queuing Protocol (AMQP) and provides a flexible messaging model based on the concept of queues.\n",
+    "\n",
+    "For our experiment, we will deploy a publish / suscribe environment to demonstrate the impact of our api.\n",
+    "\n",
+    "\n",
+    "[Cron](https://man7.org/linux/man-pages/man8/cron.8.html) is a time-based job scheduler in Unix-like operating systems. It allows you to schedule and automate the execution of commands or scripts at specified intervals or specific times. Cron is commonly used for repetitive tasks, system maintenance, and scheduling periodic jobs.\n",
+    "\n",
+    "All asynchronous tools shown here are based on cron. Because of that, for each event, the date is of the order of a minute."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Setup"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import signal\n",
+    "import os\n",
+    "from datetime import datetime, timedelta\n",
+    "\n",
+    "import enoslib as en\n",
+    "\n",
+    "en.init_logging()\n",
+    "en.check()\n"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Reservation"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "CLUSTER = \"nova\"\n",
+    "HERE = os.getcwd()\n",
+    "# claim the resources\n",
+    "conf = (\n",
+    "    en.G5kConf.from_settings(\n",
+    "        job_name=\"fault-injection tutorial\",\n",
+    "        job_type=[],\n",
+    "    )\n",
+    "    .add_machine(roles=[\"server\"], cluster=CLUSTER, nodes=1)\n",
+    "    .add_machine(\n",
+    "        roles=[\"producer\"], cluster=CLUSTER, nodes=1\n",
+    "    )  # all the producers are running on the same machine\n",
+    "    .add_machine(\n",
+    "        roles=[\"consumer\"], cluster=CLUSTER, nodes=1\n",
+    "    )  # all the consumers are running on the same machine\n",
+    ")\n",
+    "\n",
+    "provider = en.G5k(conf)\n",
+    "\n",
+    "roles, networks = provider.init()\n",
+    "\n",
+    "# Fill in network information from nodes\n",
+    "roles = en.sync_info(roles, networks)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "roles"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Rabbitmq configuration"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Common node's configuration"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Each node must have this minimal configuration :\n",
+    "- having a docker agent (enable all Docker commands, more here : https://discovery.gitlabpages.inria.fr/enoslib/apidoc/docker.html)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_opts = dict(type=\"external\", ip=\"docker-cache.grid5000.fr\", port=80)\n",
+    "    \n",
+    "d = en.Docker(\n",
+    "    agent=roles[\"producer\"] + roles[\"consumer\"],\n",
+    "    bind_var_docker=\"/tmp/docker\",\n",
+    "    registry_opts=registry_opts,\n",
+    ")\n",
+    "d.deploy()\n",
+    "\n",
+    "with en.actions(roles=roles[\"producer\"] + roles[\"consumer\"]) as p:\n",
+    "    p.file(path=\"/tmp/rabbitmq\", state=\"absent\")\n",
+    "    p.file(path=\"/tmp/rabbitmq\", state=\"directory\")\n",
+    "\n",
+    "    p.command(\"apt update\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Server configuration"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we does not launch anything yet, we just setup the server node to accept all our producer(s) and consumer(s). We also add a new administrator in order to have access to the management interface, the default one being blocked by the remote configuration."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "nproducer = 3\n",
+    "nconsumer = 3\n",
+    "\n",
+    "username_monitoring = \"user\"\n",
+    "password_monitoring = \"password\"\n",
+    "\n",
+    "username_prod = \"prod\"\n",
+    "username_cons = \"cons\"\n",
+    "password_prod = \"pwd_prod\"\n",
+    "password_cons = \"pwd_cons\"\n",
+    "\n",
+    "# SETUP\n",
+    "## Server configuration\n",
+    "with en.actions(roles=roles[\"server\"]) as p:\n",
+    "    # Setting the rabbimq server\n",
+    "    p.apt(task_name=\"Installing rabbitmq-server\", name=\"rabbitmq-server\")\n",
+    "    p.command(\"rabbitmq-plugins enable rabbitmq_management\")\n",
+    "    p.command(\"systemctl start rabbitmq-server\")\n",
+    "    p.command(\"systemctl enable rabbitmq-server\")\n",
+    "\n",
+    "    # For the management interface, adding a new admin\n",
+    "    p.command(f\"rabbitmqctl add_user {username_monitoring} {password_monitoring}\")\n",
+    "    p.command(f\"rabbitmqctl set_user_tags {username_monitoring} administrator\")\n",
+    "    p.command(f\"rabbitmqctl set_permissions {username_monitoring} .* .* .* -p '/'\")\n",
+    "    \n",
+    "    # For producers\n",
+    "    for idx in range(nproducer):\n",
+    "        # Add user's specifications (username + password)\n",
+    "        p.command(f\"rabbitmqctl add_user {username_prod}_{idx} {password_prod}\")\n",
+    "        # Allow users to connect to the default vhost ('/')\n",
+    "        p.command(f\"rabbitmqctl set_permissions {username_prod}_{idx} .* .* .* -p '/'\")\n",
+    "\n",
+    "    # For consumers\n",
+    "    for idx in range(nconsumer):\n",
+    "        # Add user's specifications (username + password)\n",
+    "        p.command(f\"rabbitmqctl add_user {username_cons}_{idx} {password_cons}\")\n",
+    "        # Allow users to connect to the default vhost ('/')\n",
+    "        p.command(f\"rabbitmqctl set_permissions {username_cons}_{idx} .* .* .* -p '/'\")\n",
+    "\n",
+    "    p.command(\"systemctl restart rabbitmq-server\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Producers' node configuration"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The producers' node has to be configured such that it contains the script and the Dockerfile (used to build the Docker image)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "with en.actions(roles=roles[\"producer\"]) as p:\n",
+    "    p.copy(\n",
+    "        src=HERE + \"/producer.py\",\n",
+    "        dest=\"/tmp/rabbitmq/producer.py\",\n",
+    "        task_name=\"copying producer file\",\n",
+    "    )\n",
+    "    p.copy(\n",
+    "        src=HERE + \"/producer.Dockerfile\",\n",
+    "        dest=\"/tmp/rabbitmq/Dockerfile\",\n",
+    "        task_name=\"copying producer Dockerfile\",\n",
+    "    )\n",
+    "    p.command(\"docker build -t producer_image /tmp/rabbitmq/\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Consumers' node configuration"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The consumers' node has to be configured such that it contains the script and the Dockerfile (used to build the Docker image)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "with en.actions(roles=roles[\"consumer\"]) as p:\n",
+    "    p.copy(\n",
+    "        src=HERE + \"/consumer.py\",\n",
+    "        dest=\"/tmp/rabbitmq/consumer.py\",\n",
+    "        task_name=\"copying consumer file\",\n",
+    "    )\n",
+    "    p.copy(\n",
+    "        src=HERE + \"/consumer.Dockerfile\",\n",
+    "        dest=\"/tmp/rabbitmq/Dockerfile\",\n",
+    "        task_name=\"copying consumer Dockerfile\",\n",
+    "    )\n",
+    "\n",
+    "    p.command(\"docker build -t consumer_image /tmp/rabbitmq/\")"
+   ]
+  },
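+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For reference, ```producer.py``` and ```consumer.py``` are not reproduced in this notebook. The cell below is only a rough, non-runnable sketch of what such scripts could look like, assuming pika, the queue name ```fault_injection``` and the command-line arguments used by ```launch()``` further down; the exact content of the real scripts (and the output file name) may differ."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# does not work as-is, it is just a sketch of what producer.py / consumer.py could look like\n",
+    "import sys\n",
+    "\n",
+    "import pika\n",
+    "\n",
+    "# arguments passed by launch(): index, server IP, username, password\n",
+    "idx, server_ip, user, password = sys.argv[1:5]\n",
+    "\n",
+    "credentials = pika.PlainCredentials(user, password)\n",
+    "connection = pika.BlockingConnection(\n",
+    "    pika.ConnectionParameters(host=server_ip, credentials=credentials)\n",
+    ")\n",
+    "channel = connection.channel()\n",
+    "channel.queue_declare(queue=\"fault_injection\")\n",
+    "\n",
+    "# producer side: publish messages forever\n",
+    "# while True:\n",
+    "#     channel.basic_publish(exchange=\"\", routing_key=\"fault_injection\", body=f\"message from producer {idx}\")\n",
+    "\n",
+    "# consumer side: append each received message to the output file counted by get_recv_msg()\n",
+    "# def callback(ch, method, properties, body):\n",
+    "#     with open(f\"/tmp/rabbitmq/{idx}_output.txt\", \"a\") as f:\n",
+    "#         f.write(body.decode() + \"\\n\")\n",
+    "# channel.basic_consume(queue=\"fault_injection\", on_message_callback=callback, auto_ack=True)\n",
+    "# channel.start_consuming()"
+   ]
+  },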
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Utility functions"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The only purpose of these functions is to facilitate and to make more readable this experiment. Their objectives are :\n",
+    "- to gather and show general statistics about the current state of the experiment (timestamp, number of received and processed messages, queue depth, number of consumer(s))\n",
+    "- to clear the experiment (kill all instances of producer(s)/consumer(s) if any, delete all output files if any, purge the rabbitmq queue)\n",
+    "- to launch all producer(s) and consumer(s)\n",
+    "- to reset the experiment by going back to its initial state (clean + launch) "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "## Get server's IP address on the private network\n",
+    "from ast import List\n",
+    "import pandas as pd\n",
+    "\n",
+    "\n",
+    "server = roles[\"server\"][0]\n",
+    "ip_address_obj = server.filter_addresses(networks=networks[\"prod\"])[0]\n",
+    "## This may seem weird: ip_address_obj.ip is a `netaddr.IPv4Interface`\n",
+    "## which itself has an `ip` attribute.\n",
+    "server_ip = ip_address_obj.ip.ip\n",
+    "\n",
+    "def get_recv_msg(file: str) -> int:\n",
+    "    \"\"\"\n",
+    "    Shows the total number of processed messages.\n",
+    "    \"\"\"\n",
+    "    results = en.run_command(\n",
+    "        f\"wc -l {file}\",\n",
+    "        task_name=\"getting total number of received messages\",\n",
+    "        roles=roles[\"consumer\"],\n",
+    "        gather_facts=False,\n",
+    "        on_error_continue=True,\n",
+    "    )\n",
+    "    totalnbmsg = 0\n",
+    "    for r in results:\n",
+    "        if r.status == \"FAILED\" or r.rc != 0:\n",
+    "            print(f\"Actual number of received message : 0\")\n",
+    "            continue\n",
+    "        _lines = r.stdout.split(\"\\n\")\n",
+    "        _total = _lines[-1].strip().split(\" \") # last line contain the total number of line if multiple files, else\n",
+    "        totalnbmsg += int(_total[0])\n",
+    "\n",
+    "    return totalnbmsg\n",
+    "\n",
+    "def get_queue_size() -> List:\n",
+    "    results = en.run_command(\n",
+    "        \"rabbitmqctl list_queues -p '/' messages consumers | \"\n",
+    "        \"awk 'NR>3 {printf \\\"%-15s %-15s\\\\n\\\", $1, $2}'\",\n",
+    "        task_name=\"getting number of messages waiting for processing\",\n",
+    "        roles=roles[\"server\"],\n",
+    "        gather_facts=False,\n",
+    "        on_error_continue=True,\n",
+    "    )\n",
+    "    for r in results:\n",
+    "        if r.status == \"FAILED\" or r.rc != 0:\n",
+    "            print(\"Queue is empty\")\n",
+    "            continue\n",
+    "        lines = r.stdout.strip().split(\"\\n\")\n",
+    "        line = lines[0].strip().split(\" \")\n",
+    "        return [v for v in line if v!= \"\"]\n",
+    "\n",
+    "def get_stats(duration: int) -> pd.DataFrame:\n",
+    "    \"\"\"\n",
+    "    Retreive general statistics using the rabbitmq management tool.\n",
+    "    \"\"\"\n",
+    "    results = {}\n",
+    "    results[\"Time\"] = []\n",
+    "    results[\"nb_received_messages\"] = []\n",
+    "    results[\"queue_depth\"] = []\n",
+    "    results[\"nb_consumer\"] = []\n",
+    "    for _ in range(duration):\n",
+    "        results[\"Time\"].append(str(datetime.now()))\n",
+    "        \n",
+    "        results[\"nb_received_messages\"].append(get_recv_msg(\"/tmp/rabbitmq/*_output.txt\"))\n",
+    "\n",
+    "        queue_depth, nb_consumer = get_queue_size()\n",
+    "        results[\"queue_depth\"].append(int(queue_depth))\n",
+    "        results[\"nb_consumer\"].append(int(nb_consumer))\n",
+    "\n",
+    "\n",
+    "    df = pd.DataFrame(data=results)\n",
+    "\n",
+    "    return df\n",
+    "\n",
+    "def clean():\n",
+    "    \"\"\"\n",
+    "    Kill all previouses launched processes, \n",
+    "    removes all previouses results,\n",
+    "    purges the queue.\n",
+    "    \"\"\"\n",
+    "    cleaning_registry = en.ContainerDockerRegistry()\n",
+    "    cleaning_registry.build(\n",
+    "        \"tuto_fault_\",\n",
+    "        roles[\"consumer\"] + roles[\"producer\"],\n",
+    "    )\n",
+    "    cleaning_registry.kill(signal.SIGKILL)\n",
+    "\n",
+    "    en.run_command(\n",
+    "            \"rm /tmp/rabbitmq/*_output.txt & docker rm -f $(docker ps -aq)\",\n",
+    "            task_name=\"cleaning output files and build containers\",\n",
+    "            roles=roles[\"consumer\"] + roles[\"producer\"],\n",
+    "            on_error_continue=True,\n",
+    "            gather_facts=False,\n",
+    "        )\n",
+    "\n",
+    "    en.run_command(\n",
+    "            \"rabbitmqctl purge_queue fault_injection\",\n",
+    "            task_name=\"purging the queue\",\n",
+    "            roles=roles[\"server\"],\n",
+    "            on_error_continue=True,\n",
+    "            gather_facts=False,\n",
+    "        )\n",
+    "    \n",
+    "def launch():\n",
+    "\n",
+    "    for idx in range(nconsumer):\n",
+    "        en.run_command(\n",
+    "            f\"docker run -v /tmp/rabbitmq:/tmp/rabbitmq --name tuto_fault_cons_{idx} consumer_image {idx} {server_ip}\"\n",
+    "            f\" {username_cons} {password_cons}\",\n",
+    "            task_name=f\"run consumer script number {idx}\",\n",
+    "            roles=roles[\"consumer\"],\n",
+    "            background=True,\n",
+    "            gather_facts=False,\n",
+    "        )\n",
+    "\n",
+    "    for idx in range(nproducer):\n",
+    "        en.run_command(\n",
+    "            f\"docker run -v /tmp/rabbitmq:/tmp/rabbitmq --name tuto_fault_prod_{idx} producer_image {idx} {server_ip}\"\n",
+    "            f\" {username_prod} {password_prod}\",\n",
+    "            task_name=f\"run producer script number {idx}\",\n",
+    "            roles=roles[\"producer\"],\n",
+    "            background=True,\n",
+    "            gather_facts=False,\n",
+    "        )\n",
+    "\n",
+    "def reset():\n",
+    "    \"\"\"\n",
+    "    Return to the initial state of the experiment.\n",
+    "    \"\"\"\n",
+    "    print(\n",
+    "        \"\\n ------------------------------------ \",\n",
+    "        \"\\n| RESETING THE EXPERIMENT PARAMETERS |\",\n",
+    "        \"\\n ------------------------------------ \",\n",
+    "    )\n",
+    "\n",
+    "    clean()\n",
+    "    launch()\n",
+    "    \n",
+    "    print(\n",
+    "        \"\\n ------------------------------ \",\n",
+    "        \"\\n| DONE - INITIAL STATE REACHED |\",\n",
+    "        \"\\n ------------------------------ \",\n",
+    "    )"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## General knowledge\n",
+    "\n",
+    "A ```ContainerDockerRegistry``` is a a kind of directory that records all Docker containers that follows a regexp on specific roles. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# does not work, it is just an example on how to build one \n",
+    "registry = en.ContainerDockerRegistry()\n",
+    "registry.build(\"regexp\", roles)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "One ```ContainerDocker``` is defined by its name and its host, a state (ALIVE or DEAD) is also attributed to each instance."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "host = en.Host(\"192.168.0.3\", alias=\"one_alias\", user=\"foo\")\n",
+    "process = en.ContainerDocker(1234, host)\n",
+    "print(process)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For each case below, we will act on either the consumers, either the producers, never both. It can be the entire group or only a subset.\n",
+    "\n",
+    "When acting on consumers, we will observe an increase of the queue depth, meaning that the messages are not processed as fast as they are produced.\n",
+    "\n",
+    "When acting on producers, we will observe no evolution regarding the number of processed messages."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## First example : Synchronous case"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Using this type of kill, the user must wait for the end before doing anything else. \n",
+    "\n",
+    "We can specify the sended signal."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all consumers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry = en.ContainerDockerRegistry()\n",
+    "registry.build(\"tuto_fault_\", roles)\n",
+    "\n",
+    "registry_on_consumers = registry.lookup(\n",
+    "    roles[\"consumer\"]\n",
+    ")\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_before_kill = get_stats(5)\n",
+    "registry_on_consumers.kill(signal.SIGKILL)\n",
+    "results_after_kill = get_stats(5)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_before_kill"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_after_kill"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all producers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers = en.ContainerDockerRegistry()\n",
+    "registry_on_producers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"producer\"]\n",
+    ")\n",
+    "print(registry_on_producers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_before_kill = get_stats(5)\n",
+    "registry_on_producers.kill(signal.SIGKILL)\n",
+    "results_after_kill = get_stats(5)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_before_kill"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "results_after_kill"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Second example : asynchronous case with delta"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, each kill is scheduled, we have to specify the delay before they happen.\n",
+    "\n",
+    "\n",
+    "The delay is a ```datetime.timedelta()``` object."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all consumers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers = en.ContainerDockerRegistry()\n",
+    "registry_on_consumers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"consumer\"],\n",
+    ")\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers.kill_async_after(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    delta = timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(20)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all producers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers = en.ContainerDockerRegistry()\n",
+    "registry_on_producers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"producer\"],\n",
+    ")\n",
+    "print(registry_on_producers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers.kill_async_after(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    delta = timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(20)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Third example : Asynchronous case specifying a date"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, each kill is scheduled, we have to specify the exact date at which the kill(s) happen.\n",
+    "\n",
+    "\n",
+    "The date is a ```datetime.datetime()``` object."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all consumers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers = en.ContainerDockerRegistry()\n",
+    "registry_on_consumers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"consumer\"],\n",
+    ")\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers.kill_async_at(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    date = datetime.now() + timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(20)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Killing all producers"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers = en.ContainerDockerRegistry()\n",
+    "registry_on_producers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"producer\"],\n",
+    ")\n",
+    "print(registry_on_producers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers.kill_async_at(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    date = datetime.now() + timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(20)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Fourth example : Incremental case"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, each kill is scheduled and totally random, we have to specify how many docker containers we want to kill, the beginning (date of the first kill) and the interval between each of them.\n",
+    "\n",
+    "The beginning is a ```datetime.datetime``` object.\n",
+    "\n",
+    "The interval is a ```datetime.timedelta``` object."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers = en.ContainerDockerRegistry()\n",
+    "registry_on_consumers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"consumer\"],\n",
+    ")\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers.kill_async_incr(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    number = 2,\n",
+    "    beginning = datetime.now() + timedelta(minutes=1),\n",
+    "    interval = timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(50)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can have an updated version of the registry, with both dead and alive docker container(s)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "after_refresh = registry_on_consumers.refresh()\n",
+    "print(after_refresh)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can also restart all killed processes without acting on the others. In a way we go back to the registry's initial state. This ```registry.reset()``` has nothing to do with the ```reset()``` specificly implemented for this experiment ! "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers.reset()\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Fifth example : Restart a registry"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can restart an entire registry, this means killing all of them (if alive) and starting them again.\n",
+    "\n",
+    "It can be done either synchronously with ```registry.restart()```, either asynchronously in the same way as the kills previously shown. If asynchronous, the date of the kill(s) (or the time delta before they happen) and its interval with the start(s) of the docker container(s) can be specified."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers = en.ContainerDockerRegistry()\n",
+    "registry_on_consumers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"consumer\"],\n",
+    ")\n",
+    "print(registry_on_consumers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_consumers.restart_async_after(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    delta = timedelta(minutes=1),\n",
+    "    interval = timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(40)\n",
+    "results"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "reset()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers = en.ContainerDockerRegistry()\n",
+    "registry_on_producers.build(\n",
+    "    \"tuto_fault_\",\n",
+    "    roles[\"producer\"],\n",
+    ")\n",
+    "print(registry_on_producers)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "registry_on_producers.restart_async_at(\n",
+    "    signum = signal.SIGKILL,\n",
+    "    date = datetime.now() + timedelta(minutes=1),\n",
+    "    interval = timedelta(minutes=1),\n",
+    ")\n",
+    "# each iteration last for ~4 sec (2 requests + 1sec sleep)\n",
+    "results = get_stats(40)\n",
+    "results"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Cleaning"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "provider.destroy()"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "enoslib",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.4"
+  },
+  "orig_nbformat": 4
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/g5k/consumer.Dockerfile b/g5k/consumer.Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..ee10a18a851920d6e13eaf57f76cdd70b96dbc32
--- /dev/null
+++ b/g5k/consumer.Dockerfile
@@ -0,0 +1,8 @@
+FROM python:3.11
+
+WORKDIR /tmp/rabbitmq
+COPY consumer.py .
+
+RUN pip install pika
+
+ENTRYPOINT ["python", "consumer.py"]
\ No newline at end of file
diff --git a/g5k/producer.Dockerfile b/g5k/producer.Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..4fa34d35ca736f72c66f2d36c6280f601c3aa03f
--- /dev/null
+++ b/g5k/producer.Dockerfile
@@ -0,0 +1,8 @@
+FROM python:3.11
+
+WORKDIR /tmp/rabbitmq
+COPY producer.py .
+
+RUN pip install pika
+
+ENTRYPOINT ["python", "producer.py"]
\ No newline at end of file