diff --git a/heterogeneous_data/multivariate_models_v1_7.ipynb b/heterogeneous_data/multivariate_models_v1_7.ipynb
index c1c5b956ab318317be28b0dbde6714d110711d81..29fc10b48310d722789739ada203296d69cad787 100644
--- a/heterogeneous_data/multivariate_models_v1_7.ipynb
+++ b/heterogeneous_data/multivariate_models_v1_7.ipynb
@@ -1659,14 +1659,22 @@
     "\n",
     "## VAE\n",
     "The Variational Autoencoer is a latent variable model composed by one encoder and one decoder associated to a single channel.\n",
-    "The latent distribution $q(\\mathbf{z|x})$ and the decoding distribution $p(\\mathbf{x|z})$ are Gaussians with moments parametrized by Neural Networks (or a linear transformation layer in a simple case).\n",
+    "The latent distribution and the decoding distribution are implemented as follows:\n",
     "\n",
-    "Can you find the reason why we use $\\log$ values to parametrize the variance (W_logvar)? \n",
+    "$q(\\mathbf{z|x}) = \\mathcal{N}(\\mathbf{z|\\mu_x; \\Sigma_x})$\n",
+    "\n",
+    "$p(\\mathbf{x|z}) = \\mathcal{N}(\\mathbf{x|\\mu_z; \\Sigma_z})$\n",
+    "\n",
+    "They are Gaussians with moments parametrized by Neural Networks (or a linear transformation layer in a simple case).\n",
+    "\n",
+    "__Exercise__: why is convenient to use $\\log$ values for the parametrization the variance networks output (W_logvar, W_out_logvar)? \n",
     "\n",
     "<img src=\"https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/vae.svg\" alt=\"img/vae.svg\">\n",
     "\n",
     "## Sparse VAE\n",
-    "To favour sparsity in the latent space we parametrize the variance of the latent distribution with log_alpha (times $\\mu^2)$.\n",
+    "To favour sparsity in the latent space we implement the latent distribution as follows:\n",
+    "\n",
+    "$q(\\mathbf{z|x}) = \\mathcal{N}(\\mathbf{z|\\mu_x; \\alpha \\odot \\mu_x^2})$\n",
     "\n",
     "<img src=\"https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/sparse_vae.svg\" alt=\"img/sparse_vae.svg\">\n",
     "\n",