diff --git a/heterogeneous_data/multivariate_models_v1_7.ipynb b/heterogeneous_data/multivariate_models_v1_7.ipynb
index 6af30d77624521f324534fa850d56e1a2c9bad1a..2ff5a58f49fea1fe5456abbd52e13718ca17d8a2 100644
--- a/heterogeneous_data/multivariate_models_v1_7.ipynb
+++ b/heterogeneous_data/multivariate_models_v1_7.ipynb
@@ -30,14 +30,12 @@
     "#!pip install sklearn\n",
     "#!pip install matplotlib\n",
     "#!pip install numpy\n",
-    "#!pip install torch torchvision\n",
-    " \n",
-    "!git clone https://gitlab.inria.fr/epione_ML/mcvae.git"
+    "#!pip install torch torchvision"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 1,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -50,16 +48,25 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "2.0.0\n"
+      "Cloning into 'mcvae'...\n",
+      "remote: Enumerating objects: 359, done.\u001b[K\n",
+      "remote: Counting objects: 100% (359/359), done.\u001b[K\n",
+      "remote: Compressing objects: 100% (224/224), done.\u001b[K\n",
+      "remote: Total 359 (delta 160), reused 296 (delta 114), pack-reused 0\u001b[K\n",
+      "Receiving objects: 100% (359/359), 4.72 MiB | 4.27 MiB/s, done.\n",
+      "Resolving deltas: 100% (160/160), done.\n",
+      "Checking connectivity... done.\n",
+      "Mcvae version:2.0.0\n"
      ]
     }
    ],
    "source": [
+    "!git clone https://gitlab.inria.fr/epione_ML/mcvae.git\n",
     "import sys\n",
     "import os\n",
     "sys.path.append(os.getcwd() + '/mcvae/src/')\n",
     "import mcvae\n",
-    "print(mcvae.__version__)"
+    "print('Mcvae version:' + mcvae.__version__)"
    ]
   },
   {
@@ -1653,16 +1660,28 @@
     "\n",
     "They are Gaussians with moments parametrized by Neural Networks (or a linear transformation layer in a simple case).\n",
     "\n",
-    "__Exercise__: why is convenient to use $\\log$ values for the parametrization the variance networks output (W_logvar, W_out_logvar)? \n",
-    "\n",
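+    "For intuition, here is a minimal sketch (assuming torch; the sizes are made up) of such a parametrization, with one linear head for the mean and one for the log-variance:\n",
+    "\n",
+    "```python\n",
+    "import torch\n",
+    "\n",
+    "n_feats, lat_dim = 10, 2\n",
+    "W_mu = torch.nn.Linear(n_feats, lat_dim)      # head producing the mean\n",
+    "W_logvar = torch.nn.Linear(n_feats, lat_dim)  # head producing the log-variance\n",
+    "\n",
+    "x = torch.randn(8, n_feats)  # a batch of 8 observations\n",
+    "mu, logvar = W_mu(x), W_logvar(x)\n",
+    "z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparametrization trick\n",
+    "```\n",
+    "\n",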
     "<img src=\"https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/vae.svg\" alt=\"img/vae.svg\">\n",
     "\n",
+    "### __Exercise__\n",
+    "Why is convenient to use $\\log$ values for the parametrization the variance networks output (W_logvar, W_out_logvar)?\n",
+    "\n",
     "## Sparse VAE\n",
     "To favour sparsity in the latent space we implement the latent distribution as follows:\n",
     "\n",
-    "$$q(\\mathbf{z|x}) = \\mathcal{N}(\\mathbf{z|\\mu_x; \\alpha \\odot \\mu_x^2})$$\n",
+    "$$q(\\mathbf{z|x}) = \\mathcal{N}(\\mathbf{z|\\mu_x; \\alpha \\odot \\mu_x^2}),$$\n",
+    "\n",
+    "where:\n",
     "\n",
-    "Tha parameter $\\alpha$ represents to the [odds](https://en.wikipedia.org/wiki/Odds) of pruning the $i$-th latent dimension according to (element-wise):\n",
+    "$$\n",
+    "\\mathbf{\\alpha \\odot \\mu_x^2} = \n",
+    "\\begin{bmatrix}\n",
+    "\\ddots &        & 0                        \\\\\n",
+    "       & \\alpha_i [\\mathbf{\\mu_x}]_i^2 &   \\\\\n",
+    "0      &        & \\ddots                   \n",
+    "\\end{bmatrix}.\n",
+    "$$\n",
+    "\n",
+    "Tha parameter $\\alpha_i$ represents the [odds](https://en.wikipedia.org/wiki/Odds) of pruning the $i$-th latent dimension according to:\n",
     "\n",
     "$$\\alpha_i = \\frac{p_i}{1 - p_i}$$\n",
     "\n",
@@ -1673,7 +1692,8 @@
     "\n",
     "<img src=\"https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/mcvae.svg\" alt=\"img/mcvae.svg\">\n",
     "\n",
-    "__Excercise__: sketch the Sparse MCVAE with 3 channels.[[solution]](https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/sparse_mcvae_3ch.svg)"
+    "### __Exercise__\n",
+    "Sketch the Sparse MCVAE with 3 channels. [[Solution]](https://gitlab.inria.fr/epione/flhd/-/raw/master/heterogeneous_data/img/sparse_mcvae_3ch.svg)"
    ]
   },
   {
@@ -2164,11 +2184,15 @@
     "\n",
     "model_sparse1 = Mcvae(sparse=True, **init_dict)\n",
     "model_sparse1.to(DEVICE)\n",
-    "print(model_sparse1)\n",
-    "\n",
-    "print('Is the log_alpha parameter the same for every channel?')\n",
-    "log_alpha = model_sparse1.vae[0].log_alpha\n",
-    "print(np.all([log_alpha is vae.log_alpha for vae in model_sparse1]))"
+    "print(model_sparse1)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "__Exercise.__ Check if the sparsity parameter log_alpha is the same for every VAE in the MCvae model.\n",
+    "- Hint: you can acces the model modules and parameters with the dot notation (model.submodel.parameter)"
    ]
   },
   {
@@ -2902,6 +2926,25 @@
     "plt.show()"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "__Exercise.__ Compare Mcvae, Sparse Mcvae, and PLS\n",
+    "- Reuse the observations X1, X2, and X3 created earlier.\n",
+    "- Fit a Mcvae model to predict X3 from X1.\n",
+    "- Fit a Mcvae Sparse model to predict X3 from X1.\n",
+    "- Fit a PLS model to predict X3 from X1.\n",
+    "- Compare the predictions for all the three methods."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  },
   {
    "cell_type": "markdown",
    "metadata": {
@@ -4025,7 +4068,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.12"
+   "version": "3.7.6"
   }
  },
  "nbformat": 4,