<h1 id="Using-the-PALM4MSA-MHTP-Algorithm">Using the PALM4MSA-MHTP Algorithm<a class="anchor-link" href="#Using-the-PALM4MSA-MHTP-Algorithm">¶</a></h1><p>In this notebook we shall see how to use the PALM4MSA-MHTP algorithm. A notebook has already been written on the Hierarchical PALM4MSA algorithm and its wrappers and is a prerequisite to the reading of this notebook.</p>
<h1 id="Using-the-PALM4MSA-MHTP-Algorithm">Using the PALM4MSA-MHTP Algorithm<a class="anchor-link" href="#Using-the-PALM4MSA-MHTP-Algorithm">¶</a></h1><p>In this notebook we shall see how to use the PALM4MSA-MHTP algorithm. A <a href="https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/Faust_factorization.html">notebook</a> has already been written on the Hierarchical PALM4MSA algorithm and its wrappers and is a prerequisite to the reading of this notebook.</p>
<p>The PALM4MSA-MHTP is a variant of PALM4MSA in which intervenes the Multilinear Hard Tresholdhing Pursuit algorithm (MHTP).</p>
<p>The interest of this variant is to avoid the situation in which PALM4MSA tends to stuck on some matrix supports without any way out. MHTP allows to explore the support more freely and hopefully find a more accurate factorization at the cost of just a few more dozens iterations of the gradient descent algorithm.</p>
<p>For more information on the theory, you can read the following paper in which is treated the particular case of the BHTP (Bilinear HTP, that is running the MHTP on only two factors).</p>
<p><a name="[1]">[1]</a> Quoc-Tung Le, Rémi Gribonval. Structured Support Exploration For Multilayer Sparse Matrix Fac- torization. ICASSP 2021 - IEEE International Conference on Acoustics, Speech and Signal Processing, Jun 2021, Toronto, Ontario, Canada. pp.1-5. <a href="https://hal.inria.fr/hal-03132013/document">hal-03132013</a>.</p>
<h2 id="Configuring-and-Running-PALM4MSA-MHTP">Configuring and Running PALM4MSA-MHTP<a class="anchor-link" href="#Configuring-and-Running-PALM4MSA-MHTP">¶</a></h2><p>This variant works very similarly to a classic run of PALM4MSA, that is with at least the same set of parameters. The main difference is that periodically (in term of PALM4MSA iterations) the MHTP algorithm is launched to renew each layer of the Faust being refined.</p>
<h2 id="Configuring-and-Running-PALM4MSA-MHTP">Configuring and Running PALM4MSA-MHTP<a class="anchor-link" href="#Configuring-and-Running-PALM4MSA-MHTP">¶</a></h2><p>This variant works very similarly to a classic run of PALM4MSA, that is with at least the same set of parameters. The main difference is that periodically (in term of PALM4MSA number of iterations) the MHTP algorithm is launched to renew each layer of the Faust being refined.</p>
<p>Hence running the PALM4MSA-MHTP needs two sets of parameters: <a href="https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1factparams_1_1ParamsPalm4MSA.html">ParamsPalm4MSA</a> and <a href="https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1factparams_1_1MHTPParams.html">MHTPParams</a> objects. The former should not be really new if you are used to PALM4MSA, the latter is dedicated to the configuartion of the MHTP part of PALM4MSA-MHTP.</p>
The arguments to configure ``MHTPParams`` are basically:

- ``num_its``: the number of iterations MHTP runs on each layer of the Faust. Remember that this number of iterations applies to each factor: if you have two factors, the overall number of iterations is ``2 x num_its`` (exactly as it is for PALM4MSA).
- ``constant_step_size`` and ``step_size``: determine whether the MHTP gradient descent is run with a constant step size and, if so, how large that step size is. By default, the step size is not constant and is recomputed dynamically from the Lipschitz coefficient, as in PALM4MSA. In most cases, it is recommended not to use a constant step size, in order to reach a better loss.
- ``palm4msa_period``: governs how often the MHTP algorithm runs inside PALM4MSA itself. By default, the value is 50, meaning for example that if PALM4MSA runs for 200 iterations, MHTP will run 4 times, once every 50 PALM4MSA iterations. Every time it runs, MHTP performs ``num_its`` iterations.
- ``updating_lambda``: when this boolean is set to ``True``, the scale factor of the Faust (the same one used in PALM4MSA) is updated at the end of each iteration of MHTP.
So let's run PALM4MSA-MHTP on a small example: we propose to factorize a 500x32 matrix into two factors.

**First** we configure PALM4MSA as usual:

- We set the number of iterations of PALM4MSA with a ``StoppingCriterion`` (here 200 iterations).
- Then we define the constraints / projectors to use: here the SPLIN projector for the first factor, of size 500x32, in which we want 5 nonzeros per row, and the NORMCOL projector for the second factor, in which each column must be normalized.
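The original code cell is not reproduced here, so below is a minimal sketch of this configuration, assuming the values stated above and pyfaust's ``splin`` and ``normcol`` projectors:

```python
from pyfaust.factparams import ParamsPalm4MSA, StoppingCriterion
from pyfaust.proj import splin, normcol

# stop PALM4MSA after 200 iterations
stop_crit = StoppingCriterion(num_its=200)

# first factor: 500x32 with 5 nonzeros per row (SPLIN),
# second factor: 32x32 with normalized columns (NORMCOL)
projs = [splin((500, 32), 5), normcol((32, 32), 1.0)]

param = ParamsPalm4MSA(projs, stop_crit)
```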
"In this notebook we shall see how to use the PALM4MSA-MHTP algorithm. A notebook has already been written on the Hierarchical PALM4MSA algorithm and its wrappers and is a prerequisite to the reading of this notebook.\n",
"In this notebook we shall see how to use the PALM4MSA-MHTP algorithm. A [notebook](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/Faust_factorization.html) has already been written on the Hierarchical PALM4MSA algorithm and its wrappers and is a prerequisite to the reading of this notebook.\n",
"\n",
"The PALM4MSA-MHTP is a variant of PALM4MSA in which intervenes the Multilinear Hard Tresholdhing Pursuit algorithm (MHTP).\n",
"\n",
...
...
@@ -19,26 +20,27 @@
"\n",
"## Configuring and Running PALM4MSA-MHTP\n",
"\n",
"This variant works very similarly to a classic run of PALM4MSA, that is with at least the same set of parameters. The main difference is that periodically (in term of PALM4MSA iterations) the MHTP algorithm is launched to renew each layer of the Faust being refined.\n",
"This variant works very similarly to a classic run of PALM4MSA, that is with at least the same set of parameters. The main difference is that periodically (in term of PALM4MSA number of iterations) the MHTP algorithm is launched to renew each layer of the Faust being refined.\n",
"\n",
"Hence running the PALM4MSA-MHTP needs two sets of parameters: [ParamsPalm4MSA](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1factparams_1_1ParamsPalm4MSA.html) and [MHTPParams](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1factparams_1_1MHTPParams.html) objects. The former should not be really new if you are used to PALM4MSA, the latter is dedicated to the configuartion of the MHTP part of PALM4MSA-MHTP.\n",
"\n",
"The arguments to configure ``MHTPParams`` are basically:\n",
"- ``num_its``: the number of iterations MHTP runs on each layer of the Faust. Remember that this number of iterations is for each factor. If you have two factors the overall number of iterations is ``2 x num_its`` (exactly as it is for PALM4MSA).\n",
"- ``constant_step_size`` and ``step_size``: that determines if the MHTP gradient descent will be ran according to a constant step size, and in that case how long is the step size. By default, the step size is not constant and recomputed dynamically with the Lipschitz coefficient as in PALM4MSA. In most case, it is recommended to not use a constant step size to achieve a better loss function.\n",
"- ``constant_step_size`` and ``step_size``: that determines if the MHTP gradient descent will be ran according to a constant step size, and in that case how long is the step size. By default, the step size is not constant and recomputed dynamically with the Lipschitz coefficient as in PALM4MSA. In most cases, it is recommended to not use a constant step size to achieve a better loss function.\n",
"- ``palm4msa_period``: which governs how many times to evenly run the MHTP algorithm inside PALM4MSA itself. By default, the value is 50. It means that for example if PALM4MSA is running for 200 iterations, MHTP will run 4 times: at iterations 0, 49, 99, 149 and 199 of PALM4MSA. Every time it runs MHTP will run for ``num_its`` iterations.\n",
"- ``updating_lambda``: this boolean when set to ``True`` allows to update the scale factor of the Faust (the same one that is used in PALM4MSA) in the end of each iteration of MHTP.\n",
"\n",
"So let's run PALM4MSA-MHTP on a small example: we propose to factorize a 500x32 matrix into two factors.\n",
"\n",
"**First** we configure PALM4MSA as usual: \n",
"- The number of iterations of PALM4SA with the StoppingCriterion (here 200 iterations).\n",
"- The number of iterations of PALM4MSA with the ``StoppingCriterion`` (here 200 iterations).\n",
"- Then we define the constraints / projectors to use, here the SPLIN projector for the first factor of size 500x32 into which we want to count 5 nonzeros per row and the NORMCOL projector for the second factor in which each column must be normalized. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9a11d48",
"metadata": {},
"outputs": [],
"source": [
...
...
@@ -51,6 +53,7 @@
},
{
"cell_type": "markdown",
"id": "66835080",
"metadata": {},
"source": [
"**Second** we define the ``MHTPParams`` structure to configure the MHTP pass of PALM4MSA-MHTP\n",
"It's now time to run the PALM4MSA-MHTP algorithm passing the two structures of parameters. Before we generate a random matrix ``M`` to factorize."
"As you see it's pretty similar to running PALM4MSA, which we could have done with the following code."
"We can verify that the results are however not the same:"
"They are very close though! In the next part of this notebook we'll demonstrate how PALM4MSA-MHTP can really enhance the accuracy of the Faust approximate and will do that on the MEG matrix (this matrix is also discussed and factorized in <a href=\"#[1]\">[1]</a>)."
"## Factorizing the MEG matrix using the PALM4MSA-MHTP algorithm"
"The MEG (for magnetoencephalography) matrix is also used in <a href=\"#[1]\">[1]</a> to compare PALM4MSA and PALM4MSA-MHTP performance. \n",
"The goal is to factorize the MEG matrix as $M_{MEG} \\approx A \\times B$ with $M_{MEG} \\in \\mathbb{R}^{8293 \\times 204}, A \\in \\mathbb{R}^{8193 \\times 204}$ and $B \\in \\mathbb{R}^{204 \\times 204}$. A and B are subject to sparsity constraints. Here we'll test only one sparsity configuration of the two factors ($k_0$ = 100 and $k_1 = 25$ being respectively the per-row number of nonzeros of A and B). \n",
"The goal is to factorize the MEG matrix as $M_{MEG} \\approx A \\times B$ with $M_{MEG} \\in \\mathbb{R}^{8193 \\times 204}, A \\in \\mathbb{R}^{8193 \\times 204}$ and $B \\in \\mathbb{R}^{204 \\times 204}$. A and B are subject to sparsity constraints. Here we'll test only one sparsity configuration of the two factors ($k_0$ = 100 and $k_1 = 25$ being respectively the per-row number of nonzeros of A and B). \n",
"\n",
"Let's load the MEG matrix which is embedded in FAµST data package (which should be downloaded automatically)."
"Going ahead we set the PALM4MSA parameters:"
"It remains the ``MHTPParams`` configuration (it's easy, we use the default parameters) :"
"Now we are able to launch PALM4MSA and PALM4MSA-MHTP and compare the errors: the computation takes some time, it can last about 30 minutes."
"As you see the MHTP variant is twice accurate than PALM4MSA on this configuration.\n",
## Using the Hierarchical PALM4MSA-MHTP algorithm

Exactly the same way you can use the hierarchical factorization with PALM4MSA, it is possible to use the function ``pyfaust.fact.hierarchical_mhtp`` to run a hierarchical factorization based on PALM4MSA-MHTP instead of simply PALM4MSA.
The launch of the algorithm is very similar; you just need to add an ``MHTPParams`` instance to the argument list, as sketched below.
"This notebook is ending here. Please note that although the article <a href=\"#[1]\">[1]</a> tackles the optimization problem of approximately factorizing a matrix in two sparse factors with the Bilinear Hard Tresholding Pursuit (BHTP) algorithm, the MHTP is a generalization to N factors that needs further experiences to be mature. Hence the function palm4msa_mhtp and moreover the function hierarchical_mhtp should be considered as experimental code and might evolve significantly in the future."
"Thanks for reading this notebook! Many other are available at [faust.inria.fr](https://faust.inria.fr).\n",
**Note:** this notebook was executed using the following pyfaust version:
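```python
import pyfaust
print(pyfaust.version())
```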