%% Cell type:markdown id: tags:
# Using The FAµST Projectors API
This notebook puts the focus on the [``proj``](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/namespacepyfaust_1_1proj.html) module of the pyfaust API. This module provides a bunch of projectors which are, for instance, necessary to implement proximal operators in [PALM](https://link.springer.com/article/10.1007/s10107-013-0701-9) algorithms.
Indeed these projectors matter in the parametrization of the [PALM4MSA](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/namespacepyfaust_1_1fact.html#a686e523273cf3e38b1b614a69f4b48af) and [hierarchical factorization](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/namespacepyfaust_1_1fact.html#a7ff9e21a4f0b4acd2107629d788c441c) algorithms, so let's keep their configuration as simple as possible by using projectors!
First, let's explain some generalities about projectors:
- They are all functor objects (objects that you can call as a function).
- They are all types defined by child classes of the parent abstract class [``proj_gen``](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1proj__gen.html).
The general pattern to use a projector unfolds in two steps:
1. Instantiate the projector, passing the proper arguments.
2. Call this projector (again, as a function) on the matrix you're working on. This step is optional or for test purposes, because it is generally the algorithm implementation's responsibility to call the projectors. You just need to feed the algorithms (PALM4MSA for example) with them.
Let's see how to define and use the projectors in the code. For the brief math definitions, I'll let you consult this [document](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/constraint.png).
Remember that the projector API is documented too; you'll find the link for each projector below. Lastly, if you're looking for a reference about proximal operators, here it is: [proximity operators](http://proximity-operator.net/).
%% Cell type:markdown id: tags:
### The SP projector (projection onto matrices with a prescribed sparsity)
%% Cell type:markdown id: tags:
This projector performs a projection onto matrices with a prescribed sparsity. It governs the global sparsity of a matrix given an integer k.
The matrix $A \in \mathbb{R}^{m \times n}, A = (a_{ij}), 0 \leq i < m, 0 \leq j < n$, is projected onto the closest matrix $B = (b_{ij})$ such that $\|B\|_0 = \#\{(i,j): b_{ij} \neq 0 \} \leq k$, which implies, if $k < mn$, that some entries of $A$ are kept in $B$ and the others are set to zero. The projector keeps the $k$ most significant values (in terms of absolute value, i.e. magnitude).
Let's try it on an example, here a random matrix.
%% Cell type:code id: tags:
``` python
from numpy.random import rand
A = rand(5,5)*100
print("A=\n", A)
```
%% Cell type:code id: tags:
``` python
import pyfaust
from pyfaust.proj import sp
# 1. instantiate the projector
k = 2
p = sp(A.shape, k, normalized=False)
# 2. project the matrix through it
B = p(A)
print("B=\n", B)
```
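%% Cell type:markdown id: tags:
To relate this output to the definition above, here is a minimal numpy sketch (an illustration of the definition, not the pyfaust implementation) that keeps the k largest-magnitude entries of A and compares the result to ``p(A)``:
%% Cell type:code id: tags:
``` python
import numpy as np
# flatten A, find the indices of the k largest magnitudes, zero out the rest
flat = A.flatten()
keep = np.argsort(np.abs(flat))[-k:]
B_ref = np.zeros_like(flat)
B_ref[keep] = flat[keep]
B_ref = B_ref.reshape(A.shape)
print("same result as p(A):", np.allclose(B_ref, B))
```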
%% Cell type:markdown id: tags:
The projector is simply defined by the input matrix shape and the integer k that specifies the targeted sparsity.
**Optional normalization**:
As you noticed, the argument ``normalized`` is set to ``False`` in the projector definition. This is the default behaviour. When ``normalized`` is ``True``, the result $B$ is normalized according to its Frobenius norm.
The next example gives you a concrete view of what happens when ``normalized`` is ``True``.
%% Cell type:code id: tags:
``` python
from numpy.linalg import norm
from numpy import allclose
pnorm = sp(A.shape, k, normalized=True)
C = pnorm(A)
print("B/norm(B, 'fro') == C: ", allclose(B/norm(B,'fro'),C))
```
%% Cell type:markdown id: tags:
**Sparsity and optional positivity**:
It is also possible to "filter" the negative entries of A by setting the [``pos``](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1sp.html) argument of ``sp`` to ``True``.
You can see the projector as a pipeline: the first stage filters out the negative values, then the sp projection is applied, and finally the resulting image is normalized if ``normalized==True``.
The following example shows how the projector operates depending on the combination of ``pos`` and ``normalized`` values.
%% Cell type:code id: tags:
``` python
p_pos = sp(A.shape, k, normalized=False, pos=True)
print(p_pos(A))
```
%% Cell type:markdown id: tags:
Well, it's exactly the same as the ``In [2]`` output. The reason is quite obvious: A doesn't contain any negative value. So let's try on a copy of A where we set the ``p_pos(A)`` nonzeros to negative values.
%% Cell type:code id: tags:
``` python
from numpy import nonzero
D = A.copy()
D[nonzero(p_pos(A))] *= -1
print(p_pos(D))
```
%% Cell type:markdown id: tags:
The entries selected when ``p_pos(A)`` is applied are now skipped because they are negative in D, so ``p_pos(D)`` selects the next two greatest values (in terms of magnitude) of A.
What happens now if all values of the matrix are negative? Let's see it in the next example.
%% Cell type:code id: tags:
``` python
E = - A.copy()
print(p_pos(E))
```
%% Cell type:markdown id: tags:
It should not be surprising that the resulting matrix is a zero matrix: indeed, E contains only negative values, which are all filtered out by setting ``pos=True`` in the ``p_pos`` definition.
%% Cell type:markdown id: tags:
A last question remains: what would happen if we normalized the output matrix when ``pos==True`` and the input matrix is full of negative values?
The answer is simple: a division-by-zero error would be raised because the norm of a zero matrix is zero, hence it's not possible to normalize.
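The sketch below illustrates this (assuming the implementation signals the error with an exception; the exact exception type may differ):
%% Cell type:code id: tags:
``` python
pnorm_pos = sp(A.shape, k, normalized=True, pos=True)
try:
    out = pnorm_pos(E)  # E is all-negative, so the filtered matrix is zero
    print("result:\n", out)  # if no exception is raised, expect zeros or NaNs here
except Exception as err:
    print("normalizing a zero matrix fails:", err)
```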
%% Cell type:markdown id: tags:
### The SPLIN and SPCOL projectors
They are very similar to the ``sp`` projector, except that ``splin`` governs the integer sparsity on a per-row basis and ``spcol`` does it per column, as their suffixes indicate.
Look at the two short examples, just to be sure.
%% Cell type:code id: tags:
``` python
from pyfaust.proj import splin, spcol
pl = splin(A.shape, k) # reminder: k == 2
pc = spcol(A.shape, k)
B1 = pl(A)
B2 = pc(A)
print("B1=\n", B1)
print("\nB2=\n", B2)
```
%% Cell type:markdown id: tags:
Here the k most significant values are chosen (by row for splin or by column for spcol) and the image normalization is disabled.
As for the SP projector, it is possible to incorporate a normalization and/or positivity constraint by passing ``normalized=True`` and ``pos=True`` to the functor constructor.
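For instance, here is a short sketch combining ``splin`` with the positivity constraint (reusing the matrix D defined above, whose largest-magnitude entries were made negative):
%% Cell type:code id: tags:
``` python
pl_pos = splin(A.shape, k, pos=True)
# the negative entries of D are filtered out before the row-wise projection
print(pl_pos(D))
```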
%% Cell type:markdown id: tags:
### The SPLINCOL projector
%% Cell type:markdown id: tags:
The projector [``splincol``](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1splincol.html) tries to constrain the sparsity both by column and by row; I write "tries" because there is not always a solution. Its use is again the same.
%% Cell type:code id: tags:
``` python
from pyfaust.proj import splincol
plc = splincol(A.shape, k)
print(plc(A))
```
%% Cell type:markdown id: tags:
The support of the image matrix is in fact the union of the supports obtained through the ``splin`` and ``spcol`` projectors (that's the reason why there is not always a solution). You can refer to this [documentation page](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1splincol.html), which demonstrates in an example how this union is defined.
Another, more precise projector for the same purpose is available for square matrices only. It is named ``skperm``; you'll find its API doc [here](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1skperm.html). In brief, it is based on a variant of the Hungarian algorithm.
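As a quick illustration (assuming ``skperm`` takes the same ``(shape, k)`` arguments as the other sparsity projectors), here is a sketch on a square random matrix:
%% Cell type:code id: tags:
``` python
from pyfaust.proj import skperm
M = rand(5, 5) * 100  # skperm applies to square matrices only
psk = skperm(M.shape, k)  # k == 2 nonzeros per row and per column
print(psk(M))
```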
%% Cell type:markdown id: tags:
### The BLOCKDIAG projector
Another matrix projector is the ``blockdiag`` projector. As its name suggests, it projects onto the closest block-diagonal matrix with a prescribed structure.
The block-diagonal structure can be defined by the list of the shapes of the diagonal blocks you want to keep from the input matrix in the output matrix.
An example will easily show how it works:
%% Cell type:code id: tags:
``` python
%matplotlib inline
from pyfaust.proj import blockdiag
from pyfaust import Faust
R = rand(15,25)
pbd = blockdiag(R.shape, [(1,1), (1,12), (R.shape[0]-2, R.shape[1]-13)]) # shapes of the blocks in the second argument
# show it as a Faust composed of a single factor
Faust([pbd(R)]).imshow()
```
%% Cell type:markdown id: tags:
The blockdiag projector above is defined in order to keep three blocks of the input matrix R, from the upper-left to the lower-right: the first block is the singleton block composed only of the entry (0,0), the second block is a bit of the next row, starting at entry (1,1) and finishing at entry (1,12) (its shape is (1,12)), and the final block starts at entry (2,13) and finishes at entry (R.shape[0]-1, R.shape[1]-1). It's important that the list of blocks covers the whole matrix from its entry (0,0) to its entry (R.shape[0]-1, R.shape[1]-1), or the projector will end up in an error. In other words, if you sum the first (resp. second) coordinates of all the shapes you must find R.shape[0] (resp. R.shape[1]).
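A small check of this covering constraint (plain Python, just an illustration):
%% Cell type:code id: tags:
``` python
# the block shapes must sum to R's shape along each dimension
blocks = [(1,1), (1,12), (R.shape[0]-2, R.shape[1]-13)]
print(sum(b[0] for b in blocks) == R.shape[0])  # True: 1 + 1 + 13 == 15
print(sum(b[1] for b in blocks) == R.shape[1])  # True: 1 + 12 + 12 == 25
```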
%% Cell type:markdown id: tags:
### The CIRC, TOEPLITZ and HANKEL projectors
These projectors all return the closest corresponding structured matrix (circ being short for circulant).
For detailed definitions of these kinds of matrices you can refer to Wikipedia:
- [circulant matrix](https://en.wikipedia.org/wiki/Circulant_matrix)
- [toeplitz matrix](https://en.wikipedia.org/wiki/Toeplitz_matrix)
- [hankel matrix](https://en.wikipedia.org/wiki/Hankel_matrix)
The output is constant along each 'diagonal' (resp. 'anti-diagonal'), where the corresponding constant value is the mean of the values of the input matrix along the same diagonal (resp. anti-diagonal).
In the following example, a Faust is constructed whose first factor is a circulant matrix, second factor a Toeplitz matrix and last factor a Hankel matrix.
%% Cell type:code id: tags:
``` python
from pyfaust.proj import circ, toeplitz, hankel
CI = rand(10,10) # circ proj input
TI = rand(10,15) # toeplitz proj input
HI = rand(15,10) # hankel proj input
cp = circ(CI.shape)
tp = toeplitz(TI.shape)
hp = hankel(HI.shape)
F = Faust([cp(CI), tp(TI), hp(HI)])
F.imshow()
```
%% Cell type:markdown id: tags:
You should clearly recognize the structures of a circulant matrix, a Toeplitz matrix and a Hankel matrix.
Note that these projectors are also able to receive the ``normalized`` and ``pos`` keyword arguments we've seen before (a sketch follows the links below).
The API documentation will give you other examples:
- [circ(ulant)](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1circ.html),
- [toeplitz](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1toeplitz.html),
- [hankel](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1hankel.html)
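Here is a minimal sketch of the ``normalized`` option on the Toeplitz projector (assuming, as for ``sp``, that the normalization is done with respect to the Frobenius norm):
%% Cell type:code id: tags:
``` python
from numpy import allclose
from numpy.linalg import norm
tpn = toeplitz(TI.shape, normalized=True)
# the normalized image should have a unit Frobenius norm
print(allclose(norm(tpn(TI), 'fro'), 1))
```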
%% Cell type:markdown id: tags:
### The NORMLIN and NORMCOL projectors
The ``pyfaust.proj`` module provides two projectors, ``normlin`` (resp. ``normcol``), that project a matrix onto the closest matrix with rows (resp. columns) of a prescribed 2-norm.
Let's try them:
%% Cell type:code id: tags:
``` python
from pyfaust.proj import normcol, normlin
from numpy.linalg import norm
pnl = normlin(A.shape, .2)
pnc = normcol(A.shape, .2)
# let's verify the norm of one row obtained by normlin
print(norm(pnl(A)[2,:]))
# and the norm of one column obtained by normcol
print(norm(pnc(A)[:,2]))
```
%% Cell type:markdown id: tags:
Something important to notice is the particular case of zero columns or rows. When the NORMLIN (resp. NORMCOL) projector encounters a zero row (resp. a zero column), it simply ignores it.
Let's try:
%% Cell type:code id: tags:
``` python
B = A.copy()
B[:,2] = 0
print(pnc(B)[:,2])
```
%% Cell type:markdown id: tags:
Column 2 is set to zero in B and stays zero in the projector's image matrix.
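The nonzero columns, on the other hand, do get the prescribed norm (a quick check on column 1):
%% Cell type:code id: tags:
``` python
# column 1 of B is nonzero, so its image should have a 2-norm of .2
print(norm(pnc(B)[:,1]))
```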
%% Cell type:markdown id: tags:
### The SUPP projector
The ``supp`` projector projects a matrix onto the closest matrix with a prescribed support. In other words, it preserves the matrix entries lying on this support; the others are set to zero.
The support must be defined by a binary matrix (whose ``dtype`` must be the same as that of the input matrix, though).
%% Cell type:code id: tags:
``` python
from pyfaust.proj import supp
from numpy import eye
# keep only the diagonal of A
ps = supp(eye(*A.shape)) # by default normalized=False and pos=False
print(ps(A))
```
%% Cell type:markdown id: tags:
This projector is also able to receive the ``normalized`` and ``pos`` keyword arguments.
%% Cell type:markdown id: tags:
### The CONST projector
This last projector is really simple: it returns a constant matrix whatever the input matrix is. The way it's instantiated is very similar to the SUPP projector.
Look at its documentation to get an example: [const(ant) projector](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/classpyfaust_1_1proj_1_1const.html).
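Here is a minimal sketch (assuming, as for ``supp``, that the projector is instantiated directly with the matrix to return, here a constant matrix C):
%% Cell type:code id: tags:
``` python
from pyfaust.proj import const
from numpy import full
C = full(A.shape, 3.)  # the constant matrix to return
pconst = const(C)
print(pconst(A))  # the output is C, whatever A contains
```
%% Cell type:markdown id: tags: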
----------------------
%% Cell type:markdown id: tags:
**Thanks** for reading this notebook; you'll find others on the [FAµST website](https://faust.inria.fr). Any feedback is welcome, and all contact information is available on the website.
%% Cell type:markdown id: tags:
**Note**: this notebook was executed using the following pyfaust version:
%% Cell type:code id: tags:
``` python
import pyfaust
pyfaust.version()
```
%% Cell type:markdown id: tags:
# Using The GPU FAµST API
In this notebook we'll see quickly how to leverage the GPU computing power with pyfaust.
Since pyfaust 2.9.0 the API has been modified to make the GPU available directly from the Python wrapper.
Indeed, an independent GPU module (aka ``gpu_mod``) has been developed for this purpose.
The first question you might ask is: does it work on my computer? Here is the answer: the loading of this module is quite transparent; if an NVIDIA GPU is available and CUDA is properly installed on your system, you normally have nothing to do except install pyfaust to get the GPU implementations at your fingertips. We'll see at the end of this notebook how to load the module manually and how to get further information in case of an error.
It is worth noting two drawbacks of the pyfaust GPU support:
- Mac OS X is not supported because NVIDIA has stopped supporting this OS.
- On Windows and Linux, the pyfaust GPU support is currently limited to CUDA 9.2.
In addition to these drawbacks, please note that the GPU module support is still considered to be in beta status, as the code is relatively young and still evolving. However, the API shouldn't change much in the near future.
%% Cell type:markdown id: tags:
### Creating a GPU Faust object
Let's start with some basic Faust creation on the GPU. Almost all the ways of creating a Faust object in CPU memory are also available to create a GPU Faust.
First of all, creating a Faust using the constructor works seamlessly on the GPU; you only need to specify the ``dev`` keyword argument, as follows:
%% Cell type:code id: tags:
``` python
from pyfaust import Faust
from numpy.random import rand
M, N = rand(10,10), rand(10,15)
gpuF = Faust([M, N], dev='gpu')
gpuF
```
%% Cell type:markdown id: tags:
It's clearly indicated in the output that the Faust object is instantiated in GPU memory (the M and N numpy arrays are copied from CPU to GPU memory). However, it's also possible to check this programmatically:
%% Cell type:code id: tags:
``` python
gpuF.device
```
%% Cell type:markdown id: tags:
While for a CPU Faust you'll get:
%% Cell type:code id: tags:
``` python
Faust([M, N], dev='cpu').device
```
%% Cell type:markdown id: tags:
In ``gpuF`` the factors are dense matrices, but it's totally possible to instantiate sparse matrices on the GPU, as you can on the CPU side.
%% Cell type:code id: tags:
``` python
from pyfaust import Faust
from scipy.sparse import random, csr_matrix
S, T = csr_matrix(random(10, 15, density=0.25)), csr_matrix(random(15, 10, density=0.05))
sparse_gpuF = Faust([S, T], dev='gpu')
sparse_gpuF
```
%% Cell type:markdown id: tags:
You can also create a GPU Faust by explicitly copying a CPU Faust to GPU memory. Actually, at any time you can copy a CPU Faust to the GPU and conversely. The ``clone()`` member function is here precisely for this purpose. Below we copy ``gpuF`` to the CPU and back again to the GPU into the new Faust ``gpuF2``.
%% Cell type:code id: tags:
``` python
cpuF = gpuF.clone('cpu')
gpuF2 = cpuF.clone('gpu')
gpuF2
```
%% Cell type:markdown id: tags:
### Generating a GPU Faust
Many of the functions for generating a Faust object on the CPU are also available on the GPU. It is always the same: you set the ``dev`` argument to ``'gpu'`` and you'll get a GPU Faust instead of a CPU Faust.
For example, the code below will successively create a random GPU Faust, a Hadamard transform GPU Faust, an identity GPU Faust and finally a DFT GPU Faust.
%% Cell type:code id: tags:
``` python
from pyfaust import rand as frand, eye as feye, wht, dft
print("Random GPU Faust:", frand(10,10, num_factors=11, dev='gpu'))
print("Hadamard GPU Faust:", wht(32, dev='gpu'))
print("Identity GPU Faust:", feye(16, dev='gpu'))
print("DFT GPU Faust:", dft(32, dev='gpu'))
```
%% Cell type:markdown id: tags:
### Manipulating GPU Fausts and CPU interoperability
Once you've created GPU Faust objects, you can perform operations on them while staying in the GPU world (that is, with no array transfer to CPU memory). That's of course not always possible.
For example, let's consider Faust-scalar multiplication and the Faust-matrix product. In the first case the scalar is copied to GPU memory, and likewise in the second case the matrix is copied from CPU to GPU in order to carry out the computation. However, in both cases the Faust factors stay in GPU memory and don't move during the computation.
%% Cell type:code id: tags:
``` python
# Faust-scalar multiplication
2*gpuF
```
%% Cell type:markdown id: tags:
As you see, the first factor's address has changed in the result compared to what it was in ``gpuF``. Indeed, when you perform a scalar multiplication only one factor is multiplied; the others don't change, they are shared between the Faust being multiplied and the resulting Faust. This is an optimization, and to go further in this direction the factor chosen to be multiplied is the smallest one in memory (not necessarily the first one).
%% Cell type:code id: tags:
``` python
# Faust-matrix product (the matrix is copied to the GPU,
# then the multiplication is performed on the GPU)
gpuF@rand(gpuF.shape[1],15)
```
%% Cell type:markdown id: tags:
On the contrary, and that matters for optimization, there is no CPU-GPU transfer at all when you create another GPU Faust, named for example ``gpuF2``, on the GPU and decide to multiply the two of them like this:
%% Cell type:code id: tags:
``` python
from pyfaust import rand as frand
gpuF2 = frand(gpuF.shape[1],18, dev='gpu')
gpuF3 = gpuF@gpuF2
gpuF3
```
%% Cell type:markdown id: tags:
Besides, it's important to note that the ``gpuF3`` factors are not duplicated in memory because they already exist for ``gpuF`` and ``gpuF2``; that's an extra optimization: ``gpuF3`` is just a memory view of the factors of ``gpuF`` and ``gpuF2`` (the same GPU arrays are shared between ``Faust`` objects). The same goes for CPU ``Faust`` objects.
Finally, please note that CPU Faust objects are not directly interoperable with GPU Faust objects. You can try, but it'll end up with an error.
%% Cell type:code id: tags:
``` python
cpuF = frand(5,5,5, dev='cpu')
gpuF = frand(5,5,6, dev='gpu')
try:
    print("A first try to multiply a CPU Faust with a GPU one...")
    cpuF@gpuF
except:
    print("it doesn't work, you must either convert cpuF to a GPU Faust or gpuF to a CPU Faust before multiplying.")
    print("A second try using conversion as needed...")
    print(cpuF.clone('gpu')@gpuF) # this is what you should do
    print("Now it works!")
```
%% Cell type:markdown id: tags:
### Benchmarking your GPU with pyfaust!
Of course, when we run some code on the GPU rather than on the CPU, it is clearly to enhance performance. So let's try your GPU and find out whether it is worth it or not compared to your CPU.
First, measure how much time it takes on the CPU to compute a Faust norm and the dense array corresponding to the product of its factors:
%% Cell type:code id: tags:
``` python
from pyfaust import rand as frand
cpuF = frand(1024, 1024, num_factors=10, fac_type='dense')
%timeit cpuF.norm(2)
%timeit cpuF.toarray()
```
%% Cell type:markdown id: tags:
Now let's make some GPU heat with norms and matrix products!
%% Cell type:code id: tags:
``` python
gpuF = cpuF.clone(dev='gpu')
%timeit gpuF.norm(2)
%timeit gpuF.toarray()
```
%% Cell type:markdown id: tags:
Of course not all GPUs are equal; below are the results I got using a Tesla V100:
```
6.85 ms ± 9.06 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6.82 ms ± 90.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
Likewise, let's compare the performance obtained for a sparse Faust:
%% Cell type:code id: tags:
``` python
from pyfaust import rand as frand
cpuF2 = frand(1024, 1024, num_factors=10, fac_type='sparse', density=.2)
gpuF2 = cpuF2.clone(dev='gpu')
print("CPU times:")
%timeit cpuF2.norm(2)
%timeit cpuF2.toarray()
print("GPU times:")
%timeit gpuF2.norm(2)
%timeit gpuF2.toarray()
```
%% Cell type:markdown id: tags:
On a Tesla V100 it gives these results:
```
9.86 ms ± 3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
13.8 ms ± 39.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
%% Cell type:markdown id: tags:
### Running some FAµST algorithms on GPU
Some of the FAµST algorithms implemented in the C++ core are now also available in pure GPU mode.
For example, let's compare the factorization times taken by the hierarchical factorization when launched on the CPU and on the GPU.
When running on the GPU, the matrix to factorize is copied into GPU memory and almost all operations executed during the algorithm don't involve the CPU in any manner (the only exception at this stage of development is the proximal operators, which only run on the CPU).
**Warning: THE COMPUTATION CAN LAST THIRTY MINUTES OR SO ON CPU**
%% Cell type:code id: tags:
``` python
from scipy.io import loadmat
from pyfaust.demo import get_data_dirpath
d = loadmat(get_data_dirpath()+'/matrix_MEG.mat')
def factorize_MEG(dev='cpu'):
    from pyfaust.fact import hierarchical
    from time import time
    from numpy.linalg import norm
    MEG = d['matrix'].T
    num_facts = 9
    k = 10
    s = 8
    t_start = time()
    MEG16 = hierarchical(MEG, ['rectmat', num_facts, k, s], backend=2020, on_gpu=dev=='gpu')
    total_time = time()-t_start
    err = norm(MEG16.toarray()-MEG)/norm(MEG)
    return MEG16, total_time, err
```
%% Cell type:code id: tags:
``` python
gpuMEG16, gpu_time, gpu_err = factorize_MEG(dev='gpu')
print("GPU time, error:", gpu_time, gpu_err)
```
%% Cell type:code id: tags:
``` python
cpuMEG16, cpu_time, cpu_err = factorize_MEG(dev='cpu')
print("CPU time, error:", cpu_time, cpu_err)
```
%% Cell type:markdown id: tags:
Depending on your GPU card and CPU, the results may vary, so below are some results obtained on specific hardware.
<table align="left">
<tr align="center">
<th>Implementation</th>
<th>Hardware</th>
<th>Time (s)</th>
<th>Error Faust vs MEG matrix</th>
</tr>
<tr>
<td>CPU</td>
<td>Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz</td>
<td>1241.00</td>
<td>.129</td>
</tr>
<tr>
<td>GPU</td>
<td>NVIDIA GTX980</td>
<td>465.42</td>
<td>.129</td>
</tr>
<tr>
<td>GPU</td>
<td>NVIDIA Tesla V100</td>
<td>321.50</td>
<td>.129</td>
</tr>
</table>
%% Cell type:markdown id: tags:
### Manually loading the pyfaust GPU module
If something goes wrong when trying to use the GPU pyfaust extension, here is how to manually load the module and obtain more information.
The key is the function [enable_gpu_mod](https://faustgrp.gitlabpages.inria.fr/faust/last-doc/html/namespacepyfaust.html#aea03fff2525fc834f2a56e63fd30a54f). This function allows you to give ``gpu_mod`` loading another try with the verbose mode enabled.
%% Cell type:code id: tags:
``` python
import pyfaust
pyfaust.enable_gpu_mod(silent=False, fatal=True)
```
%% Cell type:markdown id: tags:
Afterwards, you can call ``pyfaust.is_gpu_mod_enabled()`` to verify whether it works in your script.
Below I copy outputs that show what it should look like when it doesn't work:
1) If you asked for a fatal error using ``enable_gpu_mod(silent=False, fatal=True)``, an exception will be raised and your code won't be able to continue after this call:
```
python -c "import pyfaust; pyfaust.enable_gpu_mod(silent=False, fatal=True)"
WARNING: you must call enable_gpu_mod() before using GPUModHandler singleton.
loading libgm
libcublas.so.9.2: cannot open shared object file: No such file or directory
[...]
Exception: Can't load gpu_mod library, maybe the path (/home/test/venv_pyfaust-2.10.14/lib/python3.7/site-packages/pyfaust/lib/libgm.so) is not correct or the backend (cuda) is not installed or configured properly so the libraries are not found.
```
2) If you just want a warning, use ``enable_gpu_mod(silent=False)``; the code will continue with no gpu_mod enabled, but you'll get some information about what is going wrong (here the CUDA toolkit version 9.2 is not installed):
```
python -c "import pyfaust; pyfaust.enable_gpu_mod(silent=False)"
WARNING: you must call enable_gpu_mod() before using GPUModHandler singleton.
loading libgm
libcublas.so.9.2: cannot open shared object file: No such file or directory
```
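As mentioned above, a minimal sketch to check programmatically whether the GPU module could be loaded:
%% Cell type:code id: tags:
``` python
import pyfaust
if pyfaust.is_gpu_mod_enabled():
    print("gpu_mod is enabled, GPU Fausts are available")
else:
    print("gpu_mod is not enabled, only CPU Fausts are available")
```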
%% Cell type:markdown id: tags:
------------------------------------------------------------
%% Cell type:markdown id: tags:
**Note**: this notebook was executed using the following pyfaust version:
%% Cell type:code id: tags:
``` python
import pyfaust
pyfaust.version()
```
%% Cell type:markdown id: tags:
Thanks for reading this notebook! Many others are available at [faust.inria.fr](https://faust.inria.fr).